Installing and Using ELK with Docker

1. Install Docker

Docker comes in two editions: CE and EE. CE is the Community Edition (free); EE is the Enterprise Edition, which emphasizes security and requires payment. We use the CE edition here.

To keep the system stable, it is recommended to run an update first

sudo yum update

Install the required dependency packages

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

If downloads from the official repository are too slow, use a mirror in China instead

sudo yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo

Install Docker

sudo yum makecache fast
sudo yum install docker-ce
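
On CentOS the Docker daemon is not started automatically after the package is installed. A minimal sketch to start it now and enable it at boot (assuming a systemd-based system such as CentOS 7):

# Start the Docker daemon
sudo systemctl start docker
# Start it automatically at boot
sudo systemctl enable docker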

Test whether the installation succeeded

docker run hello-world

Create a docker group and add the current user to it, so that the Docker engine's Unix socket can be accessed without the root user

# Create the docker group
sudo groupadd docker
# Add the current user to the group
sudo usermod -aG docker $USER
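
Group membership is only re-read at login, so the change does not affect the current shell. A quick way to pick it up without logging out (a sketch; logging out and back in works too):

# Open a new shell with the docker group active
newgrp docker
# Should now run without sudo
docker run hello-world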

If the installation above fails, we can uninstall Docker and install it again

sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine

2. Install docker-compose

docker-compose is a Docker orchestration tool that cleanly handles the dependencies between our containers.

Two installation methods are provided here:

Direct download

  1. Download the docker-compose binary

    curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

  2. Make the file executable

    sudo chmod +x /usr/local/bin/docker-compose

  3. Verify the installation

    docker-compose version

Install via pip

  1. Install pip

    # Install the EPEL repository (dependency)
    yum -y install epel-release
    # Install pip
    yum -y install python-pip
    # Upgrade pip
    pip install --upgrade pip
    # Verify pip
    pip --version

  2. Install docker-compose

    pip install -U docker-compose==1.23.2

  3. Verify the installation

    docker-compose version

3. Install ELKC

ELKC stands for elasticsearch (search-oriented database), logstash (log collection, filtering, and analysis), kibana (web UI for analyzing logs), and cerebro (monitoring the state of elasticsearch).

The docker-compose.yml file is as follows

version: '2.2'
services:
  # elasticsearch node 1
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_01
    environment:
      - cluster.name=pibigstar
      - node.name=es7_01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - es7net
  # elasticsearch node 2
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_02
    environment:
      - cluster.name=pibigstar
      - node.name=es7_02
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data2:/usr/share/elasticsearch/data
    networks:
      - es7net
  # kibana
  kibana:
    image: docker.elastic.co/kibana/kibana:7.1.0
    container_name: kibana7
    environment:
      - I18N_LOCALE=zh-CN
      - XPACK_GRAPH_ENABLED=true
      - TIMELION_ENABLED=true
      - XPACK_MONITORING_COLLECTION_ENABLED="true"
    ports:
      - "5601:5601"
    networks:
      - es7net
  # cerebro
  cerebro:
    image: lmenezes/cerebro:0.8.3
    container_name: cerebro
    ports:
      - "9000:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es7net
volumes:
  es7data1:
    driver: local
  es7data2:
    driver: local

networks:
  es7net:
    driver: bridge
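
Before starting, you can check that the file parses correctly. A quick check, assuming the file is saved as docker-compose.yml in the current directory:

# Validate the file and print the resolved configuration
docker-compose config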

Start it up

docker-compose up
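
docker-compose up runs in the foreground and streams all container logs; add -d to run in the background. Once the nodes are up, a quick sanity check (a sketch, using the ports exposed by the compose file above):

# Start in the background instead
docker-compose up -d
# Both es7_01 and es7_02 should be listed here
curl http://localhost:9200/_cat/nodes?v
# Kibana is at http://localhost:5601, cerebro at http://localhost:9000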

Notes:

1. If you see the message
"max virtual memory areas vm.max_map_count [65530] is too low, increase to at least ..."
it means max_map_count is set too low. Edit /etc/sysctl.conf and append vm.max_map_count=262144, save it, run sysctl -p, and then start again.
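
The same fix as shell commands (a sketch; requires root or sudo):

# Persist the setting
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
# Apply it without a reboot
sudo sysctl -p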

2. If problems occur during startup, shut down and clear the old data before starting again (the -v flag below removes the named data volumes, so Elasticsearch starts from a clean state).

# Stop the containers and remove the data volumes
docker-compose down -v
# Start again
docker-compose up

4. Start Logstash

  1. Download the test data
    http://files.grouplens.org/datasets/movielens/ml-latest-small.zip

  2. Download Logstash

    https://www.elastic.co/cn/downloads/logstash

  3. Configure logstash.conf

    input {
      file {
        # Read movies.csv from the beginning; "nul" (the Windows null device)
        # disables sincedb, so the file is re-read on every run
        path => ["F:/elasticsearch/ml-latest-small/movies.csv"]
        start_position => "beginning"
        sincedb_path => "nul"
      }
    }

    filter {
      # Parse the CSV columns
      csv {
        separator => ","
        columns => ["id","content","genre"]
      }

      # genre is a "|"-separated list; split it into an array
      mutate {
        split => { "genre" => "|" }
        remove_field => ["path", "host","@timestamp","message"]
      }

      # content looks like "Title (Year)"; split on "(" to get title and year
      mutate {
        split => ["content", "("]
        add_field => { "title" => "%{[content][0]}"}
        add_field => { "year" => "%{[content][1]}"}
      }

      # Convert year to a number and trim whitespace from the title
      mutate {
        convert => {
          "year" => "integer"
        }
        strip => ["title"]
        remove_field => ["path", "host","@timestamp","message","content"]
      }
    }

    output {
      # Write each row to the movies index, keyed by the movie id
      elasticsearch {
        hosts => "http://localhost:9200"
        index => "movies"
        document_id => "%{id}"
      }
      stdout {}
    }
  4. Start Logstash (see the verification sketch after this list)

    cd bin
    logstash -f F:\elasticsearch\conf\logstash.conf
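
Once Logstash finishes the import, you can check the result in Elasticsearch. A quick sketch, assuming the cluster from the compose file above is reachable on localhost:9200:

# Number of movie documents indexed
curl "http://localhost:9200/movies/_count?pretty"
# Look at a few of them
curl "http://localhost:9200/movies/_search?size=3&pretty"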