Building an ELK Pseudo-Cluster + Kafka Pseudo-Cluster

待我称王封你为后i · 2022-11-07 05:49

Contents

  • Preface
    • Preparation
    • Building the ES cluster
    • Building Kibana
    • Building Logstash
    • Building the Kafka cluster
    • Building Filebeat

Preface

We will build an ELK pseudo-cluster plus a Kafka pseudo-cluster and use Filebeat to collect Nginx logs. All applications run on a single server with several virtual network interfaces.

  1. Elasticsearch: a JSON-based distributed search and analytics engine and the core of ELK. It stores data centrally and is used to search, analyze, and store logs. Being distributed, it scales horizontally, supports automatic node discovery, and shards indices automatically. Here we build a 3-node ES cluster in which every node is master-eligible.
  2. Logstash: a dynamic data-collection pipeline that ingests data over TCP/UDP/HTTP (and from Beats) and enriches it or extracts fields. We use it to collect logs, parse them into JSON, and hand them to Elasticsearch.
  3. Kibana: a data-visualization component that renders the collected data as reports and charts, and provides a UI for configuring and managing the ELK stack.
  4. Filebeat: a lightweight data shipper from the Beats family, built on the original Logstash-forwarder code base. In other words, Filebeat is the successor to Logstash-forwarder and the first choice for the agent side of the ELK Stack.



Preparation

One server with 4 virtual IP addresses:

Server: CentOS 7
Memory: 16 GB
IP address ①: 192.168.230.133
IP address ②: 192.168.230.134
IP address ③: 192.168.230.135
IP address ④: 192.168.230.136


Applications and ports per IP:

192.168.230.133 (virtual NIC)
  Application            Ports
  ES master/data node    9200, 9300
  head                   9100
  zookeeper              2888, 3888
  kibana                 5601
  kafka                  9092

192.168.230.134 (virtual NIC)
  Application            Ports
  ES master/data node    9200, 9300
  head                   9100
  zookeeper              2888, 3888
  logstash               9600
  kafka                  9092

192.168.230.135 (virtual NIC)
  Application            Ports
  ES master/data node    9200, 9300
  zookeeper              2888, 3888
  kafka                  9092

192.168.230.136 (virtual NIC)
  Application            Ports
  nginx                  80
  Filebeat
The required packages and versions are bundled for download: https://download.csdn.net/download/qq_26129413/15728637


1. Disable the firewall and related settings

```shell
# 1. Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# 2. Disable SELinux
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config && setenforce 0

# 3. Raise the file-descriptor limits; without this Elasticsearch may fail to start
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

vim /etc/sysctl.conf
vm.max_map_count=655360

# 4. Apply the kernel setting
sysctl -p

# Put all installation packages in this directory
mkdir /software && cd /software
```

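Before running the SELinux `sed` against the real `/etc/selinux/config`, the substitution can be sanity-checked on throwaway input. This is only an illustrative sketch, not part of the setup:

```shell
# Run the same substitution against a sample line instead of the real config file
selinux_out=$(echo 'SELINUX=enforcing' | sed 's/=enforcing/=disabled/g')
echo "$selinux_out"   # prints: SELINUX=disabled
```

The same dry-run trick (piping a sample line through the `sed` expression) works for any of the in-place edits in this guide.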
2. Install the JDK 1.8 environment

```shell
tar -zvxf jdk-8u211-linux-x64.tar.gz && mv jdk1.8.0_211/ /usr/local/jdk
vim /etc/profile
JAVA_HOME=/usr/local/jdk
PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib
export JAVA_HOME PATH CLASSPATH
source !$
java -version
ln -s /usr/local/jdk/bin/java /usr/local/bin/java
```

3. Install Node.js

```shell
wget https://nodejs.org/dist/v10.15.3/node-v10.15.3-linux-x64.tar.xz
tar -Jxf node-v10.15.3-linux-x64.tar.xz && mv node-v10.15.3-linux-x64/ /usr/local/node
vim /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH
export NODE_PATH=$NODE_HOME/lib/node_modules:$PATH
source !$
node -v
```

4. Install head

head is a simple Elasticsearch shard-browsing web UI, used here to check that ES is working.

```shell
wget https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip && mv elasticsearch-head-master/ /usr/local/elasticsearch-head
cd /usr/local/elasticsearch-head
npm install -g cnpm --registry=https://registry.npm.taobao.org
cnpm install -g grunt-cli
cnpm install -g grunt
cnpm install grunt-contrib-clean
cnpm install grunt-contrib-concat
cnpm install grunt-contrib-watch
cnpm install grunt-contrib-connect
cnpm install grunt-contrib-copy
cnpm install grunt-contrib-jasmine   # if this errors, simply run it again
```

Edit the Gruntfile: find the connect property (around line 95) and add `hostname: '192.168.230.133',` (the first machine's IP).

```shell
vim /usr/local/elasticsearch-head/Gruntfile.js
```

```javascript
connect: {
    server: {
        options: {
            hostname: '192.168.230.133',   // don't forget the trailing comma
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}
```

Start it in the background:

```shell
cd /usr/local/elasticsearch-head
nohup grunt server &
# or:
eval "cd /usr/local/elasticsearch-head/ ; nohup npm run start >/dev/null 2>&1 & "
```

Background service script:

```shell
vim /usr/bin/elasticsearch-head
```

```shell
#!/bin/bash
#chkconfig: 2345 55 24
#description: elasticsearch-head service manager
data="cd /usr/local/elasticsearch-head/ ; nohup npm run start >/dev/null 2>&1 & "
START() {
    eval $data
}
STOP() {
    ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 >/dev/null
}
case "$1" in
    start)
        START
        ;;
    stop)
        STOP
        ;;
    restart)
        STOP
        sleep 2
        START
        ;;
    *)
        echo "Usage: elasticsearch-head {start|stop|restart}"
        ;;
esac
```

```shell
chmod +x /usr/bin/elasticsearch-head
elasticsearch-head start     # start
elasticsearch-head stop      # stop
elasticsearch-head restart   # restart
```

5. Create users

```shell
groupadd elkms
useradd elkms -g elkms
passwd elkms      # password: 123456
groupadd elklsnoe
useradd elklsnoe -g elklsnoe
passwd elklsnoe   # password: 123456
groupadd elklstow
useradd elklstow -g elklstow
passwd elklstow   # password: 123456
```

Building the ES cluster

Unpack ES, one copy per node plus a spare:

```shell
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchms
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchlsnoe
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchlstow
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchbak   # extra backup copy

# Create the data directories and give the ELK users ownership
mkdir /usr/local/elasticsearchms/data
mkdir /usr/local/elasticsearchlsnoe/data
mkdir /usr/local/elasticsearchlstow/data
chown -R elkms:elkms /usr/local/elasticsearchms
chown -R elklsnoe:elklsnoe /usr/local/elasticsearchlsnoe
chown -R elklstow:elklstow /usr/local/elasticsearchlstow
```

Configure node ms (192.168.230.133): `vim /usr/local/elasticsearchms/config/elasticsearch.yml`

```yaml
cluster.name: elk                  # cluster name; must be identical on every node
node.name: elk-133                 # node name
node.master: true                  # eligible to become master
node.data: true                    # data node
path.data: /usr/local/elasticsearchms/data    # data path
path.logs: /usr/local/elasticsearchms/logs    # log path
bootstrap.memory_lock: false       # leave memory locking off; true causes startup errors here
network.host: 192.168.230.133      # bind address
http.port: 9200                    # HTTP port
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.230.133", "192.168.230.134", "192.168.230.135"]   # initial host list
discovery.zen.minimum_master_nodes: 2          # n/2+1
http.enabled: true                 # serve HTTP
http.cors.enabled: true            # allow the head plugin to reach ES
http.cors.allow-origin: "*"
```

Configure node lsnoe (192.168.230.134): `vim /usr/local/elasticsearchlsnoe/config/elasticsearch.yml`

```yaml
cluster.name: elk
node.name: elk-134
node.master: true
node.data: true
path.data: /usr/local/elasticsearchlsnoe/data
path.logs: /usr/local/elasticsearchlsnoe/logs
bootstrap.memory_lock: false
network.host: 192.168.230.134
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.230.133", "192.168.230.134", "192.168.230.135"]
discovery.zen.minimum_master_nodes: 2
http.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Configure node lstow (192.168.230.135): `vim /usr/local/elasticsearchlstow/config/elasticsearch.yml`

```yaml
cluster.name: elk
node.name: elk-135
node.master: true
node.data: true
path.data: /usr/local/elasticsearchlstow/data
path.logs: /usr/local/elasticsearchlstow/logs
bootstrap.memory_lock: false
network.host: 192.168.230.135
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.230.133", "192.168.230.134", "192.168.230.135"]
discovery.zen.minimum_master_nodes: 2
http.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Reduce the JVM heap size:

```shell
sed -i 's/-Xms4g/-Xms1g/' /usr/local/elasticsearchms/config/jvm.options
sed -i 's/-Xmx4g/-Xmx1g/' /usr/local/elasticsearchms/config/jvm.options
sed -i 's/-Xms4g/-Xms1g/' /usr/local/elasticsearchlsnoe/config/jvm.options
sed -i 's/-Xmx4g/-Xmx1g/' /usr/local/elasticsearchlsnoe/config/jvm.options
sed -i 's/-Xms4g/-Xms1g/' /usr/local/elasticsearchlstow/config/jvm.options
sed -i 's/-Xmx4g/-Xmx1g/' /usr/local/elasticsearchlstow/config/jvm.options
```

Start the nodes:

```shell
su - elkms -c "/usr/local/elasticsearchms/bin/elasticsearch -d"
su - elklsnoe -c "/usr/local/elasticsearchlsnoe/bin/elasticsearch -d"
su - elklstow -c "/usr/local/elasticsearchlstow/bin/elasticsearch -d"
```

Start on boot:

```shell
vim /etc/rc.local
su - elkms -c "/usr/local/elasticsearchms/bin/elasticsearch -d"
su - elklsnoe -c "/usr/local/elasticsearchlsnoe/bin/elasticsearch -d"
su - elklstow -c "/usr/local/elasticsearchlstow/bin/elasticsearch -d"
chmod +x /etc/rc.local
```
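The value `discovery.zen.minimum_master_nodes: 2` comes from the quorum rule n/2+1 (integer division) over the number of master-eligible nodes. A quick sketch of the arithmetic:

```shell
# quorum = floor(n / 2) + 1 for n master-eligible nodes
for n in 3 5 7; do
  echo "$n nodes -> minimum_master_nodes=$(( n / 2 + 1 ))"
done
```

With 3 master-eligible nodes the quorum is 2, so the cluster tolerates one node failure while still refusing to elect a master from an isolated minority, which is what prevents split-brain.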

Building Kibana

```shell
tar zxf kibana-6.7.1-linux-x86_64.tar.gz && mv kibana-6.7.1-linux-x86_64 /usr/local/kibana
vim /usr/local/kibana/config/kibana.yml
```

```yaml
server.port: 5601                  # listening port
server.host: "192.168.230.133"     # listening IP
elasticsearch.hosts: ["http://192.168.230.133:9200","http://192.168.230.134:9200","http://192.168.230.135:9200"]   # ES cluster addresses
logging.dest: /usr/local/kibana/logs/kibana.log   # log path
kibana.index: ".kibana"            # default index
```

```shell
mkdir /usr/local/kibana/logs && touch /usr/local/kibana/logs/kibana.log

# Start Kibana
/usr/local/kibana/bin/kibana &

# Start on boot
vim /etc/rc.local
/usr/local/kibana/bin/kibana &
```

Building Logstash

```shell
tar zxf logstash-6.7.1.tar.gz && mv logstash-6.7.1/ /usr/local/logstash
mkdir /usr/local/logstash/conf.d

# Back up and edit the main config
cp /usr/local/logstash/config/logstash.yml /usr/local/logstash/config/logstash.yml.bak
vim /usr/local/logstash/config/logstash.yml
# around line 36, set:
# http.port: 9600
```

Nginx configuration for the logs Logstash will collect. Create a sub-config (virtual host) that proxies Kibana and writes an access log in a custom `main2` format:

```shell
vim /etc/nginx/nginx.d/elk.conf
```

```nginx
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$upstream_addr" $request_time';
server {
    listen 81;
    server_name elk.test.com;
    location / {
        proxy_pass http://192.168.230.133:5601;    # proxy to Kibana
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    access_log /var/log/nginx/elk_access.log main2;   # write the access log in main2 format
}
```

Logstash pipeline config:

```shell
vim /usr/local/logstash/conf.d/nginx_access.conf
```

```ruby
input {
    file {
        path => "/var/log/nginx/elk_access.log"   # the nginx access-log path
        start_position => "beginning"
        type => "nginx"
    }
}
filter {
    grok {
        match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
    }
    geoip {
        source => "clientip"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["192.168.230.133:9200"]      # any ES node in the cluster works
        index => "nginx-test-%{+YYYY.MM.dd}"   # index name, also the export name
    }
}
```

```shell
# Start Logstash
nohup /usr/local/logstash/bin/logstash --path.settings /usr/local/logstash/ -f /usr/local/logstash/conf.d/nginx_access.conf &

# Start on boot
vim /etc/rc.local
nohup /usr/local/logstash/bin/logstash --path.settings /usr/local/logstash/ -f /usr/local/logstash/conf.d/nginx_access.conf &
```
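To see the shape of line the grok pattern must match, here is a fabricated request in the `main2` format above, split on whitespace with awk. All the sample values are invented for illustration:

```shell
# A fabricated request line in the main2 log format defined above
line='elk.test.com 192.168.230.136 - admin [04/Aug/2019:16:55:54 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-" 0.001'
# On a plain whitespace split: field 1 = http_host, 2 = clientip, 10 = status
echo "$line" | awk '{print "http_host=" $1, "clientip=" $2, "response=" $10}'
```

Grok does far more than this positional split (it names every field and types `request_time` as a float), but checking a sample line this way is a quick sanity test that the nginx `log_format` and the grok pattern agree on field order.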

Building the Kafka cluster

ZooKeeper configuration:

```shell
cp -r kafka_2.11-2.1.0.bak /usr/local/kafka1
cp -r kafka_2.11-2.1.0.bak /usr/local/kafka2
cp -r kafka_2.11-2.1.0.bak /usr/local/kafka3
# comment out every active line so each file can be rewritten from scratch
su - elkms -c "sed -i 's/^[^#]/#&/' /usr/local/kafka1/config/zookeeper.properties"
su - elklsnoe -c "sed -i 's/^[^#]/#&/' /usr/local/kafka2/config/zookeeper.properties"
su - elklstow -c "sed -i 's/^[^#]/#&/' /usr/local/kafka3/config/zookeeper.properties"
vim /usr/local/kafka1/config/zookeeper.properties
```

```properties
dataDir=/usr/local/kafka1/zookeeper1/data
dataLogDir=/usr/local/kafka1/zookeeper1/logs
clientPort=2181
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.230.133:2888:3888
server.2=192.168.230.134:2888:3888
server.3=192.168.230.135:2888:3888
```

```shell
vim /usr/local/kafka2/config/zookeeper.properties
```

```properties
dataDir=/usr/local/kafka2/zookeeper2/data
dataLogDir=/usr/local/kafka2/zookeeper2/logs
clientPort=2182
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.230.133:2888:3888
server.2=192.168.230.134:2888:3888
server.3=192.168.230.135:2888:3888
```

```shell
vim /usr/local/kafka3/config/zookeeper.properties
```

```properties
dataDir=/usr/local/kafka3/zookeeper3/data
dataLogDir=/usr/local/kafka3/zookeeper3/logs
clientPort=2183
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.230.133:2888:3888
server.2=192.168.230.134:2888:3888
server.3=192.168.230.135:2888:3888
```

```shell
mkdir -p /usr/local/kafka1/zookeeper1/{data,logs}
mkdir -p /usr/local/kafka2/zookeeper2/{data,logs}
mkdir -p /usr/local/kafka3/zookeeper3/{data,logs}
echo 1 > /usr/local/kafka1/zookeeper1/data/myid
echo 2 > /usr/local/kafka2/zookeeper2/data/myid
echo 3 > /usr/local/kafka3/zookeeper3/data/myid
chown -R elkms:elkms /usr/local/kafka1
chown -R elklsnoe:elklsnoe /usr/local/kafka2
chown -R elklstow:elklstow /usr/local/kafka3
```

Kafka configuration:

```shell
sed -i 's/^[^#]/#&/' /usr/local/kafka1/config/server.properties
sed -i 's/^[^#]/#&/' /usr/local/kafka2/config/server.properties
sed -i 's/^[^#]/#&/' /usr/local/kafka3/config/server.properties
vim /usr/local/kafka1/config/server.properties
```

```properties
broker.id=1
listeners=PLAINTEXT://192.168.230.133:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka1/zookeeper1/logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.230.133:2181,192.168.230.134:2182,192.168.230.135:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
```

```shell
vim /usr/local/kafka2/config/server.properties
```

```properties
broker.id=2
listeners=PLAINTEXT://192.168.230.134:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka2/zookeeper2/logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.230.133:2181,192.168.230.134:2182,192.168.230.135:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
```

```shell
vim /usr/local/kafka3/config/server.properties
```

```properties
broker.id=3
listeners=PLAINTEXT://192.168.230.135:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka3/zookeeper3/logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.230.133:2181,192.168.230.134:2182,192.168.230.135:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
```

Start ZooKeeper first, then the Kafka brokers:

```shell
su - elkms -c "nohup /usr/local/kafka1/bin/zookeeper-server-start.sh /usr/local/kafka1/config/zookeeper.properties &"
su - elklsnoe -c "nohup /usr/local/kafka2/bin/zookeeper-server-start.sh /usr/local/kafka2/config/zookeeper.properties &"
su - elklstow -c "nohup /usr/local/kafka3/bin/zookeeper-server-start.sh /usr/local/kafka3/config/zookeeper.properties &"
su - elkms -c "nohup /usr/local/kafka1/bin/kafka-server-start.sh /usr/local/kafka1/config/server.properties &"
su - elklsnoe -c "nohup /usr/local/kafka2/bin/kafka-server-start.sh /usr/local/kafka2/config/server.properties &"
su - elklstow -c "nohup /usr/local/kafka3/bin/kafka-server-start.sh /usr/local/kafka3/config/server.properties &"
```
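The `sed 's/^[^#]/#&/'` applied to the .properties files above comments out every line that is not already a comment, so each file can then be rewritten cleanly while keeping the stock file as reference. A sketch of what it does on sample input:

```shell
# Comment out every active (non-#) line; lines already starting with # are untouched
printf 'dataDir=/tmp/zk\n#already commented\nclientPort=2181\n' | sed 's/^[^#]/#&/'
# prints:
#   #dataDir=/tmp/zk
#   #already commented
#   #clientPort=2181
```

The pattern `^[^#]` matches the first character of a line only when it is not `#`, and `#&` re-emits that character with a `#` prepended; empty lines are left alone because they have no character to match.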

Building Filebeat

1) Download

```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-linux-x86_64.tar.gz
```

2) Unpack

```shell
tar xzvf filebeat-6.5.4-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
mv filebeat-6.5.4-linux-x86_64 filebeat
cd filebeat/
```

3) Configure

Edit the Filebeat config so it collects logs from a local directory and ships them to the Kafka cluster:

```shell
mv filebeat.yml filebeat.yml.bak
vim filebeat.yml
```

```yaml
filebeat.prospectors:
- input_type: log               # input type
  paths:
    - /var/log/nginx/*.log      # log paths
  json.keys_under_root: true    # put parsed JSON fields at the document root (default false)
  json.add_error_key: true      # record parse failures in the error.message field
  json.message_key: log         # key used to merge multi-line JSON logs; also requires multiline settings
output.kafka:
  hosts: ["192.168.230.133:9092","192.168.230.134:9092","192.168.230.135:9092"]   # Kafka brokers
  topic: 'nginx'                # Kafka topic to write to
```

Note that several options changed significantly after Filebeat 6.0; for example `document_type` is no longer supported and `fields` must be used instead.

4) Start

```shell
nohup ./filebeat -e -c filebeat.yml &
tail -f nohup.out
```

    2019-08-04T16:55:54.708+0800  INFO  kafka/log.go:53  kafka message: client/metadata found some partitions to be leaderless
    2019-08-04T16:55:54.708+0800  INFO  kafka/log.go:53  client/metadata retrying after 250ms... (2 attempts remaining)
    ...

Verify that the Kafka topic was created:

```shell
cd /usr/local/kafka1
bin/kafka-topics.sh --zookeeper 192.168.230.133:2181 --list
```

    __consumer_offsets
    nginx          # the topic has been created
    testtopic

Now we edit the Logstash config file that connects to Kafka.

After Kafka is configured, check the result (original screenshot omitted).

Log in to Kibana and view the data (original screenshots omitted).

Thanks for reading! If this helped you, please give it a like (σ゚∀゚)σ…:*☆
