Building an ELK Pseudo-Cluster + Kafka Pseudo-Cluster
Table of Contents
- Preface
- Preparation
- Building the ES cluster
- Building Kibana
- Building Logstash
- Building the Kafka cluster
- Building Filebeat
Preface
This guide builds an ELK pseudo-cluster plus a Kafka pseudo-cluster and uses Filebeat to collect Nginx logs. Everything runs on a single server with multiple virtual NICs.
- ElasticSearch: a JSON-based distributed search and analytics engine and the core of ELK. It centrally stores data for searching, analyzing, and storing logs. Being distributed, it scales horizontally, supports automatic node discovery, and shards indices automatically.
This guide sets up a 3-node ES cluster in which every node is master-eligible and holds data.
- Logstash: a dynamic data collection pipeline that ingests data over TCP/UDP/HTTP (and from Beats) and enriches it or extracts fields. Here it parses logs into JSON and hands them to ElasticSearch.
- Kibana: a data visualization component that renders the collected data as dashboards and charts and provides a UI for configuring and managing ELK.
- Filebeat: part of the Beats family, a lightweight log shipper derived from the original Logstash-forwarder source. In other words, Filebeat is the successor to Logstash-forwarder and the first choice for the agent tier of the ELK Stack.
Preparation
One server with 4 virtual IP addresses
OS: CentOS 7
Memory: 16 GB
IP address ①: 192.168.230.133
IP address ②: 192.168.230.134
IP address ③: 192.168.230.135
IP address ④: 192.168.230.136
Applications and ports per IP:
192.168.230.133 (virtual NIC)
**********************************
Application              Ports
es master/data node      9200 9300
head                     9100
zookeeper                2181 2888 3888
kibana                   5601
kafka                    9092
192.168.230.134 (virtual NIC)
**********************************
Application              Ports
es master/data node      9200 9300
head                     9100
zookeeper                2182 2888 3888
logstash                 9600
kafka                    9092
192.168.230.135 (virtual NIC)
**********************************
Application              Ports
es master/data node      9200 9300
zookeeper                2183 2888 3888
kafka                    9092
192.168.230.136 (virtual NIC)
**********************************
Application              Ports
nginx                    80
Filebeat                 -
The required environment and version details are bundled here; download link: https://download.csdn.net/download/qq_26129413/15728637
1. Disable the firewall and related settings
1. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
2. Disable SELinux
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config && setenforce 0
3. Raise the file descriptor and process limits; without this, ES may fail to start
# vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
# vim /etc/sysctl.conf
vm.max_map_count=655360
4. Reload: sysctl -p
# Put all installation packages in this directory
mkdir /software && cd /software
2. Install the JDK 1.8 environment
tar -zvxf jdk-8u211-linux-x64.tar.gz && mv jdk1.8.0_211/ /usr/local/jdk
# vim /etc/profile
JAVA_HOME=/usr/local/jdk
PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib
export JAVA_HOME PATH CLASSPATH
# source !$
# java -version
# ln -s /usr/local/jdk/bin/java /usr/local/bin/java
3. Install node.js
wget https://nodejs.org/dist/v10.15.3/node-v10.15.3-linux-x64.tar.xz
tar -Jxf node-v10.15.3-linux-x64.tar.xz && mv node-v10.15.3-linux-x64/ /usr/local/node
# vim /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH
export NODE_PATH=$NODE_HOME/lib/node_modules:$PATH
# source !$
# node -v
4. Install head
elasticsearch-head is a simple web UI for querying ElasticSearch shards, used here to verify that ES works correctly.
#wget https://github.com/mobz/elasticsearch-head/archive/master.zip
# unzip master.zip && mv elasticsearch-head-master/ /usr/local/elasticsearch-head
# cd /usr/local/elasticsearch-head
# npm install -g cnpm --registry=https://registry.npm.taobao.org
# cnpm install -g grunt-cli
# cnpm install -g grunt
# cnpm install grunt-contrib-clean
# cnpm install grunt-contrib-concat
# cnpm install grunt-contrib-watch
# cnpm install grunt-contrib-connect
# cnpm install grunt-contrib-copy
# cnpm install grunt-contrib-jasmine # if this errors, run it once more
-------------------------------------------------------------------------------------------------------
vim /usr/local/elasticsearch-head/Gruntfile.js
# Find the connect block below (around line 95) and add hostname: '192.168.230.133' (the first node's IP)
connect: {
server: {
options: {
hostname: '192.168.230.133', # don't forget the trailing comma
port: 9100,
base: '.',
keepalive: true
}
}
}
Start in the background (either of the following):
cd /usr/local/elasticsearch-head
nohup grunt server &
# or:
eval "cd /usr/local/elasticsearch-head/ ; nohup npm run start >/dev/null 2>&1 &"
-----------------------------------------------------------------------------------------------------
Service script
vim /usr/bin/elasticsearch-head
#!/bin/bash
#chkconfig: 2345 55 24
#description: elasticsearch-head service manager
data="cd /usr/local/elasticsearch-head/ ; nohup npm run start >/dev/null 2>&1 & "
START() {
eval $data
}
STOP() {
ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 >/dev/null
}
case "$1" in
start)
START
;;
stop)
STOP
;;
restart)
STOP
sleep 2
START
;;
*)
echo "Usage: elasticsearch-head {start|stop|restart}"
;;
esac
chmod +x /usr/bin/elasticsearch-head
elasticsearch-head start    # start
elasticsearch-head stop     # stop
elasticsearch-head restart  # restart
5. Create users
groupadd elkms
useradd elkms -g elkms
passwd elkms
123456
groupadd elklsnoe
useradd elklsnoe -g elklsnoe
passwd elklsnoe
123456
groupadd elklstow
useradd elklstow -g elklstow
passwd elklstow
123456
Building the ES cluster
Unpack the ES package
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchms
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchlsnoe
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchlstow
tar zxf elasticsearch-6.7.1.tar.gz && mv elasticsearch-6.7.1 /usr/local/elasticsearchbak ## extra backup copy
Create the data directories and grant ownership
[root@localhost local]# mkdir /usr/local/elasticsearchms/data
[root@localhost local]# mkdir /usr/local/elasticsearchlsnoe/data
[root@localhost local]# mkdir /usr/local/elasticsearchlstow/data
[root@localhost local]# chown -R elkms:elkms /usr/local/elasticsearchms
[root@localhost local]# chown -R elklsnoe:elklsnoe /usr/local/elasticsearchlsnoe
[root@localhost local]# chown -R elklstow:elklstow /usr/local/elasticsearchlstow
########################################################## Node ms config — 192.168.230.133
vim /usr/local/elasticsearchms/config/elasticsearch.yml
cluster.name: elk #cluster name; must be identical across the cluster
node.name: elk-133 #node name
node.master: true #eligible to become master
node.data: true #data node
path.data: /usr/local/elasticsearchms/data #data path
path.logs: /usr/local/elasticsearchms/logs #log path
bootstrap.memory_lock: false #disable memory locking; true causes startup errors here
network.host: 192.168.230.133 #bind IP
http.port: 9200 #http port
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.230.133", "192.168.230.134", "192.168.230.135"] #initial host list
discovery.zen.minimum_master_nodes: 2 # n/2+1
http.enabled: true #serve HTTP externally
http.cors.enabled: true #allow the head plugin to access es
http.cors.allow-origin: "*"
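The `discovery.zen.minimum_master_nodes` value comes from the quorum formula n/2 + 1 (integer division). A quick sketch of the arithmetic for this 3-node cluster:

```shell
# Quorum for n master-eligible nodes: n/2 + 1 (integer division)
n=3
quorum=$(( n / 2 + 1 ))
echo "minimum_master_nodes=$quorum"
# prints: minimum_master_nodes=2
```

With fewer than this many master-eligible nodes reachable, the cluster refuses to elect a master, which is what prevents split-brain.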
######################################################### Node lsnoe config — 192.168.230.134
vim /usr/local/elasticsearchlsnoe/config/elasticsearch.yml
cluster.name: elk
node.name: elk-134
node.master: true
node.data: true
path.data: /usr/local/elasticsearchlsnoe/data
path.logs: /usr/local/elasticsearchlsnoe/logs
bootstrap.memory_lock: false
network.host: 192.168.230.134
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.230.133", "192.168.230.134", "192.168.230.135"]
discovery.zen.minimum_master_nodes: 2
http.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
######################################################## Node lstow config — 192.168.230.135
vim /usr/local/elasticsearchlstow/config/elasticsearch.yml
cluster.name: elk
node.name: elk-135
node.master: true
node.data: true
path.data: /usr/local/elasticsearchlstow/data
path.logs: /usr/local/elasticsearchlstow/logs
bootstrap.memory_lock: false
network.host: 192.168.230.135
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.230.133", "192.168.230.134", "192.168.230.135"]
discovery.zen.minimum_master_nodes: 2
http.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*"
------------------------------------------------------------------------------------------------------
Reduce the JVM heap size
sed -i 's/-Xms4g/-Xms1g/' /usr/local/elasticsearchms/config/jvm.options
sed -i 's/-Xmx4g/-Xmx1g/' /usr/local/elasticsearchms/config/jvm.options
sed -i 's/-Xms4g/-Xms1g/' /usr/local/elasticsearchlsnoe/config/jvm.options
sed -i 's/-Xmx4g/-Xmx1g/' /usr/local/elasticsearchlsnoe/config/jvm.options
sed -i 's/-Xms4g/-Xms1g/' /usr/local/elasticsearchlstow/config/jvm.options
sed -i 's/-Xmx4g/-Xmx1g/' /usr/local/elasticsearchlstow/config/jvm.options
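To confirm the substitutions do what you expect before touching the real files, here is a throwaway sketch against a scratch file (the mktemp file stands in for jvm.options):

```shell
# Apply the same heap-size substitutions to a scratch file and inspect it.
tmp=$(mktemp)
printf '%s\n' '-Xms4g' '-Xmx4g' > "$tmp"   # stand-in for the stock 4g settings
sed -i 's/-Xms4g/-Xms1g/' "$tmp"
sed -i 's/-Xmx4g/-Xmx1g/' "$tmp"
cat "$tmp"
# prints:
# -Xms1g
# -Xmx1g
```

Keep -Xms and -Xmx equal, as the stock jvm.options recommends, so the heap never has to grow at runtime.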
---------------------------------------------------------------------------------------------------------
Start
su - elkms -c "/usr/local/elasticsearchms/bin/elasticsearch -d"
su - elklsnoe -c "/usr/local/elasticsearchlsnoe/bin/elasticsearch -d"
su - elklstow -c "/usr/local/elasticsearchlstow/bin/elasticsearch -d"
Enable start on boot:
vim /etc/rc.local
su - elkms -c "/usr/local/elasticsearchms/bin/elasticsearch -d"
su - elklsnoe -c "/usr/local/elasticsearchlsnoe/bin/elasticsearch -d"
su - elklstow -c "/usr/local/elasticsearchlstow/bin/elasticsearch -d"
chmod +x /etc/rc.local
Building Kibana
Unpack kibana
# tar zxf kibana-6.7.1-linux-x86_64.tar.gz && mv kibana-6.7.1-linux-x86_64 /usr/local/kibana
Edit the config
# vim /usr/local/kibana/config/kibana.yml
server.port: 5601 #listen port
server.host: "192.168.230.133" #listen IP
elasticsearch.hosts: ["http://192.168.230.133:9200","http://192.168.230.134:9200","http://192.168.230.135:9200"] #es cluster addresses
logging.dest: /usr/local/kibana/logs/kibana.log #log path
kibana.index: ".kibana" #default index
# mkdir /usr/local/kibana/logs && touch /usr/local/kibana/logs/kibana.log
---------------------------------------------------------------------------------------------------------------------
Start kibana
/usr/local/kibana/bin/kibana &
Start on boot
vim /etc/rc.local
/usr/local/kibana/bin/kibana &
Building Logstash
Unpack
# tar zxf logstash-6.7.1.tar.gz && mv logstash-6.7.1/ /usr/local/logstash
# mkdir /usr/local/logstash/conf.d
Edit the config
#cp /usr/local/logstash/config/logstash.yml /usr/local/logstash/config/logstash.yml.bak
# vim /usr/local/logstash/config/logstash.yml
# around line 36, set:
http.port: 9600
-----------------------------------------------------------------------------------------------
Logstash config for collecting Nginx logs
# vim /etc/nginx/nginx.d/elk.conf # add a drop-in config file for a virtual host
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" ' #define a log format named main2
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$upstream_addr" $request_time';
server {
listen 81;
server_name elk.test.com;
location / {
proxy_pass http://192.168.230.133:5601; # proxy to Kibana
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
access_log /var/log/nginx/elk_access.log main2; # write access logs using the main2 format
}
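Since access_log refers to the main2 format by name, a misspelled log_format only surfaces as an error at reload time. A rough consistency check can be run against a scratch copy of the vhost (the file content below is illustrative, written by mktemp rather than touching /etc/nginx):

```shell
# Verify that the format name used by access_log is declared by log_format.
conf=$(mktemp)
cat > "$conf" <<'EOF'
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$upstream_addr" $request_time';
server {
    listen 81;
    access_log /var/log/nginx/elk_access.log main2;
}
EOF
fmt=$(awk '/access_log/ {print $NF}' "$conf" | tr -d ';')
grep -q "log_format $fmt " "$conf" && echo "log_format '$fmt' is declared"
# prints: log_format 'main2' is declared
```

On the live machine, `nginx -t` performs the authoritative version of this check.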
# vim /usr/local/logstash/conf.d/nginx_access.conf # edit the logstash pipeline config
input {
file {
path => "/var/log/nginx/elk_access.log" #path to the nginx access log
start_position => "beginning"
type => "nginx"
}
}
filter {
grok {
match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"}
}
geoip {
source => "clientip"
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
hosts => ["192.168.230.133:9200","192.168.230.134:9200","192.168.230.135:9200"] #any reachable nodes of the es cluster work
index => "nginx-test-%{+YYYY.MM.dd}" #index name pattern for the exported data
}
}
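Before relying on the grok pattern, it helps to eyeball one line of the main2 format. The sample line below is fabricated, and the grep is only a coarse field-order check, not a substitute for testing the grok pattern itself (e.g. in Kibana's Grok Debugger):

```shell
# A fabricated main2-format line and a coarse regex for its field order.
line='elk.test.com 192.168.230.1 - - [04/Aug/2019:16:55:54 +0800] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "192.168.230.133:5601" 0.005'
echo "$line" | grep -Eq '^[^ ]+ [0-9.]+ - [^ ]+ \[[^]]+\] "[^"]*" [0-9]{3} ([0-9]+|-) "[^"]*" "[^"]*" "[^"]*" [0-9.]+$' \
  && echo "field order matches"
# prints: field order matches
```

The fields line up one-to-one with the grok captures: http_host, clientip, remote_user, timestamp, request, response, bytes_read, referrer, agent, xforwardedfor, request_time.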
--------------------------------------------------------------------------------------------
Start logstash
nohup /usr/local/logstash/bin/logstash --path.settings /usr/local/logstash/ -f /usr/local/logstash/conf.d/nginx_access.conf &
Start on boot
vim /etc/rc.local
nohup /usr/local/logstash/bin/logstash --path.settings /usr/local/logstash/ -f /usr/local/logstash/conf.d/nginx_access.conf &
Building the Kafka cluster
Zookeeper configuration
cp -r /usr/local/kafka_2.11-2.1.0.bak /usr/local/kafka1
cp -r /usr/local/kafka_2.11-2.1.0.bak /usr/local/kafka2
cp -r /usr/local/kafka_2.11-2.1.0.bak /usr/local/kafka3
su - elkms -c "sed -i 's/^[^#]/#&/' /usr/local/kafka1/config/zookeeper.properties"
su - elklsnoe -c "sed -i 's/^[^#]/#&/' /usr/local/kafka2/config/zookeeper.properties"
su - elklstow -c "sed -i 's/^[^#]/#&/' /usr/local/kafka3/config/zookeeper.properties"
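The sed expression above is a blanket comment-out: it prefixes every line that does not already start with # so the file can be rebuilt from a clean slate. Demonstrated on a scratch file:

```shell
# Comment out every non-comment line in a scratch properties file.
tmp=$(mktemp)
printf '%s\n' 'dataDir=/tmp/zookeeper' '# a comment' 'clientPort=2181' > "$tmp"
sed -i 's/^[^#]/#&/' "$tmp"
cat "$tmp"
# prints:
# #dataDir=/tmp/zookeeper
# # a comment
# #clientPort=2181
```

The `&` in the replacement re-inserts the matched first character, so `dataDir` becomes `#dataDir` rather than being truncated.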
vim /usr/local/kafka1/config/zookeeper.properties
dataDir=/usr/local/kafka1/zookeeper1/data
dataLogDir=/usr/local/kafka1/zookeeper1/logs
clientPort=2181
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.230.133:2888:3888
server.2=192.168.230.134:2888:3888
server.3=192.168.230.135:2888:3888
vim /usr/local/kafka2/config/zookeeper.properties
dataDir=/usr/local/kafka2/zookeeper2/data
dataLogDir=/usr/local/kafka2/zookeeper2/logs
clientPort=2182
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.230.133:2888:3888
server.2=192.168.230.134:2888:3888
server.3=192.168.230.135:2888:3888
vim /usr/local/kafka3/config/zookeeper.properties
dataDir=/usr/local/kafka3/zookeeper3/data
dataLogDir=/usr/local/kafka3/zookeeper3/logs
clientPort=2183
tickTime=2000
initLimit=20
syncLimit=10
server.1=192.168.230.133:2888:3888
server.2=192.168.230.134:2888:3888
server.3=192.168.230.135:2888:3888
mkdir -p /usr/local/kafka1/zookeeper1/{data,logs}
mkdir -p /usr/local/kafka2/zookeeper2/{data,logs}
mkdir -p /usr/local/kafka3/zookeeper3/{data,logs}
echo 1 > /usr/local/kafka1/zookeeper1/data/myid
echo 2 > /usr/local/kafka2/zookeeper2/data/myid
echo 3 > /usr/local/kafka3/zookeeper3/data/myid
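Each myid must equal the N of that instance's server.N line, or the ensemble will not form. A throwaway check that recreates the layout under a temp root (a stand-in for /usr/local) and verifies each id:

```shell
# Recreate the myid layout under a scratch root and verify each id.
root=$(mktemp -d)
for i in 1 2 3; do
    mkdir -p "$root/kafka$i/zookeeper$i/data"
    echo "$i" > "$root/kafka$i/zookeeper$i/data/myid"
done
for i in 1 2 3; do
    [ "$(cat "$root/kafka$i/zookeeper$i/data/myid")" = "$i" ] && echo "zookeeper$i myid ok"
done
# prints "zookeeper1 myid ok" through "zookeeper3 myid ok"
```

Run the same loop against /usr/local to audit the real files after the echo commands above.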
chown -R elkms:elkms /usr/local/kafka1
chown -R elklsnoe:elklsnoe /usr/local/kafka2
chown -R elklstow:elklstow /usr/local/kafka3
-------------------------------------------------------------------------
Kafka configuration
sed -i 's/^[^#]/#&/' /usr/local/kafka1/config/server.properties
sed -i 's/^[^#]/#&/' /usr/local/kafka2/config/server.properties
sed -i 's/^[^#]/#&/' /usr/local/kafka3/config/server.properties
vim /usr/local/kafka1/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.230.133:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka1/kafka-logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.230.133:2181,192.168.230.134:2182,192.168.230.135:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
vim /usr/local/kafka2/config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.230.134:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka2/kafka-logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.230.133:2181,192.168.230.134:2182,192.168.230.135:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
vim /usr/local/kafka3/config/server.properties
broker.id=3
listeners=PLAINTEXT://192.168.230.135:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/kafka3/kafka-logs
num.partitions=6
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.230.133:2181,192.168.230.134:2182,192.168.230.135:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
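broker.id must be unique per broker, or later brokers fail to register with zookeeper. A quick uniqueness check over scratch stand-ins for the three server.properties files:

```shell
# Confirm the three broker.id values are distinct.
dir=$(mktemp -d)
echo 'broker.id=1' > "$dir/server1.properties"
echo 'broker.id=2' > "$dir/server2.properties"
echo 'broker.id=3' > "$dir/server3.properties"
ids=$(grep -h '^broker.id=' "$dir"/server*.properties | cut -d= -f2 | sort -u | wc -l)
[ "$ids" -eq 3 ] && echo "broker ids are unique"
# prints: broker ids are unique
```

Against the real layout, point the grep at /usr/local/kafka1/config/server.properties and its two siblings.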
------------------------------------------------
Start
zookeeper
su - elkms -c "nohup /usr/local/kafka1/bin/zookeeper-server-start.sh /usr/local/kafka1/config/zookeeper.properties &"
su - elklsnoe -c "nohup /usr/local/kafka2/bin/zookeeper-server-start.sh /usr/local/kafka2/config/zookeeper.properties &"
su - elklstow -c "nohup /usr/local/kafka3/bin/zookeeper-server-start.sh /usr/local/kafka3/config/zookeeper.properties &"
kafka (once all three zookeepers are up)
su - elkms -c "nohup /usr/local/kafka1/bin/kafka-server-start.sh /usr/local/kafka1/config/server.properties &"
su - elklsnoe -c "nohup /usr/local/kafka2/bin/kafka-server-start.sh /usr/local/kafka2/config/server.properties &"
su - elklstow -c "nohup /usr/local/kafka3/bin/kafka-server-start.sh /usr/local/kafka3/config/server.properties &"
Building Filebeat
(1) Download
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-linux-x86_64.tar.gz
(2) Unpack
tar xzvf filebeat-6.5.4-linux-x86_64.tar.gz -C /usr/local/
cd /usr/local/
mv filebeat-6.5.4-linux-x86_64 filebeat
cd filebeat/
(3) Edit the config
Configure Filebeat to collect logs from a local directory and ship them to the Kafka cluster
# mv filebeat.yml filebeat.yml.bak
# vim filebeat.yml
filebeat.prospectors:
- input_type: log #input type
  paths:
    - /var/log/nginx/*.log #path to the logs
  json.keys_under_root: true #lift JSON fields to the root of the event; default false
  json.add_error_key: true #store parse errors in the error.message field
  json.message_key: log #key used to merge multi-line JSON logs; also requires a multiline config
output.kafka:
  hosts: ["192.168.230.133:9092","192.168.230.134:9092","192.168.230.135:9092"] #kafka brokers
  topic: 'nginx' #kafka topic to publish to
Note: several options changed significantly after Filebeat 6.0; for example document_type is no longer supported and fields must be used instead.
(4) Start
# nohup ./filebeat -e -c filebeat.yml &
# tail -f nohup.out
2019-08-04T16:55:54.708+0800 INFO kafka/log.go:53 kafka message: client/metadata found some partitions to be leaderless
2019-08-04T16:55:54.708+0800 INFO kafka/log.go:53 client/metadata retrying after 250ms... (2 attempts remaining)
...
Verify that the topic was created in kafka
# cd /usr/local/kafka1
# bin/kafka-topics.sh --zookeeper 192.168.230.133:2181 --list
__consumer_offsets
nginx #the topic has been created
Next, edit the Logstash config so that its input consumes from Kafka.
After Kafka is configured, verify the data flow,
then log in to Kibana to view it.
======================================================================
Thanks for reading! If this helped you, please give it a like (σ゚∀゚)σ…:*☆