ELK Log Collection

我会带着你远行 · 2024-04-03

Table of Contents

  • Approach
  • Deploying Filebeat
    • Helm deployment
    • YAML deployment
  • Deploying Elasticsearch
    • Installing ECK
    • Installing Elasticsearch
  • Deploying Kibana

Approach

  1. Deploy Filebeat on every cluster node to collect logs
  2. Filebeat ships the logs to Elasticsearch for storage
  3. Deploy Kibana to visualize the data in Elasticsearch

Architecture (flow of the original diagram):

```
Node ── Filebeat ──┐
Node ── Filebeat ──┼── collect & send ──> Elasticsearch ── display ──> Kibana
Node ── Filebeat ──┘
```

Deploying Filebeat

Helm deployment

Official docs: https://github.com/elastic/helm-charts/tree/master/filebeat

Deployment commands:

```shell
helm repo add elastic https://helm.elastic.co
helm install fb elastic/filebeat -f values.yaml -n kube-system
```

The latest chart version at the time of writing is 8.5.1, but this article uses 7.17.

The values.yaml file

Configuration reference: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-container.html

`input.paths`: glob patterns matching the log files to collect
`input.fields`: custom key/value fields attached to each collected event
`input.fields_under_root`: when set to `true`, the custom fields are stored at the top level of the output document rather than nested under `fields`; see also the article "Filebeat output to multiple Elasticsearch indices"
`setup.template`: custom index template
`setup.ilm.enabled`: set to `false`; if ILM is enabled, the `output.elasticsearch.index` setting is ignored
`output.elasticsearch.indices.index`: the Elasticsearch index to write to
`output.elasticsearch.indices.when.contains`: condition matching the `fields` set on the input

```yaml
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*dev_permission-service*.log
      fields:
        app_id: "dev-permission"
      fields_under_root: true
    - type: container
      paths:
        - /var/log/containers/*dev_converter-service*.log
      fields:
        app_id: "dev-converter"
      fields_under_root: true
    setup.template.enabled: true
    setup.template.fields: fields.yml
    setup.template.name: "k8s"
    setup.template.pattern: "k8s-*"
    setup.ilm.enabled: false
    output.elasticsearch:
      index: "k8s-other"
      username: 'elastic'
      password: 'xxxxx'
      protocol: http
      hosts: ["http://elsmy.saas.api.gd-njc.com:80"]
      indices:
        - index: "k8s-dev-permission-%{+yyyy.MM.dd}"
          when.contains:
            app_id: "dev-permission"
        - index: "k8s-dev-converter-%{+yyyy.MM.dd}"
          when.contains:
            app_id: "dev-converter"
```
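The `indices` routing above sends each event to an index chosen by its `app_id` field, with a `%{+yyyy.MM.dd}` date suffix, falling back to the top-level `index: "k8s-other"` when no condition matches. A rough shell illustration of that selection logic (a simulation for clarity, not Filebeat code):

```shell
# Simulate Filebeat's when.contains index routing for a single event.
app_id="dev-permission"    # the field attached by the matching input
today=$(date +%Y.%m.%d)    # %{+yyyy.MM.dd} resolves to the event date
case "$app_id" in
  dev-permission) index="k8s-dev-permission-$today" ;;
  dev-converter)  index="k8s-dev-converter-$today" ;;
  *)              index="k8s-other" ;;  # fallback: output.elasticsearch.index
esac
echo "$index"
```

An event carrying `app_id: "dev-converter"` would land in `k8s-dev-converter-<date>` instead.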

YAML deployment

Official docs: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html

Edit filebeat-kubernetes.yaml:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*dev_permission-service*.log
      fields:
        app_id: "dev-permission"
      fields_under_root: true
    - type: container
      paths:
        - /var/log/containers/*dev_converter-service*.log
      fields:
        app_id: "dev-converter"
      fields_under_root: true
    setup.template.enabled: true
    setup.template.fields: fields.yml
    setup.template.name: "k8s"
    setup.template.pattern: "k8s-*"
    setup.ilm.enabled: false
    output.elasticsearch:
      index: "k8s-other"
      username: 'elastic'
      password: 'xxxxx'
      protocol: http
      hosts: ["http://elsmy.saas.api.gd-njc.com:80"]
      indices:
        - index: "k8s-dev-permission-%{+yyyy.MM.dd}"
          when.contains:
            app_id: "dev-permission"
        - index: "k8s-dev-converter-%{+yyyy.MM.dd}"
          when.contains:
            app_id: "dev-converter"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.4.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: Role
  name: filebeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
- apiGroups: ["apps"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat
  # should be the namespace where filebeat is running
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: filebeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  resources:
  - configmaps
  resourceNames:
  - kubeadm-config
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
```

Deployment command:

```shell
kubectl apply -f filebeat-kubernetes.yaml
```

Deploying Elasticsearch

Official docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html

Installing ECK

Official docs: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html

Install the CRDs and the operator:

```shell
kubectl create -f https://download.elastic.co/downloads/eck/2.5.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.5.0/operator.yaml
```

Monitor the operator logs:

```shell
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
```


Installing Elasticsearch

Deployment command:

```shell
kubectl apply -f deployment.yaml
```

The deployment.yaml file

`namespace`: the namespace to deploy into
`tls`: disables TLS; reference: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-transport-settings.html
`storageClassName`: the storage class to use; see the nfs-client article
`storage`: the storage size, 10Gi here; reference: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html
`resources`: minimum and maximum memory; reference: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html
`ES_JAVA_OPTS`: minimum and maximum JVM heap size; reference: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-jvm-heap-dumps.html

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch # requires the ECK operator to be installed
metadata:
  name: es
  namespace: els-test
spec:
  version: 7.11.2
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: es
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: nfs-client
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
          resources:
            requests:
              memory: 3Gi
            limits:
              memory: 3Gi
```

Deployment result:

Troubleshooting

Error message:

```
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```

Fix (on each node running Elasticsearch):

  1. Edit the config: vi /etc/sysctl.conf
  2. Add the line: vm.max_map_count=655360
  3. Apply it: sysctl -p
  4. Verify: sysctl -a | grep vm.max_map_count
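Alternatively, ECK's virtual-memory documentation suggests setting the kernel parameter from a privileged init container in the pod template, so nodes don't need manual configuration. A sketch that would slot into the `nodeSets` entry of the Elasticsearch manifest above (the init-container name is arbitrary):

```yaml
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
```

This runs before the Elasticsearch container on every pod start, at the cost of allowing a privileged container.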

Logging in
Username: elastic
Password: retrieve it with:

```shell
kubectl get secret es-es-elastic-user -n els-test -o=jsonpath='{.data.elastic}' | base64 --decode
```
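The secret stores the password base64-encoded, which is why the pipeline ends in `base64 --decode`. A local illustration of that last step with a stand-in value (not a real secret):

```shell
# Kubernetes Secret data values are base64-encoded; decoding recovers the plain text.
encoded='Y2hhbmdlbWU='              # stand-in for what the jsonpath extracts
echo "$encoded" | base64 --decode   # prints "changeme"
```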


Configure an Ingress or NodePort to expose the service; see the article "ingress 部署".
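For reference, a minimal Ingress sketch, assuming an NGINX ingress controller and the HTTP service ECK creates for the cluster (named `<cluster>-es-http`, here `es-es-http`, on port 9200); the hostname is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: es-ingress
  namespace: els-test
spec:
  ingressClassName: nginx
  rules:
  - host: es.example.com          # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: es-es-http      # ECK default service name: <cluster>-es-http
            port:
              number: 9200
```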

If the response looks like the following, Elasticsearch is deployed successfully; the cluster_name is es.


View the cluster's index information at `/_cat/indices?v`.

Deploying Kibana

Deployment command:

```shell
kubectl apply -f kibana.yaml
```

The kibana.yaml file

`namespace`: same as the Elasticsearch deployment
`tls`: disables TLS
`elasticsearchRef`: matches the Elasticsearch cluster_name (es)

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: els-test
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: es
  http:
    tls:
      selfSignedCertificate:
        disabled: true
```

Deployment result:

Configure an Ingress or NodePort to expose Kibana; see the article "ingress 部署".
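As with Elasticsearch, a minimal Ingress sketch for reference, assuming an NGINX ingress controller and the service ECK creates for Kibana (named `<name>-kb-http`, here `kibana-kb-http`, on port 5601); the hostname is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: els-test
spec:
  ingressClassName: nginx
  rules:
  - host: kibana.example.com      # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana-kb-http  # ECK default service name: <name>-kb-http
            port:
              number: 5601
```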

Configure an index pattern: [manage space] -> [index patterns]

Create the index pattern match.

In Discover, select the index pattern you configured.

Select the message field to view only the log messages.
