Setting Up and Using an etcd Distributed Cluster


Contents

  • 1. Multi-node cluster setup
    • Start node 1
    • Start node 2
    • Start node 3
  • 2. Usage
    • View member information
    • Cluster status
    • Key-value reads and writes
    • Distributed locks
  • 3. References

The previous article, Getting Started with etcd, introduced etcd's basic features and covered installing and using a single-node service.

This article covers the installation, configuration, and use of a multi-node etcd cluster.

Due to resource constraints, the example here runs multiple nodes on a single machine.

In real deployments, the nodes should run on separate machines for high availability and disaster recovery.

1. Multi-node cluster setup

The operating system used is CentOS 6.8.

By default, etcd serves its client HTTP API on port 2379 and uses port 2380 for peer communication between nodes.

When several nodes run on one machine, each node must be assigned its own ports.

The three-node cluster is configured as follows:

    name     ip         ports (client, peer)
    etcd-01  127.0.0.1  2379, 2380
    etcd-02  127.0.0.1  2479, 2480
    etcd-03  127.0.0.1  2579, 2580

Start node 1

Create the startup script start_etcd1.sh:

    TOKEN=token-01
    CLUSTER_STATE=new

    # cluster-wide settings, identical in all three startup scripts
    NAME_1=etcd-01
    NAME_2=etcd-02
    NAME_3=etcd-03
    HOST_1=127.0.0.1
    HOST_2=127.0.0.1
    HOST_3=127.0.0.1
    PORT_API_1=2379
    PORT_PEER_1=2380
    PORT_API_2=2479
    PORT_PEER_2=2480
    PORT_API_3=2579
    PORT_PEER_3=2580
    CLUSTER=${NAME_1}=http://${HOST_1}:${PORT_PEER_1},${NAME_2}=http://${HOST_2}:${PORT_PEER_2},${NAME_3}=http://${HOST_3}:${PORT_PEER_3}

    # settings specific to this node (change these for every machine)
    THIS_NAME=${NAME_1}
    THIS_IP=${HOST_1}
    THIS_PORT_API=${PORT_API_1}
    THIS_PORT_PEER=${PORT_PEER_1}

    ./etcd --data-dir=data.${THIS_NAME} --name ${THIS_NAME} \
      --initial-advertise-peer-urls http://${THIS_IP}:${THIS_PORT_PEER} --listen-peer-urls http://${THIS_IP}:${THIS_PORT_PEER} \
      --advertise-client-urls http://${THIS_IP}:${THIS_PORT_API} --listen-client-urls http://${THIS_IP}:${THIS_PORT_API} \
      --initial-cluster ${CLUSTER} \
      --initial-cluster-state ${CLUSTER_STATE} --initial-cluster-token ${TOKEN}

Start it:

    bash start_etcd1.sh
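
The script runs etcd in the foreground. On a single test machine it can be more convenient to run each node in the background and redirect its output to a log file, for example with nohup; a minimal sketch (the log file name is an arbitrary choice, not anything etcd requires):

    # run node 1 in the background and capture its log
    nohup bash start_etcd1.sh > etcd-01.log 2>&1 &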

Start node 2

Create the startup script start_etcd2.sh by copying start_etcd1.sh and changing only the node-specific settings:

    THIS_NAME=${NAME_2}
    THIS_IP=${HOST_2}
    THIS_PORT_API=${PORT_API_2}
    THIS_PORT_PEER=${PORT_PEER_2}

Start it:

    bash start_etcd2.sh

Start node 3

Create the startup script start_etcd3.sh by copying start_etcd1.sh and changing only the node-specific settings:

    THIS_NAME=${NAME_3}
    THIS_IP=${HOST_3}
    THIS_PORT_API=${PORT_API_3}
    THIS_PORT_PEER=${PORT_PEER_3}

Start it:

    bash start_etcd3.sh
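
Once all three scripts are in place, a small helper can bring the whole test cluster up in one step. The following is a minimal sketch under the naming used above (start_cluster.sh and the log file names are my own choices, not part of etcd):

    #!/usr/bin/env bash
    # start_cluster.sh - start all three local etcd nodes in the background
    set -e

    for i in 1 2 3; do
        nohup bash start_etcd${i}.sh > etcd-0${i}.log 2>&1 &
        echo "started etcd-0${i} (pid $!)"
    done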

2. Usage

View member information

    export ETCDCTL_API=3
    HOST_1=127.0.0.1
    HOST_2=127.0.0.1
    HOST_3=127.0.0.1
    PORT_API_1=2379
    PORT_API_2=2479
    PORT_API_3=2579
    ENDPOINTS=$HOST_1:${PORT_API_1},$HOST_2:${PORT_API_2},$HOST_3:${PORT_API_3}

    $ ./etcdctl --endpoints=$ENDPOINTS member list
    264ae6bc59e99892, started, etcd-01, http://127.0.0.1:2380, http://127.0.0.1:2379, false
    dbafe5ad6b652eda, started, etcd-02, http://127.0.0.1:2480, http://127.0.0.1:2479, false
    f570ae41f524bdcb, started, etcd-03, http://127.0.0.1:2580, http://127.0.0.1:2579, false

Or, in table form:

    $ ./etcdctl --endpoints=$ENDPOINTS --write-out=table member list
    +------------------+---------+---------+-----------------------+-----------------------+------------+
    |        ID        | STATUS  |  NAME   |      PEER ADDRS       |     CLIENT ADDRS      | IS LEARNER |
    +------------------+---------+---------+-----------------------+-----------------------+------------+
    | 264ae6bc59e99892 | started | etcd-01 | http://127.0.0.1:2380 | http://127.0.0.1:2379 |      false |
    | dbafe5ad6b652eda | started | etcd-02 | http://127.0.0.1:2480 | http://127.0.0.1:2479 |      false |
    | f570ae41f524bdcb | started | etcd-03 | http://127.0.0.1:2580 | http://127.0.0.1:2579 |      false |
    +------------------+---------+---------+-----------------------+-----------------------+------------+

Cluster status

    $ ./etcdctl --endpoints=$ENDPOINTS endpoint status
    127.0.0.1:2379, 264ae6bc59e99892, 3.4.13, 20 kB, false, false, 8, 10, 10,
    127.0.0.1:2479, dbafe5ad6b652eda, 3.4.13, 33 kB, true, false, 8, 10, 10,
    127.0.0.1:2579, f570ae41f524bdcb, 3.4.13, 20 kB, false, false, 8, 10, 10,

    $ ./etcdctl --endpoints=$ENDPOINTS --write-out=table endpoint status
    +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    |    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
    +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
    | 127.0.0.1:2379 | 264ae6bc59e99892 |  3.4.13 |   20 kB |     false |      false |         8 |         10 |                 10 |        |
    | 127.0.0.1:2479 | dbafe5ad6b652eda |  3.4.13 |   33 kB |      true |      false |         8 |         10 |                 10 |        |
    | 127.0.0.1:2579 | f570ae41f524bdcb |  3.4.13 |   20 kB |     false |      false |         8 |         10 |                 10 |        |
    +----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

As shown above, the node at 127.0.0.1:2479 (etcd-02) is the current leader.
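
To watch a leader election happen, one option is to stop the current leader and query the remaining endpoints again; the two surviving nodes should elect a new leader. A rough sketch under the naming used above (the pkill pattern simply matches the node's data directory in the process command line):

    # stop the current leader, etcd-02 in this example
    pkill -f "data.etcd-02"

    # the surviving nodes should now report a new leader
    ./etcdctl --endpoints=$HOST_1:$PORT_API_1,$HOST_3:$PORT_API_3 --write-out=table endpoint status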

Check node health:

    $ ./etcdctl --endpoints=$ENDPOINTS endpoint health
    127.0.0.1:2379 is healthy: successfully committed proposal: took = 5.843376ms
    127.0.0.1:2479 is healthy: successfully committed proposal: took = 4.92724ms
    127.0.0.1:2579 is healthy: successfully committed proposal: took = 7.661623ms

    $ ./etcdctl --endpoints=$ENDPOINTS --write-out=table endpoint health
    +----------------+--------+------------+-------+
    |    ENDPOINT    | HEALTH |    TOOK    | ERROR |
    +----------------+--------+------------+-------+
    | 127.0.0.1:2379 |   true | 8.470348ms |       |
    | 127.0.0.1:2479 |   true | 4.540441ms |       |
    | 127.0.0.1:2579 |   true | 8.666543ms |       |
    +----------------+--------+------------+-------+
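
Related to health checks, etcdctl can also list active cluster alarms (for example NOSPACE when the backend database grows past its quota). A quick sketch:

    # list active alarms; empty output means none are raised
    ./etcdctl --endpoints=$ENDPOINTS alarm list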

Key-value reads and writes

    $ ./etcdctl --endpoints=$ENDPOINTS put foo "Hello World"
    OK
    $ ./etcdctl --endpoints=$ENDPOINTS get foo
    foo
    Hello World
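
A few other commonly used key-value operations, sketched here for completeness (foo is just the example key from above):

    # range query: return every key that starts with "foo"
    ./etcdctl --endpoints=$ENDPOINTS get foo --prefix

    # watch a key and print changes as they happen (blocks until interrupted)
    ./etcdctl --endpoints=$ENDPOINTS watch foo

    # delete the key
    ./etcdctl --endpoints=$ENDPOINTS del foo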

Distributed locks

Acquire the lock in terminal 1:

    $ ./etcdctl --endpoints=$ENDPOINTS lock mutex1
    mutex1/3dcb7516bf48a904

Trying to acquire the same lock in terminal 2 blocks:

    $ ./etcdctl --endpoints=$ENDPOINTS lock mutex1

Only after the command in terminal 1 is stopped (for example with Ctrl+C), which releases the lock, does terminal 2 acquire it:

    $ ./etcdctl --endpoints=$ENDPOINTS lock mutex1
    mutex1/3dcb7516bf48a907
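
etcdctl lock can also run a command while holding the lock and release it automatically when the command exits, which avoids tying the lock to an interactive terminal. A minimal sketch (sleep 10 stands in for whatever work needs the lock):

    # hold mutex1 only while the given command runs; the lock is released on exit
    ./etcdctl --endpoints=$ENDPOINTS lock mutex1 sleep 10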

3. References

etcd official documentation on GitHub
