Installing a replicated MySQL cluster on k8s, and the pitfalls encountered

逃离我推掉我的手 2022-12-11 01:30

References:
https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
https://github.com/kubernetes-retired/external-storage/tree/master/nfs
https://www.jianshu.com/p/65ed4bdf0e89
https://www.cnblogs.com/panwenbin-logs/p/12196286.html

Errors encountered

1. "3 pod has unbound immediate PersistentVolumeClaims"

Installing MySQL by following the official guide requires a PersistentVolumeClaim backed by a working StorageClass; without one, scheduling fails with "3 pod has unbound immediate PersistentVolumeClaims".
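
A quick way to confirm the situation (a minimal check, assuming the pods are named mysql-0/1/2 as in the manifests below):

    kubectl get pvc                  # claims created by volumeClaimTemplates should reach STATUS=Bound
    kubectl describe pod mysql-0     # the Events section shows the unbound-PVC scheduling failure

If the claims stay Pending, the StorageClass and provisioner setup in the installation steps below is the fix.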

2. mysql: Back-off restarting failed container

This error has many possible causes: for example, gcr.io being blocked leads to ImagePullBackOff, and insufficient node memory leads to errors such as 'process_linux.go:101: executing setns process caused "exit status 1"'.

If the cause is a blocked gcr.io registry producing ImagePullBackOff, you can pull the xtrabackup image from a mirror and retag it with the following commands.

    docker pull ist0ne/xtrabackup
    docker tag ist0ne/xtrabackup:latest gcr.io/google-samples/xtrabackup:1.0

Note that on a multi-node (e.g. cloud) cluster, every node must run the commands above so the image is available locally wherever the pod gets scheduled.
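
Two quick checks for the causes above (a sketch, assuming a Docker-based container runtime on the nodes):

    docker images | grep xtrabackup                     # run on each node: the retagged gcr.io image should be listed
    kubectl describe nodes | grep -i memorypressure     # "True" here points to the out-of-memory cause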

3. chown: changing ownership of '/var/lib/mysql/': Operation not permitted

This is caused by the NFS export permissions. The fix is the no_root_squash option described in the NFS installation steps below ("Run on the master node").

Installation steps

  • I. Environment preparation: NFS

    My k8s cluster has three nodes, one master and two workers:
    192.168.0.11 k8s-master
    192.168.0.22 k8s-node1
    192.168.0.33 k8s-node2

1. Run on the master node

    yum -y install nfs-utils rpcbind
    mkdir -p /home/nfs
    vi /etc/exports
    # Add the line below. The no_root_squash option keeps root privileges on the export;
    # without it MySQL fails with "chown: changing ownership of '/var/lib/mysql/': Operation not permitted".
    /home/nfs *(insecure,rw,async,no_root_squash)
    # Start the services and enable them at boot
    systemctl start rpcbind.service
    systemctl status rpcbind.service
    systemctl enable rpcbind.service
    systemctl start nfs.service
    systemctl enable nfs.service
    systemctl status nfs.service
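
Once the services are up, a quick sanity check on the master itself (exportfs and showmount ship with nfs-utils):

    exportfs -v              # should list /home/nfs with rw,async,no_root_squash
    showmount -e localhost   # should show "/home/nfs *"
    # if you edit /etc/exports later while NFS is running, re-export with: exportfs -r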

2. Verify from the worker nodes that NFS is usable

    yum -y install nfs-utils
    showmount -e 192.168.0.11
    # Mount to the local /mnt directory as a test, then unmount
    mount -t nfs 192.168.0.11:/home/nfs /mnt
    df -h
    umount /mnt
  • II. Configure the PersistentVolumeClaims

1. Create the ServiceAccount and its RBAC permissions (rbac.yaml) and apply it with kubectl apply -f rbac.yaml. This file defines the roles and permissions the NFS provisioner needs.

    # rbac.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default  # set the namespace for your environment; same for the occurrences below
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: nfs-client-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: run-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-client-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-client-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-client-provisioner
        # replace with namespace where provisioner is deployed
        namespace: default
    roleRef:
      kind: Role
      name: leader-locking-nfs-client-provisioner
      apiGroup: rbac.authorization.k8s.io
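
After applying the file you can confirm the objects exist (a minimal check, using only the names defined above):

    kubectl apply -f rbac.yaml
    kubectl get serviceaccount nfs-client-provisioner
    kubectl get clusterrole nfs-client-provisioner-runner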

2. Create the StorageClass and apply it with kubectl apply -f storage-class.yaml. Its name must match the storageClassName referenced by the PVCs later on.

    # storage-class.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: nfs  # must match the PROVISIONER_NAME environment variable in the provisioner Deployment
    parameters:
      type: gp2
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    mountOptions:
      - debug
    volumeBindingMode: Immediate
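
Apply it and, optionally, mark it as the cluster default so that PVCs that omit storageClassName also bind to it (a standard annotation; not required here, since the manifests below set the class explicitly):

    kubectl apply -f storage-class.yaml
    kubectl get storageclass
    # optional: make it the default StorageClass
    kubectl patch storageclass managed-nfs-storage \
      -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'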

3. Create the provisioner and apply it with kubectl apply -f nfs-provisioner.yaml.

    # nfs-provisioner.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default  # keep the namespace consistent with the RBAC file
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nfs-client-provisioner
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-client-provisioner
          containers:
            - name: nfs-client-provisioner
              image: quay.io/external_storage/nfs-client-provisioner:latest
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: nfs  # provisioner name; must match the provisioner field in storage-class.yaml
                - name: NFS_SERVER
                  value: 192.168.0.11  # NFS server IP
                - name: NFS_PATH
                  value: /home/nfs  # exported NFS path
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.0.11  # NFS server IP
                path: /home/nfs  # exported NFS path
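
Before moving on, make sure the provisioner pod is actually Running; if it is stuck in ContainerCreating, the NFS server/path values above are the usual suspects:

    kubectl apply -f nfs-provisioner.yaml
    kubectl get pods -l app=nfs-client-provisioner
    kubectl logs deploy/nfs-client-provisioner     # startup messages; errors here usually mean a bad NFS mount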

4. Create a PersistentVolumeClaim and apply it with kubectl apply -f pvc-nfs.yaml.

    # pvc-nfs.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem
      resources:
        requests:
          storage: 8Gi
      storageClassName: managed-nfs-storage
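
If everything above is wired correctly, this claim should become Bound within a few seconds and a matching PV should appear (a minimal verification):

    kubectl apply -f pvc-nfs.yaml
    kubectl get pvc data     # STATUS should be Bound
    kubectl get pv           # an auto-provisioned volume for the claim should be listed

If it stays Pending, re-check the StorageClass name and the provisioner logs before creating the MySQL cluster.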
  • III. Create the MySQL cluster

1. Create mysql-configmap.yaml. This ConfigMap carries the MySQL configuration: the master gets binary logging (log-bin) for writes, while the slaves are set to super-read-only.

    # mysql-configmap.yaml
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: mysql
      namespace: default
      labels:
        app: mysql
    data:
      master.cnf: |
        # Apply this config only on the master.
        [mysqld]
        log-bin
      slave.cnf: |
        # Apply this config only on slaves.
        [mysqld]
        super-read-only
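
Apply it like the other manifests; the init container in the StatefulSet below copies master.cnf onto pod ordinal 0 and slave.cnf onto the others:

    kubectl apply -f mysql-configmap.yaml
    kubectl describe configmap mysql     # should show the master.cnf and slave.cnf keys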

2. Create mysql-services.yaml.

    # mysql-services.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      ports:
        - name: mysql
          port: 3306
      clusterIP: None
      selector:
        app: mysql
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-read
      labels:
        app: mysql
    spec:
      ports:
        - name: mysql
          port: 3306
      selector:
        app: mysql
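
The first service is headless (clusterIP: None), which gives every pod a stable DNS name such as mysql-0.mysql; mysql-read is a normal ClusterIP service that load-balances read connections across all pods. Apply and check:

    kubectl apply -f mysql-services.yaml
    kubectl get svc -l app=mysql     # mysql should show CLUSTER-IP "None", mysql-read a regular ClusterIP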

3. Create mysql-statefulset.yaml.

    # mysql-statefulset.yaml
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: mysql
      namespace: default
    spec:
      serviceName: mysql
      replicas: 3
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          volumes:
            - name: conf
              emptyDir: {}
            - name: config-map
              configMap:
                name: mysql
                defaultMode: 420
          initContainers:
            - name: init-mysql
              image: 'mysql:5.7'
              command:
                - bash
                - '-c'
                - |
                  set -ex
                  # Generate mysql server-id from pod ordinal index.
                  [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
                  ordinal=${BASH_REMATCH[1]}
                  echo [mysqld] > /mnt/conf.d/server-id.cnf
                  # Add an offset to avoid reserved server-id=0 value.
                  echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
                  # Copy appropriate conf.d files from config-map to emptyDir.
                  if [[ $ordinal -eq 0 ]]; then
                    cp /mnt/config-map/master.cnf /mnt/conf.d/
                  else
                    cp /mnt/config-map/slave.cnf /mnt/conf.d/
                  fi
              resources: {}
              volumeMounts:
                - name: conf
                  mountPath: /mnt/conf.d
                - name: config-map
                  mountPath: /mnt/config-map
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: IfNotPresent
            - name: clone-mysql
              image: 'gcr.io/google-samples/xtrabackup:1.0'
              command:
                - bash
                - '-c'
                - |
                  set -ex
                  # Skip the clone if data already exists.
                  [[ -d /var/lib/mysql/mysql ]] && exit 0
                  # Skip the clone on master (ordinal index 0).
                  [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
                  ordinal=${BASH_REMATCH[1]}
                  [[ $ordinal -eq 0 ]] && exit 0
                  # Clone data from previous peer.
                  ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
                  # Prepare the backup.
                  xtrabackup --prepare --target-dir=/var/lib/mysql
              resources: {}
              volumeMounts:
                - name: nfs-pvc
                  mountPath: /var/lib/mysql
                  subPath: mysql
                - name: conf
                  mountPath: /etc/mysql/conf.d
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: IfNotPresent
          containers:
            - name: mysql
              image: 'mysql:5.7'
              ports:
                - name: mysql
                  containerPort: 3306
                  protocol: TCP
              env:
                - name: MYSQL_ALLOW_EMPTY_PASSWORD
                  value: '1'
              resources:
                requests:
                  cpu: 50m
                  memory: 50Mi
              volumeMounts:
                - name: nfs-pvc
                  mountPath: /var/lib/mysql
                  subPath: mysql
                - name: conf
                  mountPath: /etc/mysql/conf.d
              livenessProbe:
                exec:
                  command:
                    - mysqladmin
                    - ping
                initialDelaySeconds: 30
                timeoutSeconds: 5
                periodSeconds: 10
                successThreshold: 1
                failureThreshold: 3
              readinessProbe:
                exec:
                  command:
                    - mysql
                    - '-h'
                    - 127.0.0.1
                    - '-e'
                    - SELECT 1
                initialDelaySeconds: 5
                timeoutSeconds: 1
                periodSeconds: 2
                successThreshold: 1
                failureThreshold: 3
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: IfNotPresent
            - name: xtrabackup
              image: 'gcr.io/google-samples/xtrabackup:1.0'
              command:
                - bash
                - '-c'
                - |
                  set -ex
                  cd /var/lib/mysql
                  # Determine binlog position of cloned data, if any.
                  if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
                    # XtraBackup already generated a partial "CHANGE MASTER TO" query
                    # because we're cloning from an existing slave. (Need to remove the trailing semicolon!)
                    cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
                    # Ignore xtrabackup_binlog_info in this case (it's useless).
                    rm -f xtrabackup_slave_info xtrabackup_binlog_info
                  elif [[ -f xtrabackup_binlog_info ]]; then
                    # We're cloning directly from master. Parse binlog position.
                    [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                    rm -f xtrabackup_binlog_info xtrabackup_slave_info
                    echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                          MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
                  fi
                  # Check if we need to complete a clone by starting replication.
                  if [[ -f change_master_to.sql.in ]]; then
                    echo "Waiting for mysqld to be ready (accepting connections)"
                    until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
                    echo "Initializing replication from clone position"
                    mysql -h 127.0.0.1 \
                          -e "$(<change_master_to.sql.in), \
                              MASTER_HOST='mysql-0.mysql', \
                              MASTER_USER='root', \
                              MASTER_PASSWORD='', \
                              MASTER_CONNECT_RETRY=10; \
                              START SLAVE;" || exit 1
                    # In case of container restart, attempt this at-most-once.
                    mv change_master_to.sql.in change_master_to.sql.orig
                  fi
                  # Start a server to send backups when requested by peers.
                  exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                    "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
                  tail -f /dev/null
              ports:
                - name: xtrabackup
                  containerPort: 3307
                  protocol: TCP
              resources:
                requests:
                  cpu: 50m
                  memory: 50Mi
              volumeMounts:
                - name: nfs-pvc
                  mountPath: /var/lib/mysql
                  subPath: mysql
                - name: conf
                  mountPath: /etc/mysql/conf.d
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: IfNotPresent
          restartPolicy: Always
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          securityContext: {}
          schedulerName: default-scheduler
      volumeClaimTemplates:
        - kind: PersistentVolumeClaim
          apiVersion: v1
          metadata:
            name: nfs-pvc
          spec:
            accessModes:
              - ReadWriteMany
            resources:
              requests:
                storage: 1Mi
            storageClassName: managed-nfs-storage
            volumeMode: Filesystem
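
Finally, apply the StatefulSet and watch the pods come up in order (mysql-0, then mysql-1, then mysql-2). The write/read test below follows the official replicated-MySQL tutorial linked at the top; the test database and table names are only examples:

    kubectl apply -f mysql-statefulset.yaml
    kubectl get pods -l app=mysql --watch

    # Write through the master (mysql-0) ...
    kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
      mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS test; \
        CREATE TABLE IF NOT EXISTS test.messages (message VARCHAR(250)); \
        INSERT INTO test.messages VALUES ('hello');"

    # ... and read back through the load-balanced mysql-read service
    kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never -- \
      mysql -h mysql-read -e "SELECT * FROM test.messages"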
