Kubernetes Deployment: Deploying a Kubernetes 1.17.4 Single-Master Cluster on CentOS 7.6

亦凉 2023-01-15 03:41

Contents

  • 1. Architecture
  • 2. Deployment environment
  • 3. Environment initialization
    • 3.1 Kernel upgrade
    • 3.2 Base system configuration
  • 4. Docker deployment
  • 5. Kubernetes deployment
    • 5.1 Configure the kubernetes yum repository
    • 5.2 Install kubelet, kubeadm, and kubectl
    • 5.3 Download the k8s images
    • 5.4 Initialize the master node with kubeadm
    • 5.5 Configure kubectl
    • 5.6 Deploy the flannel network
    • 5.7 Start the kubelet service on the master node
    • 5.8 Deploy the worker nodes
  • Summary: if this helped you, please like and follow

1. Architecture

[Figure: cluster architecture diagram]


2. Deployment Environment

| Hostname | OS version | Kernel version | IP address | Role |
| --- | --- | --- | --- | --- |
| k8s-master-211 | centos7.6.1810 | 5.11.16 | 192.168.1.211 | master node |
| k8s-worker-213 | centos7.6.1810 | 5.11.16 | 192.168.1.213 | worker node |
| k8s-worker-214 | centos7.6.1810 | 5.11.16 | 192.168.1.214 | worker node |

Note: CentOS 7.5 or 7.6 is recommended; on CentOS 7.2, 7.3, and 7.4 there is a real chance that kubelet fails to start.


3. Environment Initialization

Note: run all of the following on both the master and the worker nodes.

3.1 Kernel upgrade

Note: CentOS 7.6 ships with kernel 3.10.0 by default; kernel 5.11.16 is recommended here.

Kernel package choices:

  • kernel-lt (lt = long-term): long-term support branch
  • kernel-ml (ml = mainline): mainline branch

The upgrade steps are as follows:

```shell
# 1. Download the kernel
# CSDN mirror: https://download.csdn.net/download/m0_37814112/17047556
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-5.11.16-1.el7.elrepo.x86_64.rpm
# 2. Install the kernel
yum localinstall kernel-ml-5.11.16-1.el7.elrepo.x86_64.rpm -y
# 3. Check the current default kernel
grub2-editenv list
# 4. List all kernel menu entries in grub2
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# 5. Make the new kernel the default boot entry
grub2-set-default 'CentOS Linux (5.11.16-1.el7.elrepo.x86_64) 7 (Core)'
# 6. Check the default kernel again
grub2-editenv list
# 7. Reboot the server (required)
reboot
```
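After the reboot it is worth confirming that the machine actually booted the new kernel. The following is a small sketch, not part of the original procedure; `kernel_at_least` is a hypothetical helper that compares the running kernel against a required version using `sort -V`:

```shell
# Hypothetical helper: succeeds if the current kernel version (second
# argument, defaulting to `uname -r`) is at least the required version.
kernel_at_least() {
  required="$1"; current="${2:-$(uname -r)}"
  [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]
}

if kernel_at_least 5.11.16; then
  echo "kernel OK: $(uname -r)"
else
  echo "kernel too old: $(uname -r); re-check grub2-set-default and reboot"
fi
```

If the check fails, the most common cause is that `grub2-set-default` was given a menu entry name that does not exactly match the output of step 4.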

3.2 Base system configuration

```shell
# 1. Set the hostname (run the matching line on each node)
hostnamectl set-hostname k8s-master-211   # master
hostnamectl set-hostname k8s-worker-213   # worker1
hostnamectl set-hostname k8s-worker-214   # worker2

# 2. Kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
sysctl --system

# 3. Load the br_netfilter module persistently
# The cluster network uses flannel, which requires bridge-nf-call-iptables=1,
# and that sysctl in turn requires the br_netfilter module to be loaded.
cat > /etc/rc.sysinit <<'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x $file ] && $file
done
EOF
cat > /etc/sysconfig/modules/br_netfilter.modules <<'EOF'
modprobe br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/br_netfilter.modules

# 4. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# 5. Permanently disable SELinux
# (/etc/sysconfig/selinux is a symlink to /etc/selinux/config, so editing one file is enough)
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# 6. Disable swap
swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab

# 7. Install ipvsadm and load the IPVS modules
yum install ipvsadm -y
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs

# 8. Raise the file-descriptor and process limits (adjust to your environment)
cat >> /etc/security/limits.conf <<'EOF'
root soft nofile 65535
root hard nofile 65535
root soft nproc 65535
root hard nproc 65535
root soft memlock unlimited
root hard memlock unlimited
* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
* soft memlock unlimited
* hard memlock unlimited
EOF

# 9. Reboot the server (required)
reboot
```
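Before rebooting, you can double-check that the sysctl fragment from step 2 sets the keys kubeadm's preflight checks care about. This is a sketch with a hypothetical helper (`check_sysctl_file`), demonstrated on a temporary copy so it runs anywhere; on a real node you would point it at `/etc/sysctl.d/k8s.conf`:

```shell
# Hypothetical helper: verify a sysctl fragment sets the required keys to 1.
check_sysctl_file() {
  file="$1"
  for key in net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
    grep -Eq "^${key}[[:space:]]*=[[:space:]]*1" "$file" || { echo "missing: $key"; return 1; }
  done
  echo "sysctl fragment OK"
}

# Demo file with the same contents as /etc/sysctl.d/k8s.conf
cat > /tmp/k8s-sysctl-demo.conf <<'EOF'
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
check_sysctl_file /tmp/k8s-sysctl-demo.conf   # on a real node: /etc/sysctl.d/k8s.conf
```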

As shown below:
[screenshot]


4. Docker Deployment

Note: run the following on both the master and the worker nodes.

```shell
# 1. Install prerequisite packages
yum install yum-utils device-mapper-persistent-data lvm2 -y
# 2. Add a Docker repository (either works; the Aliyun mirror is faster inside China)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# 3. List the available docker versions
yum list docker-ce --showduplicates | sort -r
# 4. Install docker 18.09.9
yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
# 5. Start Docker
systemctl start docker && systemctl enable docker
# 6. Set the cgroup driver to systemd, move the docker data directory
#    (default /var/lib/docker; pick a directory on the largest disk),
#    and configure the Aliyun registry mirror
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "graph": "/data/docker"
}
EOF
systemctl daemon-reload && systemctl restart docker
```
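A JSON syntax error in daemon.json will keep the docker daemon from restarting, so it is worth validating the file before the restart. The helper below (`check_daemon_json`) is a hypothetical sketch, not an official tool; it assumes python3 is available for JSON parsing and is demonstrated on a temporary file:

```shell
# Hypothetical helper: validate daemon.json and confirm the systemd cgroup driver.
check_daemon_json() {
  file="$1"
  python3 -m json.tool "$file" >/dev/null 2>&1 || { echo "invalid JSON: $file"; return 1; }
  grep -q 'native.cgroupdriver=systemd' "$file" || { echo "cgroup driver is not systemd"; return 1; }
  echo "daemon.json OK"
}

# Demo file with the same contents as /etc/docker/daemon.json
cat > /tmp/daemon-demo.json <<'EOF'
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "graph": "/data/docker"
}
EOF
check_daemon_json /tmp/daemon-demo.json   # on a real node: /etc/docker/daemon.json
```

The systemd cgroup driver matters because kubelet and docker must agree on it, or pods will fail to start.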

As shown below:
[screenshot]


5. Kubernetes Deployment

5.1 Configure the kubernetes yum repository

Note: run the following on both the master and the worker nodes.

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum -y makecache
```

5.2 Install kubelet, kubeadm, and kubectl

Note: run the following on both the master and the worker nodes.

```shell
# 1. List the available versions
yum list kubelet --showduplicates | sort -r
# 2. Install version 1.17.4
yum install kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4 -y
```

5.3 Download the k8s images

Note: run the following on both the master and the worker nodes.

Create get_image.sh with the following content, then run it with `bash get_image.sh`:

```shell
#!/bin/bash
# Pull the k8s images from the Aliyun mirror and retag them as k8s.gcr.io
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.17.4
images=($(kubeadm config images list --kubernetes-version=$version | awk -F '/' '{print $2}'))
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done
```
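If you want to review exactly what the script will do before running it, a dry-run variant can print the docker commands instead of executing them. `mirror_cmds` is a hypothetical helper and the image names below are only examples; on a real node you would feed it the output of `kubeadm config images list --kubernetes-version=v1.17.4 | awk -F '/' '{print $2}'`:

```shell
# Hypothetical dry-run: print the pull/tag/rmi commands for each image name.
url=registry.cn-hangzhou.aliyuncs.com/google_containers
mirror_cmds() {
  for imagename in "$@"; do
    echo "docker pull $url/$imagename"
    echo "docker tag $url/$imagename k8s.gcr.io/$imagename"
    echo "docker rmi -f $url/$imagename"
  done
}

mirror_cmds kube-apiserver:v1.17.4 pause:3.1   # example image names only
```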

As shown below:
[screenshot]


5.4 Initialize the master node with kubeadm

Note: run the following on the master node only.

```shell
# 1. Generate a default init configuration
kubeadm config print init-defaults > kubeadm-config.yaml
# 2. Edit kubeadm-config.yaml to match your environment
vim kubeadm-config.yaml
```

The edited kubeadm-config.yaml (kubeadm 1.17 uses the kubeadm.k8s.io/v1beta2 API version):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.96.0.0/16"    # service CIDR
  podSubnet: "10.48.0.0/16"        # pod CIDR; must match the flannel config
kubernetesVersion: "v1.17.4"
controlPlaneEndpoint: "192.168.1.211:6443"    # apiserver IP and port
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
    service-node-port-range: "30000-36000"    # NodePort range
imageRepository: "k8s.gcr.io"    # matches the tags created by get_image.sh
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```

```shell
# 3. Initialize the master
kubeadm init --config=kubeadm-config.yaml --upload-certs --ignore-preflight-errors=all
```

As shown below:
[screenshot]


5.5 Configure kubectl

Note: run the following on the master node only.

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

5.6 Deploy the flannel network

Note: run the following on the master node only.

```shell
# 1. Download the official manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# 2. Edit kube-flannel.yml for your environment (the only required change is the "Network" field)
vim kube-flannel.yml
# 3. Deploy the flannel network
kubectl create -f kube-flannel.yml
```

The full kube-flannel.yml:

```yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.48.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
```

Note: the only field that needs to change in the official manifest is "Network": "10.48.0.0/16", and this CIDR must match the podSubnet defined in kubeadm-config.yaml.
As shown below:
[screenshot]


5.7 Start the kubelet service on the master node

Note: run the following on the master node only.

```shell
systemctl restart kubelet && systemctl enable kubelet
kubectl get nodes
# NAME             STATUS   ROLES    AGE   VERSION
# k8s-master-211   Ready    master   17m   v1.17.4

# Optional: remove the master taint so that workloads can also be scheduled
# on the master node; skip this if you do not want that.
kubectl taint nodes --all node-role.kubernetes.io/master-
```

5.8 Deploy the worker nodes

Note: run the following on the worker nodes only.
[screenshot]
As shown in the screenshot above (the tail of the kubeadm init output), run:

```shell
kubeadm join 192.168.1.211:6443 --token cvczz1.dyuc9k0mm15ez3g0 \
    --discovery-token-ca-cert-hash sha256:a91c73bc4d83ffcca97fca7f83e0f60d41eeb8114d73e2c43e11938be0726ba0
```

If you have lost the token and the CA public key hash, you can recover them as follows:

```shell
# List existing tokens (they expire after 24 hours; generate a new one with `kubeadm token create`)
kubeadm token list
# Compute the CA certificate public key hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# Then join with the recovered values
kubeadm join 192.168.1.211:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
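The hash pipeline above can be wrapped in a small function. The sketch below uses a hypothetical helper name (`ca_cert_hash`) and demonstrates it on a throwaway self-signed certificate so it runs anywhere; on the master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Hypothetical helper: compute the discovery-token-ca-cert-hash for a CA cert.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
}

# Throwaway cert for demonstration only; use /etc/kubernetes/pki/ca.crt on the master.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo" 2>/dev/null
echo "sha256:$(ca_cert_hash /tmp/demo-ca.crt)"
```

Alternatively, `kubeadm token create --print-join-command` on the master prints a ready-to-use join command with a fresh token and the correct hash in one step.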

As shown below, the single-master, multi-worker kubernetes cluster is now deployed:

```shell
[root@k8s-master-211 ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE    VERSION
k8s-master-211   Ready    master   126m   v1.17.4
k8s-worker-213   Ready    <none>   105m   v1.17.4
k8s-worker-214   Ready    <none>   105m   v1.17.4
```


Next chapter: Kubernetes Deployment: Deploying a Kubernetes 1.17.4 High-Availability Cluster on CentOS 7.6 (Option 1)


Summary: if this helped you, please like and follow

For more details, see: 企业级K8s集群运维实战 (Enterprise K8s Cluster Operations in Practice)
