Installing a Kubernetes 1.16.2 Cluster with Kubeadm  喜欢ヅ旅行  2023-06-20 09:11

Kubernetes has shipped kubeadm to simplify cluster setup since version 1.4, and kubeadm reached GA in Kubernetes 1.13, making it usable for building production clusters. This article uses kubeadm to set up a Kubernetes 1.16.2 cluster on three CentOS 7 virtual machines built with Vagrant, configured as follows (Kubernetes recommends at least 2 GB of memory and 2 CPU cores per host):

| OS      | IP            | Role   | CPU cores | Memory | Hostname |
|---------|---------------|--------|-----------|--------|----------|
| centos7 | 192.168.33.11 | master | 2         | 4096M  | master   |
| centos7 | 192.168.33.12 | worker | 2         | 4096M  | node1    |
| centos7 | 192.168.33.13 | worker | 2         | 4096M  | node2    |

Here is my Vagrantfile:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.define "one" do |one|
    one.vm.network "private_network", ip: "192.168.33.11"
    one.vm.hostname = "master"
    one.vm.provider "virtualbox" do |v|
      v.memory = 4096
      v.cpus = 2
    end
  end
  config.vm.define "two" do |two|
    two.vm.network "private_network", ip: "192.168.33.12"
    two.vm.hostname = "node1"
    two.vm.provider "virtualbox" do |v|
      v.memory = 4096
      v.cpus = 2
    end
  end
  config.vm.define "three" do |three|
    three.vm.network "private_network", ip: "192.168.33.13"
    three.vm.hostname = "node2"
    three.vm.provider "virtualbox" do |v|
      v.memory = 4096
      v.cpus = 2
    end
  end
end
```

After startup the machines look like this:

![QQ截图20191028210242.png][QQ_20191028210242.png]

## Preparation ##

Perform the following steps as root on all three machines:

**1. Install the necessary packages:**

```shell
yum install -y net-tools.x86_64 vim wget
```

**2. Configure hosts:**

```shell
vim /etc/hosts
```

Add the following entries:
```
192.168.33.11 master
192.168.33.12 node1
192.168.33.13 node2
```

**3. Turn off the firewall:**

To avoid communication problems between the Kubernetes master and the worker nodes, we can simply turn off the firewall on these locally built CentOS VMs. In production, the recommended practice is instead to open the specific ports each component needs on the firewall.

```shell
systemctl disable firewalld
systemctl stop firewalld
```

**4. Disable SELinux so containers can read the host filesystem without trouble:**

```shell
setenforce 0
```

```shell
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```

**5. Install Docker 18.09:**

The Kubernetes version installed here is 1.16.2, and the highest Docker version that release supports is 18.09. The Kubernetes/Docker version compatibility matrix is listed in the changelog: [https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md\#downloads-for-v1160][https_github.com_kubernetes_kubernetes_blob_master_CHANGELOG-1.16.md_downloads-for-v1160]:

![QQ截图20191028211129.png][QQ_20191028211129.png]

Install the prerequisites:

```shell
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
```

Add the stable Docker repository:

```shell
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
```

Install version 18.09:

```shell
yum install docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
```

Start Docker and enable it at boot:

```shell
systemctl enable docker && systemctl start docker
```

Edit the /etc/docker/daemon.json file:

```shell
vim /etc/docker/daemon.json
```

with the following contents:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
Restart Docker:

```shell
systemctl daemon-reload
systemctl restart docker
```

**6. Pass bridged IPv4 traffic to iptables chains:**

```shell
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
```

```shell
sysctl --system
```

**7. Turn off swap**

Swap is virtual memory the operating system falls back on when physical memory runs low. According to the Kubernetes documentation, swap hurts Kubernetes performance and its use is not recommended.

```shell
swapoff -a
# swapoff -a only lasts until the next reboot; comment out the swap
# entry in /etc/fstab so the setting persists
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab
```

## Installing the Master ##

With the preparation done, install the Kubernetes master on the 192.168.33.11 VM.

**1. Configure a domestic (Aliyun) Kubernetes yum repository:**

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

**2. Install the kubelet, kubeadm and kubectl tools:**

```shell
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
```

**3. Enable and start kubelet:**

```shell
systemctl enable kubelet && systemctl start kubelet
```

**4. Initialize the master with the following command:**

```shell
kubeadm init --kubernetes-version=v1.16.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.1.0.0/16 \
  --apiserver-advertise-address=192.168.33.11 \
  --image-repository registry.aliyuncs.com/google_containers
```

The flags mean the following:
* `--kubernetes-version`: the Kubernetes version to install, here 1.16.2;
* `--apiserver-advertise-address`: the IP address kube-apiserver listens on, i.e. the master's own IP;
* `--pod-network-cidr`: since flannel is chosen as the Pod network plugin later, the Pod network range must be set to 10.244.0.0/16;
* `--service-cidr`: the network range for Services;
* `--image-repository`: the default registry k8s.gcr.io is unreachable from China without a proxy, so it is switched to the Aliyun mirror registry.aliyuncs.com/google\_containers.

Initialization pulls the required images, which can be slow, so be patient. If you would rather pull the images before starting, `kubeadm config images list` prints the images kubeadm needs.

On success you will see output similar to:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.33.11:6443 --token yf7sct.o63ceq25gxdu71cd \
    --discovery-token-ca-cert-hash sha256:bcd15ddd7432d393d3831c75ac7673f582d4e9895ff2c579c3f545d2a5d3026e
```

As the output says, to let a non-root user run kubectl, execute:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

If you are root, it is enough to run:

```shell
export KUBECONFIG=/etc/kubernetes/admin.conf
```

The last part of the output is the command worker nodes use to join the cluster; it will be needed shortly:

```shell
kubeadm join 192.168.33.11:6443 --token yf7sct.o63ceq25gxdu71cd \
    --discovery-token-ca-cert-hash sha256:bcd15ddd7432d393d3831c75ac7673f582d4e9895ff2c579c3f545d2a5d3026e
```
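As an aside, the same settings can be captured in a kubeadm configuration file and passed via `kubeadm init --config <file>`. A sketch, assuming the v1beta2 config API that ships with kubeadm 1.16 (the file name is arbitrary):

```yaml
# Hypothetical kubeadm-config.yml, equivalent to the kubeadm init flags above
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.33.11   # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2                              # --kubernetes-version
imageRepository: registry.aliyuncs.com/google_containers # --image-repository
networking:
  podSubnet: 10.244.0.0/16     # --pod-network-cidr
  serviceSubnet: 10.1.0.0/16   # --service-cidr
```

A config file is easier to keep in version control and to reuse across rebuilds than a long command line.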
## Installing the Node Workers and Joining the Cluster ##

Now switch to the 192.168.33.12 and 192.168.33.13 VMs.

Just as for the master, first install the kubeadm tooling, then run the following command to join each node to the cluster (note that the token is only valid for 24 hours; if it has expired, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`):

```shell
kubeadm join 192.168.33.11:6443 --token yf7sct.o63ceq25gxdu71cd \
    --discovery-token-ca-cert-hash sha256:bcd15ddd7432d393d3831c75ac7673f582d4e9895ff2c579c3f545d2a5d3026e
```

Output like the following indicates the node joined successfully:

![QQ截图20191028215138.png][QQ_20191028215138.png]

## Installing the Network Plugin ##

Running `kubectl get nodes` on the master shows the master in the NotReady state, because no CNI network plugin has been installed yet:

![QQ截图20191028215653.png][QQ_20191028215653.png]

There are many CNI plugins to choose from; see [https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/\#pod-network][https_kubernetes.io_docs_setup_independent_create-cluster-kubeadm_pod-network] for the options. Here I chose flannel:

```shell
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```

Edit kube-flannel.yml:

```shell
vim kube-flannel.yml
```

The change is the following:

```yaml
......
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1   # added line
......
```
In Vagrant's multi-machine setup each VM has multiple network interfaces: eth0 is a NAT interface used to reach the public internet, while eth1 carries the host's real private IP. Deploying the flannel plugin as-is in this situation leaves CoreDNS unable to work, so the `--iface=eth1` argument above is added to force flannel to use eth1.

Install flannel:

```shell
kubectl create -f kube-flannel.yml
```

Output like the following indicates the installation succeeded:

![QQ截图20191028215831.png][QQ_20191028215831.png]

After a short wait, check the node status again:

![QQ截图20191028215910.png][QQ_20191028215910.png]

All nodes are now in the Ready state.

Run `kubectl get pods --all-namespaces` to verify that the cluster's system Pods were all created and are running normally:

![QQ截图20191028220122.png][QQ_20191028220122.png]

At this point the Kubernetes 1.16.2 cluster installed via kubeadm is up. If the installation fails, `kubeadm reset` restores the host to its original state, after which `kubeadm init` can be run again.

## A Quick Test ##

To quickly verify that the cluster built above actually works, create an Nginx Deployment:

```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
```

Check that everything is in order with `kubectl get pod,svc`:

![QQ截图20191028221659.png][QQ_20191028221659.png]

Use `kubectl get pods,svc -o wide` to see which node the Pod landed on:

![QQ截图20191028221805.png][QQ_20191028221805.png]

It is on node2 (IP 192.168.33.13), with NodePort 30935; open that address in a browser:

![QQ截图20191028221923.png][QQ_20191028221923.png]

Check the Pod with `kubectl get pods`:

![QQ截图20191029202111.png][QQ_20191029202111.png]

Now delete the Pod with `kubectl delete` and see what happens:

![QQ截图20191029202330.png][QQ_20191029202330.png]

The original Pod goes into the Terminating state while a new Pod is already in ContainerCreating: since `replicas` defaults to 1, the cluster always keeps exactly one Nginx instance running.

To remove Nginx entirely, delete its Deployment. List the current Deployments with `kubectl get deployments`:

![QQ截图20191029202412.png][QQ_20191029202412.png]

Then run `kubectl delete deployment nginx`:

![QQ截图20191029202439.png][QQ_20191029202439.png]

In practice, applications are usually created from yml or json manifests. Let's use yml files to create a 3-replica Nginx cluster:

```shell
vim nginx-rc.yml
```

with the following contents:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
```

```shell
vim nginx-service.yml
```

with the following contents:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
```

Then start the Nginx cluster with these two commands:

```shell
kubectl create -f nginx-rc.yml
kubectl create -f nginx-service.yml
```

Check the Pods with `kubectl get pods`:

![QQ截图20191029202540.png][QQ_20191029202540.png]

Check the Service with `kubectl get services`:

![QQ截图20191029202644.png][QQ_20191029202644.png]

Inspect the Nginx Service in detail with `kubectl describe svc nginx-service`:

![QQ截图20191029202719.png][QQ_20191029202719.png]

Use `kubectl get pods,svc -o wide` to see which nodes the Nginx Pods are on:

![QQ截图20191029202746.png][QQ_20191029202746.png]

There are Nginx Pods on both node1 and node2; open [http://192.168.33.12:32631/][http_192.168.33.12_32631] or [http://192.168.33.13:32631/][http_192.168.33.13_32631] in a browser:

![QQ截图20191029202923.png][QQ_20191029202923.png]

To clean up, run these two commands:

```shell
kubectl delete -f nginx-service.yml
kubectl delete -f nginx-rc.yml
```

[QQ_20191028210242.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nLzEyMzQxMjM0LnBuZw?x-oss-process=image/format,png
[https_github.com_kubernetes_kubernetes_blob_master_CHANGELOG-1.16.md_downloads-for-v1160]: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#downloads-for-v1160
[QQ_20191028211129.png]:
https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMTExMjkucG5n?x-oss-process=image/format,png [QQ_20191028215138.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMTUxMzgucG5n?x-oss-process=image/format,png [QQ_20191028215653.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMTU2NTMucG5n?x-oss-process=image/format,png [https_kubernetes.io_docs_setup_independent_create-cluster-kubeadm_pod-network]: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network [QQ_20191028215831.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMTU4MzEucG5n?x-oss-process=image/format,png [QQ_20191028215910.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMTU5MTAucG5n?x-oss-process=image/format,png [QQ_20191028220122.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMjAxMjIucG5n?x-oss-process=image/format,png [QQ_20191028221659.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMjE2NTkucG5n?x-oss-process=image/format,png [QQ_20191028221805.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMjE4MDUucG5n?x-oss-process=image/format,png [QQ_20191028221923.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjgyMjE5MjMucG5n?x-oss-process=image/format,png [QQ_20191029202111.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDIxMTEucG5n?x-oss-process=image/format,png [QQ_20191029202330.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDIzMzAucG5n?x-oss-process=image/format,png [QQ_20191029202412.png]: 
https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDI0MTIucG5n?x-oss-process=image/format,png [QQ_20191029202439.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDI0MzkucG5n?x-oss-process=image/format,png [QQ_20191029202540.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDI1NDAucG5n?x-oss-process=image/format,png [QQ_20191029202644.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDI2NDQucG5n?x-oss-process=image/format,png [QQ_20191029202719.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDI3MTkucG5n?x-oss-process=image/format,png [QQ_20191029202746.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDI3NDYucG5n?x-oss-process=image/format,png [http_192.168.33.12_32631]: http://192.168.33.12:32631/ [http_192.168.33.13_32631]: http://192.168.33.13:32631/ [QQ_20191029202923.png]: https://imgconvert.csdnimg.cn/aHR0cHM6Ly9tcmJpcmQuY2MvaW1nL1FRJUU2JTg4JUFBJUU1JTlCJUJFMjAxOTEwMjkyMDI5MjMucG5n?x-oss-process=image/format,png