Ceph Deployment Tutorial on CentOS 7.2


System

    [root@ceph-1 ~]# cat /etc/redhat-release
    CentOS Linux release 7.2.1511 (Core)

Hosts

    hostname   ip            role
    ceph-1     10.39.47.63   deploy, mon1, osd1
    ceph-2     10.39.47.64   mon2, osd2
    ceph-3     10.39.47.65   mon3, osd3

Host disks

    [root@ceph-1 ~]# lsblk
    NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    vda    253:0    0  20G  0 disk
    └─vda1 253:1    0  20G  0 part /
    vdb    253:16   0   4G  0 disk [SWAP]
    vdc    253:32   0  80G  0 disk

Install wget, ntp, and vim

    yum -y install wget ntp vim

Add host entries

    [root@ceph-1 ~]# cat /etc/hosts
    ...
    10.39.47.63 ceph-1
    10.39.47.64 ceph-2
    10.39.47.65 ceph-3
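
A quick sanity check (not part of the original steps) is to confirm that the short hostnames resolve and that every node answers from ceph-1:

    # Optional: verify name resolution and basic reachability.
    for h in ceph-1 ceph-2 ceph-3; do
        ping -c 1 "$h" >/dev/null && echo "$h reachable" || echo "$h NOT reachable"
    done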

Clean up the environment if a previous installation failed

    ps aux | grep ceph | awk '{print $2}' | xargs kill -9
    ps -ef | grep ceph
    # Make sure all ceph processes are gone at this point; if not, run the kill again.
    umount /var/lib/ceph/osd/*
    rm -rf /var/lib/ceph/osd/*
    rm -rf /var/lib/ceph/mon/*
    rm -rf /var/lib/ceph/mds/*
    rm -rf /var/lib/ceph/bootstrap-mds/*
    rm -rf /var/lib/ceph/bootstrap-osd/*
    rm -rf /var/lib/ceph/bootstrap-rgw/*
    rm -rf /var/lib/ceph/tmp/*
    rm -rf /etc/ceph/*
    rm -rf /var/run/ceph/*
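
The cleanup has to run on every node that previously ran Ceph. Below is a minimal sketch for driving the file cleanup from one host; it assumes root SSH access to ceph-2 and ceph-3 already works (otherwise log in to each node and run the commands above by hand):

    for h in ceph-2 ceph-3; do
        ssh "$h" 'umount /var/lib/ceph/osd/* 2>/dev/null; rm -rf /var/lib/ceph/osd/* /var/lib/ceph/mon/* /var/lib/ceph/mds/* /var/lib/ceph/bootstrap-*/* /var/lib/ceph/tmp/* /etc/ceph/* /var/run/ceph/*'
    done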

Run all of the following commands on every host.

Switch to the Aliyun yum mirrors

    yum clean all
    curl http://mirrors.aliyun.com/repo/Centos-7.repo >/etc/yum.repos.d/CentOS-Base.repo
    curl http://mirrors.aliyun.com/repo/epel-7.repo >/etc/yum.repos.d/epel.repo
    sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
    sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
    yum makecache
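
To confirm that the new repositories are active before continuing, a simple check (not in the original) is:

    yum repolist enabled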

Add the Ceph repository

    vim /etc/yum.repos.d/ceph.repo
    ## Contents:
    [ceph]
    name=ceph
    baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
    gpgcheck=0
    [ceph-noarch]
    name=cephnoarch
    baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
    gpgcheck=0
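
If you would rather not edit the file interactively with vim, the same file can be generated non-interactively; this writes exactly the content shown above:

    # Write /etc/yum.repos.d/ceph.repo without opening an editor.
    printf '%s\n' \
        '[ceph]' \
        'name=ceph' \
        'baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/' \
        'gpgcheck=0' \
        '[ceph-noarch]' \
        'name=cephnoarch' \
        'baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/' \
        'gpgcheck=0' > /etc/yum.repos.d/ceph.repo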

Install the Ceph packages

    yum makecache
    yum install ceph ceph-radosgw rdate -y

Disable SELinux and firewalld

    sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    setenforce 0
    systemctl stop firewalld
    systemctl disable firewalld
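
A quick way to confirm both changes took effect (not in the original):

    getenforce                     # should print Permissive now, Disabled after a reboot
    systemctl is-active firewalld  # should print inactive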

Synchronize time on all nodes

    yum -y install rdate
    rdate -s time-a.nist.gov
    echo rdate -s time-a.nist.gov >> /etc/rc.d/rc.local
    chmod +x /etc/rc.d/rc.local
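
The ntp package was installed in an earlier step but is never started in the original procedure. As an optional alternative (or supplement) to the one-shot rdate sync, the clocks can be kept in sync continuously:

    # Optional: run ntpd with the default CentOS pool servers for continuous time sync.
    systemctl enable ntpd
    systemctl start ntpd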

Start the deployment

Install ceph-deploy on the deploy node (ceph-1). "Deploy node" below always refers to ceph-1.

    [root@ceph-1 ~]# yum -y install ceph-deploy
    [root@ceph-1 ~]# ceph-deploy --version
    1.5.39
    [root@ceph-1 ~]# ceph -v
    ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)

Set up passwordless SSH

    [root@ceph-1 cluster]# ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Created directory '/root/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    54:f8:9b:25:56:3b:b1:ce:fc:6d:c5:61:b1:55:79:49 root@ceph-1
    The key's randomart image is:
    (randomart omitted)
    [root@ceph-1 cluster]# ssh-copy-id 10.39.47.63
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    Warning: Permanently added '10.39.47.63' (ECDSA) to the list of known hosts.
    root@10.39.47.63's password:
    Number of key(s) added: 1
    Now try logging into the machine, with: "ssh '10.39.47.63'"
    and check to make sure that only the key(s) you wanted were added.
    [root@ceph-1 cluster]# ssh-copy-id 10.39.47.64
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    Warning: Permanently added '10.39.47.64' (ECDSA) to the list of known hosts.
    root@10.39.47.64's password:
    Number of key(s) added: 1
    Now try logging into the machine, with: "ssh '10.39.47.64'"
    and check to make sure that only the key(s) you wanted were added.
    [root@ceph-1 cluster]# ssh-copy-id 10.39.47.65
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    Warning: Permanently added '10.39.47.65' (ECDSA) to the list of known hosts.
    root@10.39.47.65's password:
    Number of key(s) added: 1
    Now try logging into the machine, with: "ssh '10.39.47.65'"
    and check to make sure that only the key(s) you wanted were added.

Verify

    [root@ceph-1 cluster]# ssh 10.39.47.65
    Warning: Permanently added '10.39.47.65' (ECDSA) to the list of known hosts.
    Last login: Fri Nov 2 10:06:39 2018 from 10.4.95.63
    [root@ceph-3 ~]#

Create a deployment directory on the deploy node and start the deployment

    [root@ceph-1 ~]# mkdir cluster
    [root@ceph-1 ~]# cd cluster/
    [root@ceph-1 cluster]# ceph-deploy new ceph-1 ceph-2 ceph-3

This generates the following files:

    [root@ceph-1 cluster]# ls -l
    total 16
    -rw-r--r-- 1 root root  235 Nov 2 10:40 ceph.conf
    -rw-r--r-- 1 root root 4879 Nov 2 10:40 ceph-deploy-ceph.log
    -rw------- 1 root root   73 Nov 2 10:40 ceph.mon.keyring

Add public_network to ceph.conf according to your own network, and slightly increase the allowed clock drift between monitors (default 0.05 s, raised here to 2 s):

    [root@ceph-1 cluster]# echo public_network=10.39.47.0/24 >> ceph.conf
    [root@ceph-1 cluster]# echo mon_clock_drift_allowed = 2 >> ceph.conf
    [root@ceph-1 cluster]# cat ceph.conf
    [global]
    fsid = 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
    mon_initial_members = ceph-1, ceph-2, ceph-3
    mon_host = 10.39.47.63,10.39.47.64,10.39.47.65
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    public_network=10.39.47.0/24
    mon_clock_drift_allowed = 2

Deploy the monitors

    [root@ceph-1 cluster]# ceph-deploy mon create-initial
    # After it succeeds, the directory contains:
    [root@ceph-1 cluster]# ls -l
    total 56
    -rw------- 1 root root   113 Nov 2 10:45 ceph.bootstrap-mds.keyring
    -rw------- 1 root root    71 Nov 2 10:45 ceph.bootstrap-mgr.keyring
    -rw------- 1 root root   113 Nov 2 10:45 ceph.bootstrap-osd.keyring
    -rw------- 1 root root   113 Nov 2 10:45 ceph.bootstrap-rgw.keyring
    -rw------- 1 root root   129 Nov 2 10:45 ceph.client.admin.keyring
    -rw-r--r-- 1 root root   292 Nov 2 10:43 ceph.conf
    -rw-r--r-- 1 root root 27974 Nov 2 10:45 ceph-deploy-ceph.log
    -rw------- 1 root root    73 Nov 2 10:40 ceph.mon.keyring
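
If ceph commands on some node later complain about missing keys (for example on client or OSD-only nodes), the usual ceph-deploy step (not shown in the original) is to push the config and admin keyring to the relevant nodes:

    ceph-deploy admin ceph-1 ceph-2 ceph-3
    # As in the upstream quick start, make the keyring readable if non-root clients need it.
    chmod +r /etc/ceph/ceph.client.admin.keyring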

Check the cluster status

    [root@ceph-1 cluster]# ceph -s
        cluster 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
         health HEALTH_ERR
                no osds
         monmap e1: 3 mons at {ceph-1=10.39.47.63:6789/0,ceph-2=10.39.47.64:6789/0,ceph-3=10.39.47.65:6789/0}
                election epoch 6, quorum 0,1,2 ceph-1,ceph-2,ceph-3
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating

Deploy the OSDs

    ceph-deploy --overwrite-conf osd prepare ceph-1:/dev/vdc ceph-2:/dev/vdc ceph-3:/dev/vdc --zap-disk
    ceph-deploy --overwrite-conf osd activate ceph-1:/dev/vdc1 ceph-2:/dev/vdc1 ceph-3:/dev/vdc1
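
If prepare or activate fails, a useful first check (not part of the original) is to list the disks ceph-deploy can see on each node and confirm /dev/vdc is present and unused:

    ceph-deploy disk list ceph-1 ceph-2 ceph-3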

Check the cluster status again once the OSDs are deployed

    [root@ceph-1 cluster]# ceph -s
        cluster 4a3e86f0-1511-4ad7-9f69-b435ae16dc28
         health HEALTH_OK
         monmap e1: 3 mons at {ceph-1=10.39.47.63:6789/0,ceph-2=10.39.47.64:6789/0,ceph-3=10.39.47.65:6789/0}
                election epoch 6, quorum 0,1,2 ceph-1,ceph-2,ceph-3
         osdmap e14: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v28: 64 pgs, 1 pools, 0 bytes data, 0 objects
                322 MB used, 224 GB / 224 GB avail
                      64 active+clean

View the OSD tree

    [root@ceph-1 cluster]# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 0.21959 root default
    -2 0.07320     host ceph-1
     0 0.07320         osd.0        up  1.00000          1.00000
    -3 0.07320     host ceph-2
     1 0.07320         osd.1        up  1.00000          1.00000
    -4 0.07320     host ceph-3
     2 0.07320         osd.2        up  1.00000          1.00000

There is more than one way to list pools.
The rbd pool shown below is created by default.

    [root@ceph-1 cluster]# rados lspools
    rbd
    [root@ceph-1 cluster]# ceph osd lspools
    0 rbd,

Create a pool

    [root@ceph-1 cluster]# ceph osd pool create testpool 64
    pool 'testpool' created
    [root@ceph-1 cluster]# ceph osd lspools
    0 rbd,1 testpool,
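
A quick smoke test of the new pool (not in the original): write one object, list the pool contents, and check usage. The object name hosts.obj is arbitrary.

    rados -p testpool put hosts.obj /etc/hosts
    rados -p testpool ls
    rados df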

