I. Overview
The official Ceph documentation describes several deployment methods, and the one currently recommended is Cephadm. This article instead follows the official manual deployment procedure.
II. Environment
OS: Ubuntu 20.04
IP: 192.168.1.211
IP: 192.168.1.212
IP: 192.168.1.213
Deployment architecture
The official example runs one mon, one mgr, and two osds. We first deploy according to that layout, then scale the osd, mon, and mgr out to three each.
III. Deployment process
1. Host initialization
1.1 Configure hostnames
1.1.1 Set the hostname on each host
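On both Ubuntu and CentOS this can be done with hostnamectl; for example, on the first node (repeat with the matching name on each host):
sudo hostnamectl set-hostname ceph-1-211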
1.1.2 Write the following entries into /etc/hosts:
192.168.1.211 ceph-1-211
192.168.1.212 ceph-1-212
192.168.1.213 ceph-1-213
1.2 Configure time synchronization
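No command is specified here; a minimal option on Ubuntu 20.04 is the built-in systemd-timesyncd (chrony or ntpd work equally well):
# enable the built-in NTP client and confirm it is synchronized
sudo timedatectl set-ntp true
timedatectl status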
2. Install the Ceph packages
2.1 Ubuntu
sudo apt-get update && sudo apt-get install ceph ceph-mds
2.2 CentOS
1. Install yum-plugin-priorities:
sudo yum install yum-plugin-priorities
2. Ensure /etc/yum/pluginconf.d/priorities.conf exists.
3. Ensure priorities.conf enables the plugin:
[main]
enabled = 1
4. Ensure your YUM ceph.repo entry includes priority=2. See Get Packages for details:
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/SRPMS
enabled=0
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
5. Install prerequisite packages:
sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
Once you have added either release or development packages, or added a ceph.repo file to /etc/yum.repos.d, you can install Ceph packages:
sudo yum install ceph
3. Deploy mon
The cluster name used here is ceph. A custom cluster name also works; simply replace ceph with your chosen name. The ceph in the ceph.conf paths below is the cluster name.
3.1 Generate a UUID
uuid=`uuidgen`;echo "[global]" > /etc/ceph/ceph.conf;echo "fsid = $uuid" >>/etc/ceph/ceph.conf
3.2 Configure the hostname
host_name=`hostname`;echo "mon initial members = $host_name" >> /etc/ceph/ceph.conf
3.3 Configure the IP address
The command below for extracting the IP may fail on some systems; test it before writing the configuration.
ip_addr=`ifconfig |grep inet|grep netmask|grep -v '127.0.0.1'|awk '{print $2}'`;echo "mon host = $ip_addr" >> /etc/ceph/ceph.conf
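If ifconfig is unavailable (it is not installed by default on newer systems), a variant using the ip tool achieves the same, assuming the first global IPv4 address is the right one:
ip_addr=$(ip -4 addr show scope global | awk '/inet/ {print $2}' | cut -d/ -f1 | head -n1); echo "mon host = $ip_addr" >> /etc/ceph/ceph.conf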
3.4 Create the monitor keyring and generate a monitor secret key
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
3.5 Generate an administrator keyring, generate a client.admin user, and add the user to the keyring:
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
3.6 Create the bootstrap-osd keyring
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
3.7 Import the generated keys into the monitor keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
3.8 Change the owner and group of the keyring
sudo chown ceph:ceph /tmp/ceph.mon.keyring
3.9 Generate a monitor map using the hostname, host IP address, and FSID, and save it as /tmp/monmap. The fsid passed here must be the one written to ceph.conf in step 3.1:
monmaptool --create --add ceph-1-211 192.168.1.211 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
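To guarantee the fsid matches, a variant can read it back from ceph.conf instead of pasting it by hand:
# extract the fsid written in step 3.1 and reuse it for the monmap
fsid=$(awk '/^fsid/ {print $3}' /etc/ceph/ceph.conf)
monmaptool --create --add ceph-1-211 192.168.1.211 --fsid $fsid /tmp/monmap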
3.10 Create the default data directory
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph-1-211
3.11 Populate the monitor daemon(s) with the monitor map and keyring:
sudo -u ceph ceph-mon --mkfs -i ceph-1-211 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
3.12 Start the monitor
sudo systemctl start ceph-mon@ceph-1-211
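To have the monitor come back after a reboot, also enable the unit:
sudo systemctl enable ceph-mon@ceph-1-211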
3.13 Allow Ceph through the firewall
sudo firewall-cmd --zone=public --add-service=ceph-mon
sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
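firewall-cmd belongs to firewalld (CentOS). On Ubuntu with ufw, opening the monitor ports directly is one equivalent option:
# 3300 is the msgr2 port, 6789 the legacy v1 port
sudo ufw allow 3300/tcp
sudo ufw allow 6789/tcp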
3.14 Check the cluster status
sudo ceph -s
  cluster:
    id:     a7f64266-0894-4f1e-a635-d0aeaca0e993
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum mon-node1
    mgr: mon-node1(active)
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:
(Sample output from the official documentation; in this cluster the quorum member will be ceph-1-211, and the mgr line appears only after section 4.)
4. Deploy mgr
4.1 Create the key
ceph auth get-or-create mgr.$name mon 'allow profile mgr' osd 'allow *' mds 'allow *'
Place that key in a file named keyring inside the mgr data path; for a cluster "ceph" and an mgr $name of "foo", the path is /var/lib/ceph/mgr/ceph-foo, i.e. the key goes to /var/lib/ceph/mgr/ceph-foo/keyring.
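A worked example for this cluster, assuming the mgr name is the hostname ceph-1-211 and the cluster name is ceph:
# create the mgr data directory and write the generated key into it
sudo -u ceph mkdir -p /var/lib/ceph/mgr/ceph-ceph-1-211
ceph auth get-or-create mgr.ceph-1-211 mon 'allow profile mgr' osd 'allow *' mds 'allow *' | sudo -u ceph tee /var/lib/ceph/mgr/ceph-ceph-1-211/keyring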
4.2 Start the mgr
ceph-mgr -i $name
4.3 Check ceph status
The output of ceph -s should now include: mgr active: $name
5. Deploy osd
1. Copy the configuration files
The mon's configuration can be reused directly for now; for production use, read the documentation and plan the configuration properly.
- Copy the key file: copy /var/lib/ceph/bootstrap-osd/ceph.keyring (ceph is the cluster name) from the mon to the same path on the osd host.
- Copy the configuration file: copy /etc/ceph/ceph.conf (or /etc/ceph/xiu8.conf if your cluster name is xiu8) from the mon to the same path on the osd host, and fix the file's owner and group.
2. Create the osd
Run the following command on the osd server; /dev/sdb1 is the disk device:
sudo ceph-volume lvm create --data /dev/sdb1
3. Prepare and activate (the two-step alternative to create above; activation follows in step 5)
sudo ceph-volume lvm prepare --data /dev/sdb1
4. Change the cluster name used by the osd service (only needed with a custom cluster name)
vi /lib/systemd/system/ceph-osd@.service
# Change Environment=CLUSTER=ceph in the file to Environment=CLUSTER=$cluster-name, where $cluster-name is your cluster's name
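The same edit can be made non-interactively; a sketch assuming the custom cluster name is xiu8:
sudo sed -i 's/Environment=CLUSTER=ceph/Environment=CLUSTER=xiu8/' /lib/systemd/system/ceph-osd@.service
sudo systemctl daemon-reload   # reload units after editing the service file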
5. Activate
- Look up the osd ID and FSID:
sudo ceph-volume lvm list
- Activate
Here 0 is the osd ID and a7f64266-0894-4f1e-a635-d0aeaca0e993 is the osd FSID:
sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
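Afterwards the new osd should be listed as up and in:
ceph osd tree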
6. Scale mon out to three
1. Copy the configuration files
Copy /etc/ceph/xiu8.client.admin.keyring (xiu8 is the custom cluster name; with the default name the file is ceph.client.admin.keyring) from the first mon to the same path on the server that will host the new mon.
2. Create the mon data directory
The leading ceph in ceph-ceph-1-212 is the cluster name; if you chose a custom cluster name, change it accordingly.
mkdir /var/lib/ceph/mon/ceph-ceph-1-212
3. Create a temporary directory
mkdir tmp
4. Retrieve the keyring for your monitors, where {tmp} is the path to the retrieved keyring, and {key-filename} is the name of the file containing the retrieved monitor key:
Command format:
# ceph auth get mon. -o {tmp}/{key-filename}
# Example:
ceph auth get mon. -o /tmp/keyring
5. Retrieve the monitor map, where {tmp} is the path to the retrieved monitor map, and {map-filename} is the name of the file containing the retrieved monitor map.
Command format:
# ceph mon getmap -o {tmp}/{map-filename}
# Example:
ceph mon getmap -o /tmp/mapfile
6. Prepare the monitor's data directory created in the first step. You must specify the path to the monitor map so that you can retrieve the information about a quorum of monitors and their fsid. You must also specify a path to the monitor keyring:
sudo ceph-mon -i {mon-id} --mkfs --monmap {tmp}/{map-filename} --keyring {tmp}/{key-filename}   # use the hostname as {mon-id}
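A concrete run for the second node, using the file names from the examples above:
sudo ceph-mon -i ceph-1-212 --mkfs --monmap /tmp/mapfile --keyring /tmp/keyring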
7. Start the new monitor and it will automatically join the cluster. The daemon needs to know which address to bind to, via either the --public-addr {ip} or --public-network {network} argument. For example:
Command format:
# ceph-mon -i {mon-id} --public-addr {ip:port}
# Example:
ceph-mon -i ceph-1-212 --public-addr 192.168.1.212:6789
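To confirm the new monitor has joined, check the quorum:
ceph quorum_status --format json-pretty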
7. Fix health warnings
Enable msgr2:
# Warning: 3 monitors have not enabled msgr2
ceph --cluster $cluster-name mon enable-msgr2
Disable insecure global_id reclaim:
ceph --cluster $cluster-name config set mon auth_allow_insecure_global_id_reclaim false