I. Mounting Ceph block storage on Linux
1. Notes
Note: Ensure your Ceph Storage Cluster is in an active + clean state before working with the Ceph Block Device (a quick way to check this is sketched after these notes).
The Ceph Block Device is also known as RBD or RADOS Block Device.
You may use a virtual machine for your ceph-client node, but do not execute the following procedures on the same physical node as your Ceph Storage Cluster nodes (unless you use a VM). See FAQ for details.
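Before continuing, the cluster state can be verified from the admin node. This is only a quick sketch and assumes the default /etc/ceph/ceph.conf and admin keyring are available on that node:
# Should report HEALTH_OK once all placement groups are active+clean
ceph health
# Fuller status output; the PG summary should show "active+clean"
ceph -s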
2. Create a pool
2.1 Create the pool on the Ceph cluster
On the admin node, use the ceph tool to create a pool (we recommend the name ‘rbd’).
ceph osd pool create rbd
On the admin node, use the rbd tool to initialize the pool for use by RBD:
rbd pool init rbd
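To confirm the pool exists and is usable by RBD, a quick check along these lines can be run on the admin node (a sketch; the pool name ‘rbd’ from the commands above is assumed):
# The new pool should appear in the pool list
ceph osd lspools
# Listing images in the pool should succeed (and return nothing for a fresh pool)
rbd ls rbd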
3. Mount the block device
On the ceph-client node, create a block device image.
rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
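To verify the image was created with the intended size and features, something like the following can be run (a sketch; pass the same -p {pool-name} option as above if the image is not in the default pool):
# Shows the image size and the enabled features, which should include "layering"
rbd info foo [-p {pool-name}]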
On the ceph-client node, map the image to a block device.
sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
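To confirm the mapping succeeded and find the device node that was assigned, a check such as this can be used (sketch):
# Lists mapped images together with their /dev/rbdX devices
sudo rbd showmapped
# A convenience symlink is also created under /dev/rbd/<pool>/<image>
ls -l /dev/rbd/{pool-name}/foo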
Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/{pool-name}/foo
Mount the file system on the ceph-client node.
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/{pool-name}/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
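As a quick sanity check that the filesystem is mounted and writable (a sketch; uses the mount point from the step above):
# Size should roughly match the 4096 MB image created earlier
df -h /mnt/ceph-block-device
# A small test write confirms I/O works end to end
sudo sh -c 'echo ok > /mnt/ceph-block-device/rbd-test.txt'
cat /mnt/ceph-block-device/rbd-test.txt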
Optionally configure the block device to be automatically mapped and mounted at boot (and unmounted/unmapped at shutdown); see the rbdmap manpage.
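As a rough illustration only (the authoritative file format and recommended options are in the rbdmap manpage; the pool name ‘rbd’, the admin keyring path, and the ext4 mount point below are assumptions carried over from the steps above), the setup involves an entry in /etc/ceph/rbdmap, an optional /etc/fstab line, and enabling the rbdmap service:
# /etc/ceph/rbdmap -- one "pool/image" entry per line plus map parameters
rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
# /etc/fstab -- 'noauto' lets the rbdmap service mount it after mapping
/dev/rbd/rbd/foo /mnt/ceph-block-device ext4 noauto 0 0
# Enable the service so images are mapped at boot and unmapped at shutdown (systemd systems)
sudo systemctl enable rbdmap.service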