We will first set up a fifth node, which we are going to call “cephclient”, from which we will run all the client testing:
benoit@admin:~/ceph-deploy$ ceph-deploy install cephclient
benoit@admin:~/ceph-deploy$ ceph-deploy admin cephclient
benoit@admin:~/ceph-deploy$ ssh cephclient
benoit@cephclient:~$ sudo chown benoit /etc/ceph/ceph.client.admin.keyring
benoit@cephclient:~$ ceph health
HEALTH_OK
benoit@cephclient:~$
The purpose of the last command is just to check that the Ceph software is properly installed and configured on the client and that we have correct access to our cluster.
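If you want a slightly more detailed check than the plain HEALTH_OK, the ceph -s (or ceph status) command prints a summary of the monitors, OSDs and placement groups as seen from this client; the exact layout of the output depends on your Ceph version:
benoit@cephclient:~$ ceph -s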
Ensure that your cluster is in an active + clean state (HEALTH_OK) before working with the Ceph Block Device.
Remark: The Ceph Block Device is also known as RBD or RADOS Block Device.
Remark: Do not run the procedure below on any node of your cluster, or you will mess up the cluster configuration and behavior.
benoit@cephclient:~$ rbd create --image-format 2 cephstor --size 8128
The --image-format 2 option is used here to create an image that will be compatible with cloning (see the second part of this tutorial, after you have successfully created your filesystem).
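The size passed to rbd create is expressed in megabytes, so this gives us an image of roughly 8 GB. If you want to verify that the image was created with the expected size and format, you can list the images of the default rbd pool and inspect this one (the exact fields shown depend on your Ceph version):
benoit@cephclient:~$ rbd ls
benoit@cephclient:~$ rbd info cephstor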
benoit@cephclient:~$ sudo modprobe rbd
benoit@cephclient:~$ sudo rbd map cephstor [--pool rbd --name client.admin]
/dev/rbd0
The returned value is the device name to use.
By default the command uses the pool named rbd and the client name client.admin for the keyring (the rbd pool exists by default after initializing the storage cluster, and client.admin refers to the file /etc/ceph/ceph.client.admin.keyring).
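You can confirm that the rbd pool is indeed present by listing the pools of the cluster from the client:
benoit@cephclient:~$ ceph osd lspools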
There is also a symbolic link pointing to this device file: /dev/rbd/rbd/cephstor
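If you want to double-check which images are currently mapped on the client and that the symbolic link is in place, the following should work (output details vary with your Ceph and udev versions):
benoit@cephclient:~$ sudo rbd showmapped
benoit@cephclient:~$ ls -l /dev/rbd/rbd/cephstor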
benoit@cephclient:~$ sudo mkfs.ext4 -m0 /dev/rbd0
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
520192 inodes, 2080768 blocks
104038 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2130706432
64 block groups
32768 blocks per group, 32768 fragments per group
8128 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

benoit@cephclient:~$ sudo mount /dev/rbd/rbd/cephstor /mnt
benoit@cephclient:~$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-root   47G  2.2G   43G   5% /
none                         4.0K     0  4.0K   0% /sys/fs/cgroup
udev                         990M  4.0K  990M   1% /dev
tmpfs                        201M  432K  200M   1% /run
none                         5.0M     0  5.0M   0% /run/lock
none                        1001M   12K 1001M   1% /run/shm
none                         100M     0  100M   0% /run/user
/dev/sda1                    236M  101M  124M  45% /boot
/dev/rbd0                    7.7G   18M  7.3G   1% /mnt
benoit@cephclient:~$
So we have our Ceph Block Device connected to our client, an EXT4 filesystem created on it and mounted on our client.
From now on, we can work with this filesystem just like any other filesystem in Linux.
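As a quick illustration that nothing Ceph-specific is involved at this level, the ordinary Linux tools see it as a normal ext4 mount; for example:
benoit@cephclient:~$ mount | grep rbd0
benoit@cephclient:~$ df -h /mnt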
Snapshots
The Ceph Block Device supports snapshots, so you can create a read-only state of the filesystem at a given point in time.
benoit@cephclient:/$ sudo touch /mnt/BeforeSnap
This file is created for testing the snapshot read-only copy at a given time.
benoit@cephclient:~$ rbd snap create rbd/cephstor@snap1
benoit@cephclient:/$ sudo touch /mnt/AfterSnap
This file is created after taking the snapshot, again to test the read-only copy at a given time.
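To confirm that the snapshot was recorded, you can list the snapshots of the image (the same command is used again at the end of this part):
benoit@cephclient:~$ rbd snap ls rbd/cephstor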
benoit@cephclient:/$ rbd snap protect rbd/cephstor@snap1
Protecting the snapshot is necessary: the clone(s) that we create afterwards would fail miserably if someone removed the snapshot without removing the clones first. This protection prevents the snapshot from being removed or purged while any clone is still linked to it.
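You can see the protection in action simply by trying to delete the snapshot: rbd snap rm will refuse to remove a protected snapshot, so the command below fails harmlessly:
benoit@cephclient:/$ rbd snap rm rbd/cephstor@snap1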
benoit@cephclient:/$ rbd clone rbd/cephstor@snap1 rbd/cephclone
benoit@cephclient:/$ sudo rbd map cephclone
/dev/rbd1
benoit@cephclient:/$ sudo mount /dev/rbd1 /mnt2
benoit@cephclient:/$ ls /mnt
AfterSnap  BeforeSnap  lost+found
benoit@cephclient:/$ ls /mnt2
BeforeSnap  lost+found
So indeed, the mounted clone only shows the files created / modified before we created the snapshot.
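To see which clones still depend on a given snapshot (useful to know before trying to unprotect or remove it), rbd provides a children subcommand; it should list rbd/cephclone until the clone is flattened or deleted:
benoit@cephclient:/$ rbd children rbd/cephstor@snap1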
You can revert your main filesystem to the snapshot copy:
benoit@cephclient:/$ rbd snap rollback rbd/cephstor@snap1
This will destroy all changes made to the filesystem after the time you took the snapshot.
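Remark: in this example the image is still mounted on /mnt while we roll it back; the mounted ext4 filesystem has no way of knowing that the blocks underneath it just changed, so a safer sequence is to unmount first and remount afterwards, along these lines:
benoit@cephclient:/$ sudo umount /mnt
benoit@cephclient:/$ rbd snap rollback rbd/cephstor@snap1
benoit@cephclient:/$ sudo mount /dev/rbd/rbd/cephstor /mnt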
Destroying the clone and snapshot:
benoit@cephclient:/$ rbd flatten rbd/cephclone
Image flatten: 100% complete...done.
benoit@cephclient:/$ rbd snap unprotect rbd/cephstor@snap1
benoit@cephclient:/$ rbd snap rm rbd/cephstor@snap1
benoit@cephclient:/$ sudo rbd unmap /dev/rbd1
benoit@cephclient:/$ rbd snap ls rbd/cephstor
The last command should no longer list any snapshot: the flatten made the clone independent of its parent, which is what allowed us to unprotect and then remove the snapshot.
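If you also want to get rid of the clone itself (after the flatten above it is an independent image, no longer tied to cephstor), you would unmount it and then delete the image; something like:
benoit@cephclient:/$ sudo umount /mnt2
benoit@cephclient:/$ rbd rm cephclone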