Ceph Block Device usage

In a previous article, we went through the steps to get a Ceph cluster up and running, with distributed and protected storage.
Now it is time to add some services on top of it, so that your applications, VMs or servers can access and use this storage.
We will add a block device, a Ceph filesystem and an Object Gateway (compatible with OpenStack Swift and Amazon S3).
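As a first taste of the block device part, here is a minimal sketch of creating and mapping an RBD image from a client node. The pool and image names (`rbd`, `mydisk`) and the mount point are placeholders, and the commands assume a working cluster with a valid keyring on the client:

```shell
# Create a pool for block images (128 placement groups is an example value)
ceph osd pool create rbd 128

# Create a 10 GB image in that pool (--size is in megabytes)
rbd create mydisk --pool rbd --size 10240

# Map the image to a local block device (appears as /dev/rbd*)
rbd map mydisk --pool rbd

# Format and mount it like any other block device
mkfs.ext4 /dev/rbd/rbd/mydisk
mkdir -p /mnt/mydisk
mount /dev/rbd/rbd/mydisk /mnt/mydisk
```

The image can later be unmapped with `rbd unmap` once the filesystem is unmounted.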

Multi-master OpenLDAP with 3 or more nodes

Going from a two-node multi-master configuration to one with more than two nodes is not really complex, once you have understood what is done in the two-node configuration:

  • In the two-node configuration, each node has a distinct ServerID, and the same holds with N nodes. To let the local LDAP server differentiate between the various masters, the configuration now lists one ServerID directive per node, each followed by the LDAP URI used to reach that server.
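For a three-node setup, the ServerID block is simply extended with one line per master. A minimal sketch (the hostnames are placeholders for your actual LDAP servers; the same block appears identically on every node):

```shell
# slapd.conf fragment, identical on all three masters:
# each line pairs a server ID with the URI of the node it designates
ServerID 1 ldap://ldap1.example.com
ServerID 2 ldap://ldap2.example.com
ServerID 3 ldap://ldap3.example.com
```

At startup, each slapd compares its own listening URI against this list to discover which ID is its own, which is why the block can be shared verbatim across all nodes.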

GFS on iSCSI shared storage

This method, based on the software versions delivered with CentOS 6.0, used dlm_controld.pcmk and gfs_controld.pcmk, which are special versions developed to be driven directly by Pacemaker. After upgrading the OS to CentOS 6.2, the RPMs providing dlm_controld.pcmk and gfs_controld.pcmk were replaced by cman, which provides the standard gfs_controld and dlm_controld. To use these two with Pacemaker, we need to enable CMAN with Corosync.
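A rough sketch of the switch to CMAN on CentOS 6.2 follows. The cluster and node names are placeholders, and the exact procedure may differ depending on your existing Corosync configuration:

```shell
# Install cman, which now ships the standard dlm_controld and gfs_controld
yum install -y cman gfs2-utils

# Describe the cluster in /etc/cluster/cluster.conf (names are examples)
ccs -f /etc/cluster/cluster.conf --createcluster mycluster
ccs -f /etc/cluster/cluster.conf --addnode node1.example.com
ccs -f /etc/cluster/cluster.conf --addnode node2.example.com

# CMAN starts Corosync itself, so disable the standalone corosync service
chkconfig corosync off
chkconfig cman on

# Start CMAN first, then Pacemaker on top of it
service cman start
service pacemaker start
```

Once CMAN is running, Pacemaker manages the GFS2 and DLM daemons through the standard controld resource agents instead of the old .pcmk variants.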

MySQL active-passive cluster

We will use the iSCSI LUN defined in our iSCSI cluster as shared storage, and we will run MySQL in active-passive (fail-over) mode using the Pacemaker and Corosync cluster engine.

The cluster will have to connect to the iSCSI target, mount the iSCSI partition on one node and start a MySQL service which has all its data on this partition.

We will need the following resources and resource agents (RA) on this cluster:

  • virtual IP → ocf:heartbeat:IPaddr2