ceph charm: migration to ceph-mon and ceph-osd

Important

This page has been identified as being affected by the breaking changes introduced between versions 2.9.x and 3.x of the Juju client. Read support note Breaking changes between Juju 2.9.x and 3.x before continuing.
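
In particular, the juju integrate commands shown on this page use Juju 3.x syntax; on a 2.9.x client the equivalent command is juju add-relation.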

Note

This page describes a procedure that may be required when performing an upgrade of an OpenStack cloud. Please read the more general Upgrades overview before attempting any of the instructions given here.

In order to continue to receive updates to newer Ceph versions, and for general improvements and features in the charms to deploy Ceph, users of the ceph charm should migrate existing services to using ceph-mon and ceph-osd.

Note

This example migration assumes that the ceph charm is deployed to machines 0, 1 and 2 with the ceph-osd charm deployed to other machines within the model.
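
If you’re unsure where the existing units are placed, filtering juju status by application will show the hosting machines; the application names below simply match this example:

juju status ceph
juju status ceph-osd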

Procedure

Upgrade charms

The entire suite of charms used to manage the cloud should be upgraded to the latest stable charm revision before any major change, such as the migration described here, is made to the cloud. See Charms upgrade for guidance.
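
For example, with a Juju 3.x client each application’s charm can be brought up to the latest stable revision with juju refresh (juju upgrade-charm on 2.9.x); the application names below are just those from this example deployment:

juju refresh ceph
juju refresh ceph-osd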

Deploy ceph-mon

Warning

Every new ceph-mon unit introduced will result in a Ceph monitor receiving a new IP address. However, due to an issue in Nova, the new addresses are not always propagated completely throughout the cloud, which can affect the reachability of Ceph RBD volumes.

Any instances previously deployed using Cinder to interface with Ceph, or using Nova’s libvirt-image-backend=rbd setting, will require a manual database update to point at the new addresses. Stale Cinder data in the ‘block_device_mapping’ table will also need to be updated.

Failure to do this can result in instances being unable to start as their volumes cannot be reached. See bug LP #1452641.
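
As an illustration only, the stale monitor addresses recorded against rbd-backed volumes can be inspected in the Nova database’s block_device_mapping table before any update is attempted; the database name and the assumption that this is run on the database host with suitable credentials are specific to this sketch, so adapt and verify it for your deployment:

mysql nova -e "SELECT id, connection_info FROM block_device_mapping WHERE deleted = 0 AND connection_info LIKE '%rbd%';"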

First deploy the ceph-mon charm; if the existing ceph charm is deployed to machines 0, 1 and 2, you can place the ceph-mon units in LXD containers on these machines:

juju deploy --to lxd:0 ceph-mon
juju config ceph-mon no-bootstrap=True
juju add-unit --to lxd:1 ceph-mon
juju add-unit --to lxd:2 ceph-mon

These units will install Ceph but will not bootstrap into a running monitor cluster.
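
At this stage the new units should be installed but idle; their status can be checked from the Juju client (the exact workload messages will vary between charm revisions):

juju status ceph-mon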

Bootstrap ceph-mon from ceph

Next, we’ll use the existing ceph application to bootstrap the new ceph-mon units:

juju integrate ceph ceph-mon

Once this process has completed, you should have a Ceph MON cluster of six units; this can be verified on any of the ceph or ceph-mon units:

sudo ceph -s
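
The same check can be run from the Juju client instead of a shell on the unit, for example:

juju ssh ceph-mon/0 sudo ceph -s

Once bootstrapping has completed, the mon line of the output should report six monitors in quorum.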

Deploy ceph-osd to ceph units

To retain any running Ceph OSD processes on the ceph units, the ceph-osd charm must be deployed to the machines hosting those units:

juju config ceph-osd osd-reformat=False
juju add-unit --to 0 ceph-osd
juju add-unit --to 1 ceph-osd
juju add-unit --to 2 ceph-osd

Note that as of the 18.05 charm release, the osd-reformat configuration option has been removed entirely; if the ceph-osd charm revision you are running no longer provides it, skip the juju config command above.
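
If you’re unsure whether the revision you are running still provides the option, reading it back with juju config will print its value when it exists and report an error otherwise:

juju config ceph-osd osd-reformat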

The charm installation and configuration will not impact any existing running Ceph OSDs.
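
One way to satisfy yourself of this is to note the OSD count before adding the units and confirm that it is unchanged afterwards; for example, from any ceph or ceph-mon unit:

juju ssh ceph-mon/0 sudo ceph osd stat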

Relate ceph-mon to all ceph clients

The new ceph-mon units now need to be related to the ceph-osd application:

juju integrate ceph-mon ceph-osd

Depending on your deployment, you’ll also need to add relations for other applications, for example:

juju integrate ceph-mon cinder-ceph
juju integrate ceph-mon glance
juju integrate ceph-mon nova-compute
juju integrate ceph-mon ceph-radosgw
juju integrate ceph-mon gnocchi

Once hook execution completes across all units, each client should be configured with six MON addresses.
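
This can be spot-checked on a client unit by inspecting the mon_host entry in its Ceph configuration; the unit name below is taken from a typical nova-compute deployment and the configuration path assumes the charm’s default location:

juju ssh nova-compute/0 grep mon_host /etc/ceph/ceph.conf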

Remove the ceph application

It’s now safe to remove the ceph application from your deployment:

juju remove-application ceph

As each unit of the ceph application is destroyed, its stop hook will remove the MON process from the Ceph cluster monmap and disable the Ceph MON and MGR processes running on the machine; any Ceph OSD processes remain untouched and are now owned by the ceph-osd units deployed alongside them.
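
Once the removal completes, the cluster should settle back to three monitors; this can be confirmed from any of the remaining ceph-mon units:

juju ssh ceph-mon/0 sudo ceph -s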