Highly available Block Storage API¶
Cinder provides Block-Storage-as-a-Service suitable for performance sensitive scenarios such as databases, expandable file systems, or providing a server with access to raw block level storage.
Persistent block storage can survive instance termination and can also be moved across instances like any external storage device. Cinder also has volume snapshots capability for backing up the volumes.
Making the Block Storage API service highly available in active/passive mode involves:

- Configuring Block Storage to listen on the VIP address
- Managing the Block Storage API daemon with the Pacemaker cluster manager
- Configuring OpenStack services to use this IP address
In theory, you can run the Block Storage service as active/active. However, because of unresolved concerns about how the volume component behaves under concurrent access, we recommend running it as active/passive only.
Add Block Storage API resource to Pacemaker¶
On RHEL-based systems, create resources for cinder’s systemd agents and create constraints to enforce startup/shutdown ordering:
pcs resource create openstack-cinder-api systemd:openstack-cinder-api --clone interleave=true
pcs resource create openstack-cinder-scheduler systemd:openstack-cinder-scheduler --clone interleave=true
pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume
pcs constraint order start openstack-cinder-api-clone then openstack-cinder-scheduler-clone
pcs constraint colocation add openstack-cinder-scheduler-clone with openstack-cinder-api-clone
pcs constraint order start openstack-cinder-scheduler-clone then openstack-cinder-volume
pcs constraint colocation add openstack-cinder-volume with openstack-cinder-scheduler-clone
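Once the resources and constraints are in place, it is worth sanity-checking them before moving on. A quick way to do this (output omitted; resource names as created above):

```
$ pcs status resources
$ pcs constraint
```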
If the Block Storage service runs on the same nodes as the other services, then it is advisable to also include:
pcs constraint order start openstack-keystone-clone then openstack-cinder-api-clone
Alternatively, instead of using systemd agents, download and install the OCF resource agent:
# cd /usr/lib/ocf/resource.d/openstack
# wget https://opendev.org/x/openstack-resource-agents/raw/branch/master/ocf/cinder-api
# chmod a+rx *
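After installing the agent, you can confirm that Pacemaker can see it. For example, with crmsh (assuming the agent landed in /usr/lib/ocf/resource.d/openstack/ as above):

```
$ crm ra info ocf:openstack:cinder-api
```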
You can now add the Pacemaker configuration for Block Storage API resource. Connect to the Pacemaker cluster with the crm configure command and add the following cluster resources:
primitive p_cinder-api ocf:openstack:cinder-api \
    params config="/etc/cinder/cinder.conf" \
    os_password="secretsecret" \
    os_username="admin" \
    os_tenant_name="admin" \
    keystone_get_token_url="http://10.0.0.11:5000/v2.0/tokens" \
    op monitor interval="30s" timeout="30s"
This configuration creates p_cinder-api, a resource for managing the Block Storage API service.
The crm configure command supports batch input. Copy and paste the lines above into your live Pacemaker configuration and then make changes as required. For example, you may enter edit p_ip_cinder-api from the crm configure menu and edit the resource to match your preferred virtual IP address.
Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker then starts the Block Storage API service and its dependent resources on one of your nodes.
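You can then confirm that the resource has started on one of the nodes, for example:

```
$ crm_mon -1 | grep cinder
```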
Configure Block Storage API service¶
Edit the /etc/cinder/cinder.conf file. For example, on a RHEL-based system:
[DEFAULT]
# This is the name which we should advertise ourselves as and for
# A/P installations it should be the same everywhere
host = cinder-cluster-1

# Listen on the Block Storage VIP
osapi_volume_listen = 10.0.0.11

auth_strategy = keystone
control_exchange = cinder

volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_exports
nfs_sparsed_volumes = true
nfs_mount_options = v3

[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@10.0.0.11/cinder
max_retries = -1

[keystone_authtoken]
# 10.0.0.11 is the Keystone VIP
identity_uri = http://10.0.0.11:35357/
www_authenticate_uri = http://10.0.0.11:5000/
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS

[oslo_messaging_rabbit]
# Explicitly list the rabbit hosts as it doesn't play well with HAProxy
rabbit_hosts = 10.0.0.12,10.0.0.13,10.0.0.14
# As a consequence, we also need HA queues
rabbit_ha_queues = True
heartbeat_timeout_threshold = 60
heartbeat_rate = 2
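The nfs_shares_config option above points to a file listing the NFS shares to mount, one share per line. A minimal /etc/cinder/nfs_exports might look like this (the server address is hypothetical):

```
10.0.0.20:/srv/cinder-volumes
```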
Replace CINDER_DBPASS with the password you chose for the Block Storage service database, and CINDER_PASS with the password you chose for the cinder user in the Identity service.
This example assumes that you are using NFS for the physical storage, which will almost never be true in a production installation.
If you are using the Block Storage service OCF agent, some settings will be filled in for you, resulting in a shorter configuration file:
# We have to use a MySQL connection to store data.
# mysql+pymysql selects the PyMySQL driver, which is Python 3 compatible.
# Ref: https://wiki.openstack.org/wiki/PyMySQL_evaluation
connection = mysql+pymysql://cinder:CINDER_DBPASS@10.0.0.11/cinder

# We bind the Block Storage API to the VIP:
osapi_volume_listen = 10.0.0.11

# We send notifications to the highly available RabbitMQ:
notifier_strategy = rabbit
rabbit_host = 10.0.0.11
Replace CINDER_DBPASS with the password you chose for the Block Storage service database.
Configure OpenStack services to use the highly available Block Storage API¶
Your OpenStack services must now point their Block Storage API configuration to the highly available, virtual cluster IP address rather than a Block Storage API server’s physical IP address as you would for a non-HA environment.
Create the Block Storage API endpoint with this IP.
If you are using both private and public IP addresses, create two virtual IPs and define your endpoint. For example:
$ openstack endpoint create --region $KEYSTONE_REGION \
  volumev2 public http://PUBLIC_VIP:8776/v2/%\(project_id\)s
$ openstack endpoint create --region $KEYSTONE_REGION \
  volumev2 admin http://10.0.0.11:8776/v2/%\(project_id\)s
$ openstack endpoint create --region $KEYSTONE_REGION \
  volumev2 internal http://10.0.0.11:8776/v2/%\(project_id\)s
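Note the backslashes in %\(project_id\)s: unquoted parentheses are a shell syntax error, so the placeholder must be escaped, or the whole URL quoted, for Keystone to receive it literally. A quick sketch:

```shell
# Quoting the URL passes the %(project_id)s placeholder through to
# Keystone unchanged, with no backslash escapes needed
# (VIP address taken from the example above):
url="http://10.0.0.11:8776/v2/%(project_id)s"
echo "$url"   # prints http://10.0.0.11:8776/v2/%(project_id)s
```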
Use Cinder volume backup and restore service¶
Cinder provides a feature to backup and restore volumes and snapshots. The first backup of a volume must be handled as a full backup. Subsequent backups may be either full or incremental backups from the last full backup. See also the Cinder Block Storage Administration Guide’s section on backing up and restoring volumes and snapshots.
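As a sketch, a typical backup cycle with the openstack client looks like this (the volume and backup names are hypothetical; the first backup is full, and later ones may pass --incremental):

```
$ openstack volume backup create --name vol1-full vol1
$ openstack volume backup create --name vol1-incr --incremental vol1
$ openstack volume backup restore vol1-full vol1
```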