CephFS driver

The CephFS driver enables manila to export shared filesystems backed by Ceph’s File System (CephFS) using either the Ceph network protocol or NFS protocol. Guests require a native Ceph client or an NFS client in order to mount the filesystem.

When guests access CephFS using the native Ceph protocol, access is controlled via Ceph’s cephx authentication system. If a user requests share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret key if they do not already exist, and authorizes the ID to access the share. The client can then mount the share using the ID and the secret key. To learn more about configuring Ceph clients to access the shares created using this driver, please see the Ceph documentation.

When guests access CephFS through NFS, an NFS-Ganesha server (or the CephFS NFS service) mediates access to CephFS. The driver enables access control by managing the NFS-Ganesha server’s exports.

Supported Operations

The following operations are supported with the CephFS backend:

  • Create, delete, update and list share

  • Allow/deny access to share

    • Only cephx access type is supported for CephFS native protocol.

    • Only ip access type is supported for NFS protocol.

    • read-only and read-write access levels are supported.

  • Extend/shrink share

  • Create, delete, update and list snapshot

  • Create, delete, update and list share groups

  • Delete and list share group snapshots

Important

Share group snapshot creation is no longer supported in mainline CephFS. This feature was removed from manila in the Wallaby release.

Prerequisites

Important

A manila share backed by CephFS is only as good as the underlying filesystem. Take care when configuring your Ceph cluster, and consult the latest guidance on the use of CephFS in the Ceph documentation.

Ceph testing matrix

As Ceph and Manila continue to grow, it is essential to test and support combinations of releases supported by both projects. However, there is little community bandwidth to cover all of them. For simplicity’s sake, we focus on testing (and therefore supporting) the currently active Ceph releases. Check the list of active Ceph releases in the Ceph documentation.

Below is the current state of testing for Ceph releases with this project. Adjacent components such as devstack-plugin-ceph are included in the table below. Contributors to those projects can determine what versions of Ceph are tested and supported with manila by those components; their state is presented here for ease of access.

OpenStack release      Manila     devstack-plugin-ceph
Wallaby                Pacific    Pacific
Xena                   Pacific    Quincy
Yoga                   Quincy     Quincy
Zed                    Quincy     Quincy
2023.1 (“Antelope”)    Quincy     Quincy
2023.2 (“Bobcat”)      Quincy     Reef
2024.1 (“Caracal”)     Reef       Reef
2024.2 (“Dalmatian”)   Reef       Reef

Additionally, the version of the Ceph client available to manila is expected to be aligned with the Ceph server version. Mixing server and client versions is strongly discouraged.

When using the NFS-Ganesha driver, it is also good practice to use an NFS-Ganesha version that aligns with your chosen Ceph release.

Common Prerequisites

  • A Ceph cluster with a filesystem configured (See Create ceph filesystem on how to create a filesystem.)

  • python3-rados and python3-ceph-argparse packages installed on the servers running the manila-share service (see the example following this list).

  • Network connectivity between your Ceph cluster’s public network and the servers running the manila-share service.
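
The packages can be installed with your distribution's package manager. The following is a minimal sketch assuming a Debian/Ubuntu based host running the manila-share service; use the dnf/yum equivalents on RPM based distributions:

# Install the Ceph Python bindings required by the manila-share service.
# Package names are those listed above; adjust them for your distribution.
sudo apt-get install -y python3-rados python3-ceph-argparse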

For CephFS native shares

The pre-requisites for CephFS native shares are:

  • A native Ceph client installed in the guest (see Mounting CephFS shares).

  • Network connectivity between your Ceph cluster’s public network and the guests that will mount the shares.

For CephFS NFS shares

There are two ways for the CephFS driver to provision and export CephFS shares via NFS. Both ways involve the user space NFS service, NFS-Ganesha.

Since the Quincy release of Ceph, there is support to create and manage an NFS-Ganesha based “ceph nfs” service. This service can be clustered, i.e., it can have one or more active NFS services working in tandem to provide high availability. You can also optionally deploy an ingress service to front-end this cluster natively using ceph’s management commands. Doing this eases the management of an NFS service that serves CephFS shares securely, and also provides an active/active high availability configuration, which may be highly desired in production environments. Please follow the ceph documentation for instructions to deploy such a cluster with the necessary configuration. With an NFS cluster, the CephFS driver uses Ceph mgr APIs to create and manipulate exports when share access rules are created and deleted.
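
As a minimal sketch, a clustered NFS service can be created with the Ceph mgr “nfs” module, assuming a cephadm/ceph orchestrator managed cluster running Quincy or newer. The cluster name mycephfsnfscluster below matches the cephfs_nfs_cluster_id value used in the configuration example later in this document; an optional ingress service can be layered on top as described in the Ceph documentation.

# Create an NFS-Ganesha cluster managed by the ceph orchestrator.
ceph nfs cluster create mycephfsnfscluster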

The CephFS driver can also work with Manila’s in-built NFS-Ganesha driver to interface with an independent, standalone NFS-Ganesha service that is not orchestrated via Ceph. Unlike when under Ceph’s management, the high availability of the NFS server must be externally managed. Typically deployers use Pacemaker/Corosync for providing active/passive availability for such a standalone NFS-Ganesha service. See the NFS-Ganesha documentation for more information. The CephFS driver can be configured to store the NFS recovery data in a RADOS pool to facilitate the server’s recovery if the service is shut down and respawned due to failures/outages.

Since the Antelope (2023.1) release of OpenStack Manila, we recommend using a ceph orchestrator deployed NFS service. The use of a standalone NFS-Ganesha service is deprecated as of the Caracal (2024.1) release and support will be removed in a future release.

The CephFS driver does not specify an NFS protocol version when setting up exports. This allows the deployer to configure the appropriate NFS protocol version(s) directly in the NFS-Ganesha configuration. NFS-Ganesha enables both NFS version 3 and version 4.x in its default configuration. Please note that there are many differences at the protocol level between NFS versions. Many deployers enable only NFS version 4.1 (and beyond) to take advantage of enhancements in locking, security and ease of port management. Be aware that not all clients support the latest versions of NFS.

The pre-requisites for NFS are:

  • NFS client installed in the guest.

  • Network connectivity between your Ceph cluster’s public network and NFS-Ganesha service.

  • Network connectivity between your NFS-Ganesha service and the client mounting the manila share.

  • Appropriate firewall rules to permit port access between the clients and the NFS-Ganesha service.

If you’re deploying a standalone NFS-Ganesha service, we recommend using the latest version of NFS-Ganesha. The server must be deployed with at least NFS-Ganesha version 3.5.
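
If you are unsure which version is deployed, a quick check such as the one below can help; it assumes the ganesha.nfsd binary is on the PATH of the NFS-Ganesha host (querying your package manager works equally well).

# Print the NFS-Ganesha release; it must report version 3.5 or newer.
ganesha.nfsd -v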

Authorizing the driver to communicate with Ceph

The capabilities required for the Ceph manila identity have changed since the Wallaby release. The configured Ceph manila identity no longer needs any MDS capability, and its MON and OSD capabilities can be reduced. However, new MGR capabilities are now required; without them, the driver cannot communicate with the Ceph cluster.

Important

The driver in the Wallaby (or later) release requires a Ceph identity with a different set of Ceph capabilities when compared to the driver in a pre-Wallaby release.

When upgrading to Wallaby, you’ll also have to update the capabilities of the Ceph identity used by the driver (refer to the Ceph user capabilities docs). For example, for a native driver that already uses the client.manila Ceph identity, issue the command: ceph auth caps client.manila mon 'allow r' mgr 'allow rw'

If you are deploying the CephFS driver with Native CephFS or using an NFS service deployed with ceph management commands, the auth ID should be set as follows:

ceph auth get-or-create client.manila -o manila.keyring \
  mgr 'allow rw' \
  mon 'allow r'

If you’re deploying the CephFS NFS driver with a standalone NFS-Ganesha service, we use a specific pool to store exports (configurable with the config option “ganesha_rados_store_pool_name”). The client.manila ceph user requires permission to access this pool. So, the auth ID should be set as follows:

ceph auth get-or-create client.manila -o manila.keyring \
  osd 'allow rw pool=<ganesha_rados_store_pool_name>' \
  mgr 'allow rw' \
  mon 'allow r'

manila.keyring, along with your ceph.conf file, will then need to be placed on the server running the manila-share service.

Important

To communicate with the Ceph backend, a CephFS driver instance (represented as a backend driver section in manila.conf) requires its own Ceph auth ID that is not used by other CephFS driver instances running in the same controller node.

In the server running the manila-share service, you can place the ceph.conf and manila.keyring files in the /etc/ceph directory. Set the same owner for the manila-share process and the manila.keyring file. Add the following section to the ceph.conf file.

[client.manila]
client mount uid = 0
client mount gid = 0
log file = /opt/stack/logs/ceph-client.manila.log
admin socket = /opt/stack/status/stack/ceph-$name.$pid.asok
keyring = /etc/ceph/manila.keyring

It is advisable to modify the Ceph client’s admin socket file and log file locations so that they are co-located with the manila service’s PID files and log files respectively.
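
For example, the files could be placed as follows. This is only a sketch; it assumes the manila-share service runs as a “manila” user and group, and that ceph.conf and manila.keyring are in the current directory.

# Copy the Ceph configuration and the manila keyring to the manila-share host
# and make them readable by the assumed "manila" service user.
sudo install -o manila -g manila -m 0644 ceph.conf /etc/ceph/ceph.conf
sudo install -o manila -g manila -m 0600 manila.keyring /etc/ceph/manila.keyring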

Enabling snapshot support in Ceph backend

From Ceph Nautilus, all new filesystems created on Ceph have snapshots enabled by default. If you’ve upgraded your ceph cluster and want to enable snapshots on a pre-existing filesystem, you can do so:

ceph fs set {fs_name} allow_new_snaps true

Configuring CephFS backend in manila.conf

Configure CephFS native share backend in manila.conf

Add CephFS to enabled_share_protocols (enforced at manila api layer). In this example we leave NFS and CIFS enabled, although you can remove these if you will only use a CephFS backend:

enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section like this to define a CephFS native backend:

[cephfsnative1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNATIVE1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_protocol_helper_type = CEPHFS
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = cephfs

Set driver_handles_share_servers to False as the driver does not manage the lifecycle of share-servers. For the driver backend to expose shares via the native Ceph protocol, set cephfs_protocol_helper_type to CEPHFS.

Then edit enabled_share_backends to point to the driver’s backend section using the section name. In this example we also include another backend (“generic1”); include whatever other backends you have configured.

Finally, edit cephfs_filesystem_name with the name of the Ceph filesystem (also referred to as a CephFS volume) you want to use. If you have more than one Ceph filesystem in the cluster, you need to set this option.
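
If you are unsure of the filesystem name, it can be listed directly from the Ceph cluster:

# List the CephFS filesystems (volumes) in the cluster; use the name of the
# chosen filesystem as the value of cephfs_filesystem_name.
ceph fs ls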

Important

For Native CephFS shares, the backing cephfs_filesystem_name is visible to end users through the __mount_options metadata. Make sure to add the __mount_options metadata key to the list of admin only modifiable metadata keys (admin_only_metadata), as explained in the additional configuration options page.

enabled_share_backends = generic1, cephfsnative1

Configure CephFS NFS share backend in manila.conf

Note

Prior to configuring the Manila CephFS driver to use NFS, you must have installed and configured NFS-Ganesha. If you’re using ceph orchestrator to create the NFS-Ganesha service and manage it alongside ceph, refer to the Ceph documentation on how to setup this service. If you’re using an independently deployed standalone NFS-Ganesha service, refer to the NFS-Ganesha setup guide.

Add NFS to enabled_share_protocols if it’s not already there:

enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section to define a CephFS NFS share backend. The following is an example for using a ceph orchestrator deployed NFS service:

[cephfsnfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = cephfs
cephfs_nfs_cluster_id = mycephfsnfscluster

The following is an example for using an independently deployed standalone NFS-Ganesha service:

[cephfsnfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_filesystem_name = cephfs
cephfs_ganesha_server_is_remote = False
cephfs_ganesha_server_ip = 172.24.4.3
ganesha_rados_store_enable = True
ganesha_rados_store_pool_name = cephfs_data

The following options are set in the driver backend sections above:

  • driver_handles_share_servers is set to False as the driver does not manage the lifecycle of share-servers.

  • cephfs_protocol_helper_type to NFS to allow NFS protocol access to the CephFS backed shares.

  • cephfs_auth_id to the ceph auth ID created in Authorizing the driver to communicate with Ceph.

  • cephfs_nfs_cluster_id - Use this option with a ceph orchestrator deployed clustered NFS service. Set it to the name of the cluster created with the ceph orchestrator (see the example following this list).

  • cephfs_ganesha_server_is_remote - Use this option with a standalone NFS-Ganesha service. Set it to False if the NFS-Ganesha server is co-located with the manila-share service. If the NFS-Ganesha server is remote, then set the option to True, and set other options such as cephfs_ganesha_server_ip, cephfs_ganesha_server_username, and cephfs_ganesha_server_password (or cephfs_ganesha_path_to_private_key) to allow the driver to manage the NFS-Ganesha export entries over SSH.

  • cephfs_ganesha_server_ip - Use this option with a standalone NFS-Ganesha service. Set it to the ganesha server IP address. It is recommended to set this option even if the ganesha server is co-located with the manila-share service.

  • ganesha_rados_store_enable - Use this option with a standalone NFS-Ganesha service. Set it to True or False. Setting this option to True allows NFS Ganesha to store exports and its export counter in Ceph RADOS objects. We recommend setting this to True and using a RADOS object since it is useful for highly available NFS-Ganesha deployments to store their configuration efficiently in an already available distributed storage system.

  • ganesha_rados_store_pool_name - Use this option with a standalone NFS-Ganesha service. Set it to the name of the RADOS pool you have created for use with NFS-Ganesha. Set this option only if also setting the ganesha_rados_store_enable option to True. If you want to use one of the backend CephFS’s RADOS pools, then using CephFS’s data pool is preferred over using its metadata pool.
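
To find the value for cephfs_nfs_cluster_id on a ceph orchestrator deployed NFS service, the clusters can be listed and inspected directly. This is a sketch, assuming the cluster name used in the backend example above:

# List the NFS clusters known to the ceph orchestrator and show the details
# (backend daemons, virtual IP, port) of the one configured for manila.
ceph nfs cluster ls
ceph nfs cluster info mycephfsnfscluster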

Edit enabled_share_backends to point to the driver’s backend section using the section name, cephfsnfs1.

Finally, edit cephfs_filesystem_name with the name of the Ceph filesystem (also referred to as a CephFS volume) you want to use. If you have more than one Ceph filesystem in the cluster, you need to set this option.

enabled_share_backends = generic1, cephfsnfs1

Space considerations

The CephFS driver reports the total and free capacity available across the Ceph cluster to manila to allow provisioning. All CephFS shares are thinly provisioned, i.e., empty shares do not consume any significant space on the cluster. The CephFS driver does not allow controlling oversubscription via manila. So, as long as there is free space, provisioning will continue; eventually this may cause your Ceph cluster to be over-provisioned, and you may run out of space if shares are filled to capacity. It is advised that you use Ceph’s monitoring tools to monitor space usage and add more storage when required in order to honor space requirements for provisioned manila shares. You may use the driver configuration option reserved_share_percentage to prevent manila from filling up your Ceph cluster, while allowing existing shares to grow.
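
For example, a backend section can reserve part of the reported capacity so that manila stops scheduling new shares before the cluster is completely full. This is only a sketch; the 15 percent value and the cephfsnfs1 section name are illustrative:

[cephfsnfs1]
# ... other backend options from the examples above ...
reserved_share_percentage = 15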

Creating shares

Create CephFS native share

The default share type may have driver_handles_share_servers set to True. Configure a share type suitable for CephFS native share:

openstack share type create cephfsnativetype false
openstack share type set cephfsnativetype --extra-specs vendor_name=Ceph storage_protocol=CEPHFS

Then create a share:

openstack share create --share-type cephfsnativetype --name cephnativeshare1 cephfs 1

Note the export location of the share:

openstack share export location list cephnativeshare1

The export location of the share contains the Ceph monitor (mon) addresses and ports, and the path to be mounted. It is of the form, {mon ip addr:port}[,{mon ip addr:port}]:{path to be mounted}
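
For example, with the monitor addresses and share path used in the mounting examples later in this document, a native share export location would look like:

192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c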

Create CephFS NFS share

Configure a share type suitable for CephFS NFS share:

openstack share type create cephfsnfstype false
openstack share type set cephfsnfstype --extra-specs vendor_name=Ceph storage_protocol=NFS

Then create a share:

openstack share create --share-type cephfsnfstype --name cephnfsshare1 nfs 1

Note the export location of the share:

openstack share export location list cephnfsshare1

The export location of the share contains the IP address of the NFS-Ganesha server and the path to be mounted. It is of the form, {NFS-Ganesha server address}:{path to be mounted}
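
For example, with the NFS-Ganesha server address and share path used in the mounting example later in this document, an export location would look like:

172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e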

Allowing access to shares

Allow access to CephFS native share

Allow Ceph auth ID alice access to the share using cephx access type.

openstack share access create cephnativeshare1 cephx alice

Note the access status, and the access/secret key of alice.

openstack share access list cephnativeshare1

Allow access to CephFS NFS share

Allow a guest access to the share using ip access type.

openstack share access create cephnfsshare1 ip 172.24.4.225

Mounting CephFS shares

Note

The CephFS filesystem name will be available in the share’s __mount_options metadata.

Mounting CephFS native share using FUSE client

Using the secret key of the authorized ID alice create a keyring file, alice.keyring like:

[client.alice]
        key = AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA==

Using the mon IP addresses from the share’s export location, create a configuration file, ceph.conf like:

[client]
        client quota = true
        mon host = 192.168.1.7:6789, 192.168.1.8:6789, 192.168.1.9:6789

Finally, mount the filesystem, substituting the filenames of the keyring and configuration files you just created, and substituting the path to be mounted from the share’s export location:

sudo ceph-fuse ~/mnt \
--id=alice \
--conf=./ceph.conf \
--keyring=./alice.keyring \
--client-mountpoint=/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c

Mounting CephFS native share using Kernel client

If you have the ceph-common package installed in the client host, you can use the kernel client to mount CephFS shares.

Important

If you choose to use the kernel client rather than the FUSE client the share size limits set in manila may not be obeyed in versions of kernel older than 4.17 and Ceph versions older than mimic. See the quota limitations documentation to understand CephFS quotas.

The mount command is as follows:

mount -t ceph {mon1 ip addr}:6789,{mon2 ip addr}:6789,{mon3 ip addr}:6789:{path to be mounted} \
    {mount-point} -o name={access-id},secret={access-key}

With our earlier examples, this would be:

mount -t ceph 192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c \
    ~/mnt -o name=alice,secret='AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA=='

Mount CephFS NFS share using NFS client

In the guest, mount the share with the NFS client, using the share’s export location:

sudo mount -t nfs 172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e /mnt/nfs/
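
Optionally, confirm the mount and the negotiated NFS version (this assumes the mount point used above and the util-linux findmnt tool):

# Show the mounted share and its NFS mount options, including the vers= in use.
findmnt /mnt/nfs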

Known restrictions

  • A CephFS driver instance, represented as a backend driver section in manila.conf, requires a Ceph auth ID unique to the backend Ceph Filesystem. Using a non-unique Ceph auth ID will result in the driver unintentionally evicting other CephFS clients using the same Ceph auth ID to connect to the backend.

  • Snapshots are read-only. A user can read a snapshot’s contents from the .snap/{manila-snapshot-id}_{unknown-id} folder within the mounted share, as shown in the example below.
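
For example, from a client that has mounted the share at ~/mnt (as in the mounting examples above), snapshots can be listed and read under the .snap directory:

# List the snapshot directories exposed inside the mounted share.
ls ~/mnt/.snap/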

Security

Security with CephFS native share backend

As the guests need direct access to Ceph’s public network, CephFS native share backend is suitable only in private clouds where guests can be trusted.

Configuration Reference

Description of CephFS share driver configuration options

Configuration option = Default value

Description

[DEFAULT]

cephfs_auth_id = manila

(String) The name of the ceph auth identity to use.

cephfs_cluster_name = ceph

(String) The name of the cluster in use, if it is not the default (‘ceph’).

cephfs_conf_path = /etc/ceph/ceph.conf

(String) Fully qualified path to the ceph.conf file.

cephfs_protocol_helper_type = CEPHFS

(String) The type of protocol helper to use. Default is CEPHFS.

cephfs_ganesha_server_is_remote = False

(Boolean) Whether the NFS-Ganesha server is remote to the driver.

cephfs_ganesha_server_ip = None

(String) The IP address of the NFS-Ganesha server.

cephfs_ganesha_server_username = root

(String) The username to authenticate as in the remote NFS-Ganesha server host.

cephfs_ganesha_path_to_private_key = None

(String) The path of the driver host’s private SSH key file.

cephfs_ganesha_server_password = None

(String) The password to authenticate as the user in the remote Ganesha server host. This is not required if ‘cephfs_ganesha_path_to_private_key’ is configured.

cephfs_ganesha_export_ips = []

(String) List of IPs to export shares. If not supplied, then the value of ‘cephfs_ganesha_server_ip’ will be used to construct share export locations.

cephfs_volume_mode = 755

(String) The read/write/execute permissions mode for CephFS volumes, snapshots, and snapshot groups expressed in Octal as with linux ‘chmod’ or ‘umask’ commands.

cephfs_filesystem_name = None

(String) The name of the filesystem to use, if there are multiple filesystems in the cluster.

cephfs_ensure_all_shares_salt = manila_cephfs_reef_caracal

(String) Provide a new unique string value to make the driver “ensure” all of the shares it has created during startup. Ensuring re-exports the shares; this action isn’t always required, unless something has been administratively modified on CephFS.

cephfs_nfs_cluster_id = None

(String) The ID of the NFS cluster to use.

The manila.share.drivers.cephfs.driver Module

class CephFSDriver(*args, **kwargs)

Bases: ExecuteMixin, GaneshaMixin, ShareDriver

Driver for the Ceph Filesystem.

property ceph_mon_version
check_for_setup_error()

Returns an error if prerequisites aren’t met.

create_share(context, share, share_server=None)

Create a CephFS volume.

Parameters:
  • context – A RequestContext.

  • share – A Share.

  • share_server – Always None for CephFS native.

Returns:

The export locations dictionary.

create_share_from_snapshot(context, share, snapshot, share_server=None, parent_share=None)

Create a CephFS subvolume from a snapshot

create_share_group(context, sg_dict, share_server=None)

Create a share group.

Parameters:
  • context

  • share_group_dict

    The share group details EXAMPLE:

    {
    'status': 'creating',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': 'False',
    'created_at': datetime.datetime(2015, 8, 10, 15, 14, 6),
    'updated_at': None,
    'source_share_group_snapshot_id': 'some_fake_uuid',
    'share_group_type_id': 'some_fake_uuid',
    'host': 'hostname@backend_name',
    'share_network_id': None,
    'share_server_id': None,
    'deleted_at': None,
    'share_types': [<models.ShareGroupShareTypeMapping>],
    'id': 'some_fake_uuid',
    'name': None
    }

Returns:

(share_group_model_update, share_update_list) share_group_model_update - a dict containing any values to be updated for the SG in the database. This value may be None.

create_share_group_snapshot(context, snap_dict, share_server=None)

Create a share group snapshot.

Parameters:
  • context

  • snap_dict

    The share group snapshot details EXAMPLE: .. code:

    {
    'status': 'available',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': '0',
    'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'share_group_id': 'some_fake_uuid',
    'share_group_snapshot_members': [
        {
         'status': 'available',
         'share_type_id': 'some_fake_uuid',
         'user_id': 'a0314a441ca842019b0952224aa39192',
         'deleted': 'False',
         'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share': <models.Share>,
         'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share_proto': 'NFS',
         'share_name': 'share_some_fake_uuid',
         'name': 'share-snapshot-some_fake_uuid',
         'project_id': '13c0be6290934bd98596cfa004650049',
         'share_group_snapshot_id': 'some_fake_uuid',
         'deleted_at': None,
         'share_id': 'some_fake_uuid',
         'id': 'some_fake_uuid',
         'size': 1,
         'provider_location': None,
        }
    ],
    'deleted_at': None,
    'id': 'some_fake_uuid',
    'name': None
    }
    

Returns:

(share_group_snapshot_update, member_update_list) share_group_snapshot_update - a dict containing any values to be updated for the CGSnapshot in the database. This value may be None.

member_update_list - a list of dictionaries, one for every member of the share group snapshot. Each dict should contain values to be updated for the ShareGroupSnapshotMember in the database. This list may be empty or None.

create_snapshot(context, snapshot, share_server=None)

Is called to create snapshot.

Parameters:
  • context – Current context

  • snapshot – Snapshot model. Share model could be retrieved through snapshot[‘share’].

  • share_server – Share server model or None.

Returns:

None or a dictionary with key ‘export_locations’ containing a list of export locations, if snapshots can be mounted.

delete_share(context, share, share_server=None)

Is called to remove share.

delete_share_group(context, sg_dict, share_server=None)

Delete a share group

Parameters:
  • context – The request context

  • share_group_dict

    The share group details EXAMPLE: .. code:

    {
    'status': 'creating',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': 'False',
    'created_at': datetime.datetime(2015, 8, 10, 15, 14, 6),
    'updated_at': None,
    'source_share_group_snapshot_id': 'some_fake_uuid',
    'share_share_group_type_id': 'some_fake_uuid',
    'host': 'hostname@backend_name',
    'deleted_at': None,
    'shares': [<models.Share>], # The new shares being created
    'share_types': [<models.ShareGroupShareTypeMapping>],
    'id': 'some_fake_uuid',
    'name': None
    }
    

Returns:

share_group_model_update share_group_model_update - a dict containing any values to be updated for the group in the database. This value may be None.

delete_share_group_snapshot(context, snap_dict, share_server=None)

Delete a share group snapshot

Parameters:
  • context

  • snap_dict

    The share group snapshot details EXAMPLE: .. code:

    {
    'status': 'available',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': '0',
    'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'share_group_id': 'some_fake_uuid',
    'share_group_snapshot_members': [
        {
         'status': 'available',
         'share_type_id': 'some_fake_uuid',
         'share_id': 'some_fake_uuid',
         'user_id': 'a0314a441ca842019b0952224aa39192',
         'deleted': 'False',
         'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share': <models.Share>,
         'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share_proto': 'NFS',
         'share_name':'share_some_fake_uuid',
         'name': 'share-snapshot-some_fake_uuid',
         'project_id': '13c0be6290934bd98596cfa004650049',
         'share_group_snapshot_id': 'some_fake_uuid',
         'deleted_at': None,
         'id': 'some_fake_uuid',
         'size': 1,
         'provider_location': 'fake_provider_location_value',
        }
    ],
    'deleted_at': None,
    'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a',
    'name': None
    }
    

Returns:

(share_group_snapshot_update, member_update_list) share_group_snapshot_update - a dict containing any values to be updated for the ShareGroupSnapshot in the database. This value may be None.

delete_snapshot(context, snapshot, share_server=None)

Is called to remove snapshot.

Parameters:
  • context – Current context

  • snapshot – Snapshot model. Share model could be retrieved through snapshot[‘share’].

  • share_server – Share server model or None.

do_setup(context)

Any initialization the share driver does while starting.

ensure_shares(context, shares)

Invoked to ensure that shares are exported.

Driver can use this method to update the “status” and/or list of export locations of the shares if they change. To do that, a dictionary of shares should be returned. In addition, the driver can seek to “reapply_access_rules” (boolean) on a per-share basis. When this property exists and is set to True, the share manager service will invoke “update_access” with all the access rules from the service database.

Parameters:
  • shares – A list of all shares for updates.

Returns:

None or a dictionary of updates in the format shown in the example below.

Example:

{
    '09960614-8574-4e03-89cf-7cf267b0bd08': {
        'export_locations': [{...}, {...}],
        'status': 'error',
        'reapply_access_rules': False,
    },

    '28f6eabb-4342-486a-a7f4-45688f0c0295': {
        'export_locations': [{...}, {...}],
        'status': 'available',
        'reapply_access_rules': True,
    },

}

extend_share(share, new_size, share_server=None)

Extends size of existing share.

Parameters:
  • share – Share model

  • new_size – New size of share (new_size > share[‘size’])

  • share_server – Optional – Share server model

get_backend_info(context)

Get driver and array configuration parameters.

Driver can use this method to get the special configuration info and return for assessment. The share manager service uses this assessment to invoke “ensure_shares” during service startup.

Returns:

A dictionary containing driver-specific info.

Example:

{
     'version': '2.23',
     'port': '80',
     'logicalportip': '1.1.1.1',
     ...
}

get_configured_ip_versions()

Get allowed IP versions.

The supported versions are returned with list, possible values are: [4], [6], or [4, 6]

Drivers that assert ipv6_implemented = True must override this method. If the returned list includes 4, then shares created by this driver must have an IPv4 export location. If the list includes 6, then shares created by the driver must have an IPv6 export location.

Drivers should check that their storage controller actually has IPv4/IPv6 enabled and configured properly.

get_optional_share_creation_data(share, share_server=None)

Get the additional properties to be set in a share.

Returns:

the metadata to be set in share.

get_share_status(share, share_server=None)

Returns the current status for a share.

Parameters:
  • share – a manila share.

  • share_server – a manila share server (not currently supported).

Returns:

manila share status.

property rados_client
setup_default_ceph_cmd_target()
shrink_share(share, new_size, share_server=None)

Shrinks size of existing share.

If the consumed space on the share is larger than new_size, the driver should raise a ShareShrinkingPossibleDataLoss exception: raise ShareShrinkingPossibleDataLoss(share_id=share[‘id’])

Parameters:
  • share – Share model

  • new_size – New size of share (new_size < share[‘size’])

  • share_server – Optional – Share server model

Raises: ShareShrinkingPossibleDataLoss, NotImplementedError

transfer_accept(context, share, new_user, new_project, access_rules=None, share_server=None)

Backend update project and user info if stored on the backend.

Parameters:
  • context – The ‘context.RequestContext’ object for the request.

  • share – Share instance model.

  • access_rules – A list of access rules for given share.

  • new_user – the share will be updated with the new user id.

  • new_project – the share will be updated with the new project id.

  • share_server – share server for given share.

update_access(context, share, access_rules, add_rules, delete_rules, share_server=None)

Update access rules for given share.

access_rules contains all access_rules that need to be on the share. If the driver can make bulk access rule updates, it can safely ignore the add_rules and delete_rules parameters.

If the driver cannot make bulk access rule changes, it can rely on new rules to be present in add_rules and rules that need to be removed to be present in delete_rules.

When a rule in delete_rules was never applied, drivers must not raise an exception, or attempt to set the rule to error state.

add_rules and delete_rules can be empty lists, in this situation, drivers should ensure that the rules present in access_rules are the same as those on the back end. One scenario where this situation is forced is when the access_level is changed for all existing rules (share migration and for readable replicas).

Drivers must be mindful of this call for share replicas. When ‘update_access’ is called on one of the replicas, the call is likely propagated to all replicas belonging to the share, especially when individual rules are added or removed. If a particular access rule does not make sense to the driver in the context of a given replica, the driver should be careful to report a correct behavior, and take meaningful action. For example, if R/W access is requested on a replica that is part of a “readable” type replication; R/O access may be added by the driver instead of R/W. Note that raising an exception will result in the access_rules_status on the replica, and the share itself being “out_of_sync”. Drivers can sync on the valid access rules that are provided on the create_replica and promote_replica calls.

Parameters:
  • context – Current context

  • share – Share model with share data.

  • access_rules – A list of access rules for given share

  • add_rules – Empty List or List of access rules which should be added. access_rules already contains these rules.

  • delete_rules – Empty List or List of access rules which should be removed. access_rules doesn’t contain these rules.

  • share_server – None or Share server model

Returns:

None, or a dictionary of updates in the format:

{
    '09960614-8574-4e03-89cf-7cf267b0bd08': {
        'access_key': 'alice31493e5441b8171d2310d80e37e',
        'state': 'error',
    },

    '28f6eabb-4342-486a-a7f4-45688f0c0295': {
        'access_key': 'bob0078aa042d5a7325480fd13228b',
        'state': 'active',
    },
}

The top level keys are the ‘access_id’ fields of the access rules that need to be updated. access_keys are credentials (str) of the entities granted access. Any rule in the access_rules parameter can be updated.

Important

Raising an exception in this method will force all rules in ‘applying’ and ‘denying’ states to ‘error’.

An access rule can be set to ‘error’ state, either explicitly via this return parameter or because of an exception raised in this method. Such an access rule will no longer be sent to the driver on subsequent access rule updates. When users deny that rule however, the driver will be asked to deny access to the client/s represented by the rule. We expect that a rule that was error-ed at the driver should never exist on the back end. So, do not fail the deletion request.

Also, it is possible that the driver may receive a request to add a rule that is already present on the back end. This can happen if the share manager service goes down while the driver is committing access rule changes. Since we cannot determine if the rule was applied successfully by the driver before the disruption, we will treat all ‘applying’ transitional rules as new rules and repeat the request.

property volname
class NFSClusterProtocolHelper(execute, config_object, **kwargs)

Bases: NFSProtocolHelperMixin, NASHelperBase

check_for_setup_error()

Returns an error if prerequisites aren’t met.

get_backend_info(context)
property nfs_clusterid
reapply_rules_while_ensuring_shares = True
supported_access_levels = ('rw', 'ro')
supported_access_types = ('ip',)
update_access(context, share, access_rules, add_rules, delete_rules, share_server=None)

Update access rules of share.

Creates an export per share. Modifies access rules of shares by dynamically updating exports via ceph nfs.

class NFSProtocolHelper(execute, config_object, **kwargs)

Bases: NFSProtocolHelperMixin, GaneshaNASHelper2

check_for_setup_error()

Returns an error if prerequisites aren’t met.

get_backend_info(context)
reapply_rules_while_ensuring_shares = True
shared_data = {}
supported_protocols = ('NFS',)
class NFSProtocolHelperMixin

Bases: object

get_configured_ip_versions()
get_export_locations(share, subvolume_path)
get_optional_share_creation_data(share, share_server=None)
class NativeProtocolHelper(execute, config, **kwargs)

Bases: NASHelperBase

Helper class for native CephFS protocol

check_for_setup_error()

Returns an error if prerequisites aren’t met.

get_backend_info(context)
get_configured_ip_versions()
get_export_locations(share, subvolume_path)
get_mon_addrs()
get_optional_share_creation_data(share, share_server=None)
reapply_rules_while_ensuring_shares = False
supported_access_levels = ('rw', 'ro')
supported_access_types = ('cephx',)
update_access(context, share, access_rules, add_rules, delete_rules, share_server=None)

Update access rules of share.

exception RadosError

Bases: Exception

Something went wrong talking to Ceph with librados

rados_command(rados_client, prefix=None, args=None, json_obj=False, target=None, inbuf=None)

Safer wrapper for ceph_argparse.json_command

Raises error exception instead of relying on caller to check return codes.

Error exception can result from:

  • Timeout

  • Actual legitimate errors

  • Malformed JSON output

Returns:

If json_obj is True, return the decoded JSON object from ceph, or None if an empty string is returned. If json_obj is False, return a decoded string (the data returned by the ceph command).

setup_json_command()
setup_rados()