CephFS driver

The CephFS driver enables manila to export shared filesystems backed by Ceph’s File System (CephFS) using either the Ceph network protocol or the NFS protocol. Guests require a native Ceph client or an NFS client in order to mount the filesystem.

When guests access CephFS using the native Ceph protocol, access is controlled via Ceph’s cephx authentication system. If a user requests share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret key, if they do not already exist, and authorizes the ID to access the share. The client can then mount the share using the ID and the secret key. To learn more about configuring Ceph clients to access the shares created using this driver, please see the Ceph documentation (http://docs.ceph.com/docs/master/cephfs/). If you choose to use the kernel client rather than the FUSE client, the share size limits set in manila may not be obeyed.

When guests access CephFS through NFS, an NFS-Ganesha server mediates access to CephFS. The driver enables access control by managing the NFS-Ganesha server’s exports.

Supported Operations

The following operations are supported with the CephFS backend:

  • Create/delete share

  • Allow/deny CephFS native protocol access to share

    • Only cephx access type is supported for CephFS native protocol.
    • read-only access level is supported in Newton or later versions of manila.
    • read-write access level is supported in Mitaka or later versions of manila.

    (or)

    Allow/deny NFS access to share

    • Only ip access type is supported for NFS protocol.
    • read-only and read-write access levels are supported in Pike or later versions of manila.
  • Extend/shrink share

  • Create/delete snapshot

  • Create/delete consistency group (CG)

  • Create/delete CG snapshot

Warning

CephFS currently supports snapshots as an experimental feature; therefore, snapshot support with the CephFS native driver is also experimental and should not be used in production environments. For more information, see (http://docs.ceph.com/docs/master/cephfs/experimental-features/#snapshots).

Prerequisites

Important

A manila share backed by CephFS is only as good as the underlying filesystem. Take care when configuring your Ceph cluster, and consult the latest guidance on the use of CephFS in the Ceph documentation (http://docs.ceph.com/docs/master/cephfs/).

For CephFS native shares

  • Mitaka or later versions of manila.
  • Jewel or later versions of Ceph.
  • A Ceph cluster with a filesystem configured (http://docs.ceph.com/docs/master/cephfs/createfs/)
  • ceph-common package installed on the servers running the manila-share service.
  • Ceph client installed in the guest, preferably the FUSE-based client, ceph-fuse.
  • Network connectivity between your Ceph cluster’s public network and the servers running the manila-share service.
  • Network connectivity between your Ceph cluster’s public network and guests. See the Security with CephFS native share backend section below.

For CephFS NFS shares

  • Pike or later versions of manila.
  • Kraken or later versions of Ceph.
  • 2.5 or later versions of NFS-Ganesha.
  • A Ceph cluster with a filesystem configured (http://docs.ceph.com/docs/master/cephfs/createfs/)
  • ceph-common package installed on the servers running the manila-share service.
  • NFS client installed in the guest.
  • Network connectivity between your Ceph cluster’s public network and the servers running the manila-share service.
  • Network connectivity between your Ceph cluster’s public network and NFS-Ganesha server.
  • Network connectivity between your NFS-Ganesha server and the manila guest.

Authorizing the driver to communicate with Ceph

Run the following commands to create a Ceph identity for a driver instance to use:

read -d '' MON_CAPS << EOF
allow r,
allow command "auth del",
allow command "auth caps",
allow command "auth get",
allow command "auth get-or-create"
EOF

ceph auth get-or-create client.manila -o manila.keyring \
mds 'allow *' \
osd 'allow rw' \
mon "$MON_CAPS"

The manila.keyring file, along with your ceph.conf file, then needs to be placed on the server running the manila-share service.

Important

To communicate with the Ceph backend, a CephFS driver instance (represented as a backend driver section in manila.conf) requires its own Ceph auth ID that is not used by other CephFS driver instances running in the same controller node.

In the server running the manila-share service, you can place the ceph.conf and manila.keyring files in the /etc/ceph directory. Set the same owner for the manila-share process and the manila.keyring file. Add the following section to the ceph.conf file.

[client.manila]
client mount uid = 0
client mount gid = 0
log file = /opt/stack/logs/ceph-client.manila.log
admin socket = /opt/stack/status/stack/ceph-$name.$pid.asok
keyring = /etc/ceph/manila.keyring

It is advisable to modify the Ceph client’s admin socket file and log file locations so that they are co-located with the manila service’s PID files and log files, respectively.

Enabling snapshot support in Ceph backend

Enable snapshots in Ceph if you want to use them in manila:

ceph mds set allow_new_snaps true --yes-i-really-mean-it

Warning

Note that the snapshot support for the CephFS driver is experimental and is known to have several caveats for use. Only enable this and the equivalent manila.conf option if you understand these risks. See (http://docs.ceph.com/docs/master/cephfs/experimental-features/#snapshots) for more details.

Configuring CephFS backend in manila.conf

Configure CephFS native share backend in manila.conf

Add CEPHFS to enabled_share_protocols (enforced at the manila API layer). In this example we leave NFS and CIFS enabled, although you can remove these if you will only use CephFS:

enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section like this to define a CephFS native backend:

[cephfsnative1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNATIVE1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_protocol_helper_type = CEPHFS
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_enable_snapshots = False

Set driver_handles_share_servers to False as the driver does not manage the lifecycle of share-servers. To let the driver perform snapshot related operations, set cephfs_enable_snapshots to True. For the driver backend to expose shares via the native Ceph protocol, set cephfs_protocol_helper_type to CEPHFS.

Then edit enabled_share_backends to point to the driver’s backend section using the section name. In this example we are also including another backend (“generic1”); include whatever other backends you have configured.

Note

For Mitaka, Newton, and Ocata releases, the share_driver path was manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver

enabled_share_backends = generic1, cephfsnative1

Configure CephFS NFS share backend in manila.conf

Add NFS to enabled_share_protocols if it’s not already there:

enabled_share_protocols = NFS,CIFS,CEPHFS

Create a section to define a CephFS NFS share backend:

[cephfsnfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_enable_snapshots = False
cephfs_ganesha_server_is_remote = False
cephfs_ganesha_server_ip = 172.24.4.3

The following options are set in the driver backend section above:

  • driver_handles_share_servers to False as the driver does not manage the lifecycle of share-servers.
  • cephfs_protocol_helper_type to NFS to allow NFS protocol access to the CephFS backed shares.
  • cephfs_auth_id to the Ceph auth ID created in Authorizing the driver to communicate with Ceph.
  • cephfs_ganesha_server_is_remote to False if the NFS-Ganesha server is co-located with the manila-share service. If the NFS-Ganesha server is remote, then set the option to True, and set other options such as cephfs_ganesha_server_ip, cephfs_ganesha_server_username, and cephfs_ganesha_server_password (or cephfs_ganesha_path_to_private_key) to allow the driver to manage the NFS-Ganesha export entries over SSH.
  • cephfs_ganesha_server_ip to the Ganesha server IP address. It is recommended to set this option even if the Ganesha server is co-located with the manila-share service.
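For instance, a backend section managing a remote NFS-Ganesha server over SSH might look like the following sketch. The username and private key path are illustrative placeholders, not values from this deployment; set either the password option or the private key option, not both:

```ini
[cephfsnfs1]
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
# Remote NFS-Ganesha server, managed over SSH (placeholder values).
cephfs_ganesha_server_is_remote = True
cephfs_ganesha_server_ip = 172.24.4.3
cephfs_ganesha_server_username = ganeshaadmin
# Alternatively, set cephfs_ganesha_server_password.
cephfs_ganesha_path_to_private_key = /etc/manila/ganesha_key
```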

With NFS-Ganesha (v2.5.4 or later) and Ceph (v12.2.2 or later), the driver (Queens or later) can store NFS-Ganesha exports and the export counter in Ceph RADOS objects. This is useful for highly available NFS-Ganesha deployments, which can then store their configuration efficiently in an already available distributed storage system. Set additional options in the NFS driver section to enable the driver to do this.

[cephfsnfs1]
ganesha_rados_store_enable = True
ganesha_rados_store_pool_name = cephfs_data
driver_handles_share_servers = False
share_backend_name = CEPHFSNFS1
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_protocol_helper_type = NFS
cephfs_conf_path = /etc/ceph/ceph.conf
cephfs_auth_id = manila
cephfs_cluster_name = ceph
cephfs_enable_snapshots = False
cephfs_ganesha_server_is_remote = False
cephfs_ganesha_server_ip = 172.24.4.3

The following Ganesha library related options (see manila’s ganesha library documentation for more details) are set in the driver backend section above:

  • ganesha_rados_store_enable to True for persisting Ganesha exports and export counter in Ceph RADOS objects.
  • ganesha_rados_store_pool_name to the Ceph RADOS pool that stores Ganesha exports and export counter objects. If you want to use one of the backend CephFS’s RADOS pools, then using CephFS’s data pool is preferred over using its metadata pool.

Edit enabled_share_backends to point to the driver’s backend section using the section name, cephfsnfs1.

enabled_share_backends = generic1, cephfsnfs1

Creating shares

Create CephFS native share

The default share type may have driver_handles_share_servers set to True. Configure a share type suitable for CephFS native share:

manila type-create cephfsnativetype false
manila type-key cephfsnativetype set vendor_name=Ceph storage_protocol=CEPHFS

Then create a share:

manila create --share-type cephfsnativetype --name cephnativeshare1 cephfs 1

Note the export location of the share:

manila share-export-location-list cephnativeshare1

The export location of the share contains the Ceph monitor (mon) addresses and ports, and the path to be mounted. It is of the form {mon ip addr:port}[,{mon ip addr:port}]:{path to be mounted}.
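As an illustration, such an export location string can be split into its mon address list and mount path with shell parameter expansion. The values below are made up; real ones come from the command above:

```shell
# Illustrative export location; substitute the output of
# `manila share-export-location-list`.
export_location="192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c"

# The mount path begins at the first ":/"; everything before it is the
# comma-separated list of mon addresses and ports.
mon_hosts="${export_location%%:/*}"
path="/${export_location#*:/}"

echo "$mon_hosts"  # usable as the "mon host" value in ceph.conf
echo "$path"       # usable as ceph-fuse's --client-mountpoint
```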

Create CephFS NFS share

Configure a share type suitable for CephFS NFS share:

manila type-create cephfsnfstype false
manila type-key cephfsnfstype set vendor_name=Ceph storage_protocol=NFS

Then create a share:

manila create --share-type cephfsnfstype --name cephnfsshare1 nfs 1

Note the export location of the share:

manila share-export-location-list cephnfsshare1

The export location of the share contains the IP address of the NFS-Ganesha server and the path to be mounted. It is of the form {NFS-Ganesha server address}:{path to be mounted}.
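For illustration, the server address and path can be separated at the first colon with shell parameter expansion. The values below are the made-up ones used later in this document:

```shell
# Illustrative export location; substitute the output of
# `manila share-export-location-list`.
export_location="172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e"

# Split at the first ':' into server address and export path.
server="${export_location%%:*}"
path="${export_location#*:}"

echo "$server"  # the NFS-Ganesha server address
echo "$path"    # the path to mount over NFS
```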

Allowing access to shares

Allow access to CephFS native share

Allow the Ceph auth ID alice access to the share using the cephx access type.

manila access-allow cephnativeshare1 cephx alice

Note the access status and the access/secret key of alice.

manila access-list cephnativeshare1

Note

In the Mitaka release, the secret key is not exposed by any manila API. The Ceph storage admin needs to pass the secret key to the guest out of band of manila. You can refer to the link below to see how the storage admin could obtain the secret key of an ID. http://docs.ceph.com/docs/jewel/rados/operations/user-management/#get-a-user

Alternatively, the cloud admin can create Ceph auth IDs for each of the tenants. The users can then request manila to authorize the pre-created Ceph auth IDs, whose secret keys are already shared with them out of band of manila, to access the shares.

The following is a command that the cloud admin could run from the server running the manila-share service to create a Ceph auth ID and get its keyring file.

ceph --name=client.manila --keyring=/etc/ceph/manila.keyring auth \
get-or-create client.alice -o alice.keyring

For more details, please see the Ceph documentation. http://docs.ceph.com/docs/jewel/rados/operations/user-management/#add-a-user

Allow access to CephFS NFS share

Allow a guest access to the share using the ip access type.

manila access-allow cephnfsshare1 ip 172.24.4.225

Mounting CephFS shares

Mounting CephFS native share using FUSE client

Using the secret key of the authorized ID alice, create a keyring file alice.keyring like:

[client.alice]
        key = AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA==

Using the mon IP addresses from the share’s export location, create a configuration file ceph.conf like:

[client]
        client quota = true
        mon host = 192.168.1.7:6789, 192.168.1.8:6789, 192.168.1.9:6789

Finally, mount the filesystem, substituting the filenames of the keyring and configuration files you just created, and substituting the path to be mounted from the share’s export location:

sudo ceph-fuse ~/mnt \
--id=alice \
--conf=./ceph.conf \
--keyring=./alice.keyring \
--client-mountpoint=/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c

Mount CephFS NFS share using NFS client

In the guest, mount the share using the NFS client, supplying the share’s export location.

sudo mount -t nfs 172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e /mnt/nfs/
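To remount the share automatically at boot, a standard fstab entry can be added in the guest. This sketch reuses the illustrative export location above; the mount point and mount options are deployment-specific:

```
# /etc/fstab
172.24.4.3:/volumes/_nogroup/6732900b-32c1-4816-a529-4d6d3f15811e  /mnt/nfs  nfs  defaults,_netdev  0  0
```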

Known restrictions

  • A CephFS driver instance, represented as a backend driver section in manila.conf, requires a Ceph auth ID unique to the backend Ceph Filesystem. Using a non-unique Ceph auth ID will result in the driver unintentionally evicting other CephFS clients that use the same Ceph auth ID to connect to the backend.
  • The snapshot support of the driver is disabled by default. The cephfs_enable_snapshots configuration option needs to be set to True to allow snapshot operations. Snapshot support will also need to be enabled on the backend CephFS storage.
  • Snapshots are read-only. A user can read a snapshot’s contents from the .snap/{manila-snapshot-id}_{unknown-id} folder within the mounted share.

Restrictions with CephFS native share backend

  • To restrict share sizes, CephFS uses quotas that are enforced on the client side. The CephFS FUSE clients are relied on to respect quotas.

Mitaka release only

  • The secret key of a Ceph auth ID required to mount a share is not exposed to a user by any manila API. To work around this, the storage admin needs to pass the key out of band of manila, or the user needs to use a Ceph ID and key already created and shared with them by the cloud admin.

Security

  • Each share’s data is mapped to a distinct Ceph RADOS namespace. A guest is restricted to access only that particular RADOS namespace. http://docs.ceph.com/docs/master/cephfs/file-layouts/

  • An additional level of resource isolation can be provided by mapping a share’s contents to a separate RADOS pool. This layout would be preferred only for cloud deployments with a limited number of shares needing strong resource separation. You can do this by setting a share type specification, cephfs:data_isolated, for the share type used by the cephfs driver.

    manila type-key cephfstype set cephfs:data_isolated=True
    

Security with CephFS native share backend

As the guests need direct access to Ceph’s public network, the CephFS native share backend is suitable only in private clouds where guests can be trusted.

The manila.share.drivers.cephfs.driver Module

class CephFSDriver(*args, **kwargs)

Bases: manila.share.driver.ExecuteMixin, manila.share.driver.GaneshaMixin, manila.share.driver.ShareDriver

Driver for the Ceph Filesystem.

create_share(context, share, share_server=None)

Create a CephFS volume.

Parameters:
  • context – A RequestContext.
  • share – A Share.
  • share_server – Always None for CephFS native.
Returns:

The export locations dictionary.

create_share_group(context, sg_dict, share_server=None)

Create a share group.

Parameters:
  • context
  • share_group_dict

    The share group details EXAMPLE:

    {
    'status': 'creating',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': 'False',
    'created_at': datetime.datetime(2015, 8, 10, 15, 14, 6),
    'updated_at': None,
    'source_share_group_snapshot_id': 'some_fake_uuid',
    'share_group_type_id': 'some_fake_uuid',
    'host': 'hostname@backend_name',
    'share_network_id': None,
    'share_server_id': None,
    'deleted_at': None,
    'share_types': [<models.ShareGroupShareTypeMapping>],
    'id': 'some_fake_uuid',
    'name': None
    }
Returns:

(share_group_model_update, share_update_list) share_group_model_update - a dict containing any values to be updated for the SG in the database. This value may be None.

create_share_group_snapshot(context, snap_dict, share_server=None)

Create a share group snapshot.

Parameters:
  • context
  • snap_dict

    The share group snapshot details EXAMPLE:

    {
    'status': 'available',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': '0',
    'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'share_group_id': 'some_fake_uuid',
    'share_group_snapshot_members': [
        {
         'status': 'available',
         'share_type_id': 'some_fake_uuid',
         'user_id': 'a0314a441ca842019b0952224aa39192',
         'deleted': 'False',
         'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share': <models.Share>,
         'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share_proto': 'NFS',
         'project_id': '13c0be6290934bd98596cfa004650049',
         'share_group_snapshot_id': 'some_fake_uuid',
         'deleted_at': None,
         'share_id': 'some_fake_uuid',
         'id': 'some_fake_uuid',
         'size': 1,
         'provider_location': None,
        }
    ],
    'deleted_at': None,
    'id': 'some_fake_uuid',
    'name': None
    }
    
Returns:

(share_group_snapshot_update, member_update_list) share_group_snapshot_update - a dict containing any values to be updated for the CGSnapshot in the database. This value may be None.

member_update_list - a list of dictionaries, one for every member of the share group snapshot. Each dict should contain values to be updated for the ShareGroupSnapshotMember in the database. This list may be empty or None.

create_snapshot(context, snapshot, share_server=None)

Is called to create snapshot.

Parameters:
  • context – Current context
  • snapshot – Snapshot model. Share model could be retrieved through snapshot[‘share’].
  • share_server – Share server model or None.
Returns:

None or a dictionary with key ‘export_locations’ containing a list of export locations, if snapshots can be mounted.

delete_share(context, share, share_server=None)

Is called to remove share.

delete_share_group(context, sg_dict, share_server=None)

Delete a share group.

Parameters:
  • context – The request context
  • share_group_dict

    The share group details EXAMPLE:

    {
    'status': 'creating',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': 'False',
    'created_at': datetime.datetime(2015, 8, 10, 15, 14, 6),
    'updated_at': None,
    'source_share_group_snapshot_id': 'some_fake_uuid',
    'share_share_group_type_id': 'some_fake_uuid',
    'host': 'hostname@backend_name',
    'deleted_at': None,
    'shares': [<models.Share>], # The new shares being created
    'share_types': [<models.ShareGroupShareTypeMapping>],
    'id': 'some_fake_uuid',
    'name': None
    }
    
Returns:

share_group_model_update share_group_model_update - a dict containing any values to be updated for the group in the database. This value may be None.

delete_share_group_snapshot(context, snap_dict, share_server=None)

Delete a share group snapshot.

Parameters:
  • context
  • snap_dict

    The share group snapshot details EXAMPLE:

    {
    'status': 'available',
    'project_id': '13c0be6290934bd98596cfa004650049',
    'user_id': 'a0314a441ca842019b0952224aa39192',
    'description': None,
    'deleted': '0',
    'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
    'share_group_id': 'some_fake_uuid',
    'share_group_snapshot_members': [
        {
         'status': 'available',
         'share_type_id': 'some_fake_uuid',
         'share_id': 'some_fake_uuid',
         'user_id': 'a0314a441ca842019b0952224aa39192',
         'deleted': 'False',
         'created_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share': <models.Share>,
         'updated_at': datetime.datetime(2015, 8, 10, 0, 5, 58),
         'share_proto': 'NFS',
         'project_id': '13c0be6290934bd98596cfa004650049',
         'share_group_snapshot_id': 'some_fake_uuid',
         'deleted_at': None,
         'id': 'some_fake_uuid',
         'size': 1,
         'provider_location': 'fake_provider_location_value',
        }
    ],
    'deleted_at': None,
    'id': 'f6aa3b59-57eb-421e-965c-4e182538e36a',
    'name': None
    }
    
Returns:

(share_group_snapshot_update, member_update_list) share_group_snapshot_update - a dict containing any values to be updated for the ShareGroupSnapshot in the database. This value may be None.

delete_snapshot(context, snapshot, share_server=None)

Is called to remove snapshot.

Parameters:
  • context – Current context
  • snapshot – Snapshot model. Share model could be retrieved through snapshot[‘share’].
  • share_server – Share server model or None.
do_setup(context)

Any initialization the share driver does while starting.

ensure_share(context, share, share_server=None)

Invoked to ensure that share is exported.

Driver can use this method to update the list of export locations of the share if it changes. To do that, you should return list with export locations.

Returns: None or list with export locations
extend_share(share, new_size, share_server=None)

Extends size of existing share.

Parameters:
  • share – Share model
  • new_size – New size of share (new_size > share[‘size’])
  • share_server – Optional – Share server model
shrink_share(share, new_size, share_server=None)

Shrinks size of existing share.

If the consumed space on the share is larger than new_size, the driver should raise a ShareShrinkingPossibleDataLoss exception: raise ShareShrinkingPossibleDataLoss(share_id=share[‘id’])

Parameters:
  • share – Share model
  • new_size – New size of share (new_size < share[‘size’])
  • share_server – Optional – Share server model

Raises: ShareShrinkingPossibleDataLoss, NotImplementedError

update_access(context, share, access_rules, add_rules, delete_rules, share_server=None)

Update access rules for given share.

access_rules contains all access_rules that need to be on the share. If the driver can make bulk access rule updates, it can safely ignore the add_rules and delete_rules parameters.

If the driver cannot make bulk access rule changes, it can rely on new rules to be present in add_rules and rules that need to be removed to be present in delete_rules.

When a rule in delete_rules was never applied, drivers must not raise an exception, or attempt to set the rule to error state.

add_rules and delete_rules can be empty lists; in this situation, drivers should ensure that the rules present in access_rules are the same as those on the back end. One scenario where this situation is forced is when the access_level is changed for all existing rules (share migration and for readable replicas).

Drivers must be mindful of this call for share replicas. When ‘update_access’ is called on one of the replicas, the call is likely propagated to all replicas belonging to the share, especially when individual rules are added or removed. If a particular access rule does not make sense to the driver in the context of a given replica, the driver should be careful to report a correct behavior, and take meaningful action. For example, if R/W access is requested on a replica that is part of a “readable” type replication; R/O access may be added by the driver instead of R/W. Note that raising an exception will result in the access_rules_status on the replica, and the share itself being “out_of_sync”. Drivers can sync on the valid access rules that are provided on the create_replica and promote_replica calls.

Parameters:
  • context – Current context
  • share – Share model with share data.
  • access_rules – A list of access rules for given share
  • add_rules – Empty List or List of access rules which should be added. access_rules already contains these rules.
  • delete_rules – Empty List or List of access rules which should be removed. access_rules doesn’t contain these rules.
  • share_server – None or Share server model
Returns:

None, or a dictionary of updates in the format:

{
    '09960614-8574-4e03-89cf-7cf267b0bd08': {
        'access_key': 'alice31493e5441b8171d2310d80e37e',
        'state': 'error',
    },
    '28f6eabb-4342-486a-a7f4-45688f0c0295': {
        'access_key': 'bob0078aa042d5a7325480fd13228b',
        'state': 'active',
    },
}

The top level keys are ‘access_id’ fields of the access rules that need to be updated. access_keys are credentials (str) of the entities granted access. Any rule in the access_rules parameter can be updated.

Important

Raising an exception in this method will force all rules in ‘applying’ and ‘denying’ states to ‘error’.

An access rule can be set to ‘error’ state, either explicitly via this return parameter or because of an exception raised in this method. Such an access rule will no longer be sent to the driver on subsequent access rule updates. When users deny that rule, however, the driver will be asked to deny access to the client(s) represented by the rule. We expect that a rule that was errored at the driver should never exist on the back end. So, do not fail the deletion request.

Also, it is possible that the driver may receive a request to add a rule that is already present on the back end. This can happen if the share manager service goes down while the driver is committing access rule changes. Since we cannot determine if the rule was applied successfully by the driver before the disruption, we will treat all ‘applying’ transitional rules as new rules and repeat the request.

volume_client
class NFSProtocolHelper(execute, config_object, **kwargs)

Bases: manila.share.drivers.ganesha.GaneshaNASHelper2

get_export_locations(share, cephfs_volume)
shared_data = {}
supported_protocols = ('NFS',)
class NativeProtocolHelper(execute, config, **kwargs)

Bases: manila.share.drivers.ganesha.NASHelperBase

Helper class for native CephFS protocol

get_export_locations(share, cephfs_volume)
supported_access_levels = ('rw', 'ro')
supported_access_types = ('cephx',)
update_access(context, share, access_rules, add_rules, delete_rules, share_server=None)

Update access rules of share.

cephfs_share_path(share)

Get VolumePath from Share.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.