Generic approach for share provisioning

The Shared File Systems service can be configured to use Nova VMs and Cinder volumes. Two modules in manila handle them: 1) the ‘service_instance’ module creates VMs in Nova from a predefined image called the service image; any backend driver can use this module to provision service VMs and thereby separate share resources among tenants. 2) the ‘generic’ module operates on Cinder volumes and on the VMs created by the ‘service_instance’ module, creating shared filesystems backed by the volumes attached to those VMs.

Network configurations

Each backend driver can handle networking in its own way.

One of two possible configurations can be chosen for share provisioning using the ‘service_instance’ module:

  • The service VM has one network interface on a network that is connected to a public router. For a share to be created successfully, the user network must be connected to that public router too.

  • The service VM has two network interfaces: the first connected to the service network, the second connected directly to the user’s network.
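The choice between these two configurations is driven by the driver’s configuration file. Below is a minimal illustrative manila.conf fragment; the option names are the generic/‘service_instance’ driver options (values shown are examples only, and ‘connect_share_server_to_tenant_network’ selects between the two layouts above):

```
[generic_backend]
share_driver = manila.share.drivers.generic.GenericShareDriver
# Service image and login user for service VMs
service_image_name = manila-service-image
service_instance_user = manila
# Service network the service VMs attach to
service_network_name = manila_service_network
service_network_cidr = 10.254.0.0/16
service_network_division_mask = 28
# False: route via the public router (first configuration above)
# True: plug a second interface into the tenant network (second one)
connect_share_server_to_tenant_network = False
```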

Requirements for service image

  • Linux-based distro

  • NFS server

  • Samba server >= 3.2.0 that can be configured via data stored in the registry

  • SSH server

  • Two net interfaces configured for DHCP (see network approaches)

  • ‘exportfs’ and ‘net conf’ utilities, used for share actions

  • Following files will be used, so if their paths differ one needs to create at least symlinks for them:

    • /etc/exports (permanent file with NFS exports)

    • /var/lib/nfs/etab (temporary file with NFS exports used by ‘exportfs’)

    • /etc/fstab (permanent file with mounted filesystems)

    • /etc/mtab (temporary file with mounted filesystems used by ‘mount’)
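For illustration, entries in the permanent files above typically look like the following (the paths, addresses, and options are made-up examples, not values the driver writes verbatim):

```
# /etc/exports -- one line per permanent NFS export
/shares/share-example 10.254.0.0/28(rw,sync,no_subtree_check)

# /etc/fstab -- device, mount point, filesystem type, options
/dev/vdb  /shares/share-example  ext4  defaults  0  0
```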

Supported shared filesystems

  • NFS (access by IP)

  • CIFS (access by IP)

Known restrictions

  • One of Nova’s configurations allows only 26 shares per server. This limit comes from the maximum number of virtual PCI interfaces, which are used for attaching block devices. In this configuration there are 28 virtual PCI interfaces: two are used for the server’s own needs, and the other 26 are available for attaching the block devices that back shares.

  • Juno version works only with Neutron. Each share should be created with neutron-net and neutron-subnet IDs provided via the share-network entity.

  • Juno version handles the security group, flavor, image, and keypair for Nova VMs and also creates service networks, but it does not use availability zones for Nova VMs or volume types for Cinder block devices.

  • Juno version does not use the security services data provided with a share-network. This data is simply ignored.

  • Liberty version adds a share extend capability. Share access will be briefly interrupted during an extend operation.

  • Liberty version adds a share shrink capability, but it is of limited use because the generic driver shrinks only the filesystem and does not shrink the underlying Cinder volume.

  • Modifying network-related configuration options, such as service_network_cidr or service_network_division_mask, after manila has already created some shares using those options is not supported.

Using Windows instances

While the generic driver supports only Linux instances, you may use the Windows SMB driver when Windows VMs are preferred.

For more details, please check out the following page: Windows SMB driver.

The manila.share.drivers.generic Module

Generic Driver for shares.

class GenericShareDriver(*args, **kwargs)

Bases: manila.share.driver.ExecuteMixin, manila.share.driver.ShareDriver

Executes commands relating to Shares.


check_for_setup_error()

Returns an error if prerequisites aren’t met.

create_share(context, *args, **kwargs)

Is called to create share.

create_share_from_snapshot(context, *args, **kwargs)

Is called to create share from snapshot.

Creating a share from a snapshot can take longer than a simple clone operation if data must be copied from one host to another. For this reason the driver can complete this creation asynchronously, by providing a ‘creating_from_snapshot’ status in the model update.

When answering asynchronously, drivers must implement the call ‘get_share_status’ in order to provide updates for shares with ‘creating_from_snapshot’ status.

It is expected that the driver returns a model update to the share manager that contains the share status and a list of export_locations. The list of ‘export_locations’ is mandatory only for shares in ‘available’ status. The currently supported statuses are ‘available’ and ‘creating_from_snapshot’.

  • context – Current context

  • share – Share instance model with share data.

  • snapshot – Snapshot instance model.

  • share_server – Share server model or None.

  • parent_share – Share model from parent snapshot with share data and share server model.


Returns: a dictionary of updates containing the current share status and its export_locations (if available).


    {
        'status': 'available',
        'export_locations': [{...}, {...}],
    }


Raises: ShareBackendException. Raising a ShareBackendException in this method will set the instance to ‘error’ status and end the operation.
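To make the contract concrete, here is a sketch of the two kinds of model updates a driver could return under it: one while the data copy is still in progress, and one once the share is usable. The export location values are illustrative placeholders, not real driver output.

```python
# Illustrative model updates for the create_share_from_snapshot contract.

def async_model_update():
    # Returned while the data copy is still running; export locations
    # are not required for the 'creating_from_snapshot' status.
    return {'status': 'creating_from_snapshot'}


def finished_model_update():
    # Returned (e.g. via get_share_status) once the share is ready;
    # 'export_locations' is mandatory for the 'available' status.
    return {
        'status': 'available',
        'export_locations': [
            {'path': '10.254.0.5:/shares/share-example',
             'is_admin_only': False},
        ],
    }
```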

create_snapshot(context, snapshot, share_server=None)

Creates a snapshot.

delete_share(context, share, share_server=None)

Deletes share.

delete_snapshot(context, snapshot, share_server=None)

Deletes a snapshot.


do_setup(context)

Any initialization the generic driver does while starting.

ensure_share(context, *args, **kwargs)

Invoked to ensure that share is exported.

The driver can use this method to update the share’s list of export locations if it changes. To do that, return a list of export locations.


Returns: None, or a list of export locations.

extend_share(context, *args, **kwargs)

Extends size of existing share.

  • share – Share model

  • new_size – New size of share (new_size > share[‘size’])

  • share_server – Optional – Share server model


get_network_allocations_number()

Get the number of network interfaces to be created.

manage_existing(share, driver_options)

Manages an existing share with manila.

The generic driver accepts only one driver_option, ‘volume_id’. If an administrator provides this option, the corresponding Cinder volume will be managed by manila as well.

  • share – share data

  • driver_options – Empty dict or dict with ‘volume_id’ option.


Returns: dict with share size, for example: {‘size’: 1}
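A small sketch of the driver_options handling described above. The helper name and the ‘manage_cinder_volume’ flag are illustrative inventions; only the ‘volume_id’ key and the {‘size’: …} result shape come from the documentation.

```python
# Hypothetical interpretation of manage_existing inputs (not manila code).

def interpret_manage_request(driver_options, volume_size_gb):
    # The only recognized driver_option is 'volume_id'; when present,
    # the matching Cinder volume is brought under manila management too.
    volume_id = driver_options.get('volume_id')
    return {
        'manage_cinder_volume': volume_id is not None,
        'size': volume_size_gb,  # shape manila expects, e.g. {'size': 1}
    }
```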

manage_existing_snapshot(snapshot, driver_options)

Manages an existing share snapshot with manila.

  • snapshot – Snapshot data

  • driver_options – Not used by the Generic driver currently


Returns: dict with share snapshot size, for example: {‘size’: 1}

shrink_share(context, *args, **kwargs)

Shrinks size of existing share.

If the space consumed on the share is larger than new_size, the driver should raise a ShareShrinkingPossibleDataLoss exception: raise ShareShrinkingPossibleDataLoss(share_id=share[‘id’])

  • share – Share model

  • new_size – New size of share (new_size < share[‘size’])

  • share_server – Optional – Share server model

Raises: ShareShrinkingPossibleDataLoss, NotImplementedError
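The guard described above can be sketched as follows. The exception class here is a local stand-in for manila’s ShareShrinkingPossibleDataLoss, and the consumed-space figure is assumed to be already known by the caller:

```python
class ShareShrinkingPossibleDataLoss(Exception):
    """Local stand-in for manila's exception of the same name."""

    def __init__(self, share_id):
        super().__init__(
            'Shrinking share %s is not safe: possible data loss' % share_id)
        self.share_id = share_id


def check_shrink(share, consumed_gb, new_size):
    # Refuse to shrink below the space already consumed on the share,
    # as the shrink_share contract above requires.
    if consumed_gb > new_size:
        raise ShareShrinkingPossibleDataLoss(share_id=share['id'])
    return new_size
```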


unmanage_snapshot(snapshot)

Unmanage share snapshot with manila.

update_access(context, *args, **kwargs)

Update access rules for given share.

access_rules contains all access_rules that need to be on the share. If the driver can make bulk access rule updates, it can safely ignore the add_rules and delete_rules parameters.

If the driver cannot make bulk access rule changes, it can rely on new rules to be present in add_rules and rules that need to be removed to be present in delete_rules.

When a rule in delete_rules was never applied, drivers must not raise an exception or attempt to set the rule to error state.

add_rules and delete_rules can be empty lists; in this situation, drivers should ensure that the rules present in access_rules are the same as those on the back end. One scenario where this situation is forced is when the access_level is changed for all existing rules (share migration and for readable replicas).

Drivers must be mindful of this call for share replicas. When ‘update_access’ is called on one of the replicas, the call is likely propagated to all replicas belonging to the share, especially when individual rules are added or removed. If a particular access rule does not make sense to the driver in the context of a given replica, the driver should report correct behavior and take meaningful action. For example, if R/W access is requested on a replica that is part of a “readable” type replication, R/O access may be added by the driver instead of R/W. Note that raising an exception will result in the access_rules_status on the replica, and the share itself, being “out_of_sync”. Drivers can sync on the valid access rules that are provided on the create_replica and promote_replica calls.

  • context – Current context

  • share – Share model with share data.

  • access_rules – A list of access rules for given share

  • add_rules – Empty List or List of access rules which should be added. access_rules already contains these rules.

  • delete_rules – Empty List or List of access rules which should be removed. access_rules doesn’t contain these rules.

  • share_server – None or Share server model


Returns: None, or a dictionary of updates in the format:


    {
        '09960614-8574-4e03-89cf-7cf267b0bd08': {
            'access_key': 'alice31493e5441b8171d2310d80e37e',
            'state': 'error',
        },
        '28f6eabb-4342-486a-a7f4-45688f0c0295': {
            'access_key': 'bob0078aa042d5a7325480fd13228b',
            'state': 'active',
        },
    }
The top-level keys are the ‘access_id’ fields of the access rules that need to be updated. ‘access_key’s are credentials (str) of the entities granted access. Any rule in the ‘access_rules’ parameter can be updated.


Raising an exception in this method will force all rules in ‘applying’ and ‘denying’ states to ‘error’.

An access rule can be set to ‘error’ state either explicitly via this return parameter or because of an exception raised in this method. Such an access rule will no longer be sent to the driver on subsequent access rule updates. When users deny that rule, however, the driver will be asked to deny access to the client(s) represented by the rule. We expect that a rule that has errored at the driver should never exist on the back end, so do not fail the deletion request.

Also, it is possible that the driver may receive a request to add a rule that is already present on the back end. This can happen if the share manager service goes down while the driver is committing access rule changes. Since we cannot determine if the rule was applied successfully by the driver before the disruption, we will treat all ‘applying’ transitional rules as new rules and repeat the request.
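The bulk-versus-incremental logic described above can be sketched with an in-memory stand-in for the back end: a set of (access_type, access_to) pairs. Everything here is illustrative, not manila code; note that deleting a never-applied rule does not raise.

```python
def update_access(backend_rules, access_rules, add_rules, delete_rules):
    """Sketch of the documented update_access contract.

    backend_rules is a hypothetical in-memory back end:
    a set of (access_type, access_to) tuples.
    """
    if not add_rules and not delete_rules:
        # Bulk resync: make the back end match access_rules exactly.
        backend_rules.clear()
        backend_rules.update((r['access_type'], r['access_to'])
                             for r in access_rules)
    else:
        # Incremental path: apply adds and removals individually.
        for r in add_rules:
            backend_rules.add((r['access_type'], r['access_to']))
        for r in delete_rules:
            # Per the contract, never fail when asked to delete a rule
            # that was never applied on the back end.
            backend_rules.discard((r['access_type'], r['access_to']))
    return backend_rules
```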


The manila.share.drivers.service_instance Module

Module for managing nova instances for share drivers.

class BaseNetworkhelper(service_instance_manager)

Bases: object

abstract property NAME

Returns code name of network helper.

abstract get_network_name(network_info)

Returns name of network for service instance.

abstract setup_connectivity_with_service_instances()

Sets up connectivity between Manila host and service instances.

abstract setup_network(network_info)

Sets up network for service instance.

abstract teardown_network(server_details)

Tears down network resources provided for the service instance.

class NeutronNetworkHelper(service_instance_manager)

Bases: manila.share.drivers.service_instance.BaseNetworkhelper

property NAME

Returns code name of network helper.

property admin_project_id

get_network_name(network_info)

Returns name of network for service instance.

property neutron_api

property service_network_id

setup_connectivity_with_service_instances()

Sets up connectivity with service instances.

Creates a host port in the service network and/or the admin network, creating and setting up the required network devices.


setup_network(network_info)

Sets up network for service instance.


teardown_network(server_details)

Tears down network resources provided for the service instance.

class ServiceInstanceManager(driver_config=None)

Bases: object

Manages nova instances for various share drivers.

This class provides the following external methods:

  1. set_up_service_instance: creates an instance and sets up share infrastructure.

  2. ensure_service_instance: ensures the service instance is available.

  3. delete_service_instance: removes the service instance and network infrastructure.
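A minimal sketch of how a share driver might drive these three methods in sequence. FakeServiceInstanceManager is a hypothetical stub exposing the same method names; the returned details are made up for illustration.

```python
class FakeServiceInstanceManager:
    """Hypothetical stand-in for ServiceInstanceManager's external API."""

    def set_up_service_instance(self, context, network_info):
        # A real manager finds or creates a service VM; return fake details.
        return {'instance_id': 'fake-instance-id', 'ip': '10.254.0.5'}

    def ensure_service_instance(self, context, server):
        # A real manager verifies the server exists and is active.
        return True

    def delete_service_instance(self, context, server_details):
        # A real manager deletes the service VM and its subnet.
        return None


def provision_share_server(manager, context, network_info):
    # Typical lifecycle: create (or find) the service VM, then verify it.
    server = manager.set_up_service_instance(context, network_info)
    if not manager.ensure_service_instance(context, server):
        raise RuntimeError('service instance is not available')
    return server
```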

delete_service_instance(context, server_details)

Removes share infrastructure.

Deletes the service VM and the subnet associated with the share network.

ensure_service_instance(context, server)

Ensures that the server exists and is active.


get_config_option(key)

Returns the value of a config option.


key – key of the config option.


Returns: str – value of the config option; the driver’s config takes priority over the global config.

property network_helper

reboot_server(server, soft_reboot=False)

set_up_service_instance(context, network_info)

Finds or creates a service VM and sets it up.

  • context – defines the context that should be used

  • network_info – network info for getting allocations


Returns: dict with service instance details



wait_for_instance_to_be_active(instance_id, timeout)