cinder.volume.manager module

The volume manager handles creating, attaching, and detaching volumes, and manages persistent storage.

Persistent storage volumes keep their state independent of instances. You can attach a volume to an instance, terminate the instance, spawn a new instance (even one from a different image), and re-attach the volume with the same data intact.

Related Flags

volume_manager:

The module name of a class derived from manager.Manager (default: cinder.volume.manager.Manager).

volume_driver:

Used by Manager. Defaults to cinder.volume.drivers.lvm.LVMVolumeDriver.

volume_group:

Name of the group that will contain exported volumes (default: cinder-volumes).

num_shell_tries:

Number of times to attempt to run commands (default: 3).

class VolumeManager(volume_driver=None, service_name: str | None = None, *args, **kwargs)

Bases: CleanableManager, SchedulerDependentManager

Manages attachable block storage devices.

FAILBACK_SENTINEL = 'default'
RPC_API_VERSION = '3.19'
accept_transfer(context, volume_id, new_user, new_project, no_snapshots=False) dict
attach_volume(context, volume_id, instance_uuid, host_name, mountpoint, mode, volume=None) VolumeAttachment

Updates db to show volume is attached.

attachment_delete(context: RequestContext, attachment_id: str, vref: Volume) None

Delete/Detach the specified attachment.

Notifies the backend device that we’re detaching the specified attachment instance.

Parameters:
  • attachment_id – Attachment id to remove

  • vref – Volume object associated with the attachment

attachment_update(context: RequestContext, vref: Volume, connector: dict, attachment_id: str) dict[str, Any]

Update/Finalize an attachment.

This call updates a valid attachment record to associate it with a volume and provides the caller with the proper connection info. Note that this call requires an attachment_id. It is expected that, prior to this call, the volume and an attachment UUID have been reserved.

Parameters:
  • vref – Volume object to create attachment for

  • connector – Connector object to use for attachment creation

  • attachment_id – ID of the attachment record to update

copy_volume_to_image(context: RequestContext, volume_id: str, image_meta: dict) None

Uploads the specified volume to Glance.

image_meta is a dictionary containing the following keys: ‘id’, ‘container_format’, ‘disk_format’
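For illustration, a minimal image_meta dictionary carrying the three required keys might look like the following; all values are made up, and since the metadata travels over RPC it should round-trip cleanly through JSON:

```python
import json

# Hypothetical image_meta payload. Only 'id', 'container_format' and
# 'disk_format' are required per the docstring above; the values here
# are illustrative, not real Glance data.
image_meta = {
    "id": "6f2e6c1e-1111-4a2b-9c3d-000000000000",
    "container_format": "bare",
    "disk_format": "raw",
}

# The metadata is passed over RPC, so it must be JSON-serializable.
assert json.loads(json.dumps(image_meta)) == image_meta
```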

create_group(context: RequestContext, group) Group

Creates the group.

create_group_from_src(context: RequestContext, group: Group, group_snapshot: GroupSnapshot | None = None, source_group=None) Group

Creates the group from source.

The source can be a group snapshot or a source group.

create_group_snapshot(context: RequestContext, group_snapshot: GroupSnapshot) GroupSnapshot

Creates the group_snapshot.

create_snapshot(context, snapshot) UUIDField

Creates and exports the snapshot.

create_volume(context, volume, request_spec=None, filter_properties=None, allow_reschedule=True) UUIDField

Creates the volume.

delete_group(context: RequestContext, group: Group) None

Deletes group and the volumes in the group.

delete_group_snapshot(context: RequestContext, group_snapshot: GroupSnapshot) None

Deletes group_snapshot.

delete_snapshot(context: RequestContext, snapshot: Snapshot, unmanage_only: bool = False) bool | None

Deletes and unexports snapshot.

delete_volume(context: RequestContext, volume: Volume, unmanage_only=False, cascade=False) bool | None

Deletes and unexports volume.

  1. Delete a volume (normal case): delete a volume and update quotas.

  2. Delete a migration volume: if deleting the volume in a migration, we want to skip quotas but we need database updates for the volume.

  3. Delete a temp volume for backup: if deleting the temp volume for backup, we want to skip quotas but we need database updates for the volume.
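The branching above can be sketched as follows; the field names 'migration_status' and 'temp_for_backup' are assumptions for illustration, not the actual Volume object attributes:

```python
# Hedged sketch of the three delete cases: skip quota updates for
# migration and temp-for-backup volumes, but always perform the
# database updates. The dict keys are illustrative stand-ins.
def should_commit_quota(volume: dict) -> bool:
    if volume.get("migration_status"):   # case 2: volume in a migration
        return False
    if volume.get("temp_for_backup"):    # case 3: temp volume for backup
        return False
    return True                          # case 1: normal delete
```

Usage: should_commit_quota({}) returns True for a normal delete, while a volume with a non-empty migration_status or a temp_for_backup flag skips quota updates.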

detach_volume(context, volume_id, attachment_id=None, volume=None) None

Updates db to show volume is detached.

disable_replication(ctxt: RequestContext, group: Group) None

Disable replication.

driver_delete_snapshot(snapshot)
driver_delete_volume(volume)
enable_replication(ctxt: RequestContext, group: Group) None

Enable replication.

extend_volume(context: RequestContext, volume: Volume, new_size: int, reservations) None
extend_volume_completion(context: RequestContext, volume: Volume, new_size: int, reservations: list[str], error: bool) None
failover(context: RequestContext, secondary_backend_id=None) None

Failover a backend to a secondary replication target.

Instructs a replication capable/configured backend to fail over to one of its secondary replication targets. host=None is an acceptable input, and leaves it to the driver to fail over to the only configured target, or to choose a target on its own. All of the host’s volumes will be passed on to the driver in order for it to determine the replicated volumes on the host, if needed.

Parameters:
  • context – security context

  • secondary_backend_id – Specifies backend_id to fail over to

failover_completed(context: RequestContext, updates) None

Finalize failover of this backend.

When a service is clustered and replicated the failover has 2 stages, one that does the failover of the volumes and another that finalizes the failover of the services themselves.

This method takes care of the last part and is called from the service doing the failover of the volumes after it has finished processing the volumes.

failover_host(context: RequestContext, secondary_backend_id=None) None

Failover a backend to a secondary replication target.

Instructs a replication capable/configured backend to fail over to one of its secondary replication targets. host=None is an acceptable input, and leaves it to the driver to fail over to the only configured target, or to choose a target on its own. All of the host’s volumes will be passed on to the driver in order for it to determine the replicated volumes on the host, if needed.

Parameters:
  • context – security context

  • secondary_backend_id – Specifies backend_id to fail over to

failover_replication(ctxt: RequestContext, group: Group, allow_attached_volume: bool = False, secondary_backend_id=None) None

Failover replication.

finish_failover(context: RequestContext, service, updates) None

Completion of the failover locally or via RPC.

freeze_host(context: RequestContext) bool

Freeze management plane on this backend.

Basically puts the control/management plane into a read-only state. We should handle this in the scheduler; however, this is provided to let the driver know in case it needs or wants to do something specific on the backend.

Parameters:

context – security context

get_backup_device(ctxt: RequestContext, backup: Backup, want_objects: bool = False, async_call: bool = False)
get_capabilities(context: RequestContext, discover: bool)

Get capabilities of backend storage.

get_manageable_snapshots(ctxt: RequestContext, marker, limit: int | None, offset: int | None, sort_keys, sort_dirs, want_objects=False)
get_manageable_volumes(ctxt: RequestContext, marker, limit: int | None, offset: int | None, sort_keys, sort_dirs, want_objects=False) list
init_host(added_to_cluster=None, **kwargs) None

Perform any required initialization.

init_host_with_rpc() None

A hook for service to do jobs after RPC is ready.

Like init_host(), this method is a hook where services get a chance to execute tasks that need RPC. Child classes should override this method.

initialize_connection(context, volume: Volume, connector: dict) dict

Prepare volume for connection from host represented by connector.

This method calls the driver initialize_connection and returns it to the caller. The connector parameter is a dictionary with information about the host that will connect to the volume in the following format:

{
   "ip": "<ip>",
   "initiator": "<initiator>"
}
ip:

the ip address of the connecting machine

initiator:

the iscsi initiator name of the connecting machine. This can be None if the connecting machine does not support iscsi connections.

driver is responsible for doing any necessary security setup and returning a connection_info dictionary in the following format:

{
   "driver_volume_type": "<driver_volume_type>",
   "data": "<data>"
}
driver_volume_type:

a string to identify the type of volume. This can be used by the calling code to determine the strategy for connecting to the volume. This could be ‘iscsi’, ‘rbd’, etc.

data:

this is the data that the calling code will use to connect to the volume. Keep in mind that this will be serialized to json in various places, so it should not contain any non-json data types.
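Putting the two formats together, a hedged example of a connector and the connection_info a driver might return for an iSCSI volume; every value here is illustrative, and the exact contents of the data dict are driver-specific:

```python
import json

# Illustrative connector, as described above; the initiator is a
# made-up iSCSI IQN, not a real host's.
connector = {
    "ip": "192.0.2.10",
    "initiator": "iqn.1993-08.org.debian:01:abcdef",
}

# Illustrative connection_info a driver might return. The keys inside
# 'data' vary by driver_volume_type and are assumptions here.
connection_info = {
    "driver_volume_type": "iscsi",
    "data": {
        "target_portal": "192.0.2.20:3260",
        "target_iqn": "iqn.2010-10.org.openstack:volume-6f2e6c1e",
        "target_lun": 1,
    },
}

# As noted above, 'data' is serialized to JSON in various places, so it
# must round-trip through json without loss.
assert json.loads(json.dumps(connection_info)) == connection_info
```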

initialize_connection_snapshot(ctxt, snapshot_id: UUIDField, connector: dict) dict
is_working() bool

Return if Manager is ready to accept requests.

This is to inform Service class that in case of volume driver initialization failure the manager is actually down and not ready to accept any requests.

list_replication_targets(ctxt: RequestContext, group: Group) dict[str, list]

Provide a means to obtain replication targets for a group.

This method is used to find the replication_device config info. ‘backend_id’ is a required key in ‘replication_device’.

Response Example for admin:

{
    "replication_targets": [
        {
            "backend_id": "vendor-id-1",
            "unique_key": "val1"
        },
        {
            "backend_id": "vendor-id-2",
            "unique_key": "val2"
        }
     ]
}

Response example for non-admin:

{
    "replication_targets": [
        {
            "backend_id": "vendor-id-1"
        },
        {
            "backend_id": "vendor-id-2"
        }
     ]
}
manage_existing(ctxt: RequestContext, volume: Volume, ref=None) UUIDField
manage_existing_snapshot(ctxt: RequestContext, snapshot: Snapshot, ref=None) UUIDField
migrate_volume(ctxt: RequestContext, volume, host, force_host_copy: bool = False, new_type_id=None) None

Migrate the volume to the specified host (called on source host).

migrate_volume_completion(ctxt: RequestContext, volume, new_volume, error=False) UUIDField
publish_service_capabilities(context: RequestContext) None

Collect driver status and then publish.

reimage(context, volume, image_meta)

Reimage a volume with specific image.

remove_export(context, volume_id: UUIDField) None

Removes an export for a volume.

remove_export_snapshot(ctxt, snapshot_id: UUIDField) None

Removes an export for a snapshot.

retype(context: RequestContext, volume: Volume, new_type_id: str, host, migration_policy: str = 'never', reservations=None, old_reservations=None) None
revert_to_snapshot(context, volume, snapshot) None

Revert a volume to a snapshot.

The process of reverting to snapshot consists of several steps:

  1. Create a snapshot for backup (in case of data loss).

  2.1. Use the driver’s specific logic to revert the volume.

  2.2. Try the generic way to revert the volume if the driver’s method is missing.

  3. Delete the backup snapshot.
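The steps above can be sketched in Python; every helper here is a hypothetical stand-in (with tiny stub bodies so the sketch runs), not a real Cinder API:

```python
calls = []  # records the order of operations for illustration

def create_backup_snapshot(volume):
    # step 1: snapshot the volume in case the revert loses data
    calls.append("backup")
    return "backup-snap"

def driver_revert(volume, snapshot):
    # step 2.1: simulate a driver without a specific revert method
    raise NotImplementedError

def generic_revert(volume, snapshot):
    # step 2.2: generic fallback revert path
    calls.append("generic_revert")

def delete_snapshot(snap):
    # step 3: drop the backup snapshot whether or not the revert worked
    calls.append("delete:" + snap)

def revert_to_snapshot(volume, snapshot):
    backup = create_backup_snapshot(volume)
    try:
        try:
            driver_revert(volume, snapshot)
        except NotImplementedError:
            generic_revert(volume, snapshot)
    finally:
        delete_snapshot(backup)

revert_to_snapshot("vol-1", "snap-1")
```

The try/finally mirrors the intent of step 3: the backup snapshot is cleaned up even if the revert itself raises.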

secure_file_operations_enabled(ctxt: RequestContext, volume: Volume | None) bool
target = <Target version=3.19>
terminate_connection(context, volume_id: UUIDField, connector: dict, force=False) None

Cleanup connection from host represented by connector.

The format of connector is the same as for initialize_connection.

terminate_connection_snapshot(ctxt, snapshot_id: UUIDField, connector: dict, force=False) None
thaw_host(context: RequestContext) bool

Unfreeze management plane on this backend.

Basically puts the control/management plane back into a normal state. We should handle this in the scheduler; however, this is provided to let the driver know in case it needs or wants to do something specific on the backend.

Parameters:

context – security context

update_group(context: RequestContext, group, add_volumes: str | None = None, remove_volumes: str | None = None) None

Updates group.

Update group by adding volumes to the group, or removing volumes from the group.

update_migrated_volume(ctxt: RequestContext, volume: Volume, new_volume: Volume, volume_status) None

Finalize migration process on backend device.

clean_snapshot_locks(func)
clean_volume_locks(func)