Use the VMware VMDK driver to enable management of the OpenStack Block Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK files on data stores that use any VMware-compatible storage technology such as NFS, iSCSI, Fibre Channel, and vSAN.
> **Warning**
>
> The VMware ESX VMDK driver is deprecated as of the Icehouse release and might be removed in Juno or a subsequent release. The VMware vCenter VMDK driver continues to be fully supported.
The VMware VMDK driver connects to vCenter, through which it can dynamically access all the data stores visible from the ESX hosts in the managed cluster.
When you create a volume, the VMDK driver creates the backing VMDK file on demand. The VMDK file creation completes only when the volume is subsequently attached to an instance, because the data stores visible to that instance determine where to place the volume file. For this reason, the service does not create the VMDK file until you attach the volume to the target instance.
The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra disk. Once attached, you can log in to the running vSphere VM to rescan and discover this extra disk.
With the update to ESX version 6.0, the VMDK driver now supports NFS version 4.1.
The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to the same server.
In the `nova.conf` file, use this option to define the Compute driver:

```
compute_driver=vmwareapi.VMwareVCDriver
```
In the `cinder.conf` file, use this option to define the volume driver:

```
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
```
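As an illustration, a minimal pairing of the two files might look like the following sketch. The vCenter address, user name, and password are placeholders, and the exact option names in the Compute `[vmware]` section can differ by release; the key point is that both services point to the same vCenter server:

```ini
# nova.conf (Compute)
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = 198.51.100.10              # placeholder vCenter server address
host_username = administrator@vsphere.local
host_password = secret

# cinder.conf (Block Storage)
[DEFAULT]
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = 198.51.100.10       # must be the same vCenter server as in nova.conf
vmware_host_username = administrator@vsphere.local
vmware_host_password = secret
```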
The following table lists the options that the drivers support for the OpenStack Block Storage configuration (`cinder.conf`):
| Configuration option = Default value | Description |
|---|---|
| **[DEFAULT]** | |
| `vmware_api_retry_count` = `10` | (IntOpt) Number of times VMware ESX/vCenter server API must be retried upon connection related issues. |
| `vmware_ca_file` = `None` | (StrOpt) CA bundle file to use in verifying the vCenter server certificate. |
| `vmware_cluster_name` = `None` | (MultiStrOpt) Name of a vCenter compute cluster where volumes should be created. |
| `vmware_host_ip` = `None` | (StrOpt) IP address for connecting to VMware ESX/vCenter server. |
| `vmware_host_password` = `None` | (StrOpt) Password for authenticating with VMware ESX/vCenter server. |
| `vmware_host_username` = `None` | (StrOpt) Username for authenticating with VMware ESX/vCenter server. |
| `vmware_host_version` = `None` | (StrOpt) Optional string specifying the VMware vCenter server version. The driver attempts to retrieve the version from the VMware vCenter server. Set this option only if you want to override the vCenter server version. |
| `vmware_image_transfer_timeout_secs` = `7200` | (IntOpt) Timeout in seconds for VMDK volume transfer between Cinder and Glance. |
| `vmware_insecure` = `False` | (BoolOpt) If true, the vCenter server certificate is not verified. If false, the default CA truststore is used for verification. This option is ignored if `vmware_ca_file` is set. |
| `vmware_max_objects_retrieval` = `100` | (IntOpt) Maximum number of objects to be retrieved per batch. Query results are obtained in batches from the server, not in one shot. The server may still limit the count to something less than the configured value. |
| `vmware_task_poll_interval` = `0.5` | (FloatOpt) The interval (in seconds) for polling remote tasks invoked on the VMware ESX/vCenter server. |
| `vmware_tmp_dir` = `/tmp` | (StrOpt) Directory where virtual disks are stored during volume backup and restore. |
| `vmware_volume_folder` = `Volumes` | (StrOpt) Name of the vCenter inventory folder that will contain Cinder volumes. This folder is created under "OpenStack/<project_folder>", where project_folder has the format "Project (<volume_project_id>)". |
| `vmware_wsdl_location` = `None` | (StrOpt) Optional VIM service WSDL location, for example `http://<server>/vimService.wsdl`. Optional override of the default location for bug workarounds. |
The VMware VMDK drivers support the creation of VMDK disk files of type `thin`, `lazyZeroedThick` (sometimes called thick or flat), or `eagerZeroedThick`.
A thin virtual disk is allocated and zeroed on demand as the space is used. Unused space on a thin disk remains available to other users.
A lazy zeroed thick virtual disk has all of its space allocated at disk creation time; blocks are zeroed only on first write. The entire disk space is reserved, so it is not available to other users at any time.
An eager zeroed thick virtual disk is similar to a lazy zeroed thick disk in that the entire disk is allocated at creation. In this type, however, any previous data is wiped clean when the disk is created. The disk therefore takes longer to create, but this prevents problems with stale data on the physical media.
Use the `vmware:vmdk_type` extra spec key with the appropriate value to specify the VMDK disk file type. The following table shows the mapping between the extra spec entry and the VMDK disk file type:
| Disk file type | Extra spec key | Extra spec value |
|---|---|---|
| thin | `vmware:vmdk_type` | `thin` |
| lazyZeroedThick | `vmware:vmdk_type` | `thick` |
| eagerZeroedThick | `vmware:vmdk_type` | `eagerZeroedThick` |
If you do not specify a `vmdk_type` extra spec entry, the disk file type defaults to `thin`.
The following example shows how to create a `lazyZeroedThick` VMDK volume by using the appropriate `vmdk_type`:

```
$ cinder type-create thick_volume
$ cinder type-key thick_volume set vmware:vmdk_type=thick
$ cinder create --volume-type thick_volume --display-name volume1 1
```
With the VMware VMDK drivers, you can create a volume from another source volume or from a snapshot. The VMware vCenter VMDK driver supports the `full` and `linked`/`fast` clone types. Use the `vmware:clone_type` extra spec key to specify the clone type. The following table shows the mapping for clone types:
| Clone type | Extra spec key | Extra spec value |
|---|---|---|
| full | `vmware:clone_type` | `full` |
| linked/fast | `vmware:clone_type` | `linked` |
If you do not specify the clone type, the default is `full`.
The following example shows linked cloning from a source volume, which is created from an image:

```
$ cinder type-create fast_clone
$ cinder type-key fast_clone set vmware:clone_type=linked
$ cinder create --image-id 9cb87f4f-a046-47f5-9b7c-d9487b3c7cd4 --volume-type fast_clone --display-name source-vol 1
$ cinder create --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 --display-name dest-vol 1
```
> **Note**
>
> The VMware ESX VMDK driver ignores the extra spec entry and always creates a `full` clone.
This section describes how to configure back-end data stores using storage policies. In vCenter 5.5 and later, you can create one or more storage policies and expose them as a Block Storage volume type to a VMDK volume. The storage policies are exposed to the VMDK driver through the extra spec property with the `vmware:storage_profile` key.
For example, assume a storage policy in vCenter named `gold_policy` and a Block Storage volume type named `vol1` with the extra spec key `vmware:storage_profile` set to the value `gold_policy`. Any Block Storage volume created with the `vol1` volume type is placed only on data stores that match the `gold_policy` storage policy.
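The matching behavior described above can be sketched as follows. This is an illustrative sketch only, not driver code; the dictionaries are hypothetical stand-ins for vCenter state:

```python
# Sketch of storage-policy matching: a volume type's
# vmware:storage_profile extra spec narrows the candidate data stores
# to those associated with the named policy.

def matching_datastores(extra_specs, datastores):
    """Return the data stores whose associated profiles include the
    policy requested in the volume type's extra specs."""
    profile = extra_specs.get("vmware:storage_profile")
    if profile is None:
        # No policy requested: every data store qualifies.
        return list(datastores)
    return [ds for ds in datastores if profile in ds["profiles"]]

extra_specs = {"vmware:storage_profile": "gold_policy"}
stores = [
    {"name": "ssd-ds1", "profiles": {"gold_policy"}},
    {"name": "sata-ds1", "profiles": {"bronze_policy"}},
]
print([ds["name"] for ds in matching_datastores(extra_specs, stores)])
# → ['ssd-ds1']
```

A volume type without the extra spec key behaves as before: all data stores remain candidates.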
The Block Storage back-end configuration for vSphere data stores is automatically determined based on the vCenter configuration. If, in the `cinder.conf` file, you configure a connection to vCenter version 5.5 or later, the use of storage policies to configure back-end data stores is automatically supported.
> **Note**
>
> Any data stores that you configure for the Block Storage service must also be configured for the Compute service.
Procedure 2.2. To configure back-end data stores by using storage policies
1. In vCenter, tag the data stores to be used for the back end.

   OpenStack also supports policies that are created by using vendor-specific capabilities, for example vSAN-specific storage policies.

   > **Note**
   >
   > The tag value serves as the policy. For details, see the section called "Storage policy-based configuration in vCenter".
2. Set the extra spec key `vmware:storage_profile` in the desired Block Storage volume types to the policy name that you created in the previous step.

3. Optionally, for the `vmware_host_version` parameter, enter the version number of your vSphere platform. For example, `5.5`.

   This setting overrides the default location for the corresponding WSDL file. Among other scenarios, you can use this setting to prevent WSDL error messages during the development phase or to work with a newer version of vCenter.
Complete the other vCenter configuration parameters as appropriate.
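For example, the optional version override mentioned above is a single line in `cinder.conf`; the value `5.5` shown here is a placeholder for your own vSphere version:

```ini
[DEFAULT]
# Optional: skip automatic version retrieval and pin the vCenter version.
vmware_host_version = 5.5
```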
> **Note**
>
> The following considerations apply to configuring SPBM for the Block Storage service:
The VMware vCenter and ESX VMDK drivers support these operations:

- Create, delete, attach, and detach volumes.

  > **Note**
  >
  > When a volume is attached to an instance, a reconfigure operation is performed on the instance to add the volume's VMDK to it. The user must manually rescan and mount the device from within the guest operating system.

- Create, list, and delete volume snapshots.

  > **Note**
  >
  > Allowed only if the volume is not attached to an instance.

- Create a volume from a snapshot.

- Copy an image to a volume.

  > **Note**
  >
  > Only images in `vmdk` disk format with `bare` container format are supported. The `vmware_disktype` property of the image can be `preallocated`, `sparse`, `streamOptimized`, or `thin`.

- Copy a volume to an image.

  > **Note**
  >
  > Allowed only if the volume is not attached to an instance. This operation creates a `streamOptimized` disk image.

- Clone a volume.

  > **Note**
  >
  > Supported only if the source volume is not attached to an instance.

- Back up a volume.

  > **Note**
  >
  > This operation creates a backup of the volume in `streamOptimized` disk format.

- Restore a backup to a new or existing volume.

  > **Note**
  >
  > Supported only if the existing volume does not contain snapshots.

- Change the type of a volume.

  > **Note**
  >
  > This operation is supported only if the volume state is `available`.

- Extend a volume.

> **Note**
>
> Although the VMware ESX VMDK driver supports these operations, it has not been extensively tested.
You can configure Storage Policy-Based Management (SPBM) profiles for vCenter data stores supporting the Compute, Image Service, and Block Storage components of an OpenStack implementation.
In a vSphere OpenStack deployment, SPBM enables you to delegate several data stores for storage, which reduces the risk of running out of storage space. The policy logic selects the data store based on accessibility and available storage space.
- Determine the data stores to be used by the SPBM policy.
- Determine the tag that identifies the data stores in the OpenStack component configuration.
- Create separate policies or sets of data stores for separate OpenStack components.
Procedure 2.3. To create storage policies in vCenter
1. In vCenter, create the tag that identifies the data stores:

   1. From the Home screen, click **Tags**.
   2. Specify a name for the tag.
   3. Specify a tag category. For example, `spbm-cinder`.

2. Apply the tag to the data stores to be used by the SPBM policy.

   > **Note**
   >
   > For details about creating tags in vSphere, see the vSphere documentation.

3. In vCenter, create a tag-based storage policy that uses one or more tags to identify a set of data stores.

   > **Note**
   >
   > For details about creating storage policies in vSphere, see the vSphere documentation.
If a storage policy is enabled, the driver initially selects all the data stores that match the associated storage policy.

If two or more data stores match the storage policy, the driver chooses the data store that is connected to the maximum number of hosts.

In case of ties, the driver chooses the data store with the lowest space utilization, where space utilization is defined as `1 - freespace / totalspace`.
These actions reduce the number of volume migrations while attaching the volume to instances.
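The selection order described above can be sketched as follows. This is an illustrative sketch of the heuristic, not driver code; the candidate dictionaries are hypothetical stand-ins for data store state:

```python
# Sketch of the data store selection heuristic: among policy-matching
# candidates, prefer the data store connected to the most hosts, and
# break ties by the lowest space utilization, 1 - freespace/totalspace.

def select_datastore(datastores):
    """Pick the data store with the most connected hosts, breaking
    ties by lowest space utilization."""
    def utilization(ds):
        return 1 - ds["freespace"] / ds["totalspace"]
    # Sort key: more hosts first (negated), then lower utilization.
    return min(datastores, key=lambda ds: (-ds["hosts"], utilization(ds)))

candidates = [
    {"name": "ds1", "hosts": 4, "freespace": 200, "totalspace": 1000},
    {"name": "ds2", "hosts": 4, "freespace": 600, "totalspace": 1000},
    {"name": "ds3", "hosts": 2, "freespace": 900, "totalspace": 1000},
]
print(select_datastore(candidates)["name"])
# → ds2 (same host count as ds1, but lower utilization)
```

Here `ds3` loses despite being emptiest because fewer hosts can reach it, which would force more volume migrations at attach time.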
The volume must be migrated if the ESX host for the instance cannot access the data store that contains the volume.
