Use the VMware VMDK driver to enable management of the OpenStack Block Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK files on data stores that use any VMware-compatible storage technology such as NFS, iSCSI, FiberChannel, and vSAN.
| ![[Warning]](../common/images/admon/warning.png) | Warning | 
|---|---|
| The VMware ESX VMDK driver is deprecated as of the Icehouse release and might be removed in Juno or a subsequent release. The VMware vCenter VMDK driver continues to be fully supported. | 
The VMware VMDK driver connects to vCenter, through which it can dynamically access all the data stores visible from the ESX hosts in the managed cluster.
When you create a volume, the VMDK driver creates a VMDK file on demand. The VMDK file creation completes only when the volume is subsequently attached to an instance, because the set of data stores visible to the instance determines where to place the volume.
The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra disk. Once attached, you can log in to the running vSphere VM to rescan and discover this extra disk.
The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to the same server.
In the nova.conf file, use this option to define the Compute driver:
compute_driver=vmwareapi.VMwareVCDriver
In the cinder.conf file, use this option to define the volume driver:
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
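Putting the two settings together, a minimal configuration might look like the following sketch. Only the two driver lines come directly from this guide; the host address and credentials are placeholders, and the option names are taken from the configuration table in this section:

```ini
# nova.conf -- the Compute service must use the matching VMware driver
[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver

# cinder.conf -- the volume driver must point to the SAME vCenter server
[DEFAULT]
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip=192.168.0.10         # placeholder vCenter address
vmware_host_username=administrator  # placeholder credentials
vmware_host_password=secret
```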
The following table lists the options that the drivers support in the Block Storage configuration file (cinder.conf):
| Configuration option = Default value | Description | 
|---|---|
| [DEFAULT] | |
| vmware_api_retry_count = 10 | (IntOpt) Number of times VMware ESX/VC server API calls are retried on connection-related failures. | 
| vmware_host_ip = None | (StrOpt) IP address for connecting to VMware ESX/VC server. | 
| vmware_host_password = None | (StrOpt) Password for authenticating with VMware ESX/VC server. | 
| vmware_host_username = None | (StrOpt) Username for authenticating with VMware ESX/VC server. | 
| vmware_host_version = None | (StrOpt) Optional string specifying the VMware VC server version. The driver attempts to retrieve the version from VMware VC server. Set this configuration only if you want to override the VC server version. | 
| vmware_image_transfer_timeout_secs = 7200 | (IntOpt) Timeout in seconds for VMDK volume transfer between Cinder and Glance. | 
| vmware_max_objects_retrieval = 100 | (IntOpt) Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value. | 
| vmware_task_poll_interval = 5 | (IntOpt) The interval (in seconds) for polling remote tasks invoked on VMware ESX/VC server. | 
| vmware_volume_folder = cinder-volumes | (StrOpt) Name for the folder in the VC datacenter that will contain cinder volumes. | 
| vmware_wsdl_location = None | (StrOpt) Optional VIM service WSDL location, for example http://<server>/vimService.wsdl. Overrides the default location; useful for working around WSDL bugs. | 
The VMware VMDK drivers support the creation of VMDK disk files of type thin, thick, or eagerZeroedThick. Use the vmware:vmdk_type extra spec key with the appropriate value to specify the VMDK disk file type. The following table maps each extra spec entry to a VMDK disk file type:
| Disk file type | Extra spec key | Extra spec value | 
|---|---|---|
| thin | vmware:vmdk_type | thin | 
| thick | vmware:vmdk_type | thick | 
| eagerZeroedThick | vmware:vmdk_type | eagerZeroedThick | 
If you do not specify a vmware:vmdk_type extra spec entry, the default disk file type is thin.
The following example shows how to create a thick VMDK volume by using the appropriate vmware:vmdk_type:
$ cinder type-create thick_volume
$ cinder type-key thick_volume set vmware:vmdk_type=thick
$ cinder create --volume-type thick_volume --display-name volume1 1
With the VMware VMDK drivers, you can create a volume from another source volume or from a snapshot. The VMware vCenter VMDK driver supports the full and linked/fast clone types. Use the vmware:clone_type extra spec key to specify the clone type. The following table maps each extra spec entry to a clone type:
| Clone type | Extra spec key | Extra spec value | 
|---|---|---|
| full | vmware:clone_type | full | 
| linked/fast | vmware:clone_type | linked | 
If you do not specify the clone type, the default is full.
The following example shows linked cloning from another source volume:
$ cinder type-create fast_clone
$ cinder type-key fast_clone set vmware:clone_type=linked
$ cinder create --volume-type fast_clone --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 --display-name volume1 1
| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| The VMware ESX VMDK driver ignores the extra spec entry and always creates a full clone. | 
This section describes how to configure back-end data stores using storage policies. In vCenter, you can create one or more storage policies and expose them as a Block Storage volume type to a VMDK volume. The storage policies are exposed to the VMDK driver through the extra spec property with the vmware:storage_profile key.
For example, assume a storage policy in vCenter named gold_policy and a Block Storage volume type named vol1 with the extra spec key vmware:storage_profile set to the value gold_policy. Any Block Storage volume creation that uses the vol1 volume type places the volume only in data stores that match the gold_policy storage policy.
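Following the pattern of the earlier vmdk_type examples, this mapping could be set up with commands like the following; the volume type name, display name, and size are illustrative, and gold_policy must already exist in vCenter:

```shell
$ cinder type-create vol1
$ cinder type-key vol1 set vmware:storage_profile=gold_policy
$ cinder create --volume-type vol1 --display-name volume1 1
```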
The Block Storage back-end configuration for vSphere data stores is automatically determined based on the vCenter configuration. If you configure a connection to vCenter version 5.5 or later in the cinder.conf file, the use of storage policies to configure back-end data stores is automatically supported.
| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| Any data stores that you configure for the Block Storage service must also be configured for the Compute service. | 
Procedure 1.6. To configure back-end data stores by using storage policies
- In vCenter, tag the data stores to be used for the back end. OpenStack also supports policies that are created by using vendor-specific capabilities; for example, vSAN-specific storage policies. Note: The tag value serves as the policy. For details, see the section called “Storage policy-based configuration in vCenter”.
- Set the extra spec key vmware:storage_profile in the desired Block Storage volume types to the policy name that you created in the previous step.
- Optionally, for the vmware_host_version parameter, enter the version number of your vSphere platform. For example, 5.5. This setting overrides the default location for the corresponding WSDL file. Among other scenarios, you can use this setting to prevent WSDL error messages during the development phase or to work with a newer version of vCenter.
- Complete the other vCenter configuration parameters as appropriate. 
| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| The following considerations apply to configuring SPBM for the Block Storage service: | 
The VMware vCenter and ESX VMDK drivers support these operations:
- Create volume 
- Create volume from another source volume. (Supported only if source volume is not attached to an instance.) 
- Create volume from snapshot 
- Create volume from glance image 
- Attach volume (When a volume is attached to an instance, a reconfigure operation is performed on the instance to add the volume's VMDK to it. The user must manually rescan and mount the device from within the guest operating system.) 
- Detach volume 
- Create snapshot (Allowed only if volume is not attached to an instance.) 
- Delete snapshot (Allowed only if volume is not attached to an instance.) 
- Upload as image to glance (Allowed only if volume is not attached to an instance.) 
| ![[Note]](../common/images/admon/note.png) | Note | 
|---|---|
| Although the VMware ESX VMDK driver supports these operations, it has not been extensively tested. | 
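Several of the operations above map directly to cinder CLI calls. The following sketch is illustrative only: the names are made up, the snapshot ID placeholder must be replaced with the real ID from the first command's output, and the source volume must be detached before the snapshot operations:

```shell
$ cinder snapshot-create --display-name snap1 volume1
$ cinder create --snapshot-id <snapshot-id> --display-name volume2 1
```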
You can configure Storage Policy-Based Management (SPBM) profiles for vCenter data stores supporting the Compute, Image Service, and Block Storage components of an OpenStack implementation.
In a vSphere OpenStack deployment, SPBM enables you to delegate several data stores for storage, which reduces the risk of running out of storage space. The policy logic selects the data store based on accessibility and available storage space.
- Determine the data stores to be used by the SPBM policy. 
- Determine the tag that identifies the data stores in the OpenStack component configuration. 
- Create separate policies or sets of data stores for separate OpenStack components. 
Procedure 1.7. To create storage policies in vCenter
- In vCenter, create the tag that identifies the data stores:
  - From the Home screen, click .
  - Specify a name for the tag.
  - Specify a tag category. For example, spbm-cinder.
- Apply the tag to the data stores to be used by the SPBM policy. Note: For details about creating tags in vSphere, see the vSphere documentation.
- In vCenter, create a tag-based storage policy that uses one or more tags to identify a set of data stores. Note: You use this tag name and category when you configure the *.conf file for the OpenStack component. For details about creating tags in vSphere, see the vSphere documentation.
If storage policy is enabled, the driver initially selects all the data stores that match the associated storage policy.
If two or more data stores match the storage policy, the driver chooses a data store that is connected to the maximum number of hosts.
In case of ties, the driver chooses the data store with the lowest space utilization, where space utilization is defined as (1 - freespace/totalspace).
These actions reduce the number of volume migrations while attaching the volume to instances.
The volume must be migrated if the ESX host for the instance cannot access the data store that contains the volume.
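The selection order described above (match the policy, prefer the data store visible to the most hosts, break ties on lowest space utilization) can be sketched as a small script. This is an illustration only: the input format, data store names, and numbers are invented, and the real logic lives inside the VMDK driver.

```shell
#!/bin/sh
# Input: one candidate data store per line, "name host_count free_gb total_gb".
# Sort by host count (descending), then by utilization (ascending), keep the winner.
select_datastore() {
  awk '{
    util = 1 - $3 / $4                       # space utilization = 1 - free/total
    printf "%d %.6f %s\n", $2, util, $1
  }' | sort -k1,1nr -k2,2n | head -n 1 | awk '{ print $3 }'
}

# ds-b and ds-c are both visible to 6 hosts; ds-c has lower utilization and wins.
select_datastore <<'EOF'
ds-a 4 500 1000
ds-b 6 100 1000
ds-c 6 300 1000
EOF
```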