
 EMC VNX direct driver

The EMC VNX direct driver (consisting of EMCCLIISCSIDriver and EMCCLIFCDriver) supports both the iSCSI and FC protocols. EMCCLIISCSIDriver (VNX iSCSI direct driver) and EMCCLIFCDriver (VNX FC direct driver) are based on the ISCSIDriver and FCDriver defined in Block Storage, respectively.

EMCCLIISCSIDriver and EMCCLIFCDriver perform volume operations by executing Navisphere CLI (NaviSecCLI), a command-line interface used for VNX management, diagnostics, and reporting.

 Supported OpenStack release

EMC VNX direct driver supports the Kilo release.

 System requirements

  • VNX Operational Environment for Block version 5.32 or higher.

  • VNX Snapshot and Thin Provisioning license should be activated for VNX.

  • Navisphere CLI v7.32 or higher is installed along with the driver.

 Supported operations

  • Create, delete, attach, and detach volumes.

  • Create, list, and delete volume snapshots.

  • Create a volume from a snapshot.

  • Copy an image to a volume.

  • Clone a volume.

  • Extend a volume.

  • Migrate a volume.

  • Retype a volume.

  • Get volume statistics.

  • Create and delete consistency groups.

  • Create, list, and delete consistency group snapshots.

  • Modify consistency groups.

 Preparation

This section contains instructions to prepare the Block Storage nodes to use the EMC VNX direct driver. You install the Navisphere CLI, install the driver, ensure that the zoning configuration is correct, and register the nodes with VNX.

 Install NaviSecCLI

Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment.
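For example, on an RPM-based Linux node, the CLI can be installed from the package provided by EMC (the package file name below is a placeholder; use the file downloaded for your platform and version):

# rpm -ivh NaviCLI-Linux-64-x86-en_US-<version>.rpm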

 Install Block Storage driver

Both EMCCLIISCSIDriver and EMCCLIFCDriver are provided in the installer package:

  • emc_vnx_cli.py

  • emc_cli_fc.py (for EMCCLIFCDriver)

  • emc_cli_iscsi.py (for EMCCLIISCSIDriver)

Copy the files above to the cinder/volume/drivers/emc/ directory of the OpenStack node(s) where cinder-volume is running.
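For example, if the Block Storage code is installed under /usr/lib/python2.7/dist-packages (the exact path depends on the distribution and installation method), the copy could look like:

# cp emc_vnx_cli.py emc_cli_fc.py emc_cli_iscsi.py /usr/lib/python2.7/dist-packages/cinder/volume/drivers/emc/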

 FC zoning with VNX (EMCCLIFCDriver only)

A storage administrator must enable FC SAN auto zoning between all OpenStack nodes and VNX if FC SAN auto zoning is not enabled.

 Register with VNX

Register the compute nodes with VNX so that they can access the storage, or enable initiator auto registration.

To perform "Copy Image to Volume" and "Copy Volume to Image" operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.

The steps below are for a compute node. Follow the same steps for the Block Storage nodes as well. These steps can be skipped if initiator auto registration is enabled.

[Note]Note

When the driver finds that there is no existing storage group whose name matches the host name, it creates the storage group and then adds the registered initiators of the compute or Block Storage node to it.

If the driver finds that the storage group already exists, it assumes that the registered initiators have already been added to it and skips the operations above for better performance.

It is recommended that the storage administrator does not create the storage group manually and instead relies on the driver for this preparation. If the storage administrator needs to create the storage group manually for special requirements, the correct registered initiators must also be added to the storage group; otherwise, subsequent volume attach operations will fail.
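If needed, the registrations in a storage group can be checked with NaviSecCLI; a sketch, reusing the array address, credentials, and NaviSecCLI path from the examples later in this document:

# /opt/Navisphere/bin/naviseccli -h 10.10.72.41 -user admin -password admin -scope 0 storagegroup -list -gname myhost1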

 EMCCLIFCDriver

Steps for EMCCLIFCDriver:

  1. Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of an FC initiator port of the compute node whose hostname is myhost1 and whose IP address is 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:

    1. Log in to Unisphere and go to FNM0000000000->Hosts->Initiators.

    2. Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.

    3. Click the Register button, select CLARiiON/VNX, and enter the hostname (the output of the Linux hostname command) and IP address:

      • Hostname : myhost1

      • IP : 10.10.61.1

      • Click Register

    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.

  2. Register the WWN with more ports if needed.

 EMCCLIISCSIDriver

Steps for EMCCLIISCSIDriver:

  1. On the compute node with IP address 10.10.61.1 and hostname myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):

    1. Start the iSCSI initiator service on the node

      # /etc/init.d/open-iscsi start
    2. Discover the iSCSI target portals on VNX

      # iscsiadm -m discovery -t st -p 10.10.61.35
    3. Enter /etc/iscsi

      # cd /etc/iscsi
    4. Find out the iqn of the node

      # more initiatorname.iscsi
  2. Log in to VNX from the compute node using the target corresponding to the SPA port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
  3. Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:

    1. Log in to Unisphere and go to FNM0000000000->Hosts->Initiators.

    2. Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.

    3. Click the Register button, select CLARiiON/VNX, and enter the hostname (the output of the Linux hostname command) and IP address:

      • Hostname : myhost1

      • IP : 10.10.61.1

      • Click Register

    4. Then host 10.10.61.1 will appear under Hosts->Host List as well.

  4. Log out of iSCSI on the node:

    # iscsiadm -m node -u
  5. Log in to VNX from the compute node using the target corresponding to the SPB port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
  6. In Unisphere register the initiator with the SPB port.

  7. Log out of iSCSI on the node:

    # iscsiadm -m node -u
  8. Register the iqn with more ports if needed.

 Backend configuration

Make the following changes in the /etc/cinder/cinder.conf file:

storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
san_secondary_ip = 10.10.72.42
#VNX user name
#san_login = username
#VNX user password
#san_password = password
#VNX user type. Valid values are: global(default), local and ldap.
#storage_vnx_authentication_type = ldap
#Directory path of the VNX security file. Make sure the security file is generated first.
#VNX credentials are not necessary when using security file.
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
#timeout in minutes
default_timeout = 10
#If deploying EMCCLIISCSIDriver:
#volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
#"node1hostname" and "node2hostname" shoule be the full hostnames of the nodes(Try command 'hostname').
#This option is for EMCCLIISCSIDriver only.
iscsi_initiators = {"node1hostname":["10.0.0.1", "10.0.0.2"],"node2hostname":["10.0.0.3"]}

[database]
max_pool_size = 20
max_overflow = 30
  • where san_ip is one of the SP IP addresses of the VNX array and san_secondary_ip is the other SP IP address of the VNX array. san_secondary_ip is an optional field that provides a high availability (HA) design: if one SP is down, the other SP can be connected to automatically. san_ip is a mandatory field that provides the main connection.

  • where Pool_01_SAS is the pool from which the user wants to create volumes. The pools can be created using Unisphere for VNX. Refer to the section called “Multiple pools support” on how to manage multiple pools.

  • where storage_vnx_security_file_dir is the directory path of the VNX security file. Make sure the security file is generated following the steps in the section called “Authentication”.

  • where iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on all OpenStack nodes which want to connect to VNX via iSCSI. If this option is configured, the driver leverages this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal is chosen in a relatively random way.

  • Restart the cinder-volume service to make the configuration change take effect; an example restart command is shown after this list.
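
    For example, on a node where the service is managed with the service command (the exact service name varies by distribution, for example openstack-cinder-volume on RHEL-based systems):

    # service cinder-volume restart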

 Authentication

VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local, and ldap scopes are supported. There are two approaches to providing the credentials.

The recommended approach is to use a Navisphere CLI security file, which avoids storing plain-text credentials in the configuration file. Follow the steps below to set it up; a verification example is given after the steps.

  1. Find out the Linux user ID of the /usr/bin/cinder-volume processes. The following steps assume that the /usr/bin/cinder-volume service runs under the cinder account.

  2. Switch to the root account.

  3. Change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash in /etc/passwd (this temporary change allows step 4 to work).

  4. Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the -secfilepath switch specifies the location to save the security file (assuming it is saved to the directory /etc/secfile/array1).

    # su -l cinder -c '/opt/Navisphere/bin/naviseccli -AddUserSecurity -user admin -password admin -scope 0 -secfilepath /etc/secfile/array1'

    Save the security file to a different location for each array, unless the same credentials are shared between all arrays managed by the host; otherwise, the credentials in the security file will be overwritten. If -secfilepath is not specified in the command above, the security file is saved to the default location, which is the home directory of the executing user.

  5. Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd.

  6. Remove the credential options san_login, san_password and storage_vnx_authentication_type from cinder.conf (normally /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path supplied with the -secfilepath switch in step 4. Omit this option if -secfilepath is not used in step 4.

    #Directory path that contains the VNX security file. Generate the security file first
    storage_vnx_security_file_dir = /etc/secfile/array1
  7. Restart the cinder-volume service to make the change take effect.
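
To verify that the security file works, a command such as getagent can be run on behalf of the cinder user against the array (the -s /bin/bash switch is needed because the login shell was set back to /bin/false in step 5; the IP address and path follow the earlier examples):

# su -s /bin/bash -l cinder -c '/opt/Navisphere/bin/naviseccli -h 10.10.72.41 -secfilepath /etc/secfile/array1 getagent'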

Alternatively, the credentials can be specified in /etc/cinder/cinder.conf through the three options below:

#VNX user name
san_login = username
#VNX user password
san_password = password
#VNX user type. Valid values are: global, local and ldap. global is the default value
storage_vnx_authentication_type = ldap

 Restriction of deployment

It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume. Otherwise, cinder upload-to-image --force True terminates the VM instance's data access to the volume.

 Restriction of volume extension

VNX does not support extending a thick volume that has a snapshot. If the user tries to extend a volume that has a snapshot, the volume's status changes to error_extending.

 Restriction of iSCSI attachment

The driver caches iSCSI port information. If the iSCSI port configuration is changed, the administrator should restart the cinder-volume service or wait 5 minutes before performing any volume attach operation; otherwise, the attachment may fail because stale iSCSI port information is used.

 Provisioning type (thin, thick, deduplicated and compressed)

Users can specify the extra spec key storagetype:provisioning in a volume type to set the provisioning type of a volume. The provisioning type can be thick, thin, deduplicated or compressed.

  • thick provisioning type means the volume is fully provisioned.

  • thin provisioning type means the volume is virtually provisioned.

  • deduplicated provisioning type means the volume is virtually provisioned and deduplication is enabled on it. The administrator must configure the system-level deduplication settings on the VNX. To create a deduplicated volume, the VNX deduplication license must be activated on the VNX first, and the extra spec key deduplication_support=True should be used so that the Block Storage scheduler finds a volume back end which manages a VNX with the deduplication license activated.

  • compressed provisioning type means the volume is virtually provisioned and compression is enabled on it. The administrator must configure the system-level compression settings on the VNX. To create a compressed volume, the VNX compression license must be activated on the VNX first, and the user should specify the extra spec key compression_support=True so that the Block Storage scheduler finds a volume back end which manages a VNX with the compression license activated. VNX does not support creating a snapshot of a compressed volume. If the user tries to create a snapshot of a compressed volume, the operation fails and OpenStack shows the new snapshot in the error state.

Here is an example of how to create a volume with a specified provisioning type. First create a volume type and set the provisioning type in its extra specs, then create a volume with this volume type (a volume creation example is shown after the explanation below):

$ cinder type-create "ThickVolume"
$ cinder type-create "ThinVolume"
$ cinder type-create "DeduplicatedVolume"
$ cinder type-create "CompressedVolume"
$ cinder type-key "ThickVolume" set storagetype:provisioning=thick
$ cinder type-key "ThinVolume" set storagetype:provisioning=thin
$ cinder type-key "DeduplicatedVolume" set storagetype:provisioning=deduplicated deduplication_support=True
$ cinder type-key "CompressedVolume" set storagetype:provisioning=compressed compression_support=True

In the example above, four volume types are created: ThickVolume, ThinVolume, DeduplicatedVolume and CompressedVolume. For ThickVolume, storagetype:provisioning is set to thick, and similarly for the other volume types. If storagetype:provisioning is not specified or is set to an invalid value, the default value thick is used.

The volume type name, such as ThickVolume, is user-defined and can be any name. The extra spec key storagetype:provisioning must be the exact name listed here, and its value must be thick, thin, deduplicated or compressed. During volume creation, if the driver finds storagetype:provisioning in the extra specs of the volume type, it creates the volume with the corresponding provisioning type. Otherwise, the volume is thick by default.
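
For example, to create a 10 GB volume with the thin provisioning type defined above (the size is arbitrary):

$ cinder create --volume-type "ThinVolume" 10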

 Fully automated storage tiering support

VNX supports fully automated storage tiering, which requires the FAST license to be activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume and the extra spec key fast_support=True to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated. Here are the five supported values for the extra spec key storagetype:tiering:

  • StartHighThenAuto (Default option)

  • Auto

  • HighestAvailable

  • LowestAvailable

  • NoMovement

The tiering policy cannot be set for a deduplicated volume. The user can check the storage pool properties on VNX to know the tiering policy of a deduplicated volume.

Here is an example of how to create volume types with tiering policies:

$ cinder type-create "AutoTieringVolume"
$ cinder type-key "AutoTieringVolume" set storagetype:tiering=Auto fast_support=True
$ cinder type-create "ThinVolumeOnLowestAvaibleTier"
$ cinder type-key "CompressedVolumeOnLowestAvaibleTier" set storagetype:provisioning=thin storagetype:tiering=Auto fast_support=True

 FAST Cache support

VNX has a FAST Cache feature, which requires the FAST Cache license to be activated on the VNX. The OpenStack administrator can use the extra spec key fast_cache_enabled to choose whether to create a volume on a volume back end which manages a pool with FAST Cache enabled. The value of the extra spec key fast_cache_enabled is either True or False. When creating a volume, if the key fast_cache_enabled is set to True in the volume type, the volume is created by a back end which manages a pool with FAST Cache enabled.
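
For example, a volume type that requests a pool with FAST Cache enabled could be defined as follows (the type name is arbitrary):

$ cinder type-create "FastCacheVolume"
$ cinder type-key "FastCacheVolume" set fast_cache_enabled=True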

 Storage group automatic deletion

For volume attaching, the driver has a storage group on VNX for each compute node hosting the VM instances that are going to consume VNX Block Storage (using the compute node's hostname as the storage group's name). All the volumes attached to the VM instances on a compute node are put into the corresponding storage group. If destroy_empty_storage_group=True, the driver removes the empty storage group when its last volume is detached. For data safety, it is not recommended to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required to synchronize this operation.

 EMC storage-assisted volume migration

The EMC VNX direct driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False volume_id host or cinder migrate volume_id host, Cinder tries to leverage the VNX's native volume migration functionality.
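
For example, a storage-assisted migration to another back end could be requested as follows; the volume ID and host string are placeholders, and the host string format (host@backend or host@backend#pool) depends on the deployment:

$ cinder migrate --force-host-copy False <volume_id> <hostname>@backendB#Pool_02_SAS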

In the following scenarios, VNX native volume migration will not be triggered:

  • Volume migration between back ends with different storage protocols, for example, FC and iSCSI.

  • The volume is migrated across arrays.

 Initiator auto registration

If initiator_auto_registration=True, the driver automatically registers iSCSI initiators with all working iSCSI target ports on the VNX array during volume attaching (the driver skips initiators that have already been registered).

If the user wants to register the initiators with only some specific ports on VNX and not with the other ports, this functionality should be disabled.

 Initiator auto deregistration

Enabling storage group automatic deletion is the precondition for this functionality. If initiator_auto_deregistration=True is set, the driver deregisters all the iSCSI initiators of the host after its storage group is deleted.
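
A back-end section that enables both automatic deletion and deregistration could contain the following options (a sketch; combine them with the other back-end options shown in this document):

destroy_empty_storage_group = True
initiator_auto_registration = True
initiator_auto_deregistration = True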

 Read-only volumes

OpenStack supports read-only volumes. The following command can be used to set a volume to read-only.

$ cinder readonly-mode-update volume True

After a volume is marked as read-only, the driver forwards this information when a hypervisor attaches the volume, and the hypervisor has an implementation-specific way to make sure the volume is not written to.

 Multiple pools support

The user can configure a storage pool for a Block Storage back end (referred to as a pool-based back end) so that the back end uses only that storage pool.

If storage_vnx_pool_name is not given in the configuration file, the Block Storage back end uses all the pools on the VNX array, and the scheduler chooses the pool in which to place the volume based on the pools' capacities and capabilities. This kind of Block Storage back end is referred to as an array-based back end.

Here is an example configuration of an array-based back end:

san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first
storage_vnx_security_file_dir = /etc/secfile/array1
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
destroy_empty_storage_group = False
volume_backend_name = vnx_41

In this configuration, if the user wants to create a volume on a certain storage pool, a volume type with an extra spec that specifies the storage pool should be created first; the user can then use this volume type to create the volume.

Here is an example of creating the volume type:

$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41

 Volume number threshold

In VNX, there is a limit on the maximum number of pool volumes that can be created in the system. When the limit is reached, no more pool volumes can be created even if there is enough remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but has reached the limit, the back end will fail to create the volume.

The default value of the option check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end checks this limit and reports 0 free capacity to the scheduler if the limit is reached, so that the scheduler can skip pool-based back ends that have run out of pool volume numbers.
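
To enable this check, set the option in the back-end section of /etc/cinder/cinder.conf:

check_max_pool_luns_threshold = True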

 FC SAN auto zoning

The EMC direct driver supports FC SAN auto zoning when ZoneManager is configured. Set zoning_mode to fabric in the back-end configuration section to enable this feature. For ZoneManager configuration, refer to the section called “Fibre Channel Zone Manager”.
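
For example, add the following to the back-end configuration section:

zoning_mode = fabric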

 Multi-backend configuration
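
Here is an example of a multi-back-end configuration in /etc/cinder/cinder.conf: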

[DEFAULT]

enabled_backends = backendA, backendB

[backendA]

storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first.
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in Minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True

[backendB]
storage_vnx_pool_name = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in Minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True

[database]

max_pool_size = 20
max_overflow = 30

For more details on multi-backend, see the OpenStack Cloud Administration Guide.

 Force delete volumes in storage groups

Some available volumes may remain in storage groups on the VNX array because of OpenStack timeout issues, but the VNX array does not allow the user to delete volumes which are still in storage groups. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete such available volumes in this situation.

When force_delete_lun_in_storagegroup=True is set in the back-end section, the driver moves the volumes out of their storage groups and then deletes them if the user tries to delete volumes that remain in storage groups on the VNX array.

The default value of force_delete_lun_in_storagegroup is False.
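
To enable it, add the option to the back-end section of /etc/cinder/cinder.conf:

force_delete_lun_in_storagegroup = True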
