 EMC VMAX iSCSI and FC drivers

The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays under OpenStack Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.

The drivers perform volume operations by communicating with the backend VMAX storage. They use a CIM client in Python called PyWBEM to perform CIM operations over HTTP.

The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back-end for VMAX storage operations.

The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system.

 System requirements

EMC SMI-S Provider V4.6.2.8 or higher is required. You can download SMI-S from EMC's support web site (login required). See the EMC SMI-S Provider release notes for installation instructions.

The EMC VMAX family of storage arrays is supported.

 Supported operations

VMAX drivers support these operations:

  • Create, delete, attach, and detach volumes.

  • Create, list, and delete volume snapshots.

  • Copy an image to a volume.

  • Copy a volume to an image.

  • Clone a volume.

  • Extend a volume.

  • Retype a volume.

  • Create a volume from a snapshot.

VMAX drivers also support the following features:

  • FAST automated storage tiering policy.

  • Dynamic masking view creation.

  • Striped volume creation.

 Set up the VMAX drivers


Procedure 1.1. To set up the EMC VMAX drivers

  1. Install the python-pywbem package for your distribution. See the section called “Install the python-pywbem package”.

  2. Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S.

    For information, see the section called “Set up SMI-S” and the SMI-S release notes.

  3. Change configuration files. See the section called “cinder.conf configuration file” and the section called “cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file”.

  4. Configure connectivity. For FC driver, see the section called “FC Zoning with VMAX”. For iSCSI driver, see the section called “iSCSI with VMAX”.

 Install the python-pywbem package

Install the python-pywbem package for your distribution, as follows:

  • On Ubuntu:

    # apt-get install python-pywbem
  • On openSUSE:

    # zypper install python-pywbem
  • On Fedora:

    # yum install pywbem
 Set up SMI-S

You can install SMI-S on a non-OpenStack host. Supported platforms include several flavors of Windows, Red Hat Linux, and SUSE Linux. SMI-S can be installed on a physical server or on a VM, but ESX is the only supported hypervisor for a VM running SMI-S. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.


You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.

SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and run the TestSmiProvider program (TestSmiProvider.exe on Windows).

Use the addsys command in TestSmiProvider to add an array, then run dv and examine the output to confirm the array was added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.
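
For illustration, a typical interactive session looks like the following (the prompt and output vary by SMI-S version, and addsys prompts for environment-specific array details):

# cd /opt/emc/ECIM/ECOM/bin
# ./TestSmiProvider
(localhost:5988) ? addsys
(localhost:5988) ? dv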

 cinder.conf configuration file

Make the following changes in /etc/cinder/cinder.conf.

Add the following entries, where 1.1.1.1 stands in for the IP address of the VMAX iSCSI target:

enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC

[CONF_GROUP_ISCSI]
iscsi_ip_address = 1.1.1.1
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name = ISCSI_backend

[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name = FC_backend

In this example, two backend configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.

Once the cinder.conf and EMC-specific configuration files have been created, issue cinder commands to create OpenStack volume types and associate them with the declared volume_backend_name values:

$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend

By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
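
To verify the associations, you can list the volume types and their extra specs with the standard cinder client commands:

$ cinder type-list
$ cinder extra-specs-list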

Restart the cinder-volume service.
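
The service name varies by distribution; for example:

# service cinder-volume restart            (Ubuntu, Debian)
# service openstack-cinder-volume restart  (RHEL, CentOS, Fedora)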

 cinder_emc_config_CONF_GROUP_ISCSI.xml configuration file

Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.

Add the following lines to the XML file, replacing the example values with settings for your environment:

<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>00</EcomServerPort>
  <EcomUserName>user1</EcomUserName>
  <EcomPassword>password1</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>FC_GOLD1</Pool>
  <FastPolicy>GOLD1</FastPolicy>
</EMC>


  • EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server, which is packaged with SMI-S.

  • EcomUserName and EcomPassword are credentials for the ECOM server.

  • PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).

  • The Array tag holds the unique VMAX array serial number.

  • The Pool tag holds the unique pool name within a given array. For backends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.

  • The FastPolicy tag conveys the name of the FAST Policy to be used. By including this tag, volumes managed by this backend are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.

 FC Zoning with VMAX

Zone Manager is recommended when using the VMAX FC driver, especially for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns.
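
If the Zone Manager is used, it is enabled in cinder.conf. A minimal sketch follows; the full fabric configuration, including the [fc-zone-manager] section, is covered in the Fibre Channel Zone Manager documentation:

[DEFAULT]
zoning_mode = fabric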

 iSCSI with VMAX
  • Make sure the iscsi-initiator-utils package is installed on the host (use apt-get, zypper, or yum, depending on Linux flavor).

  • Verify that the host can ping the VMAX iSCSI target ports (see the example below).
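
As an illustration, assuming 1.1.1.1 stands in for one of the VMAX iSCSI target addresses, you can confirm reachability and target discovery from the host:

$ ping 1.1.1.1
# iscsiadm -m discovery -t sendtargets -p 1.1.1.1:3260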

 VMAX masking view and group naming info

 Masking view names

Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions:

OS-[shortHostName][poolName]-I-MV (for Masking Views using iSCSI)
OS-[shortHostName][poolName]-F-MV (for Masking Views using FC)
 Initiator group names

For each host attached to VMAX volumes through the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. On each new volume attach operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the initiator group as required. Names are of the following format:

OS-[shortHostName]-I-IG (for iSCSI initiators)
OS-[shortHostName]-F-IG (for Fibre Channel initiators)

Hosts attached to VMAX storage managed by the OpenStack environment cannot also be attached to storage on the same VMAX array that is not managed by OpenStack. This is due to limitations on VMAX initiator group membership.

 FA port groups

VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file.

 Storage group names

As volumes are attached to a host, they are added to an existing storage group, if one exists; otherwise a new storage group is created and the volume is added to it. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). Names take the following form:

OS-[shortHostName][poolName]-I-SG (attached over iSCSI)
OS-[shortHostName][poolName]-F-SG (attached over Fibre Channel)
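
For illustration, a hypothetical host with short name myhost1 attaching to pool gold1 over Fibre Channel would yield the following driver-generated names:

OS-myhost1gold1-F-MV (masking view)
OS-myhost1-F-IG (initiator group)
OS-myhost1gold1-F-SG (storage group)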

 Concatenated or striped volumes

In order to support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance.

Below is an example of how to create striped volumes. First, create a volume type. Then define the storagetype:stripecount extra spec for the volume type, representing the number of meta members in the striped volume. In the example below, each volume created under the GoldStriped volume type is striped across 4 meta members.

$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4
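
Once the type exists, volumes created with it are striped across the configured meta members. For example (the volume name and size here are illustrative):

$ cinder create --volume-type GoldStriped --display-name striped_vol 100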