HPE MSA Fibre Channel and iSCSI drivers

The HPMSAFCDriver and HPMSAISCSIDriver Cinder drivers allow the HPE MSA 2050, 1050, 2040, and 1040 arrays to be used for Block Storage in OpenStack deployments.

System requirements

To use the HPE MSA drivers, the following are required:

  • HPE MSA 2050, 1050, 2040 or 1040 array with:
    • iSCSI or FC host interfaces
    • G22x firmware or later
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP enabled on the array

Supported operations

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
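
The operations above use the standard OpenStack client commands; no driver-specific syntax is required. For example (the volume and snapshot names below are illustrative):

  $ openstack volume snapshot create --volume vol1 snap1   # snapshot an existing volume
  $ openstack volume create --snapshot snap1 vol2          # create a new volume from that snapshot
  $ openstack volume set --size 20 vol1                    # extend vol1 to 20 GiB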

Configuring the array

  1. Verify that the array can be managed via an HTTPS connection. HTTP can also be used if hpmsa_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

    If you plan to use vdisks instead of virtual pools, create or identify one or more vdisks to be used for OpenStack storage; typically this will mean creating or setting aside one disk group for each of the A and B controllers.

  2. Edit the cinder.conf file to define a storage back end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in a key=value format.

    • The hpmsa_backend_name value specifies the name of the storage pool or vdisk on the array.
    • The volume_backend_name option can be set to a unique value if you wish to assign volumes to a specific storage pool on the array, or to a name shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The remaining options are repeated for each storage pool in a given array:
      • volume_driver specifies the Cinder driver name.
      • san_ip specifies the IP addresses or host names of the array’s management controllers.
      • san_login and san_password specify the username and password of an array user account with manage privileges.
      • hpmsa_iscsi_ips specifies the iSCSI IP addresses for the array if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.

    iSCSI example back-end entries

    [pool-a]
    hpmsa_backend_name = A
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
    san_ip = 10.1.2.3,10.1.2.4
    san_login = manage
    san_password = !manage
    hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    hpmsa_backend_name = B
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
    san_ip = 10.1.2.3,10.1.2.4
    san_login = manage
    san_password = !manage
    hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Fibre Channel example back-end entries

    [pool-a]
    hpmsa_backend_name = A
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 10.1.2.3,10.1.2.4
    san_login = manage
    san_password = !manage
    
    [pool-b]
    hpmsa_backend_name = B
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 10.1.2.3,10.1.2.4
    san_login = manage
    san_password = !manage
    
  3. If any hpmsa_backend_name value refers to a vdisk rather than a virtual pool, add an additional statement hpmsa_backend_type = linear to that back-end entry, as shown in the sketch below.
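
    A minimal sketch of such an entry follows; the section name vdisk-a, the vdisk name VD1, and the volume_backend_name value are illustrative, and the remaining options follow the FC examples above:

    Linear (vdisk) example back-end entry

    [vdisk-a]
    hpmsa_backend_name = VD1
    hpmsa_backend_type = linear
    volume_backend_name = hpmsa-array-linear
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 10.1.2.3,10.1.2.4
    san_login = manage
    san_password = !manage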

  4. If HTTPS is not enabled in the array, include hpmsa_api_protocol = http in each of the back-end definitions.
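
    For example, each back-end section would carry one additional line (shown here as a sketch against the pool-a entry from the examples above):

    [pool-a]
    # ... existing pool-a options ...
    hpmsa_api_protocol = http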

  5. If HTTPS is enabled, you can enable certificate verification with the option hpmsa_verify_certificate=True. You may also use the hpmsa_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.
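
    A minimal sketch, assuming the CA bundle is stored at /etc/ssl/certs/hpmsa-ca.pem (an illustrative path, not a driver default):

    [pool-a]
    # ... existing pool-a options ...
    hpmsa_verify_certificate = True
    hpmsa_verify_certificate_path = /etc/ssl/certs/hpmsa-ca.pem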

  6. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example of [DEFAULT] section changes

    [DEFAULT]
    enabled_backends = pool-a,pool-b
    default_volume_type = hpmsa
    
  7. Create a new volume type for each distinct volume_backend_name value that you added in the cinder.conf file. The example below assumes that the same volume_backend_name=hpmsa-array option was specified in all of the entries, and specifies that the volume type hpmsa can be used to allocate volumes from any of them.

    Example of creating a volume type

    $ openstack volume type create hpmsa
    $ openstack volume type set --property volume_backend_name=hpmsa-array hpmsa
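
    Once the volume type exists, volumes can be allocated from the array by referencing it; the volume name and size below are illustrative:

    $ openstack volume create --type hpmsa --size 10 hpmsa-volume-1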
    
  8. After modifying the cinder.conf file, restart the cinder-volume service.
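
    The exact command depends on your distribution and deployment tooling; on a systemd-based host the service unit is commonly named openstack-cinder-volume or cinder-volume, for example:

    # systemctl restart openstack-cinder-volume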

Driver-specific options

The following table contains the configuration options that are specific to the HPE MSA drivers.

Description of HPE MSA configuration options
Configuration option = Default value     Description
---------------------------------------  -------------------------------------------------------------------
hpmsa_api_protocol = https               (String, choices: http, https) HPMSA API interface protocol.
hpmsa_backend_name = A                   (String) Pool or Vdisk name to use for volume creation.
hpmsa_backend_type = virtual             (String, choices: linear, virtual) linear (for Vdisk) or virtual (for Pool).
hpmsa_iscsi_ips = []                     (List of String) List of comma-separated target iSCSI IP addresses.
hpmsa_verify_certificate = False         (Boolean) Whether to verify HPMSA array SSL certificate.
hpmsa_verify_certificate_path = None     (String) HPMSA array SSL certificate path.