Icehouse

 Configuration

The HDS driver supports the concept of differentiated services, where a volume type can be associated with the fine-tuned performance characteristics of an HDP, the dynamic pool where volumes are created[1]. For instance, an HDP can consist of fast SSDs to provide speed, or can provide a certain reliability based on characteristics such as its RAID level. The HDS driver maps each volume type to the volume_type option in its configuration file.

Configuration is read from an XML-format file. Examples are shown below for the single back-end and multi back-end cases.

[Note]Note

  • It is not recommended to manage an HUS array simultaneously from multiple OpenStack Block Storage instances or servers. [2]

Table 1.5. Description of configuration options for hds-hus

Configuration option = Default value | Description
[DEFAULT]
hds_cinder_config_file = /opt/hds/hus/cinder_hus_conf.xml | (StrOpt) Configuration file for the HDS Cinder plugin for HUS

 HUS setup

Before using iSCSI services, use the HUS UI to create an iSCSI domain for each EVS providing iSCSI services.

 Single back-end

In a single back-end deployment, only one OpenStack Block Storage instance runs on the OpenStack Block Storage server and controls one HUS array. This deployment requires the following configuration:

  1. Set the hds_cinder_config_file option in the /etc/cinder/cinder.conf file to use the HDS volume driver. This option points to a configuration file.[3]

    volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
    hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml
  2. Configure hds_cinder_config_file at the location specified previously. For example, /opt/hds/hus/cinder_hds_conf.xml:

    <?xml version="1.0" encoding="UTF-8" ?>
    <config>
        <mgmt_ip0>172.17.44.16</mgmt_ip0>
        <mgmt_ip1>172.17.44.17</mgmt_ip1>
        <hus_cmd>hus-cmd</hus_cmd>
        <username>system</username>
        <password>manager</password>
        <svc_0>
            <volume_type>default</volume_type>
            <iscsi_ip>172.17.39.132</iscsi_ip>
            <hdp>9</hdp>
        </svc_0>
        <snapshot>
            <hdp>13</hdp>
        </snapshot>
        <lun_start>
            3000
        </lun_start>
        <lun_end>
            4000
        </lun_end>
    </config>
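The XML layout above can be sanity-checked before restarting the volume service. The following is a minimal sketch, not part of the driver itself: the element names match the example above, but the validation rules (which tags are required, and that lun_start must be lower than lun_end) are assumptions chosen for illustration.

```python
import xml.etree.ElementTree as ET

# Top-level tags assumed mandatory for this illustration.
REQUIRED = ["mgmt_ip0", "mgmt_ip1", "hus_cmd", "username",
            "password", "lun_start", "lun_end"]

def check_hds_config(xml_text):
    """Parse an HDS HUS XML config and return a list of obvious mistakes."""
    root = ET.fromstring(xml_text)
    errors = []
    for tag in REQUIRED:
        if root.find(tag) is None:
            errors.append("missing <%s>" % tag)
    # Every service label (svc_0, svc_1, ...) needs a volume_type,
    # an iSCSI portal address, and an HDP number.
    for svc in root:
        if svc.tag.startswith("svc_"):
            for child in ("volume_type", "iscsi_ip", "hdp"):
                if svc.find(child) is None:
                    errors.append("%s: missing <%s>" % (svc.tag, child))
    if not errors:
        start = int(root.findtext("lun_start").strip())
        end = int(root.findtext("lun_end").strip())
        if start >= end:
            errors.append("lun_start must be lower than lun_end")
    return errors
```

Feeding the example file above to check_hds_config() returns an empty list; an empty &lt;config&gt; element reports every missing tag.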
 Multi back-end

In a multi back-end deployment, more than one OpenStack Block Storage instance runs on the same server. In this example, two HUS arrays are used, possibly providing different storage performance:

  1. Configure /etc/cinder/cinder.conf: create the hus1 and hus2 configuration blocks. Set the hds_cinder_config_file option to point to a unique configuration file for each block, and set the volume_driver option for each back-end to cinder.volume.drivers.hds.hds.HUSDriver.

    enabled_backends=hus1,hus2
    
    [hus1]
    volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
    hds_cinder_config_file = /opt/hds/hus/cinder_hus1_conf.xml
    volume_backend_name=hus-1
    
    [hus2]
    volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
    hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
    volume_backend_name=hus-2
  2. Configure /opt/hds/hus/cinder_hus1_conf.xml:

    <?xml version="1.0" encoding="UTF-8" ?>
    <config>
        <mgmt_ip0>172.17.44.16</mgmt_ip0>
        <mgmt_ip1>172.17.44.17</mgmt_ip1>
        <hus_cmd>hus-cmd</hus_cmd>
        <username>system</username>
        <password>manager</password>
        <svc_0>
            <volume_type>regular</volume_type>
            <iscsi_ip>172.17.39.132</iscsi_ip>
            <hdp>9</hdp>
        </svc_0>
        <snapshot>
            <hdp>13</hdp>
        </snapshot>
        <lun_start>
            3000
        </lun_start>
        <lun_end>
            4000
        </lun_end>
    </config>
  3. Configure the /opt/hds/hus/cinder_hus2_conf.xml file:

    <?xml version="1.0" encoding="UTF-8" ?>
    <config>
        <mgmt_ip0>172.17.44.20</mgmt_ip0>
        <mgmt_ip1>172.17.44.21</mgmt_ip1>
        <hus_cmd>hus-cmd</hus_cmd>
        <username>system</username>
        <password>manager</password>
        <svc_0>
            <volume_type>platinum</volume_type>
            <iscsi_ip>172.17.30.130</iscsi_ip>
            <hdp>2</hdp>
        </svc_0>
        <snapshot>
            <hdp>3</hdp>
        </snapshot>
        <lun_start>
            2000
        </lun_start>
        <lun_end>
            3000
        </lun_end>
    </config>
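The relationship between enabled_backends and the per-back-end sections in step 1 can be illustrated with a short script. This is a minimal sketch assuming the INI layout shown above; it is not how cinder itself loads its configuration (cinder uses oslo.config for that).

```python
import configparser

def list_backends(conf_text):
    """Map each enabled back-end to its (backend name, driver config file)."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    backends = {}
    # enabled_backends lives in [DEFAULT] and names one section per back-end.
    for name in parser.get("DEFAULT", "enabled_backends").split(","):
        name = name.strip()
        section = parser[name]
        backends[name] = (section["volume_backend_name"],
                          section["hds_cinder_config_file"])
    return backends
```

Applied to the cinder.conf excerpt in step 1, this yields hus1 → (hus-1, /opt/hds/hus/cinder_hus1_conf.xml) and hus2 → (hus-2, /opt/hds/hus/cinder_hus2_conf.xml).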
 Type extra specs: volume_backend and volume type

If you use volume types, you must configure them in the configuration file and set the volume_backend_name option to the appropriate back-end. In the previous multi back-end example, the platinum volume type is served by hus-2, and the regular volume type is served by hus-1.

cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2
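The effect of these extra specs can be modelled as a simple filter: the scheduler only considers back-ends whose reported volume_backend_name matches the extra spec of the requested volume type. The following is a minimal sketch of that matching rule only; the dictionaries and names are illustrative and do not reflect cinder's actual scheduler API.

```python
def matching_backends(backends, type_extra_specs):
    """Keep back-ends whose volume_backend_name matches the volume type."""
    wanted = type_extra_specs.get("volume_backend_name")
    if wanted is None:          # untyped request: any back-end qualifies
        return list(backends)
    return [b for b in backends if b["volume_backend_name"] == wanted]

backends = [{"name": "hus1", "volume_backend_name": "hus-1"},
            {"name": "hus2", "volume_backend_name": "hus-2"}]

# Per the type-key commands above, a platinum volume is served by hus-2 only.
platinum = matching_backends(backends, {"volume_backend_name": "hus-2"})
```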
 Non differentiated deployment of HUS arrays

You can deploy multiple OpenStack Block Storage instances that each control a separate HUS array. Each instance has no volume type associated with it. The OpenStack Block Storage filtering algorithm selects the HUS array with the largest available free space. In each configuration file, you must define the default volume_type in the service labels.
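The "largest available free space" selection described above behaves like a capacity-based weigher: among eligible back-ends, pick the one reporting the most free capacity. A minimal sketch of that rule follows; the stats dictionaries are illustrative, not the actual structures cinder's scheduler exchanges with drivers.

```python
def pick_backend(backend_stats):
    """Choose the back-end reporting the most free capacity (in GB)."""
    return max(backend_stats, key=lambda b: b["free_capacity_gb"])

# Example: two HUS-backed instances reporting their free space.
stats = [{"host": "hus1", "free_capacity_gb": 120.0},
         {"host": "hus2", "free_capacity_gb": 480.0}]
```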



[1] Do not confuse differentiated services with the OpenStack Block Storage volume services.

[2] It is okay to manage multiple HUS arrays by using multiple OpenStack Block Storage instances (or servers).

[3] The configuration file location may differ.
