
 HDS HNAS iSCSI and NFS driver

This OpenStack Block Storage volume driver provides iSCSI and NFS support for Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080 and 4100.

 Supported operations

The NFS and iSCSI drivers support these operations:

  • Create, delete, attach, and detach volumes.

  • Create, list, and delete volume snapshots.

  • Create a volume from a snapshot.

  • Copy an image to a volume.

  • Copy a volume to an image.

  • Clone a volume.

  • Extend a volume.

  • Get volume statistics.

 HNAS storage requirements

Before using the iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or the SSC CLI to create storage pools and file systems, and to assign an EVS. Make sure that the file systems used are not created as replication targets. Additionally:

For NFS:

Create NFS exports, choose a path for them (it must be different from "/") and set the Show snapshots option to hide and disable access.

Also, configure the norootsquash option as "* (rw, norootsquash)", so that the cinder services can change the permissions of their volumes.

In order to use the hardware-accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to the HNAS command line reference to see how to configure this option.

For iSCSI:

You need to set an iSCSI domain.

 Block storage host requirements

The HNAS driver is supported for Red Hat, SUSE Cloud and Ubuntu Cloud. The following packages must be installed:
  1. nfs-utils for Red Hat

  2. nfs-client for SUSE

  3. nfs-common, libc6-i386 for Ubuntu (libc6-i386 only required on Ubuntu 12.04)

  4. If you are not using SSH, you need the HDS SSC package (hds-ssc-v1.0-1) to communicate with an HNAS array using the SSC command. This utility package is available in the RPM package distributed with the hardware through physical media, or it can be manually copied from the SMU to the Block Storage host.

 Package installation

If you are installing the driver from an RPM or DEB package, follow the steps below:

  1. Install SSC:

    In Red Hat:

    # rpm -i hds-ssc-v1.0-1.rpm

    Or in SUSE:

    # zypper install hds-ssc-v1.0-1.rpm

    Or in Ubuntu:

    # dpkg -i hds-ssc_1.0-1_all.deb
  2. Install the dependencies:

    In Red Hat:

    # yum install nfs-utils nfs-utils-lib

    Or in Ubuntu:

    # apt-get install nfs-common

    Or in SUSE:

    # zypper install nfs-client

    If you are using Ubuntu 12.04, you also need to install libc6-i386:

    # apt-get install libc6-i386
  3. Configure the driver as described in the "Driver Configuration" section.

  4. Restart all cinder services (volume, scheduler, and backup); example commands are shown below.
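The service names differ per distribution; the commands below are a sketch for typical Kilo-era packaging, so adjust them to your installation.

    In Red Hat:

    # systemctl restart openstack-cinder-volume openstack-cinder-scheduler openstack-cinder-backup

    Or in Ubuntu:

    # service cinder-volume restart
    # service cinder-scheduler restart
    # service cinder-backup restart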

 Driver configuration

The HDS driver supports the concept of differentiated services (also referred to as quality of service) by mapping volume types to services provided through HNAS.

HNAS supports a variety of storage options and file system capabilities, which are selected through the definition of volume types and the use of multiple back ends. The driver maps up to four volume types into separate exports or file systems, and can support any number when using multiple back ends.

The configuration for the driver is read from an XML-formatted file (one per back end), which you need to create; set its path in the cinder.conf configuration file. Below is the configuration needed in the cinder.conf configuration file [1]:

enabled_backends = hnas_iscsi1, hnas_nfs1

For HNAS iSCSI driver create this section:

volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-ISCSI

For HNAS NFS driver create this section:

volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_config_file.xml
volume_backend_name = HNAS-NFS
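For reference, the assembled multi-back-end configuration in cinder.conf would look like the following sketch. The bracketed section names and the per-back-end XML file names are illustrative; each section name must match an entry in enabled_backends:

enabled_backends = hnas_iscsi1, hnas_nfs1

[hnas_iscsi1]
volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /path/to/config/hnas_iscsi_config_file.xml
volume_backend_name = HNAS-ISCSI

[hnas_nfs1]
volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /path/to/config/hnas_nfs_config_file.xml
volume_backend_name = HNAS-NFS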

The XML file has the following format:

<?xml version = "1.0" encoding = "UTF-8" ?>
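<!-- A minimal sketch of the rest of the file, assembled from the options
     described in the tables below; addresses, labels, and paths are
     illustrative. -->
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <hnas_cmd>ssc</hnas_cmd>
  <username>supervisor</username>
  <password>supervisor</password>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>fs01</hdp>
  </svc_0>
</config>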

 HNAS volume driver XML configuration options

An OpenStack Block Storage node using HNAS drivers can have up to four services. Each service is defined by a svc_n tag (svc_0, svc_1, svc_2, or svc_3 [2], for example). These are the configuration options available for each service label:

Table 2.5. Configuration options for service labels

Option | Type | Default | Description
volume_type | Required | default | When a create_volume call with a certain volume type happens, the volume type will try to be matched up with this tag. In each configuration file you must define the default volume type in the service labels and, if no volume type is specified, the default is used. Other labels are case sensitive and should match exactly. If no configured volume types match the incoming requested type, an error occurs in the volume creation.
iscsi_ip | Required only for iSCSI | | An iSCSI IP address dedicated to the service.
hdp | Required | | For the iSCSI driver: virtual file system label associated with the service. For the NFS driver: path to the volume (<ip_address>:/<path>) associated with the service. Additionally, this entry must be added to the file used to list available NFS shares. This file is located, by default, in /etc/cinder/nfs_shares, or you can specify its location in the nfs_shares_config option in the cinder.conf configuration file.
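For the NFS driver, that shares file is a plain list of exports, one per line. A minimal /etc/cinder/nfs_shares, with an illustrative address and path:

172.17.39.132:/cinder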

These are the configuration options available in the config section of the XML config file:

Table 2.6. Configuration options

Option | Type | Default | Description
mgmt_ip0 | Required | | Management Port 0 IP address. Should be the IP address of the "Admin" EVS.
hnas_cmd | Optional | ssc | Command to communicate to the HNAS array.
chap_enabled | Optional (iSCSI only) | True | Boolean tag used to enable the CHAP authentication protocol.
username | Required | supervisor | Username; it is always required on HNAS.
password | Required | supervisor | Password; it is always required on HNAS.
svc_0, svc_1, svc_2, svc_3 | Optional (at least one label has to be defined) | | Service labels: these four predefined names identify four different sets of configuration options. Each can specify an HDP and a unique volume type.
cluster_admin_ip0 | Optional if ssh_enabled is True | | The address of the HNAS cluster admin.
ssh_enabled | Optional | False | Enables SSH authentication between the Block Storage host and the SMU.
ssh_private_key | Required if ssh_enabled is True | | Path to the SSH private key used to authenticate to the HNAS SMU. The public key must be uploaded to the HNAS SMU using ssh-register-public-key (an SSH subcommand). Note that copying the public key to HNAS using ssh-copy-id does not work properly, as the SMU periodically wipes out those keys.

 Service labels

The HNAS driver supports differentiated types of service using service labels. It is possible to create up to four types of them, for example gold, platinum, silver, and ssd.

After creating the services in the XML configuration file, you must configure one volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the <volume_type> section of that service. If this is not set, OpenStack Block Storage schedules the volume creation to the pool with the largest available free space, or according to other criteria configured in volume filters.

$ cinder type-create 'default'
$ cinder type-key 'default' set service_label='default'
$ cinder type-create 'platinum-tier'
$ cinder type-key 'platinum-tier' set service_label='platinum'
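For these labels to match a service, the corresponding svc_n entry in the XML file must carry the same name in its <volume_type> tag. An illustrative NFS fragment, with an assumed address and path:

<svc_1>
  <volume_type>platinum</volume_type>
  <hdp>172.17.39.132:/platinum</hdp>
</svc_1>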

 Multi-back-end configuration

If you use multiple back ends and intend to enable the creation of a volume in a specific back end, you must configure volume types to set the volume_backend_name option to the appropriate back end. That is, create volume_type configurations with the same volume_backend_name as defined in cinder.conf.

$ cinder type-create 'iscsi'
$ cinder type-key 'iscsi' set volume_backend_name='HNAS-ISCSI'
$ cinder type-create 'nfs'
$ cinder type-key 'nfs' set volume_backend_name='HNAS-NFS'

You can deploy multiple OpenStack HNAS driver instances, each controlling a separate HNAS array. Each service (svc_0, svc_1, svc_2, svc_3) on the instances needs to have a volume_type and service_label metadata associated with it. If no metadata is associated with a pool, the OpenStack Block Storage filtering algorithm selects the pool with the largest available free space.
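A volume request is then directed to a specific back end by passing the volume type. For example, to create a 1 GB volume on the NFS back end defined above:

$ cinder create --volume-type 'nfs' 1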

 SSH configuration

Instead of using SSC on the Block Storage host and storing its credentials in the XML configuration file, the HNAS driver supports SSH authentication. To configure it:

  1. If you do not have an SSH key pair already generated, create one on the Block Storage host (leave the passphrase empty):

    $ mkdir -p /opt/hds/ssh
    $ ssh-keygen -f /opt/hds/ssh/hnaskey
  2. Change the owner of the key to cinder (or to the user under which the volume service runs):

    # chown -R cinder:cinder /opt/hds/ssh
  3. Create the directory "ssh_keys" in the SMU server:

    $ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
  4. Copy the public key to the "ssh_keys" directory:

    $ scp /opt/hds/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
  5. Access the SMU server:

    $ ssh [manager|supervisor]@<smu-ip>
  6. Run the command to register the SSH keys:

    $ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
  7. Check the communication with HNAS in the Block Storage host:

    $ ssh -i /opt/hds/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'

<cluster_admin_ip0> is "localhost" for single-node deployments. This command should return a list of the available file systems on HNAS.

 Editing the XML config file

  1. Set the "username".

  2. Enable SSH by adding the line "<ssh_enabled>True</ssh_enabled>" under the "<config>" section.

  3. Set the private key path: "<ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>" under the "<config>" section.

  4. If the HNAS is in a multi-cluster configuration, set "<cluster_admin_ip0>" to the cluster node admin IP. In a single-node HNAS, leave it empty. (A consolidated sketch of these settings follows this list.)

  5. Restart the cinder service.
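Assembled, the SSH-related part of the "<config>" section would look like the following sketch; the key path and IP address are illustrative:

<config>
  <!-- other options ... -->
  <username>supervisor</username>
  <ssh_enabled>True</ssh_enabled>
  <ssh_private_key>/opt/hds/ssh/hnaskey</ssh_private_key>
  <cluster_admin_ip0>10.0.0.1</cluster_admin_ip0>
</config>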

 Additional notes

  • The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.

  • After changing the configuration on the storage, the OpenStack Block Storage driver must be restarted.

  • Due to an HNAS limitation, the HNAS iSCSI driver allows only 32 volumes per target.

  • On Red Hat, if the system is configured to use SELinux, you need to set virt_use_nfs = on for the NFS driver to work properly, as shown below.
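The SELinux boolean can be set persistently with setsebool:

    # setsebool -P virt_use_nfs on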

[1] The configuration file location may differ.

[2] There is no relative precedence or weight among these four labels.
