Infortrend volume driver

The Infortrend volume driver is a Block Storage driver providing iSCSI and Fibre Channel support for Infortrend storage systems.

Supported operations

The Infortrend volume driver supports the following volume operations:

  • Create, delete, attach, and detach volumes.
  • Create and delete a snapshot.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Migrate a volume with back-end assistance.
  • Live migrate an instance with volumes hosted on an Infortrend backend.

System requirements

To use the Infortrend volume driver, the following settings are required:

Set up Infortrend storage

  • Create logical volumes in advance.
  • On the host side, set the Peripheral device type to No Device Present (Type=0x7f).

Set up cinder-volume node

  • Install Oracle Java 7 or later.
  • Download the Infortrend storage CLI from the release page, and place it in the default path /opt/bin/Infortrend/.
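The two requirements above can be checked with a short script. This is a minimal sketch: the jar path is the driver's documented default, and the helper name is ours, not part of the driver.

```python
import os
import shutil

# Default CLI path, matching the infortrend_cli_path driver option below.
DEFAULT_CLI_JAR = "/opt/bin/Infortrend/raidcmd_ESDS10.jar"

def check_prerequisites(cli_jar=DEFAULT_CLI_JAR):
    """Report whether this cinder-volume node has a Java runtime and
    the Infortrend CLI jar in place. Returns human-readable lines."""
    status = []
    status.append("java: found" if shutil.which("java")
                  else "java: missing - install Oracle Java 7 or later")
    status.append("cli: found at %s" % cli_jar if os.path.isfile(cli_jar)
                  else "cli: missing - download the CLI to /opt/bin/Infortrend/")
    return status

for line in check_prerequisites():
    print(line)
```

Run this on each cinder-volume node before starting the service; if the jar lives elsewhere, set infortrend_cli_path accordingly (see the driver options below).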

Driver configuration

On cinder-volume nodes, configure the driver in /etc/cinder/cinder.conf using the following options:

Driver options

Description of Infortrend volume driver configuration options

infortrend_cli_max_retries = 5
    (Integer) Maximum number of CLI retries. The default is 5.
infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar
    (String) Absolute path to the Infortrend CLI. The default is /opt/bin/Infortrend/raidcmd_ESDS10.jar.
infortrend_cli_timeout = 30
    (Integer) Timeout for CLI copy operations, in minutes. Applies to migrate volume, create cloned volume, and create volume from snapshot. The default is 30 minutes.
infortrend_pools_name =
    (String) Comma-separated list of Infortrend RAID pool names.
infortrend_provisioning = full
    (String) Provisioning type for volumes. The supported options are full and thin. The default is full.
infortrend_slots_a_channels_id = 0,1,2,3,4,5,6,7
    (String) Comma-separated list of Infortrend RAID channel IDs on slot A for OpenStack usage. The default is channels 0~7.
infortrend_slots_b_channels_id = 0,1,2,3,4,5,6,7
    (String) Comma-separated list of Infortrend RAID channel IDs on slot B for OpenStack usage. The default is channels 0~7.
infortrend_tiering = 0
    (String) Tiering level for volumes. The supported levels are 0, 2, 3, and 4. The default is level 0.

iSCSI configuration example

default_volume_type = IFT-ISCSI
enabled_backends = IFT-ISCSI

volume_driver = cinder.volume.drivers.infortrend.infortrend_iscsi_cli.InfortrendCLIISCSIDriver
volume_backend_name = IFT-ISCSI
infortrend_pools_name = POOL-1,POOL-2
infortrend_slots_a_channels_id = 0,1,2,3
infortrend_slots_b_channels_id = 0,1,2,3

Fibre Channel configuration example

default_volume_type = IFT-FC
enabled_backends = IFT-FC

volume_driver = cinder.volume.drivers.infortrend.infortrend_fc_cli.InfortrendCLIFCDriver
volume_backend_name = IFT-FC
infortrend_pools_name = POOL-1,POOL-2,POOL-3
infortrend_slots_a_channels_id = 4,5

Multipath configuration

  • Enable multipath for image transfer in /etc/cinder/cinder.conf.

    use_multipath_for_image_xfer = True

    Restart the cinder-volume service.

  • Enable multipath for volume attach and detach in /etc/nova/nova.conf.

    volume_use_multipath = True

    Restart the nova-compute service.

Extra spec usage

  • infortrend:provisioning - Defaults to full provisioning; the valid values are thin and full.

  • infortrend:tiering - Defaults to using all tiers; the valid values are subsets of 0, 1, 2, 3.

    If multiple pools are configured in cinder.conf, these extra specs can be set per pool, with entries separated by semicolons.

    For example:

    infortrend:provisioning: POOL-1:thin; POOL-2:full

    infortrend:tiering: POOL-1:all; POOL-2:0; POOL-3:0,1,3
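To illustrate the per-pool format above (POOL:value entries separated by semicolons), here is a small parser sketch; the function is hypothetical and not part of the driver.

```python
def parse_pool_spec(spec):
    """Parse a per-pool Infortrend extra spec string such as
    'POOL-1:thin; POOL-2:full' into a {pool: value} dict."""
    result = {}
    for entry in spec.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        # Split on the first ':' only, so tiering values like '0,1,3' survive.
        pool, _, value = entry.partition(":")
        result[pool.strip()] = value.strip()
    return result

print(parse_pool_spec("POOL-1:thin; POOL-2:full"))
# → {'POOL-1': 'thin', 'POOL-2': 'full'}
```

A pool name without a value (for example a bare "POOL-1") would map to an empty string here; the real driver's validation behavior for malformed specs is not covered by this sketch.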

For more details, see the Infortrend documentation.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.