VAST Data Volume Driver

The VAST Data Volume driver integrates the OpenStack Block Storage service with VAST Data’s storage system. Volumes are backed by VAST’s NVMe storage and are attached over NVMe/TCP, while management operations use VAST’s API.

This documentation explains how to configure Block Storage nodes to connect to the VAST Data storage system.

Prerequisites

Before configuring the VAST Data volume driver, ensure the following prerequisites are met:

Network Configuration

Ensure your OpenStack environment has network connectivity to the following (a quick connectivity check is shown after this list):

  • Management Network: Access to VAST VMS/Web UI (typically port 443)

  • Data Network: Access to VAST VIP pool for NVMe connections (port 4420)
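
To verify connectivity from a Block Storage or compute node, you can run a quick check such as the following (both addresses are placeholders; substitute your VMS address and a VIP from the NVMe pool):

# Management network: expect an HTTP status code back from the VMS endpoint
$ curl -k -s -o /dev/null -w "%{http_code}\n" https://10.0.0.100/

# Data network: expect a successful TCP connection to the NVMe/TCP port
$ nc -zv 10.0.0.201 4420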

NVMe Tools Installation

The NVMe CLI tools must be installed on all compute nodes that will attach VAST volumes:

# On RHEL/CentOS/Fedora
$ sudo yum install nvme-cli

# On Ubuntu/Debian
$ sudo apt-get install nvme-cli
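
You can confirm the installation by checking the reported version:

$ nvme version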

Kernel Modules

Load the necessary kernel modules for NVMe over Fabrics on all compute nodes:

$ sudo modprobe nvme
$ sudo modprobe nvme-fabrics

To ensure these modules load automatically on boot, add them to /etc/modules-load.d/nvme.conf:

$ echo "nvme" | sudo tee -a /etc/modules-load.d/nvme.conf
$ echo "nvme-fabrics" | sudo tee -a /etc/modules-load.d/nvme.conf

VAST Cluster Configuration

On your VAST cluster, ensure the following resources are configured:

  • A VIP pool for NVMe/TCP connections

  • A subsystem for block storage operations

  • Admin credentials or API token for management access

Driver options

Description of VAST Data volume driver configuration options. Each entry lists the configuration option with its default value, followed by its type and a description.

[DEFAULT]

san_ip = None

(String) Management IP of the VAST storage system. This is a required field.

san_api_port = 443

(Integer) Management API port of the VAST storage system.

san_login = admin

(String) Management username of the VAST storage system.

san_password = None

(String) Management password of the VAST storage system.

vast_api_token = None

(String) API token for accessing VAST management. If provided, it is used instead of san_login and san_password.

vast_vippool_name = None

(String) Name of the Virtual IP Pool for NVMe target discovery.

vast_subsystem = None

(String) Name of the NVMe subsystem on the VAST cluster used for block storage operations.

vast_tenant_name = None

(String) VAST tenant name. Required for additional filtering when multiple subsystems share the same name.

vast_volume_prefix = openstack-vol-

(String) Prefix for all newly created volumes.

vast_snapshot_prefix = openstack-snap-

(String) Prefix for all newly created snapshots.

driver_ssl_cert_verify = False

(Boolean) If set to True, the HTTP client will validate the SSL certificate of the VAST storage system.

driver_ssl_cert_path = None

(String) Path to a CA_BUNDLE file or a directory containing certificates of trusted CAs, used to validate the certificate presented by the VAST storage system.

Supported operations

The VAST Data volume driver supports the following operations:

  • Create, list, delete, attach (map), and detach (unmap) volumes

  • Create, list, and delete volume snapshots

  • Create a volume from a snapshot

  • Clone a volume

  • Extend a volume

  • Multi-attach volumes (attach the same volume to multiple instances)

Configuring the VAST Data Backend

This section details the steps required to configure the VAST Data storage driver for Cinder.

Configuration Steps

  1. Edit the /etc/cinder/cinder.conf configuration file.

  2. In the [DEFAULT] section, add vast to the enabled_backends parameter:

    [DEFAULT]
    enabled_backends = vast
    
  3. Add a new [vast] backend group section with the following required options:

    [vast]
    # The driver path (required)
    volume_driver = cinder.volume.drivers.vastdata.driver.VASTVolumeDriver
    
    # Backend name (required)
    volume_backend_name = vast
    
    # Management IP of the VAST storage system (required)
    san_ip = 10.0.0.100
    
    # Management API port (optional, default: 443)
    san_api_port = 443
    
    # Management username (required if not using API token)
    san_login = admin
    
    # Management password (required if not using API token)
    san_password = password123
    
    # API token (optional, replaces san_login/san_password)
    # vast_api_token = your-api-token-here
    
    # Virtual IP Pool for NVMe connections (required)
    vast_vippool_name = vip-pool1
    
    # Subsystem for NVMe (required)
    vast_subsystem = cinder-subsystem
    
    # Tenant name (optional, for multi-tenant environments)
    # vast_tenant_name = tenant1
    
    # Volume name prefix (optional, default: openstack-vol-)
    # vast_volume_prefix = openstack-vol-
    
    # Snapshot name prefix (optional, default: openstack-snap-)
    # vast_snapshot_prefix = openstack-snap-
    
    # SSL certificate verification (optional, default: False)
    # driver_ssl_cert_verify = True
    
    # Path to CA certificate file or directory (optional)
    # driver_ssl_cert_path = /etc/cinder/vast_ca.pem
    
  4. Restart the cinder-volume service to apply the configuration:

    # On systemd-based systems
    $ sudo systemctl restart openstack-cinder-volume.service
    
    # Or if using devstack
    $ sudo systemctl restart devstack@c-vol.service
    
  5. Verify that the service started successfully (sample output is shown below):

    $ sudo systemctl status openstack-cinder-volume.service
    $ openstack volume service list
    
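If the backend initialized correctly, cinder-volume appears with State up for the vast backend. The output of openstack volume service list is similar to the following (host names and timestamps will differ):

+------------------+-----------------+------+---------+-------+----------------------------+
| Binary           | Host            | Zone | Status  | State | Updated At                 |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller      | nova | enabled | up    | 2024-01-01T12:00:00.000000 |
| cinder-volume    | controller@vast | nova | enabled | up    | 2024-01-01T12:00:00.000000 |
+------------------+-----------------+------+---------+-------+----------------------------+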

Example Configuration

Here is a complete working example of a VAST backend configuration:

[DEFAULT]
enabled_backends = vast
debug = False

[vast]
volume_driver = cinder.volume.drivers.vastdata.driver.VASTVolumeDriver
volume_backend_name = vast
san_ip = 10.27.200.100
san_api_port = 443
san_login = admin
san_password = VastAdmin123!
vast_vippool_name = vip-pool-nvme
vast_subsystem = openstack-cinder
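
If you use token-based authentication instead, a minimal variant of the same backend section could look like this (the token value is a placeholder):

[vast]
volume_driver = cinder.volume.drivers.vastdata.driver.VASTVolumeDriver
volume_backend_name = vast
san_ip = 10.27.200.100
vast_api_token = your-api-token-here
vast_vippool_name = vip-pool-nvme
vast_subsystem = openstack-cinder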

SSL Certificate Verification

By default, SSL certificate verification is disabled. To secure HTTPS communication with the VAST storage system, enable certificate verification using one of the following options.

Option 1: Enable SSL verification with system CA bundle

[vast]
...
driver_ssl_cert_verify = True
...

Option 2: Enable SSL verification with custom CA certificate

  1. Obtain the SSL certificate for your VAST storage system (an example using openssl is shown after these steps).

  2. Copy the certificate to a location accessible by the cinder-volume service, for example: /etc/cinder/vast_ca.pem

  3. Configure the driver to use the custom certificate:

    [vast]
    ...
    driver_ssl_cert_verify = True
    driver_ssl_cert_path = /etc/cinder/vast_ca.pem
    ...
    
  4. Restart the cinder-volume service for the changes to take effect.
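
As an example of step 1, you can retrieve the certificate presented by the VMS endpoint with openssl (the address is a placeholder). Note that this captures the server certificate; if VMS uses a certificate signed by an internal CA, obtain the CA certificate from your administrator instead:

$ echo | openssl s_client -connect 10.0.0.100:443 -showcerts 2>/dev/null \
    | openssl x509 -outform PEM | sudo tee /etc/cinder/vast_ca.pem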

Note

If driver_ssl_cert_path is omitted when driver_ssl_cert_verify = True, the system’s default CA bundle will be used. The driver_ssl_cert_path can point to either a CA_BUNDLE file (e.g., /path/to/ca-bundle.crt) or a directory containing CA certificates (e.g., /etc/ssl/certs/).

Usage Examples

This section provides practical examples of common operations using the VAST Data volume driver with OpenStack CLI commands.

Create a Volume Type

First, create a volume type for VAST storage:

$ openstack volume type create vast \
     --property volume_backend_name=vast

Create a Volume

Create a new volume with the VAST volume type:

$ openstack volume create --size 100 --type vast my-vast-volume

This creates a 100 GiB volume named my-vast-volume on VAST storage.

List Volumes

View all volumes:

$ openstack volume list

View details of a specific volume:

$ openstack volume show my-vast-volume

Attach Volume to Instance

Attach a volume to a running instance:

$ openstack server add volume <instance-name-or-id> <volume-name-or-id>

Verify the attachment:

$ openstack server volume list my-instance

Detach Volume from Instance

Detach a volume from an instance:

$ openstack server remove volume <instance-name-or-id> <volume-name-or-id>

Extend a Volume

Increase the size of an existing volume:

$ openstack volume set --size <new-size> <volume-name-or-id>

Create a Volume Snapshot

Create a snapshot of an existing volume:

$ openstack volume snapshot create \
    --volume <volume-name-or-id> \
    --description "My snapshot description" \
    my-snapshot

List snapshots:

$ openstack volume snapshot list

View snapshot details:

$ openstack volume snapshot show my-snapshot

Create Volume from Snapshot

Create a new volume from an existing snapshot:

$ openstack volume create \
    --snapshot <snapshot-name-or-id> \
    --size 100 \
    --type vast \
    restored-volume

Note

The new volume size must be equal to or larger than the original snapshot size.

Delete a Snapshot

Delete a volume snapshot:

$ openstack volume snapshot delete <snapshot-name-or-id>

Clone a Volume

Create a new volume by cloning an existing volume:

$ openstack volume create \
    --source <source-volume-name-or-id> \
    --size 100 \
    --type vast \
    cloned-volume

Delete a Volume

Delete a volume (must be detached first):

$ openstack volume delete <volume-name-or-id>

Multi-attach Volumes

The VAST driver supports multi-attach, allowing a single volume to be attached to multiple instances simultaneously.

Create a multi-attach enabled volume type:

$ openstack volume type create vast-multiattach \
    --property multiattach="<is> True" \
    --property volume_backend_name=vast

Create a multi-attach volume:

$ openstack volume create \
    --size 100 \
    --type vast-multiattach \
    shared-volume

Attach to multiple instances:

$ openstack server add volume instance1 shared-volume
$ openstack server add volume instance2 shared-volume
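
You can confirm both attachments on the volume:

$ openstack volume show shared-volume -c multiattach -c attachments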

Troubleshooting

Common Issues

Volume attachment fails

  • Verify NVMe CLI tools are installed on compute nodes

  • Check that nvme and nvme-fabrics kernel modules are loaded

  • Ensure network connectivity between compute nodes and VAST VIP pool

  • Verify the VIP pool name and subsystem are configured correctly (see the diagnostic commands below)
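
The following commands, run on a compute node, can help narrow down attachment problems (the VIP address is a placeholder):

# Confirm the NVMe kernel modules are loaded
$ lsmod | grep nvme

# Confirm the NVMe CLI tools are installed
$ nvme version

# Discover NVMe/TCP targets on a VIP pool address
$ sudo nvme discover -t tcp -a 10.0.0.201 -s 4420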

Cannot create volumes

  • Verify the management settings (san_ip and either san_login/san_password or vast_api_token)

  • Check network connectivity to the VAST VMS (a basic check is shown below)

  • Ensure the subsystem exists on the VAST cluster

  • Verify sufficient capacity is available on the VAST cluster
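
To rule out basic management connectivity problems, you can verify that the VMS endpoint answers over HTTPS (the address is a placeholder) and that the backend service is up:

$ curl -k -s -o /dev/null -w "%{http_code}\n" https://10.0.0.100/
$ openstack volume service list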