Wallaby Series Release Notes

18.0.0-5

Bug Fixes

  • PowerFlex driver bug #1897598: Fixed a bug in PowerFlex storage-assisted volume migration where migration was performed without converting the volume type in cases where it should have been converted to or from thin/thick provisioning.

18.0.0

Prelude

Welcome to the Wallaby release of the OpenStack Block Storage service (cinder). With this release, the Block Storage API version 3 has reached microversion 3.64.

New Features

  • Added revert to snapshot feature in Nimble driver.

  • IBM Spectrum Virtualize: Added support for creating a group from a source when the source is a replicated group or a consistency group snapshot of a replicated group.

  • IBM DS8000 Driver: Add support for revert-to-snapshot operation.

  • The Huawei FusionStorage Cinder driver (dsware) now supports OceanStor 100D Storage.

  • Zadara VPSA Driver: Added support for the cinder features volume manage, snapshot manage, list manageable volumes, list manageable snapshots, multiattach, and IPv6.

  • Starting with API microversion 3.64, an encryption_key_id attribute is included in the response body of volume and backup details when the associated volume is encrypted.
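
    For example, using the openstack client with an explicit microversion (a minimal illustration; the volume name is hypothetical):

      $ openstack --os-volume-api-version 3.64 \
          volume show my-encrypted-vol -c encryption_key_id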

  • Added new backup driver to enable backing up cinder volumes to S3-compatible storage. See the reference S3 backup driver for more information.
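
    A minimal cinder.conf sketch for enabling the driver is shown below; the endpoint, bucket, and credential values are placeholders, and the reference S3 backup driver documentation should be consulted for the authoritative option names:

      [DEFAULT]
      backup_driver = cinder.backup.drivers.s3.S3BackupDriver
      backup_s3_endpoint_url = http://s3.example.com
      backup_s3_store_bucket = cinder-backups
      backup_s3_store_access_key = <access-key>
      backup_s3_store_secret_key = <secret-key>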

  • Added support for authenticity verification through self-signed certificates for JovianDSS data storage, and added support for revert to snapshot functionality. Expanded unit-test coverage for the JovianDSS driver.

  • New Cinder volume driver for KIOXIA Kumoscale storage systems. The driver supports the NVMeOF protocol.

  • NetApp ONTAP driver: added support for FlexGroup pool using the NFS mode. There are several considerations for using the driver with it:

    1. The FlexGroup pool is only supported using ONTAP storage 9.8 or greater.

    2. The FlexGroup pool has a different view of aggregate capabilities, reporting them as a list of elements instead of a single element. Those capabilities are netapp_aggregate, netapp_raid_type, netapp_disk_type and netapp_hybrid_aggregate. The netapp_aggregate_used_percent capability is an average of the used percent of all the FlexGroup’s aggregates.

    3. The utilization capability is not calculated for FlexGroup pools; it is always set to the default of 50.

    4. The driver does not support consistency groups with volumes on FlexGroup pools.

    5. The driver does not support multi-attach with volumes on FlexGroup pools.

    6. For volumes on a FlexGroup pool, the clone volume, create snapshot and create volume from an image operations are implemented as in the generic NFS driver, and hence do not rely on the ONTAP storage to perform them.

    7. A driver with FlexGroup pools has snapshot support disabled by default. To enable it, you must set nfs_snapshot_support to true in the backend’s configuration section of the cinder configuration file (see the example after this list).

    8. The driver image cache is not applied to volumes on FlexGroup pools, though the core image cache can still be used to avoid downloading an image twice.

    9. Given that the FlexGroup pool may be on several cluster nodes, the QoS minimum support is only enabled if all nodes support it.
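
    As noted in item 7 above, snapshot support must be enabled explicitly. A minimal cinder.conf sketch follows; the backend section name is hypothetical, and the remaining values are the standard NetApp driver settings:

      [ontap-flexgroup]
      volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
      netapp_storage_family = ontap_cluster
      netapp_storage_protocol = nfs
      nfs_snapshot_support = true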

  • NetApp ONTAP driver: Added support for Adaptive QoS specs. The driver now accepts expectedIOPSperGiB, peakIOPSperGiB, expectedIOPSAllocation, peakIOPSAllocation, absoluteMinIOPS and blockSize. The peakIOPSperGiB and expectedIOPSperGiB specs must be set together. The expectedIOPSperGiB and absoluteMinIOPS specs are only guaranteed by ONTAP AFF systems. All specs require ONTAP 9.4 or greater, except the expectedIOPSAllocation and blockSize specs, which require at least 9.5.
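
    For example, an Adaptive QoS spec could be created and associated with a volume type along these lines (the spec name, values, and IDs are illustrative):

      $ cinder qos-create aqos-example consumer=back-end \
          expectedIOPSperGiB=128 peakIOPSperGiB=512
      $ cinder qos-associate <qos_spec_id> <volume_type_id>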

  • NetApp ONTAP driver: Added support for QoS Min (floor) throughput specs. The driver now accepts minIOPS and minIOPSperGiB specs, which can be set either individually or along with Max (ceiling) throughput specs. The feature requires ONTAP All Flash FAS (AFF) 9.3 or greater for NFS and 9.2 or greater for iSCSI and FCP. It also works with Select Premium with SSD and with C190 storage with at least ONTAP 9.6.
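
    A sketch of a combined Min/Max spec (illustrative name and values):

      $ cinder qos-create minmax-example consumer=back-end \
          minIOPS=500 maxIOPS=1000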

  • NetApp ONTAP driver: Added a new driver-specific capability called netapp_qos_min_support. It is used during the scheduling phase to filter the pools that support the QoS minimum (floor) specs.

  • PowerStore driver: Add Consistency Groups support.

  • PowerStore driver: Add OpenStack replication v2.1 support.

  • New Cinder volume driver for TOYOU ACS5000. The new driver supports iSCSI.

  • Added new Ceph iSCSI driver rbd_iscsi. This new driver is derived from the rbd driver and allows all the same features as the rbd driver. The only difference is that volume attachments are done via iSCSI.

  • The cinder-manage command now includes a new quota category with two possible actions check and sync to help administrators manage out of sync quotas on long running deployments.
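
    For example (assuming the optional --project-id filter; omitting it would process all projects):

      $ cinder-manage quota check --project-id <project_id>
      $ cinder-manage quota sync --project-id <project_id>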

  • Dell EMC PowerVault ME Series storage arrays are now supported.

  • Added Peer Persistence support for the HPE Primera backend.

  • HPE 3PAR Driver: Added iSCSI driver support for Primera 4.2 or higher versions.

  • IBM Spectrum Virtualize Family driver: Added functionality that returns the throttle rate for maximum IOPS and bandwidth of all VDisks of a specified storage pool.

  • Added support for Open-E JovianDSS data storage. In addition to the minimum required functions, the driver supports the Open-E disaster recovery feature and cascade volume deletion.

  • Introduces microversion (MV) 3.63, which includes volume type ID in the volume details JSON response. This MV affects the volume detail list (GET /v3/{project_id}/volumes/detail), and volume-show (GET /v3/{project_id}/volumes/{volume_id}) calls.
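
    For example, a raw API request opting in to the new microversion might look like this (the token, endpoint, and project ID are placeholders):

      $ curl -s -H "X-Auth-Token: $TOKEN" \
          -H "OpenStack-API-Version: volume 3.63" \
          "$CINDER_ENDPOINT/v3/$PROJECT_ID/volumes/detail"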

  • Added consistency group support in Nimble Storage driver.

  • Pure Storage FlashArray driver: Enabled support for Active/Active to both the iSCSI and FC driver. This allows users to configure Pure Storage backends in clustered environments.

  • Added support for QoS in the Pure Storage drivers. QoS support is available from Purity//FA 5.3.0.

  • Cinder now stores the format of the backing file (raw or qcow2) for FS backends in the volume admin metadata, and includes the format in the connection_info returned in the Attachments API. Previously cinder tried to introspect the format, and under some circumstances an incorrect format would be deduced; this will still be the case for legacy volumes. Explicitly storing the format avoids this issue for newly created volumes. See the spec for more info.
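
    As an illustration only (the exact fields vary by driver and connection type), the connection_info for an NFS-backed volume might now carry the stored format alongside the usual export details:

      {
        "driver_volume_type": "nfs",
        "data": {
          "export": "192.0.2.10:/srv/nfs/cinder",
          "name": "volume-<uuid>",
          "format": "qcow2"
        }
      }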

  • IBM Spectrum Virtualize: Adds support for retype operation on global mirror volumes.

  • IBM Spectrum Virtualize Family: Added support for revert to snapshot for global-mirror volume.

Known Issues

  • Anomalies with encrypted volumes

    For the most part, users are happy with the cinder feature Volume encryption supported by the key manager. There are, however, some edge cases that have revealed bugs that you and your users should be aware of.

    First, some background. The Block Storage API supports the creation of volumes in gibibyte (GiB) units. When a volume of a non-encrypted volume type of size n is created, the volume contains n GiB of usable space. When a volume of an encrypted type is requested, however, the volume contains less than n GiB of usable space because the encryption metadata that must be stored within that volume in order for the volume to be usable consumes an amount of the otherwise usable space.

    Although the encryption metadata consumes less than 1% of the volume, suppose that a user wants to retype a volume of a non-encrypted type to an encrypted type of the same size. If the non-encrypted volume is “full”, we are in the position of trying to fit 101% of its capacity into the encrypted volume, which is not possible under the current laws of physics, and the retype should fail (see Known Issues for volume encryption in the cinder documentation).

    (Note that whether a volume should be considered “full”, even if it doesn’t contain exactly n GiB of data for an n GiB volume, can depend upon the storage backend technology used.)

    A similar situation can arise when a user creates a volume of an encrypted volume type from an image in Glance. If the image happens to be sized very close to the gibibyte boundary given by the requested volume size, the operation may fail if the image data plus the encryption metadata exceeds the requested volume size.

    So far, the behavior isn’t anomalous; it’s basically what you’d expect once you are aware that the encryption metadata must be stored in the volume and that it consumes some space.

    We recently became aware of the following anomalies, however, when using the current RBD driver with a Ceph storage backend.

    • When creating an encrypted volume from an image in Glance that was created from a non-encrypted volume uploaded as an image, or an image that just happens to be sized very close to the gibibyte boundary given by the requested volume size, the space consumed by the encryption header may not leave sufficient space for the data contained in the image. In this case, the data is silently truncated to fit within the requested volume size.

    • Similarly, when creating an encrypted volume from a snapshot of an encrypted volume, if the amount of data in the original volume at the time the snapshot was created is very close to the gibibyte boundary given by the volume’s size, it is possible for the data in the new volume to be silently truncated.

    Not to put too fine a point on it, silent truncation is worse than failure, and the Cinder team will be addressing these issues in the next release. Additionally (as if that isn’t bad enough!), we suspect that the above anomalies will also occur when using volume encryption with NFS-based storage backends, though this has not yet been reported or confirmed.

Upgrade Notes

  • The Zadara VPSA Driver has been updated to support the JSON format and has been reorganized with a new code layout. The module path cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver should now be updated to cinder.volume.drivers.zadara.zadara.ZadaraVPSAISCSIDriver in cinder.conf.
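
    For example (the backend section name is hypothetical; the module path is the one given above):

      [zadara-iscsi]
      volume_driver = cinder.volume.drivers.zadara.zadara.ZadaraVPSAISCSIDriver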

  • RBD driver: Prior to this release, the Cinder project did not have a statement concerning what versions of Ceph are supported by Cinder. We hereby announce that:

    • For a given OpenStack release, Cinder supports the current Ceph active stable releases plus the two prior releases.

    • For any OpenStack release, it is expected that the versions of the Ceph client and server are in alignment.

    The Ceph RADOS Block Device (RBD) driver documentation has been updated to reflect this policy and explains it in more detail.

  • This release contains a fix for Bug #1908315, which changes the default value of the policy governing the Block Storage API action Reset group snapshot status to make the action administrator-only. This policy was inadvertently changed to be admin-or-owner during the Queens development cycle.

    The policy is named group:reset_group_snapshot_status.

    • If you have a custom value for this policy in your cinder policy configuration file, this change to the default value will not affect you.

    • If you have been aware of this regression and like the current (incorrect) behavior, you may add the following line to your cinder policy configuration file to restore that behavior:

      "group:reset_group_snapshot_status": "rule:admin_or_owner"
      

      This setting is not recommended by the Cinder project team, as it may allow end users to put a group snapshot into an invalid status with indeterminate consequences.

    For more information about the cinder policy configuration file, see the policy.yaml section of the Cinder Configuration Guide.

  • Ceph/RBD volume backends will now assume exclusive cinder pools, as if they had rbd_exclusive_cinder_pool = true in their configuration.

    This helps deployments with a large number of volumes and prevents issues on deployments with a growing number of volumes, at the small cost of slightly less accurate stats being reported to the scheduler.
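
    Deployments that prefer the previous behavior can opt out explicitly; a sketch, with a hypothetical backend section name:

      [ceph-rbd]
      volume_driver = cinder.volume.drivers.rbd.RBDDriver
      # restore the previous, more accurate but slower stats gathering
      rbd_exclusive_cinder_pool = false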

  • The TSM backup driver has been removed. Please migrate your backups before upgrading.

  • Cinder now requires LVM version 2.02.107 or newer.

  • New configuration options have been added to enable mTLS between cinder and glance: use glance_certfile and glance_keyfile in the [DEFAULT] section of the cinder configuration file.
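
    A sketch of the new options (the certificate paths are placeholders):

      [DEFAULT]
      glance_certfile = /etc/cinder/ssl/glance-client.crt
      glance_keyfile = /etc/cinder/ssl/glance-client.key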

  • The cinder.quota.NestedDbQuotaDriver quota driver was deprecated in the Train release and has been eligible for removal since the Ussuri release. This release removes NestedDbQuotaDriver support.

Deprecation Notes

  • PowerStore driver: The powerstore_appliances option is deprecated and will be removed in a future release. The driver does not use this option to determine which appliances to use; PowerStore uses its own load balancer instead.

  • Use of JSON formatted policy files was deprecated by the oslo.policy library during the Victoria development cycle. As a result, this deprecation is being noted in the Wallaby cycle, with an anticipated future removal of JSON formatted file support by oslo.policy. Operators will need to convert to YAML policy files. Use the oslopolicy-convert-json-to-yaml tool, as shown below, to convert an existing JSON formatted policy file to YAML in a backward compatible way.
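
    For example (the file paths are placeholders):

      $ oslopolicy-convert-json-to-yaml --namespace cinder \
          --policy-file /etc/cinder/policy.json \
          --output-file /etc/cinder/policy.yaml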

Bug Fixes

  • Nimble driver bug #1918099: Fix revert to snapshot not working as expected.

  • Bug #1837524: IBM Spectrum Virtualize Family: Fixed create_consistency_group when the volume has a mirror copy and mdisk_grp_name=many.

  • Pure Storage driver bug #1870103: Ensure that unmanaged volumes do not exceed the maximum character length on FlashArray.

  • IBM DS8000 Driver Bug #1884030: Support for volume_name_template configuration option.

  • Bug #1887859: Fix for a race in Cinder Backup Manager with double initialization of backup driver.

  • Bug #1887962: PowerMax driver: Fixed non-temporary snapshots being incorrectly deleted by do_sync_check (which is used in multiple operations) due to a missing check for the temporary snapshot name.

  • Bug #1888951: Fixed an issue with creating a backup from snapshot with NFS volume driver.

  • IBM Spectrum Virtualize driver Bug #1890254: Fixed check_vdisk_fc_mappings not deleting all FlashCopy mappings while deleting a source volume, when multiple clones and snapshots are created from a common source volume.

  • Bug #1890589: IBM Spectrum Virtualize Family: Fixed issues in create_flashcopy_to_consistgrp; the create_vdisk and mkfcmap calls now use the iogrp and qos values from opts if that data exists in opts.

  • Bug #1890591: IBM Spectrum Virtualize Family: Fixed an issue in do_setup of StorwizeSVCCommonDriver so that pool information is saved in stats during initialisation.

  • IBM Spectrum Virtualize Family driver Bug #1892034: Fixed an issue in get_host_from_connector where the volume name passed as input was not validated to get the host during terminate connection.

  • Bug #1894381: Fixed a bug where cinder-manage cluster remove did not work and raised a NoSuchOptError.

  • Bug #1895510: IBM DS8K: Fixed a compatibility issue when using the IBM DS8K driver with storage version R9 and later.

  • IBM Spectrum Virtualize Family Bug #1896214: Fixed issues in change_vdisk_iogrp when retyping a volume between I/O groups: an exception is now raised if addvdiskaccess fails, and if movevdisk fails, rmvdiskaccess is performed on the new I/O group before the retype operation fails.

  • IBM Spectrum Virtualize Family Bug #1898746: Fixed an issue with host-failover and group-failover that impacted storage back-end performance.

  • RBD driver Bug #1898918: Fixed thread blocking caused by the flatten operation when cloning a volume. The flatten operation is now executed in a different thread.

  • Bug #1900979: Fixed a bug when using PowerStore with CHAP enabled as a storage backend.

  • Bug #1915800: Add support for ports filtering in XtremIO driver.

  • RBD driver bug #1901241: Fixed an issue where decreasing the rbd_max_clone_depth configuration option would prevent volumes that had already exceeded that depth from being cloned.

  • IBM DS8000 Driver Bug #1903648: Fixed an os_type compatibility issue and a hostname template issue.

  • Bug #1904440: When an iSCSI/FC encrypted volume was cloned, the rekey operation would stamp the wrong encryption key on the newly cloned volume. This resulted in a volume that could not be attached. It does not present a security problem.

  • Bug #1904892: Fix cinder manage operations for NFS backends using IPv6 addresses in the NFS server address. These were previously rejected by the Cinder API.

  • PowerMax Driver bug #1905564: Fixed the remote SRP not being assigned to the volume’s Host when performing retype during failover-promotion.

  • IBM Spectrum Virtualize Family Bug #1905988: Fixed volume IOPS throttling issue with a new option to set volume IOPS based on volume size.

  • Bug #1906528: IBM Spectrum Virtualize Family driver: Fixed an issue with host-failback and group-failback that impacted storage back-end performance.

  • RBD driver bug #1907964: Added support for fast-diff on backup images stored in Ceph. Provided fast-diff is supported by the backend, it will automatically be enabled and used. With fast-diff enabled, the generation of diffs between images and snapshots, as well as determining the actual data usage of a snapshot, is sped up significantly.

  • Bug #1908315: Corrected the default check string for the group:reset_group_snapshot_status policy to make it admin-only. This policy governs the Block Storage API action Reset group snapshot status, which by default is supposed to be an administrator-only action.

  • Bug #1912451: IBM Spectrum Virtualize Family driver: Updated replication properties, which were missing from the volume metadata, for HyperSwap volumes and volumes with replication enabled.

  • IBM Spectrum Virtualize Family driver Bug #1912564: Fixed HyperSwap volume deletion issue.

  • Bug #1913449: Fixed the RBD driver’s _update_volume_stats() failing when using the Ceph Pacific python rados libraries. This failed because a str, rather than bytes, was being passed to cluster.mon_command().

  • Bug #1920237: Fixed the backup manager calling volume remove_export() without waiting for it to complete when detaching a volume after backup, which caused problems when a subsequent operation started on that volume before it had fully detached.

  • PowerStore driver Bug #1920729: Fix iSCSI targets not being returned from the REST API call if targets are used for multiple purposes (iSCSI target, Replication target, etc.).

  • Bug #1870367: Partially fixed the NFS and Quobyte drivers by no longer allowing a volume to be extended while it is attached, to prevent failures due to Qemu internal locking mechanisms.

  • Ceph/RBD: Fixed Cinder becoming non-responsive and stats gathering taking longer than its period. (Related-Bug #1704106)

  • Bug #1913054: Fix for creating a clone of an encrypted volume for drivers that require additional information to attach.

  • Bug #1902852: Fixed a Python traceback message being thrown when using cinder-manage <category> without an action for the category.

  • Bug #1917574: Fixed an issue where cinderclient requests to show a volume by name for non-admin users would result in the volume not being found for microversions 3.31 or later.

  • Hitachi driver bug #1908792: Fix for Hitachi driver allowing delete_volume after create_cloned_volume.

  • LVM driver bug #1901783: Fix unexpected delete volume failure due to unexpected exit code 139 on lvs command call.

  • HPMSA driver: The HPE MSA driver was updated to avoid using deprecated command syntax that has been removed in the latest version of the MSA API. This is required to support the newest firmware in the MSA 2060/1060.

  • Bug #1917797: Fix Cinder’s communication with the Glance API to correctly load mTLS certificates from config (glance_certfile and glance_keyfile in the [DEFAULT] section).

  • PowerMax driver: Fix to prevent an R2 volume being larger than the R1 so that an extend operation will not fail if the R2 happens to be larger than the requested extend size.

  • PowerMax driver: The contents of the initiator group are now checked against the contents of the connector regardless of whether the initiator_check option is enabled. This ensures an exception is raised if there is a mismatch, in all scenarios.

  • PowerMax driver: Enhancement to check the status of the ports in the port group so that any potential issue, like the ports being down, is highlighted early and clearly.

  • PowerMax Driver - bug #1908920: This offline R1 promotion fix resets the replication enabled and configuration metadata during a promotion retype with an offline R1 array. It also gets the management storage group name from the source extra_specs during promotion.

  • PowerMax Driver - Promotion RDF Group number fix: the remote array SID is now used to find the RDF group number when performing retype during failover.

  • Pure Storage driver: Added missing support for the host_personality setting for FC-based hosts.

  • Pure Storage FlashArray driver fix to ensure cinder_tempest_plugin consistency group tests pass.

  • Bug #1877164: Fixed retyping a volume with snapshots leaving the snapshots with the old type, which made the quotas immediately wrong for snapshots, and broke them even more after those snapshots were deleted.

  • Bug #1917450: Fix automatic quota refresh to correctly account for migrating volumes. During volume migration we’ll have 2 volumes in cinder and only one will be accounted for in quota usage.

  • Bug #1919161: Fix automatic quota refresh to correctly account for temporary volumes. During some cinder operations, such as create a backup from a snapshot, temporary volumes are created and are not counted towards quota usage, but the sync mechanism was counting them, thus incorrectly updating volume usage.

  • Bug #1697906: Fix until_refresh configuration changes not taking effect in a timely fashion or at all.

  • Bug #1484343: Fix creation of duplicated quota usage entries in DB.

  • Bug #1898587: Addressed cloning and API request timeout issues users may hit in certain environments by allowing timeout values for these operations to be configured through the cinder configuration file.

  • NetApp SolidFire driver Bug #1896112: Fixes an issue that may duplicate volumes during creation, in case the SolidFire backend successfully processes a request and creates the volume, but fails to deliver the result back to the driver (the response is lost). When this scenario occurs, the SolidFire driver will retry the operation, which previously resulted in the creation of a duplicate volume. This fix adds the sf_volume_create_timeout configuration option (default value: 60 seconds) which specifies an additional length of time that the driver will wait for the volume to become active on the backend before raising an exception.
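
    A sketch of tuning the new option (the backend section name is hypothetical; 120 is an example value, the default being 60 seconds):

      [solidfire]
      volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
      sf_volume_create_timeout = 120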

  • NetApp SolidFire driver Bug #1891914: Fix an error that might occur on cluster workload rebalancing or system upgrade, when an operation is made to a volume at the same time its connection is being moved to a secondary node.

Other Notes

  • Supported Ceph versions

    The Cinder project wishes to clarify its policy concerning what versions of Ceph are supported by Cinder.

    • For a given OpenStack release, Cinder supports the current Ceph active stable releases plus the two prior releases.

    • For any OpenStack release, it is expected that the versions of the Ceph client and server are in alignment.

    The Ceph RADOS Block Device (RBD) driver documentation has been updated to reflect this policy and explains it in more detail.

  • This note applies to deployments that are using the cinder configuration option volume_copy_bps_limit with a non-default value (the default is 0).

    The cinder-volume service currently depends on Linux Kernel Control Groups (cgroups) version 1 to control i/o throttling during some volume-copy and image-convert operations. At the time of this release, some Linux distributions may have changed to using cgroups v2 by default. Thus, you may need to take explicit steps to ensure that cgroups v1 is enabled on any OpenStack nodes running the cinder-volume service. This may entail setting specific Linux kernel parameters for these nodes. Consult your Linux distribution’s documentation for details.
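
    For example, on some systemd-based distributions, booting with the following kernel command-line parameter re-enables the cgroups v1 hierarchy (an illustrative example; consult your distribution’s documentation before applying it):

      systemd.unified_cgroup_hierarchy=false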

    For more information:

    • The cinder options associated with throttling are volume_copy_blkio_cgroup_name and volume_copy_bps_limit. They are described in the sample cinder configuration file for the Wallaby release.

    • For an example of distribution-specific information about cgroups, see OpenStack and cgroups v1 in the Debian 11 (“bullseye”) release notes.