Current Series Release Notes
26.0.0.0rc1-74
New Features
NetApp - The new ASAr2 driver class inherits from the existing ONTAP REST library, enabling reuse of the mature ONTAP codebase. This design extends the Cinder driver capabilities to support key volume operations on the ASAr2 platform, including:
* Volume Creation
* Volume Deletion
* Volume Attachment
* Volume Detachment
* Volume Extend
Dell PowerMax Driver: use the consistency exempt flag consistently. PowerMax allows volumes to be added, removed, or suspended without affecting the state of the SRDF/A or SRDF/Metro session or requiring that other volumes in the session be suspended. Known as --exempt for symcli and editStorageGroupActionParam in the PowerMax REST API, this capability is available for an SRDF group supporting an active SRDF/A session or an active SRDF/Metro session.
The PowerMax Cinder driver currently uses the exempt flag when volumes are added to SRDF groups, but not when volumes are removed. This incurs an unnecessary performance penalty that is resolved by this change.
Dell PowerMax driver: Added NVMe-TCP support.
NetApp ONTAP driver: Added support for self-signed certificates for the HTTPS transport used for management communication between Cinder and NetApp ONTAP.
ONTAP systems utilize self-signed certificates for HTTPS management access by default. These certificates are generated automatically during the initial setup or deployment of ONTAP. When ssl_cert_path is configured with the extracted certificate file (.PEM format), Cinder establishes HTTPS communication with full certificate validation. When ssl_cert_path is not provided, Cinder automatically uses HTTPS with an unverified SSL context, which provides encrypted communication but skips certificate validation. This allows secure transport while maintaining ease of configuration with ONTAP’s default self-signed certificates. Administrators can extract the certificate using tools such as openssl or curl for full certificate validation if desired.
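For reference, one way to extract the ONTAP management certificate into PEM format with openssl is sketched below; the management host name and output path are placeholders and should be adapted to your deployment before pointing ssl_cert_path at the resulting file.
# Fetch the self-signed certificate from the ONTAP management LIF (placeholder host/path)
openssl s_client -connect ontap-mgmt.example.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > /etc/cinder/ontap_selfsigned.pem
With the .PEM file supplied via ssl_cert_path, Cinder performs full certificate validation as described above; without it, the unverified HTTPS context is used.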
Cinder now supports setting up cgroups with the cgroup v2 API, which is used when migrating block devices with the LVM backend.
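The cgroup-based throttling is only exercised when copy bandwidth limiting is enabled; a minimal sketch, assuming the long-standing volume_copy_bps_limit option and a placeholder backend section name, looks like this:
[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
# Limit volume copy bandwidth during migration, in bytes per second (0 disables throttling)
volume_copy_bps_limit = 104857600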
Added support for the NetApp ASA r2 (All-Flash SAN Array r2) disaggregated platform in the NetApp unified driver. This introduces a new configuration option, netapp_disaggregated_platform, that enables ASA r2 specific workflows and optimizations. The implementation includes:
New boolean configuration option netapp_disaggregated_platform (default: False) to enable ASA r2 workflows
New RestClientASAr2 class that inherits from the standard REST client
Override capability for ASA r2 specific functionality when needed
Full backward compatibility with existing NetApp ONTAP configurations
To enable ASA r2 support, set the following in your cinder configuration:
[backend_netapp_asar2]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_use_legacy_client = False
netapp_disaggregated_platform = True
# ... other NetApp configuration options
When netapp_disaggregated_platform is set to True, the driver will:
Apply ASA r2 specific optimizations and workflows
Maintain full compatibility with existing volume operations
Automatically fall back to standard ONTAP behavior when ASA r2 specific methods are not available
The ASA r2 client inherits all functionality from the standard REST client by default, with the ability to override individual methods for ASA r2 specific behavior. This design ensures that:
No existing functionality is lost
New ASA r2 features will be added incrementally
ASAr2 does not support ZAPIs, so all APIs are accessed using REST.
This feature enables users to take advantage of NetApp’s disaggregated architecture and ASA r2 specific performance optimizations while maintaining a familiar operational experience.
NetApp NVMe namespace support for in-use expansion
Added support for in-use expansion of NetApp NVMe namespaces, allowing volumes to be resized while attached to running instances without requiring detachment. This feature enables seamless volume expansion for NVMe-backed volumes in NetApp ONTAP environments.
Key capabilities:
In-use expansion: Volumes can be expanded while attached to running instances
NVMe namespace compatibility: Full support for NetApp NVMe namespace expansion
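As a rough illustration of the in-use expansion described above, an attached volume can be extended using the Block Storage API microversion that permits in-use extend; the volume ID and new size below are placeholders:
# Extend an attached volume to 20 GiB (microversion 3.42 or later allows in-use extend)
cinder --os-volume-api-version 3.42 extend 6f2c0a4b-0000-0000-0000-examplevolid 20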
[Pure Storage]: Added FlashArray volume tags for future use with Pure Storage Data Intelligence tooling.
Upgrade Notes
Breaking Change: NetApp NVMe Subsystem Architecture Redesign
Implemented a significant architectural change to NVMe volume attachment handling to address critical limitations with multi-attach workflows and QoS management. The previous implementation used a one-to-one mapping between hosts and subsystems, where each host would have its own dedicated subsystem, and multiple subsystems would map to a single namespace. This approach created two major issues:
QoS Limitations: Since QoS policies are applied at the subsystem level rather than the namespace level, having multiple subsystems per namespace made it impossible to enforce consistent QoS across all host connections to the same volume.
Multi-Attach Incompatibility: Different subsystems cannot enable true multi-attach functionality, which is essential for live migration and other advanced features where the same volume needs to be simultaneously accessible from multiple hosts.
New Architecture: The implementation now uses a many-to-one mapping where multiple hosts share a single subsystem, ensuring a single subsystem-to-namespace relationship. This resolves both QoS consistency and multi-attach limitations.
Compatibility Impact: This change is not backward compatible due to fundamental differences in how NVMe subsystem-to-namespace mappings are handled. Live migration of existing mappings is not technically feasible.
Required Upgrade Path:
Take a backup of all volumes using the old NVMe architecture
Upgrade OpenStack to the version with the new architecture
Restore volumes using the new many-to-one subsystem mapping model
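A minimal sketch of the backup and restore steps, assuming the Cinder backup service is deployed (volume and backup names are placeholders):
# Before the upgrade: back up each volume that uses the old NVMe architecture
openstack volume backup create --name pre-upgrade-bkp-vol1 vol1
# After the upgrade: restore the backup onto a volume using the new subsystem model
openstack volume backup restore pre-upgrade-bkp-vol1 vol1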
For assistance with migration planning and any questions about this process, contact NetApp support who can provide guidance specific to your environment and help minimize disruption during the transition.
This approach ensures data integrity while enabling the improved multi-attach and QoS capabilities of the new architecture.
Dell PowerFlex driver: The fix for Bug #2114879 requires os-brick version 6.13.0 or greater. Users do not need to create the /opt/emc/scaleio/openstack/connector.conf file on the hosts using os-brick. Follow the steps below to upgrade:
Upgrade os-brick to version 6.13.0 without removing the configuration file. This version can perform mapping if the driver has not yet done so, provided the configuration file remains intact.
Then upgrade the PowerFlex driver to version 3.6.0 or later. Note that driver version 3.6.0 requires os-brick version 6.13.0 or higher to function correctly and will not operate with earlier versions of os-brick.
The connector configuration file can now be safely removed.
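As a rough sketch of these steps on a pip-managed host (package management and exact versions will vary by distribution; the path follows the note above):
# Step 1: upgrade os-brick first, leaving the connector file in place
pip install 'os-brick>=6.13.0'
# Step 2: upgrade the Cinder PowerFlex driver to 3.6.0 or later (delivered with the Cinder release)
# Step 3: once both upgrades are complete, remove the connector configuration file
rm /opt/emc/scaleio/openstack/connector.conf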
Security Issues
Dell PowerFlex driver: This release contains a fix for Bug #2114879. It removes the limitation of use with bare metal hosts mentioned in OSSN-0086.
Bug Fixes
IBM Spectrum Virtualize Family driver: Bug #2003300: Enable support for mirror-pool option for metro-mirror replication and global-mirror replication volume-types.
NetApp Driver Bug #2078968: Fixed an NVMe namespace mapping failure during VM migration with “Namespace is already mapped to subsystem”. Implemented architecture changes to support multiple hosts attaching to a single namespace through a shared subsystem model.
Bug #2082587: Fixed backup restoration throwing TypeError on new volume.
NFS driver bug #2103742: Fixed issue preventing the volume resize operation from properly updating the NFS image virtual size with the new size when the volume has snapshots.
Bug #2105961: Fixed issue in the NVMe-oF target driver to validate the nqn property (NVMe-oF) instead of the initiator property (iSCSI) in the connector, which caused attachment failures in non-iSCSI environments.
Volumes with the multi-attach type can be connected to multiple instances. Additional logic has been implemented for the FCP/NVMe protocols to handle the removal of cinder volumes from multiple instances. For more details, please check Launchpad bug #2110274.
Bug #2111461: Fixed issue preventing the cinder-manage command from purging deleted rows due to foreign key constraint errors. This happened because the timestamp for bulk delete operations was recalculated per table, resulting in slightly different intervals for deleting rows on the primary and dependent tables.
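For reference, the affected purge is driven by the age-based cinder-manage subcommand; the retention period below is a placeholder:
# Purge rows soft-deleted more than 30 days ago
cinder-manage db purge 30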
NetApp Driver bug #2112245: Fixed the issue where a few cinder volume clone operations failed during bulk clone creation. Added retry logic to ensure the NetApp driver retries any failed clone operations.
bug #2112403: Fixed the image cache so that volumes are deleted if they can no longer be cloned after reaching a driver-specific snapshot limit.
Dell PowerFlex driver Bug #2114879: This release contains an updated Dell PowerFlex driver. It must be used with os-brick version 6.13.0 or greater. os-brick no longer requires access to PowerFlex backend secrets, and all of that is now handled by the cinder driver.
NetApp Driver bug #2114993: Fixed iSCSI and FC detach operation failure issue when multiple initiators are connected.
NetApp Driver bug #2116261: The NetApp driver already supports consistency groups for the NFS/iSCSI/FCP protocols. This change extends the same support to the NVMe/TCP protocol.
Bug #1906286: Fixed issue with Cinder-backed images in A/A environment not correctly using the cluster name.
Bug #2107451: Fixed crash of cinder-manage quota sync if there is a row in the quota_usage table with the value groups in the resource column.
RBD bug #2115985: Fixed issue when managing a volume with multiattach or replication_enabled properties in the volume type.
NFS driver bug #2074377: Fixed regression caused by change I65857288b797 (the mitigation for CVE-2024-32498) that was preventing the creation of a new volume from the second and subsequent snapshots of an existing volume.
RBD driver bug #2092534: Fixed uploading a volume to an image when the image has a different format than the volume.
Bug #2062539: Fixed reimage operation when the image is backed by a volume snapshot.
Hitachi driver bug #2043978: Since around the Train era, Hitachi had an out-of-tree driver that implemented the Global-Active Device (GAD) and Remote Replication features. As part of an initiative to unify the “Enterprise” and in-tree drivers, change I4543cd036897 in the 2023.1 (Antelope) release implemented the GAD feature for the in-tree driver. Unfortunately, this change used an incompatible string to indicate what copy groups were under GAD control, and thus upgrading to the in-tree driver breaks GAD for existing volumes. This bug fix makes the copy group control identifier consistent so that current users of the out-of-tree driver can upgrade to releases that contain the in-tree driver.
HPE 3par driver bug #2112433: Fixed a failure observed when the VLAN IP is the same as the iSCSI IP by ignoring the duplicate IP.
Pure Storage driver bug #2100547: Fixed issue where volumes created as clones from a source image volume did not get the defined QoS settings associated with the volume type used.
Pure Storage driver bug #2115284: The snapshot replication interval in cinder.conf is set in seconds, but the backend expects it in milliseconds. Added a fix to handle the conversion.
Pure Storage bug #2101859: Fixed issue where LACP bonds were not being correctly identified as iSCSI and NVMe targets.
Pure Storage driver: Fixed issue with FlashArray secure tenant volumes and snapshots, as these are not eligible to be managed.
[Pure Storage] Resolved issue where LACP bonds defined as part of a VLAN resulted in target ports not being correctly identified.
Bug #1886543: On retypes requiring a migration, the driver-assisted mechanism is now tried when moving from one backend to another if we know it is safe from the volume type perspective.
Other Notes
Previously, the Cinder Backup service examined every known backup upon startup in order to restart incomplete backups. This was a problem for installations with a large number of backups. We now use a single database request to compile the list of incomplete backups. See Change-Id I5c6065d99116ae5f223799e8558d25777aedd055.