Xena Series Release Notes

19.3.0-2

Known Issues

  • For security reasons (Bug #2004555), manually deleting an attachment, or manually performing the os-terminate_connection, os-detach, or os-force_detach actions, will no longer be allowed in most cases unless the request comes from another OpenStack service on behalf of a user.

Upgrade Notes

  • Nova must be configured to send service tokens and cinder must be configured to recognize at least one of the roles that the nova service user has been assigned in keystone. By default, cinder will recognize the service role, so if the nova service user is assigned a differently named role in your cloud, you must adjust your cinder configuration file (service_token_roles configuration option in the keystone_authtoken section). If nova and cinder are not configured correctly in this regard, detaching volumes will no longer work (Bug #2004555).
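
    A minimal sketch of the relevant settings, assuming the nova service user is named nova and has been assigned the service role in keystone; the auth URL, domain names, and password are placeholders, and option names other than service_token_roles should be verified against your deployment:

    # nova.conf -- send service tokens on requests made to cinder
    [service_user]
    send_service_user_token = true
    auth_type = password
    auth_url = https://keystone.example.com/identity
    username = nova
    password = <nova service user password>
    project_name = service
    user_domain_name = Default
    project_domain_name = Default

    # cinder.conf -- recognize service tokens carrying the "service" role
    [keystone_authtoken]
    service_token_roles = service
    service_token_roles_required = true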

Critical Issues

Security Issues

  • As part of the fix for Bug #2004555, cinder now rejects user attachment delete requests for attachments that are being used by nova instances to ensure that no leftover devices are produced on the compute nodes which could be used to access another project's volumes. Terminate connection, detach, and force detach volume actions (calls that are not usually made by users directly) are, in most cases, not allowed for users.

Bug Fixes

  • RBD driver bug #1960206: Fixed total_capacity reported by the driver to the scheduler on Ceph clusters that have renamed the bytes_used field to stored (e.g., Nautilus).

  • Bug #2004555: Fixed an issue where a user manually deleting an attachment, or calling terminate connection, detach, or force detach, for a volume that is still used by a nova instance resulted in leftover devices on the compute node. These operations will now fail when they are believed to be problematic.

19.3.0

Bug Fixes

  • Bug #1953168: Fixed missing parameter in the capacity filter log message.

  • Infinidat Driver bug #1981354: Fixed Infinidat driver to return all configured and enabled iSCSI portals for a given network space.

  • PowerMax driver bug #1979668: Fixed visibility of manageable volumes in multiple storage groups.

  • PowerStore driver bug #1981068: Fixed request data validation for the REST client.

  • Bug #2008259: Fixed the volume create functionality where non-admin users were able to create multiattach volumes by providing the multiattach parameter in the request body. Now we can only create multiattach volumes using a multiattach volume type, which is also the recommended way.

Other Notes

  • Removed the ability to create multiattach volumes by specifying multiattach parameter in the request body of a volume create operation. This functionality is unsafe, can lead to data loss, and has been deprecated since the Queens release. The recommended method for creating a multiattach volume is to use a volume type that supports multiattach. By default, volume types can only be created by the operator. Users who have a need for multiattach volumes should contact their operator if a suitable volume type is not available.
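
    For illustration, an operator could create a multiattach-capable volume type and a user could then request a multiattach volume roughly as follows (the type and volume names are examples):

    $ openstack volume type create multiattach-type --property multiattach="<is> True"
    $ openstack volume create --type multiattach-type --size 10 shared-volume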

19.2.0

Upgrade Notes

  • This release introduces a new configuration option, vmdk_allowed_types, that specifies the list of VMDK image subformats that Cinder will allow. The default setting allows only the 'streamOptimized' and 'monolithicSparse' subformats, which do not use named extents.
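
    For example, the default behavior corresponds to the following setting, assuming the option is placed in the [DEFAULT] section of the cinder configuration file:

    [DEFAULT]
    vmdk_allowed_types = streamOptimized,monolithicSparse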

Security Issues

  • This release introduces a new configuration option, vmdk_allowed_types, that specifies the list of VMDK image subformats that Cinder will allow in order to prevent exposure of host information by modifying the named extents in a VMDK image. The default setting allows only the 'streamOptimized' and 'monolithicSparse' subformats, which do not use named extents.

  • As part of the fix for Bug #1996188, cinder is now more strict in checking that the disk_format recorded for an image (as revealed by the Image Service API image-show response) matches what cinder detects when it downloads the image. Thus, some requests to create a volume from a source image that had previously succeeded may fail with an ImageUnacceptable error.

Bug Fixes

  • RBD Driver bug #1957073: Fixed snapshot deletion failure when its volume doesn't exist.

  • Bug #1996188: Fixed issue where a VMDK image file whose createType allowed named extents could expose host information. This change introduces a new configuration option, vmdk_allowed_types, that specifies the list of VMDK image subformats that Cinder will allow. The default setting allows only the 'streamOptimized' and 'monolithicSparse' subformats.

  • RBD driver bug #1916843: Fixed rpc timeout when backing up RBD snapshot. We no longer flatten temporary volumes and snapshots.

  • NFS driver bug #1946059: Fixed revert to snapshot operation.

  • HPE 3PAR driver Bug #1958122: Fixed an issue with the multi-detach operation in a multi-host environment.

  • PowerMax driver bug #1930290: This fixes the QoS conflict issue on a child storage group by not setting QoS on a parent storage group.

19.1.1

New Features

  • Pure Storage FlashArray driver: Enabled support for Active/Active replication for the FlashArray driver. This allows users to configure FlashArray backends in clustered environments.

Bug Fixes

  • Bug #1944577: Managing a volume to an encrypted type was never a good idea because there was no way to specify an encryption key ID so that the volume could be used. Requests to manage a volume to an encrypted volume type now result in an invalid request response.

  • Bug #1965847: Fixed issue where importing a backup record for a backup_id that currently existed had the unfortunate side effect of deleting the existing backup record.

  • Bug #1970768: Fixed status of temporary volumes when creating backups and reverting to a snapshot, preventing accidental manual deletion of those resources.

  • NetApp ONTAP: Fixed the QoS minimum support check for SVM scoped accounts. See Bug #1924798.

19.1.0

Bug Fixes

  • Bug #1935688: Cinder only supports uploading a volume of an encrypted volume type as an image to the Image service in raw format using a bare container type. Previously, os-volume_upload_image action requests to the Block Storage API specifying different format option values were accepted, but would result in a later failure. This condition is now checked at the API layer, and os-volume_upload_image action requests on a volume of an encrypted type that specify unsupported values for disk_format or container_format now result in a 400 (Bad Request) response.
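
    For reference, a request that satisfies these constraints would look roughly like the following (the volume ID and image name are placeholders):

    POST /v3/{project_id}/volumes/{volume_id}/action
    {
        "os-volume_upload_image": {
            "image_name": "encrypted-volume-image",
            "disk_format": "raw",
            "container_format": "bare"
        }
    }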

  • RBD driver bug #1947518: Corrected a regression caused by the fix for Bug #1931004 that was attempting to access the glance images RBD pool with write privileges when creating a volume from an image.

  • Bug #1947134: Fixed the initialization of GPFS NFS driver when gpfs_images_share_mode is set to copy_on_write by correcting _same_filesystem functionality.

  • Bug #1947123: Fixed the volume creation issue in GPFS NFS driver when gpfs_images_share_mode is set to copy_on_write.

  • Pure Storage driver Bug #1945824: Fixed missing DB values when creating new consistency group from CG snapshot.

  • Bug #1916980: Fixed stale volume notification information on volume detach.

  • Bug #1935011: Fixed missing detach.start notification when deleting an attachment in reserved state.

  • Bug #1937084: Fixed race condition between delete attachment and delete volume that can leave deleted volumes stuck as attached to instances.

  • Bug #1924643: Fixed the NetApp cinder driver sub-clone operation that may be used by the extend operation when the extended size is greater than the max LUN geometry.

  • Bug #1950474: Fixed policy authorization for the transfer accept API. Previously, setting enforce_new_defaults=True in the [oslo_policy] section would break the transfer accept API; this is fixed in this release.

  • PowerMax driver bug #1938572: Legacy PowerMax OS fix to convert the snapVX generation to a string if REST returns it as an int, so that a generation of 0 does not evaluate to False in Python.

  • Pure Storage driver: Added an internal check to allow for FlashArrays with joint FC and NVMe-FC support.

  • Bug #1935057: Fixed an issue where, after a detach, a volume could sometimes end up available and detached yet still have an attachment in the error_detaching state.

19.0.0

Prelude

Welcome to the Xena release of the OpenStack Block Storage service (cinder). With this release, the Block Storage API version 3 has reached microversion 3.66. The cinder team would like to bring the following points to your attention. Details may be found below.

  • The Block Storage API v2, which was deprecated way back in the Pike release, has been removed. We gently remind you that Pike was a long time ago, and that version 3.0 of the Block Storage API was designed to be completely compatible with version 2.

  • Microversion 3.65 includes the display of information in the volume or snapshot detail response to indicate whether that resource consumes quota, and adds the ability to filter a requested list of resources according to whether they consume quota or not.

  • Microversion 3.66 removes the necessity to add a 'force' flag when requesting a snapshot of an in-use volume, given that this is not a problem for modern storage systems.

  • The volume-type detail response has been enhanced to include non-sensitive "extra-specs" information in order to provide more data for automated systems to select a volume type.

  • The default policy configuration has been extensively rewritten.

  • Many backend storage drivers have added features and fixed bugs.

New Features

  • Swift backup driver: Added new configuration option backup_swift_create_storage_policy for the Swift backup driver. If specified, it will be used as the storage policy when creating the Swift container. The default value is None, meaning it will not be used and Swift will use the system default. Please note that this only applies if the container doesn't already exist, as we cannot update the storage policy on an existing container.
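
    As a sketch, the option would sit alongside the other Swift backup driver options in the cinder configuration file; the policy name below is only an example and must already exist in your Swift deployment:

    [DEFAULT]
    backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    backup_swift_create_storage_policy = gold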

  • The cinder-manage command now includes a new quota category with two possible actions, check and sync, to help administrators manage out-of-sync quotas on long-running deployments.
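
    For example (see cinder-manage quota --help for additional options, such as limiting the check to a single project):

    $ cinder-manage quota check
    $ cinder-manage quota sync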

  • Bug #1432387: Added a command to cinder-manage to clean up file locks existing on hosts where a Cinder service is running (API, Scheduler, Volume, Backup). The command works with the Cinder services running (useful when called as a cron job) as well as stopped (to be called on host startup). The invocation is cinder-manage util clean_locks, with the optional parameter --services-offline, as shown below.
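
    For example:

    $ cinder-manage util clean_locks
    $ cinder-manage util clean_locks --services-offline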

  • Hitachi driver: Added support for Cinder generic volume groups.

  • IBM Spectrum Virtualize Family driver: Added support to manage GMCV volumes on separate storage pools.

  • IBM Spectrum Virtualize Family driver: Added volume-extend support for volumes created using a HyperSwap volume-type template.

  • Starting with API microversion 3.65, a consumes_quota field is included in the response body of volumes and snapshots to indicate whether the resource is using quota or not.

    Additionally, consumes_quota can be used as a listing filter for volumes and snapshots. Its availability is controlled by its inclusion in etc/cinder/resource_filters.json, where it is included by default. The default listing behavior is not to use this filter.

    Only temporary resources created internally by cinder will have the value set to false.
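
    For instance, a listing request such as the following would return only resources that do not consume quota, i.e. temporary resources created internally by cinder (the project ID is a placeholder):

    GET /v3/{project_id}/volumes/detail?consumes_quota=false
    OpenStack-API-Version: volume 3.65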

  • NetApp ONTAP driver: Added support to Revert to Snapshot for the iSCSI, FC and NFS drivers with FlexVol pool. This feature does not support FlexGroups and is limited to revert only to the most recent snapshot of a given Cinder volume.

  • NetApp ONTAP driver: Added option netapp_driver_reports_provisioned_capacity, which enables the driver to calculate and report provisioned capacity to the Cinder scheduler based on volume sizes in the storage system.
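
    A sketch of enabling the option on a NetApp backend section of cinder.conf (the section name is an example):

    [ontap-nfs]
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_driver_reports_provisioned_capacity = true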

  • NetApp ONTAP: Added support for storage assisted migration within the same ONTAP cluster (iSCSI/FC/NFS).

  • Open-E JovianDSS driver: Added multiattach support.

  • Open-E JovianDSS driver: Added 16K block size support.

  • Pure Storage FlashArray driver: added configuration option pure_iscsi_cidr_list for setting several network CIDRs for iSCSI target connection. Both IPv4 and IPv6 are supported. The default still allows all IPv4 targets.
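
    A sketch of restricting iSCSI targets to specific networks, assuming the option takes a comma-separated list of CIDRs (the backend section name and networks are examples):

    [puredriver-1]
    volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
    pure_iscsi_cidr_list = 192.168.10.0/24,fd00:10::/64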

  • Log a warning from the volume service when a volume driver's get_volume_stats() call takes a long time to return. This can help deployers troubleshoot a cinder-volume service misbehaving due to a driver/backend performance issue.

  • As of API version 3.66, volume snapshots of in-use volumes can be created without passing the 'force' flag, and the 'force' flag is considered invalid for this request. For backward compatibility, however, when the 'force' flag is passed with a value evaluating to True, it is silently ignored.
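
    As an illustration, with microversion 3.66 a snapshot of an in-use volume can be requested without the force flag (the volume ID and snapshot name are placeholders):

    POST /v3/{project_id}/snapshots
    OpenStack-API-Version: volume 3.66
    {
        "snapshot": {
            "volume_id": "<id-of-in-use-volume>",
            "name": "snap-of-attached-volume"
        }
    }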

  • A small list of volume type extra specs are now visible to regular users, and not just to cloud administrators. This allows users to see non-sensitive extra specs, which may help them choose a particular volume type when creating volumes. Sensitive extra specs are still only visible to cloud administrators. See the "User visible extra specs" section in the Cinder Administration guide for more information.

  • Policy configuration changes

    Over the Xena and Yoga development cycles, cinder's default policy configuration is being modified to take advantage of the default authentication and authorization apparatus supplied by the Keystone project. This will give operators a rich set of default policies to control how users interact with the Block Storage service API.

    The details of this project are described in Policy Personas and Permissions in the Cinder Service Configuration Guide. We encourage you to read through that document. The following is only a summary.

    The primary change in the Xena release is that cinder's default policy configuration will recognize the reader role on a project. Additionally,

    • Some current rules defined in the policy file are being DEPRECATED and will be removed in the Yoga release. You only need to worry about this if you have used any of these rules yourself when writing custom policies, as you cannot rely on the following rules being pre-defined in the Yoga release.

      • rule:admin_or_owner

      • rule:system_or_domain_or_project_admin

      • rule:volume_extension:volume_type_encryption

    • Some current policies that were over-general (that is, they governed both read and write operations on a resource) are being replaced by a set of new policies that provide greater granularity. The following policies are DEPRECATED and will be removed in the Yoga release:

      • group:group_types_manage is replaced by:

        • group:group_types:create

        • group:group_types:update

        • group:group_types:delete

      • group:group_types_specs is replaced by:

        • group:group_types_specs:get

        • group:group_types_specs:get_all

        • group:group_types:create

        • group:group_types:update

        • group:group_types:delete

      • volume_extension:quota_classes is replaced by:

        • volume_extension:quota_classes:get

        • volume_extension:quota_classes:update

      • volume_extension:types_manage is replaced by:

        • volume_extension:type_create

        • volume_extension:type_update

        • volume_extension:type_delete

      • volume_extension:volume_image_metadata is replaced by:

        • volume_extension:volume_image_metadata:show

        • volume_extension:volume_image_metadata:set

        • volume_extension:volume_image_metadata:remove

    • A new policy was introduced to govern an operation previously controlled by a policy that is not being removed, but whose other governed actions are conceptually different:

      • volume_extension:volume_type_access:get_all_for_type

    • A new policy was introduced as part of the feature described in the User visible extra specs section of the Cinder Administration Guide:

      • volume_extension:types_extra_specs:read_sensitive

    • Many policies had their default values changed and their previous values deprecated. These are indicated in the sample policy configuration file, which you can view in the policy.yaml section of the Cinder Service Configuration Guide.

      • In particular, we direct your attention to the default values for the policies associated with the Default Volume Types API (introduced with microversion 3.62 of the Block Storage API). These had experimentally recognized "scope", but for consistency with the other rules, their default values no longer recognize scope. (Scope will be introduced to all cinder policy defaults in the Yoga release.)

      • When a policy value is deprecated, the oslo.policy engine will check the new value, and if that fails, it will evaluate the deprecated value. This behavior may be modified so only the new policy value is used by setting the configuration option enforce_new_defaults=True in the [oslo_policy] section of the cinder configuration file.
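
        That is, to opt in to the new defaults only, the setting would be:

        [oslo_policy]
        enforce_new_defaults = True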

Known Issues

  • It is currently possible to manage a volume to an encrypted volume type, but that is not recommended because there is no way to supply an encryption key for the volume to cinder. Un-managing a volume of an encrypted volume type is already prevented, and it is expected that management to an encrypted type will similarly be blocked in a future release. This issue is being tracked as Bug #1944577.

  • Cinder use of cgroups v1

    This note applies to deployments that are using the cinder configuration option volume_copy_bps_limit with a non-default value (the default is 0).

    The cinder-volume service currently depends on Linux Kernel Control Groups (cgroups) version 1 to control i/o throttling during some volume-copy and image-convert operations. Some Linux distributions, however, have changed to using cgroup v2 by default and may have discontinued cgroups v1 support completely. Consult your Linux distribution's documentation for details.

    The cinder team is working on a throttling solution using cgroup v2, but it was not ready at the time of this release. The solution is expected to be backported to a future release in the Xena series. This issue is being tracked as Bug #1942203.

  • There is a race condition between the delete attachment and delete volume operations that has been observed when running cinder-csi. This race can leave deleted volumes stuck as attached to instances. The cinder team is working on a solution which is expected to be backported to a future release in the Xena series. The issue is being tracked as Bug #1937084.

  • When the Ceph backup driver is used for the backup service, restoring a backup to a volume created on a non-RBD backend fails. The cinder team is working on a solution which is expected to be backported to a future release in the Xena series. The issue is being tracked as Bug #1895035.

  • Creating a volume of an encrypted volume type from an image in the Image service (Glance) using the generic NFS driver results in an unusable volume. The cinder team is working on a solution which is expected to be backported to a future release in the Xena series. The issue is being tracked as Bug #1888680.

  • NFS-based backend drivers and qcow2 version 2 images

    Under some circumstances, NFS-based backend drivers will store a volume as a qcow2 image. Thus cinder allows for the possibility that an operator may choose to manage a storage object in an NFS-based backend that is a qcow2 image.

    Version 3 of the qcow2 format has been the default for qcow2 creation in qemu-img since QEMU 1.7 (December 2013), and operating system vendors are discussing discontinuing (or limiting) support of the version 2 format in upcoming releases.

    Thus, we recommend that operators who want to manage a storage object in an NFS-based storage backend as a cinder volume should not do this with a qcow2 image that is in the version 2 format, but should change it to the qcow2-v3 format first.

    Note

    The format version of a qcow2 can be determined by looking for the compat field in the output of the qemu-img info command. A version 2 format image will report compat=0.10, whereas a qcow2 in version 3 format will report compat=1.1.
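
    For example, the format can be checked and, if necessary, upgraded in place with qemu-img (the image filename is a placeholder):

    $ qemu-img info volume-1234.qcow2 | grep compat
    compat: 0.10
    $ qemu-img amend -f qcow2 -o compat=1.1 volume-1234.qcow2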

Upgrade Notes

  • RBD driver: Enable Ceph V2 Clone API and Ceph Trash auto purge

    In light of the fix for RBD driver bug #1941815, we want to bring the following information to your attention.

    Using the v2 clone format for cloned volumes allows volumes with dependent images to be moved to the trash - where they remain until purged - and allows the RBD driver to postpone the deletion until the volume has no dependent images. Configuring the trash purge is recommended to avoid wasting space with these trashed volumes. Since the Ceph Octopus release, the trash can be configured to automatically purge on a defined schedule. See the rbd trash purge schedule commands in the rbd manpage.
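
    For example, a daily purge schedule for a volumes pool might be added with something like the following; the pool name and interval are examples, so check the rbd manpage for the exact syntax on your Ceph release:

    $ rbd trash purge schedule add --pool volumes 1d
    $ rbd trash purge schedule ls --pool volumes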

  • The [DEFAULT] db_driver config option has been removed. This was intended to allow configuration of the database driver; however, there is only one database driver present in-tree, and out-of-tree database drivers are not supported.

  • Pure Storage FlashArray: minimum purestorage SDK version increased to 1.17.0.

  • The Block Storage API v2, which was deprecated in the Pike release, has been removed. If upgrading from a previous OpenStack release, it is recommended that you edit your /etc/cinder/api-paste.ini file to remove all references to v2. Additionally, the deprecated configuration option enable_v2_api has been removed. If present in a configuration file, it will be silently ignored.

    The configuration option enable_v3_api has also been removed in this release because v3 is now the only version of the Block Storage API available. If present in a configuration file, it will be silently ignored as the v3 API is now enabled unconditionally.

  • Support for the cinder.database.migration_backend entrypoint, which provided for configurable database migration backends, has been removed. This was never exercised and was a source of unnecessary complexity.

  • The database migration engine has changed from sqlalchemy-migrate to alembic. For most deployments, this should have minimal to no impact and the switch should be mostly transparent. The main user-facing impact is the change in schema versioning. While sqlalchemy-migrate used a linear, integer-based versioning scheme, which required placeholder migrations to allow for potential migration backports, alembic uses a distributed version control-like schema where a migration's ancestor is encoded in the file and branches are possible. The alembic migration files therefore use an arbitrary UUID-like naming scheme and the cinder-manage db sync command now expects such a version when manually specifying the version that should be applied. For example:

    $ cinder-manage db sync 921e1a36b076
    

    It is no longer possible to specify an sqlalchemy-migrate-based version. When the cinder-manage db sync command is run, all remaining sqlalchemy-migrate-based migrations will be automatically applied. Attempting to specify an sqlalchemy-migrate-based version will result in an error.

Deprecation Notes

  • The following policy rules have been DEPRECATED in this release and will be removed in Yoga:

    • rule:admin_or_owner

    • rule:system_or_domain_or_project_admin

    • rule:volume_extension:volume_type_encryption

    For more information, see the 'New Features' section of this document and Policy Personas and Permissions in the Cinder Service Configuration Guide.

  • The following policies have been DEPRECATED in this release and will be removed in Yoga:

    • group:group_types_manage

    • group:group_types_specs

    • volume_extension:quota_classes

    • volume_extension:types_manage

    • volume_extension:volume_image_metadata

    For more information, see the 'New Features' section of this document and Policy Personas and Permissions in the Cinder Service Configuration Guide.

Security Issues

  • A small list of volume type extra specs are now visible to regular users, and not just to cloud administrators. Cloud administrators that wish to opt out of this feature should consult the "Security considerations" portion of the "User visible extra specs" section in the Cinder Administration guide.

Bug Fixes

  • PowerFlex driver bug #1897598: Fixed a bug with PowerFlex storage-assisted volume migration where migration was performed without conversion of the volume type in cases where it should have been converted to/from thin/thick provisioning.

  • IBM Spectrum Virtualize Family driver Bug #1913363: Fixed issue in get_host_from_connector by caching the host information during attach or detach operations and using host details from cached information.

  • IBM Spectrum Virtualize Family driver Bug #1917605: Fixed issue in StorwizeSVCCommonDriver to save site and peer pool information in stats during initialization.

  • Nimble driver bug #1918229: Corrected an issue where the Nimble storage driver was inaccurately determining that there was no free space left in the storage array. The driver now relies on the storage array to report the amount of free space.

  • IBM Spectrum Virtualize Family driver Bug #1920099: Fix issue where _check_delete_vdisk_fc_mappings was deleting flashcopy mappings during extend operation of a clone volume where its source volume already contained a snapshot.

  • PowerStore driver Bug #1920729: Fix iSCSI targets not being returned from the REST API call if targets are used for multiple purposes (iSCSI target, Replication target, etc.).

  • IBM Spectrum Virtualize Family driver Bug #1920870: Fixed extend issues for volumes with replication enabled by avoiding volume remote-copy relationship deletion and creation.

  • IBM Spectrum Virtualize Family driver Bug #1920890: Fixed issue in retype_hyperswap_volume method to update site and iogrp information to the host during a retype from a non-HyperSwap volume to a HyperSwap volume.

  • IBM Spectrum Virtualize Family driver: Bug #1920912: Fixed rccg create issue while adding volumes to a group where the group is cloned from group snapshot or other source group.

  • IBM Spectrum Virtualize Family driver Bug #1922013: Fixed issues in adding volumes to GMCV group.

  • RBD driver Bug #1922408: Fixed creating an encrypted volume from an encrypted snapshot.

  • IBM Spectrum Virtualize Family driver Bug #1924568: Fixed issues that occurred while creating a volume on a data reduction pool.

  • IBM Spectrum Virtualize Family driver Bug #1924602: Fixed issues creating snapshots, clones, group snapshots, and group clones for HyperSwap volumes.

  • IBM Spectrum Virtualize Family driver Bug #1926286: Fixed an issue while fetching relationship details of a volume with replication enabled.

  • IBM Spectrum Virtualize Family driver Bug #1926491: Update volume metadata with rccg properties for replication-enabled volumes when they are added to or removed from a group.

  • IBM Spectrum Virtualize Family driver Bug #1931968: Fixed issue in updating the replication status of HyperSwap volume service based on status of nodes during initialization and periodic calls.

  • IBM Spectrum Virtualize Family driver: Bug #1935670: Fixed empty attribute values issue while updating volume metadata table for replicated volumes.

  • IBM Spectrum Virtualize Family driver: Bug #1938212: Added replication license support for FlashSystem V5000E storage system. Removed support for IBM Storwize V3700 as it reached End Of Service.

  • PowerMax driver bug #1939139: Fixed an issue with the create snapshot operation when using PowerMax OS 5978.711 and later.

  • RBD driver bug #1941815: Fixed deleting volumes with snapshots/volumes in the ceph trash space.

  • PowerMax driver bug #1929429: Fixes child/parent storage group check so that a pattern match is not case sensitive. For example, myStorageGroup should equal MYSTORAGEGROUP and mystoragegroup.

  • Bug #1913054: Fix for creating a clone of an encrypted volume for drivers that require additional information to attach.

  • Bug #1432387: Try to automatically clean up file locks after a resource (volume, snapshot) is deleted. This alleviates the issue of the number of files in the locks directory always increasing.

  • JovianDSS driver: Bug #1941746: Fixed ensure_export function failure in case of partial target recovery.

  • Fixed the schema validation for the attachment create API to make instance uuid an optional field. It had mistakenly been defined as a required field when schema validation was added in an earlier release. Also updated the schema to allow specification of the mode parameter, which has been available since microversion 3.54 but was not recognized as a legitimate request field.
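
    For reference, an attachment create request specifying the mode parameter would look roughly like this (field values are placeholders, and mode requires microversion 3.54 or later):

    POST /v3/{project_id}/attachments
    OpenStack-API-Version: volume 3.54
    {
        "attachment": {
            "volume_uuid": "<volume-id>",
            "connector": {},
            "mode": "ro"
        }
    }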

  • Bug #1917574: Fixed an issue where cinderclient requests to show a volume by name for non-admin users would result in the volume not being found for microversion 3.31 or later.

  • Bug #1941068: Fixed the type of the host configuration option. It was limited to valid FQDN values, although the documentation states that it is not so restricted. This could result in the cinder-manage db sync command failing.

  • LVM driver bug #1901783: Fixed an unexpected delete volume failure due to exit code 139 on the lvs command call.

  • NetApp ONTAP bug #1906291: Fix volume losing its QoS policy on the backend after moving it (migrate or retype with migrate) to a NetApp NFS backend.

  • NFS driver bug #1860913: Fixed an issue where an instance uses the base image file when it is rebooted after online snapshot creation.

  • PowerMax driver: Previously, the target storage group created from a replicated storage group was also replicated, which could cause failures. This fix creates a non-replicated target initially, and lets the replicate group API take care of replicating it.

  • PowerMax driver: Fix to suspend the storage group you are about to delete and then add a force flag to delete the volume pairs within the storage group.

  • Pure Storage FlashArray driver bug #1910143: Parameter pure_iscsi_cidr is now IPv4/v6 agnostic.

  • Pure Storage FlashArray driver bug #1936663: Fixed an issue where cloning a consistency group containing volumes with very long names caused a crash. Required for PowerVC support.

  • Pure Storage FlashArray driver bug #1929219: Fixed an issue with an incorrect internal mechanism for checking the REST API of the backend array. This has no external effect for users.

  • Pure Storage FlashArray driver bug #1938579: Fixes issue when cloning multiple volumes in PowerVC deployments.

  • Pure Storage bug #1930748: Fixed issues with multiattached volumes being disconnected from a backend while still listed as an attachment to an instance.

  • Bug #1877164: Fixed retyping a volume with snapshots leaving the snapshots with the old type, which makes the snapshot quotas immediately wrong and breaks them even more after those snapshots are deleted.

  • Bug #1919161: Fixed the automatic quota refresh to correctly account for temporary volumes. During some cinder operations, such as creating a backup from a snapshot, temporary volumes are created and are not counted towards quota usage, but the sync mechanism was counting them, thus incorrectly updating volume usage.

  • Bug #1923828: Fixed the quota usage sync counting temporary snapshots created for backups and revert to snapshot operations.

  • Bug #1923829: Fixed an issue where manually deleting temporary snapshots (from backups and revert to snapshot) after a failure led to incorrect quota usage.

  • Bug #1923830: Fixed an issue where successfully backing up an in-use volume using a temporary snapshot instead of a clone led to incorrect quota usage.

  • Bug #1697906: Fix until_refresh configuration changes not taking effect in a timely fashion or at all.

  • Bug #1484343: Fix creation of duplicated quota usage entries in DB.

  • Bug #1931004: Fixed use of incorrect stripe unit in RBD image clone causing volume-from-image to fail when using raw images backed by Ceph.

  • Bug #1886543: On retypes requiring a migration, try to use the driver assisted mechanism when moving from one backend to another when we know it's safe from the volume type perspective.

  • Bug #1898075: When Glance added support for multiple cinder stores, Images API version 2.11 modified the format of the image location URI, which Cinder reads in order to try to use an optimized data path when creating a volume from an image. Unfortunately, Cinder did not understand the new format and when Glance multiple cinder stores were used, Cinder could not use the optimized data path, and instead downloaded image data from the Image service. Cinder now supports Images API version 2.11.

  • Bug #1922920: Don't do volume usage notifications for migration temporary volumes.

Other Notes

  • Added user messages for backup operations that a user can query through the Messages API. These allow users to retrieve error messages for asynchronous failures in backup operations like create, delete, and restore.

  • HPE 3PAR: Documented that the existing driver supports the new Alletra 9k backend. HPE Alletra 9k is a newer version of the existing HPE Primera backend.

  • Nimble: Documented that the existing driver supports the new Alletra 6k backend. Alletra 6k is a newer version of the existing Nimble backend.