Ocata Series Release Notes

10.0.6-3

New Features

  • The RBD driver can now report a static total capacity value instead of the dynamically calculated value it reported previously. This behavior is controlled by the report_dynamic_total_capacity configuration option (see the sample configuration below).
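
    A minimal cinder.conf sketch, assuming an RBD backend section named [ceph]; the value shown is illustrative and the option's default is not stated in this note:

      [ceph]
      volume_driver = cinder.volume.drivers.rbd.RBDDriver
      # Assumed example: report a static total capacity instead of a
      # dynamically calculated one.
      report_dynamic_total_capacity = False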

Upgrade Notes

  • RBD/Ceph backends should adjust max_over_subscription_ratio to account for the fact that the driver now reports each volume's provisioned size rather than its physical usage (see the sample configuration below).
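
    A hedged cinder.conf sketch of the adjustment, again assuming an RBD backend section named [ceph]; the ratio value is purely illustrative and should be derived from your own provisioning policy:

      [ceph]
      # Illustrative value only: re-tune now that the driver reports
      # provisioned size rather than physical usage.
      max_over_subscription_ratio = 3.0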

Bug Fixes

  • The RBD stats report has been fixed: allocated_capacity_gb now reports the sum of the sizes (not the physical sizes) of the volumes created by Cinder, and provisioned_capacity_gb reports the sum of the sizes of all volumes available in the pool. Free capacity now properly handles the pool's quota size restrictions.

10.0.5

Bug Fixes

  • Prohibit deletion of a group if a group snapshot of it exists.

10.0.4

Bug Fixes

  • Added all_tenants and project_id support to the attachment list and detail APIs (see the example below).
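
    A hedged example of how these filters can be passed as URL query parameters; the admin project ID and tenant UUID are placeholders:

      # List attachments across all projects (admin context assumed)
      GET /v3/{admin_project_id}/attachments?all_tenants=True

      # List attachments belonging to a specific project
      GET /v3/{admin_project_id}/attachments?all_tenants=True&project_id=<tenant-uuid>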

10.0.3

Bug Fixes

  • Fixed the NFS backup driver: multiple backups on the same container are now supported and are no longer overwritten.

10.0.1

Bug Fixes

  • Fixed the consistency groups API, which always returned groups scoped to the project ID from the user context instead of the project ID given as input.

10.0.0

Prelude

Everything in Cinder's release notes related to the High Availability Active-Active effort (prefixed with "HA A-A:") is a work in progress and should not be used in production until it has been completed and the appropriate release note has been issued stating its readiness for production.

The default key manager interface in Cinder was deprecated and the Castellan key manager interface library is now used instead. For more information about Castellan, please see http://docs.openstack.org/developer/castellan/.

New Features

  • Dell SC - Compression and Dedupe support added for Storage Centers that support the options.
  • Dell SC - Volume and Group QoS support added for Storage Centers that support and have enabled the option.
  • Config option dell_server_os added to the Dell SC driver. This option allows selection of the server type used when creating a server on the Dell DSM during initialize_connection. It is only used if the server does not exist. Valid values are from the Dell DSM create server list.
  • Added ability to query backups by project ID.
  • Add support to configure IO ports option in Dell EMC Unity driver.
  • Added reset status API to group snapshot.
  • Added reset status API to generic volume group.
  • Added a new config option scheduler_weight_handler. This is a global option which specifies how the scheduler should choose from a list of weighted pools. By default the existing weigher is used, which always chooses the highest weight.
  • Added a new weight handler StochasticHostWeightHandler. This weight handler chooses pools randomly, with probabilities proportional to the weights, so higher-weighted pools are chosen more frequently, but not all the time. This weight handler spreads new volumes across available pools more fairly.
  • Add v2.1 volume replication support in VMAX driver.
  • Updating the Datera Elastic DataFabric Storage Driver to version 2.1. This adds ACL support, Multipath support and basic IP pool support.
  • The IBM_Storage driver has been open sourced. This means there is no longer a need to download the package from the IBM site. The only remaining requirement is to install pyxcli, which is available through pypi:

    ``sudo pip install pyxcli``
    
  • Support for use of fc_southbound_protocol configuration setting in the Brocade FC SAN lookup service.
  • Cinder now collects capacity data, including virtual free capacity, from the backends. A notification which includes this data is periodically emitted.
  • HA A-A: Add cluster configuration option to allow grouping hosts that share the same backend configurations and should work in Active-Active fashion.
  • HA A-A: Updated manage command to display cluster information on service listings.
  • HA A-A: Added cluster subcommand in manage command to list, remove, and rename clusters.
  • HA A-A: Added clusters API endpoints for cluster related operations (index, detail, show, enable/disable). Index and detail accept filtering by name, binary, disabled, num_hosts, num_down_hosts, and up/down status (is_up) as URL parameters. Also added their respective policies.
  • HA A-A: Attach and detach operations are now cluster aware and make full use of clustered cinder-volume services.
  • HA A-A: Delete volume, delete snapshot, delete consistency group, and delete consistency group snapshot operations are now cluster aware and make full use of clustered cinder-volume services.
  • Added update-host command for consistency groups in cinder-manage.
  • Added Datera EDF API 2.1 support.
  • Added Datera Multi-Tenancy Support.
  • Added Datera Template Support.
  • Broke Datera driver up into modules.
  • All Datera DataFabric backed volume-types will now use API version 2 with Datera DataFabric.
  • The force boolean parameter has been added to the volume delete API. It may be used in combination with cascade. This also means that volume force delete is available in the base volume API rather than only in the volume_admin_actions extension.
  • Added backend driver for Dell EMC Unity storage.
  • Add consistent group capability to generic volume groups in VNX driver.
  • Added new Hitachi VSP FC Driver. The VSP driver supports all Hitachi VSP Family and HUSVM.
  • Adds new Hitachi VSP iSCSI Driver.
  • Hitachi VSP drivers have a new config option vsp_compute_target_ports to specify IDs of the storage ports used to attach volumes to compute nodes. The default is the value specified for the existing vsp_target_ports option. Either or both of vsp_compute_target_ports and vsp_target_ports must be specified.
  • Hitachi VSP drivers have a new config option vsp_horcm_pair_target_ports to specify IDs of the storage ports used to copy volumes by Shadow Image or Thin Image. The default is the value specified for the existing vsp_target_ports option. Either or both of vsp_horcm_pair_target_ports and vsp_target_ports must be specified.
  • Added the ability to list manageable volumes and snapshots to HNAS NFS driver.
  • Optimize backend reporting capabilities for Huawei drivers.
  • Added support for querying group details together with the IDs of the volumes in the group. For example, "groups/{group_id}?list_volume=True".
  • Added driver for the InfiniBox storage array.
  • Added backend FC and iSCSI drivers for NEC Storage.
  • Added host-level (whole back end replication - v2.1) replication support to the NetApp cDOT drivers (iSCSI, FC, NFS).
  • The NetApp cDOT drivers report to the scheduler, for each FlexVol pool, the fraction of the shared block limit that has been consumed by dedupe and cloning operations. This value, netapp_dedupe_used_percent, may be used in the filter & goodness functions for better placement of new Cinder volumes.
  • Added extend method to NFS driver for NexentaStor 5.
  • Added secure HTTP support for REST API calls in the NexentaStor5 driver. Use of HTTPS is enabled by default via the nexenta_use_https option.
  • Added support for snapshots in the NFS driver. This functionality is only enabled if nfs_snapshot_support is set to True in cinder.conf (see the sample configuration after this list). Cloning volumes is only supported if the source volume is not attached.
  • Added Nimble Storage Fibre Channel backend driver.
  • Add Support for QoS in the Nimble Storage driver. QoS is available from Nimble OS release 4.x and above.
  • Add Support for deduplication of volumes in the Nimble Storage driver.
  • The Nimble backend driver has been updated to use REST for array communication.
  • Add consistent group capability to generic volume groups in Pure drivers.
  • Add get_manageable_volumes and get_manageable_snapshots implementations for Pure Storage Volume Drivers.
  • Allow the RBD driver to work with max_over_subscription_ratio.
  • Added v2.1 replication support to RBD driver.
  • Added backend ISCSI driver for Reduxio.
  • Added support for scaling QoS in the ScaleIO driver. The new QoS keys are maxIOPSperGB and maxBWSperGB.
  • Added support for oversubscription in thin provisioning in the ScaleIO driver. Volumes should have extra_specs with the key provisioning:type set to either thick or thin. max_over_subscription_ratio can be defined globally or specifically for ScaleIO with the config option sio_max_over_subscription_ratio. The maximum oversubscription ratio supported at the moment is 10.0.
  • Add provider_id in the detailed view of a volume for admin.
  • Added volume driver for QNAP ES Storage Driver.
  • The SolidFire driver will recognize 4 new QoS spec keys to allow an administrator to specify QoS settings which are scaled by the size of the volume. 'ScaledIOPS' is a flag which tells the driver to look for 'scaleMin', 'scaleMax' and 'scaleBurst', which provide the scaling factors applied to the minimum values specified by the existing QoS keys ('minIOPS', 'maxIOPS', 'burstIOPS'). The administrator must take care to ensure that the final calculated QoS values satisfy minIOPS <= maxIOPS <= burstIOPS; an exception will be thrown if they do not. The QoS settings are also checked against the cluster minimum and maximum allowed values and are truncated if they exceed them (see the example after this list).
  • Add multipath enhancement to Storwize iSCSI driver.
  • Added support for querying snapshots filtered by metadata key/value using the optional 'metadata' URL parameter. For example, "/v3/snapshots?metadata={'key1':'value1'}".
  • Added support for ZMQ messaging layer in multibackend configuration.
  • Add support to backup volume using snapshot in the Unity driver.
  • Enable backup snapshot optimal path by implementing attach and detach snapshot in the VMAX driver.
  • Added the ability to create a CG from a source CG with the VMAX driver.
  • Support for compression on VMAX All Flash in the VMAX driver.
  • Storage assisted volume migration from one Pool/SLO/Workload combination to another, on the same array, via retype, for the VMAX driver. Both All Flash and Hybrid VMAX3 arrays are supported. VMAX2 is not supported.
  • The VNX cinder driver now supports async migration during volume cloning. By default, the cloned volume will be available after the migration starts in the VNX instead of waiting for the migration to complete. This greatly accelerates the cloning process. To disable this behavior, add --metadata async_migrate=False when creating a volume from a source volume or snapshot (see the example after this list).
  • Add consistent group capability to generic volume groups in the XtremIO driver.
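
A minimal cinder.conf sketch for the NFS snapshot support noted above; the backend name [nfs-1] is illustrative:

    [nfs-1]
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    # Snapshots on the generic NFS driver stay disabled unless this is set.
    nfs_snapshot_support = True

A hedged example of creating a scaled SolidFire QoS spec with the new keys described above; the numeric values are illustrative and the scaled results must still satisfy minIOPS <= maxIOPS <= burstIOPS:

    cinder qos-create solidfire-scaled \
        ScaledIOPS=True \
        minIOPS=100 maxIOPS=200 burstIOPS=300 \
        scaleMin=10 scaleMax=20 scaleBurst=30

A hedged example of disabling the VNX async clone migration described above when creating a volume from a source volume; the size and ID are placeholders:

    cinder create 10 --source-volid <source-volume-id> --metadata async_migrate=False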

Known Issues

  • With the Dell SC Cinder driver, if a volume is retyped to a new storage profile, all volumes created via snapshots from this volume will also change to the new storage profile.
  • With the Dell SC Cinder driver, retyping from one replication type to another (e.g. regular replication to live volume replication) is not supported.
  • Dell SC Cinder driver has limited support in a failed over state so thaw_backend has been implemented to reject the thaw call when in such a state.
  • When running Nova Compute and Cinder Volume or Backup services on the same host, they must use a shared lock directory to avoid rare race conditions that can cause volume operation failures (primarily attach/detach of volumes). This is done by setting lock_path to the same directory in the oslo_concurrency section of nova.conf and cinder.conf (see the example below). This issue affects all previous releases utilizing os-brick and shared operations on hosts between Nova Compute and Cinder data services.
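
    A hedged sketch of the shared lock directory setting, to be applied identically in nova.conf and cinder.conf; the path is illustrative:

      [oslo_concurrency]
      # Use the same directory in both nova.conf and cinder.conf.
      lock_path = /var/lib/openstack/lock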

Upgrade Notes

  • In certain environments (Kubernetes, for example) indirect calls to the LVM commands result in file descriptor leak warning messages, which in turn cause the process_execution method to raise an exception.

    To accommodate these environments, and to maintain backward compatibility in Newton, a lvm_suppress_fd_warnings boolean config option has been added to the LVM driver. Setting this to True will add LVM_SUPPRESS_FD_WARNINGS=1 to the LVM environment variables.

    This is an optional configuration because it only applies to very specific environments. Making it global would require a rootwrap/privsep update that could break compatibility when doing rolling upgrades of the volume service.

  • Changed the default of the datera_num_replicas config option from 1 to 3.
  • Previous installations of IBM Storage must be uninstalled first, and the new driver should be installed on top. In addition, the cinder.conf values should be updated to reflect the new paths. For example, the proxy setting of storage.proxy.IBMStorageProxy should be updated to cinder.volume.drivers.ibm.ibm_storage.proxy.IBMStorageProxy.
  • The backup_service_inithost_offload configuration option now defaults to True instead of False.
  • Removed deprecated option osapi_max_request_body_size.
  • To get rid of long-running DB data migrations that must be run offline, Cinder can now execute them online, on a live cloud. Before upgrading from Ocata to Pike, operators need to perform all the Newton data migrations. To achieve that, run cinder-manage db online_data_migrations until there are no records left to be updated (see the example after this list). To limit the DB performance impact, migrations can be performed in chunks limited by the --max_number option. If your intent is to upgrade Cinder in a non-live manner, you can use the --ignore_state option safely. Please note that finishing all the Newton data migrations is enforced by the first schema migration in Pike, so you won't be able to upgrade to Pike without them.
  • The Datera driver location has changed from cinder.volume.drivers.datera.DateraDriver to cinder.volume.drivers.datera.datera_iscsi.DateraDriver.
  • Users of the Datera Cinder driver are now required to use Datera DataFabric version 1.0+. Versions before 1.0 will not be able to utilize this new driver since they still function on v1 of the Datera DataFabric API.
  • The Cinder database can now only be upgraded from changes since the Liberty release. In order to upgrade from a version prior to that, you must now upgrade to at least Liberty first, then to Ocata or later.
  • The v1 API was deprecated in the Juno release and now defaults to disabled. In order to still use the v1 API, you must set enable_v1_api to True in your cinder.conf file.
  • There is a new policy option volume:force_delete which controls access to the ability to specify force delete via the volume delete API. This is separate from the pre-existing volume-admin-actions:force_delete policy check.
  • Any volume drivers configured in the DEFAULT config stanza should be moved to their own stanza and enabled via the enabled_backends config option. The older style of config with DEFAULT is deprecated and will be removed in future releases.
  • The Hitachi NAS iSCSI driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it.
  • Removed deprecated option kaminario_nodedup_substring in Kaminario FC and iSCSI Cinder drivers.
  • The CloudByte driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it.
  • The DotHill drivers have been marked as unsupported and are now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use them.
  • The HPE XP driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it.
  • The Nexenta Edge drivers have been marked as unsupported and are now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use them.
  • The Scality driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it.
  • Operators need to run cinder-manage db online_data_migrations to migrate existing consistency groups to generic volume groups.
  • The EqualLogic driver is moved to the dell_emc directory and has been rebranded to its current Dell EMC PS Series name. The volume_driver entry in cinder.conf needs to be changed to cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver.
  • The ScaleIO driver is moved to the dell_emc directory. volume_driver entry in cinder.conf needs to be changed to cinder.volume.drivers.dell_emc.scaleio.driver.ScaleIODriver.
  • The XtremIO driver is moved to the dell_emc directory. volume_driver entry in cinder.conf needs to be changed to cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver or cinder.volume.drivers.dell_emc.xtremio.XtremIOFCDriver.
  • While configuring NetApp cDOT back ends, new configuration options (replication_device and netapp_replication_aggregate_map) must be added in order to use the host-level failover feature.
  • Added a new config option: connection_string in the [profiler] section is used to specify the OSProfiler driver connection string, for example "connection_string = messaging://" or "connection_string = mongodb://localhost:27017".
  • After running the migration script to migrate CGs to generic volume groups, CG and group APIs work as follows.
    • Create CG only creates in the groups table.
    • Modify CG modifies in the CG table if the CG is in the CG table, otherwise it modifies in the groups table.
    • Delete CG deletes from the CG or the groups table depending on where the CG is.
    • List CG checks both CG and groups tables.
    • List CG Snapshots checks both the CG and the groups tables.
    • Show CG checks both tables.
    • Show CG Snapshot checks both tables.
    • Create CG Snapshot creates either in the CG or the groups table depending on where the CG is.
    • Create CG from Source creates in either the CG or the groups table depending on the source.
    • Create Volume adds the volume either to the CG or the group.
    • default_cgsnapshot_type is reserved for migrating CGs.
    • Group APIs will only write/read in/from the groups table.
    • Group APIs will not work on groups with default_cgsnapshot_type.
    • Groups with default_cgsnapshot_type can only be operated by CG APIs.
    • After CG tables are removed, we will allow default_cgsnapshot_type to be used by group APIs.
  • The EMC VNX driver has been rebranded to the Dell EMC VNX driver. Existing configurations will continue to work with the legacy name, but will need to be updated by the next release. Users need to update volume_driver to cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver.
  • Old driver paths have been removed since they have been through our allotted deprecation period. If any of these paths are set for the volume_driver option in your cinder.conf, make sure to update it to the new driver path listed here.
    • Old path - cinder.volume.drivers.huawei.huawei_18000.Huawei18000ISCSIDriver
    • New path - cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
    • Old path - cinder.volume.drivers.huawei.huawei_driver.Huawei18000ISCSIDriver
    • New path - cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
    • Old path - cinder.volume.drivers.huawei.huawei_18000.Huawei18000FCDriver
    • New path - cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
    • Old path - cinder.volume.drivers.huawei.huawei_driver.Huawei18000FCDriver
    • New path - cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
    • Old path - cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
    • New path - cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
    • Old path - cinder.volume.drivers.san.hp.hp_3par_iscsi.HP3PARISCSIDriver
    • New path - cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
    • Old path - cinder.volume.drivers.san.hp.hp_lefthand_iscsi.HPLeftHandISCSIDriver
    • New path - cinder.volume.drivers.hpe.hpe_lefthand_iscsi.HPELeftHandISCSIDriver
    • Old path - cinder.volume.drivers.san.hp.hp_xp_fc.HPXPFCDriver
    • New path - cinder.volume.drivers.hpe.hpe_xp_fc.HPEXPFCDriver
  • Removed the Dell EqualLogic driver's deprecated configuration options. Please replace the old options in your cinder.conf with the new ones.
    • Removed - eqlx_cli_timeout
    • Replaced with - ssh_conn_timeout
    • Removed - eqlx_use_chap
    • Replaced with - use_chap_auth
    • Removed - eqlx_chap_login
    • Replaced with - chap_username
    • Removed - eqlx_chap_password
    • Replaced with - chap_password
  • The Scality backend volume driver was marked as not supported in the previous release and has now been removed.
  • Configurations that set backend config in the [DEFAULT] section are no longer supported. You should use the enabled_backends option to set up backends (see the example after this list).
  • The volume_clear option to use shred was deprecated in the Newton release and has now been removed. Since deprecation, this option has performed the same action as the zero option. Config settings for shred should be updated to be set to zero for continued operation.
  • The GlusterFS volume driver, which was deprecated in the Newton release, has been removed.
  • The RBD driver no longer uses the “volume_tmp_dir” option to set where temporary files for image conversion are stored. Set “image_conversion_dir” to configure this in Ocata.
  • The ISERTgtAdm target was deprecated in the Kilo release. It has now been removed. You should now just use LVMVolumeDriver and specify iscsi_helper for the target driver you wish to use. In order to enable iser, please set iscsi_protocol=iser with lioadm or tgtadm target helpers.
  • Removed the cinder-all binary. Instead, use the individual binaries like cinder-api, cinder-backup, cinder-volume and cinder-scheduler.
  • EMC ScaleIO driver now uses the config option san_thin_provision to determine the default provisioning type.
  • If using the key manager, the configuration details should be updated to reflect the Castellan-specific configuration options.
  • The VMAX driver is moved to the dell_emc directory. volume_driver entry in cinder.conf needs to be changed to cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver or cinder.volume.drivers.dell_emc.vmax.fc.VMAXFCDriver.
  • Added config option vmware_connection_pool_size in the VMware VMDK driver to specify the maximum number of connections (to vCenter) in the http connection pool.
  • The VMware VMDK driver supports a new config option vmware_host_port to specify the port number to connect to vCenter server.
  • In the VNX Cinder driver, the replication_device keys backend_id and san_ip are now mandatory. If you prefer security file authentication, append storage_vnx_security_file_dir to replication_device; otherwise, append san_login, san_password and storage_vnx_authentication_type to replication_device (see the example below).
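
A hedged cinder.conf sketch of the per-backend stanza layout referenced above; the backend name and LVM driver choice are illustrative:

    [DEFAULT]
    enabled_backends = lvm-1

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = lvm-1

A hedged example of running the Newton online data migrations in bounded chunks, as referenced above; the chunk size is illustrative:

    cinder-manage db online_data_migrations --max_number 50

A hedged sketch of a VNX replication_device entry using security file authentication; every value shown is a placeholder:

    [vnx-1]
    volume_driver = cinder.volume.drivers.dell_emc.vnx.driver.VNXDriver
    replication_device = backend_id:target_vnx,san_ip:192.168.1.2,storage_vnx_security_file_dir:/etc/secfile/array1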

Deprecation Notes

  • Deprecated datera_api_version option.
  • Removed datera_acl_allow_all option.
  • Removed datera_num_replicas option.
  • Config option datera_api_token has been replaced by options san_login and san_password.
  • Configuring volume drivers in the DEFAULT config stanza is not going to be maintained and will be removed in the next release. All backends should use the enabled_backends config option with a separate stanza for each.
  • The block_driver is deprecated as of the Ocata release and will be removed in the Queens release of Cinder. Instead, the LVM driver with the LIO iSCSI target should be used. Those who desire higher performance should use LVM striping.
  • The Cinder Linux SMBFS driver is now deprecated and will be removed during the following release. Deployers are encouraged to use the Windows SMBFS driver instead.
  • The HBSD (Hitachi Block Storage Driver) volume drivers, which support the Hitachi HUS100 and VSP storage families, are deprecated. Support for the HUS110 family will no longer be provided. Support for VSP will be provided by the hitachi.vsp_* drivers.
  • Support for snapshots named in the backend as snapshot-<snapshot-id> is deprecated. Snapshots are now named in the backend as <volume-name>.<snapshot-id>.
  • The Hitachi NAS iSCSI driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it. The driver will be removed in the next release.
  • Deprecated the configuration option hnas_svcX_volume_type. Use the option hnas_svcX_pool_name to indicate the name of the services (pools); see the example after this list.
  • The CloudByte driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it. If its support status does not change it will be removed in the next release.
  • The DotHill drivers have been marked as unsupported and are now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use them. If their support status does not change, they will be removed in the next release.
  • The HPE XP driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it. If its support status does not change it will be removed in the next release.
  • The Nexenta Edge drivers have been marked as unsupported and are now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use them. If their support status does not change, they will be removed in the next release.
  • The Scality driver has been marked as unsupported and is now deprecated. enable_unsupported_drivers will need to be set to True in cinder.conf to continue to use it. If its support status does not change it will be removed in the next release.
  • The 7-Mode Data ONTAP configuration of the NetApp Unified driver is deprecated as of the Ocata release and will be removed in the Queens release. Other configurations of the NetApp Unified driver, including Clustered Data ONTAP and E-series, are unaffected.
  • Marked the ITRI DISCO driver option disco_wsdl_path as deprecated. The new preferred protocol for array communication is REST, and SOAP support will be removed.
  • All barbican and keymgr config options in Cinder are now deprecated. All of these options are moved to the key_manager section for the Castellan library.
  • The api-paste.ini paste.filter_factory setting has been updated to use oslo_middleware.sizelimit rather than the cinder.api.middleware.sizelimit compatibility shim. cinder.api.middleware.sizelimit was deprecated in Kilo and should now be updated to oslo_middleware.sizelimit in api-paste.ini in preparation for removal in the Pike release.
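
A hedged sketch of the HNAS option rename mentioned above; the service index and pool name are illustrative:

    # Deprecated form:
    #   hnas_svc0_volume_type = default
    # Replacement:
    hnas_svc0_pool_name = default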

Security Issues

  • The qemu-img tool now has resource limits applied which prevent it from using more than 1GB of address space or more than 2 seconds of CPU time. This provides protection against denial of service attacks from maliciously crafted or corrupted disk images.

Bug Fixes

  • With the Dell SC Cinder driver, retyping to or from a replicated type should now work.
  • With the Dell SC Cinder driver, retype failed to return a tuple if it had to return an update to the volume state.
  • The NetApp cDOT driver now sets the replication_status attribute appropriately on volumes created within replicated backends when using host level replication.
  • Fixed an issue where the NetApp cDOT NFS driver failed to clone new volumes from the image cache.
  • Fixed volume extend issue that allowed a tenant with enough quota to extend the volume to limits greater than what the volume backend supported.
  • Fixed an HNAS bug that placed a cloned volume in the same pool as its source, even if the clone had a different pool specification. The driver no longer allows clones to be made using a different volume type.
  • Fixed Non-WAN port filter issue in Kaminario iSCSI driver.
  • Fixed issue of managing a VG with more than one volume in Kaminario FC and iSCSI Cinder drivers.
  • For SolidFire, QoS specs are now checked to make sure they fall within the min and max constraints. If not, the QoS specs are capped at the min or max (i.e. if a spec says 50 and the minimum supported is 100, the driver will set it to 100).
  • Added support for images with vmware_adaptertype set to paraVirtual in the VMDK driver.

Other Notes

  • Extend operations no longer work on disabled services because, unlike before, they now go through the scheduler.