Current Series Release Notes

20.0.0.0rc1-493

New Features

  • Microversion 2.80 changes the list migrations APIs and the os-migrations API.

    This microversion exposes the user_id and project_id fields in the following APIs:

    • GET /os-migrations

    • GET /servers/{server_id}/migrations

    • GET /servers/{server_id}/migrations/{migration_id}

    The GET /os-migrations API will also have optional user_id and project_id query parameters for filtering migrations by user and/or project, for example:

    • GET /os-migrations?user_id=ef9d34b4-45d0-4530-871b-3fb535988394

    • GET /os-migrations?project_id=011ee9f4-8f16-4c38-8633-a254d420fd54

    • GET /os-migrations?user_id=ef9d34b4-45d0-4530-871b-3fb535988394&project_id=011ee9f4-8f16-4c38-8633-a254d420fd54
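
    As a concrete illustration, here is a minimal Python sketch of such a filtered request at microversion 2.80. The endpoint, token, and project ID are placeholder assumptions, not values from this release:

      import requests

      COMPUTE_URL = "http://nova-api.example.com/v2.1"  # assumed endpoint
      TOKEN = "<keystone-token>"                        # assumed credential

      resp = requests.get(
          COMPUTE_URL + "/os-migrations",
          params={"project_id": "<project-uuid>"},  # and/or user_id
          headers={
              "X-Auth-Token": TOKEN,
              # Opt in to microversion 2.80 so the new fields are exposed.
              "X-OpenStack-Nova-API-Version": "2.80",
          },
      )
      resp.raise_for_status()
      for migration in resp.json()["migrations"]:
          print(migration["id"], migration["user_id"], migration["project_id"])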

  • LXC instances now support cloud-init.

  • A new policy rule os_compute_api:servers:show:host_status:unknown-only has been added to control whether a user can view a server host status of UNKNOWN in the following APIs:

    • GET /servers/{server_id} if using API microversion >= 2.16

    • GET /servers/detail if using API microversion >= 2.16

    • PUT /servers/{server_id} if using API microversion >= 2.75

    • POST /servers/{server_id}/action (rebuild) if using API microversion >= 2.75

    This is different from the os_compute_api:servers:show:host_status policy rule, which controls whether a user can view all possible host status values in the aforementioned APIs, including UP, DOWN, MAINTENANCE, and UNKNOWN.
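
    As a hedged sketch, the following shows where host_status appears; the endpoint, token, and server ID are placeholders. Whether an UNKNOWN value is visible to the caller is now governed by the new policy rule:

      import requests

      COMPUTE_URL = "http://nova-api.example.com/v2.1"  # assumed endpoint
      TOKEN = "<keystone-token>"                        # assumed credential
      SERVER_ID = "<server-uuid>"                       # assumed server

      resp = requests.get(
          f"{COMPUTE_URL}/servers/{SERVER_ID}",
          headers={
              "X-Auth-Token": TOKEN,
              # host_status is only returned at microversion >= 2.16.
              "X-OpenStack-Nova-API-Version": "2.16",
          },
      )
      resp.raise_for_status()
      # Absent if policy withholds it; may be UNKNOWN subject to the new rule.
      print(resp.json()["server"].get("host_status"))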

  • Image pre-caching on hosts by aggregate is now supported (where the underlying virt driver supports it) as of microversion 2.81. A group of hosts within an aggregate can be compelled to fetch and cache a list of images to reduce time-to-boot latency. This adds the new API:

    • POST /os-aggregates/{aggregate_id}/images

    which is controlled by the compute:aggregates:images policy rule.

    See the [image_cache]/precache_concurrency config option for more information about throttling this operation.
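
    To illustrate the call shape, here is a minimal sketch, assuming a placeholder endpoint, token, aggregate ID, and image ID, with a request body listing the image IDs to cache:

      import requests

      COMPUTE_URL = "http://nova-api.example.com/v2.1"  # assumed endpoint
      TOKEN = "<keystone-token>"                        # assumed credential
      AGGREGATE_ID = 1                                  # assumed aggregate
      IMAGE_ID = "<glance-image-uuid>"                  # assumed image

      resp = requests.post(
          f"{COMPUTE_URL}/os-aggregates/{AGGREGATE_ID}/images",
          json={"cache": [{"id": IMAGE_ID}]},
          headers={
              "X-Auth-Token": TOKEN,
              # The pre-cache API is only available at microversion >= 2.81.
              "X-OpenStack-Nova-API-Version": "2.81",
          },
      )
      resp.raise_for_status()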

  • Added support for instance-level PCI NUMA policies using the hw:pci_numa_affinity_policy flavor extra spec and hw_pci_numa_affinity_policy image metadata property. These apply to both PCI passthrough and SR-IOV devices, unlike host-level PCI NUMA policies configured via the [pci] alias config option. See the VM Scoped SR-IOV NUMA Affinity spec for more info.
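
    A minimal sketch of setting the new extra spec via the flavor extra-specs API, assuming a placeholder endpoint, token, and flavor ID ("preferred" is one of the accepted policy values):

      import requests

      COMPUTE_URL = "http://nova-api.example.com/v2.1"  # assumed endpoint
      TOKEN = "<keystone-token>"                        # assumed credential
      FLAVOR_ID = "<flavor-id>"                         # assumed flavor

      resp = requests.post(
          f"{COMPUTE_URL}/flavors/{FLAVOR_ID}/os-extra_specs",
          # Instances of this flavor will prefer, but not require,
          # NUMA affinity between the PCI device and the guest CPUs.
          json={"extra_specs": {"hw:pci_numa_affinity_policy": "preferred"}},
          headers={"X-Auth-Token": TOKEN},
      )
      resp.raise_for_status()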

  • The server evacuate action API now supports servers with neutron ports having resource requests, e.g. ports that have QoS minimum bandwidth rules attached.
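
    For reference, a hedged sketch of the evacuate action with placeholder identifiers; servers whose ports carry QoS minimum bandwidth rules are now handled:

      import requests

      COMPUTE_URL = "http://nova-api.example.com/v2.1"  # assumed endpoint
      TOKEN = "<keystone-token>"                        # assumed credential
      SERVER_ID = "<server-uuid>"                       # assumed server

      # Evacuate the server off its failed host; the scheduler picks a target.
      resp = requests.post(
          f"{COMPUTE_URL}/servers/{SERVER_ID}/action",
          json={"evacuate": {}},
          headers={
              "X-Auth-Token": TOKEN,
              # An assumed recent microversion; older ones require extra fields.
              "X-OpenStack-Nova-API-Version": "2.79",
          },
      )
      resp.raise_for_status()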

  • When using the libvirt driver, Nova instances will now get a VirtIO-RNG (Random Number Generator) device by default. This ensures guests are not starved of entropy during boot. If you want to disable the RNG device for some reason, you can do so by setting the flavor extra spec hw_rng:allowed to False.

Upgrade Notes

  • Python 2.7 support has been dropped. The minimum version of Python now supported by nova is Python 3.6.

  • Starting in the Ussuri release, compute node resource providers are automatically marked with the COMPUTE_NODE trait. This allows them to be distinguished easily from other providers, including sharing and nested providers, as well as other non-compute-related providers in a deployment. To make effective use of this trait (e.g. for scheduling purposes), all compute nodes must be upgraded to Ussuri. Alternatively, you can manually add the trait to pre-Ussuri compute node providers via the openstack resource provider trait set command.
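
    As an illustrative sketch, compute node providers can be listed by requiring the trait via the placement API; the endpoint and token are placeholders, and placement microversion 1.18 is assumed for the required query parameter:

      import requests

      PLACEMENT_URL = "http://placement.example.com"  # assumed endpoint
      TOKEN = "<keystone-token>"                      # assumed credential

      resp = requests.get(
          PLACEMENT_URL + "/resource_providers",
          params={"required": "COMPUTE_NODE"},
          headers={
              "X-Auth-Token": TOKEN,
              # 'required' filtering needs a recent placement microversion.
              "OpenStack-API-Version": "placement 1.18",
          },
      )
      resp.raise_for_status()
      for rp in resp.json()["resource_providers"]:
          print(rp["uuid"], rp["name"])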

  • The [osapi_v21]/project_id_regex configuration option which has been deprecated since the Mitaka 13.0.0 release has now been removed.

  • The nova-console service has been deprecated since the 19.0.0 Stein release and has now been removed. The following configuration options are therefore removed.

    • [upgrade_levels] console

    In addition, the following APIs have been removed. Calling these APIs will now result in a 410 HTTPGone error response:

    • POST /servers/{server_id}/consoles

    • GET /servers/{server_id}/consoles

    • GET /servers/{server_id}/consoles/{console_id}

    • DELETE /servers/{server_id}/consoles/{console_id}

    Finally, the following policies are removed. These were related to the removed APIs listed above and no longer had any effect:

    • os_compute_api:os-consoles:index

    • os_compute_api:os-consoles:create

    • os_compute_api:os-consoles:delete

    • os_compute_api:os-consoles:show

  • The nova-network feature has been deprecated since the 14.0.0 (Newton) release and has now been removed. The remaining nova-network specific REST APIs have been removed along with their related policy rules. Calling these APIs will now result in a 410 (Gone) error response.

    • GET /os-security-group-default-rules

    • POST /os-security-group-default-rules

    • GET /os-security-group-default-rules/{id}

    • DELETE /os-security-group-default-rules/{id}

    • POST /os-networks

    • DELETE /os-networks/{id}

    • POST /os-networks/add

    • POST /os-networks/{id}/action (associate_host)

    • POST /os-networks/{id}/action (disassociate)

    • POST /os-networks/{id}/action (disassociate_host)

    • POST /os-networks/{id}/action (disassociate_project)

    • POST /os-tenant-networks

    • DELETE /os-tenant-networks/{id}

    The following policies have also been removed.

    • os_compute_api:os-security-group-default-rules

    • os_compute_api:os-networks

    • os_compute_api:os-networks-associate

  • The networks quota, which was only enabled if the enable_network_quota config option was set and was only useful with nova-network, has been removed. It will no longer be present in the responses of the following APIs, and attempts to update the quota will be rejected.

    • GET /os-quota-sets

    • GET /os-quota-sets/{project_id}

    • GET /os-quota-sets/{project_id}/defaults

    • GET /os-quota-sets/{project_id}/detail

    • PUT /os-quota-sets/{project_id}

    • GET /os-quota-class-sets/{id}

    • PUT /os-quota-class-sets/{id}

    The following related config options have been removed.

    • enable_network_quota

    • quota_networks

  • The following nova-manage commands have been removed.

    • network

    • floating

    These were only useful for the now-removed nova-network service and had been deprecated since the 15.0.0 (Ocata) release.

  • The nova-dhcpbridge service has been removed. This was only used with the now-removed nova-network service.

  • The following config options only applied when using the nova-network network driver, which has now been removed. These config options have therefore also been removed.

    • [DEFAULT] firewall_driver

    • [DEFAULT] allow_same_net_traffic

    • [DEFAULT] flat_network_bridge

    • [DEFAULT] flat_network_dns

    • [DEFAULT] flat_interface

    • [DEFAULT] vlan_interface

    • [DEFAULT] vlan_start

    • [DEFAULT] num_networks

    • [DEFAULT] vpn_ip

    • [DEFAULT] vpn_start

    • [DEFAULT] network_size

    • [DEFAULT] fixed_range_v6

    • [DEFAULT] gateway

    • [DEFAULT] gateway_v6

    • [DEFAULT] cnt_vpn_clients

    • [DEFAULT] fixed_ip_disassociate_timeout

    • [DEFAULT] create_unique_mac_address_attempts

    • [DEFAULT] teardown_unused_network_gateway

    • [DEFAULT] l3_lib

    • [DEFAULT] network_driver

    • [DEFAULT] network_manager

    • [DEFAULT] multi_host

    • [DEFAULT] force_dhcp_release

    • [DEFAULT] update_dns_entries

    • [DEFAULT] dns_update_periodic_interval

    • [DEFAULT] dhcp_domain

    • [DEFAULT] use_neutron

    • [DEFAULT] auto_assign_floating_ip

    • [DEFAULT] floating_ip_dns_manager

    • [DEFAULT] instance_dns_manager

    • [DEFAULT] instance_dns_domain

    • [DEFAULT] default_floating_pool

    • [DEFAULT] ipv6_backend

    • [DEFAULT] metadata_host

    • [DEFAULT] metadata_port

    • [DEFAULT] iptables_top_regex

    • [DEFAULT] iptables_bottom_regex

    • [DEFAULT] iptables_drop_action

    • [DEFAULT] ldap_dns_url

    • [DEFAULT] ldap_dns_user

    • [DEFAULT] ldap_dns_password

    • [DEFAULT] ldap_dns_soa_hostmaster

    • [DEFAULT] ldap_dns_servers

    • [DEFAULT] ldap_dns_base_dn

    • [DEFAULT] ldap_dns_soa_refresh

    • [DEFAULT] ldap_dns_soa_retry

    • [DEFAULT] ldap_dns_soa_expiry

    • [DEFAULT] ldap_dns_soa_minimum

    • [DEFAULT] dhcpbridge_flagfile

    • [DEFAULT] dhcpbridge

    • [DEFAULT] dhcp_lease_time

    • [DEFAULT] dns_server

    • [DEFAULT] use_network_dns_servers

    • [DEFAULT] dnsmasq_config_file

    • [DEFAULT] ebtables_exec_attempts

    • [DEFAULT] ebtables_retry_interval

    • [DEFAULT] fake_network

    • [DEFAULT] send_arp_for_ha

    • [DEFAULT] send_arp_for_ha_count

    • [DEFAULT] dmz_cidr

    • [DEFAULT] force_snat_range

    • [DEFAULT] linuxnet_interface_driver

    • [DEFAULT] linuxnet_ovs_integration_bridge

    • [DEFAULT] use_single_default_gateway

    • [DEFAULT] forward_bridge_interface

    • [DEFAULT] ovs_vsctl_timeout

    • [DEFAULT] networks_path

    • [DEFAULT] public_interface

    • [DEFAULT] routing_source_ip

    • [DEFAULT] use_ipv6

    • [DEFAULT] defer_iptables_apply

    • [DEFAULT] share_dhcp_address

    • [upgrade_levels] network

    • [vmware] vlan_interface

  • The nova-xvpvncproxy service has been deprecated since the 19.0.0 Stein release and has now been removed. The following configuration options have also been removed:

    • [vnc] xvpvncproxy_base_url

    • [vnc] xvpvncproxy_host

    • [vnc] xvpvncproxy_port

    • [xvp] console_xvp_conf_template

    • [xvp] console_xvp_conf

    • [xvp] console_xvp_log

    • [xvp] console_xvp_multiplex_port

    • [xvp] console_xvp_pid

  • Compatibility code for compute drivers that do not implement the update_provider_tree interface has been removed. All compute drivers must now implement update_provider_tree.
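
    For out-of-tree driver authors, here is a heavily simplified sketch of the interface; the inventory values are illustrative placeholders, and a real driver would derive them from the hypervisor:

      # A minimal, illustrative implementation for an out-of-tree driver;
      # the inventory numbers are placeholders, not values from this release.
      import os_resource_classes as orc
      from nova.virt import driver


      class MyDriver(driver.ComputeDriver):
          def update_provider_tree(self, provider_tree, nodename,
                                   allocations=None):
              inventory = {
                  orc.VCPU: {'total': 16},
                  orc.MEMORY_MB: {'total': 32768},
                  orc.DISK_GB: {'total': 500},
              }
              # Replace the compute node provider's inventory in the tree;
              # nova syncs the tree back to placement afterwards.
              provider_tree.update_inventory(nodename, inventory)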

Deprecation Notes

  • The [api]auth_strategy conf option and the corresponding test-only noauth2 pipeline in api-paste.ini are deprecated and will be removed in a future release. The only supported auth_strategy is keystone, the default.

  • The [glance]api_servers configuration option is deprecated and will be removed in a future release. Deployments should use standard keystoneauth1 options to configure communication with a single image service endpoint. Any load balancing or high availability requirements should be satisfied outside of nova.

  • The following conf options have been moved to the [image_cache] group and renamed accordingly. The old option paths are deprecated and will be removed in a future release.

    Deprecated Option                                        New Option
    [DEFAULT]image_cache_manager_interval                    [image_cache]manager_interval
    [DEFAULT]image_cache_subdirectory_name                   [image_cache]subdirectory_name
    [DEFAULT]remove_unused_base_images                       [image_cache]remove_unused_base_images
    [DEFAULT]remove_unused_original_minimum_age_seconds      [image_cache]remove_unused_original_minimum_age_seconds
    [libvirt]remove_unused_resized_minimum_age_seconds       [image_cache]remove_unused_resized_minimum_age_seconds

Bug Fixes

  • Long-standing bug 1694844, where the following conditions would lead to a 400 error response during server create, is now fixed:

    • [cinder]/cross_az_attach=False

    • [DEFAULT]/default_schedule_zone=None

    • A server is created without an availability zone specified but with pre-existing volume block device mappings

    Before the bug was fixed, users would have to specify an availability zone that matches the zone that the volume(s) were in. With the fix, the compute API will implicitly create the server in the zone that the volume(s) are in as long as the volume zone is not the same as the [DEFAULT]/default_availability_zone value (defaults to nova).
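
    To make the fixed scenario concrete, here is a hedged sketch of a boot-from-volume request that omits an availability zone; all identifiers are placeholders:

      import requests

      COMPUTE_URL = "http://nova-api.example.com/v2.1"  # assumed endpoint
      TOKEN = "<keystone-token>"                        # assumed credential

      server = {
          "name": "bfv-server",
          "flavorRef": "<flavor-id>",  # assumed flavor
          # No availability_zone: with the fix, nova derives it from the volume.
          "block_device_mapping_v2": [{
              "boot_index": 0,
              "uuid": "<pre-existing-volume-uuid>",  # assumed volume
              "source_type": "volume",
              "destination_type": "volume",
          }],
      }
      resp = requests.post(
          COMPUTE_URL + "/servers",
          json={"server": server},
          headers={"X-Auth-Token": TOKEN},
      )
      resp.raise_for_status()  # previously a 400 when cross_az_attach=False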

  • Bug 1845986 has been fixed by adding an iommu driver when the following image metadata options are used with AMD SEV:

    • hw_scsi_model=virtio-scsi and either hw_disk_bus=scsi or hw_cdrom_bus=scsi

    • hw_video_model=virtio

    In addition, a virtio-serial controller is now created with an iommu driver when the hw_qemu_guest_agent=yes option is used.

  • This release contains a fix for a regression introduced in 15.0.0 (Ocata) where a server create failing during scheduling would not result in an instance action record being created in the cell0 database. Now, when a server create fails during scheduling and the instance is “buried” in cell0, a create action will be recorded with an event named conductor_schedule_and_build_instances.

  • The DELETE /os-services/{service_id} compute API will now return a 409 HTTPConflict response when trying to delete a nova-compute service which is involved in in-progress migrations. This is because doing so would not only orphan the compute node resource provider in the placement service on which those instances have resource allocations but can also break the ability to confirm/revert a pending resize properly. See https://bugs.launchpad.net/nova/+bug/1852610 for more details.

  • This release contains a fix for bug 1856925 such that resize and migrate server actions will be rejected with a 409 HTTPConflict response if the source compute service is down.

  • An instance can be rebuilt in-place with the original image or a new image. Instance resource usage cannot be altered during a rebuild. Previously Nova would have ignored the NUMA topology of the new image, continuing to use the NUMA topology of the existing instance until a move operation was performed. As Nova did not explicitly guard against inadvertent changes to resource requests contained in a new image, it was possible to rebuild with an image that would violate this requirement; see bug #1763766 for details. This resulted in an inconsistent state, as the running instance did not match the instance that was requested. Nova now explicitly checks whether a rebuild would alter the requested NUMA topology of an instance and rejects the rebuild if so.

  • With the changes introduced to address bug #1763766, Nova now guards against NUMA constraint changes on rebuild. As a result, the NUMATopologyFilter is no longer required to run on rebuild, since we already know the topology will not change and therefore the existing resource claim is still valid. As such, it is now possible to do an in-place rebuild of an instance with a NUMA topology, even if the image changes, provided the new image does not alter the topology. This addresses bug #1804502.

Other Notes

  • Nova now has a config option called [workarounds]/never_download_image_if_on_rbd which helps to avoid pathological storage behavior with multiple ceph clusters. Currently, Nova does not support multiple ceph clusters properly, but Glance can be configured with them. If an instance is booted from an image residing in a ceph cluster other than the one Nova knows about, Nova will silently download the image from Glance and re-upload it to the local ceph cluster privately for that instance. Unlike the behavior you expect when configuring Nova and Glance for ceph, Nova will continue to do this over and over for the same image when subsequent instances are booted, consuming a large amount of storage unexpectedly. The new workaround option will cause Nova to refuse this download/upload behavior and instead fail the instance boot. It is simply a stop-gap effort to stop unsupported deployments with multiple ceph clusters from silently consuming large amounts of disk space.

  • The resize and migrate server action APIs used to synchronously block until a destination host is selected by the scheduler. Those APIs now asynchronously return a response to the user before scheduling. The response code remains 202, and users can monitor the operation via the status and OS-EXT-STS:task_state fields on the server resource, as well as the os-instance-actions API. The most notable change is that NoValidHost will no longer be returned in a 400 error response from the API if scheduling fails; that information is instead available via the instance actions API.
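
    Since scheduling failures are no longer surfaced synchronously, a hedged sketch of monitoring the operation afterwards (endpoint, token, and server ID are placeholders):

      import requests

      COMPUTE_URL = "http://nova-api.example.com/v2.1"  # assumed endpoint
      TOKEN = "<keystone-token>"                        # assumed credential
      SERVER_ID = "<server-uuid>"                       # assumed server
      HEADERS = {"X-Auth-Token": TOKEN}

      # The server record exposes status and OS-EXT-STS:task_state ...
      server = requests.get(
          f"{COMPUTE_URL}/servers/{SERVER_ID}", headers=HEADERS
      ).json()["server"]
      print(server["status"], server.get("OS-EXT-STS:task_state"))

      # ... and os-instance-actions records the outcome of the resize/migrate.
      actions = requests.get(
          f"{COMPUTE_URL}/servers/{SERVER_ID}/os-instance-actions",
          headers=HEADERS,
      ).json()["instanceActions"]
      for action in actions:
          print(action["action"], action["request_id"])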

  • The nova libvirt virt driver supports creating instances with multi-queue virtio network interfaces. In previous releases, nova based the maximum number of virtio queue pairs that can be allocated on the reported kernel major version. It has been reported in bug #1847367 that some distros have backported changes from later major versions that make the major version number no longer suitable for determining the maximum virtio queue pair count. A new config option has been added to the [libvirt] section of nova.conf. When defined, nova will use the [libvirt]/max_queues option to determine the maximum number of queues that can be configured; if undefined, it will fall back to the previous kernel-version-based approach.