Newton Series Release Notes


Bug Fixes

  • Includes the fix for bug 1673613, which could cause failures when upgrading and running nova-manage cell_v2 simple_cell_setup or nova-manage cell_v2 map_cell0 when the database connection read from config has special characters in the URL.
  • Fixes bug 1691545 in which there was a significant increase in database connections because of the way connections to cell databases were being established. With this fix, objects related to database connections are cached in the API service and reused to prevent new connections being established for every communication with cell databases.
  • The nova-manage cell_v2 simple_cell_setup command now creates the default cell0 database connection using the [database] connection configuration option rather than the [api_database] connection. The cell0 database uses the main database schema (i.e. the instances table) rather than the api database schema. In other words, the cell0 database would be called something like nova_cell0 rather than nova_api_cell0.
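The special-character problem above comes down to URL encoding: reserved characters in a database password must be percent-encoded before being embedded in the connection URL. A minimal sketch (hostname and credentials are illustrative):

```python
from urllib.parse import quote_plus

# Hypothetical credentials; a password containing URL-reserved characters
# ('@', '/', ':') must be percent-encoded before being placed in the
# [database] connection URL, or URL parsing will break.
password = "p@ss/w:rd"
connection = "mysql+pymysql://nova:%s@db-host/nova_cell0" % quote_plus(password)
print(connection)
```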


This release includes fixes for security vulnerabilities.

Known Issues

  • The live-migration progress timeout controlled by the configuration option [libvirt]/live_migration_progress_timeout has been discovered to frequently cause live-migrations to fail with a progress timeout error, even though the live-migration is still making good progress. To minimize problems caused by these checks we recommend setting the value to 0, which means do not trigger a timeout. (This has been made the default in Ocata and Pike.) To modify when a live-migration will fail with a timeout error, please now look at [libvirt]/live_migration_completion_timeout and [libvirt]/live_migration_downtime.
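A minimal sketch of the recommended workaround, using Python's configparser to set the option in a [libvirt] section (the in-memory config here stands in for nova.conf):

```python
import configparser
import io

# Illustrative sketch: disable the live-migration progress timeout by
# setting it to 0 in the [libvirt] section, as recommended above.
cfg = configparser.ConfigParser()
cfg.read_string("[libvirt]\n")
cfg.set("libvirt", "live_migration_progress_timeout", "0")

buf = io.StringIO()
cfg.write(buf)          # what the edited nova.conf fragment would look like
print(buf.getvalue())
```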

Security Issues

  • [CVE-2017-7214] Failed notification payload is dumped in logs with auth secrets


Known Issues

  • When generating Libvirt XML to attach network interfaces for the tap, ivs, iovisor, midonet, and vrouter virtual interface types, Nova previously generated an empty path attribute in the script element (<script path=""/>) of the interface.

    As of Libvirt 1.3.3 (commit) and later, Libvirt no longer accepts an empty path attribute in the script element of the interface. Notably this includes Libvirt 2.0.0 as provided with RHEL 7.3 and CentOS 7.3-1611. The creation of virtual machines with offending interface definitions on a host with Libvirt 1.3.3 or later will result in the error "libvirtError: Cannot find '' in path: No such file or directory".

    Additionally, where virtual machines already exist that were created using earlier versions of Libvirt interactions with these virtual machines via Nova or other utilities (e.g. virsh) may result in similar errors.

    To mitigate this issue, Nova no longer generates an empty path attribute in the script element when defining an interface. This resolves the issue with regard to virtual machine creation. Resolving the issue for existing virtual machines requires a change to Libvirt, which is being tracked in Bugzilla 1412834.
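The shape of the fix can be sketched as follows: emit the script element only when a non-empty path is available. This is an illustrative reconstruction, not Nova's actual XML generation code:

```python
import xml.etree.ElementTree as ET

def interface_xml(script_path=None):
    # Build a minimal libvirt <interface> element. The key point of the
    # fix: only emit a <script> child when a real path is given, because
    # libvirt >= 1.3.3 rejects an empty path attribute.
    iface = ET.Element("interface", type="ethernet")
    ET.SubElement(iface, "target", dev="tap0")
    if script_path:
        ET.SubElement(iface, "script", path=script_path)
    return ET.tostring(iface, encoding="unicode")

print(interface_xml())                       # no <script> element at all
print(interface_xml("/etc/qemu-ifup"))       # <script> only with a real path
```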

Bug Fixes

  • Fixes bug 1662699, which was a regression in the v2.1 API from the block_device_mapping_v2.boot_index validation that was performed in the legacy v2 API. With this fix, requests to create a server with boot_index=None will be treated as if boot_index was not specified, which means a non-bootable block device.
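The fixed semantics can be sketched as a small normalization helper (illustrative, not Nova's actual code): boot_index=None is treated the same as an absent boot_index, i.e. a non-bootable device.

```python
def effective_boot_index(bdm):
    # Treat boot_index=None exactly like an absent boot_index key:
    # both mean "not bootable". Otherwise coerce to int.
    value = bdm.get("boot_index")
    return None if value is None else int(value)

# boot_index=None and a missing key are now equivalent:
print(effective_boot_index({"boot_index": None}))  # None
print(effective_boot_index({}))                    # None
print(effective_boot_index({"boot_index": "0"}))   # 0 -> bootable root disk
```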


A new database schema migration is included in this release to fix bug 1635446.

Known Issues

  • Use of the newly introduced optional placement RESTful API in Newton requires WebOb>=1.6.0. Prior to the Newton release this requirement was not reflected in requirements.txt, where the lower limit was set to WebOb>=1.2.3.

Bug Fixes

  • Contains database schema migration 021_build_requests_instance_mediumtext, which increases the size of the build_requests.instance column on MySQL backends. This is needed to create new instances that have very large user_data fields.


The Nova 14.0.0 release includes many new features and bug fixes. It is hard to cover every change introduced during the release, but please read at least the upgrade section, which describes the required modifications for upgrading your cloud from 13.0.0 (Mitaka) to 14.0.0 (Newton). That said, a few major changes are worth noting here. This is not an exhaustive list, just the important things you need to know:

  • The latest API microversion supported in Newton is v2.38.
  • Nova now provides a new, currently optional, placement RESTful API endpoint, which Nova compute nodes use to report resources. For now the nova-scheduler does not use it, but we plan to have the scheduler check placement resources in Ocata. If you plan to rolling-upgrade the compute nodes between Newton and Ocata, please see the notes below on how to deploy the new placement API.
  • Cells v2 now supports booting instances in a single cell only. Multi-cell support is planned for Ocata. You can prepare for Ocata now by creating a cell v2 using the nova-manage related commands, but configuring Cells V2 remains fully optional this cycle.
  • Nova now uses the Glance v2 API for image resources.
  • API microversions 2.36 and above deprecate the REST resources in Nova used to proxy calls to other service type APIs (e.g. /os-volumes). These will still be supported until the minimum API version is raised to 2.36, which is not yet planned (v2.1 is supported as of now), but you are encouraged to stop using those resources and instead call the other services that provide them natively.


New Features

  • Added perf event support to the libvirt driver. This is enabled with the new ‘enabled_perf_events’ configuration option in the [libvirt] section of nova.conf. This feature requires libvirt>=2.0.0.
  • Starting from REST API microversion 2.34, pre-live-migration checks are performed asynchronously; use instance-actions to get information about the check results. This reduces the number of RPC timeouts, as the previous workflow was fully blocking: checks before live-migration made blocking RPC requests to both the source and destination compute nodes.
  • A new configuration option live_migration_permit_auto_converge has been added to allow the hypervisor to throttle down the CPU of an instance during live migration when progress is slow due to a high ratio of dirty pages. Requires libvirt>=1.2.3 and QEMU>=1.6.0.
  • A new configuration option live_migration_permit_post_copy has been added to start live migrations in a way that allows Nova to switch an on-going live migration to post-copy mode. Requires libvirt>=1.3.3 and QEMU>=2.5.0. If post-copy is permitted and the version requirements are met, it also changes the behaviour of ‘live_migration_force_complete’ so that it switches the on-going live migration to post-copy mode instead of pausing the instance.
  • Fix os-console-auth-tokens API to return connection info for all types of tokens, not just RDP.
  • Hyper-V RemoteFX feature.

    Microsoft RemoteFX enhances the visual experience in RDP connections, including providing access to virtualized instances of a physical GPU to multiple guests running on Hyper-V.

    In order to use RemoteFX in Hyper-V 2012 R2, one or more DirectX 11 capable display adapters must be present and the RDS-Virtualization server feature must be installed.

    To enable this feature, the following config option must be set in the Hyper-V compute node’s ‘nova.conf’ file:

    enable_remotefx = True

    To create instances with RemoteFX capabilities, the following flavor extra specs must be used:

    os:resolution. Guest VM screen resolution size. Acceptable values:

    1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160

    ‘3840x2160’ is only available on Windows / Hyper-V Server 2016.

    os:monitors. Guest VM number of monitors. Acceptable values:

    [1, 4] - Windows / Hyper-V Server 2012 R2
    [1, 8] - Windows / Hyper-V Server 2016

    os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values:

    64, 128, 256, 512, 1024

    There are a few considerations that need to be kept in mind:

    • Not all guests support RemoteFX capabilities.

    • Windows / Hyper-V Server 2012 R2 does not support Generation 2 VMs with RemoteFX capabilities.

    • Per resolution, there is a maximum amount of monitors that can be added. The limits are as follows:

      For Windows / Hyper-V Server 2012 R2:

      1024x768: 4
      1280x1024: 4
      1600x1200: 3
      1920x1200: 2
      2560x1600: 1

      For Windows / Hyper-V Server 2016:

      1024x768: 8
      1280x1024: 8
      1600x1200: 4
      1920x1200: 4
      2560x1600: 2
      3840x2160: 1
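The resolution and monitor limits above can be encoded as a small validation helper for the flavor extra specs; this is an illustrative sketch, not Hyper-V driver code:

```python
# Per-resolution monitor maximums, mirroring the documented tables for
# Windows / Hyper-V Server 2012 R2 and 2016.
MAX_MONITORS = {
    "2012R2": {"1024x768": 4, "1280x1024": 4, "1600x1200": 3,
               "1920x1200": 2, "2560x1600": 1},
    "2016":   {"1024x768": 8, "1280x1024": 8, "1600x1200": 4,
               "1920x1200": 4, "2560x1600": 2, "3840x2160": 1},
}

def validate_remotefx(extra_specs, server_version="2016"):
    # Check an os:resolution / os:monitors combination against the limits.
    res = extra_specs["os:resolution"]
    monitors = int(extra_specs["os:monitors"])
    limits = MAX_MONITORS[server_version]
    if res not in limits:
        raise ValueError("unsupported resolution %s" % res)
    if not 1 <= monitors <= limits[res]:
        raise ValueError("at most %d monitors at %s" % (limits[res], res))

# Valid on 2016: 1920x1200 allows up to 4 monitors.
validate_remotefx({"os:resolution": "1920x1200", "os:monitors": "4"})
```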
  • Microversion v2.26 allows creating/updating/deleting simple string tags on servers. These tags can then be used to filter servers.
  • Added microversion v2.35 that adds pagination support for keypairs with the help of new optional parameters ‘limit’ and ‘marker’ which were added to GET /os-keypairs request.
  • Added microversion v2.28, from which the hypervisor’s ‘cpu_info’ field is returned as a JSON object in the response to a GET /v2.1/os-hypervisors/{hypervisor_id} request.
  • Virtuozzo Storage is available as a volume backend in libvirt virtualization driver.


    Only the qcow2 and raw volume formats are supported; ploop is not.

  • Virtuozzo ploop disks can be resized now during “nova resize”.
  • Virtuozzo instances with ploop disks now support the rescue operation.
  • A new nova-manage command has been added to discover any new hosts that are added to a cell. If a deployment has migrated to cells v2 using either the simple_cell_setup or the map_cell0/map_cell_and_hosts/map_instances combo, then any time a new host is added to a cell this new “nova-manage cell_v2 discover_hosts” command needs to be run before instances can be booted on that host. If multiple hosts are added at one time the command only needs to be run once to discover all of them. This command should be run from an API host, or a host that is configured to use the nova_api database. Please note that adding a host to a cell and not running this command could lead to build failures/reschedules if that host is selected by the scheduler, because the discover_hosts command is necessary to route requests to the host but is not necessary for the scheduler to be aware of the host. It is advised that nova-compute hosts are configured with “enable_new_services=False” in order to avoid failures before the hosts have been discovered.
  • On evacuate actions, the default behaviour when providing a host in the request body changed. Now, instead of bypassing the scheduler when asking for a destination, it will instead call it with the requested destination to make sure the proposed host is accepted by all the filters and the original request. In case the administrator doesn’t want to call the scheduler when providing a destination, a new request body field called force (defaulted to False) will modify that new behaviour by forcing the evacuate operation to the destination without verifying the scheduler.
  • On live-migrate actions, the default behaviour when providing a host in the request body changed. Now, instead of bypassing the scheduler when asking for a destination, it will instead call it with the requested destination to make sure the proposed host is accepted by all the filters and the original request. In case the administrator doesn’t want to call the scheduler when providing a destination, a new request body field called force (defaulted to False) will modify that new behaviour by forcing the live-migrate operation to the destination without verifying the scheduler.
  • The 2.37 microversion adds support for automatic allocation of network resources for a project when networks: auto is specified in a server create request. If the project does not have any networks available to it and the auto-allocated-topology API is available in the Neutron networking service, Nova will call that API to allocate resources for the project. There is some setup required in the deployment for the auto-allocated-topology API to work in Neutron. See the Additional features section of the OpenStack Networking Guide for more details for setting up this feature in Neutron.


    The API does not default to ‘auto’. However, python-novaclient will default to passing ‘auto’ for this microversion if no specific network values are provided to the CLI.


    This feature is not available until all of the compute services in the deployment are running Newton code. This is to avoid sending a server create request to a Mitaka compute that can not understand a network ID of ‘auto’ or ‘none’. If this is the case, the API will treat the request as if networks was not in the server create request body. Once all computes are upgraded to Newton, a restart of the nova-api service will be required to use this new feature.
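An illustrative 2.37 server-create request body using automatic network allocation (the server name, image and flavor references are placeholders):

```python
import json

# Hypothetical POST /servers request body for microversion >= 2.37.
# "networks": "auto" asks Nova/Neutron to auto-allocate network resources;
# imageRef/flavorRef values here are placeholders, not real IDs.
body = {
    "server": {
        "name": "test-server",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "flavorRef": "1",
        "networks": "auto",
    }
}
print(json.dumps(body))
```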

  • Nova now defaults to using the glance version 2 protocol for all backend operations for all virt drivers. A use_glance_v1 config option exists to revert to glance version 1 protocol if issues are seen, however that will be removed early in Ocata, and only glance version 2 protocol will be used going forward.
  • When booting an instance, its sanitized ‘hostname’ attribute is now used to populate the ‘dns_name’ attribute of the Neutron ports the instance is attached to. This functionality enables the Neutron internal DNS service to know the ports by the instance’s hostname. As a consequence, commands like ‘hostname -f’ will work as expected when executed in the instance. When a port’s network has a non-blank ‘dns_domain’ attribute, the port’s ‘dns_name’ combined with the network’s ‘dns_domain’ will be published by Neutron in an external DNS as a service like Designate. As a consequence, the instance’s hostname is published in the external DNS as a service. This functionality is added to Nova when the ‘DNS Integration’ extension is enabled in Neutron. The publication of ‘dns_name’ and ‘dns_domain’ combinations to an external DNS as a service additionally requires the configuration of the appropriate driver in Neutron. When the ‘Port Binding’ extension is also enabled in Neutron, the publication of a ‘dns_name’ and ‘dns_domain’ combination to the external DNS as a service will require one additional update operation when Nova allocates the port during the instance boot. This may have a noticeable impact on the performance of the boot process.
  • Adds a new feature to the ironic virt driver, which allows multiple nova-compute services to be run simultaneously. This uses consistent hashing to divide the ironic nodes between the nova-compute services, with the hash ring being refreshed each time the resource tracker runs.

    Note that instances will still be owned by the same nova-compute service for the entire life of the instance, and so the ironic node that instance is on will also be managed by the same nova-compute service until the node is deleted. This also means that removing a nova-compute service will leave instances managed by that service orphaned, and as such most instance actions will not work until a nova-compute service with the same hostname is brought (back) online.

    When nova-compute services are brought up or down, the ring will eventually re-balance (when the resource tracker runs on each compute). This may result in duplicate compute_node entries for ironic nodes while the nova-compute service pool is re-balancing. However, because any nova-compute service running the ironic virt driver can manage any ironic node, if a build request goes to the compute service not currently managing the node the build request is for, it will still succeed.

    There is no configuration needed to enable this feature; it is always enabled. Nothing major changes when only one compute service is running; the bigger changes come into play as more compute services are brought online.

    Note that this is tested when running with only one nova-compute service, but not more than one. As such, this should be used with caution for multiple compute hosts until it is properly tested in CI.

  • Multitenant networking for the ironic compute driver is now supported. To enable this feature, ironic nodes must be using the ‘neutron’ network_interface.
  • The Libvirt driver now uses os-vif plugins for handling plug/unplug actions for the Linux Bridge and OpenVSwitch VIF types. Each os-vif plugin will have its own group in nova.conf for configuration parameters it needs. These plugins will be installed by default as part of the os-vif module installation so no special action is required.
  • Added hugepage support for POWER architectures.
  • Microversions may now (with microversion 2.27) be requested with the “OpenStack-API-Version: compute 2.27” header, in alignment with OpenStack-wide standards. The original format, “X-OpenStack-Nova-API-Version: 2.27”, may still be used.
  • Nova has been enabled for mutable config. Certain options may be reloaded by sending SIGHUP to the correct process. Live migration options will apply to live migrations currently in progress. Please refer to the configuration manual.
    • DEFAULT.debug
    • libvirt.live_migration_completion_timeout
    • libvirt.live_migration_progress_timeout
  • The following legacy notifications have been transformed to a new versioned payload:

    • instance.delete
    • instance.pause
    • instance.power_on
    • instance.shelve
    • instance.suspend
    • instance.restore
    • instance.resize
    • instance.update
    • compute.exception

    Every versioned notification has a sample file stored under the doc/notification_samples directory. Consult these samples for more information.

  • Nova is now configured to work with two oslo.policy CLI scripts that have been added. The first of these can be called like “oslopolicy-list-redundant --namespace nova” and will output a list of policy rules in policy.[json|yaml] that match the project defaults. These rules can be removed from the policy file as they have no effect there. The second script can be called like “oslopolicy-policy-generator --namespace nova --output-file policy-merged.yaml” and will populate the policy-merged.yaml file with the effective policy. This is the merged result of project defaults and config file overrides.
  • Added microversion v2.33, which adds paging support for hypervisors: the admin can perform a paginated query by using limit and marker to get a list of hypervisors. The result is sorted by hypervisor id.
  • Libvirt with the Virtuozzo virtualization type now supports snapshot operations.
  • The nova-compute worker now communicates with the new placement API service. Nova determines the placement API service by querying the OpenStack service catalog for the service with a service type of ‘placement’. If there is no placement entry in the service catalog, nova-compute will log a warning and no longer try to reconnect to the placement API until the nova-worker process is restarted.
  • A new [placement] section is added to the nova.conf configuration file for configuration options affecting how Nova interacts with the new placement API service. This contains the usual keystone auth and session options.
  • The pointer_model configuration option and the hw_pointer_model image property were added to specify different pointer models for input devices. This replaces the now deprecated use_usb_tablet option.
  • The nova-policy command line tool has been implemented to let users experiment with the under-development policy discovery feature. Users can input credential information and instance info, and the tool will return the list of APIs they are allowed to invoke. There is no stability contract for the tool’s interface while the feature is still under development.
  • Add a nova-manage command to refresh the quota usages for a project or user. This can be used when the usages in the quota-usages database table are out-of-sync with the actual usages. For example, if a resource usage is at the limit in the quota_usages table, but the actual usage is less, then nova will not allow VMs to be created for that project or user. The nova-manage command can be used to re-sync the quota_usages table with the actual usage.
  • Adds the reserved_huge_pages option to reserve an amount of huge pages for use by third-party components.
  • The libvirt driver will attempt to update the time of a suspended and/or migrated guest in order to keep the guest clock in sync. This requires the guest agent to be configured and running in the guest; the operation is not disruptive.
  • Two new policies soft-affinity and soft-anti-affinity have been implemented for the server-group feature of Nova. This means that POST /v2.1/{tenant_id}/os-server-groups API resource now accepts ‘soft-affinity’ and ‘soft-anti-affinity’ as value of the ‘policies’ key of the request body.
  • The 2.32 microversion adds support for virtual device role tagging. Device role tagging is an answer to the question ‘Which device is which, inside the guest?’ When booting an instance, an optional arbitrary ‘tag’ parameter can be set on virtual network interfaces and/or block device mappings. This tag is exposed to the instance through the metadata API and on the config drive. Each tagged virtual network interface is listed along with information about the virtual hardware, such as bus type (ex: PCI), bus address (ex: 0000:00:02.0), and MAC address. For tagged block devices, the exposed hardware metadata includes the bus (ex: SCSI), bus address (ex: 1:0:2:0) and serial number.

    The 2.32 microversion also adds the 2016-06-30 version to the metadata API. Starting with 2016-06-30, the metadata contains a ‘devices’ section which lists any devices that are tagged as described in the previous paragraph, along with their hardware metadata.
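A sketch of consuming the ‘devices’ section from the 2016-06-30 metadata; the exact field names here are assumptions based on the description above (bus type, bus address, MAC address, serial number), not a verbatim copy of the metadata schema:

```python
import json

# Hypothetical device-role-tagging metadata; field names are illustrative.
metadata = json.loads("""
{
  "devices": [
    {"type": "nic", "bus": "pci", "address": "0000:00:02.0",
     "mac": "fa:16:3e:d1:28:e4", "tags": ["nfvfunc1"]},
    {"type": "disk", "bus": "scsi", "address": "1:0:2:0",
     "serial": "disk-vol-2352423", "tags": ["database"]}
  ]
}
""")

# Inside the guest, a consumer would look up a device by its tag to learn
# which bus address it sits on.
tagged_nics = [d for d in metadata["devices"]
               if d["type"] == "nic" and "nfvfunc1" in d["tags"]]
print(tagged_nics[0]["address"])
```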

Known Issues

  • If a deployer has updated their deployment to cells v2 using either the simple_cell_setup or the map_cell0/map_cell_and_hosts/map_instances combo and they add a new host into the cell, it may cause build failures or reschedules until they run the “nova-manage cell_v2 discover_hosts” command. This is because the scheduler will quickly become aware of the host but nova-api will not know how to route the request to that host until it has been “discovered”. In order to avoid this, it is advised that new computes are disabled until the discover command has been run.
  • When using Neutron extension ‘port_security’ and booting an instance on a network with ‘port_security_enabled=False’ the Nova API response says there is a ‘default’ security group attached to the instance which is incorrect. However when listing security groups for the instance there are none listed, which is correct. The API response will be fixed separately with a microversion.
  • When running Nova Compute and Cinder Volume or Backup services on the same host they must use a shared lock directory to avoid rare race conditions that can cause volume operation failures (primarily attach/detach of volumes). This is done by setting the “lock_path” to the same directory in the “oslo_concurrency” section of nova.conf and cinder.conf. This issue affects all previous releases utilizing os-brick and shared operations on hosts between Nova Compute and Cinder data services.
  • When using virtual device role tagging, the metadata on the config drive lags behind the metadata obtained from the metadata API. For example, if a tagged virtual network interface is detached from the instance, its tag remains in the metadata on the config drive. This is due to the nature of the config drive, which, once written, cannot be easily updated by Nova.
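The shared lock directory recommendation above can be expressed as matching config fragments; this sketch uses configparser and an illustrative directory to show both services pointing at the same lock_path:

```python
import configparser

# Illustrative shared directory; any path writable by both services works.
LOCK_PATH = "/var/lib/openstack/lock"

def lock_fragment():
    # Build the [oslo_concurrency] fragment that both nova.conf and
    # cinder.conf must carry with an identical lock_path value.
    cfg = configparser.ConfigParser()
    cfg.add_section("oslo_concurrency")
    cfg.set("oslo_concurrency", "lock_path", LOCK_PATH)
    return cfg

nova_cfg, cinder_cfg = lock_fragment(), lock_fragment()
# The race-condition fix depends on the two values being identical:
assert (nova_cfg.get("oslo_concurrency", "lock_path")
        == cinder_cfg.get("oslo_concurrency", "lock_path"))
```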

Upgrade Notes

  • All cloudpipe configuration options have been added to the ‘cloudpipe’ group. They should no longer be included in the ‘DEFAULT’ group.
  • All crypto configuration options have been added to the ‘crypto’ group. They should no longer be included in the ‘DEFAULT’ group.
  • All WSGI configuration options have been added to the ‘wsgi’ group. They should no longer be included in the ‘DEFAULT’ group.
  • Aggregates are being moved to the API database for CellsV2. In this release, the online data migrations will move any aggregates you have in your main database to the API database, retaining all attributes. Until this is complete, new attempts to create aggregates will return an HTTP 409 to avoid creating aggregates in one place that may conflict with aggregates you already have and are yet to be migrated.
  • Note that aggregates can no longer be soft-deleted as the API database does not replicate the legacy soft-delete functionality from the main database. As such, deleted aggregates are not migrated and the behavior users will experience will be the same as if a purge of deleted records was performed.
  • The nova-manage db online_data_migrations command will now migrate server groups to the API database. New server groups will be automatically created in the API database but existing server groups must be manually migrated using the nova-manage command.
  • The get_metrics API has been replaced by populate_metrics in the nova.compute.monitors.base module. This change is introduced to allow each monitor plugin the flexibility of setting its own metric value types. The in-tree metrics plugins are modified as a part of this change; however, out-of-tree plugins will have to adapt to the new API in order to work with nova.
  • For the Virtuozzo Storage driver to work with os-brick <1.4.0, you need to allow “pstorage-mount” in rootwrap filters for nova-compute.
  • You must update the rootwrap configuration for the compute service if you use ploop images, so that “ploop grow” filter is changed to “prl_disk_tool resize”.
  • The record configuration option for the console proxy services (like VNC, serial, spice) is changed from boolean to string. It specifies the filename that will be used for recording websocket frames.
  • ‘nova-manage db sync’ can now sync the cell0 database. The cell0 db is required to store instances that cannot be scheduled to any cell. Before the ‘db sync’ command is called a cell mapping for cell0 must have been created using ‘nova-manage cell_v2 map_cell0’. This command only needs to be called when upgrading to CellsV2.
  • A new nova-manage command has been added which will upgrade a deployment to cells v2. Running the command will setup a single cell containing the existing hosts and instances. No data or instances will be moved during this operation, but new data will be added to the nova_api database. New instances booted after this point will be placed into the cell. Please note that this does not mean that cells v2 is fully functional at this time, but this is a significant part of the effort to get there. The new command is “nova-manage cell_v2 simple_cell_setup --transport_url <transport_url>” where transport_url is the connection information for the current message queue used by Nova. Operators must create a new database for cell0 before running cell_v2 simple_cell_setup. The simple cell setup command expects the name of the cell0 database to be <main database name>_cell0 as it will create a cell mapping for cell0 based on the main database connection, sync the cell0 database, and associate existing hosts and instances with the single cell.
  • The deprecated configuration option client_log_level of the section [ironic] has been deleted. Please use the config options log_config_append or default_log_levels of the [DEFAULT] section.
  • A new nova-manage command ‘nova-manage cell_v2 map_cell0’ is now available. Creates a cell mapping for cell0, which is used for storing instances that cannot be scheduled to any cell. This command only needs to be called when upgrading to CellsV2.
  • The default value of the pointer_model configuration option has been set to ‘usbtablet’.
  • The following policy enforcement points have been removed as part of the restructuring of the Nova API code. The attributes that could have been hidden with these policy points will now always be shown / accepted.

    • os_compute_api:os-disk-config - show / accept OS-DCF:diskConfig parameter on servers
    • os-access-ips - show / accept accessIPv4 and accessIPv6 parameters on servers

    The following entry points have been removed

    • nova.api.v21.extensions.server.resize - allowed accepting additional parameters on server resize requests.
    • nova.api.v21.extensions.server.update - allowed accepting additional parameters on server update requests.
    • nova.api.v21.extensions.server.rebuild - allowed accepting additional parameters on server rebuild requests.
  • Flavors are being moved to the API database for CellsV2. In this release, the online data migrations will move any flavors you have in your main database to the API database, retaining all attributes. Until this is complete, new attempts to create flavors will return an HTTP 409 to avoid creating flavors in one place that may conflict with flavors you already have and are yet to be migrated.
  • Note that flavors can no longer be soft-deleted as the API database does not replicate the legacy soft-delete functionality from the main database. As such, deleted flavors are not migrated and the behavior users will experience will be the same as if a purge of deleted records was performed.
  • The 2.37 microversion enforces the following:
    • networks is required in the server create request body for the API. Specifying networks: auto is similar to not requesting specific networks when creating a server before 2.37.
    • The uuid field in the networks object of a server create request is now required to be in UUID format, it cannot be a random string. More specifically, the API used to support a nic uuid with a “br-” prefix but that is a legacy artifact which is no longer supported.
  • It is now required that the glance environment used by Nova exposes the version 2 REST API. This API has been available for many years, but previously Nova only used the version 1 API.
  • imageRef input to the REST API is now restricted to a UUID or an empty string only. The imageRef passed when creating, rebuilding or rescuing a server, etc. must now be a valid UUID. Previously, a random image ref URL containing an image UUID was accepted, but now any imageRef must be a valid UUID (with the exception below), otherwise the API will return 400. Exception: booting a server from a volume. Previously an empty string was allowed in imageRef, which is valid in the boot-from-volume case. Nova keeps that behavior and allows an empty string only when booting from a volume, returning 400 in all other cases.
  • Prior to the Grizzly release, default instance directory names were based on the instance id field; for example, the directory for an instance could be named instance-00000008. In Grizzly this mechanism was changed and instance.uuid is now used as the instance directory name.

    In Newton, backward compatibility is dropped. For instances that haven’t been restarted since Folsom and earlier, maintenance should be scheduled before the upgrade (stop the instance, rename its directory to instance.uuid, then start it) so that Nova will use the new paths for those instances.

  • The ironic driver now requires python-ironicclient>=1.5.0 (previously >=1.1.0), and requires the ironic service to support API version 1.20 or higher. As usual, ironic should be upgraded before nova for a smooth upgrade process.
  • The ironic driver now requires python-ironicclient>=1.6.0, and requires the ironic service to support API version 1.21.
  • Keypairs have been moved to the API database, using an online data migration. During the first phase of the migration, instances will be given local storage of their key, after which keypairs will be moved to the API database.
  • Default value of live_migration_tunnelled config option in libvirt section has been changed to False. After upgrading nova to Newton all live migrations will be non-tunnelled unless live_migration_tunnelled is explicitly set to True. It means that, by default, the migration traffic will not go through libvirt and therefore will no longer be encrypted.
  • With the introduction of os-vif, some networking related configuration options have moved, and users will need to update their nova.conf. For Open vSwitch users the following options have moved from [DEFAULT] to [vif_plug_ovs]:

    • network_device_mtu
    • ovs_vsctl_timeout

    For Linux Bridge users the following options have moved from [DEFAULT] to [vif_plug_linux_bridge]:

    • use_ipv6
    • iptables_top_regex
    • iptables_bottom_regex
    • iptables_drop_action
    • forward_bridge_interface
    • vlan_interface
    • flat_interface
    • network_device_mtu

    For backwards compatibility, and ease of upgrade, these options will continue to work from [DEFAULT] during the Newton release. However, they will not in future releases.
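    As a sketch of the change, a deployment that previously set these options under [DEFAULT] would move them like this (the option values shown are illustrative only):

```ini
# Open vSwitch users: moved from [DEFAULT].
[vif_plug_ovs]
network_device_mtu = 9000
ovs_vsctl_timeout = 120

# Linux Bridge users: moved from [DEFAULT], e.g.:
[vif_plug_linux_bridge]
flat_interface = eth1
vlan_interface = eth1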
  • The minimum required version of libvirt has been increased to 1.2.1.
  • The minimum required QEMU version is now checked and has been set to 1.5.3.
  • The network_api_class option was deprecated in Mitaka and is removed in Newton. The use_neutron option replaces this functionality.
  • The Newton release has a lot of online migrations that must be performed before you will be able to upgrade to Ocata. Please take extra note of this fact and budget time to run these online migrations before you plan to upgrade. These migrations can be run without downtime with nova-manage db online_data_migrations.
  • The notify_on_state_change configuration option was a StrOpt, which would accept any string or None in previous releases. Starting in the Newton release, it allows only three values: None, vm_state, and vm_and_task_state. The default value is None.
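    A valid setting now looks like the following (assuming the option still lives in [DEFAULT], as in previous releases):

```ini
[DEFAULT]
# Must be unset (None), vm_state, or vm_and_task_state.
# Arbitrary strings accepted by previous releases are now rejected.
notify_on_state_change = vm_and_task_state
```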
  • The deprecated auth parameter admin_auth_token was removed from the [ironic] config option group. The use of admin_auth_token is insecure compared to the use of a proper username/password.
  • The previously deprecated config option listen of the group serial_console has been removed, as it was never used in the code.
  • The ‘manager’ option in the [cells] group was deprecated in Mitaka and has now been removed completely in Newton. There is no impact.
  • The following deprecated configuration options have been removed from the cinder section of nova.conf:

    • ca_certificates_file
    • api_insecure
    • http_timeout
  • The ‘destroy_after_evacuate’ workaround option has been removed as the workaround is no longer necessary.
  • The config options ‘osapi_compute_ext_list’ and ‘osapi_compute_extension’ were deprecated in Mitaka. These options have been completely removed in Newton, as the v2 API is removed and the v2.1 API doesn’t provide the option of configuring extensions.
  • The deprecated config option remove_unused_kernels has been removed from the [libvirt] config section. No replacement is required, as this behaviour is no longer relevant.
  • The extensible resource tracker was deprecated in the 13.0.0 release and has now been removed. Custom resources in the nova.compute.resources namespace selected by the compute_resources configuration parameter will not be loaded.
  • The legacy v2 API code had been deprecated since the Liberty release and has been removed in Newton. We suggest that users move to the v2.1 API, which is compatible with v2 while adding stricter input validation and microversion support. Users who still need a v2-compatible API before switching to v2.1 can run the v2.1 API code in a v2-compatibility mode. That mode is closer to v2 API behaviour: it is v2 compatible, without strict input validation and without microversion support. So if you are using openstack_compute_api_legacy_v2 in /etc/nova/api-paste.ini for the API endpoint /v2, you need to switch the endpoint to openstack_compute_api_v21_legacy_v2_compatible instead.
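    The switch happens in /etc/nova/api-paste.ini. A hedged sketch of the relevant composite section (surrounding entries omitted; your file layout may differ):

```ini
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
# Before (legacy v2 code, now removed):
#   /v2: openstack_compute_api_legacy_v2
# After (v2.1 running in v2-compatibility mode):
/v2: openstack_compute_api_v21_legacy_v2_compatible
/v2.1: openstack_compute_api_v21
```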
  • The ‘live_migration_flag’ and ‘block_migration_flag’ options in the libvirt section, which were deprecated in Mitaka, have been completely removed in Newton, because Nova automatically sets the correct migration flags. New config options have been added to retain the ability to turn tunnelling, auto-converge, and post-copy on or off, respectively named live_migration_tunnelled, live_migration_permit_auto_converge, and live_migration_permit_post_copy.
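    The replacement options can be set individually instead of assembling a flag string; a sketch with illustrative values:

```ini
[libvirt]
# Replacements for the removed live_migration_flag /
# block_migration_flag options. Values shown are examples.
live_migration_tunnelled = False
live_migration_permit_auto_converge = True
live_migration_permit_post_copy = False
```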
  • The ‘memcached_server’ option in the DEFAULT section, which was deprecated in Mitaka, has been completely removed in Newton. It has been replaced by options from the oslo [cache] section.
  • The service subcommand of nova-manage was deprecated in 13.0 and is removed in 14.0. Use the service-* commands from python-novaclient or the os-services REST resource instead.
  • The network_device_mtu option in Nova is deprecated for removal in 13.0.0 since network MTU should be specified when creating the network.
  • The legacy v2 API code has been removed, along with the set of policy rules in policy.json that were used only by the legacy v2 API. Both the v2.1 API and the v2.1 compatibility-mode API use the same set of new policy rules, which are prefixed with os_compute_api.
  • Removed the security_group_api configuration option that was deprecated in Mitaka. The correct security_group_api option will be chosen based on the value of use_neutron which provides a more coherent user experience.
  • The deprecated volume_api_class config option has been removed. We only have one sensible backend for it, so don’t need it anymore.
  • The libvirt option ‘iscsi_use_multipath’ has been renamed to ‘volume_use_multipath’.
  • The ‘wsgi_default_pool_size’ and ‘wsgi_keep_alive’ options have been renamed to ‘default_pool_size’ and ‘keep_alive’ respectively.
  • The following deprecated configuration options have been removed from the neutron section of nova.conf:

    • ca_certificates_file
    • api_insecure
    • url_timeout
  • The ability to load a custom scheduler host manager via the scheduler_host_manager configuration option was deprecated in the 13.0.0 Mitaka release and is now removed in the 14.0.0 Newton release.
  • DB2 database support was removed from tree. This is a non open source database that had no 3rd party CI, and a set of constraints that meant we had to keep special casing it in code. It also made the online data migrations needed for cells v2 and placement engine much more difficult. With 0% of OpenStack survey users reporting usage we decided it was time to remove this to focus on features needed by the larger community.
  • The deprecated glance.port and glance.protocol configuration options have been deleted. glance.api_servers must be set to have a working config; there is currently no default for this option, so a value must be set.
  • Only virt drivers in the nova.virt namespace may be loaded. This has been the case according to nova docs for several releases, but a quirk in some library code meant that loading things outside the namespace continued to work unintentionally. That has been fixed, which means a fully qualified path such as "compute_driver = nova.virt.foo" is invalid (and now enforced as such), and should be "compute_driver = foo" instead.
  • A new use_neutron option is introduced for configuring Nova to use Neutron for its network backend. This replaces the deprecated network_api_class and security_group_api options, which are confusing to use and error prone. The default for this new value is False to match existing defaults, however if network_api_class and/or security_group_api are set to known Neutron values, Neutron networking will still be used as before. Installations are encouraged to set this config option soon after upgrade as the legacy options will be removed by the Newton release.
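    The explicit switch is preferable to the deprecated class-path options; a minimal fragment:

```ini
[DEFAULT]
# Prefer this explicit switch over the deprecated
# network_api_class / security_group_api options.
use_neutron = True
```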
  • The default policy for updating volume attachments, commonly referred to as swap volume, has been changed from rule:admin_or_owner to rule:admin_api. This is because it is called from the volume service when migrating volumes, which is an admin-only operation by default, and requires calling an admin-only API in the volume service upon completion. So by default it would not work for non-admins.
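    For deployments carrying a custom policy file, the changed default corresponds to an entry like the following. The exact rule name here is an assumption based on Nova's os_compute_api naming convention; check your generated sample policy for the authoritative key:

```json
{
    "os_compute_api:os-volumes-attachments:update": "rule:admin_api"
}
```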
  • The deprecated osapi_v21.enabled config option has been removed. This previously allowed you a way to disable the v2.1 API. That is no longer something we support, v2.1 is mandatory.
  • The VMwareVCDriver now sets disk.EnableUUID=True by default in all guest VM configuration files, to enable udev to generate /dev/disk/by-id entries.

Deprecation Notes

  • All barbican config options in Nova are now deprecated and may be removed as early as the 15.0.0 release. All of these options have moved to the Castellan library.
  • The cells.driver configuration option is now deprecated and will be removed in the Ocata cycle.
  • The feature to download Glance images via file transfer instead of HTTP is now deprecated and may be removed as early as the 15.0.0 release. The config options filesystems in the section image_file_url are affected as well as the derived sections image_file_url:<list entry name> and their config options id and mountpoint.
  • As mentioned in the release notes of the Mitaka release (version 13.0.0), the EC2API support was fully removed. The s3 image service related config options were still there but weren’t used anywhere in the code since Mitaka. These are now deprecated and may be removed as early as the 15.0.0 release. This affects image_decryption_dir, s3_host, s3_port, s3_access_key, s3_secret_key, s3_use_ssl, s3_affix_tenant.
  • The default_flavor config option is now deprecated and may be removed as early as the 15.0.0 release. It is an option which was only relevant for the deprecated EC2 API and is not used in the Nova API.
  • The fatal_exception_format_errors config option is now deprecated and may be removed as early as the 15.0.0 release. It is an option which was only relevant for Nova internal testing purposes to ensure that errors in formatted exception messages got detected.
  • The image_info_filename_pattern, checksum_base_images, and checksum_interval_seconds options have been deprecated in the [libvirt] config section. They are no longer used. Any value given will be ignored.
  • The following nova-manage commands are deprecated for removal in the Nova 15.0.0 Ocata release:

    • nova-manage account scrub
    • nova-manage fixed *
    • nova-manage floating *
    • nova-manage network *
    • nova-manage project scrub
    • nova-manage vpn *

    These commands only work with nova-network which is itself deprecated in favor of Neutron.

  • The nova-manage vm list command is deprecated and will be removed in the 15.0.0 Ocata release. Use the nova list command from python-novaclient instead.
  • The auth parameters admin_username, admin_password, admin_tenant_name and admin_url of the [ironic] config option group are now deprecated and will be removed in a future release. Using these parameters will log a warning. Please use username, password, project_id (or project_name) and auth_url instead. If you are using Keystone v3 API, please note that the name uniqueness for project and user only holds inside the same hierarchy level, so you must also specify domain information for user (i.e. user_domain_id or user_domain_name) and for project, if you are using project_name (i.e. project_domain_id or project_domain_name).
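    Put together, a new-style [ironic] auth section looks roughly like the following (all values are illustrative):

```ini
[ironic]
# Replacements for the deprecated admin_username, admin_password,
# admin_tenant_name and admin_url parameters. Values are examples.
username = ironic
password = SECRET
project_name = service
auth_url = http://keystone.example.com:5000/v3
# With Keystone v3, names are only unique within a domain, so the
# domain must also be specified:
user_domain_name = Default
project_domain_name = Default
```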
  • The config option snapshot_name_template in the DEFAULT group is now deprecated and may be removed as early as the 15.0.0 release. The code which used this option isn’t used anymore since late 2012.
  • Deprecate compute_stats_class config option. This allowed loading an alternate implementation for collecting statistics for the local compute host. Deployments that felt the need to use this facility are encouraged to propose additions upstream so we can create a stable and supported interface here.
  • The nova-all binary is deprecated. This was an all in one binary for nova services used for testing in the early days of OpenStack, but was never intended for real use.
  • Nova network is now deprecated. Based on the results of the current OpenStack User Survey, less than 10% of our users remain on Nova network. This is the signal that it is time to migrate to Neutron. No new features will be added to Nova network, and bugs will only be fixed on a case by case basis.
  • The /os-certificates API is deprecated, as well as the nova-cert service which powers it. The related config option cert_topic is also now marked for deprecation and may be removed as early as the 15.0.0 Ocata release. This is a vestigial part of the Nova API that existed only for EC2 support, which is now maintained out of tree. It does not interact with the rest of Nova and should not be used as a generic certificates-as-a-service endpoint, which is all it is currently good for.
  • Deprecate the vendordata_driver config option. This allowed creating a different class loader for defining vendordata metadata. The default driver loads from a json file that can be arbitrarily specified, so is still quite flexible. Deployments that felt the need to use this facility are encouraged to propose additions upstream so we can create a stable and supported interface here.
  • All the APIs which proxy to other services are deprecated as of this API version. These APIs will return 404 on microversion 2.36 or higher. API users should use the native APIs of those services instead of these pure proxies. The quotas and limits related to the network resources ‘fixed_ips’, ‘floating_ips’, ‘security_groups’, ‘security_group_rules’, and ‘networks’ are filtered out of the os-quotas and limits APIs respectively; those quotas should be managed through the OpenStack network service. nova-network users can only use these APIs and manage the related quotas under microversion 2.36. The ‘os-fping’ API is deprecated as well; it is only related to nova-network and depends on the deployment. The deprecated APIs are as below:
    • /images
    • /os-networks
    • /os-fixed-ips
    • /os-floating-ips
    • /os-floating-ips-bulk
    • /os-floating-ip-pools
    • /os-floating-ip-dns
    • /os-security-groups
    • /os-security-group-rules
    • /os-security-group-default-rules
    • /os-volumes
    • /os-snapshots
    • /os-baremetal-nodes
    • /os-fping
  • Nova option ‘use_usb_tablet’ will be deprecated in favor of the global ‘pointer_model’.
  • The quota_driver configuration option is now deprecated and will be removed in a subsequent release.
  • Deprecate volume_api_class and network_api_class config options. We only have one sensible backend for either of these. These options will be removed and turned into constants in Newton.
  • The memcached_servers option was deprecated in Mitaka. Operators should use the oslo.cache configuration instead. Specifically, the enabled option in the [cache] section should be set to True, the backend option to oslo_cache.memcache_pool, and the location(s) of the memcached servers should be set in the [cache]/memcache_servers option.
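    The three settings described above fit together like this (server addresses are illustrative):

```ini
[cache]
enabled = True
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.10:11211,192.0.2.11:11211
```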

Security Issues

  • The qemu-img tool now has resource limits applied which prevent it from using more than 1GB of address space or more than 2 seconds of CPU time. This provides protection against denial of service attacks from maliciously crafted or corrupted disk images.

Bug Fixes

  • Corrected response for the case where an invalid status value is passed as a filter to the list servers API call. As there are sufficient statuses defined already, any invalid status should not be accepted. As of microversion 2.38, the API will return 400 HTTPBadRequest if an invalid status is passed to list servers API for both admin as well as non admin user.
  • Fixed bug #1579706: “Listing nova instances with invalid status raises 500 InternalServerError for admin user”. Now passing an invalid status as a filter will return an empty list. A subsequent patch will then correct this to raise a 400 Bad Request when an invalid status is received.
  • When instantiating an instance based on an image with the metadata hw_vif_multiqueue_enabled=true, if flavor.vcpus is less than the kernel’s limit on the number of queues on a tap interface, Nova uses flavor.vcpus as the number of queues. Otherwise, Nova uses the limit. The limits are as follows:

    • kernels prior to 3.0: 1
    • kernels 3.x: 8
    • kernels 4.x: 256
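    The queue-count selection described above can be sketched as a small helper (a hypothetical function for illustration, not Nova's actual code):

```python
# Per-tap queue limits keyed by kernel major version, as listed above:
# kernels prior to 3.0 -> 1, 3.x -> 8, 4.x -> 256.
_TAP_QUEUE_LIMITS = {2: 1, 3: 8, 4: 256}

def tap_queue_count(vcpus, kernel_major):
    """Number of queues for a multiqueue tap interface.

    Nova uses flavor.vcpus unless it exceeds the kernel's per-tap
    limit, in which case the limit is used instead.
    """
    limit = _TAP_QUEUE_LIMITS.get(kernel_major, 256)
    return min(vcpus, limit)
```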
  • To make live-migration consistent with resize, confirm-resize and revert-resize operations, the migration status is changed to ‘error’ instead of ‘failed’ in case of live-migration failure. With this change the periodic task ‘_cleanup_incomplete_migrations’ is now able to remove orphaned instance files from compute nodes in case of live-migration failures. There is no impact since migration status ‘error’ and ‘failed’ refer to the same failed state.
  • When plugging virtual interfaces of type vhost-user, the MTU value will not be applied to the interface by Nova. vhost-user ports exist only in userspace and are not backed by kernel netdevs; for this reason it is not possible to set the MTU on a vhost-user interface using standard tools such as ifconfig or ip link.

Other Notes

  • The API policy defaults are now defined in code like configuration options. Because of this, the sample policy.json file that is shipped with Nova is empty and should only be necessary if you want to override the API policy from the defaults in the code. To generate the policy file you can run:

    oslopolicy-sample-generator --config-file=etc/nova/nova-policy-generator.conf
  • network_allocate_retries config param now allows only positive integer values or 0.
  • The default API policy shipped with Nova contained many policies set to “”(allow all) which was not the proper default for many of those checks. It was also a source of confusion as some people thought “” meant to use the default rule. These empty policies have been updated to be explicit in all cases. Many of them were changed to match the default rule of “admin_or_owner” which is a more restrictive policy check but does not change the restrictiveness of the API calls overall because there are similar checks in the database already. This does not affect any existing deployment, just the default policy used by new deployments.
  • The api_rate_limit configuration option has been removed. The option was disabled by default back in the Havana release since it’s effectively broken for more than one API worker. It has been removed because the legacy v2 API code that was using it has also been removed.
  • The default flavors that nova has previously had are no longer created as part of the first database migration. New deployments will need to create appropriate flavors before first use.
  • The network configuration option ‘fake_call’ has been removed. It hasn’t been used for several cycles, and has no effect on any code, so there should be no impact.
  • The XenServer configuration option ‘iqn_prefix’ has been removed. It was not used anywhere and has no effect on any code, so there should be no impact.
  • Virt drivers are no longer loaded with the import_object_ns function, which means that only virt drivers in the nova.virt namespace can be loaded.
  • New configuration option sync_power_state_pool_size has been added to set the number of greenthreads available for use to sync power states. Default value (1000) matches the previous implicit default value provided by Greenpool. This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons.
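    For example, to reduce concurrent hypervisor queries during power-state sync (the value shown is illustrative):

```ini
[DEFAULT]
# Default is 1000, matching the previous implicit GreenPool default.
sync_power_state_pool_size = 100
```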