Mitaka Series Release Notes


Known Issues

  • In kernels < 3.19, net.ipv4.ip_nonlocal_bind was not a per-namespace kernel option. L3 HA sets this option to zero to avoid sending gratuitous ARPs for IP addresses that were removed while processing. On those older kernels the option cannot be set per namespace, so gratuitous ARPs may still be sent, which might populate the ARP caches of peer machines with the wrong MAC address.

Upgrade Notes

  • The server notifies L3 HA agents when an HA router interface port status becomes active, and the L3 HA agents then spawn the keepalived process. Consequently, the server has to be restarted before the L3 agents during an upgrade.

Bug Fixes

  • Versions of keepalived < 1.2.20 don’t send gratuitous ARPs when the keepalived process receives a SIGHUP signal. These versions are packaged in some Linux distributions such as RHEL, CentOS, and Ubuntu Xenial. Not sending gratuitous ARPs may lead to peer ARP caches containing wrong information about floating IP addresses until the entry is invalidated. Neutron now sends gratuitous ARPs for all new IP addresses that appear on non-HA interfaces in the router namespace, which simulates the behavior of newer versions of keepalived.


The Neutron server no longer needs to be configured with a firewall driver and it can support mixed environments of hybrid iptables firewalls and the pure OVS firewall.

By default, the QoS driver for the Open vSwitch and Linuxbridge agents calculates the burst value as 80% of the available bandwidth.

New Features

  • The Neutron server now learns the appropriate firewall wiring behavior from each OVS agent so it no longer needs to be configured with the firewall_driver. This means it also supports multiple agents with different types of firewalls.
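The mixed-firewall setup described above is configured per agent rather than on the server. A minimal sketch, assuming the usual Mitaka agent configuration file locations (paths and the exact file name vary by distribution):

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini on a node using the
# pure OVS firewall (illustrative path)
[securitygroup]
firewall_driver = openvswitch

# On another node that still uses the hybrid iptables firewall,
# the same file would instead contain:
# [securitygroup]
# firewall_driver = iptables_hybrid
```

The server learns the wiring behavior from each agent, so no server-side firewall_driver setting is required.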

Upgrade Notes

  • A new option ha_keepalived_state_change_server_threads has been added to configure the number of concurrent threads spawned for keepalived server connection requests. Higher values increase the CPU load on the agent nodes. The default value is half of the number of CPUs present on the node. This allows operators to tune the number of threads to suit their environment. With more threads, simultaneous requests for multiple HA routers state change can be handled faster.
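As a sketch of the tuning described above (the file name and the value 4 are illustrative; the option lives in the L3 agent configuration):

```ini
# l3_agent.ini (illustrative)
[DEFAULT]
# Default is half the number of CPUs on the node; raise for faster
# handling of simultaneous HA router state changes, at the cost of
# higher CPU load on the agent node.
ha_keepalived_state_change_server_threads = 4
```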

Bug Fixes

  • Fixes bug 1572670
  • Allow the SR-IOV agent to run with zero VFs.


Add options to the designate external DNS driver of Neutron for SSL-based connections. This makes it possible to use Neutron with designate in scenarios where endpoints are SSL-based. Users can choose to skip certificate validation or specify the path to a valid certificate in the [designate] section of the neutron.conf file.

Support for IPv6 addresses as tunnel endpoints in OVS.

New Features

  • Two new options are added to the [designate] section to support SSL.
  • The first option, insecure, allows skipping SSL certificate validation when creating a keystone session to initiate a designate client. The default value is False, which means the connection is always verified.
  • The second option, ca_cert, sets the path to a valid certificate file. The default is None.
  • The local_ip value in ml2_conf.ini can now be set to an IPv6 address configured on the system.
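A minimal sketch combining the options above (the certificate path and IPv6 address are illustrative placeholders):

```ini
# neutron.conf (illustrative)
[DEFAULT]
external_dns_driver = designate

[designate]
# Either skip SSL certificate validation (default: False) ...
insecure = False
# ... or point at a valid certificate file (default: None):
ca_cert = /etc/ssl/certs/designate-ca.pem

# ml2_conf.ini (illustrative): IPv6 tunnel endpoint; the address must
# already be configured on the system
[ovs]
local_ip = 2001:db8::1
```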

Other Notes

  • Requires OVS 2.5 or higher with Linux kernel 4.3 or higher. More info at the OVS GitHub page.


Support configuring the greenthread pool for WSGI.

Several NICs per physical network can be used with SR-IOV.

Bug Fixes

  • The ‘physical_device_mappings’ of the sriov_nic configuration can now accept more than one NIC per physical network. For example, if ‘physnet2’ is connected to enp1s0f0 and enp1s0f1, ‘physnet2:enp1s0f0,physnet2:enp1s0f1’ will be a valid option.
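The example from the note above, written out as a configuration fragment (the file name is the conventional SR-IOV agent config; adjust to your deployment):

```ini
# sriov_agent.ini (illustrative)
[sriov_nic]
# Two NICs mapped to the same physical network
physical_device_mappings = physnet2:enp1s0f0,physnet2:enp1s0f1
```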

Other Notes

  • Operators may want to tune the max_overflow and wsgi_default_pool_size configuration options according to the investigations outlined in this mailing list post. The default value of wsgi_default_pool_size inherits from that of oslo.config, which is currently 100. This is a change in default from the previous Neutron-specific value of 1000.
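A sketch of the tuning described above (the max_overflow value is an illustrative assumption; choose values based on the investigation referenced in the note):

```ini
# neutron.conf (illustrative)
[DEFAULT]
# Inherited oslo.config default is 100; the previous Neutron-specific
# default was 1000
wsgi_default_pool_size = 100

[database]
# Tune together with the pool size to avoid exhausting DB connections
max_overflow = 50
```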


The ML2 plug-in supports calculating the MTU for instances using overlay networks by subtracting the overlay protocol overhead from the value of ‘path_mtu’, ideally the physical (underlying) network MTU, and providing the smaller value to instances via DHCP. Prior to Mitaka, ‘path_mtu’ defaults to 0 which disables this feature. In Mitaka, ‘path_mtu’ defaults to 1500, a typical MTU for physical networks, to improve the “out of box” experience for typical deployments.

The ML2 plug-in supports calculating the MTU for networks that are realized as flat or VLAN networks, by consulting the ‘segment_mtu’ option. Prior to Mitaka, ‘segment_mtu’ defaults to 0 which disables this feature. This creates slightly confusing API results when querying Neutron networks, since the plugins that support the MTU API extension would return networks with the MTU equal to zero. Networks with an MTU of zero make little sense, since nothing could ever be transmitted. In Mitaka, ‘segment_mtu’ now defaults to 1500 which is the standard MTU for Ethernet networks in order to improve the “out of box” experience for typical deployments.

The LinuxBridge agent now supports QoS bandwidth limiting.

External networks can now be controlled using the RBAC framework that was added in Liberty. This allows networks to be made available to specific tenants (as opposed to all tenants) to be used as an external gateway for routers and floating IPs.

DHCP and L3 Agent scheduling is availability zone aware.

The “get-me-a-network” feature simplifies the process for launching an instance with basic network connectivity (via an externally connected private tenant network).

Support integration with external DNS service.

Add popular IP protocols to the security group code. End-users can specify protocol names instead of protocol numbers in both RESTful API and python-neutronclient CLI.

ML2: ports can now recover from binding failed state.

RBAC support for QoS policies

Add description field to security group rules, networks, ports, routers, floating IPs, and subnet pools.

Add tag mechanism for network resources

Timestamp fields are now added to neutron core resources.

Announcement of tenant prefixes and host routes for floating IPs via BGP is supported.

Allowed address pairs can now be cleared by passing None in addition to an empty list. This makes it possible to use the --action=clear option with the neutron client: neutron port-update <uuid> --allowed-address-pairs action=clear

Core configuration files are automatically generated.

max_fixed_ips_per_port has been deprecated and will be removed in the Newton or Ocata cycle, depending on when all identified use cases of the option are satisfied via another quota system.

OFAgent is decomposed and deprecated in the Mitaka cycle.

Add new VNIC type for SR-IOV physical functions.

High Availability (HA) of SNAT service is supported for Distributed Virtual Routers (DVRs).

An OVS agent configured to run in DVR mode will fail to start if it cannot get proper DVR configuration values from the server on start-up. The agent will no longer fallback to non-DVR mode, since it may lead to inconsistency in the DVR-enabled cluster as the Neutron server does not distinguish between DVR and non-DVR OVS agents.

Improve DVR’s resiliency during Nova VM live migration events.

The Linuxbridge agent now supports l2 agent extensions.

Added a MacVtap ML2 driver and L2 agent as a new vswitch choice.

Support for MTU selection and advertisement.

Neutron now provides network IP availability information.

Neutron is integrated with Guru Meditation Reports library.

oslo.messaging.notify.drivers entry points are deprecated

New Features

  • In Mitaka, the combination of ‘path_mtu’ defaulting to 1500 and ‘advertise_mtu’ defaulting to True provides a value of MTU accounting for any overlay protocol overhead on the network to instances using DHCP. For example, an instance attaching to a VXLAN network receives a 1450 MTU from DHCP accounting for 50 bytes of overhead from the VXLAN overlay protocol if using IPv4 endpoints.
  • In Mitaka, queries to the Networking API for network objects will now return network objects that contain a sane MTU value.
  • The LinuxBridge agent can now configure basic bandwidth-limiting QoS rules for ports and networks. Two new config options are introduced for the LinuxBridge agent. The first is ‘kernel_hz’, the value of the host kernel HZ setting, which is necessary for properly calculating the minimum burst value in the tbf qdisc setting. The second is ‘tbf_latency’, the latency to be configured in the tc-tbf setting. Details about this option can be found in the tc-tbf manual.
  • External networks can now be controlled using the RBAC framework that was added in Liberty. This allows networks to be made available to specific tenants (as opposed to all tenants) to be used as an external gateway for routers and floating IPs. By default this feature will also allow regular tenants to make their networks available as external networks to other individual tenants (or even themselves), but they are prevented from using the wildcard to share to all tenants. This behavior can be adjusted via policy.json by the operator if desired.
  • A DHCP agent is assigned to an availability zone; the network will be hosted by the DHCP agent with availability zone specified by the user.
  • An L3 agent is assigned to an availability zone; the router will be hosted by the L3 agent with availability zone specified by the user. This supports the use of availability zones with HA routers. DVR isn’t supported now because L3HA and DVR integration isn’t finished.
  • Once Nova takes advantage of this feature, a user can launch an instance without explicitly provisioning network resources.
  • Floating IPs can have dns_name and dns_domain attributes associated with them
  • Ports can have a dns_name attribute associated with them. The network where a port is created can have a dns_domain associated with it
  • Floating IPs and ports will be published in an external DNS service if they have dns_name and dns_domain attributes associated with them.
  • The reference driver integrates neutron with designate
  • Drivers for other DNSaaS can be implemented
  • Driver is configured in the default section of neutron.conf using parameter ‘external_dns_driver’
  • Ports that failed to bind when an L2 agent was offline can now recover after the agent is back online.
  • Neutron now supports sharing of QoS policies between a subset of tenants.
  • Security group rules, networks, ports, routers, floating IPs, and subnet pools may now contain an optional description which allows users to easily store details about entities.
  • Users can set tags on their network resources.
  • Networks can be filtered by tags. The supported filters are ‘tags’, ‘tags-any’, ‘not-tags’ and ‘not-tags-any’.
  • Add timestamp fields ‘created_at’, ‘updated_at’ into neutron core resources like network, subnet, port and subnetpool.
  • These resources also support querying by ‘changed-since’, which returns the resources changed after a specific time string such as YYYY-MM-DDTHH:MM:SS.
  • By default, the DHCP agent provides a network MTU value to instances using the corresponding DHCP option if core plugin calculates the value. For ML2 plugin, calculation mechanism is enabled by setting [ml2] path_mtu option to a value greater than zero.
  • Allow non-admin users to define “external” extra-routes.
  • Announcement of tenant subnets via BGP using centralized Neutron router gateway port as the next-hop
  • Announcement of floating IP host routes via BGP using the centralized Neutron router gateway port as the next-hop
  • Announcement of floating IP host routes via BGP using the floating IP agent gateway as the next-hop when the floating IP is associated through a distributed router
  • Neutron no longer includes static example configuration files. Instead, use tools/ to generate them. The files are generated with a .sample extension.
  • Add derived attributes to the network to tell users which address scopes the network is in.
  • The subnet API now includes a new use_default_subnetpool attribute. This attribute can be specified on creating a subnet in lieu of a subnetpool_id. The two are mutually exclusive. If it is specified as True, the default subnet pool for the requested ip_version will be looked up and used. If no default exists, an error will be returned.
  • Neutron now supports creation of ports for exposing physical functions as network devices to guests.
  • High Availability support for SNAT services on Distributed Virtual Routers. Routers can now be created with the flags distributed=True and ha=True. The created routers will provide Distributed Virtual Routing as well as SNAT high availability on the l3 agents configured for dvr_snat mode.
  • Use the value of the network ‘mtu’ attribute for the MTU of virtual network interfaces such as veth pairs, patch ports, and tap devices involving a particular network.
  • Enable end-to-end support for arbitrary MTUs including jumbo frames between instances and provider networks by moving MTU disparities between flat or VLAN networks and overlay networks from layer-2 devices to layer-3 devices that support path MTU discovery (PMTUD).
  • The Linuxbridge agent can now be extended by 3rd parties using a pluggable mechanism.
  • Libvirt qemu/kvm instances can now be attached via MacVtap in bridge mode to a network. VLAN and FLAT attachments are supported. Attachments other than compute are not supported.
  • When advertise_mtu is set in the config, Neutron supports advertising the LinkMTU using Router Advertisements.
  • A new API endpoint /v2.0/network-ip-availabilities that allows an admin to quickly get counts of used_ips and total_ips for network(s) is available. New endpoint allows filtering by network_id, network_name, tenant_id, and ip_version. Response returns network and nested subnet data that includes used and total IPs.
  • SriovNicSwitchMechanismDriver driver now exposes a new VIF type ‘hostdev_physical’ for ports with vnic type ‘direct-physical’ (used for SR-IOV PF passthrough). This will enable Nova to provision PFs as Neutron ports.
  • The RPC and notification queues have been separated into different queues. Specify the transport_url to be used for notifications within the [oslo_messaging_notifications] section of the configuration file. If no transport_url is specified in [oslo_messaging_notifications], the transport_url used for RPC will be used.
  • Neutron services now respond to the SIGUSR2 signal by dumping valuable debug information to standard error output.
  • A new security group firewall driver has been introduced. It is based on OpenFlow and uses connection tracking.
  • Neutron can interact with keystone v3.
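Several of the MTU-related features above are driven by configuration. A minimal sketch using the Mitaka defaults described in these notes (file names are the conventional locations; adjust to your deployment):

```ini
# neutron.conf (illustrative)
[DEFAULT]
# Advertise the calculated MTU to instances via DHCP and RAs
advertise_mtu = True

# ml2_conf.ini (illustrative)
[ml2]
# MTU of the underlying physical network; overlay protocol overhead
# (e.g. 50 bytes for VXLAN over IPv4) is subtracted before the value
# is provided to instances
path_mtu = 1500
```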

Known Issues

  • The combination of ‘path_mtu’ and ‘advertise_mtu’ only adjusts the MTU for instances rather than all virtual network components between instances and provider/public networks. In particular, setting ‘path_mtu’ to a value greater than 1500 can cause packet loss even if the physical network supports it. Also, the calculation does not consider additional overhead from IPv6 endpoints.
  • When using DVR, if a floating IP is associated to a fixed IP direct access to the fixed IP is not possible when traffic is sent from outside of a Neutron tenant network (north-south traffic). Traffic sent between tenant networks (east-west traffic) is not affected. When using a distributed router, the floating IP will mask the fixed IP making it inaccessible, even though the tenant subnet is being announced as accessible through the centralized SNAT router. In such a case, traffic sent to the instance should be directed to the floating IP. This is a limitation of the Neutron L3 agent when using DVR and will be addressed in a future release.
  • Only creation of dvr/ha routers is currently supported. Upgrade from other types of routers to dvr/ha router is not supported on this release.
  • More synchronization between Nova and Neutron is needed to properly handle live migration failures on either side. For instance, if live migration is reverted or canceled, some dangling Neutron resources may be left on the destination host.
  • To ensure any kind of migration works between all compute nodes, make sure that the same physical_interface_mappings is configured on each MacVtap compute node. Having different mappings could cause live migration to fail (if the configured physical network interface does not exist on the target host), or even worse, result in an instance placed on the wrong physical network (if the physical network interface exists on the target host, but is used by another physical network or not used at all by OpenStack). Such an instance does not have access to its configured networks anymore. It then has layer 2 connectivity to either another OpenStack network, or one of the hosts networks.
  • The OVS firewall driver doesn’t work well with other features that use OpenFlow.

Upgrade Notes

  • Operators using the ML2 plug-in with ‘path_mtu’ defaulting to 0 may need to perform a database migration to update the MTU for existing networks and possibly disable existing workarounds for MTU problems such as increasing the physical network MTU to 1550.
  • Operators using the ML2 plug-in with existing data may need to perform a database migration to update the MTU for existing networks
  • Add popular IP protocols to security group code.
  • To disable, use [DEFAULT] advertise_mtu = False.
  • The router_id option is deprecated and will be removed in the ‘N’ cycle.
  • Does not change MTU for existing virtual network interfaces.
  • Actions that create virtual network interfaces on an existing network with the ‘mtu’ attribute containing a value greater than zero could cause issues for network traffic traversing existing and new virtual network interfaces.
  • The Hyper-V Neutron Agent has been fully decomposed from Neutron. The neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver firewall driver has been deprecated and will be removed in the ‘O’ cycle. Update the neutron_hyperv_agent.conf files on the Hyper-V nodes to use hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver, which is the networking_hyperv security groups driver.
  • When using ML2 and the Linux Bridge agent, the default value for the ARP Responder under L2Population has changed. The responder is now disabled to improve compatibility with the allowed-address-pair extension and to match the default behavior of the ML2 OVS agent. The logical network will now utilize traditional flood and learn through the overlay. When upgrading, existing vxlan devices will retain their old setup and be unimpacted by changes to this flag. To apply this to older devices created with the Liberty agent, the vxlan device must be removed and then the Mitaka agent restarted. The agent will recreate the vxlan devices with the current settings upon restart. To maintain pre-Mitaka behavior, enable the arp_responder in the Linux Bridge agent VXLAN config file prior to starting the updated agent.
  • Neutron depends on keystoneauth instead of keystoneclient.
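To maintain the pre-Mitaka ARP responder behavior described in the Linux Bridge upgrade note above, the responder can be re-enabled in the agent's VXLAN configuration before starting the updated agent (file name is the conventional location; existing vxlan devices must still be removed and the agent restarted for the setting to apply to them):

```ini
# linuxbridge_agent.ini (illustrative)
[vxlan]
# Mitaka default is False; set True to restore the Liberty behavior
arp_responder = True
```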

Deprecation Notes

  • The default_subnet_pools option is now deprecated and will be removed in the Newton release. The same functionality is now provided by setting is_default attribute on subnetpools to True using the API or client.
  • The ‘force_gateway_on_subnet’ option is deprecated and will be removed in the ‘Newton’ cycle.
  • The ‘network_device_mtu’ option is deprecated and will be removed in the ‘Newton’ cycle. Please use the system-wide segment_mtu setting which the agents will take into account when wiring VIFs.
  • max_fixed_ips_per_port has been deprecated and will be removed in the Newton or Ocata cycle depending on when all identified usecases of the options are satisfied via another quota system. If you depend on this configuration option to stop tenants from consuming IP addresses, please leave a comment on the bug report.
  • The ‘segment_mtu’ option of the ML2 configuration has been deprecated and replaced with the ‘global_physnet_mtu’ option in the main Neutron configuration. This option is meant to be used by all plugins for an operator to reference their physical network’s MTU, regardless of the backend plugin. Plugins should access this config option via the ‘get_deployment_physnet_mtu’ method added to neutron.plugins.common.utils to avoid being broken on any potential renames in the future.
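The segment_mtu replacement described above moves the setting into the main Neutron configuration; a sketch:

```ini
# neutron.conf (illustrative)
[DEFAULT]
# Replaces the deprecated [ml2] segment_mtu; references the physical
# network MTU regardless of the backend plugin
global_physnet_mtu = 1500
```

Plugins should read this value via the ‘get_deployment_physnet_mtu’ method in neutron.plugins.common.utils rather than accessing the option directly.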

Bug Fixes

  • Prior to Mitaka, the settings that control the frequency of router advertisements transmitted by the radvd daemon were not able to be adjusted. Larger deployments may wish to decrease the frequency in which radvd sends multicast traffic. The ‘min_rtr_adv_interval’ and ‘max_rtr_adv_interval’ settings in the L3 agent configuration file map directly to the ‘MinRtrAdvInterval’ and ‘MaxRtrAdvInterval’ in the generated radvd.conf file. Consult the manpage for radvd.conf for more detailed information.
  • Fixes bug 1537734
  • Prior to Mitaka, name resolution in instances requires specifying DNS resolvers via the ‘dnsmasq_dns_servers’ option in the DHCP agent configuration file or via neutron subnet options. In this case, the data plane must provide connectivity between instances and upstream DNS resolvers. Omitting both of these methods causes the dnsmasq service to offer the IP address on which it resides to instances for name resolution. However, the static dnsmasq ‘–no-resolv’ process argument prevents name resolution via dnsmasq, leaving instances without name resolution. Mitaka introduces the ‘dnsmasq_local_resolv’ option, default value False to preserve backward-compatibility, that enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. In this case, the data plane must provide connectivity between the host and upstream DNS resolvers rather than between the instances and upstream DNS resolvers. Specifying DNS resolvers via the ‘dnsmasq_dns_servers’ option in the DHCP agent configuration overrides the ‘dnsmasq_local_resolv’ option for all subnets using the DHCP agent.
  • Before Mitaka, when a default subnetpool was defined in the configuration, a request to create a subnet would fall back to using it if no specific subnet pool was specified. This behavior broke the semantics of subnet create calls in this scenario and is now considered an API bug. This bug has been fixed so that there is no automatic fallback with the presence of a default subnet pool. Workflows which depended on this new behavior will have to be modified to set the new use_default_subnetpool attribute when creating a subnet.
  • Create DVR router namespaces pro-actively on the destination node during live migration events. This helps minimize packet loss to floating IP traffic.
  • Explicitly configure MTU of virtual network interfaces rather than using default values or incorrect values that do not account for overlay protocol overhead.
  • The server will fail to start if any of the declared required extensions, as needed by core and service plugins, are not properly configured.
  • Partially closes bug 1468803.
  • The Linuxbridge agent now supports the ability to toggle the local ARP responder when L2Population is enabled. This ensures compatibility with the allowed-address-pairs extension. closes bug 1445089
  • Fixed the SR-IOV agent macvtap assigned VF check on Linux kernels < 3.13.
  • Loaded agent extensions of SR-IOV agent are now shown in agent state API.
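The name-resolution and radvd behaviors described above are controlled by agent configuration; a sketch with illustrative values (file names are the conventional agent config locations):

```ini
# dhcp_agent.ini (illustrative)
[DEFAULT]
# Default is False for backward compatibility; True lets dnsmasq
# forward instance queries to the resolvers on the agent host.
# Any dnsmasq_dns_servers setting overrides this option.
dnsmasq_local_resolv = True

# l3_agent.ini (illustrative): reduce radvd multicast frequency in
# larger deployments; values map to MinRtrAdvInterval and
# MaxRtrAdvInterval in the generated radvd.conf
[DEFAULT]
min_rtr_adv_interval = 60
max_rtr_adv_interval = 180
```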

Other Notes

  • For overlay networks managed by ML2 core plugin, the calculation algorithm subtracts the overlay protocol overhead from the value of [ml2] path_mtu. The DHCP agent provides the resulting (smaller) MTU to instances using overlay networks.
  • The [DEFAULT] advertise_mtu option must contain a consistent value on all hosts running the DHCP agent.
  • Typical networks can use [ml2] path_mtu = 1500.
  • The OpenFlow Agent (OFAgent) mechanism driver was completely decomposed from the Neutron tree in Mitaka. The OFAgent driver and its agent are also deprecated in Mitaka in favor of the Open vSwitch mechanism driver with the “native” of_interface, and will be removed in the next release.
  • The OVS firewall driver requires OVS 2.5 or higher with Linux kernel 4.3 or higher. More info at the OVS GitHub page.
  • The oslo.messaging.notify.drivers entry points that were left in tree for backward compatibility with Icehouse are deprecated and will be removed after liberty-eol. Configure notifications using the oslo_messaging configuration options in neutron.conf.