Ocata Series Release Notes

10.0.7-88

New Features

  • A new config option, bridge_mac_table_size, has been added for the Neutron OVS agent. This value is set on every Open vSwitch bridge managed by the openvswitch-neutron-agent in the other_config:mac-table-size column in ovsdb. The default value for this new option is 50000, which should be enough for most systems. More details about this option can be found in the Open vSwitch documentation. For more information see bug 1775797.
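
    A minimal sketch of setting the option, assuming it lives in the [ovs] section of openvswitch_agent.ini:

        [ovs]
        bridge_mac_table_size = 50000

    The applied value can then be verified on a bridge, e.g.:

        ovs-vsctl get Bridge br-int other_config:mac-table-size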

Critical Issues

  • The neutron-openvswitch-agent can sometimes spend too much time handling a large number of ports, exceeding its timeout value, agent_boot_time, for L2 population. As a result, some flow update operations are not triggered, and flows are lost during agent restart, especially host-to-host vxlan tunnel flows, because the original tunnel flows are treated as stale due to their different cookie IDs. The agent's first RPC loop will also run a stale flow clean-up procedure and delete them, leading to a loss of connectivity. Please ensure that all neutron-server and neutron-openvswitch-agent binaries are upgraded for the changes to take effect, after which the L2 population agent_boot_time config option will no longer be used.
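
    As an illustration (not part of the original note), the cookie IDs of tunnel flows can be inspected by dumping the flows on the tunnel bridge, br-tun by default:

        ovs-ofctl dump-flows br-tun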

Bug Fixes

  • The neutron-openvswitch-agent was changed to notify the neutron-server in its first RPC loop that it has restarted. This signals neutron-server to provide updated L2 population information to correctly program FDB entries, ensuring connectivity to instances is not interrupted. This fixes the following bugs: 1794991, 1799178, 1813703, 1813714, 1813715.

Other Notes

  • In order to improve the restart success rate of OVS agents under heavy load, instead of a retry or fullsync, the native driver's of_connect_timeout and of_request_timeout are now set to 300s. These values have no side effects for agents under regular load.
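
    A minimal sketch of the equivalent explicit settings, assuming they live in the [ovs] section of openvswitch_agent.ini:

        [ovs]
        of_connect_timeout = 300
        of_request_timeout = 300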

10.0.5

New Features

  • L2 agents based on the ML2 _common_agent now have the L2 extension API available. This API can be used by L2 extension drivers to request resources from the L2 agent. It is used, for example, to pass an instance of the IptablesManager to the Linuxbridge L2 agent QoS extension driver.
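
    A hedged sketch of a driver consuming that API; the base class path follows the L2 agent extension interface in the neutron tree, but the attribute fetched from agent_api is only an assumption for illustration:

        # Sketch only: the module path and the agent_api attribute name
        # are assumptions, not copied from the Neutron source.
        from neutron.agent.l2 import l2_agent_extension


        class ExampleExtension(l2_agent_extension.L2AgentExtension):

            def consume_api(self, agent_api):
                # The L2 agent hands its extension API object to each
                # extension driver before initialize() is called.
                self.agent_api = agent_api

            def initialize(self, connection, driver_type):
                # Assumption for illustration: a Linuxbridge QoS driver
                # could obtain the agent's IptablesManager this way.
                self.iptables_manager = getattr(
                    self.agent_api, 'iptables_manager', None)

            def handle_port(self, context, port):
                pass

            def delete_port(self, context, port):
                pass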

Bug Fixes

  • Fixes bug 1736674: security group rules are now properly applied by the Linuxbridge L2 agent when the QoS extension driver is enabled.

  • Adding security group rules by protocol number is documented, but was broken in one of the last couple of releases without being noticed. This is now fixed. For more information see bug 1716045.
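
    For illustration (not part of the original note), a rule can be added by protocol number with the OpenStack client; the group name is a placeholder and 112 is the VRRP protocol number:

        openstack security group rule create --ingress --protocol 112 my-secgroup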

10.0.3

New Features

  • Some scenario tests require advanced Glance images (for example, Ubuntu or CentOS) in order to pass. They are now skipped by default. If you need to execute those tests, please configure tempest.conf to use an advanced image and set image_is_advanced in the neutron_plugin_options section of tempest.conf to True. The first scenario test case that requires the new option in order to run is test_trunk.
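
    A minimal sketch of the tempest.conf stanza named above:

        [neutron_plugin_options]
        image_is_advanced = True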

10.0.0

Prelude

Hyper-V Neutron Agent has been fully decomposed from Neutron. Therefore, the neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver firewall driver has been deleted. Update the neutron_hyperv_agent.conf / neutron_ovs_agent.conf files on the Hyper-V nodes to use hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver, which is the networking_hyperv security groups driver.
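
A minimal sketch of the updated setting, assuming the driver is configured through the usual firewall_driver option in the [securitygroup] section:

    [securitygroup]
    firewall_driver = hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver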

New Features

  • Middleware was added to parse the X-Forwarded-Proto HTTP header or the Proxy protocol so that Neutron responds with the correct URL references when it is put behind a TLS proxy such as haproxy. This adds the http_proxy_to_wsgi middleware to the pipeline. This middleware is disabled by default, but can be enabled via a configuration option in the [oslo_middleware] group.
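
    A minimal sketch of enabling it; enable_proxy_headers_parsing is the oslo.middleware option assumed to control this middleware:

        [oslo_middleware]
        enable_proxy_headers_parsing = True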

  • The Linux Bridge agent now supports QoS DSCP marking rules.
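
    For illustration (not part of the original note), a DSCP marking rule could be created and attached with the OpenStack client; the policy and network names and the DSCP value are placeholders:

        openstack network qos policy create dscp-policy
        openstack network qos rule create --type dscp-marking --dscp-mark 26 dscp-policy
        openstack network set --qos-policy dscp-policy my-network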

  • Keepalived VRRP health check functionality has been added to enable verification of connectivity from the "master" router to all gateways. Activation of this feature enables gateway connectivity validation and rescheduling of the "master" router to another node when connectivity is lost. If all routers lose connectivity to the gateways, the election process will be repeated round-robin until one of the routers restores its gateway connection. In the meantime, all of the routers will be reported as "master".
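
    A minimal sketch of activating the check in l3_agent.ini, assuming ha_vrrp_health_check_interval is the option introduced by this feature (0, the default, leaves it disabled):

        [DEFAULT]
        ha_vrrp_health_check_interval = 30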

  • A new configuration section, [placement], has been added with two new options that allow the segments plugin to use the Compute placement ReST API. This API makes it possible to influence node placement of instances based on the availability of IPv4 addresses in routed networks. The first option, region_name, indicates the placement region to use. This option is useful if keystone manages more than one region. The second option, endpoint_type, determines the type of placement endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin.
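
    A minimal sketch of the new section (the region name is a placeholder):

        [placement]
        region_name = RegionOne
        endpoint_type = internal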

  • The Designate driver can now use Keystone v3 authentication options. The [designate] section now accepts the auth_type option, as well as other keystoneauth options (e.g. auth_url, username, user_domain_name, password, project_name, project_domain_name).
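
    A minimal sketch of such a stanza (all values are placeholders):

        [designate]
        auth_type = password
        auth_url = http://keystone.example.com:5000/v3
        username = neutron
        user_domain_name = Default
        password = secret
        project_name = service
        project_domain_name = Default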

  • The resource tag mechanism now supports the subnet, port, subnetpool and router resources.

  • A new mechanism has been added to the neutron-netns-cleanup tool that allows killing processes listening on any Unix or network socket within a namespace. The new mechanism will try to kill those processes gracefully using the SIGTERM signal and, if they refuse to die, the SIGKILL signal will be sent to each remaining process to ensure a proper cleanup.
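
    For illustration, assuming the kill behavior is triggered by the tool's --force flag:

        neutron-netns-cleanup --force \
            --config-file /etc/neutron/neutron.conf \
            --config-file /etc/neutron/dhcp_agent.ini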

  • The QoS driver architecture has been refactored to overcome several previous limitations. The main one was the coupling of QoS details into the mechanism drivers; another was the need for configuration knobs to enable each specific notification driver, which is now handled automatically.

  • The created_at and updated_at resource fields now include a timezone indicator at the end. Because this is a change in field format, the old timestamp_core extension has been removed and replaced with a standard-attr-timestamp extension.
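
    For illustration, a timestamp that was previously rendered as 2017-01-01T00:00:00 now carries the timezone indicator appended, e.g. in an API response:

        "created_at": "2017-01-01T00:00:00Z",
        "updated_at": "2017-01-01T00:05:00Z"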

  • Initial support for oslo.privsep has been added. Most external commands are still executed using oslo.rootwrap.

  • vhost-user reconnect is a mechanism that allows a vhost-user frontend to reconnect to a vhost-user backend in the event the backend terminates, either as a result of a graceful shutdown or a crash. This allows a VM utilising a vhost-user interface to reconnect automatically to the backend, e.g. Open vSwitch, without requiring the VM to reboot. In this release, support for vhost-user reconnect was added to the neutron Open vSwitch agent and ml2 driver.

Known Issues

  • In kernels < 3.19, the net.ipv4.ip_nonlocal_bind sysctl option was not isolated to network namespace scope. L3 HA sets this option to zero to avoid sending gratuitous ARPs for IP addresses that were removed while processing. On such kernels the option cannot be set per namespace, so gratuitous ARPs may still be sent, and they may populate the ARP cache tables of peer machines with wrong MAC addresses.
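
    Whether the option is namespace-scoped on a given host can be checked directly (the namespace name is a placeholder):

        sysctl net.ipv4.ip_nonlocal_bind
        ip netns exec qrouter-<router-uuid> sysctl net.ipv4.ip_nonlocal_bind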

Upgrade Notes

  • The api-paste.ini configuration file for the paste pipeline was updated to add the http_proxy_to_wsgi middleware.

  • The dhcp_domain DHCP agent configuration option was deprecated in the Liberty cycle and is now no longer used. The dns_domain option should be used instead.
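
    A minimal sketch of the replacement option, assuming it is set in neutron.conf (the domain is a placeholder):

        [DEFAULT]
        dns_domain = example.org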

  • On upgrade, IPv6 addresses in DHCP namespaces that have been created dynamically via SLAAC will be removed, and static IPv6 addresses will be added instead.

  • Update the neutron_hyperv_agent.conf / neutron_ovs_agent.conf files on the Hyper-V nodes to use hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver, which is the networking_hyperv security groups driver.

  • A new option, ha_keepalived_state_change_server_threads, has been added to configure the number of concurrent threads spawned for keepalived server connection requests. Higher values increase the CPU load on the agent nodes. The default value is half the number of CPUs present on the node. This allows operators to tune the number of threads to suit their environment; with more threads, simultaneous state change requests for multiple HA routers can be handled faster.
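
    A minimal sketch, assuming the option lives in the L3 agent configuration:

        [DEFAULT]
        ha_keepalived_state_change_server_threads = 8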

  • Obsolete oslo.messaging.notify.drivers entrypoints that were left in tree for backwards compatibility with pre-Icehouse releases have been removed. Those are neutron.openstack.common.notifier.log_notifier, neutron.openstack.common.notifier.no_op_notifier, neutron.openstack.common.notifier.test_notifier, neutron.openstack.common.notifier.rpc_notifier2, neutron.openstack.common.notifier.rpc_notifier. Use values provided by oslo.messaging library to configure notification drivers.
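
    For example, a standard oslo.messaging notification driver is configured like this (the driver value is illustrative):

        [oslo_messaging_notifications]
        driver = messagingv2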

  • The advertise_mtu option is removed. Now Neutron always uses all available means to advertise MTUs to instances (including DHCPv4 and IPv6 RA).

  • The min_l3_agents_per_router configuration option was deprecated in the Newton cycle and removed in Ocata. HA routers no longer require a minimal number of L3 agents in order to be created, although they obviously require at least two L3 agents to provide HA guarantees. The rationale for removing the option is the case where a router is created just when an agent is not operational: the creation of the router will now succeed, and when a second agent resumes operation the router will be scheduled to it, providing HA.

  • After the upgrade, a macvtap agent without physical_interface_mappings configured cannot be started. Specify a valid mapping to be able to start and use the macvtap agent.
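
    A minimal sketch of such a mapping (the physical network and interface names are placeholders):

        [macvtap]
        physical_interface_mappings = provider:eth1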

  • The timestamp_core extension has been removed and replaced with the standard-attr-timestamp extension. Resources will still have timestamps in the created_at and updated_at fields, but timestamps will have time zone info appended to the end to be consistent with other OpenStack projects.

Deprecation Notes

  • The L3 agent send_arp_for_ha configuration option is deprecated and will be removed in Pike. The functionality will remain, and the agent will send three gratuitous ARPs whenever a new floating IP is configured.

  • The iptables firewall driver will no longer enable bridge firewalling in future versions of Neutron. If your distribution overrides the default value for any of the relevant sysctl settings (net.bridge.bridge-nf-call-arptables, net.bridge.bridge-nf-call-ip6tables, and net.bridge.bridge-nf-call-iptables), then make sure you set them back to the upstream kernel default (1) using the /etc/sysctl.conf or /etc/sysctl.d/* configuration files.
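
    A minimal sketch of such a snippet, e.g. in /etc/sysctl.d/99-neutron-bridge.conf:

        net.bridge.bridge-nf-call-arptables = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.bridge.bridge-nf-call-iptables = 1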

  • The notification_drivers option in the [qos] section has been deprecated. It will be removed in a future release.

Bug Fixes

  • There is a race condition when adding ports in DHCP namespaces where an IPv6 address could be dynamically created via SLAAC from a Router Advertisement sent from the L3 agent, leading to a failure to start the DHCP agent. This bug has been fixed, but care must be taken on an upgrade dealing with any potentially stale dynamic addresses. For more information, see bug 1627902.

  • Versions of keepalived < 1.2.20 don't send gratuitous ARPs when the keepalived process receives a SIGHUP signal. These versions are packaged in some Linux distributions, such as Red Hat Enterprise Linux 7, CentOS 7, and Ubuntu Xenial. Not sending gratuitous ARPs may leave peer ARP cache tables with wrong entries for floating IP addresses until those entries are invalidated. To fix that scenario, Neutron now sends gratuitous ARPs for all new IP addresses that appear on non-HA interfaces in router namespaces. This simulates the behavior of newer versions of keepalived.

Other Notes

  • Due to changes in internal L3 logic, a server crash or backend failure during FIP creation may leave dangling ports attached to external networks. These ports can be identified by a device_id value of PENDING. While those ports can be removed by admins, the neutron-server service will now also trigger a periodic clean-up (approximately every 10 minutes) to address the issue.
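
    For illustration (assuming a client that supports filtering ports by device_id), such ports could be listed with something like:

        neutron port-list -- --device_id=PENDING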

  • The allow_pagination and allow_sorting configuration options have been removed. Sorting and pagination are now always enabled for plugins that support these features.

  • vhost-user reconnect requires DPDK 16.07, QEMU 2.7, and Open vSwitch 2.6 to function. If an older QEMU is used, reconnect will not be available, but vhost-user will still function.