Train Series Release Notes

5.1.2-37

Known Issues

  • When using a distribution with a recent SELinux release, such as CentOS 8 Stream, the PING health monitor does not work because shell_exec_t calls are denied by SELinux.

  • Fixed a configuration issue that allowed authenticated and authorized users to inject code into the HAProxy configuration using API requests. The Octavia API no longer accepts unencoded whitespace characters in url_path values in health monitor update requests.

Upgrade Notes

  • The fix that updates the Netfilter Conntrack Sysfs variables requires rebuilding the amphora image in order to be effective.

Bug Fixes

  • Increased the TCP buffer memory maximum and enabled MTU ICMP black hole detection.

  • The generated RSyslog configuration on the amphora now supports RSyslog failover over TCP when multiple RSyslog servers are specified.

  • To avoid hammering the Neutron API when a batch update creates many new members, the subnet validation results are now cached during the batch update members API call. In addition, only new members are validated during batch updates, since the subnet ID is immutable.

  • Disabled conntrack for TCP flows in the amphora. This reduces memory usage for HAProxy-based listeners and prevents some kernel warnings about dropped packets.

  • Fix disabled UDP pools. Disabled UDP pools were marked as “OFFLINE” but the requests were still forwarded to the members of the pool.

  • Correctly detect the member operating status “drain” when querying status data from HAProxy.

  • Fixes a load balancer creation failure that occurred when one of the listener ports matched an Octavia-generated peer port and allowed_cidr was explicitly set to 0.0.0.0/0 on the listener. The failure was caused by the creation of two security group rules, one with remote_ip_prefix set to None and one with remote_ip_prefix set to 0.0.0.0/0; Neutron rejected the second request because the security group rule already existed.

  • Enable required SELinux booleans for CentOS or RHEL amphora image.

  • Fixed a backwards compatibility issue with the feature that preserves HAProxy server states between reloads. HAProxy versions 1.5 and below do not support this feature, so Octavia will not activate it on amphorae running those versions.

  • Fix a bug that prevented the provisioning_state of a health monitor from being set to ERROR when an error occurred while creating, updating or deleting the health monitor.

  • Fixed MAX_TIMEOUT for the timeout_client_data, timeout_member_connect, timeout_member_data and timeout_tcp_inspect listener API parameters. The maximum was reduced from 365 days to 24 days, so it no longer exceeds the maximum value of the corresponding database column data type.

  • Fixed an issue with the lo interface in the amphora-haproxy network namespace. The lo interface was down and prevented haproxy from communicating with other haproxy processes (for persistent stick tables) on configuration changes. This delayed the cleanup of old haproxy workers and increased memory consumption after reloading the configuration.

  • Fixed an issue with members in ERROR operating status that may have been updated briefly to ONLINE during a Load Balancer configuration change.

  • Fixed a potential error when plugging a member from a new network after deleting another member and unplugging its network. Octavia may have tried to plug the new network into a new interface with an already existing name. This fix requires updating the amphora image.

  • The Netfilter Conntrack Sysfs variables net.netfilter.nf_conntrack_max and nf_conntrack_expect_max are now set to sensible values on the amphora. Previously, the kernel default values were used, which were much too low for the configured net.netfilter.nf_conntrack_buckets value. As a result, packets could be dropped because the conntrack table filled up too quickly. Note that this affects only UDP and SCTP protocol listeners. Connection tracking is disabled for TCP-based connections on the amphora, including HTTP(S).
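
    Expressed as a sysctl fragment for illustration only; the actual values are computed by Octavia and may differ:

```ini
# Illustrative values only; Octavia derives the real values from
# the configured net.netfilter.nf_conntrack_buckets.
net.netfilter.nf_conntrack_max = 262144
net.netfilter.nf_conntrack_expect_max = 1024
```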

  • Increased the limit values for nr_open and file-max in the amphora. The new value is based on what HAProxy 2.x expects from the system given the greatest maxconn value that Octavia can set.

  • Fix an issue with the provisioning status of a load balancer that was set to ERROR too early when an error occurred, making the load balancer mutable while the tasks for this resource had not yet finished executing.

  • Fix an issue that could set the provisioning status of a load balancer to a PENDING_UPDATE state when an error occurred in the amphora failover flow.

  • Fix a bug where, when updating a load balancer with a QoS policy after a failover, Octavia attempted to update the VRRP ports of the deleted amphorae, moving the provisioning status of the load balancer to ERROR.

  • Fix an issue when Octavia performs a failover of an ACTIVE-STANDBY load balancer that has both amphorae missing. Some tasks in the controller took too long to time out because the timeout values defined in [haproxy_amphora].active_connection_max_retries and [haproxy_amphora].active_connection_rety_interval were not used.

  • Fix a bug that could have triggered a race condition when configuring a member interface in the amphora. Due to a race condition, a network interface might have been deleted from the amphora, leading to a loss of connectivity.

  • Fixed “Could not retrieve certificate” error when updating/deleting the client_ca_tls_container_ref field of a listener after a CA/CRL was deleted.

  • Fixed validations in the L7 rule and session cookie APIs to prevent authenticated and authorized users from injecting code into the HAProxy configuration. CR and LF (\r and \n) are no longer allowed in L7 rule keys and values. Session persistence cookie names must follow the rules described in https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie.

  • Fix load balancers stuck in PENDING_UPDATE issues for some API calls (POST /l7rule, PUT /pool) when a provider denied the call.

  • Validate that the creation of L7 policies is compatible with the protocol of the listener in the Amphora driver. L7 policies are allowed for TERMINATED_HTTPS or HTTP protocol listeners, but not for HTTPS, TCP or UDP protocol listeners.

5.1.2

Bug Fixes

  • Fixed an issue when building the HAProxy configuration files where some DELETED members could have been included in the server list after adding new members.

  • Fixes an issue where provider drivers may not decrement the load balancer objects quota on delete.

  • Fix an issue with the rsyslog configuration file in the Amphora when the log offloading feature and the local log storage feature are both disabled.

  • Some IPv6 UDP members were incorrectly marked in ERROR status, because of a formatting issue while generating the health message in the amphora.

  • Fix weighted round-robin for UDP listeners with keepalived and lvs. The algorithm must be specified as ‘wrr’ in order for weighted round-robin to work correctly, but was being set to ‘rr’.
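
    The corrected behavior corresponds to the lb_algo directive in the generated keepalived/LVS configuration; a minimal sketch, with placeholder addresses and weights:

```
virtual_server 203.0.113.10 53 {
    lb_algo wrr        # weighted round-robin; previously rendered as 'rr'
    lb_kind NAT
    protocol UDP
    real_server 192.0.2.11 53 {
        weight 2
    }
}
```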

  • Fix a bug that allowed a user to create a load balancer on a vip_subnet_id belonging to another user by specifying the subnet UUID.

5.1.1

Bug Fixes

  • Fixes an issue with load balancer failover, when the VIP subnet is out of IP addresses, that could lead to the VIP being deallocated.

  • Fix default value override for timeout values for listeners. Changing the default timeouts in the configuration file wasn’t correctly applied in the default listener parameters.

  • Fix nf_conntrack_buckets sysctl in the Amphora, its value was incorrectly set.

  • Fixed an issue where updating a CRL or client certificate on a pool would cause the pool to go into ERROR.

  • Add a validation step in the Octavia Amphora driver to ensure that the port_security_enabled parameter is set on the VIP network.

5.1.0

New Features

  • Add a new configuration option to define the default connection_limit for new listeners that use the Amphora provider. The option is [haproxy_amphora].default_connection_limit and its default value is 50,000. This value is used when creating or setting a listener with -1 as connection_limit parameter, or when unsetting connection_limit parameter.
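
    In octavia.conf this corresponds to, for example:

```ini
[haproxy_amphora]
# Applied when a listener is created or updated with connection_limit = -1,
# or when connection_limit is unset (Amphora provider only).
default_connection_limit = 50000
```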

Security Issues

  • If you are using the admin_or_owner-policy.yaml policy override file you should upgrade your API processes to include the unscoped token fix. The default policies are not affected by this issue.

Bug Fixes

  • Fixed an issue where members added to TLS-enabled pools would go to ERROR provisioning status.

  • Fixed an issue with failing over an amphora if the pair amphora in an active/standby pair had a missing VRRP port in neutron.

  • Fix operational status for disabled UDP listeners. The operating status of disabled UDP listeners is now OFFLINE instead of ONLINE; the behavior is now similar to that of HTTP/HTTPS/TCP/… listeners.

  • Fixed an issue that could cause load balancers, with multiple amphora in a failed state, to be unable to complete a failover.

  • Fix an incorrect operating_status with empty UDP pools. A UDP pool without any member is now ONLINE instead of OFFLINE.

  • Add missing cloud-utils-growpart RPM to Red Hat-based amphora images.

  • Add missing cronie RPM to Red Hat-based amphora images.

  • Fix a potential AttributeError exception at init time in the housekeeping service when using python2 because of an issue with thread safety when calling strptime for the first time.

  • Fixed an issue where TLS-enabled pools would fail to provision.

  • Fixed an issue where UDP only load balancers would not bring up the VIP address.

  • Fix a potential invalid DOWN operating status for members of a UDP pool. A race condition could have occurred when building the first heartbeat message after adding a new member to a pool, so the recently added member could have been reported as DOWN.

  • Fixes an issue when using the admin_or_owner-policy.yaml policy override file and unscoped tokens.

  • With haproxy 1.8.x releases, haproxy consumes much more memory in the amphorae because of pre-allocated data structures. This amount of memory depends on the maxconn parameters in its configuration file (which is related to the connection_limit parameter in the Octavia API). In the Amphora provider, the default connection_limit value -1 is now converted to a maxconn of 50,000. It was previously 1,000,000 but that value triggered some memory allocation issues when quickly performing multiple configuration updates in a load balancer.

5.0.3

Upgrade Notes

  • The failover improvements do not require an updated amphora image, but updating existing amphora will minimize the failover outage time for standalone amphora on subsequent failovers.

Bug Fixes

  • Fixed an issue where SNI container settings were not applied on listener update API calls.

  • Fixed an Octavia API validation on listener update where SNI containers could be set on non-TERMINATED_HTTPS listeners.

  • Fixed an issue where some columns could not be used for sort keys in API list calls.

  • Fix an issue where listener creation failed when TLS was enabled on the Barbican service.

  • Fixed an issue where amphora load balancers fail to create when Nova anti-affinity is enabled and topology is SINGLE.

  • Fixed an issue where listener “insert_headers” parameter was accepted for protocols that do not support header insertion.

  • Fixed code that configured the CentOS/Red Hat amphora images to use the correct names for the network ‘ifcfg’ files for static routes and routing rules. It was using the wrong name for the routes file, and did not support IPv6 in either file. For more information, see https://storyboard.openstack.org/#!/story/2007051

  • Significantly improved the reliability and performance of amphora and load balancer failovers. This is especially true when the Nova service is experiencing failures.

5.0.2

Upgrade Notes

  • After this upgrade, users will no longer be able to use network resources they cannot see or “show” on load balancers. Operators can revert this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True.

  • Any amphorae running a py3 based image must be recycled or else they will eventually fail on certificate rotation.

  • An amphora image update is recommended to pick up a workaround to an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

Security Issues

  • Previously, if a user knew or could guess the UUID of a network resource, they could use that UUID to create load balancer resources. Now the user must have permission to see or “show” the resource before it can be used with a load balancer. This is the new default, but operators can disable this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True. This issue falls under the “Class C1” security classification, as the user would require a valid UUID.
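
    A sketch of the override in octavia.conf; the section name is an assumption, check the configuration reference for your release:

```ini
[networking]
# Revert to the pre-fix behavior (not recommended).
allow_invisible_resource_usage = True
```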

Bug Fixes

  • Fixed an issue where, when a load balancer is disabled, the Octavia Health Manager kept failing over its amphorae.

  • Added listener and pool protocol validation. Pool and listener protocols cannot be combined arbitrarily, so constraints are now enforced on protocol combinations.

  • Resolved broken certificate upload on py3 based amphora images. On a housekeeping certificate rotation event, the amphora would clear out its server certificate and return a 500, putting the amphora in ERROR status and breaking further communication. See upgrade notes.

  • Fixed an issue where the amphora image create tool would check out the master amphora-agent code and master upper constraints.

  • Fixes an issue where load balancers with more than one TLS enabled listener, using client authentication and/or backend re-encryption, may load incorrect certificates for the listener.

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the house keeping service and leave resources such as amphorae in a BOOTING status.

  • Fixed an issue where load balancers would go into ERROR when setting data not visible to providers (e.g. tags).

  • Workaround an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

  • The delay between checks on UDP health monitors was using the incorrect configuration value (timeout) when it should have used delay.

Other Notes

  • Amphorae that are booting for a specific load balancer will now be linked to that load balancer immediately upon creation. Previously this would not happen until near the end of the process, leaving a gap during boot during which it was difficult to understand which booting amphora belonged to which load balancer. This was especially problematic when attempting to troubleshoot load balancers that entered ERROR status due to boot issues.

5.0.1

Upgrade Notes

  • A new amphora image is required to fix the potential certs-ramfs race condition.

Security Issues

  • A race condition between the certs-ramfs and the amphora agent may lead to tenant TLS content being stored on the amphora filesystem instead of in the encrypted RAM filesystem.

Bug Fixes

  • Fixes an issue where load balancers with more than one TLS enabled listener, one or more SNI enabled, may load certificates from other TLS enabled listeners for SNI use.

  • Fixed a potential race condition with the certs-ramfs and amphora agent services.

  • Fixes the ability to filter on the provider flavor capabilities API.

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the controller worker and leave resources in a PENDING_CREATE/PENDING_UPDATE/PENDING_DELETE provisioning status. If the duration of an Octavia flow is greater than the ‘graceful_shutdown_timeout’ configuration value, stopping the Octavia worker can still interrupt the creation of resources.

5.0.0

New Features

  • Adds support for the driver agent to query for load balancer objects.

  • Octavia now supports Amphora log offloading. Operators can define syslog targets for the Amphora administrative logs and for the tenant load balancer flow logs.

  • The Octavia driver-agent now supports starting provider driver agents. Provider driver agents are long running agent processes supporting provider drivers.

  • The default kernel for the amphora image has switched from linux-image-generic to linux-image-virtual, resulting in an image size reduction of about 150MB. The linux-image-virtual kernel works with kvm, qemu tcg, and Xen hypervisors among others.

  • New Load Balancing algorithm SOURCE_IP_PORT has been added. It is supported only by OVN provider driver.

  • Added support to debug with the Python Visual Studio Debugger engine (ptvsd).

  • Added support to create RHEL 8 amphora images.

  • Added support for VIP access control lists. Users can now limit incoming traffic to a set of allowed CIDRs.

  • The validity period for locally generated certificates used inside Amphora is now configurable. See [certificates] cert_validity_time.

  • The batch member update resource can now be used additively by passing the query parameter additive_only=True. Existing members can be updated and new members will be created, but missing members will not be deleted.
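
    A sketch of such a request against the v2 API; the pool ID and member bodies are placeholders:

```
PUT /v2/lbaas/pools/{pool_id}/members?additive_only=True

{
    "members": [
        {"address": "192.0.2.10", "protocol_port": 80},
        {"address": "192.0.2.11", "protocol_port": 80}
    ]
}
```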

  • Now supports oslo_middleware http_proxy_to_wsgi, which will set up the request URL correctly in the case that there is a proxy (for example, a loadbalancer such as HAProxy) in front of the Octavia API. It is off by default and can be enabled by setting enable_proxy_headers_parsing=True in the [oslo_middleware] section of octavia.conf.
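
    For example, in octavia.conf:

```ini
[oslo_middleware]
# Parse X-Forwarded-* headers set by a fronting proxy; off by default.
enable_proxy_headers_parsing = True
```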

  • Allow creation of volume-based amphorae. Many production deployments use volume-backed instances for greater flexibility. Octavia will create a volume and attach it to the amphora.

    New settings:
      • volume_driver: Whether to use a volume driver (cinder) to create volume-backed amphorae.
      • volume_size: Size of the root volume for the amphora instance when using Cinder.
      • volume_type: Type of volume for the amphora root disk.
      • volume_create_retry_interval: Interval to wait between checks for the volume to become available.
      • volume_create_timeout: Timeout after which volume creation is considered failed.
      • volume_create_max_retries: Maximum number of retries when creating a volume.
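
    Expressed as an octavia.conf fragment; the section names and driver alias are assumptions, and the values are examples only:

```ini
[controller_worker]
# Assumed driver alias for the Cinder volume driver.
volume_driver = volume_cinder_driver

[cinder]
volume_size = 16
volume_type = standard
volume_create_retry_interval = 5
volume_create_timeout = 300
volume_create_max_retries = 5
```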

Known Issues

  • Amphorae are unable to provide tenant flow logs for UDP listeners.

  • When a load balancer with a UDP listener is updated, the listener service is restarted, which causes an interruption of the flow of traffic during a short period of time. This issue is caused by a keepalived bug (https://github.com/acassen/keepalived/issues/1163) that was fixed in keepalived 2.0.14, but this package is not yet provided by distributions.

Upgrade Notes

  • To enable log offloading, the Amphora image needs to be updated.

  • All pools configured under the OVN provider driver are automatically migrated to the SOURCE_IP_PORT algorithm. Previously the algorithm was named ROUND_ROBIN, but it did not actually behave like round robin. Investigation showed that core OVN utilizes a 5-tuple hash/RSS hash in DPDK/kernel as its load balancing algorithm. The 5-tuple hash covers the source IP, destination IP, protocol, source port and destination port. To reflect this, the name was changed to SOURCE_IP_PORT.

  • To fix the issue with active/standby load balancers or single topology load balancers with members on the VIP subnet, you need to update the Amphora image.

  • A new amphora image is required to resolve the amphora memory issues when a load balancer has multiple listeners and the amphora image uses HAProxy 1.8 or newer.

  • Octavia v1 API (used for integration with Neutron-LBaaS) has been removed. If Neutron-LBaaS integration is still required, do not upgrade to this version.

  • The default TaskFlow engine is now set to ‘parallel’ instead of ‘serial’. The parallel engine schedules tasks onto different threads so that non-dependent tasks can run simultaneously, which accelerates the execution of some Octavia amphora flows, such as provisioning active-standby amphora load balancers. Operators can revert to the previous default ‘serial’ engine type by setting the configuration option [task_flow]/engine = serial.
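
    To revert to the previous behavior in octavia.conf:

```ini
[task_flow]
engine = serial
```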

Deprecation Notes

  • Octavia v1 API deprecation is complete. All relevant code, tests, and docs have been removed.

Critical Issues

  • Fixed a bug where active/standby load balancers and single topology load balancers with members on the VIP subnet may fail. An updated image is required to fix this bug.

Security Issues

  • Correctly require two-way certificate authentication to connect to the amphora agent API (CVE-2019-17134).

  • Communication between the control-plane and the amphora-agent now uses minimum TLSv1.2 by default, and is configurable. The previous default of SSLv2/3 is widely considered insecure.

  • The default validity time for Amphora certificates has been reduced from two years to 30 days.

Bug Fixes

  • Fixed the API handling of None (JSON null) on object update calls. The API will now either clear the value from the field or will reset the value of the field to the API default.

  • Fixed an issue with the health manager reporting an UnboundLocalError if it gets an exception attempting to get a database connection.

  • Fixes a potential DB deadlock in allocate_and_associate found in testing.

  • Fixed an issue creating members on networks with IPv6 subnets.

  • Fixes an issue where, if we were unable to attach the base (VRRP) port to an amphora instance, the revert would not clean up the port in neutron.

  • Fixed duplicated IPv6 addresses in Active/Standby mode in CentOS amphorae.

  • Fixed an issue where the driver errors were not caught.

  • Fixed an error triggered when the deletion of the VIP security group fails.

  • Fixed an issue where the listener API would accept null/None values for fields that must have a valid value, such as connection-limit. Now when a PUT call is made to one of these fields with null as the value the API will reset the field value to the field default value.

  • Fix an issue that prevented the cleanup of load balancer entries in the database by the Octavia housekeeper service.

  • Fixed an issue where /etc/resolv.conf on RHEL-based amphorae was being populated with DNS servers.

  • Fixes the provider driver utils conversion of flavor_id in load balancer conversion, sni_refs and L7 policies in listener conversion, and health monitor in pool conversions.

  • Fixed an issue that prevented spare amphorae from being created.

  • Add support for monitor_address and monitor_port attributes in UDP members. Previously, monitor_address and monitor_port were ignored and address and protocol_port attributes were used as monitoring address and port.

  • Fix operating_status for pools and members that use the UDP protocol. operating_status values are now consistent with the values of non-UDP load balancers.

  • Fix a bug that prevented UDP servers from being restored as members of a pool after removing a health monitor resource.

  • Fixes an issue in the selection of vip-subnet-id on multi-subnet networks by checking the IP availability of the subnets, ensuring that enough IP addresses are available for the load balancer when it is created with vip-network-id specified.

  • Fixed an error when plugging the VIP on CentOS-based amphorae.

  • Fixed an issue where trying to set a QoS policy on a VIP while the QoS extension is disabled would bring the load balancer to ERROR. Should the QoS extension be disabled, the API will now return HTTP 400 to the user.

  • Fixed an issue where setting a QoS policy on the VIP would bring the load balancer to ERROR when the QoS extension is enabled.

  • Fixed a bug that prevents spare amphora rotation.

  • Fixed an issue with load balancers that have multiple listeners when using an amphora image that contains HAProxy 1.8 or newer. An updated amphora image is required to apply this fix.

  • The passphrase for the configuration option ‘server_certs_key_passphrase’ is used as a Fernet key in Octavia and thus must be 32 base64(url)-compatible characters long. Octavia now validates the passphrase length and format.
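
    A compliant passphrase can be generated with standard library tools; 24 random bytes encode to exactly 32 URL-safe base64 characters. This is a sketch, not an official Octavia utility:

```python
import base64
import os

# 24 random bytes encode to exactly 32 URL-safe base64 characters
# (no '=' padding), matching the required passphrase format.
passphrase = base64.urlsafe_b64encode(os.urandom(24)).decode("ascii")

assert len(passphrase) == 32
```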

  • Fixed a bug that prevented the creation of listeners for different protocols on the same port (e.g. TCP port 53 and UDP port 53).

  • Adding a member with different IP protocol version than the VIP IP protocol version in a UDP load balancer caused a crash in the amphora. A validation step in the amphora driver now prevents mixing IP protocol versions in UDP load balancers.