Train Series Release Notes

5.0.2-15

Upgrade Notes

  • The failover improvements do not require an updated amphora image, but updating existing amphora will minimize the failover outage time for standalone amphora on subsequent failovers.

Bug Fixes

  • Fixed an issue where SNI container settings were not applied on listener update API calls.

  • Fixed an Octavia API validation on listener update where SNI containers could be set on non-TERMINATED_HTTPS listeners.

  • Fixed an issue where some columns could not be used for sort keys in API list calls.

  • Fixed an issue where listener creation failed when TLS was enabled in the Barbican service.

  • Fixed an issue where amphora load balancers fail to create when Nova anti-affinity is enabled and topology is SINGLE.

  • Fixed an issue where listener “insert_headers” parameter was accepted for protocols that do not support header insertion.

  • Fixed code that configured the CentOS/Red Hat amphora images to use the correct names for the network ‘ifcfg’ files for static routes and routing rules. It was using the wrong name for the routes file, and did not support IPv6 in either file. For more information, see https://storyboard.openstack.org/#!/story/2007051

  • Significantly improved the reliability and performance of amphora and load balancer failovers. This is especially true when the Nova service is experiencing failures.

5.0.2

Upgrade Notes

  • After this upgrade, users will no longer be able to use network resources they cannot see or “show” on load balancers. Operators can revert this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True.

  • Any amphorae running a py3 based image must be recycled or else they will eventually fail on certificate rotation.

  • An amphora image update is recommended to pick up a workaround to an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

Security Issues

  • Previously, if a user knew or could guess the UUID of a network resource, they could use that UUID to create load balancer resources. Now the user must have permission to see or “show” the resource before it can be used with a load balancer. This is the new default, but operators can disable this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True. This issue falls under the “Class C1” security classification as the user would require a valid UUID.
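The opt-out described above is a single option in octavia.conf. A minimal fragment is shown below; the section name is an assumption (check the Octavia configuration reference for the exact placement):

```ini
# octavia.conf — revert to the pre-upgrade behavior and allow load
# balancers to use network resources the user cannot "show".
# Section name is an assumption; verify against your configuration reference.
[networking]
allow_invisible_resource_usage = True
```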

Bug Fixes

  • Fixed an issue where the Octavia Health Manager kept failing over the amphorae of a disabled load balancer.

  • Added listener and pool protocol validation. Pools and listeners cannot be combined arbitrarily, so protocol compatibility constraints are now enforced.

  • Resolved broken certificate upload on py3 based amphora images. On a housekeeping certificate rotation event, the amphora would clear out its server certificate and return a 500, putting the amphora in ERROR status and breaking further communication. See upgrade notes.

  • Fixed an issue where the amphora image create tool would check out the master amphora-agent code and master upper constraints.

  • Fixes an issue where load balancers with more than one TLS enabled listener, using client authentication and/or backend re-encryption, may load incorrect certificates for the listener.

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the housekeeping service and leave resources such as amphorae in a BOOTING status.

  • Fixed an issue where load balancers would go into ERROR when setting data not visible to providers (e.g. tags).

  • Workaround an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

  • The delay between checks on UDP health monitors was using the incorrect configuration value timeout, when it should have used delay.

Other Notes

  • Amphorae that are booting for a specific loadbalancer will now be linked to that loadbalancer immediately upon creation. Previously this would not happen until near the end of the process, leaving a gap during boot in which it was difficult to understand which booting amphora belonged to which loadbalancer. This was especially problematic when attempting to troubleshoot loadbalancers that entered ERROR status due to boot issues.

5.0.1

Upgrade Notes

  • A new amphora image is required to fix the potential certs-ramfs race condition.

Security Issues

  • A race condition between the certs-ramfs and the amphora agent may lead to tenant TLS content being stored on the amphora filesystem instead of in the encrypted RAM filesystem.

Bug Fixes

  • Fixes an issue where load balancers with more than one TLS enabled listener, one or more SNI enabled, may load certificates from other TLS enabled listeners for SNI use.

  • Fixed a potential race condition with the certs-ramfs and amphora agent services.

  • Fixes the ability to filter on the provider flavor capabilities API.

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the controller worker and leave resources in a PENDING_CREATE/PENDING_UPDATE/PENDING_DELETE provisioning status. If the duration of an Octavia flow is greater than the ‘graceful_shutdown_timeout’ configuration value, stopping the Octavia worker can still interrupt the creation of resources.

5.0.0

New Features

  • Adds support for the driver agent to query for load balancer objects.

  • Octavia now supports Amphora log offloading. Operators can define syslog targets for the Amphora administrative logs and for the tenant load balancer flow logs.

  • The Octavia driver-agent now supports starting provider driver agents. Provider driver agents are long running agent processes supporting provider drivers.

  • The default kernel for the amphora image has switched from linux-image-generic to linux-image-virtual, resulting in an image size reduction of about 150MB. The linux-image-virtual kernel works with kvm, qemu tcg, and Xen hypervisors among others.

  • New Load Balancing algorithm SOURCE_IP_PORT has been added. It is supported only by the OVN provider driver.

  • Added support to debug with the Python Visual Studio Debugger engine (ptvsd).

  • Added support to create RHEL 8 amphora images.

  • Added support for VIP access control lists. Users can now limit incoming traffic to a set of allowed CIDRs.
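Conceptually, the allowed-CIDR check is a source-address filter. The sketch below illustrates the idea with Python's ipaddress module; it is illustrative only and is not Octavia's actual implementation (which renders the list into amphora/provider-level rules):

```python
import ipaddress

def is_allowed(source_ip, allowed_cidrs):
    """Return True if source_ip falls within any of the allowed CIDRs.

    Membership checks across IP versions simply return False, so IPv4 and
    IPv6 CIDRs can be mixed in the same list.
    """
    addr = ipaddress.ip_address(source_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)

# Hypothetical allowed list for a listener.
allowed = ["192.0.2.0/24", "2001:db8::/32"]
```

Traffic from any source outside the listed CIDRs would be rejected by the listener.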

  • The validity period for locally generated certificates used inside Amphora is now configurable. See [certificates] cert_validity_time.
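For example, to set the validity period explicitly in octavia.conf (the value shown assumes the option is expressed in seconds; confirm units against the Octavia configuration reference):

```ini
# octavia.conf — validity period for locally generated Amphora
# certificates. 2592000 seconds = 30 days (assumed unit: seconds).
[certificates]
cert_validity_time = 2592000
```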

  • The batch member update resource can now be used additively by passing the query parameter additive_only=True. Existing members can be updated and new members will be created, but missing members will not be deleted.
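The additive behavior is selected purely by the query parameter on the batch member PUT call. A minimal sketch of the request shape follows; the base URL and pool ID are hypothetical placeholders:

```python
import json
from urllib.parse import urlencode

def build_batch_member_update(base_url, pool_id, members, additive_only=False):
    """Build the URL and JSON body for a batch member update (PUT) request.

    With additive_only=True, existing members are updated and new members
    are created, but members missing from the list are not deleted.
    """
    url = "%s/v2/lbaas/pools/%s/members" % (base_url, pool_id)
    if additive_only:
        url += "?" + urlencode({"additive_only": "True"})
    body = json.dumps({"members": members})
    return url, body

url, body = build_batch_member_update(
    "https://octavia.example.com", "POOL_ID",
    [{"address": "192.0.2.10", "protocol_port": 80}],
    additive_only=True,
)
```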

  • Now supports oslo_middleware http_proxy_to_wsgi, which will set up the request URL correctly in the case that there is a proxy (for example, a loadbalancer such as HAProxy) in front of the Octavia API. It is off by default and can be enabled by setting enable_proxy_headers_parsing=True in the [oslo_middleware] section of octavia.conf.
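The corresponding octavia.conf fragment, using the section and option named above:

```ini
# octavia.conf — honor X-Forwarded-* headers set by a proxy (e.g. HAProxy)
# in front of the Octavia API. Off by default.
[oslo_middleware]
enable_proxy_headers_parsing = True
```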

  • Allow creation of volume-based amphorae. Many production deployments use volume-backed instances for greater flexibility. Octavia will create a volume and attach it to the amphora.

    New configuration settings:

      • volume_driver: whether to use a volume driver (cinder) to create volume-backed amphorae.

      • volume_size: size of the root volume for the amphora instance when using Cinder.

      • volume_type: type of volume for the amphora root disk.

      • volume_create_retry_interval: interval to wait for the volume to reach the available state.

      • volume_create_timeout: timeout for volume creation.

      • volume_create_max_retries: maximum number of retries when creating the volume.

Known Issues

  • Amphorae are unable to provide tenant flow logs for UDP listeners.

  • When a load balancer with a UDP listener is updated, the listener service is restarted, which causes an interruption of the flow of traffic during a short period of time. This issue is caused by a keepalived bug (https://github.com/acassen/keepalived/issues/1163) that was fixed in keepalived 2.0.14, but this package is not yet provided by distributions.

Upgrade Notes

  • To enable log offloading, the amphora image needs to be updated.

  • All pools configured under the OVN provider driver are automatically migrated to the SOURCE_IP_PORT algorithm. Previously the algorithm was named ROUND_ROBIN, but in fact it did not behave like round robin. After investigation, it was observed that core OVN actually utilizes a 5-tuple hash (RSS hash in DPDK/kernel) as its load balancing algorithm. The 5-tuple hash covers source IP, destination IP, protocol, source port, and destination port. The name was changed to SOURCE_IP_PORT to reflect this.
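A 5-tuple hash can be sketched in a few lines; this is an illustration of the scheme's behavior (same flow always lands on the same backend), not OVN's actual hashing code:

```python
def pick_backend(src_ip, dst_ip, proto, src_port, dst_port, backends):
    """Illustrative 5-tuple hash: every packet of a given flow maps to the
    same backend, so there is no per-connection round robin."""
    key = hash((src_ip, dst_ip, proto, src_port, dst_port))
    return backends[key % len(backends)]

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
```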

  • To fix the issue with active/standby load balancers or single topology load balancers with members on the VIP subnet, you need to update the amphora image.

  • A new amphora image is required to resolve the amphora memory issues when a load balancer has multiple listeners and the amphora image uses haproxy 1.8 or newer.

  • Octavia v1 API (used for integration with Neutron-LBaaS) has been removed. If Neutron-LBaaS integration is still required, do not upgrade to this version.

  • The default TaskFlow engine is now set to ‘parallel’ instead of ‘serial’. The parallel engine schedules tasks onto different threads to allow for running non-dependent tasks simultaneously. This has the benefit of accelerating the execution of some Octavia Amphora flows such as provisioning of active-standby amphora loadbalancers. Operators can revert to the previous default ‘serial’ engine type by setting the configuration option [task_flow]/engine = serial
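As octavia.conf syntax, using the section and option named above:

```ini
# octavia.conf — revert to the previous default TaskFlow engine.
[task_flow]
engine = serial
```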

Deprecation Notes

  • Octavia v1 API deprecation is complete. All relevant code, tests, and docs have been removed.

Critical Issues

  • Fixed a bug where active/standby load balancers and single topology load balancers with members on the VIP subnet may fail. An updated image is required to fix this bug.

Security Issues

  • Correctly require two-way certificate authentication to connect to the amphora agent API (CVE-2019-17134).

  • Communication between the control-plane and the amphora-agent now uses minimum TLSv1.2 by default, and is configurable. The previous default of SSLv2/3 is widely considered insecure.

  • The default validity time for Amphora certificates has been reduced from two years to 30 days.

Bug Fixes

  • Fixed the API handling of None (JSON null) on object update calls. The API will now either clear the value from the field or will reset the value of the field to the API default.

  • Fixed an issue with the health manager reporting an UnboundLocalError if it gets an exception attempting to get a database connection.

  • Fixes a potential DB deadlock in allocate_and_associate found in testing.

  • Fixed an issue creating members on networks with IPv6 subnets.

  • Fixes an issue where, if we were unable to attach the base (VRRP) port to an amphora instance, the revert would not clean up the port in neutron.

  • Fixed duplicated IPv6 addresses in Active/Standby mode in CentOS amphorae.

  • Fixed an issue where the driver errors were not caught.

  • Fixed an error triggered when the deletion of the VIP security group fails.

  • Fixed an issue where the listener API would accept null/None values for fields that must have a valid value, such as connection-limit. Now when a PUT call is made to one of these fields with null as the value the API will reset the field value to the field default value.

  • Fix an issue that prevented the cleanup of load balancer entries in the database by the Octavia housekeeper service.

  • Fixed an issue where /etc/resolv.conf on RHEL-based amphorae was being populated with DNS servers.

  • Fixes the provider driver utils conversion of flavor_id in load balancer conversion, sni_refs and L7 policies in listener conversion, and health monitor in pool conversions.

  • Fixed an issue that prevented spare amphorae from being created.

  • Add support for monitor_address and monitor_port attributes in UDP members. Previously, monitor_address and monitor_port were ignored and address and protocol_port attributes were used as monitoring address and port.

  • Fix operating_status for pools and members that use the UDP protocol. operating_status values are now consistent with those of non-UDP load balancers.

  • Fix a bug that prevented UDP servers from being restored as members of a pool after removing a health monitor resource.

  • Fixes an issue in the selection of vip-subnet-id on multi-subnet networks: when a load balancer is created with only vip-network-id specified, the IP availability of each subnet is now checked to ensure enough IPs are available for the load balancer.

  • Fixed an error when plugging the VIP on CentOS-based amphorae.

  • Fixed an issue where trying to set a QoS policy on a VIP while the QoS extension is disabled would bring the load balancer to ERROR. Should the QoS extension be disabled, the API will now return HTTP 400 to the user.

  • Fixed an issue where setting a QoS policy on the VIP would bring the load balancer to ERROR when the QoS extension is enabled.

  • Fixed a bug that prevents spare amphora rotation.

  • Fixed an issue with load balancers that have multiple listeners when using an amphora image that contains HAProxy 1.8 or newer. An updated amphora image is required to apply this fix.

  • The passphrase for the config option ‘server_certs_key_passphrase’ is used as a Fernet key in Octavia and thus must be 32 base64(url)-compatible characters long. Octavia will now validate the passphrase length and format.
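A conforming passphrase can be generated with the standard library; this is a hedged sketch (24 random bytes encode to exactly 32 base64url characters with no padding):

```python
import base64
import os

# 24 random bytes -> 32 base64url characters, no '=' padding, which
# satisfies the 32-character, base64(url)-compatible requirement for
# server_certs_key_passphrase.
passphrase = base64.urlsafe_b64encode(os.urandom(24)).decode()
```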

  • Fixed a bug which prevented the creation of listeners for different protocols on the same port (e.g., TCP port 53 and UDP port 53).

  • Adding a member with different IP protocol version than the VIP IP protocol version in a UDP load balancer caused a crash in the amphora. A validation step in the amphora driver now prevents mixing IP protocol versions in UDP load balancers.