Ussuri Series Release Notes

6.0.1-6

Bug Fixes

  • Fixed an issue where SNI container settings were not being applied on listener update API calls.

  • Fixed an Octavia API validation on listener update where SNI containers could be set on non-TERMINATED_HTTPS listeners.

  • Fixed an issue where creating a listener failed when the Barbican service has TLS enabled.

6.0.1

Upgrade Notes

  • An amphora image update is recommended to pick up a workaround to an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

Bug Fixes

  • Fixed an issue where the Octavia Health Manager would repeatedly fail over the amphorae of a disabled load balancer.

  • Worked around an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

6.0.0

New Features

  • Added the oslo-middleware healthcheck app to the Octavia API. Hitting /healthcheck will return a 200. This is enabled via the [api_settings]healthcheck_enabled setting and is disabled by default.
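
    A minimal sketch of probing the new endpoint once [api_settings]healthcheck_enabled is set to True; the API hostname below is a placeholder, and 9876 is assumed as the usual Octavia API port.

      import requests

      # Placeholder Octavia API address; substitute your deployment's endpoint.
      resp = requests.get("http://octavia-api.example.com:9876/healthcheck", timeout=5)
      assert resp.status_code == 200  # per the note above, a healthy API returns 200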

  • Operators can now use the amphorav2 provider, which uses a jobboard-based controller. A jobboard controller solves the issue of resources stuck in PENDING_* states by writing information about task states to a persistent backend and monitoring job claims via the jobboard.
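
    A rough sketch of opting in via octavia.conf; the [api_settings] provider-driver options below are the standard ones, the driver descriptions are illustrative, and the amphorav2 provider additionally requires persistence and jobboard settings (typically under [task_flow]) that are not shown here.

      [api_settings]
      # Descriptions after the colons are illustrative only.
      enabled_provider_drivers = amphora:The amphora driver,amphorav2:The amphora v2 driver
      default_provider_driver = amphorav2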

  • Added listener and pool protocol validation. Listener and pool protocols can no longer be combined arbitrarily; only compatible protocol combinations are accepted.

  • Added support for CentOS 8 amphora images.

  • Two new types of health monitoring are now valid for UDP listeners: both HTTP and TCP check types can now be used.
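
    For example, an HTTP check can now be attached to a pool that sits behind a UDP listener. A rough openstacksdk sketch, where the cloud entry, pool UUID, and URL path are placeholders:

      import openstack

      # "mycloud" is a placeholder clouds.yaml entry.
      conn = openstack.connect(cloud="mycloud")
      hm = conn.load_balancer.create_health_monitor(
          pool_id="POOL_UUID",   # a pool behind a UDP listener
          type="HTTP",           # TCP is now also accepted
          delay=5,               # seconds between checks
          timeout=3,             # seconds to wait for a response
          max_retries=3,
          url_path="/healthz",   # placeholder; only meaningful for the HTTP type
      )
      print(hm.provisioning_status)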

  • Added an API allowing administrators to manage Octavia Availability Zones and Availability Zone Profiles, which behave nearly identically to Flavors and Flavor Profiles.
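
    A rough sketch of the two-step admin flow against the v2 API, mirroring the flavor-profile workflow; the endpoint, token, and exact request-body field names are assumptions for illustration.

      import json
      import requests

      API = "http://octavia-api.example.com:9876/v2/lbaas"   # placeholder endpoint
      HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",               # placeholder admin token
                 "Content-Type": "application/json"}

      # 1. Create an availability zone profile for a provider (field names assumed).
      resp = requests.post(
          API + "/availabilityzoneprofiles", headers=HEADERS,
          data=json.dumps({"availability_zone_profile": {
              "name": "az1-profile",
              "provider_name": "amphora",
              "availability_zone_data": json.dumps({"compute_zone": "az1"}),
          }}))
      profile_id = resp.json()["availability_zone_profile"]["id"]

      # 2. Create an availability zone that references the profile.
      requests.post(
          API + "/availabilityzones", headers=HEADERS,
          data=json.dumps({"availability_zone": {
              "name": "az1",
              "availability_zone_profile_id": profile_id,
          }}))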

  • Availability zone profiles can now override the valid_vip_networks configuration option.

  • Added an option to the diskimage-create.sh script to specify the Octavia Git branch to build the image from.

  • The load balancer create command now accepts an availability_zone argument. With the amphora driver this will create a load balancer in the targeted compute availability_zone in nova.

    When using spare pools, it will create spares in each AZ. For the amphora driver, if no [nova] availability_zone is configured and availability zones are used, results may be slightly unpredictable.

    Note (for the amphora driver): if it is possible for an amphora to change availability zone after initial creation (not typically possible without outside intervention), this may affect the ability of this feature to function properly.
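
    A rough openstacksdk sketch of targeting a zone at creation time, assuming an SDK release that exposes the availability_zone attribute; the cloud entry, subnet UUID, and zone name are placeholders.

      import openstack

      conn = openstack.connect(cloud="mycloud")      # placeholder clouds.yaml entry
      lb = conn.load_balancer.create_load_balancer(
          name="web-lb",
          vip_subnet_id="SUBNET_UUID",               # placeholder VIP subnet
          availability_zone="az1",                   # zone defined by the operator
      )
      print(lb.provisioning_status)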

Upgrade Notes

  • After this upgrade, users will no longer be able to use network resources they cannot see or “show” on load balancers. Operators can revert this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True.
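
    A sketch of the override in octavia.conf; the option name comes from this note, while the [networking] group shown here is an assumption, so confirm the group against your deployment's sample configuration.

      [networking]
      # Group name is an assumption; True restores the pre-upgrade behavior.
      allow_invisible_resource_usage = True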

  • Any amphorae running a py3 based image must be recycled or else they will eventually fail on certificate rotation.

  • Python 2.7 support has been dropped. The minimum version of Python now supported by Octavia is Python 3.6.

  • A new amphora image is required to fix the potential certs-ramfs race condition.

Security Issues

  • Previously, if a user knew or could guess the UUID of a network resource, they could use that UUID to create load balancer resources even though they could not see the resource. Now the user must have permission to see or “show” the resource before it can be used with a load balancer. This is the new default, but operators can disable this behavior by setting the configuration file setting “allow_invisible_resource_usage” to True. This issue falls under the “Class C1” security classification, as the user would require a valid UUID.

  • Correctly require two-way certificate authentication to connect to the amphora agent API (CVE-2019-17134).

  • A race condition between the certs-ramfs and the amphora agent may lead to tenant TLS content being stored on the amphora filesystem instead of in the encrypted RAM filesystem.

Bug Fixes

  • Resolved broken certificate upload on py3 based amphora images. On a housekeeping certificate rotation event, the amphora would clear out its server certificate and return a 500, putting the amphora in ERROR status and breaking further communication. See upgrade notes.

  • Fixes an issue where load balancers with more than one TLS enabled listener, one or more of them SNI enabled, may load certificates from other TLS enabled listeners for SNI use.

  • Fixed a potential race condition with the certs-ramfs and amphora agent services.

  • Fixes an issue where load balancers with more than one TLS enabled listener, using client authentication and/or backend re-encryption, may load incorrect certificates for the listener.

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the housekeeping service and leave resources such as amphorae in a BOOTING status.

  • Fixed an issue where load balancers would go into ERROR when setting data not visible to providers (e.g. tags).

  • Fixes the ability to filter on the provider flavor capabilities API.

  • Fixed code that configured the CentOS/Red Hat amphora images to use the correct names for the network ‘ifcfg’ files for static routes and routing rules. It was using the wrong name for the routes file, and did not support IPv6 in either file. For more information, see https://storyboard.openstack.org/#!/story/2007051

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the controller worker and leave resources in a PENDING_CREATE/PENDING_UPDATE/PENDING_DELETE provisioning status. If the duration of an Octavia flow is greater than the ‘graceful_shutdown_timeout’ configuration value, stopping the Octavia worker can still interrupt the creation of resources.
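
    Deployments whose flows routinely run long can raise the timeout in octavia.conf; graceful_shutdown_timeout is provided by oslo.service and is assumed here to live in the [DEFAULT] section, and the value shown is illustrative.

      [DEFAULT]
      # Seconds the worker waits for in-flight flows before forcing shutdown.
      graceful_shutdown_timeout = 300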

  • The delay between checks on UDP health monitors was using the incorrect config value (timeout) when it should have used delay.

Other Notes

  • Amphorae that are booting for a specific loadbalancer will now be linked to that loadbalancer immediately upon creation. Previously this would not happen until near the end of the process, leaving a gap during boot in which it was difficult to understand which booting amphora belonged to which loadbalancer. This was especially problematic when attempting to troubleshoot loadbalancers that entered ERROR status due to boot issues.