Stein Series Release Notes

4.1.4-5

Bug Fixes

  • Fixed an issue where SNI container settings were not being applied on listener update API calls.

  • Fixed Octavia API validation on listener update, which previously allowed SNI containers to be set on non-TERMINATED_HTTPS listeners.

  • Fixed an incorrect operating_status for empty UDP pools. A UDP pool without any members is now ONLINE instead of OFFLINE.

  • Added the missing cloud-utils-growpart RPM to Red Hat-based amphora images.

  • Added the missing cronie RPM to Red Hat-based amphora images.

4.1.4

Security Issues

  • If you are using the admin_or_owner-policy.yaml policy override file, you should upgrade your API processes to include the unscoped token fix. The default policies are not affected by this issue.

Bug Fixes

  • Fixed an issue where members added to TLS-enabled pools would go to ERROR provisioning status.

  • Fixed an issue where some columns could not be used for sort keys in API list calls.

  • Fixed the operating status of disabled UDP listeners. The operating status of a disabled UDP listener is now OFFLINE instead of ONLINE, consistent with the behavior of HTTP/HTTPS/TCP/… listeners.

  • Fixed an issue where listener “insert_headers” parameter was accepted for protocols that do not support header insertion.

  • Fixed a potential AttributeError exception at initialization time in the housekeeping service on Python 2, caused by a thread-safety issue when strptime is called for the first time.

  • Fixed the code that configures the network ‘ifcfg’ files for static routes and routing rules on CentOS/Red Hat amphora images. It previously used the wrong name for the routes file and did not support IPv6 in either file. For more information, see https://storyboard.openstack.org/#!/story/2007051

  • Fixed an issue where TLS-enabled pools would fail to provision.

  • Fixes an issue when using the admin_or_owner-policy.yaml policy override file and unscoped tokens.

4.1.2

Upgrade Notes

  • After this upgrade, users will no longer be able to use network resources they cannot see or “show” on load balancers. Operators can revert this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True.

  • Any amphorae running a py3-based image must be recycled, or they will eventually fail on certificate rotation.

  • An amphora image update is recommended to pick up a workaround to an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

Security Issues

  • Previously, if a user knew or could guess the UUID of a network resource, they could use that UUID to create load balancer resources even without permission to view it. Now the user must have permission to see or “show” the resource before it can be used with a load balancer. This is the new default, but operators can disable this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True. This is classified as a “Class C1” security issue because the user would require a valid UUID.
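
    To restore the previous behavior, a minimal octavia.conf sketch follows; the [networking] section placement is an assumption to verify against your release's configuration reference:

      [networking]
      # Allow load balancers to use network resources the requesting user
      # cannot see or "show" (pre-upgrade behavior; off by default).
      allow_invisible_resource_usage = True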

Bug Fixes

  • Fixed an issue where the Octavia Health Manager kept failing over the amphorae of a disabled load balancer.

  • Added listener and pool protocol validation. Listener and pool protocols cannot be combined arbitrarily, so the API now enforces constraints on which protocol combinations are valid.

  • Resolved broken certificate upload on py3-based amphora images. On a housekeeping certificate rotation event, the amphora would clear out its server certificate and return a 500, putting the amphora in ERROR status and breaking further communication. See upgrade notes.

  • Fixed an issue where the amphora image create tool would check out the master amphora-agent code and master upper constraints.

  • Fixed an issue where load balancers with more than one TLS-enabled listener, using client authentication and/or backend re-encryption, could load incorrect certificates for the listener.

  • Fixed a bug that could interrupt resource creation when performing a graceful shutdown of the housekeeping service and leave resources such as amphorae in a BOOTING status.

  • Fixed an issue where load balancers would go into ERROR when setting data not visible to providers (e.g. tags).

  • Workaround an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

  • The delay between checks on UDP health monitors was incorrectly using the timeout configuration value when it should have used delay.

4.1.1

Upgrade Notes

  • A new amphora image is required to fix the potential certs-ramfs race condition.

Security Issues

  • A race condition between the certs-ramfs and the amphora agent may lead to tenant TLS content being stored on the amphora filesystem instead of in the encrypted RAM filesystem.

Bug Fixes

  • Fixed an issue where load balancers with more than one TLS-enabled listener, one or more of them SNI-enabled, could load certificates from other TLS-enabled listeners for SNI use.

  • Fixed a potential race condition with the certs-ramfs and amphora agent services.

  • Fixed the ability to filter on the provider flavor capabilities API.

  • Fixed an issue in the selection of vip-subnet-id on multi-subnet networks. When creating a load balancer with vip-network-id specified, the IP availability of the subnets is now checked to ensure enough IPs are available for the load balancer.

  • Fixed a bug that could interrupt resource creation when performing a graceful shutdown of the controller worker and leave resources in a PENDING_CREATE/PENDING_UPDATE/PENDING_DELETE provisioning status. If the duration of an Octavia flow is greater than the ‘graceful_shutdown_timeout’ configuration value, stopping the Octavia worker can still interrupt the creation of resources.

4.1.0

New Features

  • Octavia now supports oslo_middleware http_proxy_to_wsgi, which will set up the request URL correctly when there is a proxy (for example, a load balancer such as HAProxy) in front of the Octavia API. It is off by default and can be enabled by setting enable_proxy_headers_parsing=True in the [oslo_middleware] section of octavia.conf.
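
    For example, in octavia.conf (taken directly from the note above):

      [oslo_middleware]
      # Parse X-Forwarded-* style headers so the request URL is built
      # correctly when a proxy fronts the Octavia API.
      enable_proxy_headers_parsing = True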

Known Issues

  • When a load balancer with a UDP listener is updated, the listener service is restarted, which causes an interruption of the flow of traffic during a short period of time. This issue is caused by a keepalived bug (https://github.com/acassen/keepalived/issues/1163) that was fixed in keepalived 2.0.14, but this package is not yet provided by distributions.

Upgrade Notes

  • A new amphora image is required to resolve the amphora memory issues when a load balancer has multiple listeners and the amphora image uses haproxy 1.8 or newer.

Security Issues

  • Correctly require two-way certificate authentication to connect to the amphora agent API (CVE-2019-17134).

Bug Fixes

  • Fixed the API handling of None (JSON null) on object update calls. The API will now either clear the value from the field or will reset the value of the field to the API default.

  • Fixed an issue with the health manager reporting an UnboundLocalError if it gets an exception attempting to get a database connection.

  • Fixed a potential DB deadlock in allocate_and_associate that was found in testing.

  • Fixed an issue where, if the base (VRRP) port could not be attached to an amphora instance, the revert would not clean up the port in neutron.

  • Fixed an issue where driver errors were not caught.

  • Fixed an issue that prevented the Octavia housekeeper service from cleaning up load balancer entries in the database.

  • Added support for the monitor_address and monitor_port attributes in UDP members. Previously, monitor_address and monitor_port were ignored, and the address and protocol_port attributes were used as the monitoring address and port.

  • Fixed operating_status for pools and members that use the UDP protocol. operating_status values are now consistent with the values of non-UDP load balancers.

  • Fixed a bug that prevented UDP servers from being restored as members of a pool after removing a health monitor resource.

  • Fixed an issue with load balancers that have multiple listeners when using an amphora image that contains HAProxy 1.8 or newer. An updated amphora image is required to apply this fix.

  • The passphrase for the config option ‘server_certs_key_passphrase’ is used as a Fernet key in Octavia and thus must be 32 base64(url)-compatible characters long. Octavia will now validate the passphrase length and format.

  • Adding a member with different IP protocol version than the VIP IP protocol version in a UDP load balancer caused a crash in the amphora. A validation step in the amphora driver now prevents mixing IP protocol versions in UDP load balancers.

4.0.1

Bug Fixes

  • Fixed duplicated IPv6 addresses in Active/Standby mode in CentOS amphorae.

  • Fixed an issue where the listener API would accept null/None values for fields that must have a valid value, such as connection-limit. Now, when a PUT call is made to one of these fields with null as the value, the API will reset the field to its default value.

  • Fixed a bug that prevented spare amphora rotation.

4.0.0

Prelude

For the OpenStack Stein release, the Octavia team is excited to announce support for: Octavia flavors, TLS client authentication, backend re-encryption, and object tags.

  • Octavia flavors allow an operator to define load balancer “flavors”, such as “active-standby” or “single” with the amphora driver, that configure the load balancer topology. The amphora driver also supports specifying the nova compute flavor to use for the load balancer amphorae.

  • TLS client authentication allows the listener to request a client certificate from users connecting to the load balancer. This certificate can then be checked against a CA certificate and optionally a certificate revocation list. New HTTP header insertions allow passing client certificate information to the backend members, while new L7 rules allow you to take custom actions based on the content of the client certificate.

  • Backend re-encryption allows users to configure pools to initiate TLS connections to the backend member servers. This enables load balancers to authenticate and encrypt connections from the load balancer to the backend member server.

  • Object tags allow users to assign a list of strings to the load balancer objects that can then be used for advanced API list filtering.

New Features

  • You can now specify a certificate authority certificate reference on listeners for use with TLS client authentication.

  • You can now provide a certificate revocation list reference for listeners using TLS client authentication.

  • When using TLS client authentication on TERMINATED_HTTPS listeners, you can now insert the following headers for backend members: ‘X-SSL-Client-Verify’, ‘X-SSL-Client-Has-Cert’, ‘X-SSL-Client-DN’, ‘X-SSL-Client-CN’, ‘X-SSL-Issuer’, ‘X-SSL-Client-SHA1’, ‘X-SSL-Client-Not-Before’, ‘X-SSL-Client-Not-After’.

  • You can now enable TLS client authentication on listeners.

  • Octavia now has an administrative API that updates the amphora agent configuration on running amphora.

  • You can now specify a ca_tls_container_ref and crl_container_ref on pools for validating backend pool members using TLS.

  • You can now specify a tls_container_ref on pools for TLS client authentication to pool members.

  • You can now enable TLS backend re-encryption for connections to member servers by enabling tls_enabled option on pools.

  • Adds the ability to define L7 rules based on TLS client authentication information. The new L7 rule types are “L7RULE_TYPE_SSL_CONN_HAS_CERT”, “L7RULE_TYPE_SSL_VERIFY_RESULT”, and “L7RULE_TYPE_SSL_DN_FIELD”; see the sketch below.
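
    A hypothetical client-side sketch using the openstack client; it assumes the rule type is passed without the “L7RULE_TYPE_” prefix and that SSL_CONN_HAS_CERT takes a boolean value, so verify the flags against your client version:

      $ openstack loadbalancer l7rule create \
          --type SSL_CONN_HAS_CERT \
          --compare-type EQUAL_TO \
          --value True \
          <l7policy-id>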

  • Octavia now has flavors support, which allows the operator to define named, custom configurations that users can select from when creating a load balancer.

  • The Stein release of Octavia introduces the octavia-lib python module. This library enables provider drivers to integrate more easily with the Octavia API by providing a shared set of coding objects and interfaces.

  • The Amphora API can now return the compute_flavor field, which is the ID of the compute instance flavor used to boot the amphora.

  • You can now filter API queries by the object tag.

  • Operators can now use the ‘compute_flavor’ Octavia flavor capability when using the amphora provider driver. This allows custom compute driver flavors to be used per load balancer; see the example below. If this is not defined in an Octavia flavor, the amp_flavor_id Octavia configuration file setting will continue to be used.
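
    A sketch of the workflow with the openstack client; the profile and flavor names and the nova flavor ID are placeholders, and flags should be verified against your client version:

      $ openstack loadbalancer flavorprofile create \
          --name amphora-large-profile \
          --provider amphora \
          --flavor-data '{"compute_flavor": "<nova-flavor-id>"}'
      $ openstack loadbalancer flavor create \
          --name large \
          --flavorprofile amphora-large-profile \
          --enable
      $ openstack loadbalancer create \
          --name lb1 --vip-subnet-id <subnet-id> --flavor large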

  • Added the new tool octavia-status upgrade check. This framework allows adding various checks which can be run before an Octavia upgrade to ensure the upgrade can be performed safely.
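
    For example, run it against the new code before upgrading; per the standard oslo.upgradecheck behavior, a non-zero exit status indicates warnings or failures:

      $ octavia-status upgrade check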

  • The Octavia API now supports Cloud Auditing Data Federation (CADF) auditing.

  • Added tags property for Octavia resources. It includes:

    • Load balancer

    • Listener

    • Member

    • Pool

    • L7rule

    • L7policy

    • Health Monitor

  • Default listener timeouts can now be set in the configuration section haproxy_amphora:

    • timeout_client_data: Frontend client inactivity timeout

    • timeout_member_connect: Backend member connection timeout

    • timeout_member_data: Backend member inactivity timeout

    • timeout_tcp_inspect: Time to wait for TCP packets for content inspection

    The value for all of these options is expected to be in milliseconds, as in the example below.
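
    For example, in octavia.conf (the values shown are the documented defaults):

      [haproxy_amphora]
      # All timeout values are in milliseconds.
      timeout_client_data = 50000
      timeout_member_connect = 5000
      timeout_member_data = 50000
      timeout_tcp_inspect = 0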

  • Load balancer creation is sped up by allocating AAP ports in parallel for load balancers with more than one amphora. As a side effect, the AAP driver was simplified and is thus easier to maintain.

  • Adds an administrator API to access per-amphora statistics.

  • Extended the Octavia health monitor API with two new fields, http_version and domain_name, to support HTTP health checks that inject the domain name into the HTTP Host header.
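
    A hypothetical CLI sketch; the --http-version and --domain-name flag names are assumed from the matching python-octaviaclient release and should be verified:

      $ openstack loadbalancer healthmonitor create \
          --type HTTP \
          --delay 5 --timeout 5 --max-retries 3 \
          --url-path /healthcheck \
          --http-version 1.1 \
          --domain-name www.example.com \
          <pool-id>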

  • The Octavia L7Policy API now accepts a new option, redirect_http_code, for the REDIRECT_URL and REDIRECT_PREFIX L7Policy actions. Matching HTTP requests to the associated listener will return the configured HTTP response code.

  • Added support for the REDIRECT_PREFIX action for L7Policy; see the sketch below.
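
    A hypothetical CLI sketch combining REDIRECT_PREFIX with redirect_http_code; flag names follow python-octaviaclient conventions and should be verified:

      $ openstack loadbalancer l7policy create \
          --action REDIRECT_PREFIX \
          --redirect-prefix https://www.example.com \
          --redirect-http-code 301 \
          <listener-id>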

  • Added support for remote debugging with PyDev. Please refer to the Contributor documentation section for more details.

Upgrade Notes

  • When the amphora agent configuration update API is called on an amphora running a version of the amphora agent that does not support configuration updates, an ERROR log message will be posted to the controller log file indicating that the amphora does not support agent configuration updates. In this case, the amphora image should be updated to a newer version.

  • The Stein release of Octavia adds the driver-agent controller process. This process is deployed along with the Octavia API process and uses unix domain sockets for communication between the provider drivers, using octavia-lib, and the driver-agent. When upgrading to Stein, operators should make sure that the /var/run/octavia directory is available for the driver-agent with the appropriate ownership and permissions for the driver-agent and API processes to access it. The operator may need to make sure the driver-agent process starts after installation. For example, a systemd service may need to be created and enabled for it.

  • We have changed the [haproxy_amphora] connection_max_retries and build_active_retries default values from 300 to 120. This means load balancer builds will wait for ten minutes instead of twenty-five minutes for nova to boot the virtual machine. We feel these are more reasonable default values for most production deployments and provide a better user experience. If you are running nova in a nested virtualization environment, meaning nova is booting VMs inside another VM, and you do not have nested virtualization enabled in the bottom hypervisor, you may need to set these values back up to 300.
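
    For nested virtualization environments, a sketch of restoring the previous values in octavia.conf:

      [haproxy_amphora]
      # Wait up to ~25 minutes for nova to boot amphora VMs
      # (the new defaults are 120, i.e. ~10 minutes).
      connection_max_retries = 300
      build_active_retries = 300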

  • To enable UDP listener monitoring when no pool is attached, the amphora image needs to be updated and load balancers with UDP listeners need to be failed over to the new image.

  • Operators can now use the new CLI tool octavia-status upgrade check to verify whether an Octavia deployment can be safely upgraded from release N-1 to N.

  • To fix IPv6 VIP addresses, you must run the “octavia-db-manage upgrade head” migration script.

  • To fix the issue with active/standby load balancers or single topology load balancers with members on the VIP subnet, you need to update the amphora image.

  • To resolve the IPv6 VIP issues on active/standby load balancers you need to build a new amphora image.

  • The following configuration settings have reached the end of their deprecation period and have been removed from the [DEFAULT] section of the configuration. These will only be available in the [api_settings] section going forward; see the example following this list.

    • [DEFAULT] bind_host

    • [DEFAULT] bind_port

    • [DEFAULT] auth_strategy

    • [DEFAULT] api_handler
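
    For example, settings that previously lived in [DEFAULT] must now be placed in [api_settings]; the values shown are illustrative:

      [api_settings]
      bind_host = 0.0.0.0
      bind_port = 9876
      auth_strategy = keystone
      # api_handler likewise moves to this section if you still set it.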

Deprecation Notes

  • The following configuration settings have reached the end of their deprecation period and have been removed from the [DEFAULT] section of the configuration. These will only be available in the [api_settings] section going forward.

    • [DEFAULT] bind_host

    • [DEFAULT] bind_port

    • [DEFAULT] auth_strategy

    • [DEFAULT] api_handler

  • Finally removed the user_group option completely, as it was deprecated in Pike.

  • The status_update_threads config option for the health manager is deprecated because it is replaced by health_update_threads and stats_update_threads.

Critical Issues

  • Fixed a bug where active/standby load balancers and single topology load balancers with members on the VIP subnet may fail. An updated image is required to fix this bug.

Security Issues

  • Note that the amphora provider currently only supports checking revocation against the provided crl-file. Remote revocation lists and/or OCSP will not be used by the amphora provider.

  • As a follow-up to the fix that resolved CVE-2018-16856, Octavia will now encrypt certificates and keys used for secure communication with amphorae in its internal workflows. Octavia previously excluded debug-level log prints for specific tasks and flows that were explicitly specified by name, a method that is susceptible to code changes.

  • Adds a configuration option, “reserved_ips”, that allows the operator to block addresses from being used as load balancer members. The default setting blocks the nova metadata service address.
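
    For example, in octavia.conf; the [networking] section placement is an assumption to verify, and the address shown is the nova metadata service default from the note:

      [networking]
      # Addresses that may never be used for load balancer members.
      reserved_ips = 169.254.169.254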

  • Fixed debug-level logging of amphora certificates for flows such as ‘octavia-create-amp-for-lb-subflow-octavia-generate-serverpem’ (triggered by load balancer failover) and ‘octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration’.

Bug Fixes

  • Fixed an issue creating members on networks with IPv6 subnets.

  • Fixed a performance regression in the Octavia v2 API when using the “list” APIs.

  • Fully expanded IPv6 VIP addresses would fail to store with “Data too long for column ‘ip_address’ at row 1”. This patch includes a database migration to fix this column.

  • Fixed creating a fully populated load balancer with an L7 policy of a type other than REDIRECT_POOL together with the default_pool field.

  • Fixed an issue where Octavia being unable to reach the database (for example, when all database instances are down) would bring down all running load balancers. The Health Manager is now more resilient to DB outages.

  • Fixed a performance issue where the Housekeeping service could significantly and incrementally utilize CPU as more amphorae and load balancers are created and/or marked as DELETED.

  • Fixed an issue where load balancers in ERROR provisioning status could not be failed over.

  • Fixed a bug that caused an excessive number of RabbitMQ connections to be opened.

  • Fixed an issue that prevented spare amphorae from being created.

  • Fixed an error when plugging the VIP on CentOS-based amphorae.

  • Fixed an issue where trying to set a QoS policy on a VIP while the QoS extension is disabled would bring the load balancer to ERROR. Should the QoS extension be disabled, the API will now return HTTP 400 to the user.

  • Fixed an issue where setting a QoS policy on the VIP would bring the load balancer to ERROR when the QoS extension is enabled.

  • Fixes issues using IPv6 VIP addresses with load balancers configured for active/standby topology. This fix requires a new amphora image to be built.

  • Removed an unnecessary listener delete from the non-cascade load balancer delete flow, speeding up load balancer deletion.

  • Octavia will no longer automatically revoke access to secrets whenever load balancing resources no longer require access to them. This may be added in the future.

  • Added new parameters to specify the number of threads for updating amphora health and stats.

  • Octavia will now automatically delete zombie amphorae in nova when they are detected. Zombie amphorae are amphorae that report health messages but appear DELETED in Octavia’s database.

Other Notes

  • We have changed the [haproxy_amphora] connection_max_retries and build_active_retries default values from 300 to 120. This means load balancer builds will wait for ten minutes instead of twenty-five minutes for nova to boot the virtual machine. We feel these are more reasonable default values for most production deployments and provide a better user experience. If you are running nova in a nested virtualization environment, meaning nova is booting VMs inside another VM, and you do not have nested virtualization enabled in the bottom hypervisor, you may need to set these values back up to 300.

  • Added a new option named server_certs_key_passphrase under the certificates section. The default value gets copied from an environment variable named TLS_PASS_AMPS_DEFAULT. If TLS_PASS_AMPS_DEFAULT is not set and the operator does not supply another value directly, ‘insecure-key-do-not-use-this-key’ will be used.
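
    A sketch of generating a compliant value (24 random bytes base64url-encode to exactly 32 characters) and setting it; the generation command is one possible approach, not a prescribed one:

      $ python -c "import os, base64; print(base64.urlsafe_b64encode(os.urandom(24)).decode())"

      [certificates]
      server_certs_key_passphrase = <generated-32-character-value>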

  • Processing zombie amphorae is already expensive, and this adds another step that could increase the load on the Octavia Health Manager, especially when the Nova API is slow.