Stein Series Release Notes

4.0.1-7

Bug Fixes

  • Fixed a potential database deadlock in allocate_and_associate that was found in testing.

4.0.1

Bug Fixes

  • Fixed duplicated IPv6 addresses in active/standby mode on CentOS amphorae.

  • Fixed an issue where the listener API would accept null/None values for fields that must have a valid value, such as connection-limit. Now, when a PUT call sets one of these fields to null, the API resets the field to its default value.
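
    For illustration, a minimal sketch of such a PUT call using the Python requests library (the endpoint URL, token, and listener ID are placeholders):

      import requests

      resp = requests.put(
          "https://lb.example.com/load-balancer/v2.0/lbaas/listeners/LISTENER_ID",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},  # assumes a valid token
          json={"listener": {"connection_limit": None}},  # null resets to default
      )
      resp.raise_for_status()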

  • Fixed a bug that prevented spare amphora rotation.

4.0.0

Prelude

For the OpenStack Stein release, the Octavia team is excited to announce support for: Octavia flavors, TLS client authentication, backend re-encryption, and object tags.

  • Octavia flavors allow an operator to define named “flavors” of load balancers, such as “active-standby” or “single”, that configure the load balancer topology when using the amphora driver. The amphora driver also supports specifying the nova compute flavor to use for the load balancer amphorae.

  • TLS client authentication allows the listener to request a client certificate from users connecting to the load balancer. This certificate can then be checked against a CA certificate and optionally a certificate revocation list. New HTTP header insertions allow passing client certificate information to the backend members, while new L7 rules allow you to take custom actions based on the content of the client certificate.

  • Backend re-encryption allows users to configure pools to initiate TLS connections to the backend member servers. This enables load balancers to authenticate and encrypt connections from the load balancer to the backend member server.

  • Object tags allow users to assign a list of strings to the load balancer objects that can then be used for advanced API list filtering.

New Features

  • You can now specify a certificate authority certificate reference on listeners for use with TLS client authentication.

  • You can now provide a certificate revocation list reference for listeners using TLS client authentication.

  • When using TLS client authentication on TERMINATED_HTTPS listeners, you can now insert the following headers for backend members: ‘X-SSL-Client-Verify’, ‘X-SSL-Client-Has-Cert’, ‘X-SSL-Client-DN’, ‘X-SSL-Client-CN’, ‘X-SSL-Issuer’, ‘X-SSL-Client-SHA1’, ‘X-SSL-Client-Not-Before’, ‘X-SSL-Client-Not-After’.

  • You can now enable TLS client authentication on listeners.
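
    Combining the client authentication notes above, a minimal sketch that creates such a listener using the Python requests library (the endpoint, token, IDs, and Barbican container references are placeholders):

      import requests

      listener = {
          "listener": {
              "name": "client-auth-listener",
              "loadbalancer_id": "LB_ID",
              "protocol": "TERMINATED_HTTPS",
              "protocol_port": 443,
              "default_tls_container_ref": "SERVER_CERT_REF",
              "client_authentication": "MANDATORY",          # or OPTIONAL
              "client_ca_tls_container_ref": "CA_CERT_REF",  # CA to verify clients
              "client_crl_container_ref": "CRL_REF",         # optional revocation list
          }
      }
      resp = requests.post(
          "https://lb.example.com/load-balancer/v2.0/lbaas/listeners",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},
          json=listener,
      )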

  • Octavia now has an administrative API that updates the amphora agent configuration on running amphorae.
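
    A minimal sketch of calling this API with the Python requests library (the endpoint, token, and amphora ID are placeholders; the path shown is illustrative of the amphora admin API):

      import requests

      # Push the current agent configuration to a running amphora.
      resp = requests.put(
          "https://lb.example.com/load-balancer/v2.0/octavia/amphorae/AMPHORA_ID/config",
          headers={"X-Auth-Token": "ADMIN_TOKEN"},  # requires an admin token
      )
      resp.raise_for_status()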

  • You can now specify a ca_tls_container_ref and crl_container_ref on pools for validating backend pool members using TLS.

  • You can now specify a tls_container_ref on pools for TLS client authentication to pool members.

  • You can now enable TLS backend re-encryption for connections to member servers by enabling the tls_enabled option on pools.
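
    Combining the three pool TLS notes above, a minimal sketch using the Python requests library (the endpoint, token, and certificate references are placeholders):

      import requests

      pool = {
          "pool": {
              "name": "re-encrypt-pool",
              "loadbalancer_id": "LB_ID",
              "protocol": "HTTP",
              "lb_algorithm": "ROUND_ROBIN",
              "tls_enabled": True,                     # re-encrypt traffic to members
              "tls_container_ref": "CLIENT_CERT_REF",  # client cert presented to members
              "ca_tls_container_ref": "CA_CERT_REF",   # CA used to validate members
              "crl_container_ref": "CRL_REF",          # optional revocation list
          }
      }
      resp = requests.post(
          "https://lb.example.com/load-balancer/v2.0/lbaas/pools",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},
          json=pool,
      )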

  • Adds the ability to define L7 rules based on TLS client authentication information. The new L7 rules are: “L7RULE_TYPE_SSL_CONN_HAS_CERT”, “L7RULE_TYPE_VERIFY_RESULT”, and “L7RULE_TYPE_DN_FIELD”.
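
    For example, a minimal sketch that adds a DN field rule to an existing L7 policy using the Python requests library (the endpoint, token, policy ID, and match value are placeholders; the SSL_DN_FIELD type string mirrors the constant above):

      import requests

      rule = {
          "rule": {
              "type": "SSL_DN_FIELD",      # match on a client certificate DN field
              "key": "CN",                 # which DN field to inspect
              "compare_type": "EQUAL_TO",
              "value": "trusted-client.example.com",
          }
      }
      resp = requests.post(
          "https://lb.example.com/load-balancer/v2.0/lbaas/l7policies/POLICY_ID/rules",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},
          json=rule,
      )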

  • Octavia now has flavors support, which allows the operator to define named, custom configurations that users can select from when creating a load balancer.

  • The Stein release of Octavia introduces the octavia-lib Python library. This library enables provider drivers to integrate more easily with the Octavia API by providing a shared set of coding objects and interfaces.

  • The Amphora API can now return the compute_flavor field, which is the ID of the compute instance flavor used to boot the amphora.

  • You can now filter API queries by object tags.
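
    For example, a minimal sketch using the Python requests library (the endpoint and token are placeholders; the tags query parameter follows the Octavia v2 API):

      import requests

      resp = requests.get(
          "https://lb.example.com/load-balancer/v2.0/lbaas/loadbalancers",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},
          params={"tags": "production"},  # only objects carrying this tag
      )
      for lb in resp.json()["loadbalancers"]:
          print(lb["name"], lb["tags"])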

  • Operators can now use the ‘compute_flavor’ Octavia flavor capability when using the amphora provider driver. This allows custom compute driver flavors to be used per load balancer. If this is not defined in an Octavia flavor, the amp_flavor_id Octavia configuration file setting will continue to be used.
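
    A minimal sketch that defines such a flavor with the Python requests library (the endpoint, token, nova flavor ID, and names are placeholders; the capability names follow the amphora driver flavor notes above):

      import requests

      base = "https://lb.example.com/load-balancer/v2.0/lbaas"
      hdrs = {"X-Auth-Token": "ADMIN_TOKEN"}

      # A flavor profile ties a provider to a JSON string of capabilities.
      profile = {"flavorprofile": {
          "name": "small-ha-profile",
          "provider_name": "amphora",
          "flavor_data": '{"loadbalancer_topology": "ACTIVE_STANDBY", '
                         '"compute_flavor": "NOVA_FLAVOR_ID"}',
      }}
      fp = requests.post(f"{base}/flavorprofiles", headers=hdrs, json=profile).json()

      # The user-selectable flavor references the profile.
      flavor = {"flavor": {
          "name": "small-ha",
          "flavor_profile_id": fp["flavorprofile"]["id"],
          "enabled": True,
      }}
      requests.post(f"{base}/flavors", headers=hdrs, json=flavor)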

  • Added the new octavia-status upgrade check tool. This framework allows adding various checks that can be run before an Octavia upgrade to ensure the upgrade can be performed safely.

  • The Octavia API now supports Cloud Auditing Data Federation (CADF) auditing.

  • Added a tags property to the following Octavia resources:

    • Load balancer

    • Listener

    • Member

    • Pool

    • L7 rule

    • L7 policy

    • Health Monitor

  • Default listener timeouts can now be set in the [haproxy_amphora] configuration section:

    • timeout_client_data: Frontend client inactivity timeout

    • timeout_member_connect: Backend member connection timeout

    • timeout_member_data: Backend member inactivity timeout

    • timeout_tcp_inspect: Time to wait for TCP packets for content inspection

    The value for all of these options is expected to be in milliseconds.
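
    These configuration options set the defaults; the listener API exposes fields with the same names, so a per-listener override can be sketched with the Python requests library (the endpoint, token, listener ID, and values are placeholders):

      import requests

      resp = requests.put(
          "https://lb.example.com/load-balancer/v2.0/lbaas/listeners/LISTENER_ID",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},
          json={"listener": {
              "timeout_client_data": 50000,    # 50 s frontend inactivity
              "timeout_member_connect": 5000,  # 5 s backend connect
              "timeout_member_data": 50000,    # 50 s backend inactivity
              "timeout_tcp_inspect": 0,        # no wait for TCP content inspection
          }},
      )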

  • Load balancer creation is now faster: allowed address pair (AAP) ports are allocated in parallel for load balancers with more than one amphora. As a side effect, the AAP driver was simplified, making it easier to maintain.

  • Adds an administrator API to access per-amphora statistics.
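
    A minimal sketch of querying these statistics with the Python requests library (the endpoint, token, and amphora ID are placeholders; the path shown is illustrative of the amphora admin API):

      import requests

      resp = requests.get(
          "https://lb.example.com/load-balancer/v2.0/octavia/amphorae/AMPHORA_ID/stats",
          headers={"X-Auth-Token": "ADMIN_TOKEN"},  # requires an admin token
      )
      print(resp.json())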

  • Extended the Octavia health monitor API with two new fields, http_version and domain_name, to support HTTP health checks that inject the domain name into the HTTP Host header.
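
    For example, a minimal sketch creating such a health monitor with the Python requests library (the endpoint, token, and pool ID are placeholders):

      import requests

      hm = {"healthmonitor": {
          "pool_id": "POOL_ID",
          "type": "HTTP",
          "delay": 5,
          "timeout": 3,
          "max_retries": 3,
          "http_version": 1.1,               # probe with HTTP/1.1
          "domain_name": "www.example.com",  # injected into the Host header
      }}
      resp = requests.post(
          "https://lb.example.com/load-balancer/v2.0/lbaas/healthmonitors",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},
          json=hm,
      )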

  • The Octavia L7Policy API now accepts a new redirect_http_code option for the REDIRECT_URL and REDIRECT_PREFIX L7 policy actions. Matching HTTP requests to the associated listener will then return the configured HTTP response code.

  • Added support for the REDIRECT_PREFIX action for L7 policies.
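
    Combining the two L7 policy notes above, a minimal sketch with the Python requests library (the endpoint, token, and listener ID are placeholders):

      import requests

      policy = {"l7policy": {
          "listener_id": "LISTENER_ID",
          "action": "REDIRECT_PREFIX",
          "redirect_prefix": "https://www.example.com",
          "redirect_http_code": 301,  # permanent redirect
      }}
      resp = requests.post(
          "https://lb.example.com/load-balancer/v2.0/lbaas/l7policies",
          headers={"X-Auth-Token": "KEYSTONE_TOKEN"},
          json=policy,
      )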

  • Added support for remote debugging with PyDev. Refer to the Contributor documentation for more details.

Upgrade Notes

  • When the amphora agent configuration update API is called on an amphora running a version of the amphora agent that does not support configuration updates, an ERROR log message will be posted to the controller log file indicating that the amphora does not support agent configuration updates. In this case, the amphora image should be updated to a newer version.

  • The Stein release of Octavia adds the driver-agent controller process. This process is deployed along with the Octavia API process and uses unix domain sockets for communication between the provider drivers, using octavia-lib, and the driver-agent. When upgrading to Stein, operators should make sure that the /var/run/octavia directory is available for the driver-agent, with the appropriate ownership and permissions for the driver-agent and API processes to access it. The operator may also need to make sure the driver-agent process starts after installation. For example, a systemd service may need to be created and enabled for it.

  • We have changed the [haproxy_amphora] connection_max_retries and build_active_retries default values from 300 to 120. This means load balancer builds will wait for ten minutes instead of twenty-five minutes for nova to boot the virtual machine. We feel these are more reasonable default values for most production deployments and provide a better user experience. If you are running nova in a nested virtualization environment, meaning nova is booting VMs inside another VM, and you do not have nested virtualization enabled in the bottom hypervisor, you may need to set these values back up to 300.

  • To enable UDP listener monitoring when no pool is attached, the amphora image needs to be updated and load balancers with UDP listeners need to be failed over to the new image.

  • Operators can now use the new octavia-status upgrade check CLI tool to check whether an Octavia deployment can be safely upgraded from release N-1 to N.

  • To fix IPv6 VIP addresses, you must run the “octavia-db-manage upgrade head” migration script.

  • To fix the issue with active/standby load balancers or single topology load balancers with members on the VIP subnet, you need to update the amphora image.

  • To resolve the IPv6 VIP issues on active/standby load balancers you need to build a new amphora image.

  • The following configuration settings have reached the end of their deprecation period and have now been removed from the [DEFAULT] section of the configuration. They will only be available in the [api_settings] section going forward.

    • [DEFAULT] bind_host

    • [DEFAULT] bind_port

    • [DEFAULT] auth_strategy

    • [DEFAULT] api_handler

Deprecation Notes

  • The user_group option has been completely removed, as it was deprecated in Pike.

  • The status_update_threads configuration option for the health manager is deprecated, as it has been replaced by the health_update_threads and stats_update_threads options.

Critical Issues

  • Fixed a bug where active/standby load balancers and single topology load balancers with members on the VIP subnet may fail. An updated image is required to fix this bug.

Security Issues

  • Note that the amphora provider currently only supports revocation checks against the provided CRL file. Remote revocation lists and/or OCSP will not be used by the amphora provider.

  • As a follow-up to the fix that resolved CVE-2018-16856, Octavia will now encrypt certificates and keys used for secure communication with amphorae in its internal workflows. Octavia previously excluded debug-level log prints for specific tasks and flows that were explicitly specified by name, an approach that is susceptible to code changes.

  • Adds a configuration option, “reserved_ips”, that allows the operator to block addresses from being used as load balancer members. The default setting blocks the nova metadata service address.

  • Fixed debug-level logging of amphora certificates for flows such as ‘octavia-create-amp-for-lb-subflow-octavia-generate-serverpem’ (triggered by load balancer failover) and ‘octavia-create-amp-for-lb-subflow-octavia-update-cert-expiration’.

Bug Fixes

  • Fixed an issue creating members on networks with IPv6 subnets.

  • Fixed a performance regression in the Octavia v2 API when using the “list” APIs.

  • Fully expanded IPv6 VIP addresses would fail to store with “Data too long for column ‘ip_address’ at row 1”. This patch includes a database migration to fix this column.

  • Fixed creating a fully populated load balancer with an L7 policy that is not of the REDIRECT_POOL type together with the default_pool field.

  • Fixed an issue where Octavia being unable to reach the database (for example, when all database instances are down) would bring down all running load balancers. The health manager is now more resilient to database outages.

  • Fixed a performance issue where the Housekeeping service could significantly and incrementally utilize CPU as more amphorae and load balancers are created and/or marked as DELETED.

  • Fixed load balancers that could not be failed over while in the ERROR provisioning status.

  • Fixed a bug that caused an excessive number of RabbitMQ connections to be opened.

  • Fixed an issue that prevented spare amphorae from being created.

  • Fixed an error when plugging the VIP on CentOS-based amphorae.

  • Fixed an issue where trying to set a QoS policy on a VIP while the QoS extension is disabled would bring the load balancer to ERROR. Should the QoS extension be disabled, the API will now return HTTP 400 to the user.

  • Fixed an issue where setting a QoS policy on the VIP would bring the load balancer to ERROR when the QoS extension is enabled.

  • Fixes issues using IPv6 VIP addresses with load balancers configured for active/standby topology. This fix requires a new amphora image to be built.

  • Removed an unnecessary listener delete from the non-cascade load balancer delete flow, speeding up load balancer deletes.

  • Octavia will no longer automatically revoke access to secrets whenever load balancing resources no longer require access to them. This may be added in the future.

  • Added new parameters, health_update_threads and stats_update_threads, to specify the number of threads for updating amphora health and stats.

  • Octavia will now automatically delete zombie amphorae in nova when they are detected. Zombie amphorae are amphorae which report health messages but appear DELETED in Octavia’s database.

Other Notes

  • Added a new option named server_certs_key_passphrase under the certificates section. The default value is taken from an environment variable named TLS_PASS_AMPS_DEFAULT. If TLS_PASS_AMPS_DEFAULT is not set and the operator did not set any other value directly, ‘insecure-key-do-not-use-this-key’ will be used.

  • Processing zombie amphorae is already expensive, and this adds another step that could increase the load on the Octavia health manager, especially when the Nova API is slow.