Appendix H: Octavia LBaaS

Overview

As of the 18.11 charm release, with OpenStack Rocky and later, OpenStack Octavia can be deployed to provide load-balancing services as part of an OpenStack cloud. This service supersedes the LBaaS v2 service provided directly through Neutron in earlier releases; when Octavia is deployed, a proxy service is configured to forward LBaaS v2 API calls directly to Octavia.

Unlike the LBaaS v2 support in the OpenStack Charms, which placed haproxy instances on neutron-gateway units, Octavia provisions cloud instances (Amphorae) to provide LBaaS services.

Warning

Octavia is only supported by the OpenStack Charms for OpenStack Rocky or later.

Warning

There is no automatic migration path for Neutron LBaaS haproxy-on-host configurations (as deployed under the existing support) to Octavia Amphora configurations. New load balancers must be created in Octavia before the octavia charm is related to the existing neutron-api charm. Floating IPs can then be moved to the new load balancers prior to deletion of the existing LBaaS based balancers, as sketched below.
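
To illustrate, moving a floating IP from an old LBaaS VIP port to a new Octavia VIP port might look like the following sketch; the floating IP address and port ID are hypothetical placeholders:

# Detach the floating IP from the old LBaaS v2 VIP port
openstack floating ip unset --port 203.0.113.10
# Attach it to the VIP port of the new Octavia load balancer
openstack floating ip set --port <octavia-vip-port-id> 203.0.113.10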

Deployment

Octavia makes use of OpenStack Barbican for storage of certificates for TLS termination on load balancers; Barbican makes use of Vault for secure storage of this data. Follow the instructions for deployment and configuration of Vault in Appendix C and then deploy Barbican:

juju deploy barbican --config openstack-origin=cloud:bionic-rocky
juju deploy barbican-vault
juju add-relation barbican mysql
juju add-relation barbican rabbitmq-server
juju add-relation barbican keystone
juju add-relation barbican barbican-vault
juju add-relation barbican-vault vault
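
To sanity-check the Barbican deployment you can store and list a test secret; this assumes the python-barbicanclient plugin for the OpenStack CLI is installed on your client:

openstack secret store --name test-secret --payload not-really-secret
openstack secret list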

Octavia can then be deployed:

juju deploy octavia --config openstack-origin=cloud:bionic-rocky
juju add-relation octavia rabbitmq-server
juju add-relation octavia mysql
juju add-relation octavia keystone
juju add-relation octavia neutron-openvswitch
juju add-relation octavia neutron-api

juju deploy octavia-dashboard
juju add-relation octavia-dashboard openstack-dashboard

Note

Octavia uses a Neutron network for communication between Octavia control plane services and Octavia Amphorae; units will deploy into a ‘blocked’ state until the configuration steps are executed.
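
You can watch for this with:

juju status octavia

The units should transition to 'active' once the configuration steps in the following sections are complete.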

Configuration

Generate Certificates

Octavia uses client certificates for authentication and to secure communication between Amphorae (load balancers) and the Octavia control plane. For the initial version of the Octavia charm, these must be generated by the operator and provided to the charm as configuration.

The script below generates example certificates and keys with a 365-day expiry period:

mkdir -p demoCA/newcerts
touch demoCA/index.txt
touch demoCA/index.txt.attr
openssl genrsa -passout pass:foobar -des3 -out issuing_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes -key issuing_ca_key.pem \
    -config /etc/ssl/openssl.cnf \
    -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
    -days 365 \
    -out issuing_ca.pem

openssl genrsa -passout pass:foobar -des3 -out controller_ca_key.pem 2048
openssl req -x509 -passin pass:foobar -new -nodes \
    -key controller_ca_key.pem \
    -config /etc/ssl/openssl.cnf \
    -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
    -days 365 \
    -out controller_ca.pem
openssl req \
    -newkey rsa:2048 -nodes -keyout controller_key.pem \
    -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
    -out controller.csr
openssl ca -passin pass:foobar -config /etc/ssl/openssl.cnf \
    -cert controller_ca.pem -keyfile controller_ca_key.pem \
    -create_serial -batch \
    -in controller.csr -days 365 -out controller_cert.pem
cat controller_cert.pem controller_key.pem > controller_cert_bundle.pem
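
As an optional sanity check, you can verify that the controller certificate was issued by the controller CA:

openssl verify -CAfile controller_ca.pem controller_cert.pem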

The generated certificates and keys must then be provided to the octavia charm:

juju config octavia \
    lb-mgmt-issuing-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-issuing-ca-private-key="$(base64 controller_ca_key.pem)" \
    lb-mgmt-issuing-ca-key-passphrase=foobar \
    lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
    lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
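
To confirm a value was stored intact, you can decode it and inspect it with openssl (assuming GNU base64 on the client):

juju config octavia lb-mgmt-issuing-cacert | base64 -d | \
    openssl x509 -noout -subject -enddate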

Note

Future versions of the charm may automatically generate the internal Certification Authority required to operate Octavia.

Resource Configuration

The charm will automatically create and maintain the resources required for operation of the Octavia service when the configure-resources action is run on the lead octavia unit:

juju run-action --wait octavia/0 configure-resources

This action must be run before Octavia is fully operational.
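
After the action completes, the resources the charm created can be inspected as an admin user by filtering on the tag, for example:

openstack network list --tags charm-octavia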

Access to the Octavia load-balancer API is guarded by policies, and end users must hold specific roles to gain access to the service. The charm asks Keystone to pre-create these roles on deployment, but you must assign them to your end users as you see fit. Refer to Octavia Policies for details.
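
For example, granting the Octavia member role could look like the following; the user and project names are hypothetical:

openstack role add --user alice --project demo load-balancer_member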

The charm also allows the operator to pre-configure these resources to support full custom configuration of the management network for Octavia. If you want to manage these resources yourself, you must set the create-mgmt-network configuration option to false.

Network resources for use by Octavia must be tagged using Neutron resource tags (typically by passing a '--tag' CLI parameter when creating resources - see the OpenStack CLI for more details) using the following schema:

Resource Type               Tag                     Description
Neutron Network             charm-octavia           Management network
Neutron Subnet              charm-octavia           Management network subnet
Neutron Router              charm-octavia           (Optional) Router for IPv6 RA or north/south mgmt traffic
Amphora Security Group      charm-octavia           Security group for Amphora ports
Controller Security Group   charm-octavia-health    Security group for Controller ports

Execution of the configure-resources action will detect the pre-configured network resources in Neutron using tags and configure the Octavia service as appropriate.
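
As a minimal sketch, pre-creating a tagged management network and subnet might look like this; the resource names and subnet range are illustrative assumptions:

openstack network create --tag charm-octavia lb-mgmt-net
openstack subnet create --tag charm-octavia --network lb-mgmt-net \
    --subnet-range 172.16.0.0/24 lb-mgmt-subnet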

The UUID of the Nova flavor to use for Amphorae can be set using the custom-amp-flavor-id configuration option.
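
For example, a dedicated flavor could be created and its UUID passed to the charm; the flavor name and sizing below are assumptions:

flavor_id=$(openstack flavor create -f value -c id \
    --vcpus 1 --ram 1024 --disk 8 --private charm-octavia-flavor)
juju config octavia custom-amp-flavor-id=$flavor_id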

Amphora image

Octavia uses Amphorae (cloud instances running haproxy) to provide LBaaS services; an appropriate image must be uploaded to Glance with the tag octavia-amphora:

curl http://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2 | \
    openstack image create --tag octavia-amphora --disk-format=qcow2 \
        --container-format=bare --private amphora-haproxy-xenial

Octavia will use this image for all Amphora instances.
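
You can confirm that Octavia will find the image by listing images carrying the tag:

openstack image list --tag octavia-amphora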

Warning

The example above uses the OpenStack published Octavia test image based on Ubuntu Xenial; this is not appropriate for production usage.

It’s important to keep the Amphora image up-to-date to ensure that LBaaS services remain secure; that process is not covered in detail in this document.
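
One possible approach, sketched here with hypothetical image names, is to move the tag from the old image to the new one so that newly built Amphorae pick it up:

openstack image unset --tag octavia-amphora amphora-haproxy-old
openstack image set --tag octavia-amphora amphora-haproxy-new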

See the Octavia operators maintenance guide for more details.

Usage

To deploy a basic HTTP load balancer using a floating IP for access:

lb_vip_port_id=$(openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id private_subnet)

# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
openstack loadbalancer show lb1

openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.100 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.101 --protocol-port 80 pool1

floating_ip=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack floating ip set --port $lb_vip_port_id $floating_ip

The example above assumes:

  • The user and project executing the example has a subnet configured with the name private_subnet with the CIDR 192.168.21.0/24
  • An external network definition for floating IPs has been configured by the cloud operator with the name ext_net
  • Two instances running HTTP services attached to the private_subnet on IP addresses 192.168.21.{100,101}, exposing a health check on /healthcheck
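
Given those assumptions, a quick end-to-end check that requests reach the backends could be:

curl http://$floating_ip/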

The example is also most applicable to cloud deployments which use overlay networking for project networks and floating IPs for network ingress to project networks.

For more information on creating and configuring load balancing services in Octavia please refer to the Octavia cookbook.
