19.04

Summary

The 19.04 OpenStack Charms release includes updates for the following charms. Additional charm support status information is published in the main charm guide, which ultimately supersedes release note contents.

Always use the latest stable charm revision before proceeding with topological changes, application migrations, workload upgrades, or series upgrades, and before filing bug reports.

Supported Charms

  • aodh

  • barbican

  • barbican-vault

  • ceilometer

  • ceilometer-agent

  • ceph-mon

  • ceph-osd

  • ceph-proxy

  • ceph-radosgw

  • ceph-rbd-mirror

  • cinder

  • cinder-ceph

  • designate

  • designate-bind

  • glance

  • gnocchi

  • hacluster

  • heat

  • keystone

  • keystone-ldap

  • lxd

  • neutron-api

  • neutron-openvswitch

  • neutron-gateway

  • neutron-dynamic-routing

  • nova-cloud-controller

  • nova-compute

  • octavia

  • openstack-dashboard

  • percona-cluster

  • rabbitmq-server

  • swift-proxy

  • swift-storage

  • vault

Preview Charms

  • barbican-softhsm

  • ceph-fs

  • cinder-backup

  • keystone-saml-mellon

  • manila

  • manila-generic

  • masakari

  • masakari-monitors

  • pacemaker-remote

  • tempest

Removed Charms

The following charms have been removed as part of this charm release:

  • glusterfs (unmaintained)

  • interface-odl-controller-api (unmaintained)

  • manila-glusterfs (unmaintained)

  • murano (unmaintained)

  • neutron-api-odl (unmaintained)

  • nova-compute-proxy (unmaintained)

  • odl-controller (unmaintained)

  • openvswitch-odl (unmaintained)

  • trove (unmaintained)

New Charm Features

Each new feature is accompanied by an example bundle, in the form of a test bundle, and/or a charm deployment guide section detailing its use. For example test bundles, see the src/tests/bundles/ directory within the relevant charm repository.

OpenStack Stein

The 19.04 OpenStack Charms release introduces support for OpenStack Stein on Ubuntu 18.04 (LTS) and Ubuntu 19.04.


PCI passthrough for GPU and other devices

The requirements and configuration for passing arbitrary PCI devices through to Nova KVM instances are now detailed in the charm deployment guide.

A common use case for this configuration is to provide direct access to a GPGPU device within a virtual machine.

Please refer to the PCI passthrough (app-pci) section of the charm deployment guide for more details.
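
As a minimal illustrative sketch, assuming the pci-passthrough-whitelist (nova-compute) and pci-alias (nova-cloud-controller) charm options covered in that guide, with placeholder vendor and product IDs:

# Placeholder vendor/product IDs; substitute the values for your device
juju config nova-compute pci-passthrough-whitelist='{"vendor_id": "10de", "product_id": "1db4"}'
juju config nova-cloud-controller pci-alias='{"vendor_id": "10de", "product_id": "1db4", "name": "gpu"}'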

ceph-radosgw: multisite replication

The ceph-radosgw charm now features support for replication between RADOS Gateway deployments; please refer to Appendix J of the charm deployment guide for more details.
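
As a rough sketch, assuming two ceph-radosgw applications (east-radosgw and west-radosgw are hypothetical names), each already related to its own Ceph cluster, and the realm/zonegroup/zone options and master/slave relation endpoints introduced with this feature:

# Hypothetical realm and zone layout; see Appendix J for the full procedure
juju config east-radosgw realm=replicated zonegroup=us zone=east
juju config west-radosgw realm=replicated zonegroup=us zone=west
juju add-relation east-radosgw:master west-radosgw:slave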

ceph-rbd-mirror: mirroring of RBD images

The ceph-rbd-mirror charm allows you to mirror RBD images across Ceph clusters; please refer to Appendix K of the charm deployment guide for more details.
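
As a minimal single-model sketch, assuming two Ceph clusters whose monitor applications are named ceph-mon-site-a and ceph-mon-site-b (hypothetical names; production deployments typically span Juju models using cross model relations, as described in Appendix K):

juju deploy ceph-rbd-mirror
# ceph-local is the cluster holding the source images;
# ceph-remote is the cluster receiving the mirrored images
juju add-relation ceph-rbd-mirror:ceph-local ceph-mon-site-a
juju add-relation ceph-rbd-mirror:ceph-remote ceph-mon-site-b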

Note

There exist bugs in Ceph Luminous that can make the available status information about RBD Mirror inaccurate. While we regard these bugs as cosmetic, it is useful to be aware of them. These issues are not present with Ceph Mimic.

rabbitmq integration refactor

RabbitMQ sectional configuration was deprecated and removed upstream, replaced by transport_url configuration in the [DEFAULT] section. As part of this change, a wider refactoring of the rabbitmq-server integration was undertaken.

The ceilometer-agent charm now requires a direct amqp relation to the rabbitmq-server charm.

Users upgrading the ceilometer-agent charm to the 19.04 charm revision will need to add this relation; until it is added, ceilometer-agent units will remain in a blocked state. Users and operators will also need to update any relevant bundles accordingly.

For example:

juju add-relation ceilometer-agent:amqp rabbitmq-server:amqp
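
For bundle-maintained deployments, the equivalent change is an additional entry in the bundle's relations list, for example:

relations:
  - [ ceilometer-agent:amqp, rabbitmq-server:amqp ]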

neutron-api: FWaaS v2

For the OpenStack Stein release, FWaaS v1 has been dropped; FWaaS v2 is automatically enabled and existing FWaaS v1 definitions are migrated to v2 on upgrade.
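
After the upgrade, the migrated definitions can be inspected with the FWaaS v2 CLI, assuming the python-neutronclient plugin for the openstack client is installed:

openstack firewall group list
openstack firewall group policy list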

Preview Charm Features

OpenStack Automated Instance Recovery with Masakari

Three new charms are being previewed: masakari, masakari-monitors and pacemaker-remote. Together they provide automated instance recovery in the event of an individual guest crashing or an entire compute node going offline.

These charms bring forward upstream Masakari features which need to be carefully considered and pre-validated in test labs by cloud operators. Further upstream Masakari development, charm feature work, and scenario validation are likely to be necessary before the solution can be considered mature as a whole.

Please refer to Appendix L of the charm deployment guide for more details.
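
As a rough sketch of the topology, using conventional relation endpoint names (Appendix L remains the authoritative reference, and further relations are required for a complete deployment):

juju deploy masakari
juju add-relation masakari:shared-db percona-cluster:shared-db
juju add-relation masakari:amqp rabbitmq-server:amqp
juju add-relation masakari:identity-service keystone:identity-service
# masakari-monitors is a subordinate charm deployed alongside nova-compute
juju deploy masakari-monitors
juju add-relation nova-compute masakari-monitors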

Note

When a STONITH operation is triggered, the default is to reboot the lost node; however, this may not be the desired behaviour. Bug 1823331 tracks exposing the STONITH behaviour as a configuration option.

Keystone Federation With SAML Mellon

The new charm, keystone-saml-mellon, deploys and configures the SAML Mellon (mod_auth_mellon) Apache2 module. This enables Keystone federation with a third party Identity Provider via SAML. The Identity Provider may be another Keystone or another identity service technology.

SAML Mellon and federation allow a user to log in through the Horizon dashboard using credentials held in a third party Identity Provider. The SAML exchange follows this workflow: Horizon checks with Keystone as the Service Provider, which refers the browser to the Identity Provider, which confirms the user's credentials back to Keystone, which then grants the browser user access to OpenStack resources in Horizon.

Federation and SAML are complicated technologies with a number of moving parts, including Keystone as the Service Provider, a third party Identity Provider, and Horizon. As such, one should read as much of the documentation as possible before attempting to deploy SAML-enabled Keystone federation. The keystone-saml-mellon charm's README is the primary source of documentation for deploying and configuring the charm. It references many upstream documentation sources, all of which should be read and understood.

Note

SAML is a browser-based technology. As such, although it may be technically possible, it is not practical as a solution for users of the CLI.

Please refer to the charm's README for more details.
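
As a minimal sketch, the subordinate charm is related to both keystone and openstack-dashboard, with the Identity Provider metadata and Service Provider keys supplied as charm resources (the file names below are placeholders; the README documents the exact resource and configuration requirements):

juju deploy keystone-saml-mellon \
  --resource idp-metadata=./idp-metadata.xml \
  --resource sp-private-key=./sp-private-key.pem \
  --resource sp-signing-keyinfo=./sp-keyinfo.xml
juju add-relation keystone keystone-saml-mellon
juju add-relation openstack-dashboard keystone-saml-mellon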

Upgrading charms

Always use the latest stable charm revision before proceeding with topological changes, charm application migrations, workload upgrades, or series upgrades, and before filing bug reports.

Please ensure that the keystone charm is upgraded first.

To upgrade an existing deployment to the latest charm version simply use the ‘upgrade-charm’ command, for example:

juju upgrade-charm keystone

Charm upgrades and OpenStack upgrades are two distinctly different things. Charm upgrades ensure that the deployment is using the latest charm revision, containing the latest charm fixes and charm features available for a given deployment.

Charm upgrades do not cause OpenStack versions to upgrade; however, OpenStack upgrades do require the latest charm revision as a prerequisite.

Upgrading OpenStack

Before upgrading OpenStack, all OpenStack Charms should be running the latest stable charm revision.

To upgrade an existing Rocky-based deployment on Ubuntu 18.04 to the Stein release, re-configure the charm with a new openstack-origin configuration:

juju config nova-cloud-controller openstack-origin=cloud:bionic-stein

Please ensure that Ceph services are upgraded before the services that consume Ceph resources, such as cinder, glance, and nova-compute:

juju config ceph-mon source=cloud:bionic-stein
juju config ceph-osd source=cloud:bionic-stein

Note

Upgrading an OpenStack cloud is still not without risk; upgrades should be tested in a pre-production environment before being applied to production deployments.

See https://docs.openstack.org/charm-guide/latest/admin/upgrades/openstack.html for more details.

Note

See Known Issues: Cinder auto-resume after OpenStack upgrade action below.

Removed Features

nova-cloud-controller: single-nova-consoleauth

The ‘single-nova-consoleauth’ feature has been removed from the nova-cloud-controller charm; this legacy feature has been superseded by the use of nova-consoleauth daemons on all nova-cloud-controller units, sharing authentication tokens using memcached.

Cluster resources associated with this feature will be cleaned up during charm upgrade.

If the charm is running in an HA deployment, a relation to memcached must be added to the nova-cloud-controller application:

juju add-relation nova-cloud-controller memcached

Warning

See known issues below: Adding nova-cloud-controller memcached relation

Known Issues

Adding nova-cloud-controller memcached relation

Warning

If a memcached application already exists in the model, it is possible that the nova-cloud-controller and memcached applications have different default spaces, or that the cache relation is not bound to a matching network space.

This leads to bug 1823740 where memcached units have the wrong IP addresses for the nova-cloud-controller units in the iptables rules used to restrict access.

The symptoms are the ‘openstack availability zone list’ command timing out, and SYN_SENT connections from the nova-cloud-controller unit to the memcached unit. Launching new instances will also fail.

Because Juju does not currently allow network space bindings to be changed post-deployment (bug 1796653), memcached must be (re-)deployed with the correct network space bindings to support access from the nova-cloud-controller units.

There are two approaches, the safest of which is to deploy a new set of memcached units, either with their cache relation bound to nova-cloud-controller's default space or with their default space set to the same as nova-cloud-controller's.

juju deploy -n 2 cs:memcached --to lxd:1,lxd:2 --bind "cache=<NCC's DEFAULT SPACE>" ncc-memcached

or

juju deploy -n 2 cs:memcached --to lxd:1,lxd:2 --bind "<NCC's DEFAULT SPACE>" ncc-memcached

The alternative is to remove the existing memcached application entirely and redeploy it using the same approach.

Related to this issue is an upstream oslo.cache bug which is working its way through the backport process at the time of this writing (bug 1812935).

Cinder auto-resume after OpenStack upgrade action

There was a conflict between the way the cinder charm handled series upgrades and action-managed OpenStack upgrades, as described in bug 1824545.

When a cinder unit was paused and an action-managed OpenStack upgrade was performed, certain necessary steps were accidentally skipped. The solution is to run an automatic resume immediately after the OpenStack upgrade, which the charm now does.

Note that this behaviour differs from that of the other charms. We may change the other charms to match this behaviour at some point in the future.

After the following sequence of commands:

juju config cinder action-managed-upgrade=True openstack-origin=$NEW_ORIGIN
juju run-action --wait cinder/0 pause
juju run-action --wait cinder/0 openstack-upgrade

The cinder charm will be upgraded and its services automatically resumed. It is no longer necessary to run the resume action after the OpenStack upgrade.

Bugs Fixed

This release includes 247 bug fixes. For the full list of bugs resolved for the 19.04 charms release please refer to https://launchpad.net/openstack-charms/+milestone/19.04.

Next Release Info

Please see https://docs.openstack.org/charm-guide/latest for current information.