This page documents upgrade issues and notes. These may apply to any of the three upgrade types (charms, OpenStack, series).
The items on this page are distinct from those found on the pages dedicated to each upgrade type.
The issues are organised by upgrade type:
A timing issue has been observed during the upgrade of the rabbitmq-server charm (see bug LP #1912638 for tracking). If it occurs, the resulting hook error can be resolved with:
juju resolved rabbitmq-server/N
When Horizon is configured with TLS (openstack-dashboard charm option ssl-cert), revisions 294 and 295 of the charm have been reported to break the dashboard (see bug LP #1853173). The solution is to upgrade to a working revision. A temporary workaround is to disable TLS without upgrading. Most users will not be affected by this issue, as the recommended approach is to always upgrade to the latest revision.
To upgrade to revision 293:
juju upgrade-charm openstack-dashboard --revision 293
To upgrade to revision 296:
juju upgrade-charm openstack-dashboard --revision 296
To disable TLS:
juju config openstack-dashboard enforce-ssl=false
Starting with OpenStack Charms 21.04, any charm that supports the worker-multiplier configuration option will, upon upgrade, modify the active number of service workers as follows: if the option is not set explicitly, the number of workers will be capped at four regardless of whether the unit is containerised or not. Previously, the cap applied only to containerised units.
To fix long-standing issues in the manila-ganesha charm related to Manila exporting shares after restart, the nfs-ganesha Ubuntu package must be updated on all affected units before the manila-ganesha charm is upgraded to OpenStack Charms 21.10.
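The package update can be driven from the Juju client. A minimal sketch, assuming the application is named manila-ganesha (adjust names to your deployment):

```shell
# Upgrade the nfs-ganesha package on every manila-ganesha unit
# before upgrading the charm itself.
juju run --application manila-ganesha \
    'sudo apt-get update && sudo apt-get install --only-upgrade -y nfs-ganesha'
```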
Due to a ceph-radosgw charm change in the quincy/stable channel, URLs are processed differently by the RADOS Gateway. This will break the product-streams endpoint, set up by the glance-simplestreams-sync application, which includes a trailing slash in its URL. The glance-simplestreams-sync charm has since been fixed, but the fix will not update a pre-existing endpoint. The URL must be modified (remove the trailing slash) with native OpenStack tooling:
openstack endpoint list --service product-streams
openstack endpoint set --url <new-url> <endpoint-id>
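The trailing slash can be stripped with ordinary shell parameter expansion before passing the value to `openstack endpoint set`. A sketch, using a hypothetical endpoint URL (the real value comes from the `openstack endpoint list` output):

```shell
# Hypothetical current endpoint URL with the problematic trailing slash.
old_url="http://10.0.0.10:80/swift/v1/simplestreams/data/"

# "%/" removes a single trailing slash, if present.
new_url="${old_url%/}"
echo "${new_url}"

# Then apply it (requires admin credentials; endpoint ID from the list output):
#   openstack endpoint set --url "${new_url}" <endpoint-id>
```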
If it is not possible to upgrade Neutron and Nova within the same maintenance window, be mindful that the RPC communication between nova-cloud-controller, nova-compute, and nova-api-metadata is very likely to cause several errors while those services are not running the same version. This is because those charms do not currently support RPC version pinning or auto-negotiation.
See bug LP #1825999.
Between the Mitaka and Newton OpenStack releases, the neutron-gateway charm added two options, bridge-mappings and data-port, which replaced the ext-port option. This was to provide more control over how neutron-gateway can configure external networking. Unfortunately, the charm was only designed to work with either the new options or ext-port (no longer recommended), not both. See bug LP #1809190.
If cinder is directly related to ceph-mon rather than via cinder-ceph then upgrading from Newton to Ocata will result in the loss of some block storage functionality, specifically live migration and snapshotting. To remedy this situation the deployment should migrate to using the cinder-ceph charm. This can be done after the upgrade to Ocata.
Do not attempt to migrate a deployment with existing volumes to use the cinder-ceph charm prior to Ocata.
The intervention is detailed in the following three steps.
Step 0: Check existing configuration
Confirm existing volumes are in an RBD pool called ‘cinder’:
juju run --unit cinder/0 "rbd --name client.cinder -p cinder ls"
Step 1: Deploy new topology
Deploy the cinder-ceph charm and set its ‘rbd-pool-name’ option to match the pool that any existing volumes are in (see above):
juju deploy --config rbd-pool-name=cinder cinder-ceph
juju add-relation cinder cinder-ceph
juju add-relation cinder-ceph ceph-mon
juju remove-relation cinder ceph-mon
juju add-relation cinder-ceph nova-compute
Step 2: Update volume configuration
The existing volumes now need to be updated to associate them with the newly defined cinder-ceph backend:
juju run-action cinder/0 rename-volume-host currenthost='cinder' \
    newhost='cinder@cinder-ceph#cinder.volume.drivers.rbd.RBDDriver'
Starting with OpenStack Rocky, only the Fernet format for authentication tokens is supported. Therefore, prior to upgrading Keystone to Rocky, a transition must be made from the legacy UUID format to Fernet.
Fernet support is available upstream (and in the keystone charm) starting with Ocata so the transition can be made on either Ocata, Pike, or Queens.
A keystone charm upgrade will not alter the token format. The charm’s
token-provider option must be used to make the transition:
juju config keystone token-provider=fernet
This change may result in a minor control plane outage but any running instances will remain unaffected.
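After the transition, the token format can be spot-checked from any host with admin credentials loaded. Fernet tokens are long opaque strings (typically beginning with `gAAAA`), whereas UUID tokens are 32 hexadecimal characters:

```shell
# Issue a token and print only its ID so the format can be inspected
# (requires OpenStack credentials to be sourced in the environment).
openstack token issue -f value -c id
```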
The token-provider option has no effect starting with Rocky, where the charm defaults to Fernet and upstream removes support for UUID. See Keystone Fernet Token Implementation for more information.
As of Train, support for Neutron LBaaS has been retired. The load-balancing services are now provided by Octavia LBaaS. There is no automatic migration path, please review the Octavia LBaaS page for more information.
When upgrading Designate to Train, there is an encoding issue between the designate-producer and memcached that causes the designate-producer to crash. See bug LP #1828534. This can be resolved by restarting the memcached service.
juju run --application=memcached 'sudo systemctl restart memcached'
The Ceph BlueStore storage backend is enabled by default when Ceph Luminous is
detected. Therefore it is possible for a non-BlueStore cloud to acquire
BlueStore by default after an OpenStack upgrade (Luminous first appeared in
Queens). Problems will occur if storage is scaled out without first disabling
BlueStore (set ceph-osd charm option
bluestore to ‘False’). See bug LP
#1885516 for details.
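Before scaling out, the current setting can be inspected and, if necessary, disabled through the charm option. A sketch, assuming the application is named ceph-osd:

```shell
# Check whether BlueStore is enabled for newly added OSDs.
juju config ceph-osd bluestore

# Disable it so that scaled-out OSDs match an existing FileStore cluster.
juju config ceph-osd bluestore=False
```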
When the placement charm is deployed during the upgrade to OpenStack Train (as described in placement charm: OpenStack upgrade to Train) the Keystone service catalog is not updated accordingly. This issue is tracked in bug LP #1928992, which also includes an explicit workaround (comment #4).
Before upgrading Ceph, its require-osd-release option should be set to the current Ceph release (e.g. ‘nautilus’ if upgrading to Octopus). Failing to do so may cause the upgrade to fail, rendering the cluster inoperable.
On any ceph-mon unit, the current value of the option can be queried with:
sudo ceph osd dump | grep require_osd_release
If it needs changing, it can be done manually on any ceph-mon unit. Here the current release is Nautilus:
sudo ceph osd require-osd-release nautilus
In addition, upon completion of the upgrade, the option should be set to the new release. Here the new release is Octopus:
sudo ceph osd require-osd-release octopus
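Both the query and the update can also be run from the Juju client rather than from a shell on the unit. A sketch, assuming a monitor unit named ceph-mon/0:

```shell
# Query the current value of require_osd_release.
juju run --unit ceph-mon/0 'ceph osd dump | grep require_osd_release'

# Set it to the new release once the upgrade has completed (Octopus here).
juju run --unit ceph-mon/0 'ceph osd require-osd-release octopus'
```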
The charms should be able to respond intelligently to these two situations. Bug LP #1929254 is for tracking this effort.
The Firewall-as-a-Service (FWaaS v2) OpenStack project is retired starting with OpenStack Victoria. Consequently, the neutron-api charm will no longer make this service available starting with that OpenStack release. See the 21.10 Release Notes on this topic.
Prior to upgrading to Victoria users of FWaaS should remove any existing firewall groups to avoid the possibility of orphaning active firewalls (see the FWaaS v2 CLI documentation).
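Firewall groups can be listed and removed with the FWaaS v2 plugin for the openstack client. A sketch (group IDs will differ per deployment):

```shell
# List existing firewall groups, then delete each one before the upgrade.
openstack firewall group list
openstack firewall group delete <firewall-group-id>
```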
An Octavia upgrade may entail an update of its load balancers (amphorae) as a post-upgrade task. Reasons for doing this include:
- API incompatibility between the amphora agent and the new Octavia service
- the desire to use features available in the new amphora agent or haproxy
See the upstream documentation on Rotating amphora images.
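One way to rotate amphorae is to trigger a failover per load balancer, which rebuilds its amphorae from the currently configured image. A sketch using the openstack client (load balancer IDs will differ):

```shell
# List load balancers and their provisioning status.
openstack loadbalancer list

# Trigger a failover to rebuild the amphorae of a single load balancer.
openstack loadbalancer failover <load-balancer-id>
```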
DNS HA has been reported to not work on the focal series. See LP #1882508 for more information.
If a series upgrade is attempted while Vault is sealed then manual intervention will be required (see bugs LP #1886083 and LP #1890106). The vault leader unit (which will be in error) will need to be unsealed and the hook error resolved. The vault charm README has unsealing instructions, and the hook error can be resolved with:
juju resolved vault/N
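The recovery sequence can be sketched as follows, assuming the leader is unit vault/0 and three unseal key shares are required; see the vault charm README for the authoritative unsealing steps:

```shell
# Point the vault client at the Vault API on the leader unit
# (the unit IP is an assumption; substitute your own).
export VAULT_ADDR="http://<vault-unit-ip>:8200"

# Unseal with the required number of key shares (three assumed here);
# run once per key share.
vault operator unseal

# Clear the hook error once Vault is unsealed.
juju resolved vault/0
```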