Nova cells V2 is a feature that allows Nova deployments to be scaled out to a larger size than would otherwise be possible. This is achieved through sharding of the compute nodes into pools known as cells, with each cell having a separate message queue and database.
Cells: deployment perspective¶
From a deployment perspective, nova cell support involves separating the Nova services into two sets - global services and per-cell services.
Global services:

nova-api
nova-scheduler
nova-super-conductor (in multi-cell mode)

Per-cell control services:

nova-conductor
nova-novncproxy
nova-serialproxy
nova-spicehtml5proxy

Per-cell compute services:

nova-compute
nova-compute-ironic (for Ironic cells)
Another consideration is the database and message queue clusters on which the cells depend. These are discussed later.
There are a number of ways to place services in a multi-cell environment.
Single cell topology¶
The single cell topology is used by default, and is limited to a single cell:
          +----------------+
          |                |
          |  controllers   |
          |                |
          +----------------+
                  |
         +--------+--------+
         |                 |
+--------------+   +--------------+
|    cell 1    |   |    cell 1    |
|   compute 1  |   |   compute 2  |
|              |   |              |
+--------------+   +--------------+
All control services run on the controllers, and there is no superconductor.
Dedicated cell controller topology¶
In this topology, each cell has a dedicated group of controllers to run cell control services. The following diagram shows the topology for a cloud with two cells:
                     +----------------+
                     |                |
                     |  controllers   |
                     |                |
                     +----------------+
                             |
            +----------------+----------------+
            |                                 |
   +----------------+               +----------------+
   |     cell 1     |               |     cell 2     |
   |   controllers  |               |   controllers  |
   +----------------+               +----------------+
        |        |                       |        |
+------------+ +------------+   +------------+ +------------+
|   cell 1   | |   cell 1   |   |   cell 2   | |   cell 2   |
|  compute 1 | |  compute 2 |   |  compute 1 | |  compute 2 |
+------------+ +------------+   +------------+ +------------+
Databases & message queues¶
The global services require access to a database for the API and cell0 databases, in addition to a message queue. Each cell requires its own database and message queue instance. These could be separate database and message queue clusters, or shared database and message queue clusters partitioned via database names and virtual hosts. Currently Kolla Ansible supports deployment of a shared database cluster and a shared message queue cluster.
Cells: Kolla Ansible perspective¶

Configuring Kolla Ansible for deployment of multiple cells typically requires use of inventory host and group variables.
Enabling multi-cell support¶
Support for deployment of multiple cells is disabled by default - nova is deployed in single conductor mode.
Deployment of multiple cells may be enabled by setting enable_cells to yes in globals.yml. This deploys nova in superconductor mode, with separate conductors for each cell.
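For example, assuming the enable_cells flag described above and the usual Kolla Ansible quoting convention:

# globals.yml
enable_cells: "yes"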
By default, all cell services are deployed in a single unnamed cell. This behaviour is backwards compatible with previous releases of Kolla Ansible.
To deploy hosts in a different cell, set the nova_cell_name variable for the hosts in the cell. This can be done either using host variables or group variables.
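For example, a per-host assignment could be made with a host variables file (the path and host name here are illustrative):

# inventory/host_vars/compute01
nova_cell_name: cell1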
In a single cell deployment, the following Ansible groups are used to determine the placement of services:

compute
nova-conductor
nova-novncproxy
nova-serialproxy
nova-spicehtml5proxy
In a multi-cell deployment, this is still necessary - compute hosts must be in the compute group. However, to provide further control over where cell services are placed, the following variables are used:

nova_cell_compute_group
nova_cell_conductor_group
nova_cell_novncproxy_group
nova_cell_serialproxy_group
nova_cell_spicehtml5proxy_group
For backwards compatibility, these are set by default to the original group names. For a multi-cell deployment, they should be set to the name of a group containing only the compute hosts in that cell.
In the following example we have two cells, cell1 and cell2. Each cell has two compute nodes and a cell controller.

Inventory:
[compute:children]
compute-cell1
compute-cell2

[nova-conductor:children]
cell-control-cell1
cell-control-cell2

[nova-novncproxy:children]
cell-control-cell1
cell-control-cell2

[nova-spicehtml5proxy:children]
cell-control-cell1
cell-control-cell2

[nova-serialproxy:children]
cell-control-cell1
cell-control-cell2

[cell1:children]
compute-cell1
cell-control-cell1

[cell2:children]
compute-cell2
cell-control-cell2

[compute-cell1]
compute01
compute02

[compute-cell2]
compute03
compute04

[cell-control-cell1]
cell-control01

[cell-control-cell2]
cell-control02
Cell1 group variables (group_vars/cell1):
nova_cell_name: cell1
nova_cell_compute_group: compute-cell1
nova_cell_conductor_group: cell-control-cell1
nova_cell_novncproxy_group: cell-control-cell1
nova_cell_serialproxy_group: cell-control-cell1
nova_cell_spicehtml5proxy_group: cell-control-cell1
Cell2 group variables (group_vars/cell2):
nova_cell_name: cell2
nova_cell_compute_group: compute-cell2
nova_cell_conductor_group: cell-control-cell2
nova_cell_novncproxy_group: cell-control-cell2
nova_cell_serialproxy_group: cell-control-cell2
nova_cell_spicehtml5proxy_group: cell-control-cell2
Note that these example cell group variables specify groups for all console proxy services for completeness. You will need to ensure that there are no port collisions. For example, if both cell1 and cell2 use the default novncproxy console proxy, you could override nova_novncproxy_port in the cell2 group variables to prevent a collision with cell1.
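A sketch of such an override, assuming nova_novncproxy_port is the port variable to change and 6090 is an arbitrary free port:

# group_vars/cell2
nova_novncproxy_port: 6090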
Databases¶

The database connection for each cell is configured via the following variables:

nova_cell_database_name
nova_cell_database_user
nova_cell_database_password
nova_cell_database_address
nova_cell_database_port
By default the MariaDB cluster deployed by Kolla Ansible is used. For an unnamed cell, the nova database is used for backwards compatibility. For a named cell, the database is named nova_<cell name>.
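For example, with the inventory above, the cell1 database would be named nova_cell1. If a cell's database options need to be changed (for instance to point at a different database address), this could be done in the cell's group variables - a sketch using the nova_cell_database_address variable listed above, with an illustrative address:

# group_vars/cell1
nova_cell_database_address: 10.0.0.50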
Message queues¶

The RPC message queue for each cell is configured via the following variables:

nova_cell_rpc_user
nova_cell_rpc_password
nova_cell_rpc_port
nova_cell_rpc_group_name
nova_cell_rpc_transport
nova_cell_rpc_vhost
And for notifications:

nova_cell_notify_user
nova_cell_notify_password
nova_cell_notify_port
nova_cell_notify_group_name
nova_cell_notify_transport
nova_cell_notify_vhost
By default the message queue cluster deployed by Kolla Ansible is used. For an unnamed cell, the / virtual host used by all OpenStack services is used for backwards compatibility. For a named cell, a virtual host named nova_<cell name> is used.
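Continuing the example, cell1's RPC traffic would use a virtual host named nova_cell1 on the shared message queue cluster. Overriding this to a non-default name could be done in the cell's group variables - a sketch using the nova_cell_rpc_vhost variable listed above, with an illustrative value:

# group_vars/cell1
nova_cell_rpc_vhost: cell1_vhost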
Conductor & API database¶
By default the cell conductors are configured with access to the API database. This is currently necessary for some operations in Nova which require an upcall.
If those operations are not required, it is possible to prevent cell conductors from accessing the API database by setting nova_cell_conductor_has_api_database to no.
Console proxies¶

General information on configuring console access in Nova is available in the Nova documentation. For deployments with multiple cells, the console proxies for each cell must be accessible via a unique endpoint. We achieve this by adding an HAProxy frontend for each cell that forwards to the console proxies for that cell. Each frontend must use a different port. The port may be configured via the following variables:

nova_novncproxy_port
nova_spicehtml5proxy_port
nova_serialproxy_port
Ironic¶

Currently all Ironic-based instances are deployed in a single cell. The name of that cell is configured via nova_cell_ironic_cell_name. nova_cell_compute_ironic_group can be used to set the group that nova-compute-ironic services are deployed to.
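A sketch of how these variables might be set, with illustrative (not default) cell and group names:

# globals.yml
nova_cell_ironic_cell_name: ironic-cell
nova_cell_compute_ironic_group: compute-ironic

The matching inventory group would then list the hosts running nova-compute-ironic:

[compute-ironic]
ironic-control01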
Deployment¶

Deployment in a multi-cell environment does not need to be done differently than in a single-cell environment - use the kolla-ansible deploy command.
A common operational task in large scale environments is to add new compute resources to an existing deployment. In a multi-cell environment it is likely that these will all be added to one or more new or existing cells. Ideally we would not risk affecting other cells, or even the control hosts, when deploying these new resources.
The Nova cells support in Kolla Ansible has been built such that it is possible to add new cells or extend existing ones without affecting the rest of the cloud. This is achieved via the --limit argument to kolla-ansible. For example, if we are adding a new cell cell03 to an existing cloud, and all hosts for that cell (control and compute) are in a cell03 group, we could use this as our limit:
kolla-ansible deploy --limit cell03
When adding a new cell, we also need to ensure that HAProxy is configured for the console proxies in that cell:
kolla-ansible deploy --tags haproxy
Another benefit of this approach is that it should be faster to complete, as the number of hosts Ansible manages is reduced.
Upgrades¶

Similar to deploys, upgrades in a multi-cell environment can be performed in the same way as in single-cell environments, via the kolla-ansible upgrade command. Note that staged upgrades, described below, rely on Nova's rolling upgrade support (RPC version pinning and online data migrations) and are not applicable in all configurations.
In large environments the risk involved with upgrading an entire site can be significant, and the ability to upgrade one cell at a time is crucial. This is very much an advanced procedure, and operators attempting this should be familiar with the Nova upgrade documentation.
Here we use Ansible tags and limits to control the upgrade process. We will
only consider the Nova upgrade here. It is assumed that all dependent services
have been upgraded (see
ansible/site.yml for correct ordering).
The first step, which may be performed in advance of the upgrade, is to perform the database schema migrations.
kolla-ansible upgrade --tags nova-bootstrap
Next, we upgrade the global services.
kolla-ansible upgrade --tags nova-api-upgrade
Now the cell services can be upgraded. This can be performed in batches of one or more cells at a time, using --limit. For example, to upgrade cell03:
kolla-ansible upgrade --tags nova-cell-upgrade --limit cell03
At this stage, we might wish to perform testing of the new services, to check that they are functioning correctly before proceeding to other cells.
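For example, one way to check the new cell (a sketch - it assumes the nova_api container name used by Kolla Ansible and admin credentials loaded in the environment):

# Confirm the cell is registered and its hosts are mapped
docker exec nova_api nova-manage cell_v2 list_cells

# Check that the new compute services are up
openstack compute service list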
Once all cells have been upgraded, we can reload the services to remove RPC version pinning, and perform online data migrations.
kolla-ansible upgrade --tags nova-reload,nova-online-data-migrations
The nova upgrade is now complete, and upgrading of other services may continue.