Example 1. - Basic Cell Architecture in Train release


Multi-cell support is available in the Stein release and later. This guide addresses the Train release and later!

This guide assumes that you are ready to deploy a new overcloud, or have already installed an overcloud (minimum Train release).


Starting with CentOS 8 and the TripleO Stein release, podman is the CONTAINERCLI to be used in the following steps.

The following example uses six nodes and the split control plane method to deploy a distributed cell deployment. The first Heat stack deploys a controller cluster and a compute node. The second Heat stack deploys a cell controller and a compute node:

openstack overcloud status
| Plan Name |       Created       |       Updated       | Deployment Status |
| overcloud | 2019-02-12 09:00:27 | 2019-02-12 09:00:27 |   DEPLOY_SUCCESS  |

openstack server list -c Name -c Status -c Networks
| Name                       | Status | Networks               |
| overcloud-controller-1     | ACTIVE | ctlplane= |
| overcloud-controller-2     | ACTIVE | ctlplane= |
| overcloud-controller-0     | ACTIVE | ctlplane= |
| overcloud-novacompute-0    | ACTIVE | ctlplane= |

The above deployed overcloud shows the nodes from the first stack.


In this example the default cell and the additional cell use the same network. When configuring another network scenario, keep in mind that the systems must be able to communicate with each other.

Extract deployment information from the overcloud stack

Any additional cell stack requires information from the overcloud Heat stack where the central OpenStack services are located. The extracted parameters are needed as input for additional cell stacks. To extract these parameters into separate files in a directory (e.g. DIR=cell1) run the following:

source stackrc
mkdir cell1
export DIR=cell1

Export EndpointMap, HostsEntry, AllNodesConfig, GlobalConfig and passwords information

The tripleo-client in Train provides an openstack overcloud cell export functionality to export the required data from the control plane stack, which is then used as an environment file passed to the cell stack.

openstack overcloud cell export cell1 -o cell1/cell1-cell-input.yaml

cell1 is the chosen name for the new cell. This parameter is used to set the default export file name, which is then stored in the current directory. In this case a dedicated export file was set via -o.


If the export file already exists it can be forced to be overwritten using --force-overwrite or -f.


The services from the cell stacks use the same passwords as the control plane services.

Create roles file for the cell stack

Different roles are provided within tripleo-heat-templates, depending on the configuration and desired services to be deployed.

The default compute role at roles/Compute.yaml can be used for cell computes if that is sufficient for the use case.

A dedicated role, roles/CellController.yaml, is provided. This role includes the necessary services for the cell controller: the galera database, rabbitmq, nova-conductor, the nova novnc proxy, and nova metadata (in case NovaLocalMetadataPerCell is enabled).

Create the roles file for the cell:

openstack overcloud roles generate --roles-path \
/usr/share/openstack-tripleo-heat-templates/roles \
-o $DIR/cell_roles_data.yaml Compute CellController

Create cell parameter file for additional customization (e.g. cell1/cell1.yaml)

Each cell has some mandatory parameters which need to be set using an environment file. Add the following content into a parameter file for the cell, e.g. cell1/cell1.yaml:

resource_registry:
  OS::TripleO::Network::Ports::OVNDBsVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml

parameter_defaults:
  # since the same networks are used in this example, the
  # creation of the different networks is omitted
  ManageNetworks: false

  # CELL Parameter to reflect that this is an additional CELL
  NovaAdditionalCell: True

  # The DNS names for the VIPs for the cell
  CloudName: cell1.ooo.test
  CloudNameInternal: cell1.internalapi.ooo.test
  CloudNameStorage: cell1.storage.ooo.test
  CloudNameStorageManagement: cell1.storagemgmt.ooo.test
  CloudNameCtlplane: cell1.ctlplane.ooo.test

  # Flavors used for the cell controller and computes
  OvercloudCellControllerFlavor: cellcontroller
  OvercloudComputeFlavor: compute

  # Number of controllers/computes in the cell
  CellControllerCount: 1
  ComputeCount: 1

  # Compute names need to be unique across cells. Make sure to have a unique
  # hostname format for cell nodes
  ComputeHostnameFormat: 'cell1-compute-%index%'

  # default gateway
  ControlPlaneStaticRoutes:
    - ip_netmask: 0.0.0.0/0
      next_hop: x.x.x.x
      default: true

The above file disables creating networks by setting the ManageNetworks parameter to false, so that the same network_data.yaml file from the overcloud stack can be used. When ManageNetworks is set to false, ports will be created for the nodes in the separate stacks on the existing networks that were already created in the overcloud stack.

It also specifies that this will be an additional cell using parameter NovaAdditionalCell.


Compute hostnames need to be unique across cells. Make sure to use ComputeHostnameFormat to have unique hostnames.
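To illustrate the naming scheme, the %index% placeholder in ComputeHostnameFormat is substituted per node. A quick local sketch of how the format string from the example above expands for a hypothetical three-node cell:

```shell
# Expand the ComputeHostnameFormat pattern locally to preview the
# hostnames generated for a three-node cell (index starts at 0).
FORMAT='cell1-compute-%index%'
for index in 0 1 2; do
  echo "${FORMAT}" | sed "s/%index%/${index}/"
done
```

This prints cell1-compute-0 through cell1-compute-2; a second cell would use a different prefix (e.g. cell2-compute-%index%) to keep hostnames unique.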

Create the network configuration for cellcontroller and add to environment file

Depending on the network configuration of the hardware and the network architecture in use, it is required to register a network configuration resource for the CellController role:

resource_registry:
  OS::TripleO::CellController::Net::SoftwareConfig: single-nic-vlans/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: single-nic-vlans/compute.yaml


This example just reuses the existing network configs, as it is a shared L2 network. For details on network configuration consult the Configuring Network Isolation guide, chapter Customizing the Interface Templates.

Deploy the cell

Create new flavor used to tag the cell controller

Depending on the hardware, create a flavor and tag the node to be used.

openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 cellcontroller
openstack flavor set --property "cpu_arch"="x86_64" \
--property "capabilities:boot_option"="local" \
--property "capabilities:profile"="cellcontroller" \
--property "resources:CUSTOM_BAREMETAL=1" \
--property "resources:DISK_GB=0" \
--property "resources:MEMORY_MB=0" \
--property "resources:VCPU=0" \
cellcontroller

The properties need to be modified to the needs of the environment.

Tag the node into the new flavor using the following command:

baremetal node set --property \
capabilities='profile:cellcontroller,boot_option:local' <node id>

Verify the tagged cellcontroller:

openstack overcloud profiles list

Run cell deployment

To deploy the cell we can use the same overcloud deploy command that was used to deploy the overcloud stack, adding the created export environment files:

openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -e ... additional environment files used for overcloud stack, like container
    prepare parameters, or other specific parameters for the cell
  --stack cell1 \
  -r $HOME/$DIR/cell_roles_data.yaml \
  -e $HOME/$DIR/cell1-cell-input.yaml \
  -e $HOME/$DIR/cell1.yaml

Wait for the deployment to finish:

openstack stack list
| ID                                   | Stack Name   | Project                          | Stack Status    | Creation Time        | Updated Time         |
| 890e4764-1606-4dab-9c2f-6ed853e3fed8 | cell1        | 2b303a97f4664a69ba2dbcfd723e76a4 | CREATE_COMPLETE | 2019-02-12T08:35:32Z | None                 |
| 09531653-1074-4568-b50a-48a7b3cc15a6 | overcloud    | 2b303a97f4664a69ba2dbcfd723e76a4 | UPDATE_COMPLETE | 2019-02-09T09:52:56Z | 2019-02-11T08:33:37Z |

Create the cell and discover compute nodes (ansible playbook)

An ansible role and playbook is available to automate the one-time tasks to create a cell after the deployment steps finished successfully. The section Create the cell and discover compute nodes (manual way) explains the tasks automated by this playbook.


When using multiple additional cells, don’t place all inventories of the cells in one directory. The current version of the create-nova-cell-v2.yaml playbook uses CellController[0] to get the database_connection and transport_url to create the new cell. When all cell inventories get added to the same directory CellController[0] might not be the correct cell controller for the new cell.

export CONTAINERCLI=podman  #choose appropriate container cli here
source stackrc
mkdir inventories
for i in overcloud cell1; do \
  /usr/bin/tripleo-ansible-inventory \
  --static-yaml-inventory inventories/${i}.yaml --stack ${i}; \
done
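The loop runs tripleo-ansible-inventory once per stack, writing one static inventory file each. Expanding just the filename pattern locally previews the resulting layout:

```shell
# Preview the inventory file names produced, one per stack name.
for i in overcloud cell1; do
  echo "inventories/${i}.yaml"
done
```

Keeping one inventory file per stack in this directory is what lets ansible-playbook -i inventories see all stacks at once; as the note above warns, use a separate directory per additional cell.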

ANSIBLE_HOST_KEY_CHECKING=False ANSIBLE_SSH_RETRIES=3 ansible-playbook -i inventories \
  /usr/share/ansible/tripleo-playbooks/create-nova-cell-v2.yaml \
  -e tripleo_cellv2_cell_name=cell1 \
  -e tripleo_cellv2_containercli=${CONTAINERCLI}

The playbook requires two parameters: tripleo_cellv2_cell_name, which provides the name of the new cell, and (until docker support is dropped) tripleo_cellv2_containercli, which specifies whether podman or docker is used.

Create the cell and discover compute nodes (manual way)

The following describes the manual steps needed to finalize the deployment of a new cell. These are the steps automated in the ansible playbook mentioned in Create the cell and discover compute nodes (ansible playbook).

Get control plane and cell controller IPs:

CTRL_IP=$(openstack server list -f value -c Networks --name overcloud-controller-0 | sed 's/ctlplane=//')
CELL_CTRL_IP=$(openstack server list -f value -c Networks --name cell1-cellcontrol-0 | sed 's/ctlplane=//')
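The sed expression above simply strips the ctlplane= prefix from the Networks column, leaving the bare IP. A quick local check with a sample value (the IP is a placeholder):

```shell
# The Networks column comes back as e.g. "ctlplane=192.168.24.17";
# stripping the "ctlplane=" prefix leaves the bare IP address.
SAMPLE='ctlplane=192.168.24.17'
echo "${SAMPLE}" | sed 's/ctlplane=//'
```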

Add cell information to overcloud controllers

On all central controllers add information on how to reach the cell controller endpoint (usually internalapi) to /etc/hosts, from the undercloud:

CELL_INTERNALAPI_INFO=$(ssh heat-admin@${CELL_CTRL_IP} egrep \
'cell1.*\.internalapi' /etc/hosts)
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b \
-m lineinfile -a "dest=/etc/hosts line=\"$CELL_INTERNALAPI_INFO\""


Do this outside the HEAT_HOSTS_START .. HEAT_HOSTS_END block, or add it to an ExtraHostFileEntries section of an environment file for the central overcloud controller. Add the environment file to the next overcloud deploy run.
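If the environment file route is chosen, the entries can be carried via the ExtraHostFileEntries parameter. A sketch, where the IP and hostnames are placeholders for this example:

```
parameter_defaults:
  # Placeholder entry: replace with the real internalapi IP and
  # hostnames of the cell controller.
  ExtraHostFileEntries:
    - '172.16.2.110 cell1-cellcontrol-0.internalapi.ooo.test cell1-cellcontrol-0.internalapi'
```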

Extract transport_url and database connection

Get the transport_url and database connection endpoint information from the cell controller. This information is used to create the cell in the next step:

CELL_TRANSPORT_URL=$(ssh heat-admin@${CELL_CTRL_IP} sudo \
crudini --get /var/lib/config-data/nova/etc/nova/nova.conf DEFAULT transport_url)
CELL_MYSQL_VIP=$(ssh heat-admin@${CELL_CTRL_IP} sudo \
crudini --get /var/lib/config-data/nova/etc/nova/nova.conf database connection \
| awk -F'[@/]' '{print $4}')
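The awk filter splits the connection URL on @ and / and picks the host part, which lands in the fourth field. A local check with a sample nova database connection string (scheme, credentials, and IP are placeholders):

```shell
# A database connection looks roughly like
#   mysql+pymysql://nova:secret@172.16.2.100/nova
# Splitting on '@' and '/' makes the host/VIP the fourth field:
#   1: "mysql+pymysql:"  2: ""  3: "nova:secret"  4: "172.16.2.100"  5: "nova"
SAMPLE='mysql+pymysql://nova:secret@172.16.2.100/nova'
echo "${SAMPLE}" | awk -F'[@/]' '{print $4}'
```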

Create the cell

Log in to one of the central controllers and create the cell, referencing the IP of the cell controller in the database_connection and the transport_url extracted in the previous step, like:

ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 create_cell --name computecell1 \
--database_connection "{scheme}://{username}:{password}@$CELL_MYSQL_VIP/nova?{query}" \
--transport-url "$CELL_TRANSPORT_URL"


Templated cell mapping URLs can be used if the default cell and the additional cell have the same number of controllers. For further information about templated URLs for cell mappings check: Template URLs in Cell Mappings

Verify the cell got created:

ssh heat-admin@${CTRL_IP} sudo ${CONTAINERCLI} exec -i -u root nova_api \
nova-manage cell_v2 list_cells --verbose

After the cell has been created, the nova services on all central controllers need to be restarted.


Docker:

ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"docker restart nova_api nova_scheduler nova_conductor"


Podman:

ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -a \
"systemctl restart tripleo_nova_api tripleo_nova_conductor tripleo_nova_scheduler"

We now see the cell controller services registered:

(overcloud) [stack@undercloud ~]$ nova service-list

Perform cell host discovery

The final step is to discover the computes deployed in the cell. Run the host discovery as explained in Add a compute to a cell.

Create and add the node to an Availability Zone

After a cell is provisioned, it is required to create an availability zone for the cell to make sure an instance created in the cell stays in the cell when performing a migration. Check Availability Zones (AZ) for more about how to create an availability zone and add the node.

After that the cell is deployed and can be used.


Migrating instances between cells is not supported. To move an instance to a different cell it needs to be re-created in the new target cell.