OpenStack on LXD

Overview

An OpenStack deployment is typically made over a number of physical servers, using LXD containers where appropriate for control plane services.

However, the average developer probably does not have, or want to have, access to such infrastructure for day-to-day charm development.

It’s possible to deploy OpenStack using the OpenStack Charms in LXD containers on a single machine; this allows for faster, localized charm development and testing.

Host Setup

The tools in the openstack-on-lxd git repository require the use of Juju 2.x, which provides full support for the LXD local provider.

sudo apt-get install juju lxd zfsutils-linux squid-deb-proxy \
    python-novaclient python-keystoneclient python-glanceclient \
    python-neutronclient python-openstackclient curl

These tools are provided as part of the Ubuntu 16.04 LTS release; the latest Juju 2.x beta release can be obtained from the Juju team devel PPA:

sudo add-apt-repository ppa:juju/devel

You’ll need a well-specified machine with at least 8G of RAM and an SSD; for reference, the author uses a Lenovo X240 with an Intel i5 processor, 16G RAM and a 500G Samsung SSD (split into two partitions - one for the OS and one for a ZFS pool).

For s390x, this has been validated on an LPAR with 12 CPUs, 40GB RAM, 2 ~40GB disks (one disk for the OS and one disk for the ZFS pool).

You’ll also need to clone the repository with the bundles and configuration for the deployment:

git clone https://github.com/openstack-charmers/openstack-on-lxd

All commands in this document assume they are being made from within the local copy of this repo.

LXD

Base Configuration

This type of deployment creates numerous containers on a single host, which leads to many thousands of open file handles.

Some of the default system thresholds may not be high enough for this use case, potentially leading to errors such as “Too many open files”.

To address this, the host system should be configured according to the LXD production-setup guide, specifically the /etc/sysctl.conf bits:

echo fs.inotify.max_queued_events=1048576 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_instances=1048576 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_watches=1048576 | sudo tee -a /etc/sysctl.conf
echo vm.max_map_count=262144 | sudo tee -a /etc/sysctl.conf
echo vm.swappiness=1 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

In order to allow the OpenStack cloud to function, you’ll need to reconfigure the default LXD bridge to support IPv4 networking; it is also recommended that you use a fast storage backend such as ZFS on an SSD-based block device. Use the configuration tool provided by LXD to help do this:

sudo lxd init

The referenced config.yaml uses an APT proxy to improve installation performance. The network that you create during the lxd init procedure should include that proxy address. Also ensure that you leave a range of IP addresses free for use as floating IP addresses for OpenStack instances. The following values are used in this example procedure:

Network and IP: 10.0.8.1/24
DHCP range: 10.0.8.2 -> 10.0.8.200

Also update the default profile to use jumbo frames for all network connections into containers:

lxc profile device set default eth0 mtu 9000

This will help you avoid packet-fragmentation problems with overlay networks.

Test out your configuration prior to launching an entire cloud:

lxc launch ubuntu-daily:xenial

This should result in a running container you can exec into and back out of:

lxc exec <container-name> bash
exit

Juju Profile Update

Juju creates a couple of profiles for the models that it creates by default; these are named juju-default and juju-admin. Create the juju-default profile ahead of time and populate it with the provided configuration:

lxc profile create juju-default 2>/dev/null || echo "juju-default profile already exists"
cat lxd-profile.yaml | lxc profile edit juju-default

This will ensure that containers created by LXD for Juju have the correct permissions to run your OpenStack cloud.

Juju

Bootstrap the Juju Controller

Prior to deploying the OpenStack on LXD bundle, you’ll need to bootstrap a controller to manage your Juju models:

juju bootstrap --config config.yaml localhost lxd

Review the contents of config.yaml prior to running this command and edit as appropriate; it configures some defaults for containers created in the model, including an APT proxy to improve the performance of network operations.
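As a rough sketch, config.yaml carries model settings of the following kind (the keys shown are standard Juju 2.x model settings, but the exact contents of the repository’s config.yaml and the proxy address are assumptions - 10.0.8.1 matches the example network used in this guide):

```yaml
# Sketch of the kind of Juju model settings config.yaml carries;
# consult the actual file in the repository for authoritative values.
apt-http-proxy: http://10.0.8.1:8000/   # squid-deb-proxy on the LXD bridge
enable-os-refresh-update: false         # skip apt update on container start
enable-os-upgrade: false                # skip apt upgrade on container start
```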

Configure a PowerNV (ppc64el) Host

When deployed directly to metal, the nova-compute charm sets smt=off, as is necessary for libvirt usage. However, when nova-compute is in a container, the containment prevents ppc64_cpu from modifying the host’s smt value. It is necessary to pre-configure the host smt setting for nova-compute (libvirt + qemu) in ppc64el scenarios.

sudo ppc64_cpu --smt=off

OpenStack

Deploy

Next, deploy the OpenStack cloud using the provided bundle.

For amd64, arm64, or ppc64el Mitaka:

juju deploy bundle-mitaka.yaml

For amd64, arm64, or ppc64el Newton:

juju deploy bundle-newton.yaml

For amd64, arm64, or ppc64el Ocata:

juju deploy bundle-ocata.yaml

For s390x Mitaka:

juju deploy bundle-mitaka-s390x.yaml

For s390x Newton:

juju deploy bundle-newton-s390x.yaml

For s390x Ocata:

juju deploy bundle-ocata-s390x.yaml

You can watch deployment progress using the ‘juju status’ command. This may take some time depending on the speed of your system; CPU, disk and network speed will all affect deployment time.

Using the Cloud

Check Access

Once deployment has completed (units should report a ready state in the status output), check that you can access the deployed cloud:

source novarc
openstack catalog list
openstack service list
openstack network agent list
openstack volume service list

These commands should all succeed, and the output should give you a feel for how the various OpenStack components are deployed across the containers.

Upload an image

Before we can boot an instance, we need to upload a boot image into Glance.

For amd64:

curl http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img | \
    openstack image create --public --container-format=bare --disk-format=qcow2 xenial

For arm64:

curl http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-arm64-uefi1.img | \
    openstack image create --public --container-format=bare --disk-format=qcow2 --property hw_firmware_type=uefi xenial

For s390x:

curl http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-s390x-disk1.img | \
    openstack image create --public --container-format=bare --disk-format=qcow2 xenial

For ppc64el:

curl http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-ppc64el-disk1.img | \
    openstack image create --public --container-format=bare --disk-format=qcow2 xenial

Configure some networking

First, create the ‘external’ network which actually maps directly to the LXD bridge:

./neutron-ext-net --network-type flat \
    -g 10.0.8.1 -c 10.0.8.0/24 \
    -f 10.0.8.201:10.0.8.254 ext_net

and then create an internal overlay network for the instances to actually attach to:

./neutron-tenant-net -t admin -r provider-router \
    -N 10.0.8.1 internal 192.168.20.0/24

Create a key-pair

Upload your local public key into the cloud so you can access instances:

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
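If ~/.ssh/id_rsa.pub does not exist yet, generate a key pair first with standard OpenSSH tooling and then re-run the upload command above (the default id_rsa path and an empty passphrase are assumptions here):

```shell
# Generate an RSA key pair only if one does not already exist;
# -N "" creates the key without a passphrase.
[ -f ~/.ssh/id_rsa.pub ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa -q
```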

Create Flavors

It’s safe to skip this for Mitaka. For Newton and later, there are no pre-populated flavors. Check if flavors exist, and if not, create them:

openstack flavor list
openstack flavor create --public --ram 512 --disk 1 --ephemeral 0 --vcpus 1 m1.tiny
openstack flavor create --public --ram 1024 --disk 20 --ephemeral 40 --vcpus 1 m1.small
openstack flavor create --public --ram 2048 --disk 40 --ephemeral 40 --vcpus 2 m1.medium
openstack flavor create --public --ram 8192 --disk 40 --ephemeral 40 --vcpus 4 m1.large
openstack flavor create --public --ram 16384 --disk 80 --ephemeral 40 --vcpus 8 m1.xlarge

Boot an instance

You can now boot an instance on your cloud:

openstack server create --image xenial --flavor m1.small --key-name mykey \
   --wait --nic net-id=$(neutron net-list | grep internal | awk '{ print $2 }') \
   openstack-on-lxd-ftw

Attaching a volume

First, create a volume in cinder:

openstack volume create --size 10 testvolume

Then attach it to the instance we booted earlier:

openstack server add volume openstack-on-lxd-ftw testvolume
openstack volume show testvolume

The attached volume will be accessible once you log in to the instance (see below). It will need to be formatted and mounted.
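As a sketch, inside the instance the volume typically shows up as the next unused virtio disk; /dev/vdb below is an assumption, so verify the device name with lsblk first:

```shell
# Run inside the instance. Check which disk is the new volume first;
# /dev/vdb is an assumption based on a typical single-disk instance.
lsblk
sudo mkfs.ext4 /dev/vdb             # format (destroys any existing data)
sudo mkdir -p /mnt/testvolume
sudo mount /dev/vdb /mnt/testvolume
df -h /mnt/testvolume               # confirm the mount
```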

Accessing your instance

In order to access the instance you just booted on the cloud, you’ll need to assign a floating IP address to the instance:

nova floating-ip-create
nova add-floating-ip <uuid-of-instance> <new-floating-ip>

and then allow access via SSH (and ping) - you only need to do this once:

neutron security-group-rule-create --protocol icmp \
    --direction ingress $(nova secgroup-list | grep default | awk '{ print $2 }')
neutron security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 \
    --direction ingress $(nova secgroup-list | grep default | awk '{ print $2 }')

After running these commands you should be able to access the instance:

ssh ubuntu@<new-floating-ip>

Switching in a dev charm

Now that you have a running OpenStack deployment on your machine, you can switch in your development changes to one of the charms in the deployment:

juju upgrade-charm --switch <path-to-your-charm> cinder

The charm will be upgraded with your local development changes; alternatively, you can update the bundle.yaml to reference your local charm so that it’s used from the start of cloud deployment.
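For example, a bundle entry for cinder can point at a local checkout like this sketch (the path is a placeholder; depending on your Juju version the top-level bundle key is services or applications):

```yaml
# Fragment of bundle.yaml using a local charm checkout; the path
# below is a placeholder for your own development copy.
services:
  cinder:
    charm: /home/ubuntu/charms/cinder   # local checkout instead of cs:cinder
    num_units: 1
```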

Known Limitations

It is not currently possible to run Cinder with iSCSI/LVM based storage under LXD; this limits block storage options to those that are 100% userspace, such as Ceph.