(Detailed instructions are available below; the overview and configuration sections provide background information.)
This document is extracted from devtest.sh, our automated bring-up story for CI and experimentation.
The seed instance expects to run with its eth0 connected to the outside world, via whatever IP range you choose to set up. You can run NAT, or not, as you choose. This is how we connect to it to run scripts etc., though you can equally log in on its console if you like.
We use flat networking with all machines on one broadcast domain for dev-test.
The eth1 of your seed instance should be connected to your bare metal cloud LAN. The seed VM uses the RFC 5735 TEST-NET-1 range (192.0.2.0/24) for bringing up nodes, and runs its own DHCP etc., so do not connect it to a network shared with other DHCP servers or the like. The instructions in this document create a bridge device (‘brbm’) on your machine to emulate this with virtual machine ‘bare metal’ nodes.
(Note: all of the following commands should be run on your host machine, not inside the seed VM)
Before you start, check that your machine supports hardware virtualization, otherwise performance of the test environment will be poor. We are currently bringing up an LXC-based alternative testing story, which will mitigate this, though the deployed instances will still be full virtual machines, so performance there will still suffer significantly without hardware virtualization.
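One quick way to check on Linux is to count the `vmx` (Intel VT-x) or `svm` (AMD-V) flags in /proc/cpuinfo; this check is a sketch and assumes a Linux host:

```shell
# Count CPU flags advertising hardware virtualization (Linux only).
# A result of 0 means VT-x/AMD-V is absent or disabled in firmware.
grep -cE 'vmx|svm' /proc/cpuinfo || true
```

If this prints 0, look for a virtualization toggle in your BIOS/UEFI settings before continuing.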
As you step through the instructions, several environment variables are set in your shell. These variables will be lost if you exit your shell. After setting variables, use scripts/write-tripleorc to write them out to a file that can be sourced later to restore the environment.
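A sketch of that workflow, assuming write-tripleorc accepts an output path as its argument (check its --help for the exact interface):

```shell
# Save the current devtest environment variables (path/arguments assumed).
write-tripleorc $TRIPLEO_ROOT/tripleorc

# Later, in a fresh shell, restore them:
source $TRIPLEO_ROOT/tripleorc
```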
Also check that an ssh server is running on the host machine and that port 22 is open for connections from virbr0 - VirtPowerManager will boot VMs by sshing into the host machine and issuing libvirt/virsh commands. These instructions use your own user, but you can set up a dedicated user instead if you choose.
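A quick sanity check for the listener, assuming the iproute2 `ss` tool is available (firewall rules for virbr0 still need checking separately):

```shell
# Report whether anything is listening on TCP port 22.
if ss -ltn 2>/dev/null | awk '$4 ~ /:22$/ {found=1} END {exit !found}'; then
    echo "listener on port 22"
else
    echo "no listener on port 22"
fi
```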
The devtest scripts require access to the libvirt system URI (qemu:///system); running against a different libvirt URI may produce errors. Check that the default libvirt connection for your user is qemu:///system. If it is not, export LIBVIRT_DEFAULT_URI to configure it. This configuration is necessary for consistency, as later steps assume qemu:///system is being used.
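Concretely, the export looks like this (after it, `virsh uri` should report the system URI):

```shell
# Point all libvirt clients started from this shell (virsh, the devtest
# scripts) at the system instance rather than the per-user session.
export LIBVIRT_DEFAULT_URI=qemu:///system
echo "$LIBVIRT_DEFAULT_URI"
```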
The VMs created by devtest will use e1000 network device emulation by default. This can be overridden to use a different network driver, such as virtio, for the interfaces instead. virtio provides faster network performance than e1000, but may prove less stable.
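For example (the variable name below is an assumption; check the devtest scripts for the exact override they honour):

```shell
# Ask devtest to emulate virtio NICs instead of e1000.
# Variable name assumed -- verify against the devtest scripts.
export LIBVIRT_NIC_DRIVER=virtio
```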
Choose a base location to put all of the source code.
# exports are ephemeral - new shell sessions, or reboots, and you need
# to redo them, or use $TRIPLEO_ROOT/tripleo-incubator/scripts/write-tripleorc
# and then source the generated tripleorc file.
export TRIPLEO_ROOT=~/tripleo
mkdir -p $TRIPLEO_ROOT
cd $TRIPLEO_ROOT
git clone this repository to your local machine.
git clone https://git.openstack.org/openstack/tripleo-incubator
Nova tools get installed in $TRIPLEO_ROOT/tripleo-incubator/scripts - you need to add that to the PATH.
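That is, prepend the scripts directory to your PATH for the current shell:

```shell
# Make the incubator helper scripts callable by bare name.
export PATH=$TRIPLEO_ROOT/tripleo-incubator/scripts:$PATH
```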
Set HW resources for VMs used as ‘baremetal’ nodes. NODE_CPU is the CPU count, NODE_MEM is memory (MB), NODE_DISK is disk size (GB), and NODE_ARCH is the architecture (i386, amd64). NODE_ARCH is also used for the seed VM. A note on memory sizing: TripleO images in raw form are currently ~2.7GB, which means that a tight node will end up with a thrashing page cache during glance -> local and local -> raw operations. This significantly impairs performance. Of the four minimum VMs for TripleO simulation, two are nova baremetal nodes (seed and undercloud) and these need to be 2G or larger. The hypervisor host in the overcloud also needs to be a decent size or it cannot host more than one VM.
export NODE_CPU=1 NODE_MEM=2048 NODE_DISK=20 NODE_ARCH=i386
For 64-bit it is better to create VMs with more memory and storage because of the increased memory footprint:
export NODE_CPU=1 NODE_MEM=2048 NODE_DISK=20 NODE_ARCH=amd64
Set the distribution used for VMs (fedora, ubuntu).

export NODE_DIST=ubuntu

For Fedora, set SELinux to permissive mode:

export NODE_DIST="fedora selinux-permissive"
Ensure dependencies are installed and required virsh configuration is performed:
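The incubator's scripts directory ships a helper for this step; the invocation below assumes the PATH setup from earlier and is a sketch, not a guaranteed interface:

```shell
# Installs packages and performs the virsh configuration devtest needs.
# Assumes $TRIPLEO_ROOT/tripleo-incubator/scripts is on $PATH.
install-dependencies
```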
Run cleanup-env to ensure VMs and storage pools from previous devtest runs are removed.
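Assuming the incubator scripts directory is on your PATH, that is simply:

```shell
# Tear down VMs and storage pools left over from earlier devtest runs.
cleanup-env
```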
Clone/update the other needed tools which are not available as packages.
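The incubator provides a helper that clones or updates the sibling repositories (diskimage-builder, tripleo-image-elements, and friends) under $TRIPLEO_ROOT; the script name below is taken from its scripts directory and assumed to be on your PATH:

```shell
# Clone/update the tools devtest depends on under $TRIPLEO_ROOT.
pull-tools
```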
You need to make the tripleo image elements accessible to diskimage-builder:
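diskimage-builder discovers elements via the ELEMENTS_PATH environment variable; the path below is an assumption based on where the repositories are checked out under $TRIPLEO_ROOT:

```shell
# Let diskimage-builder find the tripleo image elements (path assumed).
export ELEMENTS_PATH=$TRIPLEO_ROOT/tripleo-image-elements/elements
```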
Configure a network for your test environment. This configures an openvswitch bridge and teaches libvirt about it.
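The incubator scripts directory contains a helper for this; the name below is assumed from that directory and requires sudo/libvirt access:

```shell
# Create the brbm openvswitch bridge and register it with libvirt.
setup-network
```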
Choose the deploy image element to be used. deploy-kexec will relieve you of the need to wait for long hardware POST times; however, it has known stability issues (see https://bugs.launchpad.net/diskimage-builder/+bug/1240933). If stability is preferred over speed, use the deploy image element (the default).
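A sketch of the choice, assuming the scripts read a DEPLOY_IMAGE_ELEMENT variable (verify the variable name against devtest.sh):

```shell
# Stability-first default:
export DEPLOY_IMAGE_ELEMENT=deploy

# Faster but less stable alternative:
# export DEPLOY_IMAGE_ELEMENT=deploy-kexec
```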
Create a deployment ramdisk + kernel. These are used by the seed cloud and the undercloud for deployment to bare metal.
$TRIPLEO_ROOT/diskimage-builder/bin/ramdisk-image-create -a $NODE_ARCH \
    $NODE_DIST $DEPLOY_IMAGE_ELEMENT -o $TRIPLEO_ROOT/deploy-ramdisk 2>&1 | \
    tee $TRIPLEO_ROOT/dib-deploy.log
Setting Up Squid Proxy