For a Linux-based image to have full functionality in an OpenStack Compute cloud, there are a few requirements. For some of these, you can fulfill the requirements by installing the cloud-init package. Read this section before you create your own image to be sure that the image supports the OpenStack features that you plan to use.
Disk partitions and resize root partition on boot (cloud-init)
No hard-coded MAC address information
SSH server running
Access instance using ssh public key (cloud-init)
Process user data and other metadata (cloud-init)
Paravirtualized Xen support in Linux kernel (Xen hypervisor only with Linux kernel version < 3.0)
Disk partitions and resize root partition on boot (cloud-init)¶
When you create a Linux image, you must decide how to partition the disks. The choice of partition method can affect the resizing functionality, as described in the following sections.
The size of the disk in a virtual machine image is determined
when you initially create the image.
However, OpenStack lets you launch instances with different size
drives by specifying different flavors.
For example, if your image was created with a 5 GB disk, and you
launch an instance with a flavor that specifies a 20 GB root disk,
the resulting virtual machine instance has, by default,
a primary disk size of 20 GB. When the disk for an instance is
resized up, zeros are just added to the end.
Your image must be able to resize its partitions on boot to match the size requested by the user. Otherwise, after the instance boots, you must manually resize the partitions to access the additional storage when the disk size associated with the flavor exceeds the disk size with which your image was created.
Xen: one ext3/ext4 partition (no LVM)¶
If you use the OpenStack XenAPI driver, the Compute service automatically adjusts the partition and file system for your instance on boot. Automatic resize occurs if the following conditions are all true:
auto_disk_config=True is set as a property on the image in the image registry.
The disk on the image has only one partition.
The file system on the one partition is ext3 or ext4.
Therefore, if you use Xen, we recommend that when you create your images, you create a single ext3 or ext4 partition (not managed by LVM). Otherwise, read on.
Non-Xen with cloud-init/cloud-tools: one ext3/ext4 partition (no LVM)¶
You must configure these items for your image:
The partition table for the image describes the original size of the image.
The file system for the image fills the original size of the image.
Then, during the boot process, you must:
Modify the partition table to make it aware of the additional space:
If you do not use LVM, you must modify the table to extend the existing root partition to encompass this additional space.
If you use LVM, you can add a new LVM entry to the partition table, create a new LVM physical volume, add it to the volume group, and extend the logical partition with the root volume.
Resize the root volume file system.
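For a non-LVM image, these two boot-time steps reduce to a pair of commands. The sketch below is a dry run that only records and prints the commands it would run; the device names (/dev/vda, partition 1) are assumptions, and growpart comes from the cloud-utils package. Change run() as noted in the comment to execute the commands for real.

```shell
#!/bin/sh
# Dry-run sketch of the boot-time resize for a single-partition, non-LVM image.
# Device names (/dev/vda, partition 1) are assumptions; adjust for your image.
PLAN=""
run() { PLAN="${PLAN}+ $*
"; }   # replace with: run() { "$@"; } to really execute

run growpart /dev/vda 1    # extend partition 1 to fill the enlarged disk
run resize2fs /dev/vda1    # grow the ext3/ext4 file system into the new space

printf '%s' "$PLAN"
```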
Depending on your distribution, the simplest way to support this is to install in your image:
the cloud-init package,
the cloud-utils package, which, on Ubuntu and Debian, also contains the growpart tool for extending partitions,
if you use Fedora, CentOS 7, or RHEL 7, the cloud-utils-growpart package, which contains the growpart tool for extending partitions,
if you use Ubuntu or Debian, the cloud-initramfs-growroot package, which supports resizing the root partition on the first boot.
With these packages installed, the image performs the root partition resize on boot.
If you cannot install cloud-initramfs-tools, Robert Plestenjak has a GitHub project called linux-rootfs-resize that contains scripts that update a ramdisk by using growpart so that the image resizes properly on boot.
If you can install the cloud-utils and cloud-init packages, we recommend that when you create your images, you create a single ext3 or ext4 partition (not managed by LVM).
Non-Xen without cloud-init/cloud-tools: LVM¶
If you cannot install cloud-init and cloud-tools inside of your guest, and you want to support resize, you must write a script that your image runs on boot to modify the partition table.
In this case, we recommend using LVM to manage your partitions.
Due to a limitation in the Linux kernel (as of this writing),
you cannot modify a partition table of a raw disk that has
partitions currently mounted, but you can do this for LVM.
Your script must do something like the following:
Detect if any additional space is available on the disk. For example, parse the output of
parted /dev/sda --script "print free".
Create a new LVM partition with the additional space. For example,
parted /dev/sda --script "mkpart lvm ...".
Create a new physical volume. For example, pvcreate /dev/sda6.
Extend the volume group with this physical partition. For example,
vgextend vg00 /dev/sda6.
Extend the logical volume containing the root partition by the amount of space. For example,
lvextend /dev/mapper/node-root /dev/sda6.
Resize the root file system. For example, resize2fs /dev/mapper/node-root.
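Put together, the script's actions look like the following dry-run sketch, which only records and prints the commands it would run. The device and volume names (/dev/sda, /dev/sda6, vg00, node-root) follow the examples above, and the mkpart bounds are placeholders you must compute from the parted free-space output; change run() as noted to execute the commands for real.

```shell
#!/bin/sh
# Dry-run sketch of a boot-time LVM resize script; prints commands instead of
# executing them. Names and sizes are the examples from the text, not real values.
PLAN=""
run() { PLAN="${PLAN}+ $*
"; }   # replace with: run() { "$@"; } to really execute

run parted /dev/sda --script "print free"            # 1. detect unused space
run parted /dev/sda --script "mkpart lvm 5GB 100%"   # 2. new partition (bounds assumed)
run pvcreate /dev/sda6                               # 3. new physical volume
run vgextend vg00 /dev/sda6                          # 4. grow the volume group
run lvextend /dev/mapper/node-root /dev/sda6         # 5. grow the root logical volume
run resize2fs /dev/mapper/node-root                  # 6. grow the root file system

printf '%s' "$PLAN"
```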
You do not need a
/boot partition unless your image is an older
Linux distribution that requires that
/boot is not managed by LVM.
No hard-coded MAC address information¶
You must remove the network persistence rules in the image because they cause the network interface in the instance to come up as an interface other than eth0. This is because your image has a record of the MAC address of the network interface card when it was first installed, and this MAC address is different each time the instance boots. You should alter the following files:
Replace /etc/udev/rules.d/70-persistent-net.rules with an empty file (it contains network persistence rules, including the MAC address).
Replace /lib/udev/rules.d/75-persistent-net-generator.rules with an empty file (this generates the file above).
Remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0 on Fedora-based images.
If you delete the network persistent rules files,
you may get a
udev kernel warning at boot time,
which is why we recommend replacing them with empty files instead.
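The three changes above can be scripted. The sketch below demonstrates them on a scratch directory tree that it creates itself, with made-up sample contents; when preparing a real image, point ROOT at the mounted image root instead.

```shell
#!/bin/sh
# Demonstration on a scratch tree; set ROOT to your mounted image root for real use.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/udev/rules.d" "$ROOT/lib/udev/rules.d" \
         "$ROOT/etc/sysconfig/network-scripts"
# Fabricated sample contents standing in for what an installed image would have:
echo 'SUBSYSTEM=="net", ATTR{address}=="52:54:00:12:34:56", NAME="eth0"' \
    > "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
printf 'DEVICE=eth0\nHWADDR=52:54:00:12:34:56\nONBOOT=yes\n' \
    > "$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0"

# Replace both rules files with empty files (safer than deleting them):
: > "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
: > "$ROOT/lib/udev/rules.d/75-persistent-net-generator.rules"
# Remove the HWADDR line on Fedora-based images:
sed -i '/^HWADDR=/d' "$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0"
```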
Ensure ssh server runs¶
You must install an ssh server into the image and ensure
that it starts up on boot, or you cannot connect to your
instance by using ssh when it boots inside of OpenStack.
This package is typically called openssh-server.
In general, we recommend that you disable any firewalls inside of your image and use OpenStack security groups to restrict access to instances. The reason is that having a firewall installed on your instance can make it more difficult to troubleshoot networking issues if you cannot connect to your instance.
Access instance by using ssh public key (cloud-init)¶
The typical way that users access virtual machines running on OpenStack is to ssh using public key authentication. For this to work, your virtual machine image must be configured to download the ssh public key from the OpenStack metadata service or config drive, at boot time.
If both the XenAPI agent and cloud-init are present in an image, cloud-init handles ssh-key injection.
Use cloud-init to fetch the public key¶
The cloud-init package automatically fetches the public key
from the metadata server and places the key in an account.
The account varies by distribution:
on Ubuntu-based virtual machines, the account is called ubuntu;
on Fedora-based virtual machines, the account is called fedora;
and on CentOS-based virtual machines, the account is called centos.
You can change the name of the account used by cloud-init
by editing the /etc/cloud/cloud.cfg file and adding a line
with a different user. For example, to configure cloud-init
to put the key in an account named admin, use the following syntax
in the configuration file:

users:
  - name: admin
    (...)
Write a custom script to fetch the public key¶
If you are unable or unwilling to install cloud-init inside
the guest, you can write a custom script to fetch the public key
and add it to a user account.
To fetch the ssh public key and add it to the root account,
edit the /etc/rc.local file and add the following lines
before the line touch /var/lock/subsys/local.
This code fragment is taken from the
rackerjoe oz-image-build CentOS 6 template.
if [ ! -d /root/.ssh ]; then
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
fi

# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
  curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
  if [ $? -eq 0 ]; then
    cat /tmp/metadata-key >> /root/.ssh/authorized_keys
    chmod 0600 /root/.ssh/authorized_keys
    restorecon /root/.ssh/authorized_keys
    rm -f /tmp/metadata-key
    echo "Successfully retrieved public key from instance metadata"
    echo "*****************"
    echo "AUTHORIZED KEYS"
    echo "*****************"
    cat /root/.ssh/authorized_keys
    echo "*****************"
  else
    FAILED=`expr $FAILED + 1`
    if [ $FAILED -ge $ATTEMPTS ]; then
      echo "Failed to retrieve public key from instance metadata after $FAILED attempts, quitting"
      break
    fi
    echo "Could not retrieve public key from instance metadata (attempt #$FAILED/$ATTEMPTS), retrying in 5 seconds..."
    sleep 5
  fi
done
Some VNC clients replace : (colon) with ; (semicolon) and _ (underscore) with - (hyphen). If editing a file over a VNC session, make sure it is http: not http; and authorized_keys not authorized-keys.
Process user data and other metadata (cloud-init)¶
In addition to the ssh public key, an image might need additional information from OpenStack, such as user data that the user submitted when requesting the image. For example, you might want to set the host name of the instance when it is booted. Or, you might wish to configure your image so that it executes user data content as a script on boot.
You can access this information through the metadata service or by referring to Store metadata on the configuration drive. As the OpenStack metadata service is compatible with version 2009-04-04 of the Amazon EC2 metadata service, consult the Amazon EC2 documentation on Using Instance Metadata for details on how to retrieve the user data.
The easiest way to support this type of functionality is
to install the
cloud-init package into your image,
which is configured by default to treat user data as
an executable script, and sets the host name.
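For example, a user-data payload that begins with #! is executed as a script on first boot. The sketch below only drops a marker file so the effect is visible; the marker path is a made-up example, and a real script might set the host name or configure services instead.

```shell
#!/bin/sh
# Minimal user-data script (a sketch): cloud-init runs this once on first boot
# because the payload starts with "#!". The marker path is a hypothetical example.
MARKER=${MARKER:-/tmp/firstboot-marker}
echo "configured on first boot" > "$MARKER"
```

You pass such a file when launching the instance, for example with the --user-data option of the openstack server create command.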
Ensure image writes boot log to console¶
You must configure the image so that the kernel writes
the boot log to the
ttyS0 device. In particular, the
console=tty0 console=ttyS0,115200n8 arguments must be passed to
the kernel on boot.
If your image uses grub2 as the boot loader, there should be a line in the grub configuration file, /boot/grub/grub.cfg, which looks something like this:
linux /boot/vmlinuz-3.2.0-49-virtual root=UUID=6d2231e4-0975-4f35-a94f-56738c1a8150 ro console=tty0 console=ttyS0,115200n8
If console=tty0 console=ttyS0,115200n8 does not appear, you must
modify your grub configuration. In general, you should not update the
grub.cfg directly, since it is automatically generated.
Instead, you should edit the /etc/default/grub file and modify the
value of the GRUB_CMDLINE_LINUX_DEFAULT variable:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"
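This edit can be automated with sed on the GRUB_CMDLINE_LINUX_DEFAULT variable (the standard variable name on grub2 systems). The sketch below works on a scratch file it creates, so it is safe to run as-is; point GRUB_FILE at /etc/default/grub inside your image for real use.

```shell
#!/bin/sh
# Append the console arguments to GRUB_CMDLINE_LINUX_DEFAULT.
# Demonstrated on a scratch file; set GRUB_FILE to your image's /etc/default/grub.
GRUB_FILE=${GRUB_FILE:-$(mktemp)}
[ -s "$GRUB_FILE" ] || echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$GRUB_FILE"
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 console=tty0 console=ttyS0,115200n8"/' "$GRUB_FILE"
cat "$GRUB_FILE"
```

Remember that the grub configuration must still be regenerated afterwards, as described next.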
Next, update the grub configuration. On Debian-based operating systems such as Ubuntu, run this command:

# update-grub
On Fedora-based systems, such as RHEL and CentOS, and on openSUSE, run this command:
# grub2-mkconfig -o /boot/grub2/grub.cfg
Paravirtualized Xen support in the kernel (Xen hypervisor only)¶
Prior to Linux kernel version 3.0, the mainline branch of the Linux kernel did not have support for paravirtualized Xen virtual machine instances (what Xen calls DomU guests). If you are running the Xen hypervisor with paravirtualization, and you want to create an image for an older Linux distribution that has a pre-3.0 kernel, you must ensure that the image boots a kernel that has been compiled with Xen support.
Manage the image cache¶
Use options in the nova.conf file to control whether, and for how long,
unused base images are stored in /var/lib/nova/instances/_base/.
If you have configured live migration of instances, all your compute
nodes share one common /var/lib/nova/instances/ directory.
For information about the libvirt images in OpenStack, see The life of an OpenStack libvirt image from Pádraig Brady.
Configuration option = Default value

preallocate_images = none
(StrOpt) VM image preallocation mode: none => no storage provisioning is done up front; space => storage is fully allocated at instance start.

remove_unused_base_images = True
(BoolOpt) Should unused base images be removed? When set to True, the interval at which base images are removed is set with the following two settings. If set to False, base images are never removed by Compute.

remove_unused_original_minimum_age_seconds = 86400
(IntOpt) Unused unresized base images younger than this are not removed. Default is 86400 seconds, or 24 hours.

remove_unused_resized_minimum_age_seconds = 3600
(IntOpt) Unused resized base images younger than this are not removed. Default is 3600 seconds, or one hour.
To see how the settings affect the deletion of a running instance, check the directory where the images are stored:
# ls -lash /var/lib/nova/instances/_base/
In the /var/log/compute/compute.log file, look for the identifier:
2012-02-18 04:24:17 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3 /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removing base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3
Because 86400 seconds (24 hours) is the default time for
remove_unused_original_minimum_age_seconds,
you can either wait for that time interval to see the base image
removed, or set the value to a shorter time period in the nova.conf file.
Restart all nova services after changing a setting in the nova.conf file.
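For reference, a minimal nova.conf fragment exercising these settings might look like the following sketch; the option names are the standard libvirt image-cache options, but verify them against the configuration reference for your release:

```ini
[DEFAULT]
# Remove unused base images, checking ages against the two thresholds below.
remove_unused_base_images = True
# Keep unused, unresized base images younger than 1 hour (default is 86400).
remove_unused_original_minimum_age_seconds = 3600
# Keep unused, resized base images younger than 10 minutes (default is 3600).
remove_unused_resized_minimum_age_seconds = 600
```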