Xen via libvirt

OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be integrated with OpenStack Compute via the libvirt toolstack or via the XAPI toolstack. This section describes how to set up OpenStack Compute with Xen and libvirt. For information on how to set up Xen with XAPI refer to XenServer (and other XAPI based Xen variants).

Installing Xen with libvirt

At this stage we recommend using the baseline that we use for the Xen Project OpenStack CI Loop, which contains the most recent stability fixes to both Xen and libvirt.

Xen 4.5.1 (or newer) and libvirt 1.2.15 (or newer) contain the minimum required OpenStack improvements for Xen. Although libvirt 1.2.15 works with Xen, libvirt 1.3.2 or newer is recommended. The necessary Xen changes have also been backported to the Xen 4.4.3 stable branch. Check whether the relevant versions of Xen and libvirt are available as installable packages for the Linux or FreeBSD distribution you intend to use as Dom 0.

The latest releases of Xen and libvirt packages that fulfil the above minimum requirements for the various openSUSE distributions can always be found and installed from the Open Build Service Virtualization project. To install these latest packages, add the Virtualization repository to your software management stack and get the newest packages from there. More information about the latest Xen and libvirt packages is available here and here.
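
For example, on openSUSE the repository can be added and the packages installed with zypper. The repository platform string and the package names below are illustrative assumptions; check the Open Build Service Virtualization project for the values that match your release:

# zypper addrepo obs://Virtualization/openSUSE_Leap_42.1 Virtualization
# zypper refresh
# zypper install xen xen-tools libvirt-daemon-xen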

Alternatively, it is possible to use the Ubuntu LTS 14.04 Xen package 4.4.1-0ubuntu0.14.04.4 (Xen 4.4.1) and apply the patches outlined here. You can also use the Ubuntu LTS 14.04 libvirt package libvirt_1.2.2-0ubuntu13.1.7 (libvirt 1.2.2) as a baseline and update it to libvirt 1.2.15, or to 1.2.14 with the patches outlined here applied. Note that this will require partly rebuilding these packages from source.

For further information and latest developments, you may want to consult the Xen Project’s mailing lists for OpenStack related issues and questions.

Configuring Xen with libvirt

To enable Xen via libvirt, ensure the following options are set in /etc/nova/nova.conf on all hosts running the nova-compute service.

[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = xen
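
With these options in place and the nova-compute service restarted, you can verify that libvirt on the compute node can reach the Xen toolstack, for example with virsh (run as root on the compute host):

# virsh -c xen:/// version
# virsh -c xen:/// list --all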

Additional configuration options

Use the following as a guideline for configuring Xen for use in OpenStack:

  1. Dom0 memory: Set it between 1GB and 4GB by adding the following parameter to the Xen Boot Options in the grub.conf file.

    dom0_mem=1024M
    

    Note

    The above memory limits are suggestions and should be based on the available compute host resources. For large hosts that will run many hundreds of instances, the suggested values may need to be higher.

    Note

    The location of the grub.conf file depends on the host Linux distribution that you are using. Refer to the distro documentation for more details (see Dom 0 for more resources). A sketch of how to set these boot options on a GRUB 2 based system is shown after this list.

  2. Dom0 vcpus: Set the number of Dom0 virtual CPUs to 4 and employ CPU pinning by adding the following parameters to the Xen Boot Options in the grub.conf file (see the boot options sketch after this list).

    dom0_max_vcpus=4 dom0_vcpus_pin
    

    Note

    The above virtual CPU limits are suggestions and should be based on the available compute host resources. For large hosts that will run many hundreds of instances, the suggested values may need to be higher.

  3. PV vs HVM guests: A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). The virtualization mode determines the interaction between Xen, Dom 0, and the guest VM’s kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and Dom 0. The choice of virtualization mode determines performance characteristics. For an overview of Xen virtualization modes, see Xen Guest Types.

    In OpenStack, customer VMs may run in either PV or HVM mode. The mode is a property of the operating system image used by the VM, and is changed by adjusting the image metadata stored in the Image service. The image metadata can be changed using the openstack commands.

    To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use openstack to set the vm_mode property to hvm:

    $ openstack image set --property vm_mode=hvm IMAGE
    

    To choose PV mode, which is supported by NetBSD, FreeBSD and Linux, use openstack to set the vm_mode property to xen:

    $ openstack image set --property vm_mode=xen IMAGE
    

    Note

    The default for virtualization mode in nova is PV mode.

  4. Image formats: Xen supports raw, qcow2 and vhd image formats. For more information on image formats, refer to the OpenStack Virtual Image Guide and the Storage Options Guide on the Xen Project Wiki.

  5. Image metadata: In addition to the vm_mode property discussed above, the hypervisor_type property is another important component of the image metadata, especially if your cloud contains mixed hypervisor compute nodes. Setting the hypervisor_type property allows the nova scheduler to select a compute node running the specified hypervisor when launching instances of the image. Image metadata such as vm_mode, hypervisor_type, architecture, and others can be set when importing the image to the Image service (see the upload sketch after this list). The metadata can also be changed using the openstack command:

    $ openstack image set --property hypervisor_type=xen --property vm_mode=hvm IMAGE
    

    For more information on image metadata, refer to the OpenStack Virtual Image Guide.

  6. Libguestfs file injection: OpenStack compute nodes can use libguestfs to inject files into an instance’s image prior to launching the instance. libguestfs uses libvirt’s QEMU driver to start a qemu process, which is then used to inject files into the image. When using libguestfs for file injection, the compute node must have the libvirt qemu driver installed, in addition to the Xen driver. In RPM based distributions, the qemu driver is provided by the libvirt-daemon-qemu package. In Debian and Ubuntu, the qemu driver is provided by the libvirt-bin package.
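
As referenced in items 1 and 2 above, the Dom0 boot options are configured through the bootloader. A minimal sketch, assuming a GRUB 2 based Debian or Ubuntu Dom0 where the Xen packages read the GRUB_CMDLINE_XEN_DEFAULT variable (other distributions use different files, so consult your distro documentation). In /etc/default/grub (or the file provided by your Xen packages), set:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M dom0_max_vcpus=4 dom0_vcpus_pin"

Then regenerate the bootloader configuration and reboot into the new settings:

$ sudo update-grub
$ sudo reboot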
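
As mentioned in item 5, the same properties can also be supplied when the image is first uploaded, and inspected afterwards. A sketch, where the image name and file are placeholders:

$ openstack image create --disk-format qcow2 --container-format bare \
    --property hypervisor_type=xen --property vm_mode=xen \
    --file ./my-image.qcow2 my-pv-image

$ openstack image show my-pv-image -c properties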

To customize the libvirt driver, use the configuration option settings documented in Description of libvirt configuration options.

Troubleshoot Xen with libvirt

Important log files: When an instance fails to start, or when you come across other issues, you should first consult the relevant log files on the compute node.
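
For example, assuming default logging locations (the paths may differ on your distribution), the nova-compute log and the libvirt libxl driver log are usually the first places to look:

$ sudo tail -f /var/log/nova/nova-compute.log
$ sudo tail -f /var/log/libvirt/libxl/libxl-driver.log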

If you need further help you can ask questions on the mailing lists xen-users@, wg-openstack@ or raise a bug against Xen.

Known issues

  • Networking: Xen via libvirt is currently only supported with nova-network. Fixes for a number of bugs are currently being worked on to make sure that Xen via libvirt will also work with OpenStack Networking (neutron).
  • Live migration: Live migration has been supported in the libvirt libxl driver since version 1.2.5. However, there were a number of issues when it was used with OpenStack, in particular with libvirt migration protocol compatibility. Most of these issues are addressed in libvirt 1.3.0, but we recommend libvirt 1.3.2, which is fully supported and tested as part of the Xen Project CI loop; it addresses live migration monitoring related issues and adds support for the peer-to-peer migration mode, which nova relies on.
  • Live migration monitoring: On compute nodes running Kilo or later, live migration monitoring relies on libvirt APIs that are only implemented from libvirt version 1.3.1 onwards. When live migrating with an older libvirt, the migration monitoring thread crashes and leaves the instance in the “MIGRATING” state. If you experience this issue and are running a libvirt version released before 1.3.1, make sure you backport libvirt commits ad71665 and b7b4391 from upstream.

Additional information and resources

The following section contains links to other useful resources.

Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.