 Xen, XenAPI, XenServer, and XCP

[Warning]This section needs help

This section is low quality and contains out-of-date information. The Documentation Team is currently looking for individuals with experience with the hypervisor to re-document Xen integration with OpenStack.

This section describes Xen, XenAPI, XenServer, and XCP, their differences, and how to use them with OpenStack. After you understand how the Xen and KVM architectures differ, you can determine when to use each architecture in your OpenStack cloud.

 Xen terminology

Xen. A hypervisor that provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by Xen.org, a cross-industry organization.

Xen is a component of many different products and projects. The hypervisor itself is very similar across all these projects, but the way that it is managed can be different, which can cause confusion if you're not clear which tool stack you are using. Make sure you know what tool stack you want before you get started.

Xen Cloud Platform (XCP). An open source (GPLv2) tool stack for Xen. It is designed specifically as a platform for enterprise and cloud computing, and is well integrated with OpenStack. XCP is available both as a binary distribution, installed from an ISO, and from Linux distributions, such as xcp-xapi in Ubuntu. The current versions of XCP available in Linux distributions do not yet include all the features available in the binary distribution of XCP.

Citrix XenServer. A commercial product. It is based on XCP, and exposes the same tool stack and management API. As an analogy, think of XenServer being based on XCP in the way that Red Hat Enterprise Linux is based on Fedora. XenServer has a free version (which is very similar to XCP) and paid-for versions with additional features enabled. Citrix provides support for XenServer, but as of July 2012, they do not provide any support for XCP. For a comparison between these products see the XCP Feature Matrix.

Both XenServer and XCP include Xen, Linux, and the primary control daemon known as xapi.

The API shared between XCP and XenServer is called XenAPI. OpenStack usually refers to XenAPI, to indicate that the integration works equally well on XCP and XenServer. Sometimes, a careless person will refer to XenServer specifically, but you can be reasonably confident that anything that works on XenServer will also work on the latest version of XCP. Read the XenAPI Object Model Overview for definitions of XenAPI-specific terms such as SR, VDI, VIF, and PIF.
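
If you have access to a XenServer or XCP host, you can inspect examples of these objects with the standard xe CLI in dom0:

# xe sr-list
# xe vdi-list
# xe vif-list
# xe pif-list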

 Privileged and unprivileged domains

A Xen host runs a number of virtual machines, VMs, or domains (the terms are synonymous on Xen). One of these is in charge of running the rest of the system, and is known as "domain 0," or "dom0." It is the first domain to boot after Xen, and owns the storage and networking hardware, the device drivers, and the primary control software. Any other VM is unprivileged, and is known as a "domU" or "guest". All customer VMs are unprivileged, of course, but you should note that on Xen, the OpenStack control software (nova-compute) also runs in a domU. This gives a level of security isolation between the privileged system software and the OpenStack software (much of which is customer-facing). This architecture is described in more detail later.
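
For example, you can identify dom0 on a XenServer or XCP host by listing the control domain with the standard xe CLI; dom0 appears with the name-label "Control domain on host":

# xe vm-list is-control-domain=true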

There is an ongoing project to split domain 0 into multiple privileged domains known as driver domains and stub domains. This would give even better separation between critical components. This technology is what powers Citrix XenClient RT, and is likely to be added into XCP in the next few years. However, the current architecture just has three levels of separation: dom0, the OpenStack domU, and the completely unprivileged customer VMs.

 Paravirtualized versus hardware virtualized domains

A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). This refers to the interaction between Xen, domain 0, and the guest VM's kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and domain 0; this gives them better performance characteristics. HVM guests are not aware of their environment, and the hardware has to pretend that they are running on an unvirtualized machine. HVM guests do not need to modify the guest operating system, which is essential when running Windows.

In OpenStack, customer VMs may run in either PV or HVM mode. However, the OpenStack domU (that's the one running nova-compute) must be running in PV mode.
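
To check which mode an existing guest runs in, you can query its HVM-boot-policy parameter with the xe CLI (the uuid below is a placeholder); an empty value indicates a PV guest, while a value such as "BIOS order" indicates HVM:

# xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-policy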

 XenAPI Deployment Architecture

When you deploy OpenStack on XCP or XenServer, you get a deployment with the following characteristics.

Key things to note:

  • The hypervisor: Xen

  • Domain 0: runs xapi and some small pieces from OpenStack (some xapi plug-ins and network isolation rules). The majority of this is provided by XenServer or XCP (or by you, if you use Kronos).

  • OpenStack VM: The nova-compute code runs in a paravirtualized virtual machine, running on the host under management. Each host runs a local instance of nova-compute. It often also runs nova-network (depending on your network mode). In this case, nova-network manages the addresses given to the tenant VMs through DHCP (a sample FlatDHCP configuration follows this list).

  • Nova uses the XenAPI Python library to talk to xapi, and it uses the Management Network to reach from the domU to dom0 without leaving the host.
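
As a sketch of what the nova-network part might look like, the following nova.conf options configure FlatDHCP mode; the interface and bridge names are illustrative and depend on your environment:

network_manager = nova.network.manager.FlatDHCPManager
flat_interface = eth1
flat_network_bridge = xapi1
public_interface = eth3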

Some notes on the networking:

  • This deployment assumes FlatDHCP networking (the DevStack default).

  • There are three main OpenStack Networks:

    • Management network - RabbitMQ, MySQL, and other services. Please note that the VM images are downloaded by the XenAPI plug-ins, so make sure that the images can be downloaded through the management network. This usually means binding those services to the management interface (see the example after this list).

    • Tenant network - controlled by nova-network. The parameters of this network depend on the networking model selected (Flat, Flat DHCP, VLAN).

    • Public network - floating IPs, public API endpoints.

  • The networks shown here must be connected to the corresponding physical networks within the data center. In the simplest case, three individual physical network cards could be used. It is also possible to use VLANs to separate these networks. Note that the selected configuration must be in line with the networking model selected for the cloud. (In the case of VLAN networking, the physical channels have to be able to forward the tagged traffic.)
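
For example, to bind the Image Service API to the management interface, you might set bind_host in glance-api.conf; the address below is an illustrative management IP:

bind_host = 192.168.1.10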

 XenAPI pools

The host-aggregates feature enables you to create pools of XenServer hosts to enable live migration when using shared storage. However, the feature does not configure the shared storage itself.

 Further reading

More resources for learning about Xen are available on the Xen.org website and wiki.

 Install XenServer and XCP

Before you can run OpenStack with XCP or XenServer, you must install the software on an appropriate server.

[Note]Note

Xen is a type 1 hypervisor: When your server starts, Xen is the first software that runs. Consequently, you must install XenServer or XCP before you install the operating system where you want to run OpenStack code. The OpenStack services then run in a virtual machine that you install on top of XenServer.

Before you can install your system, decide whether to install a free or paid edition of Citrix XenServer or Xen Cloud Platform from Xen.org. Download the software from the Citrix or Xen.org download pages.

When you install many servers, you might find it easier to perform PXE boot installations of XenServer or XCP. You can also package any post-installation changes that you want to make to your XenServer by creating your own XenServer supplemental pack.

You can also install the xcp-xapi package on Debian-based distributions to get XCP. However, this is not as mature or feature-complete as the binary distributions mentioned above. Installing the package modifies your boot loader to boot Xen first, with your existing OS running on top of Xen as dom0. The xapi daemon runs in dom0. Find more details at http://wiki.xen.org/wiki/Project_Kronos.
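
On a Debian-based system, the installation is, as a minimal sketch (package availability varies by release):

$ sudo apt-get install xcp-xapi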

[Important]Important

A storage repository (SR) is a XenAPI-specific term relating to the physical storage where virtual disks are stored. Make sure you use the EXT type of SR. Features that require access to VHD files (such as copy on write, snapshot, and migration) do not work when you use the LVM SR.

On the XenServer/XCP installation screen, choose the XenDesktop Optimized option. If you use an answer file, make sure you use srtype="ext" in the installation tag of the answer file.
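
After installation, you can verify the SR type from dom0 with the standard xe CLI; the local SR should report the type ext:

# xe sr-list params=name-label,type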

 Post-installation steps

Complete these steps to install OpenStack in your XenServer system:

  1. For resize and migrate functionality, complete the changes described in the Configure resize section in the OpenStack Configuration Reference.

  2. Install the VIF isolation rules to help prevent MAC and IP address spoofing.

  3. Install the XenAPI plug-ins. See the following section.

  4. To support AMI type images, you must set up a /boot/guest symlink/directory in Dom0. For detailed instructions, see the section called "Prepare for AMI type images" below.

  5. To support resize/migration, set up an ssh trust relation between your XenServer hosts, and ensure /images is properly set up. For details, see the section called "Modify Dom0 for resize/migration support" below.

  6. Create a Paravirtualized virtual machine that can run the OpenStack compute code.

  7. Install and configure nova-compute in the above virtual machine.

For more information, see how DevStack performs the last three steps for developer deployments. For more information about DevStack, see Getting Started With XenServer and Devstack (https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md). For more information about the first step, see Multi Tenancy Networking Protections in XenServer (https://github.com/openstack/nova/blob/master/plugins/xenserver/doc/networking.rst). For information about how to install the XenAPI plug-ins, see XenAPI README (https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/README).

 Install the XenAPI plug-ins

When you use Xen as the hypervisor for OpenStack Compute, you can install a Python script (or any executable) on the host side, and call it through the XenAPI. These scripts are called plug-ins. The XenAPI plug-ins live in the nova code repository. The plug-ins have to be copied to Dom0, to the appropriate directory where xapi can find them. There are several options for the installation. The important thing is to ensure that the version of the plug-ins is in line with the nova installation by installing plug-ins only from a matching nova repository.

 Manually install the plug-in
  1. Create temporary files/directories:

    $ NOVA_ZIPBALL=$(mktemp)
    $ NOVA_SOURCES=$(mktemp -d)
  2. Get the source from GitHub. The example assumes the master branch is used. Amend the URL to match the version being used:

    $ wget -qO "$NOVA_ZIPBALL" https://github.com/openstack/nova/archive/master.zip
    $ unzip "$NOVA_ZIPBALL" -d "$NOVA_SOURCES"

    Alternatively, to use the official Ubuntu packages, run the following commands to get the nova code base:

    $ ( cd $NOVA_SOURCES && apt-get source python-nova --download-only )
    $ ( cd $NOVA_SOURCES && for ARCHIVE in *.tar.gz; do tar -xzf $ARCHIVE; done )
  3. Copy the plug-ins to the hypervisor (see the verification example after these steps):

    $ PLUGINPATH=$(find $NOVA_SOURCES -path '*/xapi.d/plugins' -type d -print)
    $ tar -czf - -C "$PLUGINPATH" ./ | ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins/
  4. Remove the temporary files/directories:

    $ rm "$NOVA_ZIPBALL"
    $ rm -rf "$NOVA_SOURCES" 
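
To verify the installation, list the plug-ins on the hypervisor and make sure they are executable; the following commands are standard, and the chmod is only needed if the execute bits were lost during the copy:

# ls -l /etc/xapi.d/plugins/
# chmod a+x /etc/xapi.d/plugins/*
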
 Package a XenServer supplemental pack

Follow these steps to build RPM packages from the nova sources and package them as a XenServer supplemental pack.

  1. Create RPM packages. Given that you have the nova sources (retrieved using one of the methods mentioned in the section called “Manually install the plug-in”), run the following commands:

    $ cd nova/plugins/xenserver/xenapi/contrib
    $ ./build-rpm.sh

    These commands leave an .rpm file in the rpmbuild/RPMS/noarch/ directory.

  2. Pack the RPM packages into a Supplemental Pack, using the XenServer DDK (issue the following command on the XenServer DDK virtual appliance, after the produced rpm file has been copied over):

    $ /usr/bin/build-supplemental-pack.sh \
    > --output=output_directory \
    > --vendor-code=novaplugin \
    > --vendor-name=openstack \
    > --label=novaplugins \
    > --text="nova plugins" \
    > --version=0 \
    > full_path_to_rpmfile

    This command produces an .iso file in the output directory specified. Copy that file to the hypervisor.

  3. Install the Supplemental Pack. Log in to the hypervisor, and issue:

    # xe-install-supplemental-pack path_to_isofile
 Prepare for AMI type images

To support AMI type images in your OpenStack installation, you must create a /boot/guest directory inside Dom0. The OpenStack VM extracts the kernel and ramdisk from the AKI and ARI images and puts them in this location.

OpenStack maintains the contents of this directory and its size should not increase during normal operation. However, in case of power failures or accidental shutdowns, some files might be left over. To prevent these files from filling the Dom0 disk, set up this directory as a symlink that points to a subdirectory of the local SR.

Run these commands in Dom0 to achieve this setup:

# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels"
# mkdir -p "$LOCALPATH"
# ln -s "$LOCALPATH" /boot/guest
 Modify Dom0 for resize/migration support

To resize servers with XenServer and XCP, you must:

  • Establish a root trust between all hypervisor nodes of your deployment:

    To do so, generate an ssh key pair with the ssh-keygen command, and ensure that the authorized_keys file of each dom0 (located in /root/.ssh/authorized_keys) contains the public key (located in /root/.ssh/id_rsa.pub) of every other dom0.
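
    For example, on each dom0 (the remote host name is a placeholder; repeat until every host trusts every other host):

    # ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
    # ssh root@<other-dom0> 'cat >> /root/.ssh/authorized_keys' < /root/.ssh/id_rsa.pub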

  • Provide an /images mount point to the dom0 for your hypervisor:

    Dom0 space is at a premium, so creating a directory in dom0 is potentially dangerous and likely to fail, especially when you resize large servers. At a minimum, symlink /images to your local storage SR. The following instructions work for an English-based installation of XenServer (and XCP), in the case of an ext3-based SR (with which the resize functionality is known to work correctly).

    # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
    # IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images"
    # mkdir -p "$IMG_DIR"
    # ln -s "$IMG_DIR" /images
 Xen boot from ISO

XenServer, through the XenAPI integration with OpenStack, provides a feature to boot instances from an ISO file. To activate the Boot From ISO feature, you must configure the SR elements on the XenServer host, as follows:

  1. Create an ISO-typed SR, such as an NFS ISO library. Using XenCenter is a simple way to do this. You must export an NFS volume from a remote NFS server. Make sure it is exported in read-write mode.

  2. On the compute host, find and record the uuid of the host:

    # xe host-list
  3. Locate the uuid of the NFS ISO library:

    # xe sr-list content-type=iso
  4. Set the uuid and configuration. Even if an NFS mount point is not local, you must specify local-storage-iso.

    # xe sr-param-set uuid=[iso sr uuid] other-config:i18n-key=local-storage-iso
  5. Make sure the host-uuid from xe pbd-list equals the uuid of the host you found previously:

    # xe pbd-list sr-uuid=[iso sr uuid]
  6. You can now add images through the OpenStack Image Service with disk-format=iso, and boot them in OpenStack Compute:

    $ glance image-create --name=fedora_iso --disk-format=iso --container-format=bare < Fedora-16-x86_64-netinst.iso
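
    You can then boot an instance from the uploaded image; the flavor and instance name below are examples:

    $ nova boot --image fedora_iso --flavor m1.small iso-instance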

 Xen configuration reference

The following section discusses some commonly changed options in XenServer. The table below provides a complete reference of all configuration options available for configuring Xen with OpenStack.

The recommended way to use Xen with OpenStack is through the XenAPI driver. To enable the XenAPI driver, add the following configuration options to /etc/nova/nova.conf and restart the nova-compute service:

compute_driver = xenapi.XenAPIDriver
xenapi_connection_url = http://your_xenapi_management_ip_address
xenapi_connection_username = root
xenapi_connection_password = your_password

These connection details are used by the OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer management console, to your XenServer or XCP box.

[Note]Note

The xenapi_connection_url is generally the management network IP address of the XenServer. Though it is possible to use the internal network IP address (169.254.0.1) to contact XenAPI, this does not allow live migration between hosts, and other functionalities like host aggregates do not work.

It is possible to manage Xen using libvirt, though this is not well-tested or supported. To experiment with using Xen through libvirt, add the following configuration options to /etc/nova/nova.conf:

compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = xen
 Agent

If you don't have the guest agent on your VMs, it takes a long time for nova to decide that the VM has successfully started. Generally, a large timeout is required for Windows instances, but you may want to tweak agent_version_timeout.
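
For example, to raise the timeout to ten minutes, set the following in nova.conf (the value is in seconds):

agent_version_timeout = 600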

 Firewall

If you use nova-network, iptables-based firewalling is supported:

firewall_driver = nova.virt.firewall.IptablesFirewallDriver

Alternatively, to perform the isolation in Dom0:

firewall_driver = nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver

 VNC proxy address

Assuming you are talking to XenAPI through the host-local management network, and XenServer is on the address 169.254.0.1, you can use the following:

vncserver_proxyclient_address = 169.254.0.1

 Storage

You can specify which Storage Repository to use with nova with the following flag. The default is to use the local storage set up by the default installer:

sr_matching_filter = "other-config:i18n-key=local-storage"

Another good alternative is to use the "default" storage (for example if you have attached NFS or any other shared storage):

sr_matching_filter = "default-sr:true"

[Note]Note

To use a XenServer pool, you must create the pool by using the Host Aggregates feature.

 Xen configuration reference

To customize the Xen driver, use the configuration option settings documented in Table 2.59, “Description of configuration options for xen”.
