 VMware vSphere

 Introduction

OpenStack Compute supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS). This section describes how to configure VMware-based virtual machine images for launch. vSphere versions 4.1 and newer are supported.

The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features.

The following sections describe how to configure the VMware vCenter driver.

 High-level architecture

The following diagram shows a high-level view of the VMware driver architecture:

Figure 2.1. VMware driver architecture


As the figure shows, the OpenStack Compute Scheduler sees three hypervisors that each correspond to a cluster in vCenter. Nova-compute contains the VMware driver. You can run with multiple nova-compute services. While Compute schedules at the granularity of a cluster, the VMware driver inside nova-compute interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement.

The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK images from the Image Service back end store. The dotted line in the figure represents VMDK images being copied from the OpenStack Image Service to the vSphere data store. VMDK images are cached in the data store so the copy operation is only required the first time that the VMDK image is used.

After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in vCenter and can access vSphere advanced features. At the same time, the VM is visible in the OpenStack dashboard and you can manage it as you would any other OpenStack VM. You can perform advanced vSphere operations in vCenter while you configure OpenStack resources such as VMs through the OpenStack dashboard.

The figure does not show how networking fits into the architecture. Both nova-network and the OpenStack Networking Service are supported. For details, see the section called “Networking with VMware vSphere”.

 Configuration overview

To get started with the VMware vCenter driver, complete the following high-level steps:

  1. Configure vCenter correctly. See the section called “Prerequisites and limitations”.

  2. Configure nova.conf for the VMware vCenter driver. See the section called “VMware vCenter driver”.

  3. Load desired VMDK images into the OpenStack Image Service. See the section called “Images with VMware vSphere”.

  4. Configure networking with either nova-network or the OpenStack Networking Service. See the section called “Networking with VMware vSphere”.

 Prerequisites and limitations

Use the following list to prepare a vSphere environment that runs with the VMware vCenter driver:

  1. Copying VMDK files (vSphere 5.1 only). In vSphere 5.1, copying large image files (for example, 12 GB and greater) from Glance can take a long time. To improve performance, VMware recommends that you upgrade to VMware vCenter Server 5.1 Update 1 or later. For more information, see the Release Notes.

  2. DRS. For any cluster that contains multiple ESX hosts, enable DRS and enable fully automated placement.

  3. Shared storage. Only shared storage is supported and data stores must be shared among all hosts in a cluster. It is recommended to remove data stores not intended for OpenStack from clusters being configured for OpenStack.

  4. Clusters and data stores. Do not use OpenStack clusters and data stores for other purposes. If you do, OpenStack displays incorrect usage information.

  5. Networking. The networking configuration depends on the desired networking model. See the section called “Networking with VMware vSphere”.

  6. Security groups. If you use the VMware driver with OpenStack Networking and the NSX plug-in, security groups are supported. If you use nova-network, security groups are not supported.

    [Note]Note

    The NSX plug-in is the only plug-in that is validated for vSphere.

  7. VNC. The port range 5900 - 6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control. For more information about using a VNC client to connect to a virtual machine, see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1246.

    [Note]Note

    In addition to the default VNC port numbers (5900 to 6000) specified in the above document, the following ports are also used: 6101, 6102, and 6105.

    You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB), which is then installed onto the running ESXi host or added to a custom image profile used to install ESXi hosts. For details about how to create a VIB for persisting the firewall configuration modifications, see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007381. A sketch of such a firewall rule definition appears after this list.

  8. Ephemeral Disks. Ephemeral disks are not supported. A future major release will address this limitation.

  9. Injection of SSH keys into compute instances hosted by vCenter is not currently supported.

  10. To use multiple vCenter installations with OpenStack, each vCenter must be assigned to a separate availability zone. This is required as the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations.
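
The firewall change referenced in item 7 is typically expressed as an ESXi firewall service definition. The following is a minimal sketch of such a definition for the VNC port range; the service name and rule id are illustrative, and the authoritative file layout and VIB packaging procedure are described in the VMware Knowledge Base articles cited above:

<ConfigRoot>
  <!-- Illustrative service definition opening TCP ports 5900-6105 for VNC -->
  <service>
    <id>VNC-OpenStack</id>
    <rule id='0000'>
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>
        <begin>5900</begin>
        <end>6105</end>
      </port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>

After placing such a file under /etc/vmware/firewall/ on the host, the rules can be reloaded with esxcli network firewall refresh; without a custom VIB, the change does not survive a reboot.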

 VMware vCenter driver

Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute with vCenter. This recommended configuration enables access through vCenter to advanced vSphere features like vMotion, High Availability, and Dynamic Resource Scheduling (DRS).

 VMwareVCDriver configuration options

When you use the VMwareVCDriver (vCenter versions 5.1 and later) with OpenStack Compute, add the following VMware-specific configuration options to the nova.conf file:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver

[vmware]
host_ip=<vCenter host IP>
host_username=<vCenter username>
host_password=<vCenter password>
cluster_name=<vCenter cluster name>
datastore_regex=<optional datastore regex>

[Note]Note
  • vSphere vCenter versions 5.0 and earlier: You must specify the location of the WSDL files by adding the wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl setting to the above configuration. For more information, see vSphere 5.0 and earlier additional set up.

  • Clusters: The vCenter driver can support multiple clusters. To use more than one cluster, simply add multiple cluster_name lines in nova.conf with the appropriate cluster name. Clusters and data stores used by the vCenter driver should not contain any VMs other than those created by the driver.

  • Data stores: The datastore_regex setting specifies the data stores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". If this line is omitted, Compute uses the first data store returned by the vSphere API. It is recommended not to use this field and instead remove data stores that are not intended for OpenStack.

  • Reserved host memory: The reserved_host_memory_mb option value is 512 MB by default. However, VMware recommends that you set this option to 0 MB because the vCenter driver reports the effective memory available to the virtual machines.
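
Putting these options together, a minimal illustrative nova.conf fragment for a driver that manages two clusters might look like the following (the address, credentials, cluster names, and regex are placeholders):

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver
reserved_host_memory_mb=0

[vmware]
host_ip=192.168.10.5
host_username=administrator
host_password=<password>
cluster_name=cluster1
cluster_name=cluster2
datastore_regex=openstack-.*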

A nova-compute service can control one or more clusters containing multiple ESX hosts, making nova-compute a critical service from a high availability perspective. Because the host that runs nova-compute can fail while the vCenter and ESX still run, you must protect the nova-compute service against host failures.

[Note]Note

Many nova.conf options are relevant to libvirt but do not apply to this driver.

You must complete additional configuration for environments that use vSphere 5.0 and earlier. See the section called “vSphere 5.0 and earlier additional set up”.

 Images with VMware vSphere

The vCenter driver supports images in the VMDK format. Disks in this format can be obtained from VMware Fusion or from an ESX environment. It is also possible to convert other formats, such as qcow2, to the VMDK format using the qemu-img utility. After a VMDK disk is available, load it into the OpenStack Image Service. Then, you can use it with the VMware vCenter driver. The following sections provide additional details on the supported disks and the commands used for conversion and upload.

 Supported image types

Upload images to the OpenStack Image Service in VMDK format. The following VMDK disk types are supported:

  • VMFS Flat Disks (includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image Service, it becomes a preallocated flat disk. This impacts the transfer time from the OpenStack Image Service to the data store when the full preallocated flat disk, rather than the thin disk, must be transferred.

  • Monolithic Sparse disks. Sparse disks get imported from the OpenStack Image Service into ESX as thin provisioned disks. Monolithic Sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats using the qemu-img utility.

The following table shows the vmware_disktype property that applies to each of the supported VMDK disk types:

Table 2.7. OpenStack Image Service disk type settings

vmware_disktype property   VMDK disk type
sparse                     Monolithic Sparse
thin                       VMFS flat, thin provisioned
preallocated (default)     VMFS flat, thick/zeroedthick/eagerzeroedthick

The vmware_disktype property is set when an image is loaded into the OpenStack Image Service. For example, the following command creates a Monolithic Sparse image by setting vmware_disktype to sparse:

$ glance image-create --name="ubuntu-sparse" --disk_format=vmdk \
--container_format=bare --is_public=true \
--property vmware_disktype="sparse" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-sparse.vmdk

Note that specifying thin does not provide any advantage over preallocated with the current version of the driver. Future versions might restore the thin properties of the disk after it is downloaded to a vSphere data store.

 Convert and load images

Using the qemu-img utility, disk images in several formats (such as qcow2) can be converted to the VMDK format.

For example, the following command can be used to convert a qcow2 Ubuntu Precise cloud image:

$ qemu-img convert -f qcow2 ~/Downloads/precise-server-cloudimg-amd64-disk1.img \
-O vmdk precise-server-cloudimg-amd64-disk1.vmdk
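
To confirm the result, the qemu-img utility can also report the format of the converted disk (the file name matches the example above):

$ qemu-img info precise-server-cloudimg-amd64-disk1.vmdk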

VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Precise Ubuntu image after the qemu-img conversion, the command to upload the VMDK disk should be something like:

$ glance image-create --name precise-cloud --is-public=True \
--container-format=bare --disk-format=vmdk \
--property vmware_disktype="sparse" \
--property vmware_adaptertype="ide" < \
precise-server-cloudimg-amd64-disk1.vmdk

Note that the vmware_disktype is set to sparse and the vmware_adaptertype is set to ide in the previous command.

If the image did not come from the qemu-img utility, the vmware_disktype and vmware_adaptertype might be different. To determine the image adapter type from an image file, use the following command and look for the ddb.adapterType= line:

$ head -20 <vmdk file name>
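
For example, in a monolithic sparse VMDK produced by qemu-img, the embedded text descriptor contains a line similar to the following (the value shown is illustrative):

ddb.adapterType = "ide"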

Assuming a preallocated disk type and a SCSI lsiLogic adapter type, the following command uploads the VMDK disk:

$ glance image-create --name="ubuntu-thick-scsi" --disk_format=vmdk \
--container_format=bare --is_public=true \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk

Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller; likewise, disks with one of the SCSI adapter types (such as busLogic or lsiLogic) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.

 Tag VMware images

In a mixed hypervisor environment, OpenStack Compute uses the hypervisor_type tag to match images to the correct hypervisor type. For VMware images, set the hypervisor type to vmware. Other valid hypervisor types include: xen, qemu, kvm, lxc, uml, and hyperv.

$ glance image-create --name="ubuntu-thick-scsi" --disk_format=vmdk \
--container_format=bare --is_public=true \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property hypervisor_type="vmware" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk

 Optimize images

Monolithic Sparse disks are considerably faster to download but have the overhead of an additional conversion step. When imported into ESX, sparse disks get converted to VMFS flat thin provisioned disks. The download and conversion steps only affect the first launched instance that uses the sparse disk image. The converted disk image is cached, so subsequent instances that use this disk image can simply use the cached version.

To avoid the conversion step (at the cost of longer download times) consider converting sparse disks to thin provisioned or preallocated disks before loading them into the OpenStack Image Service. Below are some tools that can be used to pre-convert sparse disks.

  1. Using the vSphere CLI (also called the remote CLI or rCLI) tools

    Assuming that the sparse disk is made available on a data store accessible by an ESX host, the following command converts it to preallocated format:

    vmkfstools --server=ip_of_some_ESX_host -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk

    (Note that the vifs tool from the same CLI package can be used to upload the disk to be converted, and also to download the converted disk if necessary; a sketch of its usage appears after this list.)

  2. Using vmkfstools directly on the ESX host

    If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX data store via scp, and the vmkfstools utility local to the ESX host can be used to perform the conversion (after logging in to the host via ssh):

    vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk

  3. vmware-vdiskmanager

    vmware-vdiskmanager is a utility that comes bundled with VMware Fusion and VMware Workstation. Below is an example of converting a sparse disk to preallocated format:

    '/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk

    In all of the above cases, the converted vmdk is actually a pair of files: the descriptor file converted.vmdk and the actual virtual disk data file converted-flat.vmdk. The file to be uploaded to the OpenStack Image Service is converted-flat.vmdk.
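
As referenced in item 1 above, a minimal sketch of using the vifs tool to copy a disk to and from an ESX data store (the server address, datastore name, and paths are placeholders; check the vSphere CLI documentation for the exact syntax of your version):

vifs --server=ip_of_some_ESX_host --put sparse.vmdk '[datastore1] sparse.vmdk'
vifs --server=ip_of_some_ESX_host --get '[datastore1] converted.vmdk' converted.vmdk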

 Image handling

The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP from the OpenStack Image Service to a data store that is visible to the hypervisor. To optimize this process, the first time a VMDK file is used, it gets cached in the data store. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image Service.

Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared data store. To avoid this copy, boot the image in linked_clone mode. To learn how to enable this mode, see the section called “Configuration reference”. Note also that it is possible to override the linked_clone mode on a per-image basis by using the vmware_linked_clone property in the OpenStack Image Service.
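
For example, assuming an image named ubuntu-sparse already exists in the Image Service, a per-image override might look like:

$ glance image-update ubuntu-sparse --property vmware_linked_clone="false"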

You can configure the nova.conf file to automatically purge unused images after a specified period of time. The relevant settings in the DEFAULT section are:

  • remove_unused_base_images - Set this parameter to True to specify that unused images should be removed after the duration specified in the remove_unused_original_minimum_age_seconds parameter. The default is True.

  • remove_unused_original_minimum_age_seconds - Specifies the duration in seconds after which an unused image is purged from the cache. The default is 86400 (24 hours).
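
For example, the following nova.conf fragment spells out the documented defaults explicitly:

[DEFAULT]
remove_unused_base_images=True
remove_unused_original_minimum_age_seconds=86400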

 Networking with VMware vSphere

The VMware driver supports networking with the nova-network service or the OpenStack Networking Service. Depending on your installation, complete these configuration steps before you provision VMs:

  • The nova-network service with the FlatManager or FlatDHCPManager. Create a port group with the same name as the flat_network_bridge value in the nova.conf file. The default value is br100. If you specify another value, the new value must be a valid Linux bridge identifier that adheres to Linux bridge naming conventions. (An example nova.conf fragment appears after this list.)

    All VM NICs are attached to this port group.

    Ensure that the flat interface of the node that runs the nova-network service has a path to this network.

    [Note]Note

    When configuring the port binding for this port group in vCenter, specify ephemeral for the port binding type. For more information, see Choosing a port binding type in ESX/ESXi in the VMware Knowledge Base.

  • The nova-network service with the VlanManager. Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic.

    OpenStack Compute automatically creates the corresponding port groups.

  • If you are using the OpenStack Networking Service: Before provisioning VMs, create a port group with the same name as the vmware.integration_bridge value in nova.conf (default is br-int). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in.
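
For example, an illustrative nova.conf fragment for the FlatDHCPManager case (the values are placeholders and assume that a br100 port group already exists in vCenter):

[DEFAULT]
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100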

 Volumes with VMware vSphere

The VMware driver supports attaching volumes from the OpenStack Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes based on vSphere data stores. For more information, see VMware VMDK Driver. An iSCSI volume driver is also available, but it provides limited support and can be used only for attachments.

 vSphere 5.0 and earlier additional set up

Users of vSphere 5.0 or earlier must host their WSDL files locally. These steps apply to vCenter 5.0 or ESXi 5.0. You can either mirror the WSDL files from the vCenter or ESXi server that you intend to use, or download the SDK directly from VMware. This workaround fixes a known issue with the WSDL that was resolved in later versions.

When setting the VMwareVCDriver configuration options, you must include the wsdl_location option. For more information, see VMwareVCDriver configuration options above.

 

Procedure 2.1. Mirror WSDL from vCenter (or ESXi)

  1. Set the VMWAREAPI_IP shell variable to the IP address for your vCenter or ESXi host from where you plan to mirror files. For example:

    $ export VMWAREAPI_IP=<your_vsphere_host_ip>
  2. Create a local file system directory to hold the WSDL files:

    $ mkdir -p /opt/stack/vmware/wsdl/5.0
  3. Change into the new directory.

    $ cd /opt/stack/vmware/wsdl/5.0 
  4. If necessary, use your operating system's package manager to install a command-line tool, such as wget, that can download files.

  5. Download the files to the local file cache:

    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/vimService.wsdl
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/vim.wsdl
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/core-types.xsd
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/query-messagetypes.xsd
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/query-types.xsd
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-messagetypes.xsd
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-types.xsd
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-messagetypes.xsd
    wget  --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-types.xsd

    Because the reflect-types.xsd and reflect-messagetypes.xsd files do not fetch properly, you must stub out these files. Use the following XML listing to replace the missing file content. The XML parser underneath Python can be very particular: a space in the wrong place can break it. Copy the following contents and formatting carefully.

    <?xml version="1.0" encoding="UTF-8"?>
      <schema
         targetNamespace="urn:reflect"
         xmlns="http://www.w3.org/2001/XMLSchema"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema"
         elementFormDefault="qualified">
      </schema>       
  6. Now that the files are locally present, tell the driver to look for the SOAP service WSDLs in the local file system and not on the remote vSphere server. Add the following setting to the nova.conf file for your nova-compute node:

    [vmware]
    wsdl_location=file:///opt/stack/vmware/wsdl/5.0/vimService.wsdl

Alternatively, download the appropriate version of the SDK from http://www.vmware.com/support/developer/vc-sdk/ and copy it to the /opt/stack/vmware directory. Make sure that the WSDL is available, for example at /opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl. You must point nova.conf at this WSDL file on the local file system by using a URL.

When you use the VMwareVCDriver (vCenter) with OpenStack Compute and vSphere version 5.0 or earlier, nova.conf must include the following extra configuration option:

[vmware]
wsdl_location=file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl

 VMware ESX driver

This section covers details of using the VMwareESXDriver. The ESX Driver has not been extensively tested and is not recommended. To configure the VMware vCenter driver instead, see the section called “VMware vCenter driver”.

[Warning]Warning

The VMware ESX driver is deprecated in the Icehouse release and will be removed in the Juno release.

 VMwareESXDriver configuration options

When you use the VMwareESXDriver (no vCenter) with OpenStack Compute, add the following VMware-specific configuration options to the nova.conf file:

[DEFAULT]
compute_driver=vmwareapi.VMwareESXDriver

[vmware]
host_ip=<ESXi host IP>
host_username=<ESXi host username>
host_password=<ESXi host password>
wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl

Remember that you need one nova-compute service for each ESXi host. It is recommended that the nova-compute service run in a VM on the same ESXi host that it manages.

[Note]Note

Many nova.conf options are relevant to libvirt but do not apply to this driver.

 Requirements and limitations

The ESXDriver cannot use many of the advanced capabilities of the vSphere platform, namely vMotion, High Availability, and DRS.

 Configuration reference

To customize the VMware driver, use the configuration option settings documented in Table 2.54, “Description of configuration options for vmware”.
