
 Configuring Migrations


This feature is for cloud administrators only.

Migration allows an administrator to move a virtual machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.

There are two types of migration:

  • Migration (or non-live migration): The instance is shut down (and will detect that it has been rebooted) for a period of time while it is moved to another hypervisor.

  • Live migration (or true live migration): Almost no instance downtime. This is useful when the instances must be kept running during the migration.

There are two types of live migration:

  • Shared storage based live migration: Both hypervisors have access to shared storage.

  • Block live migration: for this type of migration, no shared storage is required.
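Regardless of type, a migration is triggered with the same client command. An illustrative sketch (the instance name vm1 and the host names are placeholders; --block-migrate is discussed later in this guide):

```
$ nova live-migration vm1 HostB              # shared storage live migration
$ nova live-migration --block-migrate vm1    # block live migration
```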

The following sections describe how to configure your hosts and compute nodes for migrations using the KVM and XenServer hypervisors.



  • Hypervisor: KVM with libvirt

  • Shared storage: NOVA-INST-DIR/instances/ (e.g., /var/lib/nova/instances) must reside on shared storage. This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.

  • Instances: Instances can be migrated with iSCSI-based volumes


Migrations done by the Compute service do not use libvirt's live migration functionality by default. Because of this, guests are suspended before migration and may therefore experience several minutes of downtime. See True Migration for KVM and Libvirt for more details.


This guide assumes the default value for instances_path in your nova.conf (NOVA-INST-DIR/instances). If you have changed the state_path or instances_path variables, modify the commands and paths in this guide accordingly.


You must specify vncserver_listen= or live migration will not work correctly.

Example Nova Installation Environment

  • Prepare at least three servers; for example, HostA, HostB, and HostC

  • HostA is the "Cloud Controller", and should be running: nova-api, nova-scheduler, nova-network, cinder-volume, nova-objectstore.

  • HostB and HostC are the "compute nodes", running nova-compute.

  • Ensure that NOVA-INST-DIR (set with state_path in nova.conf) is the same on all hosts.

  • In this example, HostA will be the NFSv4 server which exports NOVA-INST-DIR/instances, and HostB and HostC mount it.

System configuration

  1. Configure your DNS or /etc/hosts and ensure it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from one another.

    $ ping HostA
    $ ping HostB
    $ ping HostC
  2. Ensure that the UID and GID of your nova and libvirt users are identical between each of your servers. This ensures that the permissions on the NFS mount will work correctly.
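    As a quick check, compare the numeric IDs on every host; the values must match. A minimal sketch, using root as a stand-in account that exists everywhere (substitute your nova and libvirt user names, which vary by distribution):

```shell
# Print the numeric UID and GID of a user; run the same commands on
# HostA, HostB, and HostC and compare the output.
id -u root
id -g root
```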

  3. Follow the instructions at the Ubuntu NFS HowTo to set up an NFS server on HostA, and NFS clients on HostB and HostC.

    Our aim is to export NOVA-INST-DIR/instances from HostA, and have it readable and writable by the nova user on HostB and HostC.

  4. Using your knowledge from the Ubuntu documentation, configure the NFS server at HostA by adding a line to /etc/exports:

    NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)

    Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses of HostB and HostC. Then restart the NFS server.

    $ /etc/init.d/nfs-kernel-server restart
    $ /etc/init.d/idmapd restart
  5. Set the 'execute/search' bit on your shared directory

    On both compute nodes, make sure to enable the 'execute/search' bit so that qemu can use the images within the directories. On all hosts, execute the following command:

    $ chmod o+x NOVA-INST-DIR/instances 
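    The effect of the search bit can be demonstrated with a throwaway directory (a safe sketch; the temporary path is unrelated to your real instances directory):

```shell
# A throwaway directory showing the effect of the search (x) bit: without
# it, other users (such as qemu) cannot traverse the directory even when
# the files inside are readable.
d=$(mktemp -d)
chmod o-x "$d"   # other users cannot enter the directory
chmod o+x "$d"   # restore search access, as required above
ls -ld "$d"      # the permission string should now end in "x" for "other"
rm -rf "$d"
```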
  6. Configure NFS at HostB and HostC by adding the following line to /etc/fstab:

    HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0

    Then ensure that the exported directory can be mounted.

    $ mount -a -v

    Check that the NOVA-INST-DIR/instances/ directory can be seen at HostA:

    $ ls -ld NOVA-INST-DIR/instances/
    drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/

    Perform the same check at HostB and HostC, paying special attention to the permissions (the nova user must be able to write):

    $ ls -ld NOVA-INST-DIR/instances/
    drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
    $ df -k
    Filesystem           1K-blocks      Used Available Use% Mounted on
    /dev/sda1            921514972   4180880 870523828   1% /
    none                  16498340      1228  16497112   1% /dev
    none                  16502856         0  16502856   0% /dev/shm
    none                  16502856       368  16502488   1% /var/run
    none                  16502856         0  16502856   0% /var/lock
    none                  16502856         0  16502856   0% /lib/init/rw
    HostA:        921515008 101921792 772783104  12% /var/lib/nova/instances  ( <--- this line is important.)
  7. Update the libvirt configurations so that the calls can be made securely. These methods enable remote access over TCP and are not documented here; consult your network administrator for assistance in deciding how to configure access.

    • SSH tunnel to libvirtd's UNIX socket

    • libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption

    • libvirtd TCP socket, with TLS for encryption and x509 client certs for authentication

    • libvirtd TCP socket, with TLS for encryption and Kerberos for authentication
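    For example, to use an unauthenticated libvirtd TCP socket (the least secure of the options above, and the configuration this guide assumes when it later notes that libvirt auth is disabled), a sketch for Ubuntu-style packaging:

```
# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

# /etc/default/libvirt-bin: make libvirtd listen for TCP connections (-l)
libvirtd_opts="-d -l"
```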

    Restart libvirt. After you run the command, ensure that libvirt is successfully restarted:

    $ stop libvirt-bin && start libvirt-bin
    $ ps -ef | grep libvirt
    root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
  8. Configure your firewall to allow libvirt to communicate between nodes.

    Information about ports used with libvirt can be found in the libvirt documentation. By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. Because this guide has disabled libvirt authentication, take care that these ports are open only to hosts within your installation.
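    An illustrative sketch using iptables (192.168.0.0/24 is a placeholder for your management network; adapt it to your deployment):

```
# Allow libvirtd control traffic and the KVM migration port range,
# but only from hosts on the management network.
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 16509 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 49152:49261 -j ACCEPT
```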

  9. You can now configure options for live migration. In most cases, you do not need to configure any options. The following chart is for advanced usage only.

Table 4.10. Description of configuration options for live migration

Configuration option = Default value                                      | (Type) Description
live_migration_bandwidth = 0                                              | (IntOpt) Maximum bandwidth to be used during migration, in Mbps
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER  | (StrOpt) Migration flags to be set for live migration
live_migration_retry_count = 30                                           | (IntOpt) Number of 1-second retries needed in live_migration
live_migration_uri = qemu+tcp://%s/system                                 | (StrOpt) Migration target URI (any included "%s" is replaced with the migration target hostname)

 Enabling true live migration

By default, the Compute service does not use libvirt's live migration functionality. To enable this functionality, add the following line to nova.conf:

    live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

The Compute service does not use libvirt's live migration by default because there is a risk that the migration process will never terminate. This can happen if the guest operating system dirties blocks on the disk faster than they can be migrated.


 Shared Storage


  • Compatible XenServer hypervisors. For more information, please refer to the Requirements for Creating Resource Pools section of the XenServer Administrator's Guide.

  • Shared storage: an NFS export, visible to all XenServer hosts.


    Please check the NFS VHD section of the XenServer Administrator's Guide for the supported NFS versions.

In order to use shared storage live migration with XenServer hypervisors, the hosts must be joined to a XenServer pool. To create that pool, a host aggregate must be created with special metadata. This metadata is used by the XAPI plugins to establish the pool.

  1. Add NFS VHD storage to your master XenServer, and set it as the default SR. For more information, please refer to the NFS VHD section of the XenServer Administrator's Guide.

  2. Configure all the compute nodes to use the default SR for pool operations, by including the following line:

    sr_matching_filter=default-sr:true

    in your nova.conf configuration files across your compute nodes.

  3. Create a host aggregate

    $ nova aggregate-create <name-for-pool> <availability-zone>

    The command displays a table that contains the ID of the newly created aggregate. Now add special metadata to the aggregate to mark it as a hypervisor pool:

    $ nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
    $ nova aggregate-set-metadata <aggregate-id> operational_state=created

    Make the first compute node part of that aggregate

    $ nova aggregate-add-host <aggregate-id> <name-of-master-compute>

    At this point, the host is part of a XenServer pool.

  4. Add additional hosts to the pool:

    $ nova aggregate-add-host <aggregate-id> <compute-host-name>


    At this point, the added compute node and the host will be shut down in order to join the host to the XenServer pool. The operation will fail if any server other than the compute node is running or suspended on your host.
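    To confirm pool membership and metadata at any point, you can inspect the aggregate (using the ID reported when the aggregate was created):

```
$ nova aggregate-details <aggregate-id>
```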

 Block migration


  • Compatible XenServer hypervisors. The hypervisors must support the Storage XenMotion feature. Refer to your XenServer manual to make sure that your edition has this feature.


Note that you need to use the extra option --block-migrate for the live migration command in order to use block migration.
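For example, to live-migrate an instance with block migration (the instance name vm1 is a placeholder):

```
$ nova live-migration --block-migrate vm1
```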


Note that block migration works only with EXT local storage SRs, and the server must not have any volumes attached.

