VM Workload Consolidation Strategy

Synopsis

display name: VM Workload Consolidation Strategy

goal: vm_consolidation

A load consolidation strategy based on a heuristic first-fit algorithm. It focuses on measured CPU utilization and tries to minimize the number of hosts that carry too much or too little load, while respecting resource capacity constraints.

This strategy produces a solution resulting in more efficient utilization of cluster resources using the following four phases (a simplified sketch follows the list):

  • Offload phase - handling over-utilized resources

  • Consolidation phase - handling under-utilized resources

  • Solution optimization - reducing the number of migrations

  • Disabling of unused compute nodes
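
Below is a minimal, self-contained Python sketch of the four phases. Every name, threshold, and heuristic in it is hypothetical; the real strategy also weighs memory and disk, not only CPU, and prunes redundant migrations more carefully.

from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cpu: float              # measured CPU demand

@dataclass
class Host:
    name: str
    capacity: float         # total CPU capacity
    vms: list = field(default_factory=list)

    @property
    def util(self):
        return sum(vm.cpu for vm in self.vms)

def first_fit(hosts, vm, cc):
    """Return the first host that can take vm while staying under cc * capacity."""
    for h in hosts:
        if h.util + vm.cpu <= cc * h.capacity:
            return h
    return None

def consolidate(hosts, cc_offload=0.7, cc_consolidation=0.8):
    actions = []
    # Phase 1 - Offload: drain over-utilized hosts below the offload threshold.
    for src in hosts:
        while src.util > cc_offload * src.capacity and src.vms:
            vm = max(src.vms, key=lambda v: v.cpu)
            dst = first_fit([h for h in hosts if h is not src], vm, cc_offload)
            if dst is None:
                break               # nowhere to put it; give up on this host
            src.vms.remove(vm)
            dst.vms.append(vm)
            actions.append(('migrate', vm.name, src.name, dst.name))
    # Phase 2 - Consolidation: pack VMs off the least-loaded hosts first.
    for src in sorted(hosts, key=lambda h: h.util):
        for vm in list(src.vms):
            dst = first_fit([h for h in hosts if h is not src], vm,
                            cc_consolidation)
            if dst is not None:
                src.vms.remove(vm)
                dst.vms.append(vm)
                actions.append(('migrate', vm.name, src.name, dst.name))
    # Phase 3 - Solution optimization: the real strategy cancels out
    # migration pairs that undo each other; omitted in this sketch.
    # Phase 4 - Disable compute nodes left without any VM.
    for h in hosts:
        if not h.vms:
            actions.append(('disable', h.name))
    return actions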

Capacity coefficients (cc) may be used to adjust the optimization thresholds. Different resources may require different coefficient values, and setting different coefficient values in the two phases may lead to more efficient consolidation overall. If the cc equals 1, the full resource capacity may be used; cc values lower than 1 lead to resource under-utilization, and values higher than 1 lead to resource overbooking. For example, if the targeted utilization is 80 percent of a compute node's capacity, the coefficient in the consolidation phase will be 0.8, but it may be any lower value in the offloading phase. The lower the offloading coefficient, the more freed (distributed) the cluster will appear for the following consolidation phase.
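
As a concrete illustration of the thresholds (all numbers here are made up):

capacity = 32.0            # host CPU capacity, e.g. 32 cores
cc_offload = 0.6           # stricter threshold while offloading
cc_consolidation = 0.8     # target 80 percent utilization when packing

print(cc_offload * capacity)        # 19.2 -> offload anything above this load
print(cc_consolidation * capacity)  # 25.6 -> pack hosts up to this load
# A cc above 1 allows overbooking, e.g. 1.2 * 32 = 38.4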

As this strategy leverages VM live migration to move the load from one compute node to another, this feature needs to be set up correctly on all compute nodes within the cluster. This strategy assumes it is possible to live migrate any VM from an active compute node to any other active compute node.

Requirements

Metrics

The vm_workload_consolidation strategy requires the following metrics:

metric                     service name   plugins   comment
------------------------   ------------   -------   --------------------------------------------
cpu                        ceilometer     none
memory.resident            ceilometer     none
memory                     ceilometer     none
disk.root.size             ceilometer     none
compute.node.cpu.percent   ceilometer     none      (optional) need to set the compute_monitors
                                                    option to cpu.virt_driver in the nova.conf
hardware.memory.used       ceilometer     SNMP      (optional)

Cluster data model

Default Watcher’s Compute cluster data model:

Nova cluster data model collector

The Nova cluster data model collector creates an in-memory representation of the resources exposed by the compute service.

Actions

Default Watcher’s actions:

migration

Migrates a server to a destination nova-compute host

This action allows you to migrate a server to another destination compute host. Migration type ‘live’ can only be used for migrating active VMs. Migration type ‘cold’ can be used for migrating non-active VMs as well as active VMs, which will be shut down during the migration.

The action schema is:

from voluptuous import Schema  # Watcher validates action parameters with voluptuous

schema = Schema({
    'resource_id': str,  # should be a UUID
    'migration_type': str,  # choices -> "live", "cold"
    'destination_node': str,
    'source_node': str,
})

The resource_id is the UUID of the server to migrate. The source_node and destination_node parameters are, respectively, the source and destination compute hostnames (the list of available compute hosts is returned by the command: nova service-list --binary nova-compute).
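
For illustration, here is a hypothetical parameter set that satisfies this schema; the UUID and hostnames are made-up placeholders:

# Hypothetical input validated against the schema defined above.
params = {
    'resource_id': 'c0ffee00-0000-4000-8000-000000000000',
    'migration_type': 'live',
    'source_node': 'compute-01',
    'destination_node': 'compute-02',
}
schema(params)  # raises voluptuous.MultipleInvalid on malformed input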

Note

Nova API version must be 2.56 or above if destination_node parameter is given.

change_nova_service_state

Disables or enables the nova-compute service deployed on a host

By using this action, you will be able to update the state of a nova-compute service. A disabled nova-compute service cannot be selected by the nova scheduler for future deployments of servers.

The action schema is:

from voluptuous import Schema

schema = Schema({
    'resource_id': str,
    'state': str,
    'disabled_reason': str,
})

The resource_id references a nova-compute service name (the list of available nova-compute services is returned by the command: nova service-list --binary nova-compute). The state value should be either ONLINE or OFFLINE. The disabled_reason gives the reason why Watcher disables this nova-compute service; its value should carry the watcher_ prefix, such as watcher_disabled or watcher_maintaining.
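
A hypothetical parameter set for this action, validated against the schema above (the service hostname is made up):

# Hypothetical input validated against the schema defined above.
params = {
    'resource_id': 'compute-03',            # made-up nova-compute service host
    'state': 'OFFLINE',                     # per the description above
    'disabled_reason': 'watcher_disabled',  # carries the watcher_ prefix
}
schema(params)  # raises voluptuous.MultipleInvalid on malformed input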

Planner

Default Watcher’s planner:

Weight planner implementation

This implementation builds actions with parents in accordance with their weights: sets of actions with a higher weight will be scheduled before the others. There are two config options to tune: action_weights and parallelization.
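
As a hedged illustration of the ordering principle only (the action names and weights below are hypothetical, not Watcher's defaults):

# Hypothetical weights: higher-weighted action types are scheduled first.
action_weights = {'migrate': 30, 'change_nova_service_state': 50}

actions = [
    {'type': 'migrate', 'vm': 'vm-1'},
    {'type': 'change_nova_service_state', 'host': 'compute-03'},
    {'type': 'migrate', 'vm': 'vm-2'},
]

# Sort descending by weight; parallelization bounds how many equally
# weighted actions of one type may run at the same time.
plan = sorted(actions, key=lambda a: action_weights[a['type']], reverse=True)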

Limitations

  • This planner requires the action_weights and parallelization config options to be tuned well.

Configuration

Strategy parameter is:

parameter   type     default value   description
---------   ------   -------------   ---------------------------------------------------
period      Number   3600            The time interval in seconds for getting statistic
                                     aggregation from the metric data source

Efficacy Indicator

[{'name': 'released_nodes_ratio', 'description': 'Ratio of released compute nodes divided by the total number of enabled compute nodes.', 'unit': '%', 'value': 0}]
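
For instance, with made-up numbers, the indicator would be computed as:

# Illustrative computation of released_nodes_ratio (numbers are made up).
enabled_nodes = 10                 # enabled compute nodes before the audit
released_nodes = 2                 # nodes freed of all VMs by the solution
released_nodes_ratio = released_nodes / enabled_nodes * 100
print(released_nodes_ratio)        # 20.0 (%)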

Algorithm

For more information on the VM Workload consolidation strategy please refer to: https://specs.openstack.org/openstack/watcher-specs/specs/mitaka/implemented/zhaw-load-consolidation.html

How to use it?

$ openstack optimize audittemplate create \
  at1 server_consolidation --strategy vm_workload_consolidation

$ openstack optimize audit create -a at1
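
The strategy's period parameter can also be overridden when creating the audit; a hedged example, assuming the -p/--parameter option of python-watcherclient (7200 is an arbitrary two-hour window):

$ openstack optimize audit create -a at1 -p period=7200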