Compute uses the nova-scheduler
service to determine how to
dispatch compute requests. For example, the nova-scheduler
service determines on which host a VM should launch.
In the context of filters, the term host
means a physical
node that has a nova-compute
service running on it.
You can configure the scheduler through a variety of options.
Compute is configured with the following default scheduler
options in the /etc/nova/nova.conf
file:
scheduler_driver_task_period = 60
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
By default, the scheduler_driver
is configured as a filter scheduler,
as described in the next section. In the default configuration,
this scheduler considers hosts that meet all the following criteria:
Have not been attempted for scheduling purposes (RetryFilter).
Are in the requested availability zone (AvailabilityZoneFilter).
Have sufficient RAM available (RamFilter).
Have sufficient disk space available for root and ephemeral storage (DiskFilter).
Can service the request (ComputeFilter).
Satisfy the extra specs associated with the instance type (ComputeCapabilitiesFilter).
Satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image (ImagePropertiesFilter).
Are on a different host than other instances of a group, if requested (ServerGroupAntiAffinityFilter).
Are in a set of group hosts, if requested (ServerGroupAffinityFilter).
The scheduler caches its list of available hosts;
use the scheduler_driver_task_period
option to specify
how often the list is updated.
Note
Do not configure service_down_time
to be much smaller than
scheduler_driver_task_period
; otherwise, hosts appear to
be dead while the host list is being cached.
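As an illustration, a minimal nova.conf sketch that keeps the two options consistent (the values shown are the usual defaults and are only an example):
[DEFAULT]
# Keep service_down_time at least as large as scheduler_driver_task_period
# so cached hosts are not reported as dead.
scheduler_driver_task_period = 60
service_down_time = 60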
For information about the volume scheduler, see the Block Storage section of the OpenStack Administrator Guide.
The scheduler chooses a new host when an instance is migrated.
When evacuating instances from a host, the scheduler service honors the target host defined by the administrator on the nova evacuate command. If a target is not defined by the administrator, the scheduler determines the target host. For information about instance evacuation, see the Evacuate instances section of the OpenStack Administrator Guide.
The filter scheduler (nova.scheduler.filter_scheduler.FilterScheduler
)
is the default scheduler for scheduling virtual machine instances.
It supports filtering and weighting to make informed decisions on
where a new instance should be created.
When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the Weights section.
Filtering
The scheduler_available_filters
configuration option in nova.conf
provides the Compute service with the list of the filters that are used
by the scheduler. The default setting specifies all of the filters that
are included with the Compute service:
scheduler_available_filters = nova.scheduler.filters.all_filters
This configuration option can be specified multiple times.
For example, if you implemented your own custom filter in Python called
myfilter.MyFilter
and you wanted to use both the built-in filters
and your custom filter, your nova.conf
file would contain:
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = myfilter.MyFilter
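Note that listing a filter in scheduler_available_filters only makes it available; for the scheduler to actually apply it, the filter must also appear in scheduler_default_filters (described next). A sketch, appending the hypothetical MyFilter to the default list:
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, MyFilter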
The scheduler_default_filters
configuration option in nova.conf
defines the list of filters that are applied by the nova-scheduler
service. The default filters are:
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
The following sections describe the available compute filters.
The AggregateCoreFilter filters hosts by CPU core count with a per-aggregate
cpu_allocation_ratio
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and more than
one value is found, the minimum value will be used.
For information about how to use this filter,
see Host aggregates and availability zones. See also CoreFilter.
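As a hedged example (the aggregate name myagg and the ratio are hypothetical), the per-aggregate ratio is set as aggregate metadata:
$ openstack aggregate set --property cpu_allocation_ratio=4.0 myagg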
The AggregateDiskFilter filters hosts by disk allocation with a per-aggregate
disk_allocation_ratio
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and more than
one value is found, the minimum value will be used.
For information about how to use this filter,
see Host aggregates and availability zones. See also DiskFilter.
The AggregateImagePropertiesIsolation filter matches properties defined in an image's metadata against those of aggregates to determine host matches:
If a host belongs to an aggregate and the aggregate defines one or more metadata keys that match the image's properties, that host is a candidate to boot the instance.
If a host does not belong to any aggregate, it can boot instances from all images.
For example, the following aggregate MyWinAgg has the
Windows operating system as metadata (named 'windows'):
$ openstack aggregate show MyWinAgg
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | zone1 |
| created_at | 2017-01-01T15:36:44.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [u'sf-devel'] |
| id | 1 |
| name | test |
| properties | |
| updated_at | None |
+-------------------+----------------------------+
In this example, because the following Win-2012 image has the
windows
property, it boots on the sf-devel
host
(all other filters being equal):
$ openstack image show Win-2012
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2016-12-13T09:30:30Z |
| disk_format | qcow2 |
| ...
You can configure the AggregateImagePropertiesIsolation
filter by using the following options in the nova.conf
file:
# Considers only keys matching the given namespace (string).
# Multiple values can be given, as a comma-separated list.
aggregate_image_properties_isolation_namespace = <None>
# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator = .
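As a hedged illustration (the namespace value os is hypothetical), setting a namespace restricts the comparison to aggregate metadata keys that start with that prefix:
# Only aggregate metadata keys beginning with "os." are compared against image properties.
aggregate_image_properties_isolation_namespace = os
aggregate_image_properties_isolation_separator = .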
The AggregateInstanceExtraSpecsFilter matches properties defined in the extra specs of an instance type
against admin-defined properties on a host aggregate.
Works with specifications that are scoped with
aggregate_instance_extra_specs
.
Multiple values can be given, as a comma-separated list.
For backward compatibility, the filter also works with non-scoped specifications;
this is highly discouraged because it conflicts with the
ComputeCapabilitiesFilter when both filters are enabled.
For information about how to use this filter, see the
Host aggregates and availability zones section.
The AggregateIoOpsFilter filters hosts by concurrent I/O operations with a per-aggregate
max_io_ops_per_host
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and more than one
value is found, the minimum value will be used.
For information about how to use this filter,
see Host aggregates and availability zones. See also IoOpsFilter.
The AggregateMultiTenancyIsolation filter ensures that a tenant (or list of tenants) creates all instances only
on specific host aggregates and availability zones. If a host is in an aggregate that has
the filter_tenant_id
metadata key, the host creates instances from only
that tenant or list of tenants. A host can be in different aggregates. If a
host does not belong to an aggregate with the metadata key, the host can
create instances from all tenants. This setting does not isolate the
aggregate from other tenants. Any other tenant can continue to build
instances on the specified aggregate.
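For example, a hedged sketch (the aggregate name myagg is hypothetical and TENANT_ID is a placeholder) of setting the metadata key on an aggregate:
$ openstack aggregate set --property filter_tenant_id=TENANT_ID myagg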
The AggregateNumInstancesFilter filters hosts by number of instances with a per-aggregate
max_instances_per_host
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and thus more than
one value is found, the minimum value will be used.
For information about how to use this filter, see Host aggregates and availability zones.
See also NumInstancesFilter.
The AggregateRamFilter filters hosts by RAM allocation of instances with a per-aggregate
ram_allocation_ratio
value. If the per-aggregate value is not
found, the value falls back to the global setting.
If the host is in more than one aggregate and thus more than
one value is found, the minimum value will be used.
For information about how to use this filter, see Host aggregates and availability zones.
See also RamFilter.
The AggregateTypeAffinityFilter passes hosts if no instance_type
key is set or the
instance_type
aggregate metadata value contains the name of the
instance_type
requested. The value of the instance_type
metadata entry is a string that may contain either a single
instance_type
name or a comma-separated list of instance_type
names, such as m1.nano
or m1.nano,m1.small
.
For information about how to use this filter, see Host aggregates and availability zones.
See also TypeAffinityFilter.
AllHostsFilter is a no-op filter. It does not eliminate any of the available hosts.
The AvailabilityZoneFilter filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests.
The ComputeCapabilitiesFilter matches properties defined in the extra specs of an instance type
against compute capabilities. If an extra specs key contains
a colon (:
), anything before the colon is treated as a namespace
and anything after the colon is treated as the key to be matched.
If a namespace is present and is not capabilities
, the filter
ignores the namespace. For backward compatibility, the filter also treats the
extra specs key as the key to be matched if no namespace is present;
this is highly discouraged because it conflicts with the
AggregateInstanceExtraSpecsFilter when both filters are enabled.
The ComputeFilter passes all hosts that are operational and enabled.
In general, you should always enable this filter.
The CoreFilter only schedules instances on hosts with sufficient CPU cores available. If this filter is not enabled, the scheduler might over-provision a host based on cores; for example, the number of virtual cores running on instances could exceed the number of physical cores on the host.
You can configure this filter to enable a fixed amount of vCPU
overcommitment by using the cpu_allocation_ratio
configuration
option in nova.conf
. The default setting is:
cpu_allocation_ratio = 16.0
With this setting, if a node has 8 vCPUs (physical cores), the scheduler allows instances totaling up to 128 vCPUs to run on that node.
To disallow vCPU overcommitment, set:
cpu_allocation_ratio = 1.0
Note
The Compute API always returns the actual number of CPU cores available
on a compute node regardless of the value of the cpu_allocation_ratio
configuration key. As a result, changes to the cpu_allocation_ratio
are not reflected via the command line clients or the dashboard.
Changes to this configuration key are only taken into account internally
in the scheduler.
The DifferentHostFilter schedules the instance on a different host from a set of instances.
To take advantage of this filter, the requester must pass a scheduler hint,
using different_host
as the key and a list of instance UUIDs as
the value. This filter is the opposite of the SameHostFilter
.
Using the openstack server create command, use the --hint
flag. For example:
$ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \
--flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
--hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints
key. For example:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"different_host": [
"a0cf03a5-d921-4877-bb5c-86d26cf818e1",
"8c19174f-4220-44f0-824a-cd1eeef10287"
]
}
}
The DiskFilter only schedules instances on hosts with sufficient disk space available for root and ephemeral storage.
You can configure this filter to enable a fixed amount of disk
overcommitment by using the disk_allocation_ratio
configuration
option in the nova.conf
configuration file.
The default setting disables overcommitment and allows launching a VM
only if a sufficient amount of disk space is available on the host:
disk_allocation_ratio = 1.0
DiskFilter always considers the value of the disk_available_least
property, not the free_disk_gb property, of
a hypervisor's statistics:
$ openstack hypervisor stats show
+----------------------+-------+
| Field | Value |
+----------------------+-------+
| count | 1 |
| current_workload | 0 |
| disk_available_least | 14 |
| free_disk_gb | 27 |
| free_ram_mb | 15374 |
| local_gb | 27 |
| local_gb_used | 0 |
| memory_mb | 15886 |
| memory_mb_used | 512 |
| running_vms | 0 |
| vcpus | 8 |
| vcpus_used | 0 |
+----------------------+-------+
As the command output above shows, the available disk space can be less
than the free disk space. This happens because the disk_available_least
property accounts for the virtual size rather than the actual size of images.
If you use an image format that is sparse or copy-on-write, so that each
virtual instance does not require a 1:1 allocation of virtual disk to
physical storage, it may be useful to allow the overcommitment of disk space.
To enable scheduling instances while overcommitting disk resources on the
node, adjust the value of the disk_allocation_ratio
configuration
option to greater than 1.0
:
disk_allocation_ratio > 1.0
Note
If the value is set greater than 1, we recommend keeping track of the free
disk space, because instances that use the disk may malfunction
when the actual free space approaches 0.
The ExactCoreFilter only schedules instances on hosts that have the exact number of CPU cores required by the instance.
The ExactDiskFilter only schedules instances on hosts that have the exact amount of disk space required by the instance.
The ExactRamFilter only schedules instances on hosts that have the exact amount of RAM required by the instance.
Note
This filter is deprecated in favor of ServerGroupAffinityFilter.
The GroupAffinityFilter ensures that an instance is scheduled on to a host
from a set of group hosts. To take advantage of this filter, the requester
must pass a scheduler hint, using group
as the key and an arbitrary name
as the value. Using the openstack server create command, use the
--hint
flag. For example:
$ openstack server create --image IMAGE_ID --flavor 1 \
--hint group=foo server-1
This filter should not be enabled at the same time as GroupAntiAffinityFilter or neither filter will work properly.
Note
This filter is deprecated in favor of ServerGroupAntiAffinityFilter.
The GroupAntiAffinityFilter ensures that each instance in a group is on
a different host. To take advantage of this filter, the requester must
pass a scheduler hint, using group
as the key and an arbitrary name
as the value. Using the openstack server create command,
use the --hint
flag. For example:
$ openstack server create --image IMAGE_ID --flavor 1 \
--hint group=foo server-1
This filter should not be enabled at the same time as GroupAffinityFilter or neither filter will work properly.
The ImagePropertiesFilter filters hosts based on properties defined on the instance's image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, hypervisor version (for Xen hypervisor type only), and virtual machine mode.
For example, an instance might require a host that runs an ARM-based processor, and QEMU as the hypervisor. You can decorate an image with these properties by using:
$ openstack image set --architecture arm --property hypervisor_type=qemu \
img-uuid
The image properties that the filter checks are:
architecture
Describes the machine architecture required by the image. Examples are i686, x86_64, arm, and ppc64.
hypervisor_type
Describes the hypervisor required by the image. Examples are xen, qemu, and xenapi.
Note
qemu is used for both QEMU and KVM hypervisor types.
hypervisor_version_requires
Describes the hypervisor version required by the image. The property is supported for Xen hypervisor type only. It can be used to enable support for multiple hypervisor versions, and to prevent instances with newer Xen tools from being provisioned on an older version of a hypervisor. If available, the property value is compared to the hypervisor version of the compute host.
To filter the hosts by the hypervisor version, add the
hypervisor_version_requires
property on the image as metadata and
pass an operator and a required hypervisor version as its value:
$ openstack image set --property hypervisor_type=xen --property \
hypervisor_version_requires=">=4.3" img-uuid
vm_mode
Describes the hypervisor application binary interface (ABI) required by the image. Examples are xen for Xen 3.0 paravirtual ABI, hvm for native ABI, uml for User Mode Linux paravirtual ABI, and exe for container virt executable ABI.
The IsolatedHostsFilter allows the admin to define a special (isolated) set of images and a special
(isolated) set of hosts, such that the isolated images can only run on
the isolated hosts, and the isolated hosts can only run isolated images.
The flag restrict_isolated_hosts_to_isolated_images
can be used to
force isolated hosts to only run isolated images.
The admin must specify the isolated set of images and hosts in the
nova.conf
file using the isolated_hosts
and isolated_images
configuration options. For example:
isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
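To also force the isolated hosts to run only isolated images, set the flag mentioned above in the same file, for example:
restrict_isolated_hosts_to_isolated_images = True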
The IoOpsFilter filters hosts by the number of concurrent I/O operations on them.
Hosts with too many concurrent I/O operations will be filtered out.
The max_io_ops_per_host
option specifies the maximum number of
I/O intensive instances allowed to run on a host.
A host will be ignored by the scheduler if more than
max_io_ops_per_host
instances in build, resize, snapshot,
migrate, rescue or unshelve task states are running on it.
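A minimal nova.conf sketch (the limit shown is illustrative):
# Ignore hosts that already have more than 10 I/O intensive instances.
max_io_ops_per_host = 10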
The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:
=, <, >, in, <=, >=, not, or, and
The filter supports the following variables:
$free_ram_mb
$free_disk_mb
$total_usable_ram_mb
$vcpus_total
$vcpus_used
Using the openstack server create command, use the
--hint
flag:
$ openstack server create --image 827d564a-e636-4fc4-a376-d36f7ebe1747 \
--flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1
With the API, use the os:scheduler_hints
key:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"query": "[>=,$free_ram_mb,1024]"
}
}
The MetricsFilter filters hosts based on the meters specified in the weight_setting option.
Only hosts that report the configured meters are passed, so that
the metrics weigher does not fail because of hosts with missing meters.
The NUMATopologyFilter filters hosts based on the NUMA topology that was specified for the
instance through the use of flavor extra_specs in combination
with the image properties, as described in detail in the
related nova-spec document.
The filter tries to match the exact NUMA cells of the instance to
those of the host. It considers the standard over-subscription
limits for each cell, and provides limits to the compute host accordingly.
Note
If the instance has no NUMA topology defined, it is considered for any host. If the instance has a NUMA topology defined, it is considered only for NUMA-capable hosts.
When the NumInstancesFilter is in place, hosts that have more instances running than
specified by the max_instances_per_host option are filtered out.
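A minimal nova.conf sketch (the limit shown is illustrative):
# Ignore hosts already running more than 25 instances.
max_instances_per_host = 25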
The PciPassthroughFilter schedules instances on a host if the host has devices that
meet the device requests in the extra_specs attribute for the flavor.
The RamFilter only schedules instances on hosts that have sufficient RAM available. If this filter is not enabled, the scheduler may over-provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).
You can configure this filter to enable a fixed amount of RAM
overcommitment by using the ram_allocation_ratio
configuration
option in nova.conf
. The default setting is:
ram_allocation_ratio = 1.5
This setting enables 1.5 GB instances to run on any compute node with 1 GB of free RAM.
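To disable RAM overcommitment entirely, set the ratio to 1.0, analogous to the CoreFilter example above:
ram_allocation_ratio = 1.0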
The RetryFilter filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request.
This filter is only useful if the scheduler_max_attempts
configuration option is set to a value greater than zero.
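A minimal nova.conf sketch (3 is the usual default):
scheduler_max_attempts = 3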
The SameHostFilter schedules the instance on the same host as another instance in a set
of instances. To take advantage of this filter, the requester must
pass a scheduler hint, using same_host
as the key and a
list of instance UUIDs as the value.
This filter is the opposite of the DifferentHostFilter
.
Using the openstack server create command, use the
--hint
flag:
$ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \
--flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
--hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints
key:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"same_host": [
"a0cf03a5-d921-4877-bb5c-86d26cf818e1",
"8c19174f-4220-44f0-824a-cd1eeef10287"
]
}
}
The ServerGroupAffinityFilter ensures that an instance is scheduled
on to a host from a set of group hosts. To take advantage of this filter,
the requester must create a server group with an affinity
policy,
and pass a scheduler hint, using group
as the key and the server
group UUID as the value.
Using the openstack server create command, use the
--hint
flag. For example:
$ openstack server group create --policy affinity group-1
$ openstack server create --image IMAGE_ID --flavor 1 \
--hint group=SERVER_GROUP_UUID server-1
The ServerGroupAntiAffinityFilter ensures that each instance in a group is
on a different host. To take advantage of this filter, the requester must
create a server group with an anti-affinity
policy, and pass a scheduler
hint, using group
as the key and the server group UUID as the value.
Using the openstack server create command, use the
--hint
flag. For example:
$ openstack server group create --policy anti-affinity group-1
$ openstack server create --image IMAGE_ID --flavor 1 \
--hint group=SERVER_GROUP_UUID server-1
The SimpleCIDRAffinityFilter schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:
build_near_host_ip
The first IP address in the subnet (for example, 192.168.1.1)
cidr
The CIDR that corresponds to the subnet (for example, /24)
Using the openstack server create command, use the
--hint
flag. For example, to specify the IP subnet
192.168.1.1/24
:
$ openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 \
--flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1
With the API, use the os:scheduler_hints
key:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"build_near_host_ip": "192.168.1.1",
"cidr": "24"
}
}
The TrustedFilter filters hosts based on their trust. It only passes hosts that meet the trust requirements specified in the instance properties.
The TypeAffinityFilter dynamically limits hosts to one instance type. An instance can only be launched on a host if no instances with a different instance type are running on it, or if the host has no running instances at all.
The following sections describe the available cell filters.
The DifferentCellFilter schedules the instance on a different cell from a set of instances.
To take advantage of this filter, the requester must pass a scheduler hint,
using different_cell
as the key and a list of instance UUIDs as the value.
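A hedged usage sketch (image and instance UUIDs are placeholders), following the same --hint pattern as the host filters above:
$ openstack server create --image IMAGE_ID --flavor 1 \
  --hint different_cell=INSTANCE_UUID server-1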
Filters cells based on properties defined on the instance's image. This filter works by specifying the hypervisor required in the image metadata and the supported hypervisor version in cell capabilities.
The TargetCellFilter filters target cells. This filter works by specifying a scheduler
hint of target_cell. The value should be the full cell path.
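A hedged sketch (the cell path parent!child is purely illustrative and assumes the default cell-path separator; the image UUID is a placeholder):
$ openstack server create --image IMAGE_ID --flavor 1 \
  --hint target_cell='parent!child' server-1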
When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when the customer asks for a large number of instances, because a weight is computed for each requested instance.
All weights are normalized before being summed up; the host with the largest weight is given the highest priority.
Weighting hosts
If cells are used, cells are weighted by the scheduler in the same manner as hosts.
Hosts and cells are weighted based on the following options in
the /etc/nova/nova.conf
file:
Section | Option | Description |
---|---|---|
[DEFAULT] | ram_weight_multiplier | By default, the scheduler spreads instances across all hosts evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value. |
[DEFAULT] | scheduler_host_subset_size | New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value. |
[DEFAULT] | scheduler_weight_classes | Defaults to nova.scheduler.weights.all_weighers. Hosts are then weighted and sorted with the largest weight winning. |
[DEFAULT] | io_ops_weight_multiplier | Multiplier used for weighing host I/O operations. A negative value means a preference to choose light workload compute hosts. |
[DEFAULT] | soft_affinity_weight_multiplier | Multiplier used for weighing hosts for group soft-affinity. Only a positive value is meaningful. A negative value changes the behavior to the opposite, which is soft-anti-affinity. |
[DEFAULT] | soft_anti_affinity_weight_multiplier | Multiplier used for weighing hosts for group soft-anti-affinity. Only a positive value is meaningful. A negative value changes the behavior to the opposite, which is soft-affinity. |
[metrics] | weight_multiplier | Multiplier for weighting meters. Use a floating-point value. |
[metrics] | weight_setting | Determines how meters are weighted. Use a comma-separated list of metricName=ratio. For example: name1=1.0, name2=-1.0 results in: name1.value * 1.0 + name2.value * -1.0 |
[metrics] | required | Specifies how to treat unavailable meters: True raises an exception for an instance with unavailable meters; False treats the unavailable meter as a negative factor in the weighting process (using the weight_of_unavailable value). |
[metrics] | weight_of_unavailable | If required is set to False, and any one of the meters set by weight_setting is unavailable, the weight_of_unavailable value is returned to the scheduler. |
For example:
[DEFAULT]
scheduler_host_subset_size = 1
scheduler_weight_classes = nova.scheduler.weights.all_weighers
ram_weight_multiplier = 1.0
io_ops_weight_multiplier = 2.0
soft_affinity_weight_multiplier = 1.0
soft_anti_affinity_weight_multiplier = 1.0
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.0
required = false
weight_of_unavailable = -10000.0
Section | Option | Description |
---|---|---|
[cells] | mute_weight_multiplier | Multiplier to weight mute children (hosts which have not sent capacity or capability updates for some time). Use a negative, floating-point value. |
[cells] | offset_weight_multiplier | Multiplier to weight cells, so you can specify a preferred cell. Use a floating-point value. |
[cells] | ram_weight_multiplier | By default, the scheduler spreads instances across all cells evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value. |
[cells] | scheduler_weight_classes | Defaults to nova.cells.weights.all_weighers, which maps to all cell weighers included with Compute. Cells are then weighted and sorted with the largest weight winning. |
For example:
[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
mute_weight_multiplier = -10.0
ram_weight_multiplier = 1.0
offset_weight_multiplier = 1.0
As an administrator, you work with the filter scheduler.
However, the Compute service also uses the Chance Scheduler,
nova.scheduler.chance.ChanceScheduler
,
which randomly selects from lists of filtered hosts.
It is possible to schedule VMs using advanced scheduling decisions.
These decisions are made based on enhanced usage statistics encompassing
data like memory cache utilization, memory bandwidth utilization,
or network bandwidth utilization. This is disabled by default.
The administrator can configure how the metrics are weighted in the
configuration file by using the weight_setting
configuration option
in the nova.conf
configuration file.
For example, to configure metric1 with ratio1 and metric2 with ratio2:
weight_setting = "metric1=ratio1, metric2=ratio2"
Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud, or a region of an OpenStack cloud, based on arbitrary characteristics. Examples where an administrator may want to do this include where a group of hosts have additional hardware or performance characteristics.
Host aggregates are not explicitly exposed to users. Instead administrators map flavors to host aggregates. Administrators do this by setting metadata on a host aggregate, and matching flavor extra specifications. The scheduler then endeavors to match user requests for instance of the given flavor to a host aggregate with the same key-value pair in its metadata. Compute nodes can be in more than one host aggregate.
Administrators are able to optionally expose a host aggregate as an availability zone. Availability zones are different from host aggregates in that they are explicitly exposed to the user, and hosts can only be in a single availability zone. Administrators can configure a default availability zone where instances will be scheduled when the user fails to specify one.
The nova command-line client supports the following aggregate-related commands.
nova aggregate-list
Print a list of all aggregates.
nova aggregate-create <name> [availability-zone]
Create a new aggregate named <name>, and optionally in availability zone [availability-zone] if specified. The command returns the ID of the newly created aggregate. Hosts can be made available to multiple host aggregates. Be careful when adding a host to an additional host aggregate when the host is also in an availability zone. Pay attention when using the nova aggregate-set-metadata and nova aggregate-update commands to avoid user confusion when they boot instances in different availability zones. An error occurs if you cannot add a particular host to an aggregate zone for which it is not intended.
nova aggregate-delete <aggregate>
Delete an aggregate with its <id> or <name>.
nova aggregate-show <aggregate>
Show details of the aggregate with its <id> or <name>.
nova aggregate-add-host <aggregate> <host>
Add host with name <host> to aggregate with its <id> or <name>.
nova aggregate-remove-host <aggregate> <host>
Remove the host with name <host> from the aggregate with its <id> or <name>.
nova aggregate-set-metadata <aggregate> <key=value> [<key=value> ...]
Add or update metadata (key-value pairs) associated with the aggregate with its <id> or <name>.
Note
Only administrators can access these commands. If you try to use
these commands and the user name and tenant that you use to access
the Compute service do not have the admin
role or the
appropriate privileges, these errors occur:
ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)
ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
One common use case for host aggregates is when you want to support scheduling instances to a subset of compute hosts because they have a specific capability. For example, you may want to allow users to request compute hosts that have SSD drives if they need access to faster disk I/O, or access to compute hosts that have GPU cards to take advantage of GPU-accelerated code.
To configure the scheduler to support host aggregates, the
scheduler_default_filters
configuration option must contain the
AggregateInstanceExtraSpecsFilter
in addition to the other
filters used by the scheduler. Add the following line to
/etc/nova/nova.conf
on the host that runs the nova-scheduler
service to enable host aggregates filtering, as well as the other
filters that are typically enabled:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
This example configures the Compute service to enable users to request
nodes that have solid-state drives (SSDs). You create a fast-io
host aggregate in the nova
availability zone and you add the
ssd=true
key-value pair to the aggregate. Then, you add the
node1 and node2 compute nodes to it.
$ openstack aggregate create --zone nova fast-io
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2016-12-22T07:31:13.013466 |
| deleted | False |
| deleted_at | None |
| id | 1 |
| name | fast-io |
| updated_at | None |
+-------------------+----------------------------+
$ openstack aggregate set --property ssd=true 1
+-------------------+----------------------------+
| Field | Value |
+-------------------+----------------------------+
| availability_zone | nova |
| created_at | 2016-12-22T07:31:13.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [] |
| id | 1 |
| name | fast-io |
| properties | ssd='true' |
| updated_at | None |
+-------------------+----------------------------+
$ openstack aggregate add host 1 node1
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------+
| availability_zone | nova |
| created_at | 2016-12-22T07:31:13.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [u'node1'] |
| id | 1 |
| metadata | {u'ssd': u'true', u'availability_zone': u'nova'} |
| name | fast-io |
| updated_at | None |
+-------------------+--------------------------------------------------+
$ openstack aggregate add host 1 node2
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------+
| availability_zone | nova |
| created_at | 2016-12-22T07:31:13.000000 |
| deleted | False |
| deleted_at | None |
| hosts | [u'node2'] |
| id | 1 |
| metadata | {u'ssd': u'true', u'availability_zone': u'nova'} |
| name | fast-io |
| updated_at | None |
+-------------------+--------------------------------------------------+
Use the openstack flavor create command to create the ssd.large
flavor with an ID of 6, 8 GB of RAM, an 80 GB root disk, and 4 vCPUs.
$ openstack flavor create --id 6 --ram 8192 --disk 80 --vcpus 4 ssd.large
+----------------------------+-----------+
| Field | Value |
+----------------------------+-----------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 80 |
| id | 6 |
| name | ssd.large |
| os-flavor-access:is_public | True |
| ram | 8192 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+-----------+
Once the flavor is created, specify one or more key-value pairs that
match the key-value pairs on the host aggregates with scope
aggregate_instance_extra_specs
. In this case, that is the
aggregate_instance_extra_specs:ssd=true
key-value pair.
Setting a key-value pair on a flavor is done using the
openstack flavor set command.
$ openstack flavor set --property aggregate_instance_extra_specs:ssd=true ssd.large
Once it is set, you should see the extra_specs
property of the
ssd.large
flavor populated with a key of ssd
and a corresponding
value of true
.
$ openstack flavor show ssd.large
+----------------------------+-------------------------------------------+
| Field | Value |
+----------------------------+-------------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 80 |
| id | 6 |
| name | ssd.large |
| os-flavor-access:is_public | True |
| properties | aggregate_instance_extra_specs:ssd='true' |
| ram | 8192 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+-------------------------------------------+
Now, when a user requests an instance with the ssd.large
flavor,
the scheduler only considers hosts with the ssd=true
key-value pair.
In this example, these are node1
and node2
.
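A hedged usage sketch (the image UUID and server name are placeholders) of booting an instance with this flavor, which the AggregateInstanceExtraSpecsFilter then restricts to the fast-io hosts:
$ openstack server create --image IMAGE_ID --flavor ssd.large ssd-instance-1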
When using the XenAPI-based hypervisor, the Compute service uses host aggregates to manage XenServer Resource pools, which are used in supporting live migration.
The Compute scheduler configuration options are documented in the tables below.
Configuration option = Default value | Description |
---|---|
discover_hosts_in_cells_interval = -1 | (Integer) Periodic task interval. This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur. Deployments where compute nodes come and go frequently may want this enabled, where others may prefer to manually discover hosts when one is added to avoid any overhead from constantly checking. If enabled, every time this runs, we will select any unmapped hosts out of each cell database on every run. |
driver = filter_scheduler | (String) The class of the driver used by the scheduler. The options are chosen from the entry points under the namespace 'nova.scheduler.driver' in 'setup.cfg'. |
periodic_task_interval = 60 | (Integer) Periodic task interval. This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used. If this is larger than the nova-service 'service_down_time' setting, Nova may report the scheduler service as down. This is because the scheduler driver is responsible for sending a heartbeat and it will only do that as often as this option allows. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler. |
host_manager = host_manager | (String) The scheduler host manager to use. The host manager manages the in-memory picture of the hosts that the scheduler uses. The options values are chosen from the entry points under the namespace 'nova.scheduler.host_manager' in 'setup.cfg'. |
max_attempts = 3 | (Integer) Maximum number of schedule attempts for a chosen host. This is the maximum number of attempts that will be made to schedule an instance before it is assumed that the failures aren't due to normal occasional race conflicts, but rather some other problem. When this is reached a MaxRetriesExceeded exception is raised, and the instance is set to an error state. |