[api] allow_instance_snapshots = True |
(BoolOpt) Operators can turn off the ability for a user to take snapshots of their instances by setting this option to False. When disabled, any attempt to take a snapshot will result in a HTTP 400 response (“Bad Request”). |
[api] auth_strategy = keystone |
(StrOpt) This determines the strategy to use for authentication: keystone or noauth2. ‘noauth2’ is designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username. |
[api] compute_link_prefix = None |
(StrOpt) This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged. Possible values: * Any string, including an empty string (the default). |
[api] config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 |
(StrOpt) When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don’t appear in this option. As of the Liberty release, the available versions are: * 1.0 * 2007-01-19 * 2007-03-01 * 2007-08-29 * 2007-10-10 * 2007-12-15 * 2008-02-01 * 2008-09-01 * 2009-04-04 The option is in the format of a single string, with each version separated by a space. Possible values: * Any string that represents zero or more versions, separated by spaces. |
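As a sketch, an operator wanting to expose only the newest EC2-style metadata version on the config drive could skip every older version (the list shown is this option's default):

```ini
[api]
# Skip all EC2-style metadata versions older than 2009-04-04; only the
# versions NOT listed here are returned on the config drive.
config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01
```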
[api] enable_instance_password = True |
(BoolOpt) Enables returning of the instance password by the relevant server API calls, such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, the password returned will not be correct; in that case, set this option to False. |
[api] fping_path = /usr/sbin/fping |
(StrOpt) The full path to the fping binary. |
[api] glance_link_prefix = None |
(StrOpt) This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged. Possible values: * Any string, including an empty string (the default). |
[api] hide_server_address_states = building |
(ListOpt) This option is a list of all instance states for which network address information should not be returned from the API. Possible values: A list of strings, where each string is a valid VM state, as defined in nova/compute/vm_states.py. As of the Newton release, they are: * “active” * “building” * “paused” * “suspended” * “stopped” * “rescued” * “resized” * “soft-delete” * “deleted” * “error” * “shelved” * “shelved_offloaded” |
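For example, a deployment that also wants to hide addresses while an instance is being rescued could extend the default list like this sketch:

```ini
[api]
# Do not return network address information for instances in these states.
# Each entry must be a valid VM state from nova/compute/vm_states.py.
hide_server_address_states = building,rescued
```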
[api] max_limit = 1000 |
(IntOpt) As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option. |
[api] metadata_cache_expiration = 15 |
(IntOpt) This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect. |
[api] neutron_default_tenant_id = default |
(StrOpt) Tenant ID for getting the default network from Neutron API (also referred in some places as the ‘project ID’) to use. Related options: * use_neutron_default_nets |
[api] use_forwarded_for = False |
(BoolOpt) When True, the ‘X-Forwarded-For’ header is treated as the canonical remote address. When False (the default), the ‘remote_address’ header is used. You should only enable this if you have an HTML sanitizing proxy. |
[api] use_neutron_default_nets = False |
(BoolOpt) When True, the TenantNetworkController will query the Neutron API to get the default networks to use. Related options: * neutron_default_tenant_id |
[api] vendordata_dynamic_connect_timeout = 5 |
(IntOpt) Maximum wait time for an external REST service to connect. Possible values: * Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small. Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_ssl_certfile * vendordata_dynamic_read_timeout * vendordata_dynamic_failure_fatal |
[api] vendordata_dynamic_failure_fatal = False |
(BoolOpt) Should failures to fetch dynamic vendordata be fatal to instance boot? Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_ssl_certfile * vendordata_dynamic_connect_timeout * vendordata_dynamic_read_timeout |
[api] vendordata_dynamic_read_timeout = 5 |
(IntOpt) Maximum wait time for an external REST service to return data once connected. Possible values: * Any integer. Note that instance start is blocked during this wait time, so this value should be kept small. Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_ssl_certfile * vendordata_dynamic_connect_timeout * vendordata_dynamic_failure_fatal |
[api] vendordata_dynamic_ssl_certfile = |
(StrOpt) Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against. Possible values: * An empty string, or a path to a valid certificate file Related options: * vendordata_providers * vendordata_dynamic_targets * vendordata_dynamic_connect_timeout * vendordata_dynamic_read_timeout * vendordata_dynamic_failure_fatal |
[api] vendordata_dynamic_targets = |
(ListOpt) A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>. The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference. |
[api] vendordata_jsonfile_path = None |
(StrOpt) Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary. Possible values: * Any string representing the path to the data file, or an empty string (default). |
[api] vendordata_providers = |
(ListOpt) A list of vendordata providers. vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment. There are currently two supported providers: StaticJSON and DynamicJSON. StaticJSON reads a JSON file configured by the flag vendordata_jsonfile_path and places the JSON from that file into vendor_data.json and vendor_data2.json. DynamicJSON is configured via the vendordata_dynamic_targets flag, which is documented separately. For each of the endpoints specified in that flag, a section is added to the vendor_data2.json. For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference. Possible values: * A list of vendordata providers, with StaticJSON and DynamicJSON being current options. Related options: * vendordata_dynamic_targets * vendordata_dynamic_ssl_certfile * vendordata_dynamic_connect_timeout * vendordata_dynamic_read_timeout * vendordata_dynamic_failure_fatal |
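Putting the vendordata options together, a deployment using both providers might look like the following sketch (the target name and URL are hypothetical):

```ini
[api]
# Enable both the static file and dynamic REST vendordata providers.
vendordata_providers = StaticJSON,DynamicJSON
# Static vendordata served to instances as vendor_data.json.
vendordata_jsonfile_path = /etc/nova/vendor_data.json
# Hypothetical dynamic target; adds a "crowbar" section to vendor_data2.json.
vendordata_dynamic_targets = crowbar@http://example.com/api/v1/vendordata
# Keep timeouts small, since instance start can block on these calls.
vendordata_dynamic_connect_timeout = 5
vendordata_dynamic_read_timeout = 5
vendordata_dynamic_failure_fatal = False
```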
[console] allowed_origins = |
(ListOpt) Adds a list of allowed origins to the console websocket proxy, allowing connections from origin hostnames other than the host. The websocket proxy matches the host header with the origin header to prevent cross-site requests; this list specifies any additional values, other than the host, that are allowed in the origin header. Possible values: * A list where each element is an allowed origin hostname, or an empty list |
[consoleauth] token_ttl = 600 |
(IntOpt) The lifetime of a console auth token. A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted. |
[filter_scheduler] aggregate_image_properties_isolation_namespace = None |
(StrOpt) Image property namespace for use in the host aggregate. Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled. Possible values: * A string, where the string corresponds to an image property namespace Related options: * aggregate_image_properties_isolation_separator |
[filter_scheduler] aggregate_image_properties_isolation_separator = . |
(StrOpt) Separator character(s) for image property namespace and name. When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled. Possible values: * A string, where the string corresponds to an image property namespace separator character Related options: * aggregate_image_properties_isolation_namespace |
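As an illustration, with a hypothetical namespace ‘ppc’ and the default separator ‘.’, only aggregate metadata keys such as ‘ppc.os_distro’ would be matched against image properties by the filter:

```ini
[filter_scheduler]
# Only aggregate metadata keys beginning with "ppc." (namespace + separator)
# are considered by the aggregate_image_properties_isolation filter.
aggregate_image_properties_isolation_namespace = ppc
aggregate_image_properties_isolation_separator = .
```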
[filter_scheduler] available_filters = ['nova.scheduler.filters.all_filters'] |
(MultiStrOpt) Filters that the scheduler can use. An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the ‘enabled_filters’ option will be used, but any filter appearing in that option must also be included in this list. By default, this is set to all filters that are included with nova. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more strings, where each string corresponds to the name of a filter that may be used for selecting a host Related options: * enabled_filters |
[filter_scheduler] baremetal_enabled_filters = RetryFilter, AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ExactRamFilter, ExactDiskFilter, ExactCoreFilter |
(ListOpt) Filters used for filtering baremetal hosts. Filters are applied in order, so place your most restrictive filters first to make the filtering process more efficient. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a baremetal host Related options: * If the ‘use_baremetal_filters’ option is False, this option has no effect. |
[filter_scheduler] disk_weight_multiplier = 1.0 |
(FloatOpt) Disk weight multiplier ratio. Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘disk’ weigher is enabled. Possible values: * An integer or float value, where the value corresponds to the multiplier ratio for this weigher. |
[filter_scheduler] enabled_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter |
(ListOpt) Filters that the scheduler will use. An ordered list of filter class names that will be used for filtering hosts. These filters will always be applied, in the order they are listed, so place your most restrictive filters first to make the filtering process more efficient. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a host Related options: * All of the filters in this option must be present in the ‘available_filters’ option, or a SchedulerHostFilterNotFound exception will be raised. |
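A minimal sketch showing the two filter options together; every name in enabled_filters must also be discoverable via available_filters:

```ini
[filter_scheduler]
# Make all in-tree filters available to the scheduler...
available_filters = nova.scheduler.filters.all_filters
# ...but apply only these, in order, most restrictive first.
enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ImagePropertiesFilter
```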
[filter_scheduler] host_subset_size = 1 |
(IntOpt) Size of subset of best hosts selected by scheduler. New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * An integer, where the integer corresponds to the size of a host subset. Any integer is valid, although any value less than 1 will be treated as 1 |
[filter_scheduler] io_ops_weight_multiplier = -1.0 |
(FloatOpt) IO operations weight multiplier ratio. This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads, whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops’ weigher is enabled. Possible values: * An integer or float value, where the value corresponds to the multiplier ratio for this weigher. |
[filter_scheduler] isolated_hosts = |
(ListOpt) List of hosts that can only run certain images. If there is a need to restrict some images to only run on certain designated hosts, list those host names here. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled. Possible values: * A list of strings, where each string corresponds to the name of a host Related options: * scheduler/isolated_images * scheduler/restrict_isolated_hosts_to_isolated_images |
[filter_scheduler] isolated_images = |
(ListOpt) List of UUIDs for images that can only be run on certain hosts. If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled. Possible values: * A list of UUID strings, where each string corresponds to the UUID of an image Related options: * scheduler/isolated_hosts * scheduler/restrict_isolated_hosts_to_isolated_images |
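For example, to pin a single licensed image to two dedicated hosts (the hostnames and UUID below are hypothetical), assuming the ‘IsolatedHostsFilter’ is enabled:

```ini
[filter_scheduler]
# Hosts reserved for isolated images (hypothetical hostnames).
isolated_hosts = server1.example.net,server2.example.net
# Image UUIDs that may only run on those hosts (hypothetical UUID).
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13
# Also keep non-isolated images off the isolated hosts (the default).
restrict_isolated_hosts_to_isolated_images = True
```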
[filter_scheduler] max_instances_per_host = 50 |
(IntOpt) Maximum number of instances that can be active on a host. If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The num_instances_filter will reject any host that has at least as many instances as this option’s value. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘num_instances_filter’ filter is enabled. Possible values: * An integer, where the integer corresponds to the max instances that can be scheduled on a host. |
[filter_scheduler] max_io_ops_per_host = 8 |
(IntOpt) The number of instances that can be actively performing IO on a host. Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops_filter’ filter is enabled. Possible values: * An integer, where the integer corresponds to the max number of instances that can be actively performing IO on any given host. |
[filter_scheduler] ram_weight_multiplier = 1.0 |
(FloatOpt) RAM weight multiplier ratio. This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘ram’ weigher is enabled. Possible values: * An integer or float value, where the value corresponds to the multiplier ratio for this weigher. |
[filter_scheduler] restrict_isolated_hosts_to_isolated_images = True |
(BoolOpt) Prevent non-isolated images from being built on isolated hosts. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled. Even then, this option doesn’t affect the behavior of requests for isolated images, which will always be restricted to isolated hosts. Related options: * scheduler/isolated_images * scheduler/isolated_hosts |
[filter_scheduler] soft_affinity_weight_multiplier = 1.0 |
(FloatOpt) Multiplier used for weighing hosts for group soft-affinity. Possible values: * An integer or float value, where the value corresponds to the weight multiplier for hosts with group soft affinity. Only positive values are meaningful, as negative values would make this behave as a soft anti-affinity weigher. |
[filter_scheduler] soft_anti_affinity_weight_multiplier = 1.0 |
(FloatOpt) Multiplier used for weighing hosts for group soft-anti-affinity. Possible values: * An integer or float value, where the value corresponds to the weight multiplier for hosts with group soft anti-affinity. Only positive values are meaningful, as negative values would make this behave as a soft affinity weigher. |
[filter_scheduler] track_instance_changes = True |
(BoolOpt) Enable querying of individual hosts for instance information. The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host. If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. |
[filter_scheduler] use_baremetal_filters = False |
(BoolOpt) Enable baremetal filters. Set this to True to tell the nova scheduler that it should use the filters specified in the ‘baremetal_enabled_filters’ option. If you are not scheduling baremetal nodes, leave this at the default setting of False. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Related options: * If this option is set to True, then the filters specified in ‘baremetal_enabled_filters’ are used instead of the filters specified in ‘enabled_filters’. |
[filter_scheduler] weight_classes = nova.scheduler.weights.all_weighers |
(ListOpt) Weighers that the scheduler will use. Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is ‘host_subset_size’. By default, this is set to all weighers that are included with nova. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Possible values: * A list of zero or more strings, where each string corresponds to the name of a weigher that will be used for selecting a host |
[hyperv] iscsi_initiator_list = |
(ListOpt) List of iSCSI initiators that will be used for establishing iSCSI sessions. If none are specified, the Microsoft iSCSI initiator service will choose the initiator. |
[hyperv] use_multipath_io = False |
(BoolOpt) Use multipath connections when attaching iSCSI or FC disks. This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices. |
[ironic] serial_console_state_timeout = 10 |
(IntOpt) Timeout (in seconds) to wait for the node serial console state to change. Set to 0 to disable the timeout. |
[libvirt] live_migration_scheme = None |
(StrOpt) Scheme used for live migration. Overrides the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that the hypervisor supports a particular scheme. Related options: * virt_type: This option is meaningful only when virt_type is set to kvm or qemu. * live_migration_uri: If live_migration_uri is not None, the scheme used for live migration is taken from live_migration_uri instead. |
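For instance, a deployment that has configured libvirt for TLS transport might override the scheme as in this sketch (it assumes the hypervisor actually supports the chosen scheme):

```ini
[libvirt]
virt_type = kvm
# Use the TLS-secured migration scheme instead of the default for kvm.
live_migration_scheme = tls
```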
[notifications] default_level = INFO |
(StrOpt) Default notification level for outgoing notifications. |
[notifications] default_publisher_id = $my_ip |
(StrOpt) Default publisher_id for outgoing notifications. If you consider routing notifications using a different publisher, change this value accordingly. Possible values: * Defaults to the IPv4 address of this host, but it can be any valid oslo.messaging publisher_id Related options: * my_ip - IP address of this host |
[notifications] notification_format = both |
(StrOpt) Specifies which notification format shall be used by nova. The default value is fine for most deployments and rarely needs to be changed. This value can be set to ‘versioned’ once the infrastructure moves closer to consuming the newer format of notifications. After this occurs, this option will be removed (possibly in the “P” release). Possible values: * unversioned: Only the legacy unversioned notifications are emitted. * versioned: Only the new versioned notifications are emitted. * both: Both the legacy unversioned and the new versioned notifications are emitted. (Default) The list of versioned notifications is visible in http://docs.openstack.org/developer/nova/notifications.html |
[notifications] notify_on_api_faults = False |
(BoolOpt) If enabled, send api.fault notifications on caught exceptions in the API service. |
[notifications] notify_on_state_change = None |
(StrOpt) If set, send compute.instance.update notifications on instance state changes. Please refer to https://wiki.openstack.org/wiki/SystemUsageData for additional information on notifications. Possible values: * None - no notifications * “vm_state” - notifications on VM state changes * “vm_and_task_state” - notifications on VM and task state changes |
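A sketch combining the notification options above for a deployment that wants the most detailed state-change reporting:

```ini
[notifications]
# Emit compute.instance.update on both VM and task state transitions.
notify_on_state_change = vm_and_task_state
# Also report uncaught API exceptions as api.fault notifications.
notify_on_api_faults = True
default_level = INFO
```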
[pci] alias = [] |
(MultiStrOpt) An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra_spec for a flavor, without needing to repeat all the PCI property requirements. Possible values: * A list of JSON values which describe the aliases. For example: alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI" } defines an alias for the Intel QuickAssist card (multi valued). Valid key values are: * "name": Name of the PCI alias. * "product_id": Product ID of the device in hexadecimal. * "vendor_id": Vendor ID of the device in hexadecimal. * "device_type": Type of PCI device. Valid values are: "type-PCI", "type-PF" and "type-VF". |
[pci] passthrough_whitelist = [] |
(MultiStrOpt) White list of PCI devices available to VMs. Possible values: * A JSON dictionary which describes a whitelisted PCI device. It should take the following format: ["vendor_id": "<id>",] ["product_id": "<id>",] ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" | "devname": "<name>",] {"<tag>": "<tag_value>",} Where '[' indicates zero or one occurrences, '{' indicates zero or multiple occurrences, and '|' indicates mutually exclusive options. Note that any missing fields are automatically wildcarded. Valid key values are: * "vendor_id": Vendor ID of the device in hexadecimal. * "product_id": Product ID of the device in hexadecimal. * "address": PCI address of the device. * "devname": Device name of the device (e.g. interface name). Not all PCI devices have a name. * "<tag>": Additional <tag> and <tag_value> used for matching PCI devices. Supported <tag>: "physical_network". The address key supports traditional glob style and regular expression syntax. Valid examples are: passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet"} passthrough_whitelist = {"address":"*:0a:00.*"} passthrough_whitelist = {"address":":0a:00.", "physical_network":"physnet1"} passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071"} passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"} passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "01", "function": "[2-7]"}, "physical_network":"physnet1"} passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "0[1-2]", "function": ".*"}, "physical_network":"physnet1"} The following is invalid, as it specifies mutually exclusive options: passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"} * A JSON list of JSON dictionaries corresponding to the above format. For example: passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}] |
[placement] os_interface = None |
(StrOpt) Endpoint interface for this node. This is used when picking the URL in the service catalog. |
[profiler] connection_string = messaging:// |
(StrOpt) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging://: use oslo_messaging driver for sending notifications. * mongodb://127.0.0.1:27017 : use mongodb driver for sending notifications. * elasticsearch://127.0.0.1:9200 : use elasticsearch driver for sending notifications. |
[profiler] enabled = False |
(BoolOpt) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: * True: Enables the feature * False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project's part of the trace will be empty. |
[profiler] es_doc_type = notification |
(StrOpt) Document type for notification indexing in elasticsearch. |
[profiler] es_scroll_size = 10000 |
(IntOpt) Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). |
[profiler] es_scroll_time = 2m |
(StrOpt) This parameter is a time value parameter (for example: es_scroll_time=2m), indicating how long the nodes that participate in the search will maintain the relevant resources needed to continue and support it. |
[profiler] hmac_keys = SECRET_KEY |
(StrOpt) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
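A minimal profiler configuration might look like this sketch (the key shown is a placeholder; use a random secret, and share at least one key across the OpenStack projects you want a single trace to span):

```ini
[profiler]
enabled = True
# Placeholder secret; clients must send one of these keys to trigger profiling.
hmac_keys = SECRET_KEY
# Send traces through oslo.messaging (the default backend).
connection_string = messaging://
trace_sqlalchemy = False
```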
[profiler] sentinel_service_name = mymaster |
(StrOpt) Redis Sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster). |
[profiler] socket_timeout = 0.1 |
(FloatOpt) Redis Sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). |
[profiler] trace_sqlalchemy = False |
(BoolOpt) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced). Possible values: * True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it. * False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. |
[quota] cores = 20 |
(IntOpt) The number of instance cores or vCPUs allowed per project. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] driver = nova.quota.DbQuotaDriver |
(StrOpt) The quota enforcer driver. Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks. Possible values: * nova.quota.DbQuotaDriver (default) or any string representing fully qualified class name. |
[quota] fixed_ips = -1 |
(IntOpt) The number of fixed IPs allowed per project. Unlike floating IPs, fixed IPs are allocated dynamically by the network component when instances boot up. This quota value should be at least the number of instances allowed. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] floating_ips = 10 |
(IntOpt) The number of floating IPs allowed per project. Floating IPs are not allocated to instances by default. Users need to select them from the pool configured by the OpenStack administrator to attach to their instances. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] injected_file_content_bytes = 10240 |
(IntOpt) The number of bytes allowed per injected file. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] injected_file_path_length = 255 |
(IntOpt) The maximum allowed injected file path length. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] injected_files = 5 |
(IntOpt) The number of injected files allowed. File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] instances = 10 |
(IntOpt) The number of instances allowed per project. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] key_pairs = 100 |
(IntOpt) The maximum number of key pairs allowed per user. Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] max_age = 0 |
(IntOpt) The number of seconds between subsequent usage refreshes. This defaults to 0 (off) to avoid additional load, but it is useful to turn on to help keep quota usage up to date and reduce the impact of out-of-sync usage issues. Note that quotas are not updated by a periodic task; they are updated on a new reservation if max_age has passed since the last refresh. |
[quota] metadata_items = 128 |
(IntOpt) The number of metadata items allowed per instance. Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] ram = 51200 |
(IntOpt) The number of megabytes of instance RAM allowed per project. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] reservation_expire = 86400 |
(IntOpt) The number of seconds until a reservation expires. This quota represents the time period for invalidating quota reservations. |
[quota] security_group_rules = 20 |
(IntOpt) The number of security rules per security group. The associated rules in each security group control the traffic to instances in the group. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] security_groups = 10 |
(IntOpt) The number of security groups per project. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] server_group_members = 10 |
(IntOpt) The maximum number of servers per server group. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] server_groups = 10 |
(IntOpt) The maximum number of server groups per project. Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota. Possible values: * A positive integer or 0. * -1 to disable the quota. |
[quota] until_refresh = 0 |
(IntOpt) The count of reservations until usage is refreshed. This defaults to 0 (off) to avoid additional load, but it is useful to turn on to help keep quota usage up to date and reduce the impact of out-of-sync usage issues. |
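As a sketch, the [quota] options above might be tuned together in nova.conf; all values here are illustrative, not recommendations:

```ini
[quota]
instances = 20
cores = 40
ram = 102400
# -1 disables a quota entirely.
floating_ips = -1
# Refresh usage once an hour has passed since the last refresh,
# or after every 5 reservations, whichever triggers first.
max_age = 3600
until_refresh = 5
```

Setting max_age or until_refresh to a nonzero value trades a little extra load for quota usage that recovers from out-of-sync counts on its own.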
[scheduler] discover_hosts_in_cells_interval = -1 |
(IntOpt) Periodic task interval. This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur. Small deployments may want this periodic task enabled, as surveying the cells for new hosts is likely to be lightweight enough to not cause undue burden on the scheduler. However, larger clouds (and those that are not adding hosts regularly) will likely want to disable this automatic behavior and instead use the nova-manage cell_v2 discover_hosts command when hosts have been added to a cell. |
[scheduler] driver = filter_scheduler |
(StrOpt) The class of the driver used by the scheduler. The options are chosen from the entry points under the namespace ‘nova.scheduler.driver’ in ‘setup.cfg’. Possible values: * A string, where the string corresponds to the class name of a scheduler driver. There are a number of options available: ** ‘caching_scheduler’, which aggressively caches the system state for better individual scheduler performance at the risk of more retries when running multiple schedulers ** ‘chance_scheduler’, which simply picks a host at random ** ‘fake_scheduler’, which is used for testing ** A custom scheduler driver. In this case, you will be responsible for creating and maintaining the entry point in your ‘setup.cfg’ file |
[scheduler] host_manager = host_manager |
(StrOpt) The scheduler host manager to use. The host manager manages the in-memory picture of the hosts that the scheduler uses. The options values are chosen from the entry points under the namespace ‘nova.scheduler.host_manager’ in ‘setup.cfg’. |
[scheduler] max_attempts = 3 |
(IntOpt) Maximum number of schedule attempts for a chosen host. This is the maximum number of attempts that will be made to schedule an instance before it is assumed that the failures aren’t due to normal occasional race conflicts, but rather some other problem. When this limit is reached, a MaxRetriesExceeded exception is raised and the instance is set to an error state. Possible values: * A positive integer, where the integer corresponds to the max number of attempts that can be made when scheduling an instance. |
[scheduler] periodic_task_interval = 60 |
(IntOpt) Periodic task interval. This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used. If this is larger than the nova-service ‘service_down_time’ setting, Nova may report the scheduler service as down. This is because the scheduler driver is responsible for sending a heartbeat and it will only do that as often as this option allows. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler. Possible values: * An integer, where the integer corresponds to periodic task interval in seconds. 0 uses the default interval (60 seconds). A negative value disables periodic tasks. Related options: * nova-service service_down_time |
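The [scheduler] options above could be combined in nova.conf as in this sketch; the interval values are illustrative:

```ini
[scheduler]
driver = filter_scheduler
# Discover newly added cell hosts every 5 minutes; with the
# default of -1, run `nova-manage cell_v2 discover_hosts` by hand
# after adding hosts instead.
discover_hosts_in_cells_interval = 300
max_attempts = 5
# Keep this at or below service_down_time, or Nova may report
# the scheduler service as down between heartbeats.
periodic_task_interval = 60
```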
[service_user] auth_section = None |
(Opt) Config section from which to load plugin-specific options |
[service_user] auth_type = None |
(Opt) Authentication type to load |
[service_user] cafile = None |
(StrOpt) PEM-encoded Certificate Authority to use when verifying HTTPS connections. |
[service_user] certfile = None |
(StrOpt) PEM-encoded client certificate file |
[service_user] insecure = False |
(BoolOpt) Verify HTTPS connections. When False (the default), server certificates are verified; set to True to skip verification (insecure). |
[service_user] keyfile = None |
(StrOpt) PEM-encoded client certificate key file |
[service_user] send_service_user_token = False |
(BoolOpt) When True, if sending a user token to a REST API, also send a service token. Nova often reuses the user token provided to nova-api to talk to other REST APIs, such as Cinder and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear that it is Nova calling the service on the user’s behalf, we include a service token along with the user token. Should the user’s token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware. This feature is currently experimental, and as such is turned off by default while full testing and performance tuning of this feature is completed. |
[service_user] timeout = None |
(IntOpt) Timeout value for HTTP requests |
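For illustration, a [service_user] block enabling service tokens might look like the sketch below. The credential options beyond those documented above (auth_url, username, password, project_name, and the domain names) are assumed to come from the keystoneauth ‘password’ plugin selected by auth_type, and every value here is a placeholder:

```ini
[service_user]
send_service_user_token = True
# Options below are loaded by the auth plugin named in auth_type;
# the names shown assume the 'password' plugin.
auth_type = password
auth_url = http://controller:5000/v3
username = nova
password = SERVICE_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default
```

The [vendordata_dynamic_auth] section that follows accepts the same style of plugin-driven configuration for authenticating to external vendordata services.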
[vendordata_dynamic_auth] auth_section = None |
(Opt) Config section from which to load plugin-specific options |
[vendordata_dynamic_auth] auth_type = None |
(Opt) Authentication type to load |
[vendordata_dynamic_auth] cafile = None |
(StrOpt) PEM-encoded Certificate Authority to use when verifying HTTPS connections. |
[vendordata_dynamic_auth] certfile = None |
(StrOpt) PEM-encoded client certificate file |
[vendordata_dynamic_auth] insecure = False |
(BoolOpt) Verify HTTPS connections. When False (the default), server certificates are verified; set to True to skip verification (insecure). |
[vendordata_dynamic_auth] keyfile = None |
(StrOpt) PEM-encoded client certificate key file |
[vendordata_dynamic_auth] timeout = None |
(IntOpt) Timeout value for HTTP requests |
[xenserver] console_public_hostname = localhost |
(StrOpt) Publicly visible name for this console host. Possible values: * A string representing a valid hostname |