Metastatic Driver

This driver uses Nodepool nodes (from any driver) as backing nodes to further allocate “static-like” nodes for end use.

A typical use case is to be able to request a large node (a backing node) from a cloud provider, and then divide that node up into smaller nodes that are actually used (requested nodes). A backing node can support one or more requested nodes, and backing nodes are scaled up or down as necessary based on the number of requested nodes.

The name is derived from the fact that the nodes it provides behave like “static” nodes, and that the backing nodes come from Nodepool itself, which is “meta”.

providers.[metastatic]
Type: list

A metastatic provider’s resources are partitioned into groups called pools, and within a pool, the node types which are to be made available are listed.

Note

For documentation purposes the option names are prefixed providers.[metastatic] to disambiguate from other drivers, but [metastatic] is not required in the configuration (e.g. below providers.[metastatic].pools refers to the pools key in the providers section when the metastatic driver is selected).

Example:

providers:
  - name: meta-provider
    driver: metastatic
    pools:
      - name: main
        max-servers: 10
        labels:
          - name: small-node
            backing-label: large-node
            max-parallel-jobs: 2
            grace-time: 600

providers.[metastatic].name (required)

A unique name for this provider configuration.

providers.[metastatic].boot-timeout
Default: 60
Type: int seconds

Once an instance is active, how long to try connecting to the image via SSH. If the timeout is exceeded, the node launch is aborted and the instance deleted.

providers.[metastatic].launch-timeout
Default: 3600
Type: int seconds

The time to wait from issuing the command to create a new instance until that instance is reported as “active”. If the timeout is exceeded, the node launch is aborted and the instance deleted.

providers.[metastatic].launch-retries
Default: 3

The number of times to retry launching a node before considering the job failed.
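
As an illustrative sketch (the values below are arbitrary, not recommendations), these provider-level settings can be tuned together:

providers:
  - name: meta-provider
    driver: metastatic
    boot-timeout: 120
    launch-timeout: 1800
    launch-retries: 5
    # pools omitted for brevity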

providers.[metastatic].pools
Type: list

A pool defines a group of resources from the provider. Each pool has a maximum number of nodes which can be launched from it, along with a number of attributes that characterize the use of the backing nodes.

providers.[metastatic].pools.name (required)

A unique name within the provider for this pool of resources.

providers.[metastatic].pools.priority
Default: 100
Type: int

The priority of this provider pool (a lower number is a higher priority). Nodepool launchers will yield requests to other provider pools with a higher priority as long as they are not paused. This means that, in general, higher priority pools will reach quota before lower priority pools begin to be used.

This setting may be specified at the provider level in order to apply to all pools within that provider, or it can be overridden here for a specific pool.
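
For illustration, a provider might prefer one pool over another by giving it a lower (higher-priority) value; the pool names here are hypothetical:

pools:
  - name: preferred
    priority: 50
    max-servers: 10
  - name: overflow
    priority: 200
    max-servers: 10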

providers.[metastatic].pools.host-key-checking
Default: False
Type: bool

Whether to validate SSH host keys. When true, this helps ensure that nodes are ready to receive SSH connections before they are supplied to the requestor. When set to false, nodepool-launcher will not attempt to ssh-keyscan nodes after they are booted. Unlike other drivers, this defaults to false here because it is presumed that the backing node has already been checked for connectivity. Enabling it here will cause the launcher to check connectivity each time it allocates a new slot on the backing node, and if a check fails, it will mark the backing node as failed and stop allocating any more slots on that node.

providers.[metastatic].pools.node-attributes
Type: dict

A dictionary of key-value pairs that will be stored with the node data in ZooKeeper. The keys and values can be any arbitrary string.

The metastatic driver will automatically use the values supplied by the backing node as default values. Any values specified here for top-level dictionary keys will override those supplied by the backing node.
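
A sketch of a pool attaching arbitrary metadata to its nodes (the keys and values below are examples only):

pools:
  - name: main
    max-servers: 10
    node-attributes:
      tier: testing
      owner: qa-team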

providers.[metastatic].pools.max-servers
Type: int

Maximum number of servers spawnable from this pool. This can be used to limit the number of servers. If not defined, Nodepool can create as many servers as the backing node providers support.

providers.[metastatic].pools.labels
Type: list

Each entry in a pool’s labels section indicates that the corresponding label is available for use in this pool.

labels:
  - name: small-node
    backing-label: large-node
    max-parallel-jobs: 2
    grace-time: 600

Each entry is a dictionary with the following keys:

providers.[metastatic].pools.labels.name (required)
Type: str

Identifier for this label.

providers.[metastatic].pools.labels.backing-label (required)
Type: str

Refers to the name of a different label in Nodepool which will be used to supply the backing nodes for requests of this label.

providers.[metastatic].pools.labels.max-parallel-jobs
Default: 1
Type: int

The number of jobs that can run in parallel on a single backing node.
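
For example, with the hypothetical labels below, each large-node backing node would host up to four small-node requested nodes at a time:

labels:
  - name: small-node
    backing-label: large-node
    max-parallel-jobs: 4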

providers.[metastatic].pools.labels.grace-time
Default: 60
Type: int

When all requested nodes which were assigned to a backing node have been deleted, the backing node itself is eligible for deletion. In order to reduce churn, Nodepool will wait a certain amount of time after the last requested node is deleted to see if new requests arrive for this label before deleting the backing node. Set this value to the amount of time, in seconds, to wait.

providers.[metastatic].pools.labels.min-retention-time
Type: int

If this value is set, the backing node will not be deleted unless this amount of time (in seconds) has passed since the backing node was launched. For backing node resources with minimum billing times, this can be used to ensure that the backing node is retained for at least the minimum billing interval.
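
For instance, if the backing node resource is billed in one-hour increments, a value of 3600 keeps each backing node for at least one billing hour (label names are illustrative):

labels:
  - name: small-node
    backing-label: large-node
    min-retention-time: 3600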

providers.[metastatic].pools.labels.max-age
Type: int

If this value is set, the backing node will be removed from service after this amount of time (in seconds) has passed since it was launched. After a backing node reaches this point, any existing jobs will be permitted to run to completion, but no new metastatic nodes will be allocated to that backing node; once all metastatic nodes using it have been deleted, the backing node will be deleted.
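
As a sketch with hypothetical label names, the following rotates backing nodes out of service after 24 hours:

labels:
  - name: small-node
    backing-label: large-node
    max-age: 86400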

providers.[metastatic].pools.labels.host-key-checking
Default: False
Type: bool

Whether to validate SSH host keys. When true, this helps ensure that nodes are ready to receive SSH connections before they are supplied to the requestor. When set to false, nodepool-launcher will not attempt to ssh-keyscan nodes after they are booted. Unlike other drivers, this defaults to false here because it is presumed that the backing node has already been checked for connectivity. Enabling it here will cause the launcher to check connectivity each time it allocates a new slot on the backing node, and if a check fails, it will mark the backing node as failed and stop allocating any more slots on that node.

Note

This value will override the value for providers.[metastatic].pools.host-key-checking.
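
For illustration, a pool could leave host key checking disabled by default while enabling it for a single label (names are hypothetical):

pools:
  - name: main
    max-servers: 10
    host-key-checking: false
    labels:
      - name: small-node
        backing-label: large-node
        host-key-checking: true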