ironic_python_agent.hardware module

class ironic_python_agent.hardware.BlockDevice(name, model, size, rotational, wwn=None, serial=None, vendor=None, wwn_with_extension=None, wwn_vendor_extension=None, hctl=None, by_path=None, uuid=None, partuuid=None)[source]

Bases: SerializableComparable

serializable_fields = ('name', 'model', 'size', 'rotational', 'wwn', 'serial', 'vendor', 'wwn_with_extension', 'wwn_vendor_extension', 'hctl', 'by_path')
class ironic_python_agent.hardware.BootInfo(current_boot_mode, pxe_interface=None)[source]

Bases: SerializableComparable

serializable_fields = ('current_boot_mode', 'pxe_interface')
class ironic_python_agent.hardware.CPU(model_name, frequency, count, architecture, flags=None, socket_count=None)[source]

Bases: SerializableComparable

serializable_fields = ('model_name', 'frequency', 'count', 'architecture', 'flags', 'socket_count')
class ironic_python_agent.hardware.GenericHardwareManager[source]

Bases: HardwareManager

HARDWARE_MANAGER_NAME = 'generic_hardware_manager'
HARDWARE_MANAGER_VERSION = '1.2'
apply_configuration(node, ports, raid_config, delete_existing=True)[source]

Apply RAID configuration.

Parameters:
  • node – A dictionary of the node object.

  • ports – A list of dictionaries containing information of ports for the node.

  • raid_config – The configuration to apply.

  • delete_existing – Whether to delete the existing configuration.

burnin_cpu(node, ports)[source]

Burn-in the CPU

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

burnin_disk(node, ports)[source]

Burn-in the disk

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

burnin_memory(node, ports)[source]

Burn-in the memory

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

burnin_network(node, ports)[source]

Burn-in the network

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

collect_lldp_data(interface_names=None)[source]

Collect and convert LLDP info from the node.

In order to process the LLDP information later, the raw data needs to be converted for serialization purposes.

Parameters:

interface_names – list of names of node’s interfaces.

Returns:

a dict, containing the lldp data from every interface.

collect_system_logs(io_dict, file_list)[source]

Collect logs from the system.

Implementations should update io_dict and file_list with logs to send to Ironic and Inspector.

Parameters:
  • io_dict – Dictionary mapping file names to binary IO objects with corresponding data.

  • file_list – List of full file paths to include.
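
For illustration, a minimal sketch of how a custom hardware manager might implement this hook; the dmesg capture and the log path are assumptions, not part of the API:

import io
import subprocess

from ironic_python_agent import hardware


class ExampleLogManager(hardware.HardwareManager):
    def evaluate_hardware_support(self):
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def collect_system_logs(self, io_dict, file_list):
        # Capture command output into an in-memory binary buffer,
        # keyed by the name under which it should be archived.
        dmesg = subprocess.run(['dmesg'], capture_output=True, check=False)
        io_dict['dmesg'] = io.BytesIO(dmesg.stdout)
        # Ask the caller to also archive this file verbatim
        # (hypothetical path).
        file_list.append('/var/log/example-agent.log')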

create_configuration(node, ports)[source]

Create a RAID configuration.

Unless overwritten by a local hardware manager, this method will create a software RAID configuration as read from the node’s ‘target_raid_config’.

Parameters:
  • node – A dictionary of the node object.

  • ports – A list of dictionaries containing information of ports for the node.

Returns:

The current RAID configuration in the usual format.

Raises:

SoftwareRAIDError if the desired configuration is not valid or if there was an error when creating the RAID devices.
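
For illustration, a hedged sketch of the usual 'target_raid_config' shape this method reads for software RAID; the sizes and levels are assumptions, not requirements:

# Illustrative node fragment; 'controller': 'software' marks the
# logical disks as software RAID.
target_raid_config = {
    'logical_disks': [
        # RAID-1 across the holder disks, e.g. for the root filesystem.
        {'size_gb': 100, 'raid_level': '1', 'controller': 'software'},
        # RAID-0 consuming the remaining space.
        {'size_gb': 'MAX', 'raid_level': '0', 'controller': 'software'},
    ]
}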

delete_configuration(node, ports)[source]

Delete a RAID configuration.

Unless overwritten by a local hardware manager, this method will delete all software RAID devices on the node. NOTE(arne_wiebalck): It may be worth considering deleting only the RAID devices listed in the node's 'target_raid_config'. If that config has been lost, though, the cleanup may become difficult. So, for now, we delete everything we detect.

Parameters:
  • node – A dictionary of the node object

  • ports – A list of dictionaries containing information of ports for the node

erase_block_device(node, block_device)[source]

Attempt to erase a block device.

Implementations should detect the type of device and erase it in the most appropriate way possible. Generic implementations should support common erase mechanisms such as ATA secure erase, or multi-pass random writes. Operators with more specific needs should override this method in order to detect and handle “interesting” cases, or delegate to the parent class to handle generic cases.

For example: operators running ACME MagicStore (TM) cards alongside standard SSDs might check whether the device is a MagicStore and use a proprietary tool to erase that, otherwise call this method on their parent class. Upstream submissions of common functionality are encouraged.

This interface may be called concurrently to speed up erasure; as such, it should be implemented in a thread-safe way.

Parameters:
  • node – Ironic node object

  • block_device – a BlockDevice indicating a device to be erased.

Raises:
  • IncompatibleHardwareMethodError – when there is no known way to erase the block device

  • BlockDeviceEraseError – when there is an error erasing the block device
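
Following the MagicStore example above, a hedged sketch of the override-and-delegate pattern; the vendor string and erase tool are hypothetical:

import subprocess

from ironic_python_agent import hardware


class AcmeHardwareManager(hardware.GenericHardwareManager):
    def erase_block_device(self, node, block_device):
        if block_device.vendor == 'ACME':
            # Hypothetical proprietary erase tool for ACME devices.
            subprocess.run(['acme-erase', '--secure', block_device.name],
                           check=True)
        else:
            # Delegate everything else to the generic mechanisms
            # (ATA secure erase, multi-pass writes).
            super().erase_block_device(node, block_device)
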
erase_devices_express(node, ports)[source]

Attempt to perform time-optimised disk erasure: for NVMe devices, perform NVMe Secure Erase if supported; for other devices, perform metadata erasure.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Raises:
  • BlockDeviceEraseError – when there's an error erasing the block device

  • ProtectedDeviceFound – if a device is identified that may require manual intervention, since its contents carry operational risk and could be a sign of an environmental misconfiguration

erase_devices_metadata(node, ports)[source]

Attempt to erase the disk devices metadata.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Raises:
  • BlockDeviceEraseError – when there's an error erasing the block device

  • ProtectedDeviceFound – if a device is identified that may require manual intervention, since its contents carry operational risk and could be a sign of an environmental misconfiguration

erase_pstore(node, ports)[source]

Attempt to erase the kernel pstore.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

evaluate_hardware_support()[source]
generate_tls_certificate(ip_address)[source]

Generate a TLS certificate for the IP address.

get_bios_given_nic_name(interface_name)[source]

Collect the BIOS-given name of a NIC.

This function uses the biosdevname utility to collect the BIOS given name of network interfaces.

The collected data is added to the network interface inventory with an extra field named biosdevname.

Parameters:

interface_name – name of the node's network interface.

Returns:

the BIOS-given name of the NIC, or None by default.

get_bmc_address()[source]

Attempt to detect BMC IP address

Returns:

IP address of a LAN channel, or 0.0.0.0 if none is configured properly

get_bmc_mac()[source]

Attempt to detect BMC MAC address

Returns:

MAC address of the first LAN channel, or 00:00:00:00:00:00 if none has one or none is configured properly

get_bmc_v6address()[source]

Attempt to detect BMC v6 address

Returns:

IPv6 address of a LAN channel, or ::/0 if none is configured properly. May return None if the system tools cannot be used or a critical error occurs.

get_boot_info()[source]
get_clean_steps(node, ports)[source]

Get a list of clean steps with priority.

Returns a list of steps. Each step is represented by a dict:

{
 'interface': the name of the driver interface that should execute
              the step.
 'step': the HardwareManager function to call.
 'priority': the order steps will be run in. Ironic will sort all
             the clean steps from all the drivers, with the largest
             priority step being run first. If priority is set to 0,
             the step will not be run during cleaning, but may be
             run during zapping.
 'reboot_requested': Whether the agent should request Ironic reboots
                     the node via the power driver after the
                     operation completes.
 'abortable': Boolean value. Whether the clean step can be
              stopped by the operator or not. Some clean steps may
              cause non-reversible damage to a machine if interrupted
              (e.g. firmware update); for such steps this parameter
              should be set to False. If no value is set for this
              parameter, Ironic will consider False (non-abortable).
}

If multiple hardware managers return the same step name, the following logic will be used to determine which manager’s step “wins”:

  • Keep the step that belongs to HardwareManager with highest HardwareSupport (larger int) value.

  • If equal support level, keep the step with the higher defined priority (larger int).

  • If equal support level and priority, keep the step associated with the HardwareManager whose name comes earlier in the alphabet.

The steps will be called using hardware.dispatch_to_managers and handled by the best suited hardware manager. If you need a step to be executed by only your hardware manager, ensure it has a unique step name.

node and ports can be used by other hardware managers to further determine if a clean step is supported for the node.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Returns:

a list of cleaning steps, where each step is described as a dict as defined above
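
As an illustration, a custom manager could extend the generic steps with one of its own; the step name and priority below are assumptions:

from ironic_python_agent import hardware


class ExampleCleanManager(hardware.GenericHardwareManager):
    def get_clean_steps(self, node, ports):
        steps = super().get_clean_steps(node, ports)
        steps.append({
            'interface': 'deploy',
            'step': 'example_sanitize_firmware',  # must name a method here
            'priority': 0,         # 0: not run during automated cleaning
            'reboot_requested': False,
            'abortable': False,    # firmware work must not be interrupted
        })
        return steps

    def example_sanitize_firmware(self, node, ports):
        # Hypothetical step body.
        pass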

get_cpus()[source]
get_deploy_steps(node, ports)[source]

Get a list of deploy steps with priority.

Returns a list of steps. Each step is represented by a dict:

{
 'interface': the name of the driver interface that should execute
              the step.
 'step': the HardwareManager function to call.
 'priority': the order steps will be run in. Ironic will sort all
             the deploy steps from all the drivers, with the largest
             priority step being run first. If priority is set to 0,
             the step will not be run during deployment
             automatically, but may be requested via deploy
             templates.
 'reboot_requested': Whether the agent should request Ironic reboots
                     the node via the power driver after the
                     operation completes.
 'argsinfo': arguments specification.
}

If multiple hardware managers return the same step name, the following logic will be used to determine which manager’s step “wins”:

  • Keep the step that belongs to HardwareManager with highest HardwareSupport (larger int) value.

  • If equal support level, keep the step with the higher defined priority (larger int).

  • If equal support level and priority, keep the step associated with the HardwareManager whose name comes earlier in the alphabet.

The steps will be called using hardware.dispatch_to_managers and handled by the best suited hardware manager. If you need a step to be executed by only your hardware manager, ensure it has a unique step name.

node and ports can be used by other hardware managers to further determine if a deploy step is supported for the node.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Returns:

a list of deploy steps, where each step is described as a dict as defined above
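
The argsinfo field describes the step's arguments; a hedged illustration of a single deploy step dict (the step name and argument are invented):

# One entry from the list this method returns (illustrative values):
example_deploy_step = {
    'interface': 'deploy',
    'step': 'example_write_banner',  # hypothetical HardwareManager method
    'priority': 0,                   # 0: only runs via a deploy template
    'reboot_requested': False,
    'argsinfo': {
        'banner_text': {
            'description': 'Text written to a banner file on the image.',
            'required': True,
        },
    },
}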

get_interface_info(interface_name)[source]
get_ipv4_addr(interface_id)[source]
get_ipv6_addr(interface_id)[source]

Get the default IPv6 address assigned to the interface.

Depending on the networking environment, the address could be a link-local address, a ULA, or something else.

get_memory()[source]
get_os_install_device(permit_refresh=False)[source]
get_service_steps(node, ports)[source]

Get a list of service steps.

Returns a list of steps. Each step is represented by a dict:

{
 'interface': the name of the driver interface that should execute
              the step.
 'step': the HardwareManager function to call.
 'priority': the order steps would be run in if executed
             automatically, as with cleaning or deployment.
             For service steps the order actually comes from the
             user request; the field is kept for consistency should
             the capability be extended in the future.
 'reboot_requested': Whether the agent should request Ironic reboots
                     the node via the power driver after the
                     operation completes.
 'abortable': Boolean value. Whether the service step can be
              stopped by the operator or not. Some steps may
              cause non-reversible damage to a machine if interrupted
              (e.g. firmware update); for such steps this parameter
              should be set to False. If no value is set for this
              parameter, Ironic will consider False (non-abortable).
}

If multiple hardware managers return the same step name, the following logic will be used to determine which manager’s step “wins”:

  • Keep the step that belongs to HardwareManager with highest HardwareSupport (larger int) value.

  • If equal support level, keep the step with the higher defined priority (larger int).

  • If equal support level and priority, keep the step associated with the HardwareManager whose name comes earlier in the alphabet.

The steps will be called using hardware.dispatch_to_managers and handled by the best suited hardware manager. If you need a step to be executed by only your hardware manager, ensure it has a unique step name.

node and ports can be used by other hardware managers to further determine if a step is supported for the node.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Returns:

a list of service steps, where each step is described as a dict as defined above

get_skip_list_from_node(node, block_devices=None, just_raids=False)[source]

Get the skip list of block devices from the node

Parameters:
  • block_devices – a list of BlockDevices

  • just_raids – a boolean to signify that only RAID devices are important

Returns:

A set of names of devices on the skip list
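
For reference, the skip list is read from the node's properties; a hedged sketch of the expected shape (the hints shown are examples):

# Hypothetical node fragment: each entry holds device hints, and block
# devices matching an entry are excluded from erasure and deployment.
node = {
    'properties': {
        'skip_block_devices': [
            {'name': '/dev/vda'},
            {'vendor': 'ExampleVendor'},
        ]
    }
}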

get_system_vendor_info()[source]
get_usb_devices()[source]

Collect USB devices

List all final USB devices, based on lshw information

Returns:

a dict, containing product, vendor, and handle information

inject_files(node, ports, files=None, verify_ca=True)[source]

A deploy step to inject arbitrary files.

Parameters:
  • node – A dictionary of the node object

  • ports – A list of dictionaries containing information of ports for the node (unused)

  • files – See inject_files

  • verify_ca – Whether to verify TLS certificate.

list_block_devices(include_partitions=False)[source]

List physical block devices

Parameters:

include_partitions – Whether to include partitions

Returns:

A list of BlockDevices

list_block_devices_check_skip_list(node, include_partitions=False)[source]

List physical block devices, excluding those listed in the node's properties/skip_block_devices.

Parameters:
  • node – A node used to check the skip list

  • include_partitions – Whether to include partitions

Returns:

A list of BlockDevices

list_hardware_info()[source]

Return full hardware inventory as a serializable dict.

This inventory is sent to Ironic on lookup and to Inspector on inspection.

Returns:

a dictionary representing inventory

list_network_interfaces()[source]
validate_configuration(raid_config, node)[source]

Validate a (software) RAID configuration

Validate a given raid_config, in particular with respect to the limitations of the current implementation of software RAID support.

Parameters:

raid_config – The current RAID configuration in the usual format.

write_image(node, ports, image_info, configdrive=None)[source]

A deploy step to write an image.

Downloads and writes an image to disk if necessary. Also writes a configdrive to disk if the configdrive parameter is specified.

Parameters:
  • node – A dictionary of the node object

  • ports – A list of dictionaries containing information of ports for the node

  • image_info – Image information dictionary.

  • configdrive – A string containing the location of the config drive as a URL OR the contents (as gzip/base64) of the configdrive. Optional, defaults to None.

class ironic_python_agent.hardware.HardwareManager[source]

Bases: object

collect_lldp_data(interface_names=None)[source]
collect_system_logs(io_dict, file_list)[source]

Collect logs from the system.

Implementations should update io_dict and file_list with logs to send to Ironic and Inspector.

Parameters:
  • io_dict – Dictionary mapping file names to binary IO objects with corresponding data.

  • file_list – List of full file paths to include.

erase_block_device(node, block_device)[source]

Attempt to erase a block device.

Implementations should detect the type of device and erase it in the most appropriate way possible. Generic implementations should support common erase mechanisms such as ATA secure erase, or multi-pass random writes. Operators with more specific needs should override this method in order to detect and handle “interesting” cases, or delegate to the parent class to handle generic cases.

For example: operators running ACME MagicStore (TM) cards alongside standard SSDs might check whether the device is a MagicStore and use a proprietary tool to erase that, otherwise call this method on their parent class. Upstream submissions of common functionality are encouraged.

This interface may be called concurrently to speed up erasure; as such, it should be implemented in a thread-safe way.

Parameters:
  • node – Ironic node object

  • block_device – a BlockDevice indicating a device to be erased.

Raises:
  • IncompatibleHardwareMethodError – when there is no known way to erase the block device

  • BlockDeviceEraseError – when there is an error erasing the block device

erase_devices(node, ports)[source]

Erase any device that holds user data.

By default this will attempt to erase block devices. This method can be overridden in an implementation-specific hardware manager in order to erase additional hardware, although backwards-compatible upstream submissions are encouraged.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Raises:

ProtectedDeviceFound – if a device is identified that may require manual intervention, since its contents carry operational risk and could be a sign of an environmental misconfiguration.

Returns:

a dictionary in the form {device.name: erasure output}

abstract evaluate_hardware_support()[source]
generate_tls_certificate(ip_address)[source]
get_bmc_address()[source]
get_bmc_mac()[source]
get_bmc_v6address()[source]
get_boot_info()[source]
get_clean_steps(node, ports)[source]

Get a list of clean steps with priority.

Returns a list of steps. Each step is represented by a dict:

{
 'interface': the name of the driver interface that should execute
              the step.
 'step': the HardwareManager function to call.
 'priority': the order steps will be run in. Ironic will sort all
             the clean steps from all the drivers, with the largest
             priority step being run first. If priority is set to 0,
             the step will not be run during cleaning, but may be
             run during zapping.
 'reboot_requested': Whether the agent should request Ironic reboots
                     the node via the power driver after the
                     operation completes.
 'abortable': Boolean value. Whether the clean step can be
              stopped by the operator or not. Some clean steps may
              cause non-reversible damage to a machine if interrupted
              (e.g. firmware update); for such steps this parameter
              should be set to False. If no value is set for this
              parameter, Ironic will consider False (non-abortable).
}

If multiple hardware managers return the same step name, the following logic will be used to determine which manager’s step “wins”:

  • Keep the step that belongs to HardwareManager with highest HardwareSupport (larger int) value.

  • If equal support level, keep the step with the higher defined priority (larger int).

  • If equal support level and priority, keep the step associated with the HardwareManager whose name comes earlier in the alphabet.

The steps will be called using hardware.dispatch_to_managers and handled by the best suited hardware manager. If you need a step to be executed by only your hardware manager, ensure it has a unique step name.

node and ports can be used by other hardware managers to further determine if a clean step is supported for the node.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Returns:

a list of cleaning steps, where each step is described as a dict as defined above

get_cpus()[source]
get_deploy_steps(node, ports)[source]

Get a list of deploy steps with priority.

Returns a list of steps. Each step is represented by a dict:

{
 'interface': the name of the driver interface that should execute
              the step.
 'step': the HardwareManager function to call.
 'priority': the order steps will be run in. Ironic will sort all
             the deploy steps from all the drivers, with the largest
             priority step being run first. If priority is set to 0,
             the step will not be run during deployment
             automatically, but may be requested via deploy
             templates.
 'reboot_requested': Whether the agent should request Ironic reboots
                     the node via the power driver after the
                     operation completes.
 'argsinfo': arguments specification.
}

If multiple hardware managers return the same step name, the following logic will be used to determine which manager’s step “wins”:

  • Keep the step that belongs to HardwareManager with highest HardwareSupport (larger int) value.

  • If equal support level, keep the step with the higher defined priority (larger int).

  • If equal support level and priority, keep the step associated with the HardwareManager whose name comes earlier in the alphabet.

The steps will be called using hardware.dispatch_to_managers and handled by the best suited hardware manager. If you need a step to be executed by only your hardware manager, ensure it has a unique step name.

node and ports can be used by other hardware managers to further determine if a deploy step is supported for the node.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Returns:

a list of deploy steps, where each step is described as a dict as defined above

get_interface_info(interface_name)[source]
get_memory()[source]
get_os_install_device(permit_refresh=False)[source]
get_service_steps(node, ports)[source]

Get a list of service steps.

Returns a list of steps. Each step is represented by a dict:

{
 'interface': the name of the driver interface that should execute
              the step.
 'step': the HardwareManager function to call.
 'priority': the order steps would be run in if executed
             automatically, as with cleaning or deployment.
             For service steps the order actually comes from the
             user request; the field is kept for consistency should
             the capability be extended in the future.
 'reboot_requested': Whether the agent should request Ironic reboots
                     the node via the power driver after the
                     operation completes.
 'abortable': Boolean value. Whether the service step can be
              stopped by the operator or not. Some steps may
              cause non-reversible damage to a machine if interrupted
              (e.g. firmware update); for such steps this parameter
              should be set to False. If no value is set for this
              parameter, Ironic will consider False (non-abortable).
}

If multiple hardware managers return the same step name, the following logic will be used to determine which manager’s step “wins”:

  • Keep the step that belongs to HardwareManager with highest HardwareSupport (larger int) value.

  • If equal support level, keep the step with the higher defined priority (larger int).

  • If equal support level and priority, keep the step associated with the HardwareManager whose name comes earlier in the alphabet.

The steps will be called using hardware.dispatch_to_managers and handled by the best suited hardware manager. If you need a step to be executed by only your hardware manager, ensure it has a unique step name.

node and ports can be used by other hardware managers to further determine if a step is supported for the node.

Parameters:
  • node – Ironic node object

  • ports – list of Ironic port objects

Returns:

a list of service steps, where each step is described as a dict as defined above

get_skip_list_from_node(node, block_devices=None, just_raids=False)[source]

Get the skip list of block devices from the node

Parameters:
  • block_devices – a list of BlockDevices

  • just_raids – a boolean to signify that only RAID devices are important

Returns:

A set of names of devices on the skip list

get_usb_devices()[source]

Collect USB devices

List all final USB devices, based on lshw information

Returns:

a dict, containing product, vendor, and handle information

get_version()[source]

Get a name and version for this hardware manager.

In order to avoid errors and make agent upgrades painless, cleaning will check the version of all hardware managers during get_clean_steps at the beginning of cleaning and before executing each step in the agent.

The agent isn’t aware of the steps being taken before or after via out of band steps, so it can never know if a new step is safe to run. Therefore, we default to restarting the whole process.

Returns:

a dictionary with two keys: name and version, where name is a string identifying the hardware manager and version is an arbitrary version string. name will be a class variable called HARDWARE_MANAGER_NAME, or default to the class name and version will be a class variable called HARDWARE_MANAGER_VERSION or default to ‘1.0’.
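
In practice a custom manager only needs to set the two class attributes; a minimal sketch (the names are illustrative):

from ironic_python_agent import hardware


class ExampleManager(hardware.HardwareManager):
    # Reported by get_version(); bump the version whenever step behaviour
    # changes, so that in-flight cleaning restarts rather than resumes.
    HARDWARE_MANAGER_NAME = 'example_hardware_manager'
    HARDWARE_MANAGER_VERSION = '1.1'

    def evaluate_hardware_support(self):
        return hardware.HardwareSupport.SERVICE_PROVIDER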

list_block_devices(include_partitions=False)[source]

List physical block devices

Parameters:

include_partitions – Whether to include partitions

Returns:

A list of BlockDevices

list_block_devices_check_skip_list(node, include_partitions=False)[source]

List physical block devices, excluding those listed in the node's properties/skip_block_devices.

Parameters:
  • node – A node used to check the skip list

  • include_partitions – Whether to include partitions

Returns:

A list of BlockDevices

list_hardware_info()[source]

Return full hardware inventory as a serializable dict.

This inventory is sent to Ironic on lookup and to Inspector on inspection.

Returns:

a dictionary representing inventory

list_network_interfaces()[source]
wait_for_disks()[source]

Wait for the root disk to appear.

Wait for at least one suitable disk to show up, or for a specific disk if a device hint is specified. Otherwise neither inspection nor deployment has any chance of succeeding.

class ironic_python_agent.hardware.HardwareSupport[source]

Bases: object

Example priorities for hardware managers.

Priorities for HardwareManagers are integers, where larger means more specific and smaller means more generic. These values are guidelines for what calls to evaluate_hardware_support() might return. No HardwareManager in mainline IPA will ever return a value greater than MAINLINE. Third-party hardware managers should feel free to return SERVICE_PROVIDER or greater to distinguish between additional levels of hardware support.

GENERIC = 1
MAINLINE = 2
NONE = 0
SERVICE_PROVIDER = 3
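
A hedged sketch of how a third-party manager might choose among these levels; the device probe is a placeholder:

import os

from ironic_python_agent import hardware


class AcmeDetectingManager(hardware.HardwareManager):
    def evaluate_hardware_support(self):
        # Claim the node only when the vendor hardware is present;
        # otherwise step aside so more generic managers handle it.
        if os.path.exists('/dev/acme0'):  # hypothetical device node
            return hardware.HardwareSupport.SERVICE_PROVIDER
        return hardware.HardwareSupport.NONE
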
class ironic_python_agent.hardware.HardwareType[source]

Bases: object

MAC_ADDRESS = 'mac_address'
class ironic_python_agent.hardware.Memory(total, physical_mb=None)[source]

Bases: SerializableComparable

serializable_fields = ('total', 'physical_mb')
class ironic_python_agent.hardware.NetworkInterface(name, mac_addr, ipv4_address=None, ipv6_address=None, has_carrier=True, lldp=None, vendor=None, product=None, client_id=None, biosdevname=None, speed_mbps=None)[source]

Bases: SerializableComparable

serializable_fields = ('name', 'mac_address', 'ipv4_address', 'ipv6_address', 'has_carrier', 'lldp', 'vendor', 'product', 'client_id', 'biosdevname', 'speed_mbps')
class ironic_python_agent.hardware.SystemFirmware(vendor, version, build_date)[source]

Bases: SerializableComparable

serializable_fields = ('vendor', 'version', 'build_date')
class ironic_python_agent.hardware.SystemVendorInfo(product_name, serial_number, manufacturer, firmware)[source]

Bases: SerializableComparable

serializable_fields = ('product_name', 'serial_number', 'manufacturer', 'firmware')
class ironic_python_agent.hardware.USBInfo(product, vendor, handle)[source]

Bases: SerializableComparable

serializable_fields = ('product', 'vendor', 'handle')
ironic_python_agent.hardware.cache_node(node)[source]

Store the node object in the hardware module.

Stores the node object in the hardware module to facilitate access to node information from the hardware extensions.

If the new node does not match the previously cached one, wait for the expected root device to appear.

Parameters:

node – Ironic node object

ironic_python_agent.hardware.check_versions(provided_version=None)[source]

Ensure the version of hardware managers hasn’t changed.

Parameters:

provided_version – Hardware manager versions used by ironic.

Raises:

errors.VersionMismatch if any hardware manager version on the currently running agent doesn’t match the one stored in provided_version.

Returns:

None

ironic_python_agent.hardware.deduplicate_steps(candidate_steps)[source]

Remove duplicated clean or deploy steps

Deduplicates steps returned from HardwareManagers to prevent running a given step more than once. Aside from individual step priority, it does not actually matter to the deployment which specific steps are kept or which HardwareManager they are associated with. However, to make testing easier, this method returns deterministic results.

Uses the following filtering logic to decide which step “wins”:

  • Keep the step that belongs to HardwareManager with highest HardwareSupport (larger int) value.

  • If equal support level, keep the step with the higher defined priority (larger int).

  • If equal support level and priority, keep the step associated with the HardwareManager whose name comes earlier in the alphabet.

Parameters:

candidate_steps – A dict containing all possible steps from all managers, key=manager, value=list of steps

Returns:

A deduplicated dictionary of {hardware_manager: [steps]}

ironic_python_agent.hardware.dispatch_to_all_managers(method, *args, **kwargs)[source]

Dispatch a method to all hardware managers.

Dispatches the given method in priority order as sorted by get_managers. If the method doesn’t exist or raises IncompatibleHardwareMethodError, it continues to the next hardware manager. All managers that have hardware support for this node will be called, and their responses will be added to a dictionary of the form {HardwareManagerClassName: response}.

Parameters:
  • method – hardware manager method to dispatch

  • args – arguments to dispatched method

  • kwargs – keyword arguments to dispatched method

Raises:

errors.HardwareManagerMethodNotFound – if all managers raise IncompatibleHardwareMethodError.

Returns:

a dictionary with keys for each hardware manager that returns a response and the value as a list of results from that hardware manager.
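
A hedged usage sketch, assuming a running agent environment with managers loaded:

from ironic_python_agent import hardware

# Ask every supporting manager for its version; the result maps each
# hardware manager class name to that manager's response.
versions = hardware.dispatch_to_all_managers('get_version')
for manager_name, response in versions.items():
    print(manager_name, response)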

ironic_python_agent.hardware.dispatch_to_managers(method, *args, **kwargs)[source]

Dispatch a method to best suited hardware manager.

Dispatches the given method in priority order as sorted by get_managers. If the method doesn’t exist or raises IncompatibleHardwareMethodError, it is attempted again with a more generic hardware manager. This continues until a method executes that returns any result without raising an IncompatibleHardwareMethodError.

Parameters:
  • method – hardware manager method to dispatch

  • args – arguments to dispatched method

  • kwargs – keyword arguments to dispatched method

Returns:

result of successful dispatch of method

Raises:

errors.HardwareManagerMethodNotFound – if all managers raise IncompatibleHardwareMethodError.
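
A hedged usage sketch; the most specific manager that implements the method answers:

from ironic_python_agent import hardware

# Each call walks the managers in priority order until one handles the
# method without raising IncompatibleHardwareMethodError.
interfaces = hardware.dispatch_to_managers('list_network_interfaces')
memory = hardware.dispatch_to_managers('get_memory')
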
ironic_python_agent.hardware.get_cached_node()[source]

Guard function around the module variable NODE.

ironic_python_agent.hardware.get_component_devices(raid_device)[source]

Get the component devices of a Software RAID device.

Get the UUID of the md device and scan all other devices for the same md UUID.

Parameters:

raid_device – A Software RAID block device name.

Returns:

A list of the component devices.

ironic_python_agent.hardware.get_current_versions()[source]

Fetches versions from all hardware managers.

Returns:

Dict in the format {name: version} containing one entry for every hardware manager.

ironic_python_agent.hardware.get_holder_disks(raid_device)[source]

Get the holder disks of a Software RAID device.

Examine an md device and return its underlying disks.

Parameters:

raid_device – A Software RAID block device name.

Returns:

A list of the holder disks.

ironic_python_agent.hardware.get_managers()[source]

Get a list of hardware managers in priority order.

Use stevedore to find all eligible hardware managers, sort them based on self-reported (via evaluate_hardware_support()) priorities, and return them in a list. The resulting list is cached in _global_managers.

Returns:

Priority-sorted list of hardware managers

Raises:

HardwareManagerNotFound – if no valid hardware managers found

ironic_python_agent.hardware.get_multipath_status()[source]

Return the status of multipath initialization.

ironic_python_agent.hardware.is_md_device(raid_device)[source]

Check if a device is a Software RAID (md) device.

Parameters:

raid_device – A Software RAID block device name.

Returns:

True if the device is an md device, False otherwise.
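
A hedged sketch combining the md helpers above; the device name is an assumption:

from ironic_python_agent import hardware

device = '/dev/md0'  # hypothetical software RAID device
if hardware.is_md_device(device):
    components = hardware.get_component_devices(device)  # md members
    holders = hardware.get_holder_disks(device)          # underlying disks
    hardware.md_restart(device)  # stop and re-assemble the array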

ironic_python_agent.hardware.list_all_block_devices(block_type='disk', ignore_raid=False, ignore_floppy=True, ignore_empty=True, ignore_multipath=False)[source]

List all physical block devices

The switches used for lsblk: -P for KEY="value" output, -b for size output in bytes, -i to ensure ASCII characters only, and -o to specify the fields/columns we need.

Broken out as its own function to facilitate custom hardware managers that don’t need to subclass GenericHardwareManager.

Parameters:
  • block_type – Type of block device to find

  • ignore_raid – Ignore auto-identified RAID devices, for example md0. Defaults to False, as these are generally disk devices and should be treated as such if encountered.

  • ignore_floppy – Ignore floppy disk devices in the block device list. By default, these devices are filtered out.

  • ignore_empty – Whether to ignore disks with a size of 0.

  • ignore_multipath – Whether to ignore devices backing multipath devices. Default is to consider multipath devices, if possible.

Returns:

A list of BlockDevices
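
A hedged usage sketch of the default call and a more selective one; the 'part' block type mirrors lsblk's TYPE values and is an assumption here:

from ironic_python_agent import hardware

# All physical disks, with floppies and zero-sized disks filtered out
# (the defaults).
disks = hardware.list_all_block_devices()

# Partitions instead of whole disks, skipping auto-identified md devices.
partitions = hardware.list_all_block_devices(block_type='part',
                                             ignore_raid=True)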

ironic_python_agent.hardware.list_hardware_info(use_cache=True)[source]

List hardware information with caching.

ironic_python_agent.hardware.md_get_raid_devices()[source]

Get all discovered Software RAID (md) devices

Returns:

A python dict containing details about the discovered RAID devices

ironic_python_agent.hardware.md_restart(raid_device)[source]

Restart an md device

Stop and re-assemble a Software RAID (md) device.

Parameters:

raid_device – A Software RAID block device name.

Raises:

CommandExecutionError in case the restart fails.

ironic_python_agent.hardware.safety_check_block_device(node, device)[source]

Performs safety checking of a block device before destroying.

In order to guard against the destruction of file systems such as shared-disk file systems (https://en.wikipedia.org/wiki/Clustered_file_system#SHARED-DISK), where multiple distinct computers may have unlocked, concurrent IO access to the entire block device or SAN Logical Unit Number, we need to evaluate the device and block cleaning from occurring on such filesystems unless explicitly configured to do so.

This is because cleaning is an intentionally destructive operation. Once started against such a device, given the complexities of shared-disk clustered filesystems where concurrent access is a design element, the entire cluster can in all likelihood be negatively impacted, and an operator will be forced to recover from snapshots or backups of the volume's contents.

Parameters:
  • node – A node, or cached node object.

  • device – String representing the path to the block device to be checked.

Raises:

ProtectedDeviceError – when a device is identified as containing one of these known clustered filesystems and the agent's settings do not indicate that such safety checks should be skipped.
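
A hedged usage sketch; the device path is an assumption, and the node is expected to have been cached already:

from ironic_python_agent import errors
from ironic_python_agent import hardware

node = hardware.get_cached_node()  # assumes a prior lookup cached it
try:
    hardware.safety_check_block_device(node, '/dev/sda')
except errors.ProtectedDeviceError:
    # A known clustered/shared-disk filesystem was detected and safety
    # checks are enabled; do not destroy this device.
    raise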

ironic_python_agent.hardware.save_api_client(client=None, timeout=None, interval=None)[source]

Preserves access to the API client for potential later reuse.

ironic_python_agent.hardware.update_cached_node()[source]

Attempts to update the node cache via the API.