Zed Series Release Notes

9.1.0

New Features

  • Software RAID devices are built with the --name option followed by the volume name if one is defined in the target raid config, and an internal ID otherwise.
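
    For reference, the volume name is supplied per logical disk in the target raid config. A minimal sketch (the size, RAID level, and name values here are illustrative):

    ```json
    {
      "logical_disks": [
        {
          "size_gb": 100,
          "raid_level": "1",
          "controller": "software",
          "volume_name": "root-volume"
        }
      ]
    }
    ```

    With a config like this, the agent would pass --name=root-volume when creating the array; if volume_name is omitted, an internal ID is used instead.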

  • The node property skip_block_devices supports specifying volume names of software RAID devices. Such devices are not cleaned during cleaning, and are not re-created if they already exist.
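
    As a sketch, a volume name hint follows the same list-of-hints format as the existing property (the name here is illustrative):

    ```json
    [{"volume_name": "root-volume"}]
    ```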

Bug Fixes

  • Fixes handling of Software RAID device discovery so that RAID device Name and Events field values do not inadvertently cause the discovery command to return unexpected output. Previously this could cause a deployment to fail when handling UEFI partitions.

9.0.0

New Features

  • Users can specify, in the property field skip_block_devices, a list of devices that are to be skipped during the cleaning process and not chosen as the root device.
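
    The property value is a list of dictionaries of device hints; a minimal sketch (the hint key and serial value here are illustrative) that would exclude one disk:

    ```json
    [{"serial": "1234abcd"}]
    ```

    Such a value can be set on a node with openstack baremetal node set and its --property option.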

Known Issues

  • Logic to guard VMFS filesystems from being destroyed may not recognize VMFS extents. Operators with examples of partitioning for extent usage are encouraged to contact the Ironic community.

Upgrade Notes

  • No longer supports network boot of instances (boot_option=netboot). This feature is dropped from Ironic in the Zed cycle.

Bug Fixes

  • Fixes an issue where arguments were not passed to clean steps when a manual cleaning operation was being executed. The arguments are now passed in appropriately.

  • Fixes GenericHardwareManager to find network information for bonded interfaces if they exist.

  • Fixes failures with handling of Multipath IO devices where Active/Passive storage arrays are in use. Previously, “standby” paths could result in IO errors that caused cleaning to terminate. The agent now explicitly attempts to handle and account for multipaths based upon the available MPIO data. This requires the multipath and multipathd utilities to be present in the ramdisk; they are supplied by the device-mapper-multipath or multipath-tools packages, and are not otherwise required for the agent’s use.

  • Fixes non-ideal behavior when performing cleaning where Active/Active MPIO devices would ultimately be cleaned once per IO path, instead of once per backend device.

  • Fixes discovering WWN/serial numbers for devicemapper devices.

  • Previously, when the ironic-python-agent performed erasure of block devices during cleaning, it would automatically attempt to erase the contents of any “Shared Device” clustered filesystems which may be in use by multiple distinct machines over a storage fabric. IBM GPFS, Red Hat Global File System 2, and VMware Virtual Machine File System (VMFS) filesystems are now identified, and cleaning is halted when one is found. This is important because, should an active cluster be using the disk, cleaning could potentially cause the cluster to go down, forcing restoration from backups. Ideally, infrastructure operators should check their environment’s storage configuration and un-map any clustered filesystems from being visible to Ironic nodes, unless explicitly needed and expected. Please see the Ironic-Python-Agent troubleshooting documentation for more information.

Other Notes

  • Block device properties reported by udev are now collected with the ramdisk logs.

  • The ramdisk logs now contain lsblk output with all key/value pairs in the new lsblk-full file.

  • The agent will now attempt to collect any multipath path information and include it with the ramdisk logs, if the tooling is present.

8.6.0

Known Issues

  • Creating a configdrive partition on a devicemapper device (e.g. a multipath storage device) with MBR partitioning may fail with the following error:

    Command execution failed: Failed to create config drive on disk /dev/dm-0
    for node 168af30d-0fad-4d67-af99-b28b3238e977. Error: Unexpected error
    while running command.
    

    Use GPT partitioning instead.

Bug Fixes

  • Fixes creating a configdrive partition on a devicemapper device (e.g. a multipath storage device) with GPT partitioning. The newly created partition is now detected by a pre-generated UUID rather than by comparing partition numbers.

  • Fixes configuring UEFI boot when the EFI partition is located on a devicemapper device.