Rocky Series Release Notes

9.4.1-97

New Features

  • Adds support for IGMP snooping (Multicast) in the Neutron ML2/OVS driver.

  • Added the configuration option to set reserved_huge_pages. When NovaReservedHugePages is set, “reserved_huge_pages” is set to the value of NovaReservedHugePages. If NovaReservedHugePages is unset and OvsDpdkSocketMemory is set, the reserved_huge_pages value is calculated from KernelArgs and OvsDpdkSocketMemory. KernelArgs helps determine the default huge page size used (the default is 2048 KB), and OvsDpdkSocketMemory helps determine the number of huge pages to reserve. See the illustration below.
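
    As an illustration only (the node, size and count values below are placeholders, not recommendations), an environment snippet might look like:

    parameter_defaults:
      NovaReservedHugePages:
        - "node:0,size:2048,count:64"
        - "node:1,size:2048,count:64"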

  • Added the “connection_logging” parameter for the Octavia service.

  • Added the Octavia anti-affinity parameters.

  • Three new parameter options have been added to the Octavia service: OctaviaConnectionMaxRetries, OctaviaBuildActiveRetries and OctaviaPortDetachTimeout.

  • deep_compare is now enabled by default for stonith resources, allowing their properties to be updated via stack update. To disable it set ‘tripleo::fencing::deep_compare: false’.

  • Added NeutronPermittedEthertypes to allow configuring additional ethertypes on neutron security groups for L2 agents that support it.
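
    For example, assuming the parameter accepts a list of ethertype values (the values below are illustrative only):

    parameter_defaults:
      NeutronPermittedEthertypes:
        - '0x8906'
        - '0x88cc'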

  • Added a new heat parameter OVNOpenflowProbeInterval to set ovn_openflow_probe_interval, which is the inactivity probe interval of the OpenFlow connection to the OpenvSwitch integration bridge, in seconds. If the value is zero, the connection keepalive feature is disabled; by default this value is set to 60s. If the value is nonzero, then it will be forced to a value of at least 5s.

  • HA services use a special container image name derived from the one configured in Heat parameter plus a fixed tag part, i.e. ‘<registry>/<namespace>/<servicename>:pcmklatest’. To implement rolling update without service disruption, this ‘pcmklatest’ tag is adjusted automatically during minor update every time a new image is pulled. A new Heat parameter ClusterCommonTag can now control the prefix part of the container image name. When set to true, the container name for HA services will look like ‘container-common-tag/<servicename>:pcmklatest’. This allows rolling update of HA services even when the <namespace> changes in Heat.

  • Introduce a PacemakerTLSPriorities parameter, which will set the PCMK_tls_priorities config option in /etc/sysconfig/pacemaker and the PCMK_tls_priorities variable inside the bundle. This, when set, allows an operator to specify what kind of GNUTLS ciphers are desired for the pacemaker control port.

  • Under pressure, the default monitor timeout value of 20 seconds is not enough to prevent unnecessary failovers of the ovn-dbs pacemaker resource. When spawning a few VMs at the same time, this could lead to unnecessary movements of the master DB, re-connections of ovn-controllers (slaves are read-only), further peaks of load on the DBs, and in the end a snowball effect. This value is now configurable via OVNDBSPacemakerTimeout, which configures tripleo::profile::pacemaker::ovn_dbs_bundle (the default is set to 60s).

  • OVS and neutron now support endpoint creation on IPv6 networks. New network--v6-all.j2.yaml environment files are added to allow tenant networks to be created with IPv6 addresses. Note that these files are only to be used for new deployments and not during update or upgrade. The network_data.yaml files are also edited to reflect the same.

Upgrade Notes

  • The CIDR for the StorageNFS network in the sample network_data_ganesha.yaml file has been modified to provide more usable IPs for the corresponding Neutron overcloud StorageNFS provider network. Since the CIDR of an existing network cannot be modified, deployments with existing StorageNFS networks should be sure to customize the StorageNFS network definition to use the same CIDR as that in their existing deployment in order to avoid a heat resource failure when updating or upgrading the overcloud.

Bug Fixes

  • When deploying a spine-and-leaf (L3 routed architecture) with TLS enabled for internal endpoints, the deployment would fail because some roles are not connected to the network mapped to the service in ServiceNetMap. To fix this issue a role specific parameter {{role.name}}ServiceNetMap is introduced (defaults to: {}). The role specific ServiceNetMap parameter allows the operator to override one or more service network mappings per-role. For example:

    ComputeLeaf2ServiceNetMap:
      NovaLibvirtNetwork: internal_api_leaf2
    

    The role specific {{role.name}}ServiceNetMap override is merged with the global ServiceNetMap when it’s passed as a value to the {{role.name}}ServiceChain resources, and the {{role.name}} resource groups so that the correct network for this role is mapped to the service.

    Closes bug: 1904482.

  • Fixed an issue where Octavia controller services were not properly configured.

  • Fixed an issue in the sample network_data_ganesha.yaml file where the IPv4 allocation range for the StorageNFS network occupied almost the whole of its CIDR. If network_data_ganesha.yaml was used without modification in a customer deployment, too few IPs were left over in its CIDR for use by the corresponding overcloud Neutron StorageNFS provider network for its overcloud DHCP service. (See bug: #1889682)

  • Fixes an issue where filtering of networks for kerberos service principals was too aggressive, causing deployment failure. See bug 1854846.

  • Restart certmonger after registering the system with IPA. This prevents certificate requests from not completing correctly when doing a brownfield update.

  • Fix Swift ring synchronization to ensure every node on the overcloud has the same copy to start with. This is especially required when replacing nodes or using manually modified rings.

Other Notes

  • Add the “port_forwarding” service plugin and L3 agent extension, enabled by default when the Neutron ML2 plugin with the OVS driver is used. A new config option “NeutronL3AgentExtensions” is also added; it allows setting the list of L3 agent extensions used by the agent. See the sketch below.
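
    A minimal sketch of overriding the new option (assuming it accepts a list of extension names; the value below mirrors the default described above):

    parameter_defaults:
      NeutronL3AgentExtensions:
        - port_forwarding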

  • Add the “radvd_user” configuration parameter to the Neutron L3 container. This parameter defines the user passed to radvd. The default value is “root”.

9.4.1

New Features

  • ContainerImageRegistryLogin has been added to indicate if login calls should be issued by the container engine on deployment. The default is set to false.

  • Values specified in ContainerImageRegistryCredentials will now be used to issue a login call when deploying the container engine on the hosts, if ContainerImageRegistryLogin is set to true. See the sketch below.
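
    A minimal sketch combining the two parameters (the registry name and credentials are placeholders, and the nested registry/user/password layout is an assumption about the expected format):

    parameter_defaults:
      ContainerImageRegistryLogin: true
      ContainerImageRegistryCredentials:
        registry.example.com:
          myuser: mypassword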

  • Created an ExtraKernelPackages parameter to allow users to install additional kernel related packages prior to loading the kernel modules defined in ExtraKernelModules. See the sketch below.
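
    For example (the package and module names are illustrative, and ExtraKernelPackages is assumed to take the same hash-of-names form as ExtraKernelModules):

    parameter_defaults:
      ExtraKernelPackages:
        kmod-example-driver: {}
      ExtraKernelModules:
        example_module: {}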

  • Add ContainerNovaLibvirtUlimit to configure Ulimit for containerized Libvirt. Defaults to nofile=131072,nproc=126960.

  • Add parameter NovaLibvirtMemStatsPeriodSeconds, which sets the libvirt/mem_stats_period_seconds parameter value, i.e. the number of seconds for the memory usage statistics period; a zero or negative value disables memory usage statistics. The default value for NovaLibvirtMemStatsPeriodSeconds is 10.

  • Adds the LibvirtLogFilters parameter to define a filter to select a different logging level for a given category of log outputs, as specified in https://libvirt.org/logging.html . Default: ‘1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util’

  • Adds LibvirtTLSPriority parameter to override the compile time default TLS priority string. Default: ‘NORMAL:-VERS-SSL3.0:-VERS-TLS-ALL:+VERS-TLS1.2’

  • Introduced two new numeric parameters OvsRevalidatorCores and OvsHandlerCores to set values of n-revalidator-threads and n-handler-threads on openvswitch.

  • The RabbitMQ management plugin (rabbitmq_management) is now enabled. By default RabbitMQ management is available on port 15672 on the localhost (127.0.0.1) interface.

Upgrade Notes

  • The new role variable update_serial is introduced, allowing parallel update execution. For the Controller role this variable defaults to 1, as pacemaker has to be taken down and up in a rolling fashion. For other roles the default value is 25, which is the default value for parallel ansible execution used by tripleo. See the sketch below.
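
    As an abridged illustration, roles_data.yaml entries might set it like this (only the relevant field is shown):

    - name: Controller
      update_serial: 1
    - name: Compute
      update_serial: 25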

Bug Fixes

  • Fixed an issue where the update and upgrade tasks for Octavia would use the removed docker module in Ansible 2.4.

  • The passphrase for config option ‘server_certs_key_passphrase’, is used as a Fernet key in Octavia and thus must be 32 bytes long. In the case of an operator-provided passphrase, TripleO will validate that.

  • Certain nova containers require more locked memory than the default limit of 16KiB. Increase the default memlock to 64MiB via DockerNovaComputeUlimit.

    As this is only a maximum limit and not a pre-allocation, this will not increase the memory requirements for all nova containers. To date the only container to require this is nova_cell_v2_discover_hosts, which is short lived.

  • Recent changes, e.g. for edge scenarios, intentionally moved discovery from the controller to the bootstrap compute node. The task is triggered by the deploy-identifier to make sure it gets run on any deploy, scale, etc. run. If the deploy run is triggered with the --skip-deploy-identifier flag, discovery will not be triggered at all, causing failures in previously supported scenarios. This change moves the host discovery task to an ansible deploy_steps_tasks so that it gets triggered even if --skip-deploy-identifier is used, or the compute bootstrap node is blacklisted.

  • Fixes an issue whereby TLS Everywhere brownfield deployments were timing out because the database entry for cell0 was not being updated. This entry is now updated in step 3.

Other Notes

  • HostPrepConfig has been removed. The resource isn’t used anymore. It relied on the old way of running Ansible via Heat, which is no longer needed now that config-download is the default in Rocky.

9.4.0

New Features

  • Added the configuration option to disable Exact Match Cache (EMC)

  • Support setting values for the cephfs_volume_mode manila parameter via the THT parameter ManilaCephFSCephVolumeMode. This controls the POSIX rwx mode of the cephfs volumes, snapshots, and groups of these that back corresponding manila resources. The default value for ManilaCephFSCephVolumeMode is ‘0755’, backwards-compatible with the mode these objects had before it was settable.

  • Add new CinderNfsSnapshotSupport parameter, which controls whether cinder’s NFS driver supports snapshots. The default value is True.

  • Add neutron-plugin-ml2-mlnx-sdn-assist as a containerized Neutron Core service template to support Mellanox SDN ml2 plugin.

  • The parameter {{role.name}}RemovalPoliciesMode can be set to ‘update’ to reset the existing blacklisted nodes in heat. This will help re-use the node indexes when required.
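
    For example, to scale down specific Compute nodes and then reset the stored blacklist (the index values are illustrative only):

    parameter_defaults:
      ComputeRemovalPolicies: [{'resource_list': ['1', '3']}]
      ComputeRemovalPoliciesMode: update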

  • Allows a deployer to specify the IdM domain with --domain on the ipa-client-install invocation by providing the IdMDomain parameter.

  • Allows a deployer to direct the ipa-client-install to skip NTP setup by specifying the IdMNoNtpSetup parameter. This is useful if the ipa-client-install setup would otherwise clobber the NTP setup done by puppet.

  • New parameters, NovaCronDBArchivedMaxDelay and CinderCronDbPurgeMaxDelay, are introduced to configure max_delay parameter to calculate randomized sleep time before db archive/purge. This avoids db collisions when performing db archive/purge operations on multiple controller nodes.

  • The passphrase for the config option ‘server_certs_key_passphrase’, which was recently added to Octavia, will now be auto-generated by TripleO by adding OctaviaServerCertsKeyPassphrase to the list of parameters TripleO configures in Octavia.

  • To allow PAM to create home directories for users who do not have one, ipa-client-install needs an option. This change allows enabling it.

  • Configure the Neutron API for Nova Placement. When the Neutron Routed Provider Networks feature is used in the overcloud, the Networking service will use these credentials to communicate with the Compute scheduler’s placement API.

  • The parameters NovaNfsEnabled, NovaNfsShare, NovaNfsOptions and NovaNfsVersion are changed to be role specific. This requires the use of host aggregates, as live migration of instances between different storage backends is not possible.

  • The parameter NovaRbdPoolName is changed to be role specific. This requires the use of host aggregates, as live migration of instances between different storage backends is not possible.

  • The new parameter NovaNfsVersion allows configuring the NFS version used for nova storage (when NovaNfsEnabled is true). Since NFSv3 does not support full locking, an NFSv4 version needs to be used. To avoid breaking current installations, the default is the previously hard coded version 4. See the sketch below.
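
    Because these parameters are role specific, they can be scoped per role, for example (the role name, share path and version are placeholders):

    parameter_defaults:
      ComputeParameters:
        NovaNfsEnabled: true
        NovaNfsShare: '192.168.122.1:/export/nova'
        NovaNfsVersion: '4.2'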

  • The Shared File Systems service (manila) API has been switched to running behind httpd, and it now supports configuring TLS options.

Upgrade Notes

  • Cinder’s NFS driver does not support snapshots unless the feature is explicitly enabled (this policy was chosen to ensure compatibility with very old versions of libvirt). The CinderNfsSnapshotSupport default value is True, and so the new default behavior enables NFS snapshots. This change is safe because it just enables a capability (i.e. snapshots) that other cinder drivers generally provide.

  • Keystone catalog entries for Cinder’s v1 API are no longer created, but existing entries will not be automatically deleted.

Deprecation Notes

  • The only OVN tunnel encap type that we support in OVN is Geneve, and this is set by default in ovn puppet, so there is no need to set it in TripleO.

Bug Fixes

  • Fixes an issue where deployment would fail if a non-default name_lower is used in network data for one of the networks: External, InternalApi or StorageMgmt. (See bug: 1830852.)

  • Fixed service auth URL in Octavia to use the Keystone v3 internal endpoint.

  • Fixes an issue that caused a subnet to be wrongly created on the Undercloud provisioning network based on environment default values. If the default ctlplane-subnet was renamed in undercloud.conf, the defaults for ctlplane-subnet in environments/undercloud.yaml were merged with the subnets defined in undercloud.conf. See bug 1820330.

  • It is now possible for temporary containers inside THT to test if they are being run as part of a minor update by checking if the TRIPLEO_MINOR_UPDATE environment variable is set to ‘true’ (said containers need to export it to the container explicitly); see <service>_restart_bundles for examples.

  • When setting up TLS everywhere, some deployers may not have their FreeIPA server in the ctlplane, causing the ipaclient registration to fail. We move this registration to host-prep tasks and invoke it using ansible. At this point, all networks should be set up and the FreeIPA server should be accessible.

  • e0e885b8ca3332e0815c537a32c564cac81f7f7e moved the cellv2 discovery from control plane to compute services. In case the computes do not have access to the external API, this task will fail. nova_cell_v2_discover_host.py is switched to use the internal API.

  • With a large number of OSDs, where each OSD needs a connection, the default nofile limit (1024) of nova_compute is too small. This changes the default DockerNovaComputeUlimit to 131072, which is the same value used for cinder.

  • Change-Id: I1a159a7c2ac286373df2b7c566426b37b7734961 moved the discovery to run on a single compute host to avoid races between simultaneous nova-manage commands. This change makes sure we run the discovery on every deploy run, which is required for scale-up events.

  • If the nova-manage command was triggered on a host for the first time as root (usually manual runs), the nova-manage.log gets created as the root user. On overcloud deploy runs the nova-manage command is run as the nova user. In this situation the overcloud deploy fails, as the nova user cannot write to nova-manage.log. With this change we chown the log files on every overcloud deploy to fix the nova-manage.log file permissions.

  • The keystone service and endpoint for Cinder’s API v1 are no longer created. Cinder removed support for its v1 API in Queens.

Other Notes

  • The EndpointMap parameter is now required by post_deploy templates. So if a user overrides OS::TripleO::NodeExtraConfigPost with another template, that template needs to accept the EndpointMap parameter to work correctly.

9.3.0

New Features

  • Add new parameter ‘GlanceInjectMetadataProperties’, to add metadata properties to be injected in image. Add new parameter ‘GlanceIgnoreUserRoles’, to specify name of user roles to be ignored for injecting metadata properties in the image.

  • Allow HAProxy output to be written to a dedicated file.

  • Adds a new HAProxySyslogFacility parameter.

  • Add a new TunedCustomProfile parameter which may contain a string in INI format describing a custom tuned profile. Also provide a new environment file for users of hyperconverged Ceph deployments using the Ceph filestore storage backend. The tuned profile is based on heavy I/O load testing. The provided environment file creates /etc/tuned/ceph-filestore-osd-hci/tuned.conf and sets this tuned profile to be active. Not intended for use with Ceph bluestore.

Known Issues

  • Fix misnaming of service in firewall rule for Octavia Health Manager service.

Upgrade Notes

  • Deployers that used resource_registry override in their environment to add networks to roles without also using a custom roles data file must create a custom roles data file and add the additional network(s) and use this when upgrading.

    Previously it was possible to add additional networks to a role without using a custom role by overriding the resource registry, for example:

    OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
    

    Warning

    Since resources are no longer added to the plan unless the network is specified in the role, the resource_registry override alone is no longer sufficient.

  • Non-lifecycle stack actions like stack check and cancel update for the undercloud are now disabled. Stack check is yet to be migrated to the heat convergence architecture, and cancel update is not recommended for the overcloud. Both are disabled by adding the required heat policy for the undercloud. The ‘overcloud update abort’ wrapper for stack cancel update was dropped a few releases ago.

Deprecation Notes

  • The NodeDataLookup parameter type was changed from string to json

Critical Issues

  • Networks not specified for roles in roles data (roles_data.yaml) no longer have Heat resources created. It is now mandatory that custom roles are used when non-default networks are used for a role.

    Previously it was possible to add additional networks to a role without using a custom role by overriding the resource registry, for example:

    OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml
    

    Note

    The resource_registry override was the only requirement prior to the introduction of Composable Networks in the Pike release.

    Since Pike a custom role would ideally be used when adding networks to roles, but documentation and other guides may not have been properly updated and only mention the resource_registry override.

Bug Fixes

  • Bug 1784967: the error message for invalid JSON in NodeDataLookup is now more helpful.

  • In other sections we already use the internal endpoints for authentication urls. With this change the auth_uri in the neutron section gets moved from KeystoneV3Admin to KeystoneV3Internal.

  • CephOSD/Compute nodes crash under memory pressure unless a custom tuned profile is used (bug 1800232).

9.2.0

New Features

  • Added Dell EMC SC multipath support. This change adds support for cinder::backend::dellsc_iscsi::use_multipath_for_image_xfer and adds a new parameter, CinderDellScMultipathXfer.

  • Add new parameter ‘GlanceImageImportPlugins’, to enable plugins used by image import process. Add parameter ‘GlanceImageConversionOutputFormat’, to provide desired output format for image conversion plugin.

  • Adds support to configure disjoint address pools for Ironic Inspector.

    When Inspector is deployed as a HA service disjoint address pools should be served by the DHCP instances to avoid address conflict issues. The disjoint address pools are configured by using hostname (short form) as the key, then pass the list of ranges for each host. For example:

    parameter_defaults:
    
      IronicInspectorSubnets:
        overcloud-ironic-0:
          - ip_range: 192.168.24.100,192.168.24.119
          - ip_range: 192.168.25.100,192.168.25.119
            netmask: 255.255.255.0
            gateway: 192.168.25.254
            tag: subnet1
        overcloud-ironic-1:
          - ip_range: 192.168.24.120,192.168.24.139
          - ip_range: 192.168.25.120,192.168.25.139
            netmask: 255.255.255.0
            gateway: 192.168.25.254
            tag: subnet1
    
  • Add OctaviaEventStreamDriver parameter to specify which driver to use for syncing Octavia and Neutron LBaaS databases.

Upgrade Notes

  • Octavia amphora images are now expected to be located in the directory /usr/share/openstack-octavia-amphora-images on the undercloud node, for consistency across different OpenStack distributions.

  • Remove the zaqar websocket service when upgrading from a non-containerized environment.

Deprecation Notes

  • Ensure Octavia amphora image files are placed in directory /usr/share/openstack-octavia-amphora-images on the undercloud node.

Bug Fixes

  • Make sure all Swift services are disabled after upgrading to a containerized undercloud.

  • With tls-everywhere enabled, connecting to the keystone endpoint failed to retrieve the URL for the placement endpoint because the certificate could not be verified. While verification was disabled for the later check of the placement endpoint, it was not disabled for communication with keystone. This change disables certificate verification for communication with keystone.

  • Fix an issue where Octavia amphora images were not accessible during overcloud deployment.

  • The deployed-server get-occ-config.sh script now allows $SSH_OPTIONS to be overridden.

9.1.0

New Features

  • Add support for ODL deployment on IPv6 networks.

  • Allow plugins that support it to create VLAN transparent networks. The vlan_transparent option determines whether plugins that support it create VLAN transparent networks or not.

  • We now provide an example set of environment files that can be used to deploy a single all-in-one standalone cloud node via the ‘openstack overcloud deploy’ and ‘openstack tripleo deploy’ (experimental) commands. For the overcloud deployment, use environments/standalone/standalone-overcloud.yaml. For the tripleo deploy deployment, use environments/standalone/standalone-tripleo.yaml.

  • Adds the possibility to set ‘neutron::agents::ml2::ovs::tunnel_csum’ via NeutronOVSTunnelCsum in the heat template. This parameter sets or un-sets the tunnel header checksum on outgoing IP packets carrying GRE/VXLAN tunnels in the ovs agent.

  • Now it’s possible to define the number of API and RPC workers separately for neutron-api service. This is good for certain network backends such as OVN that don’t require RPC communication.

  • Usage of eventlet for all the WSGI-run nova services is deprecated, including nova-api and nova-metadata-api. See https://review.openstack.org/#/c/549510/ for more details. With this change we move nova-metadata to run via httpd wsgi.

  • Add provision to set java options like heap size configurations in ODL.

  • Add support for libvirt volume_use_multipath, the ability to use multipath connections for iSCSI or FC volumes. Volumes can be connected in libvirt as multipath devices. Adds the new parameter “NovaLibvirtVolumeUseMultipath”.

Upgrade Notes

  • Swift worker count parameter defaults have been changed from ‘auto’ to 0. When set to 0, the puppet module default is used instead and the number of server processes is limited to 12.

  • The online part of the service upgrades (online data migrations) is now run using:

    openstack overcloud external-upgrade run --tags online_upgrade
    

    or per-service like:

    openstack overcloud external-upgrade run --tags online_upgrade_nova
    openstack overcloud external-upgrade run --tags online_upgrade_cinder
    openstack overcloud external-upgrade run --tags online_upgrade_ironic
    

    Consult the upgrade documentation regarding the full upgrade workflow.

Deprecation Notes

  • The environments/standalone.yaml has been deprecated and should be replaced with environments/standalone/standalone-tripleo.yaml when using the ‘openstack tripleo deploy’ command.

Bug Fixes

  • Fixed an issue where if Octavia API or Glance API were deployed away from the controller node with internal TLS, the service principals wouldn’t be created.

  • Nova Scheduler added worker support in Rocky. NovaSchedulerWorkers has been added to make it configurable.

  • An issue causing undercloud installer re-runs (or updates) to fail because VIPs were lost when the networking configuration was changed has been fixed. See Bug: 1791238.

  • Fixes an issue in the legacy port_from_pool templates for predictable IP addressing. Prior to this fix, using these templates would fail with the following error: Referenced Attribute (%network_name%%Port host_routes) is incorrect. (Bug: 1792968.)

  • Add customized libvirt-guests unit file to properly shutdown instances

    If resume_guests_state_on_host_boot is set in nova.conf, instances need to be shut down using libvirt-guests after the nova_compute container is shut down. Therefore we need a customized libvirt-guests unit file that 1) removes the dependency on (non-container) libvirt so that it does not get started as a dependency and make the nova_libvirt container fail, 2) adds a dependency on docker related services so that a shutdown of the nova_compute container is possible on system reboot, 3) stops the nova_compute container, and 4) shuts down the VMs.

    This is a missing part of Bug 1778216.

  • With OOO we have configured a separate DB for placement on the undercloud and overcloud since the beginning. But the placement_database config options were reverted with https://review.openstack.org/#/c/442762/1 , which means that so far, even if the config option was set, it was not used. With Rocky the options were introduced again, which is not a problem on a freshly installed environment, but is on upgrades from Queens to Rocky. We should use the same DB for both fresh deployments on and upgrades to Rocky, before we switch to the new DB as part of the extraction of placement.

  • An empty /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/karaf directory on the host empties /opt/opendaylight/etc/opendaylight/karaf inside the ODL container because of the mount. This leads to deployment failure on redeploy. The empty karaf directory on the host is now deleted before redeploying.

  • The previous installation method for the undercloud installed some extra OpenStack clients during the installation. Since we did not have an equivalent way in the containerized version of the undercloud, we’ve added a new TripleO ‘service’ to install all of the OpenStack clients on a system. OS::TripleO::Services::OpenStackClients has been added which can be added to a custom role to install the clients. By default, only the Undercloud and Standalone roles will have this available.

  • Ping the default gateways before the controllers in the validation script. In certain situations when using IPv6, it’s necessary to establish connectivity to the router before other hosts.

  • SELinux can be configured on the Standalone deployment by setting SELinuxMode.

Other Notes

  • A new parameter called ‘RabbitAdditionalErlArgs’ that specifies additional arguments to the Erlang VM has been added. It now defaults to “’+sbwt none’” (http://erlang.org/doc/man/erl.html#+sbwt) This threshold determines how long schedulers are to busy wait when running out of work before going to sleep. By setting it to none we let the erlang threads go to sleep right away when they do not have any work to do.

  • The common tasks in deploy-steps-tasks.yaml that are common to all roles are now tagged with one of: host_config, container_config, container_config_tasks, container_config_scripts, or container_startup_configs.

  • The step plays in deploy-steps.j2 (which generates the deploy_steps_tasks.yaml playbook) are now tagged with step[1-5] so that they can run individually if needed.

9.0.0

Prelude

TLS certificate injection is now done with Ansible instead of a bash script called by Heat. It deprecates the NodeTLSData resource and script, while keeping the use of its variables (SSLCertificate, SSLIntermediateCertificate, SSLKey)

New Features

  • This adds a flag called EnablePublicTLS, which defaults to ‘true’. It reflects that Public TLS is enabled by default, and it’s read by the deployment workflow to let the public certificate generation happen. It can also be used to disable this feature, if it’s set to ‘false’ as it’s done in the no-tls-endpoints-public-ip.yaml environment file, which allows deployers to turn this feature off.

  • The KeystoneURL stack output is now versionless.

  • Add NVMeOF as Cinder backend.

  • Adds support for configuring the cinder-backup service with an NFS backend.

  • Adds docker service for Neutron SFC.

  • A new routes field is available for the network definition in network_data.yaml. This field contains a list of network routes. For example:

    routes:
      - destination: 10.0.1.0/24
        nexthop: 10.0.0.1
      - destination: 10.0.2.0/24
        nexthop: 10.0.0.1
    

    The routes are used to set the host_routes property of the neutron subnet resource created for the network.

  • New parameter {{network.name}}InterfaceRoutes in the network config templates. Routes specified are configured on the overcloud node network interfaces. The parameter takes a list of routes in JSON. For example:

    [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
    
  • The parameter MistralExecutorExtraVolumes can be set to add volume mounts into the undercloud mistral-executor container. This allows workflows to have access to the required files for custom tasks such as rebuilding container images.

  • Added support for the containerized networking-ansible ML2 plugin.

  • Add the ability to fully purge the shadow tables in either the archive or the purge cron.

  • Adds support to configure enabled bios interfaces, using IronicEnabledBiosInterfaces setting.

  • Add Parameters to Configure Ulimit for Containers. These parameters can be used to configure ulimits on a per-container basis as required by the deployment. The following parameters are added for neutron, nova and cinder:

    - DockerNeutronDHCPAgentUlimit defaults to nofile=1024
    - DockerNeutronL3AgentUlimit defaults to nofile=1024
    - DockerOpenvswitchUlimit defaults to nofile=1024
    - DockerNovaComputeUlimit defaults to nofile=1024
    - DockerCinderVolumeUlimit defaults to nofile=131072

  • AllNodesExtraMapData is a new parameter that can be used to inject additional hieradata into the all_nodes.yaml hieradata file on each node. The injected data will be deeply merged with the new all_nodes hieradata calculated for the stack.

  • Add ‘neutron::plugins::ml2::physical_network_mtus’ as NeutronML2PhysicalNetworkMtus in the heat template to allow setting MTU values in the ml2 plugin. See the sketch below.
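
    For example, assuming the parameter takes a list of physnet:MTU pairs (the names and values below are illustrative only):

    parameter_defaults:
      NeutronML2PhysicalNetworkMtus:
        - 'physnet1:1500'
        - 'physnet2:9000'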

  • Enabled collectd on overcloud nodes to connect to local QDR running on each overcloud node in metrics_qdr container.

  • Makes the collectd deployment output metrics data to the Gnocchi instance running on the overcloud nodes by default.

  • Adds possibility to override default polling interval for collectd and set default value to 120 seconds, because current default (10s) was too aggressive.

  • Introduces NovaComputeCpuSharedSet parameter to set [compute]/cpu_shared_set option for compute nodes. Some workloads run best when the hypervisor overhead processes (emulator threads in libvirt/QEMU) can be placed on different physical host CPUs than other guest CPU resources. This allows those workloads to prevent latency spikes for guest vCPU threads.

    To place a workload’s emulator threads on a set of isolated physical CPUs, set the configuration option to the set of host CPUs that should be used for best-effort CPU resources. Then set a flavor extra spec to hw:emulator_threads_policy=share to instruct nova to place that workload’s emulator threads on that set of host CPUs.
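
    A minimal sketch of the heat side of this (the CPU ranges are illustrative only); the flavor extra spec hw:emulator_threads_policy=share is then set as described above:

    parameter_defaults:
      NovaComputeCpuSharedSet: ['0-3', '24-27']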

  • Added the NovaResumeGuestsStateOnHostBoot (true/false) parameter, which configures whether or not to start again instances which were running at the time of a compute reboot. This sets the resume_guests_state_on_host_boot parameter in nova.conf and configures and enables libvirt-guests with a dependency on the docker service to shut down instances before the docker container gets stopped. NovaResumeGuestsShutdownTimeout specifies the time in seconds an instance is allowed to take to shut down. See the sketch below.
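
    For example (the timeout value below is illustrative only):

    parameter_defaults:
      NovaResumeGuestsStateOnHostBoot: true
      NovaResumeGuestsShutdownTimeout: 300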

  • The mappings from environments/config-download-environment.yaml are now included by default in overcloud-resource-registry.j2.yaml. config-download is now the default way of deploying. An environment at environments/disable-config-download.yaml is added to enable the previous method, but that method is deprecated.

  • Add KernelIpForward configuration to enable/disable the net.ipv4.ip_forward configuration.

  • Add support for Neutron LBaaSV2 service plugin in a containerized deployment.

  • Added containerized tempest support in undercloud.

  • Add DockerTempestImage parameter to add a fake tempest service which makes sure tempest container exists on the undercloud.

  • Containers are now the default way of deploying. There is still a way to deploy the baremetal services in environments/baremetal-services.yaml, but this is expected to eventually disappear.

  • Add ability to specify a fixed IP for the provisioning control plane (ctlplane) network. This works similarly to the existing fixed IPs for isolated networks, by including an environment file which includes an IP for each node in each role that should use a fixed IP. An example environment file is included in environments/ips-from-pool-ctlplane.yaml.

  • Support for deploying Designate is now available. Designate support in TripleO is experimental and not recommended for use in production.

  • The user can now use a custom script to switch repos during the fast forward upgrade. They have to set FastForwardRepoType to custom-script and set FastForwardCustomRepoScriptContent to a string representing a shell script. That script will be executed on each node and given the upstream name of the release as the first argument (ocata, pike, queens in that order). Here is an example that describes its interface.

    #!/bin/bash
    case $1 in
      ocata)
        curl -o /etc/yum.repos.d/ocata.repo http://somewhere.com/my-Ocata.repo;
        yum clean metadata;
        ;;
      pike)
        curl -o /etc/yum.repos.d/pike.repo http://somewhere.com/my-Pike.repo;
        yum clean metadata;
        ;;
      queens)
        curl -o /etc/yum.repos.d/queens.repo http://somewhere.com/my-Queens.repo;
        yum clean metadata;
        ;;
      *)
        echo "unknown release $1" >&2
        exit 1
    esac
    
    
  • A new parameter IronicIPXETimeout can change the default iPXE timeout, set to 60 seconds. Note that 0 would set an infinite timeout.

  • Adds support to configure ironic-inspector with multiple IP ranges. This enables ironic-inspector’s DHCP server to serve requests that come in via a dhcp-relay agent.

  • Adds support for Ironic Networking Baremetal. Networking Baremetal is used to integrate the Bare Metal service with the Networking service.

  • Rescue mode is now enabled by default in ironic. To disable it, set IronicDefaultRescueInterface to no-rescue.

  • Allow configuring extra kernel modules and extra sysctl settings per role and not only globally for the whole deployment. The two parameters that can be role-specific are ExtraKernelModules and ExtraSysctlSettings. See the sketch below.
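
    For example, scoping the settings to a single role (the role name, sysctl key and value are illustrative only, and ExtraSysctlSettings is assumed to take a hash of sysctl names with a value key):

    parameter_defaults:
      ComputeParameters:
        ExtraSysctlSettings:
          net.ipv4.tcp_keepalive_time:
            value: 600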

  • L2GW driver changes to version 2 when using OpenDaylight.

  • It is now possible to specify values for any key in config_settings from multiple services; multiple values will be merged using YAQL mergeWith() function. For example, assuming two services defining a key as follows:

    config_settings:
      mykey:
        - val1

    config_settings:
      mykey:
        - val2
        - val3

    the content of the key, as seen by ansible or puppet on the nodes, will be:

    mykey: ['val1', 'val2', 'val3']

  • Added a new composable service (QDR) for containerized deployments. The metrics QDR will run on each overcloud node in ‘edge’ mode. This means there may be two QDRs running on controllers in case oslo messaging is also deployed; this is the reason we need a separate composable service for this use case.

  • MistralEvaluationInterval is a new parameter that allows configuring how often the Mistral executions are evaluated. For example, for a value of 120 the interval will be 2 hours (every 2 hours).

  • MistralFinishedExecutionDuration is a new parameter that allows configuring, in minutes, how old finished executions must be before Mistral removes them. For example, when set to 60, all executions that finished 60 minutes ago or more are removed. Note that only executions in a final state (SUCCESS/ERROR) are removed.

  • Added support for networking-ansible ML2 plugin.

  • Add cleanup services for neutron bridges that work with container based deployments.

  • Support for predictable IP addressing added to the default port templates. In previous releases the use of _from_pool templates was required to have predictable ip addresses assigned to the nodes. Use of the port_from_pool templates is no longer required. The interface to configure predictable IP addressing without port_from_pool templates is the same. For example:

    parameter_defaults:
      ControllerIPs:
        intapi:
          - 10.0.0.10
          - 10.0.0.11
        external:
          - 172.16.1.10
          - 172.16.1.11
    
  • Introduce NovaLibvirtRxQueueSize and NovaLibvirtTxQueueSize to set virtio-net queue sizes as a role parameter. Valid values are 256, 512 and 1024. See the sketch below.
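
    For example, scoped to a role (the role name and values are illustrative only):

    parameter_defaults:
      ComputeParameters:
        NovaLibvirtRxQueueSize: 1024
        NovaLibvirtTxQueueSize: 1024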

  • Add parameters NeutronPhysnetNUMANodesMapping and NeutronTunnelNUMANodes to provide numa affinity for physnets attached to vswitches.

  • The Octavia amphora image name is now derived from the filename by default so the OctaviaAmphoraImageName now behaves as an override if set to a non-default value.

  • The Octavia amphora image file name default value is now an empty string resulting in a distribution specific default location being used. The OctaviaAmphoraImageFilename parameter now behaves as an override if set to a non-default value.

  • Allow users to specify SSH name and public key to add to Octavia amphorae.

  • In OpenDaylight, a config parameter is available to enable DSCP marking inheritance for packets egressing out of OVS through VXLAN/GRE tunnels. Add a flag “OpenDaylightInheritDSCPMarking” in TripleO to enable this parameter via puppet-opendaylight. If the flag is set to ‘true’, DSCP marking feature is enabled in OpenDaylight.

  • Configure ODL to log to file instead of console. Also, the ODL log configuration has been moved out of general service configurations to a stand-alone config file at docker/services/logging/files/opendaylight-api.yaml. ODL logs are now accessible at /var/log/containers/opendaylight/karaf.log

  • Adds host_routes of the ports neutron subnet to output of OS::TripleO::{{role.name}}::Ports::{{network.name}}Port resources.

  • Support separate oslo.messaging services for RPC and Notifications. Enable separate messaging backend servers.

  • Provides the option to define a set of DNS servers which will be configured in the ‘ovn’ section of etc/neutron/plugins/ml2_conf.ini. These DNS servers will be used as DNS forwarders for the VMs if a neutron subnet is not defined with ‘dns_nameservers’ option.

  • Now it is possible to enable/disable debug mode in OVN metadata agent.

  • Until now, the ovs service file and ovs-ctl command files were patched to allow ovs to run with the qemu group. In order to remove these workarounds, a new group hugetlbfs is created which is shared between ovs and qemu. The vhostuser socket directory is changed from “/var/run/openvswitch” to “/var/lib/vhost_sockets” to avoid modifying the directory access used by packaged scripts. Use the env file ovs-dpdk-permissions.yaml while deploying.

  • Allows stopping and disabling the xinetd service, as it is not used anymore.

  • Allows removing the xinetd package when UpgradeRemoveUnusedPackages is set to “True”.

  • With the standalone deployment mechanism, the default list of enabled OpenStack services is Keystone, Nova (and related), Neutron (and related), Glance, Cinder, Swift and Horizon. The default list of disabled OpenStack services is Aodh, Barbican, Ceilometer, Congress, Designate, Gnocchi, Heat, Ironic, Manila, Mistral, Panko, Sahara, Tacker and Zaqar. Disabled services can be re-enabled by passing the appropriate environment files during the deployment.

  • Allow NFS configuration of storage backend for Nova. This way the instance files will be stored on a shared NFS storage.

  • Add composable service for TripleO Validations, that will be deployed on the Undercloud when enabled.

  • Added support to be able to configure SELinux with the containerized undercloud. By default it is enforcing. To disable SELinux, use SELinuxMode: permissive as part of the deployment extra configuration.

  • Enhance lb-mgmt-subnet to be a class B subnet, so the total number of Octavia load balancers won’t be constrained to a very low number.

  • Adds network_plugin_ipv6_enabled, emc_ssl_cert_verify and emc_ssl_cert_path options for Manila Unity driver.

  • Adds network_plugin_ipv6_enabled, emc_ssl_cert_verify and emc_ssl_cert_path options for Manila VNX driver.

  • Minor update ODL steps are added. An ODL minor update (within the same ODL release) can have 2 different workflows, called level 1 and level 2. Level 1 is simple: stop, update and start ODL. Level 2 is complex and involves YANG model changes, which requires wiping the DB and resyncing to repopulate the data. The steps involved in a level 2 update are:

    1. Block OVS instances from connecting to ODL
    2. Set the ODL upgrade flag to True
    3. Start ODL
    4. Start a Neutron re-sync and wait for it to finish
    5. Delete OVS groups and ports
    6. Stop OVS
    7. Unblock OVS ports
    8. Start OVS
    9. Unset the ODL upgrade flag

    To achieve an L2 update, use “-e environments/services-docker/update-odl.yaml” along with other env files in the update command.

  • Routes specified in the host_routes attribute of neutron subnets are passed to {{network.name}}InterfaceRoutes in the network templates. (The routes in a neutron subnet’s host_routes can be configured by setting the routes field in network_data.yaml.)

  • It is no longer a requirement to provide the parameter: ControlPlaneSubnetCidr in the environment when deploying. Now get_attr on the server resource is used to resolve the value from the ctlplane subnet’s cidr attribute. A conditional is used to determine if the user provided the parameter in the environment. If the user provided the parameter, the user provided value is used.

  • It is no longer a requirement to provide the parameter: ControlPlaneDefaultRoute in the environment when deploying. Now get_attr on the server resource is used to resolve the value from the ctlplane subnet’s gateway_ip attribute. A conditional is used to determine if the user provided the parameter in the environment. If the user provided the parameter, the user provided value is used.

  • It is no longer a requirement to provide the parameter: DnsServers in the environment when deploying. Now get_attr on the server resource is used to resolve the value from the ctlplane subnet’s dns_nameservers attribute. A conditional is used to determine if the user provided the parameter in the environment. If the user provided the parameter, the user provided value is used.

  • It is no longer a requirement to provide the parameter: EC2MetadataIp in the environment when deploying. Now get_attr on the server resource is used to resolve the value from the ctlplane subnet’s host_routes attribute. A conditional is used to determine if the user provided the parameter in the environment. If the user provided the parameter, the user provided value is used.

  • If TLS on the internal network is enabled, the nova-novnc to libvirt vnc transport defaults to using TLS. This can be changed by setting the UseTLSTransportForVnc parameter, which is true by default. A dedicated IPA sub-CA can be specified by the LibvirtVncCACert parameter. By default the main IPA CA will be used.

  • Add support for the Dell EMC XtremIO iSCSI cinder driver.

Upgrade Notes

  • Environment files originally referenced from environments/services-docker should be altered to the environments/services paths. If some of the deployed baremetal services need to be retained as non-containerized, update its references to environments/services-baremetal instead of environments/services.

    Note

    Starting from Rocky, overcloud upgrades to baremetal (non-containerized) services, or mixed services, are no longer tested or verified.

  • Ironic in the containerized undercloud now supports the direct deploy interface for better performance and scalability. See the direct deploy documentation for details.

    This deploy interface can be enabled per node by setting the node’s deploy_interface field to direct or globally by changing the new IronicDefaultDeployInterface parameter to direct.

  • Composable service templates can now define external_update_tasks and external_upgrade_tasks. They are meant for update/upgrade logic of services deployed via external_deploy_tasks. The external update playbook first executes external_update_tasks and then external_deploy_tasks, the procedure for upgrades works analogously. All happens within a single playbook, so variables or fact overrides exported from the update/upgrade tasks will be available to the deploy tasks during the update/upgrade procedure.

  • Per-service config_settings should now use hiera interpolation to set the bind IP for services, e.g “%{hiera(‘internal_api’)}” whereas prior to this release we replaced e.g internal_api for the IP address internally. The network name can still be derived from the ServiceNetMap - all the in-tree templates have been converted to the new format, but any out of tree templates may require similar adjustment.

  • The ‘LogrotatePurgeAfterDays’ parameter enforces cleanup of information that has exceeded its lifetime (defaults to 14 days) in the /var/log/containers directory of bare metal overcloud hosts, including upgrade (from containers) cases, when leftovers may remain on the host systems.

  • Containerized memcached logs to stdout/stderr instead of a file. Its logs may be picked up via journald.

  • When a service is deployed in WSGI with Apache, make sure the mod_ssl package is deployed during the upgrade process; it’s now required by default so Apache can start properly.

  • Support for predictable IP addressing has been added to the default port templates. This however has not been tested for upgrades. If upgrading continues use of the _from_pool templates is required.

  • When the undercloud was not containerized, the neutron database name was called neutron. When we upgrade to a containerized undercloud, the database name is called ovs_neutron.

  • Support for deprecated classic drivers was removed from the Ironic templates, which removes the IronicEnabledDrivers option. Please use IronicEnabledHardwareTypes and IronicEnabled***Interfaces parameters to enable/disable support for hardware types and interfaces.

  • Deprecated OVS-DPDK parameters (in pike) have been removed in rocky. If the deployment still uses the removed parameters, use the alternate parameters. Use OvsDpdkCoreList instead of HostCpusList. Use OvsPmdCoreList instead of NeutronDpdkCoreList. Use OvsDpdkMemoryChannels instead of NeutronDpdkMemoryChannels. Use OvsDpdkSocketMemory instead of NeutronDpdkSocketMemory. Use OvsDpdkDriverType instead of NeutronDpdkDriverType.

  • pre_upgrade_rolling_tasks are added for use by the composable service templates. The resulting pre_upgrade_rolling_steps_playbook is intended to be run at the beginning of major update workflow (before running the upgrade_steps_playbook). As the name suggests, the tasks in this playbook will be executed in a node-by-node rolling fashion.

  • The disable_upgrade_deployment flag is now completely removed from the roles_data. It will have no effect if you continue to include this flag. It has not been used since the Pike upgrade. In Queens the upgrade workflow is delivered with ansible playbooks.

  • manila containerization was experimental in Pike and we had both bare metal and docker versions of some of the manila environment files. Now the docker environment files are fully supported so we keep them using the standard manila environment file names, without any ‘docker’ in their name.

  • Upgrading DVR deployments may require customization of the Compute role if they depend on the overcloud’s external API network for floating IP connectivity. If necessary, please add “External” to the list of networks for the Compute role in roles_data.yaml before upgrading.

  • All NodeTLSData related resources must be removed.

  • SSLCertificate, SSLIntermediateCertificate, SSLKey are still used for the TLS configuration.

  • This fix changes the default mask for lb-mgmt-subnet. Operators can either manually modify the already existing subnet or create a new one and update Octavia to use it. Newly created load balancers will use the newly created subnet.

  • Since the ControlPlaneSubnetCidr can now be resolved from the ctlplane subnet(s), this parameter can be removed from the environment (network-environment.yaml).

    Note

    Prior to removing the parameter, ensure that the property of the ctlplane subnet(s) is correct. In case it is not, update undercloud.conf with the correct configuration and re-run the openstack undercloud install command to ensure the property is set correctly.

    Note

    ControlPlaneSubnetCidr is now passed to the network config template when the resource is created. Because of this the parameter must be defined in the network config template, even if it is not used.

  • Since the ControlPlaneDefaultRoute can now be resolved from the ctlplane subnet(s), this parameter can be removed from the environment (network-environment.yaml).

    Note

    Prior to removing the parameter ensure that the property of the ctlplane subnet(s) is correct. In case it is not, update undercloud.conf with the correct configuration and re-run the openstack undercloud install command to ensure the property is set correctly.

    Note

    ControlPlaneDefaultRoute is now passed to the network config template when the resource is created. Because of this the parameter must be defined in the network config template, even if it is not used.

  • Since the DnsServers can now be resolved from the ctlplane subnet(s), this parameter can be removed from the environment (network-environment.yaml).

    Note

    Prior to removing the parameter ensure that the property of the ctlplane subnet(s) is correct. In case it is not, update undercloud.conf with the correct configuration and re-run the openstack undercloud install command to ensure the property is set correctly.

    Note

    DnsServers is now passed to the network config template when the resource is created. Because of this the parameter must be defined in the network config template, even if it is not used.

  • Since the EC2MetadataIp can now be resolved from the ctlplane subnet(s), this parameter can be removed from the environment (network-environment.yaml).

    Note

    Prior to removing the parameter ensure that the property of the ctlplane subnet(s) is correct. In case it is not, update undercloud.conf with the correct configuration and re-run the openstack undercloud install command to ensure the property is set correctly.

    Note

    EC2MetadataIp is now passed to the network config template when the resource is created. Because of this the parameter must be defined in the network config template, even if it is not used.

  • Zaqar has been switched to use the redis backend by default from the mongodb backend. Mongodb has not been supported by TripleO since Pike.

Deprecation Notes

  • The nova_catalog_admin_info parameter is no longer being configured for cinder since it was deprecated.

  • The Debug parameter does not activate Memcached debug anymore. You have to pass MemcachedDebug explicitly.

  • environments/disable-config-download.yaml can be used to disable config-download but is deprecated.

  • The following parameters are deprecated when deploying Manila with CephFS:

    - ManilaCephFSNativeShareBackendName, use ManilaCephFSShareBackendName instead
    - ManilaCephFSNativeBackendName, use ManilaCephFSBackendName instead
    - ManilaCephFSNativeCephFSAuthId, use ManilaCephFSCephFSAuthId instead
    - ManilaCephFSNativeDriverHandlesShareServers, use ManilaCephFSDriverHandlesShareServers instead
    - ManilaCephFSNativeCephFSEnableSnapshots, use ManilaCephFSCephFSEnableSnapshots instead
    - ManilaCephFSNativeCephFSClusterName, matches the CephClusterName parameter
    - ManilaCephFSNativeCephFSConfPath, autogenerated from CephClusterName

  • The templates at extraconfig/pre_network/host_config_and_reboot.yaml (replaced with extraconfig/pre_network/boot-params-service.yaml) and extraconfig/tasks/ssh/host_public_key.yaml (replaced with the tripleo-ssh-known-hosts role) are deprecated as they do not work with config-download. They will be removed in the Stein release.

  • auth_uri is deprecated and will be removed in a future release. Please use www_authenticate_uri instead.

  • GnocchiArchivePolicy is now deprecated. The archive policy has to be passed through the PipelinePublishers/EventPipelinePublishers URIs.

  • The parameter IronicInspectorIpRange is deprecated. Use the new IronicInspectorSubnets instead.

  • Environment file ovs-dpdk-permissions.yaml is deprecated and the mandatory parameter VhostuserSocketGroup is added to the roles data file of the missing OvS-DPDK role. Using this environment file is redundant and it will be removed in the Stein release.

  • Deployment of a managed Ceph cluster using puppet-ceph is not supported from the Pike release. From the Queens release it is not supported to use puppet-ceph when configuring OpenStack with an external Ceph cluster. In Rocky any support file necessary for the deployment with puppet-ceph is removed completely.

  • The environment/services/undercloud-.yaml files will be removed in the Stein release. These files relied on OS::TripleO::Services::Undercloud services that have been removed.

  • The xinetd service is deprecated, so we stop it, disable it, and optionally remove its package.

  • NodeTLSData is now deprecated.

  • The use of outputs with Heat SoftwareConfig or StructuredConfig resources is now deprecated as they are no longer supported with config-download. Resources that depend on outputs and their values should be changed to use composable services with external_deploy_tasks or deploy_steps_tasks.

Security Issues

  • New heat parameters for containerized services ‘LogrotateMaxsize’, ‘LogrotateRotationInterval’, ‘LogrotateRotate’ and ‘LogrotatePurgeAfterDays’ allow customizing size/time-based rules for the containerized services logs rotation. The time based rules prevail over all.

  • Restrict memcached service to TCP and internal_api network (CVE-2018-1000115).

  • PasswordAuthentication is enabled by default when deploying a containerized undercloud. We don’t expect our operators to setup ssh keys during the initial deployment so we allow them to use the password to login into the undercloud node.

Bug Fixes

  • Add VTSSideId parameter to Cisco VTS ML2 template.

  • Fix a typo in the manila-share pacemaker template which was causing failures on upgrades and updates.

  • Launchpad bug 1788337 that affected the overcloud deployment with TLS Everywhere has been fixed. The manila bootstrap container no longer fails to connect securely to the database.

  • Avoid life cycle issues with Cinder volumes by ensuring Cinder has a default volume type. The name of the default volume type is controlled by a new CinderDefaultVolumeType parameter, which defaults to “tripleo”. Fixes bug 1782217.

  • Previously the default throughput-performance tuned profile was set on the computes. Now virtual-host is set as the default for the Compute roles. For the compute NFV use case the profile is cpu-partitioning, for RT it is realtime-virtual-host, and for HCI it is throughput-performance.

  • Previously, get-occ-config.sh could configure nodes out of order when deploying with more than 10 nodes. The script has been updated to properly sort the node resource names by first converting the names to a number.

  • The name_lower field in network_data.yaml can be used to define custom network names, but the ServiceNetMap must then be updated with the new names in all places. This change adds a new field to network_data.yaml, service_net_map_replace, which should be set to the original name_lower value so that ServiceNetMap is updated automatically.
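
    A sketch of the relevant network_data.yaml fields (other required fields are omitted and the names are examples):

      - name: Storage
        name_lower: storage_cloud_0
        service_net_map_replace: storage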

  • Previously, when blacklisting all servers of the primary role, the stack would fail since the bootstrap server id was empty. The value is now defaulted in case all primary role servers are blacklisted.

  • The Octavia SSH public key now defaults to the ‘default’ keypair from the undercloud.

  • When using get-occ-config.sh with a role using a count greater than 1, the script will now configure all nodes that are of that role type instead of exiting after only configuring the first.

  • Fixes Neutron certificate and key for TLS deployments to have the correct user/group IDs.

  • Fixes the GUI feature loaded into OpenDaylight, which fixes the GUI as well as the URL used for the Docker healthcheck.

  • Fixes OpenDaylight container service not starting due to missing config files in /opt/opendaylight/etc directory.

  • Fixes an issue when custom hostnames are in use. Previously, the HostnameMap lookup could return unexpected and incorrect results if the hostname (the key) was not in the map. The string replacement mechanism used previously has been replaced by a yaql expression that does exact matching only. If no matching key is in the HostnameMap, the default hostname is returned. See bug 1781560.
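
    For reference, HostnameMap keys are now matched exactly against the generated hostnames, e.g. (the names are examples):

      parameter_defaults:
        HostnameMap:
          overcloud-controller-0: custom-ctrl-0
          overcloud-novacompute-0: custom-compute-0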

  • Fixes failure to create Neutron certificates for roles which do not contain the Neutron DHCP agent but include other Neutron agents (e.g. the default Compute role).

  • The nova/neutron/ceilometer host parameter is now explicitly set to the same value that is written to /etc/hosts. On a correctly configured deployment they should already be identical. However, if the hostname or domain name is altered (e.g. via DHCP), then the hostname is unlikely to resolve to the correct IP address for live migration. Related bug: https://bugs.launchpad.net/tripleo/+bug/1758034

  • Set live_migration_inbound_addr for the ssh transport.

    Previously this was only set when TLS was enabled, which meant that with the ssh transport we could not control the network used and were relying on DNS or the hosts file being correct, which is not guaranteed (especially with DNS).

  • By default, libvirtd uses ports from 49152 to 49215 for live migration, as specified in qemu.conf; this range is a subset of the ephemeral port range (from 32768 to 61000) used by many Linux kernels. Because ephemeral ports are used for outgoing TCP sockets, live migration could fail if no port in the specified range was available. The port range has been moved out of the ephemeral range and is used only for live migration.

  • This fixes an issue with the yaml-nic-config-2-script.py script that converts old-style nic config files to new-style. It now handles blank lines followed by a comment line.

  • Instance creation fails due to a wrong default SELinux context with NFS.

    With NovaNfsEnabled, instance creation failed because the default SELinux context in THT was set to nova_var_lib_t in Ie4fe217bd119b638f42c682d21572547f02f17b2, whereas system_u:object_r:nfs_t:s0 should have access. The virt_use_nfs boolean, which is turned on by openstack-selinux, covers this use case.

    This changes the default to context=system_u:object_r:nfs_t:s0.

  • When tls-everywhere is configured, we have a TLS connection from the client to haproxy and from the novncproxy to the VNC server (instance), but the connection from haproxy to the nova novnc proxy was not encrypted. Now we request a certificate and configure haproxy and the novnc proxy so that this remaining leg of the VNC connection is encrypted as well.

  • With https://review.openstack.org/#/c/561784 we changed the default migration port range to ‘61152-61215’. nova::migration::qemu::configure_qemu needs to be set to true so that the configuration gets applied via puppet-nova.
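
    A hedged sketch of enabling this via role extra config (the role-specific grouping shown is an assumption):

      parameter_defaults:
        ComputeExtraConfig:
          nova::migration::qemu::configure_qemu: true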

  • The nova statedir ownership logic has been reimplemented to target only the files/directories controlled by nova. Resolves VM I/O errors when using an NFS backend (bug 1778465).

  • Delete ODL data folder while updating/upgrading ODL.

  • Fixes minor updates issue for ovn dbs pacemaker bundle resource by tagging the docker image used for ovn dbs pacemaker resource with pcmklatest and adding required missing tasks in “update_tasks” and “upgrade_tasks” section of the service file.

  • The default values for the PcsdPassword and PacemakerRemoteAuthkey parameters have been removed, as they did not result in a functioning pacemaker installation. These values are instead generated by tripleo-common, and in the cases where they are not (direct API), we want to fail explicitly if they are not provided.

  • The baremetal API version is no longer hardcoded in stackrc. This allows easy access to new features in ironicclient as they are introduced. If you need to use a fixed API version, set the OS_BAREMETAL_API_VERSION environment variable.

  • Add support for the SshKnownHostsDeployment resources to config-download. Since the deployment resources relied on Heat outputs, they were not supported with the default handling from tripleo-common that relies on the group_vars mechanism. The templates have been refactored to add the known hosts entries as global_vars to deploy_steps_playbook.yaml, and then include the new tripleo-ssh-known-hosts role from tripleo-common to apply the same configuration that the Heat deployment did.

  • manila-backend-vnx.yaml:
    1. Remove ManilaVNXServerMetaPool since meta_pool is not used by Manila VNX.

    2. Add ManilaVNXServerContainer.

  • cinder-dellemc-vnx-config.yaml:
    1. Remove the default value of CinderDellEMCVNXStorageSecurityFileDir since it is not a mandatory option for the Cinder VNX driver.

  • Historically, if the puppet definition for a pacemaker resource changed, puppet would not update the resource. We now enable the updating of pacemaker resources by default, the main use case being restarting a bundle when a bind mount gets added. Puppet will wait for the resource to completely restart before proceeding with the deploy.

  • {{role.name}}ExtraConfig will now be honored even when using deprecated params in roles_data.yaml. Previously, its value was ignored and never used even though it is defined as a valid parameter in the rendered template.

Other Notes

  • Add “segments” service plugin to the default list of neutron service plugins.

  • BlacklistedHostnames has been added as a stack output. The value of the output is a list of blacklisted hostnames.

  • Add a check for nic config files using the old-style format (os-apply-config) and list the script that can be used to convert the files.

  • Add the ContainerImagePrepareLogFile heat parameter, which points to a log file that stores the output of the openstack tripleo container image prepare commands invoked by ansible for container registry deployments. Because these commands take quite a while to finish and may retry intermittent failures, they also log --verbose details.

    The default log file, tripleo-container-image-prepare.log, is placed in the directory containing the downloaded ansible playbooks and inventory files. For undercloud deployments, the log destination is shared with the default install-undercloud.log file.
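
    For example (the path shown is an assumption, not necessarily the default location):

      parameter_defaults:
        ContainerImagePrepareLogFile: /var/log/tripleo-container-image-prepare.log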

  • The default docker0 bridge should normally be given a value that does not conflict with any of the existing networks’ CIDR ranges.

    If there is a conflict with the default value 172.31.0.1/24, users can alter the docker service startup --bip option via DockerNetworkOptions.
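
    For example, a non-conflicting bridge address could be set with (the value is illustrative):

      parameter_defaults:
        DockerNetworkOptions: "--bip=192.168.200.1/24"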

  • New Heat deployments that use outputs will now be flagged as an error by yaml-validate.py, as outputs do not work with config-download. Existing deployments with outputs are excluded.

  • Moved the glance NFS mount task to puppet/services under host_prep_tasks instead of having it separately for the puppet and docker services.

    This is just a refactoring cleanup, so there is no functional change or upgrade impact.

  • Removed the environment files to deploy OVN DB servers in non-HA mode, since non-HA OVN deployments are not recommended. There is no support for upgrading an existing OVN deployment from non-HA to HA; a fresh deployment is recommended instead. To deploy OVN with DVR support, use environment/services/neutron-ovn-dvr-ha.yaml; otherwise use environment/services/neutron-ovn-ha.yaml.

  • We now set the default number of rabbitmq queue mirrors to CEIL(N/2), where N is the number of rabbitmq nodes. Previously this was set to N by default, which translated to having all queues mirrored to all controllers. By changing the default to CEIL(N/2), the queues are no longer copied to all servers, but only to a subset of them. This still provides the necessary resilience in case of a controller crash, but is less demanding in terms of performance (and likely triggers fewer bugs in rabbit).