Queens Series Release Notes


New Features

  • Adds support for IGMP snooping (Multicast) in the Neutron ML2/OVS driver.

  • Added the configuration option to set reserved_huge_pages. When NovaReservedHugePages is set, “reserved_huge_pages” is set to its value. If NovaReservedHugePages is unset and OvsDpdkSocketMemory is set, the reserved_huge_pages value is calculated from KernelArgs and OvsDpdkSocketMemory: KernelArgs determines the default huge page size used (the default is 2048KB) and OvsDpdkSocketMemory determines the number of huge pages to reserve.
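
    As a rough sketch, an explicit reservation might look like the following (the values and node layout are illustrative, not defaults):

    ```yaml
    # Hypothetical environment file; node/size/count values are examples only.
    parameter_defaults:
      # When set, this wins over the value calculated from KernelArgs
      # and OvsDpdkSocketMemory.
      NovaReservedHugePages:
        - node:0,size:2048,count:64
        - node:1,size:2048,count:64
    ```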

  • Added the “connection_logging” parameter for the Octavia service.

  • Added the Octavia anti-affinity parameters.

  • Three new parameter options were added to the Octavia service: OctaviaConnectionMaxRetries, OctaviaBuildActiveRetries, and OctaviaPortDetachTimeout.

  • Add new role parameters NovaCPUAllocationRatio, NovaRAMAllocationRatio and NovaDiskAllocationRatio, which allow configuring cpu_allocation_ratio, ram_allocation_ratio and disk_allocation_ratio. The default value for NovaCPUAllocationRatio is 0.0, for NovaRAMAllocationRatio 1.0, and for NovaDiskAllocationRatio 0.0.

    The default values of 0.0 for the CPU and Disk allocation ratios are taken from [1]. [1] https://specs.openstack.org/openstack/nova-specs/specs/stein/implemented/initial-allocation-ratios.html
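
    A minimal environment file using these role parameters might look like this (values illustrative):

    ```yaml
    parameter_defaults:
      NovaCPUAllocationRatio: 4.0
      NovaRAMAllocationRatio: 1.0
      NovaDiskAllocationRatio: 1.5
    ```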

  • Enabled collectd on overcloud nodes to connect to local QDR running on each overcloud node in metrics_qdr container.

  • Add a role-specific parameter, ContainerCpusetCpus, defaulting to ‘all’, which limits the specific CPUs or cores a container can use. To disable it and rely on the container engine default, set it to ‘’.
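
    As a sketch, limiting containers on the Compute role to specific cores might look like this (the role-specific <Role>Parameters wrapper and the CPU list are illustrative assumptions):

    ```yaml
    parameter_defaults:
      ComputeParameters:
        ContainerCpusetCpus: '2-15,18-31'   # '' disables the limit
    ```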

  • deep_compare is now enabled by default for stonith resources, allowing their properties to be updated via stack update. To disable it set ‘tripleo::fencing::deep_compare: false’.

  • Add new parameter ‘GlanceImageImportPlugins’, to enable plugins used by image import process. Add parameter ‘GlanceImageConversionOutputFormat’, to provide desired output format for image conversion plugin.

  • Added NeutronPermittedEthertypes to allow configuring additional ethertypes on neutron security groups for L2 agents that support it.

  • Added a new heat param, OVNOpenflowProbeInterval, to set ovn_openflow_probe_interval, which is the inactivity probe interval, in seconds, of the OpenFlow connection to the OpenvSwitch integration bridge. If the value is zero, the connection keepalive feature is disabled; by default this value is set to 60s. If the value is nonzero, it is forced to a value of at least 5s.

  • HA services use a special container image name derived from the one configured in Heat parameter plus a fixed tag part, i.e. ‘<registry>/<namespace>/<servicename>:pcmklatest’. To implement rolling update without service disruption, this ‘pcmklatest’ tag is adjusted automatically during minor update every time a new image is pulled. A new Heat parameter ClusterCommonTag can now control the prefix part of the container image name. When set to true, the container name for HA services will look like ‘container-common-tag/<servicename>:pcmklatest’. This allows rolling update of HA services even when the <namespace> changes in Heat.

  • Introduce a PacemakerTLSPriorities parameter, which will set the PCMK_tls_priorities config option in /etc/sysconfig/pacemaker and the PCMK_tls_priorities variable inside the bundle. This, when set, allows an operator to specify what kind of GNUTLS ciphers are desired for the pacemaker control port.

  • Under pressure, the default monitor timeout value of 20 seconds is not enough to prevent unnecessary failovers of the ovn-dbs pacemaker resource. While spawning a few VMs at the same time, this could lead to unnecessary movements of the master DB, then re-connections of ovn-controllers (slaves are read-only), further peaks of load on the DBs, and in the end a snowball effect. This value is now configurable via OVNDBSPacemakerTimeout, which configures tripleo::profile::pacemaker::ovn_dbs_bundle (the default is 60s).

  • OVS and neutron now support endpoint creation on IPv6 networks. New network--v6-all.j2.yaml environment files are added to allow tenant networks to be created on IPv6 addresses. Note that these files are only to be used for new deployments, not during update or upgrade. The network_data.yaml files are also edited to reflect the same.

Upgrade Notes

  • The CIDR for the StorageNFS network in the sample network_data_ganesha.yaml file has been modified to provide more usable IPs for the corresponding Neutron overcloud StorageNFS provider network. Since the CIDR of an existing network cannot be modified, deployments with existing StorageNFS networks should be sure to customize the StorageNFS network definition to use the same CIDR as that in their existing deployment in order to avoid a heat resource failure when updating or upgrading the overcloud.

Bug Fixes

  • When deploying a spine-and-leaf (L3 routed architecture) with TLS enabled for internal endpoints, the deployment would fail because some roles are not connected to the network mapped to the service in ServiceNetMap. To fix this issue a role-specific parameter, {{role.name}}ServiceNetMap, is introduced (defaults to: {}). The role-specific ServiceNetMap parameter allows the operator to override one or more service network mappings per role. For example:

      NovaLibvirtNetwork: internal_api_leaf2

    The role-specific {{role.name}}ServiceNetMap override is merged with the global ServiceNetMap when it’s passed as a value to the {{role.name}}ServiceChain resources and the {{role.name}} resource groups, so that the correct network for this role is mapped to the service.

    Closes bug: 1904482.
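
    Expanding the fragment above into a complete environment file might look like this (the role and network names are hypothetical):

    ```yaml
    parameter_defaults:
      ComputeLeaf2ServiceNetMap:
        NovaLibvirtNetwork: internal_api_leaf2
    ```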

  • Fixed an issue where Octavia controller services were not properly configured.

  • Fixed issue in the sample network_data_ganesha.yaml file where the IPv4 allocation range for the StorageNFS network occupies almost the whole of its CIDR. If network_data_ganesha.yaml is used without modification in a customer deployment then there are too few IPs left over in its CIDR for use by the corresponding overcloud Neutron StorageNFS provider network for its overcloud DHCP service. (See bug: #1889682)

  • Fixes an issue where filtering of networks for kerberos service principals was too aggressive, causing deployment failure. See bug 1854846.

  • ServiceNetMap now handles any network name when computing the default network for each service in ServiceNetMapDefaults.

  • Partial backport from train to use bind mounts for certificates. The UseTLSTransportForNbd parameter is not available in queens.

    Certificates get merged into the containers using the kolla_config mechanism. If a certificate changes, or e.g. UseTLSTransportForNbd gets disabled and enabled at a later point, the containers running the qemu process miss the required certificates and live migration fails. This change moves to using bind mounts for the certificates and creates the required certificates even if UseTLSTransportForNbd is set to False. With this, UseTLSTransportForNbd can be enabled/disabled as the required bind mounts/certificates are already present.

  • Restart certmonger after registering the system with IPA. This prevents certificate requests from not completing correctly when doing a brownfield update.

  • Fix Swift ring synchronization to ensure every node on the overcloud has the same copy to start with. This is especially required when replacing nodes or using manually modified rings.

Other Notes

  • Add the “port_forwarding” service plugin and L3 agent extension, enabled by default when the Neutron ML2 plugin with the OVS driver is used. A new config option, “NeutronL3AgentExtensions”, is also added. This option allows setting the list of L3 agent extensions the agent should use.


New Features

  • Created an ExtraKernelPackages parameter to allow users to install additional kernel-related packages prior to loading the kernel modules defined in ExtraKernelModules.

  • Added the NovaResumeGuestsStateOnHostBoot (true/false) parameter, which configures whether or not to restart instances that were running at the time of a compute reboot. This sets the resume_guests_state_on_host_boot parameter in nova.conf and configures and enables libvirt-guests with a dependency on the docker service, so that instances are shut down before the docker container gets stopped. NovaResumeGuestsShutdownTimeout specifies the time in seconds to allow an instance to shut down.
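
    A minimal sketch of enabling this behavior (the timeout value is illustrative):

    ```yaml
    parameter_defaults:
      NovaResumeGuestsStateOnHostBoot: true
      NovaResumeGuestsShutdownTimeout: 300   # seconds allowed for each instance to shut down
    ```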

  • Adds support for Ironic Networking Baremetal. Networking Baremetal is used to integrate the Bare Metal service with the Networking service.

  • Added a new composable service (QDR) for containerized deployments. Metrics QDR will run on each overcloud node in ‘edge’ mode. This means two QDRs may run on controllers in case oslo messaging is deployed, which is why we need a separate composable service for this use case.

  • Add ContainerNovaLibvirtUlimit to configure Ulimit for containerized Libvirt. Defaults to nofile=131072,nproc=126960.

  • Add the parameter NovaLibvirtMemStatsPeriodSeconds, which sets the libvirt/mem_stats_period_seconds parameter: the number of seconds in the memory usage statistics period; a zero or negative value disables memory usage statistics. The default value for NovaLibvirtMemStatsPeriodSeconds is 10.

  • Adds the LibvirtLogFilters parameter to define a filter to select a different logging level for a given category of log outputs, as specified in https://libvirt.org/logging.html. Default: ‘1:libvirt 1:qemu 1:conf 1:security 3:event 3:json 3:file 3:object 1:util’

  • Adds LibvirtTLSPriority parameter to override the compile time default TLS priority string. Default: ‘NORMAL:-VERS-SSL3.0:-VERS-TLS-ALL:+VERS-TLS1.2’

  • Introduced two new numeric parameters, OvsRevalidatorCores and OvsHandlerCores, to set the values of n-revalidator-threads and n-handler-threads on openvswitch.

  • The RabbitMQ management plugin (rabbitmq_management) is now enabled. By default RabbitMQ management is available on port 15672 on the localhost interface.

Upgrade Notes

  • The new role variable update_serial is introduced, allowing parallel update execution. On the Controller role this variable defaults to 1, as pacemaker has to be taken down and brought up in a rolling fashion. For other roles the default value is 25, which is the default value for parallel ansible execution used by tripleo.

Bug Fixes

  • Avoid life cycle issues with Cinder volumes by ensuring Cinder has a default volume type. The name of the default volume type is controlled by a new CinderDefaultVolumeType parameter, which defaults to “tripleo”. Fixes bug 1782217.
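
    For example, to override the default volume type name (value illustrative):

    ```yaml
    parameter_defaults:
      CinderDefaultVolumeType: 'tripleo'   # the default; any existing type name may be used
    ```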

  • Fixed an issue where the update and upgrade tasks for Octavia would use the removed docker module in Ansible 2.4.

  • The passphrase for the config option ‘server_certs_key_passphrase’ is used as a Fernet key in Octavia and thus must be 32 bytes long. In the case of an operator-provided passphrase, TripleO will validate that.

  • Certain nova containers require more locked memory than the default limit of 16KiB. This increases the default memlock to 64MiB via DockerNovaComputeUlimit.

    As this is only a maximum limit and not a pre-allocation, this will not increase the memory requirements for all nova containers. To date the only container to require this is nova_cell_v2_discover_hosts, which is short lived.

  • Add a customized libvirt-guests unit file to properly shut down instances

    If resume_guests_state_on_host_boot is set in nova.conf, instances need to be shut down using libvirt-guests after the nova_compute container is shut down. Therefore we need a customized libvirt-guests unit file that 1) removes the dependency on libvirt (non container) so it does not get started as a dependency and make the nova_libvirt container fail, 2) adds a dependency on docker related services so that a shutdown of the nova_compute container is possible on system reboot, 3) stops the nova_compute container, and 4) shuts down the VMs.

    This is a missing part of Bug 1778216.

  • Fixes an issue whereby TLS Everywhere brownfield deployments were timing out because the database entry for cell0 was not being updated in step 3. This entry is now updated in step 3.


New Features

  • Added the configuration option to disable the Exact Match Cache (EMC).

  • Support setting values for cephfs_volume_mode manila parameter via the THT parameter ManilaCephFSCephVolumeMode. These control the POSIX rwx mode of the cephfs volumes, snapshots, and groups of these that back corresponding manila resources. Default value for ManilaCephFSCephVolumeMode is ‘0755’, backwards-compatible with the mode for these objects before it was settable.
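
    For example, to grant group write access as well (mode value illustrative):

    ```yaml
    parameter_defaults:
      ManilaCephFSCephVolumeMode: '0775'   # default is '0755'
    ```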

  • Add new CinderNfsSnapshotSupport parameter, which controls whether cinder’s NFS driver supports snapshots. The default value is True.

  • The parameter {{role.name}}RemovalPoliciesMode can be set to ‘update’ to reset the existing blacklisted nodes in heat. This will help re-use the node indexes when required.

  • Allows a deployer to specify the IdM domain with --domain on the ipa-client-install invocation by providing the IdMDomain parameter.

  • Allows a deployer to direct ipa-client-install to skip NTP setup by specifying the IdMNoNtpSetup parameter. This is useful if the ipa-client-install setup would clobber the NTP setup done by puppet.

  • New parameters, NovaCronDBArchivedMaxDelay and CinderCronDbPurgeMaxDelay, are introduced to configure the max_delay parameter used to calculate a randomized sleep time before db archive/purge operations. This avoids db collisions when performing db archive/purge on multiple controller nodes.

  • The passphrase for the config option ‘server_certs_key_passphrase’, which was recently added to Octavia, will now be auto-generated by TripleO by adding OctaviaServerCertsKeyPassphrase to the list of parameters TripleO configures in Octavia.

  • To allow PAM to create home directories for users who do not have one, ipa-client-install needs an option. This change allows enabling it.

  • Configure the Neutron API for Nova Placement. When the Neutron Routed Provider Networks feature is used in the overcloud, the Networking service will use those credentials to communicate with the Compute scheduler’s placement API.

  • The parameters NovaNfsEnabled, NovaNfsShare, NovaNfsOptions and NovaNfsVersion are changed to be role specific. This requires the use of host aggregates, as otherwise live migration of instances will break since it cannot be done between different storage backends.

  • The parameter NovaRbdPoolName is changed to be role specific. This requires the use of host aggregates, as otherwise live migration of instances will break since it cannot be done between different storage backends.

  • The new parameter NovaNfsVersion allows configuring the NFS version used for nova storage (when NovaNfsEnabled is true). Since NFSv3 does not support full locking, an NFSv4 version needs to be used. To avoid breaking current installations, the default is the previously hard coded version 4.
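
    A sketch of a role-specific NFS configuration (the share address and the <Role>Parameters wrapper are illustrative assumptions):

    ```yaml
    parameter_defaults:
      ComputeParameters:
        NovaNfsEnabled: true
        NovaNfsShare: '192.0.2.10:/export/nova'   # example share
        NovaNfsVersion: '4.2'                     # default is 4
    ```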

  • The Shared File Systems service (manila) API has been switched to running behind httpd, and it now supports configuring TLS options.

Upgrade Notes

  • Cinder’s NFS driver does not support snapshots unless the feature is explicitly enabled (this policy was chosen to ensure compatibility with very old versions of libvirt). The CinderNfsSnapshotSupport default value is True, and so the new default behavior enables NFS snapshots. This change is safe because it just enables a capability (i.e. snapshots) that other cinder drivers generally provide.

  • Per-service config_settings should now use hiera interpolation to set the bind IP for services, e.g. “%{hiera(‘internal_api’)}”, whereas prior to this release we replaced e.g. internal_api with the IP address internally. The network name can still be derived from the ServiceNetMap - all the in-tree templates have been converted to the new format, but any out-of-tree templates may require similar adjustment.

  • Keystone catalog entries for Cinder’s v1 API are no longer created, but existing entries will not be automatically deleted.

Deprecation Notes

  • The only OVN tunnel encap type that we are supporting in OVN is Geneve, and this is set by default in ovn puppet, so there is no need to set it in TripleO.

Bug Fixes

  • Fixes an issue where deployment would fail if a non-default name_lower is used in network data for one of the networks: External, InternalApi or StorageMgmt. (See bug: 1830852.)

  • Fixed service auth URL in Octavia to use the Keystone v3 internal endpoint.

  • It is now possible for temporary containers inside THT to test if they are being run as part of a minor update by checking if the TRIPLEO_MINOR_UPDATE environment variable is set to ‘true’ (said containers need to export it to the container explicitly), see <service>_restart_bundles for examples.

  • When setting up TLS everywhere, some deployers may not have their FreeIPA server in the ctlplane, causing the ipaclient registration to fail. We move this registration to host-prep tasks and invoke it using ansible. At this point, all networks should be set up and the FreeIPA server should be accessible.

  • With a large number of OSDs, where each OSD needs a connection, the default nofile (1024) of nova_compute is too small. This changes the default DockerNovaComputeUlimit to 131072, the same value used for cinder.

  • Change-Id: I1a159a7c2ac286373df2b7c566426b37b7734961 moved the discovery to run on a single compute host to avoid racing on simultaneous nova-manage commands. This change makes sure we run the discovery on every deploy run, which is required for scale-up events.

  • If the nova-manage command was triggered on a host for the first time as root (usually manual runs), nova-manage.log gets created as the root user. On overcloud deploy runs the nova-manage command is run as the nova user. In this situation the overcloud deploy fails as the nova user cannot write to nova-manage.log. With this change we chown the log files on every overcloud deploy to fix the nova-manage.log file permissions.

  • The keystone service and endpoint for Cinder’s API v1 are no longer created. Cinder removed support for its v1 API in Queens.

  • Historically, if a puppet definition for a pacemaker resource changed, puppet would not update the resource. We now enable the updating of pacemaker resources by default, the main use case being restarting a bundle when a bind mount gets added. Puppet will wait for the resource to completely restart before proceeding with the deploy.

Other Notes

  • The common tasks in deploy-steps-tasks.yaml that are common to all roles are now tagged with one of: host_config, container_config, container_config_tasks, container_config_scripts, or container_startup_configs.

  • The step plays in deploy-steps.j2 (which generates the deploy_steps_tasks.yaml playbook) are now tagged with step[1-5] so that they can run individually if needed.


New Features

  • Allow HAProxy logs to be output to a dedicated file

  • Adds new HAProxySyslogFacility param


New Features

  • Added support for the containerized networking-ansible ML2 plugin.

  • Added support for networking-ansible ML2 plugin.

  • Add OctaviaEventStreamDriver parameter to specify which driver to use for syncing Octavia and Neutron LBaaS databases.

  • Add a new TunedCustomProfile parameter which may contain a string in INI format describing a custom tuned profile. Also provide a new environment file for users of hyperconverged Ceph deployments using the Ceph filestore storage backend. The tuned profile is based on heavy I/O load testing. The provided environment file creates /etc/tuned/ceph-filestore-osd-hci/tuned.conf and sets this tuned profile to be active. Not intended for use with Ceph bluestore.

Known Issues

  • Fixes a misnaming of the service in the firewall rule for the Octavia Health Manager service.

Upgrade Notes

  • Deployers that used a resource_registry override in their environment to add networks to roles without also using a custom roles data file must create a custom roles data file, add the additional network(s) to it, and use this file when upgrading.

    Previously it was possible to add additional networks to a role without using a custom role by overriding the resource registry, for example:

    OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml


    Since resources are no longer added to the plan unless the network is specified in the role, the resource_registry override alone is no longer sufficient.

Critical Issues

  • Networks not specified for roles in roles data (roles_data.yaml) no longer have Heat resources created. It is now mandatory that custom roles are used when non-default networks are used for a role.

    Previously it was possible to add additional networks to a role without using a custom role by overriding the resource registry, for example:

    OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml


    The resource_registry override was the only requirement prior to the introduction of Composable Networks in the Pike release.

    Since Pike a custom role would ideally be used when adding networks to roles, but documentation and other guides may not have been properly updated and only mention the resource_registry override.

Bug Fixes

  • Fixed an issue where if Octavia API or Glance API were deployed away from the controller node with internal TLS, the service principals wouldn’t be created.

  • In other sections we already use the internal endpoints for authentication URLs. With this change the auth_uri in the neutron section is moved from KeystoneV3Admin to KeystoneV3Internal.

  • With tls-everywhere enabled, connecting to the keystone endpoint fails to retrieve the URL for the placement endpoint because the certificate cannot be verified. While verification was already disabled for the later check of the placement endpoint, it was not disabled for communication with keystone. This change disables certificate verification for communication with keystone.

  • CephOSD/Compute nodes crash under memory pressure unless custom tuned profile is used (bug 1800232).


New Features

  • Add support for ODL deployment on IPv6 networks.

  • Added Dell EMC SC multipath support. This change adds support for cinder::backend::dellsc_iscsi::use_multipath_for_image_xfer and adds a new parameter, CinderDellScMultipathXfer.

Bug Fixes

  • The deployed-server get-occ-config.sh script now allows $SSH_OPTIONS to be overridden.


New Features

  • Allow plugins that support it to create VLAN transparent networks. The vlan_transparent option determines whether such plugins create VLAN transparent networks or not.

  • Add ‘neutron::plugins::ml2::physical_network_mtus’ as NeutronML2PhysicalNetworkMtus in the heat template to allow setting the MTU in the ml2 plugin

  • Adds the possibility to set ‘neutron::agents::ml2::ovs::tunnel_csum’ via NeutronOVSTunnelCsum in the heat template. This param sets or unsets the tunnel header checksum on outgoing IP packets carrying GRE/VXLAN tunnels in the ovs agent.

  • Add provision to set java options like heap size configurations in ODL.

  • Add support for the libvirt volume_use_multipath option, which enables multipath connections for iSCSI or FC volumes. Volumes can then be connected in libvirt as multipath devices. Adds the new parameter “NovaLibvirtVolumeUseMultipath”.

Bug Fixes

  • Launchpad bug 1788337 that affected the overcloud deployment with TLS Everywhere has been fixed. The manila bootstrap container no longer fails to connect securely to the database.

  • An empty /var/lib/config-data/puppet-generated/opendaylight/opt/opendaylight/etc/opendaylight/karaf directory on the host empties /opt/opendaylight/etc/opendaylight/karaf inside the ODL container because of the mount. This leads to deployment failure on redeploy. Delete the empty karaf directory on the host before redeploying.

  • Ping the default gateways before the controllers in the validation script. In certain situations when using IPv6 it’s necessary to establish connectivity to the router before other hosts.

Other Notes

  • A new parameter called ‘RabbitAdditionalErlArgs’, which specifies additional arguments to the Erlang VM, has been added. It defaults to “’+sbwt none’” (http://erlang.org/doc/man/erl.html#+sbwt). This threshold determines how long schedulers busy-wait when running out of work before going to sleep. By setting it to none we let the erlang threads go to sleep right away when they do not have any work to do.


New Features

  • Add cleanup services for neutron bridges that work with container based deployments.

  • Introduce NovaLibvirtRxQueueSize and NovaLibvirtTxQueueSize to set virtio-net queue sizes as a role parameter. Valid values are 256, 512 and 1024
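
    For example (the role-specific <Role>Parameters wrapper is an assumption; only 256, 512 and 1024 are valid values):

    ```yaml
    parameter_defaults:
      ComputeParameters:
        NovaLibvirtRxQueueSize: 1024
        NovaLibvirtTxQueueSize: 1024
    ```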

Deprecation Notes

  • The following parameters are deprecated when deploying Manila with CephFS:

    ManilaCephFSNativeShareBackendName, use ManilaCephFSShareBackendName instead
    ManilaCephFSNativeBackendName, use ManilaCephFSBackendName instead
    ManilaCephFSNativeCephFSAuthId, use ManilaCephFSCephFSAuthId instead
    ManilaCephFSNativeDriverHandlesShareServers, use ManilaCephFSDriverHandlesShareServers instead
    ManilaCephFSNativeCephFSEnableSnapshots, use ManilaCephFSCephFSEnableSnapshots instead
    ManilaCephFSNativeCephFSClusterName, matches the CephClusterName parameter
    ManilaCephFSNativeCephFSConfPath, autogenerated from CephClusterName

  • The environment file ovs-dpdk-permissions.yaml is deprecated and the mandatory parameter VhostuserSocketGroup is added to the roles data file of the missing OvS-DPDK role. Using this environment file is redundant and it will be removed in the S release.

Bug Fixes

  • Previously the default tuned profile throughput-performance was set on the computes. Now virtual-host is set as the default for the Compute roles. For the NFV compute use case the profile is cpu-partitioning, for RT it is realtime-virtual-host, and for HCI it is throughput-performance.

  • Previously, when blacklisting all servers of the primary role, the stack would fail since the bootstrap server id was empty. The value is now defaulted in case all primary role servers are blacklisted.

  • Instance create fails due to a wrong default SELinux context with NFS

    With NovaNfsEnabled, instance create fails due to a wrong default SELinux context. The default in THT is set to nova_var_lib_t in Ie4fe217bd119b638f42c682d21572547f02f17b2, while system_u:object_r:nfs_t:s0 should have access. The virt_use_nfs boolean, which is turned on by openstack-selinux, should cover this use case.

    This changes the default to context=system_u:object_r:nfs_t:s0

  • When tls-everywhere is configured we have TLS connections from client -> haproxy and novncproxy -> vnc server (instance), but the connection from haproxy to the nova novnc proxy was not encrypted. Now we request a certificate and configure haproxy and the novnc proxy so that this remaining part of a vnc connection is encrypted as well.

  • manila-backend-vnx.yaml:
    1. Remove ManilaVNXServerMetaPool since meta_pool is not used by Manila VNX.

    2. Add ManilaVNXServerContainer.

  • cinder-dellemc-vnx-config.yaml:
    1. Remove the default value of CinderDellEMCVNXStorageSecurityFileDir since it is not a mandatory option for the Cinder VNX driver.

Other Notes

  • BlacklistedHostnames has been added as a stack output. The value of the output is a list of blacklisted hostnames.


New Features

  • Adds docker service for Neutron SFC.

  • The Octavia amphora image name is now derived from the filename by default so the OctaviaAmphoraImageName now behaves as an override if set to a non-default value.

  • The Octavia amphora image file name default value is now an empty string resulting in a distribution specific default location being used. The OctaviaAmphoraImageFilename parameter now behaves as an override if set to a non-default value.

  • Allow NFS configuration of the storage backend for Nova. This way the instance files will be stored on shared NFS storage.

Upgrade Notes

  • manila containerization was experimental in Pike and we had both bare metal and docker versions of some of the manila environment files. Now the docker environment files are fully supported so we keep them using the standard manila environment file names, without any ‘docker’ in their name.

  • Upgrading DVR deployments may require customization of the Compute role if they depend on the overcloud’s external API network for floating IP connectivity. If necessary, please add “External” to the list of networks for the Compute role in roles_data.yaml before upgrading.

Bug Fixes

  • The name_lower field in network_data.yaml can be used to define custom network names, but the ServiceNetMap must be updated with the new names in all places. This change adds a new field to network_data.yaml, service_net_map_replace, which should be set to the original name_lower so that ServiceNetMap will be automatically updated.
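
    A sketch of a renamed network entry in network_data.yaml (the names and subnet are illustrative):

    ```yaml
    - name: InternalApi
      name_lower: internal_api_cloud_1        # custom network name
      service_net_map_replace: internal_api   # original name_lower, keeps ServiceNetMap working
      vip: true
      ip_subnet: '172.17.0.0/24'
    ```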

  • This fixes an issue with the yaml-nic-config-2-script.py script that converts old-style nic config files to new-style. It now handles blank lines followed by a comment line.

  • With https://review.openstack.org/#/c/561784 we changed the default migration port range to ‘61152-61215’. nova::migration::qemu::configure_qemu needs to be set to true so that the config gets applied via puppet-nova.

  • The nova statedir ownership logic has been reimplemented to target only the files/directories controlled by nova. Resolves VM I/O errors when using an NFS backend (bug 1778465).

  • Moving to file logging for ODL, as docker logs sometimes miss older logs due to journal rollover.

  • Add support for the SshKnownHostsDeployment resources to config-download. Since the deployment resources relied on Heat outputs, they were not supported with the default handling from tripleo-common that relies on the group_vars mechanism. The templates have been refactored to add the known hosts entries as global_vars to deploy_steps_playbook.yaml, and then include the new tripleo-ssh-known-hosts role from tripleo-common to apply the same configuration that the Heat deployment did.

Other Notes

  • The default docker0 bridge should normally be given a value that does not conflict with any of the existing networks’ CIDR ranges.

    If there is a conflict with the default value, users can alter the docker service startup --bip option via DockerNetworkOptions.

  • Removed environment files to deploy OVN db servers in non-HA mode for OVN deployments, as it is not recommended. There is no support for upgrading an existing OVN deployment from non-HA to HA; a fresh deployment is recommended. To deploy OVN with DVR support, use environment/services/neutron-ovn-dvr-ha.yaml, otherwise use environment/services/neutron-ovn-ha.yaml


New Features

  • Adds support for configuring the cinder-backup service with an NFS backend.

  • Add ability to specify a fixed IP for the provisioning control plane (ctlplane) network. This works similarly to the existing fixed IPs for isolated networks, by including an environment file which includes an IP for each node in each role that should use a fixed IP. An example environment file is included in environments/ips-from-pool-ctlplane.yaml.
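
    Following the pattern of environments/ips-from-pool-ctlplane.yaml, a sketch for three Controller nodes (addresses illustrative):

    ```yaml
    parameter_defaults:
      ControllerIPs:
        ctlplane:
          - 192.168.24.251
          - 192.168.24.252
          - 192.168.24.253
    ```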

  • Provides the option to define a set of DNS servers which will be configured in the ‘ovn’ section of etc/neutron/plugins/ml2_conf.ini. These DNS servers will be used as DNS forwarders for the VMs if a neutron subnet is not defined with ‘dns_nameservers’ option.

  • Adds network_plugin_ipv6_enabled, emc_ssl_cert_verify and emc_ssl_cert_path options for Manila VNX driver.

Upgrade Notes

  • Containerized memcached logs to stdout/stderr instead of a file. Its logs may be picked up via journald.

Deprecation Notes

  • The Debug parameter does not activate Memcached debug anymore. You have to pass MemcachedDebug explicitly.

Bug Fixes

  • Fix a typo in the manila-share pacemaker template which was causing failures on upgrades and updates.

  • Fixes a minor-update issue for the ovn-dbs pacemaker bundle resource by tagging the docker image used for the ovn-dbs pacemaker resource with pcmklatest and adding required missing tasks in the “update_tasks” and “upgrade_tasks” sections of the service file.


New Features

  • Makes the collectd deployment output metrics data by default to the Gnocchi instance running on overcloud nodes.

  • Adds the possibility to override the default polling interval for collectd and sets the default value to 120 seconds, because the previous default (10s) was too aggressive.

  • Add support for Neutron LBaaSV2 service plugin in a containerized deployment.

  • Allow users to specify SSH name and public key to add to Octavia amphorae.

  • Adds network_plugin_ipv6_enabled, emc_ssl_cert_verify and emc_ssl_cert_path options for Manila Unity driver.

Upgrade Notes

  • The ‘LogrotatePurgeAfterDays’ parameter enforces cleanup of logs that have exceeded their lifetime (14 days by default) in the /var/log/containers directory of bare metal overcloud hosts, including upgrade cases, when leftover logs may remain on the host systems.

Security Issues

  • New heat parameters for containerized services ‘LogrotateMaxsize’, ‘LogrotateRotationInterval’, ‘LogrotateRotate’ and ‘LogrotatePurgeAfterDays’ allow customizing size- and time-based rules for rotating the containerized services’ logs. The time-based rules take precedence over the size-based ones.

Bug Fixes

  • Previously, get-occ-config.sh could configure nodes out of order when deploying with more than 10 nodes. The script has been updated to properly sort the node resource names by first converting the names to a number.

  • Default Octavia SSH public key to ‘default’ keypair from undercloud.

  • The nova/neutron/ceilometer host parameter is now explicitly set to the same value that is written to /etc/hosts. On a correctly configured deployment they should already be identical. However, if the hostname or domain name is altered (e.g. via DHCP), the hostname is unlikely to resolve to the correct IP address for live migration. Related bug: https://bugs.launchpad.net/tripleo/+bug/1758034

  • Set live_migration_inbound_addr for the ssh transport.

    Previously this was only set when TLS was enabled, which meant that with the ssh transport we could not control the network used, and were relying on DNS or the hosts file being correct, which is not guaranteed (especially with DNS).

  • By default, libvirtd uses ports 49152 to 49215 for live migration, as specified in qemu.conf. This range is a subset of the ephemeral ports (32768 to 61000) used by many Linux kernels for outgoing TCP sockets, so live migration could fail if no port in the specified range was available. The port range has been moved out of the ephemeral range so it is used only for live migration.

Other Notes

  • Add “segments” service plugin to the default list of neutron service plugins.


New Features

  • Add parameters to configure ulimits for containers. These parameters can be used to configure ulimits per container as required by the deployment. The following parameters are added for neutron, nova and cinder:

    - DockerNeutronDHCPAgentUlimit defaults to nofile=1024
    - DockerNeutronL3AgentUlimit defaults to nofile=1024
    - DockerOpenvswitchUlimit defaults to nofile=1024
    - DockerNovaComputeUlimit defaults to nofile=1024
    - DockerCinderVolumeUlimit defaults to nofile=131072
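
    As a sketch, raising the limits in an environment file might look as follows (the list form and the values shown are assumptions for illustration):

    ```yaml
    # Hypothetical override raising the open-files limit for the
    # neutron DHCP agent and cinder-volume containers.
    parameter_defaults:
      DockerNeutronDHCPAgentUlimit: ['nofile=2048']
      DockerCinderVolumeUlimit: ['nofile=262144']
    ```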

Bug Fixes

  • When using get-occ-config.sh with a role using a count greater than 1, the script will now configure all nodes that are of that role type instead of exiting after only configuring the first.

  • Fixes Neutron certificate and key for TLS deployments to have the correct user/group IDs.

  • Delete ODL data folder while updating/upgrading ODL.

  • The default values for the PcsdPassword and PacemakerRemoteAuthkey parameters have been removed, as they did not result in a functioning pacemaker installation. These values are instead generated by tripleo-common, and in the cases where they are not (direct API), we want to fail explicitly if they are not provided.

  • {{role.name}}ExtraConfig will now be honored even when using deprecated params in roles_data.yaml. Previously, its value was ignored and never used even though it is defined as a valid parameter in the rendered template.


New Features

  • Add support for the Dell EMC XtremIO iSCSI cinder driver.

New Features

  • Containers are now the default way of deploying. There is still a way to deploy the baremetal services in environments/baremetal-services.yaml, but this is expected to eventually disappear.

  • The user can now use a custom script to switch repos during the fast forward upgrade. Set FastForwardRepoType to custom-script and set FastForwardCustomRepoScriptContent to a string containing a shell script. That script will be executed on each node and given the upstream name of the release as the first argument (ocata, pike, queens, in that order). Here is an example that describes its interface.

    case $1 in
        ocata)
            curl -o /etc/yum.repos.d/ocata.repo http://somewhere.com/my-Ocata.repo
            yum clean metadata
            ;;
        pike)
            curl -o /etc/yum.repos.d/pike.repo http://somewhere.com/my-Pike.repo
            yum clean metadata
            ;;
        queens)
            curl -o /etc/yum.repos.d/queens.repo http://somewhere.com/my-Queens.repo
            yum clean metadata
            ;;
        *)
            echo "unknown release $1" >&2
            exit 1
            ;;
    esac
  • Until now, the ovs service file and ovs-ctl command files were patched to allow ovs to run with the qemu group. To remove these workarounds, a new group hugetlbfs is created and shared between ovs and qemu. The vhost-user socket directory is changed from “/var/run/openvswitch” to “/var/lib/vhost_sockets” to avoid modifying directory access used by packaged scripts. Use the env file ovs-dpdk-permissions.yaml while deploying.

  • Minor update ODL steps are added. An ODL minor update (within the same ODL release) can follow 2 different workflows, called level 1 and level 2. Level 1 is simple: stop, update and start ODL. Level 2 is complex and involves yang model changes, which require wiping the DB and re-syncing to repopulate the data. The steps involved in a level 2 update are:

    1. Block OVS instances from connecting to ODL
    2. Set the ODL upgrade flag to True
    3. Start ODL
    4. Start a Neutron re-sync and wait for it to finish
    5. Delete OVS groups and ports
    6. Stop OVS
    7. Unblock OVS ports
    8. Start OVS
    9. Unset the ODL upgrade flag

    To perform a level 2 update, pass “-e environments/services-docker/update-odl.yaml” along with the other env files to the update command.

Upgrade Notes

  • Environment files originally referenced from environments/services-docker should be altered to the environments/services paths. If some of the deployed baremetal services need to be retained as non-containerized, update their references to environments/services-baremetal instead of environments/services.

    Overcloud upgrades to baremetal (non-containerized) services, or mixed services, are no longer tested or verified.

  • pre_upgrade_rolling_tasks are added for use by the composable service templates. The resulting pre_upgrade_rolling_steps_playbook is intended to be run at the beginning of major update workflow (before running the upgrade_steps_playbook). As the name suggests, the tasks in this playbook will be executed in a node-by-node rolling fashion.

Security Issues

  • Restrict memcached service to TCP and internal_api network (CVE-2018-1000115).

Bug Fixes

  • Fixes OpenDaylight container service not starting due to missing config files in /opt/opendaylight/etc directory.

  • Fixes failure to create Neutron certificates for roles which do not contain Neutron DHCP agent, but include other Neutron agents (i.e. default Compute role).

Other Notes

  • Add check for nic config files using the old style format (os-apply-config) and list the script that can be used to convert the file.



From the Queens release the deployment of Ceph is only supported in containers.

New Features

  • This exposes the GnocchiStorageSwiftEndpointType parameter, which sets the interface type that gnocchi will use to get the swift endpoint.

  • Configure ODL to log karaf logs to a file for non-containerized deployments and to the console for containerized deployments.

  • Add neutron-plugin-ml2-cisco-vts as a Neutron Core service template in support of the Cisco VTS controller ML2 plugin.

  • Add support for Mistral event engine.

  • Add Mistral to the provided controller roles.

  • Add support for deploying Service Function Chaining api service with neutron networking-sfc.

  • Added support for providing Octavia certificate data through heat parameters.

  • Add configuration of octavia’s ‘service_auth’ parameters.

  • Manila now supports the CephNFS back end. Deploy using the ControllerStorageNFS role and ‘-n network_data_ganesha.yaml’, along with manila-cephfsganesha-config-docker.yaml.
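
    As a sketch of how these pieces combine (the roles file name is hypothetical, and the exact environment file path may differ in your templates checkout):

    ```shell
    # Sketch only: roles_data.yaml must already include the
    # ControllerStorageNFS role; other required -e files are elided.
    openstack overcloud deploy --templates \
      -r roles_data.yaml \
      -n network_data_ganesha.yaml \
      -e environments/manila-cephfsganesha-config-docker.yaml
    ```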

  • Adds ability to configure metadata agent for networking-ovn based deployments.

  • Add neutron-plugin-ml2-cisco-vts as a dockerized Neutron Core service template in support of the Cisco VTS controller ML2 plugin.

  • Introduces a puppet service to configure AIDE intrusion detection. This service initializes the database and copies the new database to the active name. It also sets up a cron job when the AideEmail parameter is populated; otherwise reports are written to /var/log/aide/.

    AIDE rules can be supplied as a hash; should the rules ever change, the service will apply the new rules and re-initialize a fresh integrity database.

  • This patch allows attaching optional volumes to, and setting optional environment variables in, the neutron-api, heat-api and nova-compute containers. This makes it easier to plug plugins into those containers.

  • Add KernelIpForward configuration to enable/disable the net.ipv4.ip_forward configuration.

  • Configure OpenDaylight SNAT to use conntrack mechanism with OVS and controller based mechanism with OVS-DPDK.

  • The Barbican API was added to the containerized overcloud deployment.

  • With the move to containers, Ceph OSDs may be combined with other Ceph services, and dedicated Ceph monitors on controllers may be used less. Popular Ceph roles include OSDs alongside file and object services, or run all Ceph services on the same nodes. This pattern also applies to hyper-converged (HCI) roles. The following pre-composed roles have been added to make it easier to deploy in this pattern:

    - CephAll: Standalone Storage Full Role
    - CephFile: Standalone Scale-out File Role
    - CephObject: Standalone Scale-out Object Role
    - HciCephAll: HCI Full Stack Role
    - HciCephFile: HCI Scale-out File Role
    - HciCephObject: HCI Scale-out Object Role
    - HciCephMon: HCI Scale-out Block Full Role
    - ControllerNoCeph: OpenStack Controller without any Ceph Services

  • Support added for per-service deploy_steps_tasks which are run every step on the overcloud nodes.

  • Default values for OctaviaFlavorProperties have been added and OctaviaManageNovaFlavor is now enabled by default so a usable OpenStack flavor will be available for creating Octavia load balancers immediately after deployment.

  • Service templates now support an external_post_deploy_tasks interface, this works in the same way as external_deploy_tasks but runs after all other deploy steps have completed.

  • The KeystoneNotificationTopics parameter was introduced. This takes a list which will configure extra notification topics, which end up as queues in the message broker. This is useful for when keystone notifications need to be integrated with third party software. Note that enabling telemetry will by default make keystone emit notifications to the ‘notifications’ topic, but this parameter can enable extra topics still.

  • Support for Instance HA is added. This configures the control plane to fence a compute node that dies, then run a nova force-down and finally an evacuation of the VMs that were running on the failed node.

  • Add support for troubleshooting network issues using Skydive.

  • Encryption of the internal network’s communications through IPSec has been added. To enable you need to add the OS::TripleO::Services::Ipsec service to your roles if it’s not there already. And you need to add the file environments/ipsec.yaml to your overcloud deploy.
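
    A minimal sketch of enabling this, assuming OS::TripleO::Services::Ipsec is already present in your roles data (the other environment files for your deployment are elided):

    ```shell
    # Sketch: add the IPSec environment file to the deploy command.
    openstack overcloud deploy --templates \
      -e environments/ipsec.yaml
    ```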

  • The IpsecVars parameter was added in order to configure the parameters in the tripleo-ipsec ansible role that configures IPSec tunnels if they’re enabled.

  • Add support for Dell EMC Isilon manila driver

  • Allow to easily personalize Kernel modules and sysctl settings with two new parameters. ExtraKernelModules and ExtraSysctlSettings are dictionaries that will take precedence over the defaults settings provided in the composable service.

  • Allow to configure extra Kernel modules and extra sysctl settings per role and not only global to the whole deployment. The two parameters that can be role-specific are ExtraKernelModules and ExtraSysctlSettings.
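
    A sketch of both the global and role-specific forms described in the two notes above (the module and sysctl names are illustrative assumptions, not defaults):

    ```yaml
    # Illustrative values only.
    parameter_defaults:
      ExtraKernelModules:
        nf_conntrack_proto_sctp: {}
      ExtraSysctlSettings:
        net.ipv4.tcp_keepalive_time:
          value: 600
      # Role-specific override for Compute nodes, using the
      # <RoleName>Parameters mechanism:
      ComputeParameters:
        ExtraSysctlSettings:
          vm.swappiness:
            value: 10
    ```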

  • Mistral is now deployed with Keystone v3 options (authtoken).

  • The memcached service now reacts to the Debug flag, which will make its logs verbose. Also, the MemcachedDebug flag was added, which will just add this for the individual service.

  • When containerizing mistral-executor, we need to mount /var/lib/mistral so our operators can get the config-download logs when the undercloud is containerized and config-download is used to deploy the overcloud.

  • Add new CinderRbdExtraPools Heat parameter, which specifies a list of Ceph pools for use with RBD backends for Cinder. An extra Cinder RBD backend driver is created for each pool in the list. This is in addition to the standard RBD backend driver associated with the CinderRbdPoolName. The new parameter is optional, and defaults to an empty list. All of the pools are associated with a single Ceph cluster.
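
    As a sketch (pool names are hypothetical), each listed pool gets its own Cinder RBD backend in addition to the one for CinderRbdPoolName:

    ```yaml
    # Hypothetical extra Ceph pools for additional Cinder RBD backends.
    parameter_defaults:
      CinderRbdExtraPools:
        - fast-pool
        - archive-pool
    ```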

  • Added new real-time roles for NFV (ComputeOvsDpdkRT and ComputeSriovRT).

  • Add MinPoll and MaxPoll options to NTP module. These options specify the minimum and maximum poll intervals for NTP messages, in seconds to the power of two. The maximum poll interval defaults to 10 (1,024 s), but can be increased by the MaxPoll option to an upper limit of 17 (36.4 h). The minimum poll interval defaults to 6 (64 s), but can be decreased by the MinPoll option to a lower limit of 4 (16 s).
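
    For example, to poll at least every 64 seconds and at most every 256 seconds (values are exponents of two, per the note above; shown here as a sketch of an environment file):

    ```yaml
    # MinPoll 6 -> 2^6 = 64 s; MaxPoll 8 -> 2^8 = 256 s.
    parameter_defaults:
      MinPoll: 6
      MaxPoll: 8
    ```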

  • Enables deploying OpenDaylight with TLS. Open vSwitch is also configured to communicate with OpenDaylight via TLS.

  • Add support for ODL OVS Hardware Offload. This feature requires Linux Kernel >= 4.13 Open vSwitch >= 2.8 iproute >= 4.12.

  • An endpoint is added for ODL. Public access is not allowed for ODL, so a public endpoint is not added.

  • Support containerized ovn-controller

  • Support containerized OVN Dbs without HA

  • Support containerized OVN DBs with HA

  • Add support for OVS Hardware Offload. This feature requires Linux Kernel >= 4.13 Open vSwitch >= 2.8 iproute >= 4.12.

  • Add the ability to deploy PTP. Precision Time Protocol (PTP) is a protocol used to synchronize clocks throughout a compute network. With hardware timestamping support on the host, PTP can achieve clock accuracy in the sub-microsecond range. PTP can be used as an alternative to NTP for high precision clock calibration.

  • A new parameter, RabbitNetTickTime, allows tuning the Erlang net_ticktime parameter for rabbitmq-server. The default value is 15 seconds. This replaces previous tunings in the RabbitMQ environment file which set socket options forcing TCP_USER_TIMEOUT to 15 seconds.

  • Neutron no longer accesses octavia through a neutron service plugin.

  • Introduce a new service to configure RHSM with Ansible, by calling ansible-role-redhat-subscription in host_prep_tasks.

  • When using RHSM proxy, TripleO will now verify that the proxy can be reached otherwise we’ll stop early and not try to subscribe nodes.

  • The parameters KeystoneChangePasswordUponFirstUse, KeystoneDisableUserAccountDaysInactive, KeystoneLockoutDuration, KeystoneLockoutFailureAttempts, KeystoneMinimumPasswordAge, KeystonePasswordExpiresDays, KeystonePasswordRegex, KeystonePasswordRegexDescription, KeystoneUniqueLastPasswordCount were introduced. They all correspond to keystone configuration options that belong to the security_compliance group.

  • A new role ComputeSriov has been added to the roles definition to create a compute with SR-IOV capabilities. The SR-IOV services have been removed from the default Compute role, so that a cluster can have general Compute along with ComputeSriov roles.

  • Add support for Dell EMC Unity Manila driver

  • Add support for Dell EMC VMAX iSCSI cinder driver

  • Add support for Dell EMC VMAX Manila driver

  • If TLS on the internal network is enabled, the nova-novnc to libvirt vnc transport defaults to using TLS. This can be changed by setting the UseTLSTransportForVnc parameter, which is true by default. A dedicated IPA sub-CA can be specified by the LibvirtVncCACert parameter. By default the main IPA CA will be used.

  • Add support for Dell EMC VNX cinder driver

  • Add support for Dell EMC VNX Manila driver

Upgrade Notes

  • Adds a new UpgradeRemoveUnusedPackages parameter (default False) and some service upgrade_tasks that use this parameter to remove any unused packages. “Unused” means packages for services that are being stopped and disabled from starting on boot (because they are being containerized). Note that ignore_errors is set on all the package-removal ansible tasks, so any issue removing a given package will not fail the upgrade workflow. For clarity, setting UpgradeRemoveUnusedPackages to True in your deployment environment file(s) will result in the REMOVAL of packages for stopped and disabled services during the upgrade.
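
    A sketch of opting in, in an environment file passed to the upgrade:

    ```yaml
    # Opt in to package removal during the upgrade (default is False).
    parameter_defaults:
      UpgradeRemoveUnusedPackages: true
    ```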

  • force_config_drive is now set to False in Nova. Instances will now fetch their metadata from the metadata service instead of from the config drive.

  • This adds post_upgrade_tasks, ansible tasks that can be added to any service manifest (currently, pacemaker/cinder-volume for bug 1706951).

    These are similar to the existing upgrade_tasks in their format, however they will be executed after the docker/puppet config. So the order is upgrade_tasks, deployment steps (docker/puppet), then post_upgrade_tasks.

    Also like the upgrade_tasks these are serialised and you can use ‘tags’ with ‘step0’ to ‘step6’ (more can be added if needed).

  • The format to use for the CephPools parameter needs to be updated into the form expected by ceph-ansible. For example, for a new pool named mypool it should change from:

        { “mypool”: { “size”: 3, “pg_num”: 128, “pgp_num”: 128 } }

    into:

        [ { “name”: “mypool”, “pg_num”: 128, “rule_name”: “” } ]

    The first is a map where each key is a pool name and its value the pool properties; the second is a list where each item describes all properties of a pool, including its name.
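
    In an environment file, the new list form might look like this (pool name and properties are the example values from the note):

    ```yaml
    # New ceph-ansible-style CephPools format: a list of pool
    # definitions, each including its own name.
    parameter_defaults:
      CephPools:
        - name: mypool
          pg_num: 128
          rule_name: ""
    ```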

  • Changed the default address of the docker0 bridge to be in the last class B private network, to stop conflicting with the default network range for InternalApiNetCidr. The docker0 bridge is normally unused in TripleO deployments.

  • Containerized services’ logs can be found under updated paths. Pacemaker-managed resources write logs to /var/log/pacemaker/bundles/*. Docker-daemon-managed openstack services bind-mount their log files to the /var/log/containers/<foo>/* sub-directories. Services running under Apache2 WSGI use the /var/log/containers/httpd/<foo-api>/* destinations. Additional tools or commands that log to syslog end up placing log records into the host’s journal and /var/log/messages.

  • The configuration-management related directories managed by the tripleo deployment tools and bind-mounted as docker volumes now use the :z flag, docker’s equivalent of chcon -Rt svirt_sandbox_file_t -l s0. This makes those directories available to all containers on the host, in shared mode: /var/lib/tripleo-config, /var/lib/docker-puppet, /var/lib/kolla/config, /etc/puppet, /usr/share/openstack-puppet/modules/, /var/lib/config-data.

  • The Heat API Cloudwatch API is deprecated in Pike and so it is removed by default during the Ocata to Pike upgrade. If you wish to keep this service then you should use the environments/heat-api-cloudwatch.yaml environment file in the tripleo-heat-templates during the upgrade (note that this is migrated to running under httpd, if you do decide to keep this service on Pike).

  • Each service template may optionally define a fast_forward_upgrade_tasks key, which is a list of ansible tasks to be performed during the fast-forward upgrade process. As with Upgrade steps each task is associated to a particular step provided as a variable and used along with a release variable by a basic conditional that determines when the task should run.
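
    A hypothetical fragment of a service template’s role_data output showing such a task, gated on the step and release variables described above (“example-service” is a placeholder):

    ```yaml
    # Sketch of a fast_forward_upgrade_tasks entry in a service
    # template; step and release are provided by the FFU workflow.
    fast_forward_upgrade_tasks:
      - name: Stop the example service
        service: name=example-service state=stopped
        when:
          - step|int == 1
          - release == 'ocata'
    ```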

  • Add ODL upgradability. The steps of the upgrade are as follows:

    1. Block OVS instances from connecting to ODL (done in upgrade_tasks)
    2. Set the ODL upgrade flag to True (done in upgrade_tasks)
    3. Start ODL (done via docker config step 1)
    4. Start a Neutron re-sync (triggered by starting the Neutron server container in step 4 of docker config)
    5. Delete OVS groups and ports
    6. Stop OVS
    7. Unblock OVS ports
    8. Start OVS
    9. Unset the ODL upgrade flag

    Steps 5 to 9 are done in post_upgrade_steps.

  • The Heat API Cloudwatch service has been removed from heat in Queens and is no longer available for deployment.

  • When deploying with RHSM, sat-tools 6.2 will be installed instead of 6.1. The new version is supported by RHEL 7.4 and provides the katello-agent package.

  • If an existing cluster has enabled SR-IOV with the Compute role, then the service OS::TripleO::Services::NeutronSriovAgent has to be added to the Compute role in roles_data.yaml. If the existing cluster created SR-IOV as a custom role (other than Compute), then this change has no effect on it.

  • Upgrade Heat templates version to queens in order to match Heat requirements.

  • Since we are now running the ansible playbooks with a step variable rather than via Heat SoftwareConfig/Deployments, the per-service upgrade_tasks need to use “when: step|int == [0-6]” rather than “tags: step[0-6]” to signal the step at which a given task is to be invoked. This also applies to the update_tasks used for the minor update. This also deprecates the upgrade_batch_tasks.
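
    A hypothetical upgrade_tasks entry in the new style (“example-service” is a placeholder; previously this task would have carried “tags: step1” instead of the when condition):

    ```yaml
    # Sketch: gate the task on the step variable supplied by the
    # upgrade playbook rather than on an ansible tag.
    upgrade_tasks:
      - name: Stop example service
        service: name=example-service state=stopped
        when: step|int == 1
    ```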

Deprecation Notes

  • This patch removes Contrail templates from tripleo as a preparation for the new microservice based templates.

  • The pre-existing environment files which previously enabled the deployment of Ceph on baremetal, via puppet-ceph, will all be migrated to deploy Ceph in containers, via ceph-ansible.

  • The CeilometerWorkers parameter, which is unused, is now deprecated.

  • The Heat API Cloudwatch API is deprecated in Pike and so it is now not deployed by default. You can override this behaviour with the environments/heat-api-cloudwatch.yaml environment file in the tripleo-heat-templates.

  • Deprecates the OpenDaylightConnectionProtocol heat parameter. This parameter is now decided based on using TLS or non-TLS deployments.

  • The “OpenDaylightPort” parameter is deprecated and will be removed in the R release.

Security Issues

  • The configuration-management related directories managed by the tripleo deployment tools and bind-mounted as docker volumes now use the :z flag, docker’s equivalent of chcon -Rt svirt_sandbox_file_t -l s0. This makes those directories available to all containers on the host, in shared mode: /var/lib/tripleo-config, /var/lib/docker-puppet, /var/lib/kolla/config, /etc/puppet, /usr/share/openstack-puppet/modules/, /var/lib/config-data.

  • Live migration over TLS has been disabled since the settings it was using don’t meet the required security standards. It is currently not possible to enable it via t-h-t.

  • Change the IPtables rule for the SNMP service to open UDP port 161 on the subnet given by the SnmpdIpSubnet parameter. If SnmpdIpSubnet is left empty, SnmpdNetwork will be used.

Bug Fixes

  • Set “host” parameter in manila.conf to ‘hostgroup’ when running manila share service under pacemaker. This labels instances of the service on different nodes with the same “host” as cinder does in this circumstance so that the instances are considered by OpenStack to provide the same service and manila share is able to maintain management of shares on the backend after failover and failback.

  • Expose panko expirer params to enable and configure it.

  • Add s3 driver option and params associated with it.

  • Allow the configuration of image_member_quota in the Glance API. Reaching the default value (128) blocks the ability to share images.

  • Enabling ceilometer automatically enables keystone notifications through the ‘notifications’ topic (which was the default).

  • Deployments with Ceph now honor the DeploymentServerBlacklist parameter. Previously, changes could still be triggered for servers in the blacklist.

  • Added hiera for network_virtual_ips in vip_data to allow composable networks to be configured in puppet.

  • Allow containerized services to be executed on hosts with SELinux in the enforcing mode.

  • If docker-puppet.py fails on any config_volume, it can be difficult to reproduce the failure given all the other entries in docker-puppet.json. Often, to reproduce a single failure, one had to modify the json file, remove all other entries, save the result to a new file, and then pass that new file as $CONFIG. The ability to specify $CONFIG_VOLUME has been added, which causes docker-puppet.py to run only the configuration for the entry in docker-puppet.json whose config_volume value matches the user-specified value.

  • As documented in launchpad bug 1708680 the templates for manila with the “generic” back end do not yield a successful manila deployment even if they do not cause the overall overcloud deployment to fail, so we are dropping these faulty and unmaintained manila “generic” back end templates.

  • Drop redundant MetricProcessingDelay param from gnocchi base templates. This is already done in metricd templates, so lets drop it to avoid duplicates in config file.

  • Enable the ntp iburst configuration for each server by default. As some services are very sensitive to time synchronization, this helps speed up synchronization when servers are unavailable for a time. See LP#1731883.

  • Fixes Heat resource for API networks to be the correct name of “InternalApiNetwork” instead of “InternalNetwork”.

  • Fixes dynamic networks to fallback to ctlplane network when they are disabled.

  • Fixes missing Keystone authtoken password for Tacker.

  • Fixes an issue in OpenDaylight deployments where SSL between the Neutron DHCP agent and OVS did not work due to missing SSL certificate/key configuration.

  • The “neutron_admin_auth_url” is now properly set using KeystoneInternal rather than using the NeutronAdmin endpoint.

  • Fixes GUI feature loaded into OpenDaylight, which fixes the GUI as well as the URL used for Docker healthcheck.

  • Fixes missing SSL/TLS configuration for OpenDaylight docker deployments.

  • Fixes bug where neutron port status was not updated with OpenDaylight deployments due to firewall blocking the websocket port used to send the update (port 8185).

  • Fixes generation of public certificates for haproxy in a non-containerized TLS deployment scenario.

  • Removes hardcoded network names. The networks are now defined dynamically by network_data.yaml.

  • When Horizon is enabled, the _member_ Keystone role will now be created. (bug 1741066).

  • Disables QoS with OpenDaylight until officially supported.

  • The pacemaker docker version of the rabbitmq service should also include the noops for the Rabbitmq_policy and Rabbitmq_user puppet resources that are noop’d in docker/services/rabbitmq.yaml. These resources must be noop’d in puppet, otherwise they could be triggered during puppet applies in the docker-puppet.py config-generation step, where rabbitmqctl is not actually running.

  • Changes the default RabbitMQ partition handling strategy from ‘pause_minority’ to ‘ignore’, avoiding crashes due to race conditions with nodes starting and stopping concurrently.

  • Remove Ceilometer Collector, Expirer and Api from the roles data and templates. These services have been deprecated in the Pike release and targeted for removal in the current Queens release.

  • Remove unused nova ports 3773 and 8773 from being opened in iptables.

  • Restore ceilometer templates to disable Api, Collector and Expirer. These are required for fast forward upgrades to remove the services during the upgrades.

  • For deployments running on RHEL with Satellite 6 (or later) with Capsule (Katello API enabled), the Katello API is available on port 8443, so the previous API ping did not work in this case. Capsule is now supported, since we simply check whether the katello-ca-consumer-latest rpm is available to tell that the Satellite version is 6 or later.

  • Allow configuring the SR-IOV agent with agent extensions.

  • Start sequence at 1 for the downloaded deploy steps playbook instead of 0. The first step should be 1 since that is what the puppet manifests expect.

  • Swift added a requirement to ensure that storage directories exist before using them. However, when local directories are used in Tripleo (storing data in /srv/node/d1), these are missing by default and thus Swift won’t store any data. This fix creates this directory if needed on a containerized environment.

  • Swift added a requirement to ensure that storage directories exist before using them. However, when local directories are used in Tripleo (storing data in /srv/node/d1), these are missing by default and thus Swift won’t store any data. This fix creates this directory if needed.

  • Processes store important health and debug data in some files within /var/cache/swift, and these files must be shared between all swift-* processes. This directory is therefore now mounted on all Swift containers, which is required to make swift-recon work.

  • Add swift_config puppet tag to the dockerized proxy service to ensure the required hash values in swift.conf are set properly. This is required when deploying a proxy node without the storage service at the same time.

  • The standalone Telemetry role at roles/Telemetry.yaml had an incorrect list of services. The list has been updated to remove services such as MySQL and RabbitMQ and the services common to all TripleO roles have been added.

  • Change the default ManageEventPipeline to true. This is because we want the event pipeline publishers overridden by heat templates to take effect over the puppet defaults. Once we drop panko:// from the pipeline we can switch this back to false.

  • In the deploy steps playbook downloaded via “openstack overcloud config download”, all the tasks require sudo. The tasks now use “become: true”.

  • Use StrictHostKeyChecking=no to inject the temporary ssh key in enable-ssh-admin.sh. The user provides the list of hosts for ssh, so we can safely assume that they intend to ssh to those hosts. Also, for the ovb case the hosts will have new host ssh keys which have not yet been accepted.

Other Notes

  • With the migration from puppet-ceph to ceph-ansible for the deployment of Ceph, the format of CephPools parameter changes because the two tools use a different format to represent the list of additional pools to create.

  • Network templates are now rendered with jinja2 based on network_data.yaml. The only required parameter for each network is the name, optional params will populate the defaults in the network template. Network templates will be generated for both IPv4 and IPv6 versions of the networks, setting ipv6: true on the network will generate only IPv6 templates. An example for overriding default IP addresses for IPv6 has been added in environments/network-environment-v6.yaml.