Newton Series Release Notes

14.2.8-11

New Features

  • Tags have been added to all of the common tasks with the prefix “common-“. This allows a deployer to rapidly run any of the common tasks on demand without having to rerun an entire playbook.
  • Extra headers can be added to Keystone responses by adding items to keystone_extra_headers. Example:

    keystone_extra_headers:
      - parameter: "Access-Control-Expose-Headers"
        value: "X-Subject-Token"
      - parameter: "Access-Control-Allow-Headers"
        value: "Content-Type, X-Auth-Token"
      - parameter: "Access-Control-Allow-Origin"
        value: "*"
    

Upgrade Notes

  • The openstack-ansible-security role is now retired and the ansible-hardening role replaces it. The ansible-hardening role provides the same functionality and will be the maintained hardening role going forward.

Bug Fixes

  • In Ubuntu the dnsmasq package actually includes init scripts and service configuration which conflict with LXC and are best not included. The actual dependent package is dnsmasq-base. The package list has been adjusted and a task added to remove the dnsmasq package and purge the related configuration files from all LXC hosts.

14.2.8

New Features

  • The os_nova role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable nova_all_software_updated is true. This variable will need to be set by the playbook consuming the role.
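As a sketch, the consuming playbook might set the flag once all software updates have completed; the play below is illustrative only (the host group and play layout are assumptions, not from the source):

```yaml
# Illustrative consuming playbook: the role runs the online data
# migrations only once nova_all_software_updated is true.
- name: Install nova services
  hosts: nova_all
  vars:
    nova_all_software_updated: yes
  roles:
    - role: os_nova
```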

Upgrade Notes

  • LXC containers will have their TZ data synchronized with their physical host machines. This is being done because containers have been assumed to use UTC while a host could be using something else. This causes issues in some services like ceilometer and can result in general time differences in logging.

Bug Fixes

  • MariaDB 10.0.32, released on 17 August, requires percona xtrabackup version 2.3.5 or higher when configured to use xtrabackup for the SST. As xtrabackup is the default SST mechanism in the galera_server role, the version used has been updated from 2.2.13 to 2.3.5 for the x86_64 hardware architecture. See the percona release notes for 2.3.2 for more details of what was included in the fix.

14.2.7

New Features

  • The os_cinder role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable cinder_all_software_updated is true. This variable will need to be set by the playbook consuming the role.
  • The os-cinder-install.yml playbook will now execute a rolling upgrade of cinder including database migrations (both schema and online) as per the procedure described in the cinder documentation. When haproxy is used as the load balancer, the backend being changed will be drained before changes are made, then added back to the pool once the changes are complete.
  • It’s now possible to disable the heat stack password field in horizon. The horizon_enable_heatstack_user_pass variable has been added and defaults to True.
  • The os-neutron-install.yml playbook will now execute a rolling upgrade of neutron including database migrations (both expand and contract) as per the procedure described in the neutron documentation.
  • The os-nova-install.yml playbook will now execute a rolling upgrade of nova including database migrations as per the procedure described in the nova documentation.

Upgrade Notes

  • The entire repo build process is now idempotent. From now on when the repo build is re-run, it will only fetch updated git repositories and rebuild the wheels/venvs if the requirements have changed, or a new release is being deployed.
  • The git clone part of the repo build process now only happens when the requirements change. A git reclone can be forced by using the boolean variable repo_build_git_reclone.
  • The python wheel build process now only happens when requirements change. A wheel rebuild may be forced by using the boolean variable repo_build_wheel_rebuild.
  • The python venv build process now only happens when requirements change. A venv rebuild may be forced by using the boolean variable repo_build_venv_rebuild.
  • The repo build process now only has the following tags, providing a clear path for each deliverable. The tag repo-build-install completes the installation of required packages. The tag repo-build-wheels completes the wheel build process. The tag repo-build-venvs completes the venv build process. Finally, the tag repo-build-index completes the manifest preparation and indexing of the os-releases and links folders.
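Should a full rebuild be needed despite the idempotent behaviour described above, the boolean flags could be set as overrides, for example in /etc/openstack_deploy/user_variables.yml (a sketch; set only the stages that need to be redone):

```yaml
# Force the repo build stages to re-run on the next playbook run
repo_build_git_reclone: yes
repo_build_wheel_rebuild: yes
repo_build_venv_rebuild: yes
```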

14.2.6

New Features

  • The horizon_images_allow_location variable is added to support the IMAGES_ALLOW_LOCATION setting in the horizon_local_settings.py file, allowing an external location to be specified during image creation.
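A minimal sketch, assuming the variable is set as an override in user_variables.yml:

```yaml
# Renders IMAGES_ALLOW_LOCATION = True into horizon_local_settings.py
horizon_images_allow_location: True
```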

14.2.5

New Features

  • The new option haproxy_backend_arguments can be utilized to add arbitrary options to an HAProxy backend, such as tcp-check or http-check.
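For example, a custom health check could be attached to a backend as below. This is a sketch: the option strings are illustrative, and the exact placement of the key (top level or within a service definition) should be checked against the role defaults.

```yaml
# Illustrative extra backend options for an HAProxy health check
haproxy_backend_arguments:
  - "option httpchk HEAD /healthcheck HTTP/1.0"
  - "http-check expect status 200"
```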

14.2.4

Known Issues

  • When executing a deployment which includes the telemetry systems (ceilometer, gnocchi, aodh), the repo build will fail because pip is unable to read the constraints properly from the extras section in ceilometer’s setup.cfg. The current workaround is to add the following content to /etc/openstack_deploy/user_variables.yml.

    repo_build_upper_constraints_overrides:
      - gnocchiclient<3.0.0
    

Bug Fixes

  • The workaround of requiring the addition of gnocchiclient to the repo_build_upper_constraints_overrides variable is no longer required. The appropriate constraints have been implemented as a global pin.
  • Upstream is now depending on version 2.1.0 of ldappool.

14.2.3

New Features

  • New variables have been added to allow a deployer to customize an aodh systemd unit file to their liking.
  • The task dropping the aodh systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_aodh role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a ceilometer systemd unit file to their liking.
  • The task dropping the ceilometer systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a cinder systemd unit file to their liking.
  • The task dropping the cinder systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • New variables have been added to allow a deployer to customize a glance systemd unit file to their liking.
  • The task dropping the glance systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a gnocchi systemd unit file to their liking.
  • The task dropping the gnocchi systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a heat systemd unit file to their liking.
  • The task dropping the heat systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize an ironic systemd unit file to their liking.
  • The task dropping the ironic systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a keystone systemd unit file to their liking.
  • The task dropping the keystone systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.
  • Removed the dependency on cinder_backends_rbd_inuse in nova.conf when setting the rbd_user and rbd_secret_uuid variables. Cinder delivers all necessary values via RPC when attaching the volume, so those variables are only necessary for ephemeral disks stored in Ceph. These variables must be set on the cinder-volume side under the backend section.
  • New variables have been added to allow a deployer to customize a magnum systemd unit file to their liking.
  • The task dropping the magnum systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a neutron systemd unit file to their liking.
  • The task dropping the neutron systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a nova systemd unit file to their liking.
  • The task dropping the nova systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a swift systemd unit file to their liking.
  • The task dropping the swift systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_swift role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables which use the config_template task to change template defaults.
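As a sketch of how these override variables work with the config_template plugin, a deployer could restore the previous, more conservative timings for one service. The variable name below is illustrative; the real names follow the <service>_*_init_config_overrides pattern and should be checked against the role defaults.

```yaml
# Illustrative override restoring longer systemd unit timings
nova_compute_init_config_overrides:
  Service:
    TimeoutSec: 300
    RestartSec: 150
```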

Upgrade Notes

  • For the os_aodh role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_swift role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables which use the config_template task to change template defaults.

14.2.2

New Features

  • Allows SSL connections to Galera. The galera_use_ssl option has to be set to true; in this case a self-signed CA cert or a user-provided CA cert will be delivered to the container/host.
  • Implements the ability to connect to MySQL over SSL. The galera_use_ssl option has to be set to true (the default); in this case the playbooks create a self-signed SSL bundle and set up the MySQL configs to use it, or distribute a user-provided bundle throughout the Galera nodes.
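A minimal sketch of enabling the behaviour described above:

```yaml
# Enable SSL for Galera/MySQL connections; with no user-provided
# bundle, a self-signed one is created and distributed
galera_use_ssl: true
```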

Bug Fixes

  • Nova features that use libguestfs (libvirt password/key injection) now work on compute hosts running Ubuntu. When Nova is deployed to Ubuntu compute hosts and either nova_libvirt_inject_key or nova_libvirt_inject_password is set to True, kernels stored in /boot/vmlinuz-* will be made readable to the nova user. See launchpad bug 1507915.

14.2.1

New Features

  • Add support for the cinder v3 api. This is enabled by default, but can be disabled by setting the cinder_enable_v3_api variable to false.
  • For the os_cinder role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the cinder_*_init_config_overrides variables which use the config_template task to change template defaults.
  • The haproxy-server role now allows tunable parameters to be set. To do so, define a dictionary of options in the config files listing those which have to be changed (defaults for the remaining ones are programmed in the template). The global “maxconn” option has also been made tunable.
  • Add support for neutron as an enabled_network_interface.
  • The ironic_neutron_provisioning_network_name and ironic_neutron_cleaning_network_name variables can be set to the names of the neutron networks to use for provisioning and cleaning. The ansible tasks will determine the appropriate UUID for each network. Alternatively, ironic_neutron_provisioning_network_uuid or ironic_neutron_cleaning_network_uuid can be used to directly specify the UUID of the networks. If both ironic_neutron_provisioning_network_name and ironic_neutron_provisioning_network_uuid are specified, the specified UUID will be used. If only the provisioning network is specified, the cleaning network will default to the same network.
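For instance, the networks could be selected by name and the UUIDs resolved by the tasks; the network names below are placeholders:

```yaml
ironic_neutron_provisioning_network_name: "provisioning-net"
# When unset, the cleaning network defaults to the provisioning network
ironic_neutron_cleaning_network_name: "cleaning-net"
```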

Upgrade Notes

  • During the keepalived role upgrade the keepalived process will restart and introduce a brief service disruption.

Deprecation Notes

  • The variables cinder_sigkill_timeout and cinder_restart_wait have been deprecated and will be removed in Pike.

Critical Issues

  • A bug that caused the Keystone credential keys to be lost when the playbook is run during a rebuild of the first Keystone container has been fixed. Please see launchpad bug 1667960 for more details.

14.2.0

New Features

  • The galera_client role will default to using the galera_repo_url URL if the value for it is set. This simplifies using an alternative mirror for the MariaDB server and client as only one variable needs to be set to cover them both.
  • Add a get_networks command to the neutron library. This will return network information for all networks, and fail if the specified net_name network is not present. If no net_name is specified, network information for all networks will be returned without performing a check on an existing net_name network.
  • The default behaviour of ensure_endpoint in the keystone module has changed to update an existing endpoint, if one exists that matches the service name, type, region and interface. This ensures that no duplicate service entries can exist per region.
  • The repo server file system structure has been updated to allow multiple operating systems running multiple architectures to be run at the same time and served from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.
  • The deployer can now define an environment variable GROUP_VARS_PATH with the folders of their choice (separated by colons) to define a user-space group_vars folder. These vars apply but are (currently) overridden by the OpenStack-Ansible default group vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in GROUP_VARS_PATH wins).
  • The deployer can now define an environment variable HOST_VARS_PATH with the folders of their choice (separated by colons) to define a user-space host_vars folder. These vars apply but are (currently) overridden by the OpenStack-Ansible default host vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in HOST_VARS_PATH wins).

Upgrade Notes

  • The repo server file system structure has been updated to allow multiple operating systems running multiple architectures to be run at the same time and served from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.

Deprecation Notes

  • The variables galera_client_apt_repo_url and galera_client_yum_repo_url are deprecated in favour of the common variable galera_client_repo_url.
  • The update state for the ensure_endpoint method of the keystone module is now deprecated, and will be removed in the Queens cycle. Setting state to present will achieve the same result.
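With the new common galera_client_repo_url variable, a single override now covers both the apt and yum cases; the mirror URL below is a placeholder:

```yaml
# One variable replaces galera_client_apt_repo_url and
# galera_client_yum_repo_url
galera_client_repo_url: "https://mirror.example.org/mariadb"
```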

14.1.1

New Features

  • The new provider network attribute sriov_host_interfaces is added to support SR-IOV network mappings inside Neutron. The provider_network adds the new items network_sriov_mappings and network_sriov_mappings_list to the provider_networks dictionary. Multiple interfaces can be defined by comma separation.
  • Neutron SR-IOV can now be optionally deployed and configured. For details about what the service is and what it provides, see the SR-IOV Installation Guide.
  • Added the new variable tempest_volume_backend_names and updated templates/tempest.conf.j2 to point backend_names at this variable.

Known Issues

  • There is currently an Ansible bug regarding HOSTNAME. If the host's .bashrc holds a variable named HOSTNAME, the container to which the lxc_container module attaches will inherit this variable and potentially set the wrong $HOSTNAME. See the Ansible fix which will be released in Ansible version 2.3.

Upgrade Notes

  • Gnocchi service endpoint variables were not named correctly. Renamed variables to be consistent with other roles.

Deprecation Notes

  • Removed tempest_volume_backend1_name and tempest_volume_backend2_name since backend1_name and backend2_name were removed from tempest in commit 27905cc (merged 26/04/2016)

14.1.0

New Features

  • It’s now possible to change the behavior of DISALLOW_IFRAME_EMBED by defining the variable horizon_disallow_iframe_embed in the user variables.

Bug Fixes

  • Metal hosts were being inserted into the lxc_hosts group, even if they had no containers (Bug 1660996). This is now corrected for newly configured hosts. In addition, any hosts that did not belong in lxc_hosts will be removed on the next inventory run or playbook call.

Other Notes

  • From now on, external repo management (in use for RDO/UCA for example) will be done inside the pip-install role, not in the repo_build role.
  • Ubuntu Cloud Archive (UCA) was installed by default on repo, nova, and neutron nodes, but not on the other nodes. From now on, we use UCA everywhere to avoid dependency issues (such as virtualenvs being built with incompatible versions of python-cryptography). The same reasoning applies to CentOS and RDO packages.

14.0.8

New Features

  • The security-hardening playbook hosts target can now be filtered using the security_host_group var.

Upgrade Notes

  • The global override cinder_nfs_client is replaced in favor of fully supporting multi-backend configuration via the cinder_backends stanza.

Deprecation Notes

  • The global override cinder_nfs_client is replaced in favor of fully supporting multi-backend configuration via the cinder_backends stanza.

Bug Fixes

  • Systems using systemd (like Ubuntu Xenial) were incorrectly limited to a low number of open files. This was causing issues when restarting galera. A deployer can still define the maximum number of open files with the variable galera_file_limits (defaults to 65536).

Other Notes

  • The limits.conf file for galera servers will now be deployed under /etc/security/limits.d/99-limits.conf. This is being done to ensure our changes do not clobber existing settings within the system’s default /etc/security/limits.conf file when the file is templated.

14.0.7

New Features

  • It is now possible to customise the location of the configuration file source for the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_config_path variable.
  • It is now possible to customise the location of the scripts used in the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_script_path variable.
  • It is now possible to customise the name of the user_variables.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_variables_filename variable.
  • It is now possible to customise the name of the user_secrets.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_secrets_filename variable.
  • The filename of the apt source for the ubuntu cloud archive can now be defined with the variable uca_apt_source_list_filename.
  • The filename of the apt source for the ubuntu cloud archive used in ceph client can now be defined by giving a filename in the uca part of the dict ceph_apt_repos.
  • The filename of the apt/yum source can now be defined with the variable mariadb_repo_filename.
  • The filename of the apt source can now be defined with the variable filename inside the dicts galera_repo and galera_percona_xtrabackup_repo.
  • The filename of the apt source for the haproxy ppa can now be defined with the filename section of the dict haproxy_repo.
  • The rabbitmq_server role now supports disabling listeners that do not use TLS. Deployers can override the rabbitmq_disable_non_tls_listeners variable, setting a value of True if they wish to enable this feature.
  • Additional volume-types can be created by defining a list named extra_volume_types in the desired backend of the variable(s) cinder_backends
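As an illustration (the backend key and type names here are hypothetical, and a real backend stanza carries additional driver settings), extra volume types are declared per backend inside cinder_backends:

```yaml
cinder_backends:
  lvm:                     # hypothetical backend name
    volume_backend_name: lvm
    extra_volume_types:    # additional volume-types created for this backend
      - low-iops           # hypothetical type names
      - high-iops
```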
  • You can specify the galera_package_arch variable to force a specific architecture when installing percona and qpress packages. By default this is calculated automatically based on the architecture of the galera_server host. Acceptable values are x86_64 for Ubuntu-14.04, Ubuntu-16.04, and RHEL 7, and ppc64le for Ubuntu-16.04.
  • Deployers can now define the variable cinder_qos_specs to create QoS specs and assign those specs to desired cinder volume types.
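A hedged sketch of the structure (the spec name, option values, and volume type are illustrative, and the exact key layout is an assumption based on the role's documented pattern of a name, a set of options, and the volume types to attach):

```yaml
cinder_qos_specs:
  - name: high-iops            # hypothetical QoS spec name
    options:
      consumer: front-end
      read_iops_sec: 2000      # illustrative limits
      write_iops_sec: 2000
    cinder_volume_types:
      - high-iops              # hypothetical volume type to attach
```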
  • RabbitMQ Server can now be installed using one of several methods: from a deb file (the default), from the distribution's standard repository, or from an external repository. Current behavior is unchanged. Please define rabbitmq_install_method: distro to use packages provided by your distribution, or rabbitmq_install_method: external_repo to use packages stored in an external repo. In the case external_repo is used, the process will install RabbitMQ from the packages hosted by packagecloud.io, as recommended by RabbitMQ.

Bug Fixes

  • The percona repository remained in place even after a change of the variable use_percona_upstream. From now on, the percona repository will not be present unless the deployer decides to use_percona_upstream. This also fixes a bug where this apt repository remained present after an upgrade from Mitaka.

14.0.6

New Features

  • Deployers can set heat_cinder_backups_enabled to enable or disable the cinder backups feature in heat. If heat has cinder backups enabled, but cinder’s backup service is disabled, newly built stacks will be undeletable.

    The heat_cinder_backups_enabled variable is set to false by default.

Bug Fixes

  • Properly distribute client keys to nova hypervisors when extra ceph clusters are being deployed.
  • Properly remove temporary files used to transfer ceph client keys from the deploy host and hypervisors.

14.0.5

New Features

  • The installation of chrony is still enabled by default, but it is now controlled by the security_enable_chrony variable.
  • If the cinder backup service is enabled with cinder_service_backup_program_enabled: True, then heat will be configured to use the cinder backup service. The heat_cinder_backups_enabled variable will automatically be set to True.
  • The copy of the /etc/openstack-release file is now optional. To disable the copy of the file, set openstack_distrib_file to no.
  • The placement of the /etc/openstack-release file can now be changed. Set the variable openstack_distrib_file_path to place it in a different path.
  • Swift versioned_writes middleware is added to the pipeline by default. Additionally, the allow_versioned_writes setting in the middleware configuration is set to True. This follows the Swift defaults and enables the use of the X-History-Location metadata header.

Upgrade Notes

  • The variables used to produce the /etc/openstack-release file have been renamed to improve the consistency of their namespacing according to their purpose.

    openstack_code_name –> openstack_distrib_code_name
    openstack_release –> openstack_distrib_release

    Note that the value for openstack_distrib_release will be taken from the variable openstack_release if it is set.

  • The variable proxy_env_url is now used by the apt-cacher-ng jinja2 template to set up an HTTP/HTTPS proxy if needed.
  • The variable gnocchi_required_pip_packages was incorrectly named and has been renamed to gnocchi_requires_pip_packages to match the standard across all roles.

Bug Fixes

  • The container_cidr key has been restored back to openstack_inventory.json.

    The fix to remove deleted global override keys mistakenly deleted the container_cidr key. This was used by downstream consumers, and cannot be reconstructed with other information inside the inventory file. Regression tests were also added.

  • The apt-cacher-ng daemon does not use the proxy server specified in environment variables. The proxy server specified in the proxy_env_url variable is now set inside the apt-cacher-ng configuration file.
  • Setup for the PowerVM driver was not properly configuring the system to support RMC configuration for client instances. This fix introduces an interface template for PowerVM that properly supports mixed IPv4/IPv6 deployments and adds documentation for PowerVM RMC. For more information see bug 1643988.

14.0.3

Upgrade Notes

  • The variables tempest_requirements_git_repo and tempest_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables swift_requirements_git_repo and swift_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables neutron_requirements_git_repo and neutron_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables sahara_requirements_git_repo and sahara_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables nova_requirements_git_repo and nova_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables nova_lxd_requirements_git_repo and nova_lxd_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

Bug Fixes

  • SSLv3 is now disabled in the haproxy daemon configuration by default.
  • Setting the haproxy_bind list on a service is now used as an override to the other VIPs defined in the environment. Previously it was being treated as an append to the other VIPs so there was no path to override the VIP binds for a service. For example, haproxy_bind could be used to bind a service to the internal VIP only.
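For example (sketched against the internal_lb_vip_address variable; where this override sits within the service definition depends on the deployment's haproxy configuration), binding a service to the internal VIP only:

```yaml
# Override, not append: this service now binds to the internal VIP only.
haproxy_bind:
  - "{{ internal_lb_vip_address }}"
```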

14.0.2

New Features

  • Deployers can now define the override cinder_rpc_executor_thread_pool_size, which defaults to 64.
  • Deployers can now define the override cinder_rpc_response_timeout, which defaults to 60.
  • Container boot ordering has been implemented on container types where it would be beneficial. This change ensures that stateful systems running within a container are started ahead of non-stateful systems. While this change has no impact on a running deployment, it will assist with faster recovery should any node hosting containers go down or simply need to be restarted.
  • A new task has been added to the “os-lxc-container-setup.yml” common-tasks file. This new task will allow additional configurations to be added without having to restart the container. This change is helpful in cases where non-impacting configuration needs to be added to or updated in a running container.
  • IPv6 support has been added for the LXC bridge network. This can be configured using lxc_net6_address, lxc_net6_netmask, and lxc_net6_nat.
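A minimal sketch (the address is an illustrative ULA value, and the netmask is expressed as a prefix length, which is an assumption):

```yaml
# Hypothetical IPv6 settings for the LXC bridge network.
lxc_net6_address: "fd00:1:1:1::1"   # illustrative unique local address
lxc_net6_netmask: 64                # assumed to be a prefix length
lxc_net6_nat: true
```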

Upgrade Notes

  • The variables horizon_requirements_git_repo and horizon_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables ironic_requirements_git_repo and ironic_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables heat_requirements_git_repo and heat_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables magnum_requirements_git_repo and magnum_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables cinder_requirements_git_repo and cinder_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables gnocchi_requirements_git_repo and gnocchi_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables glance_requirements_git_repo and glance_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables keystone_requirements_git_repo and keystone_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables aodh_requirements_git_repo and aodh_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables rally_requirements_git_repo and rally_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables ceilometer_requirements_git_repo and ceilometer_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

Bug Fixes

  • When a task fails while executing a playbook, the default behaviour for Ansible is to fail for that host without executing any notifiers. This can result in configuration changes being executed, but services not being restarted. OpenStack-Ansible now sets ANSIBLE_FORCE_HANDLERS to True by default to ensure that all notified handlers attempt to execute before stopping the playbook execution.
  • The NovaLink URL used the ‘ftp’ protocol to provision the apt key, which caused the apt_key module to fail to retrieve the NovaLink gpg public key file. The protocol of the URL has therefore been changed to ‘http’. For more information, see bug 1637348.

14.0.0

New Features

  • LXC containers will now have a proper RFC1034/5 hostname set during post build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the “_” in the inventory_hostname to “-“. Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.
  • The option openstack_domain has been added to the openstack_hosts role. This option is used to setup proper hostname entries for all hosts within a given OpenStack deployment.
  • The openstack_hosts role will setup an RFC1034/5 hostname and create an alias for all hosts in inventory.
  • Added new parameter cirros_img_disk_format to support disk formats other than qcow2.
  • Ceilometer can now use Gnocchi for storage. By default this is disabled. To enable the service, set ceilometer_gnocchi_enabled: yes. See the Gnocchi role documentation for more details.
  • The os_horizon role now has support for the horizon ironic-ui dashboard. The dashboard may be enabled by setting horizon_enable_ironic_ui to True in /etc/openstack_deploy/user_variables.yml.
  • Adds support for the horizon ironic-ui dashboard. The dashboard will be automatically enabled if any ironic hosts are defined.
  • The os_horizon role now has support for the horizon magnum-ui dashboard. The dashboard may be enabled by setting horizon_enable_magnum_ui to True in /etc/openstack_deploy/user_variables.yml.
  • Adds support for the horizon magnum-ui dashboard. The dashboard will be automatically enabled if any magnum hosts are defined.
  • The horizon_keystone_admin_roles variable is added to support the OPENSTACK_KEYSTONE_ADMIN_ROLES list in the horizon_local_settings.py file.
  • A new variable has been added to allow a deployer to control the restart of containers via the handler. This new option is lxc_container_allow_restarts and has a default of yes. If a deployer wishes to disable the auto-restart functionality they can set this value to no and automatic container restarts that are not absolutely required will be disabled.
  • Experimental support has been added to allow the deployment of the OpenStack Magnum service when hosts are present in the host group magnum-infra_hosts.
  • Deployers can now blacklist certain Nova extensions by providing a list of such extensions in horizon_nova_extensions_blacklist variable, for example:

    horizon_nova_extensions_blacklist:
      - "SimpleTenantUsage"
    
  • The os_nova role can now deploy the nova-lxd hypervisor. This can be achieved by setting nova_virt_type to lxd on a per-host basis in openstack_user_config.yml or on a global basis in user_variables.yml.
  • The os_nova role can now deploy a custom /etc/libvirt/qemu.conf file by defining qemu_conf_dict.
  • The role now enables auditing during early boot to comply with the requirements in V-38438. By default, the GRUB configuration variables in /etc/default/grub.d/ will be updated and the active grub.cfg will be updated.

    Deployers can opt-out of the change entirely by setting a variable:

    security_enable_audit_during_boot: no
    

    Deployers may opt-in for the change without automatically updating the active grub.cfg file by setting the following Ansible variables:

    security_enable_audit_during_boot: yes
    security_enable_grub_update: no
    
  • A task was added to disable secure ICMP redirects per the requirements in V-38526. This change can cause problems in some environments, so it is disabled by default. Deployers can enable the task (which disables secure ICMP redirects) by setting security_disable_icmpv4_redirects_secure to yes.
  • A new task was added to disable ICMPv6 redirects per the requirements in V-38548. However, since this change can cause problems in running OpenStack environments, it is disabled by default. Deployers who wish to enable this task (and disable ICMPv6 redirects) should set security_disable_icmpv6_redirects to yes.
  • AIDE is configured to skip the entire /var directory when it does the database initialization and when it performs checks. This reduces disk I/O and allows these jobs to complete faster.

    This also allows the initialization to become a blocking process and Ansible will wait for the initialization to complete prior to running the next task.

  • In order to reduce the time taken for fact gathering, the default subset gathered has been reduced to a smaller set than the Ansible default. This may be changed by the deployer by setting the ANSIBLE_GATHER_SUBSET variable in the bash environment prior to executing any ansible commands.
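For example (the subset list shown is illustrative), a deployer could widen the gathered subset back out before executing any ansible commands:

```shell
# Illustrative: gather additional fact subsets on top of the reduced default.
export ANSIBLE_GATHER_SUBSET="network,hardware,virtual"
```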
  • A new option has been added to bootstrap-ansible.sh to set the role fetch mode. The environment variable ANSIBLE_ROLE_FETCH_MODE sets how role dependencies are resolved.
  • The auditd rules template included a rule that audited changes to the AppArmor policies, but the SELinux policy changes were not being audited. Any changes to SELinux policies in /etc/selinux are now being logged by auditd.
  • The container cache preparation process now allows copy-on-write to be set as the lxc_container_backing_method when the lxc_container_backing_store is set to lvm. When this is set a base container will be created using a name of the form <linux-distribution>-<distribution-release>-<host-cpu-architecture>. The container will be stopped as it is not used for anything except to be a backing store for all other containers which will be based on a snapshot of the base container.
  • When using copy-on-write backing stores for containers, the base container name may be set using the variable lxc_container_base_name which defaults to <linux-distribution>-<distribution-release>-<host-cpu-architecture>.
  • The container cache preparation process now allows overlayfs to be set as the lxc_container_backing_store. When this is set a base container will be created using a name of the form <linux-distribution>-<distribution-release>-<host-cpu-architecture>. The container will be stopped as it is not used for anything except to be a backing store for all other containers which will be based on a snapshot of the base container. The overlayfs backing store is not recommended to be used for production unless the host kernel version is 3.18 or higher.
  • Containers will now bind mount all logs to the physical host machine in the “/openstack/log/{{ inventory_hostname }}” location. This change will ensure containers using a block backed file system (lvm, zfs, btrfs) do not run into issues with full file systems due to logging.
  • Added new variable tempest_img_name.
  • Added new variable tempest_img_url. This variable replaces cirros_tgz_url and cirros_img_url.
  • Added new variable tempest_image_file. This variable replaces the hard-coded value for the img_file setting in tempest.conf.j2. This will allow users to specify images other than cirros.
  • Added new variable tempest_img_disk_format. This variable replaces cirros_img_disk_format.
  • The rsyslog_server role now has support for CentOS 7.
  • Support has been added to install the ceph_client packages and dependencies from Ceph.com, Ubuntu Cloud Archive (UCA), or the operating system’s default repository.

    The ceph_pkg_source variable controls the install source for the Ceph packages. Valid values include:

    • ceph: This option installs Ceph from a ceph.com repo. Additional variables to adjust items such as Ceph release and regional download mirror can be found in the variables files.
    • uca: This option installs Ceph from the Ubuntu Cloud Archive. Additional variables to adjust items such as the OpenStack/Ceph release can be found in the variables files.
    • distro: This options installs Ceph from the operating system’s default repository and unlike the other options does not attempt to manage package keys or add additional package repositories.
  • The pip_install role can now configure pip to be locked down to the repository built by OpenStack-Ansible. To enable the lockdown configuration, deployers may set pip_lock_to_internal_repo to true in /etc/openstack_deploy/user_variables.yml.
  • The dynamic_inventory.py file now takes a new argument, --check, which will run the inventory build without writing any files to the file system. This is useful for checking to make sure your configuration does not contain known errors prior to running Ansible commands.
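A typical invocation might look like the following (the script path assumes a standard /opt/openstack-ansible checkout; adjust to your environment):

```shell
# Validate the configuration without writing any inventory files to disk.
/opt/openstack-ansible/playbooks/inventory/dynamic_inventory.py --check
```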
  • The ability to support MultiStrOps has been added to the config_template action plugin. This change updates the parser to use the set() type to determine if values within a given key are to be rendered as MultiStrOps. If an override is used in an INI config file the set type is defined using the standard yaml construct of “?” as the item marker.

    # Example Override Entries
    Section:
      typical_list_things:
        - 1
        - 2
      multistrops_things:
        ? a
        ? b
    
    # Example Rendered Config:
    [Section]
    typical_list_things = 1,2
    multistrops_things = a
    multistrops_things = b
    
  • Although the STIG requires martian packets to be logged, the logging is now disabled by default. The logs can quickly fill up a syslog server or make a physical console unusable.

    Deployers that need this logging enabled will need to set the following Ansible variable:

    security_sysctl_enable_martian_logging: yes
    
  • The rabbitmq_server now supports a configurable inventory host group. Deployers can override the rabbitmq_host_group variable if they wish to use the role to create additional RabbitMQ clusters on a custom host group.
  • The lxc-container-create role now consumes the variable lxc_container_bind_mounts which should contain a list of bind mounts to apply to a newly created container. The appropriate host and container directory will be created and the configuration applied to the container config. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.
  • The lxc-container-create role now consumes the variable lxc_container_config_list which should contain a list of the entries which should be added to the LXC container config file when the container is created. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.
  • The lxc-container-create role now consumes the variable lxc_container_commands which should contain any shell commands that should be executed in a newly created container. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.
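A combined group_vars sketch of the three variables above (the directories, config entry, and commands shown are all hypothetical):

```yaml
lxc_container_bind_mounts:
  - host_directory: "/openstack/backup/{{ inventory_hostname }}"  # hypothetical
    container_directory: "/var/backup"
lxc_container_config_list:
  - "lxc.start.order=19"             # hypothetical extra LXC config entry
lxc_container_commands: |
  # hypothetical shell commands run inside the newly created container
  mkdir -p /etc/myapp
```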
  • The container creation process now allows copy-on-write to be set as the lxc_container_backing_method when the lxc_container_backing_store is set to lvm. When this is set it will use a snapshot of the base container to build the containers.
  • The container creation process now allows overlayfs to be set as the lxc_container_backing_store. When this is set it will use a snapshot of the base container to build the containers. The overlayfs backing store is not recommended to be used for production unless the host kernel version is 3.18 or higher.
  • LXC containers will now generate a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set to true. This feature was implemented to resolve issues with dynamic mac addresses in containers generally experienced at scale with network intensive services.
  • All of the database and database user creation tasks have been moved from the roles into the playbooks. This allows the roles to be tested independently of the deployed database and also allows the roles to be used independently of infrastructure choices made by the integrated OSA project.
  • Host security hardening is now applied by default using the openstack-ansible-security role. Deployers can opt out by setting the apply_security_hardening Ansible variable to false. For more information about the role and the changes it makes, refer to the openstack-ansible-security documentation.
  • If there are swift hosts in the environment, then the value for cinder_service_backup_program_enabled will automatically be set to True. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.
  • If there are swift hosts in the environment, then the value for glance_default_store will automatically be set to swift. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.
  • The os_nova role can now detect a PowerNV environment and set the virtualization type to ‘kvm’.
  • The security role now has tasks that will disable the graphical interface on a server using upstart (Ubuntu 14.04) or systemd (Ubuntu 16.04 and CentOS 7). These changes take effect after a reboot.

    Deployers that need a graphical interface will need to set the following Ansible variable:

    security_disable_x_windows: no
    
  • YAML files used for ceilometer configuration now allow a deployer to override a given list. If an override is provided that matches an already defined list in one of the ceilometer default YAML files, the entire list is replaced by the provided override. Previously, nested lists within the default ceilometer configuration files would be extended if a deployer provided an override matching an existing pipeline. Extending the defaults had a high probability of causing undesirable outcomes and was very unpredictable.
  • An Ansible task was added to disable the rdisc service on CentOS systems if the service is installed on the system.

    Deployers can opt-out of this change by setting security_disable_rdisc to no.

  • Whether ceilometer should be enabled by default for each service is now dynamically determined based on whether there are any ceilometer hosts/containers deployed. This behaviour can still be overridden by toggling <service>_ceilometer_enabled in /etc/openstack_deploy/user_variables.yml.
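For example (the choice of service is illustrative), forcing the toggle for a single service in /etc/openstack_deploy/user_variables.yml:

```yaml
# Illustrative: disable ceilometer integration for glance regardless of
# whether ceilometer hosts are deployed.
glance_ceilometer_enabled: False
```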
  • The os_neutron role now determines the default configuration for openvswitch-agent tunnel_types and the presence or absence of local_ip configuration based on the value of neutron_ml2_drivers_type. Deployers may directly control this configuration by overriding the neutron_tunnel_types variable.
  • The os_neutron role now configures neutron ml2 to load the l2_population mechanism driver by default based on the value of neutron_l2_population. Deployers may directly control the neutron ml2 mechanism drivers list by overriding the mechanisms variable in the neutron_plugins dictionary.
  • LBaaSv2 is now enabled by default in all-in-one (AIO) deployments.
  • The Linux Security Module (LSM) that is appropriate for the Linux distribution in use will be automatically enabled by the security role by default. Deployers can opt out of this change by setting the following Ansible variable:

    security_enable_linux_security_module: False
    

    The documentation for STIG V-51337 has more information about how each LSM is enabled along with special notes for SELinux.

  • An export flag has been added to the inventory-manage.py script. This flag allows exporting of host and network information from an OpenStack-Ansible inventory for import into another system, or an alternate view of the existing data. See the developer docs for more details.
  • Variable ceph_extra_confs has been expanded to support retrieving additional ceph.conf and keyrings from multiple ceph clusters automatically.
  • Additional libvirt ceph client secrets can be defined to support attaching volumes from different ceph clusters.
  • New variable ceph_extra_confs may be defined to support deployment of extra Ceph config files. This is useful for cinder deployments that utilize multiple Ceph clusters as cinder backends.
  • The py_pkgs lookup plugin now has strict ordering for the requirement files discovered. These files are used to add additional requirements to the python packages discovered. The order is defined by the constant REQUIREMENTS_FILE_TYPES, which contains the following entries: ‘test-requirements.txt’, ‘dev-requirements.txt’, ‘requirements.txt’, ‘global-requirements.txt’, ‘global-requirement-pins.txt’. The items in this list are arranged from least to most priority.
  • The openstack-ansible-galera_server role will now prevent deployers from changing the galera_cluster_name variable on clusters that already have a value set in a running galera cluster. You can set the new galera_force_change_cluster_name variable to True to force the galera_cluster_name variable to be changed. We recommend setting this by running the galera-install.yml playbook with -e galera_force_change_cluster_name=True, to avoid changing the galera_cluster_name variable unintentionally. Use with caution, changing the galera_cluster_name value can cause your cluster to fail, as the nodes won’t join if restarted sequentially.
  • The repo build process is now able to make use of a pre-staged git cache. If the /var/www/repo/openstackgit folder on the repo server is found to contain existing git clones then they will be updated if they do not already contain the required SHA for the build.
  • The repo build process is now able to synchronize a git cache from the deployment node to the repo server. The git cache path on the deployment node is set using the variable repo_build_git_cache. If the deployment node hosts the repo container, then the folder will be symlinked into the bind mount for the repo container. If the deployment node does not host the repo container, then the contents of the folder will be synchronised into the repo container.
  • The os_glance role now supports Ubuntu 16.04 and SystemD.
  • Gnocchi is available for deployment as a metrics storage service. At this time it does not integrate with Aodh or Ceilometer. To configure Aodh or Ceilometer to use Gnocchi as a storage/query API, each must be configured appropriately through overrides, as described in the configuration guides for each of these services.
  • CentOS 7 and Ubuntu 16.04 support have been added to the haproxy role.
  • The haproxy role installs hatop from source to ensure that the same operator tooling is available across all supported distributions. The download URL for the source can be set using the variable haproxy_hatop_download_url.
  • Added a boolean var haproxy_service_enabled to the haproxy_service_configs dict to support toggling haproxy endpoints on/off.
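
    For example, an endpoint could be toggled off as follows (the service name and surrounding keys in this sketch are illustrative; only haproxy_service_enabled is the new variable):

    haproxy_service_configs:
      - service:
          haproxy_service_name: example_service
          haproxy_service_enabled: False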
  • Added a new haproxy_extra_services var which will allow extra haproxy endpoint additions.
  • The repo server will now be used as a package manager cache.
  • The HAProxy role provided by OpenStack-Ansible now terminates SSL using a self-signed certificate by default. While this can be disabled, including SSL on all public endpoints by default helps make deployments more secure without any additional user interaction. More information on SSL and certificate generation can be found here.
  • The rabbitmq_server role now supports configuring HiPE compilation of the RabbitMQ server Erlang code. This configuration option may improve server performance for some workloads and hardware. Deployers can override the rabbitmq_hipe_compile variable, setting a value of True if they wish to enable this feature.
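
    For example, in /etc/openstack_deploy/user_variables.yml:

    rabbitmq_hipe_compile: True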
  • Horizon now has the ability to set arbitrary configuration options using global option horizon_config_overrides in YAML format. The overrides follow the same pattern found within the other OpenStack service overrides. General documentation on overrides can be found here.
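
    For example, an override might look like the following (the setting and value shown are illustrative, not defaults from this release):

    horizon_config_overrides:
      SESSION_TIMEOUT: 1800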
  • The os_horizon role now supports configuration of custom themes. Deployers can use the new horizon_custom_themes and horizon_default_theme variables to configure the dashboard with custom themes and default to a specific theme respectively.
  • CentOS 7 support has been added to the galera_server role.
  • Implemented support for Ubuntu 16.04 Xenial. percona-xtrabackup packages will be installed from distro repositories instead of upstream percona repositories, due to a lack of available packages upstream at the time of implementing this feature.
  • A task was added that restricts ICMPv4 redirects to meet the requirements of V-38524 in the STIG. This configuration is disabled by default since it could cause issues with LXC in some environments.

    Deployers can enable this configuration by setting an Ansible variable:

    security_disable_icmpv4_redirects: yes
    
  • The audit rules added by the security role now have key fields that make it easier to link the audit log entry to the audit rule that caused it to appear.
  • pip can be installed via the deployment host using the new variable pip_offline_install. This can be useful in environments where the containers lack internet connectivity. Please refer to the limited connectivity installation guide for more information.
  • The env.d directory included with OpenStack-Ansible is now used as the first source for the environment skeleton, and /etc/openstack_deploy/env.d will be used only to override values. Deployers without customizations will no longer need to copy the env.d directory to /etc/openstack_deploy. As a result, the env.d copy operation has been removed from the node bootstrap role.
  • A new debug flag has been added to dynamic_inventory.py. This should make it easier to understand what’s happening with the inventory script, and provide a way to gather output for more detailed bug reports. See the developer docs for more details.
  • The ironic role now supports Ubuntu 16.04 and SystemD.
  • To ensure the deployment system remains clean the Ansible execution environment is contained within a virtual environment. The virtual environment is created at “/opt/ansible-runtime” and the “ansible.*” CLI commands are linked within /usr/local/bin to ensure there is no interruption in the deployer workflow.
  • There is a new default configuration for keepalived, supporting more than 2 nodes.
  • In order to make use of the latest stable keepalived version, the variable keepalived_use_latest_stable must be set to True.
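
    For example:

    keepalived_use_latest_stable: True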
  • The ability to support login user domain and login project domain has been added to the keystone module.

    # Example usage
    - keystone:
        command: ensure_user
        endpoint: "{{ keystone_admin_endpoint }}"
        login_user: admin
        login_password: admin
        login_project_name: admin
        login_user_domain_name: custom
        login_project_domain_name: custom
        user_name: demo
        password: demo
        project_name: demo
        domain_name: custom
    
  • The new LBaaS v2 dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_neutron_lbaas: True
    
  • The LBaaSv2 service provider configuration can now be adjusted with the neutron_lbaasv2_service_provider variable. This allows a deployer to choose to deploy LBaaSv2 with Octavia in a future version.
  • The config_template action plugin now has a new option to toggle list extension for JSON or YAML formats. The new option is list_extend and is a boolean. The default is True which maintains the existing API.
  • The lxc_hosts role can now make use of a primary and secondary gpg keyserver for gpg validation of the downloaded cache. Setting the servers to use can be done using the lxc_image_cache_primary_keyserver and lxc_image_cache_secondary_keyserver variables.
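
    For example (the keyserver addresses below are placeholders, not project defaults):

    lxc_image_cache_primary_keyserver: hkp://keyserver.example.com
    lxc_image_cache_secondary_keyserver: hkp://keyserver2.example.com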
  • The lxc_container_create role will now build a container based on the distro of the host OS.
  • The lxc_container_create role now supports Ubuntu 14.04, 16.04, and RHEL/CentOS 7
  • The LXC container creation process now has a configurable delay for the task which waits for the container to start. The variable lxc_container_ssh_delay can be set to change the default delay of five seconds.
  • The lxc_host cache prep has been updated to use the LXC download template. This removes the last remaining dependency the project has on the rpc-trusty-container.tgz image.
  • The lxc_host role will build lxc cache using the download template built from images found here. These images are upstream builds from the greater LXC/D community.
  • The lxc_host role introduces support for CentOS 7 and Ubuntu 16.04 container types.
  • The inventory script will now dynamically populate the lxc_hosts group based on which machines have container affinities defined. This group is not allowed in user-defined configuration.
  • Neutron HA router capabilities in Horizon will be enabled automatically if the neutron plugin type is ML2 and the environment has two or more L3 agent nodes.
  • Horizon now has a boolean variable named horizon_enable_ha_router to enable Neutron HA router management.
  • Horizon’s IPv6 support is now enabled by default. This allows users to manage subnets with IPv6 addresses within the Horizon interface. Deployers can disable IPv6 support in Horizon by setting the following variable:

    horizon_enable_ipv6: False
    

    Please note: Horizon will still display IPv6 addresses in various panels with IPv6 support disabled. However, it will not allow any direct management of IPv6 configuration.

  • memcached now logs with multiple levels of verbosity, depending on the user variables. Setting debug: True enables maximum verbosity while setting verbose: True logs with an intermediate level.
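
    For example, in the user variables (set one or the other):

    debug: True      # maximum verbosity
    verbose: True    # intermediate verbosity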
  • The openstack-ansible-memcached_server role includes a new override, memcached_connections, which is automatically calculated as the memcached connection limit plus an additional 1k, and is used to configure the OS nofile limit. Without a proper nofile limit, memcached can crash when supporting higher parallel TCP/memcache connection counts.
  • The repo build process is now able to support building and synchronizing artifacts for multiple CPU architectures. Build artifacts are now tagged with the appropriate CPU architecture by default, and synchronization of build artifacts from secondary, architecture-specific repo servers back to the primary repo server is supported.
  • The repo install process is now able to support building and synchronizing artifacts for multiple CPU architectures. To support multiple architectures, one or more repo servers must be created for each CPU architecture in the deployment. When multiple CPU architectures are detected among the repo servers, the repo-discovery process will automatically assign a repo master to perform the build process for each architecture.
  • CentOS 7 support has been added to the galera_client role.
  • Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.
  • The Project Calico Neutron networking plugin is now integrated into the deployment. For setup instructions please see os_neutron role documentation.
  • A conditional has been added to the _local_ip settings used for neutron_local_ip, removing the hard requirement for an overlay network to be set within a deployment. If no overlay network is set within the deployment, the local_ip will be set to the value of “ansible_ssh_host”.
  • Neutron Firewall as a Service (FWaaS) can now optionally be deployed and configured. Please see the FWaaS Configuration Reference for details about what the service is and what it provides. See the FWaaS Install Guide for implementation details.
  • Deployers can now configure tempest public and private networks by setting the following variables: ‘tempest_private_net_provider_type’ to either vxlan or vlan, and ‘tempest_public_net_provider_type’ to flat or vlan. Depending on what the deployer sets these variables to, they may also need to update other variables accordingly; this mainly involves ‘tempest_public_net_physical_type’ and ‘tempest_public_net_seg_id’. Please refer to http://docs.openstack.org/mitaka/networking-guide/intro-basic-networking.html for more neutron networking information.
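
    For example, using the provider types listed above:

    tempest_private_net_provider_type: vxlan
    tempest_public_net_provider_type: flat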
  • The Project Calico Neutron networking plugin is now integrated into the os_neutron role. This can be activated using the instructions located in the role documentation.
  • The os_neutron role will now default to the OVS firewall driver when neutron_plugin_type is ml2.ovs and the host is running Ubuntu 16.04 on PowerVM. To override this default behavior, deployers should define neutron_ml2_conf_ini_overrides and neutron_openvswitch_agent_ini_overrides in user_variables.yml. For example:

    neutron_ml2_conf_ini_overrides:
      securitygroup:
        firewall_driver: neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    neutron_openvswitch_agent_ini_overrides:
      securitygroup:
        firewall_driver: iptables_hybrid
    
  • Neutron VPN as a Service (VPNaaS) can now optionally be deployed and configured. Please see the OpenStack Networking Guide for details about what the service is and what it provides. See the VPNaaS Install Guide for implementation details.
  • Support for Neutron distributed virtual routing has been added to the os_neutron role. This includes the implementation of Networking Guide’s suggested agent configuration. This feature may be activated by setting neutron_plugin_type: ml2.ovs.dvr in /etc/openstack_deploy/user_variables.yml.
  • The Horizon next generation instance management panels have been enabled by default. This changes Horizon to use the upstream defaults instead of the legacy panels. Documentation can be found here.
  • The nova SSH public key distribution has been made significantly faster, especially when deploying against very large clusters. To support larger clusters the role has moved away from the “authorized_key” module and now generates a script to insert keys that may be missing from the authorized keys file. The script is saved on all nova compute nodes and can be found at /usr/local/bin/openstack-nova-key.sh. If there is ever a need to reinsert keys or fix issues on a given compute node, the script can be executed at any time without directly running the ansible playbooks or roles.
  • The os_nova role can now detect and support basic deployment of a PowerVM environment. This sets the virtualization type to ‘powervm’ and installs/updates the PowerVM NovaLink package and nova-powervm driver.
  • Nova UCA repository support is implemented by default. This allows users to benefit from the updated packages for KVM. The nova_uca_enable variable controls the install source for the KVM packages. By default this value is set to True to make use of the UCA repository. Deployers can set it to False to disable the UCA repository.
  • A new configuration parameter security_ntp_bind_local_interfaces was added to the security role to restrict the network interface to which chronyd will listen for NTP requests.
  • The LXC container creation and modification process now supports online network additions. This ensures a container remains online when additional networks are added to a system.
  • Open vSwitch driver support has been implemented. This includes the implementation of the appropriate Neutron configuration and package installation. This feature may be activated by setting neutron_plugin_type: ml2.ovs in /etc/openstack_deploy/user_variables.yml.
  • An opportunistic Ansible execution strategy has been implemented. This allows the Ansible linear strategy to skip tasks with conditionals faster by never queuing the task when the conditional is evaluated to be false.
  • The Ansible SSH plugin has been modified to support running commands within containers without having to directly ssh into them. The change will detect presence of a container. If a container is found the physical host will be used as the SSH target and commands will be run directly. This will improve system reliability and speed while also opening up the possibility for SSH to be disabled from within the container itself.
  • Added horizon_apache_custom_log_format tunable to the os-horizon role for changing CustomLog format. Default is “combined”.
  • Added keystone_apache_custom_log_format tunable for changing CustomLog format. Default is “combined”.
  • Apache MPM tunable support has been added to the os-keystone role in order to allow MPM thread tuning. Default values reflect the current Ubuntu default settings:

    keystone_httpd_mpm_backend: event
    keystone_httpd_mpm_start_servers: 2
    keystone_httpd_mpm_min_spare_threads: 25
    keystone_httpd_mpm_max_spare_threads: 75
    keystone_httpd_mpm_thread_limit: 64
    keystone_httpd_mpm_thread_child: 25
    keystone_httpd_mpm_max_requests: 150
    keystone_httpd_mpm_max_conn_child: 0
    
  • Introduced option to deploy Keystone under Uwsgi. A new variable keystone_mod_wsgi_enabled is introduced to toggle this behavior. The default is true, which continues to deploy with mod_wsgi for Apache. The ports used by Uwsgi for socket and http connection for both public and admin Keystone services are configurable (see also the keystone_uwsgi_ports dictionary variable). Other Uwsgi configuration can be overridden by using the keystone_uwsgi_ini_overrides variable as documented under “Overriding OpenStack configuration defaults” in the OpenStack-Ansible Install Guide. Federation features should be considered _experimental_ with this configuration at this time.
  • Introduced option to deploy Keystone behind Nginx. A new variable keystone_apache_enabled is introduced to toggle this behavior. The default is true, which continues to deploy with Apache. Additional configuration can be delivered to Nginx through the use of the keystone_nginx_extra_conf list variable. Federation features are not supported with this configuration at this time. Use of this option requires keystone_mod_wsgi_enabled to be set to false, which will deploy Keystone under Uwsgi.
  • The os_cinder role now supports Ubuntu 16.04.
  • CentOS7/RHEL support has been added to the os_cinder role.
  • CentOS7/RHEL support has been added to the os_glance role.
  • CentOS7/RHEL support has been added to the os_keystone role.
  • The os_magnum role now supports deployment on Ubuntu 16.04 using systemd.
  • The galera_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting galera_client_package_state to present.
  • The ceph_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ceph_client_package_state to present.
  • The os_ironic role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ironic_package_state to present.
  • The os_nova role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting nova_package_state to present.
  • The memcached_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting memcached_package_state to present.
  • The os_heat role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting heat_package_state to present.
  • The rsyslog_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rsyslog_server_package_state to present.
  • The pip_install role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting pip_install_package_state to present.
  • The repo_build role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting repo_build_package_state to present.
  • The os_rally role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rally_package_state to present.
  • The os_glance role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting glance_package_state to present.
  • The security role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting security_package_state to present.
  • A new global option to control all package install states has been implemented. The default action for all distribution package installations is to ensure that the latest package is installed. This may be changed to only verify if the package is present by setting package_state to present.
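
    For example, in /etc/openstack_deploy/user_variables.yml:

    package_state: present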
  • The os_keystone role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting keystone_package_state to present.
  • The os_cinder role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting cinder_package_state to present.
  • The os_gnocchi role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting gnocchi_package_state to present.
  • The os_magnum role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting magnum_package_state to present.
  • The rsyslog_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rsyslog_client_package_state to present.
  • The os_sahara role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting sahara_package_state to present.
  • The repo_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting repo_server_package_state to present.
  • The haproxy_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting haproxy_package_state to present.
  • The os_aodh role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting aodh_package_state to present.
  • The openstack_hosts role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting openstack_hosts_package_state to present.
  • The galera_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting galera_server_package_state to present.
  • The rabbitmq_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rabbitmq_package_state to present.
  • The lxc_hosts role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting lxc_hosts_package_state to present.
  • The os_ceilometer role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ceilometer_package_state to present.
  • The os_swift role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting swift_package_state to present.
  • The os_neutron role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting neutron_package_state to present.
  • The os_horizon role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting horizon_package_state to present.
  • The PATH environment variable that is configured on the remote system can now be set using the openstack_host_environment_path list variable.
  • The repo build process now has the ability to store the pip sources within the build archive. This ability is useful when deploying environments that are “multi-architecture”, “multi-distro”, or “multi-interpreter”, where specific pre-built wheels may not be enough to support all of the deployment. To enable the ability to store the python source code within a given release, set the new option repo_build_store_pip_sources to true.
  • The repo server now has a Package Cache service for distribution packages. To leverage the cache, deployers will need to configure the package manager on all hosts to use the cache as a proxy. If a deployer would prefer to disable this service, the variable repo_pkg_cache_enabled should be set to false.
  • The rabbitmq_server role now supports deployer override of the RabbitMQ policies applied to the cluster. Deployers can override the rabbitmq_policies variable, providing a list of desired policies.
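
    For example, a policy list might look like the following (the policy shown is illustrative; consult the role defaults for the exact structure expected):

    rabbitmq_policies:
      - name: "HA"
        pattern: '^(?!amq\.).*'
        tags: "ha-mode=all"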
  • The RabbitMQ Management UI is now available through HAProxy on port 15672. The default userid is monitoring. This user can be modified by changing the parameter rabbitmq_monitoring_userid in the file user_variables.yml. Please note that ACLs have been added to this HAProxy service by default, such that it may only be accessed by common internal clients. Reference playbooks/vars/configs/haproxy_config.yml
  • A playbook has been added for deploying Rally in the utility containers.
  • Our general config options are now stored in an “/usr/local/bin/openstack-ansible.rc” file, which is sourced when the “openstack-ansible” wrapper is invoked. The RC file reads in BASH environment variables; should any Ansible option be set that overlaps with our defaults, the provided value will be used.
  • The LBaaSv2 device driver is now set by the Ansible variable neutron_lbaasv2_device_driver. The default is set to use the HaproxyNSDriver, which allows for agent-based load balancers.
  • The GPG key checks for package verification in V-38476 are now working for Red Hat Enterprise Linux 7 in addition to CentOS 7. The checks only look for GPG keys from Red Hat; any other GPG keys, such as those imported from the EPEL repository, are skipped.
  • CentOS7 support has been added to the rsyslog_client role.
  • The options of application logrotate configuration files are now configurable. rsyslog_client_log_rotate_options can be used to provide a list of directives, and rsyslog_client_log_rotate_scripts can be used to provide a list of postrotate, prerotate, firstaction, or lastaction scripts.
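
    For example (the directives shown are standard logrotate options, given here only as illustrations):

    rsyslog_client_log_rotate_options:
      - compress
      - notifempty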
  • Experimental support has been added to allow the deployment of the Sahara data-processing service. To deploy Sahara, hosts should be present in the host group sahara-infra_hosts.
  • The Sahara dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_sahara_ui: True
    
  • Tasks were added to search for any device files without a proper SELinux label on CentOS systems. If any of these device labels are found, the playbook execution will stop with an error message.
  • The repo build process now selectively clones git repositories based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the git repo for the service will not be cloned. This behaviour can be optionally changed to force all git repositories to be cloned by setting repo_build_git_selective to no.
  • The repo build process now selectively builds venvs based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the venv will not be built. This behaviour can be optionally changed to force all venvs to be built by setting repo_build_venv_selective to yes.
  • The repo build process now selectively builds python packages based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the list of python packages for the service will not be built. This behaviour can be optionally changed to force all python packages to be built by setting repo_build_wheel_selective to no.
  • A new variable is supported in the neutron_services dictionary called service_conf_path. This variable enables services to deploy their config templates to paths outside of /etc/neutron by specifying a directory using the new variable.
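
    A hypothetical neutron_services entry using the new variable might look like the following (the service name, path, and companion keys are illustrative only):

    neutron_services:
      example-agent:
        service_conf_path: /etc/example-agent
        service_conf: example-agent.ini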
  • The openstack-ansible-security role supports the application of the Red Hat Enterprise Linux 6 STIG configurations to systems running CentOS 7 and Ubuntu 16.04 LTS.
  • The fallocate_reserve option can now be set (in bytes or as a percentage) for swift by using the swift_fallocate_reserve variable in /etc/openstack_deploy/user_variables.yml. This value is the amount of space to reserve on a disk to prevent a situation where swift is unable to remove objects due to a lack of available disk space to work with. The default value is 1% of the total disk size.
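
    For example, in /etc/openstack_deploy/user_variables.yml (the values below are illustrative):

    # Reserve a fixed amount of space, in bytes ...
    swift_fallocate_reserve: 10737418240
    # ... or reserve a percentage of each disk:
    # swift_fallocate_reserve: "2%"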
  • The openstack-ansible-os_swift role will now prevent deployers from changing the swift_hash_path_prefix and swift_hash_path_suffix variables on clusters that already have a value set in /etc/swift/swift.conf. You can set the new swift_force_change_hashes variable to True to force the swift_hash_path_ variables to be changed. We recommend setting this by running the os-swift.yml playbook with -e swift_force_change_hashes=True, to avoid changing the swift_hash_path_ variables unintentionally. Use with caution, changing the swift_hash_path_ values causes end-user impact.
  • The os_swift role has three new variables that allow a deployer to change the hard, soft, and fs.file-max limits. The hard and soft limits are added to the limits.conf file for the swift system user. The fs.file-max settings are applied to storage hosts via kernel tuning. The new options are swift_hard_open_file_limits (default 10240), swift_soft_open_file_limits (default 4096), and swift_max_file_limits (default 24 times the value of swift_hard_open_file_limits).
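
    For example (the values below are illustrative, not recommendations):

    swift_hard_open_file_limits: 20480
    swift_soft_open_file_limits: 8192
    # Defaults to 24 * swift_hard_open_file_limits when unset.
    swift_max_file_limits: 491520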
  • The pretend_min_part_hours_passed option can now be passed to swift-ring-builder prior to performing a rebalance. This is set by the swift_pretend_min_part_hours_passed boolean variable. The default for this variable is False. We recommend setting this by running the os-swift.yml playbook with -e swift_pretend_min_part_hours_passed=True, to avoid resetting min_part_hours unintentionally on every run. Setting swift_pretend_min_part_hours_passed to True will reset the clock on the last time a rebalance happened, thus circumventing the min_part_hours check. This should only be used with extreme caution. If you run this command and deploy rebalanced rings before a replication pass completes, you may introduce unavailability in your cluster. This has an end-user impact.
  • While the default Python interpreter for swift is CPython, pypy is now an option. This change adds the ability to greatly improve swift performance without core code modifications. These changes have been implemented using the documentation provided by Intel and Swiftstack. Notes about the performance increase can be seen here.
  • Change the port for devices in the ring by adjusting the port value for services, hosts, or devices. This will not involve a rebalance of the ring.
  • Changing the port for a device, or group of devices, carries a brief period of downtime to the swift storage services for those devices. The devices will be unavailable during the period between when the storage service restarts after the port update and when the ring updates to match the new port.
  • Enable the rsync module per object server drive by setting swift_rsync_module_per_drive to True. Set this to configure rsync and swift to use individual configuration per drive. This is required when disabling rsync to individual disks, for example in a disk-full scenario.
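
    To enable individual rsync configuration per drive, set:

    swift_rsync_module_per_drive: True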
  • The os_swift role will now include the swift “staticweb” middleware by default.
  • The os_swift role now allows the permissions for the log files created by the swift account, container and object servers to be set. The variable is swift_syslog_log_perms and is set to 0644 by default.
  • Support added to allow deploying on ppc64le architecture using the Ubuntu distributions.
  • Support has been added to allow the functional tests to pass when deploying on ppc64le architecture using the Ubuntu distributions.
  • Support for the deployment of Unbound caching DNS resolvers has been added as an optional replacement for /etc/hosts management across all hosts in the environment. To enable the Unbound DNS containers, add unbound_hosts entries to the environment.
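
    A hypothetical openstack_user_config.yml entry (the host name and address are examples only):

    unbound_hosts:
      infra1:
        ip: 172.29.236.101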
  • The repo_build role now provides the ability to override the upper-constraints applied which are sourced from OpenStack and from the global-requirements-pins.txt file. The variable repo_build_upper_constraints_overrides can be populated with a list of upper constraints. This list will take the highest precedence in the constraints process, with the exception of the pins set in the git source SHAs.
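
    For example (the pin below is illustrative only):

    repo_build_upper_constraints_overrides:
      - "requests==2.11.1"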

Known Issues

  • Deployments on ppc64le are limited to Ubuntu 16.04 for the Newton release of OpenStack-Ansible.
  • The variables haproxy_keepalived_(internal|external)_cidr now have defaults set to 169.254.(2|1).1/24. This prevents Ansible undefined-variable warnings. Deployers must set values for these variables to obtain a working haproxy-with-keepalived environment when using more than one haproxy node.
  • In the latest stable version of keepalived there is a problem with the priority calculation when a deployer has more than five keepalived nodes. The problem causes the whole keepalived cluster to fail to work. To work around this issue it is recommended that deployers limit the number of keepalived nodes to no more than five or that the priority for each node is set as part of the configuration (cf. haproxy_keepalived_vars_file variable).
  • Paramiko version 2.0 requires the Python cryptography library. New system packages must be installed for this library. For OpenStack-Ansible versions <12.0.12, <11.2.15, <13.0.2, the system packages must be installed on the deployment host manually by executing apt-get install -y build-essential libssl-dev libffi-dev.

Upgrade Notes

  • LXC containers will now have a proper RFC1034/5 hostname set during post build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the “_” in the inventory_hostname to “-“. Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.
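
    For example, to use a custom domain (the value is illustrative):

    lxc_container_domain: "example.internal"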
  • A new global variable has been created named openstack_domain. This variable has a default value of “openstack.local”.
  • The ca-certificates package has been included in the LXC container build process in order to prevent issues related to trying to connect to public websites which make use of newer certificates than exist in the base CA certificate store.
  • In order to reduce the time taken for fact gathering, the default subset gathered has been reduced to a smaller set than the Ansible default. This may be changed by the deployer by setting the ANSIBLE_GATHER_SUBSET variable in the bash environment prior to executing any ansible commands.
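
    For example, to restore Ansible's full default fact gathering before running a playbook:

    export ANSIBLE_GATHER_SUBSET="all"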
  • The environment variable FORKS is no longer used. The standard Ansible environment variable ANSIBLE_FORKS should be used instead.
  • The Galera client role now has a dependency on the apt package pinning role.
  • The variable security_audit_apparmor_changes is now renamed to security_audit_mac_changes and is enabled by default. Setting security_audit_mac_changes to no will disable syscall auditing for any changes to AppArmor policies (in Ubuntu) or SELinux policies (in CentOS).
  • When upgrading, deployers will need to ensure they have a backup of all logging from within the containers prior to running the playbooks. If a logging node is present within the deployment, all logs should already be synced with the logging server and no action is required. As a pre-step, it is recommended that deployers clean up logging directories within containers prior to running the playbooks. After the playbooks have run, the bind mount will be in effect at “/var/log”, which will mount over all previous log files and directories.
  • Due to a new bind mount at “/var/log” all containers will be restarted. This is a required restart. It is recommended that deployers run the container restarts in serial to not impact production workloads.
  • The default value of service_credentials/os_endpoint_type within ceilometer’s configuration file has been changed to internalURL. This may be overridden through the use of the ceilometer_ceilometer_conf_overrides variable.
  • The default database collation has changed from utf8_unicode_ci to utf8_general_ci. Existing databases and tables will need to be converted.
  • The LXC container cache preparation process now copies package repository configuration from the host instead of implementing its own configuration. The following variables are therefore unnecessary and have been removed:
    • lxc_container_template_main_apt_repo
    • lxc_container_template_security_apt_repo
    • lxc_container_template_apt_components
  • The LXC container cache preparation process now copies DNS resolution configuration from the host instead of implementing its own configuration. The lxc_cache_resolvers variable is therefore unnecessary and has been removed.
  • The MariaDB wait_timeout setting is decreased to 1h to match the SQL Alchemy pool recycle timeout, in order to prevent unnecessary database session buildups.
  • The variable repo_server_packages that defines the list of packages required to install a repo server has been replaced by repo_server_distro_packages.
  • If there are swift hosts in the environment, then the value for cinder_service_backup_program_enabled will automatically be set to True. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.
  • If there are swift hosts in the environment, then the value for glance_default_store will automatically be set to swift. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.
  • The variable security_sysctl_enable_tcp_syncookies has replaced security_sysctl_tcp_syncookies and it is now a boolean instead of an integer. It is still enabled by default, but deployers can disable TCP syncookies by setting the following Ansible variable:

    security_sysctl_enable_tcp_syncookies: no
    
  • The glance_apt_packages variable has been renamed to glance_distro_packages so that it applies to multiple operating systems.
  • Within the haproxy role hatop has been changed from a package installation to a source-based installation. This has been done to ensure that the same operator tooling is available across all supported distributions. The download URL for the source can be set using the variable haproxy_hatop_download_url.
  • Haproxy has a new backend to support using the repo server nodes as a git server. The new backend is called “repo_git” and uses port “9418”. Default ACLs have been created to lock down the port’s availability to only internal networks originating from an RFC1918 address.
  • Haproxy has a new backend to support using the repo server nodes as a package manager cache. The new backend is called “repo_cache” and uses port “3142” and a single active node. All other nodes within the pool are backups and will be promoted if the active node goes down. Default ACLs have been created to lock down the port’s availability to only internal networks originating from an RFC1918 address.
  • SSL termination is assumed enabled for all public endpoints by default. If this is not needed it can be disabled by setting the openstack_external_ssl option to false and the openstack_service_publicuri_proto to http.
  • If HAProxy is used as the loadbalancer for a deployment it will generate a self-signed certificate by default. If HAProxy is NOT used, an SSL certificate should be installed on the external loadbalancer. The installation of an SSL certificate on an external load balancer is not covered by the deployment tooling.
  • In previous releases connections to Horizon originally terminated SSL at the Horizon container. While that is still an option, SSL is now assumed to be terminated at the load balancer. If you wish to terminate SSL at the horizon node change the horizon_external_ssl option to false.
  • Public endpoints will need to be updated using the Keystone admin API to support secure endpoints. The Keystone ansible module will not recreate the endpoints automatically. Documentation on the Keystone service catalog can be found here.
  • Upgrades will not replace entries in the /etc/openstack_deploy/env.d directory, though new versions of OpenStack-Ansible will now use the shipped env.d as a base, which may alter existing deployments.
  • The variable used to store the mysql password used by the ironic service account has been changed. The following variable:

    ironic_galera_password: secrete
    

    has been changed to:

    ironic_container_mysql_password: secrete
    
  • There is a new default configuration for keepalived. When running the haproxy playbook, the configuration change will cause a keepalived restart unless the deployer has used a custom configuration file. The restart will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.
  • A new version of keepalived will be installed on the haproxy nodes if the variable keepalived_use_latest_stable is set to True and more than one haproxy node is configured. The update of the package will cause keepalived to restart and therefore will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.
  • A new nova.conf entry, live_migration_uri, has been added. This entry will default to a qemu-ssh:// URI, which uses the ssh keys that have already been distributed between all of the compute hosts.
  • The lxc_container_create role no longer uses the distro specific lxc container create template.
  • The following variable changes have been made in the lxc_host role:
    • lxc_container_template: Removed because the template option is now contained within the operating system specific variable file loaded at runtime.
    • lxc_container_template_options: This option was renamed to lxc_container_download_template_options. The deprecation filter was not used because the values provided from this option have been fundamentally changed and old overrides will cause problems.
    • lxc_container_release: Removed because image is now tied with the host operating system.
    • lxc_container_user_name: Removed because the default users are no longer created when the cached image is created.
    • lxc_container_user_password: Removed because the default users are no longer created when the cached image is created.
    • lxc_container_template_main_apt_repo: Removed because this option is now being set within the cache creation process and is no longer needed here.
    • lxc_container_template_security_apt_repo: Removed because this option is now being set within the cache creation process and is no longer needed here.
  • The lxc_host role no longer uses the distro specific lxc container create template.
  • The following variable changes have been made in the lxc_host role:
    • lxc_container_user_password: Removed because the default lxc container user is no longer created by the lxc container template.
    • lxc_container_template_options: This option was renamed to lxc_cache_download_template_options. The deprecation filter was not used because the values provided from this option have been fundamentally changed and potentially old overrides will cause problems.
    • lxc_container_base_delete: Removed because the cache will be refreshed upon role execution.
    • lxc_cache_validate_certs: Removed because the Ansible get_url module is no longer used.
    • lxc_container_caches: Removed because the container create process will build a cached image based on the host OS.
  • LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.
  • The dynamic_inventory script previously set the provider network attributes is_container_address and is_ssh_address to True for the management network regardless of whether a deployer had them configured this way or not. Now, these attributes must be configured by deployers and the dynamic_inventory script will fail if they are missing or not True.
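
    A management provider network entry carrying the now-required attributes might look like this in openstack_user_config.yml (the bridge, interface, and queue names are typical examples, not requirements):

    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true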
  • During upgrades, container and service restarts for the mariadb/galera cluster were triggered multiple times, causing the cluster to become unstable and often unrecoverable. This behaviour has been improved so that restarts of the galera containers happen only once, in a controlled, predictable, and repeatable way.
  • The memcached log is no longer written to /var/log/memcached.log and is now stored in the /var/log/memcached folder.
  • The variable galera_client_apt_packages has been replaced by galera_client_distro_packages.
  • Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.
  • Database migration tasks have been added for the dynamic routing neutron plugin.
  • As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. The default DHCP configuration to advertise an MTU to instances has therefore been removed from the variable neutron_dhcp_config.
  • Database migration tasks have been added for the FWaaS neutron plugin.
  • As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. As such, the neutron_network_device_mtu variable has been removed, and the hard-coded values in the templates for advertise_mtu, path_mtu, and segment_mtu have been removed to allow upstream defaults to operate as intended.
  • The new host group neutron_openvswitch_agent has been added to the env.d/neutron.yml and env.d/nova.yml environment configuration files in order to support the implementation of Open vSwitch. Deployers must ensure that their environment configuration files are updated to include the above group name. Please see the example implementations in env.d/neutron.yml and env.d/nova.yml.
  • The variable neutron_agent_mode has been removed from the os_neutron role. The appropriate value for l3_agent.ini is now determined based on the neutron_plugin_type and host group membership.
  • The default horizon instance launch panels have been changed to the next generation panels. To enable legacy functionality set the following options accordingly:

    horizon_launch_instance_legacy: True
    horizon_launch_instance_ng: False
    
  • A new nova admin endpoint will be registered with the suffix /v2.1/%(tenant_id)s. The nova admin endpoint with the suffix /v2/%(tenant_id)s may be manually removed.
  • Cleanup tasks are added to remove the nova console git directories /usr/share/novnc and /usr/share/spice-html5, prior to cloning these inside the nova vnc and spice console playbooks. This is necessary to guarantee that local modifications do not break git clone operations, especially during upgrades.
  • The variable neutron_linuxbridge has been removed as it is no longer used.
  • The variable neutron_driver_interface has been removed. The appropriate value for neutron.conf is now determined based on the neutron_plugin_type.
  • The variable neutron_driver_firewall has been removed. The appropriate value for neutron.conf is now determined based on the neutron_plugin_type.
  • The variable neutron_ml2_mechanism_drivers has been removed. The appropriate value for ml2_conf.ini is now determined based on the neutron_plugin_type.
  • Installation of glance and its dependent pip packages will now only occur within a Python virtual environment. The glance_venv_bin, glance_venv_enabled, glance_venv_etc_dir, and glance_non_venv_etc_dir variables have been removed.
  • Installation of gnocchi and its dependent pip packages will now only occur within a Python virtual environment. The gnocchi_venv_bin, gnocchi_venv_enabled, gnocchi_venv_etc_dir, and gnocchi_non_venv_etc_dir variables have been removed.
  • Installation of heat and its dependent pip packages will now only occur within a Python virtual environment. The heat_venv_bin and heat_venv_enabled variables have been removed.
  • Installation of horizon and its dependent pip packages will now only occur within a Python virtual environment. The horizon_venv_bin, horizon_venv_enabled, horizon_venv_lib_dir, and horizon_non_venv_lib_dir variables have been removed.
  • Installation of ironic and its dependent pip packages will now only occur within a Python virtual environment. The ironic_venv_bin and ironic_venv_enabled variables have been removed.
  • Installation of keystone and its dependent pip packages will now only occur within a Python virtual environment. The keystone_venv_enabled variable has been removed.
  • The Neutron L3 Agent configuration for the handle_internal_only_routers variable is removed in order to use the Neutron upstream default setting. The current default for handle_internal_only_routers is True, which allows Neutron L3 routers without external networks attached (as discussed per https://bugs.launchpad.net/neutron/+bug/1572390).
  • Installation of aodh and its dependent pip packages will now only occur within a Python virtual environment. The aodh_venv_enabled and aodh_venv_bin variables have been removed.
  • Installation of ceilometer and its dependent pip packages will now only occur within a Python virtual environment. The ceilometer_venv_enabled and ceilometer_venv_bin variables have been removed.
  • Installation of cinder and its dependent pip packages will now only occur within a Python virtual environment. The cinder_venv_enabled and cinder_venv_bin variables have been removed.
  • Installation of magnum and its dependent pip packages will now only occur within a Python virtual environment. The magnum_venv_bin, magnum_venv_enabled variables have been removed.
  • Installation of neutron and its dependent pip packages will now only occur within a Python virtual environment. The neutron_venv_enabled, neutron_venv_bin, neutron_non_venv_lib_dir and neutron_venv_lib_dir variables have been removed.
  • Installation of nova and its dependent pip packages will now only occur within a Python virtual environment. The nova_venv_enabled, nova_venv_bin variables have been removed.
  • Installation of rally and its dependent pip packages will now only occur within a Python virtual environment. The rally_venv_bin, rally_venv_enabled variables have been removed.
  • Installation of sahara and its dependent pip packages will now only occur within a Python virtual environment. The sahara_venv_bin, sahara_venv_enabled, sahara_venv_etc_dir, and sahara_non_venv_etc_dir variables have been removed.
  • Installation of swift and its dependent pip packages will now only occur within a Python virtual environment. The swift_venv_enabled, swift_venv_bin variables have been removed.
  • The variable keystone_apt_packages has been renamed to keystone_distro_packages.
  • The variable keystone_idp_apt_packages has been renamed to keystone_idp_distro_packages.
  • The variable keystone_sp_apt_packages has been renamed to keystone_sp_distro_packages.
  • The variable keystone_developer_apt_packages has been renamed to keystone_developer_mode_distro_packages.
  • The variable glance_apt_packages has been renamed to glance_distro_packages.
  • The variable horizon_apt_packages has been renamed to horizon_distro_packages.
  • The variable aodh_apt_packages has been renamed to aodh_distro_packages.
  • The variable cinder_apt_packages has been renamed to cinder_distro_packages.
  • The variable cinder_volume_apt_packages has been renamed to cinder_volume_distro_packages.
  • The variable cinder_lvm_volume_apt_packages has been renamed to cinder_lvm_volume_distro_packages.
  • The variable ironic_api_apt_packages has been renamed to ironic_api_distro_packages.
  • The variable ironic_conductor_apt_packages has been renamed to ironic_conductor_distro_packages.
  • The variable ironic_conductor_standalone_apt_packages has been renamed to ironic_conductor_standalone_distro_packages.
  • The variable galera_pre_packages has been renamed to galera_server_required_distro_packages.
  • The variable galera_packages has been renamed to galera_server_mariadb_distro_packages.
  • The variable haproxy_pre_packages has been renamed to haproxy_required_distro_packages.
  • The variable haproxy_packages has been renamed to haproxy_distro_packages.
  • The variable memcached_apt_packages has been renamed to memcached_distro_packages.
  • The variable neutron_apt_packages has been renamed to neutron_distro_packages.
  • The variable neutron_lbaas_apt_packages has been renamed to neutron_lbaas_distro_packages.
  • The variable neutron_vpnaas_apt_packages has been renamed to neutron_vpnaas_distro_packages.
  • The variable neutron_apt_remove_packages has been renamed to neutron_remove_distro_packages.
  • The variable heat_apt_packages has been renamed to heat_distro_packages.
  • The variable ceilometer_apt_packages has been renamed to ceilometer_distro_packages.
  • The variable ceilometer_developer_mode_apt_packages has been renamed to ceilometer_developer_mode_distro_packages.
  • The variable swift_apt_packages has been renamed to swift_distro_packages.
  • The variable lxc_apt_packages has been renamed to lxc_hosts_distro_packages.
  • The variable openstack_host_apt_packages has been renamed to openstack_host_distro_packages.
  • The galera_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option galera_client_package_state should be set to present.
  • The ceph_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ceph_client_package_state should be set to present.
  • The os_ironic role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ironic_package_state should be set to present.
  • The os_nova role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option nova_package_state should be set to present.
  • The memcached_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option memcached_package_state should be set to present.
  • The os_heat role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option heat_package_state should be set to present.
  • The rsyslog_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rsyslog_server_package_state should be set to present.
  • The pip_install role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option pip_install_package_state should be set to present.
  • The repo_build role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option repo_build_package_state should be set to present.
  • The os_rally role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rally_package_state should be set to present.
  • The os_glance role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option glance_package_state should be set to present.
  • The security role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option security_package_state should be set to present.
  • All roles always check whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option package_state should be set to present.
  • The os_keystone role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option keystone_package_state should be set to present.
  • The os_cinder role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option cinder_package_state should be set to present.
  • The os_gnocchi role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option gnocchi_package_state should be set to present.
  • The os_magnum role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option magnum_package_state should be set to present.
  • The rsyslog_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rsyslog_client_package_state should be set to present.
  • The os_sahara role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option sahara_package_state should be set to present.
  • The repo_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option repo_server_package_state should be set to present.
  • The haproxy_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option haproxy_package_state should be set to present.
  • The os_aodh role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option aodh_package_state should be set to present.
  • The openstack_hosts role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option openstack_hosts_package_state should be set to present.
  • The galera_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option galera_server_package_state should be set to present.
  • The rabbitmq_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rabbitmq_package_state should be set to present.
  • The lxc_hosts role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option lxc_hosts_package_state should be set to present.
  • The os_ceilometer role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ceilometer_package_state should be set to present.
  • The os_swift role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option swift_package_state should be set to present.
  • The os_neutron role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option neutron_package_state should be set to present.
  • The os_horizon role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option horizon_package_state should be set to present.
  • The variable rsyslog_client_packages has been replaced by rsyslog_client_distro_packages.
  • The variable rsyslog_server_packages has been replaced by rsyslog_server_distro_packages.
  • The variable rabbitmq_monitoring_password has been added to user_secrets.yml. If this variable does not exist, the RabbitMQ monitoring user will not be created.
  • All of the discretionary access control (DAC) auditing is now disabled by default. This reduces the amount of logs generated during deployments and minor upgrades. The following variables are now set to no:

    security_audit_DAC_chmod: no
    security_audit_DAC_chown: no
    security_audit_DAC_lchown: no
    security_audit_DAC_fchmod: no
    security_audit_DAC_fchmodat: no
    security_audit_DAC_fchown: no
    security_audit_DAC_fchownat: no
    security_audit_DAC_fremovexattr: no
    security_audit_DAC_lremovexattr: no
    security_audit_DAC_fsetxattr: no
    security_audit_DAC_lsetxattr: no
    security_audit_DAC_setxattr: no
    
  • The container property container_release has been removed as this is automatically set to the same version as the host in the container creation process.
  • The variable lxc_container_release has been removed from the lxc-container-create.yml playbook as it is no longer consumed by the container creation process.
  • LBaaSv1 has been removed from the neutron-lbaas project in the Newton release and it has been removed from OpenStack-Ansible as well.
  • The LVM configuration tasks and lvm.conf template have been removed from the openstack_hosts role since they are no longer needed. All of the LVM configuration is properly handled in the os_cinder role.
  • In the rsyslog_client role, the variable rsyslog_client_repos has been removed as it is no longer used.
  • Percona Xtrabackup has been removed from the Galera client role.
  • The infra_hosts and infra_containers inventory groups have been removed. No containers or services were assigned to these groups exclusively, and the usage of the groups has been supplanted by the shared-infra_* and os-infra_* groups for some time. Deployers who were using the groups should adjust any custom configuration in the env.d directory to assign containers and/or services to other groups.
  • The variable verbose has been removed. Deployers should rely on the debug var to enable higher levels of memcached logging.
  • The variable verbose has been removed. Deployers should rely on the debug var to enable higher levels of logging.
  • The aodh-api init service is removed since aodh-api is deployed as an apache mod_wsgi service.
  • The ceilometer-api init service is removed since ceilometer-api is deployed as an apache mod_wsgi service.
  • The database create and user creates have been removed from the os_heat role. These tasks have been relocated to the playbooks.
  • The database create and user creates have been removed from the os_nova role. These tasks have been relocated to the playbooks.
  • The database create and user creates have been removed from the os_glance role. These tasks have been relocated to the playbooks.
  • The database and user creates have been removed from the os_horizon role. These tasks have been relocated to the playbooks.
  • The database create and user creates have been removed from the os_cinder role. These tasks have been relocated to the playbooks.
  • The database create and user creates have been removed from the os_neutron role. These tasks have been relocated to the playbooks.
  • The Neutron HA tool written by AT&T is no longer enabled by default. This tool provided HA capabilities for networks and routers that were not using native Neutron L3HA. Because native Neutron L3HA is stable, compatible with the Linux Bridge Agent, and a better means of enabling HA within a deployment, this tool is no longer set up by default. If the legacy HA tool is needed within a deployment, the deployer can set neutron_legacy_ha_tool_enabled to true to enable the legacy tooling.
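    For example, a deployer still relying on the legacy tool would add the following to a user variables file:

    neutron_legacy_ha_tool_enabled: true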
  • The repo_build_apt_packages variable has been renamed. repo_build_distro_packages should be used instead to override packages required to build Python wheels and venvs.
  • The repo_build role now makes use of Ubuntu Cloud Archive by default. This can be disabled by setting repo_build_uca_enable to False.
  • New overrides are provided to allow for better customization around logfile retention and rate limiting for UDP/TCP sockets.

    • rsyslog_server_logrotation_window defaults to 14 days
    • rsyslog_server_ratelimit_interval defaults to 0 seconds
    • rsyslog_server_ratelimit_burst defaults to 10000
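    As an illustrative override (the values shown are examples only, not recommendations), a deployer could extend retention to 30 days and relax the rate limit:

    rsyslog_server_logrotation_window: 30
    rsyslog_server_ratelimit_interval: 1
    rsyslog_server_ratelimit_burst: 20000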
  • The rsyslog.conf file now uses v7+ style configuration settings.
  • The swift_fallocate_reserve default value has changed from 10737418240 (10GB) to 1% in order to match the OpenStack swift default setting.
  • A new option swift_pypy_enabled has been added to enable or disable the pypy interpreter for swift. The default is “false”.
  • A new option swift_pypy_archive has been added to allow a pre-built pypy archive to be downloaded and moved into place to support swift running under pypy. This option is a dictionary and contains the URL and SHA256 as keys.
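    A sketch of the expected structure, assuming url and sha256 as the dictionary key names and using a placeholder download location:

    swift_pypy_enabled: true
    swift_pypy_archive:
      url: "https://example.com/pypy-linux64.tar.bz2"
      sha256: "<sha256 checksum of the archive>"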
  • The swift_max_rsync_connections default value has changed from 2 to 4 in order to match the OpenStack swift documented value.
  • When upgrading a Swift deployment from Mitaka to Newton, note that the enabled middleware list has changed: in Newton the “staticweb” middleware is loaded by default. While the change adds a feature, it is non-disruptive in upgrades.
  • All variables in the security role are now prepended with security_ to avoid collisions with variables in other roles. All deployers who have used the security role in previous releases will need to prepend all security role variables with security_.

    For example, a deployer could have disabled direct root ssh logins with the following variable:

    ssh_permit_root_login: yes
    

    That variable would become:

    security_ssh_permit_root_login: yes
    
  • Ceilometer no longer manages alarm storage when Aodh is enabled. It now redirects alarm-related requests to the Aodh API. This is now auto-enabled when Aodh is deployed.
  • Overrides for ceilometer aodh_connection_string will no longer work. Specifying an Aodh connection string in Ceilometer was deprecated within Ceilometer in a prior release so this option has been removed.
  • The neutron_plugin_base variable has been modified to use the friendly names. Deployers should change any customisations to this variable to ensure that the customisation makes use of the short names instead of the full class path.
  • Database migration tasks have been added for the LBaaS neutron plugin.
  • Hosts running LXC on Ubuntu 14.04 will now need to enable the “trusty-backports” repository. The backports repo on Ubuntu 14.04 is now required to ensure LXC is updated to the latest stable version.
  • The Aodh data migration script should be run to migrate alarm data from MongoDB storage to Galera due to the pending removal of MongoDB support.
  • Neutron now makes use of Ubuntu Cloud Archive by default. This can be disabled by setting neutron_uca_enable to False.
  • The utility-all.yml playbook will no longer distribute the deployment host’s root user’s private ssh key to all utility containers. Deployers who desire this behavior should set the utility_ssh_private_key variable.
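    For example, a deployer who wants a specific key distributed could set the following (the key path is illustrative):

    utility_ssh_private_key: "{{ lookup('file', '/root/.ssh/id_rsa') }}"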
  • The following variables have been renamed in order to make the variable names neutral for multiple operating systems.

    • nova_apt_packages -> nova_distro_packages
    • nova_spice_apt_packages -> nova_spice_distro_packages
    • nova_novnc_apt_packages -> nova_novnc_distro_packages
    • nova_compute_kvm_apt_packages -> nova_compute_kvm_distro_packages

Deprecation Notes

  • Removed cirros_tgz_url and in most places replaced with tempest_img_url.
  • Removed cirros_img_url and in most places replaced with tempest_img_url.
  • Removed deprecated variable tempest_compute_image_alt_ssh_user
  • Removed deprecated variable tempest_compute_image_ssh_password
  • Removed deprecated variable tempest_compute_image_alt_ssh_password
  • Renamed cirros_img_disk_format to tempest_img_disk_format
  • Downloading and unarchiving a .tar.gz has been removed. The related tempest options ami_img_file, aki_img_file, and ari_img_file have been removed from tempest.conf.j2.
  • The [boto] section of tempest.conf.j2 has been removed. These tests have been completely removed from tempest for some time.
  • The openstack_host_apt_packages variable has been deprecated. openstack_host_packages should be used instead to override packages required to install on all OpenStack hosts.
  • The rabbitmq_apt_packages variable has been deprecated. rabbitmq_dependencies should be used instead to override additional packages to install alongside rabbitmq-server.
  • Moved haproxy_service_configs var to haproxy_default_service_configs so that haproxy_service_configs can be modified and added to without overriding the entire default service dict.
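    For example, an additional service can now be supplied without redefining the entire default service dict (the service shown is purely illustrative):

    haproxy_service_configs:
      - service:
          haproxy_service_name: example_service
          haproxy_port: 8080
          haproxy_balance_type: http
          haproxy_backend_nodes: "{{ groups['example_service_all'] | default([]) }}"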
  • galera_package_url changed to percona_package_url for clarity
  • galera_package_sha256 changed to percona_package_sha256 for clarity
  • galera_package_path changed to percona_package_path for clarity
  • galera_package_download_validate_certs changed to percona_package_download_validate_certs for clarity
  • The main function in dynamic_inventory.py now takes named arguments instead of a dictionary. This is to support future code changes that will move construction logic into separate files.
  • Installation of Ansible on the root system, outside of a virtual environment, will no longer be supported.
  • The variables galera_client_package_* and galera_client_apt_percona_xtrabackup_* have been removed from the role as Xtrabackup is no longer deployed.
  • The Neutron HA tool written by AT&T has been deprecated and will be removed in the Ocata release.
  • The old class path names used within the neutron_plugin_base have been deprecated in favor of the friendly names. Support for the use of the class path plugins will be removed in the OpenStack Newton cycle.

Security Issues

  • A sudoers entry has been added to the repo_servers in order to allow the nginx user to stop and start nginx via the init script. This is implemented in order to ensure that the repo sync process can shut off nginx while synchronising data from the master to the slaves.
  • A self-signed certificate will now be generated by default when HAproxy is used as a load balancer. This certificate is used to terminate the public endpoint for Horizon and all OpenStack API services.
  • Horizon disables password autocompletion in the browser by default, but deployers can now enable autocompletion by setting horizon_enable_password_autocomplete to True.
  • When enabled, Neutron Firewall as a Service (FWaaS) provides projects the option to implement perimeter security (filtering at the router), adding to filtering at the instance interfaces which is provided by ‘Security Groups’.
  • The admin_token_auth middleware presents a potential security risk and will be removed in a future release of keystone. Its use can be removed by setting the keystone_keystone_paste_ini_overrides variable.

    keystone_keystone_paste_ini_overrides:
      pipeline:public_api:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service
      pipeline:admin_api:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service
      pipeline:api_v3:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
    

Bug Fixes

  • This role assumes that there is a network named “public|private” and a subnet named “public|private-subnet”. These names are now configurable through two sets of variables: tempest_public_net_name and tempest_public_subnet_name for public networks, and tempest_private_net_name and tempest_private_subnet_name for private networks. This addresses bug 1588818.
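    For example, to point tempest at differently named networks (the names shown are illustrative):

    tempest_public_net_name: ext-net
    tempest_public_subnet_name: ext-subnet
    tempest_private_net_name: internal
    tempest_private_subnet_name: internal-subnet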
  • The /run directory is excluded from AIDE checks since the files and directories there are only temporary and often change when services start and stop.
  • AIDE initialization is now always run on subsequent playbook runs when security_initialize_aide is set to yes. The initialization will be skipped if AIDE isn’t installed or if the AIDE database already exists.

    See bug 1616281 for more details.

  • Added architecture-specific locations for percona-xtrabackup and qpress, with alternate locations provided for ppc64el due to package unavailability from the current provider.
  • The role previously did not restart the audit daemon after generating a new rules file. The bug has been fixed and the audit daemon will be restarted after any audit rule changes.
  • Logging within the container is now bind mounted to the host. This resolves bug 1588051 (https://bugs.launchpad.net/openstack-ansible/+bug/1588051).
  • Removed various deprecated / no longer supported features from tempest.conf.j2. Some variables have been moved to their new sections in the config.
  • The standard collectstatic and compression process in the os_horizon role now happens after horizon customizations are installed, so that all static resources will be collected and compressed.
  • LXC containers will now have the ability to use a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set true. This change will assist in resolving a long standing issue where network intensive services, such as neutron and rabbitmq, can enter a confused state for long periods of time and require rolling restarts or internal system resets to recover.
  • The dictionary-based variables in defaults/main.yml are now individual variables. The dictionary-based variables could not be changed as the documentation instructed. Instead it was required to override the entire dictionary. Deployers must use the new variable names to enable or disable the security configuration changes applied by the security role. For more information, see Launchpad Bug 1577944.
  • Failed access logging is now disabled by default and can be enabled by changing security_audit_failed_access to yes. The rsyslog daemon checks for the existence of log files regularly and this audit rule was triggered very frequently, which led to very large audit logs.
  • An Ansible task was added to disable the netconsole service on CentOS systems if the service is installed on the system.

    Deployers can opt-out of this change by setting security_disable_netconsole to no.

  • In order to ensure that the appropriate data is delivered to requesters from the repo servers, the slave repo_server web servers are taken offline during the synchronisation process. This ensures that the right data is always delivered to the requesters through the load balancer.
  • The pip_install_options variable is now honored during repo building. This variable allows deployers to specify trusted CA certificates by setting the variable to “–cert /etc/ssl/certs/ca-certificates.crt”
  • The security role previously set the permissions on all audit log files in /var/log/audit to 0400, but this prevents the audit daemon from writing to the active log file. This will prevent auditd from starting or restarting cleanly.

    The task now removes any permissions that are not allowed by the STIG. Any log files that meet or exceed the STIG requirements will not be modified.

  • When the security role was run in Ansible’s check mode and a tag was provided, the check_mode variable was not being set. Any tasks which depend on that variable would fail. This bug is fixed and the check_mode variable is now set properly on every playbook run.
  • The security role now handles ssh_config files that contain Match stanzas. A marker is added to the configuration file and any new configuration items will be added below that marker. In addition, the configuration file is validated for each change to the ssh configuration file.
  • Horizon deployments were broken due to an incorrect hostname setting being placed in the apache ServerName configuration. This caused Horizon startup failure any time debug was disabled.
  • The ability to support login user domain and login project domain has been added to the keystone module. This resolves https://bugs.launchpad.net/openstack-ansible/+bug/1574000

    # Example usage
    - keystone:
        command: ensure_user
        endpoint: "{{ keystone_admin_endpoint }}"
        login_user: admin
        login_password: admin
        login_project_name: admin
        login_user_domain_name: custom
        login_project_domain_name: custom
        user_name: demo
        password: demo
        project_name: demo
        domain_name: custom
    
  • LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.
  • When upgrading it is possible for an old neutron-ns-metadata-proxy process to remain running in memory. If this happens the old version of the process can cause unexpected issues in a production environment. To fix this a task has been added to the os_neutron role that will execute a process lookup and kill any neutron-ns-metadata-proxy processes that are not running the current release tag. Once the old processes are removed the metadata agent running will respawn everything needed within 60 seconds.
  • Assigning multiple IP addresses to the same host name will now result in an inventory error before running any playbooks.
  • The nova admin endpoint is now correctly registered as /v2.1/%(tenant_id)s instead of /v2/%(tenant_id)s.
  • The auditd rules for auditing V-38568 (filesystem mounts) were incorrectly labeled in the auditd logs with the key of export-V-38568. They are now correctly logged with the key filesystem_mount-V-38568.
  • Deleting variable entries from the global_overrides dictionary in openstack_user_config.yml now properly removes those variables from the openstack_inventory.json file. See Bug
  • The pip_packages_tmp variable has been renamed pip_tmp_packages to avoid unintended processing by the py_pkgs lookup plugin.
  • The check to validate whether an appropriate ssh public key is available to copy into the container cache has been corrected to check the deployment host, not the LXC host.
  • Static route information for provider networks now must include the cidr and gateway information. If either key is missing, an error will be raised and the dynamic_inventory.py script will halt before any Ansible action is taken. Previously, if either key was missing, the inventory script would continue silently without adding the static route information to the networks. Note that this check does not validate the CIDR or gateway values, just that the values are present.
  • The repo_build play now correctly evaluates environment variables configured in /etc/environment. This enables deployments in an environment with http proxies.
  • Previously, the ansible_managed var was being used to insert a header into the swift.conf that contained date/time information. This meant that swift.conf across different nodes did not have the same MD5SUM, causing swift-recon --md5 to break. We now insert a piece of static text instead to resolve this issue.
  • The XFS filesystem is excluded from the daily mlocate crond job in order to conserve disk IO for large IOPS bursts due to updatedb/mlocate file indexing.
  • The /var/lib/libvirt/qemu/save directory is now a symlink to {{ nova_system_home_folder }}/save to resolve an issue where the default location used by the libvirt managed save command can result with the root partitions on compute nodes becoming full when nova image-create is run on large instances.
  • Aodh has deprecated support for NoSQL storage (MongoDB and Cassandra) in Mitaka with removal scheduled for the O* release. This causes warnings in the logs. The default of using MongoDB storage for Aodh is replaced with the use of Galera. Continued use of MongoDB will require the use of vars to specify a correct aodh_connection_string and add pymongo to the aodh_pip_packages list.
  • The --compact flag has been removed from xtrabackup options. This had been shown to cause crashes in some SST situations.

Other Notes

  • nova_libvirt_live_migration_flag is now phased out. Please create a nova configuration override with live_migration_tunnelled: True if you want to force the flag VIR_MIGRATE_TUNNELLED to libvirt. Nova “chooses a sensible default” otherwise.
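    For example, assuming the nova_nova_conf_overrides mechanism is used for the configuration override, the flag can be forced with:

    nova_nova_conf_overrides:
      libvirt:
        live_migration_tunnelled: True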
  • nova_compute_manager is now phased out.
  • The in-tree “ansible.cfg” file in the playbooks directory has been removed. This file made compatibility difficult for deployers who need to change these values. Additionally, this file's very existence forced Ansible to ignore any other config file in either a user's home directory or in the default “/etc/ansible” directory.
  • Mariadb version upgrade gate checks removed.
  • The run-playbooks.sh script has been refactored to run all playbooks using our core tool set and run order. The refactor work updates the old special case script to a tool that simply runs the integrated playbooks as they’ve been designed.
Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.