Ocata Series Release Notes

15.1.3-15

New Features

  • New variables have been added to allow a deployer to customize a barbican systemd unit file to their liking.
  • The task dropping the barbican systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
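
    As a sketch of how such a unit-file override might look, a deployer could place something like the following in /etc/openstack_deploy/user_variables.yml. The variable name barbican_api_init_config_overrides follows the naming pattern used elsewhere in these notes, and the Service settings shown are illustrative assumptions, not defaults from this release:

    ```yaml
    # Hypothetical sketch: adjust the [Service] section of the barbican API
    # systemd unit via the config_template override mechanism described above.
    # Verify the exact override variable name against the deployed role.
    barbican_api_init_config_overrides:
      Service:
        Nice: 10                # run the API at a lower CPU priority
        LimitNOFILE: 65536      # raise the open-file limit
    ```

    Because config_template merges these keys into the rendered unit file, only the listed settings change; everything else in the template stays intact.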

Bug Fixes

  • Upgrading from Newton to Ocata will now correctly add existing Nova instances to the nova_cell1_name cell. For more information see bug 1682169.
  • Version 2.1.0 of ldappool fixes an issue with TLS binds to the LDAP server. This version is pinned globally in OSA until the fix is released upstream.

15.1.3

New Features

  • New variables have been added to allow a deployer to customize an aodh systemd unit file to their liking.
  • The task dropping the aodh systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_aodh role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_barbican role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the barbican_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a ceilometer systemd unit file to their liking.
  • The task dropping the ceilometer systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a cinder systemd unit file to their liking.
  • The task dropping the cinder systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • New variables have been added to allow a deployer to customize a glance systemd unit file to their liking.
  • The task dropping the glance systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a gnocchi systemd unit file to their liking.
  • The task dropping the gnocchi systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a heat systemd unit file to their liking.
  • The task dropping the heat systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize an ironic systemd unit file to their liking.
  • The task dropping the ironic systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a keystone systemd unit file to their liking.
  • The task dropping the keystone systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a magnum systemd unit file to their liking.
  • The task dropping the magnum systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a neutron systemd unit file to their liking.
  • The task dropping the neutron systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a nova systemd unit file to their liking.
  • The task dropping the nova systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.
  • In the Ocata release, Trove added support for encrypting the RPC communication between the guest DBaaS instances and the control plane. The default values for trove_taskmanager_rpc_encr_key and trove_inst_rpc_key_encr_key should be overridden to specify installation-specific values.
  • New variables have been added to allow a deployer to customize a sahara systemd unit file to their liking.
  • The task dropping the sahara systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_sahara role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the sahara_*_init_config_overrides variables which use the config_template task to change template defaults.
  • New variables have been added to allow a deployer to customize a swift systemd unit file to their liking.
  • The task dropping the swift systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • New variables have been added to allow a deployer to customize a trove systemd unit file to their liking.
  • The task dropping the trove systemd unit files now uses the config_template action plugin, allowing deployers to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.
  • For the os_trove role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the trove_*_init_config_overrides variables which use the config_template task to change template defaults.
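
    The TimeoutSec and RestartSec changes above can be overridden per service. As a sketch for nova, a deployer who wants the previous, more conservative timings could set something like the following in user_variables.yml. The concrete variable name follows the nova_*_init_config_overrides pattern named in these notes and should be verified against the role's defaults:

    ```yaml
    # Sketch: restore the pre-change stop/restart timings for the nova API
    # service. Verify the exact override variable name in the os_nova role.
    nova_api_init_config_overrides:
      Service:
        TimeoutSec: 300   # SIGTERM-to-SIGKILL window (new default: 120)
        RestartSec: 150   # delay between stop and start (new default: 2)
    ```

    The same shape applies to the other roles listed above by substituting the service prefix (aodh, barbican, heat, and so on).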

Upgrade Notes

  • For the os_aodh role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_barbican role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the barbican_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_sahara role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the sahara_*_init_config_overrides variables which use the config_template task to change template defaults.
  • For the os_trove role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the trove_*_init_config_overrides variables which use the config_template task to change template defaults.

15.1.2

New Features

  • A variable named bootstrap_user_variables_template has been added to the bootstrap-host role so the user can define the user variable template filename for AIO deployments.
  • SSL connections to MySQL are now supported. When the galera_use_ssl option is set to true (the default), the playbooks create a self-signed SSL bundle and configure MySQL to use it, or distribute a user-provided bundle across the Galera nodes.
  • The dependency on cinder_backends_rbd_inuse in nova.conf when setting the rbd_user and rbd_secret_uuid variables has been removed. Cinder delivers all necessary values via RPC when attaching a volume, so those variables are only necessary for ephemeral disks stored in Ceph. They must still be set on the cinder-volume side under the backend section.
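
    A minimal user_variables.yml sketch for the Galera SSL feature above. The bundle-path variable names and paths are illustrative assumptions for a user-provided bundle, not role defaults:

    ```yaml
    # Enable SSL for Galera/MySQL connections (true is the default); the
    # playbooks generate a self-signed bundle if none is supplied.
    galera_use_ssl: true
    # Hypothetical sketch: point at a user-provided bundle instead of the
    # generated one. Both names and paths are placeholders; check the
    # galera roles for the actual variables.
    galera_ssl_ca_cert: /etc/openstack_deploy/ssl/galera-ca.pem
    galera_ssl_cert: /etc/openstack_deploy/ssl/galera.pem
    galera_ssl_key: /etc/openstack_deploy/ssl/galera-key.pem
    ```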

Critical Issues

  • A bug that caused the Keystone credential keys to be lost when the playbook is run during a rebuild of the first Keystone container has been fixed. Please see launchpad bug 1667960 for more details.

Bug Fixes

  • Nova features that use libguestfs (libvirt password/key injection) now work on compute hosts running Ubuntu. When Nova is deployed to Ubuntu compute hosts and either nova_libvirt_inject_key or nova_libvirt_inject_password is set to True, kernels stored in /boot/vmlinuz-* will be made readable to the nova user. See launchpad bug 1507915.

15.1.1

New Features

  • The default value of the variable swift_proxy_server_workers is now capped at 16 when the user does not configure it and the swift proxy runs in a container. The default remains half the number of vCPUs available on the machine when the proxy is not in a container, and half the number of vCPUs capped at 16 when it is.
  • Add support for the cinder v3 api. This is enabled by default, but can be disabled by setting the cinder_enable_v3_api variable to false.
  • For the os_cinder role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the cinder_*_init_config_overrides variables which use the config_template task to change template defaults.
  • The haproxy-server role now allows tunable parameters to be set. To do so, define a dictionary of options in the config files listing only those that need to change (defaults for the remaining ones are built into the template). The global maxconn option is also now tunable.
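
    As an example of the cinder v3 toggle described above, a deployer who wants to expose only the older API could set the following in user_variables.yml:

    ```yaml
    # Disable the cinder v3 API, which is enabled by default in this release.
    cinder_enable_v3_api: false
    ```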

Deprecation Notes

  • The variables cinder_sigkill_timeout and cinder_restart_wait have been deprecated and will be removed in Pike.

Bug Fixes

  • The OpenStack service URI protocol variables were not being used to set the Trove-specific URIs. This resulted in ‘http’ always being used for the public, admin and internal URIs even when ‘https’ was intended.

15.1.0

New Features

  • The default value of the variable aodh_wsgi_processes is now capped at 16 when the user does not configure it. The default is twice the number of vCPUs available on the machine, up to the cap of 16.
  • The default value of the variable gnocchi_wsgi_processes is now capped at 16 when the user does not configure it. The default is twice the number of vCPUs available on the machine, up to the cap of 16.
  • The default value of the variable ironic_wsgi_processes is now capped at 16 when the user does not configure it. The default is one fourth the number of vCPUs available on the machine, up to the cap of 16.
  • The default value of the variable sahara_api_workers is now capped at 16 when the user does not configure it. The default is half the number of vCPUs available on the machine, up to the cap of 16.
  • Tags have been added to all of the common tasks with the prefix “common-”. This allows a deployer to rapidly run any of the common tasks on an as-needed basis without having to rerun an entire playbook.
  • The COPR repository for installing LXC on CentOS 7 is now set to a higher priority than the default to ensure that LXC packages always come from the COPR repository.
  • The galera_client role will default to using the galera_repo_url URL if the value for it is set. This simplifies using an alternative mirror for the MariaDB server and client as only one variable needs to be set to cover them both.
  • The default behaviour of ensure_endpoint in the keystone module has changed to update an existing endpoint, if one exists that matches the service name, type, region and interface. This ensures that no duplicate service entries can exist per region.
  • The repo server file system structure has been updated to allow for multiple Operating systems running multiple architectures to be run at the same time and served from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the following pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.
  • The deployer can now define an environment variable GROUP_VARS_PATH with the folders of their choice (separated by colons) to define a user-space group_vars folder. These vars will apply but are (currently) overridden by the OpenStack-Ansible default group vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last path defined in GROUP_VARS_PATH wins).
  • The deployer can now define an environment variable HOST_VARS_PATH with the folders of their choice (separated by colons) to define a user-space host_vars folder. These vars will apply but are (currently) overridden by the OpenStack-Ansible default host vars, by the set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last path defined in HOST_VARS_PATH wins).
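
    Following the galera_client note above, a single override can now cover both the MariaDB server and client repositories. The mirror URL below is a placeholder, not a real mirror:

    ```yaml
    # One variable now covers both the MariaDB server and client repos when
    # set; replace the URL with your local mirror (this one is a placeholder).
    galera_repo_url: "https://mirror.example.com/mariadb/repo/10.1/ubuntu"
    ```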

Known Issues

  • There is currently an Ansible bug regarding HOSTNAME. If the host’s .bashrc holds a var named HOSTNAME, the container where the lxc_container module attaches will inherit this var and potentially set the wrong $HOSTNAME. See the Ansible fix, which will be released in Ansible version 2.3.

Upgrade Notes

  • The repo server file system structure has been updated to allow for multiple Operating systems running multiple architectures to be run at the same time and served from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the following pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.
  • The EPEL repository is now removed in favor of the RDO repository.

    This is a breaking change for existing CentOS deployments. The yum package manager will have errors when it finds that certain packages that it installed from EPEL are no longer available. Deployers may need to rebuild container or reinstall packages to complete this change.

  • The openstack_tempest_gate.sh script has been removed as it requires the use of the run_tempest.sh script which has been deprecated in Tempest. In order to facilitate the switch, the default for the variable tempest_run has been set to yes, forcing the role to execute tempest by default. This default can be changed by overriding the value to no. The test whitelist may be set through the list variable tempest_test_whitelist.
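
    Following the tempest note above, the new toggles might be set as follows in user_variables.yml. The whitelist entries are illustrative test names, not values from this release:

    ```yaml
    # Tempest now executes by default; set to "no" to skip the run.
    tempest_run: yes
    # Illustrative whitelist entries; substitute the tests you care about.
    tempest_test_whitelist:
      - tempest.api.identity.v3
      - tempest.scenario.test_server_basic_ops
    ```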

Deprecation Notes

  • The variables galera_client_apt_repo_url and galera_client_yum_repo_url are deprecated in favour of the common variable galera_client_repo_url.
  • The update state for the ensure_endpoint method of the keystone module is now deprecated, and will be removed in the Queens cycle. Setting state to present will achieve the same result.

Security Issues

  • The security role will no longer fix file permissions and ownership based on the contents of the RPM database by default. Deployers can opt in for these changes by setting security_reset_perm_ownership to yes.
  • The tasks that search for .shosts and shosts.equiv files (STIG ID: RHEL-07-040330) are now skipped by default. The search takes a long time to complete on systems with lots of files and it also causes a significant amount of disk I/O while it runs.

Other Notes

15.0.0

Functionality to support Ubuntu Trusty (14.04) has been removed from the code base.

New Features

  • CentOS7/RHEL support has been added to the ceph_client role.
  • Only Ceph repos are supported for now.
  • There is now experimental support to deploy OpenStack-Ansible on CentOS 7 for both development and test environments.
  • Experimental support has been added to allow the deployment of the OpenStack Designate service when hosts are present in the host group dnsaas_hosts.
  • Support has been added for the horizon designate-ui dashboard. The dashboard will be automatically enabled if any hosts are in the dnsaas_hosts inventory group.
  • The os_horizon role now has support for the horizon designate-ui dashboard. The dashboard may be enabled by setting horizon_enable_designate_ui to True in /etc/openstack_deploy/user_variables.yml.
  • Support has been added for the horizon trove-ui dashboard. The dashboard will be automatically enabled if any hosts are defined in the trove-infra_hosts inventory group.
  • Experimental support has been added to allow the deployment of the OpenStack Magnum service when hosts are present in the host group magnum-infra_hosts.
  • Deployers can now define the override cinder_rpc_executor_thread_pool_size which defaults to 64
  • Deployers can now define the override cinder_rpc_response_timeout which defaults to 60
  • Experimental support has been added to allow the deployment of the OpenStack trove service when hosts are present in the host group trove-infra_hosts.
  • It is now possible to customise the location of the configuration file source for the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_config_path variable.
  • It is now possible to customise the location of the scripts used in the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_script_path variable.
  • It is now possible to customise the name of the user_variables.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_variables_filename variable.
  • It is now possible to customise the name of the user_secrets.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_secrets_filename variable.
  • The filename of the apt source for the ubuntu cloud archive can now be defined with the variable uca_apt_source_list_filename.
  • The filename of the apt source for the ubuntu cloud archive used in ceph client can now be defined by giving a filename in the uca part of the dict ceph_apt_repos.
  • The filename of the apt source for the ubuntu cloud archive can now be defined with the variable uca_apt_source_list_filename.
  • The filename of the apt/yum source can now be defined with the variable mariadb_repo_filename.
  • The filename of the apt source can now be defined with the variable filename inside the dicts galera_repo and galera_percona_xtrabackup_repo.
  • The filename of the apt source for the ubuntu cloud archive can now be defined with the variable uca_apt_source_list_filename.
  • Support has been added to allow the deployment of the OpenStack barbican service when hosts are present in the host group key-manager_hosts.
  • In a greenfield deployment containers will now bind mount all logs to the physical host machine in the /openstack/log/{{ inventory_hostname }} location. This change ensures containers using a block-backed file system (lvm, zfs, btrfs) do not run into issues with full disks due to logging. If this feature is not needed or desired it can be disabled by setting the option default_bind_mount_logs to false.
  • The number of worker threads for neutron will now be capped at 16 unless a specific value is specified. Previously, the calculated number of workers could get too high on systems with a large number of processors. This was particularly evident on POWER systems.
  • Capping the default value for the variables ceilometer_api_workers and ceilometer_notification_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variable cinder_osapi_volume_workers to 16 when the user doesn’t configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variable galera_wsrep_slave_threads to 16 when the user doesn’t configure this variable. Default value is the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variable galera_max_connections to 1600 when the user doesn’t configure this variable. Default value is 100 times the number of vCPUs available on the machine with a capping value of 1600.
  • Capping the default value for the variables glance_api_workers and glance_registry_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variables heat_api_workers and heat_engine_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variables horizon_wsgi_processes and horizon_wsgi_threads to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variable keystone_wsgi_processes to 16 when the user doesn’t configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variables neutron_api_workers, neutron_num_sync_threads and neutron_metadata_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variables nova_wsgi_processes, nova_osapi_compute_workers, nova_metadata_workers and nova_conductor_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Capping the default value for the variable repo_nginx_workers to 16 when the user doesn’t configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.
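    The worker-capping entries above all follow the same arithmetic. In the roles this is implemented as a Jinja2 expression; the Python below is only an illustrative sketch of the formula:

```python
def capped_default(num_vcpus, ratio=0.5, cap=16):
    """Illustrate the default-worker formula described in the notes above.

    Most variables use half the vCPU count capped at 16;
    galera_wsrep_slave_threads uses ratio=1, and
    galera_max_connections uses ratio=100 with cap=1600.
    """
    return min(max(int(num_vcpus * ratio), 1), cap)

print(capped_default(8))                        # 8 vCPUs -> 4 workers
print(capped_default(48))                       # 48 vCPUs -> capped at 16
print(capped_default(48, ratio=100, cap=1600))  # galera_max_connections -> 1600
```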
  • The ceilometer configuration files other than ceilometer.conf are now retrieved from upstream. You can override the repository from which these are retrieved by setting the ceilometer_git_config_lookup_location variable which defaults to the git.openstack.org.
  • Several configuration files that were not templated for the os_ceilometer role are now retrieved from git. The git repository used can be changed using the ceilometer_git_config_lookup_location variable. By default this points to git.openstack.org. These files can still be changed using the ceilometer_x_overrides variables.
  • Playbooks for ceph-ansible have been added to facilitate gate testing of the OpenStack-Ansible integration with Ceph clusters, and can be used to integrate the two projects so that OpenStack-Ansible can deploy and consume its own Ceph installation using ceph-ansible. This should be considered an experimental integration until further testing has been completed by deployers and the OpenStack-Ansible gate to fine-tune its stability and completeness. The ceph-install playbook can be activated by adding hosts to the ceph-mon_hosts and ceph-osd_hosts groups in the OSA inventory. A variety of ceph-ansible specific variables will likely need to be configured in user_variables.yml to configure ceph-ansible for your environment. Please reference the ceph-ansible repo for a list of variables the project supports.
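    A minimal sketch of activating the ceph-install playbook via the inventory (host names and addresses below are placeholders):

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (excerpt)
ceph-mon_hosts:
  infra1:
    ip: 172.29.236.11
ceph-osd_hosts:
  storage1:
    ip: 172.29.236.21
```

    Any ceph-ansible tuning variables would then go into user_variables.yml; consult the ceph-ansible repo for the supported names.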
  • The installation of chrony is still enabled by default, but it is now controlled by the security_enable_chrony variable.
  • Deployers can set heat_cinder_backups_enabled to enable or disable the cinder backups feature in heat. If heat has cinder backups enabled, but cinder’s backup service is disabled, newly built stacks will be undeletable.

    The heat_cinder_backups_enabled variable is set to false by default.

  • A new switch pip_install_build_packages is introduced to allow toggling compiler and development library installation. The legacy behavior of installing the compiler and development libraries is maintained as the switch is enabled by default.
  • Deployers can set openstack_host_nf_conntrack_max to control the maximum size of the netfilter connection tracking table. The default of 262144 should be increased if virtual machines will be handling large amounts of concurrent connections.
  • LXC containers will now generate a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set to true. This feature was implemented to resolve issues with dynamic mac addresses in containers generally experienced at scale with network intensive services.
  • The os-designate role now supports Ubuntu 16.04 and SystemD.
  • The Designate pools.yaml file can now be generated via the designate_pools_yaml attribute, if desired. This allows users to populate the Designate DNS server configuration using attributes from other plays and obviates the need to manage the file outside of the Designate role.
  • The rabbitmq_server role now supports disabling listeners that do not use TLS. Deployers can override the rabbitmq_disable_non_tls_listeners variable, setting a value of True if they wish to enable this feature.
  • Neutron DHCP options have been set to allow a DHCP server running dnsmasq to coexist with other DHCP servers within the same network. This works by instructing dnsmasq to ignore any clients which are not specified in dhcp-host files.
  • Neutron DHCP options have been set to provide for logging which makes debugging DHCP and connectivity issues easier by default.
  • Variable ceph_extra_confs has been expanded to support retrieving additional ceph.conf and keyrings from multiple ceph clusters automatically.
  • Additional libvirt ceph client secrets can be defined to support attaching volumes from different ceph clusters.
  • Additional volume-types can be created by defining a list named extra_volume_types in the desired backend of the variable(s) cinder_backends
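    As a sketch, extra volume types could be attached to an LVM backend like this (the backend definition and type names are illustrative examples):

```yaml
# /etc/openstack_deploy/user_variables.yml (excerpt)
cinder_backends:
  lvm:
    volume_backend_name: lvm
    volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group: cinder-volumes
    extra_volume_types:
      - low-iops
      - high-iops
```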
  • Container boot ordering has been implemented on container types where it would be beneficial. This change ensures that stateful systems running within a container are started ahead of non-stateful systems. While this change has no impact on a running deployment, it will assist with faster recovery should any node hosting containers go down or simply need to be restarted.
  • A new task has been added to the “os-lxc-container-setup.yml” common-tasks file. This new task will allow additional configurations to be added without having to restart the container. This change is helpful in cases where non-impacting configuration needs to be added or updated on running containers.
  • The galera_client_package_install option can now be specified to handle whether packages are installed as a result of the openstack-ansible-galera_client role running. This will default to true, but can be set to false to prevent package installs. This is useful when deploying the my.cnf client configuration file on hosts that already have Galera installed.
  • You can specify the galera_package_arch variable to force a specific architecture when installing percona and qpress packages. This will be automatically calculated based on the architecture of the galera_server host. Acceptable values are x86_64 for Ubuntu-16.04 and RHEL 7, and ppc64le for Ubuntu-16.04.
  • Add a get_networks command to the neutron library. This will return network information for all networks, and fail if the specified net_name network is not present. If no net_name is specified, network information for all networks will be returned without performing a check on an existing net_name network.
  • Set the glance_swift_store_auth_insecure variable to override the swift_store_auth_insecure value in /etc/glance/glance-api.conf. Set this value when using an external Swift store that does not have the same insecure setting as the local Keystone.
  • Specify the gnocchi_auth_mode var to set the auth_mode for gnocchi. This defaults to basic which has changed from noauth to match upstream. If gnocchi_keystone_auth is true or yes this value will default to keystone.
  • Specify the gnocchi_git_config_lookup_location value to specify the git repository where the gnocchi config files can be retrieved. The api-paste.ini and policy.json files are now retrieved from the specified git repository and are not carried in the os_gnocchi role.
  • Several configuration files that were not templated for the os_gnocchi role are now retrieved from git. The git repository used can be changed using the gnocchi_git_config_lookup_location variable. By default this points to git.openstack.org. These files can still be changed using the gnocchi_x_overrides variables.
  • If the cinder backup service is enabled with cinder_service_backup_program_enabled: True, then heat will be configured to use the cinder backup service. The heat_cinder_backups_enabled variable will automatically be set to True.
  • It’s now possible to change the behavior of DISALLOW_IFRAME_EMBED by defining the variable horizon_disallow_iframe_embed in the user variables.
  • The --check parameter for dynamic_inventory.py will now raise warnings if there are any groups defined in the user configuration that are not also found in the environment definition.
  • Add support for neutron as an enabled_network_interface.
  • The ironic_neutron_provisioning_network_name and ironic_neutron_cleaning_network_name variables can be set to the name of the neutron network to use for provisioning and cleaning. The ansible tasks will determine the appropriate UUID for that network. Alternatively, ironic_neutron_provisioning_network_uuid or ironic_neutron_cleaning_network_uuid can be used to directly specify the UUID of the networks. If both ironic_neutron_provisioning_network_name and ironic_neutron_provisioning_network_uuid are specified, the specified UUID will be used. If only the provisioning network is specified, the cleaning network will default to the same network.
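    For example, selecting the networks by name (the network names below are placeholders; the role resolves the UUIDs):

```yaml
# /etc/openstack_deploy/user_variables.yml (excerpt)
ironic_neutron_provisioning_network_name: ironic-provision
# Optional: if omitted, cleaning defaults to the provisioning network.
ironic_neutron_cleaning_network_name: ironic-cleaning
```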
  • Added support for ironic-OneView drivers. Check the documentation on how to enable them.
  • When using a copy-on-write backing store, the lxc_container_base_name can now include a prefix defined by lxc_container_base_name_prefix.
  • LXC on CentOS is now installed via package from a COPR repository rather than installed from the upstream source.
  • IPv6 support has been added for the LXC bridge network. This can be configured using lxc_net6_address, lxc_net6_netmask, and lxc_net6_nat.
  • The variable lxc_cache_environment has been added. This dictionary can be overridden by deployers to set HTTP proxy environment variables that will be applied to all lxc container download tasks.
  • The new provider network attribute sriov_host_interfaces is added to support SR-IOV network mappings inside Neutron. The provider_network adds new items network_sriov_mappings and network_sriov_mappings_list to the provider_networks dictionary. Multiple interfaces can be defined by comma separation.
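    A sketch of the new attribute inside a provider network entry (the bridge, VLAN range, physnet, and interface names are examples; the other keys follow the usual provider_networks layout):

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (excerpt)
provider_networks:
  - network:
      container_bridge: br-vlan
      container_type: veth
      type: vlan
      range: "1000:1999"
      net_name: physnet1
      # Comma-separated list of SR-IOV capable host interfaces
      sriov_host_interfaces: "p1p1,p4p1"
```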
  • The dragonflow plugin for neutron is now available. You can set the neutron_plugin_type to ml2.dragonflow to utilize this code path. The dragonflow code path is currently experimental.
  • Neutron SR-IOV can now be optionally deployed and configured. For details about the what the service is and what it provides, see the SR-IOV Installation Guide for more information.
  • The nova-placement service is now configured by default. nova_placement_service_enabled can be set to False to disable the nova-placement service.
  • The nova-placement api service will run as its own ansible group nova_api_placement.
  • Nova cell_v2 support has been added. The default cell is cell1 which can be overridden by the nova_cell1_name. Support for multiple cells is not yet available.
  • The copy of the /etc/openstack-release file is now optional. To disable the copy of the file, set openstack_distrib_file to no.
  • The location of the /etc/openstack-release file placement can now be changed. Set the variable openstack_distrib_file_path to place it in a different path.
  • A new variable, tempest_flavors, has been added to the os_tempest role allowing users to define nova flavors to be created for use during tempest testing.
  • CentOS7/RHEL support has been added to the os_aodh role.
  • CentOS7/RHEL support has been added to the os_ceilometer role.
  • CentOS7/RHEL support has been added to the os_designate role.
  • CentOS7/RHEL support has been added to the os_gnocchi role.
  • CentOS7/RHEL support has been added to the os_heat role.
  • CentOS7/RHEL support has been added to the os_horizon role.
  • CentOS7/RHEL support has been added to the os_neutron role.
  • CentOS7/RHEL support has been added to the os_nova role.
  • CentOS7/RHEL support has been added to the os_swift role.
  • The openstack-ansible-security role is now configured to apply the security configurations from the Red Hat Enterprise Linux 7 STIG to OpenStack-Ansible deployments.
  • The os_barbican role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting barbican_package_state to present.
  • The os_designate role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting designate_package_state to present.
  • The PATH environment variable that is configured on the remote system can now be set using the openstack_host_environment_path list variable.
  • Deployers can now define the variable cinder_qos_specs to create qos specs and assign those specs to desired cinder volume types.
  • RabbitMQ Server can now be installed via one of three methods: a deb file (default), standard repository packages, or an external repository. Current behavior is unchanged. Define rabbitmq_install_method: distro to use packages provided by your distribution, or rabbitmq_install_method: external_repo to use packages stored in an external repo. When external_repo is used, RabbitMQ will be installed from the packages hosted by packagecloud.io, as recommended by RabbitMQ.
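    For example, to switch installation methods in user_variables.yml:

```yaml
# Use distribution packages instead of the default deb file:
rabbitmq_install_method: distro
# Or install from the packagecloud.io repository:
# rabbitmq_install_method: external_repo
```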
  • Our general config options are now stored in an “/usr/local/bin/openstack-ansible.rc” file and will be sourced when the “openstack-ansible” wrapper is invoked. The RC file will read in BASH environment variables and should any Ansible option be set that overlaps with our defaults the provided value will be used.
  • The Red Hat Enterprise Linux (RHEL) 7 STIG content is now deployed by default. Deployers can continue using the RHEL 6 STIG content by setting the following Ansible variable:

    stig_version: rhel6
    
  • The swift_rsync_reverse_lookup option has been added. This setting will handle whether rsync performs reverse lookups on client IP addresses, and will default to False. We recommend leaving this option at False, unless DNS or host entries exist for each swift host’s replication address.
  • Experimental support has been added to allow the deployment of the Sahara data-processing service. To deploy sahara, hosts should be present in the host group sahara-infra_hosts.
  • The security-hardening playbook hosts target can now be filtered using the security_host_group var.
  • When using the pypy python interpreter you can configure the garbage collection (gc) settings for pypy. Set the minimum GC value using the swift_pypy_gc_min variable. GC will only happen when the memory size is above this value. Set the maximum GC value using the swift_pypy_gc_max variable. This is the maximum memory heap size for pypy. Both variables are not defined by default, and will only be used if the values are defined and swift_pypy_enabled is set to True.
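    A sketch of enabling the interpreter and both GC thresholds together (the value format shown is an assumption; consult the role defaults for the expected format):

```yaml
# /etc/openstack_deploy/user_variables.yml (excerpt)
swift_pypy_enabled: True
# GC thresholds for the pypy interpreter (illustrative values):
swift_pypy_gc_min: "100MB"
swift_pypy_gc_max: "1GB"
```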
  • While the default python interpreter for swift is cpython, pypy is now an option. This change adds the ability to greatly improve swift performance without core code modifications. These changes have been implemented using the documentation provided by Intel and Swiftstack. Notes about the performance increase can be seen here.
  • Swift tempauth users can now be specified. The swift_tempauth_users variable can be defined as a list of tempauth users and their permissions. You will still need to specify the appropriate Swift middleware using the swift_middleware_list variable in order to utilise tempauth.
  • Swift versioned_writes middleware is added to the pipeline by default. Additionally, the allow_versioned_writes setting in the middleware configuration is set to True. This follows the Swift defaults, and enables the use of the X-History-Location metadata header.
  • Adds support for the horizon trove-ui dashboard. The dashboard will be automatically enabled if any trove hosts are defined.
  • The Trove dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_trove_ui: True
    
  • The variable trove_conductor_workers can be configured for defining the number of workers for the trove conductor service. The default value is half the number of vCPUs available on the machine with a capping value of 16.
  • Added new variable tempest_volume_backend_names and updated templates/tempest.conf.j2 to point backend_names at this variable
  • The os_barbican role now supports deployment on Ubuntu 16.04 using SystemD.

Known Issues

  • The variables haproxy_keepalived_(internal|external)_cidr now have a default set to 169.254.(2|1).1/24. This is to prevent Ansible undefined variable warnings. Deployers must set values for these variables for a working haproxy with keepalived environment when using more than one haproxy node.
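    For a multi-node haproxy/keepalived environment, the defaults must be replaced with real VIP addresses, e.g. (expanding the (internal|external) shorthand from the note; the addresses below are examples):

```yaml
# /etc/openstack_deploy/user_variables.yml (excerpt)
haproxy_keepalived_external_cidr: 203.0.113.10/24
haproxy_keepalived_internal_cidr: 172.29.236.10/22
```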

Upgrade Notes

  • During an upgrade the option default_bind_mount_logs will be set to false. This will ensure that an existing deployment is not adversely impacted by container restarts. If a deployer wishes to enable the default bind mount for /var/log, they can do so at a later date.
  • The global override cinder_nfs_client is replaced in favor of fully supporting multi backends configuration via the cinder_backends stanza.
  • The Designate pools.yaml file can now be generated via the designate_pools_yaml attribute, if desired. This ability is toggled by the designate_use_pools_yaml_attr attribute. In the future this behavior may become default and designate_pools_yaml may become a required variable.
  • The galera_client role now installs MariaDB client version 10.1.
  • For systems using the APT package manager, the sources file for the MariaDB repo now has a consistent name, ‘MariaDB.list’.
  • The galera_server role now installs MariaDB server version 10.1.
  • For systems using the APT package manager, the sources files for the MariaDB and Percona repos now have consistent names, ‘MariaDB.list’ and ‘Percona.list’.
  • The galera_mariadb_apt_server_package and galera_mariadb_yum_server_package variables have been renamed to galera_mariadb_server_package.
  • The galera_apt_repo_url and galera_yum_repo_url variables have been renamed to galera_repo_url.
  • The latest stable release of Ceph, Jewel, is now used as the default client version since Hammer was scheduled for EOL in November 2016.
  • The variables used to produce the /etc/openstack-release file have been changed in order to improve consistency in the name spacing according to their purpose.

    openstack_code_name -> openstack_distrib_code_name
    openstack_release -> openstack_distrib_release

    Note that the value for openstack_distrib_release will be taken from the variable openstack_release if it is set.

  • The variable neutron_dhcp_domain has been renamed to neutron_dns_domain.
  • The nova-cert service has been deprecated, is marked for removal in the Ocata release, and will no longer be deployed by the os_nova role.
  • Installation of designate and its dependent pip packages will now only occur within a Python virtual environment. The designate_venv_enabled, designate_venv_bin, designate_venv_etc_dir and designate_non_venv_etc_dir variables have been removed.
  • The os_barbican role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option barbican_package_state should be set to present.
  • The os_designate role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option designate_package_state should be set to present.
  • The security role will accept the currently installed version of a package rather than attempting to update it. This reduces unexpected changes on the system from subsequent runs of the security role. Deployers can still set security_package_state to latest to ensure that all packages installed by the security role are up to date.
  • The glance library has been removed from OpenStack-Ansible’s plugins. Upstream Ansible modules for managing OpenStack image resources should be used instead.
  • The variable proxy_env_url is now used by the apt-cacher-ng jinja2 template to set up an HTTP/HTTPS proxy if needed.
  • The gnocchi_archive_policies and gnocchi_archive_policy_rules variables never had full support in the role so were ineffective at the intended purpose. The task references to them have been removed and the library to perform gnocchi operations has also been removed. This eliminates the need for the gnocchi client to be installed outside the virtual environment as well.
  • The following secrets are no longer used by OpenStack-Ansible and can be removed from user_secrets.yml:
    • container_openstack_password
    • keystone_auth_admin_token
    • cinder_v2_service_password
    • nova_ec2_service_password
    • nova_v3_service_password
    • nova_v21_service_password
    • nova_s3_service_password
    • swift_container_mysql_password
  • The variables tempest_requirements_git_repo and tempest_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables horizon_requirements_git_repo and horizon_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables swift_requirements_git_repo and swift_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables ironic_requirements_git_repo and ironic_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables neutron_requirements_git_repo and neutron_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables heat_requirements_git_repo and heat_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables magnum_requirements_git_repo and magnum_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables sahara_requirements_git_repo and sahara_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables cinder_requirements_git_repo and cinder_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables trove_requirements_git_repo and trove_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables gnocchi_requirements_git_repo and gnocchi_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables glance_requirements_git_repo and glance_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables keystone_requirements_git_repo and keystone_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables aodh_requirements_git_repo and aodh_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables barbican_requirements_git_repo and barbican_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables nova_requirements_git_repo and nova_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables nova_lxd_requirements_git_repo and nova_lxd_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables rally_requirements_git_repo and rally_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • The variables ceilometer_requirements_git_repo and ceilometer_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.
  • Deployers should review the new RHEL 7 STIG variables in defaults/main.yml to provide custom configuration for the Ansible tasks.
  • The default behaviour of rsync, to perform reverse lookups, has been changed to False. This can be set to True by setting the swift_rsync_reverse_lookup variable to True.
  • A new option swift_pypy_enabled has been added to enable or disable the pypy interpreter for swift. The default is “false”.
  • A new option swift_pypy_archive has been added to allow a pre-built pypy archive to be downloaded and moved into place to support swift running under pypy. This option is a dictionary and contains the URL and SHA256 as keys.
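    A sketch of the dictionary (the URL and checksum are placeholders, and the lowercase key names are an assumption based on the note's description of URL and SHA256 keys):

```yaml
# /etc/openstack_deploy/user_variables.yml (excerpt)
swift_pypy_archive:
  url: "https://example.com/pypy-5.x-linux_x86_64.tar.bz2"
  sha256: "0000000000000000000000000000000000000000000000000000000000000000"
```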
  • Functionality to support Ubuntu Trusty (14.04) has been removed from the code base.
  • Gnocchi service endpoint variables were not named correctly. They have been renamed to be consistent with other roles.
  • The variable gnocchi_required_pip_packages was incorrectly named and has been renamed to gnocchi_requires_pip_packages to match the standard across all roles.
  • The cinder project removed the shred value for the volume_clear option. The default for the os_cinder OpenStack-Ansible role has changed to zero.
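  Several of the new features above are driven by deployer-set variables. The following is a minimal, hypothetical user_variables.yml sketch; the constraints URL is a placeholder, and the key names inside the swift_pypy_archive dictionary (url, sha256) are assumptions based on the note above, not confirmed names:

  ```yaml
  # Hypothetical user_variables.yml overrides; adjust values for your deployment.

  # Point pip installs at an upper-constraints file (replaces the removed
  # <service>_requirements_git_repo / _branch variables).
  pip_install_upper_constraints: "https://example.com/upper-constraints.txt"  # placeholder URL

  # Re-enable reverse DNS lookups in rsync for swift (now disabled by default).
  swift_rsync_reverse_lookup: True

  # Run swift under pypy using a pre-built archive.
  swift_pypy_enabled: True
  swift_pypy_archive:
    url: "https://example.com/pypy-linux64.tar.bz2"  # assumed key name; placeholder URL
    sha256: "0000000000000000000000000000000000000000000000000000000000000000"  # assumed key name
  ```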

Deprecation Notes

  • The variables setting source_sample_interval for the os_ceilometer role are deprecated and will be removed in the Queens cycle. After Queens, use the ceilometer_pipeline_yaml_overrides variable to override these values.
  • The ceilometer_gnocci_resources_yaml_overrides variable is deprecated and scheduled for removal in the Pike cycle. It is replaced by the correctly spelled variable ceilometer_gnocchi_resources_yaml_overrides, which should now be used.
  • The gnocchi_keystone_auth variable is deprecated and will be removed in the Queens cycle. Setting gnocchi_auth_mode to keystone achieves the same result.
  • The Red Hat Enterprise Linux 6 STIG content has been deprecated. The tasks and variables for the RHEL 6 STIG will be removed in a future release.
  • Removed tempest_volume_backend1_name and tempest_volume_backend2_name since backend1_name and backend2_name were removed from tempest in commit 27905cc (merged 26/04/2016).
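  The deprecations above generally have a one-to-one replacement. A sketch of the substitutions in user_variables.yml (values are illustrative only):

  ```yaml
  # Replace the deprecated boolean with the new auth-mode setting:
  # gnocchi_keystone_auth: True          # deprecated, removed in Queens
  gnocchi_auth_mode: keystone

  # Move overrides from the misspelled variable to the corrected one:
  # ceilometer_gnocci_resources_yaml_overrides: {}   # misspelled, removed in Pike
  ceilometer_gnocchi_resources_yaml_overrides: {}    # correctly spelled replacement
  ```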

Bug Fixes

  • When a task fails while executing a playbook, the default behaviour for Ansible is to fail for that host without executing any notifiers. This can result in configuration changes being executed, but services not being restarted. OpenStack-Ansible now sets ANSIBLE_FORCE_HANDLERS to True by default to ensure that all notified handlers attempt to execute before stopping the playbook execution.
  • Logging within the container has been bind mounted to the host. This resolves issue 1588051 (https://bugs.launchpad.net/openstack-ansible/+bug/1588051).
  • LXC containers will now have the ability to use a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set true. This change will assist in resolving a long standing issue where network intensive services, such as neutron and rabbitmq, can enter a confused state for long periods of time and require rolling restarts or internal system resets to recover.
  • The ‘container_cidr’ key has been restored back to openstack_inventory.json The fix to remove deleted global override keys mistakenly deleted the ‘container_cidr’ key, as well. This was used by downstream consumers, and cannot be reconstructed with other information inside the inventory file. Regression tests were also added.
  • SSLv3 is now disabled in the haproxy daemon configuration by default.
  • Properly distribute client keys to nova hypervisors when extra ceph clusters are being deployed.
  • Properly remove temporary files used to transfer ceph client keys from the deploy host and hypervisors.
  • Systems using systemd (like Ubuntu Xenial) were incorrectly limited to a low number of open files. This was causing issues when restarting galera. A deployer can still define the maximum number of open files with the variable galera_file_limits (defaults to 65536).
  • Metal hosts were being inserted into the lxc_hosts group, even if they had no containers (Bug 1660996). This is now corrected for newly configured hosts. In addition, any hosts that did not belong in lxc_hosts will be removed on the next inventory run or playbook call.
  • Errors relating to groups containing both hosts and other groups as children now raise a more descriptive error. See inventory documentation for more details. Fixes bug
  • Setting the haproxy_bind list on a service is now used as an override to the other VIPs defined in the environment. Previously it was being treated as an append to the other VIPs so there was no path to override the VIP binds for a service. For example, haproxy_bind could be used to bind a service to the internal VIP only.
  • The haproxy daemon is now able to bind to any port on CentOS 7. The haproxy_connect_any SELinux boolean is now set to on.
  • The percona repository stayed in place even after a change of the use_percona_upstream variable. From now on, the percona repository will not be present unless the deployer sets use_percona_upstream. This also fixes a bug where the apt repository remained present after an upgrade from Mitaka.
  • The URL of NovaLink used the ‘ftp’ protocol to provision the apt key, which caused the apt_key module to fail to retrieve the NovaLink gpg public key file. The protocol of the URL has therefore been changed to ‘http’. For more information, see bug 1637348.
  • The apt-cacher-ng daemon does not use the proxy server specified in environment variables. The proxy server specified in the proxy_env_url variable is now set inside the apt-cacher-ng configuration file.
  • Setup for the PowerVM driver was not properly configuring the system to support RMC configuration for client instances. This fix introduces an interface template for PowerVM that properly supports mixed IPV4/IPV6 deploys and adds documentation for PowerVM RMC. For more information see bug 1643988.
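  Several of the fixes above expose tunables a deployer can set. A hedged user_variables.yml sketch follows; the haproxy service entry is hypothetical (the service name and VIP address are placeholders used only to illustrate that haproxy_bind now overrides, rather than appends to, the environment VIPs):

  ```yaml
  # Use fixed MAC addresses on all container network interfaces to avoid
  # long-standing confusion in network-intensive services (neutron, rabbitmq).
  lxc_container_fixed_mac: true

  # Maximum open files for galera under systemd (default shown).
  galera_file_limits: 65536

  # Bind a service to the internal VIP only; haproxy_bind now replaces the
  # other VIPs defined in the environment instead of being appended to them.
  haproxy_extra_services:
    - service:
        haproxy_service_name: example_service   # hypothetical service entry
        haproxy_bind:
          - "192.0.2.10"                        # placeholder internal VIP
  ```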

Other Notes

  • XtraBackup is currently on version 2.4.5 for ppc64le architecture when pulling deb packages from the repos.
  • XtraBackup is currently on version 2.4.5 for amd64 architecture when pulling rpm/deb packages from the repos. To pull the latest available 2.4 branch version from the yum/apt repository set the use_percona_upstream variable to True. The default behavior using deb packages is unchanged.
  • The in-tree “ansible.cfg” file in the playbooks directory has been removed. This file was making compatibility difficult for deployers who need to change these values. Additionally, this file’s very existence forced Ansible to ignore any other config file in either a user’s home directory or in the default “/etc/ansible” directory.
  • From now on, external repo management (in use for RDO/UCA for example) will be done inside the pip-install role, not in the repo_build role.
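  The XtraBackup behaviour described above is controlled by a single toggle; for example, in user_variables.yml:

  ```yaml
  # Pull the latest available 2.4-branch XtraBackup from the upstream percona
  # yum/apt repository instead of the pinned packages from the repos.
  use_percona_upstream: True
  ```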