Current Series Release Notes

19.0.0.0rc1-212

New Features

  • Added a new parameter tempest_services for setting the tempest_service_available_{service_name} variables automatically.
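
    A minimal sketch of how this might look in user_variables.yml; the list shown is purely illustrative and assumes the variable takes a list of service names:

    tempest_services:
      - glance
      - neutron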

  • The ansible version used by OSA is updated from the 2.7 to the 2.8 series. This requires an upgrade of ceph-ansible to 4.0, which in turn requires an upgrade of ceph from Mimic to Nautilus. This version dependency applies where OSA uses ceph-ansible directly to deploy the ceph infrastructure, but not when OSA is integrated with an externally provisioned ceph cluster.

  • Each openstack service role has a new variable <role>_bind_address which defaults to 0.0.0.0. A global override openstack_service_bind_address may be used by a deployer either in group_vars or user_variables to define an alternative IP address for services to bind to. This feature allows a deployer to bind all of the services to a specific network, for example the openstack management network. In this release the default binding remains 0.0.0.0, and future releases may default the binding to the management network.
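
    For example, a deployer could bind all services to an address on the management network with an override in user_variables.yml (the address shown is purely illustrative):

    openstack_service_bind_address: 172.29.236.10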

  • Cinder is deployed with Active-Active enabled by default if you are using Ceph as the backend storage.

  • The --extra-vars flag passed to the openstack-ansible wrapper now takes precedence over values defined in user_variables*.yml.

  • The ceph_client role will now look for and configure manila services to work with ceph and cephfs.

  • The os_masakari role now covers the monitors installation and configuration, completing the full service configuration.

  • The override rabbitmq_memory_high_watermark can be used to set the fraction of memory that the Erlang VM may allocate before RabbitMQ's memory alarm is triggered. The default is lowered from 0.4 to 0.2, because garbage collection can temporarily require twice the allocated amount while it runs. With the new default this can result in an effective peak usage of 0.4, i.e. 40% of the memory visible to the rabbitMQ container. The original default of 0.4 could lead to 80% memory allocation by rabbitMQ, potentially causing the underlying Linux kernel to kill the process due to a shortage of virtual memory.
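
    For example, to restore the previous behaviour, the default can be overridden in user_variables.yml:

    rabbitmq_memory_high_watermark: 0.4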

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the octavia_install_method variable to distro.
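
    For example, set the following in user_variables.yml to use the distribution packages:

    octavia_install_method: distro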

  • Added two new variables for all groups - oslomsg_notify_policies and oslomsg_rpc_policies. These variables contain default rabbitmq policies, which will be applied to every rabbitmq vhost. For now they enable HA mode (https://www.rabbitmq.com/ha.html) for all vhosts. If you would like to disable HA mode, set these variables to empty lists inside your user_config.yml.
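
    For example, to disable HA mode, both variables can be set to empty lists:

    oslomsg_rpc_policies: []
    oslomsg_notify_policies: []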

  • Deployments will now default to using python3 when a python3 interpreter is present in the operating system. Each openstack-ansible role has a new variable of the form <role>_venv_python_executable which defaults to python2, but the global variable openstack_venv_python_executable in the openstack-ansible group variables sets this to python3 on supporting operating systems. This enables a deployer to selectively use python2 or python3 on a per-service basis if required. The ansible-runtime venv is also created using python3 on the deploy host if possible.
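
    For example, a deployer could pin a single service back to python2 while the rest of the deployment uses python3 by setting an override of the following form in user_variables.yml (the role prefix shown is illustrative):

    glance_venv_python_executable: python2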

  • Several environment variables have been added in the bootstrapping functions used by the gate-check-commit script. These variables can be used to skip various phases of the bootstrap during the gate-check-commit or bootstrap-ansible script execution.

    The environment variables added are:

    • SKIP_OSA_RUNTIME_VENV_BUILD: Skip bootstrapping of the OSA ansible venv in bootstrap-ansible.sh

    • SKIP_OSA_BOOTSTRAP_AIO: Skip execution of the bootstrap-aio playbook in gate-check-commit

    • SKIP_OSA_ROLE_CLONE: Skip execution of the get-role-requirements-playbook in the bootstrap-ansible.sh script

  • The galera_server role now uses mariabackup to perform SST operations, as this is the method recommended by MariaDB.

  • The galera_server role now ships with the latest MariaDB release of 10.3.13.

  • All roles have been migrated from regular log files to systemd-journald.

  • Deployers may require custom CA certificates to be installed on their openstack hosts or service containers. A new variable openstack_host_ca_certificates has been added, which is a list of certificates that should be copied from the deploy host to the target hosts. Certificates may be selectively deployed by defining the variable either in user_variables.yml or via host/group vars.
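
    A minimal sketch of such an override in user_variables.yml; the entry structure, names and paths shown are assumptions for illustration only:

    openstack_host_ca_certificates:
      - name: "ExampleInternalCA.crt"
        src: "/etc/openstack_deploy/certs/ExampleInternalCA.crt"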

  • A new optional file /etc/openstack_deploy/user-role-requirements.yml is now available for a deployer to override individual entries in the upstream ansible-role-requirements file. This can be used to point to alternative repos containing local fixes, or to add supplementary ansible roles that are not specified in the standard ansible-role-requirements.
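
    A minimal sketch of /etc/openstack_deploy/user-role-requirements.yml pointing one role at a local fork (the repository URL and version shown are purely illustrative):

    - name: os_glance
      scm: git
      src: https://git.example.com/forks/openstack-ansible-os_glance
      version: stable/train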

Known Issues

  • Due to a change in how backend_host is defined when using Ceph, all the Cinder volumes will restart under the same backend name. This means that any volumes which were previously assigned to the host or container that hosted the volume will no longer be manageable. The workaround is to use the cinder-manage volume update_host command to move those volumes to the new backend host. This known issue will be resolved soon with an upgrade playbook.

  • The previous way of using a common backend_host across all deployments was not recommended by the Cinder team, as it causes duplicate messages that lead to problems in the environment.

Upgrade Notes

  • Any ceph infrastructure components (OSDs, MONs etc) deployed using the OSA/ceph-ansible tooling will be upgraded to the Ceph Nautilus release. Deployers should verify that this upgrade is suitable for their environment before commencing a major upgrade to Train, and consult the ceph-ansible and ceph release notes for Nautilus. For integration with external ceph clusters where OSA does not deploy any of the ceph cluster infrastructure, overrides can be used to select the specific version of ceph repositories used by the OSA ceph_client ansible role.

  • It is possible that you may need to use the cinder-manage command to migrate volumes to a specific host. In addition, you will have to remove the old rbd:volumes service which will be stale.

  • The rabbitMQ high watermark is set to 0.2 rather than 0.4 to prevent possible OOM situations, which limits the maximum memory usage by rabbitMQ to 40% rather than 80% of the memory visible to the rabbitMQ container. The override rabbitmq_memory_high_watermark can be used to alter the limit.

  • The default nova console type has been changed to novnc. Spice is still supported; however, since novnc is more actively maintained, it is now the better default option.

  • New virtual environments will be created using python3, giving a straightforward transition from python2 to python3 during the major upgrade. The ansible-runtime virtual environment on the deployment host will also be upgraded from python2 to python3 where the operating system allows.

  • The default Mnesia dump_log_write_threshold value has been changed from 100 to 300 for efficiency. dump_log_write_threshold specifies the maximum number of writes allowed to the transaction log before a new dump of the log is performed. Increasing this value can improve performance when creating and destroying queues, exchanges and bindings. The value should be between 100 and 1000. More detail [1].

    [1] http://erlang.org/doc/man/mnesia.html#dump_log_write_threshold

  • The option rabbitmq_disable_non_tls_listeners has been removed in favor of setting the bind address and port configuration directly using a new option rabbitmq_port_bindings. This new option is a hash allowing for multiple bind addresses and port configurations.

  • The repo server no longer uses pypiserver, so it has been removed. Along with this, the following variables have also been removed.

    • repo_pypiserver_port

    • repo_pypiserver_pip_packages

    • repo_pypiserver_package_path

    • repo_pypiserver_bin

    • repo_pypiserver_working_dir

    • repo_pypiserver_start_options

    • repo_pypiserver_init_overrides

  • The following Nova tunables have been removed. Deployers who need to set them should now use the nova_nova_conf_overrides dictionary. If those values were not previously overridden, there should be no need to override them.

    • nova_quota_cores

    • nova_quota_injected_file_content_bytes

    • nova_quota_injected_file_path_length

    • nova_quota_injected_files

    • nova_quota_instances

    • nova_quota_key_pairs

    • nova_quota_metadata_items

    • nova_quota_ram

    • nova_quota_server_group_members

    • nova_quota_server_groups

    • nova_max_instances_per_host

    • nova_scheduler_available_filters

    • nova_scheduler_weight_classes

    • nova_scheduler_driver

    • nova_scheduler_driver_task_period

    • nova_rpc_conn_pool_size

    • nova_rpc_thread_pool_size

    • nova_rpc_response_timeout

    • nova_force_config_drive

    • nova_enable_instance_password

    • nova_default_schedule_zone

    • nova_fatal_deprecations

    • nova_resume_guests_state_on_host_boot

    • nova_cross_az_attach

    • nova_remove_unused_resized_minimum_age_seconds

    • nova_cpu_model

    • nova_cpu_model_extra_flags

  • The following Nova variables have been removed because they have no effect in the current release of Nova.

    • nova_max_age

    • nova_osapi_compute_workers

    • nova_metadata_workers

  • The Tacker role now uses the default systemd_service role. Because of this, upstart is no longer supported. A new variable tacker_init_config_overrides has been added, with which a deployer may override the predefined options. The variable program_override now has no effect, and tacker_service_names has been removed in favor of tacker_service_name.

  • Gnocchi has migrated from Apache mod_wsgi or the native daemon to the uWSGI daemon. This means that some variables are no longer available and have no effect, specifically:

    • gnocchi_use_mod_wsgi

    • gnocchi_apache_*

    • gnocchi_ssl* (except gnocchi_ssl_external - it is still in place)

    • gnocchi_user_ssl_*

    During the upgrade the role will drop gnocchi_service_port from the apache listeners (ports.conf) and remove the gnocchi virtualhost. By default this leaves a misconfigured apache service (since it will not have any listeners), unless it is an AIO build and the apache server is in use by another role/service. The apache server will not be removed from the gnocchi_api hosts, so deployers are encouraged to remove it manually.

  • Panko has migrated from Apache mod_wsgi or the native daemon to the uWSGI daemon. This means that the panko_apache_* variables are no longer available and have no effect.

    During the upgrade the role will drop panko_service_port from the apache listeners (ports.conf) and remove the panko virtualhost. By default this leaves a misconfigured apache service (since it will not have any listeners), unless it is an AIO build and the apache server is in use by another role/service. The apache server will not be removed from the panko_api hosts, so deployers are encouraged to remove it manually.

Deprecation Notes

  • In the ceph_client role, the only valid values for ceph_pkg_source are now ceph and distro. For Ubuntu, the Ubuntu Cloud Archive apt source is already setup by the openstack_hosts role, so there is no need for it to also be setup by the ceph_client role.

  • The compression option in the galera_server role has been removed because it is no longer recommended by MariaDB. This means that the dependencies from Percona, such as QPress, are no longer necessary.

  • The following variables have been removed because they are no longer used.

    • galera_percona_xtrabackup_repo

    • use_percona_upstream

    • galera_xtrabackup_compression

    • galera_server_percona_distro_packages

  • The variable galera_xtrabackup_threads has been renamed to galera_mariabackup_threads to reflect the change in the SST provider.

  • The PowerVM driver has been removed as it is not tested, and it has been broken since late 2016 when the driver name was renamed from powervm to powervm_ext.

  • Support for the legacy neutron L3 tool has been dropped. Deployers are encouraged to use the built-in l3-agent options for configuring HA.

  • The deprecated Neutron LBaaS v2 plugin has been removed from the Neutron role.

  • The deprecated Neutron LBaaS v2 plugin support has been removed from openstack-ansible.

  • nova-placement-api has been removed from the os_nova role, along with all nova_placement_* variables. Please review the os_placement role for information about how to configure the new placement service.

  • The nova-lxd driver is no longer supported upstream, and the git repo for its source code has been retired on the master branch. All code for deploying or testing nova-lxd has been removed from the os_nova ansible role. The following variables have been removed:

    • nova_supported_virt_types ‘lxd’ list entry

    • nova_compute_lxd_pip_packages

    • lxd_bind_address

    • lxd_bind_port

    • lxd_storage_backend

    • lxd_trust_password

    • lxd_storage_create_device

  • Removal of the netloc, netloc_no_port and netorigin filters. Please use the urlsplit filter instead. All usages of the deprecated filters in openstack repos have been updated.

  • The py_pkgs and packages_file Ansible lookups are no longer used in OSA and have been removed from the plugins repository.

  • Due to the use of systemd-journald, the mapping of /openstack/log/ to /var/log/$SERVICE is no longer present. The rsyslog_client role is also no longer called for projects, since logs are stored in the journal. Variables such as service_log_dir are no longer supported and have no effect.

Bug Fixes

  • ceilometer-polling services running on compute nodes did not have the polling namespace configured. Because of this they used the default value of running all pollsters from the central and compute namespaces. But the pollsters from the central namespace don’t have to run on every compute node. This is fixed by only running the compute pollsters on compute nodes.

  • The RyuBgpDriver is no longer available and has been replaced by the OsKenBgpDriver of the neutron_dynamic_routing project.

19.0.0.0rc1

New Features

  • Experimental support has been added to allow the deployment of the OpenStack Masakari service when hosts are present in the host group masakari-infra_hosts.

Known Issues

  • We are limiting the tarred inventory backups to 15, in addition to changes that only apply backups when the config has changed. These changes address an issue where the inventory was corrupted by parallel runs on large clusters.

19.0.0.0b1

New Features

  • Support has been added for deploying on Ubuntu 18.04 LTS hosts. The most significant change is a major version increment of LXC from 2.x to 3.x which deprecates some previously used elements of the container configuration file.

  • It is possible to configure Glance to allow cross origin requests by specifying the allowed origin address using the glance_cors_allowed_origin variable. By default, this will be the load balancer address.
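
    For example, the following override in user_variables.yml allows requests from a specific origin (the address shown is illustrative):

    glance_cors_allowed_origin: https://dashboard.example.com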

  • Added support for Mistral to be built as part of the repo build process.

  • Added the os-mistral-install.yml file to deploy mistral to hosts tagged with the hostgroup mistral_all.

  • It is now possible to use NFS mountpoints with the role by using the nova_nfs_client variable, which is useful for using NFS for instance data and saves.

  • The os_tempest role now has the ability to install from distribution packages by setting tempest_install_method to distro.

  • The new variable tempest_workspace has been introduced to set the location of the tempest workspace.

  • The default location of the tempest configuration file is now /etc/tempest/tempest.conf rather than the previous default of $HOME/.tempest/etc.

  • The service setup in keystone for aodh will now be executed through delegation to the aodh_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    aodh_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for barbican will now be executed through delegation to the barbican_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    barbican_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Added the launchpad and bugzilla keys to the tempest_test_blacklist ansible variable. Developers must have a way to track down why a test was added to the skip list, and one way is through bugs. This feature adds that information to the list of skipped tests in os_tempest.
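
    A minimal sketch of such an entry; key names other than launchpad and bugzilla, as well as the test name and bug references, are illustrative assumptions:

    tempest_test_blacklist:
      - test: tempest.api.compute.example_test
        launchpad: 1234567
        bugzilla: 7654321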

  • The blazar dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_blazar_ui: True
    
  • The service setup in keystone for ceilometer will now be executed through delegation to the ceilometer_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    ceilometer_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • It is now possible to modify the NTP server options in chrony using security_ntp_server_options.

  • Chrony got a new configuration option to synchronize the system clock back to the RTC using the security_ntp_sync_rtc variable. Disabled by default.
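
    For example, both options can be set in user_variables.yml (the values shown are illustrative):

    security_ntp_server_options: "iburst"
    security_ntp_sync_rtc: true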

  • The service setup in keystone for cinder will now be executed through delegation to the cinder_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    cinder_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The cloudkitty dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_cloudkitty_ui: True
    
  • The list of enabled filters for the Cinder scheduler, scheduler_default_filters in cinder.conf, could previously be defined only via an entry in cinder_cinder_conf_overrides. You now have the option to instead define a list variable, cinder_scheduler_default_filters, that defines the enabled filters. This is helpful if you either want to disable one of the filters enabled by default (at the time of writing, these are AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter), or if conversely you want to add a filter that is normally not enabled, such as DifferentBackendFilter or InstanceLocalityFilter.

    For example, to enable the InstanceLocalityFilter in addition to the normally enabled scheduler filters, use the following variable.

    cinder_scheduler_default_filters:
      - AvailabilityZoneFilter
      - CapacityFilter
      - CapabilitiesFilter
      - InstanceLocalityFilter
    
  • The option repo_venv_default_pip_packages has been added, which allows deployers to insert any packages into a service venv as needed. The option expects a list of strings which are valid python package names as found on PyPI.

  • The nova configuration is updated to always specify an LXD storage pool name when ‘nova_virt_type’ is ‘lxd’. The variable ‘lxd_storage_pool’ is defaulted to ‘default’, the LXD default storage pool name. A new variable ‘lxd_init_storage_pool’ is introduced which specifies the underlying storage pool name. ‘lxd_init_storage_pool’ is used by lxd init when setting up the storage pool. If not provided, lxd init will not use this parameter at all. Please see the lxd man page for further information about the storage pool parameter.

  • The service setup in keystone for designate will now be executed through delegation to the designate_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    designate_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The before and after configuration dicts are now compared to determine whether the config keys or values have changed, so a configuration file will not be incorrectly marked as changed when only the ordering has changed.

  • The diff return variable is now set to a dict of the changes applied.

  • The os_horizon role now supports distribution of user custom themes. Deployers can use the new key theme_src_archive of the horizon_custom_themes dictionary to provide the absolute path to an archived theme. Only .tar.gz, .tgz, .zip, .tar.bz, .tar.bz2, .tbz, .tbz2 archives are supported. The structure inside the archive should be the same as a standard theme, without any leading folders.
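
    A minimal sketch of such an entry; key names other than theme_src_archive, and the names and paths shown, are illustrative assumptions:

    horizon_custom_themes:
      - theme_name: exampletheme
        theme_src_archive: /etc/openstack_deploy/themes/exampletheme.tar.gz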

  • Python-tempestconf is a tool that generates a tempest.conf file based only on the credentials from an openstack installation. It uses the discoverable API from openstack to check for services, features, etc.

    It is now possible to use the python-tempestconf tool to generate the tempest.conf file, rather than using the role template.

  • Octavia creates VMs, security groups, and other resources in its project. In most cases the default quotas are not large enough. These are now adjusted to reasonable (and configurable) values.

  • Glance containers will now bind mount the default glance cache directory from the host when glance_default_store is set to file and nfs is not in use. With this change, the glance file cache size is no longer restricted to the size of the container file system.

  • The service setup in keystone for glance will now be executed through delegation to the glance_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    glance_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for gnocchi will now be executed through delegation to the gnocchi_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    gnocchi_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for heat will now be executed through delegation to the heat_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    heat_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for horizon will now be executed through delegation to the horizon_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    horizon_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Horizon has, since OSA’s inception, been deployed with HTTPS access enabled, and has had no way to turn it off. Some use-cases may want to access via HTTP instead, so this patch enables the following.

    • Listen via HTTPS on a load balancer, but via HTTP on the horizon host and have the load balancer forward the correct headers. It will do this by default in the integrated build due to the presence of the load balancer, so the current behaviour is retained.

    • Enable HTTPS on the horizon host without a load balancer. This is the role’s default behaviour which matches what it always has been.

    • Disable HTTPS entirely by setting haproxy_ssl: no (which will also disable HTTPS on haproxy). This setting is inherited by the new horizon_enable_ssl variable by default. This is a new option.

  • The service setup in keystone for ironic will now be executed through delegation to the ironic_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    ironic_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service updates for keystone will now be executed through delegation to the keystone_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    keystone_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • If the Horizon dashboard of an OSA installation has a public FQDN, it is now possible to use the LetsEncrypt certificate service. The certificate will be generated during the HAProxy installation, and a cron entry to renew the certificate daily will be set up. Note that there is no certificate distribution implementation at this time, so this will only work for a single haproxy-server environment.

  • The service setup in keystone for magnum will now be executed through delegation to the magnum_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    magnum_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Instead of downloading images to the magnum API servers, images are now downloaded to the magnum_service_setup_host, into the folder set in magnum_image_path, owned by magnum_image_path_owner.

  • The masakari dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_masakari_ui: True
    
  • It is now possible for deployers to enable or disable the mysqlcheck capability. The Boolean option galera_monitoring_check_enabled has been added which has a default value of true.

  • It is now possible to change the port used by mysqlcheck. The integer option galera_monitoring_check_port has been added with the default value of 9200.
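
    For example, both options can be overridden in user_variables.yml (the port value shown is illustrative):

    galera_monitoring_check_enabled: false
    galera_monitoring_check_port: 9201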

  • The Neutron Service Function Chaining Extension (SFC) can optionally be deployed and configured by defining the following service plugins:

    • flow_classifier

    • sfc

    neutron_plugin_base:
    - router
    - metering
    - flow_classifier
    - sfc
    

    For more information about SFC in Neutron, refer to the networking-sfc project documentation.

  • The provider_networks library has been updated to support the definition of network interfaces that can automatically be added as ports to OVS provider bridges setup during a deployment. To activate this feature, add the network_interface key to the respective flat and/or vlan provider network definition in openstack_user_config.yml. For more information, refer to the latest Open vSwitch deployment guide.

  • The service setup in keystone for neutron will now be executed through delegation to the neutron_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    neutron_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • VPNaaS dashboard is again available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_neutron_vpnaas: True
    
  • You can now set the Libvirt CPU model and feature flags from the appropriate entry under the nova_virt_types dictionary variable (normally kvm). nova_cpu_model is a string value that sets the CPU model; this value is ignored if you set any nova_cpu_mode other than custom. nova_cpu_model_extra_flags is a list that allows you to specify extra CPU feature flags not normally passed through with host-model, or the custom CPU model of your choice.
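
    A minimal sketch of such an override; the model and flags shown are illustrative, and note that overriding nova_virt_types replaces the whole dictionary, so any other keys your deployment relies on for the kvm entry must be kept as well:

    nova_virt_types:
      kvm:
        nova_cpu_mode: custom
        nova_cpu_model: Haswell-noTSX
        nova_cpu_model_extra_flags:
          - pcid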

  • The service setup in keystone for nova will now be executed through delegation to the nova_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    nova_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for octavia will now be executed through delegation to the octavia_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    octavia_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the nova_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the neutron_install_method variable to distro.

  • Deployers can now define a cinder-backend volume type as explicitly private or public, with the option public set to true or false.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in trove.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in barbican.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in aodh.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in ceilometer.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in designate.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in magnum.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in swift.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in octavia.

  • The container_interface provider network option is no longer required for Neutron provider network definitions when related agents or OVN controllers are deployed on bare metal.

  • The service setup in keystone for sahara will now be executed through delegation to the sahara_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    sahara_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for swift will now be executed through delegation to the swift_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    swift_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The tacker dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_tacker_ui: True
    
  • The service setup in keystone for tempest will now be executed through delegation to the tempest_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    tempest_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Rather than a hard-coded set of projects and users, tempest can now be configured with a custom list with the variables tempest_projects and tempest_users.

  • It is now possible to specify a list of tests for tempest to blacklist when executing using the tempest_test_blacklist list variable.

  • Allow the default section in an ini file to be specified using the default_section variable when calling a config_template task. This defaults to DEFAULT.
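
    A minimal sketch of a config_template task using the new option (the paths and values shown are illustrative):

    - name: Render an ini file with a custom default section
      config_template:
        src: example.conf.j2
        dest: /etc/example/example.conf
        config_overrides: "{{ example_conf_overrides }}"
        config_type: ini
        default_section: global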

  • The trove service setup in keystone will now be executed through delegation to the trove_service_setup_host which, by default, is localhost (the deploy host). Deployers can opt to rather change this to the utility container by implementing the following override in user_variables.yml.

    trove_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The MariaDB version has been bumped to 10.2

  • The watcher dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_watcher_ui: True
    
  • The zun dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_zun_ui: True
    

Known Issues

  • Although the ceph-rgw playbooks do enable Swift object versioning, support in radosgw is currently limited to setting X-Versions-Location on a container. X-History-Location, understood by native Swift, is currently not supported by radosgw (although the feature is pending upstream).

  • The number of inotify watch instances available is limited system wide via a sysctl setting. It is possible for certain processes, such as pypi-server or elasticsearch from the ops repo, to consume a large number of inotify watches. If the system wide maximum is reached then any process on the host, or in any container on the host, will be unable to create a new inotify watch. Systemd uses inotify watches, and if there are none available it is unable to restart services. The processes which synchronise the repo server contents between infra nodes also rely on inotify watches. If the repo servers fail to synchronise, or services fail to restart when expected, check the inotify watch limit, which is defined in the sysctl value fs.inotify.max_user_watches. Patches have merged to increase these limits, but existing environments, or those which have not upgraded to a recent enough point release, may have to apply an increased limit manually.

  • When using the connection plugin’s container_user option, ansible_remote_tmp should be set to a system writable path such as ‘/var/tmp/’.

Upgrade Notes

  • The supported upgrade path from Xenial to Bionic is via re-installation of the host OS across all nodes and redeployment of the required services. The Rocky branch of OSA is intended as the transition point for such upgrades from Xenial to Bionic. At this time there is no support for in-place operating system upgrades (typically via do-release-upgrade).

  • In Stein, Cinder stopped supporting configuring backup drivers without the full class path. This means that you must now use the following values for cinder_service_backup_driver.

    • cinder.backup.drivers.swift.SwiftBackupDriver

    • cinder.backup.drivers.ceph.CephBackupDriver

    If you do not make this change, the Cinder backup service will refuse to start properly.
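
    For example, a deployment backing up volumes to Ceph would set the following in user_variables.yml:

    cinder_service_backup_driver: cinder.backup.drivers.ceph.CephBackupDriver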

  • The data structure for tempest_test_blacklist has been updated to add launchpad and/or bugzilla references linked with the test being skipped.

  • The ceph-rgw playbooks now set rgw_swift_account_in_url = true and update the corresponding Keystone service catalog entry accordingly. Applications (such as monitoring scripts) that do not rely on service catalog lookup must be updated with the new endpoint URL that includes AUTH_%(tenant_id)s just like native Swift does — or, alternatively, should be updated to honor the service catalog after all.

  • The ceph-rgw playbooks now set rgw_swift_versioning_enabled = true, adding support for object versioning for the object-store service.

  • Changed the default NTP server options in chrony.conf. The offline option has been removed, minpoll/maxpoll have been removed in favour of the upstream defaults, while the iburst option was added to speed up initial time synchronization.

  • The variable cinder_iscsi_helper has been replaced by the new variable cinder_target_helper, because iscsi_helper has been deprecated in Cinder.

  • The data structure for galera_client_gpg_keys has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.

  • The default values for galera_client_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.

  • The data structure for galera_gpg_keys has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.

  • The default values for galera_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.

  • Glance containers will be rebooted to add the glance cache bind mount if glance_default_store is set to file and nfs is not in use.

  • The plugin names for the classifier and sfc changed:

    • networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin => flow_classifier

    • networking_sfc.services.sfc.plugin.SfcPlugin => sfc

  • The provider_networks library has been updated to support the definition of network interfaces that can automatically be added as ports to OVS provider bridges setup during a deployment. As a result, the network_interface value applied to the neutron_provider_networks override in user_variables.yml, as described in previous Open vSwitch deployment guides, is no longer effective. If overrides are necessary, use network_interface_mappings within the provider network override and specify the respective bridge-to-interface mapping (e.g. “br-provider:bond1”). For more information, refer to the latest Open vSwitch deployment guide.

  • If your configuration previously set the libvirt/cpu_model and/or libvirt/cpu_model_extra_flags variables in a nova_nova_conf_overrides dictionary, you should consider moving those to nova_cpu_model and nova_cpu_model_extra_flags in the appropriate entry (normally kvm) in the nova_virt_types dictionary.

  • The tasks creating a keystone service user have been removed, along with related variables keystone_service_user_name and keystone_service_password. This user can be deleted in existing deployments.

  • The data structure for rabbitmq_gpg_keys has been changed to be a dict passed directly to the applicable apt_key/rpm_key module. As such any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.

  • The default values for rabbitmq_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.

  • The default queue policy has changed to ^(?!(amq\.)|(.*_fanout_)|(reply_)).* instead of ^(?!amq\.).* for efficiency. The new HA policy excludes reply queues (these queues have a single consumer and TTL policy), fanout queues (they have the TTL policy) and amq queues (they are auto-delete queues, with a single consumer).

  • The variable tempest_image_dir_owner is removed in favour of using the default ansible user to create the image directory.

  • The glance v1 API is now removed upstream and the deployment code is now removed from this glance ansible role. The variable glance_enable_v1_api is removed.

  • The variables ceilometer_oslomsg_rpc_servers and ceilometer_oslomsg_notify_servers have been removed in favour of using ceilometer_oslomsg_rpc_host_group and ceilometer_oslomsg_notify_host_group instead.

  • Due to the smart-resources implementation, the variables related to custom git paths of specific config files have been removed. All config files are now taken from the upstream git repo, but overrides and client configs are still supported. The following variables are no longer supported:

    • ceilometer_git_config_lookup_location

    • ceilometer_data_meters_git_file_path

    • ceilometer_event_definitions_git_file_path

    • ceilometer_gnocchi_resources_git_file_path

    • ceilometer_loadbalancer_v2_meter_definitions_git_file_path

    • ceilometer_osprofiler_event_definitions_git_file_path

    • ceilometer_polling_git_file_path

    If you are maintaining a custom ceilometer git repository, you may still use the ceilometer_git_repo variable to provide the url to your git repository.

  • The data structure for ceph_gpg_keys has been changed to be a list of dicts, each of which is passed directly to the applicable apt_key/rpm_key module. As such any overrides would need to be reviewed to ensure that they do not pass any key/value pairs which would cause the module to fail.

  • The default values for ceph_gpg_keys have been changed for all supported platforms and now use vendored keys. This means that the task execution will no longer reach out to the internet to add the keys, making offline or proxy-based installations easier and more reliable.

  • A new value epel_gpg_keys can be overridden to use a different GPG key for the EPEL-7 RPM package repo instead of the vendored key used by default.

Deprecation Notes

  • The variable aodh_requires_pip_packages is no longer required and has therefore been removed.

  • The variable barbican_requires_pip_packages is no longer required and has therefore been removed.

  • The following variables are no longer used and have therefore been removed.

    • ceilometer_requires_pip_packages

    • ceilometer_service_name

    • ceilometer_service_port

    • ceilometer_service_proto

    • ceilometer_service_type

    • ceilometer_service_description

  • The variable cinder_requires_pip_packages is no longer required and has therefore been removed.

  • There was previously an environment variable (ANSIBLE_ROLE_FETCH_MODE) to set whether the roles in ansible-role-requirements.yml were fetched using ansible-galaxy or using git. However, the default has been git for some time, and since the use of the ceph-ansible repository for ceph deployment, using ansible-galaxy to download the roles does not work properly. This functionality has therefore been removed.

  • The variable designate_requires_pip_packages is no longer required and has therefore been removed.

  • Dragonflow is no longer maintained as an OpenStack project and has therefore been removed from OpenStack-Ansible as a supported ML2 driver for neutron.

  • The get_gested filter has been removed, as it is not used by any roles/plays.

  • The variable glance_requires_pip_packages is no longer required and has therefore been removed.

  • The variable gnocchi_requires_pip_packages is no longer required and has therefore been removed.

  • The variable heat_requires_pip_packages is no longer required and has therefore been removed.

  • The variable horizon_requires_pip_packages is no longer required and has therefore been removed.

  • The variable ironic_requires_pip_packages is no longer required and has therefore been removed.

  • The log path, /var/log/barbican is no longer used to capture service logs. All logging for the barbican service will now be sent directly to the systemd journal.

  • The log path, /var/log/keystone is no longer used to capture service logs. All logging for the Keystone service will now be sent directly to the systemd journal.

  • The log path, /var/log/congress is no longer used to capture service logs. All logging for the congress service will now be sent directly to the systemd journal.

  • The log path, /var/log/cinder is no longer used to capture service logs. All logging for the cinder service will now be sent directly to the systemd journal.

  • The log path, /var/log/blazar is no longer used to capture service logs. All logging for the blazar service will now be sent directly to the systemd journal.

  • The log path, /var/log/aodh is no longer used to capture service logs. All logging for the aodh service will now be sent directly to the systemd journal.

  • The log path, /var/log/ceilometer is no longer used to capture service logs. All logging for the ceilometer service will now be sent directly to the systemd journal.

  • The log path, /var/log/designate is no longer used to capture service logs. All logging for the designate service will now be sent directly to the systemd journal.

  • The variable keystone_requires_pip_packages is no longer required and has therefore been removed.

  • The following variable name changes have been implemented in order to better reflect their purpose.

    • lxc_host_machine_quota_disabled -> lxc_host_btrfs_quota_disabled

    • lxc_host_machine_qgroup_space_limit -> lxc_host_btrfs_qgroup_space_limit

    • lxc_host_machine_qgroup_compression_limit -> lxc_host_btrfs_qgroup_compression_limit

  • The variable magnum_requires_pip_packages is no longer required and has therefore been removed.

  • The variable neutron_requires_pip_packages is no longer required and has therefore been removed.

  • The variable nova_requires_pip_packages is no longer required and has therefore been removed.

  • The variable octavia_requires_pip_packages is no longer required and has therefore been removed.

  • The variable octavia_image_downloader has been removed. The image download now uses the host designated by octavia_service_setup_host.

  • The variable octavia_ansible_endpoint_type has been removed. The endpoint used for ansible tasks has been hard set to the ‘admin’ endpoint as is commonly used across all OSA roles.

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • trove_oslomsg_rpc_servers replaces trove_rabbitmq_servers

    • trove_oslomsg_rpc_port replaces trove_rabbitmq_port

    • trove_oslomsg_rpc_use_ssl replaces trove_rabbitmq_use_ssl

    • trove_oslomsg_rpc_userid replaces trove_rabbitmq_userid

    • trove_oslomsg_rpc_vhost replaces trove_rabbitmq_vhost

    • added trove_oslomsg_notify_servers

    • added trove_oslomsg_notify_port

    • added trove_oslomsg_notify_use_ssl

    • added trove_oslomsg_notify_userid

    • added trove_oslomsg_notify_vhost

    • added trove_oslomsg_notify_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • barbican_oslomsg_rpc_servers replaces rabbitmq_servers

    • barbican_oslomsg_rpc_port replaces rabbitmq_port

    • barbican_oslomsg_rpc_userid replaces barbican_rabbitmq_userid

    • barbican_oslomsg_rpc_vhost replaces barbican_rabbitmq_vhost

    • added barbican_oslomsg_rpc_use_ssl

    • added barbican_oslomsg_notify_servers

    • added barbican_oslomsg_notify_port

    • added barbican_oslomsg_notify_use_ssl

    • added barbican_oslomsg_notify_userid

    • added barbican_oslomsg_notify_vhost

    • added barbican_oslomsg_notify_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • aodh_oslomsg_rpc_servers replaces aodh_rabbitmq_servers

    • aodh_oslomsg_rpc_port replaces aodh_rabbitmq_port

    • aodh_oslomsg_rpc_use_ssl replaces aodh_rabbitmq_use_ssl

    • aodh_oslomsg_rpc_userid replaces aodh_rabbitmq_userid

    • aodh_oslomsg_rpc_vhost replaces aodh_rabbitmq_vhost

    • aodh_oslomsg_rpc_password replaces aodh_rabbitmq_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • ceilometer_oslomsg_rpc_servers replaces rabbitmq_servers

    • ceilometer_oslomsg_rpc_port replaces rabbitmq_port

    • ceilometer_oslomsg_rpc_userid replaces ceilometer_rabbitmq_userid

    • ceilometer_oslomsg_rpc_vhost replaces ceilometer_rabbitmq_vhost

    • added ceilometer_oslomsg_rpc_use_ssl

    • added ceilometer_oslomsg_notify_servers

    • added ceilometer_oslomsg_notify_port

    • added ceilometer_oslomsg_notify_use_ssl

    • added ceilometer_oslomsg_notify_userid

    • added ceilometer_oslomsg_notify_vhost

    • added ceilometer_oslomsg_notify_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • designate_oslomsg_rpc_servers replaces designate_rabbitmq_servers

    • designate_oslomsg_rpc_port replaces designate_rabbitmq_port

    • designate_oslomsg_rpc_use_ssl replaces designate_rabbitmq_use_ssl

    • designate_oslomsg_rpc_userid replaces designate_rabbitmq_userid

    • designate_oslomsg_rpc_vhost replaces designate_rabbitmq_vhost

    • designate_oslomsg_notify_servers replaces designate_rabbitmq_telemetry_servers

    • designate_oslomsg_notify_port replaces designate_rabbitmq_telemetry_port

    • designate_oslomsg_notify_use_ssl replaces designate_rabbitmq_telemetry_use_ssl

    • designate_oslomsg_notify_userid replaces designate_rabbitmq_telemetry_userid

    • designate_oslomsg_notify_vhost replaces designate_rabbitmq_telemetry_vhost

    • designate_oslomsg_notify_password replaces designate_rabbitmq_telemetry_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • magnum_oslomsg_rpc_servers replaces rabbitmq_servers

    • magnum_oslomsg_rpc_port replaces rabbitmq_port

    • magnum_oslomsg_rpc_userid replaces magnum_rabbitmq_userid

    • magnum_oslomsg_rpc_vhost replaces magnum_rabbitmq_vhost

    • added magnum_oslomsg_rpc_use_ssl

    • added magnum_oslomsg_notify_servers

    • added magnum_oslomsg_notify_port

    • added magnum_oslomsg_notify_use_ssl

    • added magnum_oslomsg_notify_userid

    • added magnum_oslomsg_notify_vhost

    • added magnum_oslomsg_notify_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • swift_oslomsg_notify_servers replaces swift_rabbitmq_telemetry_servers

    • swift_oslomsg_notify_port replaces swift_rabbitmq_telemetry_port

    • swift_oslomsg_notify_use_ssl replaces swift_rabbitmq_telemetry_use_ssl

    • swift_oslomsg_notify_userid replaces swift_rabbitmq_telemetry_userid

    • swift_oslomsg_notify_vhost replaces swift_rabbitmq_telemetry_vhost

    • swift_oslomsg_notify_password replaces swift_rabbitmq_telemetry_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.

    • octavia_oslomsg_rpc_servers replaces octavia_rabbitmq_servers

    • octavia_oslomsg_rpc_port replaces octavia_rabbitmq_port

    • octavia_oslomsg_rpc_use_ssl replaces octavia_rabbitmq_use_ssl

    • octavia_oslomsg_rpc_userid replaces octavia_rabbitmq_userid

    • octavia_oslomsg_rpc_vhost replaces octavia_rabbitmq_vhost

    • octavia_oslomsg_notify_servers replaces octavia_rabbitmq_telemetry_servers

    • octavia_oslomsg_notify_port replaces octavia_rabbitmq_telemetry_port

    • octavia_oslomsg_notify_use_ssl replaces octavia_rabbitmq_telemetry_use_ssl

    • octavia_oslomsg_notify_userid replaces octavia_rabbitmq_telemetry_userid

    • octavia_oslomsg_notify_vhost replaces octavia_rabbitmq_telemetry_vhost

    • octavia_oslomsg_notify_password replaces octavia_rabbitmq_telemetry_password

  • The repo server’s reverse proxy for pypi has now been removed, leaving only the pypiserver to serve packages already on the repo server. The attempt to reverse proxy upstream pypi turned out to be very unstable with increased complexity for deployers using proxies or offline installs. With this, the variables repo_nginx_pypi_upstream and repo_nginx_proxy_cache_path have also been removed.

  • The package cache on the repo server has been removed. If caching of packages is desired, it should be setup outside of OpenStack-Ansible and the variable lxc_container_cache_files (for LXC containers) or nspawn_container_cache_files_from_host (for nspawn containers) can be used to copy the appropriate host configuration from the host into the containers on creation. Alternatively, environment variables can be set to use the cache in the host /etc/environment file prior to container creation, or the deployment_environment_variables can have the right variables set to use it. The following variables have been removed.

    • repo_pkg_cache_enabled

    • repo_pkg_cache_port

    • repo_pkg_cache_bind

    • repo_pkg_cache_dirname

    • repo_pkg_cache_dir

    • repo_pkg_cache_owner

    • repo_pkg_cache_group

  • The repo build process no longer builds packaged venvs. Instead, the venvs are created on the target hosts as the install process for each service needs to. This opens up the opportunity for roles to be capable of creating multiple venvs, and for any role to create venvs - neither of these options were possible in previous releases.

    The following variables therefore have been removed.

    • repo_build_venv_selective

    • repo_build_venv_rebuild

    • repo_build_venv_timeout

    • repo_build_concurrency

    • repo_build_venv_build_dir

    • repo_build_venv_dir

    • repo_build_venv_pip_install_options

    • repo_build_venv_command_options

    • repo_venv_default_pip_packages

  • The variable repo_requires_pip_packages is no longer required and has therefore been removed.

  • The variable sahara_requires_pip_packages is no longer required and has therefore been removed.

  • The variable swift_requires_pip_packages is no longer required and has therefore been removed.

  • The variable tempest_requires_pip_packages is no longer required and has therefore been removed.

  • The variable tempest_image_downloader has been removed. The image download now uses the host designated by tempest_service_setup_host.

  • The variable trove_requires_pip_packages is no longer required and has therefore been removed.

Security Issues

  • Avoid setting the quotas too high for your cloud, since this can impact the performance of other services and lead to a potential Denial-of-Service attack if Loadbalancer quotas are not set properly or RBAC is not properly set up.

  • The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the ssl_protocol variable.

  • The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the barbican_ssl_protocol variable.

  • The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the horizon_ssl_protocol variable.

  • The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the keystone_ssl_protocol variable.

  • The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the gnocchi_ssl_protocol variable.

  • The default TLS version has been set to force-tlsv12. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the haproxy_ssl_bind_options variable.

  • The default TLS version has been set to TLS1.2. This only allows version 1.2 of the protocol to be used when terminating or creating TLS connections. You can change the value with the trove_ssl_protocol variable.

Bug Fixes

  • The ceph-rgw playbooks now include the AUTH_%(tenant_id)s suffix in the Keystone object-store endpoint. This aligns radosgw’s behavior with that of native Swift. It also enables radosgw to support public read ACLs on containers, and temporary URLs on objects, in the same way that Swift does (bug 1800637).

  • Fixes neutron HA routers, by enabling neutron-l3-agent to invoke the required helper script.

  • The quota for security group rules was erroneously set to 100 in total, while the intent was to allow 100 security group rules per security group (that is, 100 multiplied by the number of security groups). This patch fixes this discrepancy.

  • When using LXC containers with a copy-on-write back-end, the lxc_hosts role execution would fail due to undefined variables with the nspawn_host_ prefix. This issue has now been fixed.

  • In https://review.openstack.org/582633 an adjustment was made to the openstack-ansible wrapper which mistakenly changed the intended behaviour. The wrapper is only meant to include the extra-vars and invoke the inventory if ansible-playbook was executed from inside the openstack-ansible repository clone (typically /opt/openstack-ansible), but the change made the path irrelevant. This has now been fixed: ansible-playbook and ansible will only invoke the inventory and include the extra vars when executed from inside the git clone path.

  • With the release of CentOS 7.6, deployments were breaking and becoming very slow when we restarted dbus in order to catch some PolicyKit changes. However, those changes were never actually used, so the restarts were happening for no reason. We no longer make any modifications to the systemd-machined configuration and/or PolicyKit, in order to maintain upstream compatibility.

  • The conditional that determines whether the sso_callback_template.html file is deployed for federated deployments has been fixed.

Other Notes

  • The config_template action module has now been moved into its own git repository (openstack/ansible-config_template). This has been done to simplify the ability to use the plugin in other non OpenStack-Ansible projects.

  • When running keystone with apache(httpd) all apache logs will be stored in the standard apache log directory which is controlled by the distro specific variable keystone_apache_default_log_folder.

  • When running aodh with apache(httpd) all apache logs will be stored in the standard apache log directory which is controlled by the distro specific variable aodh_apache_default_log_folder.

  • Code which added ‘Acquire::http:No-Cache true’ to the host and container apt preferences when http proxy environment variables were set has been removed. This setting is only required when working around issues introduced by badly configured http proxies. In some cases proxies can improperly cache the apt Releases and Packages files, leading to package installation errors. If a deployment is behind a badly configured proxy, the deployer can add the necessary apt config fragment as part of host provisioning. OSA will replicate that config into any containers that are created. This setting can be removed from existing deployments if required by manually deleting the file /etc/apt/apt.conf.d/00apt-no-cache from all hosts and containers.
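
    A minimal sketch of an ad-hoc cleanup playbook a deployer could use for this (not shipped with OSA; the host pattern is an assumption and should be adjusted to the deployment's inventory):

    - name: Remove the obsolete apt no-cache configuration fragment
      hosts: all
      tasks:
        - name: Delete /etc/apt/apt.conf.d/00apt-no-cache
          file:
            path: /etc/apt/apt.conf.d/00apt-no-cache
            state: absent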

18.0.0.0rc1

New Features

  • Deployers can now set the install_method to either source (default) or distro to choose the method for installing OpenStack services on the hosts. This only applies to new deployments. Existing deployments which are source based, cannot be converted to the new distro method. For more information, please refer to the Deployment Guide.

Deprecation Notes

  • The molteniron service is no longer included in the OSA integrated build. Any deployers wishing to use it may still use the playbook and configuration examples from the os_molteniron role.

18.0.0.0b3

Upgrade Notes

  • The ping check that happens inside keepalived to make sure that the server that runs it can reach 193.0.14.129 has been removed by default. The functionality can still be used if you set keepalived_ping_address in your user_variables.yml file to 193.0.14.129 or any IP of your choice.
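
    For example, to keep the ping check (a minimal user_variables.yml sketch using the address mentioned above):

    keepalived_ping_address: "193.0.14.129"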

18.0.0.0b2

New Features

  • Adds support for the horizon octavia-ui dashboard. The dashboard will be automatically enabled if any octavia hosts are defined. If both Neutron LBaaSv2 and Octavia are enabled, two Load Balancer panels will be visible in Horizon.

  • Deployers can now set the container_tech to nspawn when deploying OSA within containers. When making the decision to deploy container types the deployer only needs to define the desired container_tech and continue the deployment as normal.

  • With the addition of the container_tech option and the inclusion of nspawn support, deployers now have the ability to define a desired containerization strategy globally or on specific hosts.

  • When using the nspawn driver containers will connect to the system bridges using a MACVLAN, more on this type of network setup can be seen here.

  • When using the nspawn driver container networking is managed by systemd-networkd both on the host and within the container. This gives us a single interface to manage regardless of distro and allows systemd to efficiently manage the resources.

  • When venvwithindex=True and ignorerequirements=True are both specified in rally_git_install_fragments (as was previously the default), this results in rally being installed from PyPI without any constraints being applied. This results in inconsistent builds from day to day, and can cause build failures for stable implementations due to new library releases. Going forward, we remove the rally_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs rally from PyPI, but with appropriate constraints applied.

  • When venvwithindex=True and ignorerequirements=True are both specified in tempest_git_install_fragments (as was previously the default), this results in tempest being installed from PyPI without any constraints being applied. This could result in the version of tempest being installed in the integrated build being different than the version being installed in the independent role tests. Going forward, we remove the tempest_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs tempest from PyPI, but with appropriate constraints applied.

  • Octavia requires SSL certificates for communication with the amphora. This adds the automatic creation of self-signed certificates for this purpose. It uses different certificate authorities for the amphora and the control plane, thus ensuring maximum security.

  • If defined in applicable host or group vars the variable container_extra_networks will be merged with the existing container_networks from the dynamic inventory. This allows a deployer to specify special interfaces which may be unique to an individual container. An example use for this feature would be applying known fixed IP addresses to public interfaces on BIND servers for designate.

  • The option rabbitmq_erlang_version_spec has been added allowing deployers to set the version of erlang used on a given installation.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the heat_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the cinder_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the glance_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the aodh_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the designate_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the swift_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the ceilometer_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the barbican_install_method variable to distro.

  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the keystone_install_method variable to distro.
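
    A minimal user_variables.yml sketch enabling the distro method for this role (the other <service>_install_method variables mentioned above follow the same pattern):

    keystone_install_method: "distro"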

  • The openrc role will no longer be executed on all OpenStack service containers/hosts. Instead a single host is designated through the use of the openstack_service_setup_host variable. The default is localhost (the deployment host). Deployers can opt to change this to the utility container by implementing the following override in user_variables.yml.

    openstack_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in cinder.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in nova.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in ironic.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in heat.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in glance.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in sahara.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in neutron.

  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in keystone.

  • An option to disable the machinectl quota system has been changed. The variable lxc_host_machine_quota_disabled is a Boolean with a default of false. When this option is set to true it will disable the machinectl quota system.

  • The options lxc_host_machine_qgroup_space_limit and lxc_host_machine_qgroup_compression_limit have been added allowing a deployer to set qgroup limits as they see fit. The default value for these options is “none” which is effectively unlimited. These options accept any nominal size value followed by the single letter type, example 64G. These options are only effective when the option lxc_host_machine_quota_disabled is set to false.
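
    A minimal sketch of the related overrides in user_variables.yml (the limit values are illustrative):

    lxc_host_machine_quota_disabled: false
    lxc_host_machine_qgroup_space_limit: "64G"
    lxc_host_machine_qgroup_compression_limit: "64G"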

  • A new playbook infra-journal-remote.yml to ship journals has been added. Physical hosts will now ship all available systemd journals to the logging infrastructure. The received journals will be split up by host and stored in the /var/log/journal/remote directory. This feature will give deployers greater access/insight into how the cloud is functioning, requiring nothing more than the systemd built-ins.

Known Issues

  • All OSA releases earlier than 17.0.5, 16.0.4, and 15.1.22 will fail to build the rally venv due to the release of the new cmd2-0.9.0 python library. Deployers are encouraged to update to the latest OSA release which pins to an appropriate version which is compatible with python2.

  • With the release of CentOS 7.5, all pike releases are broken due to a mismatch in version between the libvirt-python library specified by the OpenStack community, and the version provided in CentOS 7.5. As such OSA is unable to build the appropriate python library for libvirt. The only recourse for this is to upgrade the environment to the latest queens release.

Upgrade Notes

  • Users should purge the ‘ntp’ package from their hosts if ceph-ansible is enabled. ceph-ansible previously was configured to install ntp by default which conflicts with the OSA ansible-hardening role chrony service.

  • The key is_ssh_address has been removed from the openstack_user_config.yml and the dynamic inventory. This key was responsible for mapping an address to the container which was used for SSH connectivity. Because we’ve created the SSH connectivity plugin, which allows us to connect to remote containers without SSH, this option is no longer useful. To keep the openstack_user_config.yml clean, deployers can remove the option; however, moving forward it no longer has any effect.

  • The distribution package lookup and data output has been removed from the py_pkgs lookup so that the repo-build use of py_pkgs has reduced output and the lookup is purpose specific for python packages only.

Deprecation Notes

  • The use of the apt_package_pinning role as a meta dependency has been removed from the rabbitmq_server role. While the package pinning role is still used, it will now only be executed when the apt task file is executed.

  • The variable nova_compute_pip_packages is no longer used and has been removed.

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - cinder_oslomsg_rpc_servers replaces cinder_rabbitmq_servers - cinder_oslomsg_rpc_port replaces cinder_rabbitmq_port - cinder_oslomsg_rpc_use_ssl replaces cinder_rabbitmq_use_ssl - cinder_oslomsg_rpc_userid replaces cinder_rabbitmq_userid - cinder_oslomsg_rpc_vhost replaces cinder_rabbitmq_vhost - cinder_oslomsg_notify_servers replaces cinder_rabbitmq_telemetry_servers - cinder_oslomsg_notify_port replaces cinder_rabbitmq_telemetry_port - cinder_oslomsg_notify_use_ssl replaces cinder_rabbitmq_telemetry_use_ssl - cinder_oslomsg_notify_userid replaces cinder_rabbitmq_telemetry_userid - cinder_oslomsg_notify_vhost replaces cinder_rabbitmq_telemetry_vhost

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - nova_oslomsg_rpc_servers replaces nova_rabbitmq_servers - nova_oslomsg_rpc_port replaces nova_rabbitmq_port - nova_oslomsg_rpc_use_ssl replaces nova_rabbitmq_use_ssl - nova_oslomsg_rpc_userid replaces nova_rabbitmq_userid - nova_oslomsg_rpc_vhost replaces nova_rabbitmq_vhost - nova_oslomsg_notify_servers replaces nova_rabbitmq_telemetry_servers - nova_oslomsg_notify_port replaces nova_rabbitmq_telemetry_port - nova_oslomsg_notify_use_ssl replaces nova_rabbitmq_telemetry_use_ssl - nova_oslomsg_notify_userid replaces nova_rabbitmq_telemetry_userid - nova_oslomsg_notify_vhost replaces nova_rabbitmq_telemetry_vhost - nova_oslomsg_notify_password replaces nova_rabbitmq_telemetry_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - ironic_oslomsg_rpc_servers replaces ironic_rabbitmq_servers - ironic_oslomsg_rpc_port replaces ironic_rabbitmq_port - ironic_oslomsg_rpc_use_ssl replaces ironic_rabbitmq_use_ssl - ironic_oslomsg_rpc_userid replaces ironic_rabbitmq_userid - ironic_oslomsg_rpc_vhost replaces ironic_rabbitmq_vhost

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - heat_oslomsg_rpc_servers replaces heat_rabbitmq_servers - heat_oslomsg_rpc_port replaces heat_rabbitmq_port - heat_oslomsg_rpc_use_ssl replaces heat_rabbitmq_use_ssl - heat_oslomsg_rpc_userid replaces heat_rabbitmq_userid - heat_oslomsg_rpc_vhost replaces heat_rabbitmq_vhost - heat_oslomsg_rpc_password replaces heat_rabbitmq_password - heat_oslomsg_notify_servers replaces heat_rabbitmq_telemetry_servers - heat_oslomsg_notify_port replaces heat_rabbitmq_telemetry_port - heat_oslomsg_notify_use_ssl replaces heat_rabbitmq_telemetry_use_ssl - heat_oslomsg_notify_userid replaces heat_rabbitmq_telemetry_userid - heat_oslomsg_notify_vhost replaces heat_rabbitmq_telemetry_vhost - heat_oslomsg_notify_password replaces heat_rabbitmq_telemetry_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - glance_oslomsg_rpc_servers replaces glance_rabbitmq_servers - glance_oslomsg_rpc_port replaces glance_rabbitmq_port - glance_oslomsg_rpc_use_ssl replaces glance_rabbitmq_use_ssl - glance_oslomsg_rpc_userid replaces glance_rabbitmq_userid - glance_oslomsg_rpc_vhost replaces glance_rabbitmq_vhost - glance_oslomsg_notify_servers replaces glance_rabbitmq_telemetry_servers - glance_oslomsg_notify_port replaces glance_rabbitmq_telemetry_port - glance_oslomsg_notify_use_ssl replaces glance_rabbitmq_telemetry_use_ssl - glance_oslomsg_notify_userid replaces glance_rabbitmq_telemetry_userid - glance_oslomsg_notify_vhost replaces glance_rabbitmq_telemetry_vhost - glance_oslomsg_notify_password replaces glance_rabbitmq_telemetry_password

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - sahara_oslomsg_rpc_servers replaces sahara_rabbitmq_servers - sahara_oslomsg_rpc_port replaces sahara_rabbitmq_port - sahara_oslomsg_rpc_use_ssl replaces sahara_rabbitmq_use_ssl - sahara_oslomsg_rpc_userid replaces sahara_rabbitmq_userid - sahara_oslomsg_rpc_vhost replaces sahara_rabbitmq_vhost - sahara_oslomsg_notify_servers replaces sahara_rabbitmq_telemetry_servers - sahara_oslomsg_notify_port replaces sahara_rabbitmq_telemetry_port - sahara_oslomsg_notify_use_ssl replaces sahara_rabbitmq_telemetry_use_ssl - sahara_oslomsg_notify_userid replaces sahara_rabbitmq_telemetry_userid - sahara_oslomsg_notify_vhost replaces sahara_rabbitmq_telemetry_vhost

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - neutron_oslomsg_rpc_servers replaces neutron_rabbitmq_servers - neutron_oslomsg_rpc_port replaces neutron_rabbitmq_port - neutron_oslomsg_rpc_use_ssl replaces neutron_rabbitmq_use_ssl - neutron_oslomsg_rpc_userid replaces neutron_rabbitmq_userid - neutron_oslomsg_rpc_vhost replaces neutron_rabbitmq_vhost - neutron_oslomsg_notify_servers replaces neutron_rabbitmq_telemetry_servers - neutron_oslomsg_notify_port replaces neutron_rabbitmq_telemetry_port - neutron_oslomsg_notify_use_ssl replaces neutron_rabbitmq_telemetry_use_ssl - neutron_oslomsg_notify_userid replaces neutron_rabbitmq_telemetry_userid - neutron_oslomsg_notify_vhost replaces neutron_rabbitmq_telemetry_vhost

  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment. - keystone_oslomsg_rpc_servers replaces keystone_rabbitmq_servers - keystone_oslomsg_rpc_port replaces keystone_rabbitmq_port - keystone_oslomsg_rpc_use_ssl replaces keystone_rabbitmq_use_ssl - keystone_oslomsg_rpc_userid replaces keystone_rabbitmq_userid - keystone_oslomsg_rpc_vhost replaces keystone_rabbitmq_vhost - keystone_oslomsg_notify_servers replaces keystone_rabbitmq_telemetry_servers - keystone_oslomsg_notify_port replaces keystone_rabbitmq_telemetry_port - keystone_oslomsg_notify_use_ssl replaces keystone_rabbitmq_telemetry_use_ssl - keystone_oslomsg_notify_userid replaces keystone_rabbitmq_telemetry_userid - keystone_oslomsg_notify_vhost replaces keystone_rabbitmq_telemetry_vhost
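
    A minimal migration sketch for an existing override in user_variables.yml (the vhost value shown is illustrative):

    # Previously:
    # keystone_rabbitmq_vhost: "/keystone"
    # Now:
    keystone_oslomsg_rpc_vhost: "/keystone"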

  • With the implementation of systemd-journal-remote the rsyslog_client role is no longer run by default. To enable the legacy functionality, the variables rsyslog_client_enabled and rsyslog_server_enabled can be set to true.
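
    For example, to retain the legacy rsyslog behaviour, set the following in user_variables.yml:

    rsyslog_client_enabled: true
    rsyslog_server_enabled: true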

Security Issues

  • It is recommended that the certificate generation is always reviewed by security professionals since algorithms and key-lengths considered secure change all the time.

Bug Fixes

  • Newer releases of CentOS ship a version of libnss that depends on the existence of /dev/random and /dev/urandom in the operating system in order to run. This causes a problem during the cache preparation process, which runs inside a chroot that does not contain these devices, resulting in errors with the following message.

    error: Failed to initialize NSS library
    

    This has been resolved by introducing a /dev/random and /dev/urandom inside the chroot-ed environment.

  • ceph-ansible is no longer configured to install ntp by default, which creates a conflict with OSA’s ansible-hardening role that is used to implement ntp using ‘chrony’.

  • In order to prevent further issues with a libvirt and python-libvirt version mismatch, KVM-based compute nodes will now use the distribution package python library for libvirt. This should resolve the issue seen with pike builds on CentOS 7.5.

Other Notes

  • The max_fail_percentage playbook option has been used with the default playbooks since the first release of the playbooks back in Icehouse. While the intention was to allow large-scale deployments to succeed in cases where a single node fails due to transient issues, this option has produced more problems than it solves. If a failure occurs that is transient in nature but is under the set failure percentage, the playbook will report a success, which can cause silent failures depending on where the failure happened. If a deployer finds themselves in this situation, the problems are then compounded because the tools will report there are no known issues. To ensure deployers have the best deployment experience and the most accurate information, a change has been made to remove the max_fail_percentage option from all of the default playbooks. The removal of this option has the side effect of requiring the deployer to skip specific hosts should one need to be omitted from a run, but has the benefit of eliminating silent, hard to track down failures. To skip a failing host for a given playbook run, use the --limit ‘!$HOSTNAME’ CLI switch for the specific run. Once the issues have been resolved for the failing host, rerun the specific playbook without the --limit option to ensure everything is in sync.

17.0.0.0rc1

New Features

  • A new variable has been added which allows deployers to set the container technology OSA will use when running a deployment in containers. This new variable is container_tech which has a default value of “lxc”.

  • Persistent systemd journals are now enabled. This allows deployers to keep older systemd journals on disk for review. The disk space requirements are extremely low since the journals are stored in binary format. The default location for persistent journals is in /var/log/journal.

    Deployers can opt out of this change by setting openstack_host_keep_journals to no.
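
    For example, in user_variables.yml:

    openstack_host_keep_journals: no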

Deprecation Notes

  • The variables keystone_memcached_servers and keystone_cache_backend_argument have been deprecated in favor of keystone_cache_servers, a list of servers for caching purposes.

  • The Ceilometer API is no longer available in the Queens release of OpenStack, this patch removes all references to API related configurations as they are no longer needed.

  • The nova_placement database which was implemented in the ocata release of OpenStack-Ansible was never actually used for anything due to reverts in the upstream code. The database should be empty and can be deleted. With this the following variables also no longer have any function and have been removed.

    • nova_placement_galera_user

    • nova_placement_galera_database

    • nova_placement_db_max_overflow

    • nova_placement_db_max_pool_size

    • nova_placement_db_pool_timeout

Security Issues

  • The PermitRootLogin setting in sshd_config changed from ‘yes’ to ‘prohibit-password’ in the containers. By default there is no password set in the containers, but the ssh public key from the deployment host is injected into the target nodes’ authorized_keys.

Bug Fixes

  • SELinux policy for neutron on CentOS 7 is now provided to fix SELinux AVCs that occur when neutron’s agents attempt to start daemons such as haproxy and dnsmasq.

17.0.0.0b3

New Features

  • A new variable has been added to allow a deployer to control the restart of containers from common-tasks/os-lxc-container-setup.yml. This new option is lxc_container_allow_restarts and has a default of true. If a deployer wishes to disable the auto-restart functionality they can set this value to false and automatic container restarts will be disabled. This is a complement to the same option already present in the lxc_container_create role. This option is useful to avoid uncoordinated restarts of galera or rabbitmq containers if the LXC container configuration changes in a way that requires a restart.
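
    For example, to disable the automatic restarts (a minimal user_variables.yml sketch):

    lxc_container_allow_restarts: false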

  • An option has been added allowing the user to define the user_group LBaaSv2 uses. The new option is neutron_lbaasv2_user_group and is set to the OS-specific value by default.

  • The lxcbr0 bridge now allows NetworkManager to control it, which allows for networks to start in the correct order when the system boots. In addition, the NetworkManager-wait-online.service is enabled to ensure that all services that require networking to function, such as keepalived, will only start when network configuration is complete. These changes are only applied if a deployer is actively using NetworkManager in their environment.

  • Neutron connectivity agents will now be deployed on baremetal within the “network_hosts” defined within the openstack_user_config.yml.

  • A new option has been added allowing deployers to disable any and all containers on a given host. The option no_containers is a boolean which, if undefined, will default to false. This option can be added to any host in the openstack_user_config.yml or via an override in conf.d. When this option is set to true the given host will be treated as a baremetal machine. The new option mirrors the existing environmental option is_metal but allows deployers to target specific hosts instead of entire groups.

    log_hosts:
      infra-1:
        ip: 172.16.24.2
        no_containers: true
    
  • HAProxy services that use backend nodes that are not in the Ansible inventory can now be specified manually by setting haproxy_backend_nodes to a list of name and ip_addr settings.
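
    A minimal sketch of such an entry (the node name and address are hypothetical):

    haproxy_backend_nodes:
      - name: "external-backend-1"
        ip_addr: "172.29.236.50"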

  • Deployers can set a refresh interval for haproxy’s stats page by setting the haproxy_stats_refresh_interval variable. The default value is 60, which causes haproxy to refresh the stats page every 60 seconds.

  • When using Glance and NFS the NFS mount point will now be managed using a systemd mount unit file. This change ensures the deployment of glance is not making potentially system impacting changes to the /etc/fstab and modernizes how we deploy glance when using shared storage.

  • New variables have been added to the glance role allowing a deployer to set the UID and GID of the glance user. The new options are glance_system_user_uid and glance_system_group_uid. These options are useful when deploying glance with shared storage as the back-end for images and will only set the UID and GID of the glance user when defined.

Upgrade Notes

  • When upgrading there is nothing a deployer must immediately do to run neutron agent services on hosts within the network_hosts group. Simply executing the playbooks will deploy the neutron servers on the baremetal machines and will leave all existing agents containers alone.

  • It is recommended for deployers to clean up the neutron_agents container(s) after an upgrade is complete and the cluster has been verified as stable. This can be done by simply disabling the neutron agents running in the neutron_agent container(s), re-balancing the agent services targeting the new baremetal agents, deleting the container(s), and finally removing the container(s) from inventory.

  • Default quotas were bumped for the following resources: networks (from 10 to 100), subnets (from 10 to 100), ports (from 50 to 500) to match upstream defaults.

Deprecation Notes

  • The galera_percona_xtrabackup_repo_url variable which was used on Ubuntu distributions to select the upstream Percona repository has been dropped and the default upstream repository is always used from now on.

  • In OSA deployments prior to Queens, if repo_git_cache_dir was set to a folder which existed on a repo container host then that folder would be symlinked to the repo container bind mount instead of synchronising its contents to the repo container. This functionality is deprecated in Queens and will be removed in Rocky. The ability to make use of the git cache still exists, but the folder contents will be synchronised from the deploy host to the repo container. If you have made use of the symlink functionality previously, please move the contents to a standard folder and remove the symlink.

  • The galera_client_opensuse_mirror_obs_url variable has been removed since the OBS repository is no longer used to install the MariaDB packages.

Other Notes

  • CentOS deployments require a special COPR repository for modern LXC packages. The COPR repository is not mirrored at this time and this causes failed gate tests and production deployments.

    The role now syncs the LXC packages down from COPR to each host and builds a local LXC package repository in /opt/thm-lxc2.0. This greatly reduces the amount of times that packages must be downloaded from the COPR server during deployments, which will reduce failures until the packages can be hosted with a more reliable source.

    In addition, this should speed up playbook runs since yum can check a locally-hosted repository instead of a remote repository with availability and performance challenges.

  • The vars plugin override_folder.py has been removed. With the move to Ansible 2.4 (https://review.openstack.org/#/c/522778) this plugin is no longer required. The functionality this plugin provided has been replaced with the native Ansible inventory plugin.

17.0.0.0b2

New Features

  • OpenStack-Ansible now supports the openSUSE Leap 42.3 distribution.

  • The Ceph stable release used by openstack-ansible and its ceph-ansible integration has been changed to the recent Ceph LTS Luminous release.

  • The galera cluster now supports cluster health checks over HTTP using port 9200. The new cluster check ensures a node is healthy by running a simple query against the wsrep sync status using the monitoring user. This change provides a more robust cluster check, ensuring we have the most fault tolerant galera cluster possible.

  • A typical OSA install will put the neutron and octavia queues on different vhosts, thus preventing the event streamer from working. While octavia is streaming to its own queue, the consumer on the neutron side listens to the neutron queue. With a recent octavia enhancement a separate queue for the event streamer can be configured. This patch sets up the event streamer to post into the neutron queue using neutron’s credentials, thus reaching the consumer on the neutron-lbaas side and allowing for streaming.

  • Generating and validating checksums for all files installed by packages is now disabled by default. The check causes delays in playbook runs and it can consume a significant amount of CPU and I/O resources. Deployers can re-enable the check by setting security_check_package_checksums to yes.

  • New hypervisor groups have been added allowing deployers to better define their compute workloads. While the generic “compute_hosts” group will still work, explicit definitions for compute hosts can now be defined using the ironic-compute_hosts, kvm-compute_hosts, lxd-compute_hosts, qemu-compute_hosts, and powervm-compute_hosts groups accordingly.

  • The maximum amount of time to wait until forcibly failing the LXC cache preparation process is now configurable using the lxc_cache_prep_timeout variable. The value is specified in seconds, with the default being 20 minutes.

  • Galera healthcheck has been improved, and relies on an xinetd service. By default, the service is inaccessible (filtered with the no_access directive). You can override the directive by setting galera_monitoring_allowed_source to any valid xinetd source value.
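
    For example, to allow checks from a monitoring subnet (a minimal user_variables.yml sketch; the value is illustrative and must be a source specification that xinetd accepts):

    galera_monitoring_allowed_source: "10.0.0.0/24 127.0.0.1"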

  • Open vSwitch dataplane with NSH support has been implemented. This feature may be activated by setting ovs_nsh_support: True in /etc/openstack_deploy/user_variables.yml.

  • A new variable, tempest_roles, has been added to the os_tempest role allowing users to define keystone roles to be used during tempest testing.

  • The security_sshd_permit_root_login setting can now be set to change the PermitRootLogin setting in /etc/ssh/sshd_config to any of the possible options. Set security_sshd_permit_root_login to one of without-password, prohibit-password, forced-commands-only, yes or no.
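
    For example (a minimal user_variables.yml sketch using one of the values listed above):

    security_sshd_permit_root_login: "prohibit-password"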

  • The repo server now implements nginx as a reverse proxy for python packages sourced from pypi. The initial query will be to a local deployment of pypiserver in order to serve any locally built packages, but if the package is not available locally it will retry the query against the upstream pypi mirror set in the variable repo_nginx_pypi_upstream (defaults to pypi) and cache the response.

  • The tempest_images data structure for the os_tempest role now expects the values for each image to include name (optionally) and format (the disk format). Also, the optional variable checksum may be used to set the checksum expected for the file in the format <algorithm>:<checksum>.

  • The default location for the image downloads in the os_tempest role set by the tempest_image_dir variable has now been changed to be /opt/cache/files in order to match the default location in nodepool. This improves the reliability of CI testing in OpenStack CI as it will find the file already cached there.

  • A new variable has been introduced into the os_tempest role named tempest_image_downloader. When set to deployment-host (which is the default) it uses the deployment host to handle the download of images to be used for tempest testing. The images are then uploaded to the target host for uploading into Glance.

  • The tasks within the ansible-hardening role are now based on Version 1, Release 3 of the Red Hat Enterprise Linux Security Technical Implementation Guide.

  • The sysctl parameter kernel.randomize_va_space is now set to 2 by default. This matches the default of most modern Linux distributions and it ensures that Address Space Layout Randomization (ASLR) is enabled.

  • The Datagram Congestion Control Protocol (DCCP) kernel module is now disabled by default, but a reboot is required to make the change effective.

  • Enable Kernel Shared Memory support by setting nova_compute_ksm_enabled to True.

  • Searching for world-writable files is now disabled by default. The search causes delays in playbook runs and it can consume a significant amount of CPU and I/O resources. Deployers can re-enable the search by setting security_find_world_writable_dirs to yes.

Upgrade Notes

  • The ceph-ansible integration has been updated to support the ceph-ansible v3.0 series tags. The new v3.0 series brings a significant refactoring of the ceph-ansible roles and vars, so it is strongly recommended to consult the upstream ceph-ansible documentation to perform any required vars migrations before you upgrade.

  • The ceph-ansible common roles are no longer namespaced with a galaxy-style ‘.’ (ie. ceph.ceph-common is now cloned as ceph-common), due to a change in the way upstream meta dependencies are handled in the ceph roles. The roles will be cloned according to the new naming, and an upgrade playbook ceph-galaxy-removal.yml has been added to clean up the stale galaxy-named roles.

  • The Ceph stable release used by openstack-ansible and its ceph-ansible integration has been changed to the recent Ceph LTS Luminous release.

  • KSM configuration is changed to disabled by default on Ubuntu. If you overcommit the RAM on your hypervisor it’s a good idea to set nova_compute_ksm_enabled to True.

  • The glance v1 API is now disabled by default as the API is scheduled to be removed in Queens.

  • The glance registry service is now disabled by default as it is not required for the v2 API and is scheduled to be removed in the future. The service can be enabled by setting glance_enable_v2_registry to True.

  • If you have overridden your openstack_host_specific_kernel_modules, please remove its group matching, and move that override directly to the appropriate group.

    Example, for an override like:

    - name: "ebtables"
      pattern: "CONFIG_BRIDGE_NF_EBTABLES"
      group: "network_hosts"
    

    You can create a file for the network_host group, inside its group vars folder /etc/openstack_deploy/group_vars/network_hosts, with the content:

    - name: "ebtables"
      pattern: "CONFIG_BRIDGE_NF_EBTABLES"
    
  • Any user that is coming from Pike or below on Ubuntu should modify their user_external_repos_list, switching the ubuntu cloud archive repository from state: present to state: absent. From now on, UCA will be defined with the filename uca. Deployers who want to use their own mirror can still override the variable uca_repo to point to it. Alternatively, the deployer can completely define which repos to add and remove, ignoring our defaults, by overriding openstack_hosts_package_repos.

Deprecation Notes

  • The glance_enable_v1_registry variable has been removed. When using the glance v1 API the registry service is required, so having a variable to disable it makes little sense. The service is now enabled/disabled for the v1 API using the glance_enable_v1_api variable.

  • The following variables have been removed from the os_tempest role to simplify it. They have been replaced through the use of the data structure tempest_images which now has equivalent variables per image. - cirros_version - tempest_img_url - tempest_image_file - tempest_img_disk_format - tempest_img_name - tempest_images.sha256 (replaced by checksum)

Critical Issues

  • The ceph-ansible integration has been updated to support the ceph-ansible v3.0 series tags. The new v3.0 series brings a significant refactoring of the ceph-ansible roles and vars, so it is strongly recommended to consult the upstream ceph-ansible documentation to perform any required vars migrations before you upgrade.

Security Issues

  • The following headers were added as additional default (and static) values: X-Content-Type-Options nosniff, X-XSS-Protection “1; mode=block”, and Content-Security-Policy “default-src ‘self’ https: wss:;”. Additionally, the X-Frame-Options header was added, defaulting to DENY. You may override the header via the keystone_x_frame_options variable.

  • Since we use neutron’s credentials to access the queue, security conscious people might want to set up an extra user for octavia on the neutron queue restricted to the topics octavia posts to.

Bug Fixes

  • When the glance_enable_v2_registry variable is set to True the corresponding data_api setting is now correctly set. Previously it was not set and therefore the API service was not correctly informed that the registry was operating.

  • The os_tempest tempest role was downloading images twice - once arbitrarily, and once to use for testing. This has been consolidated into a single download to a consistent location.

Other Notes

  • Added support for specifying GID and UID for cinder system user by defining cinder_system_user_uid and cinder_system_group_gid. This setting is optional.

  • The use_neutron option was marked to be removed in sahara.

16.0.0.0rc1

New Features

  • A new variable called ceph_extra_components is available for the ceph_client role. Extra components, packages, and services that are not shipped by default by OpenStack-Ansible can be defined here.

  • The cinder-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the cinder_wsgi_processes_max, cinder_wsgi_processes, cinder_wsgi_threads, and cinder_wsgi_buffer_size. Additionally, you can override any settings in the uWSGI ini configuration file using the cinder_api_uwsgi_ini_overrides setting. The uWSGI application will listen on the address specified by cinder_uwsgi_bind_address which defaults to 0.0.0.0.

  • The glance-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the glance_wsgi_processes_max, glance_wsgi_processes, glance_wsgi_threads, and glance_wsgi_buffer_size. Additionally, you can override any settings in the uWSGI ini configuration file using the glance_api_uwsgi_ini_overrides setting.

  • The heat-api, heat-api-cfn, and heat-api-cloudwatch services have moved to run as uWSGI applications. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the heat_wsgi_processes_max, heat_wsgi_processes, heat_wsgi_threads, and heat_wsgi_buffer_size. Additionally, you can override any settings in the uWSGI ini configuration file using the heat_api_uwsgi_ini_overrides, heat_api_cfn_uwsgi_ini_overrides, and heat_api_cloudwatch_uwsgi_ini_overrides settings. The uWSGI applications will listen on the addresses specified by heat_api_uwsgi_bind_address, heat_api_cfn_uwsgi_bind_address, and heat_api_cloudwatch_uwsgi_bind_address respectively, all of which default to 0.0.0.0.

  • The nova-api and nova-metadata services have moved to run as uWSGI applications. You can override their uWSGI configuration files using the nova_api_os_compute_uwsgi_ini_overrides and nova_api_metadata_uwsgi_ini_overrides settings.

  • The octavia-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the octavia_wsgi_processes_max, octavia_wsgi_processes, octavia_wsgi_threads, and octavia_wsgi_buffer_size. Additionally, you can override any settings in the uWSGI ini configuration file using the octavia_api_uwsgi_ini_overrides setting. The uWSGI application will listen on the address specified by octavia_uwsgi_bind_address which defaults to 0.0.0.0.

  • The sahara-api service has moved to run as a uWSGI application. You can set the max number of WSGI processes, the number of processes, threads, and buffer size utilizing the sahara_wsgi_processes_max, sahara_wsgi_processes, sahara_wsgi_threads, and sahara_wsgi_buffer_size. Additionally, you can override any settings in the uWSGI ini configuration file using the sahara_api_uwsgi_ini_overrides setting. The uWSGI application will listen on the address specified by sahara_uwsgi_bind_address which defaults to 0.0.0.0.
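
    A minimal sketch of such an override in user_variables.yml, assuming the standard [uwsgi] ini section (the worker count shown is illustrative):

    sahara_api_uwsgi_ini_overrides:
      uwsgi:
        processes: 8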

Upgrade Notes

  • The nova-placement service now runs as a uWSGI application that is not fronted by an nginx web-server by default. After upgrading, if the nova-placement service was running on a host or container without any other services requiring nginx, you should manually remove nginx.

Deprecation Notes

  • The gnocchi ceph component has been moved out as a default component required by the ceph_client role. It can now be optionally specified through the use of the ceph_extra_components variable.

  • Settings related to nginx and the placement service will no longer serve any purpose, and should be removed. Those settings are as follows - nova_placement_nginx_access_log_format_extras, nova_placement_nginx_access_log_format_combined, nova_placement_nginx_extra_conf, nova_placement_uwsgi_socket_port, and nova_placement_pip_packages.

  • Remove designate_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

Other Notes

  • The inventory generation code has been switched to use standard Python packaging tools. For most, this should not be a visible change. However, running the dynamic inventory script on a local development environment should now be called via python dynamic_inventory.py.

16.0.0.0b3

New Features

  • The os_swift role now supports the swift3 middleware, allowing access to swift via the Amazon S3 API. This feature can be enabled by setting swift_swift3_enabled to true.

  • Adds a way for the system to automatically create the Octavia management network if octavia_service_net_setup is enabled (the default). Additional parameters can control the setup.

  • Adds support for glance-image-id and automatic uploading of the image if octavia_amp_image_upload_enabled is True (default is False). This is mostly to work around the limitations of Ansible’s OpenStack support and should not be used in production settings. Instead, refer to the documentation to upload images yourself.

  • Deployers can now specify a custom package name or URL for an EPEL release package. CentOS systems use epel-release by default, but some deployers have a customized package that redirects servers to internal mirrors.

  • The os_cinder role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable cinder_all_software_updated is true. This variable will need to be set by the playbook consuming the role.

  • The os-cinder-install.yml playbook will now execute a rolling upgrade of cinder including database migrations (both schema and online) as per the procedure described in the cinder documentation. When haproxy is used as the load balancer, the backend being changed will be drained before changes are made, then added back to the pool once the changes are complete.

  • The config_template template module now supports writing out valueless INI options without suffixing them with ‘=’ or ‘:’. This is done via the ‘ignore_none_type’ attribute. If ignore_none_type is set to true, these key/value entries will be ignored; if it is set to false, then ConfigTemplateParser will write out only the option name without the ‘=’ or ‘:’ suffix. The default is true.

  • A new repository for installing modern erlang from ESL (erlang solutions) has been added giving us the ability to install and support modern stable erlang over numerous operating systems.

  • The ability to set the RabbitMQ repo URL for both erlang and RabbitMQ itself has been added. This has been done to allow deployers to define the location of a given repo without having to fully redefine the entire set of definitions for a specific repository. The default variables rabbitmq_gpg_keys, rabbitmq_repo_url, and rabbitmq_erlang_repo_url have been created to facilitate this capability.

  • The get_nested filter has been added, allowing for simplified value lookups inside of nested dictionaries. ansible_local|get_nested(‘openstack_ansible.swift’), for example, will look 2 levels down and return the result.

  • It’s now possible to disable the heat stack password field in horizon. The horizon_enable_heatstack_user_pass variable has been added and defaults to True.

  • The horizon_images_allow_location variable is added to support the IMAGES_ALLOW_LOCATION setting in the horizon_local_settings.py file, allowing an external location to be specified during image creation.

  • The os-neutron-install.yml playbook will now execute a rolling upgrade of neutron including database migrations (both expand and contract) as per the procedure described in the neutron documentation.

  • The os_nova role now provides for doing online data migrations once the db sync has been completed. The data migrations will not be executed until the boolean variable nova_all_software_updated is true. This variable will need to be set by the playbook consuming the role.

  • The os-nova-install.yml playbook will now execute a rolling upgrade of nova including database migrations as per the procedure described in the nova documentation.

  • The password minimum and maximum lifetimes are now opt-in changes that can take action against user accounts instead of printing debug warnings. Refer to the documentation for STIG requirements V-71927 and V-71931 to review the opt-in process and warnings.

  • Added the lxc_container_recreate option, which will destroy then recreate LXC containers. The container names and IP addresses will remain the same, as will the MAC addresses of any containers using the lxc_container_fixed_mac setting.

  • MAC addresses for containers with a fixed MAC (lxc_container_fixed_mac variable) are now saved to the /etc/ansible/facts.d/mac.fact file. Should such a container be destroyed but not removed from inventory, the interfaces will be recreated with the same MAC address when the container is recreated.

  • You can set the endpoint_type used when creating the Trove service network by specifying the trove_service_net_endpoint_type variable. This will default to internal. Other possible options are public and admin.

  • Add support for Ubuntu on IBM z Systems (s390x).

  • You can force update the translations direct from Zanata by setting horizon_translations_update to True. This will call the pull_catalog option built into horizon-manage.py. You should only use this when testing translations, otherwise this should remain set to the default of False.

Known Issues

  • OpenStack-Ansible sets a new variable, galera_disable_privatedevices, that controls whether the PrivateDevices configuration in MariaDB’s systemd unit file is enabled.

    If the galera_server role is deployed on a bare metal host, the MariaDB default is maintained (PrivateDevices=true). If the galera_server role is deployed within a container, the PrivateDevices configuration is set to false to work around a systemd bug with a bind mounted /dev/ptmx.

    See Launchpad Bug 1697531 for more details.

  • OpenStack-Ansible sets a new variable, memcached_disable_privatedevices, that controls whether the PrivateDevices configuration in MemcacheD’s systemd unit file is enabled.

    If the memcached_server role is deployed on a bare metal host, the default is maintained (PrivateDevices=true). If the role is deployed within a container, the PrivateDevices configuration is set to false to work around a systemd bug with a bind mounted /dev/ptmx.

    See Launchpad Bug 1697531 for more details.

  • MemcacheD sets PrivateDevices=true in its systemd unit file to add extra security around mount namespaces. While this is useful when running MemcacheD on a bare metal host with other services, it is less useful when MemcacheD is already in a container with its own namespaces. In addition, LXC 2.0.8 presents /dev/ptmx as a bind mount within the container and systemd 219 (on CentOS 7) cannot make an additional bind mount of /dev/ptmx when PrivateDevices is enabled.

    Deployers can set memcached_disable_privatedevices to yes to set PrivateDevices=false in the systemd unit file for MemcacheD on CentOS 7. The default is no, which keeps the default systemd unit file settings from the MemcacheD package.

    For additional information, refer to the following bugs:

  • MariaDB 10.1+ includes PrivateDevices=true in its systemd unit files to add extra security around mount namespaces for MariaDB. While this is useful when running MariaDB on a bare metal host with other services, it is less useful when MariaDB is already in a container with its own namespaces. In addition, LXC 2.0.8 presents /dev/ptmx as a bind mount within the container and systemd 219 (on CentOS 7) cannot make an additional bind mount of /dev/ptmx when PrivateDevices is enabled.

    Deployers can set galera_disable_privatedevices to yes to set PrivateDevices=false in the systemd unit file for MariaDB on CentOS 7. The default is no, which keeps the default systemd unit file settings from the MariaDB package.

    For additional information, refer to the following bugs:

Upgrade Notes

  • The EPEL repository is only installed and configured when the deployer sets security_enable_virus_scanner to yes. This allows the ClamAV packages to be installed. If security_enable_virus_scanner is set to no (the default), the EPEL repository will not be added.

    See Bug 1702167 for more details.

  • Deployers now have the option to prevent the EPEL repository from being installed by the role. Setting security_epel_install_repository to no prevents EPEL from being installed. This setting may prevent certain packages from installing, such as ClamAV.

  • Changing to the ESL repos has no upgrade impact. The version of erlang provided by ESL is newer than what is found in the distro repos. Furthermore, a pin has been added to ensure that APT always uses the ESL repos as its preferred source.

  • The entire repo build process is now idempotent. From now on when the repo build is re-run, it will only fetch updated git repositories and rebuild the wheels/venvs if the requirements have changed, or a new release is being deployed.

  • The git clone part of the repo build process now only happens when the requirements change. A git reclone can be forced by using the boolean variable repo_build_git_reclone.

  • The python wheel build process now only happens when requirements change. A wheel rebuild may be forced by using the boolean variable repo_build_wheel_rebuild.

  • The python venv build process now only happens when requirements change. A venv rebuild may be forced by using the boolean variable repo_build_venv_rebuild.

  • The repo build process now only has the following tags, providing a clear path for each deliverable. The tag repo-build-install completes the installation of required packages. The tag repo-build-wheels completes the wheel build process. The tag repo-build-venvs completes the venv build process. Finally, the tag repo-build-index completes the manifest preparation and indexing of the os-releases and links folders.

  • Keystone now uses uWSGI exclusively (instead of Apache with mod_wsgi) and has the web server acting as a reverse proxy. The default web server is now set to Nginx instead of Apache, but Apache will automatically be used if federation is configured.

  • The neutron library has been removed from OpenStack-Ansible’s plugins. Upstream Ansible modules for managing OpenStack network resources should be used instead.

Deprecation Notes

  • The variable keepalived_uca_enable is deprecated, and replaced by keepalived_ubuntu_src. The keepalived_uca_enable variable will be removed in future versions of the keepalived role. The value of keepalived_ubuntu_src should be either “uca”, “ppa”, or “native”, for respectively installing from the Ubuntu Cloud archive, from keepalived stable ppa, or not installing from an external source.

  • The variable keepalived_use_latest_stable is deprecated, and replaced by keepalived_package_state. The keepalived_use_latest_stable variable will be removed in future versions of the keepalived role. The value of keepalived_package_state should be either “latest” or “present”.
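
    A minimal user_variables.yml sketch using the replacement variables described above:

    keepalived_ubuntu_src: "uca"
    keepalived_package_state: "latest"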

  • The variables keystone_apache_enabled and keystone_mod_wsgi_enabled have been removed and replaced with a single variable keystone_web_server to optionally set the web server used for keystone.

  • The variable repo_build_pip_extra_index has been removed. The replacement list variable repo_build_pip_extra_indexes should be used instead.

  • The nova_cpu_mode Ansible variable has been removed by default, to allow Nova to detect the default value automatically. Hard-coded values can cause problems. You can still set nova_cpu_mode to enforce a cpu_mode for Nova. Additionally, the default value for the qemu libvirt_type is set to none to avoid issues caused with qemu 2.6.0.

  • Remove heat_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove octavia_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove keystone_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove cinder_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove trove_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove neutron_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove sahara_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove magnum_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

  • Remove glance_rpc_backend option due to deprecation of rpc_backend option in oslo.messaging.

Bug Fixes

  • MySQL cnf files can now be properly overridden. The config_template module has been extended to support valueless options, such as those found in the my.cnf file (e.g. quick under the mysqldump section). To use valueless options, use the ignore_none_type attribute of the config_template module.
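
    As an illustrative sketch only, a valueless option such as quick under the mysqldump section could be expressed as a key with a null value in an override dictionary consumed by config_template, with the task's ignore_none_type attribute set appropriately; the override variable name below is hypothetical and only demonstrates the shape of the data:

      galera_my_cnf_overrides:
        mysqldump:
          quick: null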

16.0.0.0b2

New Features

  • Simplifies configuration of lbaas-mgmt network.

  • Adds iptables rules to block traffic from the octavia management network to the octavia container for both ipv4 and ipv6.

  • A variable named bootstrap_user_variables_template has been added to the bootstrap-host role so the user can define the user variable template filename for AIO deployments.

  • For the os_aodh role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.
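
    A minimal sketch of restoring the previous timings through an override, assuming a unit template with a standard [Service] section and that aodh_api_init_config_overrides is one of the per-service override variables:

      aodh_api_init_config_overrides:
        Service:
          TimeoutSec: 300
          RestartSec: 150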

  • For the os_barbican role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the barbican_*_init_config_overrides variables which use the config_template task to change template defaults.

  • New variables have been added to allow a deployer to customize a ceilometer systemd unit file to their liking.

  • The task dropping the ceilometer systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.

  • Added cinder_auth_strategy variable to configure Cinder’s auth strategy since Cinder can work in noauth mode as well.

  • The os_ceilometer role now includes a facility where you can place your own templates in /etc/openstack_deploy/ceilometer (by default) and it will be deployed to the target host after being interpreted by the template engine. If no file is found there, the fallback of the git sourced template is used.

  • For the os_designate role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the designate_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.

  • From now on, a deployer can override any group_var in userspace by creating a folder /etc/openstack_deploy/group_vars/. This folder has precedence over the OpenStack-Ansible default group_vars, and the merge behavior is similar to Ansible merge behavior. The group_vars folder precedence can still be changed with the GROUP_VARS_PATH environment variable. The same applies to host vars.

  • The new option haproxy_backend_arguments can be utilized to add arbitrary options to a HAProxy backend like tcp-check or http-check.
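
    For illustration, arbitrary backend options could be supplied as a list of raw HAProxy directives; the values below are examples only, and the option is normally set within a HAProxy service definition:

      haproxy_backend_arguments:
        - "option httpchk HEAD /healthcheck"
        - "http-check expect status 200"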

  • For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.

  • The os_keystone role will now (by default) source the keystone-paste.ini, policy.json and sso_callback_template.html templates from the service git source instead of from the role. It also now includes a facility where you can place your own templates in /etc/openstack_deploy/keystone (by default) and it will be deployed to the target host after being interpreted by the template engine.

  • For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.

  • New variables have been added to allow a deployer to customize a magnum systemd unit file to their liking.

  • The task dropping the magnum systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.

  • New variables have been added to allow a deployer to customize an octavia systemd unit file to their liking.

  • The task dropping the octavia systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • For the os_octavia role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the octavia_*_init_overrides variables which use the config_template task to change template defaults.

  • Deployers may provide a list of custom haproxy template files to copy from the deployment host through the octavia_user_haproxy_templates variable and configure Octavia to make use of a custom haproxy template file with the octavia_haproxy_amphora_template variable.
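
    A hedged sketch of how this might be wired together; the src/dest keys in the list entries and the paths shown are assumptions used only to illustrate copying a template from the deployment host and pointing Octavia at it:

      octavia_user_haproxy_templates:
        - src: "/etc/openstack_deploy/octavia/haproxy.cfg.j2"
          dest: "/etc/octavia/templates/haproxy.cfg.j2"
      octavia_haproxy_amphora_template: "/etc/octavia/templates/haproxy.cfg.j2"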

  • For the os_sahara role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the sahara_*_init_config_overrides variables which use the config_template task to change template defaults.

  • The ability to disable the certificate validation when checking and interacting with the internal cinder endpoint has been implemented. In order to do so, set the following in /etc/openstack_deploy/user_variables.yml.

    cinder_service_internaluri_insecure: yes
    
  • For the os_swift role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables which use the config_template task to change template defaults.

  • New variables have been added to allow a deployer to customize a trove systemd unit file to their liking.

  • The task dropping the trove systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • For the os_trove role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the trove_*_init_config_overrides variables which use the config_template task to change template defaults.

Upgrade Notes

  • For the os_aodh role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the aodh_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_barbican role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the barbican_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_ceilometer role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ceilometer_*_init_config_overrides variables which use the config_template task to change template defaults.

  • The following variables have been removed from the os_ceilometer role as their respective upstream files are no longer present.

    • ceilometer_event_definitions_yaml_overrides

    • ceilometer_event_pipeline_yaml_overrides

  • For the os_designate role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the designate_*_init_config_overrides variables which use the config_template task to change template defaults.

  • The endpoint which designate uses to communicate with neutron has been set to the internalURL by default. This change has been done within the template designate.conf.j2 and can be changed using the designate_designate_conf_overrides variable.

  • For the os_glance role, the systemd unit RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. This value can be adjusted by using the glance_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_gnocchi role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the gnocchi_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_heat role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the heat_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_ironic role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the ironic_*_init_config_overrides variables which use the config_template task to change template defaults.

  • If you had your own keepalived configuration file, please rename and move it to the openstack-ansible user space, for example by moving it to /etc/openstack_deploy/keepalived/keepalived.yml. Our haproxy playbook does not load external variable files anymore. The keepalived variable override system has been standardised to the same method used elsewhere.

  • The keystone endpoints now have versionless URLs. Any existing endpoints will be updated.

  • For the os_keystone role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the keystone_*_init_config_overrides variables which use the config_template task to change template defaults.

  • The var lxc_container_ssh_delay along with SSH specific ping checks have been removed in favor of using Ansible’s wait_for_connection module, which will not rely on SSH to the container to verify connectivity. A new variable called lxc_container_wait_params has been added to allow configuration of the parameters passed to the wait_for_connection module.
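
    For example, the wait parameters could be tuned as below; the keys map onto arguments of the wait_for_connection module and the values shown are illustrative:

      lxc_container_wait_params:
        delay: 3
        timeout: 60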

  • The magnum client interaction will now make use of the public endpoints by default. Previously this was set to use internal endpoints.

  • The keystone endpoints for instances spawned by magnum will now be provided with the public endpoints by default. Previously this was set to use internal endpoints.

  • For the os_magnum role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the magnum_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_neutron role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the neutron_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_nova role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the nova_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_octavia role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the octavia_*_init_overrides variables which use the config_template task to change template defaults.

  • For the os_sahara role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the sahara_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_swift role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the swift_*_init_config_overrides variables which use the config_template task to change template defaults.

  • For the os_trove role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the trove_*_init_config_overrides variables which use the config_template task to change template defaults.

Deprecation Notes

  • The var lxc_container_ssh_delay along with SSH specific ping checks have been removed in favor of using Ansible’s wait_for_connection module, which will not rely on SSH to the container.

  • The upstream noVNC developers recommend that the keymap be automatically detected for virtual machine consoles. Three Ansible variables have been removed:

    • nova_console_keymap

    • nova_novncproxy_vnc_keymap

    • nova_spice_console_keymap

    Deployers can still set a specific keymap using a nova configuration override if necessary.
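
    A minimal sketch of such an override, assuming the keymap option is still accepted under the [vnc] section of nova.conf:

      nova_nova_conf_overrides:
        vnc:
          keymap: en-us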

  • The plumgrid network provider has been removed. This is being dropped without a full deprecation cycle because the company, plumgrid, no longer exists.

  • Remove cinder_glance_api_version option due to deprecation of glance_api_version option in Cinder.

Security Issues

  • The magnum client interaction will now make use of the public endpoints by default. Previously this was set to use internal endpoints.

  • The keystone endpoints for instances spawned by magnum will now be provided with the public endpoints by default. Previously this was set to use internal endpoints.

16.0.0.0b1

Prelude

The first release of the Red Hat Enterprise Linux 7 STIG was entirely renumbered from the pre-release versions. Many of the STIG configurations simply changed numbers, but some were removed or changed. A few new configurations were added as well.

New Features

  • CentOS7/RHEL support has been added to the ceph_client role.

  • Only Ceph repos are supported for now.

  • There is now experimental support to deploy OpenStack-Ansible on CentOS 7 for both development and test environments.

  • Experimental support has been added to allow the deployment of the OpenStack Octavia Load Balancing service when hosts are present in the host group octavia-infra_hosts.

  • New variables have been added to allow a deployer to customize an aodh systemd unit file to their liking.

  • The task dropping the aodh systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • The number of worker threads for neutron will now be capped at 16 unless a specific value is specified. Previously, the calculated number of workers could get too high on systems with a large number of processors. This was particularly evident on POWER systems.

  • Capping the default value for the variable aodh_wsgi_processes to 16 when the user doesn’t configure this variable. Default value is twice the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variable cinder_osapi_volume_workers to 16 when the user doesn’t configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variables glance_api_workers and glance_registry_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variable gnocchi_wsgi_processes to 16 when the user doesn’t configure this variable. Default value is twice the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variables heat_api_workers and heat_engine_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variables horizon_wsgi_processes and horizon_wsgi_threads to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variable ironic_wsgi_processes to 16 when the user doesn’t configure this variable. Default value is one fourth the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variable keystone_wsgi_processes to 16 when the user doesn’t configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variables neutron_api_workers, neutron_num_sync_threads and neutron_metadata_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variables nova_wsgi_processes, nova_osapi_compute_workers, nova_metadata_workers and nova_conductor_workers to 16 when the user doesn’t configure these variables. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variable repo_nginx_workers to 16 when the user doesn’t configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variable sahara_api_workers to 16 when the user doesn’t configure this variable. Default value is half the number of vCPUs available on the machine with a capping value of 16.

  • Capping the default value for the variable swift_proxy_server_workers to 16 when the user doesn’t configure this variable and the swift proxy is in a container. The default value is half the number of vCPUs available on the machine; the cap of 16 applies only when the proxy runs in a container.

  • New variables have been added to allow a deployer to customize a ceilometer systemd unit file to their liking.

  • The task dropping the ceilometer systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • Several configuration files that were not templated for the os_ceilometer role are now retrieved from git. The git repository used can be changed using the ceilometer_git_config_lookup_location variable. By default this points to git.openstack.org. These files can still be changed using the ceilometer_x_overrides variables.

  • New variables have been added to allow a deployer to customize a cinder systemd unit file to their liking.

  • The task dropping the cinder systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • Add support for the cinder v3 api. This is enabled by default, but can be disabled by setting the cinder_enable_v3_api variable to false.

  • For the os_cinder role, the systemd unit TimeoutSec value which controls the time between sending a SIGTERM signal and a SIGKILL signal when stopping or restarting the service has been reduced from 300 seconds to 120 seconds. This provides 2 minutes for long-lived sessions to drain while preventing new ones from starting before a restart or a stop. The RestartSec value which controls the time between the service stop and start when restarting has been reduced from 150 seconds to 2 seconds to make the restart happen faster. These values can be adjusted by using the cinder_*_init_config_overrides variables which use the config_template task to change template defaults.

  • Tags have been added to all of the common tasks with the prefix “common-”. This has been done to allow a deployer to rapidly run any of the common tasks on an as-needed basis without having to rerun an entire playbook.

  • The COPR repository for installing LXC on CentOS 7 is now set to a higher priority than the default to ensure that LXC packages always come from the COPR repository.

  • Deployers can provide a customized login banner via a new Ansible variable: security_login_banner_text. This banner text is used for non-graphical logins, which includes console and ssh logins.
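
    For example, a custom banner could be provided as a multi-line value (the text shown is illustrative):

      security_login_banner_text: |
        Authorized users only.
        Activity may be monitored and reported.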

  • The Designate pools.yaml file can now be generated via the designate_pools_yaml attribute, if desired. This allows users to populate the Designate DNS server configuration using attributes from other plays and obviates the need to manage the file outside of the Designate role.

  • The galera_client role will default to using the galera_repo_url URL if the value for it is set. This simplifies using an alternative mirror for the MariaDB server and client as only one variable needs to be set to cover them both.

  • New variables have been added to allow a deployer to customize a glance systemd unit file to their liking.

  • The task dropping the glance systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • The os_gnocchi role now includes a facility where you can place your own default api-paste.ini or policy.json file in /etc/openstack_deploy/gnocchi (by default) and it will be deployed to the target host after being interpreted by the template engine.

  • New variables have been added to allow a deployer to customize a gnocchi systemd unit file to their liking.

  • The task dropping the gnocchi systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • Several configuration files that were not templated for the os_gnocchi role are now retrieved from git. The git repository used can be changed using the gnocchi_git_config_lookup_location variable. By default this points to git.openstack.org. These files can still be changed using the gnocchi_x_overrides variables.

  • New variables have been added to allow a deployer to customize a heat systemd unit file to their liking.

  • The task dropping the heat systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • Allows SSL connections to Galera. The galera_use_ssl option has to be set to true; in this case a self-signed CA cert or a user-provided CA cert will be delivered to the container/host.

  • Implements SSL connections to MySQL. The galera_use_ssl option has to be set to true (the default); in this case the playbooks create a self-signed SSL bundle and set up the MySQL configuration to use it, or distribute a user-provided bundle throughout the Galera nodes.

  • The haproxy-server role now allows tunable parameters to be set. To do so, define a dictionary of options in the config files listing those which have to be changed (defaults for the remaining ones are provided in the template). The “maxconn” global option has also been made tunable.
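
    An illustrative sketch of such a dictionary; the dictionary name haproxy_tuning_params is taken from the related upgrade note, and the key names and values shown are assumptions:

      haproxy_tuning_params:
        maxconn: 8192
        bufsize: 32768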

  • New variables have been added to allow a deployer to customize an ironic systemd unit file to their liking.

  • The task dropping the ironic systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • New variables have been added to allow a deployer to customize a keystone systemd unit file to their liking.

  • The task dropping the keystone systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • The default behaviour of ensure_endpoint in the keystone module has changed to update an existing endpoint, if one exists that matches the service name, type, region and interface. This ensures that no duplicate service entries can exist per region.

  • Removed the dependency on cinder_backends_rbd_inuse in nova.conf when setting the rbd_user and rbd_secret_uuid variables. Cinder delivers all necessary values via RPC when attaching the volume, so those variables are only necessary for ephemeral disks stored in Ceph. These variables must be set on the cinder-volume side under the backend section.
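
    A hedged example of a Ceph-backed entry in the cinder_backends stanza; the backend name and option values are illustrative, and cinder_ceph_client_uuid is assumed to hold the libvirt secret UUID:

      cinder_backends:
        rbd_volumes:
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          rbd_pool: volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
          rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
          volume_backend_name: rbd_volumes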

  • LXC on CentOS is now installed via package from a COPR repository rather than installed from the upstream source.

  • In the lxc_container_create role, the keys preup, postup, predown, and postdown are now supported in the container_networks dict for Ubuntu systems. This allows operators to configure custom scripts to be run by Ubuntu’s ifupdown system when network interface states are changed.

  • The variable lxc_net_manage_iptables has been added. This variable can be overridden by deployers if system-wide iptables rules are already in place or managed by the deployer’s own choice of tooling.

  • The repo server file system structure has been updated to allow for multiple operating systems running multiple architectures to be served at the same time from a single server without impacting pools, venvs, wheel archives, and manifests. The new structure follows the pattern $RELEASE/$OS_TYPE-$ARCH and has been applied to os-releases, venvs, and pools.

  • The dragonflow plugin for neutron is now available. You can set the neutron_plugin_type to ml2.dragonflow to utilize this code path. The dragonflow code path is currently experimental.

  • New variables have been added to allow a deployer to customize a neutron systemd unit file to their liking.

  • The task dropping the neutron systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • New variables have been added to allow a deployer to customize a nova systemd unit file to their liking.

  • The task dropping the nova systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • New variables have been added to allow a deployer to customize a designate systemd unit file to their liking.

  • The task dropping the designate systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • The nova-placement service is now configured by default. nova_placement_service_enabled can be set to False to disable the nova-placement service.

  • The nova-placement api service will run as its own ansible group nova_api_placement.

  • Nova cell_v2 support has been added. The default cell is cell1, which can be overridden with the nova_cell1_name variable. Support for multiple cells is not yet available.

  • Nova may now use an encrypted database connection. This is enabled by setting nova_galera_use_ssl to True.

  • Gnocchi is now used as the default publisher.

  • In the Ocata release, Trove added support for encrypting the rpc communication between the guest DBaaS instances and the control plane. The default values for trove_taskmanager_rpc_encr_key and trove_inst_rpc_key_encr_key should be overridden to specify installation specific values.
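
    For example, installation-specific values could be set in user_variables.yml (the placeholder strings below must be replaced with unique secrets):

      trove_taskmanager_rpc_encr_key: "REPLACE_WITH_UNIQUE_KEY"
      trove_inst_rpc_key_encr_key: "REPLACE_WITH_UNIQUE_KEY"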

  • Added a storage policy so that deployers can override how logs are stored. per_host stores logs in a sub-directory per host. per_program stores logs in a single file per application, which makes troubleshooting easier.

  • New variables have been added to allow a deployer to customize a sahara systemd unit file to their liking.

  • The task dropping the sahara systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • The role now supports SUSE based distributions. Required packages can now be installed using the zypper package manager.

  • The role now supports SUSE based distributions. Required packages can now be installed using the zypper package manager.

  • New variables have been added to allow a deployer to customize a swift systemd unit file to their liking.

  • The task dropping the swift systemd unit files now uses the config_template action plugin allowing deployers access to customize the unit files as they see fit without having to load extra options into the defaults and pollute the generic systemd unit file with jinja2 variables and conditionals.

  • Swift container-sync has been updated to use internal-client. This means a new configuration file internal-client.conf has been added. Configuration can be overridden using the variable swift_internal_client_conf_overrides.

  • Added the new variable tempest_volume_backend_names and updated templates/tempest.conf.j2 to point backend_names at this variable.

  • The deployer can now define an environment variable GROUP_VARS_PATH with the folders of its choice (separated by colons) to define a user-space group_vars folder. These vars will apply, but are (currently) overridden by the OpenStack-Ansible default group vars, by set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in GROUP_VARS_PATH wins).

  • The deployer can now define an environment variable HOST_VARS_PATH with the folders of its choice (separated by colons) to define a user-space host_vars folder. These vars will apply, but are (currently) overridden by the OpenStack-Ansible default host vars, by set facts, and by the user_* variables. If the deployer defines multiple paths, the variables found are merged, and precedence increases from left to right (the last defined in HOST_VARS_PATH wins).

Known Issues

  • There is currently an Ansible bug regarding HOSTNAME. If the host .bashrc holds a var named HOSTNAME, the container where the lxc_container module attaches will inherit this var and potentially set the wrong $HOSTNAME. See the Ansible fix which will be released in Ansible version 2.3.

Upgrade Notes

  • The variables cinder_sigkill_timeout and cinder_restart_wait have been removed. The previous default values have now been set in the template directly and can be adjusted by using the cinder_*_init_overrides variables which use the config_template task to change template defaults.

  • The Designate pools.yaml file can now be generated via the designate_pools_yaml attribute, if desired. This ability is toggled by the designate_use_pools_yaml_attr attribute. In the future this behavior may become default and designate_pools_yaml may become a required variable.

  • The haproxy_bufsize variable has been removed and made a part of the haproxy_tuning_params dictionary.

  • When upgrading nova the cinder catalog_info will change to use the cinderv3 endpoint. Ensure that you have upgraded cinder so that the cinderv3 endpoint exists in the keystone catalog.

  • The variable neutron_dhcp_domain has been renamed to neutron_dns_domain.

  • The ceilometer-api service/container can be removed as part of O->P upgrades. A ceilometer-central container will be created to contain the central ceilometer agents.

  • The EPEL repository is now removed in favor of the RDO repository.

    This is a breaking change for existing CentOS deployments. The yum package manager will report errors when it finds that certain packages it installed from EPEL are no longer available. Deployers may need to rebuild containers or reinstall packages to complete this change.

  • The openstack_tempest_gate.sh script has been removed as it requires the use of the run_tempest.sh script which has been deprecated in Tempest. In order to facilitate the switch, the default for the variable tempest_run has been set to yes, forcing the role to execute tempest by default. This default can be changed by overriding the value to no. The test whitelist may be set through the list variable tempest_test_whitelist.
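
    A minimal sketch of enabling the run with a whitelist; the test pattern shown is only an example:

      tempest_run: yes
      tempest_test_whitelist:
        - "tempest.api.identity.v3"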

  • Gnocchi service endpoint variables were not named correctly. Renamed variables to be consistent with other roles.

Deprecation Notes

  • The cinder_keystone_auth_plugin variable has been deprecated. cinder_keystone_auth_type should be used instead to configure authentication type.

  • The neutron_keystone_auth_plugin variable has been deprecated. neutron_keystone_auth_type should be used instead to configure authentication type.

  • The swift_keystone_auth_plugin variable has been deprecated. swift_keystone_auth_type should be used instead to configure authentication type.

  • The trove_keystone_auth_plugin variable has been deprecated. trove_keystone_auth_type should be used instead to configure authentication type.

  • The aodh_keystone_auth_plugin variable has been deprecated. aodh_keystone_auth_type should be used instead to configure authentication type.

  • The ceilometer_keystone_auth_plugin variable has been deprecated. ceilometer_keystone_auth_type should be used instead to configure authentication type.

  • The gnocchi_keystone_auth_plugin variable has been deprecated. gnocchi_keystone_auth_type should be used instead to configure authentication type.

  • The octavia_keystone_auth_plugin variable has been deprecated. octavia_keystone_auth_type should be used instead to configure authentication type.

  • The variables galera_client_apt_repo_url and galera_client_yum_repo_url are deprecated in favour of the common variable galera_client_repo_url.

  • The update state for the ensure_endpoint method of the keystone module is now deprecated, and will be removed in the Queens cycle. Setting state to present will achieve the same result.

  • Several nova.conf options that were deprecated have been removed from the os_nova role. The following OpenStack-Ansible variables are no longer used and should be removed from any variable override files.

    • nova_dhcp_domain

    • nova_quota_fixed_ips

    • nova_quota_floating_ips

    • nova_quota_security_group_rules

    • nova_quota_security_groups

  • The ceilometer API service is now deprecated. OpenStack-Ansible no longer deploys this service. To make queries against metrics, alarms, and/or events, please use the gnocchi, aodh, and panko APIs, respectively.

  • Removed tempest_volume_backend1_name and tempest_volume_backend2_name since backend1_name and backend2_name were removed from tempest in commit 27905cc (merged 26/04/2016).

Critical Issues

  • A bug that caused the Keystone credential keys to be lost when the playbook is run during a rebuild of the first Keystone container has been fixed. Please see launchpad bug 1667960 for more details.

Security Issues

  • The security role will no longer fix file permissions and ownership based on the contents of the RPM database by default. Deployers can opt in for these changes by setting security_reset_perm_ownership to yes.

  • Nova may now use an encrypted database connection. This is enabled by setting nova_galera_use_ssl to True.

  • The tasks that search for .shosts and shosts.equiv files (STIG ID: RHEL-07-040330) are now skipped by default. The search takes a long time to complete on systems with lots of files and it also causes a significant amount of disk I/O while it runs.

  • The latest version of the RHEL 7 STIG requires that a standard login banner is presented to users when they log into the system (V-71863). The security role now deploys a login banner that is used for console and ssh sessions.

  • The cn_map permissions and ownership adjustments included as part of RHEL-07-040070 and RHEL-07-040080 has been removed. This STIG configuration was removed in the most recent release of the RHEL 7 STIG.

  • The PKI-based authentication checks for RHEL-07-040030, RHEL-07-040040, and RHEL-07-040050 are no longer included in the RHEL 7 STIG. The tasks and documentation for these outdated configurations are removed.

Bug Fixes

  • Metal hosts were being inserted into the lxc_hosts group, even if they had no containers (Bug 1660996). This is now corrected for newly configured hosts. In addition, any hosts that did not belong in lxc_hosts will be removed on the next inventory run or playbook call.

  • The openstack service uri protocol variables were not being used to set the Trove specific uris. This resulted in ‘http’ always being used for the public, admin and internal uris even when ‘https’ was intended.

Other Notes

  • From now on, external repo management (in use for RDO/UCA for example) will be done inside the pip-install role, not in the repo_build role.

15.0.0.0rc1

New Features

  • Deployers can set openstack_host_nf_conntrack_max to control the maximum size of the netfilter connection tracking table. The default of 262144 should be increased if virtual machines will be handling large amounts of concurrent connections.
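
    For example, the table size could be raised for connection-heavy workloads (the value shown is illustrative):

      openstack_host_nf_conntrack_max: 524288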

  • Added support for ironic-OneView drivers. Check the documentation on how to enable them.

  • Neutron SR-IOV can now be optionally deployed and configured. For details about what the service is and what it provides, see the SR-IOV Installation Guide for more information.

  • CentOS7/RHEL support has been added to the os_designate role.

  • The security-hardening playbook hosts target can now be filtered using the security_host_group var.

15.0.0.0b3

New Features

  • It is now possible to customise the location of the configuration file source for the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_config_path variable.

  • It is now possible to customise the location of the scripts used in the All-In-One (AIO) bootstrap process using the bootstrap_host_aio_script_path variable.

  • It is now possible to customise the name of the user_variables.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_variables_filename variable.

  • It is now possible to customise the name of the user_secrets.yml file created by the All-In-One (AIO) bootstrap process using the bootstrap_host_user_secrets_filename variable.

  • The filename of the apt source for the ubuntu cloud archive can now be defined with the variable uca_apt_source_list_filename.

  • The filename of the apt source for the ubuntu cloud archive used in ceph client can now be defined by giving a filename in the uca part of the dict ceph_apt_repos.

  • The filename of the apt source for the ubuntu cloud archive can now be defined with the variable uca_apt_source_list_filename.

  • The filename of the apt/yum source can now be defined with the variable mariadb_repo_filename.

  • The filename of the apt source can now be defined with the variable filename inside the dicts galera_repo and galera_percona_xtrabackup_repo.

  • The filename of the apt source for the ubuntu cloud archive can now be defined with the variable uca_apt_source_list_filename.

  • The ceilometer configuration files other than ceilometer.conf are now retrieved from upstream. You can override the repository from which these are retrieved by setting the ceilometer_git_config_lookup_location variable, which defaults to git.openstack.org.

  • Playbooks for ceph-ansible have been added to facilitate gate testing of the OpenStack-Ansible integration with Ceph clusters, and can be used to integrate the two projects so that OpenStack-Ansible can deploy and consume its own Ceph installation using ceph-ansible. This should be considered an experimental integration until further testing has been completed by deployers and the OpenStack-Ansible gate to fine-tune its stability and completeness. The ceph-install playbook can be activated by adding hosts to the ceph-mon_hosts and ceph-osd_hosts groups in the OSA inventory. A variety of ceph-ansible specific variables will likely need to be configured in user_variables.yml to configure ceph-ansible for your environment. Please reference the ceph-ansible repo for a list of variables the project supports.

  • Deployers can set heat_cinder_backups_enabled to enable or disable the cinder backups feature in heat. If heat has cinder backups enabled, but cinder’s backup service is disabled, newly built stacks will be undeletable.

    The heat_cinder_backups_enabled variable is set to false by default.

  • The rabbitmq_server role now supports disabling listeners that do not use TLS. Deployers can override the rabbitmq_disable_non_tls_listeners variable, setting a value of True if they wish to enable this feature.

  • You can specify the galera_package_arch variable to force a specific architecture when installing percona and qpress packages. By default, this is calculated automatically based on the architecture of the galera_server host. Acceptable values are x86_64 for Ubuntu 16.04 and RHEL 7, and ppc64le for Ubuntu 16.04.

  • Specify the gnocchi_auth_mode var to set the auth_mode for gnocchi. This defaults to basic which has changed from noauth to match upstream. If gnocchi_keystone_auth is true or yes this value will default to keystone.

  • Specify the gnocchi_git_config_lookup_location value to specify the git repository where the gnocchi config files can be retrieved. The api-paste.ini and policy.json files are now retrieved from the specified git repository and are not carried in the os_gnocchi role.

  • If the cinder backup service is enabled with cinder_service_backup_program_enabled: True, then heat will be configured to use the cinder backup service. The heat_cinder_backups_enabled variable will automatically be set to True.

  • It’s now possible to change the behavior of DISALLOW_IFRAME_EMBED by defining the variable horizon_disallow_iframe_embed in the user variables.

  • The new provider network attribute sriov_host_interfaces has been added to support SR-IOV network mappings inside Neutron. The provider network configuration adds the new items network_sriov_mappings and network_sriov_mappings_list to the provider_networks dictionary. Multiple interfaces can be defined as a comma-separated list.
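
    A sketch of a provider network entry using the new attribute is shown below; the bridge, VLAN range, network name and interface names are illustrative, and other provider network keys that an environment may require (for example group_binds) are omitted for brevity:

    provider_networks:
      - network:
          container_bridge: "br-vlan"
          container_type: "veth"
          type: "vlan"
          range: "1000:2000"
          net_name: "physnet1"
          sriov_host_interfaces: "p1p1,p4p1"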

  • CentOS7/RHEL support has been added to the os_ceilometer role.

  • The openstack-ansible-security role is now configured to apply the security configurations from the Red Hat Enterprise Linux 7 STIG to OpenStack-Ansible deployments.

  • RabbitMQ Server can now be installed by one of several methods: from a deb file (the default), from a standard repository package, or from an external repository. Current behavior is unchanged. Please define rabbitmq_install_method: distro to use packages provided by your distribution, or rabbitmq_install_method: external_repo to use packages stored in an external repo. In the case external_repo is used, the process will install RabbitMQ from the packages hosted by packagecloud.io, as recommended by RabbitMQ.

  • The Red Hat Enterprise Linux (RHEL) 7 STIG content is now deployed by default. Deployers can continue using the RHEL 6 STIG content by setting the following Ansible variable:

    stig_version: rhel6
    

Upgrade Notes

  • The global override cinder_nfs_client is replaced in favor of fully supporting multi-backend configuration via the cinder_backends stanza.

  • The latest stable release of Ceph, Jewel, is now used as the default client version since Hammer was scheduled for EOL in November 2016.

  • The gnocchi_archive_policies and gnocchi_archive_policy_rules variables never had full support in the role so were ineffective at the intended purpose. The task references to them have been removed and the library to perform gnocchi operations has also been removed. This eliminates the need for the gnocchi client to be installed outside the virtual environment as well.

  • Deployers should review the new RHEL 7 STIG variables in defaults/main.yml to provide custom configuration for the Ansible tasks.

Deprecation Notes

  • The vars to set source_sample_interval for the os_ceilometer role are deprecated and will be removed in the Queens cycle. To override these variables after Queens, utilize the ceilometer_pipeline_yaml_overrides file.

  • The gnocchi_keystone_auth variable is deprecated and will be removed in the Queens cycle. Setting gnocchi_auth_mode to keystone will achieve the same result.

  • The Red Hat Enterprise Linux 6 STIG content has been deprecated. The tasks and variables for the RHEL 6 STIG will be removed in a future release.

Bug Fixes

  • Properly distribute client keys to nova hypervisors when extra ceph clusters are being deployed.

  • Properly remove temporary files used to transfer ceph client keys from the deploy host and hypervisors.

  • Systems using systemd (like Ubuntu Xenial) were incorrectly limited to a low number of open files. This was causing issues when restarting galera. A deployer can still define the maximum number of open files with the variable galera_file_limits (defaults to 65536).

  • The percona repository stayed in place even after a change of the variable use_percona_upstream. From now on, the percona repository will not be present unless the deployer sets use_percona_upstream. This also fixes a bug where this apt repository remained present after an upgrade from Mitaka.

Other Notes

  • XtraBackup is currently on version 2.4.5 for ppc64le architecture when pulling deb packages from the repos.

  • XtraBackup is currently on version 2.4.5 for amd64 architecture when pulling rpm/deb packages from the repos. To pull the latest available 2.4 branch version from the yum/apt repository set the use_percona_upstream variable to True. The default behavior using deb packages is unchanged.

15.0.0.0b2

Prelude

Functionality to support Ubuntu Trusty (14.04) has been removed from the code base.

New Features

  • A new switch pip_install_build_packages is introduced to allow toggling compiler and development library installation. The legacy behavior of installing the compiler and development libraries is maintained as the switch is enabled by default.

  • Neutron DHCP options have been set to allow a DHCP server running dnsmasq to coexist with other DHCP servers within the same network. This works by instructing dnsmasq to ignore any clients which are not specified in dhcp-host files.

  • Neutron DHCP options have been set to provide for logging which makes debugging DHCP and connectivity issues easier by default.

  • The galera_client_package_install option can now be specified to handle whether packages are installed as a result of the openstack-ansible-galera_client role running. This will default to true, but can be set to false to prevent package installs. This is useful when deploying the my.cnf client configuration file on hosts that already have Galera installed.

  • Set the glance_swift_store_auth_insecure variable to override the swift_store_auth_insecure value in /etc/glance/glance-api.conf. Set this value when using an external Swift store that does not have the same insecure setting as the local Keystone.

  • Add support for neutron as an enabled_network_interface.

  • The ironic_neutron_provisioning_network_name and ironic_neutron_cleaning_network_name variables can be set to the names of the neutron networks to use for provisioning and cleaning. The ansible tasks will determine the appropriate UUID for each network. Alternatively, ironic_neutron_provisioning_network_uuid or ironic_neutron_cleaning_network can be used to directly specify the UUID of the networks. If both ironic_neutron_provisioning_network_name and ironic_neutron_provisioning_network_uuid are specified, the specified UUID will be used. If only the provisioning network is specified, the cleaning network will default to the same network.
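
    For example, in user_variables.yml (the network names are illustrative):

    ironic_neutron_provisioning_network_name: "ironic-provision"
    ironic_neutron_cleaning_network_name: "ironic-cleaning"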

  • The variable lxc_cache_environment has been added. This dictionary can be overridden by deployers to set HTTP proxy environment variables that will be applied to all lxc container download tasks.
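
    A minimal sketch of such an override in user_variables.yml, assuming a proxy at proxy.example.com; the keys are ordinary proxy environment variable names and are illustrative:

    lxc_cache_environment:
      http_proxy: "http://proxy.example.com:3128"
      https_proxy: "http://proxy.example.com:3128"
      no_proxy: "localhost,127.0.0.1"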

  • The copy of the /etc/openstack-release file is now optional. To disable the copy of the file, set openstack_distrib_file to no.

  • The location of the /etc/openstack-release file placement can now be changed. Set the variable openstack_distrib_file_path to place it in a different path.

  • CentOS7/RHEL support has been added to the os_aodh role.

  • CentOS7/RHEL support has been added to the os_gnocchi role.

  • CentOS7/RHEL support has been added to the os_heat role.

  • CentOS7/RHEL support has been added to the os_horizon role.

  • When using the pypy python interpreter you can configure the garbage collection (gc) settings for pypy. Set the minimum GC value using the swift_pypy_gc_min variable. GC will only happen when the memory size is above this value. Set the maximum GC value using the swift_pypy_gc_max variable. This is the maximum memory heap size for pypy. Neither variable is defined by default, and they will only be used if the values are defined and swift_pypy_enabled is set to True.
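
    An illustrative sketch only; the size values and their exact format are assumptions and should be checked against the pypy documentation and the role defaults:

    swift_pypy_enabled: True
    swift_pypy_gc_min: "1GB"
    swift_pypy_gc_max: "10GB"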

  • Swift tempauth users can now be specified. The swift_tempauth_users variable can be defined as a list of tempauth users and their permissions. You will still need to specify the appropriate Swift middleware using the swift_middleware_list variable in order to utilise tempauth.

  • Swift versioned_writes middleware is added to the pipeline by default. Additionally, the allow_versioned_writes setting in the middleware configuration is set to True. This follows the Swift defaults, and enables the use of the X-History-Location metadata header.

Upgrade Notes

  • The galera_client role now installs MariaDB client version 10.1.

  • For systems using the APT package manager, the sources file for the MariaDB repo now has a consistent name, ‘MariaDB.list’.

  • The galera_server role now installs MariaDB server version 10.1.

  • For systems using the APT package manager, the sources files for the MariaDB and Percona repos now have consistent names, ‘MariaDB.list’ and ‘Percona.list’.

  • The galera_mariadb_apt_server_package and galera_mariadb_yum_server_package variables have been renamed to galera_mariadb_server_package.

  • The galera_apt_repo_url and galera_yum_repo_url variables have been renamed to galera_repo_url.

  • The variables used to produce the /etc/openstack-release file have been changed in order to improve consistency in the name spacing according to their purpose.

    openstack_code_name -> openstack_distrib_code_name
    openstack_release -> openstack_distrib_release

    Note that the value for openstack_distrib_release will be taken from the variable openstack_release if it is set.

  • The variable proxy_env_url is now used by the apt-cacher-ng jinja2 template to set up an HTTP/HTTPS proxy if needed.

  • Functionality to support Ubuntu Trusty (14.04) has been removed from the code base.

  • The variable gnocchi_required_pip_packages was incorrectly named and has been renamed to gnocchi_requires_pip_packages to match the standard across all roles.

  • The cinder project removed the shred value for the volume_clear option. The default for the os_cinder OpenStack-Ansible role has changed to zero.

Bug Fixes

  • The ‘container_cidr’ key has been restored to openstack_inventory.json. The fix to remove deleted global override keys mistakenly deleted the ‘container_cidr’ key as well. This key was used by downstream consumers, and cannot be reconstructed with other information inside the inventory file. Regression tests were also added.

  • Errors relating to groups containing both hosts and other groups as children now raise a more descriptive error. See the inventory documentation for more details.

  • The apt-cacher-ng daemon does not use the proxy server specified in environment variables. The proxy server specified in the proxy_env_url variable is now set inside the apt-cacher-ng configuration file.

  • Setup for the PowerVM driver was not properly configuring the system to support RMC configuration for client instances. This fix introduces an interface template for PowerVM that properly supports mixed IPV4/IPV6 deploys and adds documentation for PowerVM RMC. For more information see bug 1643988.

15.0.0.0b1

New Features

  • Experimental support has been added to allow the deployment of the OpenStack Designate service when hosts are present in the host group dnsaas_hosts.

  • Support has been added for the horizon designate-ui dashboard. The dashboard will be automatically enabled if any hosts are in the dnsaas_hosts inventory group.

  • The os_horizon role now has support for the horizon designate-ui dashboard. The dashboard may be enabled by setting horizon_enable_designate_ui to True in /etc/openstack_deploy/user_variables.yml.

  • Support has been added for the horizon trove-ui dashboard. The dashboard will be automatically enabled if any hosts are defined in the trove-infra_hosts inventory group.

  • Deployers can now define the override cinder_rpc_executor_thread_pool_size, which defaults to 64.

  • Deployers can now define the override cinder_rpc_response_timeout, which defaults to 60.

  • Experimental support has been added to allow the deployment of the OpenStack trove service when hosts are present in the host group trove-infra_hosts.

  • Support has been added to allow the deployment of the OpenStack barbican service when hosts are present in the host group key-manager_hosts.

  • The installation of chrony is still enabled by default, but it is now controlled by the security_enable_chrony variable.

  • LXC containers will now generate a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set to true. This feature was implemented to resolve issues with dynamic mac addresses in containers generally experienced at scale with network intensive services.

  • The os-designate role now supports Ubuntu 16.04 and SystemD.

  • Variable ceph_extra_confs has been expanded to support retrieving additional ceph.conf and keyrings from multiple ceph clusters automatically.

  • Additional libvirt ceph client secrets can be defined to support attaching volumes from different ceph clusters.

  • Additional volume-types can be created by defining a list named extra_volume_types in the desired backend of the variable(s) cinder_backends
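
    As a sketch, the list is added inside the relevant backend definition in cinder_backends; the backend shown is abbreviated and the type names are illustrative:

    cinder_backends:
      rbd:
        volume_backend_name: rbd
        extra_volume_types:
          - low-iops
          - high-iops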

  • Container boot ordering has been implemented on container types where it would be beneficial. This change ensures that stateful systems running within a container are started ahead of non-stateful systems. While this change has no impact on a running deployment, it will assist with faster recovery should any node hosting containers go down or simply need to be restarted.

  • A new task has been added to the “os-lxc-container-setup.yml” common-tasks file. This new task will allow for additional configurations to be added without having to restart the container. This change is helpful in cases where non-impacting config needs to be added to or updated on a running container.

  • Add the get_networks command to the neutron library. This will return network information for all networks, and fail if the specified net_name network is not present. If no net_name is specified, network information for all networks will be returned without performing a check on an existing net_name network.

  • The --check parameter for dynamic_inventory.py will now raise warnings if there are any groups defined in the user configuration that are not also found in the environment definition.

  • When using a copy-on-write backing store, the lxc_container_base_name can now include a prefix defined by lxc_container_base_name_prefix.

  • IPv6 support has been added for the LXC bridge network. This can be configured using lxc_net6_address, lxc_net6_netmask, and lxc_net6_nat.

  • A new variable, tempest_flavors, has been added to the os_tempest role allowing users to define nova flavors to be used during tempest testing.

  • CentOS7/RHEL support has been added to the os_neutron role.

  • CentOS7/RHEL support has been added to the os_nova role.

  • CentOS7/RHEL support has been added to the os_swift role.

  • The os_barbican role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting barbican_package_state to present.

  • The os_designate role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting designate_package_state to present.

  • The PATH environment variable that is configured on the remote system can now be set using the openstack_host_environment_path list variable.

  • Deployers can now define the variable cinder_qos_specs to create qos specs and assign those specs to desired cinder volume types.
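
    The exact schema is defined by the os_cinder role; as an illustrative, assumed shape it might resemble the following:

    cinder_qos_specs:
      - name: high-iops
        options:
          consumer: front-end
          read_iops_sec: 2000
        cinder_volume_types:
          - high-iops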

  • The swift_rsync_reverse_lookup option has been added. This setting will handle whether rsync performs reverse lookups on client IP addresses, and will default to False. We recommend leaving this option at False, unless DNS or host entries exist for each swift host’s replication address.

  • Adds support for the horizon trove-ui dashboard. The dashboard will be automatically enabled if any trove hosts are defined.

  • The Trove dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_trove_ui: True
    
  • The os_barbican role now supports deployment on Ubuntu 16.04 using SystemD.

Known Issues

  • The variables haproxy_keepalived_(internal|external)_cidr now have a default set to 169.254.(2|1).1/24. This is to prevent Ansible undefined variable warnings. Deployers must set values for these variables for a working haproxy with keepalived environment when using more than one haproxy node.

Upgrade Notes

  • The nova-cert service has been deprecated, is marked for removal in the Ocata release, and will no longer be deployed by the os_nova role.

  • Installation of designate and its dependent pip packages will now only occur within a Python virtual environment. The designate_venv_enabled, designate_venv_bin, designate_venv_etc_dir and designate_non_venv_etc_dir variables have been removed.

  • The os_barbican role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option barbican_package_state should be set to present.

  • The os_designate role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option designate_package_state should be set to present.

  • The security role will accept the currently installed version of a package rather than attempting to update it. This reduces unexpected changes on the system from subsequent runs of the security role. Deployers can still set security_package_state to latest to ensure that all packages installed by the security role are up to date.

  • The glance library has been removed from OpenStack-Ansible’s plugins. Upstream Ansible modules for managing OpenStack image resources should be used instead.

  • The following secrets are no longer used by OpenStack-Ansible and can be removed from user_secrets.yml:

    • container_openstack_password

    • keystone_auth_admin_token

    • cinder_v2_service_password

    • nova_ec2_service_password

    • nova_v3_service_password

    • nova_v21_service_password

    • nova_s3_service_password

    • swift_container_mysql_password

  • The variables tempest_requirements_git_repo and tempest_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables horizon_requirements_git_repo and horizon_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables swift_requirements_git_repo and swift_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables ironic_requirements_git_repo and ironic_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables neutron_requirements_git_repo and neutron_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables heat_requirements_git_repo and heat_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables magnum_requirements_git_repo and magnum_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables sahara_requirements_git_repo and sahara_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables cinder_requirements_git_repo and cinder_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables trove_requirements_git_repo and trove_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables gnocchi_requirements_git_repo and gnocchi_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables glance_requirements_git_repo and glance_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables keystone_requirements_git_repo and keystone_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables aodh_requirements_git_repo and aodh_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables barbican_requirements_git_repo and barbican_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables nova_requirements_git_repo and nova_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables nova_lxd_requirements_git_repo and nova_lxd_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables rally_requirements_git_repo and rally_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The variables ceilometer_requirements_git_repo and ceilometer_requirements_git_install_branch have been removed in favour of using the URL/path to the upper-constraints file using the variable pip_install_upper_constraints instead.

  • The default behaviour of rsync, to perform reverse lookups, has been changed to False. This can be set to True by setting the swift_rsync_reverse_lookup variable to True.

Bug Fixes

  • When a task fails while executing a playbook, the default behaviour for Ansible is to fail for that host without executing any notifiers. This can result in configuration changes being executed, but services not being restarted. OpenStack-Ansible now sets ANSIBLE_FORCE_HANDLERS to True by default to ensure that all notified handlers attempt to execute before stopping the playbook execution.

  • LXC containers will now have the ability to use a fixed mac address on all network interfaces when the option lxc_container_fixed_mac is set to true. This change will assist in resolving a long-standing issue where network intensive services, such as neutron and rabbitmq, can enter a confused state for long periods of time and require rolling restarts or internal system resets to recover.

  • SSLv3 is now disabled in the haproxy daemon configuration by default.

  • Setting the haproxy_bind list on a service is now used as an override to the other VIPs defined in the environment. Previously it was being treated as an append to the other VIPs so there was no path to override the VIP binds for a service. For example, haproxy_bind could be used to bind a service to the internal VIP only.

  • The haproxy daemon is now able to bind to any port on CentOS 7. The haproxy_connect_any SELinux boolean is now set to on.

  • The NovaLink apt key was provisioned from a URL using the ‘ftp’ protocol, which caused the apt_key module to fail to retrieve the NovaLink gpg public key file. The protocol of the URL has therefore been changed to ‘http’. For more information, see bug 1637348.

14.0.0.0rc2

New Features

  • An export flag has been added to the inventory-manage.py script. This flag allows exporting of host and network information from an OpenStack-Ansible inventory for import into another system, or an alternate view of the existing data. See the developer docs for more details.

  • A new debug flag has been added to dynamic_inventory.py. This should make it easier to understand what’s happening with the inventory script, and provide a way to gather output for more detailed bug reports. See the developer docs for more details.

  • CentOS7/RHEL support has been added to the os_cinder role.

Deprecation Notes

  • The main function in dynamic_inventory.py now takes named arguments instead of a dictionary. This is to support future code changes that will move construction logic into separate files.

14.0.0.0rc1

New Features

  • The os_nova role can now deploy a custom /etc/libvirt/qemu.conf file by defining qemu_conf_dict.

  • The role now enables auditing during early boot to comply with the requirements in V-38438. By default, the GRUB configuration variables in /etc/default/grub.d/ and the active grub.cfg will both be updated.

    Deployers can opt-out of the change entirely by setting a variable:

    security_enable_audit_during_boot: no
    

    Deployers may opt-in for the change without automatically updating the active grub.cfg file by setting the following Ansible variables:

    security_enable_audit_during_boot: yes
    security_enable_grub_update: no
    
  • In order to reduce the time taken for fact gathering, the default subset gathered has been reduced to a smaller set than the Ansible default. This may be changed by the deployer by setting the ANSIBLE_GATHER_SUBSET variable in the bash environment prior to executing any ansible commands.

  • Although the STIG requires martian packets to be logged, the logging is now disabled by default. The logs can quickly fill up a syslog server or make a physical console unusable.

    Deployers that need this logging enabled will need to set the following Ansible variable:

    security_sysctl_enable_martian_logging: yes
    
  • Yaml files used for ceilometer configuration will now allow a deployer to override a given list. If an override is provided that matches an already defined list in one of the ceilometer default yaml files, the entire list will be replaced by the provided override. Previously, a nested list of lists within the default ceilometer configuration files would be extended should a deployer provide an override matching an existing pipeline. Extending the defaults was very unpredictable and had a high probability of causing undesirable outcomes.

  • New variable ceph_extra_confs may be defined to support deployment of extra Ceph config files. This is useful for cinder deployments that utilize multiple Ceph clusters as cinder backends.

  • The openstack-ansible-galera_server role will now prevent deployers from changing the galera_cluster_name variable on clusters that already have a value set in a running galera cluster. You can set the new galera_force_change_cluster_name variable to True to force the galera_cluster_name variable to be changed. We recommend setting this by running the galera-install.yml playbook with -e galera_force_change_cluster_name=True, to avoid changing the galera_cluster_name variable unintentionally. Use with caution, changing the galera_cluster_name value can cause your cluster to fail, as the nodes won’t join if restarted sequentially.

  • The rabbitmq_server role now supports configuring HiPE compilation of the RabbitMQ server Erlang code. This configuration option may improve server performance for some workloads and hardware. Deployers can override the rabbitmq_hipe_compile variable, setting a value of True if they wish to enable this feature.

  • The config_template action plugin now has a new option to toggle list extension for JSON or YAML formats. The new option is list_extend and is a boolean. The default is True which maintains the existing API.

  • The inventory script will now dynamically populate the lxc_hosts group based on which machines have container affinities defined. This group is not allowed in user-defined configuration.

  • Introduced option to deploy Keystone under Uwsgi. A new variable keystone_mod_wsgi_enabled is introduced to toggle this behavior. The default is true which continues to deploy with mod_wsgi for Apache. The ports used by Uwsgi for socket and http connection for both public and admin Keystone services are configurable (see also the keystone_uwsgi_ports dictionary variable). Other Uwsgi configuration can be overridden by using the keystone_uwsgi_ini_overrides variable as documented under “Overriding OpenStack configuration defaults” in the OpenStack-Ansible Install Guide. Federation features should be considered _experimental_ with this configuration at this time.

  • Introduced option to deploy Keystone behind Nginx. A new variable keystone_apache_enabled is introduced to toggle this behavior. The default is true which continues to deploy with Apache. Additional configuration can be delivered to Nginx through the use of the keystone_nginx_extra_conf list variable. Federation features are not supported with this configuration at this time. Use of this option requires keystone_mod_wsgi_enabled to be set to false which will deploy Keystone under Uwsgi.

  • CentOS7/RHEL support has been added to the os_glance role.

  • The rabbitmq_server role now supports deployer override of the RabbitMQ policies applied to the cluster. Deployers can override the rabbitmq_policies variable, providing a list of desired policies.

  • The openstack-ansible-os_swift role will now prevent deployers from changing the swift_hash_path_prefix and swift_hash_path_suffix variables on clusters that already have a value set in /etc/swift/swift.conf. You can set the new swift_force_change_hashes variable to True to force the swift_hash_path_ variables to be changed. We recommend setting this by running the os-swift.yml playbook with -e swift_force_change_hashes=True, to avoid changing the swift_hash_path_ variables unintentionally. Use with caution, changing the swift_hash_path_ values causes end-user impact.

  • Change the port for devices in the ring by adjusting the port value for services, hosts, or devices. This will not involve a rebalance of the ring.

  • Changing the port for a device, or group of devices, carries a brief period of downtime to the swift storage services for those devices. The devices will be unavailable during period between when the storage service restarts after the port update, and the ring updates to match the new port.

Known Issues

  • Deployments on ppc64le are limited to Ubuntu 16.04 for the Newton release of OpenStack-Ansible.

Upgrade Notes

  • In order to reduce the time taken for fact gathering, the default subset gathered has been reduced to a smaller set than the Ansible default. This may be changed by the deployer by setting the ANSIBLE_GATHER_SUBSET variable in the bash environment prior to executing any ansible commands.

  • The glance_apt_packages variable has been renamed to glance_distro_packages so that it applies to multiple operating systems.

  • The variable used to store the mysql password used by the ironic service account has been changed. The following variable:

    ironic_galera_password: secrete
    

    has been changed to:

    ironic_container_mysql_password: secrete
    
  • The variable swift_apt_packages has been renamed to swift_distro_packages.

  • All of the discretionary access control (DAC) auditing is now disabled by default. This reduces the amount of logs generated during deployments and minor upgrades. The following variables are now set to no:

    security_audit_DAC_chmod: no
    security_audit_DAC_chown: no
    security_audit_DAC_lchown: no
    security_audit_DAC_fchmod: no
    security_audit_DAC_fchmodat: no
    security_audit_DAC_fchown: no
    security_audit_DAC_fchownat: no
    security_audit_DAC_fremovexattr: no
    security_audit_DAC_lremovexattr: no
    security_audit_DAC_fsetxattr: no
    security_audit_DAC_lsetxattr: no
    security_audit_DAC_setxattr: no
    
  • The ceilometer-api init service is removed since ceilometer-api is deployed as an apache mod_wsgi service.

  • New overrides are provided to allow for better customization around logfile retention and rate limiting for UDP/TCP sockets:

    • rsyslog_server_logrotation_window defaults to 14 days

    • rsyslog_server_ratelimit_interval defaults to 0 seconds

    • rsyslog_server_ratelimit_burst defaults to 10000
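
    To change any of these values, the overrides can be set in user_variables.yml; the values shown below are simply the stated defaults:

    rsyslog_server_logrotation_window: 14
    rsyslog_server_ratelimit_interval: 0
    rsyslog_server_ratelimit_burst: 10000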

  • The rsyslog.conf is now using v7+ style configuration settings.

Bug Fixes

  • Add architecture-specific locations for percona-xtrabackup and qpress, with alternate locations provided for ppc64el due to package unavailability from the current provider.

  • The pip_install_options variable is now honored during repo building. This variable allows deployers to specify trusted CA certificates by setting the variable to “--cert /etc/ssl/certs/ca-certificates.crt”.
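
    For example, using the certificate bundle path mentioned above:

    pip_install_options: "--cert /etc/ssl/certs/ca-certificates.crt"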

  • The auditd rules for auditing V-38568 (filesystem mounts) were incorrectly labeled in the auditd logs with the key of export-V-38568. They are now correctly logged with the key filesystem_mount-V-38568.

  • The repo_build play now correctly evaluates environment variables configured in /etc/environment. This enables deployments in an environment with http proxies.

14.0.0.0b3

New Features

  • Added new parameter cirros_img_disk_format to support disk formats other than qcow2.

  • Adds support for the horizon ironic-ui dashboard. The dashboard will be automatically enabled if any ironic hosts are defined.

  • Adds support for the horizon magnum-ui dashboard. The dashboard will be automatically enabled if any magnum hosts are defined.

  • The os_horizon role now has support for the horizon manila-ui dashboard. The dashboard may be enabled by setting horizon_enable_manila_ui to True in /etc/openstack_deploy/user_variables.yml.

  • The horizon_keystone_admin_roles variable is added to support the OPENSTACK_KEYSTONE_ADMIN_ROLES list in the horizon_local_settings.py file.

  • Experimental support has been added to allow the deployment of the OpenStack Magnum service when hosts are present in the host group magnum-infra_hosts.

  • The os_nova role can now deploy the nova-lxd hypervisor. This can be achieved by setting nova_virt_type to lxd on a per-host basis in openstack_user_config.yml or on a global basis in user_variables.yml.

  • A task was added to disable secure ICMP redirects per the requirements in V-38526. This change can cause problems in some environments, so it is disabled by default. Deployers can enable the task (which disables secure ICMP redirects) by setting security_disable_icmpv4_redirects_secure to yes.

  • A new task was added to disable ICMPv6 redirects per the requirements in V-38548. However, since this change can cause problems in running OpenStack environments, it is disabled by default. Deployers who wish to enable this task (and disable ICMPv6 redirects) should set security_disable_icmpv6_redirects to yes.

  • AIDE is configured to skip the entire /var directory when it does the database initialization and when it performs checks. This reduces disk I/O and allows these jobs to complete faster.

    This also allows the initialization to become a blocking process and Ansible will wait for the initialization to complete prior to running the next task.

  • The container cache preparation process now allows copy-on-write to be set as the lxc_container_backing_method when the lxc_container_backing_store is set to lvm. When this is set, a base container will be created using a name of the form <linux-distribution>-<distribution-release>-<host-cpu-architecture>. The container will be stopped as it is not used for anything except to be a backing store for all other containers, which will be based on a snapshot of the base container.

  • When using copy-on-write backing stores for containers, the base container name may be set using the variable lxc_container_base_name, which defaults to <linux-distribution>-<distribution-release>-<host-cpu-architecture>.

  • In a greenfield deployment, containers will now bind mount all logs to the physical host machine in the /openstack/log/{{ inventory_hostname }} location. This change will ensure containers using a block-backed file system (lvm, zfs, btrfs) do not run into issues with full disks due to logging. If this feature is not needed or desired it can be disabled by setting the option default_bind_mount_logs to false.

  • Added new variable tempest_img_name.

  • Added new variable tempest_img_url. This variable replaces cirros_tgz_url and cirros_img_url.

  • Added new variable tempest_image_file. This variable replaces the hard-coded value for the img_file setting in tempest.conf.j2. This will allow users to specify images other than cirros.

  • Added new variable tempest_img_disk_format. This variable replaces cirros_img_disk_format.

  • The dynamic_inventory.py file now takes a new argument, --check, which will run the inventory build without writing any files to the file system. This is useful for checking to make sure your configuration does not contain known errors prior to running Ansible commands.

  • The lxc-container-create role now consumes the variable lxc_container_bind_mounts which should contain a list of bind mounts to apply to a newly created container. The appropriate host and container directory will be created and the configuration applied to the container config. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.

  • The container creation process now allows copy-on-write to be set as the lxc_container_backing_method when the lxc_container_backing_store is set to lvm. When this is set it will use a snapshot of the base container to build the containers.

  • If there are swift hosts in the environment, then the value for cinder_service_backup_program_enabled will automatically be set to True. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.

  • If there are swift hosts in the environment, then the value for glance_default_store will automatically be set to swift. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.

  • The repo build process is now able to synchronize a git cache from the deployment node to the repo server. The git cache path on the deployment node is set using the variable repo_build_git_cache. If the deployment node hosts the repo container, then the folder will be symlinked into the bind mount for the repo container. If the deployment node does not host the repo container, then the contents of the folder will be synchronised into the repo container.

  • Added a boolean var haproxy_service_enabled to the haproxy_service_configs dict to support toggling haproxy endpoints on/off.

  • The repo server will now be used as a package manager cache.

  • The lxc_hosts role can now make use of a primary and secondary gpg keyserver for gpg validation of the downloaded cache. Setting the servers to use can be done using the lxc_image_cache_primary_keyserver and lxc_image_cache_secondary_keyserver variables.
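
    A sketch with example keyserver addresses; substitute servers appropriate for the deployment:

    lxc_image_cache_primary_keyserver: "hkp://keyserver.ubuntu.com:80"
    lxc_image_cache_secondary_keyserver: "hkp://pgp.mit.edu:80"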

  • The LXC container creation process now has a configurable delay for the task which waits for the container to start. The variable lxc_container_ssh_delay can be set to change the default delay of five seconds.

  • The repo build process is now able to support building and synchronizing artifacts for multiple CPU architectures. Build artifacts are now tagged with the appropriate CPU architecture by default, and synchronization of build artifacts from secondary, architecture-specific repo servers back to the primary repo server is supported.

  • The repo install process is now able to support building and synchronizing artifacts for multiple CPU architectures. To support multiple architectures, one or more repo servers must be created for each CPU architecture in the deployment. When multiple CPU architectures are detected among the repo servers, the repo-discovery process will automatically assign a repo master to perform the build process for each architecture.

  • The Project Calico Neutron networking plugin is now integrated into the deployment. For setup instructions please see os_neutron role documentation.

  • The Project Calico Neutron networking plugin is now integrated into the os_neutron role. This can be activated using the instructions located in the role documentation.

  • The LXC container creation and modification process now supports online network additions. This ensures a container remains online when additional networks are added to a system.

  • An opportunistic Ansible execution strategy has been implemented. This allows the Ansible linear strategy to skip tasks with conditionals faster by never queuing the task when the conditional is evaluated to be false.

  • The Ansible SSH plugin has been modified to support running commands within containers without having to directly ssh into them. The change will detect presence of a container. If a container is found the physical host will be used as the SSH target and commands will be run directly. This will improve system reliability and speed while also opening up the possibility for SSH to be disabled from within the container itself.

  • CentOS7/RHEL support has been added to the os_keystone role.

  • The os_magnum role now supports deployment on Ubuntu 16.04 using systemd.

  • The galera_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting galera_client_package_state to present.

  • The ceph_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ceph_client_package_state to present.

  • The os_ironic role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ironic_package_state to present.

  • The os_nova role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting nova_package_state to present.

  • The memcached_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting memcached_package_state to present.

  • The os_heat role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting heat_package_state to present.

  • The rsyslog_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rsyslog_server_package_state to present.

  • The pip_install role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting pip_install_package_state to present.

  • The repo_build role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting repo_build_package_state to present.

  • The os_rally role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rally_package_state to present.

  • The os_glance role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting glance_package_state to present.

  • The security role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting security_package_state to present.

  • A new global option to control all package install states has been implemented. The default action for all distribution package installations is to ensure that the latest package is installed. This may be changed to only verify if the package is present by setting package_state to present.

  • The os_keystone role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting keystone_package_state to present.

  • The os_cinder role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting cinder_package_state to present.

  • The os_gnocchi role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting gnocchi_package_state to present.

  • The os_magnum role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting magnum_package_state to present.

  • The rsyslog_client role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rsyslog_client_package_state to present.

  • The os_sahara role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting sahara_package_state to present.

  • The repo_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting repo_server_package_state to present.

  • The haproxy_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting haproxy_package_state to present.

  • The os_aodh role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting aodh_package_state to present.

  • The openstack_hosts role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting openstack_hosts_package_state to present.

  • The galera_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting galera_server_package_state to present.

  • The rabbitmq_server role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting rabbitmq_package_state to present.

  • The lxc_hosts role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting lxc_hosts_package_state to present.

  • The os_ceilometer role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting ceilometer_package_state to present.

  • The os_swift role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting swift_package_state to present.

  • The os_neutron role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting neutron_package_state to present.

  • The os_horizon role now supports the ability to configure whether apt/yum tasks install the latest available package, or just ensure that the package is present. The default action is to ensure that the latest package is present. The action taken may be changed to only ensure that the package is present by setting horizon_package_state to present.

  • Added a playbook for deploying Rally in the utility containers.

  • Our general config options are now stored in an “/usr/local/bin/openstack-ansible.rc” file and will be sourced when the “openstack-ansible” wrapper is invoked. The RC file will read in BASH environment variables, and should any Ansible option be set that overlaps with our defaults, the provided value will be used.

  • Experimental support has been added to allow the deployment of the Sahara data-processing service. To deploy sahara, hosts should be present in the host group sahara-infra_hosts.

  • The Sahara dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_sahara_ui: True
    
  • The repo build process now selectively clones git repositories based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the git repo for the service will not be cloned. This behaviour can be optionally changed to force all git repositories to be cloned by setting repo_build_git_selective to no.

  • The repo build process now selectively builds python packages based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the list of python packages for the service will not be built. This behaviour can be optionally changed to force all python packages to be built by setting repo_build_wheel_selective to no.

  • A new variable is supported in the neutron_services dictionary called service_conf_path. This variable enables services to deploy their config templates to paths outside of /etc/neutron by specifying a directory using the new variable.

  • The os_swift role now allows the permissions for the log files created by the swift account, container and object servers to be set. The variable is swift_syslog_log_perms and is set to 0644 by default.

  • Support for the deployment of Unbound caching DNS resolvers has been added as an optional replacement for /etc/hosts management across all hosts in the environment. To enable the Unbound DNS containers, add unbound_hosts entries to the environment.
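
    Following the usual openstack_user_config.yml conventions for host groups, an illustrative entry (the host name and address are placeholders) might look like:

    unbound_hosts:
      infra1:
        ip: 172.29.236.11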

  • The repo_build role now provides the ability to override the upper-constraints applied which are sourced from OpenStack and from the global-requirements-pins.txt file. The variable repo_build_upper_constraints_overrides can be populated with a list of upper constraints. This list will take the highest precedence in the constraints process, with the exception of the pins set in the git source SHAs.
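
    As a sketch, a deployer could pin a specific package version; the package and version shown are illustrative only:

    repo_build_upper_constraints_overrides:
      - "pycrypto==2.6.1"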

Upgrade Notes

  • During an upgrade the option default_bind_mount_logs will be set to false. This will ensure that an existing deployment is not adversely impacted by container restarts. If a deployer wishes to enable the default bind mount for /var/log, they can do so at a later date.

  • If there are swift hosts in the environment, then the value for cinder_service_backup_program_enabled will automatically be set to True. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.

  • If there are swift hosts in the environment, then the value for glance_default_store will automatically be set to swift. This negates the need to set this variable in user_variables.yml, but the value may still be overridden at the deployer discretion.
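
    For example, a deployer with swift hosts who nevertheless prefers the file store for glance, and who also wishes to disable the cinder backup service described in the previous note, could override the computed defaults in user_variables.yml (values shown are illustrative):

    glance_default_store: file
    cinder_service_backup_program_enabled: False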

  • The variable security_sysctl_enable_tcp_syncookies has replaced security_sysctl_tcp_syncookies and it is now a boolean instead of an integer. It is still enabled by default, but deployers can disable TCP syncookies by setting the following Ansible variable:

    security_sysctl_enable_tcp_syncookies: no
    
  • Haproxy has a new backend to support using the repo server nodes as a package manager cache. The new backend is called “repo_cache” and uses port “3142” and a single active node. All other nodes within the pool are backups and will be promoted if the active node goes down. Default ACLs have been created to lock down the port’s availability to only internal networks originating from an RFC1918 address.

  • LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.

  • During upgrades, container and service restarts for the mariadb/galera cluster were being triggered multiple times and causing the cluster to become unstable and often unrecoverable. This situation has been improved immensely, and we now have tight control such that restarts of the galera containers only need to happen once, and are done so in a controlled, predictable and repeatable way.

  • Database migration tasks have been added for the dynamic routing neutron plugin.

  • Installation of magnum and its dependent pip packages will now only occur within a Python virtual environment. The magnum_venv_bin, magnum_venv_enabled variables have been removed.

  • Installation of rally and its dependent pip packages will now only occur within a Python virtual environment. The rally_venv_bin, rally_venv_enabled variables have been removed.

  • Installation of sahara and its dependent pip packages will now only occur within a Python virtual environment. The sahara_venv_bin, sahara_venv_enabled, sahara_venv_etc_dir, and sahara_non_venv_etc_dir variables have been removed.

  • The variable keystone_apt_packages has been renamed to keystone_distro_packages.

  • The variable keystone_idp_apt_packages has been renamed to keystone_idp_distro_packages.

  • The variable keystone_sp_apt_packages has been renamed to keystone_sp_distro_packages.

  • The variable keystone_developer_apt_packages has been renamed to keystone_developer_mode_distro_packages.

  • The variable glance_apt_packages has been renamed to glance_distro_packages.

  • The variable horizon_apt_packages has been renamed to horizon_distro_packages.

  • The variable aodh_apt_packages has been renamed to aodh_distro_packages.

  • The variable cinder_apt_packages has been renamed to cinder_distro_packages.

  • The variable cinder_volume_apt_packages has been renamed to cinder_volume_distro_packages.

  • The variable cinder_lvm_volume_apt_packages has been renamed to cinder_lvm_volume_distro_packages.

  • The variable ironic_api_apt_packages has been renamed to ironic_api_distro_packages.

  • The variable ironic_conductor_apt_packages has been renamed to ironic_conductor_distro_packages.

  • The variable ironic_conductor_standalone_apt_packages has been renamed to ironic_conductor_standalone_distro_packages.

  • The variable galera_pre_packages has been renamed to galera_server_required_distro_packages.

  • The variable galera_packages has been renamed to galera_server_mariadb_distro_packages.

  • The variable haproxy_pre_packages has been renamed to haproxy_required_distro_packages.

  • The variable haproxy_packages has been renamed to haproxy_distro_packages.

  • The variable memcached_apt_packages has been renamed to memcached_distro_packages.

  • The variable neutron_apt_packages has been renamed to neutron_distro_packages.

  • The variable neutron_lbaas_apt_packages has been renamed to neutron_lbaas_distro_packages.

  • The variable neutron_vpnaas_apt_packages has been renamed to neutron_vpnaas_distro_packages.

  • The variable neutron_apt_remove_packages has been renamed to neutron_remove_distro_packages.

  • The variable heat_apt_packages has been renamed to heat_distro_packages.

  • The variable ceilometer_apt_packages has been renamed to ceilometer_distro_packages.

  • The variable ceilometer_developer_mode_apt_packages has been renamed to ceilometer_developer_mode_distro_packages.

  • The variable lxc_apt_packages has been renamed to lxc_hosts_distro_packages.

  • The variable openstack_host_apt_packages has been renamed to openstack_host_distro_packages.

  • The galera_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option galera_client_package_state should be set to present.

  • The ceph_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ceph_client_package_state should be set to present.

  • The os_ironic role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ironic_package_state should be set to present.

  • The os_nova role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option nova_package_state should be set to present.

  • The memcached_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option memcached_package_state should be set to present.

  • The os_heat role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option heat_package_state should be set to present.

  • The rsyslog_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rsyslog_server_package_state should be set to present.

  • The pip_install role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option pip_install_package_state should be set to present.

  • The repo_build role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option repo_build_package_state should be set to present.

  • The os_rally role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rally_package_state should be set to present.

  • The os_glance role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option glance_package_state should be set to present.

  • The security role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option security_package_state should be set to present.

  • All roles always check whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option package_state should be set to present.
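
    For example, to only validate package presence across all roles, a deployer could set the following in user_variables.yml:

    package_state: present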

  • The os_keystone role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option keystone_package_state should be set to present.

  • The os_cinder role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option cinder_package_state should be set to present.

  • The os_gnocchi role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option gnocchi_package_state should be set to present.

  • The os_magnum role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option magnum_package_state should be set to present.

  • The rsyslog_client role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rsyslog_client_package_state should be set to present.

  • The os_sahara role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option sahara_package_state should be set to present.

  • The repo_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option repo_server_package_state should be set to present.

  • The haproxy_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option haproxy_package_state should be set to present.

  • The os_aodh role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option aodh_package_state should be set to present.

  • The openstack_hosts role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option openstack_hosts_package_state should be set to present.

  • The galera_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option galera_server_package_state should be set to present.

  • The rabbitmq_server role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option rabbitmq_package_state should be set to present.

  • The lxc_hosts role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option lxc_hosts_package_state should be set to present.

  • The os_ceilometer role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option ceilometer_package_state should be set to present.

  • The os_swift role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option swift_package_state should be set to present.

  • The os_neutron role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option neutron_package_state should be set to present.

  • The os_horizon role always checks whether the latest package is installed when executed. If a deployer wishes to change the check to only validate the presence of the package, the option horizon_package_state should be set to present.

  • The variable rsyslog_client_packages has been replaced by rsyslog_client_distro_packages.

  • The variable rsyslog_server_packages has been replaced by rsyslog_server_distro_packages.

  • The LVM configuration tasks and lvm.conf template have been removed from the openstack_hosts role since they are no longer needed. All of the LVM configuration is properly handled in the os_cinder role.

  • In the rsyslog_client role, the variable rsyslog_client_repos has been removed as it is no longer used.

  • The aodh-api init service is removed since aodh-api is deployed as an apache mod_wsgi service.

Deprecation Notes

  • Removed cirros_tgz_url and in most places replaced with tempest_img_url.

  • Removed cirros_img_url and in most places replaced with tempest_img_url.

  • Removed deprecated variable tempest_compute_image_alt_ssh_user

  • Removed deprecated variable tempest_compute_image_ssh_password

  • Removed deprecated variable tempest_compute_image_alt_ssh_password

  • Renamed cirros_img_disk_format to tempest_img_disk_format
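
    A minimal illustrative sketch of the replacement variables (the URL and disk format shown are placeholders):

    tempest_img_url: "http://example.com/images/cirros.img"
    tempest_img_disk_format: qcow2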

  • Downloading and unarchiving a .tar.gz has been removed. The related tempest options ami_img_file, aki_img_file, and ari_img_file have been removed from tempest.conf.j2.

  • The [boto] section of tempest.conf.j2 has been removed. These tests have been completely removed from tempest for some time.

Bug Fixes

  • This role assumes that there is a network named “public|private” and a subnet named “public|private-subnet”. These names are made configurable by the addition of two sets of variables: tempest_public_net_name and tempest_public_subnet_name for public networks, and tempest_private_net_name and tempest_private_subnet_name for private networks. This addresses bug 1588818.
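
    For example, a deployer using differently named networks could set the following in user_variables.yml (the names shown are illustrative):

    tempest_public_net_name: ext-net
    tempest_public_subnet_name: ext-subnet
    tempest_private_net_name: internal
    tempest_private_subnet_name: internal-subnet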

  • The /run directory is excluded from AIDE checks since the files and directories there are only temporary and often change when services start and stop.

  • AIDE initialization is now always run on subsequent playbook runs when security_initialize_aide is set to yes. The initialization will be skipped if AIDE isn’t installed or if the AIDE database already exists.

    See bug 1616281 for more details.

  • Logging within the container has been bind mounted to the host. This resolves issue 1588051 <https://bugs.launchpad.net/openstack-ansible/+bug/1588051>.

  • Removed various deprecated / no longer supported features from tempest.conf.j2. Some variables have been moved to their new sections in the config.

  • Horizon deployments were broken due to an incorrect hostname setting being placed in the apache ServerName configuration. This caused Horizon startup failure any time debug was disabled.

  • LXC package installation and cache preparation will now occur by default only on hosts which will actually implement containers.

  • The --compact flag has been removed from xtrabackup options. This flag had been shown to cause crashes in some SST situations.

Other Notes

  • The in-tree “ansible.cfg” file in the playbooks directory has been removed. This file was making compatibility difficult for deployers who need to change these values. Additionally, this file’s very existence forced Ansible to ignore any other config file in either a user’s home directory or in the default “/etc/ansible” directory.

14.0.0.0b2

New Features

  • The option openstack_domain has been added to the openstack_hosts role. This option is used to set up proper hostname entries for all hosts within a given OpenStack deployment.

  • The openstack_hosts role will setup an RFC1034/5 hostname and create an alias for all hosts in inventory.

  • Ceilometer can now use Gnocchi for storage. By default this is disabled. To enable the service, set ceilometer_gnocchi_enabled: yes. See the Gnocchi role documentation for more details.

  • The os_horizon role now has support for the horizon ironic-ui dashboard. The dashboard may be enabled by setting horizon_enable_ironic_ui to True in /etc/openstack_deploy/user_variables.yml.

  • A new variable has been added to allow a deployer to control the restart of containers via the handler. This new option is lxc_container_allow_restarts and has a default of yes. If a deployer wishes to disable the auto-restart functionality they can set this value to no and automatic container restarts that are not absolutely required will be disabled.
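
    For example, to disable automatic container restarts triggered by the handler, set the following in user_variables.yml:

    lxc_container_allow_restarts: no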

  • Deployers can now blacklist certain Nova extensions by providing a list of such extensions in horizon_nova_extensions_blacklist variable, for example:

    horizon_nova_extensions_blacklist:
      - "SimpleTenantUsage"
    
  • The container cache preparation process now allows overlayfs to be set as the lxc_container_backing_store. When this is set a base container will be created using a name of the form <linux-distribution>-<distribution-release>-<host-cpu-architecture>. The container will be stopped as it is not used for anything except to be a backing store for all other containers, which will be based on a snapshot of the base container. The overlayfs backing store is not recommended for production use unless the host kernel version is 3.18 or higher.

  • The rsyslog_server role now has support for CentOS 7.

  • The rabbitmq_server now supports a configurable inventory host group. Deployers can override the rabbitmq_host_group variable if they wish to use the role to create additional RabbitMQ clusters on a custom host group.

  • The lxc-container-create role now consumes the variable lxc_container_config_list which should contain a list of the entries which should be added to the LXC container config file when the container is created. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.

  • The lxc-container-create role now consumes the variable lxc_container_commands which should contain any shell commands that should be executed in a newly created container. This feature is designed to be used in group_vars to ensure that containers are fully prepared at the time they are created, thus cutting down the number of times containers are restarted during deployments and upgrades.
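
    A minimal illustrative sketch of how these two variables might be set in group_vars; the config entry and command shown are placeholders, and the exact expected structure should be checked against the role defaults:

    lxc_container_config_list:
      - "lxc.start.auto=0"
    lxc_container_commands: |
      mkdir -p /etc/example-directory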

  • The container creation process now allows overlayfs to be set as the lxc_container_backing_store. When this is set it will use a snapshot of the base container to build the containers. The overlayfs backing store is not recommended to be used for production unless the host kernel version is 3.18 or higher.

  • The security role now has tasks that will disable the graphical interface on a server using upstart (Ubuntu 14.04) or systemd (Ubuntu 16.04 and CentOS 7). These changes take effect after a reboot.

    Deployers that need a graphical interface will need to set the following Ansible variable:

    security_disable_x_windows: no
    
  • Whether ceilometer should be enabled by default for each service is now dynamically determined based on whether there are any ceilometer hosts/containers deployed. This behaviour can still be overridden by toggling <service>_ceilometer_enabled in /etc/openstack_deploy/user_variables.yml.

  • The os_neutron role now determines the default configuration for openvswitch-agent tunnel_types and the presence or absence of local_ip configuration based on the value of neutron_ml2_drivers_type. Deployers may directly control this configuration by overriding the neutron_tunnel_types variable.

  • The os_neutron role now configures neutron ml2 to load the l2_population mechanism driver by default based on the value of neutron_l2_population. Deployers may directly control the neutron ml2 mechanism drivers list by overriding the mechanisms variable in the neutron_plugins dictionary.

  • LBaaSv2 is now enabled by default in all-in-one (AIO) deployments.

  • The py_pkgs lookup plugin now has strict ordering for requirement files discovered. These files are used to add additional requirements to the python packages discovered. The order is defined by the constant REQUIREMENTS_FILE_TYPES, which contains the following entries: ‘test-requirements.txt’, ‘dev-requirements.txt’, ‘requirements.txt’, ‘global-requirements.txt’, ‘global-requirement-pins.txt’. The items in this list are arranged from least to most priority.

  • The repo build process is now able to make use of a pre-staged git cache. If the /var/www/repo/openstackgit folder on the repo server is found to contain existing git clones then they will be updated if they do not already contain the required SHA for the build.

  • Gnocchi is available for deploy as a metrics storage service. At this time it does not integrate with Aodh or Ceilometer. To deploy Aodh or Ceilometer to use Gnocchi as a storage / query API, each must be configured appropriately with the use of overrides as described in the configuration guides for each of these services.

  • Added a new haproxy_extra_services variable which allows extra haproxy endpoints to be defined.

  • Horizon now has the ability to set arbitrary configuration options using global option horizon_config_overrides in YAML format. The overrides follow the same pattern found within the other OpenStack service overrides. General documentation on overrides can be found here.
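
    A minimal illustrative sketch, assuming a deployer wants to flip a single dashboard setting (the option name and value are placeholders):

    horizon_config_overrides:
      OPENSTACK_ENABLE_PASSWORD_RETRIEVE: True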

  • The os_horizon role now supports configuration of custom themes. Deployers can use the new horizon_custom_themes and horizon_default_theme variables to configure the dashboard with custom themes and default to a specific theme respectively.

  • A task was added that restricts ICMPv4 redirects to meet the requirements of V-38524 in the STIG. This configuration is disabled by default since it could cause issues with LXC in some environments.

    Deployers can enable this configuration by setting an Ansible variable:

    security_disable_icmpv4_redirects: yes
    
  • The audit rules added by the security role now have key fields that make it easier to link the audit log entry to the audit rule that caused it to appear.

  • pip can be installed via the deployment host using the new variable pip_offline_install. This can be useful in environments where the containers lack internet connectivity. Please refer to the limited connectivity installation guide for more information.

  • The env.d directory included with OpenStack-Ansible is now used as the first source for the environment skeleton, and /etc/openstack_deploy/env.d will be used only to override values. Deployers without customizations will no longer need to copy the env.d directory to /etc/openstack_deploy. As a result, the env.d copy operation has been removed from the node bootstrap role.

  • The ironic role now supports Ubuntu 16.04 and SystemD.

  • The LBaaSv2 service provider configuration can now be adjusted with the neutron_lbaasv2_service_provider variable. This allows a deployer to choose to deploy LBaaSv2 with Octavia in a future version.

  • A conditional has been added to the _local_ip settings used in neutron_local_ip, which removes the hard requirement for an overlay network to be set within a deployment. If no overlay network is set within the deployment, the local_ip will be set to the value of “ansible_ssh_host”.

  • The os_neutron role will now default to the OVS firewall driver when neutron_plugin_type is ml2.ovs and the host is running Ubuntu 16.04 on PowerVM. To override this default behavior, deployers should define neutron_ml2_conf_ini_overrides and neutron_openvswitch_agent_ini_overrides in user_variables.yml, for example:

    neutron_ml2_conf_ini_overrides:
      securitygroup:
        firewall_driver: neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    neutron_openvswitch_agent_ini_overrides:
      securitygroup:
        firewall_driver: iptables_hybrid
    
  • Support for Neutron distributed virtual routing has been added to the os_neutron role. This includes the implementation of Networking Guide’s suggested agent configuration. This feature may be activated by setting neutron_plugin_type: ml2.ovs.dvr in /etc/openstack_deploy/user_variables.yml.

  • The nova SSH public key distribution has been made a lot faster especially when deploying against very large clusters. To support larger clusters the role has moved away from the “authorized_key” module and is now generating a script to insert keys that may be missing from the authorized keys file. The script is saved on all nova compute nodes and can be found at /usr/local/bin/openstack-nova-key.sh. If ever there is a need to reinsert keys or fix issues on a given compute node the script can be executed at any time without directly running the ansible playbooks or roles.

  • The os_nova role can now detect and support basic deployment of a PowerVM environment. This sets the virtualization type to ‘powervm’ and installs/updates the PowerVM NovaLink package and nova-powervm driver.

  • Nova UCA repository support is enabled by default. This allows users to benefit from the updated packages for KVM. The nova_uca_enable variable controls the install source for the KVM packages. By default this value is set to True to make use of the UCA repository. Users can set it to False to disable this behaviour.

  • Added horizon_apache_custom_log_format tunable to the os-horizon role for changing CustomLog format. Default is “combined”.

  • Added keystone_apache_custom_log_format tunable for changing CustomLog format. Default is “combined”.
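
    For example, to extend the keystone CustomLog format with the time taken to serve each request, a deployer could set something like the following (the format string is illustrative):

    keystone_apache_custom_log_format: "%h %l %u %t \"%r\" %>s %b %D"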

  • The os_cinder role now supports Ubuntu 16.04.

  • The repo build process now has the ability to store the pip sources within the build archive. This ability is useful when deploying environments that are “multi-architecture”, “multi-distro”, or “multi-interpreter” where specific pre-built wheels may not be enough to support all of the deployment. To enable the ability to store the python source code within a given release, set the new option repo_build_store_pip_sources to true.

  • The repo server now has a Package Cache service for distribution packages. To leverage the cache, deployers will need to configure the package manager on all hosts to use the cache as a proxy. If a deployer would prefer to disable this service, the variable repo_pkg_cache_enabled should be set to false.

  • The LBaaSv2 device driver is now set by the Ansible variable neutron_lbaasv2_device_driver. The default is set to use the HaproxyNSDriver, which allows for agent-based load balancers.

  • The GPG key checks for package verification in V-38476 are now working for Red Hat Enterprise Linux 7 in addition to CentOS 7. The checks only look for GPG keys from Red Hat and any other GPG keys, such as ones imported from the EPEL repository, are skipped.

  • CentOS7 support has been added to the rsyslog_client role.

  • The options of application logrotate configuration files are now configurable. rsyslog_client_log_rotate_options can be used to provide a list of directives, and rsyslog_client_log_rotate_scripts can be used to provide a list of postrotate, prerotate, firstaction, or lastaction scripts.

  • The repo build process now selectively builds venvs based on whether each OpenStack service group has any hosts in it. If there are no hosts in the group, the venv will not be built. This behaviour can be optionally changed to force all venvs to be built by setting repo_build_venv_selective to yes.

  • The os_swift role has 3 new variables that will allow a deployer to change the hard, soft and fs.file-max limits. The hard and soft limits are added to the limits.conf file for the swift system user. The fs.file-max settings are added to storage hosts via kernel tuning. The new options are swift_hard_open_file_limits with a default of 10240, swift_soft_open_file_limits with a default of 4096, and swift_max_file_limits with a default of 24 times the value of swift_hard_open_file_limits.
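
    For example, to raise the limits on storage hosts, a deployer could set the following in user_variables.yml (values shown are illustrative):

    swift_hard_open_file_limits: 20480
    swift_soft_open_file_limits: 8192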

  • The pretend_min_part_hours_passed option can now be passed to swift-ring-builder prior to performing a rebalance. This is set by the swift_pretend_min_part_hours_passed boolean variable. The default for this variable is False. We recommend setting this by running the os-swift.yml playbook with -e swift_pretend_min_part_hours_passed=True, to avoid resetting min_part_hours unintentionally on every run. Setting swift_pretend_min_part_hours_passed to True will reset the clock on the last time a rebalance happened, thus circumventing the min_part_hours check. This should only be used with extreme caution. If you run this command and deploy rebalanced rings before a replication pass completes, you may introduce unavailability in your cluster. This has an end-user impact.

  • Support has been added to allow deploying on ppc64le architecture using the Ubuntu distributions.

Upgrade Notes

  • A new global variable has been created named openstack_domain. This variable has a default value of “openstack.local”.

  • The default database collation has changed from utf8_unicode_ci to utf8_general_ci. Existing databases and tables will need to be converted.

  • Haproxy has a new backend to support using the repo server nodes as a git server. The new backend is called “repo_git” and uses port “9418”. Default ACLs have been created to lock down the port’s availability to only internal networks originating from an RFC1918 address.

  • Upgrades will not replace entries in the /etc/openstack_deploy/env.d directory, though new versions of OpenStack-Ansible will now use the shipped env.d as a base, which may alter existing deployments.

  • A new nova.conf entry, live_migration_uri, has been added. This entry will default to a qemu+ssh:// uri, which uses the ssh keys that have already been distributed between all of the compute hosts.

  • The dynamic_inventory script previously set the provider network attributes is_container_address and is_ssh_address to True for the management network regardless of whether a deployer had them configured this way or not. Now, these attributes must be configured by deployers and the dynamic_inventory script will fail if they are missing or not True.

  • The variable neutron_agent_mode has been removed from the os_neutron role. The appropriate value for l3_agent.ini is now determined based on the neutron_plugin_type and host group membership.

  • Installation of glance and its dependent pip packages will now only occur within a Python virtual environment. The glance_venv_bin, glance_venv_enabled, glance_venv_etc_dir, and glance_non_venv_etc_dir variables have been removed.

  • Installation of gnocchi and its dependent pip packages will now only occur within a Python virtual environment. The gnocchi_venv_bin, gnocchi_venv_enabled, gnocchi_venv_etc_dir, and gnocchi_non_venv_etc_dir variables have been removed.

  • Installation of heat and its dependent pip packages will now only occur within a Python virtual environment. The heat_venv_bin and heat_venv_enabled variables have been removed.

  • Installation of horizon and its dependent pip packages will now only occur within a Python virtual environment. The horizon_venv_bin, horizon_venv_enabled, horizon_venv_lib_dir, and horizon_non_venv_lib_dir variables have been removed.

  • Installation of ironic and its dependent pip packages will now only occur within a Python virtual environment. The ironic_venv_bin and ironic_venv_enabled variables have been removed.

  • Installation of keystone and its dependent pip packages will now only occur within a Python virtual environment. The keystone_venv_enabled variable has been removed.

  • Installation of aodh and its dependent pip packages will now only occur within a Python virtual environment. The aodh_venv_enabled and aodh_venv_bin variables have been removed.

  • Installation of ceilometer and its dependent pip packages will now only occur within a Python virtual environment. The ceilometer_venv_enabled and ceilometer_venv_bin variables have been removed.

  • Installation of cinder and its dependent pip packages will now only occur within a Python virtual environment. The cinder_venv_enabled and cinder_venv_bin variables have been removed.

  • Installation of neutron and its dependent pip packages will now only occur within a Python virtual environment. The neutron_venv_enabled, neutron_venv_bin, neutron_non_venv_lib_dir and neutron_venv_lib_dir variables have been removed.

  • Installation of nova and its dependent pip packages will now only occur within a Python virtual environment. The nova_venv_enabled and nova_venv_bin variables have been removed.

  • Installation of swift and its dependent pip packages will now only occur within a Python virtual environment. The swift_venv_enabled and swift_venv_bin variables have been removed.

  • LBaaSv1 has been removed from the neutron-lbaas project in the Newton release and it has been removed from OpenStack-Ansible as well.

  • The infra_hosts and infra_containers inventory groups have been removed. No containers or services were assigned to these groups exclusively, and the usage of the groups has been supplanted by the shared-infra_* and os-infra_* groups for some time. Deployers who were using the groups should adjust any custom configuration in the env.d directory to assign containers and/or services to other groups.

  • The Neutron HA tool written by AT&T is no longer enabled by default. This tool was providing HA capabilities for networks and routers that were not using the native Neutron L3HA. Because native Neutron L3HA is stable, compatible with the Linux Bridge Agent, and is a better means of enabling HA within a deployment this tool is no longer being setup by default. If legacy L3HA is needed within a deployment the deployer can set neutron_legacy_ha_tool_enabled to true to enable the legacy tooling.

  • The repo_build_apt_packages variable has been renamed. repo_build_distro_packages should be used instead to override packages required to build Python wheels and venvs.

  • The repo_build role now makes use of Ubuntu Cloud Archive by default. This can be disabled by setting repo_build_uca_enable to False.

  • Ceilometer no longer manages alarm storage when Aodh is enabled. It now redirects alarm-related requests to the Aodh API. This is now auto-enabled when Aodh is deployed.

  • Overrides for ceilometer aodh_connection_string will no longer work. Specifying an Aodh connection string in Ceilometer was deprecated within Ceilometer in a prior release so this option has been removed.

  • Hosts running LXC on Ubuntu 14.04 will now need to enable the “trusty-backports” repository. The backports repo on Ubuntu 14.04 is now required to ensure LXC is updated to the latest stable version.

  • The Aodh data migration script should be run to migrate alarm data from MongoDB storage to Galera due to the pending removal of MongoDB support.

  • Neutron now makes use of Ubuntu Cloud Archive by default. This can be disabled by setting neutron_uca_enable to False.

  • The utility-all.yml playbook will no longer distribute the deployment host’s root user’s private ssh key to all utility containers. Deployers who desire this behavior should set the utility_ssh_private_key variable.

  • The following variables have been renamed in order to make the variable names neutral for multiple operating systems.

    • nova_apt_packages -> nova_distro_packages

    • nova_spice_apt_packages -> nova_spice_distro_packages

    • nova_novnc_apt_packages -> nova_novnc_distro_packages

    • nova_compute_kvm_apt_packages -> nova_compute_kvm_distro_packages

Deprecation Notes

  • The openstack_host_apt_packages variable has been deprecated. openstack_host_packages should be used instead to override packages required to install on all OpenStack hosts.

  • Moved haproxy_service_configs var to haproxy_default_service_configs so that haproxy_service_configs can be modified and added to without overriding the entire default service dict.

  • The Neutron HA tool written by AT&T has been deprecated and will be removed in the Ocata release.

Security Issues

  • The admin_token_auth middleware presents a potential security risk and will be removed in a future release of keystone. Its use can be removed by setting the keystone_keystone_paste_ini_overrides variable.

    keystone_keystone_paste_ini_overrides:
      pipeline:public_api:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service
      pipeline:admin_api:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension s3_extension admin_service
      pipeline:api_v3:
        pipeline: cors sizelimit osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
    

Bug Fixes

  • The role previously did not restart the audit daemon after generating a new rules file. The bug has been fixed and the audit daemon will be restarted after any audit rule changes.

  • The standard collectstatic and compression process in the os_horizon role now happens after horizon customizations are installed, so that all static resources will be collected and compressed.

  • When the security role was run in Ansible’s check mode and a tag was provided, the check_mode variable was not being set. Any tasks which depend on that variable would fail. This bug is fixed and the check_mode variable is now set properly on every playbook run.

  • When upgrading it is possible for an old neutron-ns-metadata-proxy process to remain running in memory. If this happens the old version of the process can cause unexpected issues in a production environment. To fix this a task has been added to the os_neutron role that will execute a process lookup and kill any neutron-ns-metadata-proxy processes that are not running the current release tag. Once the old processes are removed the metadata agent running will respawn everything needed within 60 seconds.

  • Deleting variable entries from the global_overrides dictionary in openstack_user_config.yml now properly removes those variables from the openstack_inventory.json file. See Bug

  • The pip_packages_tmp variable has been renamed pip_tmp_packages to avoid unintended processing by the py_pkgs lookup plugin.

  • Previously, the ansible_managed var was being used to insert a header into the swift.conf that contained date/time information. This meant that swift.conf across different nodes did not have the same MD5SUM, causing swift-recon --md5 to break. We now insert a piece of static text instead to resolve this issue.

  • Aodh has deprecated support for NoSQL storage (MongoDB and Cassandra) in Mitaka with removal scheduled for the O* release. This causes warnings in the logs. The default of using MongoDB storage for Aodh is replaced with the use of Galera. Continued use of MongoDB will require the use of vars to specify a correct aodh_connection_string and add pymongo to the aodh_pip_packages list.

Other Notes

  • nova_libvirt_live_migration_flag is now phased out. Please create a nova configuration override with live_migration_tunnelled: True if you want to pass the VIR_MIGRATE_TUNNELLED flag to libvirt. Nova “chooses a sensible default” otherwise.
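
    A minimal illustrative sketch, assuming the override is applied through the nova_nova_conf_overrides mechanism and that live_migration_tunnelled belongs to the libvirt section of nova.conf:

    nova_nova_conf_overrides:
      libvirt:
        live_migration_tunnelled: True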

  • nova_compute_manager is now phased out.

  • The run-playbooks.sh script has been refactored to run all playbooks using our core tool set and run order. The refactor work updates the old special case script to a tool that simply runs the integrated playbooks as they’ve been designed.

14.0.0.0b1

New Features

  • LXC containers will now have a proper RFC1034/5 hostname set during post build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the “_” in the inventory_hostname to “-“. Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.

  • A new option has been added to bootstrap-ansible.sh to set the role fetch mode. The environment variable ANSIBLE_ROLE_FETCH_MODE sets how role dependencies are resolved.

  • The auditd rules template included a rule that audited changes to the AppArmor policies, but the SELinux policy changes were not being audited. Any changes to SELinux policies in /etc/selinux are now being logged by auditd.

  • Support has been added to install the ceph_client packages and dependencies from Ceph.com, Ubuntu Cloud Archive (UCA), or the operating system’s default repository.

    The ceph_pkg_source variable controls the install source for the Ceph packages (see the example after this list). Valid values include:

    • ceph: This option installs Ceph from a ceph.com repo. Additional variables to adjust items such as Ceph release and regional download mirror can be found in the variables files.

    • uca: This option installs Ceph from the Ubuntu Cloud Archive. Additional variables to adjust items such as the OpenStack/Ceph release can be found in the variables files.

    • distro: This option installs Ceph from the operating system’s default repository and, unlike the other options, does not attempt to manage package keys or add additional package repositories.
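
    For example, to install the Ceph packages from the Ubuntu Cloud Archive, set the following in user_variables.yml:

    ceph_pkg_source: uca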

  • The pip_install role can now configure pip to be locked down to the repository built by OpenStack-Ansible. To enable the lockdown configuration, deployers may set pip_lock_to_internal_repo to true in /etc/openstack_deploy/user_variables.yml.

  • The ability to support MultiStrOps has been added to the config_template action plugin. This change updates the parser to use the set() type to determine if values within a given key are to be rendered as MultiStrOps. If an override is used in an INI config file the set type is defined using the standard yaml construct of “?” as the item marker.

    # Example Override Entries
    Section:
      typical_list_things:
        - 1
        - 2
      multistrops_things:
        ? a
        ? b
    
    # Example Rendered Config:
    [Section]
    typical_list_things = 1,2
    multistrops_things = a
    multistrops_things = b
    
  • All of the database and database user creation tasks have been moved from the roles into the playbooks. This allows the roles to be tested independently of the deployed database and also allows the roles to be used independently of infrastructure choices made by the integrated OSA project.

  • Host security hardening is now applied by default using the openstack-ansible-security role. Deployers can opt out by setting the apply_security_hardening Ansible variable to false. For more information about the role and the changes it makes, refer to the openstack-ansible-security documentation.

  • The os_nova role can now detect a PowerNV environment and set the virtualization type to ‘kvm’.

  • An Ansible task was added to disable the rdisc service on CentOS systems if the service is installed on the system.

    Deployers can opt-out of this change by setting security_disable_rdisc to no.

  • The Linux Security Module (LSM) that is appropriate for the Linux distribution in use will be automatically enabled by the security role by default. Deployers can opt out of this change by setting the following Ansible variable:

    security_enable_linux_security_module: False
    

    The documentation for STIG V-51337 has more information about how each LSM is enabled along with special notes for SELinux.

  • The os_glance role now supports Ubuntu 16.04 and SystemD.

  • CentOS 7 and Ubuntu 16.04 support have been added to the haproxy role.

  • The haproxy role installs hatop from source to ensure that the same operator tooling is available across all supported distributions. The download URL for the source can be set using the variable haproxy_hatop_download_url.

  • The HAProxy role provided by OpenStack-Ansible now terminates SSL using a self-signed certificate by default. While this can be disabled the inclusion of SSL services on all public endpoints as a default will help make deployments more secure without any additional user interaction. More information on SSL and certificate generation can be found here.

  • CentOS 7 support has been added to the galera_server role.

  • Implemented support for Ubuntu 16.04 Xenial. percona-xtrabackup packages will be installed from distro repositories, instead of upstream percona repositories due to lack of available packages upstream at the time of implementing this feature.

  • To ensure the deployment system remains clean the Ansible execution environment is contained within a virtual environment. The virtual environment is created at “/opt/ansible-runtime” and the “ansible.*” CLI commands are linked within /usr/local/bin to ensure there is no interruption in the deployer workflow.

  • There is a new default configuration for keepalived, supporting more than 2 nodes.

  • In order to make use of the latest stable keepalived version, the variable keepalived_use_latest_stable must be set to True.

  • The ability to support login user domain and login project domain has been added to the keystone module.

    # Example usage
    - keystone:
        command: ensure_user
        endpoint: "{{ keystone_admin_endpoint }}"
        login_user: admin
        login_password: admin
        login_project_name: admin
        login_user_domain_name: custom
        login_project_domain_name: custom
        user_name: demo
        password: demo
        project_name: demo
        domain_name: custom
    
  • The new LBaaS v2 dashboard is available in Horizon. Deployers can enable the panel by setting the following Ansible variable:

    horizon_enable_neutron_lbaas: True
    
  • The lxc_container_create role will now build a container based on the distro of the host OS.

  • The lxc_container_create role now supports Ubuntu 14.04, 16.04, and RHEL/CentOS 7

  • The lxc_host cache prep has been updated to use the LXC download template. This removes the last remaining dependency the project has on the rpc-trusty-container.tgz image.

  • The lxc_host role will build lxc cache using the download template built from images found here. These images are upstream builds from the greater LXC/D community.

  • The lxc_host role introduces support for CentOS 7 and Ubuntu 16.04 container types.

  • Neutron HA router capabilities in Horizon will be enabled automatically if the neutron plugin type is ML2 and environment has >=2 L3 agent nodes.

  • Horizon now has a boolean variable named horizon_enable_ha_router to enable Neutron HA router management.

  • Horizon’s IPv6 support is now enabled by default. This allows users to manage subnets with IPv6 addresses within the Horizon interface. Deployers can disable IPv6 support in Horizon by setting the following variable:

    horizon_enable_ipv6: False
    

    Please note: Horizon will still display IPv6 addresses in various panels with IPv6 support disabled. However, it will not allow any direct management of IPv6 configuration.

  • memcached now logs with multiple levels of verbosity, depending on the user variables. Setting debug: True enables maximum verbosity while setting verbose: True logs with an intermediate level.

  • The openstack-ansible-memcached_server role includes a new override, memcached_connections, which is automatically calculated from the memcached connection limit plus an additional 1k, and is used to configure the OS nofile limit. Without a proper nofile limit configuration, memcached will crash when trying to support higher parallel TCP/memcache connection counts.

  • CentOS 7 support has been added to the galera_client role.

  • Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.

  • Deployers can now configure tempest public and private networks by setting tempest_private_net_provider_type to either vxlan or vlan, and tempest_public_net_provider_type to flat or vlan. Depending on what the deployer sets these variables to, they may also need to update other variables accordingly, mainly tempest_public_net_physical_type and tempest_public_net_seg_id. Please refer to http://docs.openstack.org/mitaka/networking-guide/intro-basic-networking.html for more neutron networking information.
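
    For example, for a deployment with vxlan private networks and a flat public network, a deployer could set the following (values shown are illustrative):

    tempest_private_net_provider_type: vxlan
    tempest_public_net_provider_type: flat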

  • Neutron VPN as a Service (VPNaaS) can now optionally be deployed and configured. Please see the OpenStack Networking Guide for details about what the service is and what it provides. See the VPNaaS Install Guide for implementation details.

  • The horizon next generation instance management panels have been enabled by default. This changes horizon to use the upstream defaults instead of the legacy panels. Documentation can be found here.

  • A new configuration parameter security_ntp_bind_local_interfaces was added to the security role to restrict the network interface to which chronyd will listen for NTP requests.

  • Open vSwitch driver support has been implemented. This includes the implementation of the appropriate Neutron configuration and package installation. This feature may be activated by setting neutron_plugin_type: ml2.ovs in /etc/openstack_deploy/user_variables.yml.

  • Apache MPM tunable support has been added to the os-keystone role in order to allow MPM thread tuning. Default values reflect the current Ubuntu default settings:

    keystone_httpd_mpm_backend: event
    keystone_httpd_mpm_start_servers: 2
    keystone_httpd_mpm_min_spare_threads: 25
    keystone_httpd_mpm_max_spare_threads: 75
    keystone_httpd_mpm_thread_limit: 64
    keystone_httpd_mpm_thread_child: 25
    keystone_httpd_mpm_max_requests: 150
    keystone_httpd_mpm_max_conn_child: 0
    
  • The RabbitMQ Management UI is now available through HAProxy on port 15672. The default userid is monitoring. This user can be modified by changing the parameter rabbitmq_monitoring_userid in the file user_variables.yml. Please note that ACLs have been added to this HAProxy service by default, such that it may only be accessed by common internal clients. Reference playbooks/vars/configs/haproxy_config.yml

  • Tasks were added to search for any device files without a proper SELinux label on CentOS systems. If any of these device labels are found, the playbook execution will stop with an error message.

  • The openstack-ansible-security role supports the application of the Red Hat Enterprise Linux 6 STIG configurations to systems running CentOS 7 and Ubuntu 16.04 LTS.

  • The fallocate_reserve option can now be set (in bytes or as a percentage) for swift by using the swift_fallocate_reserve variable in /etc/openstack_deploy/user_variables.yml. This value is the amount of space to reserve on a disk to prevent a situation where swift is unable to remove objects due to a lack of available disk space to work with. The default value is 1% of the total disk size.
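
    For example, to reserve 2% of each disk, set the following in /etc/openstack_deploy/user_variables.yml (the percentage form is assumed to be expressed as a quoted string):

    swift_fallocate_reserve: "2%"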

  • While the default python interpreter for swift is cpython, pypy is now an option. This change adds the ability to greatly improve swift performance without core code modifications. These changes have been implemented using the documentation provided by Intel and Swiftstack. Notes about the performance increase can be seen here.

  • Enable the rsync module per object server drive by setting swift_rsync_module_per_drive to True. This configures rsync and swift to use an individual configuration per drive, which is required when disabling rsync to individual disks, for example in a disk-full scenario.

  • The os_swift role will now include the swift “staticweb” middleware by default.

  • Support has been added to allow the functional tests to pass when deploying on ppc64le architecture using the Ubuntu distributions.

Known Issues

  • In the latest stable version of keepalived there is a problem with the priority calculation when a deployer has more than five keepalived nodes. The problem causes the whole keepalived cluster to fail to work. To work around this issue it is recommended that deployers limit the number of keepalived nodes to no more than five or that the priority for each node is set as part of the configuration (cf. haproxy_keepalived_vars_file variable).

  • Paramiko version 2.0 Python requires the Python cryptography library. New system packages must be installed for this library. For OpenStack-Ansible versions <12.0.12, <11.2.15, <13.0.2 the system packages must be installed on the deployment host manually by executing apt-get install -y build-essential libssl-dev libffi-dev.

Upgrade Notes

  • LXC containers will now have a proper RFC1034/5 hostname set during post build tasks. A localhost entry for 127.0.1.1 will be created by converting all of the “_” in the inventory_hostname to “-“. Containers will be created with a default domain of openstack.local. This domain name can be customized to meet your deployment needs by setting the option lxc_container_domain.

  • The ca-certificates package has been included in the LXC container build process in order to prevent issues related to trying to connect to public websites which make use of newer certificates than exist in the base CA certificate store.

  • The environment variable FORKS is no longer used. The standard Ansible environment variable ANSIBLE_FORKS should be used instead.

  • The Galera client role now has a dependency on the apt package pinning role.

  • The variable security_audit_apparmor_changes is now renamed to security_audit_mac_changes and is enabled by default. Setting security_audit_mac_changes to no will disable syscall auditing for any changes to AppArmor policies (in Ubuntu) or SELinux policies (in CentOS).

  • The default value of service_credentials/os_endpoint_type within ceilometer’s configuration file has been changed to internalURL. This may be overridden through the use of the ceilometer_ceilometer_conf_overrides variable.
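
    For example, to switch back to the public endpoint, the following override could be applied; this is a minimal illustrative sketch:

    ceilometer_ceilometer_conf_overrides:
      service_credentials:
        os_endpoint_type: publicURL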

  • The LXC container cache preparation process now copies package repository configuration from the host instead of implementing its own configuration. The following variables are therefore unnecessary and have been removed:

    • lxc_container_template_main_apt_repo

    • lxc_container_template_security_apt_repo

    • lxc_container_template_apt_components

  • The LXC container cache preparation process now copies DNS resolution configuration from the host instead of implementing its own configuration. The lxc_cache_resolvers variable is therefore unnecessary and has been removed.

  • The MariaDB wait_timeout setting is decreased to 1h to match the SQL Alchemy pool recycle timeout, in order to prevent unnecessary database session buildups.

  • The variable repo_server_packages that defines the list of packages required to install a repo server has been replaced by repo_server_distro_packages.

  • Within the haproxy role hatop has been changed from a package installation to a source-based installation. This has been done to ensure that the same operator tooling is available across all supported distributions. The download URL for the source can be set using the variable haproxy_hatop_download_url.

  • SSL termination is assumed to be enabled for all public endpoints by default. If this is not needed, it can be disabled by setting the openstack_external_ssl option to false and the openstack_service_publicuri_proto to http.
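
    For example, to disable SSL termination on public endpoints, set the following in user_variables.yml:

    openstack_external_ssl: false
    openstack_service_publicuri_proto: http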

  • If HAProxy is used as the loadbalancer for a deployment it will generate a self-signed certificate by default. If HAProxy is NOT used, an SSL certificate should be installed on the external loadbalancer. The installation of an SSL certificate on an external load balancer is not covered by the deployment tooling.

  • In previous releases, connections to Horizon terminated SSL at the Horizon container. While that is still an option, SSL is now assumed to be terminated at the load balancer. If you wish to terminate SSL at the Horizon node, change the horizon_external_ssl option to false.

  • Public endpoints will need to be updated using the Keystone admin API to support secure endpoints. The Keystone ansible module will not recreate the endpoints automatically. Documentation on the Keystone service catalog can be found here.

  • There is a new default configuration for keepalived. When running the haproxy playbook, the configuration change will cause a keepalived restart unless the deployer has used a custom configuration file. The restart will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.

  • A new version of keepalived will be installed on the haproxy nodes if the variable keepalived_use_latest_stable is set to True and more than one haproxy node is configured. The update of the package will cause keepalived to restart and therefore will cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then reconfigured.

  • The lxc_container_create role no longer uses the distro specific lxc container create template.

  • The following variable changes have been made in the lxc_container_create role:

    • lxc_container_template: Removed because the template option is now contained within the operating system specific variable file loaded at runtime.

    • lxc_container_template_options: This option was renamed to lxc_container_download_template_options. The deprecation filter was not used because the values provided from this option have been fundamentally changed and old overrides will cause problems.

    • lxc_container_release: Removed because image is now tied with the host operating system.

    • lxc_container_user_name: Removed because the default users are no longer created when the cached image is created.

    • lxc_container_user_password: Removed because the default users are no longer created when the cached image is created.

    • lxc_container_template_main_apt_repo: Removed because this option is now being set within the cache creation process and is no longer needed here.

    • lxc_container_template_security_apt_repo: Removed because this option is now being set within the cache creation process and is no longer needed here.

  • The lxc_host role no longer uses the distro specific lxc container create template.

  • The following variable changes have been made in the lxc_host role:

    • lxc_container_user_password: Removed because the default lxc container user is no longer created by the lxc container template.

    • lxc_container_template_options: This option was renamed to lxc_cache_download_template_options. The deprecation filter was not used because the values provided from this option have been fundamentally changed and old overrides will potentially cause problems.

    • lxc_container_base_delete: Removed because the cache will be refreshed upon role execution.

    • lxc_cache_validate_certs: Removed because the Ansible get_url module is no longer used.

    • lxc_container_caches: Removed because the container create process will build a cached image based on the host OS.

  • The memcached log is no longer written to /var/log/memcached.log; logs are now stored in the /var/log/memcached directory.

  • The variable galera_client_apt_packages has been replaced by galera_client_distro_packages.

  • Whether the Neutron DHCP Agent, Metadata Agent or LinuxBridge Agent should be enabled is now dynamically determined based on the neutron_plugin_type and the neutron_ml2_mechanism_drivers that are set. This aims to simplify the configuration of Neutron services and eliminate the need for deployers to override the entire neutron_services dict variable to disable these services.

  • As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. The default DHCP configuration to advertise an MTU to instances has therefore been removed from the variable neutron_dhcp_config.

  • As described in the Mitaka release notes, Neutron now correctly calculates and advertises the MTU to instances. As such, the neutron_network_device_mtu variable has been removed and the hard-coded values in the templates for advertise_mtu, path_mtu, and segment_mtu have been removed to allow upstream defaults to operate as intended.

  • The new host group neutron_openvswitch_agent has been added to the env.d/neutron.yml and env.d/nova.yml environment configuration files in order to support the implementation of Open vSwitch. Deployers must ensure that their environment configuration files are updated to include the above group name. Please see the example implementations in env.d/neutron.yml and env.d/nova.yml.

  • The default horizon instance launch panels have been changed to the next generation panels. To enable legacy functionality set the following options accordingly:

    horizon_launch_instance_legacy: True
    horizon_launch_instance_ng: False
    
  • A new nova admin endpoint will be registered with the suffix /v2.1/%(tenant_id)s. The nova admin endpoint with the suffix /v2/%(tenant_id)s may be manually removed.

  • Cleanup tasks are added to remove the nova console git directories /usr/share/novnc and /usr/share/spice-html5, prior to cloning these inside the nova vnc and spice console playbooks. This is necessary to guarantee that local modifications do not break git clone operations, especially during upgrades.

  • The variable neutron_linuxbridge has been removed as it is no longer used.

  • The variable neutron_driver_interface has been removed. The appropriate value for neutron.conf is now determined based on the neutron_plugin_type.

  • The variable neutron_driver_firewall has been removed. The appropriate value for neutron.conf is now determined based on the neutron_plugin_type.

  • The variable neutron_ml2_mechanism_drivers has been removed. The appropriate value for ml2_conf.ini is now determined based on the neutron_plugin_type.

  • The Neutron L3 Agent configuration for the handle_internal_only_routers variable is removed in order to use the Neutron upstream default setting. The upstream default for handle_internal_only_routers is True, which allows Neutron L3 routers without external networks attached (as discussed in https://bugs.launchpad.net/neutron/+bug/1572390).

  • The variable rabbitmq_monitoring_password has been added to user_secrets.yml. If this variable does not exist, the RabbitMQ monitoring user will not be created.

  • The container property container_release has been removed as this is automatically set to the same version as the host in the container creation process.

  • The variable lxc_container_release has been removed from the lxc-container-create.yml playbook as it is no longer consumed by the container creation process.

  • Percona Xtrabackup has been removed from the Galera client role.

  • The variable verbose has been removed. Deployers should rely on the debug var to enable higher levels of memcached logging.

  • The variable verbose has been removed. Deployers should rely on the debug var to enable higher levels of logging.

  • The database and user creation tasks have been removed from the os_heat role. These tasks have been relocated to the playbooks.

  • The database and user creation tasks have been removed from the os_nova role. These tasks have been relocated to the playbooks.

  • The database and user creation tasks have been removed from the os_glance role. These tasks have been relocated to the playbooks.

  • The database and user creation tasks have been removed from the os_horizon role. These tasks have been relocated to the playbooks.

  • The database and user creation tasks have been removed from the os_cinder role. These tasks have been relocated to the playbooks.

  • The database and user creation tasks have been removed from the os_neutron role. These tasks have been relocated to the playbooks.

  • The swift_fallocate_reserve default value has changed from 10737418240 (10GB) to 1% in order to match the OpenStack swift default setting.

  • A new option swift_pypy_enabled has been added to enable or disable the pypy interpreter for swift. The default is “false”.

  • A new option swift_pypy_archive has been added to allow a pre-built pypy archive to be downloaded and moved into place to support swift running under pypy. This option is a dictionary and contains the URL and SHA256 as keys.
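
    For example, pypy support could be enabled with overrides similar to the following (the URL and checksum are placeholders, and the url and sha256 key names are assumptions based on the description above):

    swift_pypy_enabled: true
    swift_pypy_archive:
      url: "https://example.com/pypy-x.y.z-linux_x86_64-portable.tar.bz2"
      sha256: "<sha256 checksum of the archive>"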

  • The swift_max_rsync_connections default value has changed from 2 to 4 in order to match the OpenStack swift documented value.

  • When upgrading a Swift deployment from Mitaka to Newton it should be noted that the enabled middleware list has changed. In Newton the “staticweb” middleware will be loaded by default. While this change adds a feature, it is non-disruptive during upgrades.

  • All variables in the security role are now prepended with security_ to avoid collisions with variables in other roles. All deployers who have used the security role in previous releases will need to prepend all security role variables with security_.

    For example, a deployer could have disabled direct root ssh logins with the following variable:

    ssh_permit_root_login: yes
    

    That variable would become:

    security_ssh_permit_root_login: yes
    

Deprecation Notes

  • The rabbitmq_apt_packages variable has been deprecated. rabbitmq_dependencies should be used instead to override additional packages to install alongside rabbitmq-server.

  • The variable galera_package_url has been renamed to percona_package_url for clarity.

  • The variable galera_package_sha256 has been renamed to percona_package_sha256 for clarity.

  • The variable galera_package_path has been renamed to percona_package_path for clarity.

  • The variable galera_package_download_validate_certs has been renamed to percona_package_download_validate_certs for clarity.

  • Installation of Ansible on the root system, outside of a virtual environment, will no longer be supported.

  • The variables `galera_client_package_*` and `galera_client_apt_percona_xtrabackup_*` have been removed from the role as Xtrabackup is no longer deployed.

Security Issues

  • A sudoers entry has been added to the repo_servers in order to allow the nginx user to stop and start nginx via the init script. This is implemented in order to ensure that the repo sync process can shut off nginx while synchronising data from the master to the slaves.

  • A self-signed certificate will now be generated by default when HAProxy is used as a load balancer. This certificate is used to terminate the public endpoint for Horizon and all OpenStack API services.

  • Horizon disables password autocompletion in the browser by default, but deployers can now enable autocompletion by setting horizon_enable_password_autocomplete to True.

Bug Fixes

  • The dictionary-based variables in defaults/main.yml are now individual variables. The dictionary-based variables could not be changed as the documentation instructed. Instead it was required to override the entire dictionary. Deployers must use the new variable names to enable or disable the security configuration changes applied by the security role. For more information, see Launchpad Bug 1577944.

  • Failed access logging is now disabled by default and can be enabled by changing security_audit_failed_access to yes. The rsyslog daemon checks for the existence of log files regularly and this audit rule was triggered very frequently, which led to very large audit logs.

  • An Ansible task was added to disable the netconsole service on CentOS systems if the service is installed on the system.

    Deployers can opt-out of this change by setting security_disable_netconsole to no.

  • To ensure that the appropriate data is delivered to requesters, the slave repo_server web servers are taken offline during the synchronisation process. As a result, the correct data is always delivered through the load balancer.

  • The security role previously set the permissions on all audit log files in /var/log/audit to 0400, but this prevents the audit daemon from writing to the active log file. This will prevent auditd from starting or restarting cleanly.

    The task now removes any permissions that are not allowed by the STIG. Any log files that meet or exceed the STIG requirements will not be modified.

  • The security role now handles ssh_config files that contain Match stanzas. A marker is added to the configuration file and any new configuration items will be added below that marker. In addition, the configuration file is validated for each change to the ssh configuration file.

  • The ability to support login user domain and login project domain has been added to the keystone module. This resolves https://bugs.launchpad.net/openstack-ansible/+bug/1574000

    # Example usage
    - keystone:
        command: ensure_user
        endpoint: "{{ keystone_admin_endpoint }}"
        login_user: admin
        login_password: admin
        login_project_name: admin
        login_user_domain_name: custom
        login_project_domain_name: custom
        user_name: demo
        password: demo
        project_name: demo
        domain_name: custom
    
  • Assigning multiple IP addresses to the same host name will now result in an inventory error before running any playbooks.

  • The nova admin endpoint is now correctly registered as /v2.1/%(tenant_id)s instead of /v2/%(tenant_id)s.

  • The check to validate whether an appropriate ssh public key is available to copy into the container cache has been corrected to check the deployment host, not the LXC host.

  • Static route information for provider networks now must include the cidr and gateway information. If either key is missing, an error will be raised and the dynamic_inventory.py script will halt before any Ansible action is taken. Previously, if either key was missing, the inventory script would continue silently without adding the static route information to the networks. Note that this check does not validate the CIDR or gateway values, just that the values are present.

  • The XFS filesystem is excluded from the daily mlocate cron job in order to avoid large disk IOPS bursts caused by updatedb/mlocate file indexing.

  • The /var/lib/libvirt/qemu/save directory is now a symlink to {{ nova_system_home_folder }}/save to resolve an issue where the default location used by the libvirt managed save command could result in the root partitions on compute nodes becoming full when nova image-create is run on large instances.

Other Notes

  • The MariaDB version upgrade gate checks have been removed.

13.0.0

New Features

  • Ubuntu has 4 different ‘components’ - main, universe, multiverse and restricted:

    • Main: Officially supported software.

    • Restricted: Supported software that is not available under a completely free license.

    • Universe: Community maintained software, i.e. not officially supported software.

    • Multiverse: Software that is not free.

    The default apt sources configuration is now set to only include the main and universe components as those are the only required components for a functional deployment. If deployers wish to include other components then the variable lxc_container_template_apt_components may be set in /etc/openstack_deploy/user_variables.yml with the full list of desired components.
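
    For example, to restore all four components, the following could be added to /etc/openstack_deploy/user_variables.yml:

    lxc_container_template_apt_components:
      - main
      - universe
      - multiverse
      - restricted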

  • Ceilometer now uses the Keystone v3 endpoint. The ‘identity_uri’ directive has been removed since it is unused, ‘region_name’ has been added, and the directives under ‘service_credentials’ have been updated to support the keystoneauth library.

  • Added a function in dynamic_inventory.py to improve the identification of incorrect settings inside the user config files.

  • Deployers can optionally set a UID and/or GID for the nova user and group. This is helpful for environments with shared storage.
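
    For example, fixed IDs could be set with overrides similar to the following (the variable names shown are assumptions for illustration and the numeric IDs are arbitrary; check the os_nova role defaults for the exact names):

    nova_system_user_uid: 999
    nova_system_group_gid: 999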

  • A new variable called lxc_container_cache_files has been implemented which contains a list of dictionaries that specify files on the deployment host which should be copied into the LXC container cache and what attributes to assign to the copied file.
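
    For example, a file could be copied into the container cache with an entry similar to the following (a sketch; the key names mirror the Ansible copy module parameters and are assumptions, as are the paths):

    lxc_container_cache_files:
      - src: "/etc/openstack_deploy/files/example-ca.crt"
        dest: "/usr/local/share/ca-certificates/example-ca.crt"
        owner: "root"
        group: "root"
        mode: "0644"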

  • The haproxy-install.yml playbook will now be run as a part of setup-infrastructure.yml.

  • Logs for haproxy can now be found in /openstack/log/<haproxy host name>-haproxy/ on the container host (if haproxy is in a container), or /var/log/haproxy/ if haproxy is installed directly on a host.

  • Horizon deployment now supports an operator provided customization module which can be configured using the horizon_customization_module variable. Please see the Horizon documentation for more information.

  • Keystone’s v3 API is now the default for all services.

  • TLS certificate chain verification during the download of LXC cached images can now be toggled using the configuration variable ‘lxc_cache_validate_certs’. The default behavior is to validate the certificate chain.

  • Keystone can now be configured for multiple LDAP or Active Directory identity back-ends. Configuration of this feature is documented in the Keystone Configuration section of the Install Guide.

  • LBaaS v2 is available for deployment in addition to LBaaS v1. Both versions are mutually exclusive and cannot be running at the same time. Deployers will need to re-create any existing load balancers if they switch between LBaaS versions. Switching to LBaaS v2 will stop any existing LBaaS v1 load balancers.

  • Neutron Firewall as a Service (FWaaS) can now optionally be deployed and configured. Please see the FWaaS Configuration Reference for details about what the service is and what it provides. See the FWaaS Install Guide for implementation details.

  • Deployers can set a default availability zone (AZ) for new instance builds which do not provide an AZ. The value is None by default, but it can be changed with the nova_default_schedule_zone Ansible variable.
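
    For example, in /etc/openstack_deploy/user_variables.yml (the zone name is illustrative):

    nova_default_schedule_zone: az1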

  • Two new variables (rabbitmq_async_threads and rabbitmq_process_limit) have been added to the openstack-ansible-rabbitmq_server role. The variable rabbitmq_async_threads limits the number of asynchronous threads for file and socket I/O. The variable rabbitmq_process_limit limits the overall number of supported processes inside the erlang VM.

  • OpenStack services have been set to communicate with RabbitMQ using SSL by default. This feature may be disabled by setting rabbit_use_ssl to false in /etc/openstack_deploy/user_variables.yml. The default behaviour will be to use a self-signed certificate for communications. This can be changed by the procedure referred to in the SSL documentation.

  • Repeatable deployments are now easier since the manifest files for OpenStack software use the exact content from an upstream repository. Specific commits or tags can be referenced within the manifest. The yaprt package is no longer used to build the repo.

  • Developers can specify additional python packages for the repo build process by creating YAML files within /etc/openstack_deploy/. Refer to the documentation on adding packages for more details.

Upgrade Notes

  • Ubuntu has 4 different ‘components’ - main, universe, multiverse and restricted:

    • Main: Officially supported software.

    • Restricted: Supported software that is not available under a completely free license.

    • Universe: Community maintained software, i.e. not officially supported software.

    • Multiverse: Software that is not free.

    The default apt sources configuration is now set to only include the main and universe components as those are the only required components for a functional deployment. If deployers wish to include other components then the variable lxc_container_template_apt_components may be set in /etc/openstack_deploy/user_variables.yml with the full list of desired components.

  • The pip_get_pip_options override has been removed from group_vars, resulting in the empty default being used. Previously this method was used to lock the pip install source appropriately, but this has been handled through the pip_lock_down role for several cycles. If a deployer has implemented an override in user_variables based on the previous group_vars settings then the settings should be reviewed. This override may now be used for catering to situations where the pip installation requires extra options set to allow installing through a proxy, or disabling certificate validation.

  • The dynamic_inventory.py script now takes a --config argument rather than a --file argument.

  • Deployers can optionally remove the Keystone v2 endpoints from the database. Those endpoints will not be removed by the upgrade process.

  • The distribution of the .my.cnf database access configuration file which contains sensitive root credentials has now been limited to only be distributed to containers and hosts which require it for troubleshooting purposes. It is recommended that this file be removed from all hosts and containers. The only containers that should have the file are the Utility container and the Galera containers. This may be done by executing ansible 'all:!galera:!utility' -m shell -a 'rm -f /root/.my.cnf' from the /opt/openstack-ansible/ directory.

  • The first tier of the keystone_ldap dictionary variable now relates to the Keystone Domain name. An existing keystone_ldap configuration entry can be converted by renaming the ldap key to the domain name ‘Default’. Note that the domain name entry is case-sensitive.
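
    For example, an existing configuration keyed on ldap would be converted to something similar to the following (the inner options are illustrative LDAP settings, not a complete configuration):

    keystone_ldap:
      Default:
        url: "ldap://ldap.example.com"
        user: "cn=admin,dc=example,dc=com"
        password: "secrete"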

  • The keystone_ldap_identity_driver variable has been removed. The driver for an LDAP back-end in Keystone now simply uses the value ‘ldap’. There are no other back-end options for Keystone at this time.

  • Existing LBaaS v1 load balancers and agents will not be altered by the new OpenStack-Ansible release.

  • Database migration tasks have been added for the FWaaS neutron plugin.

  • Neutron notifications to nova will now use the internal API endpoint instead of the public endpoint.

  • The repo-clone-mirror.yml play has been removed as it is no longer used by the project.

  • A new database called nova_api has been created. This database has its own user credentials and nova-manage db sync process. For the database password there is a new variable entry in user_secrets.yml called nova_api_container_mysql_password.

  • The number of erlang asynchronous threads used by RabbitMQ has been increased from the default of 32 to 128 in order to speed up message processing.

  • The maximum erlang process limit for RabbitMQ has been set to 1048576 in order to prevent virtual machine lockups which occur when this limit is reached.

  • The deployment configuration file openstack_environment.yml has been removed and is no longer used in the dynamic inventory generation process. This file was previously rendered functionally irrelevant to the inventory generation process in the Liberty release.

  • The plugins folders have been renamed to the default names used in Ansible 2.x. This is part of the preparation for Ansible 2.x readiness. The renames are specifically actions > action, callbacks > callback, filters > filter, lookups > lookup.

  • The git source for python2_lxc was used in the past because the package was not available on PyPI. Now that the package has been published, the py_from_git role dependency has been removed from the lxc_hosts playbook, the role has been removed from the required roles list, and the repo details have been removed from the repo_packages files as none of these details are required any more.

  • OpenStack services have been set to communicate with RabbitMQ using SSL by default. This feature may be disabled by setting rabbit_use_ssl to false in /etc/openstack_deploy/user_variables.yml. The default behaviour will be to use a self-signed certificate for communications. This can be changed by the procedure referred to in the SSL documentation.

  • The os_swift and os_swift_sync roles have been merged into the single os_swift role. Two variables (swift_do_setup and swift_do_sync) have been implemented to action the install and synchronise code paths. The separate playbooks have been adjusted to make use of these variables to ensure that the behaviour is exactly the same as before.

  • The neutron_plugin_base variable has been modified to use the friendly names. Deployers should change any customisations to this variable to ensure that the customisation makes use of the short names instead of the full class path.

  • Database migration tasks have been added for the LBaaS neutron plugin.

  • The repo-store-source.yml playbook has been removed as it is no longer needed.

  • The variable neutron_service_names has been removed. A more efficient way of determining the list of Neutron services for service restarts has been implemented.

Deprecation Notes

  • The dynamic_inventory.py script now takes a --config argument rather than a --file argument.

  • The old class path names used within the neutron_plugin_base have been deprecated in favor of the friendly names. Support for the use of the class path plugins will be removed in the OpenStack Newton cycle.

Security Issues

  • A sudoers entry is added to the repo_servers to allow the nginx user to stop and start NGINX from the init script. This ensures that the repo sync process can shut off NGINX while synchronizing data from master to slaves.

  • The distribution of the .my.cnf database access configuration file which contains sensitive root credentials has now been limited to only be distributed to containers and hosts which require it for troubleshooting purposes.

  • When enabled, Neutron Firewall as a Service (FWaaS) provides projects the option to implement perimeter security (filtering at the router), adding to filtering at the instance interfaces which is provided by ‘Security Groups’.

  • OpenStack services have been set to communicate with RabbitMQ using SSL by default. This feature may be disabled by setting rabbit_use_ssl to false in /etc/openstack_deploy/user_variables.yml. The default behaviour will be to use a self-signed certificate for communications. This can be changed by the procedure referred to in the SSL documentation.

Bug Fixes

  • Containers might fail to retrieve packages from the repo server when connecting to a slave repo server that has not finished synchronizing. For more information, see https://bugs.launchpad.net/openstack-ansible/+bug/1543146. This is addressed by adding pre and post hooks into lsyncd to connect to the slave repo servers and disable NGINX for the duration of the sync.

  • The addition of multi-domain LDAP configuration support left behind a configuration file for the default domain that causes problems with Keystone. This file will automatically be removed if the deployer is not using the Default domain with an LDAP back end. (Bug 1547542)

  • The python packages pip, setuptools and wheel are now all pinned on a per-tag basis. The pins are updated along with every OpenStack Service update. This is done to ensure a consistent build experience with the latest available packages at the time the tag is released. A deployer may override the pins by adding a list of required pins using the pip_packages variable in user_variables.yml.
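
    For example, specific pins could be enforced with an override such as the following in user_variables.yml (the versions shown are illustrative only):

    pip_packages:
      - "pip==8.1.2"
      - "setuptools==25.1.0"
      - "wheel==0.29.0"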

Other Notes

  • Neutron notifications to nova will now use the internal API endpoint instead of the public endpoint.