Current Series Release Notes

18.0.0.0rc1-104

New Features

  • Support has been added for deploying on Ubuntu 18.04 LTS hosts. The most significant change is a major version increment of LXC from 2.x to 3.x which deprecates some previously used elements of the container configuration file.
  • The service setup in keystone for aodh will now be executed through delegation to the aodh_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    aodh_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for barbican will now be executed through delegation to the barbican_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    barbican_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for ceilometer will now be executed through delegation to the ceilometer_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    ceilometer_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for cinder will now be executed through delegation to the cinder_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    cinder_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The option repo_venv_default_pip_packages has been added, which allows deployers to insert any packages into a service venv as needed. The option expects a list of strings which are valid Python package names as found on PyPI.
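
    For example, a minimal sketch of such an override in user_variables.yml (the package names below are purely illustrative):

    repo_venv_default_pip_packages:
      - pymysql
      - python-memcached
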
  • The service setup in keystone for designate will now be executed through delegation to the designate_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    designate_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Glance containers will now bind mount the default glance cache directory from the host when glance_default_store is set to file and nfs is not in use. With this change, the glance file cache size is no longer restricted to the size of the container file system.
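
    For reference, this behaviour takes effect when the pre-existing option below is set in user_variables.yml (and NFS is not in use):

    glance_default_store: file
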
  • The service setup in keystone for glance will now be executed through delegation to the glance_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    glance_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for gnocchi will now be executed through delegation to the gnocchi_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    gnocchi_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for heat will now be executed through delegation to the heat_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    heat_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for horizon will now be executed through delegation to the horizon_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    horizon_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for ironic will now be executed through delegation to the ironic_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    ironic_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service updates for keystone will now be executed through delegation to the keystone_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    keystone_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for magnum will now be executed through delegation to the magnum_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    magnum_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Instead of downloading images to the magnum API servers, the images will now be downloaded to the magnum_service_setup_host, into the folder set by magnum_image_path and owned by magnum_image_path_owner.
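
    A minimal sketch of the related overrides in user_variables.yml (the path and owner values below are illustrative only, not the role defaults):

    magnum_image_path: /opt/magnum-images
    magnum_image_path_owner: root
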
  • The service setup in keystone for neutron will now be executed through delegation to the neutron_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    neutron_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for nova will now be executed through delegation to the nova_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    nova_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for octavia will now be executed through delegation to the octavia_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    octavia_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the nova_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the neutron_install_method variable to distro.
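
    For example, to switch these roles to distribution packages, the following could be set in user_variables.yml:

    nova_install_method: distro
    neutron_install_method: distro
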
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in trove.
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in barbican.
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in aodh.
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in ceilometer.
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in designate.
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in magnum.
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in swift.
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers in octavia.
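
    As an illustration of this split messaging configuration, using trove and the variable names listed in the deprecation notes below, RPC and Notification traffic could be pointed at different messaging clusters in user_variables.yml (the addresses are placeholders and the exact value format follows the role defaults):

    # Placeholder values for illustration only
    trove_oslomsg_rpc_servers: 172.29.236.110
    trove_oslomsg_notify_servers: 172.29.236.120
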
  • The service setup in keystone for sahara will now be executed through delegation to the sahara_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    sahara_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • The service setup in keystone for swift will now be executed through delegation to the swift_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    swift_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • It is now possible to specify a list of tests for tempest to blacklist during execution by using the tempest_test_blacklist list variable.
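
    For example, a minimal sketch in user_variables.yml (the test pattern shown is illustrative only):

    tempest_test_blacklist:
      - tempest.api.compute.images
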
  • The trove service setup in keystone will now be executed through delegation to the trove_service_setup_host, which by default is localhost (the deploy host). Deployers can opt to change this to the utility container instead by implementing the following override in user_variables.yml.

    trove_service_setup_host: "{{ groups['utility_all'][0] }}"
    

Upgrade Notes

  • The supported upgrade path from Xenial to Bionic is via re-installation of the host OS across all nodes and redeployment of the required services. The Rocky branch of OSA is intended as the transition point for such upgrades from Xenial to Bionic. At this time there is no support for in-place operating system upgrades (typically via do-release-upgrade).
  • The variable cinder_iscsi_helper has been replaced by the new variable cinder_target_helper, because the iscsi_helper option has been deprecated in Cinder.
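
    Deployers who previously overrode cinder_iscsi_helper should rename the variable accordingly; for example (the value shown is illustrative):

    cinder_target_helper: lioadm
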
  • Glance containers will be rebooted to add the glance cache bind mount if glance_default_store is set to file and nfs is not in use.
  • The glance v1 API has been removed upstream, and its deployment code has been removed from the glance ansible role. The variable glance_enable_v1_api has also been removed.

Deprecation Notes

  • The variable aodh_requires_pip_packages is no longer required and has therefore been removed.
  • The variable barbican_requires_pip_packages is no longer required and has therefore been removed.
  • The following variables are no longer used and have therefore been removed.
    • ceilometer_requires_pip_packages
    • ceilometer_service_name
    • ceilometer_service_port
    • ceilometer_service_proto
    • ceilometer_service_type
    • ceilometer_service_description
  • The variable cinder_requires_pip_packages is no longer required and has therefore been removed.
  • The variable designate_requires_pip_packages is no longer required and has therefore been removed.
  • The get_nested filter has been removed, as it is not used by any roles/plays.
  • The variable glance_requires_pip_packages is no longer required and has therefore been removed.
  • The variable gnocchi_requires_pip_packages is no longer required and has therefore been removed.
  • The variable heat_requires_pip_packages is no longer required and has therefore been removed.
  • The variable horizon_requires_pip_packages is no longer required and has therefore been removed.
  • The variable ironic_requires_pip_packages is no longer required and has therefore been removed.
  • The log path, /var/log/barbican, is no longer used to capture service logs. All logging for the barbican service will now be sent directly to the systemd journal.
  • The log path, /var/log/keystone, is no longer used to capture service logs. All logging for the Keystone service will now be sent directly to the systemd journal.
  • The log path, /var/log/congress, is no longer used to capture service logs. All logging for the congress service will now be sent directly to the systemd journal.
  • The log path, /var/log/cinder, is no longer used to capture service logs. All logging for the cinder service will now be sent directly to the systemd journal.
  • The log path, /var/log/aodh, is no longer used to capture service logs. All logging for the aodh service will now be sent directly to the systemd journal.
  • The log path, /var/log/ceilometer, is no longer used to capture service logs. All logging for the ceilometer service will now be sent directly to the systemd journal.
  • The log path, /var/log/designate, is no longer used to capture service logs. All logging for the designate service will now be sent directly to the systemd journal.
  • The variable keystone_requires_pip_packages is no longer required and has therefore been removed.
  • The variable magnum_requires_pip_packages is no longer required and has therefore been removed.
  • The variable neutron_requires_pip_packages is no longer required and has therefore been removed.
  • The variable nova_requires_pip_packages is no longer required and has therefore been removed.
  • The variable octavia_image_downloader has been removed. The image download is now performed on the host designated by octavia_service_setup_host.
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • trove_oslomsg_rpc_servers replaces trove_rabbitmq_servers
    • trove_oslomsg_rpc_port replaces trove_rabbitmq_port
    • trove_oslomsg_rpc_use_ssl replaces trove_rabbitmq_use_ssl
    • trove_oslomsg_rpc_userid replaces trove_rabbitmq_userid
    • trove_oslomsg_rpc_vhost replaces trove_rabbitmq_vhost
    • added trove_oslomsg_notify_servers
    • added trove_oslomsg_notify_port
    • added trove_oslomsg_notify_use_ssl
    • added trove_oslomsg_notify_userid
    • added trove_oslomsg_notify_vhost
    • added trove_oslomsg_notify_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • barbican_oslomsg_rpc_servers replaces rabbitmq_servers
    • barbican_oslomsg_rpc_port replaces rabbitmq_port
    • barbican_oslomsg_rpc_userid replaces barbican_rabbitmq_userid
    • barbican_oslomsg_rpc_vhost replaces barbican_rabbitmq_vhost
    • added barbican_oslomsg_rpc_use_ssl
    • added barbican_oslomsg_notify_servers
    • added barbican_oslomsg_notify_port
    • added barbican_oslomsg_notify_use_ssl
    • added barbican_oslomsg_notify_userid
    • added barbican_oslomsg_notify_vhost
    • added barbican_oslomsg_notify_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • aodh_oslomsg_rpc_servers replaces aodh_rabbitmq_servers
    • aodh_oslomsg_rpc_port replaces aodh_rabbitmq_port
    • aodh_oslomsg_rpc_use_ssl replaces aodh_rabbitmq_use_ssl
    • aodh_oslomsg_rpc_userid replaces aodh_rabbitmq_userid
    • aodh_oslomsg_rpc_vhost replaces aodh_rabbitmq_vhost
    • aodh_oslomsg_rpc_password replaces aodh_rabbitmq_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • ceilometer_oslomsg_rpc_servers replaces rabbitmq_servers
    • ceilometer_oslomsg_rpc_port replaces rabbitmq_port
    • ceilometer_oslomsg_rpc_userid replaces ceilometer_rabbitmq_userid
    • ceilometer_oslomsg_rpc_vhost replaces ceilometer_rabbitmq_vhost
    • added ceilometer_oslomsg_rpc_use_ssl
    • added ceilometer_oslomsg_notify_servers
    • added ceilometer_oslomsg_notify_port
    • added ceilometer_oslomsg_notify_use_ssl
    • added ceilometer_oslomsg_notify_userid
    • added ceilometer_oslomsg_notify_vhost
    • added ceilometer_oslomsg_notify_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • designate_oslomsg_rpc_servers replaces designate_rabbitmq_servers
    • designate_oslomsg_rpc_port replaces designate_rabbitmq_port
    • designate_oslomsg_rpc_use_ssl replaces designate_rabbitmq_use_ssl
    • designate_oslomsg_rpc_userid replaces designate_rabbitmq_userid
    • designate_oslomsg_rpc_vhost replaces designate_rabbitmq_vhost
    • designate_oslomsg_notify_servers replaces designate_rabbitmq_telemetry_servers
    • designate_oslomsg_notify_port replaces designate_rabbitmq_telemetry_port
    • designate_oslomsg_notify_use_ssl replaces designate_rabbitmq_telemetry_use_ssl
    • designate_oslomsg_notify_userid replaces designate_rabbitmq_telemetry_userid
    • designate_oslomsg_notify_vhost replaces designate_rabbitmq_telemetry_vhost
    • designate_oslomsg_notify_password replaces designate_rabbitmq_telemetry_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • magnum_oslomsg_rpc_servers replaces rabbitmq_servers
    • magnum_oslomsg_rpc_port replaces rabbitmq_port
    • magnum_oslomsg_rpc_userid replaces magnum_rabbitmq_userid
    • magnum_oslomsg_rpc_vhost replaces magnum_rabbitmq_vhost
    • added magnum_oslomsg_rpc_use_ssl
    • added magnum_oslomsg_notify_servers
    • added magnum_oslomsg_notify_port
    • added magnum_oslomsg_notify_use_ssl
    • added magnum_oslomsg_notify_userid
    • added magnum_oslomsg_notify_vhost
    • added magnum_oslomsg_notify_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • swift_oslomsg_notify_servers replaces swift_rabbitmq_telemetry_servers
    • swift_oslomsg_notify_port replaces swift_rabbitmq_telemetry_port
    • swift_oslomsg_notify_use_ssl replaces swift_rabbitmq_telemetry_use_ssl
    • swift_oslomsg_notify_userid replaces swift_rabbitmq_telemetry_userid
    • swift_oslomsg_notify_vhost replaces swift_rabbitmq_telemetry_vhost
    • swift_oslomsg_notify_password replaces swift_rabbitmq_telemetry_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment.
    • octavia_oslomsg_rpc_servers replaces octavia_rabbitmq_servers
    • octavia_oslomsg_rpc_port replaces octavia_rabbitmq_port
    • octavia_oslomsg_rpc_use_ssl replaces octavia_rabbitmq_use_ssl
    • octavia_oslomsg_rpc_userid replaces octavia_rabbitmq_userid
    • octavia_oslomsg_rpc_vhost replaces octavia_rabbitmq_vhost
    • octavia_oslomsg_notify_servers replaces octavia_rabbitmq_telemetry_servers
    • octavia_oslomsg_notify_port replaces octavia_rabbitmq_telemetry_port
    • octavia_oslomsg_notify_use_ssl replaces octavia_rabbitmq_telemetry_use_ssl
    • octavia_oslomsg_notify_userid replaces octavia_rabbitmq_telemetry_userid
    • octavia_oslomsg_notify_vhost replaces octavia_rabbitmq_telemetry_vhost
    • octavia_oslomsg_notify_password replaces octavia_rabbitmq_telemetry_password
  • The repo server’s reverse proxy for pypi has now been removed, leaving only the pypiserver to serve packages already on the repo server. The attempt to reverse proxy upstream pypi turned out to be very unstable with increased complexity for deployers using proxies or offline installs. With this, the variables repo_nginx_pypi_upstream and repo_nginx_proxy_cache_path have also been removed.
  • The variable repo_requires_pip_packages is no longer required and has therefore been removed.
  • The variable sahara_requires_pip_packages is no longer required and has therefore been removed.
  • The variable swift_requires_pip_packages is no longer required and has therefore been removed.
  • The variable trove_requires_pip_packages is no longer required and has therefore been removed.

Bug Fixes

  • The conditional that determines whether the sso_callback_template.html file is deployed for federated deployments has been fixed.

Other Notes

  • When running keystone with Apache (httpd), all Apache logs will be stored in the standard Apache log directory, which is controlled by the distribution-specific variable keystone_apache_default_log_folder.
  • When running aodh with Apache (httpd), all Apache logs will be stored in the standard Apache log directory, which is controlled by the distribution-specific variable aodh_apache_default_log_folder.
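
    These variables can be overridden in user_variables.yml if a different location is desired; a hedged sketch, assuming an Ubuntu-style Apache log location (the actual default is distribution specific):

    keystone_apache_default_log_folder: /var/log/apache2
    aodh_apache_default_log_folder: /var/log/apache2
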
Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.