Current Series Release Notes


18.0.0.0b2

New Features

  • Adds support for the horizon octavia-ui dashboard. The dashboard will be automatically enabled if any octavia hosts are defined. If both Neutron LBaaSv2 and Octavia are enabled, two Load Balancer panels will be visible in Horizon.
  • Deployers can now set the container_tech to nspawn when deploying OSA within containers. To select a container type, the deployer only needs to define the desired container_tech and continue the deployment as normal.
  • With the addition of the container_tech option and the inclusion of nspawn support, deployers now have the ability to define a desired containerization strategy globally or on specific hosts.
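
    For example, a deployer could select nspawn globally with the following user_variables.yml override (a minimal sketch; per-host selection uses the same variable scoped to the relevant host or group vars):

      container_tech: nspawn
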
  • When using the nspawn driver, containers will connect to the system bridges using a MACVLAN interface.
  • When using the nspawn driver, container networking is managed by systemd-networkd both on the host and within the container. This provides a single interface to manage regardless of distribution and allows systemd to manage the resources efficiently.
  • When venvwithindex=True and ignorerequirements=True are both specified in rally_git_install_fragments (as was previously the default), this results in rally being installed from PyPI without any constraints being applied. This results in inconsistent builds from day to day, and can cause build failures for stable implementations due to new library releases. Going forward, we remove the rally_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs rally from PyPI, but with appropriate constraints applied.
  • When venvwithindex=True and ignorerequirements=True are both specified in tempest_git_install_fragments (as was previously the default), this results in tempest being installed from PyPI without any constraints being applied. This could result in the version of tempest being installed in the integrated build being different than the version being installed in the independent role tests. Going forward, we remove the tempest_git_* overrides in playbooks/defaults/repo_packages/openstack_testing.yml so that the integrated build installs tempest from PyPI, but with appropriate constraints applied.
  • Octavia requires SSL certificates for communication with the amphora. This adds the automatic creation of self-signed certificates for this purpose. It uses different certificate authorities for the amphora and the control plane, thus ensuring maximum security.
  • If defined in applicable host or group vars, the variable container_extra_networks will be merged with the existing container_networks from the dynamic inventory. This allows a deployer to specify special interfaces which may be unique to an individual container. An example use for this feature would be applying known fixed IP addresses to public interfaces on BIND servers for designate.
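
    A sketch of such an override in a host or group vars file; the entry structure is assumed to mirror that of container_networks, and all names and addresses below are illustrative:

      container_extra_networks:
        public_address:
          address: 203.0.113.10
          bridge: br-public
          interface: eth20
          netmask: 255.255.255.0
          type: veth
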
  • The option rabbitmq_erlang_version_spec has been added allowing deployers to set the version of erlang used on a given installation.
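
    For example, in user_variables.yml (the version string below is illustrative only):

      rabbitmq_erlang_version_spec: "1:20.3-1"
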
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the heat_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the cinder_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the glance_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the aodh_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the designate_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the swift_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the ceilometer_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the barbican_install_method variable to distro.
  • The role now supports using the distribution packages for the OpenStack services instead of the pip ones. This feature is disabled by default and can be enabled by simply setting the keystone_install_method variable to distro.
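
    For example, a deployer could switch keystone to distribution packages with the following user_variables.yml override (illustrative; the other *_install_method variables above work the same way):

      keystone_install_method: distro
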
  • The openrc role will no longer be executed on all OpenStack service containers/hosts. Instead, a single host is designated through the use of the openstack_service_setup_host variable. The default is localhost (the deployment host). Deployers can opt to change this to the utility container by implementing the following override in user_variables.yml.

    openstack_service_setup_host: "{{ groups['utility_all'][0] }}"
    
  • Support separate oslo.messaging services for RPC and Notifications to enable operation of separate and different messaging backend servers. This change applies across the affected service roles; the per-service variable changes are listed in the Deprecation Notes below.
  • An option to disable the machinectl quota system has been changed. The variable lxc_host_machine_quota_disabled is a Boolean with a default of false. When this option is set to true, it will disable the machinectl quota system.
  • The options lxc_host_machine_qgroup_space_limit and lxc_host_machine_qgroup_compression_limit have been added, allowing a deployer to set qgroup limits as they see fit. The default value for these options is “none”, which is effectively unlimited. These options accept any nominal size value followed by a single-letter unit, for example 64G. These options are only effective when the option lxc_host_machine_quota_disabled is set to false.
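
    For example, in user_variables.yml (the limit values below are illustrative):

      lxc_host_machine_quota_disabled: false
      lxc_host_machine_qgroup_space_limit: 64G
      lxc_host_machine_qgroup_compression_limit: 64G
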
  • A new playbook, infra-journal-remote.yml, has been added to ship journals. Physical hosts will now ship all available systemd journals to the logging infrastructure. The received journals are split up by host and stored in the /var/log/journal/remote directory. This feature gives deployers greater access and insight into how the cloud is functioning, requiring nothing more than the systemd built-ins.

Known Issues

  • All OSA releases earlier than 17.0.5, 16.0.4, and 15.1.22 will fail to build the rally venv due to the release of the new cmd2-0.9.0 python library. Deployers are encouraged to update to the latest OSA release, which pins cmd2 to an appropriate version that is compatible with python2.
  • With the release of CentOS 7.5, all pike releases are broken due to a mismatch between the version of the libvirt-python library specified by the OpenStack community and the version provided in CentOS 7.5. As such, OSA is unable to build the appropriate python library for libvirt. The only recourse is to upgrade the environment to the latest queens release.

Upgrade Notes

  • Users should purge the ‘ntp’ package from their hosts if ceph-ansible is enabled. ceph-ansible was previously configured to install ntp by default, which conflicts with the chrony service used by the OSA ansible-hardening role.
  • The key is_ssh_address has been removed from openstack_user_config.yml and the dynamic inventory. This key was responsible for mapping an address to the container for SSH connectivity. Because we’ve created the SSH connectivity plugin, which allows us to connect to remote containers without SSH, this option is no longer useful. Deployers can remove the option to keep openstack_user_config.yml clean; moving forward it has no effect.
  • The distribution package lookup and data output has been removed from the py_pkgs lookup so that the repo-build use of py_pkgs produces reduced output and the lookup is purpose-specific to python packages only.

Deprecation Notes

  • The use of the apt_package_pinning role as a meta dependency has been removed from the rabbitmq_server role. While the package pinning role is still used, it will now only be executed when the apt task file is executed.
  • The variable nova_compute_pip_packages is no longer used and has been removed.
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - cinder_oslomsg_rpc_servers replaces cinder_rabbitmq_servers
      - cinder_oslomsg_rpc_port replaces cinder_rabbitmq_port
      - cinder_oslomsg_rpc_use_ssl replaces cinder_rabbitmq_use_ssl
      - cinder_oslomsg_rpc_userid replaces cinder_rabbitmq_userid
      - cinder_oslomsg_rpc_vhost replaces cinder_rabbitmq_vhost
      - cinder_oslomsg_notify_servers replaces cinder_rabbitmq_telemetry_servers
      - cinder_oslomsg_notify_port replaces cinder_rabbitmq_telemetry_port
      - cinder_oslomsg_notify_use_ssl replaces cinder_rabbitmq_telemetry_use_ssl
      - cinder_oslomsg_notify_userid replaces cinder_rabbitmq_telemetry_userid
      - cinder_oslomsg_notify_vhost replaces cinder_rabbitmq_telemetry_vhost
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - nova_oslomsg_rpc_servers replaces nova_rabbitmq_servers
      - nova_oslomsg_rpc_port replaces nova_rabbitmq_port
      - nova_oslomsg_rpc_use_ssl replaces nova_rabbitmq_use_ssl
      - nova_oslomsg_rpc_userid replaces nova_rabbitmq_userid
      - nova_oslomsg_rpc_vhost replaces nova_rabbitmq_vhost
      - nova_oslomsg_notify_servers replaces nova_rabbitmq_telemetry_servers
      - nova_oslomsg_notify_port replaces nova_rabbitmq_telemetry_port
      - nova_oslomsg_notify_use_ssl replaces nova_rabbitmq_telemetry_use_ssl
      - nova_oslomsg_notify_userid replaces nova_rabbitmq_telemetry_userid
      - nova_oslomsg_notify_vhost replaces nova_rabbitmq_telemetry_vhost
      - nova_oslomsg_notify_password replaces nova_rabbitmq_telemetry_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - ironic_oslomsg_rpc_servers replaces ironic_rabbitmq_servers
      - ironic_oslomsg_rpc_port replaces ironic_rabbitmq_port
      - ironic_oslomsg_rpc_use_ssl replaces ironic_rabbitmq_use_ssl
      - ironic_oslomsg_rpc_userid replaces ironic_rabbitmq_userid
      - ironic_oslomsg_rpc_vhost replaces ironic_rabbitmq_vhost
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - heat_oslomsg_rpc_servers replaces heat_rabbitmq_servers
      - heat_oslomsg_rpc_port replaces heat_rabbitmq_port
      - heat_oslomsg_rpc_use_ssl replaces heat_rabbitmq_use_ssl
      - heat_oslomsg_rpc_userid replaces heat_rabbitmq_userid
      - heat_oslomsg_rpc_vhost replaces heat_rabbitmq_vhost
      - heat_oslomsg_rpc_password replaces heat_rabbitmq_password
      - heat_oslomsg_notify_servers replaces heat_rabbitmq_telemetry_servers
      - heat_oslomsg_notify_port replaces heat_rabbitmq_telemetry_port
      - heat_oslomsg_notify_use_ssl replaces heat_rabbitmq_telemetry_use_ssl
      - heat_oslomsg_notify_userid replaces heat_rabbitmq_telemetry_userid
      - heat_oslomsg_notify_vhost replaces heat_rabbitmq_telemetry_vhost
      - heat_oslomsg_notify_password replaces heat_rabbitmq_telemetry_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - glance_oslomsg_rpc_servers replaces glance_rabbitmq_servers
      - glance_oslomsg_rpc_port replaces glance_rabbitmq_port
      - glance_oslomsg_rpc_use_ssl replaces glance_rabbitmq_use_ssl
      - glance_oslomsg_rpc_userid replaces glance_rabbitmq_userid
      - glance_oslomsg_rpc_vhost replaces glance_rabbitmq_vhost
      - glance_oslomsg_notify_servers replaces glance_rabbitmq_telemetry_servers
      - glance_oslomsg_notify_port replaces glance_rabbitmq_telemetry_port
      - glance_oslomsg_notify_use_ssl replaces glance_rabbitmq_telemetry_use_ssl
      - glance_oslomsg_notify_userid replaces glance_rabbitmq_telemetry_userid
      - glance_oslomsg_notify_vhost replaces glance_rabbitmq_telemetry_vhost
      - glance_oslomsg_notify_password replaces glance_rabbitmq_telemetry_password
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - sahara_oslomsg_rpc_servers replaces sahara_rabbitmq_servers
      - sahara_oslomsg_rpc_port replaces sahara_rabbitmq_port
      - sahara_oslomsg_rpc_use_ssl replaces sahara_rabbitmq_use_ssl
      - sahara_oslomsg_rpc_userid replaces sahara_rabbitmq_userid
      - sahara_oslomsg_rpc_vhost replaces sahara_rabbitmq_vhost
      - sahara_oslomsg_notify_servers replaces sahara_rabbitmq_telemetry_servers
      - sahara_oslomsg_notify_port replaces sahara_rabbitmq_telemetry_port
      - sahara_oslomsg_notify_use_ssl replaces sahara_rabbitmq_telemetry_use_ssl
      - sahara_oslomsg_notify_userid replaces sahara_rabbitmq_telemetry_userid
      - sahara_oslomsg_notify_vhost replaces sahara_rabbitmq_telemetry_vhost
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - neutron_oslomsg_rpc_servers replaces neutron_rabbitmq_servers
      - neutron_oslomsg_rpc_port replaces neutron_rabbitmq_port
      - neutron_oslomsg_rpc_use_ssl replaces neutron_rabbitmq_use_ssl
      - neutron_oslomsg_rpc_userid replaces neutron_rabbitmq_userid
      - neutron_oslomsg_rpc_vhost replaces neutron_rabbitmq_vhost
      - neutron_oslomsg_notify_servers replaces neutron_rabbitmq_telemetry_servers
      - neutron_oslomsg_notify_port replaces neutron_rabbitmq_telemetry_port
      - neutron_oslomsg_notify_use_ssl replaces neutron_rabbitmq_telemetry_use_ssl
      - neutron_oslomsg_notify_userid replaces neutron_rabbitmq_telemetry_userid
      - neutron_oslomsg_notify_vhost replaces neutron_rabbitmq_telemetry_vhost
  • The rabbitmq server parameters have been replaced by corresponding oslo.messaging RPC and Notify parameters in order to abstract the messaging service from the actual backend server deployment:
      - keystone_oslomsg_rpc_servers replaces keystone_rabbitmq_servers
      - keystone_oslomsg_rpc_port replaces keystone_rabbitmq_port
      - keystone_oslomsg_rpc_use_ssl replaces keystone_rabbitmq_use_ssl
      - keystone_oslomsg_rpc_userid replaces keystone_rabbitmq_userid
      - keystone_oslomsg_rpc_vhost replaces keystone_rabbitmq_vhost
      - keystone_oslomsg_notify_servers replaces keystone_rabbitmq_telemetry_servers
      - keystone_oslomsg_notify_port replaces keystone_rabbitmq_telemetry_port
      - keystone_oslomsg_notify_use_ssl replaces keystone_rabbitmq_telemetry_use_ssl
      - keystone_oslomsg_notify_userid replaces keystone_rabbitmq_telemetry_userid
      - keystone_oslomsg_notify_vhost replaces keystone_rabbitmq_telemetry_vhost
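
    As a sketch of the new naming scheme, a deployer could point RPC and Notify at different clusters with user_variables.yml overrides such as the following (shown for keystone; the hostnames and ports are illustrative, and the same pattern applies to each service above):

      keystone_oslomsg_rpc_servers: "172.29.236.10,172.29.236.11"
      keystone_oslomsg_rpc_port: 5672
      keystone_oslomsg_notify_servers: "172.29.236.20"
      keystone_oslomsg_notify_port: 5671
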
  • With the implementation of systemd-journal-remote, the rsyslog_client role is no longer run by default. To enable the legacy functionality, the variables rsyslog_client_enabled and rsyslog_server_enabled can be set to true.
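
    For example, in user_variables.yml:

      rsyslog_client_enabled: true
      rsyslog_server_enabled: true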

Security Issues

  • It is recommended that the certificate generation always be reviewed by security professionals, since algorithms and key lengths that are considered secure change over time.

Bug Fixes

  • Newer releases of CentOS ship a version of libnss that depends on the existence of /dev/random and /dev/urandom in the operating system in order to run. This causes a problem during the cache preparation process, which runs inside a chroot that does not contain these devices, resulting in errors with the following message.

    error: Failed to initialize NSS library
    

    This has been resolved by creating /dev/random and /dev/urandom inside the chrooted environment.

  • ceph-ansible is no longer configured to install ntp by default, which previously created a conflict with the chrony service used by OSA’s ansible-hardening role.
  • In order to prevent further issues with a libvirt and python-libvirt version mismatch, KVM-based compute nodes will now use the distribution package python library for libvirt. This should resolve the issue seen with pike builds on CentOS 7.5.

Other Notes

  • The max_fail_percentage playbook option has been used with the default playbooks since the first release of the playbooks back in Icehouse. While the intention was to allow large-scale deployments to succeed in cases where a single node fails due to transient issues, this option has produced more problems than it solves. If a failure occurs that is transient in nature but is under the set failure percentage, the playbook will report a success, which can cause silent failures depending on where the failure happened. If a deployer finds themselves in this situation, the problems are then compounded because the tools will report there are no known issues. To ensure deployers have the best deployment experience and the most accurate information, a change has been made to remove the max_fail_percentage option from all of the default playbooks. The removal of this option has the side effect of requiring the deployer to skip specific hosts should one need to be omitted from a run, but has the benefit of eliminating silent, hard-to-track-down failures. To skip a failing host for a given playbook run, use the --limit '!$HOSTNAME' CLI switch for the specific run. Once the issues have been resolved for the failing host, rerun the specific playbook without the --limit option to ensure everything is in sync.