
Security settings

This chapter contains information on how to configure specific security settings for your OpenStack-Ansible cloud.

For an understanding of the security design, see Security.


Securing services with SSL certificates

The OpenStack Security Guide recommends securing communication between the various services in an OpenStack deployment. The OpenStack-Ansible project currently provides the ability to configure SSL certificates for secure communication between services:

All public endpoints reside behind haproxy, so the only certificates that need to be managed for externally visible HTTPS services are those used by haproxy. Certain internal services, such as RabbitMQ, also require proper SSL configuration.

When deploying with OpenStack-Ansible, you can either use self-signed certificates that are generated during the deployment process, or provide SSL certificates, keys, and CA certificates from your own trusted certificate authority. Highly secured environments use trusted, user-provided certificates for as many services as possible.

Note

Perform the SSL certificate configuration in the /etc/openstack_deploy/user_variables.yml file. Do not edit the playbooks or roles themselves.

OpenStack-Ansible uses the Ansible role ansible_role_pki as a general tool to manage and install self-signed and user-provided certificates.

Note

The openstack-ansible example configurations are designed to be minimal and, in test or development use cases, set external_lb_vip_address to the IP address of the haproxy external endpoint. For a production deployment it is advised to set external_lb_vip_address to the FQDN that resolves via DNS to the IP address of the external endpoint.
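
For example, in the global_overrides section of the /etc/openstack_deploy/openstack_user_config.yml file (the FQDN shown is only a placeholder):

global_overrides:
  external_lb_vip_address: openstack.example.com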

Self-signed certificates

Self-signed certificates enable you to start quickly and encrypt data in transit. However, they do not provide a high level of trust for public endpoints in highly secure environments. By default, self-signed certificates are used in OpenStack-Ansible. When self-signed certificates are used, certificate verification is automatically disabled.

Self-signed certificates can play an important role in securing internal services within the OpenStack-Ansible deployment, as they can only be issued by the private CA associated with the deployment. Using mutual TLS between backend services such as RabbitMQ and MariaDB with self-signed certificates and a robust CA setup can ensure that only correctly authenticated clients can connect to these internal services.

Generating and regenerating self-signed certificate authorities

A self-signed certificate authority is generated on the deploy host during the first run of the playbook.

To regenerate the certificate authority you must set the openstack_pki_regen_ca variable either to the name of the root CA or intermediate CA you wish to regenerate, or to true to regenerate all self-signed certificate authorities.

# openstack-ansible -e "openstack_pki_regen_ca=ExampleCorpIntermediate" certificate-authority.yml

Take particular care not to regenerate Root or Intermediate certificate authorities in a way that may invalidate existing server certificates in the deployment. It may be preferable to create new Intermediate CA certificates rather than regenerate existing ones in order to maintain existing chains of trust.

Generating and regenerating self-signed certificates

Self-signed certificates are generated for each service during the first run of the playbook.

To regenerate a new self-signed certificate for a service, you must set the <servicename>_pki_regen_cert variable to true in one of the following ways:

  • To force a self-signed certificate to regenerate, you can pass the variable to openstack-ansible on the command line:

    # openstack-ansible -e "haproxy_pki_regen_cert=true" haproxy-install.yml
    
  • To force a self-signed certificate to regenerate with every playbook run, set the appropriate regeneration option to true. For example, if you have already run the haproxy playbook, but you want to regenerate the self-signed certificate, set the haproxy_pki_regen_cert variable to true in the /etc/openstack_deploy/user_variables.yml file:

    haproxy_pki_regen_cert: true
    

Generating and regenerating self-signed user certificates

Self-signed user certificates are generated, but not installed, for services outside of OpenStack-Ansible. These user certificates are signed by the same self-signed certificate authority as is used by the OpenStack services, but are intended to be used by user applications.

To generate user certificates, define a variable with the prefix user_pki_certificates_ in the /etc/openstack_deploy/user_variables.yml file.

Example

user_pki_certificates_example:
   - name: "example"
     provider: ownca
     cn: "example.com"
     san: "DNS:example.com,IP:x.x.x.x"
     signed_by: "{{ openstack_pki_service_intermediate_cert_name }}"
     key_usage:
       - digitalSignature
       - keyAgreement
     extended_key_usage:
       - serverAuth

Generate the certificate with the following command:

# openstack-ansible certificate-generate.yml

To regenerate a self-signed user certificate, you must set the user_pki_regen_cert variable to true in one of the following ways:

  • To force a self-signed certificate to regenerate, you can pass the variable to openstack-ansible on the command line:

    # openstack-ansible -e "user_pki_regen_cert=true" certificate-generate.yml
    
  • To force a self-signed certificate to regenerate with every playbook run, set the user_pki_regen_cert variable to true in the /etc/openstack_deploy/user_variables.yml file:

    user_pki_regen_cert: true
    

User-provided certificates

For added trust in highly secure environments, you can provide your own SSL certificates, keys, and CA certificates. Obtaining certificates from a trusted certificate authority is outside the scope of this document, but the Certificate Management section of the Linux Documentation Project explains how to create your own certificate authority and sign certificates.

Use the following process to deploy user-provided SSL certificates in OpenStack-Ansible:

  1. Copy your SSL certificate, key, and CA certificate files to the deployment host.

  2. Specify the paths to your SSL certificate, key, and CA certificate in the /etc/openstack_deploy/user_variables.yml file.

  3. Run the playbook for that service.

HAProxy example

The variables to set, which provide the path on the deployment node to the certificates used by the HAProxy configuration, are:

haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/example.com.crt
haproxy_user_ssl_key: /etc/openstack_deploy/ssl/example.com.key
haproxy_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt

RabbitMQ example

To deploy user-provided certificates for RabbitMQ, copy the certificates to the deployment host, edit the /etc/openstack_deploy/user_variables.yml file, and set the following three variables:

rabbitmq_user_ssl_cert:    /etc/openstack_deploy/ssl/example.com.crt
rabbitmq_user_ssl_key:     /etc/openstack_deploy/ssl/example.com.key
rabbitmq_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt

Then run the playbook to apply the certificates:

# openstack-ansible rabbitmq-install.yml

The playbook deploys your user-provided SSL certificate, key, and CA certificate to each RabbitMQ container.

The process is identical for the other services. Replace rabbitmq in the preceding configuration variables with horizon, haproxy, or keystone, and then run the playbook for that service to deploy the user-provided certificates to it.
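
For instance, following the same pattern, the equivalent variables for Horizon would be (the paths shown are placeholders):

horizon_user_ssl_cert:    /etc/openstack_deploy/ssl/example.com.crt
horizon_user_ssl_key:     /etc/openstack_deploy/ssl/example.com.key
horizon_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ExampleCA.crt

Then run the Horizon playbook to apply them:

# openstack-ansible os-horizon-install.yml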

Certbot certificates

The HAProxy ansible role supports using certbot to automatically deploy trusted SSL certificates for the public endpoint. Each HAProxy server will individually request an SSL certificate using certbot.

Certbot defaults to using LetsEncrypt as the Certificate Authority. Other Certificate Authorities can be used by setting the haproxy_ssl_letsencrypt_certbot_server variable in the /etc/openstack_deploy/user_variables.yml file:

haproxy_ssl_letsencrypt_certbot_server: "https://acme-staging-v02.api.letsencrypt.org/directory"

The http-01 challenge type is used by certbot to deploy certificates, so the public endpoint must be directly accessible by the Certificate Authority.

Deployment of certificates using LetsEncrypt has been validated for openstack-ansible using Ubuntu Jammy. Other distributions should work but are not tested.

To deploy certificates with certbot, add the following to /etc/openstack_deploy/user_variables.yml to enable the certbot function in the haproxy ansible role, and to create a new backend service called certbot to service http-01 challenge requests.

haproxy_ssl: true
haproxy_ssl_letsencrypt_enable: True
haproxy_ssl_letsencrypt_email: "email.address@example.com"

TLS for Haproxy Internal VIP

As well as load balancing public endpoints, haproxy is also used to load balance internal connections.

By default, OpenStack-Ansible does not secure connections to the internal VIP. To enable TLS on the internal VIP, you must set the following variables in the /etc/openstack_deploy/user_variables.yml file:

openstack_service_adminuri_proto: https
openstack_service_internaluri_proto: https

haproxy_ssl_all_vips: true

Run all playbooks to configure haproxy and the OpenStack services.
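
For example, one possible sequence for an existing deployment (the exact playbooks needed may vary with your environment) is:

# openstack-ansible haproxy-install.yml
# openstack-ansible setup-openstack.yml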

When enabled, haproxy will use the same TLS certificate on all interfaces (internal and external). It is not currently possible in OpenStack-Ansible to use different self-signed or user-provided TLS certificates on different haproxy interfaces.

The only way to use different TLS certificates on the internal and external VIPs is to use certbot.

Enabling TLS on the internal VIP for existing deployments will cause some downtime. This is because haproxy only listens on a single well-known port for each OpenStack service, and the OpenStack services are configured to use either http or https. Once haproxy is updated to accept only HTTPS connections, the OpenStack services will stop working until they are updated to use HTTPS as well.

To avoid downtime, it is recommended to enable openstack_service_accept_both_protocols until all services are configured correctly. This allows the haproxy frontends to listen on both HTTP and HTTPS.
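
For example, in the /etc/openstack_deploy/user_variables.yml file:

openstack_service_accept_both_protocols: true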

TLS for Haproxy Backends

Communication between haproxy and the service backends can be encrypted but is currently disabled by default. It can be enabled for all services by setting the following variable:

openstack_service_backend_ssl: True

There is also an option to enable it only for individual services:

keystone_backend_ssl: True
neutron_backend_ssl: True

By default, self-signed certificates will be used to secure traffic but user-provided certificates are also supported.

TLS for Live Migrations

Live migration of VMs using SSH is deprecated, and the OpenStack Nova Docs recommend using the more secure native TLS method supported by QEMU. The default live migration method used by OpenStack-Ansible has been updated to use TLS migrations.

QEMU-native TLS requires all compute hosts to accept TCP connections on port 16514 and port range 49152 to 49261.
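
How these ports are opened depends on the firewall in use on your compute hosts; as an illustrative sketch only, on hosts running firewalld the rules could look like this:

# firewall-cmd --permanent --add-port=16514/tcp
# firewall-cmd --permanent --add-port=49152-49261/tcp
# firewall-cmd --reload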

It is not possible to have a mixed estate of some compute nodes using SSH and some using TLS for live migrations, as this would prevent live migrations between the compute nodes.

There are no issues enabling TLS live migration during an OpenStack upgrade, as long as you do not need to live migrate instances during the upgrade. If you need to live migrate instances during an upgrade, enable TLS live migrations before or after the upgrade.

To force the use of SSH instead of TLS for live migrations you must set the nova_libvirtd_listen_tls variable to 0 in the /etc/openstack_deploy/user_variables.yml file:

nova_libvirtd_listen_tls: 0

TLS for VNC

When using VNC for console access, there are three connections to secure: client to haproxy, haproxy to the noVNC proxy, and the noVNC proxy to the compute nodes. The OpenStack Nova Docs for remote console access cover console security in much more detail.

In OpenStack-Ansible, TLS from the client to haproxy is configured in haproxy, TLS from haproxy to the noVNC proxy is not currently enabled, and TLS from the noVNC proxy to the compute nodes is enabled by default.

Changes will not apply to any existing running guests on the compute node, so this configuration should be done before launching any instances. For existing deployments, it is recommended that you migrate instances off the compute node before enabling it.

To help with the transition from unencrypted VNC to VeNCrypt, the noVNC proxy auth scheme initially allows both encrypted and unencrypted sessions using the variable nova_vencrypt_auth_scheme. This will be restricted to VeNCrypt only in future versions of OpenStack-Ansible.

nova_vencrypt_auth_scheme: "vencrypt,none"

To leave data from the noVNC proxy to the compute nodes unencrypted, you must set the nova_qemu_vnc_tls variable to 0 in the /etc/openstack_deploy/user_variables.yml file:

nova_qemu_vnc_tls: 0


Security Headers

Security headers are HTTP headers that can be used to increase the security of a web application by restricting what modern browsers are able to run.

In OpenStack-Ansible, security headers are implemented in haproxy as all the public endpoints reside behind it.

The following headers are enabled by default on all the haproxy interfaces that implement TLS, but only for the Horizon service. The security headers can be implemented on other haproxy services, but only services used by browsers will make use of the headers.

HTTP Strict Transport Security

The OpenStack TLS Security Guide recommends that all production deployments use HTTP strict transport security (HSTS).

By design, this header is difficult to disable once set. It is recommended that during testing you set a short max age of one day, and increase it to one year after testing.

To change the default max age to 1 day, override the variable haproxy_security_headers_max_age in the /etc/openstack_deploy/user_variables.yml file:

haproxy_security_headers_max_age: 86400
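
Once testing is complete, the value can be raised to one year (31536000 seconds):

haproxy_security_headers_max_age: 31536000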

If you would like your domain included in the HSTS preload list that is built into browsers, you must add the preload token to your response header before submitting your request to be added to the list. The preload token indicates to the maintainers of the HSTS preload list that you are happy to have your site included. The corresponding Strict-Transport-Security entry in the haproxy_security_headers list would then look like this:

- "http-response set-header Strict-Transport-Security \"max-age={{ haproxy_security_headers_max_age }}; includeSubDomains; preload;\""

X-Content-Type-Options

The X-Content-Type-Options header prevents MIME type sniffing.

This functionality can be changed by overriding the list of headers in haproxy_security_headers variable in the /etc/openstack_deploy/user_variables.yml file.
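
As an illustration of the expected format only (overriding the variable replaces the entire default list, which is not reproduced here, so keep any default entries you still want), each entry is a haproxy http-response directive such as:

haproxy_security_headers:
  - "http-response set-header X-Content-Type-Options \"nosniff\""
  - "http-response set-header Referrer-Policy \"same-origin\""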

Referrer Policy

The Referrer-Policy header controls how much referrer information is sent with requests. It defaults to same-origin, which does not send the origin path for cross-origin requests.

This functionality can be changed by overriding the list of headers in haproxy_security_headers variable in the /etc/openstack_deploy/user_variables.yml file.

Permissions Policy

The Permissions-Policy header (previously known as Feature Policy) allows you to selectively enable, disable, or modify the use of browser features and APIs.

By default, this header is set to block access to all features apart from the following, which are allowed from the same origin: fullscreen, clipboard read, clipboard write, and spatial navigation.

This functionality can be changed by overriding the list of headers in haproxy_security_headers variable in the /etc/openstack_deploy/user_variables.yml file.

Content Security Policy (CSP)

The Content-Security-Policy header allows you to control what resources a browser is allowed to load for a given page, which helps to mitigate the risks from Cross-Site Scripting (XSS) and data injection attacks.

By default, the Content Security Policy (CSP) enables a minimum set of resources to allow Horizon to work, which includes access to the Nova console. If you require access to other resources, these can be set by overriding the haproxy_security_headers_csp variable in the /etc/openstack_deploy/user_variables.yml file.

Report Only

Implementing CSP could lead to broken content if a browser is blocked from accessing certain resources. It is therefore recommended that when testing CSP you use the Content-Security-Policy-Report-Only header instead of Content-Security-Policy; this reports CSP violations to the browser console, but does not enforce the policy.

To set the CSP policy to report only, override the haproxy_security_headers_csp_report_only variable to True in the /etc/openstack_deploy/user_variables.yml file:

haproxy_security_headers_csp_report_only: True

Reporting Violations

It is recommended that you monitor attempted CSP violations in production. This is achieved by setting the report-uri and report-to tokens.
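
As a purely illustrative sketch (assuming haproxy_security_headers_csp takes the same folded-string form as the haproxy_horizon_csp example shown below, and using a placeholder reporting endpoint; a real override should retain the full default policy), a report-uri directive could be added when overriding the variable in the /etc/openstack_deploy/user_variables.yml file:

haproxy_security_headers_csp: >
  http-response set-header Content-Security-Policy "
  default-src 'self';
  report-uri https://csp-reports.example.com/collector;
  "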

Federated Login

When using federated login, you will need to override the default Content Security Policy to allow access to your authorisation server, by overriding the haproxy_horizon_csp variable in the /etc/openstack_deploy/user_variables.yml file:

haproxy_horizon_csp: >
  http-response set-header Content-Security-Policy "
  default-src 'self';
  frame-ancestors 'self';
  form-action 'self' {{ external_lb_vip_address }}:5000 <YOUR-AUTHORISATION-SERVER-ORIGIN>;
  upgrade-insecure-requests;
  style-src 'self' 'unsafe-inline';
  script-src 'self' 'unsafe-inline' 'unsafe-eval';
  child-src 'self' {{ external_lb_vip_address }}:{{ nova_spice_html5proxy_base_port }} {{ external_lb_vip_address }}:{{ nova_novncproxy_port }} {{ external_lb_vip_address }}:{{ nova_serialconsoleproxy_port }};
  frame-src 'self' {{ external_lb_vip_address }}:{{ nova_spice_html5proxy_base_port }} {{ external_lb_vip_address }}:{{ nova_novncproxy_port }} {{ external_lb_vip_address }}:{{ nova_serialconsoleproxy_port }};
  "

It is also possible to set specific security headers for skyline.

haproxy_skyline_csp: ...


Security.txt

security.txt is a proposed IETF standard to allow independent security researchers to easily report vulnerabilities. The standard defines that a text file called security.txt should be found at /.well-known/security.txt. For legacy compatibility reasons, the file might also be placed at /security.txt.

In OpenStack-Ansible, security.txt is implemented in haproxy as all public endpoints reside behind it. It defaults to directing any request paths that end with /security.txt to the text file using an ACL rule in haproxy.

Enabling security.txt

Use the following process to add a security.txt file to your deployment using OpenStack-Ansible:

  1. Write the contents of the security.txt file in accordance with the standard.

  2. Define the contents of security.txt in the variable haproxy_security_txt_content in the /etc/openstack_deploy/user_variables.yml file:

haproxy_security_txt_content: |
    # This is my example security.txt file
    # Please see https://securitytxt.org/ for details of the specification of this file

  3. Update haproxy

# openstack-ansible haproxy-install.yml

Advanced security.txt ACL

In some cases you may need to change the haproxy ACL used to redirect requests to the security.txt file, such as adding extra domains.

The haproxy ACL is updated by overriding the variable haproxy_map_entries inside haproxy_security_txt_service.


Apply ansible-hardening

The ansible-hardening role is applied to the physical hosts within an OpenStack-Ansible deployment that run as any type of node, infrastructure or compute. The role is enabled by default. You can disable it by setting the apply_security_hardening variable to false in the user_variables.yml file:

apply_security_hardening: false

You can apply security hardening configurations to an existing environment or audit an environment by using a playbook supplied with OpenStack-Ansible:

# Apply security hardening configurations
  openstack-ansible security-hardening.yml

# Perform a quick audit by using Ansible's check mode
  openstack-ansible --check security-hardening.yml

For more information about the security configurations, see the security hardening role documentation.


Running as non-root user

Deployers do not have to use root user accounts on deploy or target hosts. This approach works out of the box by leveraging Ansible privilege escalation.

Deployment hosts

You can avoid using the root user on the deployment host by following these guidelines:

  1. Clone the OpenStack-Ansible repository to the user's home directory. This means that instead of /opt/openstack-ansible, the repository will be in ~/openstack-ansible.

  2. Use a custom path for the /etc/openstack_deploy directory. You can place the OpenStack-Ansible configuration directory inside the user's home directory. For that, you will need to define the following environment variable:

    export OSA_CONFIG_DIR="${HOME}/openstack_deploy"
    
  3. If you want to keep basic Ansible logging, you need either to create the /openstack/log/ansible-logging/ directory and allow the user to write to it, or to define the following environment variable:

    export ANSIBLE_LOG_PATH="${HOME}/ansible-logging/ansible.log"
    

    Note

    You can also add the environment variable to the user.rc file inside the openstack_deploy folder (${OSA_CONFIG_DIR}/user.rc). The user.rc file is sourced each time you run the openstack-ansible binary.

  4. The initial bootstrap of OpenStack-Ansible using the ./scripts/bootstrap-ansible.sh script should still be done either as the root user or by escalating privileges using sudo or su.

Destination hosts

It is also possible to use a non-root user for Ansible authentication on the destination hosts. However, this user must be able to escalate privileges using Ansible privilege escalation.

Note

You can add environment variables from this section to the user.rc file inside the openstack_deploy folder (${OSA_CONFIG_DIR}/user.rc). The user.rc file is sourced each time you run the openstack-ansible binary.

There are also a couple of additional things that you might want to consider:

  1. Provide the --become flag each time you run a playbook or ad-hoc command. Alternatively, you can define the following environment variable:

    export ANSIBLE_BECOME="True"
    
  2. Override the Ansible temporary path if LXC containers are used. The Ansible connection from the physical host to the LXC container passes environment variables from the host. This means that Ansible attempts to use the same temporary folder in the LXC container as it would on the host, relative to the non-root user's ${HOME} directory. This path will not exist inside the container, so another path must be used instead.

    You can do that in one of the following ways:

    1. Define ansible_remote_tmp: /tmp in user_variables.yml

    2. Define the following environment variable:

    export ANSIBLE_REMOTE_TMP="/tmp"
    
  3. Define the user that will be used for connections from the deploy host to the Ansible target hosts. If the user is the same for all hosts in your deployment, you can do it in one of the following ways:

    1. Define ansible_user: <USER> in user_variables.yml

    2. Define the following environment variable:

    export ANSIBLE_REMOTE_USER="<USER>"
    

    If the user differs from host to host, you can leverage group_vars or host_vars. More information on how to use these can be found in the overrides guide.
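
    As a hedged sketch (the host name infra1 and user name deploy are placeholders, and it assumes host_vars placed under your OpenStack-Ansible configuration directory as described in the overrides guide), a per-host override could look like this:

    # ${OSA_CONFIG_DIR}/host_vars/infra1.yml
    ansible_user: deploy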