Role - backup_and_restore

Role Documentation

Welcome to the “backup_and_restore” role documentation.

Role Defaults

This section highlights all of the defaults and variables set within the “backup_and_restore” role.

# All variables intended for modification should be placed in this file.
tripleo_backup_and_restore_hide_sensitive_logs: '{{ hide_sensitive_logs | default(true) }}'
tripleo_backup_and_restore_debug: '{{ ((ansible_verbosity | int) >= 2) | bool }}'

# Set the container command line entry-point
tripleo_container_cli: "{{ container_cli | default('podman') }}"
# Stop and start all running services before the backup is run.
tripleo_backup_and_restore_service_manager: true

# Set the name of the mysql container
tripleo_backup_and_restore_mysql_container: mysql

# All variables within this role should have a prefix of "tripleo_backup_and_restore"
# By default this should be the Undercloud node
tripleo_backup_and_restore_nfs_server: 192.168.24.1
tripleo_backup_and_restore_nfs_storage_folder: /ctl_plane_backups
tripleo_backup_and_restore_nfs_clients_nets: [192.168.24.0/24, 10.0.0.0/24, 172.16.0.0/24]
tripleo_backup_and_restore_rear_simulate: false
tripleo_backup_and_restore_using_uefi_bootloader: 0
tripleo_backup_and_restore_exclude_paths_common: [/data/*, /tmp/*, '{{ tripleo_backup_and_restore_nfs_storage_folder }}/*']
tripleo_backup_and_restore_exclude_paths_controller_non_bootrapnode: true
tripleo_backup_and_restore_exclude_paths_controller: [/var/lib/mysql/*]
tripleo_backup_and_restore_exclude_paths_compute: [/var/lib/nova/instances/*]
tripleo_backup_and_restore_hiera_config_file: /etc/puppet/hiera.yaml

# This var is a dictionary of the configuration of the /etc/rear/local.conf
# The key:value will be interpreted as key=value on the configuration file.
# To pass a string value, wrap it in double quotes inside single quotes
# (e.g. '"automatic"'), because ReaR interprets the resulting file with BASH.
tripleo_backup_and_restore_local_config:
  ISO_DEFAULT: '"automatic"'
  USING_UEFI_BOOTLOADER: 0
  OUTPUT: ISO
  BACKUP: NETFS
  BACKUP_PROG_COMPRESS_OPTIONS: ( --gzip)
  BACKUP_PROG_COMPRESS_SUFFIX: '".gz"'
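
For example, with the defaults above the rendered /etc/rear/local.conf would contain one key=value line per dictionary entry, roughly like the sketch below (ordering and any surrounding content may differ):

# Sketch of /etc/rear/local.conf as rendered from the defaults above
ISO_DEFAULT="automatic"
USING_UEFI_BOOTLOADER=0
OUTPUT=ISO
BACKUP=NETFS
BACKUP_PROG_COMPRESS_OPTIONS=( --gzip)
BACKUP_PROG_COMPRESS_SUFFIX=".gz"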

# This var is a dictionary of the configuration of the /etc/rear/rescue.conf
# The key:value will be interpreted as key=value on the configuration file.
# To pass a string value, wrap it in double quotes inside single quotes
# (e.g. '"value"'), because ReaR interprets the resulting file with BASH.
tripleo_backup_and_restore_rescue_config: {}

tripleo_backup_and_restore_output_url: nfs://{{ tripleo_backup_and_restore_nfs_server }}/ctl_plane_backups
tripleo_backup_and_restore_backup_url: nfs://{{ tripleo_backup_and_restore_nfs_server }}/ctl_plane_backups

# Ceph authentication backup file
tripleo_backup_and_restore_ceph_auth_file: ceph_auth_export.bak

Role Variables: redhat.yml

# While options found within the vars/ path can be overridden using extra
# vars, items within this path are considered part of the role and not
# intended to be modified.

# All variables within this role should have a prefix of "tripleo_{{ role_name | replace('-', '_') }}"

tripleo_backup_and_restore_rear_packages:
- rear
- syslinux
- genisoimage
- nfs-utils
tripleo_backup_and_restore_nfs_packages:
- nfs-utils

Molecule Scenarios

Molecule is used to test the “backup_and_restore” role. The following section highlights the driver in use and provides an example playbook showing how the role is leveraged.

Scenario: default

Driver: delegated
Molecule Options
managed: false
login_cmd_template: >-
  ssh
  -o UserKnownHostsFile=/dev/null
  -o StrictHostKeyChecking=no
  -o Compression=no
  -o TCPKeepAlive=yes
  -o VerifyHostKeyDNS=no
  -o ForwardX11=no
  -o ForwardAgent=no
  {instance}
ansible_connection_options:
  ansible_connection: ssh
Molecule Inventory
hosts:
  all:
    hosts:
      instance:
        ansible_host: localhost
Example default playbook
- name: Converge
  become: true
  hosts: all
  roles:
  - role: backup_and_restore
    tripleo_backup_and_restore_nfs_server: undercloud
    tripleo_backup_and_restore_rear_simulate: true
    tripleo_backup_and_restore_service_manager: false
    tripleo_backup_and_restore_hiera_config_file: '{{ ansible_user_dir }}/hiera.yaml'
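
To exercise this scenario locally, assuming Molecule and the delegated driver prerequisites are installed, a typical invocation looks like the following (a sketch, not part of the role itself):

# Run the full Molecule test sequence for the default scenario
molecule test --scenario-name default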

Usage

This Ansible role allows you to perform the following tasks:

  1. Install an NFS server.

  2. Install ReaR.

  3. Perform a ReaR backup.

This example is meant to describe a very simple use case in which the user needs to create a set of recovery images from the control plane nodes.

First, the user needs access to the environment’s Ansible inventory.

We will use the tripleo-ansible-inventory command to generate the inventory file.

tripleo-ansible-inventory \
  --ansible_ssh_user heat-admin \
  --static-yaml-inventory ~/tripleo-inventory.yaml
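
Before moving on, it can be useful to confirm that the generated inventory resolves the expected groups (Undercloud, Controller, and so on). One way to do that with plain Ansible is:

# Read-only check: list all hosts found in the generated inventory
ansible -i ~/tripleo-inventory.yaml all --list-hosts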

In this particular case, we don’t have an additional NFS server to store the backups from the control plane nodes, so we will install the NFS server on the Undercloud node (although any other node can be used as the NFS storage backend).

Next, we create an Ansible playbook that installs the NFS server on the Undercloud node.

cat <<'EOF' > ~/bar_nfs_setup.yaml
# Playbook
# We will setup the NFS node in the Undercloud node
# (we don't have any other place at the moment to do this)
- become: true
  hosts: undercloud
  name: Setup NFS server for ReaR
  roles:
  - role: backup_and_restore
EOF
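
If the NFS defaults shown in the role defaults section do not match the environment (for example the client networks allowed to mount the export, or the export folder), they can be overridden directly on the role. The values below are purely illustrative:

# Variant of bar_nfs_setup.yaml with explicit NFS settings (example values only)
- become: true
  hosts: undercloud
  name: Setup NFS server for ReaR
  roles:
  - role: backup_and_restore
    tripleo_backup_and_restore_nfs_clients_nets: [192.168.24.0/24, 10.0.0.0/24]
    tripleo_backup_and_restore_nfs_storage_folder: /ctl_plane_backups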

Then, we create another playbook to select the nodes on which we would like to install ReaR.

cat <<'EOF' > ~/bar_rear_setup.yaml
# Playbook
# We install and configure ReaR on the control plane nodes,
# as they are the only nodes we want to back up for now.
- become: true
  hosts: Controller
  name: Install ReaR
  roles:
  - role: backup_and_restore
EOF
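
If the NFS server was set up on a host other than the default Undercloud address, point ReaR at it when installing it on the control plane nodes; the address below is only a placeholder:

# Variant of bar_rear_setup.yaml targeting a dedicated NFS host (placeholder address)
- become: true
  hosts: Controller
  name: Install ReaR
  roles:
  - role: backup_and_restore
    tripleo_backup_and_restore_nfs_server: 192.168.24.50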

Now we create the playbook that performs the actual backup.

cat <<'EOF' > ~/bar_rear_create_restore_images.yaml
# Playbook
# We run ReaR on the control plane nodes.
- become: true
  hosts: ceph_mon
  name: Backup ceph authentication
  tasks:
    - name: Backup ceph authentication role
      include_role:
        name: backup_and_restore
        tasks_from: ceph_authentication
      tags:
      - bar_create_recover_image

- become: true
  hosts: Controller
  name: Create the recovery images for the control plane
  roles:
  - role: backup_and_restore
EOF

The last step is to run the previously created playbooks, filtering by the corresponding tags.

First, we configure the NFS server.

# Configure NFS server in the Undercloud node
ansible-playbook \
    -v -i ~/tripleo-inventory.yaml \
    --extra-vars="ansible_ssh_common_args='-o StrictHostKeyChecking=no'" \
    --become \
    --become-user root \
    --tags bar_setup_nfs_server \
    ~/bar_nfs_setup.yaml

Then, we install ReaR on the desired nodes.

# Configure ReaR in the control plane
ansible-playbook \
    -v -i ~/tripleo-inventory.yaml \
    --extra-vars="ansible_ssh_common_args='-o StrictHostKeyChecking=no'" \
    --become \
    --become-user root \
    --tags bar_setup_rear \
    ~/bar_rear_setup.yaml

Lastly, we execute the actual backup step, with or without Ceph.

# Create recovery images of the control plane
ansible-playbook \
    -v -i ~/tripleo-inventory.yaml \
    --extra-vars="ansible_ssh_common_args='-o StrictHostKeyChecking=no'" \
    --become \
    --become-user root \
    --tags bar_create_recover_image \
    ~/bar_rear_create_restore_images.yaml