Role - tripleo_cephadm

Role Documentation

Welcome to the “tripleo_cephadm” role documentation.

Role Defaults

This section highlights all of the defaults and variables set within the “tripleo_cephadm” role.

# defaults file for tripleo_cephadm
tripleo_cephadm_spec_on_bootstrap: false  # not recommended due to
tripleo_cephadm_ssh_user: ceph-admin
tripleo_cephadm_bin: /usr/sbin/cephadm
tripleo_cephadm_cluster: ceph
tripleo_cephadm_config_home: /etc/ceph
tripleo_cephadm_verbose: true
tripleo_cephadm_container_image: ceph
tripleo_cephadm_container_tag: v15
tripleo_cephadm_container_cli: podman
tripleo_cephadm_container_options: --net=host --ipc=host
tripleo_cephadm_admin_keyring: '{{ tripleo_cephadm_config_home }}/{{ tripleo_cephadm_cluster }}.client.admin.keyring'
tripleo_cephadm_conf: '{{ tripleo_cephadm_config_home }}/{{ tripleo_cephadm_cluster }}.conf'
tripleo_cephadm_bootstrap_conf: /home/{{ tripleo_cephadm_ssh_user }}/bootstrap_{{ tripleo_cephadm_cluster }}.conf
# path on ansible host (i.e. undercloud) of the ceph spec
tripleo_cephadm_spec_ansible_host: '{{ playbook_dir }}/ceph_spec.yaml'
# path on bootstrap node of ceph spec (scp'd from above var)
tripleo_cephadm_spec: /home/{{ tripleo_cephadm_ssh_user }}/specs/ceph_spec.yaml
# path in container on bootstrap node of spec (podman -v'd from above var)
tripleo_cephadm_container_spec: /home/ceph_spec.yaml
# path of other ceph specs podman -v mounted into running container
tripleo_cephadm_spec_home: /home/{{ tripleo_cephadm_ssh_user }}/specs
- /home/{{ tripleo_cephadm_ssh_user }}/.ssh/id_rsa
- /home/{{ tripleo_cephadm_ssh_user }}/.ssh/
tripleo_cephadm_uid: '167'
tripleo_cephadm_mode: '0755'
tripleo_cephadm_keyring_permissions: '0644'
tripleo_ceph_client_vars: /home/stack/ceph_client.yaml
tripleo_cephadm_dashboard_enabled: false
tripleo_cephadm_wait_for_mons: true
tripleo_cephadm_wait_for_mons_retries: 10
tripleo_cephadm_wait_for_mons_delay: 20
tripleo_cephadm_wait_for_mons_ignore_errors: false
tripleo_cephadm_wait_for_osds: true
tripleo_cephadm_wait_for_osds_retries: 10
tripleo_cephadm_wait_for_osds_delay: 20
tripleo_cephadm_wait_for_osds_ignore_errors: false
tripleo_cephadm_num_osd_expected: 1
tripleo_cephadm_predeployed: true
tripleo_cephadm_conf_overrides: {}
tripleo_cephadm_fsid_list: []
tripleo_cephadm_fqdn: false
tripleo_cephadm_crush_rules: []
# todo(fultonj) add is_hci boolean for target memory
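
Any of these defaults can be overridden with an extra-variables file passed to ansible-playbook via -e @<file>. A minimal sketch; the file name, the registry path, and the ini-style section layout of tripleo_cephadm_conf_overrides are assumptions, not confirmed by this document:

```yaml
# ceph_overrides.yaml -- hypothetical file name, passed as: -e @ceph_overrides.yaml
tripleo_cephadm_container_image: quay.io/ceph/ceph  # assumed registry path
tripleo_cephadm_container_tag: v16
tripleo_cephadm_conf_overrides:  # assumed to take ini-style sections
  global:
    osd_pool_default_size: 3
```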

Molecule Scenarios

Molecule is used to test the “tripleo_cephadm” role. The following section highlights the drivers in service and provides an example playbook showing how the role is leveraged.

Scenario: default

Driver: podman
Molecule Inventory
        ansible_python_interpreter: /usr/bin/python3
Example default playbook
- name: Converge
  hosts: all
  vars:
    tripleo_cephadm_wait_for_mons: false
    tripleo_ceph_client_vars: ceph_client.yaml
  tasks:
    - name: Satisfy Ceph prerequisites
      include_role:
        name: tripleo_cephadm
        tasks_from: pre

    - name: Bootstrap Ceph
      include_role:
        name: tripleo_cephadm
        tasks_from: bootstrap

    - name: Mock ceph_mon_dump command
      shell: cat mock/mock_ceph_mon_dump.json
      register: ceph_mon_mock_dump
      delegate_to: localhost

    - name: Mock ceph_keys_module_output
      include_vars: mock_ceph_keys.yml

    - name: Export configuration for tripleo_ceph_client
      include_role:
        name: tripleo_cephadm
        tasks_from: export
      vars:
        ceph_mon_dump: '{{ ceph_mon_mock_dump }}'
        tripleo_cephadm_client_keys: '{{ mock_ceph_keys }}'

    - name: Run verify tasks
      include_tasks: tasks/verify.yml


An Ansible role for TripleO integration with Ceph clusters deployed with cephadm and managed with Ceph orchestrator.

This role is provided as part of the implementation of the tripleo_ceph_spec. It is an Ansible wrapper which calls the Ceph tools cephadm and orchestrator, and it bundles the ceph_key Ansible module from ceph-ansible.


  • This role assumes it has an inventory with a single host, known as the bootstrap_host. An inventory generated by tripleo-ansible-inventory will have a mons group, so the first node in that group is a good candidate for this host.

  • The cephadm binary must be installed on the bootstrap_host.

  • Though only one Ceph node needs to be in the inventory, cephadm will configure the other servers over SSH. Thus, the following playbook should be run before any playbook which uses this role, in order to configure the ceph-admin user on the overcloud with the SSH keys that cephadm requires.

    ansible-playbook -i $INV \
      tripleo-ansible/tripleo_ansible/playbooks/cli-enable-ssh-admin.yaml \
      -e @ceph-admin.yml

    Where ceph-admin.yml contains something like the following:

    tripleo_admin_user: ceph-admin
    ssh_servers: "{{ groups['mons'] }}"
    distribute_private_key: true

    The ssh_servers variable should be expanded to contain any other nodes hosting Ceph, e.g. those in the osds group.

  • A cephadm spec file should be provided which describes the Ceph services to be run on the other ssh_servers. The path to this file on the Ansible host can be set with the tripleo_cephadm_spec_ansible_host variable.
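
The spec file follows the cephadm service specification format (YAML documents separated by ---). A minimal sketch; the hostnames, addresses, and host pattern below are hypothetical, and a real deployment generates or writes this file per environment:

```yaml
---
service_type: host
addr: 192.168.24.10        # hypothetical address
hostname: oc0-controller-0  # hypothetical hostname
---
service_type: mon
placement:
  hosts:
    - oc0-controller-0
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: 'oc0-ceph-*'  # hypothetical pattern matching the OSD nodes
data_devices:
  all: true  # consume all available devices on matching hosts
```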


Here is an example of a playbook which bootstraps the first Ceph monitor and then applies a spec file to add other hosts. It then creates RBD pools for Nova, Cinder, and Glance, and a cephx keyring called openstack to access those pools. Finally, it writes a file which can be passed as input to the tripleo_ceph_client role so that an overcloud can be configured to use the deployed Ceph cluster.

- name: Deploy Ceph with cephadm
  hosts: mons[0]
  vars:
    bootstrap_host: "{{ groups['mons'][0] }}"
    tripleo_cephadm_spec_on_bootstrap: false
    tripleo_cephadm_pools:  # variable name assumed; the list of RBD pools to create
      - vms
      - volumes
      - images
  tasks:
    - name: Satisfy Ceph prerequisites
      include_role:
        name: tripleo_cephadm
        tasks_from: pre

    - name: Bootstrap Ceph
      include_role:
        name: tripleo_cephadm
        tasks_from: bootstrap

    - name: Apply Ceph spec
      include_role:
        name: tripleo_cephadm
        tasks_from: apply_spec
      when: not tripleo_cephadm_spec_on_bootstrap

    - name: Create Pools
      include_role:
        name: tripleo_cephadm
        tasks_from: pools

    - name: Create Keys
      include_role:
        name: tripleo_cephadm
        tasks_from: keys

    - name: Export configuration for tripleo_ceph_client
      include_role:
        name: tripleo_cephadm
        tasks_from: export
      vars:
        tripleo_cephadm_client_keys:  # variable name assumed
          - client.openstack
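
After the export tasks run, the file named by tripleo_ceph_client_vars can be fed to the tripleo_ceph_client role. Its exact contents are produced by the export tasks from the live cluster; the following is only a rough, hypothetical sketch of the shape such a file might take (variable names, caps, and addresses are assumptions):

```yaml
# hypothetical contents -- real values come from the deployed cluster
keys:
  - name: client.openstack
    key: "AQ...=="  # cephx secret, elided
    caps:
      mon: profile rbd
      osd: profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images
external_cluster_mon_ips: "192.168.24.10,192.168.24.11,192.168.24.12"  # assumed
```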