Configuring the operating system and storage

This section describes the installation and configuration of operating systems for the target hosts, as well as deploying SSH keys and configuring storage.

Installing the operating system

Install one of the following supported operating systems on the target host:

  • Ubuntu server 16.04 (Xenial Xerus) LTS 64-bit
  • CentOS 7 64-bit
  • openSUSE 42.X 64-bit

Configure at least one network interface to access the Internet or suitable local repositories.
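
For example, a quick way to confirm connectivity after configuring the interface is to check the address assignment and ping your repository mirror or gateway; the host name below is only a placeholder:

    # ip addr show
    # ping -c 4 mirror_or_repository_host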

We recommend adding the Secure Shell (SSH) server packages to the installation on target hosts that do not have local (console) access.

Note

We also recommend setting your locale to en_US.UTF-8. Other locales might work, but they are not tested or supported.
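
On Ubuntu, for example, you can generate and activate this locale with the standard locale tools (other distributions provide their own equivalents):

    # locale-gen en_US.UTF-8
    # update-locale LANG=en_US.UTF-8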

Configure the operating system (Ubuntu)

  1. Update package source lists:

    # apt-get update
    
  2. Upgrade the system packages and kernel:

    # apt-get dist-upgrade
    
  3. Reboot the host.

  4. Ensure that the kernel version is 3.13.0-34-generic or later:

    # uname -r
    
  5. Install additional software packages:

    # apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
      lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan python
    
  6. Add the appropriate kernel modules to the /etc/modules file to enable VLAN and bond interfaces:

    # echo 'bonding' >> /etc/modules
    # echo '8021q' >> /etc/modules
    
  7. Configure Network Time Protocol (NTP) in /etc/ntp.conf to synchronize with a suitable time source, then restart the service (see the example after this list):

    # service ntp restart
    
  8. Reboot the host to activate the changes and use the new kernel.
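
Step 7 does not prescribe a particular time source. As a sketch, you can append a server line to /etc/ntp.conf before restarting the service; the pool address below is only an example, and any distribution defaults may need adjusting:

    # echo 'server 0.pool.ntp.org iburst' >> /etc/ntp.conf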

Configure the operating system (CentOS)

  1. Upgrade the system packages and kernel:

    # yum upgrade
    
  2. Reboot the host.

  3. Ensure that the kernel version is 3.10 or later:

    # uname -r
    
  4. Install additional software packages:

    # yum install bridge-utils iputils lsof lvm2 \
      ntp ntpdate openssh-server sudo tcpdump python
    
  5. Add the appropriate kernel modules to the /etc/modules-load.d/openstack-ansible.conf file to enable VLAN and bond interfaces (an optional verification example follows this list):

    # echo 'bonding' >> /etc/modules-load.d/openstack-ansible.conf
    # echo '8021q' >> /etc/modules-load.d/openstack-ansible.conf
    
  6. Configure Network Time Protocol (NTP) in /etc/ntp.conf to synchronize with a suitable time source and start the service:

    # systemctl enable ntpd.service
    # systemctl start ntpd.service
    
  7. Reboot the host to activate the changes and use the new kernel.
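
The files in /etc/modules-load.d/ are read at boot. As an optional check, you can load the modules immediately and confirm that they are present:

    # modprobe bonding
    # modprobe 8021q
    # lsmod | grep -E 'bonding|8021q'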

Configure the operating system (openSUSE)

  1. Upgrade the system packages and kernel:

    # zypper up
    
  2. Reboot the host.

  3. Ensure that the kernel version is 4.4 or later:

    # uname -r
    
  4. Install additional software packages:

    # zypper install bridge-utils iputils lsof lvm2 \
      ntp openssh sudo tcpdump python
    
  5. Add the appropriate kernel modules to the /etc/modules-load.d/openstack-ansible.conf file to enable VLAN and bond interfaces:

    # echo 'bonding' >> /etc/modules-load.d/openstack-ansible.conf
    # echo '8021q' >> /etc/modules-load.d/openstack-ansible.conf
    
  6. Configure Network Time Protocol (NTP) in /etc/ntp.conf to synchronize with a suitable time source and start the service (a verification example follows this list):

    # systemctl enable ntpd.service
    # systemctl start ntpd.service
    
  7. Reboot the host to activate the changes and use the new kernel.
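
After the service starts, you can optionally confirm that the host is synchronizing with its configured time sources by querying the NTP peers:

    # ntpq -p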

Deploying Secure Shell (SSH) keys

Ansible uses SSH to connect from the deployment host to the target hosts.

  1. Copy the contents of the public key file on the deployment host to the /root/.ssh/authorized_keys file on each target host, for example with ssh-copy-id as shown after this list.
  2. Test public key authentication by using SSH to connect from the deployment host to each target host. If SSH provides a shell without asking for a password, public key authentication is working.
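
A minimal sketch of both steps, assuming the deployment host already has a key pair at /root/.ssh/id_rsa and that target_host stands in for each target host's address:

    # ssh-copy-id -i /root/.ssh/id_rsa.pub root@target_host
    # ssh root@target_host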

For more information about how to generate an SSH key pair, as well as best practices, see GitHub’s documentation about generating SSH keys.

Important

OpenStack-Ansible deployments require the presence of a /root/.ssh/id_rsa.pub file on the deployment host. The contents of this file are inserted into an authorized_keys file for the containers, which is a necessary step for the Ansible playbooks. You can override this behavior by setting the lxc_container_ssh_key variable to the public key for the container.
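
A minimal sketch of that override, assuming user variable overrides are placed in /etc/openstack_deploy/user_variables.yml (the path and the public key value are placeholders to adapt to your deployment):

    # echo 'lxc_container_ssh_key: "ssh-rsa AAAA...example deploy@host"' \
      >> /etc/openstack_deploy/user_variables.yml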

Configure storage

Logical Volume Manager (LVM) enables a single device to be split into multiple logical volumes that appear as a physical storage device to the operating system. The Block Storage (cinder) service, and the LXC containers that run the OpenStack infrastructure, can optionally use LVM for their data storage.

Note

OpenStack-Ansible automatically configures LVM on the nodes, and overrides any existing LVM configuration. If you already have a customized LVM configuration, edit the generated configuration file as needed.

  1. To use the optional Block Storage (cinder) service, create an LVM volume group named cinder-volumes on the storage host. Specify a metadata size of 2048 when creating the physical volume. For example:

    # pvcreate --metadatasize 2048 physical_volume_device_path
    # vgcreate cinder-volumes physical_volume_device_path
    
  2. Optionally, create an LVM volume group named lxc for container file systems, as shown in the example after this list. If the lxc volume group does not exist, containers are automatically installed on the file system under /var/lib/lxc by default.
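
As a sketch, the optional lxc volume group is created the same way as cinder-volumes, and vgs confirms that both groups exist; physical_volume_device_path is a placeholder for the actual device:

    # vgcreate lxc physical_volume_device_path
    # vgs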
