
 Chapter 2. Basic environment

[Note]Note

The trunk version of this guide focuses on the future Kilo release and will not work for the current Juno release. If you want to install Juno, you must use the Juno version of this guide instead.

This chapter explains how to configure each node in the example architectures including the two-node architecture with legacy networking and three-node architecture with OpenStack Networking (neutron).

[Note]Note

Although most environments include Identity, Image service, Compute, at least one networking service, and the dashboard, the Object Storage service can operate independently. If your use case only involves Object Storage, you can skip to Chapter 9, Add Object Storage after configuring the appropriate nodes for it. However, the dashboard requires at least the Image service and Compute.

[Note]Note

You must use an account with administrative privileges to configure each node. Either run the commands as the root user or configure the sudo utility.

 Before you begin

For best performance, we recommend that your environment meets or exceeds the hardware requirements in Figure 1.2, “Minimal architecture example with OpenStack Networking (neutron)—Hardware requirements” or Figure 1.5, “Minimal architecture example with legacy networking (nova-network)—Hardware requirements”. However, OpenStack does not require a significant amount of resources and the following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:

  • Controller Node: 1 processor, 2 GB memory, and 5 GB storage

  • Network Node: 1 processor, 512 MB memory, and 5 GB storage

  • Compute Node: 1 processor, 2 GB memory, and 10 GB storage

To minimize clutter and provide more resources for OpenStack, we recommend a minimal installation of your Linux distribution. Also, we strongly recommend that you install a 64-bit version of your distribution on at least the compute node. If you install a 32-bit version of your distribution on the compute node, attempting to start an instance using a 64-bit image will fail.
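
To confirm that a node runs a 64-bit kernel, you can check the machine hardware name; x86_64 indicates a 64-bit installation:

    $ uname -m
    x86_64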

[Note]Note

A single disk partition on each node works for most basic installations. However, you should consider Logical Volume Manager (LVM) for installations with optional services such as Block Storage.

Many users build their test environments on virtual machines (VMs). The primary benefits of VMs include the following:

  • One physical server can support multiple nodes, each with almost any number of network interfaces.

  • You can take periodic snapshots throughout the installation process and roll back to a working configuration if a problem occurs.

However, VMs will reduce the performance of your instances, particularly if your hypervisor or processor lacks support for hardware acceleration of nested VMs.

[Note]Note

If you choose to install on VMs, make sure your hypervisor permits promiscuous mode and disables MAC address filtering on the external network.

For more information about system requirements, see the OpenStack Operations Guide.

 Security

OpenStack services support various security methods including password, policy, and encryption. Additionally, supporting services such as the database server and message broker support at least password security.

To ease the installation process, this guide only covers password security where applicable. You can create secure passwords manually, generate them using a tool such as pwgen, or run the following command:

$ openssl rand -hex 10
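
For example, pwgen can generate a single random 16-character password; this assumes the pwgen package is installed from your distribution's repositories:

    $ pwgen -s 16 1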

For OpenStack services, this guide uses SERVICE_PASS to reference service account passwords and SERVICE_DBPASS to reference database passwords.

The following table provides a list of services that require passwords and their associated references in the guide:

Table 2.1. Passwords

Password name                          Description
-------------------------------------  ------------------------------------------------
Database password (no variable used)   Root password for the database
ADMIN_PASS                             Password of user admin
CEILOMETER_DBPASS                      Database password for the Telemetry service
CEILOMETER_PASS                        Password of Telemetry service user ceilometer
CINDER_DBPASS                          Database password for the Block Storage service
CINDER_PASS                            Password of Block Storage service user cinder
DASH_DBPASS                            Database password for the dashboard
DEMO_PASS                              Password of user demo
GLANCE_DBPASS                          Database password for Image service
GLANCE_PASS                            Password of Image service user glance
HEAT_DBPASS                            Database password for the Orchestration service
HEAT_DOMAIN_PASS                       Password of Orchestration domain
HEAT_PASS                              Password of Orchestration service user heat
KEYSTONE_DBPASS                        Database password of Identity service
NEUTRON_DBPASS                         Database password for the Networking service
NEUTRON_PASS                           Password of Networking service user neutron
NOVA_DBPASS                            Database password for Compute service
NOVA_PASS                              Password of Compute service user nova
RABBIT_PASS                            Password of user guest of RabbitMQ
SAHARA_DBPASS                          Database password of Data processing service
SWIFT_PASS                             Password of Object Storage service user swift
TROVE_DBPASS                           Database password of Database service
TROVE_PASS                             Password of Database service user trove

OpenStack and supporting services require administrative privileges during installation and operation. In some cases, services perform modifications to the host that can interfere with deployment automation tools such as Ansible, Chef, and Puppet. For example, some OpenStack services add a root wrapper to sudo that can interfere with security policies. See the Cloud Administrator Guide for more information. Also, the Networking service assumes default values for kernel network parameters and modifies firewall rules. To avoid most issues during your initial installation, we recommend using a stock deployment of a supported distribution on your hosts. However, if you choose to automate deployment of your hosts, review the configuration and policies applied to them before proceeding further.

 Networking

After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking, see your distribution's documentation.

All nodes require Internet access for administrative purposes such as package installation, security updates, DNS, and NTP. In most cases, nodes should obtain Internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that network infrastructure provides Internet access via NAT. To illustrate the flexibility of IaaS, the example architectures use public IP address space for the external network and assume that network infrastructure provides direct Internet access to instances in your OpenStack environment. In environments with only one block of public IP address space, both the management and external networks must ultimately obtain Internet access using it. For simplicity, the diagrams in this guide only show Internet access for OpenStack services.

[Note]Note

Your distribution does not enable a restrictive firewall by default. For more information about securing your environment, refer to the OpenStack Security Guide.

Proceed to network configuration for the example OpenStack Networking (neutron) or legacy networking (nova-network) architecture.

 OpenStack Networking (neutron)

The example architecture with OpenStack Networking (neutron) requires one controller node, one network node, and at least one compute node. The controller node contains one network interface on the management network. The network node contains one network interface on the management network, one on the instance tunnels network, and one on the external network. The compute node contains one network interface on the management network and one on the instance tunnels network.

The example architecture assumes use of the following networks:

  • Management on 10.0.0.0/24 with gateway 10.0.0.1

    [Note]Note

    This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

  • Instance tunnels on 10.0.1.0/24 without a gateway

    [Note]Note

    This network does not require a gateway because communication only occurs among network and compute nodes in your OpenStack environment.

  • External on 203.0.113.0/24 with gateway 203.0.113.1

    [Note]Note

    This network requires a gateway to provide Internet access to instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular network infrastructure.

[Note]Note

Network interface names vary by distribution. Traditionally, interfaces use "eth" followed by a sequential number. To cover all variations, this guide simply refers to the first interface as the interface with the lowest number, the second interface as the interface with the middle number, and the third interface as the interface with the highest number.
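
To see which interfaces a node has and how they are numbered, you can list them along with their current addresses:

    $ ip addr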

 

Figure 2.1. Minimal architecture example with OpenStack Networking (neutron)—Network layout


Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Also, each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node.

[Warning]Warning

Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures.

 Controller node

 

To configure networking:

  1. Configure the first interface as the management interface (see the sketch after this procedure):

    IP address: 10.0.0.11

    Network mask: 255.255.255.0 (or /24)

    Default gateway: 10.0.0.1

  2. Reboot the system to activate the changes.
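
As an illustration of step 1, the following is a minimal sketch of the management interface stanza in the /etc/network/interfaces file on a Debian/Ubuntu-style system; the interface name eth0 is an assumption and varies by system:

    # The management network interface (interface name is an example)
    auto eth0
    iface eth0 inet static
            address 10.0.0.11
            netmask 255.255.255.0
            gateway 10.0.0.1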

 

To configure name resolution:

  1. Set the hostname of the node to controller.

  2. Edit the /etc/hosts file to contain the following:

    # controller
    10.0.0.11       controller
    
    # network
    10.0.0.21       network
    
    # compute1
    10.0.0.31       compute1
    [Warning]Warning

    Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems.
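
On Debian/Ubuntu-style systems, one minimal way to set the hostname persistently and apply it immediately is, for example:

    # echo controller > /etc/hostname
    # hostname controller

The extraneous entry that the warning describes typically looks like the following in /etc/hosts; prefix it with # (or delete it) so it no longer resolves:

    #127.0.1.1       controller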

 Network node

 

To configure networking:

  1. Configure the first interface as the management interface:

    IP address: 10.0.0.21

    Network mask: 255.255.255.0 (or /24)

    Default gateway: 10.0.0.1

  2. Configure the second interface as the instance tunnels interface:

    IP address: 10.0.1.21

    Network mask: 255.255.255.0 (or /24)

  3. The external interface uses a special configuration without an IP address assigned to it. Configure the third interface as the external interface:

    Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.

    1. Edit the /etc/network/interfaces file to contain the following:

      # The external network interface
      auto INTERFACE_NAME
      iface INTERFACE_NAME inet manual
              up ip link set dev $IFACE up
              down ip link set dev $IFACE down
  4. Reboot the system to activate the changes.
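
After the reboot, you can confirm that the external interface is up without an IP address assigned to it. Replace INTERFACE_NAME with the actual interface name as before:

    # ip addr show INTERFACE_NAME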

 

To configure name resolution:

  1. Set the hostname of the node to network.

  2. Edit the /etc/hosts file to contain the following:

    # network
    10.0.0.21       network
    
    # controller
    10.0.0.11       controller
    
    # compute1
    10.0.0.31       compute1
    [Warning]Warning

    Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems.

 Compute node

 

To configure networking:

  1. Configure the first interface as the management interface:

    IP address: 10.0.0.31

    Network mask: 255.255.255.0 (or /24)

    Default gateway: 10.0.0.1

    [Note]Note

    Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

  2. Configure the second interface as the instance tunnels interface:

    IP address: 10.0.1.31

    Network mask: 255.255.255.0 (or /24)

    [Note]Note

    Additional compute nodes should use 10.0.1.32, 10.0.1.33, and so on.

  3. Reboot the system to activate the changes.

 

To configure name resolution:

  1. Set the hostname of the node to compute1.

  2. Edit the /etc/hosts file to contain the following:

    # compute1
    10.0.0.31       compute1
    
    # controller
    10.0.0.11       controller
    
    # network
    10.0.0.21       network
    [Warning]Warning

    Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems.

 Verify connectivity

We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.

  1. From the controller node, ping a site on the Internet:

    # ping -c 4 openstack.org
    PING openstack.org (174.143.194.225) 56(84) bytes of data.
    64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
    64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
    
    --- openstack.org ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3022ms
    rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
  2. From the controller node, ping the management interface on the network node:

    # ping -c 4 network
    PING network (10.0.0.21) 56(84) bytes of data.
    64 bytes from network (10.0.0.21): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from network (10.0.0.21): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from network (10.0.0.21): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from network (10.0.0.21): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- network ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
  3. From the controller node, ping the management interface on the compute node:

    # ping -c 4 compute1
    PING compute1 (10.0.0.31) 56(84) bytes of data.
    64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- compute1 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
  4. From the network node, ping a site on the Internet:

    # ping -c 4 openstack.org
    PING openstack.org (174.143.194.225) 56(84) bytes of data.
    64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
    64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
    
    --- openstack.org ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3022ms
    rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
  5. From the network node, ping the management interface on the controller node:

    # ping -c 4 controller
    PING controller (10.0.0.11) 56(84) bytes of data.
    64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- controller ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
  6. From the network node, ping the instance tunnels interface on the compute node:

    # ping -c 4 10.0.1.31
    PING 10.0.1.31 (10.0.1.31) 56(84) bytes of data.
    64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from 10.0.1.31 (10.0.1.31): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- 10.0.1.31 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
  7. From the compute node, ping a site on the Internet:

    # ping -c 4 openstack.org
    PING openstack.org (174.143.194.225) 56(84) bytes of data.
    64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
    64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
    
    --- openstack.org ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3022ms
    rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
  8. From the compute node, ping the management interface on the controller node:

    # ping -c 4 controller
    PING controller (10.0.0.11) 56(84) bytes of data.
    64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- controller ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
  9. From the compute node, ping the instance tunnels interface on the network node:

    # ping -c 4 10.0.1.21
    PING 10.0.1.21 (10.0.1.21) 56(84) bytes of data.
    64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from 10.0.1.21 (10.0.1.21): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- 10.0.1.21 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

 Legacy networking (nova-network)

The example architecture with legacy networking (nova-network) requires a controller node and at least one compute node. The controller node contains one network interface on the management network. The compute node contains one network interface on the management network and one on the external network.

The example architecture assumes use of the following networks:

  • Management on 10.0.0.0/24 with gateway 10.0.0.1

    [Note]Note

    This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, DNS, and NTP.

  • External on 203.0.113.0/24 with gateway 203.0.113.1

    [Note]Note

    This network requires a gateway to provide Internet access to instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular network infrastructure.

[Note]Note

Network interface names vary by distribution. Traditionally, interfaces use "eth" followed by a sequential number. To cover all variations, this guide simply refers to the first interface as the interface with the lowest number and the second interface as the interface with the highest number.

 

Figure 2.2. Minimal architecture example with legacy networking (nova-network)—Network layout


Unless you intend to use the exact configuration provided in this example architecture, you must modify the networks in this procedure to match your environment. Also, each node must resolve the other nodes by name in addition to IP address. For example, the controller name must resolve to 10.0.0.11, the IP address of the management interface on the controller node.

[Warning]Warning

Reconfiguring network interfaces will interrupt network connectivity. We recommend using a local terminal session for these procedures.

 Controller node

 

To configure networking:

  1. Configure the first interface as the management interface:

    IP address: 10.0.0.11

    Network mask: 255.255.255.0 (or /24)

    Default gateway: 10.0.0.1

  2. Reboot the system to activate the changes.

 

To configure name resolution:

  1. Set the hostname of the node to controller.

  2. Edit the /etc/hosts file to contain the following:

    # controller
    10.0.0.11       controller
    
    # compute1
    10.0.0.31       compute1
    [Warning]Warning

    Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems.

 Compute node

 

To configure networking:

  1. Configure the first interface as the management interface:

    IP address: 10.0.0.31

    Network mask: 255.255.255.0 (or /24)

    Default gateway: 10.0.0.1

    [Note]Note

    Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

  2. The external interface uses a special configuration without an IP address assigned to it. Configure the second interface as the external interface:

    Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.

    1. Edit the /etc/network/interfaces file to contain the following:

      # The external network interface
      auto INTERFACE_NAME
      iface INTERFACE_NAME inet manual
              up ip link set dev $IFACE up
              down ip link set dev $IFACE down
  3. Reboot the system to activate the changes.

 

To configure name resolution:

  1. Set the hostname of the node to compute1.

  2. Edit the /etc/hosts file to contain the following:

    # compute1
    10.0.0.31       compute1
    
    # controller
    10.0.0.11       controller
    [Warning]Warning

    Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems.

 Verify connectivity

We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.

  1. From the controller node, ping a site on the Internet:

    # ping -c 4 openstack.org
    PING openstack.org (174.143.194.225) 56(84) bytes of data.
    64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
    64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
    
    --- openstack.org ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3022ms
    rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
  2. From the controller node, ping the management interface on the compute node:

    # ping -c 4 compute1
    PING compute1 (10.0.0.31) 56(84) bytes of data.
    64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- compute1 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
  3. From the compute node, ping a site on the Internet:

    # ping -c 4 openstack.org
    PING openstack.org (174.143.194.225) 56(84) bytes of data.
    64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
    64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
    64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms
    
    --- openstack.org ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3022ms
    rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
  4. From the compute node, ping the management interface on the controller node:

    # ping -c 4 controller
    PING controller (10.0.0.11) 56(84) bytes of data.
    64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
    64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
    64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms
    
    --- controller ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3000ms
    rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

 Network Time Protocol (NTP)

You must install NTP to properly synchronize services among nodes. We recommend that you configure the controller node to reference more accurate (lower stratum) servers and other nodes to reference the controller node.

 Controller node

 

To install the NTP service

  • # apt-get install ntp
 

To configure the NTP service

By default, the controller node synchronizes the time via a pool of public servers. However, you can optionally edit the /etc/ntp.conf file to configure alternative servers such as those provided by your organization.

  1. Edit the /etc/ntp.conf file and add, change, or remove the following keys as necessary for your environment:

    server NTP_SERVER iburst
    restrict -4 default kod notrap nomodify
    restrict -6 default kod notrap nomodify

    Replace NTP_SERVER with the hostname or IP address of a suitable, more accurate (lower stratum) NTP server. The configuration supports multiple server keys.

    [Note]Note

    For the restrict keys, you essentially remove the nopeer and noquery options.

    [Note]Note

    Remove the /var/lib/ntp/ntp.conf.dhcp file if it exists.

  2. Restart the NTP service:

    # service ntp restart
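
The note in step 1 refers to a DHCP-supplied configuration file; if it exists, a single command removes it:

    # rm /var/lib/ntp/ntp.conf.dhcp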

 Other nodes

 

To install the NTP service

  • # apt-get install ntp
 

To configure the NTP service

Configure the network and compute nodes to reference the controller node.

  1. Edit the /etc/ntp.conf file:

    Comment out or remove all but one server key and change it to reference the controller node.

    server controller iburst
    [Note]Note

    Remove the /var/lib/ntp/ntp.conf.dhcp file if it exists.

  2. Restart the NTP service:

    # service ntp restart

 Verify operation

We recommend that you verify NTP synchronization before proceeding further. Some nodes, particularly those that reference the controller node, can take several minutes to synchronize.

  1. Run this command on the controller node:

    # ntpq -c peers
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    *ntp-server1     192.0.2.11       2 u  169 1024  377    1.901   -0.611   5.483
    +ntp-server2     192.0.2.12       2 u  887 1024  377    0.922   -0.246   2.864

    Contents in the remote column should indicate the hostname or IP address of one or more NTP servers.

    [Note]Note

    Contents in the refid column typically reference IP addresses of upstream servers.

  2. Run this command on the controller node:

    # ntpq -c assoc
    ind assid status  conf reach auth condition  last_event cnt
    ===========================================================
      1 20487  961a   yes   yes  none  sys.peer    sys_peer  1
      2 20488  941a   yes   yes  none candidate    sys_peer  1

    Contents in the condition column should indicate sys.peer for at least one server.

  3. Run this command on all other nodes:

    # ntpq -c peers
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    *controller      192.0.2.21       3 u   47   64   37    0.308   -0.251   0.079

    Contents in the remote column should indicate the hostname of the controller node.

    [Note]Note

    Contents in the refid column typically reference IP addresses of upstream servers.

  4. Run this command on all other nodes:

    # ntpq -c assoc
    ind assid status  conf reach auth condition  last_event cnt
    ===========================================================
      1 21181  963a   yes   yes  none  sys.peer    sys_peer  3

    Contents in the condition column should indicate sys.peer.

 OpenStack packages

Because of differing release schedules, distributions release OpenStack packages as part of the distribution or through other methods. Perform these procedures on all nodes.

[Note]Note

Disable or remove any automatic update services because they can impact your OpenStack environment.
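
For example, on Ubuntu the unattended-upgrades package provides automatic updates and can be removed; this assumes a stock Ubuntu installation:

    # apt-get remove unattended-upgrades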

 

To enable the OpenStack repository

  • Install the Ubuntu Cloud archive keyring and repository:

    # apt-get install ubuntu-cloud-keyring
    # echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
      "trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
 

To finalize installation

  • Upgrade the packages on your system:

    # apt-get update && apt-get dist-upgrade
    [Note]Note

    If the upgrade process includes a new kernel, reboot your system to activate it.

 SQL database

Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.

 

To install and configure the database server

  1. Install the packages:

    [Note]Note

    The Python MySQL library is compatible with MariaDB.

    # apt-get install mariadb-server python-mysqldb
  2. Choose a suitable password for the database root account.

  3. Create and edit the /etc/mysql/conf.d/mysqld_openstack.cnf file and complete the following actions:

    1. In the [mysqld] section, set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network:

      [mysqld]
      ...
      bind-address = 10.0.0.11
    2. In the [mysqld] section, set the following keys to enable useful options and the UTF-8 character set:

      [mysqld]
      ...
      default-storage-engine = innodb
      innodb_file_per_table
      collation-server = utf8_general_ci
      init-connect = 'SET NAMES utf8'
      character-set-server = utf8
 

To finalize installation

  1. Restart the database service:

    # service mysql restart
  2. Secure the database service:

    # mysql_secure_installation
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
          SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
    
    In order to log into MariaDB to secure it, we'll need the current
    password for the root user.  If you've just installed MariaDB, and
    you haven't set the root password yet, the password will be blank,
    so you should just press enter here.
    
    Enter current password for root (enter for none):
    OK, successfully used password, moving on...
    
    Setting the root password ensures that nobody can log into the MariaDB
    root user without the proper authorisation.
    
    Set root password? [Y/n] Y
    New password:
    Re-enter new password:
    Password updated successfully!
    Reloading privilege tables..
     ... Success!
    
    
    By default, a MariaDB installation has an anonymous user, allowing anyone
    to log into MariaDB without having to have a user account created for
    them.  This is intended only for testing, and to make the installation
    go a bit smoother.  You should remove them before moving into a
    production environment.
    
    Remove anonymous users? [Y/n] Y
     ... Success!
    
    Normally, root should only be allowed to connect from 'localhost'.  This
    ensures that someone cannot guess at the root password from the network.
    
    Disallow root login remotely? [Y/n] Y
     ... Success!
    
    By default, MariaDB comes with a database named 'test' that anyone can
    access.  This is also intended only for testing, and should be removed
    before moving into a production environment.
    
    Remove test database and access to it? [Y/n] Y
     - Dropping test database...
     ... Success!
     - Removing privileges on test database...
     ... Success!
    
    Reloading the privilege tables will ensure that all changes made so far
    will take effect immediately.
    
    Reload privilege tables now? [Y/n] Y
     ... Success!
    
    Cleaning up...
    
    All done!  If you've completed all of the above steps, your MariaDB
    installation should now be secure.
    
    Thanks for using MariaDB!
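
As a quick sanity check, you can verify that the database service listens on the management network address (the default MySQL/MariaDB port is 3306):

    # netstat -ntlp | grep 3306

The output should show mysqld bound to 10.0.0.11 rather than only to 127.0.0.1.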

 Message queue

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular message queue service. This guide implements the RabbitMQ message queue service because most distributions support it. If you prefer to implement a different message queue service, consult the documentation associated with it.

 

To install the message queue service

  • Install the package:

    # apt-get install rabbitmq-server
 

To configure the message queue service

  1. Add the openstack user:

    # rabbitmqctl add_user openstack RABBIT_PASS
    Creating user "openstack" ...
    ...done.

    Replace RABBIT_PASS with a suitable password.

  2. Permit configuration, write, and read access for the openstack user:

    # rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    Setting permissions for user "openstack" in vhost "/" ...
    ...done.
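
As a quick sanity check, you can list the configured permissions; the openstack user should appear with ".*" for configure, write, and read:

    # rabbitmqctl list_permissions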