Install and configure a compute node for Ubuntu
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.
This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.
Install and configure components
Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
Install the packages:
# apt install nova-compute
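If you want to confirm the package installed correctly before configuring it, you can optionally check the installed version (a supplementary check, not part of the required steps):

$ apt policy nova-compute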
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
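At this point you can optionally verify that the compute node can reach the message queue on the controller node (a supplementary check; assumes netcat is available):

$ nc -zv controller 5672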
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

Comment out or remove any other options in the [keystone_authtoken] section.
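To sanity-check these credentials, you can request a token as the nova user from the compute node (an optional verification; requires the python-openstackclient package and prompts for NOVA_PASS):

$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name service --os-username nova token issue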
In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the compute node:
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.
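If you are unsure which address to use, you can list the IPv4 addresses configured on the node; which interface carries the management network (for example, the one with 10.0.0.31) depends on your deployment:

$ ip -4 addr show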
Configure the [neutron] section of /etc/nova/nova.conf. Refer to the Networking service install guide for more details.
In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
If the web browser used to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
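To check in advance whether a given host can resolve the controller hostname, you can query the resolver (an optional check):

$ getent hosts controller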
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
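You can optionally confirm that the Image service endpoint is reachable from the compute node; an unauthenticated request to the root URL should return an API version document:

$ curl http://controller:9292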
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
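The nova-compute package normally creates this directory. If you need to create it manually, ensure it is owned by the nova user (a sketch, assuming the default nova user and group):

# install -d -o nova -g nova /var/lib/nova/tmp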
In the [placement] section, configure the Placement API:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service. Comment out any other options in the [placement] section.
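As an optional check, you can confirm that the Placement API is reachable from the compute node; this assumes the default placement port 8778 on the controller:

$ curl http://controller:8778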
Finalize installation

Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
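On Ubuntu, the cpu-checker package provides kvm-ok, which offers a second opinion by also verifying that /dev/kvm is present and usable (a supplementary check, not part of the required steps):

# apt install cpu-checker
# kvm-ok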
Edit the [libvirt] section in the /etc/nova/nova-compute.conf file as follows:
[libvirt]
# ...
virt_type = qemu
Restart the Compute service:
# service nova-compute restart
If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart the nova-compute service on the compute node.
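For example, if the controller node uses ufw (an assumption; your deployment may use a different firewall), you could run the following on the controller node and then restart nova-compute on the compute node:

# ufw allow 5672/tcp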
Add the compute node to the cell database
Run the following commands on the controller node.
Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database:
$ . admin-openrc

$ openstack compute service list --service nova-compute

+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host  | Binary       | Zone | State | Status  | Updated At                 |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1  | node1 | nova-compute | nova | up    | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
Discover compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:

[scheduler]
discover_hosts_in_cells_interval = 300
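After the periodic task (or a manual discover_hosts run) has registered a new node, you can optionally confirm that it appears as a hypervisor:

$ openstack hypervisor list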