This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.
Note
This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.
Note
Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
Install the packages:
# apt install nova-compute
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
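The transport URL above relies on RabbitMQ's defaults for the port and virtual host. They can also be stated explicitly; the values below (port 5672 and the / virtual host) are the RabbitMQ defaults, shown here only for illustration:

```ini
[DEFAULT]
...
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```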
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
Note
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, check that the my_ip option is correctly set (this value is handled by the config and postinst scripts of the nova-common package using debconf):
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.
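If you are unsure which address to use, you can list the IPv4 addresses assigned to each interface and pick the one on the management network. This is a generic diagnostic command rather than anything OpenStack-specific; interface names such as eth0 or ens3 vary by system:

```shell
# Print one line per IPv4 address, with the owning interface name first
ip -4 -o addr show
```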
In the [vnc] section, enable and configure remote console access:
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses, and the proxy component listens only on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
Note
If the web browser used to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
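For example, assuming the controller's management IP address is 10.0.0.11 as in the example architecture, the option would read:

```ini
[vnc]
...
novncproxy_base_url = http://10.0.0.11:6080/vnc_auto.html
```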
In the [glance] section, configure the location of the Image service API:
[glance]
...
api_servers = http://controller:9292
Ensure the nbd kernel module is loaded:
# modprobe nbd
Ensure the module loads on every boot by adding nbd to the /etc/modules-load.d/nbd.conf file:
# echo nbd > /etc/modules-load.d/nbd.conf
Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
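As a complementary check, the kernel exposes the /dev/kvm device node when KVM hardware acceleration is usable; a short sketch of that test:

```shell
# Check for the /dev/kvm device node; it exists only when the kernel
# can provide KVM hardware acceleration
if [ -e /dev/kvm ]; then
    echo "KVM acceleration available"
else
    echo "KVM acceleration not available"
fi
```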
If this command returns a value of zero, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.
Replace the nova-compute-kvm package with nova-compute-qemu, which automatically changes the /etc/nova/nova-compute.conf file and installs the necessary dependencies:
# apt install nova-compute-qemu
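The effect of this package swap is to set the libvirt virtualization type to qemu. The snippet below shows what the relevant part of /etc/nova/nova-compute.conf typically looks like afterwards; exact contents vary by package version, so treat this as an illustration rather than a file to copy:

```ini
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = qemu
```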
Restart the Compute service:
# service nova-compute restart
Note
If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is preventing access to port 5672.
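To check reachability from the compute node, you can probe the port directly. The sketch below uses bash's /dev/tcp redirection so it needs no extra tools; controller and 5672 are the hostname and default RabbitMQ port used throughout this guide:

```shell
# Attempt a TCP connection to the AMQP port with a 3-second timeout
if timeout 3 bash -c 'exec 3<>/dev/tcp/controller/5672' 2>/dev/null; then
    echo "port 5672 is reachable"
else
    echo "port 5672 is closed or the host is unreachable"
fi
```

If the port is unreachable, adjust the controller node's firewall to permit TCP traffic on port 5672 from the compute node.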
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.