Manual install & Configuration¶
This document describes the requirements for manual installation, or for integration into a production OpenStack deployment tool, of conventional architectures that include the following types of nodes:
Controller - Runs OpenStack control plane services such as REST APIs and databases.
Network - Runs the layer-2, layer-3 (routing), DHCP, and metadata agents for the Networking service. Some agents are optional. Usually provides connectivity between provider (public) and project (private) networks via NAT and floating IP addresses.
Note
Some tools deploy these services on controller nodes.
Compute - Runs the hypervisor and layer-2 agent for the Networking service.
Packaging¶
Open vSwitch (OVS) includes OVN beginning with version 2.5, where it is considered experimental. Since version 2.13, OVN has been released as a separate project.
The Networking service integration for OVN is now one of the in-tree Neutron
drivers and is therefore delivered with the neutron package; older versions of
this integration were delivered as an independent package, typically networking-ovn.
Building OVS from source automatically installs OVN for releases older than 2.13.
For newer releases, OVS and OVN must be built separately, as sketched below.
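The following is a minimal sketch of such a separate build, assuming a host with the usual autotools toolchain; the clone URLs and install prefix are illustrative rather than prescriptive:
# build and install OVS first
# git clone https://github.com/openvswitch/ovs.git
# cd ovs && ./boot.sh && ./configure && make && make install && cd ..
# then build OVN against the compiled OVS source tree
# git clone https://github.com/ovn-org/ovn.git
# cd ovn && ./boot.sh && ./configure --with-ovs-source=../ovs
# make && make install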
For deployment tools using distribution packages, the openvswitch-ovn
package for RHEL/CentOS and compatible distributions automatically installs
openvswitch as a dependency. Ubuntu/Debian includes ovn-central, ovn-host,
ovn-docker, and ovn-common packages that pull in the appropriate Open
vSwitch dependencies as needed.
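For example, the installation commands might look like the following (package names as above; the package manager invocation depends on the distribution):
# on Ubuntu/Debian, for a node that hosts the OVN databases:
# apt install ovn-central ovn-host ovn-common
# on RHEL/CentOS and compatible distributions:
# dnf install openvswitch-ovn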
A python-networking-ovn RPM may be obtained for Fedora or CentOS from
the RDO project. Since the Ussuri release, the OVN driver is shipped with the neutron package.
A package based on the older branch of networking-ovn can be found at
https://trunk.rdoproject.org/.
Fedora and CentOS RPM builds of OVS and OVN from the master branch of
ovs can be found in this COPR repository:
https://copr.fedorainfracloud.org/coprs/leifmadsen/ovs-master/.
Controller nodes¶
Each controller node runs the OVS service (including dependent services such
as ovsdb-server) and the ovn-northd service. However, only a single
instance of the ovsdb-server and ovn-northd services can operate in
a deployment. Deployment tools can nevertheless implement active/passive
high availability using a management tool that monitors service health
and automatically starts these services on another node after failure of the
primary node, as sketched below. See the Frequently Asked Questions for more information.
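One possible illustration uses Pacemaker with the ocf:ovn:ovndb-servers resource agent shipped in the OVN source tree; the resource names, the virtual IP, and the exact pcs syntax below are assumptions that vary by deployment and pcs version:
# a minimal active/passive sketch with a hypothetical virtual IP of 192.0.2.10
# pcs resource create ip-ovn-dbs ocf:heartbeat:IPaddr2 ip=192.0.2.10
# pcs resource create ovn-dbs ocf:ovn:ovndb-servers \
#     master_ip=192.0.2.10 nb_master_port=6641 sb_master_port=6642 \
#     promotable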
Install the openvswitch-ovn and networking-ovn packages.
Start the OVS service. The central OVS service starts the ovsdb-server service that manages OVN databases.
Using the systemd unit:
# systemctl start openvswitch
Using the ovs-ctl script:
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
Configure the ovsdb-server component. By default, the ovsdb-server service only permits local access to databases via a Unix socket. However, OVN services on compute nodes require access to these databases.
Permit remote database access:
# ovn-nbctl set-connection ptcp:6641:0.0.0.0 -- \
    set connection . inactivity_probe=60000
# ovn-sbctl set-connection ptcp:6642:0.0.0.0 -- \
    set connection . inactivity_probe=60000
# if using the VTEP functionality:
# ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:0.0.0.0
Replace 0.0.0.0 with the IP address of the management network interface on the controller node to avoid listening on all interfaces.
Note
Permit remote access to the following TCP ports:
- 6640 (OVS) to VTEPs, if you use the VTEP functionality.
- 6641 (northbound database) to hosts running neutron-server.
- 6642 (southbound database) to hosts running neutron-server, to gateway nodes that run ovn-controller, and to compute node services such as ovn-controller and ovn-metadata-agent.
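For example, on distributions that use firewalld, the corresponding rules on the controller node might look like this (zone and policy details are deployment-specific assumptions):
# firewall-cmd --permanent --add-port=6641/tcp
# firewall-cmd --permanent --add-port=6642/tcp
# if using the VTEP functionality:
# firewall-cmd --permanent --add-port=6640/tcp
# firewall-cmd --reload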
Start the ovn-northd service.
Using the systemd unit:
# systemctl start ovn-northd
Using the ovn-ctl script:
# /usr/share/openvswitch/scripts/ovn-ctl start_northd
Options for start_northd:
# /usr/share/openvswitch/scripts/ovn-ctl start_northd --help
# ...
# DB_NB_SOCK="/usr/local/etc/openvswitch/nb_db.sock"
# DB_NB_PID="/usr/local/etc/openvswitch/ovnnb_db.pid"
# DB_SB_SOCK="/usr/local/etc/openvswitch/sb_db.sock"
# DB_SB_PID="/usr/local/etc/openvswitch/ovnsb_db.pid"
# ...
Configure the Networking server component. The Networking service implements OVN as an ML2 driver. Edit the /etc/neutron/neutron.conf file:
Enable the ML2 core plug-in.
[DEFAULT]
...
core_plugin = ml2
Enable the OVN layer-3 service.
[DEFAULT]
...
service_plugins = ovn-router
Configure the ML2 plug-in. Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
Configure the OVN mechanism driver, network type drivers, self-service (tenant) network types, and enable the port security extension.
[ml2]
...
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve
extension_drivers = port_security
overlay_ip_version = 4
Note
To enable VLAN self-service networks, make sure that OVN version 2.11 (or higher) is used, then add vlan to the tenant_network_types option (for example, tenant_network_types = geneve,vlan). The first network type in the list becomes the default self-service network type.
To use IPv6 for all overlay (tunnel) network endpoints, set the overlay_ip_version option to 6.
Configure the Geneve ID range and maximum header size. The IP version overhead (20 bytes for IPv4 (default) or 40 bytes for IPv6) is added to the maximum header size based on the ML2 overlay_ip_version option.
[ml2_type_geneve]
...
vni_ranges = 1:65536
max_header_size = 38
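As a worked example of how these values relate to instance MTU, assuming a physical network MTU of 1500 (a common default, not something this section specifies):
# with overlay_ip_version = 4:
#   tunnel overhead = max_header_size (38) + IPv4 overhead (20) = 58 bytes
#   instance MTU on Geneve networks = 1500 - 58 = 1442
# with overlay_ip_version = 6, the overhead grows to 38 + 40 = 78 bytes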
Note
The Networking service uses the vni_ranges option to allocate network segments. However, OVN ignores the actual values. Thus, the ID range only determines the quantity of Geneve networks in the environment. For example, a range of 5001:6000 defines a maximum of 1000 Geneve networks.
Optionally, enable support for VLAN provider and self-service networks on one or more physical networks. If you specify only the physical network, only administrative (privileged) users can manage VLAN networks. Additionally specifying a VLAN ID range for a physical network enables regular (non-privileged) users to manage VLAN networks. The Networking service allocates the VLAN ID for each self-service network using the VLAN ID range for the physical network.
[ml2_type_vlan]
...
network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID
Replace PHYSICAL_NETWORK with the physical network name and optionally define the minimum and maximum VLAN IDs. Use a comma to separate each physical network.
For example, to enable support for administrative VLAN networks on the physnet1 network and self-service VLAN networks on the physnet2 network using VLAN IDs 1001 to 2000:
network_vlan_ranges = physnet1,physnet2:1001:2000
Enable security groups.
[securitygroup]
...
enable_security_group = true
Note
The firewall_driver option under [securitygroup] is ignored since the OVN ML2 driver itself handles security groups.
Configure OVS database access and the L3 scheduler:
[ovn]
...
ovn_nb_connection = tcp:IP_ADDRESS:6641
ovn_sb_connection = tcp:IP_ADDRESS:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER
Note
Replace IP_ADDRESS with the IP address of the controller node that runs the ovsdb-server service. Replace OVN_L3_SCHEDULER with leastloaded if you want the scheduler to select a compute node with the least number of gateway ports, or chance if you want the scheduler to randomly select a compute node from the available list of compute nodes.
Set ovn-cms-options with enable-chassis-as-gw in the Open_vSwitch table’s external_ids column. Then, if this chassis has proper bridge mappings, it will be selected for scheduling gateway routers.
# ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
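To confirm the setting took effect, reading the value back is a simple sanity check:
# ovs-vsctl get open . external-ids:ovn-cms-options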
Start the neutron-server service.
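For example, using the systemd unit (assuming the unit is named neutron-server, as it is in most distribution packages):
# systemctl start neutron-server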
Network nodes¶
Deployments using OVN native layer-3 and DHCP services do not require conventional network nodes because connectivity to external networks (including VTEP gateways) and routing occurs on compute nodes.
Compute nodes¶
Each compute node runs the OVS and ovn-controller services. The
ovn-controller service replaces the conventional OVS layer-2 agent.
Install the openvswitch-ovn and networking-ovn packages.
Start the OVS service.
Using the systemd unit:
# systemctl start openvswitch
Using the ovs-ctl script:
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
Configure the OVS service.
Use the OVN southbound database on the controller node.
# ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642
Replace IP_ADDRESS with the IP address of the controller node that runs the ovsdb-server service.
Enable one or more overlay network protocols. At a minimum, OVN requires enabling the geneve protocol. Deployments using VTEP gateways should also enable the vxlan protocol.
# ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
Note
Deployments without VTEP gateways can safely enable both protocols.
Configure the overlay network local endpoint IP address.
# ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS
Replace IP_ADDRESS with the IP address of the overlay network interface on the compute node.
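As an optional sanity check, the keys set above can be read back in one step (external-ids is a key/value map column on the Open_vSwitch table):
# ovs-vsctl get open . external-ids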
Start the ovn-controller service.
Using the systemd unit:
# systemctl start ovn-controller
Using the ovn-ctl script:
# /usr/share/openvswitch/scripts/ovn-ctl start_controller
Verify operation¶
Each compute node should contain an ovn-controller instance.
# ovn-sbctl show
<output>
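The exact output depends on the deployment. With two hypothetical compute nodes using Geneve endpoints on 203.0.113.0/24, it might resemble:
Chassis "compute1"
    hostname: "compute1.example.net"
    Encap geneve
        ip: "203.0.113.31"
Chassis "compute2"
    hostname: "compute2.example.net"
    Encap geneve
        ip: "203.0.113.32"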