Networking Option 2: Self-service networks
Install and configure the Networking components on the controller node.
Install the components
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
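Before moving on, you can optionally confirm that the packages landed; this quick sanity check is not part of the official procedure:

```shell
# Query the RPM database for the packages just installed.
rpm -q openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
```

Each package should report an installed version; any "package ... is not installed" line means the yum transaction needs another look.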
Configure the server component
Edit the /etc/neutron/neutron.conf file and complete the following actions:
In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
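As an optional sanity check (assuming the neutron database and database account were created during the prerequisite steps of this guide), you can confirm that the credentials in the connection string work from the controller node:

```shell
# Try to open the neutron database with the credentials from the
# connection option; substitute your actual NEUTRON_DBPASS.
mysql -u neutron -p"NEUTRON_DBPASS" -h controller -e 'USE neutron;'
```

A silent exit (status 0) means the account, password, and database name are all good.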
Comment out or remove any other connection options in the [database] section.
In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
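If you want to validate these credentials before the services are restarted, one quick check (assuming the neutron user and service project already exist from the earlier Identity service steps) is to request a token as the neutron user:

```shell
# Request a token as the neutron service user; a token table in the
# output confirms the [keystone_authtoken] credentials are valid.
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=service
export OS_USERNAME=neutron
export OS_PASSWORD=NEUTRON_PASS
openstack token issue
```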
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
# ...
type_drivers = flat,vlan,vxlan
In the [ml2] section, enable VXLAN self-service networks:
[ml2]
# ...
tenant_network_types = vxlan
In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
After you configure the ML2 plug-in, removing values in the type_drivers option can lead to database inconsistency.
The Linux bridge agent only supports VXLAN overlay networks.
In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
# ...
flat_networks = provider
In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
In the [securitygroup] section, enable ipset to increase efficiency of security group rules:
[securitygroup]
# ...
enable_ipset = true
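The edits above can also be applied non-interactively. The sketch below uses crudini (packaged in the RDO repositories; not part of the official procedure) and is equivalent to editing the file by hand:

```shell
# Apply the ML2 settings from this section with crudini.
conf=/etc/neutron/plugins/ml2/ml2_conf.ini
crudini --set "$conf" ml2 type_drivers flat,vlan,vxlan
crudini --set "$conf" ml2 tenant_network_types vxlan
crudini --set "$conf" ml2 mechanism_drivers linuxbridge,l2population
crudini --set "$conf" ml2 extension_drivers port_security
crudini --set "$conf" ml2_type_flat flat_networks provider
crudini --set "$conf" ml2_type_vxlan vni_ranges 1:1000
crudini --set "$conf" securitygroup enable_ipset true
```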
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See Host networking for more information.
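Before committing an interface name to the mapping, it can be worth confirming it actually exists on this host (purely a sanity check; substitute your real interface name for the placeholder):

```shell
# The interface must exist (and ideally be UP) for the provider
# mapping to work.
ip link show PROVIDER_INTERFACE_NAME
```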
In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node. See Host networking for more information.
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux operating system kernel supports network bridge filters by verifying all the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables

To enable networking bridge support, typically the br_netfilter kernel module needs to be loaded. Check your operating system's documentation for additional details on enabling this module.
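On distributions where the module is not loaded automatically, a typical sequence looks like the following (an illustrative sketch, not distribution-specific guidance):

```shell
# Load br_netfilter now, arrange for it to load at boot, then verify
# that the bridge filter sysctls report 1.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```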
Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
In the [DEFAULT] section, configure the Linux bridge interface driver, the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Return to Networking controller node configuration.