This architecture example augments the self-service deployment example with the Distributed Virtual Router (DVR) high-availability mechanism, which provides connectivity between self-service and provider networks on compute nodes rather than network nodes for specific scenarios. For instances with a floating IPv4 address, routing between self-service and provider networks resides completely on the compute nodes to eliminate single points of failure and performance issues with network nodes. Routing also resides completely on the compute nodes for instances with a fixed or floating IPv4 address using self-service networks on the same distributed virtual router. However, instances with only a fixed IP address still rely on the network node for routing and SNAT services between self-service and provider networks.
Consider the following attributes of this high-availability mechanism to determine practicality in your environment:
Modify the compute nodes with the following components:
Note

Consider adding at least one additional network node to provide high-availability for instances with a fixed IP address. See Distributed Virtual Routing with VRRP for more information.
The following figure shows components and connectivity for one self-service network and one untagged (flat) network. In this particular case, the instance resides on the same compute node as the DHCP agent for the network. If the DHCP agent resides on another compute node, the latter only contains a DHCP namespace with a port on the OVS integration bridge.
Use the following example configuration as a template to add support for high-availability using DVR to an existing operational environment that supports self-service networks.
In the neutron.conf file, enable distributed routing by default for all routers.
[DEFAULT]
router_distributed = true
Restart the following services:
Server
In the openvswitch_agent.ini file, enable distributed routing.
[DEFAULT]
enable_distributed_routing = true
In the l3_agent.ini file, configure the layer-3 agent to provide SNAT services.
[DEFAULT]
agent_mode = dvr_snat
Note

The external_network_bridge option intentionally lacks a value.
Restart the following services:
Open vSwitch agent
Layer-3 agent
Install the Networking service layer-3 agent.
In the openvswitch_agent.ini file, enable distributed routing.
[DEFAULT]
enable_distributed_routing = true
Restart the following services:
Open vSwitch agent
In the l3_agent.ini file, configure the layer-3 agent.
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
agent_mode = dvr
Note

The external_network_bridge option intentionally lacks a value.
Source the administrative project credentials.
Verify presence and operation of the agents.
$ neutron agent-list
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| 05d980f2-a4fc-4815-91e7-a7f7e118c0db | L3 agent | compute1 | nova | :-) | True | neutron-l3-agent |
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent |
| 2a2e9a90-51b8-4163-a7d6-3e199ba2374b | L3 agent | compute2 | nova | :-) | True | neutron-l3-agent |
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | :-) | True | neutron-openvswitch-agent |
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent |
| 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | :-) | True | neutron-l3-agent |
| a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | | :-) | True | neutron-openvswitch-agent |
| a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent |
| af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent |
| bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | :-) | True | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
Similar to the self-service deployment example, this configuration supports multiple VXLAN self-service networks. After enabling high-availability, all additional routers use distributed routing. The following procedure creates an additional self-service network and router. The Networking service also supports adding distributed routing to existing routers.
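For example, an existing router can be converted to distributed routing with the neutron client. This is a sketch using administrative credentials and a hypothetical pre-existing router named router1; the Networking service typically refuses to change the distributed flag while the router is administratively up, hence the surrounding admin-state updates.

```console
$ neutron router-update router1 --admin_state_up False
$ neutron router-update router1 --distributed True
$ neutron router-update router1 --admin_state_up True
```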
Source regular (non-administrative) project credentials.
Create a self-service network.
$ neutron net-create selfservice2
Created a new network:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| description | |
| id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1450 |
| name | selfservice2 |
| port_security_enabled | True |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | f986edf55ae945e2bef3cb4bfd589928 |
+-------------------------+--------------------------------------+
Create an IPv4 subnet on the self-service network.
$ neutron subnet-create --name selfservice2-v4 --ip-version 4 \
--dns-nameserver 8.8.4.4 selfservice2 192.168.2.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.2.2", "end": "192.168.2.254"} |
| cidr | 192.168.2.0/24 |
| description | |
| dns_nameservers | 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 192.168.2.1 |
| host_routes | |
| id | 12a41804-18bf-4cec-bde8-174cbdbf1573 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | selfservice2-v4 |
| network_id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 |
| subnetpool_id | |
| tenant_id | f986edf55ae945e2bef3cb4bfd589928 |
+-------------------+--------------------------------------------------+
Create an IPv6 subnet on the self-service network.
$ neutron subnet-create --name selfservice2-v6 --ip-version 6 \
--ipv6-address-mode slaac --ipv6-ra-mode slaac \
--dns-nameserver 2001:4860:4860::8844 selfservice2 \
fd00:192:168:2::/64
Created a new subnet:
+-------------------+-----------------------------------------------------------------------------+
| Field | Value |
+-------------------+-----------------------------------------------------------------------------+
| allocation_pools | {"start": "fd00:192:168:2::2", "end": "fd00:192:168:2:ffff:ffff:ffff:ffff"} |
| cidr | fd00:192:168:2::/64 |
| description | |
| dns_nameservers | 2001:4860:4860::8844 |
| enable_dhcp | True |
| gateway_ip | fd00:192:168:2::1 |
| host_routes | |
| id | b0f122fe-0bf9-4f31-975d-a47e58aa88e3 |
| ip_version | 6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | slaac |
| name | selfservice2-v6 |
| network_id | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 |
| subnetpool_id | |
| tenant_id | f986edf55ae945e2bef3cb4bfd589928 |
+-------------------+-----------------------------------------------------------------------------+
Create a router.
$ neutron router-create router2
Created a new router:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| description | |
| external_gateway_info | |
| id | b6206312-878e-497c-8ef7-eb384f8add96 |
| name | router2 |
| routes | |
| status | ACTIVE |
| tenant_id | f986edf55ae945e2bef3cb4bfd589928 |
+-------------------------+--------------------------------------+
Add the IPv4 and IPv6 subnets as interfaces on the router.
$ neutron router-interface-add router2 selfservice2-v4
Added interface da3504ad-ba70-4b11-8562-2e6938690878 to router router2.
$ neutron router-interface-add router2 selfservice2-v6
Added interface 442e36eb-fce3-4cb5-b179-4be6ace595f0 to router router2.
Add the provider network as a gateway on the router.
$ neutron router-gateway-set router2 provider1
Set gateway for router router2
Source the administrative project credentials.
Verify distributed routing on the router.
$ neutron router-show router2
+-------------+-------+
| Field       | Value |
+-------------+-------+
| distributed | True  |
+-------------+-------+
On each compute node, verify creation of a qrouter namespace with the same ID.
Compute node 1:
# ip netns
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
Compute node 2:
# ip netns
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
On the network node, verify creation of the snat and qrouter namespaces with the same ID.
# ip netns
snat-78d2f628-137c-4f26-a257-25fc20f203c1
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
Note

The namespace for router 1 from Open vSwitch: Self-service networks should also appear on network node 1 because it was created prior to enabling distributed routing.
Launch an instance with an interface on the additional self-service network. For example, a CirrOS image using flavor ID 1.
$ openstack server create --flavor 1 --image cirros --nic net-id=NETWORK_ID selfservice-instance2
Replace NETWORK_ID with the ID of the additional self-service network.
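If you do not have the network ID at hand, it can be obtained from the client. A sketch using the selfservice2 network created above; the `-f value -c id` output-formatting options depend on the client version.

```console
$ openstack network show selfservice2 -f value -c id
7ebc353c-6c8f-461f-8ada-01b9f14beb18
```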
Determine the IPv4 and IPv6 addresses of the instance.
$ openstack server list
+--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+
| bde64b00-77ae-41b9-b19a-cd8e378d9f8b | selfservice-instance2 | ACTIVE | selfservice2=fd00:192:168:2:f816:3eff:fe71:e93e, 192.168.2.4 |
+--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+
Create a floating IPv4 address on the provider network.
$ openstack ip floating create provider1
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| fixed_ip | None |
| id | 0174056a-fa56-4403-b1ea-b5151a31191f |
| instance_id | None |
| ip | 203.0.113.17 |
| pool | provider1 |
+-------------+--------------------------------------+
Associate the floating IPv4 address with the instance.
$ openstack ip floating add 203.0.113.17 selfservice-instance2
Note

This command provides no output.
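To confirm the association, list the servers again; the floating IPv4 address should appear alongside the fixed addresses in the Networks column (output illustrative):

```console
$ openstack server list
+--------------------------------------+-----------------------+--------+-----------------------------------------------------------------------------+
| ID                                   | Name                  | Status | Networks                                                                    |
+--------------------------------------+-----------------------+--------+-----------------------------------------------------------------------------+
| bde64b00-77ae-41b9-b19a-cd8e378d9f8b | selfservice-instance2 | ACTIVE | selfservice2=fd00:192:168:2:f816:3eff:fe71:e93e, 192.168.2.4, 203.0.113.17  |
+--------------------------------------+-----------------------+--------+-----------------------------------------------------------------------------+
```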
On the compute node containing the instance, verify creation of the fip namespace with the same ID as the provider network.
# ip netns
fip-4bfa3075-b4b2-4f7d-b88e-df1113942d43
The following sections describe the flow of network traffic in several common scenarios. North-south network traffic travels between an instance and external network such as the Internet. East-west network traffic travels between instances on the same or different networks. In all scenarios, the physical network infrastructure handles switching and routing among provider networks and external networks such as the Internet. Each case references one or more of the following components:
Provider network (VLAN)
Self-service network 1 (VXLAN)
Self-service network 2 (VXLAN)
Self-service router
Gateway on the provider network
Interface on self-service network 1
Interface on self-service network 2
Instance 1
Instance 2
This section only contains flow scenarios that benefit from distributed virtual routing or that differ from conventional operation. For other flow scenarios, see Network traffic flow.
Similar to North-south scenario 1: Instance with a fixed IP address, except the router namespace on the network node becomes the SNAT namespace. The network node still contains the router namespace, but it serves no purpose in this case.
For instances with a floating IPv4 address using a self-service network on a distributed router, the compute node containing the instance performs SNAT on north-south traffic passing from the instance to external networks such as the Internet, and DNAT on north-south traffic passing from external networks to the instance. Floating IP addresses and NAT do not apply to IPv6; thus, the network node routes IPv6 traffic in this scenario.
Instance 1 resides on compute node 1 and uses self-service network 1.
The following steps involve the compute node:
Note
Egress traffic follows similar steps in reverse, except SNAT changes the source IPv4 address of the packet to the floating IPv4 address.
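The NAT rules that implement this translation can be inspected in the qrouter namespace on the compute node containing the instance. A sketch, assuming the qrouter namespace ID and floating IPv4 address used earlier in this example; exact rule names and chains vary by release.

```console
# ip netns exec qrouter-78d2f628-137c-4f26-a257-25fc20f203c1 iptables -t nat -S | grep 203.0.113.17
```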
Instances with a fixed IPv4/IPv6 address or a floating IPv4 address on the same compute node communicate via the router on that compute node. Instances on different compute nodes communicate via an instance of the router on each compute node.
Note
This scenario places the instances on different compute nodes to show the most complex situation.
The following steps involve compute node 1:
The following steps involve compute node 2:
Note
Routing between self-service networks occurs on the compute node containing the instance sending the packet. In this scenario, routing occurs on compute node 1 for packets from instance 1 to instance 2 and on compute node 2 for packets from instance 2 to instance 1.
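To observe that each compute node holds its own copy of the distributed router, compare the routing table inside the qrouter namespace across nodes; both should show directly connected routes for both self-service networks. A sketch, assuming the qrouter namespace ID verified earlier.

```console
# ip netns exec qrouter-78d2f628-137c-4f26-a257-25fc20f203c1 ip route
```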
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.