Open vSwitch: Self-service networks

This architecture example augments Open vSwitch: Provider networks to support a nearly limitless quantity of entirely virtual networks. Although the Networking service supports VLAN self-service networks, this example focuses on VXLAN self-service networks. For more information on self-service networks, see Self-service networks.

Prerequisites

Add one network node with the following components:

  • Three network interfaces: management, provider, and overlay.
  • OpenStack Networking Open vSwitch (OVS) layer-2 agent, layer-3 agent, and any dependencies including OVS.

Modify the compute nodes with the following components:

  • Add one network interface: overlay.

Note

You can keep the DHCP and metadata agents on each compute node or move them to the network node.

Architecture

Self-service networks using OVS - overview

The following figure shows components and connectivity for one self-service network and one untagged (flat) provider network. In this particular case, the instance resides on the same compute node as the DHCP agent for the network. If the DHCP agent resides on another compute node, the latter contains only a DHCP namespace with a port on the OVS integration bridge.

Self-service networks using OVS - components and connectivity - one network

Example configuration

Use the following example configuration as a template to add support for self-service networks to an existing operational environment that supports provider networks.

Controller node

  1. In the neutron.conf file:

    • Enable routing and allow overlapping IP address ranges.

      [DEFAULT]
      service_plugins = router
      allow_overlapping_ips = True
      
  2. In the ml2_conf.ini file:

    • Add vxlan to type drivers and project network types.

      [ml2]
      type_drivers = flat,vlan,vxlan
      tenant_network_types = vxlan
      
    • Enable the layer-2 population mechanism driver.

      [ml2]
      mechanism_drivers = openvswitch,l2population
      
    • Configure the VXLAN network ID (VNI) range.

      [ml2_type_vxlan]
      vni_ranges = VNI_START:VNI_END
      

      Replace VNI_START and VNI_END with appropriate numerical values.
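
      For example, assuming 1000 VXLAN networks suffice for the deployment, the following reserves VNIs 1 through 1000 (an illustrative range, not a requirement):

      [ml2_type_vxlan]
      vni_ranges = 1:1000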

  3. Restart the following services:

    • Server
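
    For example, if the node manages services with systemd (an assumption; the unit name varies by distribution):

      # systemctl restart neutron-server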

Network node

  1. Install the Networking service OVS layer-2 agent and layer-3 agent.

  2. Install OVS.

  3. In the neutron.conf file, configure common options:

    [DEFAULT]
    core_plugin = ml2
    auth_strategy = keystone
    rpc_backend = rabbit
    notify_nova_on_port_status_changes = true
    notify_nova_on_port_data_changes = true
    
    [database]
    ...
    
    [keystone_authtoken]
    ...
    
    [oslo_messaging_rabbit]
    ...
    
    [nova]
    ...
    

    See the Installation Guide for your OpenStack release to obtain the appropriate configuration for the [database], [keystone_authtoken], [oslo_messaging_rabbit], and [nova] sections.
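
    As a rough sketch only, with placeholder hostnames and passwords (the Installation Guide for your release is authoritative and lists additional required options), these sections typically resemble:

    [database]
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    auth_type = password
    project_name = service
    username = neutron
    password = NEUTRON_PASS

    [oslo_messaging_rabbit]
    rabbit_host = controller
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS

    [nova]
    auth_url = http://controller:35357
    auth_type = password
    project_name = service
    username = nova
    password = NOVA_PASS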

  4. Start the following services:

    • OVS
  5. Create the OVS provider bridge br-provider:

    $ ovs-vsctl add-br br-provider
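
    If the physical provider network interface is not already attached to this bridge (it typically is, carried over from the provider networks deployment), add it as a port, replacing PROVIDER_INTERFACE with the interface name:

    $ ovs-vsctl add-port br-provider PROVIDER_INTERFACE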
    
  6. In the openvswitch_agent.ini file, configure the layer-2 agent.

    [ovs]
    bridge_mappings = provider:br-provider
    local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    
    [agent]
    tunnel_types = vxlan
    l2_population = true
    
    [securitygroup]
    firewall_driver = iptables_hybrid
    

    Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the interface that handles VXLAN overlays for self-service networks.

  7. In the l3_agent.ini file, configure the layer-3 agent.

    [DEFAULT]
    interface_driver = openvswitch
    external_network_bridge =
    

    Note

    The external_network_bridge option intentionally lacks a value.

  8. Start the following services:

    • Open vSwitch agent

    • Layer-3 agent
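
    For example, with systemd-managed services (an assumption; unit names vary by distribution), followed by a quick check that the agent created the integration and tunnel bridges:

      # systemctl restart neutron-openvswitch-agent neutron-l3-agent
      # ovs-vsctl list-br
      br-int
      br-provider
      br-tun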

Compute node

  1. In the openvswitch_agent.ini file, enable VXLAN support including layer-2 population.

    [ovs]
    local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    
    [agent]
    tunnel_types = vxlan
    l2_population = true
    

    Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the interface that handles VXLAN overlays for self-service networks.

  2. Restart the following services:

    • Open vSwitch agent
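
    For example, with systemd (an assumption; adjust the unit name to your distribution):

      # systemctl restart neutron-openvswitch-agent

    After the agent starts, ovs-vsctl show should list VXLAN tunnel ports on the br-tun bridge once the agent discovers peer nodes.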

Verify service operation

  1. Source the administrative project credentials.

  2. Verify presence and operation of the agents.

    $ neutron agent-list
    +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
    | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
    +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
    | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
    | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 |                   | :-)   | True           | neutron-openvswitch-agent |
    | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
    | 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent           | network1 | nova              | :-)   | True           | neutron-l3-agent          |
    | a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 |                   | :-)   | True           | neutron-openvswitch-agent |
    | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
    | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
    | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 |                   | :-)   | True           | neutron-openvswitch-agent |
    +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
    

Create initial networks

The configuration supports multiple VXLAN self-service networks. For simplicity, the following procedure creates one self-service network and a router with a gateway on the flat provider network. The router uses NAT for IPv4 network traffic and directly routes IPv6 network traffic.

Note

IPv6 connectivity with self-service networks often requires addition of static routes to nodes and physical network infrastructure.
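
For example, a host outside of OpenStack that needs to reach the self-service IPv6 subnet created below could use a static route via the router address on the provider network. A sketch, where ROUTER_PROVIDER_ADDRESS is a placeholder for the router's actual IPv6 address on the provider network:

  # ip -6 route add fd00:192:168:1::/64 via ROUTER_PROVIDER_ADDRESS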

  1. Source the administrative project credentials.

  2. Update the provider network to support external connectivity for self-service networks.

    $ neutron net-update --router:external provider1
    Updated network: provider1
    
  3. Source regular (non-administrative) project credentials.

  4. Create a self-service network.

    $ neutron net-create selfservice1
    Created a new network:
    +-------------------------+--------------------------------------+
    | Field                   | Value                                |
    +-------------------------+--------------------------------------+
    | admin_state_up          | True                                 |
    | availability_zone_hints |                                      |
    | availability_zones      |                                      |
    | description             |                                      |
    | id                      | 8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1 |
    | ipv4_address_scope      |                                      |
    | ipv6_address_scope      |                                      |
    | mtu                     | 1450                                 |
    | name                    | selfservice1                         |
    | port_security_enabled   | True                                 |
    | router:external         | False                                |
    | shared                  | False                                |
    | status                  | ACTIVE                               |
    | subnets                 |                                      |
    | tags                    |                                      |
    | tenant_id               | f986edf55ae945e2bef3cb4bfd589928     |
    +-------------------------+--------------------------------------+
    
  5. Create an IPv4 subnet on the self-service network.

    $ neutron subnet-create --name selfservice1-v4 --ip-version 4 \
      --dns-nameserver 8.8.4.4 selfservice1 192.168.1.0/24
    Created a new subnet:
    +-------------------+--------------------------------------------------+
    | Field             | Value                                            |
    +-------------------+--------------------------------------------------+
    | allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"} |
    | cidr              | 192.168.1.0/24                                   |
    | description       |                                                  |
    | dns_nameservers   | 8.8.4.4                                          |
    | enable_dhcp       | True                                             |
    | gateway_ip        | 192.168.1.1                                      |
    | host_routes       |                                                  |
    | id                | db1e5c17-2968-4533-8722-512c29fd1b88             |
    | ip_version        | 4                                                |
    | ipv6_address_mode |                                                  |
    | ipv6_ra_mode      |                                                  |
    | name              | selfservice1-v4                                  |
    | network_id        | 8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1             |
    | subnetpool_id     |                                                  |
    | tenant_id         | f986edf55ae945e2bef3cb4bfd589928                 |
    +-------------------+--------------------------------------------------+
    
  6. Create an IPv6 subnet on the self-service network.

    $ neutron subnet-create --name selfservice1-v6 --ip-version 6 \
      --ipv6-address-mode slaac --ipv6-ra-mode slaac \
      --dns-nameserver 2001:4860:4860::8844 selfservice1 \
      fd00:192:168:1::/64
    Created a new subnet:
    +-------------------+-----------------------------------------------------------------------------+
    | Field             | Value                                                                       |
    +-------------------+-----------------------------------------------------------------------------+
    | allocation_pools  | {"start": "fd00:192:168:1::2", "end": "fd00:192:168:1:ffff:ffff:ffff:ffff"} |
    | cidr              | fd00:192:168:1::/64                                                         |
    | description       |                                                                             |
    | dns_nameservers   | 2001:4860:4860::8844                                                        |
    | enable_dhcp       | True                                                                        |
    | gateway_ip        | fd00:192:168:1::1                                                           |
    | host_routes       |                                                                             |
    | id                | 6299cc4e-6581-4626-9720-03c808c662b3                                        |
    | ip_version        | 6                                                                           |
    | ipv6_address_mode | slaac                                                                       |
    | ipv6_ra_mode      | slaac                                                                       |
    | name              | selfservice1-v6                                                             |
    | network_id        | 8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1                                        |
    | subnetpool_id     |                                                                             |
    | tenant_id         | f986edf55ae945e2bef3cb4bfd589928                                            |
    +-------------------+-----------------------------------------------------------------------------+
    
  7. Create a router.

    $ neutron router-create router1
    Created a new router:
    +-------------------------+--------------------------------------+
    | Field                   | Value                                |
    +-------------------------+--------------------------------------+
    | admin_state_up          | True                                 |
    | availability_zone_hints |                                      |
    | availability_zones      |                                      |
    | description             |                                      |
    | external_gateway_info   |                                      |
    | id                      | 17db2a15-e024-46d0-9250-4cd4d336a2cc |
    | name                    | router1                              |
    | routes                  |                                      |
    | status                  | ACTIVE                               |
    | tenant_id               | f986edf55ae945e2bef3cb4bfd589928     |
    +-------------------------+--------------------------------------+
    
  8. Add the IPv4 and IPv6 subnets as interfaces on the router.

    $ neutron router-interface-add router1 selfservice1-v4
    Added interface 77ebe721-a7d3-457c-9534-bce4657da9da to router router1.
    
    $ neutron router-interface-add router1 selfservice1-v6
    Added interface 695e0993-394d-4c40-a338-d4ba4061491a to router router1.
    
  9. Add the provider network as the gateway on the router.

    $ neutron router-gateway-set router1 provider1
    Set gateway for router router1
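
    To see the addresses that the Networking service assigned to the router interfaces, including its gateway address on the provider network, list the router ports (output omitted here):

    $ neutron router-port-list router1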
    

Verify network operation

  1. On each compute node, verify creation of a second qdhcp namespace.

    # ip netns
    qdhcp-8b868082-e312-4110-8627-298109d4401c
    qdhcp-8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1
    
  2. On the network node, verify creation of the qrouter namespace.

    # ip netns
    qrouter-17db2a15-e024-46d0-9250-4cd4d336a2cc
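
    Optionally, inspect the interfaces inside the router namespace; the qr- interfaces carry the self-service subnet addresses and the qg- interface carries the provider gateway address (namespace ID taken from the output above):

    # ip netns exec qrouter-17db2a15-e024-46d0-9250-4cd4d336a2cc ip addr show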
    
  3. Source regular (non-administrative) project credentials.

  4. Create the appropriate security group rules to allow ping and SSH access to instances using the network.

    $ openstack security group rule create --proto icmp default
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | id                    | 2b45fbf8-45db-486c-915f-3f254740ae76 |
    | ip_protocol           | icmp                                 |
    | ip_range              | 0.0.0.0/0                            |
    | parent_group_id       | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 |
    | port_range            |                                      |
    | remote_security_group |                                      |
    +-----------------------+--------------------------------------+
    
    $ openstack security group rule create --proto ipv6-icmp default
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | id                    | 2b45fbf8-45db-486c-915f-3f254740ae76 |
    | ip_protocol           | ipv6-icmp                            |
    | ip_range              | ::/0                                 |
    | parent_group_id       | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 |
    | port_range            |                                      |
    | remote_security_group |                                      |
    +-----------------------+--------------------------------------+
    
    $ openstack security group rule create --proto tcp --dst-port 22 default
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | id                    | 86e5cc55-bb08-447a-a807-d36e2b9f56af |
    | ip_protocol           | tcp                                  |
    | ip_range              | 0.0.0.0/0                            |
    | parent_group_id       | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 |
    | port_range            | 22:22                                |
    | remote_security_group |                                      |
    +-----------------------+--------------------------------------+
    
  5. Launch an instance with an interface on the self-service network. For example, a CirrOS image using flavor ID 1.

    $ openstack server create --flavor 1 --image cirros --nic net-id=NETWORK_ID selfservice-instance1
    

    Replace NETWORK_ID with the ID of the self-service network.
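
    If needed, obtain the ID from the network name first; for example:

    $ neutron net-show -F id selfservice1
    +-------+--------------------------------------+
    | Field | Value                                |
    +-------+--------------------------------------+
    | id    | 8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1 |
    +-------+--------------------------------------+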

  6. Determine the IPv4 and IPv6 addresses of the instance.

    $ openstack server list
    +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+
    | ID                                   | Name                  | Status | Networks                                                     |
    +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+
    | c055cdb0-ebb4-4d65-957c-35cbdbd59306 | selfservice-instance1 | ACTIVE | selfservice1=192.168.1.4, fd00:192:168:1:f816:3eff:fe30:9cb0 |
    +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+
    

    Warning

    The IPv4 address resides in a private IP address range (RFC1918). Thus, the Networking service performs source network address translation (SNAT) for the instance to access external networks such as the Internet. Access from external networks such as the Internet to the instance requires a floating IPv4 address. The Networking service performs destination network address translation (DNAT) from the floating IPv4 address to the instance IPv4 address on the self-service network. On the other hand, the Networking service architecture for IPv6 lacks support for NAT due to the significantly larger address space and complexity of NAT. Thus, floating IP addresses do not exist for IPv6 and the Networking service only performs routing for IPv6 subnets on self-service networks. In other words, you cannot rely on NAT to “hide” instances with IPv4 and IPv6 addresses or only IPv6 addresses and must properly implement security groups to restrict access.

  7. On the controller node or any host with access to the provider network, ping the IPv6 address of the instance.

    $ ping6 -c 4 fd00:192:168:1:f816:3eff:fe30:9cb0
    PING fd00:192:168:1:f816:3eff:fe30:9cb0(fd00:192:168:1:f816:3eff:fe30:9cb0) 56 data bytes
    64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=1 ttl=63 time=2.08 ms
    64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=2 ttl=63 time=1.88 ms
    64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=3 ttl=63 time=1.55 ms
    64 bytes from fd00:192:168:1:f816:3eff:fe30:9cb0: icmp_seq=4 ttl=63 time=1.62 ms
    
    --- fd00:192:168:1:f816:3eff:fe30:9cb0 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms
    rtt min/avg/max/mdev = 1.557/1.788/2.085/0.217 ms
    
  8. Optionally, enable IPv4 access from external networks such as the Internet to the instance.

    1. Create a floating IPv4 address on the provider network.

      $ openstack ip floating create provider1
      +-------------+--------------------------------------+
      | Field       | Value                                |
      +-------------+--------------------------------------+
      | fixed_ip    | None                                 |
      | id          | 22a1b088-5c9b-43b4-97f3-970ce5df77f2 |
      | instance_id | None                                 |
      | ip          | 203.0.113.16                         |
      | pool        | provider1                            |
      +-------------+--------------------------------------+
      
    2. Associate the floating IPv4 address with the instance.

      $ openstack ip floating add 203.0.113.16 selfservice-instance1
      

      Note

      This command provides no output.

    3. On the controller node or any host with access to the provider network, ping the floating IPv4 address of the instance.

      $ ping -c 4 203.0.113.16
      PING 203.0.113.16 (203.0.113.16) 56(84) bytes of data.
      64 bytes from 203.0.113.16: icmp_seq=1 ttl=63 time=3.41 ms
      64 bytes from 203.0.113.16: icmp_seq=2 ttl=63 time=1.67 ms
      64 bytes from 203.0.113.16: icmp_seq=3 ttl=63 time=1.47 ms
      64 bytes from 203.0.113.16: icmp_seq=4 ttl=63 time=1.59 ms
      
      --- 203.0.113.16 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3005ms
      rtt min/avg/max/mdev = 1.473/2.040/3.414/0.798 ms
      
  9. Access the instance.

  10. Test IPv4 and IPv6 connectivity to the Internet or other external network.

Network traffic flow

The following sections describe the flow of network traffic in several common scenarios. North-south network traffic travels between an instance and an external network such as the Internet. East-west network traffic travels between instances on the same or different networks. In all scenarios, the physical network infrastructure handles switching and routing among provider networks and external networks such as the Internet. Each case references one or more of the following components:

  • Provider network (VLAN)

    • VLAN ID 101 (tagged)
  • Self-service network 1 (VXLAN)

    • VXLAN ID (VNI) 101
  • Self-service network 2 (VXLAN)

    • VXLAN ID (VNI) 102
  • Self-service router

    • Gateway on the provider network

    • Interface on self-service network 1

    • Interface on self-service network 2

  • Instance 1

  • Instance 2

North-south scenario 1: Instance with a fixed IP address

For instances with a fixed IPv4 address, the network node performs SNAT on north-south traffic passing from self-service to external networks such as the Internet. For instances with a fixed IPv6 address, the network node performs conventional routing of traffic between self-service and external networks.

  • The instance resides on compute node 1 and uses self-service network 1.
  • The instance sends a packet to a host on the Internet.

The following steps involve compute node 1:

  1. The instance interface (1) forwards the packet to the security group bridge instance port (2) via veth pair.
  2. Security group rules (3) on the security group bridge handle firewalling and connection tracking for the packet.
  3. The security group bridge OVS port (4) forwards the packet to the OVS integration bridge security group port (5) via veth pair.
  4. The OVS integration bridge adds an internal VLAN tag to the packet.
  5. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
  6. The OVS integration bridge patch port (6) forwards the packet to the OVS tunnel bridge patch port (7).
  7. The OVS tunnel bridge (8) wraps the packet using VNI 101.
  8. The underlying physical interface (9) for overlay networks forwards the packet to the network node via the overlay network (10).

The following steps involve the network node:

  1. The underlying physical interface (11) for overlay networks forwards the packet to the OVS tunnel bridge (12).
  2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
  3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
  4. The OVS tunnel bridge patch port (13) forwards the packet to the OVS integration bridge patch port (14).
  5. The OVS integration bridge port for the self-service network (15) removes the internal VLAN tag and forwards the packet to the self-service network interface (16) in the router namespace.
    • For IPv4, the router performs SNAT on the packet which changes the source IP address to the router IP address on the provider network and sends it to the gateway IP address on the provider network via the gateway interface on the provider network (17).
    • For IPv6, the router sends the packet to the next-hop IP address, typically the gateway IP address on the provider network, via the provider gateway interface (17).
  6. The router forwards the packet to the OVS integration bridge port for the provider network (18).
  7. The OVS integration bridge adds the internal VLAN tag to the packet.
  8. The OVS integration bridge int-br-provider patch port (19) forwards the packet to the OVS provider bridge phy-br-provider patch port (20).
  9. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag 101.
  10. The OVS provider bridge provider network port (21) forwards the packet to the physical network interface (22).
  11. The physical network interface forwards the packet to the Internet via physical network infrastructure (23).

Note

Return traffic follows similar steps in reverse. However, without a floating IPv4 address, hosts on the provider or external networks cannot originate connections to instances on the self-service network.
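
To observe the SNAT behavior described above, you can inspect the NAT table inside the router namespace on the network node. A sketch, using the router namespace ID from the verification steps earlier (rule details vary by release):

  # ip netns exec qrouter-17db2a15-e024-46d0-9250-4cd4d336a2cc iptables -t nat -S | grep SNAT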

Self-service networks using Open vSwitch - network traffic flow - north/south scenario 1

North-south scenario 2: Instance with a floating IPv4 address

For instances with a floating IPv4 address, the network node performs SNAT on north-south traffic passing from the instance to external networks such as the Internet and DNAT on north-south traffic passing from external networks to the instance. Floating IP addresses and NAT do not apply to IPv6. Thus, the network node routes IPv6 traffic in this scenario.

  • The instance resides on compute node 1 and uses self-service network 1.
  • A host on the Internet sends a packet to the instance.

The following steps involve the network node:

  1. The physical network infrastructure (1) forwards the packet to the provider physical network interface (2).
  2. The provider physical network interface forwards the packet to the OVS provider bridge provider network port (3).
  3. The OVS provider bridge swaps actual VLAN tag 101 with the internal VLAN tag.
  4. The OVS provider bridge phy-br-provider port (4) forwards the packet to the OVS integration bridge int-br-provider port (5).
  5. The OVS integration bridge port for the provider network (6) removes the internal VLAN tag and forwards the packet to the provider network interface (7) in the router namespace.
    • For IPv4, the router performs DNAT on the packet which changes the destination IP address to the instance IP address on the self-service network and sends it to the gateway IP address on the self-service network via the self-service interface (8).
    • For IPv6, the router sends the packet to the next-hop IP address, typically the gateway IP address on the self-service network, via the self-service interface (8).
  6. The router forwards the packet to the OVS integration bridge port for the self-service network (9).
  7. The OVS integration bridge adds an internal VLAN tag to the packet.
  8. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
  9. The OVS integration bridge patch-tun patch port (10) forwards the packet to the OVS tunnel bridge patch-int patch port (11).
  10. The OVS tunnel bridge (12) wraps the packet using VNI 101.
  11. The underlying physical interface (13) for overlay networks forwards the packet to the compute node via the overlay network (14).

The following steps involve the compute node:

  1. The underlying physical interface (15) for overlay networks forwards the packet to the OVS tunnel bridge (16).
  2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
  3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
  4. The OVS tunnel bridge patch-int patch port (17) forwards the packet to the OVS integration bridge patch-tun patch port (18).
  5. The OVS integration bridge removes the internal VLAN tag from the packet.
  6. The OVS integration bridge security group port (19) forwards the packet to the security group bridge OVS port (20) via veth pair.
  7. Security group rules (21) on the security group bridge handle firewalling and connection tracking for the packet.
  8. The security group bridge instance port (22) forwards the packet to the instance interface (23) via veth pair.

Self-service networks using Open vSwitch - network traffic flow - north/south scenario 2

Note

Egress instance traffic flows similarly to north-south scenario 1, except that SNAT changes the source IP address of the packet to the floating IPv4 address rather than the router IP address on the provider network.

East-west scenario 1: Instances on the same network

Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the same network communicate directly between compute nodes containing those instances.

By default, the VXLAN protocol lacks knowledge of target location and uses multicast to discover it. After discovery, it stores the location in the local forwarding database. In large deployments, the discovery process can generate a significant amount of network traffic that all nodes must process. To eliminate the latter and generally increase efficiency, the Networking service includes the layer-2 population mechanism driver that automatically populates the forwarding database for VXLAN interfaces. The example configuration enables this driver. For more information, see ML2 plug-in.
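
You can observe the effect of this driver on a compute node by dumping the flows on the tunnel bridge; with l2_population enabled, the agent pre-populates unicast forwarding flows toward known VXLAN endpoints rather than flooding to discover them (a diagnostic sketch; output varies):

  # ovs-ofctl dump-flows br-tun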

  • Instance 1 resides on compute node 1 and uses self-service network 1.

  • Instance 2 resides on compute node 2 and uses self-service network 1.

  • Instance 1 sends a packet to instance 2.

The following steps involve compute node 1:

  1. The instance 1 interface (1) forwards the packet to the security group bridge instance port (2) via veth pair.
  2. Security group rules (3) on the security group bridge handle firewalling and connection tracking for the packet.
  3. The security group bridge OVS port (4) forwards the packet to the OVS integration bridge security group port (5) via veth pair.
  4. The OVS integration bridge adds an internal VLAN tag to the packet.
  5. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
  6. The OVS integration bridge patch port (6) forwards the packet to the OVS tunnel bridge patch port (7).
  7. The OVS tunnel bridge (8) wraps the packet using VNI 101.
  8. The underlying physical interface (9) for overlay networks forwards the packet to compute node 2 via the overlay network (10).

The following steps involve compute node 2:

  1. The underlying physical interface (11) for overlay networks forwards the packet to the OVS tunnel bridge (12).
  2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
  3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
  4. The OVS tunnel bridge patch-int patch port (13) forwards the packet to the OVS integration bridge patch-tun patch port (14).
  5. The OVS integration bridge removes the internal VLAN tag from the packet.
  6. The OVS integration bridge security group port (15) forwards the packet to the security group bridge OVS port (16) via veth pair.
  7. Security group rules (17) on the security group bridge handle firewalling and connection tracking for the packet.
  8. The security group bridge instance port (18) forwards the packet to the instance 2 interface (19) via veth pair.

Self-service networks using Open vSwitch - network traffic flow - east/west scenario 1

Note

Return traffic follows similar steps in reverse.

East-west scenario 2: Instances on different networks

Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate via a router on the network node. The self-service networks must reside on the same router.

  • Instance 1 resides on compute node 1 and uses self-service network 1.
  • Instance 2 resides on compute node 1 and uses self-service network 2.
  • Instance 1 sends a packet to instance 2.

Note

Both instances reside on the same compute node to illustrate how VXLAN enables multiple overlays to use the same layer-3 network.

The following steps involve the compute node:

  1. The instance interface (1) forwards the packet to the security group bridge instance port (2) via veth pair.
  2. Security group rules (3) on the security group bridge handle firewalling and connection tracking for the packet.
  3. The security group bridge OVS port (4) forwards the packet to the OVS integration bridge security group port (5) via veth pair.
  4. The OVS integration bridge adds an internal VLAN tag to the packet.
  5. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
  6. The OVS integration bridge patch-tun patch port (6) forwards the packet to the OVS tunnel bridge patch-int patch port (7).
  7. The OVS tunnel bridge (8) wraps the packet using VNI 101.
  8. The underlying physical interface (9) for overlay networks forwards the packet to the network node via the overlay network (10).

The following steps involve the network node:

  1. The underlying physical interface (11) for overlay networks forwards the packet to the OVS tunnel bridge (12).
  2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
  3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
  4. The OVS tunnel bridge patch-int patch port (13) forwards the packet to the OVS integration bridge patch-tun patch port (14).
  5. The OVS integration bridge port for self-service network 1 (15) removes the internal VLAN tag and forwards the packet to the self-service network 1 interface (16) in the router namespace.
  6. The router sends the packet to the next-hop IP address, typically the gateway IP address on self-service network 2, via the self-service network 2 interface (17).
  7. The router forwards the packet to the OVS integration bridge port for self-service network 2 (18).
  8. The OVS integration bridge adds the internal VLAN tag to the packet.
  9. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
  10. The OVS integration bridge patch-tun patch port (19) forwards the packet to the OVS tunnel bridge patch-int patch port (20).
  11. The OVS tunnel bridge (21) wraps the packet using VNI 102.
  12. The underlying physical interface (22) for overlay networks forwards the packet to the compute node via the overlay network (23).

The following steps involve the compute node:

  1. The underlying physical interface (24) for overlay networks forwards the packet to the OVS tunnel bridge (25).
  2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
  3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
  4. The OVS tunnel bridge patch-int patch port (26) forwards the packet to the OVS integration bridge patch-tun patch port (27).
  5. The OVS integration bridge removes the internal VLAN tag from the packet.
  6. The OVS integration bridge security group port (28) forwards the packet to the security group bridge OVS port (29) via veth pair.
  7. Security group rules (30) on the security group bridge handle firewalling and connection tracking for the packet.
  8. The security group bridge instance port (31) forwards the packet to the instance interface (32) via veth pair.

Note

Return traffic follows similar steps in reverse.

Self-service networks using Open vSwitch - network traffic flow - east/west scenario 2