This architecture example provides layer-2 connectivity between instances and the physical network infrastructure using VLAN (802.1q) tagging. It supports one untagged (flat) network and up to 4094 tagged (VLAN) networks. The actual quantity of VLAN networks depends on the physical network infrastructure. For more information on provider networks, see Provider networks.
Warning
Linux distributions often package older releases of Open vSwitch that can introduce issues during operation with the Networking service. We recommend using at least the latest long-term stable (LTS) release of Open vSwitch for the best experience and support from Open vSwitch. See http://www.openvswitch.org for available releases and installation instructions.
One controller node with the following components:
Networking server service (neutron-server) and ML2 plug-in.
Two compute nodes with the following components:
Networking Open vSwitch (OVS) layer-2 agent, DHCP agent, metadata agent, and OVS.
Note
Larger deployments typically deploy the DHCP and metadata agents on a subset of compute nodes to increase performance and redundancy. However, too many agents can overwhelm the message bus. Also, to further simplify any deployment, you can omit the metadata agent and use a configuration drive to provide metadata to instances.
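For example, if you omit the metadata agent in favor of a configuration drive, a minimal sketch of the corresponding nova.conf change on the compute nodes might look like the following; confirm the option against your Compute service release:

[DEFAULT]
# Always attach a configuration drive so instances receive metadata
# without relying on the metadata agent (illustrative sketch)
force_config_drive = True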
The following figure shows components and connectivity for one untagged (flat) network. In this particular case, the instance resides on the same compute node as the DHCP agent for the network. If the DHCP agent resides on another compute node, the latter only contains a DHCP namespace with a port on the OVS integration bridge.
The following figure describes virtual connectivity among components for two tagged (VLAN) networks. Essentially, all networks use a single OVS integration bridge with different internal VLAN tags. The internal VLAN tags almost always differ from the network VLAN assignment in the Networking service. Similar to the untagged network case, the DHCP agent may reside on a different compute node.
Note
These figures omit the controller node because it does not handle instance network traffic.
Use the following example configuration as a template to deploy provider networks in your environment.
Install the Networking service components that provide the neutron-server service and ML2 plug-in.
In the neutron.conf file:
Configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
rpc_backend = rabbit
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
...
[keystone_authtoken]
...
[oslo_messaging_rabbit]
...
[nova]
...
See the Installation Guide for your OpenStack release to obtain the appropriate configuration for the [database], [keystone_authtoken], [oslo_messaging_rabbit], and [nova] sections.
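For illustration only, a minimal sketch of those sections might look like the following; the controller hostname, service account names, and placeholder passwords (NEUTRON_DBPASS, RABBIT_PASS, NEUTRON_PASS, NOVA_PASS) are assumptions, so adapt them to your environment and release:

[database]
# Connection string for the Networking service database
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS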
Disable service plug-ins because provider networks do not require any. However, this breaks portions of the dashboard that manage the Networking service. See the Installation Guide for more information.
[DEFAULT]
service_plugins =
Enable two DHCP agents per network so both compute nodes can provide DHCP service for provider networks.
[DEFAULT]
dhcp_agents_per_network = 2
If necessary, configure MTU.
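For instance, if the physical provider network supports jumbo frames, a sketch of the MTU-related options might look like this; the 9000-byte value is an assumption for illustration:

In the neutron.conf file:

[DEFAULT]
# MTU of the underlying physical network (example value)
global_physnet_mtu = 9000

In the ml2_conf.ini file:

[ml2]
# Maximum MTU the ML2 plug-in calculates for networks (example value)
path_mtu = 9000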
In the ml2_conf.ini file:
Configure drivers and network types:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security
Configure network mappings:
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider
Note
The tenant_network_types option contains no value because the architecture does not support self-service networks.
Note
If you do not specify VLAN ID ranges for the provider value in the network_vlan_ranges option, any VLAN ID can be used.
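The option also accepts explicit VLAN ID ranges appended to the mapping; a sketch limiting the provider mapping to IDs 1000-1999 (example values only) could look like this:

[ml2_type_vlan]
network_vlan_ranges = provider:1000:1999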
Configure the security group driver:
[securitygroup]
firewall_driver = iptables_hybrid
Populate the database.
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Start the following services:
Server
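On a systemd-based host, starting the server might look like the following sketch; the unit name neutron-server is typical but can differ by distribution:

# systemctl enable neutron-server.service
# systemctl start neutron-server.service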
Install the Networking service OVS layer-2 agent, DHCP agent, and metadata agent.
Install OVS.
In the neutron.conf file, configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
rpc_backend = rabbit
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
...
[keystone_authtoken]
...
[oslo_messaging_rabbit]
...
[nova]
...
See the Installation Guide for your OpenStack release to obtain the appropriate configuration for the [database], [keystone_authtoken], [oslo_messaging_rabbit], and [nova] sections.
In the openvswitch_agent.ini file, configure the OVS agent:
[ovs]
bridge_mappings = provider:br-provider
[securitygroup]
firewall_driver = iptables_hybrid
In the dhcp_agent.ini file, configure the DHCP agent:
[DEFAULT]
interface_driver = openvswitch
enable_isolated_metadata = True
In the metadata_agent.ini file, configure the metadata agent:
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
The value of METADATA_SECRET must match the value of the same option in the [neutron] section of the nova.conf file.
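For reference, a sketch of the matching metadata-related options in the [neutron] section of nova.conf might look like the following; only the two relevant options are shown:

[neutron]
# Enable the metadata proxy and use the same shared secret as the metadata agent
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET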
Start the following services:
OVS
Create the OVS provider bridge br-provider:
$ ovs-vsctl add-br br-provider
Add the provider network interface as a port on the OVS provider bridge br-provider:
$ ovs-vsctl add-port br-provider PROVIDER_INTERFACE
Replace PROVIDER_INTERFACE with the name of the interface that handles provider networks. For example, eth1.
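To confirm the port was added, you can list the ports on the bridge; eth1 in the output below is only the example interface name:

$ ovs-vsctl list-ports br-provider
eth1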
Start the following services:
OVS agent
DHCP agent
Metadata agent
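On a systemd-based host, the agent service names usually match the binaries shown in the verification output below, so a sketch might look like this; unit names can differ by distribution:

# systemctl enable neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl start neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service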
Source the administrative project credentials.
Verify presence and operation of the agents:
$ neutron agent-list
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | :-) | True | neutron-metadata-agent |
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | :-) | True | neutron-openvswitch-agent |
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | :-) | True | neutron-dhcp-agent |
| a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | :-) | True | neutron-metadata-agent |
| af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | :-) | True | neutron-dhcp-agent |
| bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | :-) | True | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
The configuration supports one flat or multiple VLAN provider networks. For simplicity, the following procedure creates one flat provider network.
Source the administrative project credentials.
Create a flat network.
$ neutron net-create --shared --provider:physical_network provider \
--provider:network_type flat provider1
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| description | |
| id | 2b5ad13f-3859-4847-8db7-c695ab7dfce6 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1500 |
| name | provider1 |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | de59fed9547a4628b781df0862c837cf |
+---------------------------+--------------------------------------+
Note
The --shared option allows all projects to use this network.
Note
To create a VLAN network instead of a flat network, change --provider:network_type flat to --provider:network_type vlan and add --provider:segmentation_id with a value referencing the VLAN ID.
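For example, a sketch of creating a VLAN provider network with VLAN ID 101 (an assumed value) could look like this:

$ neutron net-create --shared --provider:physical_network provider \
  --provider:network_type vlan --provider:segmentation_id 101 provider-vlan101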
Create an IPv4 subnet on the provider network.
$ neutron subnet-create --name provider1-v4 --ip-version 4 \
--allocation-pool start=203.0.113.11,end=203.0.113.250 \
--gateway 203.0.113.1 --dns-nameserver 8.8.4.4 provider1 \
203.0.113.0/24
Created a new subnet:
+-------------------+---------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------+
| allocation_pools | {"start": "203.0.113.11", "end": "203.0.113.250"} |
| cidr | 203.0.113.0/24 |
| description | |
| dns_nameservers | 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| host_routes | |
| id | 7ce3fd60-1d45-4432-a9a5-4f7645629bd9 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider1-v4 |
| network_id | 2b5ad13f-3859-4847-8db7-c695ab7dfce6 |
| subnetpool_id | |
| tenant_id | de59fed9547a4628b781df0862c837cf |
+-------------------+---------------------------------------------------+
Create an IPv6 subnet on the provider network.
$ neutron subnet-create --name provider1-v6 --ip-version 6 \
--gateway fd00:203:0:113::1 --dns-nameserver 2001:4860:4860::8844 \
provider1 fd00:203:0:113::/64
Created a new subnet:
+-------------------+-----------------------------------------------------------------------------+
| Field | Value |
+-------------------+-----------------------------------------------------------------------------+
| allocation_pools | {"start": "fd00:203:0:113::2", "end": "fd00:203:0:113:ffff:ffff:ffff:ffff"} |
| cidr | fd00:203:0:113::/64 |
| description | |
| dns_nameservers | 2001:4860:4860::8844 |
| enable_dhcp | True |
| gateway_ip | fd00:203:0:113::1 |
| host_routes | |
| id | 773ea59c-e8c1-4254-baf3-27d5b2d42eb5 |
| ip_version | 6 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider1-v6 |
| network_id | 2b5ad13f-3859-4847-8db7-c695ab7dfce6 |
| subnetpool_id | |
| tenant_id | de59fed9547a4628b781df0862c837cf |
+-------------------+-----------------------------------------------------------------------------+
Note
By default, IPv6 provider networks rely on physical network infrastructure for stateless address autoconfiguration (SLAAC) and router advertisement.
On each compute node, verify creation of the qdhcp namespace.
# ip netns
qdhcp-8b868082-e312-4110-8627-298109d4401c
Source credentials for a regular (non-administrative) project.
Create the appropriate security group rules to allow ping and SSH access to instances using the network.
$ openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 2b45fbf8-45db-486c-915f-3f254740ae76 |
| ip_protocol | icmp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
$ openstack security group rule create --proto ipv6-icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 2b45fbf8-45db-486c-915f-3f254740ae76 |
| ip_protocol | ipv6-icmp |
| ip_range | ::/0 |
| parent_group_id | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
$ openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 86e5cc55-bb08-447a-a807-d36e2b9f56af |
| ip_protocol | tcp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | d35188d0-6b10-4fb9-a6b9-891ed3feeb54 |
| port_range | 22:22 |
| remote_security_group | |
+-----------------------+--------------------------------------+
Launch an instance with an interface on the provider network. For example, a CirrOS image using flavor ID 1.
$ openstack server create --flavor 1 --image cirros \
--nic net-id=NETWORK_ID provider-instance1
Replace NETWORK_ID with the ID of the provider network.
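If you do not know the network ID, you can obtain it from the id column of the network list, for example:

$ neutron net-list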
Determine the IPv4 and IPv6 addresses of the instance.
$ openstack server list
+--------------------------------------+--------------------+--------+--------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+--------------------+--------+--------------------------------------------+
| 018e0ae2-b43c-4271-a78d-62653dd03285 | provider-instance1 | ACTIVE | provider1=203.0.113.13, fd00:203:0:113::13 |
+--------------------------------------+--------------------+--------+--------------------------------------------+
Note
The IPv4 and IPv6 addresses appear similar only for illustration purposes.
On the controller node or any host with access to the provider network, ping the IPv4 and IPv6 addresses of the instance.
$ ping -c 4 203.0.113.13
PING 203.0.113.13 (203.0.113.13) 56(84) bytes of data.
64 bytes from 203.0.113.13: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.13: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.13: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.13: icmp_req=4 ttl=63 time=0.929 ms
--- 203.0.113.13 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
$ ping6 -c 4 fd00:203:0:113::13
PING fd00:203:0:113::13(fd00:203:0:113::13) 56 data bytes
64 bytes from fd00:203:0:113::13: icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from fd00:203:0:113::13: icmp_seq=2 ttl=64 time=0.683 ms
64 bytes from fd00:203:0:113::13: icmp_seq=3 ttl=64 time=0.762 ms
64 bytes from fd00:203:0:113::13: icmp_seq=4 ttl=64 time=0.486 ms
--- fd00:203:0:113::13 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.486/0.796/1.253/0.282 ms
Obtain access to the instance.
Test IPv4 and IPv6 connectivity to the Internet or other external network.
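From within the instance, a sketch of these checks might target any reachable external hosts; the destinations below are example values and assume working DNS resolution and upstream routing:

$ ping -c 4 openstack.org
$ ping6 -c 4 ipv6.google.com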
The following sections describe the flow of network traffic in several common scenarios. North-south network traffic travels between an instance and an external network such as the Internet. East-west network traffic travels between instances on the same or different networks. In all scenarios, the physical network infrastructure handles switching and routing among provider networks and external networks such as the Internet. Each case references one or more of the following components:
Instance 1
Instance 2
The following steps involve compute node 1:
The following steps involve the physical network infrastructure:
Note
Return traffic follows similar steps in reverse.
Instances on the same network communicate directly between compute nodes containing those instances.
Instance 1 resides on compute node 1 and uses provider network 1.
Instance 2 resides on compute node 2 and uses provider network 1.
Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
The following steps involve the physical network infrastructure:
The following steps involve compute node 2:
Note
Return traffic follows similar steps in reverse.
Instances communicate via a router on the physical network infrastructure.
Instance 1 resides on compute node 1 and uses provider network 1.
Instance 2 resides on compute node 1 and uses provider network 2.
Instance 1 sends a packet to instance 2.
Note
Both instances reside on the same compute node to illustrate how VLAN tagging enables multiple logical layer-2 networks to use the same physical layer-2 network.
The following steps involve the compute node:
The following steps involve the physical network infrastructure:
The following steps involve the compute node:
Note
Return traffic follows similar steps in reverse.