Merge "[network-guide] Update legacy scenarios"

Jenkins 2015-07-02 07:19:54 +00:00 committed by Gerrit Code Review
commit d18906e9c3
34 changed files with 652 additions and 623 deletions


@ -6,7 +6,7 @@ Deployment scenarios
:maxdepth: 2
-deploy_scenario1a.rst
+scenario_legacy_ovs.rst
-deploy_scenario1b.rst
+scenario_legacy_lb.rst
deploy_scenario2.rst
deploy_scenario3a.rst
deploy_scenario3b.rst

[Figure updates: several diagram source files and PNG images changed in this commit; binary and oversized file diffs are not shown.]

@ -4,13 +4,13 @@ Scenario: Legacy with Open vSwitch
This scenario describes a legacy (basic) implementation of the
OpenStack Networking service using the ML2 plug-in with Open vSwitch.
-The example configuration creates one flat external network and VXLAN
-project networks. However, this configuration also supports VLAN
+The example configuration creates one flat external network and one VXLAN
+project (tenant) network. However, this configuration also supports VLAN
external networks, VLAN project networks, and GRE project networks.
To improve understanding of network traffic flow, the network and compute
-nodes contain a separate network interface for project VLAN networks. In
-production environments, project VLAN networks can use any Open vSwitch
+nodes contain a separate network interface for VLAN project networks. In
+production environments, VLAN project networks can use any Open vSwitch
bridge with access to a network interface. For example, the ``br-tun``
bridge.
@ -23,7 +23,9 @@ the Networking service immediately depends on the Identity service and the
Compute service immediately depends on the Networking service. These
dependencies lack services such as the Image service because the Networking
service does not immediately depend on it. However, the Compute service
-depends on the Image service to launch an instance.
+depends on the Image service to launch an instance. The example configuration
+in this scenario assumes basic configuration knowledge of Networking service
+components.
Infrastructure
--------------
@ -31,13 +33,13 @@ Infrastructure
#. One controller node with one network interface: management.
#. One network node with four network interfaces: management, project tunnel
-networks, project VLAN networks, and external (typically the Internet).
+networks, VLAN project networks, and external (typically the Internet).
The Open vSwitch bridge ``br-vlan`` must contain a port on the VLAN
interface and Open vSwitch bridge ``br-ex`` must contain a port on the
external interface.
#. At least one compute node with three network interfaces: management,
-project tunnel networks, and project VLAN networks. The Open vSwitch
+project tunnel networks, and VLAN project networks. The Open vSwitch
bridge ``br-vlan`` must contain a port on the VLAN interface.
.. image:: figures/scenario-legacy-hw.png
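The bridge and port requirements listed above are typically satisfied with ``ovs-vsctl`` before starting the agents. A minimal sketch, assuming the placeholder names ``VLAN_INTERFACE`` and ``EXTERNAL_INTERFACE`` for the physical interfaces (not part of the original guide):
.. code-block:: console
$ ovs-vsctl add-br br-vlan
$ ovs-vsctl add-port br-vlan VLAN_INTERFACE
$ ovs-vsctl add-br br-ex
$ ovs-vsctl add-port br-ex EXTERNAL_INTERFACE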
@ -125,8 +127,7 @@ The network node contains the following components:
networks. They also route metadata traffic between instances and the
metadata agent.
-#. Metadata agent handling metadata operations. The metadata agent
-handles metadata operations for instances.
+#. Metadata agent handling metadata operations for instances.
.. image:: figures/scenario-legacy-ovs-network1.png
:alt: Network node components - overview
@ -164,25 +165,25 @@ Case 1: North-south for instances with a fixed IP address
For instances with a fixed IP address, the network node routes
*north-south* network traffic between project and external networks.
-Instance 1 resides on compute node 1 and uses project network 1.
+Instance 1 resides on compute node 1 and uses a project network.
The instance sends a packet to a host on the external network.
-* External network 1
+* External network
* Network 203.0.113.0/24
-* Gateway 203.0.113.1 with MAC address *EG1*
+* Gateway 203.0.113.1 with MAC address *EG*
* Floating IP range 203.0.113.101 to 203.0.113.200
-* Tenant network 1 router interface 203.0.113.101 *TR1*
+* Project network router interface 203.0.113.101 *TR*
-* Tenant network 1
+* Project network
* Network 192.168.1.0/24
-* Gateway 192.168.1.1 with MAC address *TG1*
+* Gateway 192.168.1.1 with MAC address *TG*
* Compute node 1
@ -191,7 +192,7 @@ The instance sends a packet to a host on the external network.
The following steps involve compute node 1:
#. The instance 1 ``tap`` interface (1) forwards the packet to the Linux
-bridge ``qbr``. The packet contains destination MAC address *TG1*
+bridge ``qbr``. The packet contains destination MAC address *TG*
because the destination resides on another network.
#. Security group rules (2) on the Linux bridge ``qbr`` handle state tracking
@ -201,7 +202,7 @@ The following steps involve compute node 1:
integration bridge ``br-int``.
#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for
-project network 1.
+the project network.
#. For VLAN project networks:
@ -209,7 +210,7 @@ The following steps involve compute node 1:
the Open vSwitch VLAN bridge ``br-vlan``.
#. The Open vSwitch VLAN bridge ``br-vlan`` replaces the internal tag
-with the actual VLAN tag of project network 1.
+with the actual VLAN tag of the project network.
#. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the
network node via the VLAN interface.
@ -220,7 +221,7 @@ The following steps involve compute node 1:
the Open vSwitch tunnel bridge ``br-tun``.
#. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN
-or GRE tunnel and adds a tag to identify the project network 1.
+or GRE tunnel and adds a tag to identify the project network.
#. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the
network node via the tunnel interface.
@ -236,7 +237,7 @@ The following steps involve the network node:
Open vSwitch integration bridge ``br-int``.
#. The Open vSwitch integration bridge ``br-int`` replaces the actual
-VLAN tag of project network 1 with the internal tag.
+VLAN tag of the project network with the internal tag.
#. For VXLAN and GRE project networks:
@ -244,18 +245,18 @@ The following steps involve the network node:
bridge ``br-tun``.
#. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds
-the internal tag for project network 1.
+the internal tag for the project network.
#. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the
Open vSwitch integration bridge ``br-int``.
#. The Open vSwitch integration bridge ``br-int`` forwards the packet to
the ``qr`` interface (3) in the router namespace ``qrouter``. The ``qr``
-interface contains the project network 1 gateway IP address *TG1*.
+interface contains the project network gateway IP address *TG*.
#. The *iptables* service (4) performs SNAT on the packet using the ``qg``
interface (5) as the source IP address. The ``qg`` interface contains
-the project network 1 router interface IP address *TR1*.
+the project network router interface IP address *TR*.
#. The router namespace ``qrouter`` forwards the packet to the Open vSwitch
integration bridge ``br-int`` via the ``qg`` interface.
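As a rough illustration of the SNAT step above (a simplified sketch, not the exact chains the L3 agent programs), the translation inside the ``qrouter`` namespace shown later in this guide is conceptually equivalent to:
.. code-block:: console
$ ip netns exec qrouter-4d7928a0-4a3c-4b99-b01b-97da2f97e279 \
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source 203.0.113.101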
@ -278,25 +279,25 @@ Case 2: North-south for instances with a floating IP address
For instances with a floating IP address, the network node routes
*north-south* network traffic between project and external networks.
-Instance 1 resides on compute node 1 and uses project network 1.
+Instance 1 resides on compute node 1 and uses a project network.
The instance receives a packet from a host on the external network.
-* External network 1
+* External network
* Network 203.0.113.0/24
-* Gateway 203.0.113.1 with MAC address *EG1*
+* Gateway 203.0.113.1 with MAC address *EG*
* Floating IP range 203.0.113.101 to 203.0.113.200
-* Tenant network 1 router interface 203.0.113.101 *TR1*
+* Project network router interface 203.0.113.101 *TR*
-* Tenant network 1
+* Project network
* Network 192.168.1.0/24
-* Gateway 192.168.1.1 with MAC address *TG1*
+* Gateway 192.168.1.1 with MAC address *TG*
* Compute node 1
@ -317,13 +318,13 @@ The following steps involve the network node:
#. The *iptables* service (2) performs DNAT on the packet using the ``qr``
interface (3) as the source IP address. The ``qr`` interface contains
-the project network 1 router interface IP address *TR1*.
+the project network router interface IP address *TR1*.
#. The router namespace ``qrouter`` forwards the packet to the Open vSwitch
integration bridge ``br-int``.
#. The Open vSwitch integration bridge ``br-int`` adds the internal tag for
-project network 1.
+the project network.
#. For VLAN project networks:
@ -331,18 +332,18 @@ The following steps involve the network node:
the Open vSwitch VLAN bridge ``br-vlan``.
#. The Open vSwitch VLAN bridge ``br-vlan`` replaces the internal tag
-with the actual VLAN tag of project network 1.
+with the actual VLAN tag of the project network.
#. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to the
compute node via the VLAN interface.
-#. For VXLAN and GRE networks:
+#. For VXLAN and GRE project networks:
#. The Open vSwitch integration bridge ``br-int`` forwards the packet to
the Open vSwitch tunnel bridge ``br-tun``.
#. The Open vSwitch tunnel bridge ``br-tun`` wraps the packet in a VXLAN
-or GRE tunnel and adds a tag to identify project network 1.
+or GRE tunnel and adds a tag to identify the project network.
#. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the
compute node via the tunnel interface.
@ -358,7 +359,7 @@ The following steps involve compute node 1:
Open vSwitch integration bridge ``br-int``.
#. The Open vSwitch integration bridge ``br-int`` replaces the actual
-VLAN tag project network 1 with the internal tag.
+VLAN tag the project network with the internal tag.
#. For VXLAN and GRE project networks:
@ -366,7 +367,7 @@ The following steps involve compute node 1:
bridge ``br-tun``.
#. The Open vSwitch tunnel bridge ``br-tun`` unwraps the packet and adds
-the internal tag for project network 1.
+the internal tag for the project network.
#. The Open vSwitch tunnel bridge ``br-tun`` forwards the packet to the
Open vSwitch integration bridge ``br-int``.
@ -399,13 +400,13 @@ reside on the same project router.
Instance 1 sends a packet to instance 2.
-* Tenant network 1
+* Project network 1
* Network: 192.168.1.0/24
* Gateway: 192.168.1.1 with MAC address *TG1*
-* Tenant network 2
+* Project network 2
* Network: 192.168.2.0/24
@ -505,7 +506,7 @@ The following steps involve the network node:
#. The Open vSwitch VLAN bridge ``br-vlan`` forwards the packet to compute
node 2 via the VLAN interface.
-#. For VXLAN and GRE networks:
+#. For VXLAN and GRE project networks:
#. The Open vSwitch integration bridge ``br-int`` forwards the packet to
the Open vSwitch tunnel bridge ``br-tun``.
@ -666,7 +667,9 @@ scenario in your environment.
Controller node
---------------
-#. Configure base options. Edit the :file:`/etc/neutron/neutron.conf` file::
+#. Configure common options. Edit the :file:`/etc/neutron/neutron.conf` file:
+.. code-block:: ini
[DEFAULT]
verbose = True
@ -674,7 +677,10 @@ Controller node
service_plugins = router
allow_overlapping_ips = True
-#. Configure the ML2 plug-in. Edit the :file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file::
+#. Configure the ML2 plug-in. Edit the
+:file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file:
+.. code-block:: ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
@ -685,26 +691,32 @@ Controller node
flat_networks = external
[ml2_type_vlan]
-network_vlan_ranges = vlan:1001:2000
+network_vlan_ranges = external,vlan:MIN_VLAN_ID:MAX_VLAN_ID
[ml2_type_gre]
-tunnel_id_ranges = 1001:2000
+tunnel_id_ranges = MIN_GRE_ID:MAX_GRE_ID
[ml2_type_vxlan]
-vni_ranges = 1001:2000
+vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID
vxlan_group = 239.1.1.1
[securitygroup]
-firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True
+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
-Adjust the VLAN tag, GRE tunnel ID, and VXLAN tunnel ID ranges for
-your environment.
+Replace ``MIN_VLAN_ID``, ``MAX_VLAN_ID``, ``MIN_GRE_ID``, ``MAX_GRE_ID``,
+``MIN_VXLAN_ID``, and ``MAX_VXLAN_ID`` with VLAN, GRE, and VXLAN ID minimum
+and maximum values suitable for your environment.
.. note::
The first value in the ``tenant_network_types`` option becomes the
-default project network type when a non-privileged user creates a network.
+default project network type when a non-privileged user creates a
+network.
+.. note::
+The ``external`` value in the ``network_vlan_ranges`` option lacks VLAN
+ID ranges to support use of arbitrary VLAN IDs by privileged users.
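For example, a small deployment might fill in the ranges as follows (illustrative values only, echoing the ones this commit replaces):
.. code-block:: ini
[ml2_type_vlan]
network_vlan_ranges = external,vlan:1001:2000
[ml2_type_gre]
tunnel_id_ranges = 1001:2000
[ml2_type_vxlan]
vni_ranges = 1001:2000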
#. Start the following services:
@ -714,23 +726,31 @@ Network node
------------
#. Configure the kernel to enable packet forwarding and disable reverse path
-filtering. Edit the :file:`/etc/sysctl.conf` file::
+filtering. Edit the :file:`/etc/sysctl.conf` file:
+.. code-block:: ini
net.ipv4.ip_forward=1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
-#. Load the new kernel configuration::
+#. Load the new kernel configuration:
+.. code-block:: console
$ sysctl -p
-#. Configure base options. Edit the :file:`/etc/neutron/neutron.conf` file::
+#. Configure common options. Edit the :file:`/etc/neutron/neutron.conf` file:
+.. code-block:: console
[DEFAULT]
verbose = True
-#. Configure the L2 agent. Edit the
-:file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file::
+#. Configure the Open vSwitch agent. Edit the
+:file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file:
+.. code-block:: ini
[ovs]
local_ip = TUNNEL_INTERFACE_IP_ADDRESS
@ -742,14 +762,16 @@ Network node
tunnel_types = gre,vxlan
[securitygroup]
-firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True
+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
-Replace TUNNEL_INTERFACE_IP_ADDRESS with the IP address of the interface
-that handles project GRE/VXLAN tunnel networks.
+Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface
+that handles GRE/VXLAN project networks.
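For example, if the network node's tunnel interface carries 10.0.1.21 (an illustrative address, not from the original guide), the option becomes:
.. code-block:: ini
[ovs]
local_ip = 10.0.1.21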
-#. Configure the L3 agent. Edit the :file:`/etc/neutron/l3_agent.ini` file::
+#. Configure the L3 agent. Edit the :file:`/etc/neutron/l3_agent.ini` file:
+.. code-block:: ini
[DEFAULT]
verbose = True
@ -763,7 +785,9 @@ Network node
no value.
#. Configure the DHCP agent. Edit the :file:`/etc/neutron/dhcp_agent.ini`
-file::
+file:
+.. code-block:: ini
[DEFAULT]
verbose = True
@ -774,17 +798,23 @@ Network node
#. (Optional) Reduce MTU for VXLAN/GRE project networks.
-#. Edit the :file:`/etc/neutron/dhcp_agent.ini` file::
+#. Edit the :file:`/etc/neutron/dhcp_agent.ini` file:
+.. code-block:: ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
-#. Edit the :file:`/etc/neutron/dnsmasq-neutron.conf` file::
+#. Edit the :file:`/etc/neutron/dnsmasq-neutron.conf` file:
+.. code-block:: ini
dhcp-option-force=26,1450
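DHCP option 26 sets the interface MTU on instances; the value 1450 leaves room for the roughly 50 bytes of VXLAN encapsulation overhead on a standard 1500-byte MTU (GRE overhead is smaller, so 1450 is safe for both).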
#. Configure the metadata agent. Edit the
-:file:`/etc/neutron/metadata_agent.ini` file::
+:file:`/etc/neutron/metadata_agent.ini` file:
+.. code-block:: ini
[DEFAULT]
verbose = True
@ -793,11 +823,6 @@ Network node
Replace ``METADATA_SECRET`` with a suitable value for your environment.
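The Compute service must use the same shared secret so it trusts the metadata proxy; a sketch of the matching options in :file:`/etc/nova/nova.conf` (Kilo-era section layout, adjust for your release):
.. code-block:: ini
[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET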
-.. note::
-The metadata agent also requires authentication options. See the
-configuration reference guide for your OpenStack release for more
-information.
#. Start the following services:
* Open vSwitch
@ -809,28 +834,33 @@ Network node
Compute nodes
-------------
The compute nodes provide switching services and handle security groups
for instances.
+#. Configure the kernel to enable *iptables* on bridges and disable reverse
+path filtering. Edit the :file:`/etc/sysctl.conf` file:
-#. Configure the kernel to enable packet forwarding and disable reverse path
-filtering. Edit the :file:`/etc/sysctl.conf` file::
+.. code-block:: ini
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
+net.bridge.bridge-nf-call-iptables=1
+net.bridge.bridge-nf-call-ip6tables=1
-#. Load the new kernel configuration::
+#. Load the new kernel configuration:
+.. code-block:: console
$ sysctl -p
-#. Configure base options. Edit the :file:`/etc/neutron/neutron.conf` file::
+#. Configure common options. Edit the :file:`/etc/neutron/neutron.conf` file:
+.. code-block:: ini
[DEFAULT]
verbose = True
-#. Configure the L2 agent. Edit the
-:file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file::
+#. Configure the Open vSwitch agent. Edit the
+:file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file:
+.. code-block:: ini
[ovs]
local_ip = TUNNEL_INTERFACE_IP_ADDRESS
@ -842,12 +872,12 @@ for instances.
tunnel_types = gre,vxlan
[securitygroup]
-firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True
+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
-Replace TUNNEL_INTERFACE_IP_ADDRESS with the IP address of the interface
-that handles project GRE/VXLAN tunnel networks.
+Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface
+that handles GRE/VXLAN project networks.
#. Start the following services:
@ -859,7 +889,9 @@ Verify service operation
#. Source the administrative project credentials.
-#. Verify presence and operation of the agents::
+#. Verify presence and operation of the agents:
+.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
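The full listing is truncated here; it should show the DHCP, L3, metadata, and Open vSwitch agents on the appropriate nodes with ``:-)`` in the ``alive`` column (an agent reported as ``xxx`` is not checking in).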
@ -880,11 +912,13 @@ This example creates a flat external network and a VXLAN project network.
#. Source the administrative project credentials.
-#. Create the external network::
+#. Create the external network:
+.. code-block:: console
$ neutron net-create ext-net --router:external True \
--provider:physical_network external --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
@ -901,11 +935,13 @@ This example creates a flat external network and a VXLAN project network.
| tenant_id | 96393622940e47728b6dcdb2ef405f50 |
+---------------------------+--------------------------------------+
-#. Create a subnet on the external network::
+#. Create a subnet on the external network:
+.. code-block:: console
$ neutron subnet-create ext-net --name ext-subnet --allocation-pool \
start=203.0.113.101,end=203.0.113.200 --disable-dhcp \
--gateway 203.0.113.1 203.0.113.0/24
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
@ -928,24 +964,29 @@ This example creates a flat external network and a VXLAN project network.
.. note::
The example configuration contains ``vlan`` as the first project network
type. Only a privileged user can create other types of networks such as
-VXLAN or GRE. The following commands use the ``admin`` project credentials to
-create a VXLAN project network.
+GRE or VXLAN. The following commands use the ``admin`` project credentials
+to create a VXLAN project network.
-#. Obtain the ``demo`` project ID::
+#. Obtain the ``demo`` project ID:
+.. code-block:: console
$ keystone tenant-get demo
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
-| description | Demo Tenant |
+| description | Demo Project |
| enabled | True |
| id | 443cd1596b2e46d49965750771ebbfe1 |
| name | demo |
+-------------+----------------------------------+
-#. Create the project network::
+#. Create the project network:
-$ neutron net-create demo-net --tenant-id 443cd1596b2e46d49965750771ebbfe1 --provider:network_type vxlan
+.. code-block:: console
+$ neutron net-create demo-net --tenant-id 443cd1596b2e46d49965750771ebbfe1 \
+--provider:network_type vxlan
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
@ -965,9 +1006,12 @@ This example creates a flat external network and a VXLAN project network.
#. Source the regular project credentials.
-#. Create a subnet on the project network::
+#. Create a subnet on the project network:
-$ neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 192.168.1.0/24
+.. code-block:: console
+$ neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 \
+192.168.1.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field | Value |
@ -987,7 +1031,9 @@ This example creates a flat external network and a VXLAN project network.
| tenant_id | 443cd1596b2e46d49965750771ebbfe1 |
+-------------------+--------------------------------------------------+
-#. Create a project router::
+#. Create a project router:
+.. code-block:: console
$ neutron router-create demo-router
Created a new router:
@ -1003,12 +1049,16 @@ This example creates a flat external network and a VXLAN project network.
| tenant_id | 443cd1596b2e46d49965750771ebbfe1 |
+-----------------------+--------------------------------------+
-#. Add the project subnet as an interface on the router::
+#. Add the project subnet as an interface on the router:
+.. code-block:: console
$ neutron router-interface-add demo-router demo-subnet
Added interface 0fa57069-29fd-4795-87b7-c123829137e9 to router demo-router.
-#. Add a gateway to the external network on the router::
+#. Add a gateway to the external network on the router:
+.. code-block:: console
$ neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
@ -1017,16 +1067,22 @@ Verify network operation
------------------------
#. On the network node, verify creation of the ``qrouter`` and ``qdhcp``
-namespaces. The ``qdhcp`` namespace might not exist until launching
-an instance::
+namespaces:
+.. code-block:: console
$ ip netns
qrouter-4d7928a0-4a3c-4b99-b01b-97da2f97e279
qdhcp-353f5937-a2d3-41ba-8225-fa1af2538141
+.. note::
+The ``qdhcp`` namespace might not exist until launching an instance.
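To inspect a namespace, prefix commands with ``ip netns exec``; for example, to list the ``qr`` and ``qg`` interfaces described earlier in this scenario:
.. code-block:: console
$ ip netns exec qrouter-4d7928a0-4a3c-4b99-b01b-97da2f97e279 ip addr show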
#. On the controller node, ping the project router gateway IP address,
typically the lowest IP address in the external network subnet
-allocation range::
+allocation range:
+.. code-block:: console
$ ping -c 4 203.0.113.101
PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data.
@ -1045,7 +1101,9 @@ Verify network operation
#. Obtain console access to the instance.
-#. Test connectivity to the project router::
+#. Test connectivity to the project router:
+.. code-block:: console
$ ping -c 4 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
@ -1058,7 +1116,9 @@ Verify network operation
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
-#. Test connectivity to the Internet::
+#. Test connectivity to the Internet:
+.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
@ -1072,7 +1132,9 @@ Verify network operation
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
#. Create the appropriate security group rules to allow ping and SSH access
-to the instance. For example::
+to the instance. For example:
+.. code-block:: console
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
@ -1088,7 +1150,9 @@ Verify network operation
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
-#. Create a floating IP address::
+#. Create a floating IP address on the external network:
+.. code-block:: console
$ neutron floatingip-create ext-net
+---------------------+--------------------------------------+
@ -1104,11 +1168,15 @@ Verify network operation
| tenant_id | 443cd1596b2e46d49965750771ebbfe1 |
+---------------------+--------------------------------------+
-#. Associate the floating IP address with the instance::
+#. Associate the floating IP address with the instance:
+.. code-block:: console
$ nova floating-ip-associate demo-instance1 203.0.113.102
-#. Verify addition of the floating IP address to the instance::
+#. Verify addition of the floating IP address to the instance:
+.. code-block:: console
$ nova list
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
@ -1118,7 +1186,9 @@ Verify network operation
+--------------------------------------+----------------+--------+------------+-------------+-----------------------------------------+
#. On the controller node or any host with access to the external network,
-ping the floating IP address associated with the instance::
+ping the floating IP address associated with the instance:
+.. code-block:: console
$ ping -c 4 203.0.113.102
PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data.