Remove obsolete Networking DocBook XML files

Now that we publish the RST files, it's time to remove the XML
files.

Change-Id: Ia8ba03fc56590951036ee86cda96d153f90614b4
Andreas Jaeger 2015-05-01 11:35:36 +02:00
parent b932cfeccc
commit 5e09a5b4cc
36 changed files with 3 additions and 4032 deletions

View File

@@ -1,72 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="openstack-networking-juno">
<title>OpenStack Networking Guide</title>
<?rax
status.bar.text.font.size="40px"
status.bar.text="Draft"?>
<?rax title.font.size="28px" subtitle.font.size="28px"?>
<titleabbrev>Networking Guide</titleabbrev>
<info>
<author>
<personname>
<firstname/>
<surname/>
</personname>
<affiliation>
<orgname>OpenStack Foundation</orgname>
</affiliation>
</author>
<copyright>
<year>2014</year>
<year>2015</year>
<holder>OpenStack Foundation</holder>
</copyright>
<releaseinfo>current</releaseinfo>
<productname>OpenStack</productname>
<pubdate/>
<legalnotice role="apache2">
<annotation>
<remark>Copyright details are filled in by the template.</remark>
</annotation>
</legalnotice>
<legalnotice role="cc-by-sa">
<annotation>
<remark>Remaining licensing details are filled in by the template.</remark>
</annotation>
</legalnotice>
<abstract>
<para>
This guide targets OpenStack administrators seeking to
deploy and manage OpenStack Networking.
</para>
</abstract>
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<revision>
<date>2014-08-08</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Creation of document.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
</revhistory>
</info>
<!-- Chapters are referred from the book file through these include statements. You can add additional chapters using these types of statements. -->
<xi:include href="../common/ch_preface.xml"/>
<xi:include href="../common/ch_getstart.xml"/>
<xi:include href="ch_intro.xml"/>
<xi:include href="ch_networking-architecture.xml"/>
<xi:include href="ch_plugins.xml"/>
<xi:include href="ch_deployment.xml"/>
<xi:include href="ch_scalability-HA.xml"/>
<xi:include href="ch_advanced.xml"/>
<xi:include href="ch_troubleshooting.xml"/>
<xi:include href="../common/app_support.xml"/>
<glossary role="auto"/>
</book>

View File

@@ -1,26 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_advanced">
<title>Advanced configuration options</title>
<para>
The following topics describe advanced configuration options for various system components, such as options whose defaults work but that you might want to customize. After installing from packages, <varname>$NEUTRON_CONF_DIR</varname> is <filename>/etc/neutron</filename>.
</para>
<xi:include href="section_networking_adv_agent.xml"/>
<xi:include href="section_networking_adv_operational_features.xml"/>
<xi:include href="section_networking_adv_features.xml"/>
<section xml:id="faas">
<title>Firewall-as-a-Service</title>
<para>FIXME</para>
</section>
<section xml:id="vpnaas">
<title>VPN-as-a-Service</title>
<para>FIXME</para>
</section>
<section xml:id="servicechaining">
<title>Service chaining</title>
<para>FIXME</para>
</section>
</chapter>

View File

@@ -1,67 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_deployment">
<title>Deployment</title>
<para>
OpenStack Networking offers considerable flexibility when
deploying networking in support of a compute environment. As a
result, the exact layout of a deployment depends on a combination
of expected workloads, expected scale, and available hardware.
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/Neutron-PhysNet-Diagram.png" align="center" width="6in"/>
</imageobject>
</mediaobject>
<para>
For demonstration purposes, this chapter concentrates on a
networking deployment that consists of these types of nodes:
</para>
<itemizedlist>
<listitem>
<para>
<emphasis>Service node:</emphasis>
The service node exposes the networking API to clients and
places incoming requests on a message queue, where the other
nodes pick them up and fulfill them. The service node hosts
both the networking service itself and the active networking
plug-in. In environments that use controller nodes to host
the client-facing APIs and the schedulers for all services,
the controller node also fulfills the role of service node as
the term is used in this chapter.
</para>
</listitem>
<listitem>
<para>
<emphasis>Network node:</emphasis>
The network node handles the majority of the networking
workload. It hosts the DHCP agent, the Layer-3 (L3) agent, the
Layer-2 (L2) agent, and the metadata proxy. For plug-ins that
require an agent, it also runs an instance of the plug-in agent
(as do all other systems that handle data packets in an
environment where such plug-ins are in use). Both the Open
vSwitch and Linux Bridge mechanism drivers include an agent.
</para>
</listitem>
<listitem>
<para>
<emphasis>Compute node:</emphasis>
The compute node hosts the compute instances themselves. To
connect compute instances to the networking services, compute
nodes must also run the L2 agent. Like all other systems that
handle data packets, they must also run an instance of the
plug-in agent.
</para>
</listitem>
</itemizedlist>
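The agent placement described above can be summarized as a simple mapping. A hedged sketch (agent names abbreviated for illustration; the exact set depends on the configured plug-in):

```python
# Illustrative summary of which networking components this chapter
# places on each node type; not an exhaustive or authoritative list.
AGENT_PLACEMENT = {
    "service": ["neutron-server", "active-plug-in"],
    "network": ["dhcp-agent", "l3-agent", "l2-agent",
                "metadata-proxy", "plug-in-agent"],
    "compute": ["l2-agent", "plug-in-agent"],
}

def nodes_running(agent):
    """Return the node types that run a given agent."""
    return sorted(role for role, agents in AGENT_PLACEMENT.items()
                  if agent in agents)

print(nodes_running("plug-in-agent"))  # ['compute', 'network']
```

As the lookup shows, the plug-in agent runs on every node that handles data packets, while neutron-server is confined to the service node.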
<xi:include href="section_deployment-architecture.xml"/>
<xi:include href="section_deployment-authentication.xml"/>
<!--<xi:include href="section_plugins-ml2.xml"/>
section above is being created by Summer Long currently, will include when
saved and pushed up. AS -->
<xi:include href="section_deployment-scenarios.xml"/>
</chapter>

View File

@@ -1,68 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="networking-intro">
<title>Introduction to networking</title>
<para>
The OpenStack Networking service provides an API that allows users
to set up and define network connectivity and addressing in the
cloud. The project code-name for Networking services is neutron.
OpenStack Networking handles the creation and management
of a virtual networking infrastructure, including
<glossterm baseform="network">networks</glossterm>, switches,
<glossterm baseform="subnet">subnets</glossterm>, and
<glossterm baseform="router">routers</glossterm> for devices
managed by the
OpenStack Compute service (nova). Advanced services such as
<glossterm baseform="firewall">firewalls</glossterm> or
<glossterm baseform="virtual private network (VPN)">virtual
private networks (VPNs)</glossterm> can also be used.
</para>
<para>
OpenStack Networking consists of the
<systemitem class="service">neutron-server</systemitem>, a
database for persistent storage, and any number of plug-in
<firstterm>agents</firstterm>, which provide
other services such as interfacing with native Linux networking
mechanisms, external devices, or SDN controllers.
</para>
<para>
OpenStack Networking is entirely standalone and can be deployed
to a dedicated host. If your deployment uses a controller
host to run centralized Compute components, you can
deploy the Networking server to that specific host instead.
</para>
<para>
OpenStack Networking integrates with various other OpenStack
components:
</para>
<itemizedlist>
<listitem>
<para>OpenStack Identity (keystone) is used for
authentication and authorization of API requests.</para>
</listitem>
<listitem>
<para>OpenStack Compute (nova) is used to plug each virtual
NIC on the VM into a particular network.</para>
</listitem>
<listitem>
<para>OpenStack dashboard (horizon) provides administrators and
tenant users with a web-based graphical interface for creating
and managing network services.</para>
</listitem>
</itemizedlist>
<xi:include href="section_intro-layers.xml"/>
<xi:include href="section_intro-switches.xml"/>
<xi:include href="section_intro-routers.xml"/>
<xi:include href="section_intro-firewalls.xml"/>
<xi:include href="section_intro-tunnel.xml"/>
<xi:include href="section_intro-namespaces.xml"/>
<xi:include href="section_intro-neutron.xml"/>
</chapter>

View File

@@ -1,210 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_networking-architecture">
<title>Networking architecture</title>
<para>
A standard network architecture design includes a cloud controller
host, a network gateway host, and a number of hypervisors for hosting
virtual machines. The cloud controller and network gateway can be on
the same host. However, if you expect VMs to send significant traffic
to or from the Internet, a dedicated network gateway host helps avoid
CPU contention between the
<systemitem class="service">neutron-l3-agent</systemitem> and other
OpenStack services that forward packets.
</para>
<para>
You can run OpenStack Networking across multiple physical devices. It
is also possible to run all service daemons on a single physical
host for evaluation purposes. However, that configuration is
generally not robust enough for production. For greater
redundancy, you can run each service on a dedicated physical host
and replicate any essential services across multiple hosts.
</para>
<para>
For more information about networking architecture options, see the
<link
xlink:href="http://docs.openstack.org/openstack-ops/content/network_design.html"
>Network Design</link> section of the
<citetitle>OpenStack Operations Guide</citetitle>.
<!-- Bring this content directly in rather than referring to it. LKB -->
</para>
<!-- <mediaobject>
<imageobject>
<imagedata scale="50" fileref="../../common/figures/Neutron-PhysNet-Diagram.png"/>
</imageobject>
</mediaobject>-->
<para>
A standard OpenStack Networking deployment usually includes one or more of
the following physical networks:
</para>
<!--Lana has edited up to here. LKB-->
<table rules="all">
<caption>Typical physical data center networks</caption>
<col width="20%"/>
<col width="80%"/>
<thead>
<tr>
<th>Network</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Management
network</emphasis></td>
<td>Provides internal communication between
OpenStack components. IP addresses on this network
should be reachable only within the data
center.</td>
</tr>
<tr>
<td><emphasis role="bold">Data
network</emphasis></td>
<td>Provides VM data communication within
the cloud deployment. The IP
addressing requirements of this
network depend on the Networking
plug-in that is used.</td>
</tr>
<tr>
<td><emphasis role="bold">External
network</emphasis></td>
<td>Provides VMs with Internet access in
some deployment scenarios. Anyone on
the Internet can reach IP addresses on
this network.</td>
</tr>
<tr>
<td><emphasis role="bold">API
network</emphasis></td>
<td>Exposes all OpenStack APIs, including
the Networking API, to tenants. IP
addresses on this network should be
reachable by anyone on the
Internet. The API network might be
the same as the external network,
because it is possible to create an
external-network subnet whose
allocated IP ranges use only part
of the IP block.</td>
</tr>
</tbody>
</table>
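How these physical networks surface in configuration depends on the plug-in in use. As a hedged illustration only, an ML2/Open vSwitch deployment might label the data and external networks in ml2_conf.ini along these lines (the physnet and bridge names below are examples, not defaults):

```ini
# Hypothetical ML2/Open vSwitch fragment; names are illustrative.
[ml2_type_vlan]
network_vlan_ranges = physnet-data:1000:1999

[ovs]
bridge_mappings = physnet-data:br-data,physnet-external:br-ex
```

The management network needs no such mapping; it only carries control-plane traffic between OpenStack components.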
<section xml:id="tenant-provider-networks">
<title>Tenant and provider networks</title>
<para>
The following diagram presents an overview of the tenant and provider
network types, and illustrates how they interact within the overall
Networking topology:
</para>
<!-- <figure>
<title>Tenant and provider networks</title>
<mediaobject>
<imageobject>
<imagedata scale="90"
fileref="../../common/figures/NetworkTypes.png"/>
</imageobject>
</mediaobject>
</figure>-->
<formalpara>
<title>Tenant networks</title>
<para>Users create tenant networks for connectivity within projects;
they are fully isolated by default and are not shared with other projects.
Networking supports a range of tenant network types:
</para>
</formalpara>
<variablelist>
<varlistentry>
<term>Flat</term>
<listitem>
<para>All instances reside on the same network, which can
also be shared with the hosts. No VLAN tagging or other
network segregation takes place.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Local</term>
<listitem>
<para>Instances reside on the local compute host and are
effectively isolated from any external networks.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>VLAN</term>
<listitem>
<para>Networking allows users to create multiple provider
or tenant networks using VLAN IDs (802.1Q tagged) that
correspond to VLANs present in the physical network. This
allows instances to communicate with each other across
the environment. They can also communicate with dedicated
servers, firewalls, load balancers, and other networking
infrastructure on the same layer 2 VLAN.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>VXLAN and GRE</term>
<listitem>
<para>VXLAN and GRE use network overlays to support private
communication between instances. A Networking router is
required for traffic to leave the GRE or VXLAN tenant
network, and to connect tenant networks with external
networks, including the Internet. The router also enables
direct access to instances from an external network through
floating IP addresses.</para>
</listitem>
</varlistentry>
</variablelist>
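One practical consequence of the overlay types above: VXLAN and GRE encapsulation consumes part of the physical MTU, which is why tenant networks on a standard 1500-byte underlay are commonly configured with a smaller MTU. A small sketch of the VXLAN-over-IPv4 arithmetic (header sizes per the VXLAN specification; the helper name is ours):

```python
# VXLAN wraps the inner Ethernet frame in outer IPv4/UDP/VXLAN headers,
# so the tenant-network MTU must shrink by the total overhead.
OUTER_IPV4 = 20   # outer IPv4 header
OUTER_UDP = 8     # outer UDP header
VXLAN_HDR = 8     # VXLAN header
INNER_ETH = 14    # inner Ethernet header carried inside the tunnel

def vxlan_tenant_mtu(physical_mtu=1500):
    """Largest inner-IP MTU that fits in one VXLAN frame on the underlay."""
    return physical_mtu - (OUTER_IPV4 + OUTER_UDP + VXLAN_HDR + INNER_ETH)

print(vxlan_tenant_mtu())  # 1450 on a standard 1500-byte underlay
```

This is why 1450 is the tenant MTU commonly seen with VXLAN; GRE has a slightly smaller overhead, and jumbo-frame underlays avoid the problem entirely.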
<formalpara>
<title>Provider networks</title>
<para>The OpenStack administrator creates provider networks. These networks
map to existing physical networks in the data center. Useful
network types in this category are flat (untagged) and VLAN (802.1Q
tagged). It is possible to share provider networks among tenants as
part of the network creation process.</para>
</formalpara>
</section>
<section xml:id="NSX_overview">
<title>VMware NSX integration</title>
<para>OpenStack Networking uses the NSX plug-in to
integrate with an existing VMware vCenter deployment. When
installed on the network nodes, the NSX plug-in enables an NSX
controller to centrally manage configuration settings and push
them to managed network nodes. Network nodes are considered
managed when they are added as hypervisors to the NSX
controller.</para>
<para>The following diagram depicts an example NSX deployment and
illustrates the route that inter-VM traffic takes between separate
compute nodes. Note the placement of the VMware NSX plug-in and the
<systemitem class="service">neutron-server</systemitem> service on
the network node. The NSX controller appears centrally, with a
connection to the network node that indicates the management
relationship:</para>
<!--<figure>
<title>VMware NSX overview</title>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/vmware_nsx.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
</figure>-->
</section>
<xi:include href="section_architecture-overview.xml"/>
<xi:include href="section_architecture-server.xml"/>
<xi:include href="section_architecture-plug-in.xml"/>
<xi:include href="section_architecture-agents.xml"/>
</chapter>

View File

@@ -1,20 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_networking-data-model">
<title>Networking data model</title>
<para>
Bacon ipsum dolor sit amet biltong meatloaf andouille, turducken
bresaola pork belly beef ribs ham hock capicola tail prosciutto
landjaeger meatball pork loin. Swine turkey jowl, porchetta doner
boudin meatloaf. Shoulder capicola prosciutto, shank landjaeger short
ribs sirloin turducken pork belly boudin frankfurter chuck. Salami
shankle bresaola cow filet mignon ham hock shank.
</para>
</chapter>

View File

@@ -1,19 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch-plugins">
<title>Plugins</title>
<para>
Bacon ipsum dolor sit amet biltong meatloaf andouille, turducken
bresaola pork belly beef ribs ham hock capicola tail prosciutto
landjaeger meatball pork loin. Swine turkey jowl, porchetta doner
boudin meatloaf. Shoulder capicola prosciutto, shank landjaeger short
ribs sirloin turducken pork belly boudin frankfurter chuck. Salami
shankle bresaola cow filet mignon ham hock shank.
</para>
<xi:include href="section_plugins-ml2.xml"/>
<xi:include href="section_plugins-proprietary.xml"/>
</chapter>

View File

@@ -1,20 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_scalability-HA">
<title>Scalability and high availability</title>
<para>
Bacon ipsum dolor sit amet biltong meatloaf andouille, turducken
bresaola pork belly beef ribs ham hock capicola tail prosciutto
landjaeger meatball pork loin. Swine turkey jowl, porchetta doner
boudin meatloaf. Shoulder capicola prosciutto, shank landjaeger short
ribs sirloin turducken pork belly boudin frankfurter chuck. Salami
shankle bresaola cow filet mignon ham hock shank.
</para>
<xi:include href="section_ha-dhcp.xml"/>
<xi:include href="section_ha-l3.xml"/>
<xi:include href="section_ha-dvr.xml"/>
</chapter>

View File

@@ -1,54 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_troubleshooting">
<title>Troubleshoot OpenStack Networking</title>
<para>These sections provide additional troubleshooting
information for OpenStack Networking (neutron). For
nova-network troubleshooting information, see the
<citetitle>OpenStack Operations Guide</citetitle>.
</para>
<section xml:id="neutron_network_traffic_in_cloud">
<title>Visualize OpenStack Networking service traffic in the
cloud</title>
<xi:include href="https://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/ch_ops_network_troubleshooting.xml"
xpointer="xmlns(db=http://docbook.org/ns/docbook)
xpath(//*[@xml:id = 'neutron_network_traffic_in_cloud']/*[not(self::db:title)])">
<xi:fallback><para><link
xlink:href="http://docs.openstack.org">http://docs.openstack.org</link></para></xi:fallback>
</xi:include>
</section>
<section xml:id="dealing_with_netns">
<title>Troubleshoot network namespace issues</title>
<xi:include href="https://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/ch_ops_network_troubleshooting.xml"
xpointer="xmlns(db=http://docbook.org/ns/docbook)
xpath(//*[@xml:id = 'dealing_with_netns']/*[not(self::db:title)])">
<xi:fallback><para><link
xlink:href="http://docs.openstack.org">http://docs.openstack.org</link></para></xi:fallback>
</xi:include>
</section>
<section xml:id="trouble_shooting_ovs">
<title>Troubleshoot Open vSwitch</title>
<xi:include href="https://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/ch_ops_network_troubleshooting.xml"
xpointer="xmlns(db=http://docbook.org/ns/docbook)
xpath(//*[@xml:id = 'trouble_shooting_ovs']/*[not(self::db:title)])">
<xi:fallback><para><link
xlink:href="http://docs.openstack.org">http://docs.openstack.org</link></para></xi:fallback>
</xi:include>
</section>
<section xml:id="debugging_dns_issues">
<title>Debug DNS issues</title>
<xi:include href="https://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/ch_ops_network_troubleshooting.xml"
xpointer="xmlns(db=http://docbook.org/ns/docbook)
xpath(//*[@xml:id = 'debugging_dns_issues']/*[not(self::db:title)])">
<xi:fallback><para><link
xlink:href="http://docs.openstack.org">http://docs.openstack.org</link></para></xi:fallback>
</xi:include>
</section>
</chapter>

Binary file not shown. Before: 85 KiB

Binary file not shown. Before: 288 KiB

View File

@@ -1,79 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<parent>
<groupId>org.openstack.docs</groupId>
<artifactId>parent-pom</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>openstack-networking-guide</artifactId>
<packaging>jar</packaging>
<name>OpenStack Networking Guide</name>
<properties>
<!-- This is set by Jenkins according to the branch. -->
<release.path.name></release.path.name>
<comments.enabled>0</comments.enabled>
</properties>
<!-- ################################################ -->
<!-- USE "mvn clean generate-sources" to run this POM -->
<!-- ################################################ -->
<build>
<plugins>
<plugin>
<groupId>com.rackspace.cloud.api</groupId>
<artifactId>clouddocs-maven-plugin</artifactId>
<!-- version set in ../pom.xml -->
<executions>
<execution>
<id>generate-webhelp</id>
<goals>
<goal>generate-webhelp</goal>
</goals>
<phase>generate-sources</phase>
<configuration>
<!-- These parameters only apply to webhelp -->
<enableDisqus>0</enableDisqus>
<disqusShortname>os-networking-guide</disqusShortname>
<enableGoogleAnalytics>1</enableGoogleAnalytics>
<googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
<generateToc>
appendix toc,title
article/appendix nop
article toc,title
book toc,title,figure,table,example,equation
chapter toc,title
section toc
part toc,title
qandadiv toc
qandaset toc
reference toc,title
set toc,title
</generateToc>
<!-- The following elements sets the autonumbering of sections in output for chapter numbers but no numbered sections-->
<sectionAutolabel>0</sectionAutolabel>
<tocSectionDepth>1</tocSectionDepth>
<sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
<webhelpDirname>networking-guide</webhelpDirname>
<pdfFilenameBase>networking-guide</pdfFilenameBase>
</configuration>
</execution>
</executions>
<configuration>
<!-- These parameters apply to pdf and webhelp -->
<xincludeSupported>true</xincludeSupported>
<sourceDirectory>.</sourceDirectory>
<includes>
bk-networking.xml
</includes>
<canonicalUrlBase>http://docs.openstack.org/networking-guide/content</canonicalUrlBase>
<glossaryCollection>${basedir}/../glossary/glossary-terms.xml</glossaryCollection>
<branding>openstack</branding>
<formalProcedures>0</formalProcedures>
</configuration>
</plugin>
</plugins>
</build>
</project>

View File

@@ -1,129 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="architecture-agents">
<title>Agents</title>
<!-- This section has not yet been edited. LKB-->
<para>
Bacon ipsum dolor sit amet ribeye rump pork loin shankle jowl pancetta
bacon. Chicken andouille capicola filet mignon shoulder, turducken
corned beef boudin hamburger fatback pork chop t-bone kevin. Leberkas
turducken short loin t-bone pork belly pig prosciutto chicken beef
ribs pork loin short ribs shoulder jerky bacon strip steak.
</para>
<variablelist>
<varlistentry>
<term><systemitem class="service">neutron-server</systemitem></term>
<listitem>
<para>
A Python daemon that manages user requests and exposes the API.
It is configured with a plug-in that implements the OpenStack
Networking API operations using a specific set of networking
mechanisms. A wide choice of plug-ins is available. For
example, the Open vSwitch and Linux Bridge plug-ins use native
Linux networking mechanisms, while other plug-ins interface with
external devices or SDN controllers.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><systemitem class="service">neutron-l3-agent</systemitem></term>
<listitem>
<para>
An agent providing L3/NAT forwarding.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><systemitem class="service">neutron-*-agent</systemitem></term>
<listitem>
<para>
A plug-in agent that runs on each node to perform local networking
configuration for the node's virtual machines and networking services.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><systemitem class="service">neutron-dhcp-agent</systemitem></term>
<listitem>
<para>
An agent providing DHCP services to tenant networks.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Database</term>
<listitem>
<para>
Provides persistent storage.
</para>
</listitem>
</varlistentry>
</variablelist>
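The components above are loosely coupled: neutron-server publishes work onto a message queue, and each agent consumes the messages addressed to it. A toy sketch of that dispatch pattern (this is an assumption-laden illustration, not Neutron's actual RPC code):

```python
import collections

# Toy model: the server publishes tasks onto per-agent queues and each
# agent drains only its own queue.  Names are illustrative.
queues = collections.defaultdict(collections.deque)

def publish(agent, task):
    """Server side: enqueue a task for a specific agent."""
    queues[agent].append(task)

def drain(agent):
    """Agent side: consume and return all pending tasks."""
    tasks = list(queues[agent])
    queues[agent].clear()
    return tasks

publish("neutron-dhcp-agent", "allocate-lease")
publish("neutron-l3-agent", "configure-router")
print(drain("neutron-l3-agent"))  # ['configure-router']
```

The point of the pattern is that the server never calls an agent directly, so agents can be added, restarted, or scaled out without reconfiguring the server.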
<section xml:id="sec-overview">
<title>Overview</title>
<para>Agents provide Layer 2/3 connectivity to instances,
handle the physical-to-virtual network transition, serve
metadata, and perform related functions.</para>
</section>
<section xml:id="layer2">
<title>Layer 2</title>
<variablelist>
<varlistentry>
<term>Linux Bridge</term>
<listitem>
<para>overview/concepts</para>
<para>configuration file</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Open vSwitch</term>
<listitem>
<para>overview/concepts</para>
<para>configuration file</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="layer3-iprouting">
<title>Layer 3 (IP/Routing)</title>
<variablelist>
<varlistentry>
<term>l3</term>
<listitem>
<para>overview/concepts</para>
<para>configuration file</para>
</listitem>
</varlistentry>
<varlistentry>
<term>DHCP</term>
<listitem>
<para>overview/concepts</para>
<para>configuration file</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="misc">
<title>Miscellaneous</title>
<variablelist>
<varlistentry>
<term>Metadata</term>
<listitem>
<para>overview/concepts</para>
<para>configuration file</para>
</listitem>
</varlistentry>
</variablelist>
</section>
</section>

View File

@@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="architecture-overview">
<title>Overview</title>
<para>
Bacon ipsum dolor sit amet biltong meatloaf andouille, turducken
bresaola pork belly beef ribs ham hock capicola tail prosciutto
landjaeger meatball pork loin. Swine turkey jowl, porchetta doner
boudin meatloaf. Shoulder capicola prosciutto, shank landjaeger short
ribs sirloin turducken pork belly boudin frankfurter chuck. Salami
shankle bresaola cow filet mignon ham hock shank.
</para>
<section xml:id="service-component">
<title>Service/component hierarchy</title>
<para>Neutron server -> plug-in -> agents</para>
</section>
<section xml:id="examplearch">
<title>Example architectures</title>
<para>XIncluded below from Cloud Admin Guide. May need to be reworked.
See also
http://docs.openstack.org/grizzly/openstack-network/admin/content/use_cases.html
and
admin-guide-cloud/networking/section_networking-scenarios.xml .</para>
</section>
</section>

View File

@@ -1,44 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="architecture-plug-in">
<title>Plug-ins</title>
<!-- This section has not yet been edited. LKB-->
<para>
The legacy networking (nova-network) implementation assumed a basic model
of isolation through Linux VLANs and IP tables. Networking introduces
support for vendor plug-ins, which offer a custom back-end
implementation of the Networking API. A plug-in can use a variety of
technologies to implement the logical API requests. Some networking
plug-ins might use basic Linux VLANs and IP tables, while others might
use more advanced technologies, such as L2-in-L3 tunneling or
OpenFlow, to provide similar benefits.
</para>
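The idea can be sketched as a common interface with interchangeable back ends. This is a hedged illustration of the pattern only; the class and method names below are hypothetical, not Neutron's real plug-in API:

```python
# Minimal sketch of the plug-in pattern: one logical API, multiple
# back-end implementations.  Names are illustrative.
class NetworkPlugin:
    def create_network(self, name):
        raise NotImplementedError

class VlanPlugin(NetworkPlugin):
    """Fulfills the request with 802.1Q VLANs."""
    def create_network(self, name):
        return {"name": name, "backend": "vlan"}

class TunnelPlugin(NetworkPlugin):
    """Fulfills the same request with an L2-in-L3 overlay."""
    def create_network(self, name):
        return {"name": name, "backend": "gre"}

def handle_create(plugin, name):
    # The server delegates the logical request to whichever
    # plug-in is configured; callers never see the difference.
    return plugin.create_network(name)

print(handle_create(VlanPlugin(), "tenant-net"))
```

Swapping `VlanPlugin` for `TunnelPlugin` changes how the network is realized without changing the API request, which is exactly the flexibility the paragraph above describes.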
<!-- <para>
Table 6.3. Available networking plug-ins
Plug-in Documentation
Big Switch Plug-in (Floodlight REST Proxy) This guide and http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin
Brocade Plug-in This guide and https://wiki.openstack.org/wiki/Brocade-neutron-plugin
Cisco http://wiki.openstack.org/cisco-neutron
Cloudbase Hyper-V Plug-in http://www.cloudbase.it/quantum-hyper-v-plugin/
Linux Bridge Plug-in http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
Mellanox Plug-in https://wiki.openstack.org/wiki/Mellanox-Neutron/
Midonet Plug-in http://www.midokura.com/
ML2 (Modular Layer 2) Plug-in https://wiki.openstack.org/wiki/Neutron/ML2
NEC OpenFlow Plug-in https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin
Open vSwitch Plug-in This guide.
PLUMgrid This guide and https://https://wiki.openstack.org/wiki/PLUMgrid-Neutron
VMware NSX Plug-in This guide and NSX Product Overview, NSX Product Support
Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because Networking supports a large number of plug-ins, the cloud administrator can weigh options to decide on the right networking technology for the deployment.
In the Havana release, OpenStack Networking introduces the Modular Layer 2 (ML2) plug-in that enables the use of multiple concurrent mechanism drivers. This capability aligns with the complex requirements typically found in large heterogeneous environments. It currently works with the existing Open vSwitch, Linux Bridge, and Hyper-v L2 agents. The ML2 framework simplifies the addition of support for new L2 technologies and reduces the effort that is required to add and maintain them compared to earlier large plug-ins.
NOTE
The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana release and will be removed in the Icehouse release. The features in these plug-ins are now part of the ML2 plug-in in the form of mechanism drivers.
</para>-->
</section>

View File

@@ -1,20 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="architecture-server">
<title>Server</title>
<para>
Bacon ipsum dolor sit amet biltong meatloaf andouille, turducken
bresaola pork belly beef ribs ham hock capicola tail prosciutto
landjaeger meatball pork loin. Swine turkey jowl, porchetta doner
boudin meatloaf. Shoulder capicola prosciutto, shank landjaeger short
ribs sirloin turducken pork belly boudin frankfurter chuck. Salami
shankle bresaola cow filet mignon ham hock shank.
</para>
</section>

View File

@@ -1,28 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="deployment-architecture">
<title>Example architecture</title>
<section xml:id="controllornode">
<title>Controller node</title>
<para>Functions (provides API)</para>
</section>
<section xml:id="networknode">
<title>Network node</title>
<para>Functions (handles routing, NAT, floating IPs, etc)</para>
</section>
<section xml:id="computenodes">
<title>Compute nodes</title>
    <para>Functions (implements security groups)</para>
</section>
</section>


@ -1,24 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="deployment-authentication">
<title>Authentication</title>
<para>
Bacon ipsum dolor sit amet biltong meatloaf andouille, turducken
bresaola pork belly beef ribs ham hock capicola tail prosciutto
landjaeger meatball pork loin. Swine turkey jowl, porchetta doner
boudin meatloaf. Shoulder capicola prosciutto, shank landjaeger short
ribs sirloin turducken pork belly boudin frankfurter chuck. Salami
shankle bresaola cow filet mignon ham hock shank.
</para>
<!-- For future reference, please note that the Red Hat Cloud Administrator Guide
was the book in mind when creating this section. Link here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/5/html-single/Cloud_Administrator_Guide/index.html#section_networking_auth
AS -->
</section>


@ -1,30 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="deployment-scenarios">
<title>Scenarios</title>
<para>(provide configuration, diagrams, and flow of communications
when launching an instance)</para>
<itemizedlist>
<listitem><para>Linux Bridge using VLAN</para></listitem>
<listitem><para>Linux Bridge using GRE</para></listitem>
<listitem><para>Linux Bridge using VXLAN</para></listitem>
<listitem><para>Open vSwitch with VLAN</para></listitem>
<listitem><para>Open vSwitch with GRE</para></listitem>
<listitem><para>Open vSwitch with VXLAN</para></listitem>
<listitem><para>Mixed Linux Bridge and Open vSwitch</para></listitem>
</itemizedlist>
<para>XIncluded content for possible reuse below</para>
<xi:include href="../admin-guide-cloud/networking/section_networking-scenarios.xml"/>
</section>


@ -1,20 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ha-dhcp">
<title>DHCP Agents</title>
<para>
Bacon ipsum dolor sit amet biltong meatloaf andouille, turducken
bresaola pork belly beef ribs ham hock capicola tail prosciutto
landjaeger meatball pork loin. Swine turkey jowl, porchetta doner
boudin meatloaf. Shoulder capicola prosciutto, shank landjaeger short
ribs sirloin turducken pork belly boudin frankfurter chuck. Salami
shankle bresaola cow filet mignon ham hock shank.
</para>
</section>


@ -1,151 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ha-dvr">
<?dbhtml stop-chunking?>
<title>Distributed Virtual Router (DVR)</title>
<para>DVR stands for <glossterm baseform="distributed virtual router (DVR)"
>Distributed Virtual Router</glossterm>. For
OpenStack Networking you can provide high availability across compute nodes
using a DVR configuration. Both the layer-2 and layer-3 agents work together
on a compute node, and the L2 agent works in an enhanced mode for DVR,
providing management for OVS rules. In this scenario you do not use a
separate networking node unless a legacy one is in place already.</para>
<para>Here is a sample network topology showing how distributed routing
occurs.</para>
<figure xml:id="example-dvr-diagram">
<title>DVR configuration diagram</title>
<mediaobject>
<imageobject>
<imagedata contentwidth="6in" fileref="figures/dvr_diagram.png"/>
</imageobject>
</mediaobject>
</figure>
  <para>The DVR agent takes responsibility for creating, updating, or deleting
    the routers in the router namespaces. For all clients in the network owned
    by the router, the DVR agent populates the ARP entry. By pre-populating ARP
    entries across compute nodes, the distributed virtual router ensures that
    traffic goes to the correct destination. The integration bridge on a
    particular compute node identifies the incoming frame's source MAC address
    as a DVR-unique MAC address because the L2 agent on every compute node
    knows all of the DVR-unique MAC addresses configured in the cloud. The
    agent replaces the DVR-unique MAC address with the MAC address of the
    subnet interface and forwards the frame to the instance. By default,
    distributed routing is not turned on. When it is enabled, the layer-2
    agent handles the DVR ports detected on the integration bridge. Also, when
    a tenant creates a router with <literal>neutron router-create</literal>,
    the Networking service creates only distributed routers after you have
    enabled distributed routing.</para>
<section xml:id="configure-dvr">
<title>Configure Distributed Virtual Router (DVR)</title>
<procedure>
<step>
<para>Edit the <filename>ovs_neutron_plugin.ini</filename> file to
change <literal>enable_distributed_routing</literal> to
<literal>True</literal>:</para>
<programlisting language="ini">enable_distributed_routing = True</programlisting>
</step>
<step>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file to
set the base MAC address that the DVR system uses for unique MAC
allocation with the <literal>dvr_base_mac</literal> setting:</para>
<programlisting language="ini">dvr_base_mac = fa:16:3f:00:00:00</programlisting>
        <note>
          <para>The <literal>dvr_base_mac</literal> value must differ from the
            <literal>base_mac</literal> value assigned to virtual ports, both
            to ensure port isolation and to aid troubleshooting. The default is
            <literal>fa:16:3f:00:00:00</literal>. By default, the first three
            octets are kept and the remaining octets are randomly generated; to
            fix four octets instead, replace the fourth octet
            (<literal>00</literal>) with a specific value.</para>
        </note>
</step>
<step>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file to
set <literal>router_distributed</literal> to
<literal>True</literal>.</para>
<programlisting language="ini">router_distributed = True</programlisting>
</step>
<step>
<para>Edit the <filename>l3_agent.ini</filename> file to set
<literal>agent_mode</literal> to <literal>dvr</literal> on compute
nodes for multi-node deployments:</para>
<programlisting language="ini">agent_mode = dvr</programlisting>
<note>
<para>When using a separate networking host, set
<literal>agent_mode</literal> to <literal>dvr_snat</literal>. Use
<literal>dvr_snat</literal> for Devstack or other single-host
deployments also.</para>
</note>
</step>
<step>
        <para>Edit the <filename>ml2_conf.ini</filename> file to add
          <literal>l2population</literal> to the mechanism drivers in the
          <literal>[ml2]</literal> section:</para>
<programlisting language="ini">[ml2]
mechanism_drivers = openvswitch,l2population</programlisting>
</step>
<step>
<para>In the <literal>[agent]</literal> section of the
<filename>ml2_conf.ini</filename> file, set these configuration
options to these values:</para>
<programlisting language="ini">[agent]
l2_population = True
tunnel_types = vxlan
enable_distributed_routing = True</programlisting>
</step>
<step>
<para>Restart the OVS L2 agent.</para>
<stepalternatives>
<step>
<para>Ubuntu/Debian:</para>
            <screen><prompt>#</prompt> <userinput>service neutron-plugin-openvswitch-agent restart</userinput></screen>
</step>
<step>
<para>RHEL/CentOS/Fedora:</para>
<screen><prompt>#</prompt> <userinput>service neutron-openvswitch-agent restart</userinput></screen>
</step>
<step>
<para>SLES/openSUSE:</para>
<screen><prompt>#</prompt> <userinput>service openstack-neutron-openvswitch-agent restart</userinput></screen>
</step>
</stepalternatives>
</step>
</procedure>
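The octet behavior described in the dvr_base_mac note above can be sketched in a few lines. This is an illustrative sketch only: the helper name and the randomization logic are assumptions for the example, not neutron's actual MAC allocation code.

```python
import random

def allocate_dvr_mac(base_mac="fa:16:3f:00:00:00"):
    """Keep the non-zero leading octets of base_mac and randomize the
    trailing "00" octets, mirroring the behavior described for
    dvr_base_mac (illustrative sketch, not neutron's real code)."""
    out = []
    for octet in base_mac.split(":"):
        if octet == "00":
            out.append("%02x" % random.randint(0, 255))  # randomized octet
        else:
            out.append(octet)  # fixed prefix octet
    return ":".join(out)

# First three octets stay fixed; the last three are generated per host.
print(allocate_dvr_mac())
# Fixing a fourth octet shrinks the random portion to two octets.
print(allocate_dvr_mac("fa:16:3f:ab:00:00"))
```

With the default base, every generated address shares the `fa:16:3f` prefix, which is what lets an L2 agent recognize DVR-unique source MACs.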
</section>
<section xml:id="dvr-requirements">
<title>DVR requirements</title>
<itemizedlist>
<listitem>
<para>You must use the ML2 plug-in for Open vSwitch (OVS) to enable
DVR.</para>
</listitem>
<listitem>
      <para>Be sure that your firewall or security groups allow UDP traffic
        over the VLAN, GRE, or VXLAN port to pass between the compute hosts.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="dvr-limitations">
<title>DVR limitations</title>
<itemizedlist>
<listitem>
<para>Distributed virtual router configurations work with the Open
vSwitch Modular Layer 2 driver only for Juno.</para>
</listitem>
<listitem>
<para>In order to enable true north-south bandwidth between hypervisors
(compute nodes), you must use public IP addresses for every compute
node and enable floating IPs.</para>
</listitem>
<listitem>
<para>For now, based on the current neutron design and architecture,
DHCP cannot become distributed across compute nodes.</para>
</listitem>
</itemizedlist>
</section>
</section>


@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ha-l3">
<title>L3 Agents</title>
<para>
    The Neutron L3 agent enables layer 3 forwarding and floating
    IP support. It provides L3/NAT forwarding to ensure external network access
    for VMs on tenant networks. The L3 agent achieves high availability by using Pacemaker.
</para>
</section>


@ -1,24 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="intro-firewalls">
<title>Firewalls</title>
  <para>A firewall is a network security system that controls network
    traffic according to a set of rules. Firewalls can be implemented in
    hardware, in software, or as a mix of the two. For example, a firewall
    might disallow traffic originating from a range of IP addresses or allow
    traffic only on specific ports. Firewalls are often placed between
    internal networks, where data is trusted, and external networks
    such as the Internet.</para>
<para>Many firewalls incorporate Network Address Translation (NAT).
Network address translation masks the IP addresses of the devices on
the internal network, so that external devices see only the single
public IP address of the device hosting the firewall.</para>
</section>
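The rule-based filtering described above can be made concrete with a minimal first-match sketch. The rule set, field layout, and function name are invented for illustration; real firewalls (iptables, security groups) match on many more fields.

```python
import ipaddress

# Illustrative rule set: (action, source network, destination port).
# A port of None matches any port; the first matching rule wins.
RULES = [
    ("deny",  "203.0.113.0/24", None),  # block a source address range
    ("allow", "0.0.0.0/0",      22),    # allow SSH from anywhere else
    ("allow", "0.0.0.0/0",      443),   # allow HTTPS
    ("deny",  "0.0.0.0/0",      None),  # default: drop everything else
]

def permitted(src_ip, dst_port, rules=RULES):
    """First-match evaluation of a simple firewall rule list
    (a sketch of the behavior described above, not a real firewall)."""
    src = ipaddress.ip_address(src_ip)
    for action, network, port in rules:
        if src in ipaddress.ip_network(network) and port in (None, dst_port):
            return action == "allow"
    return False

print(permitted("198.51.100.9", 443))  # HTTPS from an unblocked source
print(permitted("203.0.113.5", 443))   # same port, blocked source range
```

Because evaluation stops at the first match, rule order matters: the source-range deny must precede the broad allows or it never fires.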


@ -1,87 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="intro-layers">
<title>Networking layers</title>
<para>Network communication is commonly described in terms of the OSI model. The OSI model is
a seven-layer model that describes how various protocols and mechanisms fit
together. Each layer depends on the protocols in the layer beneath it.</para>
<variablelist>
<varlistentry>
<term>7 Application</term>
<listitem>
        <para>The Application layer of the OSI model is the user interface
          layer. Everything here is application specific. Examples of
          protocols at this layer include Telnet, FTP, and email.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>6 Presentation</term>
<listitem>
<para>Here is where data gets translated from the application into
network format and back again. The presentation layer transforms
data into a form that the application layer can accept. The
presentation layer formats and encrypts data which prevents
compatibility problems.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>5 Session</term>
<listitem>
<para>The Session layer manages connections between applications. It
is responsible for session and connection coordination.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>4 Transport</term>
<listitem>
        <para>This layer deals with transporting data, usually over TCP
          (Transmission Control Protocol).
          TCP enables two hosts to establish a connection and exchange streams
          of data. TCP also guarantees delivery of data and the
          order in which packets are sent.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>3 Network</term>
<listitem>
<para>This layer handles transmission for data from node to node.
The network layer handles routing, forwarding, addressing,
inter-networking, and error handling. Usually this is the
IP portion of TCP/IP.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>2 Data link</term>
<listitem>
        <para>The data link layer is where most LAN technologies such as Ethernet
          live. This layer is divided into two sublayers:
          <emphasis role="bold">MAC (Media Access Control):</emphasis> Controls
          how devices on the network gain access to the medium and permission
          to transmit data.
          <emphasis role="bold">LLC (Logical Link Control):</emphasis> Handles
          frame synchronization, flow control, and error checking.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>1 Physical</term>
<listitem>
        <para>The physical layer is the description of the physical media or
          signals used in networking. Examples include Ethernet cabling,
          hubs, and other physical devices used to establish a network.</para>
</listitem>
</varlistentry>
</variablelist>
<para>OpenStack Networking is primarily concerned with Layer 2 and Layer 3.</para>
<para>Layer-3 protocols include the Internet Protocol Suite, which includes
IPv4, IPv6, and the Internet Control Message Protocol (ICMP).</para>
</section>


@ -1,19 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="intro-namespaces">
<title>Namespaces</title>
<para>
    A namespace is a container for a set of identifiers. Namespaces provide a level of
    indirection to specific identifiers and make it possible to differentiate between
    identifiers that have the same name. With network namespaces, you can have
    separate instances of network interfaces and routing tables that
    operate independently of each other.
</para>
</section>


@ -1,33 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="intro-neutron">
<title>Neutron data model</title>
<para>FIXME: Explain Neutron terminology and how it maps to networking
concepts presented in previous chapters. A small amount of terminology
is at http://docs.openstack.org/admin-guide-cloud/content/api_abstractions.html .
Probably not worth subsections as outlined here. table or variablelist?</para>
<section xml:id="networks">
<title>Networks</title>
<para>FIXME</para>
</section>
<section xml:id="subnets">
<title>Subnets</title>
<para>FIXME</para>
</section>
<section xml:id="ports">
<title>Ports</title>
<para>FIXME</para>
</section>
<section xml:id="extensions">
<title>Extensions</title>
<para>FIXME</para>
</section>
</section>


@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="intro-routers">
<title>Routers</title>
<para>A router is a networking device that connects multiple networks
together. Routers are connected to two or more networks. When they
receive data packets, they use a routing table to determine which
networks to pass the information to.</para>
</section>
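The routing-table lookup described above can be sketched with a longest-prefix match, the rule real routers apply when several routes contain the destination. The function name and the sample table are illustrative only.

```python
import ipaddress

def lookup(routing_table, destination):
    """Pick the next hop for destination using longest-prefix match:
    among all routes whose network contains the address, the most
    specific route (longest prefix) wins."""
    dest = ipaddress.ip_address(destination)
    candidates = [(net, hop) for net, hop in routing_table
                  if dest in ipaddress.ip_network(net)]
    if not candidates:
        return None  # no route, not even a default
    # Sort candidates by prefix length; the longest prefix is most specific.
    return max(candidates,
               key=lambda item: ipaddress.ip_network(item[0]).prefixlen)[1]

table = [("0.0.0.0/0", "gateway"),   # default route
         ("10.0.0.0/8", "eth1"),
         ("10.1.0.0/16", "eth2")]

print(lookup(table, "10.1.2.3"))   # matches all three; /16 is most specific
print(lookup(table, "192.0.2.7"))  # only the default route matches
```

A destination inside `10.1.0.0/16` goes to `eth2` even though it also matches `10.0.0.0/8`; traffic matching no specific route falls through to the default.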


@ -1,17 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="intro-switches">
<title>Switches</title>
<para>A switch is a device that is used to connect devices on a network.
Switches forward packets on to other devices, using packet switching to
pass data along only to devices that need to receive it. Switches operate
at Layer 2 of the OSI model.</para>
</section>


@ -1,52 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="intro-tunnel">
<title>Tunnel (segmentation) technologies</title>
<para>Tunneling allows one network protocol to encapsulate another
payload protocol, such that packets from the payload protocol are
passed as data on the delivery protocol. This can be used, for
example, to pass data securely over an untrusted network.</para>
<section xml:id="layer2config">
<title>Layer 2</title>
<variablelist>
<varlistentry>
<term>Virtual local area network (VLAN)</term>
<listitem>
<para>A VLAN partitions a single layer-2 network into multiple isolated
broadcast domains.</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="layer3">
<title>Layer 3</title>
<variablelist>
<varlistentry>
<term>Generic routing encapsulation (GRE)</term>
<listitem>
          <para>GRE carries IP packets with private IP addresses over the Internet
            using delivery packets with public IP addresses.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Virtual extensible local area network (VXLAN)</term>
<listitem>
<para>VXLAN encapsulates layer-2 Ethernet frames over layer-4
UDP packets.</para>
</listitem>
</varlistentry>
</variablelist>
</section>
</section>
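To make the VXLAN entry concrete, here is a minimal sketch of the 8-byte VXLAN header defined in RFC 7348: an 8-bit flags field (0x08 marks a valid VNI), 24 reserved bits, the 24-bit VXLAN Network Identifier, and 8 more reserved bits. It builds only the header; the UDP/IP delivery framing and the encapsulated Ethernet frame are omitted.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348 (header only:
    no UDP/IP delivery framing, no inner Ethernet frame)."""
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    # Word 1: flags byte (0x08 = valid-VNI bit) then 24 reserved bits.
    # Word 2: 24-bit VNI then 8 reserved bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def parse_vni(header):
    """Recover the 24-bit VNI from a VXLAN header."""
    flags_word, vni_word = struct.unpack("!II", header)
    assert flags_word >> 24 == 0x08, "valid-VNI flag not set"
    return vni_word >> 8

hdr = vxlan_header(5001)
print(len(hdr), parse_vni(hdr))  # 8 5001
```

The 24-bit VNI is what gives VXLAN its roughly 16 million isolated segments, compared with the 4094 usable IDs of a 12-bit VLAN tag.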


@ -1,433 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_networking-advanced-agent">
<title>Advanced agent options</title>
<para>This section describes advanced configuration options for server plug-ins and agents.</para>
<section xml:id="section_neutron_server">
<title>OpenStack Networking server with plug-in</title>
<para>This web server runs the OpenStack Networking API Web Server. It loads
a plug-in and passes the API calls to the plug-in for processing. The
<systemitem class="service">neutron-server</systemitem> service receives one or more configuration files
as input. For example:</para>
<screen><userinput>neutron-server --config-file <replaceable>NEUTRON_CONFIG_FILE</replaceable> --config-file <replaceable>PLUGIN_CONFIG_FILE</replaceable></userinput></screen>
<para>The neutron configuration file contains the common neutron configuration options.</para>
<para>The plug-in configuration file contains the plug-in specific options.</para>
<para>The plug-in that runs on the service is loaded through the
<option>core_plugin</option> configuration option. In some cases, a plug-in
might have an agent that performs the actual networking.</para>
<para>Most plug-ins require an SQL database. After you install and start the database
server, set a password for the root account and delete the anonymous accounts:</para>
    <screen><prompt>$</prompt> <userinput>mysql -u root</userinput>
<prompt>mysql&gt;</prompt> <userinput>update mysql.user set password = password('iamroot') where user = 'root';</userinput>
<prompt>mysql&gt;</prompt> <userinput>delete from mysql.user where user = '';</userinput></screen>
<para>Create a database and user account specifically for plug-in:</para>
    <screen><prompt>mysql&gt;</prompt> <userinput>create database <replaceable>DATABASE_NAME</replaceable>;</userinput>
<prompt>mysql&gt;</prompt> <userinput>create user '<replaceable>USER_NAME</replaceable>'@'localhost' identified by '<replaceable>PASSWORD</replaceable>';</userinput>
<prompt>mysql&gt;</prompt> <userinput>create user '<replaceable>USER_NAME</replaceable>'@'%' identified by '<replaceable>PASSWORD</replaceable>';</userinput>
<prompt>mysql&gt;</prompt> <userinput>grant all on <replaceable>DATABASE_NAME</replaceable>.* to '<replaceable>USER_NAME</replaceable>'@'%';</userinput></screen>
<para>After this step completes, you can update the settings in the relevant plug-in
configuration files. Find the plug-in specific configuration files at
<filename>$NEUTRON_CONF_DIR/plugins</filename>.</para>
<para>Some plug-ins have an L2 agent that performs the actual networking. That is, the agent
attaches the virtual machine NIC to the OpenStack Networking network. Each node should
run an L2 agent. Note that the agent receives the following input parameters:</para>
<screen><userinput>neutron-plugin-agent --config-file <replaceable>NEUTRON_CONFIG_FILE</replaceable> --config-file <replaceable>PLUGIN_CONFIG_FILE</replaceable></userinput></screen>
<para>You must complete these tasks before you can work with the plug-in:</para>
<orderedlist>
<listitem>
<para>Ensure that the core plug-in is updated.</para>
</listitem>
<listitem>
<para>Ensure that the database connection is correctly set.</para>
</listitem>
</orderedlist>
<para>The following table shows sample values for the configuration options. Some Linux
packages might provide installation utilities that configure these values.</para>
<table rules="all">
<caption>Settings</caption>
<thead>
<tr>
<th>Option</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>core_plugin ($NEUTRON_CONF_DIR/neutron.conf)</td>
<td>openvswitch</td>
</tr>
<tr>
<td>connection (in the plugin configuration file, section
<code>[database]</code>)</td>
<td>mysql://<replaceable>USERNAME</replaceable>:<replaceable>PASSWORD</replaceable>@localhost/ovs_neutron?charset=utf8</td>
</tr>
<tr>
<td>Plug-in Configuration File</td>
<td>$NEUTRON_CONF_DIR/plugins/openvswitch/ovs_neutron_plugin.ini</td>
</tr>
<tr>
<td>Agent</td>
<td>neutron-openvswitch-agent</td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>core_plugin ($NEUTRON_CONF_DIR/neutron.conf)</td>
<td>linuxbridge</td>
</tr>
<tr>
<td>connection (in the plug-in configuration file, section
<code>[database]</code>)</td>
<td>mysql://<replaceable>USERNAME</replaceable>:<replaceable>PASSWORD</replaceable>@localhost/neutron_linux_bridge?charset=utf8</td>
</tr>
<tr>
<td>Plug-in Configuration File</td>
<td>$NEUTRON_CONF_DIR/plugins/linuxbridge/linuxbridge_conf.ini</td>
</tr>
<tr>
<td>Agent</td>
<td>neutron-linuxbridge-agent</td>
</tr>
</tbody>
</table>
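The connection values in the table embed the database credentials directly in a URL. A small hedged sketch (the helper name is invented for this example) shows how such a URL can be composed so that characters like "@" or "/" in the password do not break URL parsing:

```python
from urllib.parse import quote_plus

def build_connection(user, password, host, database):
    """Compose a connection URL of the form shown in the table above.
    quote_plus escapes characters such as '@' or '/' that would
    otherwise confuse URL parsing. Illustrative helper, not a
    neutron API."""
    return "mysql://%s:%s@%s/%s?charset=utf8" % (
        quote_plus(user), quote_plus(password), host, database)

# A password containing URL-special characters is escaped safely.
print(build_connection("neutron", "p@ss/word", "localhost", "ovs_neutron"))
# -> mysql://neutron:p%40ss%2Fword@localhost/ovs_neutron?charset=utf8
```

Plain alphanumeric credentials pass through unchanged, so the simple form in the table still works as written.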
</section>
<section xml:id="section_adv_cfg_dhcp_agent">
<title>DHCP agent</title>
<para>You can run a DHCP server that allocates IP addresses to virtual machines that run on
the network. When a subnet is created, by default, the subnet has DHCP enabled.</para>
<para>The node that runs the DHCP agent should run:</para>
<screen><userinput>neutron-dhcp-agent --config-file <replaceable>NEUTRON_CONFIG_FILE</replaceable> --config-file <replaceable>DHCP_CONFIG_FILE</replaceable></userinput></screen>
    <para>Currently, the DHCP agent uses <systemitem class="service">dnsmasq</systemitem> to perform static
      address assignment.</para>
<para>You must configure a driver that matches the plug-in that runs on the service.</para>
<table rules="all">
<caption>Settings</caption>
<thead>
<tr>
<th>Option</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini)</td>
<td>neutron.agent.linux.interface.OVSInterfaceDriver</td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/dhcp_agent.ini)</td>
<td>neutron.agent.linux.interface.BridgeInterfaceDriver</td>
</tr>
</tbody>
</table>
<section xml:id="adv_cfg_dhcp_agent_namespace">
<title>Namespace</title>
<para>By default, the DHCP agent uses Linux network namespaces to support overlapping IP
addresses. For information about network namespaces support, see the <link
linkend="section_limitations">Limitations</link> section.</para>
<para>If the Linux installation does not support network namespaces, you must disable
network namespaces in the DHCP agent configuration file. The default value of
<option>use_namespaces</option> is <literal>True</literal>.</para>
<programlisting language="ini">use_namespaces = False</programlisting>
</section>
</section>
<section xml:id="section_adv_cfg_l3_agent">
<title>L3 agent</title>
<para>You can run an L3 agent that enables layer 3 forwarding and floating IP support.</para>
<para>The node that runs the L3 agent should run:</para>
<screen><userinput>neutron-l3-agent --config-file <replaceable>NEUTRON_CONFIG_FILE</replaceable> --config-file <replaceable>L3_CONFIG_FILE</replaceable></userinput></screen>
<para>You must configure a driver that matches the plug-in that runs on the service. This
driver creates the routing interface.</para>
<table rules="all">
<caption>Settings</caption>
<thead>
<tr>
<th>Option</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>neutron.agent.linux.interface.OVSInterfaceDriver</td>
</tr>
<tr>
<td>external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>br-ex</td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>neutron.agent.linux.interface.BridgeInterfaceDriver</td>
</tr>
<tr>
<td>external_network_bridge ($NEUTRON_CONF_DIR/l3_agent.ini)</td>
<td>This field must be empty (or the bridge name for the external network).</td>
</tr>
</tbody>
</table>
<para>The L3 agent communicates with the OpenStack Networking server through the OpenStack
Networking API, so the following configuration is required:</para>
<orderedlist>
<listitem>
<para>OpenStack Identity authentication:</para>
<programlisting language="ini">auth_url="$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_AUTH_HOST:$KEYSTONE_AUTH_PORT/v2.0"</programlisting>
<para>For example:</para>
<programlisting language="ini">http://10.56.51.210:5000/v2.0</programlisting>
</listitem>
<listitem>
<para>Administrative user details:</para>
<programlisting language="ini">admin_tenant_name $SERVICE_TENANT_NAME
admin_user $Q_ADMIN_USERNAME
admin_password $SERVICE_PASSWORD</programlisting>
</listitem>
</orderedlist>
<section xml:id="adv_cfg_l3_agent_namespace">
<title>Namespace</title>
<para>By default, the L3 agent uses Linux network namespaces to support overlapping IP
addresses.</para>
      <para>For information about network namespaces support, see the <link
          linkend="section_limitations">Limitations</link> section.</para>
<para>If the Linux installation does not support network namespaces, you must disable
network namespaces in the L3 agent configuration file. The default value of
<option>use_namespaces</option> is <literal>True</literal>.</para>
<programlisting language="ini">use_namespaces = False</programlisting>
<para>When you set <option>use_namespaces</option> to <literal>False</literal>, only one
router ID is supported per node.</para>
<para>Use the <option>router_id</option> configuration option to configure the
router:</para>
<programlisting language="ini"># If use_namespaces is set to False then the agent can only configure one router.
# This is done by setting the specific router_id.
router_id = 1064ad16-36b7-4c2f-86f0-daa2bcbd6b2a</programlisting>
<para>To configure it, you must run the OpenStack Networking service, create a router,
and set the router ID value to the <option>router_id</option> value in the L3 agent
configuration file.</para>
<screen><prompt>$</prompt> <userinput>neutron router-create myrouter1</userinput>
<computeroutput>Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | 338d42d7-b22e-42c5-9df6-f3674768fe75 |
| name | myrouter1 |
| status | ACTIVE |
| tenant_id | 0c236f65baa04e6f9b4236b996555d56 |
+-----------------------+--------------------------------------+</computeroutput></screen>
</section>
<section xml:id="adv_cfg_l3_agent_multi_extnet">
<title>Multiple external networks</title>
<para>Use one of these methods to support multiple external networks:</para>
<itemizedlist>
<listitem>
<para>Assign multiple subnets to an external network.</para>
</listitem>
<listitem>
<para>Use multiple floating IP pools.</para>
</listitem>
</itemizedlist>
<para>The following sections describe these options.</para>
<section xml:id="adv_cfg_l3_agent_multi_subnet">
<title>Assign multiple subnets to an external network</title>
<para>This approach leverages the addition of on-link routes, which enables a router
to host floating IPs from any subnet on an external network regardless of which
subnet the primary router IP address comes from. This method does not require
the creation of multiple external networks.</para>
<para>To add a subnet to the external network, use the following command
template:</para>
<screen><prompt>$</prompt> <userinput>neutron subnet-create <replaceable>EXT_NETWORK_NAME</replaceable> <replaceable>CIDR</replaceable></userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>neutron subnet-create my-ext_network 10.0.0.0/29</userinput> </screen>
</section>
<section xml:id="adv_cfg_l3_agent_multi_floatip">
<title>Multiple floating IP pools</title>
<para>The L3 API in OpenStack Networking supports multiple
floating IP pools. In OpenStack Networking, a floating
IP pool is represented as an external network, and a
floating IP is allocated from a subnet associated with
the external network. You can associate a L3 agent
with multiple external networks.</para>
<para>Before starting a L3 agent, you must update the
configuration files with the UUID of the external network.</para>
<para>To enable the L3 agent to support multiple external
networks, edit the <filename>l3_agent.ini</filename>
file and leave the <option>gateway_external_network_id</option>
and <option>external_network_bridge</option> options
unset:</para>
<programlisting language="ini">handle_internal_only_routers = True
gateway_external_network_id =
external_network_bridge = </programlisting>
</section>
</section>
</section>
<section xml:id="section_adv_cfg_l3_metering_agent">
<title>L3 metering agent</title>
<para>You can run an L3 metering agent that enables layer 3 traffic metering. In general,
you should launch the metering agent on all nodes that run the L3 agent:</para>
<screen><userinput>neutron-metering-agent --config-file <replaceable>NEUTRON_CONFIG_FILE</replaceable> --config-file <replaceable>L3_METERING_CONFIG_FILE</replaceable></userinput></screen>
<para>You must configure a driver that matches the plug-in that runs on the service. The
driver adds metering to the routing interface.</para>
<table rules="all">
<caption>Settings</caption>
<thead>
<tr>
<th>Option</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Open vSwitch</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini)</td>
<td>neutron.agent.linux.interface.OVSInterfaceDriver</td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge</emphasis></td>
<td/>
</tr>
<tr>
<td>interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini)</td>
<td>neutron.agent.linux.interface.BridgeInterfaceDriver</td>
</tr>
</tbody>
</table>
<section xml:id="adv_cfg_l3_metering_agent_namespace">
<title>Namespace</title>
<para>The metering agent and the L3 agent must have the same network namespaces
configuration.</para>
<note>
<para>If the Linux installation does not support network namespaces, you must
disable network namespaces in the L3 metering configuration file. The default
value of the <option>use_namespaces</option> option is <code>True</code>.</para>
</note>
<para><programlisting language="ini">use_namespaces = False</programlisting></para>
</section>
<section xml:id="adv_cfg_l3_metering_agent_driver">
<title>L3 metering driver</title>
<para>You must configure a driver that implements the metering abstraction. Currently,
the only available implementation uses iptables for metering.</para>
<para><programlisting language="ini">driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver</programlisting></para>
</section>
<section xml:id="adv_cfg_l3_metering_service_driver">
<title>L3 metering service driver</title>
<para>To enable L3 metering, you must set the following option in the
<filename>neutron.conf</filename> file on the host that runs <systemitem
class="service">neutron-server</systemitem>:</para>
<programlisting language="ini">service_plugins = metering</programlisting>
</section>
</section>
<section xml:id="section_limitations">
<title>Limitations</title>
<itemizedlist>
<listitem>
<para><emphasis>No equivalent for nova-network <parameter>--multi_host</parameter>
option</emphasis>. Nova-network has a model where the L3, NAT, and DHCP
processing happen on the compute node itself, rather than on a dedicated networking
node. OpenStack Networking now supports running multiple l3-agents and dhcp-agents,
with load split across those agents, but the tight coupling of that
scheduling with the location of the VM is not supported in Icehouse. The Juno
release is expected to include an exact replacement for the
<parameter>--multi_host</parameter> parameter in nova-network.</para>
</listitem>
<listitem>
<para><emphasis>Linux network namespace required on nodes running
<systemitem class="service">neutron-l3-agent</systemitem> or
<systemitem class="service">neutron-dhcp-agent</systemitem> if overlapping IPs are in
use</emphasis>. To support overlapping IP addresses, the OpenStack
Networking DHCP and L3 agents use Linux network namespaces by default. The hosts
running these processes must support network namespaces. To support network
namespaces, the following are required:</para>
<itemizedlist>
<listitem>
<para>Linux kernel 2.6.24 or later (with <literal>CONFIG_NET_NS=y</literal>
in kernel configuration)</para>
</listitem>
<listitem>
<para>iproute2 utilities ('ip' command) version 3.1.0 (aka 20111117) or
later</para>
</listitem>
</itemizedlist>
<para>To check whether your host supports namespaces, run these commands as
root:</para>
<screen><prompt>#</prompt> <userinput>ip netns add test-ns</userinput>
<prompt>#</prompt> <userinput>ip netns exec test-ns ifconfig</userinput></screen>
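The kernel and iproute2 minimums listed above can also be checked programmatically. The following sketch is illustrative only; these helper names are not part of any OpenStack tool:

```python
def parse_version(text):
    # Turn a dotted version string such as "2.6.24" into a comparable tuple.
    return tuple(int(part) for part in text.split("."))

def supports_namespaces(kernel_version, iproute2_version):
    # Apply the documented minimums: kernel >= 2.6.24, iproute2 >= 3.1.0.
    return (parse_version(kernel_version) >= (2, 6, 24)
            and parse_version(iproute2_version) >= (3, 1, 0))
```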
<para>If these commands succeed, your platform is likely sufficient to use the
dhcp-agent or l3-agent with namespaces. In our experience, Ubuntu 12.04 or later
support namespaces as does Fedora 17 and later, but some earlier RHEL platforms
do not by default. It might be possible to upgrade the <systemitem
class="service">iproute2</systemitem> package on a platform that does not
support namespaces by default.</para>
<para>If you must disable namespaces, make sure that the
<filename>neutron.conf</filename> file that is used by <systemitem class="service">neutron-server</systemitem> has the following
setting:</para>
<programlisting>allow_overlapping_ips=False</programlisting>
<para>Also, ensure that the <filename>dhcp_agent.ini</filename> and
<filename>l3_agent.ini</filename> files have the following setting:</para>
<programlisting>use_namespaces=False</programlisting>
<note>
<para>If the host does not support namespaces, the <systemitem class="service"
>neutron-l3-agent</systemitem> and <systemitem class="service"
>neutron-dhcp-agent</systemitem> should run on different hosts because
there is no isolation between the IP addresses created by the L3 agent and
by the DHCP agent. By manipulating the routing, the user can ensure that
these networks have access to one another.</para>
</note>
<para>If you run both L3 and DHCP services on the same node, you should enable
namespaces to avoid conflicts with routes:</para>
<programlisting>use_namespaces=True</programlisting>
</listitem>
<listitem>
<para><emphasis>No IPv6 support for L3 agent</emphasis>. The
<systemitem class="service">neutron-l3-agent</systemitem>, used by
many plug-ins to implement L3 forwarding, supports only IPv4 forwarding.
Currently, no errors are reported if you configure IPv6 addresses via the
API.</para>
</listitem>
<listitem>
<para><emphasis>ZeroMQ support is experimental</emphasis>. Some agents, including
<systemitem class="service">neutron-dhcp-agent</systemitem>, <systemitem
class="service">neutron-openvswitch-agent</systemitem>, and <systemitem
class="service">neutron-linuxbridge-agent</systemitem> use RPC to
communicate. ZeroMQ is an available option in the configuration file, but has
not been tested and should be considered experimental. In particular, issues
might occur with ZeroMQ and the dhcp agent.</para>
</listitem>
<listitem>
<para><emphasis>MetaPlugin is experimental</emphasis>. This release includes a
MetaPlugin that is intended to support multiple plug-ins at the same time for
different API requests, based on the content of those API requests. The core
team has not thoroughly reviewed or tested this functionality. Consider this
functionality to be experimental until further validation is performed.</para>
</listitem>
</itemizedlist>
</section>
<xi:include href="section_sr-iov_networking.xml"/>
</section>

File diff suppressed because it is too large


@@ -1,144 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="section_networking-adv-operational_features">
<title>Advanced operational features</title>
<section xml:id="section_adv_logging">
<title>Logging settings</title>
<para>Networking components use the Python logging module for
logging. Logging configuration can be provided in
<filename>neutron.conf</filename> or as command-line
options. Command-line options override the settings in
<filename>neutron.conf</filename>.</para>
<para>To configure logging for Networking components, use one
of these methods:</para>
<itemizedlist>
<listitem>
<para>Provide logging settings in a logging
configuration file.</para>
<para>See <link xlink:href="http://docs.python.org/howto/logging.html">Python
logging how-to</link> to learn more about logging.</para>
</listitem>
<listitem>
<para>Provide logging settings in
<filename>neutron.conf</filename>.</para>
<programlisting language="ini">[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
# Show more verbose log output (sets INFO log level output) if debug is False
# verbose = False
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog = False
# syslog_log_facility = LOG_USER
# if use_syslog is False, we can set log_file and log_dir.
# if use_syslog is False and we do not set log_file,
# the log will be printed to stdout.
# log_file =
# log_dir =</programlisting>
</listitem>
</itemizedlist>
</section>
<section xml:id="section_adv_notification">
<title>Notifications</title>
<para>Notifications can be sent when Networking resources such
as networks, subnets, and ports are created, updated, or
deleted.</para>
<section xml:id="section_adv_notification_overview">
<title>Notification options</title>
<para>To support the DHCP agent, the rpc_notifier driver must be
set. To set up notifications, edit the notification
options in <filename>neutron.conf</filename>:</para>
<programlisting language="ini"># Driver or drivers to handle sending notifications. (multi
# valued)
#notification_driver=
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
notification_topics = notifications</programlisting>
</section>
<section xml:id="section_adv_notification_cases">
<title>Setting cases</title>
<section xml:id="section_adv_notification_cases_log_rpc">
<title>Logging and RPC</title>
<para>These options configure the Networking
server to send notifications through logging and
RPC. The logging options are described in the
<citetitle>OpenStack Configuration Reference</citetitle>.
RPC notifications go to the 'notifications.info'
queue, which is bound to a topic exchange defined by
'control_exchange' in
<filename>neutron.conf</filename>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>
<section
xml:id="ch_adv_notification_cases_multi_rpc_topics">
<title>Multiple RPC topics</title>
<para>These options configure the Networking
server to send notifications to multiple RPC
topics. RPC notifications go to the
'notifications_one.info' and
'notifications_two.info' queues, which are bound to a topic
exchange defined by 'control_exchange' in
<filename>neutron.conf</filename>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications_one,notifications_two</programlisting>
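In both configurations above, the actual queue names follow the `%s.%(default_notification_level)s` pattern described in the comments. A small sketch of that derivation (illustrative only, not neutron code):

```python
def rpc_queue_names(notification_topics, default_notification_level="INFO"):
    # Each configured topic is suffixed with the lower-cased notification
    # level, e.g. "notifications" with level INFO becomes "notifications.info".
    topics = [topic.strip() for topic in notification_topics.split(",")]
    return ["%s.%s" % (topic, default_notification_level.lower())
            for topic in topics]
```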
</section>
</section>
</section>
</section>


@@ -1,108 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="plugins-ml2">
<title>ML2</title>
<section xml:id="plugins-ml2-overview">
<title>Overview</title>
<para>architecture</para>
<para>configuration file organization, relationships, etc</para>
</section>
<section xml:id="plugins-ml2-networktypedrivers">
<title>Network type drivers</title>
<variablelist>
<varlistentry>
<term>Flat</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
<varlistentry>
<term>VLAN</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
<varlistentry>
<term>GRE</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
<varlistentry>
<term>VXLAN</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="plugins-ml2-tenantnetworktypes">
<title>Tenant network types</title>
<itemizedlist>
<listitem><para>Local</para></listitem>
<listitem><para>VLAN, VLAN ID ranges</para></listitem>
<listitem><para>GRE, Tunnel ID ranges</para></listitem>
<listitem><para>VXLAN, VNI ID ranges</para></listitem>
</itemizedlist>
<para>See admin-guide-cloud/networking/section_networking_arch.xml</para>
</section>
<section xml:id="plugins-ml2-mechanisms">
<title>Mechanisms</title>
<variablelist>
<varlistentry>
<term>Linux Bridge</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Open vSwitch</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Open Daylight</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
<varlistentry>
<term>L2 Population</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Proprietary</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="plugins-ml2-security">
<title>Security</title>
<variablelist>
<varlistentry>
<term>Options</term>
<listitem>
<para>FIXME</para>
</listitem>
</varlistentry>
</variablelist>
</section>
</section>


@@ -1,13 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="plugins-proprietary">
<title>Proprietary</title>
<para>FIXME</para>
</section>


@@ -1,189 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_sr-iov_networking">
<title>SR-IOV Networking</title>
<para>The single root I/O virtualization (SR-IOV) interface is an extension
to the PCI Express (PCIe) specification. SR-IOV allows a device, such
as a network adapter, to separate access to its resources among various
PCIe hardware functions. These functions consist of the following
types:</para>
<itemizedlist>
<listitem><para>PCIe Physical Function (PF): This function is the
primary function of the device and advertises the device's SR-IOV
capabilities. The PF is associated with the Hyper-V parent partition
in a virtualized environment.</para></listitem>
<listitem><para>One or more PCIe Virtual Functions (VFs): Each VF is
associated with the device's PF. A VF shares one or more physical
resources of the device, such as memory and a network port, with
the PF and other VFs on the device. Each VF is associated with a
Hyper-V child partition in a virtualized environment.</para></listitem>
</itemizedlist>
<para>There are two ways that a SR-IOV port can be connected:</para>
<itemizedlist>
<listitem><para>Directly connected to its VF.</para></listitem>
<listitem><para>Connected with a <literal>macvtap</literal> device that
resides on the host, which is then connected to the corresponding VF.</para></listitem>
</itemizedlist>
<para>Each PF and VF is assigned a unique PCI Express Requester ID (RID)
that allows an I/O memory management unit (IOMMU) to differentiate
between different traffic streams and apply memory and interrupt
translations between the PF and VFs. This allows traffic streams to be
delivered directly to the appropriate Hyper-V parent or child partition.
As a result, nonprivileged data traffic flows from the PF to VF without
affecting other VFs.</para>
<para>SR-IOV enables network traffic to bypass the software switch layer
of the Hyper-V virtualization stack. Because the VF is assigned to a
child partition, the network traffic flows directly between the VF and
child partition. As a result, the I/O overhead in the software emulation
layer is diminished and achieves network performance that is nearly the
same performance as in nonvirtualized environments.</para>
<section xml:id="sect-support-for-nova">
<title>Compute support for SR-IOV Networking</title>
<para>Compute support for SR-IOV enables scheduling an instance with SR-IOV
ports based on their network connectivity. The physical networks associated
with the OpenStack Networking ports have to be considered in making
the scheduling decision. The PCI whitelist has been enhanced to allow
tags to be associated with PCI devices. PCI devices available for SR-IOV
networking should be tagged with a <literal>physical_network</literal> label.</para>
<para>For SR-IOV networking, a pre-defined tag <literal>physical_network</literal>
is used to define the physical network to which the devices are attached.
A whitelist entry is defined as:</para>
<screen>["vendor_id": "<replaceable>id</replaceable>",] ["product_id": "<replaceable>id</replaceable>",]
["address": "[[[[<replaceable>domain</replaceable>]:]<replaceable>bus</replaceable>]:][<replaceable>slot</replaceable>][.[<replaceable>function</replaceable>]]" |
"devname": "<replaceable>Ethernet_Interface_Name</replaceable>",]
"physical_network":"<replaceable>Name_String_of_the_Physical_Network</replaceable>"</screen>
<para>Here <replaceable>id</replaceable> can be an asterisk (*) or a valid
vendor or product ID as displayed by the Linux utility <literal>lspci</literal>.
The address uses the same syntax as in <literal>lspci</literal>. The
<literal>devname</literal> can be a valid PCI device name. The only
device names that are supported are those displayed by the Linux
utility <command>ifconfig -a</command> and correspond to either
a PF or a VF on a vNIC.</para>
<para>If the device defined by the address or <literal>devname</literal>
corresponds to a SR-IOV PF, all VFs under the PF will match the entry.</para>
<para>Multiple whitelist entries per host are supported.</para>
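As a sketch of how such an entry selects devices (the helper below is hypothetical, not the actual Compute matching code): vendor_id and product_id compare exactly, with an asterisk acting as a wildcard, and the address behaves like a glob.

```python
from fnmatch import fnmatch

def device_matches(entry, device):
    # Hypothetical matcher for one whitelist entry against a PCI device
    # described as a dict; not the actual nova implementation.
    for key in ("vendor_id", "product_id"):
        wanted = entry.get(key, "*")
        if wanted != "*" and wanted != device.get(key):
            return False
    # The address field is matched as a shell-style glob, as in lspci syntax.
    if "address" in entry and not fnmatch(device.get("address", ""),
                                          entry["address"]):
        return False
    if "devname" in entry and entry["devname"] != device.get("devname"):
        return False
    return True
```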
</section>
<section xml:id="sect-support-for-neutron">
<title>OpenStack Networking support for SR-IOV</title>
<para>OpenStack Networking support for SR-IOV requires the ML2 plug-in with
an SR-IOV-capable mechanism driver. Currently, there is an ML2 mechanism
driver for SR-IOV-capable NIC-based switching (HW VEB). Network
adapters from different vendors vary in the functionality they
support. If a vendor network adapter supports VF link state updates,
deploy the SR-IOV NIC L2 agent to leverage this
functionality.</para>
</section>
<section xml:id="sect-VM-creation-vNIC">
<title>Virtual Machine creation flow with SR-IOV vNIC</title>
<procedure>
<!-- <title>Virtual Machine creation flow with SR-IOV vNIC</title>-->
<step><para>Create one or more OpenStack Networking ports:</para>
<screen><prompt>#</prompt> <userinput>neutron port-create <replaceable>net-id</replaceable> \
--binding:vnic-type <replaceable>direct | macvtap | normal</replaceable></userinput></screen></step>
<step><para>Boot the VM with one or more OpenStack Networking ports:</para>
<screen><prompt>#</prompt> <userinput>nova boot --flavor m1.large --image <replaceable>image</replaceable>
--nic port-id=<replaceable>port1</replaceable> --nic port-id=<replaceable>port2</replaceable> <replaceable>vm_name</replaceable></userinput></screen></step>
</procedure>
<note><para>In the Compute boot API, users can specify either a
<literal>port-ID</literal> or a <literal>net-ID</literal>. If a
<literal>net-ID</literal> is specified, it is assumed that the user is
requesting a normal virtual port (which is not an SR-IOV port).</para></note>
</section>
<section xml:id="sect-SR-IOV_configuration">
<title>SR-IOV Configuration</title>
<para>Configuring SR-IOV networking involves configuring the OpenStack
Networking server, the Compute nodes, and the SR-IOV network agent.</para>
<section xml:id="sect-SR-IOV_configuration-neutron">
<title>OpenStack Networking Server</title>
<para>If you use the ML2 Neutron plug-in, modify <filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>:</para>
<programlisting language="ini">[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,sriovnicswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:2:100</programlisting>
<para>Add the supported PCI vendor VF devices, defined by <literal>vendor_id:product_id</literal>
according to the PCI ID Repository, to the
<filename>/etc/neutron/plugins/ml2/ml2_conf_sriov.ini</filename> file:</para>
<programlisting language="ini">[ml2_sriov]
supported_pci_vendor_devs = vendor_id:product_id</programlisting>
<para>Example for Intel NIC that supports SR-IOV:</para>
<screen>supported_pci_vendor_devs = 8086:10ca</screen>
<para>If SR-IOV network adapters support VF link state setting and
<literal>admin</literal> state management is desired, make sure to
add the following setting in the
<filename>/etc/neutron/plugins/ml2/ml2_conf_sriov.ini</filename>:</para>
<programlisting language="ini">[ml2_sriov]
agent_required = True</programlisting>
<para>Run the neutron-server service with the
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename> and
<filename>/etc/neutron/plugins/ml2/ml2_conf_sriov.ini</filename> configuration files:</para>
<screen>neutron-server --config-file <replaceable>/etc/neutron/neutron.conf</replaceable>
--config-file <replaceable>/etc/neutron/plugins/ml2/ml2_conf.ini</replaceable>
--config-file <replaceable>/etc/neutron/plugins/ml2/ml2_conf_sriov.ini</replaceable></screen>
</section>
<section xml:id="sect-SR-IOV_configuration-nova-compute">
<title>Compute</title>
<para>On each compute node, associate the VFs available to
each physical network. This is performed by configuring
<literal>pci_passthrough_whitelist</literal> in <filename>/etc/nova/nova.conf</filename>.
For example:</para>
<screen>pci_passthrough_whitelist = {"address":"*:0a:00.*","physical_network":"physnet1"}</screen>
<para>This associates any VF whose address includes
'<literal>:0a:00.</literal>' with the physical network <literal>physnet1</literal>.
After configuring the whitelist, restart the <literal>nova-compute</literal>
service.</para>
<para>When using devstack, <literal>pci_passthrough_whitelist</literal>
can be configured in the <filename>local.conf</filename> file. For example:</para>
<programlisting language="ini">[[post-config|$NOVA_CONF]]
[DEFAULT]
pci_passthrough_whitelist = {"'"address"'":"'"*:02:00.*"'","'"physical_network"'":"'"default"'"}</programlisting>
</section>
<section xml:id="sect-SR-IOV_configuration-neutron-agent">
<title>SR-IOV neutron agent</title>
<para>If the hardware supports it and you want to enable changing the
port <literal>admin_state</literal>, you have to run the OpenStack Networking
SR-IOV agent.</para>
<note><para>If you configured <literal>agent_required</literal>=<replaceable>True</replaceable>
on the OpenStack Networking server, you must run the agent on each
compute node.</para></note>
<para>In <filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>
ensure you have the following:</para>
<programlisting language="ini">[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver</programlisting>
<para>Modify <filename>/etc/neutron/plugins/ml2/ml2_conf_sriov.ini</filename>
as follows:</para>
<programlisting language="ini">[sriov_nic]
physical_device_mappings = physnet1:eth1
exclude_devices =</programlisting>
<para>where:</para>
<itemizedlist>
<listitem><para><literal>physnet1</literal> is the physical network</para></listitem>
<listitem><para><literal>eth1</literal> is the physical function (PF)</para></listitem>
<listitem><para><literal>exclude_devices</literal> is empty so all
the VFs associated with <literal>eth1</literal> may be configured
by the agent</para></listitem>
</itemizedlist>
<para>After modifying the configuration file, start the OpenStack
Networking SR-IOV agent:</para>
<screen><prompt>#</prompt> <userinput>neutron-sriov-nic-agent \
--config-file <replaceable>/etc/neutron/neutron.conf</replaceable> \
--config-file <replaceable>/etc/neutron/plugins/ml2/ml2_conf.ini</replaceable> \
--config-file <replaceable>/etc/neutron/plugins/ml2/ml2_conf_sriov.ini</replaceable></userinput></screen>
<note><para>If you want to exclude some of the VFs so that the agent does
not configure them, list them in the <literal>exclude_devices</literal>
option of the <literal>sriov_nic</literal>
section. For example:</para></note>
<screen>exclude_devices = eth1:0000:07:00.2; 0000:07:00.3, eth2:0000:05:00.1; 0000:05:00.2</screen>
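The exclude_devices value maps each physical function to the VF addresses the agent should skip. A parsing sketch (illustrative only; the agent ships its own parser):

```python
def parse_exclude_devices(value):
    # Split "eth1:addr; addr, eth2:addr" into {PF name: [VF PCI addresses]}.
    # Only the first colon separates the PF name; later colons belong to
    # the PCI addresses themselves.
    mapping = {}
    for device_entry in value.split(","):
        device_entry = device_entry.strip()
        if not device_entry:
            continue
        dev_name, _, exclusions = device_entry.partition(":")
        mapping[dev_name.strip()] = [vf.strip()
                                     for vf in exclusions.split(";")
                                     if vf.strip()]
    return mapping
```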
<para>For more information, see
<link xlink:href="http://community.mellanox.com/docs/DOC-1484">OpenStack
ML2 SR-IOV driver support</link>.</para>
</section>
</section>
</section>


@ -58,8 +58,8 @@ commands =
sphinx-build -E -W doc/networking-guide/source doc/networking-guide/build/html
mkdir -p publish-docs/networking-guide/
rsync -a doc/networking-guide/build/html/ publish-docs/networking-guide/
# Build DocBook Guides, note we do not build the DocBook XML Networking Guide
openstack-doc-test --check-build --ignore-book networking-guide {posargs}
# Build DocBook Guides
openstack-doc-test --check-build {posargs}
[testenv:docs]
commands =
@ -81,8 +81,7 @@ commands =
# not publish anything.
mkdir -p publish-docs
# We only publish changed manuals.
# Do not publish DocBook XML Networking Guide
openstack-doc-test --check-build --ignore-book networking-guide --publish
openstack-doc-test --check-build --publish
# TODO(jaegerandi): Remove the following lines before we branch off
# a kilo branch.
# Do not publish Debian guide