diff --git a/doc/source/admin/archives/adv-config.rst b/doc/source/admin/archives/adv-config.rst
new file mode 100644
index 00000000000..3539a6ad06c
--- /dev/null
+++ b/doc/source/admin/archives/adv-config.rst
@@ -0,0 +1,57 @@
+==============================
+Advanced configuration options
+==============================
+
+This section describes advanced configuration options for various system
+components. For example, configuration options where the default value
+works but you may want to customize it. After installing from
+packages, ``$NEUTRON_CONF_DIR`` is ``/etc/neutron``.
+
+L3 metering agent
+~~~~~~~~~~~~~~~~~
+
+You can run an L3 metering agent that enables layer-3 traffic metering.
+In general, you should launch the metering agent on all nodes that run
+the L3 agent:
+
+.. code-block:: console
+
+ $ neutron-metering-agent --config-file NEUTRON_CONFIG_FILE \
+ --config-file L3_METERING_CONFIG_FILE
+
+You must configure a driver that matches the plug-in that runs on the
+service. The driver adds metering to the routing interface.
+
++------------------------------------------+---------------------------------+
+| Option | Value |
++==========================================+=================================+
+| **Open vSwitch** | |
++------------------------------------------+---------------------------------+
+| interface\_driver | |
+| ($NEUTRON\_CONF\_DIR/metering\_agent.ini)| openvswitch |
++------------------------------------------+---------------------------------+
+| **Linux Bridge** | |
++------------------------------------------+---------------------------------+
+| interface\_driver | |
+| ($NEUTRON\_CONF\_DIR/metering\_agent.ini)| linuxbridge |
++------------------------------------------+---------------------------------+
+
+L3 metering driver
+------------------
+
+You must configure a driver that implements the metering abstraction.
+Currently the only available implementation uses iptables for metering.
+
+.. code-block:: ini
+
+ driver = iptables
+
+L3 metering service driver
+--------------------------
+
+To enable L3 metering, you must set the following option in the
+``neutron.conf`` file on the host that runs ``neutron-server``:
+
+.. code-block:: ini
+
+ service_plugins = metering
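+
+If other service plug-ins are already enabled, append ``metering`` to the
+existing comma-separated list. For example (``router`` is shown only as an
+illustration of a previously enabled plug-in):
+
+.. code-block:: ini
+
+ service_plugins = router,metering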
diff --git a/doc/source/admin/archives/adv-features.rst b/doc/source/admin/archives/adv-features.rst
new file mode 100644
index 00000000000..1f6e1a2bffe
--- /dev/null
+++ b/doc/source/admin/archives/adv-features.rst
@@ -0,0 +1,854 @@
+.. _adv-features:
+
+========================================
+Advanced features through API extensions
+========================================
+
+Several plug-ins implement API extensions that provide capabilities
+similar to what was available in ``nova-network``. These plug-ins are likely
+to be of interest to the OpenStack community.
+
+Provider networks
+~~~~~~~~~~~~~~~~~
+
+Networks can be categorized as either project networks or provider
+networks. Project networks are created by normal users and details about
+how they are physically realized are hidden from those users. Provider
+networks are created with administrative credentials, specifying the
+details of how the network is physically realized, usually to match some
+existing network in the data center.
+
+Provider networks enable administrators to create networks that map
+directly to the physical networks in the data center.
+This is commonly used to give projects direct access to a public network
+that can be used to reach the Internet. It might also be used to
+integrate with VLANs in the network that already have a defined meaning
+(for example, enable a VM from the marketing department to be placed
+on the same VLAN as bare-metal marketing hosts in the same data center).
+
+The provider extension allows administrators to explicitly manage the
+relationship between Networking virtual networks and underlying physical
+mechanisms such as VLANs and tunnels. When this extension is supported,
+Networking client users with administrative privileges see additional
+provider attributes on all virtual networks and are able to specify
+these attributes in order to create provider networks.
+
+The provider extension is supported by the Open vSwitch and Linux Bridge
+plug-ins. Configuration of these plug-ins requires familiarity with this
+extension.
+
+Terminology
+-----------
+
+A number of terms are used in the provider extension and in the
+configuration of plug-ins supporting the provider extension:
+
+.. list-table:: **Provider extension terminology**
+ :widths: 30 70
+ :header-rows: 1
+
+ * - Term
+ - Description
+ * - virtual network
+ - A Networking L2 network (identified by a UUID and optional name) whose
+ ports can be attached as vNICs to Compute instances and to various
+ Networking agents. The Open vSwitch and Linux Bridge plug-ins each
+ support several different mechanisms to realize virtual networks.
+ * - physical network
+ - A network connecting virtualization hosts (such as compute nodes) with
+ each other and with other network resources. Each physical network might
+ support multiple virtual networks. The provider extension and the plug-in
+ configurations identify physical networks using simple string names.
+ * - project network
+ - A virtual network that a project or an administrator creates. The
+ physical details of the network are not exposed to the project.
+ * - provider network
+ - A virtual network administratively created to map to a specific network
+ in the data center, typically to enable direct access to non-OpenStack
+ resources on that network. Projects can be given access to provider
+ networks.
+ * - VLAN network
+ - A virtual network implemented as packets on a specific physical network
+ containing IEEE 802.1Q headers with a specific VID field value. VLAN
+ networks sharing the same physical network are isolated from each other
+ at L2 and can even have overlapping IP address spaces. Each distinct
+ physical network supporting VLAN networks is treated as a separate VLAN
+ trunk, with a distinct space of VID values. Valid VID values are 1
+ through 4094.
+ * - flat network
+ - A virtual network implemented as packets on a specific physical network
+ containing no IEEE 802.1Q header. Each physical network can realize at
+ most one flat network.
+ * - local network
+ - A virtual network that allows communication within each host, but not
+ across a network. Local networks are intended mainly for single-node test
+ scenarios, but can have other uses.
+ * - GRE network
+ - A virtual network implemented as network packets encapsulated using
+ GRE. GRE networks are also referred to as *tunnels*. GRE tunnel packets
+ are routed by the IP routing table for the host, so GRE networks are not
+ associated by Networking with specific physical networks.
+ * - Virtual Extensible LAN (VXLAN) network
+ - VXLAN is a proposed encapsulation protocol for running an overlay network
+ on existing Layer 3 infrastructure. An overlay network is a virtual
+ network that is built on top of existing network Layer 2 and Layer 3
+ technologies to support elastic compute architectures.
+
+The ML2, Open vSwitch, and Linux Bridge plug-ins support VLAN networks,
+flat networks, and local networks. Only the ML2 and Open vSwitch
+plug-ins currently support GRE and VXLAN networks, provided that the
+ required features exist in the host's Linux kernel, Open vSwitch, and
+iproute2 packages.
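+
+As an illustration of how a deployment might enable several of these
+network types, the following minimal ``ml2_conf.ini`` sketch assumes the
+ML2 plug-in; the physical network name ``physnet1`` and the VLAN range are
+placeholders:
+
+.. code-block:: ini
+
+ [ml2]
+ type_drivers = flat,vlan,gre,vxlan
+ tenant_network_types = vxlan
+
+ [ml2_type_vlan]
+ network_vlan_ranges = physnet1:1000:2999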
+
+Provider attributes
+-------------------
+
+The provider extension extends the Networking network resource with
+these attributes:
+
+
+.. list-table:: **Provider network attributes**
+ :widths: 10 10 10 49
+ :header-rows: 1
+
+ * - Attribute name
+ - Type
+ - Default Value
+ - Description
+ * - provider:network_type
+ - String
+ - N/A
+ - The physical mechanism by which the virtual network is implemented.
+ Possible values are ``flat``, ``vlan``, ``local``, ``gre``, and
+ ``vxlan``, corresponding to flat networks, VLAN networks, local
+ networks, GRE networks, and VXLAN networks as defined above.
+ All types of provider networks can be created by administrators,
+ while project networks can be implemented as ``vlan``, ``gre``,
+ ``vxlan``, or ``local`` network types depending on plug-in
+ configuration.
+ * - provider:physical_network
+ - String
+ - If a physical network named "default" has been configured and
+ if provider:network_type is ``flat`` or ``vlan``, then "default"
+ is used.
+ - The name of the physical network over which the virtual network
+ is implemented for flat and VLAN networks. Not applicable to the
+ ``local`` or ``gre`` network types.
+ * - provider:segmentation_id
+ - Integer
+ - N/A
+ - For VLAN networks, the VLAN VID on the physical network that
+ realizes the virtual network. Valid VLAN VIDs are 1 through 4094.
+ For GRE networks, the tunnel ID. Valid tunnel IDs are any 32-bit
+ unsigned integer. Not applicable to the ``flat`` or ``local``
+ network types.
+
+To view or set provider extended attributes, a client must be authorized
+for the ``extension:provider_network:view`` and
+``extension:provider_network:set`` actions in the Networking policy
+configuration. The default Networking configuration authorizes both
+actions for users with the admin role. An authorized client or an
+administrative user can view and set the provider extended attributes
+through Networking API calls. See the section called
+:ref:`Authentication and authorization` for details on policy configuration.
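+
+For example, an administrator might create a VLAN provider network that
+maps to an existing VLAN in the data center. The following sketch assumes
+a physical network named ``physnet1`` and VLAN ID 1000:
+
+.. code-block:: console
+
+ $ openstack network create provider-vlan --provider-network-type vlan \
+ --provider-physical-network physnet1 --provider-segment 1000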
+
+.. _L3-routing-and-NAT:
+
+L3 routing and NAT
+~~~~~~~~~~~~~~~~~~
+
+The Networking API provides abstract L2 network segments that are
+decoupled from the technology used to implement the L2 network.
+Networking includes an API extension that provides abstract L3 routers
+that API users can dynamically provision and configure. These Networking
+routers can connect multiple L2 Networking networks and can also provide
+a gateway that connects one or more private L2 networks to a shared
+external network. For example, a public network for access to the
+Internet. See the OpenStack Configuration Reference for details on common
+models of deploying Networking L3 routers.
+
+The L3 router provides basic NAT capabilities on gateway ports that
+uplink the router to external networks. This router SNATs all traffic by
+default and supports floating IPs, which creates a static one-to-one
+mapping from a public IP on the external network to a private IP on one
+of the other subnets attached to the router. This allows a project to
+selectively expose VMs on private networks to other hosts on the
+external network (and often to all hosts on the Internet). You can
+allocate and map floating IPs from one port to another, as needed.
+
+Basic L3 operations
+-------------------
+
+External networks are visible to all users. However, the default policy
+settings enable only administrative users to create, update, and delete
+external networks.
+
+This table shows example :command:`openstack` commands that enable you
+to complete basic L3 operations:
+
+.. list-table:: **Basic L3 Operations**
+ :widths: 30 50
+ :header-rows: 1
+
+ * - Operation
+ - Command
+ * - Creates external networks.
+ - .. code-block:: console
+
+ $ openstack network create public --external
+ $ openstack subnet create --network public --subnet-range 172.16.1.0/24 subnetname
+ * - Lists external networks.
+ - .. code-block:: console
+
+ $ openstack network list --external
+ * - Creates an internal-only router that connects to multiple L2 networks privately.
+ - .. code-block:: console
+
+ $ openstack network create net1
+ $ openstack subnet create --network net1 --subnet-range 10.0.0.0/24 subnetname1
+ $ openstack network create net2
+ $ openstack subnet create --network net2 --subnet-range 10.0.1.0/24 subnetname2
+ $ openstack router create router1
+ $ openstack router add subnet router1 subnetname1
+ $ openstack router add subnet router1 subnetname2
+
+ An internal router port can have only one IPv4 subnet and multiple IPv6 subnets
+ that belong to the same network ID. When you call ``router-interface-add`` with an IPv6
+ subnet, this operation adds the interface to an existing internal port with the same
+ network ID. If a port with the same network ID does not exist, a new port is created.
+
+ * - Connects a router to an external network, which enables that router to
+ act as a NAT gateway for external connectivity.
+ - .. code-block:: console
+
+ $ openstack router set --external-gateway EXT_NET_ID router1
+ $ openstack router set --route destination=172.24.4.0/24,gateway=172.24.4.1 router1
+
+ The router obtains an interface with the gateway_ip address of the
+ subnet and this interface is attached to a port on the L2 Networking
+ network associated with the subnet. The router also gets a gateway
+ interface to the specified external network. This provides SNAT
+ connectivity to the external network as well as support for floating
+ IPs allocated on that external network. Commonly an external network
+ maps to a network in the provider.
+
+ * - Lists routers.
+ - .. code-block:: console
+
+ $ openstack router list
+ * - Shows information for a specified router.
+ - .. code-block:: console
+
+ $ openstack router show ROUTER_ID
+ * - Shows all internal interfaces for a router.
+ - .. code-block:: console
+
+ $ openstack port list --router ROUTER_ID
+ $ openstack port list --router ROUTER_NAME
+ * - Identifies the PORT_ID that represents the VM NIC to which the floating
+ IP should map.
+ - .. code-block:: console
+
+ $ openstack port list -c ID -c "Fixed IP Addresses" --server INSTANCE_ID
+
+ This port must be on a Networking subnet that is attached to
+ a router uplinked to the external network used to create the floating
+ IP. Conceptually, this is because the router must be able to perform the
+ Destination NAT (DNAT) rewriting of packets from the floating IP address
+ (chosen from a subnet on the external network) to the internal fixed
+ IP (chosen from a private subnet that is behind the router).
+
+ * - Creates a floating IP address and associates it with a port.
+ - .. code-block:: console
+
+ $ openstack floating ip create EXT_NET_ID
+ $ openstack floating ip add port FLOATING_IP_ID --port-id INTERNAL_VM_PORT_ID
+
+ * - Creates a floating IP on a specific subnet in the external network.
+ - .. code-block:: console
+
+ $ openstack floating ip create EXT_NET_ID --subnet SUBNET_ID
+
+ If there are multiple subnets in the external network, you can choose a specific
+ subnet based on quality and costs.
+
+ * - Creates a floating IP address and associates it with a port, in a single step.
+ - .. code-block:: console
+
+ $ openstack floating ip create --port INTERNAL_VM_PORT_ID EXT_NET_ID
+ * - Lists floating IPs
+ - .. code-block:: console
+
+ $ openstack floating ip list
+ * - Finds floating IP for a specified VM port.
+ - .. code-block:: console
+
+ $ openstack floating ip list --port INTERNAL_VM_PORT_ID
+ * - Disassociates a floating IP address.
+ - .. code-block:: console
+
+ $ openstack floating ip remove port FLOATING_IP_ID
+ * - Deletes the floating IP address.
+ - .. code-block:: console
+
+ $ openstack floating ip delete FLOATING_IP_ID
+ * - Clears the gateway.
+ - .. code-block:: console
+
+ $ openstack router unset --external-gateway router1
+ * - Removes the interfaces from the router.
+ - .. code-block:: console
+
+ $ openstack router remove subnet router1 SUBNET_ID
+
+ If this subnet ID is the last subnet on the port, this operation deletes the port itself.
+
+ * - Deletes the router.
+ - .. code-block:: console
+
+ $ openstack router delete router1
+
+Security groups
+~~~~~~~~~~~~~~~
+
+Security groups and security group rules allow administrators and
+projects to specify the type of traffic and direction
+(ingress/egress) that is allowed to pass through a port. A security
+group is a container for security group rules.
+
+When a port is created in Networking, it is associated with a security
+group. If a security group is not specified, the port is associated with
+a 'default' security group. By default, this group drops all ingress
+traffic and allows all egress. Rules can be added to this group in order
+to change the behavior.
+
+To use the Compute security group APIs or use Compute to orchestrate the
+creation of ports for instances on specific security groups, you must
+complete additional configuration. You must configure the
+``/etc/nova/nova.conf`` file and set the ``security_group_api=neutron``
+option on every node that runs nova-compute and nova-api. After you make
+this change, restart nova-api and nova-compute to pick up this change.
+Then, you can use both the Compute and OpenStack Network security group
+APIs at the same time.
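+
+For example, the relevant ``nova.conf`` fragment might look like this
+sketch, which shows only the option named above:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ security_group_api = neutron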
+
+.. note::
+
+ - To use the Compute security group API with Networking, the
+ Networking plug-in must implement the security group API. The
+ following plug-ins currently implement this: ML2, Open vSwitch,
+ Linux Bridge, NEC, and VMware NSX.
+
+ - You must configure the correct firewall driver in the
+ ``securitygroup`` section of the plug-in/agent configuration
+ file. Some plug-ins and agents, such as Linux Bridge Agent and
+ Open vSwitch Agent, use the no-operation driver as the default,
+ which results in non-working security groups.
+
+ - When using the security group API through Compute, security
+ groups are applied to all ports on an instance. The reason for
+ this is that Compute security group APIs are instance based,
+ whereas the Networking APIs are port based.
+
+Basic security group operations
+-------------------------------
+
+This table shows example neutron commands that enable you to complete
+basic security group operations:
+
+.. list-table:: **Basic security group operations**
+ :widths: 30 50
+ :header-rows: 1
+
+ * - Operation
+ - Command
+ * - Creates a security group for our web servers.
+ - .. code-block:: console
+
+ $ openstack security group create webservers \
+ --description "security group for webservers"
+ * - Lists security groups.
+ - .. code-block:: console
+
+ $ openstack security group list
+ * - Creates a security group rule to allow port 80 ingress.
+ - .. code-block:: console
+
+ $ openstack security group rule create --ingress \
+ --protocol tcp SECURITY_GROUP_UUID
+ * - Lists security group rules.
+ - .. code-block:: console
+
+ $ openstack security group rule list
+ * - Deletes a security group rule.
+ - .. code-block:: console
+
+ $ openstack security group rule delete SECURITY_GROUP_RULE_UUID
+ * - Deletes a security group.
+ - .. code-block:: console
+
+ $ openstack security group delete SECURITY_GROUP_UUID
+ * - Creates a port and associates two security groups.
+ - .. code-block:: console
+
+ $ openstack port create port1 --security-group SECURITY_GROUP_ID1 \
+ --security-group SECURITY_GROUP_ID2 --network NETWORK_ID
+ * - Removes security groups from a port.
+ - .. code-block:: console
+
+ $ openstack port set --no-security-group PORT_ID
+
+Basic Load-Balancer-as-a-Service operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. note::
+
+ The Load-Balancer-as-a-Service (LBaaS) API provisions and configures
+ load balancers. The reference implementation is based on the HAProxy
+ software load balancer.
+
+This list shows example neutron commands that enable you to complete
+basic LBaaS operations:
+
+- Creates a load balancer pool by using a specific provider.
+
+ ``--provider`` is an optional argument. If not used, the pool is
+ created with the default provider for the LBaaS service. You should
+ configure the default provider in the ``[service_providers]`` section
+ of the ``neutron.conf`` file (see the configuration sketch after this
+ list). If no default provider is specified for LBaaS, the
+ ``--provider`` parameter is required for pool creation.
+
+ .. code-block:: console
+
+ $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool \
+ --protocol HTTP --subnet-id SUBNET_UUID --provider PROVIDER_NAME
+
+- Associates two web servers with the pool.
+
+ .. code-block:: console
+
+ $ neutron lb-member-create --address WEBSERVER1_IP --protocol-port 80 mypool
+ $ neutron lb-member-create --address WEBSERVER2_IP --protocol-port 80 mypool
+
+- Creates a health monitor that checks to make sure our instances are
+ still running on the specified protocol-port.
+
+ .. code-block:: console
+
+ $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 \
+ --timeout 3
+
+- Associates a health monitor with the pool.
+
+ .. code-block:: console
+
+ $ neutron lb-healthmonitor-associate HEALTHMONITOR_UUID mypool
+
+- Creates a virtual IP (VIP) address that, when accessed through the
+ load balancer, directs the requests to one of the pool members.
+
+ .. code-block:: console
+
+ $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol \
+ HTTP --subnet-id SUBNET_UUID mypool
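+
+As mentioned in the first step above, the default LBaaS provider is
+defined in the ``[service_providers]`` section of ``neutron.conf``. A
+minimal sketch follows; the generic format is
+``service_provider = <service_type>:<name>:<driver>[:default]`` and
+``DRIVER_PATH`` is a placeholder that depends on your installation:
+
+.. code-block:: ini
+
+ [service_providers]
+ service_provider = LOADBALANCER:Haproxy:DRIVER_PATH:default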
+
+Plug-in specific extensions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each vendor can choose to implement additional API extensions to the
+core API. This section describes the extensions for each plug-in.
+
+VMware NSX extensions
+---------------------
+
+These sections explain NSX plug-in extensions.
+
+VMware NSX QoS extension
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The VMware NSX QoS extension rate-limits network ports to guarantee a
+specific amount of bandwidth for each port. This extension, by default,
+is only accessible by a project with an admin role but is configurable
+through the ``policy.json`` file. To use this extension, create a queue
+and specify the min/max bandwidth rates (kbps) and optionally set the
+QoS Marking and DSCP value (if your network fabric uses these values to
+make forwarding decisions). Once created, you can associate a queue with
+a network. Then, when ports are created on that network, they are
+automatically associated with the specific queue size that
+was associated with the network. Because one queue size for every port
+on a network might not be optimal, a scaling factor from the nova flavor
+``rxtx_factor`` is passed in from Compute when creating the port to scale
+the queue.
+
+Lastly, if you want to set a specific baseline QoS policy for the amount
+of bandwidth a single port can use (unless a network queue is specified
+with the network a port is created on) a default queue can be created in
+Networking which then causes ports created to be associated with a queue
+of that size times the rxtx scaling factor. Note that after a network or
+default queue is specified, queues are added to ports that are
+subsequently created but are not added to existing ports.
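+
+Because the queue size is scaled by the flavor ``rxtx_factor``, instances
+of a particular flavor can be given proportionally larger queues. A
+hedged sketch (the flavor name and sizing values are placeholders):
+
+.. code-block:: console
+
+ $ openstack flavor create m1.highbw --ram 4096 --disk 40 --vcpus 2 \
+ --rxtx-factor 2.0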
+
+Basic VMware NSX QoS operations
+'''''''''''''''''''''''''''''''
+
+This table shows example neutron commands that enable you to complete
+basic queue operations:
+
+.. list-table:: **Basic VMware NSX QoS operations**
+ :widths: 30 50
+ :header-rows: 1
+
+ * - Operation
+ - Command
+ * - Creates QoS queue (admin-only).
+ - .. code-block:: console
+
+ $ neutron queue-create --min 10 --max 1000 myqueue
+ * - Associates a queue with a network.
+ - .. code-block:: console
+
+ $ neutron net-create network --queue_id QUEUE_ID
+ * - Creates a default system queue.
+ - .. code-block:: console
+
+ $ neutron queue-create --default True --min 10 --max 2000 default
+ * - Lists QoS queues.
+ - .. code-block:: console
+
+ $ neutron queue-list
+ * - Deletes a QoS queue.
+ - .. code-block:: console
+
+ $ neutron queue-delete QUEUE_ID_OR_NAME
+
+VMware NSX provider networks extension
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Provider networks can be implemented in different ways by the underlying
+NSX platform.
+
+The *FLAT* and *VLAN* network types use bridged transport connectors.
+These network types enable the attachment of a large number of ports. To
+handle the increased scale, the NSX plug-in can back a single OpenStack
+Network with a chain of NSX logical switches. You can specify the
+maximum number of ports on each logical switch in this chain with the
+``max_lp_per_bridged_ls`` parameter, which has a default value of 5,000.
+
+The recommended value for this parameter varies with the NSX version
+running in the back-end, as shown in the following table.
+
+**Recommended values for max_lp_per_bridged_ls**
+
++---------------+---------------------+
+| NSX version | Recommended Value |
++===============+=====================+
+| 2.x | 64 |
++---------------+---------------------+
+| 3.0.x | 5,000 |
++---------------+---------------------+
+| 3.1.x | 5,000 |
++---------------+---------------------+
+| 3.2.x | 10,000 |
++---------------+---------------------+
+
+In addition to these network types, the NSX plug-in also supports a
+special *l3_ext* network type, which maps external networks to specific
+NSX gateway services as discussed in the next section.
+
+VMware NSX L3 extension
+^^^^^^^^^^^^^^^^^^^^^^^
+
+NSX exposes its L3 capabilities through gateway services which are
+usually configured out of band from OpenStack. To use NSX with L3
+capabilities, first create an L3 gateway service in the NSX Manager.
+Next, in ``/etc/neutron/plugins/vmware/nsx.ini`` set
+``default_l3_gw_service_uuid`` to this value. By default, routers are
+mapped to this gateway service.
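+
+A minimal sketch of the relevant ``nsx.ini`` fragment follows; the UUID is
+a placeholder and the ``[DEFAULT]`` section is an assumption:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ default_l3_gw_service_uuid = L3_GATEWAY_SERVICE_UUID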
+
+VMware NSX L3 extension operations
+''''''''''''''''''''''''''''''''''
+
+Create external network and map it to a specific NSX gateway service:
+
+.. code-block:: console
+
+ $ openstack network create public --external --provider-network-type l3_ext \
+ --provider-physical-network L3_GATEWAY_SERVICE_UUID
+
+Terminate traffic on a specific VLAN from an NSX gateway service:
+
+.. code-block:: console
+
+ $ openstack network create public --external --provider-network-type l3_ext \
+ --provider-physical-network L3_GATEWAY_SERVICE_UUID --provider-segment VLAN_ID
+
+Operational status synchronization in the VMware NSX plug-in
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Starting with the Havana release, the VMware NSX plug-in provides an
+asynchronous mechanism for retrieving the operational status for neutron
+resources from the NSX back-end; this applies to *network*, *port*, and
+*router* resources.
+
+The back-end is polled periodically and the status for every resource is
+retrieved; then the status in the Networking database is updated only
+for the resources for which a status change occurred. As operational
+status is now retrieved asynchronously, performance for ``GET``
+operations is consistently improved.
+
+Data to retrieve from the back-end are divided into chunks in order to
+avoid expensive API requests; this is achieved by leveraging the NSX API's
+response paging capabilities. The minimum chunk size can be specified
+using a configuration option; the actual chunk size is then determined
+dynamically according to the total number of resources to retrieve, the
+interval between two synchronization task runs, and the minimum delay
+between two subsequent requests to the NSX back-end.
+
+The operational status synchronization can be tuned or disabled using
+the configuration options reported in this table; it is however worth
+noting that the default values work fine in most cases.
+
+.. list-table:: **Configuration options for tuning operational status synchronization in the NSX plug-in**
+ :widths: 10 10 10 10 35
+ :header-rows: 1
+
+ * - Option name
+ - Group
+ - Default value
+ - Type and constraints
+ - Notes
+ * - ``state_sync_interval``
+ - ``nsx_sync``
+ - 10 seconds
+ - Integer; no constraint.
+ - Interval in seconds between two runs of the synchronization task. If the
+ synchronization task takes more than ``state_sync_interval`` seconds to
+ execute, a new instance of the task is started as soon as the other is
+ completed. Setting the value for this option to 0 will disable the
+ synchronization task.
+ * - ``max_random_sync_delay``
+ - ``nsx_sync``
+ - 0 seconds
+ - Integer. Must not exceed ``min_sync_req_delay``
+ - When different from zero, a random delay between 0 and
+ ``max_random_sync_delay`` will be added before processing the next
+ chunk.
+ * - ``min_sync_req_delay``
+ - ``nsx_sync``
+ - 1 second
+ - Integer. Must not exceed ``state_sync_interval``.
+ - The value of this option can be tuned according to the observed
+ load on the NSX controllers. Lower values will result in faster
+ synchronization, but might increase the load on the controller cluster.
+ * - ``min_chunk_size``
+ - ``nsx_sync``
+ - 500 resources
+ - Integer; no constraint.
+ - Minimum number of resources to retrieve from the back-end for each
+ synchronization chunk. The expected number of synchronization chunks
+ is given by the ratio between ``state_sync_interval`` and
+ ``min_sync_req_delay``. The size of a chunk might increase if the
+ total number of resources is such that more than ``min_chunk_size``
+ resources must be fetched in one chunk with the current number of
+ chunks.
+ * - ``always_read_status``
+ - ``nsx_sync``
+ - False
+ - Boolean; no constraint.
+ - When this option is enabled, the operational status will always be
+ retrieved from the NSX back-end at every ``GET`` request. In this
+ case it is advisable to disable the synchronization task.
+
+When running multiple OpenStack Networking server instances, the status
+synchronization task should not run on every node; doing so sends
+unnecessary traffic to the NSX back-end and performs unnecessary DB
+operations. Set the ``state_sync_interval`` configuration option to a
+non-zero value exclusively on a node designated for back-end status
+synchronization.
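+
+Putting the options from the table together, a ``neutron.conf`` fragment
+on the node designated for status synchronization might look like the
+following sketch, which simply restates the default values:
+
+.. code-block:: ini
+
+ [nsx_sync]
+ state_sync_interval = 10
+ max_random_sync_delay = 0
+ min_sync_req_delay = 1
+ min_chunk_size = 500
+ always_read_status = False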
+
+The ``fields=status`` parameter in Networking API requests always
+triggers an explicit query to the NSX back end, even when you enable
+asynchronous state synchronization. For example, ``GET
+/v2.0/networks/NET_ID?fields=status&fields=name``.
+
+Big Switch plug-in extensions
+-----------------------------
+
+This section explains the Big Switch neutron plug-in-specific extension.
+
+Big Switch router rules
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Big Switch allows router rules to be added to each project router. These
+rules can be used to enforce routing policies such as denying traffic
+between subnets or traffic to external networks. By enforcing these at
+the router level, network segmentation policies can be enforced across
+many VMs that have differing security groups.
+
+Router rule attributes
+''''''''''''''''''''''
+
+Each project router has a set of router rules associated with it. Each
+router rule has the attributes in this table. Router rules and their
+attributes can be set using the :command:`neutron router-update` command,
+through the horizon interface or the Networking API.
+
+.. list-table:: **Big Switch Router rule attributes**
+ :widths: 10 10 10 35
+ :header-rows: 1
+
+ * - Attribute name
+ - Required
+ - Input type
+ - Description
+ * - source
+ - Yes
+ - A valid CIDR or one of the keywords 'any' or 'external'
+ - The network that a packet's source IP must match for the
+ rule to be applied.
+ * - destination
+ - Yes
+ - A valid CIDR or one of the keywords 'any' or 'external'
+ - The network that a packet's destination IP must match for the rule to
+ be applied.
+ * - action
+ - Yes
+ - 'permit' or 'deny'
+ - Determines whether or not the matched packets will be allowed to cross the
+ router.
+ * - nexthop
+ - No
+ - A plus-separated (+) list of next-hop IP addresses. For example,
+ ``1.1.1.1+1.1.1.2``.
+ - Overrides the default virtual router used to handle traffic for packets
+ that match the rule.
+
+Order of rule processing
+''''''''''''''''''''''''
+
+The order of router rules has no effect. Overlapping rules are evaluated
+using longest prefix matching on the source and destination fields. The
+source field is matched first, so it always takes precedence over
+the destination field. In other words, longest prefix matching is used
+on the destination field only if there are multiple matching rules with
+the same source.
+
+Big Switch router rules operations
+''''''''''''''''''''''''''''''''''
+
+Router rules are configured with a router update operation in OpenStack
+Networking. The update overrides any previous rules so all rules must be
+provided at the same time.
+
+Update a router with rules to permit traffic by default but block
+traffic from external networks to the 10.10.10.0/24 subnet:
+
+.. code-block:: console
+
+ $ neutron router-update ROUTER_UUID --router_rules type=dict list=true \
+ source=any,destination=any,action=permit \
+ source=external,destination=10.10.10.0/24,action=deny
+
+Specify alternate next-hop addresses for a specific subnet:
+
+.. code-block:: console
+
+ $ neutron router-update ROUTER_UUID --router_rules type=dict list=true \
+ source=any,destination=any,action=permit \
+ source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
+
+Block traffic between two subnets while allowing everything else:
+
+.. code-block:: console
+
+ $ neutron router-update ROUTER_UUID --router_rules type=dict list=true \
+ source=any,destination=any,action=permit \
+ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny
+
+L3 metering
+~~~~~~~~~~~
+
+The L3 metering API extension enables administrators to configure IP
+ranges and assign a specified label to them to be able to measure
+traffic that goes through a virtual router.
+
+The L3 metering extension is decoupled from the technology that
+implements the measurement. Two abstractions have been added: the
+metering label, and the metering rules that the label contains. Because
+a metering label is associated with a project, all virtual routers in
+this project are associated with that label.
+
+Basic L3 metering operations
+----------------------------
+
+Only administrators can manage the L3 metering labels and rules.
+
+This table shows example :command:`neutron` commands that enable you to
+complete basic L3 metering operations:
+
+.. list-table:: **Basic L3 operations**
+ :widths: 20 50
+ :header-rows: 1
+
+ * - Operation
+ - Command
+ * - Creates a metering label.
+ - .. code-block:: console
+
+ $ openstack network meter label create LABEL1 \
+ --description "DESCRIPTION_LABEL1"
+ * - Lists metering labels.
+ - .. code-block:: console
+
+ $ openstack network meter label list
+ * - Shows information for a specified label.
+ - .. code-block:: console
+
+ $ openstack network meter label show LABEL_UUID
+ $ openstack network meter label show LABEL1
+ * - Deletes a metering label.
+ - .. code-block:: console
+
+ $ openstack network meter label delete LABEL_UUID
+ $ openstack network meter label delete LABEL1
+ * - Creates a metering rule.
+ - .. code-block:: console
+
+ $ openstack network meter label rule create LABEL_UUID \
+ --remote-ip-prefix CIDR \
+ --direction DIRECTION --exclude
+
+ For example:
+
+ .. code-block:: console
+
+ $ openstack network meter label rule create label1 \
+ --remote-ip-prefix 10.0.0.0/24 --direction ingress
+ $ openstack network meter label rule create label1 \
+ --remote-ip-prefix 20.0.0.0/24 --exclude
+
+ * - Lists all metering label rules.
+ - .. code-block:: console
+
+ $ openstack network meter label rule list
+ * - Shows information for a specified label rule.
+ - .. code-block:: console
+
+ $ openstack network meter label rule show RULE_UUID
+ * - Deletes a metering label rule.
+ - .. code-block:: console
+
+ $ openstack network meter label rule delete RULE_UUID
+ * - Lists the values of created metering label rules.
+ - .. code-block:: console
+
+ $ ceilometer sample-list -m SNMP_MEASUREMENT
+
+ For example:
+
+ .. code-block:: console
+
+ $ ceilometer sample-list -m hardware.network.bandwidth.bytes
+ $ ceilometer sample-list -m hardware.network.incoming.bytes
+ $ ceilometer sample-list -m hardware.network.outgoing.bytes
+ $ ceilometer sample-list -m hardware.network.outgoing.errors
diff --git a/doc/source/admin/archives/adv-operational-features.rst b/doc/source/admin/archives/adv-operational-features.rst
new file mode 100644
index 00000000000..783ce08a466
--- /dev/null
+++ b/doc/source/admin/archives/adv-operational-features.rst
@@ -0,0 +1,123 @@
+=============================
+Advanced operational features
+=============================
+
+Logging settings
+~~~~~~~~~~~~~~~~
+
+Networking components use the Python logging module for logging. Logging
+configuration can be provided in ``neutron.conf`` or as command-line
+options. Command-line options override the settings in ``neutron.conf``.
+
+To configure logging for Networking components, use one of these
+methods:
+
+- Provide logging settings in a logging configuration file.
+
+ See the Python logging how-to to learn more
+ about logging.
+
+- Provide logging settings in ``neutron.conf``.
+
+ .. code-block:: ini
+
+ [DEFAULT]
+ # Default log level is WARNING
+ # Show debugging output in logs (sets DEBUG log level output)
+ # debug = False
+
+ # log_date_format = %Y-%m-%d %H:%M:%S
+
+ # use_syslog = False
+ # syslog_log_facility = LOG_USER
+
+ # if use_syslog is False, we can set log_file and log_dir.
+ # if use_syslog is False and we do not set log_file,
+ # the log will be printed to stdout.
+ # log_file =
+ # log_dir =
+
+Notifications
+~~~~~~~~~~~~~
+
+Notifications can be sent when Networking resources such as networks,
+subnets, and ports are created, updated, or deleted.
+
+Notification options
+--------------------
+
+To support the DHCP agent, the ``rpc_notifier`` driver must be set. To set
+up the notification, edit the notification options in ``neutron.conf``:
+
+.. code-block:: ini
+
+ # Driver or drivers to handle sending notifications. (multi
+ # valued)
+ # notification_driver=messagingv2
+
+ # AMQP topic used for OpenStack notifications. (list value)
+ # Deprecated group/name - [rpc_notifier2]/topics
+ notification_topics = notifications
+
+Setting cases
+-------------
+
+Logging and RPC
+^^^^^^^^^^^^^^^
+
+These options configure the Networking server to send notifications
+through logging and RPC. The logging options are described in the OpenStack
+Configuration Reference. RPC notifications go to the ``notifications.info``
+queue bound to a topic exchange defined by ``control_exchange`` in
+``neutron.conf``.
+
+**Notification System Options**
+
+A notification can be sent when a network, subnet, or port is created,
+updated or deleted. The notification system options are:
+
+* ``notification_driver``
+ Defines the driver or drivers to handle the sending of a notification.
+ The six available options are:
+
+ * ``messaging``
+ Send notifications using the 1.0 message format.
+ * ``messagingv2``
+ Send notifications using the 2.0 message format (with a message
+ envelope).
+ * ``routing``
+ Configurable routing notifier (by priority or event_type).
+ * ``log``
+ Publish notifications using Python logging infrastructure.
+ * ``test``
+ Store notifications in memory for test verification.
+ * ``noop``
+ Disable sending notifications entirely.
+* ``default_notification_level``
+ Is used to form topic names or to set a logging level.
+* ``default_publisher_id``
+ Is a part of the notification payload.
+* ``notification_topics``
+ AMQP topics used for OpenStack notifications. This can be a
+ comma-separated list of values. The actual queue names combine each
+ topic with the notification level, for example ``notifications.info``.
+* ``control_exchange``
+ This is an option defined in oslo.messaging. It is the default exchange
+ under which topics are scoped. May be overridden by an exchange name
+ specified in the ``transport_url`` option. It is a string value.
+
+Below is a sample ``neutron.conf`` configuration file:
+
+.. code-block:: ini
+
+ notification_driver = messagingv2
+
+ default_notification_level = INFO
+
+ host = myhost.com
+ default_publisher_id = $host
+
+ notification_topics = notifications
+
+ control_exchange = openstack
diff --git a/doc/source/admin/archives/arch.rst b/doc/source/admin/archives/arch.rst
new file mode 100644
index 00000000000..dcad74212a7
--- /dev/null
+++ b/doc/source/admin/archives/arch.rst
@@ -0,0 +1,88 @@
+=======================
+Networking architecture
+=======================
+
+Before you deploy Networking, it is useful to understand the Networking
+services and how they interact with the OpenStack components.
+
+Overview
+~~~~~~~~
+
+Networking is a standalone component in the OpenStack modular
+architecture. It is positioned alongside OpenStack components such as
+Compute, Image service, Identity, or Dashboard. Like those
+components, a deployment of Networking often involves deploying several
+services to a variety of hosts.
+
+The Networking server uses the neutron-server daemon to expose the
+Networking API and enable administration of the configured Networking
+plug-in. Typically, the plug-in requires access to a database for
+persistent storage (also similar to other OpenStack services).
+
+If your deployment uses a controller host to run centralized Compute
+components, you can deploy the Networking server to that same host.
+However, Networking is entirely standalone and can be deployed to a
+dedicated host. Depending on your configuration, Networking can also
+include the following agents:
+
++----------------------------+---------------------------------------------+
+| Agent | Description |
++============================+=============================================+
+|**plug-in agent** | |
+|(``neutron-*-agent``) | Runs on each hypervisor to perform |
+| | local vSwitch configuration. The agent that |
+| | runs depends on the plug-in that you use. |
+| | Certain plug-ins do not require an agent. |
++----------------------------+---------------------------------------------+
+|**dhcp agent** | |
+|(``neutron-dhcp-agent``) | Provides DHCP services to project networks. |
+| | Required by certain plug-ins. |
++----------------------------+---------------------------------------------+
+|**l3 agent** | |
+|(``neutron-l3-agent``) | Provides L3/NAT forwarding to provide |
+| | external network access for VMs on project |
+| | networks. Required by certain plug-ins. |
++----------------------------+---------------------------------------------+
+|**metering agent** | |
+|(``neutron-metering-agent``)| Provides L3 traffic metering for project |
+| | networks. |
++----------------------------+---------------------------------------------+
+
+These agents interact with the main neutron process through RPC (for
+example, RabbitMQ or Qpid) or through the standard Networking API. In
+addition, Networking integrates with OpenStack components in a number of
+ways:
+
+- Networking relies on the Identity service (keystone) for the
+ authentication and authorization of all API requests.
+
+- Compute (nova) interacts with Networking through calls to its
+ standard API. As part of creating a VM, the ``nova-compute`` service
+ communicates with the Networking API to plug each virtual NIC on the
+ VM into a particular network.
+
+- The dashboard (horizon) integrates with the Networking API, enabling
+ administrators and project users to create and manage network services
+ through a web-based GUI.
+
+VMware NSX integration
+~~~~~~~~~~~~~~~~~~~~~~
+
+OpenStack Networking uses the NSX plug-in to integrate with an existing
+VMware vCenter deployment. When installed on the network nodes, the NSX
+plug-in enables an NSX controller to centrally manage configuration
+settings and push them to managed network nodes. Network nodes are
+considered managed when they are added as hypervisors to the NSX
+controller.
+
+The diagrams below depict some VMware NSX deployment examples. The first
+diagram illustrates the traffic flow between VMs on separate Compute
+nodes, and the second diagram between two VMs on a single compute node.
+Note the placement of the VMware NSX plug-in and the neutron-server
+service on the network node. The green arrow indicates the management
+relationship between the NSX controller and the network node.
+
+
+.. figure:: figures/vmware_nsx_ex1.png
+
+.. figure:: figures/vmware_nsx_ex2.png
diff --git a/doc/source/admin/archives/auth.rst b/doc/source/admin/archives/auth.rst
new file mode 100644
index 00000000000..63011c4a4c0
--- /dev/null
+++ b/doc/source/admin/archives/auth.rst
@@ -0,0 +1,175 @@
+.. _Authentication and authorization:
+
+================================
+Authentication and authorization
+================================
+
+Networking uses the Identity service as the default authentication
+service. When the Identity service is enabled, users who submit requests
+to the Networking service must provide an authentication token in
+the ``X-Auth-Token`` request header. Users obtain this token by
+authenticating with the Identity service endpoint. For more information
+about authentication with the Identity service, see the OpenStack Identity
+service API v2.0 reference.
+When the Identity service is enabled, it is not mandatory to specify the
+project ID for resources in create requests because the project ID is
+derived from the authentication token.
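+
+As an illustration, a client can obtain a token and pass it explicitly in
+an API request. This is a sketch; the Networking endpoint URL is a
+placeholder for your deployment:
+
+.. code-block:: console
+
+ $ TOKEN=$(openstack token issue -f value -c id)
+ $ curl -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/networks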
+
+The default authorization settings only allow administrative users
+to create resources on behalf of a different project. Networking uses
+information received from Identity to authorize user requests.
+Networking handles two kinds of authorization policies:
+
+- **Operation-based** policies specify access criteria for specific
+ operations, possibly with fine-grained control over specific
+ attributes.
+
+- **Resource-based** policies specify whether access to specific
+ resource is granted or not according to the permissions configured
+ for the resource (currently available only for the network resource).
+ The actual authorization policies enforced in Networking might vary
+ from deployment to deployment.
+
+The policy engine reads entries from the ``policy.json`` file. The
+actual location of this file might vary from distribution to
+distribution. Entries can be updated while the system is running, and no
+service restart is required. Every time the policy file is updated, the
+policies are automatically reloaded. Currently the only way of updating
+such policies is to edit the policy file. In this section, the terms
+*policy* and *rule* refer to objects that are specified in the same way
+in the policy file. There are no syntax differences between a rule and a
+policy. A policy is something that is matched directly from the
+Networking policy engine. A rule is an element in a policy, which is
+evaluated. For instance in ``"create_subnet":
+"rule:admin_or_network_owner"``, *create_subnet* is a
+policy, and *admin_or_network_owner* is a rule.
+
+Policies are triggered by the Networking policy engine whenever one of
+them matches a Networking API operation or a specific attribute being
+used in a given operation. For instance the ``create_subnet`` policy is
+triggered every time a ``POST /v2.0/subnets`` request is sent to the
+Networking server; on the other hand ``create_network:shared`` is
+triggered every time the *shared* attribute is explicitly specified (and
+set to a value different from its default) in a ``POST /v2.0/networks``
+request. It is also worth mentioning that policies can also be related
+to specific API extensions; for instance
+``extension:provider_network:set`` is triggered if the attributes
+defined by the Provider Network extensions are specified in an API
+request.
+
+An authorization policy can be composed of one or more rules. If more
+rules are specified, then the policy evaluation succeeds if any of the
+rules evaluates successfully; if an API operation matches multiple
+policies, then all the policies must evaluate successfully. Also,
+authorization rules are recursive. Once a rule is matched, the rule(s)
+can be resolved to another rule, until a terminal rule is reached.
+
+The Networking policy engine currently defines the following kinds of
+terminal rules:
+
+- **Role-based rules** evaluate successfully if the user who submits
+ the request has the specified role. For instance ``"role:admin"`` is
+ successful if the user who submits the request is an administrator.
+
+- **Field-based rules** evaluate successfully if a field of the
+ resource specified in the current request matches a specific value.
+ For instance ``"field:networks:shared=True"`` is successful if the
+ ``shared`` attribute of the ``network`` resource is set to true.
+
+- **Generic rules** compare an attribute in the resource with an
+ attribute extracted from the user's security credentials and
+ evaluates successfully if the comparison is successful. For instance
+ ``"tenant_id:%(tenant_id)s"`` is successful if the project identifier
+ in the resource is equal to the project identifier of the user
+ submitting the request.
+
+This extract is from the default ``policy.json`` file:
+
+- A rule that evaluates successfully if the current user is an
+ administrator or the owner of the resource specified in the request
+ (project identifier is equal).
+
+ .. code-block:: none
+
+ {
+ "admin_or_owner": "role:admin",
+ "tenant_id:%(tenant_id)s",
+ "admin_or_network_owner": "role:admin",
+ "tenant_id:%(network_tenant_id)s",
+ "admin_only": "role:admin",
+ "regular_user": "",
+ "shared":"field:networks:shared=True",
+ "default":
+
+- The default policy that is always evaluated if an API operation does
+ not match any of the policies in ``policy.json``.
+
+ .. code-block:: none
+
+ "rule:admin_or_owner",
+ "create_subnet": "rule:admin_or_network_owner",
+ "get_subnet": "rule:admin_or_owner",
+ "rule:shared",
+ "update_subnet": "rule:admin_or_network_owner",
+ "delete_subnet": "rule:admin_or_network_owner",
+ "create_network": "",
+ "get_network": "rule:admin_or_owner",
+
+- This policy evaluates successfully if either *admin_or_owner*, or
+ *shared* evaluates successfully.
+
+ .. code-block:: none
+
+ "rule:shared",
+ "create_network:shared": "rule:admin_only"
+
+- This policy restricts the ability to manipulate the *shared*
+ attribute for a network to administrators only.
+
+ .. code-block:: none
+
+ ,
+ "update_network": "rule:admin_or_owner",
+ "delete_network": "rule:admin_or_owner",
+ "create_port": "",
+ "create_port:mac_address": "rule:admin_or_network_owner",
+ "create_port:fixed_ips":
+
+- This policy restricts the ability to manipulate the *mac_address*
+ attribute for a port only to administrators and the owner of the
+ network where the port is attached.
+
+ .. code-block:: none
+
+ "rule:admin_or_network_owner",
+ "get_port": "rule:admin_or_owner",
+ "update_port": "rule:admin_or_owner",
+ "delete_port": "rule:admin_or_owner"
+ }
+
+In some cases, some operations are restricted to administrators only.
+This example shows you how to modify a policy file to permit projects to
+define networks, see their resources, and permit administrative users to
+perform all other operations:
+
+.. code-block:: none
+
+ {
+ "admin_or_owner": "role:admin", "tenant_id:%(tenant_id)s",
+ "admin_only": "role:admin", "regular_user": "",
+ "default": "rule:admin_only",
+ "create_subnet": "rule:admin_only",
+ "get_subnet": "rule:admin_or_owner",
+ "update_subnet": "rule:admin_only",
+ "delete_subnet": "rule:admin_only",
+ "create_network": "",
+ "get_network": "rule:admin_or_owner",
+ "create_network:shared": "rule:admin_only",
+ "update_network": "rule:admin_or_owner",
+ "delete_network": "rule:admin_or_owner",
+ "create_port": "rule:admin_only",
+ "get_port": "rule:admin_or_owner",
+ "update_port": "rule:admin_only",
+ "delete_port": "rule:admin_only"
+ }
diff --git a/doc/source/admin/archives/config-agents.rst b/doc/source/admin/archives/config-agents.rst
new file mode 100644
index 00000000000..8d04a7ee06b
--- /dev/null
+++ b/doc/source/admin/archives/config-agents.rst
@@ -0,0 +1,505 @@
+========================
+Configure neutron agents
+========================
+
+Plug-ins typically have requirements for particular software that must
+be run on each node that handles data packets. This includes any node
+that runs nova-compute and nodes that run dedicated OpenStack Networking
+service agents such as ``neutron-dhcp-agent``, ``neutron-l3-agent``,
+``neutron-metering-agent`` or ``neutron-lbaasv2-agent``.
+
+A data-forwarding node typically has a network interface with an IP
+address on the management network and another interface on the data
+network.
+
+This section shows you how to install and configure a subset of the
+available plug-ins, which might include the installation of switching
+software (for example, ``Open vSwitch``) as well as agents used to communicate
+with the ``neutron-server`` process running elsewhere in the data center.
+
+Configure data-forwarding nodes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Node set up: NSX plug-in
+------------------------
+
+If you use the NSX plug-in, you must also install Open vSwitch on each
+data-forwarding node. However, you do not need to install an additional
+agent on each node.
+
+.. warning::
+
+ It is critical that you run an Open vSwitch version that is
+ compatible with the current version of the NSX Controller software.
+ Do not use the Open vSwitch version that is installed by default on
+ Ubuntu. Instead, use the Open vSwitch version that is provided on
+ the VMware support portal for your NSX Controller version.
+
+**To set up each node for the NSX plug-in**
+
+#. Ensure that each data-forwarding node has an IP address on the
+ management network, and an IP address on the data network that is used
+ for tunneling data traffic. For full details on configuring your
+ forwarding node, see the NSX Administration Guide.
+
+#. Use the NSX Administrator Guide to add the node as a Hypervisor
+ by using the NSX Manager GUI. Even if your forwarding node has no
+ VMs and is only used for services agents like ``neutron-dhcp-agent``
+ or ``neutron-lbaas-agent``, it should still be added to NSX as a
+ Hypervisor.
+
+#. After following the NSX Administrator Guide, use the page for this
+ Hypervisor in the NSX Manager GUI to confirm that the node is properly
+ connected to the NSX Controller Cluster and that the NSX Controller
+ Cluster can see the ``br-int`` integration bridge.
+
+Configure DHCP agent
+~~~~~~~~~~~~~~~~~~~~
+
+The DHCP service agent is compatible with all existing plug-ins and is
+required for all deployments where VMs should automatically receive IP
+addresses through DHCP.
+
+**To install and configure the DHCP agent**
+
+#. You must configure the host running the neutron-dhcp-agent as a data
+ forwarding node according to the requirements for your plug-in.
+
+#. Install the DHCP agent:
+
+ .. code-block:: console
+
+ # apt-get install neutron-dhcp-agent
+
+#. Update any options in the ``/etc/neutron/dhcp_agent.ini`` file
+ that depend on the plug-in in use. See the sub-sections.
+
+ .. important::
+
+ If you reboot a node that runs the DHCP agent, you must run the
+ :command:`neutron-ovs-cleanup` command before the ``neutron-dhcp-agent``
+ service starts.
+
+ On Red Hat, SUSE, and Ubuntu based systems, the
+ ``neutron-ovs-cleanup`` service runs the :command:`neutron-ovs-cleanup`
+ command automatically. However, on Debian-based systems, you
+ must manually run this command or write your own system script
+ that runs on boot before the ``neutron-dhcp-agent`` service starts.
+
+The Networking DHCP agent can use the dnsmasq driver, which
+supports stateful and stateless DHCPv6 for subnets created with
+``--ipv6-address-mode`` set to ``dhcpv6-stateful`` or
+``dhcpv6-stateless``.
+
+For example:
+
+.. code-block:: console
+
+ $ openstack subnet create --ip-version 6 --ipv6-ra-mode dhcpv6-stateful \
+ --ipv6-address-mode dhcpv6-stateful --network NETWORK --subnet-range \
+ CIDR SUBNET_NAME
+
+.. code-block:: console
+
+ $ openstack subnet create --ip-version 6 --ipv6-ra-mode dhcpv6-stateless \
+ --ipv6-address-mode dhcpv6-stateless --network NETWORK --subnet-range \
+ CIDR SUBNET_NAME
+
+If no dnsmasq process for the subnet's network is launched, Networking
+launches a new one on the subnet's DHCP port in the ``qdhcp-XXX``
+namespace. If a dnsmasq process for the network is already running,
+Networking restarts it with a new configuration.
+
+Networking also updates the dnsmasq configuration and restarts the
+process when the subnet is updated.
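+
+To verify the namespace and the DHCP port for a network, you can inspect
+it directly on the node that runs the DHCP agent (``NETWORK_ID`` is a
+placeholder):
+
+.. code-block:: console
+
+ # ip netns list
+ # ip netns exec qdhcp-NETWORK_ID ip addr show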
+
+.. note::
+
+ For the DHCP agent to operate in IPv6 mode, use at least dnsmasq v2.63.
+
+After a certain, configured timeframe, networks uncouple from DHCP
+agents when the agents are no longer in use. You can configure the DHCP
+agent to automatically detach from a network when the agent is out of
+service, or no longer needed.
+
+This feature applies to all plug-ins that support DHCP scaling. For more
+information, see the DHCP agent configuration options
+listed in the OpenStack Configuration Reference.
+
+DHCP agent setup: OVS plug-in
+-----------------------------
+
+These DHCP agent options are required in the
+``/etc/neutron/dhcp_agent.ini`` file for the OVS plug-in:
+
+.. code-block:: bash
+
+ [DEFAULT]
+ enable_isolated_metadata = True
+ interface_driver = openvswitch
+
+DHCP agent setup: NSX plug-in
+-----------------------------
+
+These DHCP agent options are required in the
+``/etc/neutron/dhcp_agent.ini`` file for the NSX plug-in:
+
+.. code-block:: bash
+
+ [DEFAULT]
+ enable_metadata_network = True
+ enable_isolated_metadata = True
+ interface_driver = openvswitch
+
+DHCP agent setup: Linux-bridge plug-in
+--------------------------------------
+
+These DHCP agent options are required in the
+``/etc/neutron/dhcp_agent.ini`` file for the Linux-bridge plug-in:
+
+.. code-block:: bash
+
+ [DEFAULT]
+ enable_isolated_metadata = True
+ interface_driver = linuxbridge
+
+Configure L3 agent
+~~~~~~~~~~~~~~~~~~
+
+The OpenStack Networking service has a widely used API extension to
+allow administrators and projects to create routers to interconnect L2
+networks, and floating IPs to make ports on private networks publicly
+accessible.
+
+Many plug-ins rely on the L3 service agent to implement the L3
+functionality. However, the following plug-ins already have built-in L3
+capabilities:
+
+- Big Switch/Floodlight plug-in, which supports both the open source
+ Floodlight controller and the proprietary Big Switch controller.
+
+ .. note::
+
+ Only the proprietary BigSwitch controller implements L3
+ functionality. When using Floodlight as your OpenFlow controller,
+ L3 functionality is not available.
+
+- IBM SDN-VE plug-in
+
+- MidoNet plug-in
+
+- NSX plug-in
+
+- PLUMgrid plug-in
+
+.. warning::
+
+ Do not configure or use ``neutron-l3-agent`` if you use one of these
+ plug-ins.
+
+**To install the L3 agent for all other plug-ins**
+
+#. Install the ``neutron-l3-agent`` binary on the network node:
+
+ .. code-block:: console
+
+ # apt-get install neutron-l3-agent
+
+#. To uplink the node that runs ``neutron-l3-agent`` to the external network,
+ create a bridge named ``br-ex`` and attach the NIC for the external
+ network to this bridge.
+
+ For example, with Open vSwitch and NIC eth1 connected to the external
+ network, run:
+
+ .. code-block:: console
+
+ # ovs-vsctl add-br br-ex
+ # ovs-vsctl add-port br-ex eth1
+
+ When the ``eth1`` port is added to the ``br-ex`` bridge, external
+ communication is interrupted. To avoid this, edit the
+ ``/etc/network/interfaces`` file to contain the following information:
+
+ .. code-block:: shell
+
+ ## External bridge
+ auto br-ex
+ iface br-ex inet static
+ address 192.27.117.101
+ netmask 255.255.240.0
+ gateway 192.27.127.254
+ dns-nameservers 8.8.8.8
+
+ ## External network interface
+ auto eth1
+ iface eth1 inet manual
+ up ifconfig $IFACE 0.0.0.0 up
+ up ip link set $IFACE promisc on
+ down ip link set $IFACE promisc off
+ down ifconfig $IFACE down
+
+ .. note::
+
+ The IP address and gateway configured on the ``br-ex`` bridge are the
+ node's external address and gateway. Configure them in
+ ``/etc/network/interfaces`` on the bridge, not on the physical interface.
+
+ After editing the configuration, restart ``br-ex``:
+
+ .. code-block:: console
+
+ # ifdown br-ex && ifup br-ex
+
+ Do not manually configure an IP address on the NIC connected to the
+ external network for the node running ``neutron-l3-agent``. Rather, you
+ must have a range of IP addresses from the external network that can be
+ used by OpenStack Networking for routers that uplink to the external
+ network. This range must be large enough to have an IP address for each
+ router in the deployment, as well as each floating IP.
+
+#. The ``neutron-l3-agent`` uses the Linux IP stack and iptables to perform L3
+ forwarding and NAT. In order to support multiple routers with
+ potentially overlapping IP addresses, ``neutron-l3-agent`` defaults to
+ using Linux network namespaces to provide isolated forwarding contexts.
+ As a result, the IP addresses of routers are not visible simply by running
+ the :command:`ip addr list` or :command:`ifconfig` command on the node.
+ Similarly, you cannot directly :command:`ping` fixed IPs.
+
+ To do either of these things, you must run the command within a
+ particular network namespace for the router. The namespace has the name
+ ``qrouter-ROUTER_UUID``. These example commands run in the router
+ namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:
+
+ .. code-block:: console
+
+ # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
+
+ .. code-block:: console
+
+ # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping FIXED_IP
+
+ .. important::
+
+ If you reboot a node that runs the L3 agent, you must run the
+ :command:`neutron-ovs-cleanup` command before the ``neutron-l3-agent``
+ service starts.
+
+ On Red Hat, SUSE, and Ubuntu based systems, the ``neutron-ovs-cleanup``
+ service runs the :command:`neutron-ovs-cleanup` command
+ automatically. However, on Debian-based systems, you must manually
+ run this command or write your own system script that runs on boot
+ before the ``neutron-l3-agent`` service starts.
+
+**How routers are assigned to L3 agents**
+
+By default, a router is assigned to the L3 agent with the fewest
+routers (``LeastRoutersScheduler``). This can be changed by altering the
+``router_scheduler_driver`` setting in the configuration file.
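+
+For example, to state the scheduler explicitly in ``neutron.conf`` (a
+sketch; ``ChanceScheduler`` in the same module is a common alternative):
+
+.. code-block:: ini
+
+ [DEFAULT]
+ router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler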
+
+Configure metering agent
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Neutron metering agent runs alongside the ``neutron-l3-agent``.
+
+**To install the metering agent and configure the node**
+
+#. Install the agent by running:
+
+ .. code-block:: console
+
+ # apt-get install neutron-metering-agent
+
+#. If you use one of the following plug-ins, you need to configure the
+ metering agent with these lines as well:
+
+ - An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:
+
+ .. code-block:: ini
+
+ interface_driver = openvswitch
+
+ - A plug-in that uses LinuxBridge:
+
+ .. code-block:: ini
+
+ interface_driver = linuxbridge
+
+#. To use the reference implementation, you must set:
+
+ .. code-block:: ini
+
+ driver = iptables
+
+#. Set the ``service_plugins`` option in the ``/etc/neutron/neutron.conf``
+ file on the host that runs ``neutron-server``:
+
+ .. code-block:: ini
+
+ service_plugins = metering
+
+ If this option is already defined, add ``metering`` to the list, using a
+ comma as separator. For example:
+
+ .. code-block:: ini
+
+ service_plugins = router,metering
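+
+After updating ``service_plugins``, restart ``neutron-server``. To
+confirm that the metering extension is active, you can create a metering
+label and rule. This is a sketch; the label name and CIDR are
+illustrative, and it assumes the ``neutron`` client provides the
+metering commands:
+
+.. code-block:: console
+
+ $ neutron meter-label-create mylabel --description "An example label"
+ $ neutron meter-label-rule-create mylabel 10.0.0.0/24 --direction ingress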
+
+Configure Load-Balancer-as-a-Service (LBaaS v2)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For the back end, use either ``Octavia`` or ``HAProxy``.
+This example uses Octavia.
+
+**To configure LBaaS V2**
+
+#. Install Octavia using your distribution's package manager.
+
+
+#. Edit the ``/etc/neutron/neutron_lbaas.conf`` file and change
+ the ``service_provider`` parameter to enable Octavia:
+
+ .. code-block:: ini
+
+ service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
+
+
+#. Edit the ``/etc/neutron/neutron.conf`` file and add the
+ ``service_plugins`` parameter to enable the load-balancing plug-in:
+
+ .. code-block:: ini
+
+ service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
+
+ If this option is already defined, add the load-balancing plug-in to
+ the list using a comma as a separator. For example:
+
+ .. code-block:: ini
+
+ service_plugins = [already defined plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
+
+
+
+#. Create the required tables in the database:
+
+ .. code-block:: console
+
+ # neutron-db-manage --subproject neutron-lbaas upgrade head
+
+#. Restart the ``neutron-server`` service.
+
+
+#. Enable load balancing in the Project section of the dashboard.
+
+ .. warning::
+
+ Horizon panels are enabled only for LBaaSV1. LBaaSV2 panels are still
+ being developed.
+
+ By default, the ``enable_lb`` option is ``True`` in the
+ ``local_settings.py`` file.
+
+ .. code-block:: python
+
+ OPENSTACK_NEUTRON_NETWORK = {
+ 'enable_lb': True,
+ ...
+ }
+
+ Apply the settings by restarting the web server. You can now view the
+ Load Balancer management options in the Project view in the dashboard.
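+
+To confirm that the LBaaS v2 API is responding, you can list load
+balancers from the command line. This is a sketch; it assumes the
+``neutron`` client in use includes the LBaaS v2 commands:
+
+.. code-block:: console
+
+ $ neutron lbaas-loadbalancer-list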
+
+Configure Hyper-V L2 agent
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Before you install the OpenStack Networking Hyper-V L2 agent on a
+Hyper-V compute node, ensure the compute node has been configured
+correctly by following the Hyper-V compute node installation
+instructions.
+
+**To install the OpenStack Networking Hyper-V agent and configure the node**
+
+#. Download the OpenStack Networking code from the repository:
+
+ .. code-block:: console
+
+ > cd C:\OpenStack\
+ > git clone https://git.openstack.org/openstack/neutron
+
+#. Install the OpenStack Networking Hyper-V Agent:
+
+ .. code-block:: console
+
+ > cd C:\OpenStack\neutron\
+ > python setup.py install
+
+#. Copy the ``policy.json`` file:
+
+ .. code-block:: console
+
+ > xcopy C:\OpenStack\neutron\etc\policy.json C:\etc\
+
+#. Create the ``C:\etc\neutron-hyperv-agent.conf`` file and add the proper
+ configuration options, including the Hyper-V related options. Here is a
+ sample configuration file:
+
+ .. code-block:: ini
+
+ [DEFAULT]
+ control_exchange = neutron
+ policy_file = C:\etc\policy.json
+ rpc_backend = neutron.openstack.common.rpc.impl_kombu
+ rabbit_host = IP_ADDRESS
+ rabbit_port = 5672
+ rabbit_userid = guest
+ rabbit_password =
+ logdir = C:\OpenStack\Log
+ logfile = neutron-hyperv-agent.log
+
+ [AGENT]
+ polling_interval = 2
+ physical_network_vswitch_mappings = *:YOUR_BRIDGE_NAME
+ enable_metrics_collection = true
+
+ [SECURITYGROUP]
+ firewall_driver = hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver
+ enable_security_group = true
+
+#. Start the OpenStack Networking Hyper-V agent:
+
+ .. code-block:: console
+
+ > C:\Python27\Scripts\neutron-hyperv-agent.exe --config-file C:\etc\neutron-hyperv-agent.conf
+
+Basic operations on agents
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This table shows examples of Networking commands that enable you to
+complete basic operations on agents.
+
+.. list-table::
+ :widths: 50 50
+ :header-rows: 1
+
+ * - Operation
+ - Command
+ * - List all available agents.
+ - ``$ openstack network agent list``
+ * - Show information of a given agent.
+ - ``$ openstack network agent show AGENT_ID``
+ * - Update the admin status and description for a specified agent. The
+ command can be used to enable and disable agents by setting the
+ ``--admin-state-up`` parameter to ``False`` or ``True``.
+ - ``$ neutron agent-update --admin-state-up False AGENT_ID``
+ * - Delete a given agent. Consider disabling the agent before deletion.
+ - ``$ openstack network agent delete AGENT_ID``
+
+**Basic operations on Networking agents**
+
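+For example, to take an agent out of service before removing it (a
+sketch; the ``--disable`` flag assumes a reasonably recent
+``openstack`` client):
+
+.. code-block:: console
+
+ $ openstack network agent set --disable AGENT_ID
+ $ openstack network agent delete AGENT_ID
+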
+See the OpenStack Command-Line Interface Reference for more information
+on Networking commands.
diff --git a/doc/source/admin/archives/config-identity.rst b/doc/source/admin/archives/config-identity.rst
new file mode 100644
index 00000000000..7d57645876d
--- /dev/null
+++ b/doc/source/admin/archives/config-identity.rst
@@ -0,0 +1,306 @@
+=========================================
+Configure Identity service for Networking
+=========================================
+
+**To configure the Identity service for use with Networking**
+
+#. Create the ``get_id()`` function
+
+ The ``get_id()`` function stores the ID of created objects, and removes
+ the need to copy and paste object IDs in later steps:
+
+ a. Add the following function to your ``.bashrc`` file:
+
+ .. code-block:: bash
+
+ function get_id () {
+ echo `"$@" | awk '/ id / { print $4 }'`
+ }
+
+ b. Source the ``.bashrc`` file:
+
+ .. code-block:: console
+
+ $ source .bashrc
+
+#. Create the Networking service entry
+
+ Networking must be available in the Compute service catalog. Create the
+ service:
+
+ .. code-block:: console
+
+ $ NEUTRON_SERVICE_ID=$(get_id openstack service create network \
+ --name neutron --description 'OpenStack Networking Service')
+
+#. Create the Networking service endpoint entry
+
+ The way that you create a Networking endpoint entry depends on whether
+ you are using the SQL or the template catalog driver:
+
+ - If you are using the ``SQL driver``, run the following command with the
+ specified region (``$REGION``), IP address of the Networking server
+ (``$IP``), and service ID (``$NEUTRON_SERVICE_ID``, obtained in the
+ previous step).
+
+ .. code-block:: console
+
+ $ openstack endpoint create $NEUTRON_SERVICE_ID --region $REGION \
+ --publicurl "http://$IP:9696/" --adminurl "http://$IP:9696/" \
+ --internalurl "http://$IP:9696/"
+
+ For example:
+
+ .. code-block:: console
+
+ $ openstack endpoint create $NEUTRON_SERVICE_ID --region myregion \
+ --publicurl "http://10.211.55.17:9696/" \
+ --adminurl "http://10.211.55.17:9696/" \
+ --internalurl "http://10.211.55.17:9696/"
+
+ - If you are using the ``template driver``, specify the following
+ parameters in your Compute catalog template file
+ (``default_catalog.templates``), along with the region (``$REGION``)
+ and IP address of the Networking server (``$IP``).
+
+ .. code-block:: bash
+
+ catalog.$REGION.network.publicURL = http://$IP:9696
+ catalog.$REGION.network.adminURL = http://$IP:9696
+ catalog.$REGION.network.internalURL = http://$IP:9696
+ catalog.$REGION.network.name = Network Service
+
+ For example:
+
+ .. code-block:: bash
+
+ catalog.$Region.network.publicURL = http://10.211.55.17:9696
+ catalog.$Region.network.adminURL = http://10.211.55.17:9696
+ catalog.$Region.network.internalURL = http://10.211.55.17:9696
+ catalog.$Region.network.name = Network Service
+
+#. Create the Networking service user
+
+ You must provide admin user credentials that Compute and some internal
+ Networking components can use to access the Networking API. Create a
+ special ``service`` project and a ``neutron`` user within this project,
+ and assign the ``admin`` role to this user.
+
+ a. Create the ``admin`` role:
+
+ .. code-block:: console
+
+ $ ADMIN_ROLE=$(get_id openstack role create admin)
+
+ b. Create the ``service`` project:
+
+ .. code-block:: console
+
+ $ SERVICE_TENANT=$(get_id openstack project create service \
+ --description "Services project" --domain default)
+
+ c. Create the ``neutron`` user within the ``service`` project:
+
+ .. code-block:: console
+
+ $ NEUTRON_USER=$(get_id openstack user create neutron \
+ --password "$NEUTRON_PASSWORD" --email demo@example.com \
+ --project service)
+
+ d. Establish the relationship among the project, user, and role:
+
+ .. code-block:: console
+
+ $ openstack role add $ADMIN_ROLE --user $NEUTRON_USER \
+ --project $SERVICE_TENANT
+
+For information about how to create service entries and users, see the
+Ocata Installation Tutorials and Guides for your distribution.
+
+Compute
+~~~~~~~
+
+If you use Networking, do not run the Compute ``nova-network`` service (like
+you do in traditional Compute deployments). Instead, Compute delegates
+most network-related decisions to Networking.
+
+.. note::
+
+ Uninstall ``nova-network`` and reboot any physical nodes that have been
+ running ``nova-network`` before using them to run Networking.
+ Inadvertently running the ``nova-network`` process while using
+ Networking can cause problems, as can stale iptables rules pushed
+ down by previously running ``nova-network``.
+
+Compute proxies project-facing API calls for managing security groups and
+floating IPs to the Networking API. However, operator-facing tools such
+as ``nova-manage`` are not proxied and should not be used.
+
+.. warning::
+
+ When you configure networking, you must use this guide. Do not rely
+ on Compute networking documentation or past experience with Compute.
+ If a :command:`nova` command or configuration option related to networking
+ is not mentioned in this guide, the command is probably not
+ supported for use with Networking. In particular, you cannot use CLI
+ tools like ``nova-manage`` and ``nova`` to manage networks or IP
+ addressing, including both fixed and floating IPs, with Networking.
+
+To ensure that Compute works properly with Networking (rather than the
+legacy ``nova-network`` mechanism), you must adjust settings in the
+``nova.conf`` configuration file.
+
+Networking API and credential configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Each time you provision or de-provision a VM in Compute, ``nova-*``
+services communicate with Networking using the standard API. For this to
+happen, you must configure the following items in the ``nova.conf`` file
+(used by each ``nova-compute`` and ``nova-api`` instance).
+
+.. list-table:: **nova.conf API and credential settings prior to Mitaka**
+ :widths: 20 50
+ :header-rows: 1
+
+ * - Attribute name
+ - Required
+ * - ``[DEFAULT] use_neutron``
+ - Modify from the default to ``True`` to
+ indicate that Networking should be used rather than the traditional
+ nova-network networking model.
+ * - ``[neutron] url``
+ - Update to the host name/IP and port of the neutron-server instance
+ for this deployment.
+ * - ``[neutron] auth_strategy``
+ - Keep the default ``keystone`` value for all production deployments.
+ * - ``[neutron] admin_project_name``
+ - Update to the name of the service tenant created in the above section on
+ Identity configuration.
+ * - ``[neutron] admin_username``
+ - Update to the name of the user created in the above section on Identity
+ configuration.
+ * - ``[neutron] admin_password``
+ - Update to the password of the user created in the above section on
+ Identity configuration.
+ * - ``[neutron] admin_auth_url``
+ - Update to the Identity server IP and port. This is the Identity
+ (keystone) admin API server IP and port value, and not the Identity
+ service API IP and port.
+
+.. list-table:: **nova.conf API and credential settings in Newton**
+ :widths: 20 50
+ :header-rows: 1
+
+ * - Attribute name
+ - Required
+ * - ``[DEFAULT] use_neutron``
+ - Modify from the default to ``True`` to
+ indicate that Networking should be used rather than the traditional
+ nova-network networking model.
+ * - ``[neutron] url``
+ - Update to the host name/IP and port of the neutron-server instance
+ for this deployment.
+ * - ``[neutron] auth_strategy``
+ - Keep the default ``keystone`` value for all production deployments.
+ * - ``[neutron] project_name``
+ - Update to the name of the service tenant created in the above section on
+ Identity configuration.
+ * - ``[neutron] username``
+ - Update to the name of the user created in the above section on Identity
+ configuration.
+ * - ``[neutron] password``
+ - Update to the password of the user created in the above section on
+ Identity configuration.
+ * - ``[neutron] auth_url``
+ - Update to the Identity server IP and port. This is the Identity
+ (keystone) admin API server IP and port value, and not the Identity
+ service API IP and port.
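+
+A minimal sketch of the Newton-style settings; the host name, port, and
+password are placeholders:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ use_neutron = True
+
+ [neutron]
+ url = http://CONTROLLER:9696
+ auth_strategy = keystone
+ project_name = service
+ username = neutron
+ password = NEUTRON_PASSWORD
+ auth_url = http://CONTROLLER:35357
+ # Depending on your release, additional keystoneauth options
+ # (for example, auth_type) may also be required.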
+
+Configure security groups
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Networking service provides security group functionality using a
+mechanism that is more flexible and powerful than the security group
+capabilities built into Compute. Therefore, if you use Networking, you
+should always disable built-in security groups and proxy all security
+group calls to the Networking API. If you do not, security policies
+will conflict by being simultaneously applied by both services.
+
+To proxy security groups to Networking, use the following configuration
+values in the ``nova.conf`` file:
+
+**nova.conf security group settings**
+
++-----------------------+-----------------------------------------------------+
+| Item | Configuration |
++=======================+=====================================================+
+| ``firewall_driver`` | Update to ``nova.virt.firewall.NoopFirewallDriver``,|
+| | so that nova-compute does not perform |
+| | iptables-based filtering itself. |
++-----------------------+-----------------------------------------------------+
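+
+For example, in ``nova.conf`` this setting goes in the ``[DEFAULT]``
+section, matching the example later in this chapter:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ firewall_driver = nova.virt.firewall.NoopFirewallDriver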
+
+Configure metadata
+~~~~~~~~~~~~~~~~~~
+
+The Compute service allows VMs to query metadata associated with a VM by
+making a web request to a special 169.254.169.254 address. Networking
+supports proxying those requests to nova-api, even when the requests are
+made from isolated networks, or from multiple networks that use
+overlapping IP addresses.
+
+To enable proxying the requests, you must update the following fields in
+the ``[neutron]`` section of the ``nova.conf`` file.
+
+**nova.conf metadata settings**
+
++---------------------------------+------------------------------------------+
+| Item | Configuration |
++=================================+==========================================+
+| ``service_metadata_proxy`` | Update to ``true``, otherwise nova-api |
+| | will not properly respond to requests |
+| | from the neutron-metadata-agent. |
++---------------------------------+------------------------------------------+
+| ``metadata_proxy_shared_secret``| Update to a string "password" value. |
+| | You must also configure the same value in|
+| | the ``metadata_agent.ini`` file, to |
+| | authenticate requests made for metadata. |
+| | |
+| | The default value of an empty string in |
+| | both files will allow metadata to |
+| | function, but will not be secure if any |
+| | non-trusted entities have access to the |
+| | metadata APIs exposed by nova-api. |
++---------------------------------+------------------------------------------+
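+
+A sketch of the matching entries; ``METADATA_SECRET`` is a placeholder
+and must be identical in both files:
+
+.. code-block:: ini
+
+ # nova.conf
+ [neutron]
+ service_metadata_proxy = true
+ metadata_proxy_shared_secret = METADATA_SECRET
+
+ # metadata_agent.ini
+ [DEFAULT]
+ metadata_proxy_shared_secret = METADATA_SECRET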
+
+.. note::
+
+ As a precaution, even when using ``metadata_proxy_shared_secret``,
+ we recommend that you do not expose metadata using the same
+ nova-api instances that are used for projects. Instead, you should
+ run a dedicated set of nova-api instances for metadata that are
+ available only on your management network. Whether a given nova-api
+ instance exposes metadata APIs is determined by the value of
+ ``enabled_apis`` in its ``nova.conf``.
+
+Example nova.conf (for nova-compute and nova-api)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Example values for the above settings, assuming a cloud controller node
+running Compute and Networking with an IP address of 192.168.1.2:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ use_neutron = True
+ firewall_driver=nova.virt.firewall.NoopFirewallDriver
+
+ [neutron]
+ url=http://192.168.1.2:9696
+ auth_strategy=keystone
+ admin_tenant_name=service
+ admin_username=neutron
+ admin_password=password
+ admin_auth_url=http://192.168.1.2:35357/v2.0
+ service_metadata_proxy=true
+ metadata_proxy_shared_secret=foo
diff --git a/doc/source/admin/archives/config-plugins.rst b/doc/source/admin/archives/config-plugins.rst
new file mode 100644
index 00000000000..c37e7b31433
--- /dev/null
+++ b/doc/source/admin/archives/config-plugins.rst
@@ -0,0 +1,246 @@
+======================
+Plug-in configurations
+======================
+
+For configuration options, see the Networking configuration options
+section of the Configuration Reference. These sections explain how to
+configure specific plug-ins.
+
+Configure Big Switch (Floodlight REST Proxy) plug-in
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Edit the ``/etc/neutron/neutron.conf`` file and add this line:
+
+ .. code-block:: ini
+
+ core_plugin = bigswitch
+
+#. In the ``/etc/neutron/neutron.conf`` file, set the ``service_plugins``
+ option:
+
+ .. code-block:: ini
+
+ service_plugins = neutron.plugins.bigswitch.l3_router_plugin.L3RestProxy
+
+#. Edit the ``/etc/neutron/plugins/bigswitch/restproxy.ini`` file for the
+ plug-in and specify a comma-separated list of controller\_ip:port pairs:
+
+ .. code-block:: ini
+
+ server = CONTROLLER_IP:PORT
+
+ For database configuration, see Install Networking Services in the
+ Installation Tutorials and Guides for your distribution.
+
+#. Restart the ``neutron-server`` to apply the settings:
+
+ .. code-block:: console
+
+ # service neutron-server restart
+
+Configure Brocade plug-in
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Install the Brocade-modified Python netconf client (ncclient) library,
+ which is available at https://github.com/brocade/ncclient:
+
+ .. code-block:: console
+
+ $ git clone https://github.com/brocade/ncclient
+
+#. As root, run this command:
+
+ .. code-block:: console
+
+ # cd ncclient;python setup.py install
+
+#. Edit the ``/etc/neutron/neutron.conf`` file and set the following
+ option:
+
+ .. code-block:: ini
+
+ core_plugin = brocade
+
+#. Edit the ``/etc/neutron/plugins/brocade/brocade.ini`` file for the
+ Brocade plug-in and specify the admin user name, password, and IP
+ address of the Brocade switch:
+
+ .. code-block:: ini
+
+ [SWITCH]
+ username = ADMIN
+ password = PASSWORD
+ address = SWITCH_MGMT_IP_ADDRESS
+ ostype = NOS
+
+ For database configuration, see Install Networking Services in any of
+ the Installation Tutorials and Guides in the OpenStack Documentation
+ index.
+
+#. Restart the ``neutron-server`` service to apply the settings:
+
+ .. code-block:: console
+
+ # service neutron-server restart
+
+Configure NSX-mh plug-in
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The instructions in this section refer to the VMware NSX-mh platform,
+formerly known as Nicira NVP.
+
+#. Install the NSX plug-in:
+
+ .. code-block:: console
+
+ # apt-get install python-vmware-nsx
+
+#. Edit the ``/etc/neutron/neutron.conf`` file and set this line:
+
+ .. code-block:: ini
+
+ core_plugin = vmware
+
+ Example ``neutron.conf`` file for NSX-mh integration:
+
+ .. code-block:: ini
+
+ core_plugin = vmware
+ rabbit_host = 192.168.203.10
+ allow_overlapping_ips = True
+
+#. To configure the NSX-mh controller cluster for OpenStack Networking,
+ locate the ``[DEFAULT]`` section in the
+ ``/etc/neutron/plugins/vmware/nsx.ini`` file and add the following
+ entries:
+
+ - To establish and configure the connection with the controller cluster,
+ you must set some parameters, including the NSX-mh API endpoints and
+ access credentials. You can optionally specify settings for HTTP
+ timeouts, redirects, and retries in case of connection failures:
+
+ .. code-block:: ini
+
+ nsx_user = ADMIN_USER_NAME
+ nsx_password = NSX_USER_PASSWORD
+ http_timeout = HTTP_REQUEST_TIMEOUT # (seconds) default 75 seconds
+ retries = HTTP_REQUEST_RETRIES # default 2
+ redirects = HTTP_REQUEST_MAX_REDIRECTS # default 2
+ nsx_controllers = API_ENDPOINT_LIST # comma-separated list
+
+ To ensure correct operations, the ``nsx_user`` user must have
+ administrator credentials on the NSX-mh platform.
+
+ A controller API endpoint consists of the IP address and port for the
+ controller; if you omit the port, port 443 is used. If multiple API
+ endpoints are specified, it is up to the user to ensure that all
+ these endpoints belong to the same controller cluster. The OpenStack
+ Networking VMware NSX-mh plug-in does not perform this check, and
+ results might be unpredictable.
+
+ When you specify multiple API endpoints, the plug-in takes care of
+ load balancing requests on the various API endpoints.
+
+ - The UUID of the NSX-mh transport zone that should be used by default
+ when a project creates a network. You can get this value from the
+ Transport Zones page of the NSX-mh Manager:
+
+ .. code-block:: ini
+
+ default_tz_uuid = TRANSPORT_ZONE_UUID
+
+ Alternatively, the transport zone identifier can be retrieved by
+ querying the NSX-mh API at ``/ws.v1/transport-zone``.
+
+ - The UUID of the NSX-mh L3 gateway service to use by default when a
+ project creates a router that uplinks to an external network:
+
+ .. code-block:: ini
+
+ default_l3_gw_service_uuid = GATEWAY_SERVICE_UUID
+
+ .. warning::
+
+ Ubuntu packaging currently does not update the neutron init
+ script to point to the NSX-mh configuration file. Instead, you
+ must manually update ``/etc/default/neutron-server`` to add this
+ line:
+
+ .. code-block:: ini
+
+ NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini
+
+ For database configuration, see Install Networking Services in the
+ Installation Tutorials and Guides.
+
+#. Restart ``neutron-server`` to apply settings:
+
+ .. code-block:: console
+
+ # service neutron-server restart
+
+ .. warning::
+
+ The neutron NSX-mh plug-in does not implement initial
+ re-synchronization of Neutron resources. Therefore resources that
+ might already exist in the database when Neutron is switched to the
+ NSX-mh plug-in will not be created on the NSX-mh backend upon
+ restart.
+
+Example ``nsx.ini`` file:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
+ default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
+ nsx_user=admin
+ nsx_password=changeme
+ nsx_controllers=10.127.0.100,10.127.0.200:8888
+
+.. note::
+
+ To debug :file:`nsx.ini` configuration issues, run this command from the
+ host that runs neutron-server:
+
+.. code-block:: console
+
+ # neutron-check-nsx-config PATH_TO_NSX.INI
+
+This command tests whether ``neutron-server`` can log into all of the
+NSX-mh controllers and the SQL server, and whether all UUID values
+are correct.
+
+Configure PLUMgrid plug-in
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Edit the ``/etc/neutron/neutron.conf`` file and set this line:
+
+ .. code-block:: ini
+
+ core_plugin = plumgrid
+
+#. Edit the [PLUMgridDirector] section in the
+ ``/etc/neutron/plugins/plumgrid/plumgrid.ini`` file and specify the IP
+ address, port, admin user name, and password of the PLUMgrid Director:
+
+ .. code-block:: ini
+
+ [PLUMgridDirector]
+ director_server = "PLUMgrid-director-ip-address"
+ director_server_port = "PLUMgrid-director-port"
+ username = "PLUMgrid-director-admin-username"
+ password = "PLUMgrid-director-admin-password"
+
+ For database configuration, see Install Networking Services in the
+ Installation Tutorials and Guides.
+
+#. Restart the ``neutron-server`` service to apply the settings:
+
+ .. code-block:: console
+
+ # service neutron-server restart
diff --git a/doc/source/admin/archives/figures/vmware_nsx_ex1.graffle b/doc/source/admin/archives/figures/vmware_nsx_ex1.graffle
new file mode 100644
index 00000000000..a8ca3e21ea8
Binary files /dev/null and b/doc/source/admin/archives/figures/vmware_nsx_ex1.graffle differ
diff --git a/doc/source/admin/archives/figures/vmware_nsx_ex1.png b/doc/source/admin/archives/figures/vmware_nsx_ex1.png
new file mode 100644
index 00000000000..82ff650dc34
Binary files /dev/null and b/doc/source/admin/archives/figures/vmware_nsx_ex1.png differ
diff --git a/doc/source/admin/archives/figures/vmware_nsx_ex1.svg b/doc/source/admin/archives/figures/vmware_nsx_ex1.svg
new file mode 100644
index 00000000000..3df91ee8425
--- /dev/null
+++ b/doc/source/admin/archives/figures/vmware_nsx_ex1.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/doc/source/admin/archives/figures/vmware_nsx_ex2.graffle b/doc/source/admin/archives/figures/vmware_nsx_ex2.graffle
new file mode 100644
index 00000000000..11ef427bcf1
Binary files /dev/null and b/doc/source/admin/archives/figures/vmware_nsx_ex2.graffle differ
diff --git a/doc/source/admin/archives/figures/vmware_nsx_ex2.png b/doc/source/admin/archives/figures/vmware_nsx_ex2.png
new file mode 100644
index 00000000000..3d826ac6738
Binary files /dev/null and b/doc/source/admin/archives/figures/vmware_nsx_ex2.png differ
diff --git a/doc/source/admin/archives/figures/vmware_nsx_ex2.svg b/doc/source/admin/archives/figures/vmware_nsx_ex2.svg
new file mode 100644
index 00000000000..9ec00025fda
--- /dev/null
+++ b/doc/source/admin/archives/figures/vmware_nsx_ex2.svg
@@ -0,0 +1,3 @@
+
+
+
diff --git a/doc/source/admin/archives/index.rst b/doc/source/admin/archives/index.rst
new file mode 100644
index 00000000000..ab8a6334b04
--- /dev/null
+++ b/doc/source/admin/archives/index.rst
@@ -0,0 +1,23 @@
+=================
+Archived Contents
+=================
+
+.. note::
+
+ Contents here have been moved from the unified version of Administration
+ Guide. They will be merged into the Networking Guide gradually.
+
+.. toctree::
+ :maxdepth: 2
+
+ introduction.rst
+ arch.rst
+ config-plugins.rst
+ config-agents.rst
+ config-identity.rst
+ adv-config.rst
+ multi-dhcp-agents.rst
+ use.rst
+ adv-features.rst
+ adv-operational-features.rst
+ auth.rst
diff --git a/doc/source/admin/archives/introduction.rst b/doc/source/admin/archives/introduction.rst
new file mode 100644
index 00000000000..5d5260e710a
--- /dev/null
+++ b/doc/source/admin/archives/introduction.rst
@@ -0,0 +1,228 @@
+==========================
+Introduction to Networking
+==========================
+
+The Networking service, code-named neutron, provides an API that lets
+you define network connectivity and addressing in the cloud. The
+Networking service enables operators to leverage different networking
+technologies to power their cloud networking. The Networking service
+also provides an API to configure and manage a variety of network
+services ranging from L3 forwarding and NAT to load balancing, edge
+firewalls, and IPsec VPN.
+
+For a detailed description of the Networking API abstractions and their
+attributes, see the OpenStack Networking API v2.0 Reference.
+
+.. note::
+
+ If you use the Networking service, do not run the Compute
+ ``nova-network`` service (like you do in traditional Compute deployments).
+ When you configure networking, see the Compute-related topics in this
+ Networking section.
+
+Networking API
+~~~~~~~~~~~~~~
+
+Networking is a virtual network service that provides a powerful API to
+define the network connectivity and IP addressing that devices from
+other services, such as Compute, use.
+
+The Compute API has a virtual server abstraction to describe computing
+resources. Similarly, the Networking API has virtual network, subnet,
+and port abstractions to describe networking resources.
+
++---------------+-------------------------------------------------------------+
+| Resource | Description |
++===============+=============================================================+
+| **Network** | An isolated L2 segment, analogous to VLAN in the physical |
+| | networking world. |
++---------------+-------------------------------------------------------------+
+| **Subnet** | A block of v4 or v6 IP addresses and associated |
+| | configuration state. |
++---------------+-------------------------------------------------------------+
+| **Port** | A connection point for attaching a single device, such as |
+| | the NIC of a virtual server, to a virtual network. Also |
+| | describes the associated network configuration, such as |
+| | the MAC and IP addresses to be used on that port. |
++---------------+-------------------------------------------------------------+
+
+**Networking resources**
+
+To configure rich network topologies, you can create and configure
+networks and subnets and instruct other OpenStack services like Compute
+to attach virtual devices to ports on these networks.
+
+In particular, Networking supports each project having multiple private
+networks and enables projects to choose their own IP addressing scheme,
+even if those IP addresses overlap with those that other projects use.
+
+The Networking service:
+
+- Enables advanced cloud networking use cases, such as building
+ multi-tiered web applications and enabling migration of applications
+ to the cloud without changing IP addresses.
+
+- Offers flexibility for administrators to customize network
+ offerings.
+
+- Enables developers to extend the Networking API. Over time, the
+ extended functionality becomes part of the core Networking API.
+
+Configure SSL support for networking API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+OpenStack Networking supports SSL for the Networking API server. By
+default, SSL is disabled but you can enable it in the ``neutron.conf``
+file.
+
+Set these options to configure SSL:
+
+``use_ssl = True``
+ Enables SSL on the networking API server.
+
+``ssl_cert_file = PATH_TO_CERTFILE``
+ Certificate file that is used when you securely start the Networking
+ API server.
+
+``ssl_key_file = PATH_TO_KEYFILE``
+ Private key file that is used when you securely start the Networking
+ API server.
+
+``ssl_ca_file = PATH_TO_CAFILE``
+ Optional. CA certificate file that is used when you securely start
+ the Networking API server. This file verifies connecting clients.
+ Set this option when API clients must authenticate to the API server
+ by using SSL certificates that are signed by a trusted CA.
+
+``tcp_keepidle = 600``
+ The value of TCP\_KEEPIDLE, in seconds, for each server socket when
+ starting the API server. Not supported on OS X.
+
+``retry_until_window = 30``
+ Number of seconds to keep retrying to listen.
+
+``backlog = 4096``
+ Number of backlog requests with which to configure the socket.
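+
+Put together, an SSL-enabled API server configuration in ``neutron.conf``
+might look like the following sketch; the certificate paths are
+placeholders:
+
+.. code-block:: ini
+
+ [DEFAULT]
+ use_ssl = True
+ ssl_cert_file = /etc/neutron/ssl/server.crt
+ ssl_key_file = /etc/neutron/ssl/server.key
+ ssl_ca_file = /etc/neutron/ssl/ca.crt
+ tcp_keepidle = 600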
+
+Load-Balancer-as-a-Service (LBaaS) overview
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Load-Balancer-as-a-Service (LBaaS) enables Networking to distribute
+incoming requests evenly among designated instances. This distribution
+ensures that the workload is shared predictably among instances and
+enables more effective use of system resources. Use one of these load
+balancing methods to distribute incoming requests:
+
+Round robin
+ Rotates requests evenly between multiple instances.
+
+Source IP
+ Requests from a unique source IP address are consistently directed
+ to the same instance.
+
+Least connections
+ Allocates requests to the instance with the least number of active
+ connections.
+
++-------------------------+---------------------------------------------------+
+| Feature | Description |
++=========================+===================================================+
+| **Monitors** | LBaaS provides availability monitoring with the |
+| | ``ping``, TCP, HTTP and HTTPS GET methods. |
+| | Monitors are implemented to determine whether |
+| | pool members are available to handle requests. |
++-------------------------+---------------------------------------------------+
+| **Management** | LBaaS is managed using a variety of tool sets. |
+| | The REST API is available for programmatic |
+| | administration and scripting. Users perform |
+| | administrative management of load balancers |
+| | through either the CLI (``neutron``) or the |
+| | OpenStack Dashboard. |
++-------------------------+---------------------------------------------------+
+| **Connection limits** | Ingress traffic can be shaped with *connection |
+| | limits*. This feature allows workload control, |
+| | and can also assist with mitigating DoS (Denial |
+| | of Service) attacks. |
++-------------------------+---------------------------------------------------+
+| **Session persistence** | LBaaS supports session persistence by ensuring |
+| | incoming requests are routed to the same instance |
+| | within a pool of multiple instances. LBaaS |
+| | supports routing decisions based on cookies and |
+| | source IP address. |
++-------------------------+---------------------------------------------------+
+
+
+Firewall-as-a-Service (FWaaS) overview
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For information on Firewall-as-a-Service (FWaaS), please consult the :doc:`Networking Guide <../fwaas>`.
+
+Allowed-address-pairs
+~~~~~~~~~~~~~~~~~~~~~
+
+The ``allowed-address-pairs`` extension enables you to specify MAC
+address and IP address (CIDR) pairs that are allowed to pass through a
+port regardless of subnet. This enables the use of protocols such as
+VRRP, which floats an IP address between two instances to enable fast
+data plane failover.
+
+.. note::
+
+ Currently, only the ML2, Open vSwitch, and VMware NSX plug-ins
+ support the allowed-address-pairs extension.
+
+**Basic allowed-address-pairs operations.**
+
+- Create a port with a specified allowed address pair:
+
+ .. code-block:: console
+
+ $ openstack port create port1 --allowed-address \
+ ip-address=IP_CIDR[,mac_address=MAC_ADDRESS]
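+
+- Update an existing port in the same way (a sketch; the address values
+ are placeholders and the exact key syntax may vary with the client
+ version):
+
+ .. code-block:: console
+
+ $ openstack port set PORT_ID --allowed-address \
+ ip-address=IP_CIDR[,mac_address=MAC_ADDRESS]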
diff --git a/doc/source/admin/archives/use.rst b/doc/source/admin/archives/use.rst
new file mode 100644
index 00000000000..a6039c3014e
--- /dev/null
+++ b/doc/source/admin/archives/use.rst
@@ -0,0 +1,347 @@
+==============
+Use Networking
+==============
+
+You can manage OpenStack Networking services by using the service
+command. For example:
+
+.. code-block:: console
+
+ # service neutron-server stop
+ # service neutron-server status
+ # service neutron-server start
+ # service neutron-server restart
+
+Log files are in the ``/var/log/neutron`` directory.
+
+Configuration files are in the ``/etc/neutron`` directory.
+
+Administrators and projects can use OpenStack Networking to build
+rich network topologies. Administrators can create network
+connectivity on behalf of projects.
+
+Core Networking API features
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After installing and configuring Networking (neutron), projects and
+administrators can perform create-read-update-delete (CRUD) API networking
+operations. This is performed using the Networking API directly with either
+the :command:`neutron` command-line interface (CLI) or the :command:`openstack`
+CLI. The :command:`neutron` CLI is a wrapper around the Networking API. Every
+Networking API call has a corresponding :command:`neutron` command.
+
+The :command:`openstack` CLI is a common interface for all OpenStack
+projects; however, not every API operation has been implemented. For the
+list of available commands, see the Command List in the OpenStack client
+documentation.
+
+The :command:`neutron` CLI includes a number of options. For details, see
+Create and manage networks in the OpenStack End User Guide.
+
+Basic Networking operations
+---------------------------
+
+To learn about advanced capabilities available through the :command:`neutron`
+command-line interface (CLI), read the networking section Create and
+manage networks in the OpenStack End User Guide.
+
+This table shows example :command:`openstack` commands that enable you to
+complete basic network operations:
+
++-------------------------+-------------------------------------------------+
+| Operation | Command |
++=========================+=================================================+
+|Creates a network. | |
+| | |
+| | ``$ openstack network create net1`` |
++-------------------------+-------------------------------------------------+
+|Creates a subnet that is | |
+|associated with net1. | |
+| | |
+| | ``$ openstack subnet create subnet1`` |
+| | ``--subnet-range 10.0.0.0/24`` |
+| | ``--network net1`` |
++-------------------------+-------------------------------------------------+
+|Lists ports for a | |
+|specified project. | |
+| | |
+| | ``$ openstack port list`` |
++-------------------------+-------------------------------------------------+
+|Lists ports for a | |
+|specified project | |
+|and displays the ``ID``, | |
+|``Fixed IP Addresses`` | |
+| | |
+| | ``$ openstack port list -c ID`` |
+| | ``-c "Fixed IP Addresses`` |
++-------------------------+-------------------------------------------------+
+|Shows information for a | |
+|specified port. | |
+| | ``$ openstack port show PORT_ID`` |
++-------------------------+-------------------------------------------------+
+
+**Basic Networking operations**
+
+.. note::
+
+ The ``device_owner`` field describes who owns the port. A port whose
+ ``device_owner`` begins with:
+
+ - ``network`` is created by Networking.
+
+ - ``compute`` is created by Compute.
+
+Administrative operations
+-------------------------
+
+The administrator can run any :command:`openstack` command on behalf of
+projects by specifying an Identity ``project`` in the command, as
+follows:
+
+.. code-block:: console
+
+ $ openstack network create --project PROJECT_ID NETWORK_NAME
+
+For example:
+
+.. code-block:: console
+
+ $ openstack network create --project 5e4bbe24b67a4410bc4d9fae29ec394e net1
+
+.. note::
+
+ To view all project IDs in Identity, run the following command as an
+ Identity service admin user:
+
+ .. code-block:: console
+
+ $ openstack project list
+
+Advanced Networking operations
+------------------------------
+
+This table shows example CLI commands that enable you to complete
+advanced network operations:
+
++-------------------------------+--------------------------------------------+
+| Operation | Command |
++===============================+============================================+
+|Creates a network that | |
+|all projects can use. | |
+| | |
+| | ``$ openstack network create`` |
+| | ``--share public-net`` |
++-------------------------------+--------------------------------------------+
+|Creates a subnet with a | |
+|specified gateway IP address. | |
+| | |
+| | ``$ openstack subnet create subnet1`` |
+| | ``--gateway 10.0.0.254 --network net1`` |
++-------------------------------+--------------------------------------------+
+|Creates a subnet that has | |
+|no gateway IP address. | |
+| | |
+| | ``$ openstack subnet create subnet1`` |
+| | ``--no-gateway --network net1`` |
++-------------------------------+--------------------------------------------+
+|Creates a subnet with DHCP | |
+|disabled. | |
+| | |
+| | ``$ openstack subnet create subnet1`` |
+| | ``--network net1 --no-dhcp`` |
++-------------------------------+--------------------------------------------+
+|Specifies a set of host routes | |
+| | |
+| | ``$ openstack subnet create subnet1`` |
+| | ``--network net1 --host-route`` |
+| | ``destination=40.0.1.0/24,`` |
+| | ``gateway=40.0.0.2`` |
++-------------------------------+--------------------------------------------+
+|Creates a subnet with a | |
+|specified set of dns name | |
+|servers. | |
+| | |
+| | ``$ openstack subnet create subnet1`` |
+| | ``--network net1 --dns-nameserver`` |
+| | ``8.8.4.4`` |
++-------------------------------+--------------------------------------------+
+|Displays all ports and | |
+|IPs allocated on a network. | |
+| | |
+| | ``$ openstack port list --network NET_ID`` |
++-------------------------------+--------------------------------------------+
+
+**Advanced Networking operations**
+
+.. note::
+
+ During port creation and update, specific extra-dhcp-options can be left blank.
+ For example, ``router`` and ``classless-static-route``. This causes dnsmasq
+ to have an empty option in the ``opts`` file related to the network.
+ For example:
+
+ .. code-block:: console
+
+ tag:tag0,option:classless-static-route,
+ tag:tag0,option:router,
+
+Use Compute with Networking
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Basic Compute and Networking operations
+---------------------------------------
+
+This table shows example :command:`openstack` commands that enable you to
+complete basic VM networking operations:
+
++----------------------------------+-----------------------------------------+
+| Action | Command |
++==================================+=========================================+
+|Checks available networks. | |
+| | |
+| | ``$ openstack network list`` |
++----------------------------------+-----------------------------------------+
+|Boots a VM with a single NIC on | |
+|a selected Networking network. | |
+| | |
+| | ``$ openstack server create --image`` |
+| | ``IMAGE --flavor FLAVOR --nic`` |
+| | ``net-id=NET_ID VM_NAME`` |
++----------------------------------+-----------------------------------------+
+|Searches for ports with a | |
+|``device_id`` that matches the | |
+|Compute instance UUID. See | |
+|:ref:`Create and delete VMs`. | |
+| | |
+| |``$ openstack port list --server VM_ID`` |
++----------------------------------+-----------------------------------------+
+|Searches for ports, but shows | |
+|only the ``mac_address`` of | |
+|the port. | |
+| | |
+| | ``$ openstack port list -c`` |
+| | ``"MAC Address" --server VM_ID`` |
++----------------------------------+-----------------------------------------+
+|Temporarily disables a port from | |
+|sending traffic. | |
+| | |
+| | ``$ openstack port set PORT_ID`` |
+| | ``--disable`` |
++----------------------------------+-----------------------------------------+
+
+**Basic Compute and Networking operations**
+
+.. note::
+
+ The ``device_id`` can also be a logical router ID.
+
+.. note::
+
+ - When you boot a Compute VM, a port on the network that
+ corresponds to the VM NIC is automatically created and associated
+ with the default security group. You can configure `security
+ group rules <#enable-ping-and-ssh-on-vms-security-groups>`__ to enable
+ users to access the VM.
+
+.. _Create and delete VMs:
+ - When you delete a Compute VM, the underlying Networking port is
+ automatically deleted.
+
+Advanced VM creation operations
+-------------------------------
+
+This table shows example :command:`openstack` commands that enable you to
+complete advanced VM creation operations:
+
++-------------------------------------+--------------------------------------+
+| Operation | Command |
++=====================================+======================================+
+|Boots a VM with multiple | |
+|NICs. | |
+| | ``$ openstack server create --image``|
+| | ``IMAGE --flavor FLAVOR`` |
+| | ``--nic net-id=NET_ID`` |
+| | ``--nic net-id=NET2-ID VM_NAME`` |
++-------------------------------------+--------------------------------------+
+|Boots a VM with a specific IP | |
+|address. Note that you cannot | |
+|use the ``--max`` or ``--min`` | |
+|parameters in this case. | |
+| | |
+| | ``$ openstack server create --image``|
+| | ``IMAGE --flavor FLAVOR --nic`` |
+| | ``net-id=NET_ID,v4-fixed-ip=IP-ADDR``|
+| | ``VM_NAME`` |
++-------------------------------------+--------------------------------------+
+|Boots a VM that connects to all | |
+|networks that are accessible to the | |
+|project that submits the request | |
+|(without the ``--nic`` option). | |
+| | |
+| | ``$ openstack server create --image``|
+| | ``IMAGE --flavor FLAVOR`` |
++-------------------------------------+--------------------------------------+
+
+**Advanced VM creation operations**
+
+.. note::
+
+ Cloud images that distribution vendors offer usually have only one
+ active NIC configured. When you boot with multiple NICs, you must
+ configure additional interfaces on the image or the NICs are not
+ reachable.
+
+ The following Debian/Ubuntu-based example shows how to set up the
+ interfaces within the instance in the ``/etc/network/interfaces``
+ file. You must apply this configuration to the image.
+
+ .. code-block:: bash
+
+ # The loopback network interface
+ auto lo
+ iface lo inet loopback
+
+ auto eth0
+ iface eth0 inet dhcp
+
+ auto eth1
+ iface eth1 inet dhcp
+
+Enable ping and SSH on VMs (security groups)
+--------------------------------------------
+
+You must configure security group rules depending on the type of plug-in
+you are using. If you are using a plug-in that:
+
+- Implements Networking security groups, you can configure security
+ group rules directly by using the :command:`openstack security group rule create`
+ command. This example enables ``ping`` and ``ssh`` access to your VMs.
+
+ .. code-block:: console
+
+ $ openstack security group rule create --protocol icmp \
+ --ingress SECURITY_GROUP
+
+ .. code-block:: console
+
+ $ openstack security group rule create --protocol tcp \
+ --dst-port 22 --ingress SECURITY_GROUP
+
+- Does not implement Networking security groups, you can configure
+ security group rules by using the :command:`openstack security group rule
+ create` or :command:`euca-authorize` command. These :command:`openstack`
+ commands enable ``ping`` and ``ssh`` access to your VMs.
+
+ .. code-block:: console
+
+ $ openstack security group rule create --protocol icmp default
+ $ openstack security group rule create --protocol tcp --dst-port 22:22 default
+
+.. note::
+
+ If your plug-in implements Networking security groups, you can also
+ leverage Compute security groups by setting
+ ``security_group_api = neutron`` in the ``nova.conf`` file. After
+ you set this option, all Compute security group commands are proxied
+ to Networking.
diff --git a/doc/source/admin/index.rst b/doc/source/admin/index.rst
index 99fc010d292..a739c84f102 100644
--- a/doc/source/admin/index.rst
+++ b/doc/source/admin/index.rst
@@ -21,3 +21,4 @@ This guide documents the OpenStack Ocata release.
ops
migration
misc
+ archives/index