Additional fixes

1. Removes centralized_dragonflow from the index (learned in
   the last meeting that this isn't in scope for df)
2. Fixes a duplicated diagram
3. Fixes the reason for the last warning.

Fixes sphinx warnings and errors.

1. Change image references to local paths.
2. Fix underline too short issues.
3. Fix numbered/unnumbered lists.
4. Fix unindented ascii art.
5. Fix line break issues.
6. Fix naming typo in index.rst.

There is still a reference to an unknown document in
source/index.rst (centralized_dragonflow).

Change-Id: I5165d4efa0644470c90adfbc606daabfbf55292f
Christian Schulze-Wiehenbrauk 2016-11-07 08:33:13 +01:00
parent 5769fd9a64
commit 54de7fd5e3
19 changed files with 328 additions and 284 deletions

View File

@ -1,9 +1,9 @@
==================
================
Distributed DHCP
==================
================
Current Neutron Reference Implementation
=========================================
========================================
The DHCP server is implemented using the Dnsmasq server
running in a namespace on the network-node, one per tenant subnet
that is configured with DHCP enabled.
@ -20,14 +20,14 @@ Problems with current DHCP implementation:
2) Centralized solution depends on the network node
DHCP agent
-----------
----------
Same concept as the L3 agent and namespaces for virtual routers:
using black boxes that implement functionality and using them as the IaaS
backbone implementation.
Distributed DHCP In Dragonflow
===============================
==============================
Dragonflow distributes the DHCP policy/configuration using the pluggable DB.
Each controller reads this DB and installs OVS flows that hijack DHCP traffic
and send that traffic to the controller.
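As an illustration of such a hijacking flow, the sketch below punts VM-originated DHCP traffic (UDP 68 to 67) to the controller. It is a minimal Ryu-style example with assumed names, not Dragonflow's actual DHCP application:

.. code-block:: python

    # Illustrative sketch: send DHCP client->server traffic to the controller.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class DhcpHijackApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def install_dhcp_hijack(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # match IPv4/UDP 68->67, i.e. DHCP discover/request from a VM
            match = parser.OFPMatch(eth_type=0x0800, ip_proto=17,
                                    udp_src=68, udp_dst=67)
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                          match=match, instructions=inst))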
@ -37,13 +37,13 @@ DHCP acks.
The following diagrams demonstrate this process:
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/dhcp1.jpg
.. image:: ../images/dhcp1.jpg
:alt: Distributed DHCP 1
:width: 600
:height: 525
:align: center
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/dhcp2.jpg
.. image:: ../images/dhcp2.jpg
:alt: Distributed DHCP 2
:width: 600
:height: 525

View File

@ -1,6 +1,6 @@
=======================
======================
Distributed Dragonflow
=======================
======================
Dragonflow is a distributed SDN controller for OpenStack® Neutron™
supporting distributed Switching, Routing, DHCP and more.
@ -39,8 +39,7 @@ Key Design Guidelines
High Level Architecture
-----------------------
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/
dragonflow_distributed_architecture.png
.. image:: ../images/dragonflow_distributed_architecture.png
:alt: Solution Overview
:width: 600
:height: 455
@ -119,9 +118,9 @@ How to Install
<https://github.com/openstack/dragonflow/tree/master/doc/source/multi-node-conf>`_
Dragonflow Talks
-----------------
----------------
- `Dragonflow - Neutron done the SDN Way - OpenStack Austin Summit
<https://www.openstack.org/videos/video/dragonflow-neutron-done-the-sdn-way>`_
<https://www.openstack.org/videos/video/dragonflow-neutron-done-the-sdn-way>`_
- `Dragonflow Introduction Video - OpenStack Tokyo Summit
<https://www.youtube.com/watch?v=wo1Q-BL3nII>`_

View File

@ -14,7 +14,6 @@ Contents:
readme
installation
testing_and_debugging
centralized_dragonflow
distributed_dragonflow
distributed_dhcp
pluggable_db

View File

@ -1,41 +1,41 @@
==========
========
Pipeline
==========
========
The following diagrams explain the Dragonflow pipeline in more detail:
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pipeline1.jpg
.. image:: ../images/pipeline1.jpg
:alt: Dragonflow Pipeline
:width: 600
:height: 525
:align: center
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pipeline2.jpg
.. image:: ../images/pipeline2.jpg
:alt: Dragonflow Pipeline
:width: 600
:height: 525
:align: center
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pipeline3.jpg
.. image:: ../images/pipeline3.jpg
:alt: Dragonflow Pipeline
:width: 600
:height: 525
:align: center
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pipeline4.jpg
.. image:: ../images/pipeline4.jpg
:alt: Dragonflow Pipeline
:width: 600
:height: 525
:align: center
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pipeline5.jpg
.. image:: ../images/pipeline5.jpg
:alt: Dragonflow Pipeline
:width: 600
:height: 525
:align: center
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pipeline6.jpg
.. image:: ../images/pipeline6.jpg
:alt: Dragonflow Pipeline
:width: 600
:height: 525

View File

@ -1,6 +1,6 @@
==============
============
Pluggable DB
==============
============
Instead of implementing a proprietary DB solution for Dragonflow or picking
one open source framework over the other, we designed the DB layer in
@ -24,20 +24,20 @@ the DB driver API class and add an installation script for the DB framework serv
The following diagram depicts the pluggable DB architecture in Dragonflow and the
currently supported DB frameworks:
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/db1.jpg
.. image:: ../images/db1.jpg
:alt: Pluggable DB architecture
:width: 600
:height: 525
:align: center
Classes in the DB Layer
========================
=======================
The following sections describe the two main classes that are part of the
DB layer.
Applicative N/B DB Adapter Layer
----------------------------------
--------------------------------
This component is the translation layer between the data model elements
and the generic DB driver.
@ -66,7 +66,7 @@ and using them if possible.
DB Driver API
--------------
-------------
DB Driver is an interface class that lists the methods that must be implemented
in order for a certain DB framework to work with Dragonflow as a backend.
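A minimal sketch of what such an interface class could look like (the method names here are illustrative, not the exact Dragonflow driver API):

.. code-block:: python

    import abc

    class DbApi(abc.ABC):
        """Contract a DB framework must satisfy to back Dragonflow."""

        @abc.abstractmethod
        def initialize(self, db_ip, db_port, **args):
            """Connect to the DB framework server."""

        @abc.abstractmethod
        def get_key(self, table, key):
            """Read a single value from a logical table."""

        @abc.abstractmethod
        def set_key(self, table, key, value):
            """Create or update a single value."""

        @abc.abstractmethod
        def delete_key(self, table, key):
            """Delete a single value."""

        @abc.abstractmethod
        def get_all_entries(self, table):
            """Full-table read, used for proactive population."""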
@ -90,7 +90,7 @@ DB framework looking for changes.
Modes of DB
============
===========
There are three different modes for the interaction between Dragonflow and the DB.
Full Proactive
@ -102,7 +102,7 @@ DB in order to do fast lookups.
Selective Proactive
-------------------
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/db2.jpg
.. image:: ../images/db2.jpg
:alt: Pluggable DB architecture
:width: 600
:height: 525
@ -116,7 +116,7 @@ This mode is called selective proactive.
The following diagram depicts why this is needed:
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/db3.jpg
.. image:: ../images/db3.jpg
:alt: Pluggable DB architecture
:width: 600
:height: 525
@ -133,7 +133,7 @@ longer correct, but we kept it simple in order to demonstrate that in setups tod
are many isolated topologies)
Reactive
---------
--------

View File

@ -17,7 +17,7 @@ publisher.
This can be seen in the following diagram:
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pubsub_topology.png
.. image:: ../images/pubsub_topology.png
@ -28,7 +28,7 @@ Since some publishers need to bind to a TCP socket, and we will want to run
monitoring services that need to run only once per server, and not once per
core, we provide a *publisher service*.
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/pubsub_neutron_API_server.png
.. image:: ../images/pubsub_neutron_API_server.png
Therefore the communication between the Neutron service and the publisher
service requires an inter-process communication (IPC) solution.
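For example, one plausible IPC hand-off using pyzmq and an illustrative ipc:// socket path (not necessarily the mechanism the spec settles on):

.. code-block:: python

    import json
    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)
    # every Neutron API worker pushes events to the publisher service
    sock.connect('ipc:///tmp/df-publisher')  # illustrative socket path

    event = {'table': 'lport', 'action': 'update', 'key': 'port-uuid'}
    sock.send_string(json.dumps(event))  # the publisher fans this out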

View File

@ -2,12 +2,12 @@ Installation guide for Dragonflow
Keep in mind that Dragonflow is still in beta.
Prerequisites
------------
-------------
1) Open vSwitch 2.5.0
Quick Installation
-------------------
------------------
1) Clone Devstack
@ -121,11 +121,11 @@ Important parameters that needs to be set in ``local.conf`` :
You can find example configuration files in the multi-node-conf or the single-node-conf directories.
============================================
Automated setup using Vagrant + Virtualbox
============================================
==========================================
Automated setup using Vagrant + Virtualbox
==========================================
`Vagrant Installation Guide <http://docs.openstack.org/developer/dragonflow/installation.html>`_
Troubleshooting
----------------
---------------

View File

@ -151,12 +151,12 @@ We will do the following tests:
7. 200 objects created/updated
8. 300 objects created/updated
9. 400 objects created/updated
8. 500 objects created/updated
9. 600 objects created/updated
10. 700 objects created/updated
10. 800 objects created/updated
10. 900 objects created/updated
10. 1000 objects created/updated
10. 500 objects created/updated
11. 600 objects created/updated
12. 700 objects created/updated
13. 800 objects created/updated
14. 900 objects created/updated
15. 1000 objects created/updated
Multiple tenants
@ -216,5 +216,7 @@ References
==========
[1] http://docs-draft.openstack.org/04/270204/4/check/gate-performance-docs-docs/9264b70/doc/build/html/test_plans/db/plan.html
[2] http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html
[3] https://aws.amazon.com/vpc/faqs/

View File

@ -4,9 +4,9 @@
http://creativecommons.org/licenses/by/3.0/legalcode
==================
================
Distributed DNAT
==================
================
https://blueprints.launchpad.net/dragonflow/+spec/fip-distribution
@ -49,7 +49,7 @@ A performance penalty is not expected as patch-ports are not really installed
in the kernel cache.
Setup
------
-----
The Dragonflow controller creates the br-ex bridge at every compute node and registers
itself as the controller for this bridge.
The Dragonflow controller needs to distinguish between the two bridges and
@ -63,7 +63,7 @@ potentially more then one external network can be configured.
Dragonflow controller must have the correct mappings internally)
Configuration - Floating IP Added
----------------------------------
---------------------------------
Floating IP is configured in the Neutron DB, and the Dragonflow plugin maps this
configuration to Dragonflow's DB model and populates the floating IP table.
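A sketch of what that plugin-side mapping might look like (field and API names are assumptions, not the actual plugin code):

.. code-block:: python

    def map_floatingip(nb_api, neutron_fip):
        # translate the Neutron floating IP dict into the DF NB model
        nb_api.create_entry('floatingip', {
            'id': neutron_fip['id'],
            'floating_ip_address': neutron_fip['floating_ip_address'],
            'fixed_ip_address': neutron_fip.get('fixed_ip_address'),
            'port_id': neutron_fip.get('port_id'),
            'router_id': neutron_fip.get('router_id'),
        })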
@ -99,8 +99,11 @@ to the floating IP address coming from the external network.
Match only on traffic coming from the external network.
Before this point FWaaS must be applied; we can either:
- Add this logic in flows in the Dragonflow pipeline.
- Direct traffic to a local FW entity port.
- Receive FW services from an external appliance; in that case the FWaaS
  should have already been applied before the packet arrived at the
  compute node.
@ -114,10 +117,11 @@ to the floating IP address coming from the external network.
A different detailed spec must be introduced to define this; this spec,
however, has no conflicts with such a possible design.
3) For every floating IP a patch port is added between br-ex and br-int.
3) For every floating IP a patch port is added between br-ex and br-int
after the NAT conversion (IP and MAC) send the packet to the correct
patch-port.
4) On br-int, add a matching flow on in_port (for the patch port),
classify it with the same network as the destination VM (the VM
that this floating IP belongs to) and continue the same regular
@ -154,8 +158,8 @@ external network.
4) At the egress table (after egress security rules) add flow to send the
packet to br-ex using the correct patch port.
(** We can avoid the extra steps and send the packet to the patch
port in step 3, however doing it this way also includes egress security
and introduce better modeling)
port in step 3, however doing it this way also includes egress security
and introduce better modeling)
5) At br-ex match flow according to patch-port in_port and apply the NAT
rule.

View File

@ -21,26 +21,25 @@
Dragonflow Specs
=================
================
This section contains detailed specification documents for
different features inside Dragonflow.
Spec Template
--------------
Specs
-----
.. toctree::
:maxdepth: 3
skeleton
template
distributed_dnat
mac_spoofing
selective_topo_dist
performance_testing
ovsdb_monitor
igmp_application_and_multicast_support
redis_db_driver
redis_driver
keep_db_consistency
publish_subscribe_abstraction
local_controller_reliability
@ -55,8 +54,18 @@ Spec Template
port_status_update
config_generation
cassandra_driver
virtual_tunnel_port
Templates
---------
.. toctree::
:maxdepth: 3
skeleton
template
Indices and tables
------------------

View File

@ -4,14 +4,14 @@
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
============================
Local Controller Reliability
=============================
============================
This spec describes the design of the reliability of DragonFlow.
Problem Description
====================
===================
By default, OVS re-sets up its flows when it loses the connection with the controller.
That means both a restart of the local controller and a restart of OVS will delete flows,
@ -26,10 +26,10 @@ limited to the following:
4. Missing flows
Proposed Change
================
===============
Solution to local controller restart
-------------------------------------
------------------------------------
When the local controller restarts, OVS drops all existing flows. This breaks network
traffic until flows are re-created.
@ -39,11 +39,11 @@ flows and then flows with stale cookies are deleted during cleanup.
The details are:
1. Change the fail mode to secure, with this setting, OVS won't delete flows
when it lose connection with local controller.
when it lose connection with local controller.
2. Use canary flow to hold cookie.
3. When local controller restart, read canary flow from OVS, get canary flow's
cookie as old cookie, generate new cookie based on old cookie, update
canary flow with new cookie.
cookie as old cookie, generate new cookie based on old cookie, update
canary flow with new cookie.
4. Notify Dragonflow apps to flush flows with the new cookie.
5. Delete flows with the old cookie (sketched below).
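The cookie handling in steps 2-5 boils down to deriving a new cookie that stale flows cannot carry; a runnable toy sketch (the constant and values are illustrative, not Dragonflow's real cookie layout):

.. code-block:: python

    AGING_COOKIE_BIT = 0x1  # illustrative; the real mask may differ

    def rotate_cookie(old_cookie):
        # flip the aging bit so flows re-installed with the new cookie
        # can be told apart from stale flows carrying the old one
        return old_cookie ^ AGING_COOKIE_BIT

    old = 0x20                 # cookie read back from the canary flow
    new = rotate_cookie(old)
    assert new != old          # cleanup deletes flows still tagged `old`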
@ -63,80 +63,80 @@ The aging process is depicted in the following diagram:
::
+------------------+ +------------------+ +------------------+
| | | | | |
| OVS | | Dragonflow | | CentralDB |
| | | | | |
+---------+--------+ +---------+--------+ +---------+--------+
| | |
| set fail mode to secure | |
| <---------------------------+ |
| | |
| +-----+ |
| | |restart |
| | | |
| +-----+ |
| | |
| notify all ports | |
+---------------------------> | get ports' detail info |
| +---------------------------> |
| | |
| | return ports' info |
| | <---------------------------+
| | |
| add flows with new cookie | |
| <---------------------------+ |
| | |
| | |
| get all flows | |
| <---------------------------+ |
| return | |
+---------------------------> | |
| | |
| delete flows with stale cookie |
| <---------------------------| |
| | |
| | |
+ + +
+------------------+ +------------------+ +------------------+
| | | | | |
| OVS | | Dragonflow | | CentralDB |
| | | | | |
+---------+--------+ +---------+--------+ +---------+--------+
| | |
| set fail mode to secure | |
|<----------------------------+ |
| | |
| +-----+ |
| | |restart |
| | | |
| +-----+ |
| | |
| notify all ports | |
+---------------------------->| get ports' detail info |
| +---------------------------->|
| | |
| | return ports' info |
| |<----------------------------+
| | |
| add flows with new cookie | |
|<----------------------------+ |
| | |
| | |
| get all flows | |
|<----------------------------+ |
| return | |
+---------------------------->| |
| | |
| delete flows with stale cookie |
|<----------------------------+ |
| | |
| | |
+ + +
Solution to OVS restart
------------------------
-----------------------
An OVS restart will delete all flows and interrupt the traffic.
After startup, OVS will reconnect with the controller to set up new flows.
This process is depicted in the following diagram:
::
+------------------+ +------------------+ +------------------+
| | | | | |
| OVS | | Dragonflow | | CentralDB |
| | | | | |
+------------------+ +---------+--------+ +---------+--------+
+----+ | |
| |restart | |
| | | |
+----+ | |
| | |
| notify all ports | |
+---------------------------> | |
| | get ports' detail info |
| +---------------------------> |
| | |
| | return ports' info |
| +<--------------------------- |
| | |
| create bridges if needed | |
| <---------------------------+ |
| | |
| | |
| add flows with new cookie | |
| <---------------------------+ |
| | |
| | |
+ + +
+------------------+ +------------------+ +------------------+
| | | | | |
| OVS | | Dragonflow | | CentralDB |
| | | | | |
+------------------+ +---------+--------+ +---------+--------+
+----+ | |
| |restart | |
| | | |
+----+ | |
| | |
| notify all ports | |
+---------------------------> | |
| | get ports' detail info |
| +---------------------------> |
| | |
| | return ports' info |
| +<--------------------------- |
| | |
| create bridges if needed | |
| <---------------------------+ |
| | |
| | |
| add flows with new cookie | |
| <---------------------------+ |
| | |
| | |
+ + +
Solution to residual flows
---------------------------
--------------------------
Residual flows are flows which no longer take effect but stay in the flow
table. A backward-incompatible upgrade or an incorrect implementation may generate
this kind of flow. The residual flows may not affect the forwarding but they will
@ -147,18 +147,18 @@ We could reuse the solution for 'local controller restart', trigger local
controller to re-flush flows then delete the flows with old cookie.
Pros
"""""
""""
It's easy to implement because we could reuse the solution for 'OVS restart'
Cons
"""""
""""
It's not efficient because we need to regenerate all the flows again.
This method is suited for the residual flows caused by the
'backward incompatible upgrade'.
Solution to missing flows
--------------------------
-------------------------
When there are missing flows, OVS cannot forward the packet by itself, so it will
forward the packet to the local controller. For example, in the context of DVR
forwarding, if there is no corresponding host route flow to the destination, OVS will forward
@ -168,9 +168,13 @@ to OVS. We don't plan to discuss it in more detail here and it will be processed
by the specific application of Dragonflow.
References
===========
==========
[1] http://www.openvswitch.org/support/dist-docs-2.5/ovs-vswitchd.8.pdf
[2] http://www.openvswitch.org/support/dist-docs-2.5/ovsdb-server.1.pdf
[3] https://bugs.launchpad.net/mos/+bug/1480292
[4] https://bugs.launchpad.net/openstack-manuals/+bug/1487250
[5] https://www.kernel.org/doc/Documentation/networking/openvswitch.txt

View File

@ -6,7 +6,7 @@
==============================
Port Security and MAC Spoofing
===============================
==============================
https://blueprints.launchpad.net/dragonflow/+spec/mac-spoofing-protection
@ -36,11 +36,13 @@ It will also have rules allowing DST broadcast/multicast MAC traffic
to pass.
Additional drop rules:
1) Packets with SRC MAC broadcast/multicast bit set.
1. Packets with SRC MAC broadcast/multicast bit set.
(This option might be needed in some environments, we can leave this as a configurable
option in case it is -
http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/107995-config-catalyst-00.html#mm)
2) VLAN tagged frames where the TCI "Drop eligible indicator" (TEI) bit is set (congestion)
option in case it is -
http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/107995-config-catalyst-00.html#mm)
2. VLAN tagged frames where the TCI "Drop eligible indicator" (TEI) bit is set (congestion)
The following are examples of the flows configured in that table::
@ -54,27 +56,28 @@ This table also blocks any ARP responses with IPs that don't belong
to this VM port.
(Same for ND responses for IPv6)
::
+-------------------------------------------------------------------------------------+
| |
| OVS Dragonflow Pipeline |
| |
| +----------------+ +-------------+ +-----------+ +------------+ |
| | | | | | | | | |
+------+ | | | | MAC | | Security | | Dispatch | |
| | | | Port | | Spoofing | | Groups | | To | |
| VM +----------->+ Classification +-----> | Protection +------->+ Ingress +---->+ Local | |
| | | | (Table 0) | | Table | | | | Ports | |
+------+ | | | | | | | | | |
| | | | | | | | | |
| +----------------+ +-------------+ +-----------+ +------------+ |
| |
| |
+-------------------------------------------------------------------------------------+
+------+ +-------------------------------------------------------------------------------------+
| | | |
| VM | | OVS Dragonflow Pipeline |
| | | |
+---+--+ | +----------------+ +-------------+ +-----------+ +------------+ |
| | | | | | | | | | |
| | | | | MAC | | Security | | Dispatch | |
| | | Port | | Spoofing | | Groups | | To | |
+-------------->+ Classification +-----> | Protection +------->+ Ingress +---->+ Local | |
| | | (Table 0) | | Table | | | | Ports | |
| | | | | | | | | |
| | | | | | | | | |
| +----------------+ +-------------+ +-----------+ +------------+ |
| |
| |
+-------------------------------------------------------------------------------------+
Allowed Address Pairs
-----------------------
---------------------
In Neutron there is a feature called allowed address pairs [1]; it allows you
to define <mac, ip> pairs that are allowed for a specific port regardless of
its configured MAC address/IP address.
@ -83,20 +86,20 @@ Dragonflow needs to add specific rules to allow all the allowed address
pairs.
Port Security Disable
----------------------
---------------------
Neutron has a feature to disable port security for ML2 plugins [3]. Even
though Dragonflow is currently not an ML2 plugin, we still would like a way
to disable/enable port security for a certain port.
L2 ARP Supression
------------------
-----------------
It is also important to note that with full ARP L2 suppression [2], some of
the features described here are not needed as OVS flows are used
to respond to ARP requests and no ARP traffic should actually reach a VM.
We still need to verify that this also blocks gratuitous ARPs.
Blocking invalid broadcast/multicast traffic
---------------------------------------------
--------------------------------------------
As part of the port security feature we should also prevent traffic loops.
We drop traffic that has the same src and dst ports classified (the src port register
and the dst port register are the same).
@ -112,5 +115,7 @@ on a different spec that will address controller reliability concerns.
References
==========
[1] http://specs.openstack.org/openstack/neutron-specs/specs/api/allowed_address_pairs.html
[2] https://blueprints.launchpad.net/dragonflow/+spec/l2-arp-supression
[3] https://github.com/openstack/neutron-specs/blob/master/specs/kilo/ml2-ovs-portsecurity.rst

View File

@ -4,16 +4,15 @@
http://creativecommons.org/licenses/by/3.0/legalcode
===============
=============
OVSDB Monitor
===============
=============
This blueprint describes the addition of OVSDB monitor support for
Dragonflow. It implements a lightweight OVSDB driver which is based
on the OVSDB monitor/notification mechanism; it solves the performance
problem of fetching vm ports/interfaces info from OVSDB for Dragonflow.
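A minimal monitor-based loop with the standard python-ovs bindings might look as follows (table and column choices are illustrative):

.. code-block:: python

    import ovs.db.idl
    import ovs.poller

    helper = ovs.db.idl.SchemaHelper(
        location='/usr/share/openvswitch/vswitch.ovsschema')
    helper.register_columns('Interface', ['name', 'type', 'options'])
    idl = ovs.db.idl.Idl('tcp:127.0.0.1:6640', helper)

    while True:
        poller = ovs.poller.Poller()
        idl.run()          # apply monitor updates pushed by ovsdb-server
        idl.wait(poller)
        poller.block()     # sleep until the next notification arrives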
===================
Problem Description
===================
@ -37,7 +36,6 @@ resources further;
For each session between Dragonflow and OVSDB for a new logical port,
it will fetch a lot of unnecessary data from many OVSDB tables;
====================
Solution Description
====================
@ -89,7 +87,6 @@ notification it will do the same work as step 4.
If we restart the Dragonflow process or restart OVSDB, the Dragonflow OVSDB
driver will reconnect to the OVSDB server, so steps 1 to 6 will be executed again.
====================
Event Classification
====================
@ -119,9 +116,8 @@ Patch port online\offline:
type patch
options Peer=<peer port name>
=========
Conclusion
=========
==========
Our solution provides a lightweight OVSDB driver which
implements OVSDB data monitoring and synchronization, removes the Dragonflow
loop process, maintains only one socket channel and transfers less data.

View File

@ -1,9 +1,10 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
http://creativecommons.org/licenses/by/3.0/legalcode
==============================================
DragonFlow Data Plane Performance Testing Spec
==============================================
@ -82,7 +83,7 @@ Environment setup
Test description & scenarios
----------------------------
* Generate 10 consecutive tests with a 5-second sleep between each test towards
the server.
the server.
Tests types
-----------
@ -107,7 +108,7 @@ Environment setup
Test description & scenarios
----------------------------
* From each of the iPerf clients, generate 10 consecutive tests with 5
seconds sleep between each test towards the server.
seconds sleep between each test towards the server.
Tests types
-----------
@ -120,9 +121,9 @@ Total number of tests
Testing methodology
===================
- For measuring DragonFlow networking improvement/overhead, all tests have to
be executed with & without DF (with DVR. OVN as well?).
be executed with & without DF (with DVR. OVN as well?).
- The created VMs should report when they are up, so it will be possible to
count the successfully created VMs in the automation.
count the successfully created VMs in the automation.
Network link quality definition
@ -131,7 +132,7 @@ The quality of a link can be tested as follows:
* Bandwidth - measured through iPerf TCP test
* Datagram loss - measured through iPerf UDP test. A good link quality: the
packet loss should not go over 1%.
packet loss should not go over 1%.
* Jitter (latency variation) - measured through iPerf UDP test
* Latency (response time or RTT) - measured using the Ping command

View File

@ -341,7 +341,8 @@ implement an enhancement to
enable multi-publisher-multi-subscriber mechanism, using something like ZMQ EPGM.
Configuration Options
======================
=====================
'enable_df_pub_sub', default=False, help=_("Enable use of Dragonflow built-in pub/sub")),
'pub_sub_driver', default='zmq_pubsub_driver', help=_('Drivers to use for the Dragonflow pub/sub')),
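Declared with oslo.config, these options would look roughly like this (the 'df' option group is an assumption):

.. code-block:: python

    from oslo_config import cfg

    df_opts = [
        cfg.BoolOpt('enable_df_pub_sub', default=False,
                    help="Enable use of Dragonflow built-in pub/sub"),
        cfg.StrOpt('pub_sub_driver', default='zmq_pubsub_driver',
                   help='Drivers to use for the Dragonflow pub/sub'),
    ]
    cfg.CONF.register_opts(df_opts, group='df')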

View File

@ -4,14 +4,14 @@
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
==================
Redis Availability
=============================
==================
This spec describes the design of the availability of Redis in DragonFlow.
Problem Description
====================
===================
Dragonflow's Redis driver reads the Redis cluster topology and caches it
locally during the Redis driver's initialization, and then it connects to the Redis
@ -21,12 +21,13 @@ db master node restarting and Dragonflow should detect it that it could
move the connections from the old master node to the new one.
There are two scenarios in Redis cluster topology changing:
1. The connection will be lost when the master node restarts.
2. The connection will not be lost while the master node is changed to slave
without restarting as using "CLUSTER FAILOVER" command.
In this case one slave will be promoted to master and the client
could not get connection error but a MOVED error from server after
sending request to the new slave node.
without restarting as using "CLUSTER FAILOVER" command.
In this case one slave will be promoted to master and the client
could not get connection error but a MOVED error from server after
sending request to the new slave node.
Some data may be lost in Redis HA because Redis does not
provide strong consistency. So for this case,
@ -40,10 +41,10 @@ It could be divided into 2 steps:
2. Processing HA after detection
Proposed Change
================
===============
Description to step 1
-------------------------------------
---------------------
If this step is done in each controller, too many Dragonflow
compute nodes may read the DB cluster at the same time, and the
Redis cluster could hardly handle it.
Note that there will be a reconnection after a connection error, and
if the reconnection fails too, it means that an HA event occurred.
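In redis-py terms, that detection rule amounts to retrying once and treating a second failure as a failover (hosts and ports are illustrative):

.. code-block:: python

    import redis

    def failover_suspected(host, port):
        for _ in range(2):                  # initial try + one reconnect
            try:
                redis.StrictRedis(host=host, port=port,
                                  socket_timeout=2).ping()
                return False                # node reachable, no HA event
            except redis.ConnectionError:
                continue                    # retry once before deciding
        return True                         # reconnection failed too

    if failover_suspected('127.0.0.1', 7000):
        print('HA occurred: re-read the cluster topology')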
Description to step 2
------------------------
---------------------
After receiving the cluster information from the plugin, the local controller will
compare the new nodes with the old nodes and update the topology information
and connections,
@ -77,35 +78,38 @@ then a "dbrestart" message will be sent to db consist module.
The following diagram shows the procedure of Dragonflow:
NB
+-------------------------------+
| 1.notify |
+--------+------> +----------+ |
||driver | |DB consist| |
|--------+ +----------+ |
+-------------------------------+
|
2.resync data|
|
+-------------------v------+
| |
| |
| Redis cluster |
| |
| |
+--------------------+-----+
^
2.resync data |
|
+-------------------------------+
| 1.notify | |
+--------+------> +--+-------+ |
||driver | |DB consist| |
|--------+ +----------+ |
+-------------------------------+
SB
::
NB
+-------------------------------+
| 1.notify |
+--------+------> +----------+ |
||driver | |DB consist| |
|--------+ +----------+ |
+-------------------------------+
|
2.resync data|
|
+-------------------v------+
| |
| |
| Redis cluster |
| |
| |
+--------------------+-----+
^
2.resync data |
|
+-------------------------------+
| 1.notify | |
+--------+------> +--+-------+ |
||driver | |DB consist| |
|--------+ +----------+ |
+-------------------------------+
SB
References
===========
==========
[1] http://redis.io/topics/cluster-tutorial
[2] http://redis.io/topics/cluster-spec

View File

@ -31,8 +31,8 @@ Populating a Database API
-------------------------
Basic operations for the Redis DB, including add/delete/get/modify and so on.
The realization is based on the Grokzen lib.
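For reference, basic cluster operations through Grokzen's redis-py-cluster look roughly like this (addresses and keys are illustrative):

.. code-block:: python

    from rediscluster import StrictRedisCluster

    startup_nodes = [{'host': '127.0.0.1', 'port': '7000'}]
    rc = StrictRedisCluster(startup_nodes=startup_nodes,
                            decode_responses=True)
    rc.set('df.lport.port-uuid', '{"name": "vm1"}')   # add/modify
    print(rc.get('df.lport.port-uuid'))               # get
    rc.delete('df.lport.port-uuid')                   # delete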
The following diagram shows which components will populate Redis DB Cluster with driver[4].
The following diagram shows which components will populate Redis DB Cluster with
driver[4].::
+------------------+ +----------------+
|Neutron server | | Redis Driver |
@ -53,33 +53,36 @@ The following diagram shows which components will populate Redis DB Cluster with
+------------------+ +----------------+
Publish API
-----------
The new API realizes the publish function with channels, based on the andymccurdy lib.
The following diagram shows how Neutron config changes are published to all local controllers.
It is only a example.
It is only an example.::
+---------------+
| | +-----------------+
| DF Neutron | | Redis DB |
| Plugin | | |
| | | |
| Configuration| | |
| Change | | |
| | call Publish API | |
| +----------------------------------------> | |
| | | |
| | | |
| | +-----------------+
| |
+---------------+
Main process of realization;
r = redis.StrictRedis(...)
p = r.pubsub()
r.publish('my-first-channel', 'some data')/* my-first-channel is channel name,
some data is what you want to publish */
+---------------+
| | +-----------------+
| DF Neutron | | Redis DB |
| Plugin | | |
| | | |
| Configuration| | |
| Change | | |
| | call Publish API | |
| +----------------------------------------> | |
| | | |
| | | |
| | +-----------------+
| |
+---------------+
Main process of realization:
.. code-block:: python
r = redis.StrictRedis(*args)
p = r.pubsub()
r.publish('my-first-channel', 'some data')
# my-first-channel is channel name,
# some data is what you want to publish
Special Notice:
'Some data' will be encoded into a JSON pattern.
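That is, the publisher serializes the payload with json before calling publish; a small sketch using the same redis-py calls as above (the payload fields are illustrative):

.. code-block:: python

    import json
    import redis

    r = redis.StrictRedis(host='127.0.0.1', port=6379)
    payload = json.dumps({'table': 'lport', 'action': 'create',
                          'key': 'port-uuid'})
    r.publish('my-first-channel', payload)   # subscribers json-decode it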
@ -95,19 +98,25 @@ Realization is based on andymccurdy lib.
Here is an example of the subscription process:
r = redis.StrictRedis(...)
p = r.pubsub()
p.subscribe('my-first-channel', 'my-second-channel', ...) /* my-first-channel is channel name*/
p.unsubscribe('my-first-channel') /*here unsubscribe the channel */
.. code-block:: python
r = redis.StrictRedis(*args)
p = r.pubsub()
p.subscribe('my-first-channel', 'my-second-channel', ...) # my-first-channel is channel name
p.unsubscribe('my-first-channel') # here unsubscribe the channel
Here is an example of a message the driver may receive:
{'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'}
.. code-block:: python
{'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'}
type: One of the following: 'subscribe', 'unsubscribe', 'psubscribe', 'punsubscribe',
'message', 'pmessage'
channel: The channel [un]subscribed to or the channel a message was published to
'message', 'pmessage'.
channel: The channel [un]subscribed to or the channel a message was published to.
pattern: The pattern that matched a published message's channel.
Will be None in all cases except for 'pmessage' types.
data:
@ -125,35 +134,38 @@ Subscribe Thread For Reading Messages
The subscribe thread is in charge of receiving the notifications and sending
them back to the controller. Realization is based on andymccurdy lib.
The subscribe thread loop is depicted in the following diagram:
The subscribe thread loop is depicted in the following diagram::
+---------------+
| |
| Process |
+-----------------+ +-----------------+fun call| Function1 |
| | | +--------> |
| Subscribe Thread| | Message Dispatch| +---------------+
+-----------------+ +---------------+
| | | |
| | | Process |
| | +-----------------+fun call | Function1 |
| | | +-------->| |
|Subscribe Thread | | Message Dispatch| +---------------+
| | | |
|Wait For Message | | |
| | | Read Message | +----------------+
| | Send into Queue | From Queue |fun call | Process |
| New Message +-----------------------> +-------->| Function2 |
| | | Dispatch Message| | |
| | | | +----------------+
| Wait For Message| | |
| | | Read Message | +---------------+
| | Send into Queue | From Queue |fun call | Process |
| New Message +-----------------------> +-------->| Function2 |
| | | Dispatch Message| | |
| | | | +---------------+
| | | |
| | | |
| | | | +---------------+
| | | | fun call| Process |
| | | |fun call | Process |
| | | +---------> Function3 |
| | | | | |
+-----------------+ +-----------------+ | |
+---------------+
Realization Example:
while True:
for message in p.listen():
# classify the message channel content, send to different message queue for channel
.. code-block:: python
while True:
for message in p.listen():
# classify the message channel content, send to different message queue for channel
Special Notice:
There are not only three Process Functions.
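A runnable sketch of the listen -> queue -> dispatch split shown in the diagram (channel names and handlers are illustrative):

.. code-block:: python

    import json
    import queue
    import threading

    import redis

    msg_queue = queue.Queue()

    def subscribe_thread():
        p = redis.StrictRedis(host='127.0.0.1', port=6379).pubsub()
        p.subscribe('my-first-channel')
        for message in p.listen():          # wait for message
            if message['type'] == 'message':
                msg_queue.put(message)      # send into queue, don't process

    def dispatch():
        handlers = {'my-first-channel': lambda d: print('got', d)}
        while True:
            message = msg_queue.get()       # read message from queue
            data = json.loads(message['data'])
            handlers[message['channel'].decode()](data)  # fun call

    threading.Thread(target=subscribe_thread, daemon=True).start()
    dispatch()   # run the dispatcher in the main thread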
@ -169,7 +181,7 @@ driver only does connection fix,throw exception when connection is recovered,
driver will clear all subscriptions and the user of the subscription must resubscribe.
Connection Setup
-----------------
----------------
When the driver is initialized, it will connect to all db nodes for read/write/get/modify operations.
But for pub/sub, the driver will connect to one db node for one pub or one sub.
The driver guarantees that connections for pub/sub will be scattered among the db nodes.
@ -208,7 +220,11 @@ subscriber should subscribe again.
References
==========
[1]https://github.com/andymccurdy/redis-py
[2]http://redis.io/commands
[3]https://github.com/Grokzen/redis-py-cluster
[4]http://redis.io/topics/cluster-tutorial
[1] https://github.com/andymccurdy/redis-py
[2] http://redis.io/commands
[3] https://github.com/Grokzen/redis-py-cluster
[4] http://redis.io/topics/cluster-tutorial

View File

@ -13,7 +13,7 @@ https://blueprints.launchpad.net/dragonflow/+spec/vlan-network
This blueprint describes how to implement vlan L2 networking in Dragonflow.
Problem Description
==================
===================
Currently, Dragonflow only supports overlay networks.
When the admin creates a network, no network type can be chosen.
The network type is always set to an overlay network (VxLan, GRE...).
@ -26,7 +26,7 @@ This spec just discusses how to support vlan L2 networking.
Proposed Change
==============
===============
First, the Dragonflow plugin does not support creating vlan networks.
In the future, an ML2 Mechanism Driver will replace the Dragonflow Plugin to support
vlan networks.
@ -58,13 +58,13 @@ Here we call from vms is outbound direction, from outside is inbound direction.
These two directions will be discussed separately.
Two bridges per host
-------------------
--------------------
VMs are connected to br-int,
overlay tunnels are connected to br-int, and the physical nic is connected to br-1.
Vlan packets are transmitted to/from br-1.
Port updated
-----------
------------
When the controller receives port updated messages, it will install flows.
With this, outbound and inbound will be discussed as follows.
@ -75,7 +75,7 @@ arp,dhcp, broadcast/multicast.
These three types will be handled differently by the Dragonflow controller.
Outbound-Arp
""""""""""
""""""""""""
local port
~~~~~~~~~~
@ -89,7 +89,7 @@ Openflow items like this:
Table=ARP, Match: Arp Request, Actions: Arp Responders.
remote port
~~~~~~~~~~
~~~~~~~~~~~
When the controller receives a remote port updated message,
it will install flows just as in the local scenario.
If the destination is unknown, the arp request will be handled as a common broadcast,
@ -97,7 +97,7 @@ which will be discussed as follows.
Outbound-DHCP
""""""""""""
"""""""""""""
If the 'dhcp enable' option is chosen for a vlan network,
the controller acts as a dhcp server and responds to dhcp requests.
If the 'dhcp enable' option is off, dhcp broadcast is treated as common broadcast.
@ -105,7 +105,7 @@ Actually it's same as what is done for vxlan network.
Outbound-Common Broadcast/Multicast
""""""""""""""""""""""""""""""""""
"""""""""""""""""""""""""""""""""""
Broadcast, except for arp and dhcp, is similar to multicast processing.
We just take broadcast as an example.
When a broadcast happens, the packet should be forwarded to local ports,
@ -120,7 +120,7 @@ Outside forwarding behaviors depends on physical networks,
which will not be discussed here.
local port
~~~~~~~~~~~~
~~~~~~~~~~
When the controller receives local port updated messages,
if this port is the first port of the network on the host,
the controller will install broadcast flows on ovs like this:
@ -135,7 +135,7 @@ Actions:mod_vlan=vlan_id,output:path_br_1
If this port is not the first one, the controller only updates the first flow above.
remote port
~~~~~~~~~~~~
~~~~~~~~~~~
When the controller receives a remote port updated message, it will not update
broadcast flows, because with broadcast, ovs just needs to forward the packet to br-1.
This has already been done when the local port was updated, like this.
@ -147,11 +147,11 @@ The first action 'resubmit(,EGRESSTABLE)' has included remote broadcast scenario
Outbound-Unicast
"""""""""""""""""
""""""""""""""""
For unicast, the controller treats packets differently according to the destination port.
local port
~~~~~~~~~~~
~~~~~~~~~~
When the controller receives a local port updated message,
it will install flows for unicast forwarding.
@ -173,7 +173,7 @@ Because this has been done when first port updated.
Inbound
^^^^^^^^^^^
^^^^^^^
With inbound, a flow item will be installed in table 0, which will strip the vlan
and set metadata for the next table. The flow item looks like this:
Table=0,
@ -184,12 +184,12 @@ For simplicity, I will omit some flow tables that are not so directly related
with vlan networking.
Inbound-Arp
"""""""""""""""
"""""""""""
Inbound arp broadcast will be handled as common broadcast,
which will be discussed below.
Inbound-DHCP
"""""""""""""""
""""""""""""
DHCP requests will be handled by the controller that acts as the DHCP server,
so if inbound dhcp packets are received, nothing needs to be done.
@ -206,7 +206,7 @@ Match: reg7=port_key, Actions: output:ofport
Inbound-Broadcast/Multicast
""""""""""""""""""""""""""""
"""""""""""""""""""""""""""
When the controller receives a local port updated message,
it will install or update a flow like this.
@ -219,7 +219,7 @@ Match: reg7=port_unique_key, Actions: output:ofport
Port delete
---------------------------
-----------
When the controller receives port deleted messages, it will delete the corresponding
flow items as above.
What's more, there is a special scenario if the deleted port is the last

View File

@ -39,6 +39,10 @@ commands = {posargs}
commands = python setup.py testr --coverage --testr-args='{posargs}'
[testenv:docs]
deps =
sphinx
oslosphinx
reno
commands = python setup.py build_sphinx
[flake8]
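With sphinx, oslosphinx and reno listed under ``deps``, the docs build should be reproducible locally with ``tox -e docs`` (which runs the ``build_sphinx`` command above), making it easy to verify that the sphinx warnings fixed here stay fixed.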