The centralized L3 controller is deprecated as of Mitaka

Remove all of its documentation and source code
from the master (Mitaka) branch.
Removing it from the source tree avoids user confusion.

Change-Id: I88947c5787ca95e26af371acce828fe906429703
Eran Gampel 2016-01-21 20:02:05 +02:00
parent 3bed23e3df
commit d399ccec52
18 changed files with 0 additions and 3171 deletions


@@ -39,15 +39,3 @@ Overview and details are available in the `Distributed Dragonflow Section`_
:width: 600
:height: 525
:align: center
Centralized Dragonflow
======================
An implementation of a fully distributed virtual router, which replaces
DVR and can work with any ML2 mechanism driver and type driver.
Overview and details are available in the `Centralized Dragonflow Section`_
.. _Centralized Dragonflow Section: http://docs.openstack.org/developer/dragonflow/centralized_dragonflow.html


@@ -389,61 +389,3 @@ if [[ "$Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER" == "True" ]]; then
cleanup_ovs
fi
fi
if [[ "$Q_ENABLE_DRAGONFLOW" == "True" ]]; then
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
echo summary "DragonFlow pre-install"
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing DragonFlow"
git_clone $DRAGONFLOW_REPO $DRAGONFLOW_DIR $DRAGONFLOW_BRANCH
if is_service_enabled q-df-l3; then
echo "Cloning and installing Ryu"
git_clone $RYU_REPO $RYU_DIR $RYU_BRANCH
# Don't use setup_develop, which is for OpenStack global-requirements
# compatible projects; Ryu is not one of them.
pushd $RYU_DIR
setup_package ./ -e
popd
echo "Finished installing Ryu"
fi
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configure DragonFlow"
if is_service_enabled q-df-l3; then
_configure_neutron_l3_agent
fi
iniset $NEUTRON_CONF DEFAULT L3controller_ip_list $Q_DF_CONTROLLER_IP
iniset /$Q_PLUGIN_CONF_FILE agent enable_l3_controller "True"
iniset /$Q_PLUGIN_CONF_FILE agent L3controller_ip_list $Q_DF_CONTROLLER_IP
echo export PYTHONPATH=\$PYTHONPATH:$DRAGONFLOW_DIR:$RYU_DIR >> $RC_DIR/.localrc.auto
OVS_VERSION=`ovs-vsctl --version | head -n 1 | grep -E -o "[0-9]+\.[0-9]+\.[0-9]"`
if [ `vercmp_numbers "$OVS_VERSION" "2.3.1"` -lt "0" ] && is_service_enabled q-agt ; then
die $LINENO "You are running OVS version $OVS_VERSION. OVS 2.3.1+ is required for Dragonflow."
fi
echo summary "Dragonflow OVS version validated, version is $OVS_VERSION"
echo summary "Setting L2 Agent to use Dragonflow Agent"
AGENT_BINARY="$DF_L2_AGENT"
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing DragonFlow"
if is_service_enabled q-df-l3; then
run_process q-df-l3 "python $DF_L3_AGENT --config-file $NEUTRON_CONF --config-file=$Q_L3_CONF_FILE"
fi
fi
if [[ "$1" == "unstack" ]]; then
if is_service_enabled q-df-l3; then
stop_process q-df-l3
fi
fi
fi


@@ -1,173 +0,0 @@
Centralized Dragonflow
######################
Overview
--------
Dragonflow is an implementation of a fully distributed virtual router for
OpenStack Neutron that follows a Software Defined Networking (SDN) controller
design.
The *Centralized* version of Dragonflow is intended as a 100% replacement
for the `Neutron DVR <https://wiki.openstack.org/wiki/Neutron/DVR>`_, with
advantages such as greatly simplified management of the virtual router,
improved performance, stability, and scalability.
Architecture
------------
The Dragonflow SDN architecture is based on the separation of the network
control plane and data plane. This is accomplished by implementing the service
logic as a pipeline of {match, action} OpenFlow flows that are executed in the
data plane by the forwarding engine in the virtual switch (we rely on OVS).
By leveraging these programmatic capabilities and the distributed nature of
the virtual switches (i.e. one runs on each compute node), we were able to
remove other "moving parts" from the OpenStack deployment and replace them
with OpenFlow pipelines.
The benefits of this approach are twofold:
1. Fewer running processes == simpler maintenance == more stable environment
2. Services run truly distributed, removing the need to trombone traffic to
   a service node, thereby eliminating undesirable bottlenecks and greatly
   improving the ability to scale the environment to a larger number of VMs
   and compute nodes
The Hybrid Reactive-Proactive Model
===================================
Dragonflow makes extensive use of the reactive OpenFlow behavior, in which
the forwarding element (i.e. the virtual switch) forwards unmatched packets
to the software path that leads to the SDN controller.
Combining this extremely powerful capability with carefully constructed,
proactively deployed pipelines enabled us to balance functionality-rich
slow-path logic with the blazing-fast match-and-action engine.
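As a purely illustrative sketch (not part of the original code), the hybrid
model boils down to a proactive rule for known traffic plus a reactive
table-miss rule; the ``bridge`` helper, table number, match and priorities
below are hypothetical, mirroring the ``add_flow()`` style used by the L2
agent removed in this commit::

    def install_hybrid_pipeline(bridge):
        # Proactive part: a pre-installed rule keeps known traffic on the
        # fast path inside OVS (hypothetical match on a local VLAN tag).
        bridge.add_flow(table=0, priority=100, dl_vlan=42,
                        actions="resubmit(,10)")
        # Reactive part: the table-miss rule punts unmatched packets to the
        # SDN controller, which computes the forwarding decision and then
        # proactively installs more specific flows for that conversation.
        bridge.add_flow(table=0, priority=0, actions="controller")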
Deployment Models
=================
The following diagram illustrates the main Dragonflow service components in
the *Centralized* deployment model.
Centralized Dragonflow
^^^^^^^^^^^^^^^^^^^^^^
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/df_components.jpg
:alt: Solution Overview
:width: 750
:align: center
The main principle of this model is that the Dragonflow controller is
deployed in one or several central locations, separate from the
virtual switches that it manages.
The virtual switches connect to the Dragonflow controller over OpenFlow and
are managed remotely.
This model is suitable for small-to-medium deployments with a moderate
rate of new "VM-to-VM" connection establishments.
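To illustrate what "managed remotely" means in practice, the sketch below
condenses the ``set_controller_for_br()`` logic of the L2 agent removed in
this commit; it is a simplified rendering, and the ``bridge`` object is
assumed to expose the same helpers used there::

    def point_bridge_at_controllers(bridge, ip_address_list):
        # ip_address_list has the form "tcp:ip:port;tcp:ip2:port;..."
        controllers = ip_address_list.split(";")
        bridge.del_controller()
        bridge.set_controller(controllers)
        # Keep OVS usable even if the controller connection drops.
        bridge.set_controllers_connection_mode("out-of-band")
        bridge.set_standalone_mode()
        # Fall back to NORMAL switching for traffic the pipeline does not
        # handle explicitly.
        bridge.add_flow(priority=0, actions="normal")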
Advanced Services
=================
Distributed Virtual Router
^^^^^^^^^^^^^^^^^^^^^^^^^^
The Dragonflow distributed virtual router is implemented using OpenFlow
flows.
This allowed us to eliminate the use of namespaces, which were both slow
(an additional IP stack) and hard to maintain (more OS-level artifacts).
Perhaps the most important part of the solution is the OpenFlow pipeline which
we install into the integration bridge upon bootstrap. This pipeline
controls all traffic in the OVS integration bridge `(br-int)` and
works in the following manner:
::

 1) Classify the traffic
 2) Forward to the appropriate element:
    1. If it is ARP, forward to the ARP Responder table
    2. If routing is required (L3), forward to the L3 Forwarding table
       (which implements a virtual router)
    3. All L2 traffic and local subnet traffic are offloaded to the NORMAL
       pipeline handled by ML2
    4. North/South traffic is forwarded to the network node (SNAT)
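For illustration only, a skeleton of this pipeline could be installed with
``add_flow()`` calls in the style of the agent code removed by this commit;
the table numbers, matches and the ``bridge``, ``router_mac`` and
``network_node_port`` parameters below are hypothetical::

    ARP_RESPONDER_TABLE = 10    # hypothetical table numbers
    L3_FORWARDING_TABLE = 20

    def install_router_pipeline(bridge, router_mac, network_node_port):
        # 1) Classify: ARP is sent to the ARP Responder table.
        bridge.add_flow(table=0, priority=100, dl_type="0x0806",
                        actions="resubmit(,%s)" % ARP_RESPONDER_TABLE)
        # 2) Traffic addressed to the virtual router's MAC needs routing and
        #    is sent to the L3 Forwarding table.
        bridge.add_flow(table=0, priority=50, dl_dst=router_mac,
                        actions="resubmit(,%s)" % L3_FORWARDING_TABLE)
        # 3) All other L2 and local subnet traffic is offloaded to the
        #    NORMAL pipeline handled by ML2.
        bridge.add_flow(table=0, priority=0, actions="normal")
        # 4) Unmatched North/South traffic in the L3 table defaults to the
        #    port facing the network node, where SNAT is performed.
        bridge.add_flow(table=L3_FORWARDING_TABLE, priority=0,
                        actions="output:%s" % network_node_port)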
The following diagram shows the multi-table OpenFlow pipeline installed into
the OVS integration bridge `(br-int)` in order to represent the virtual router
using flows only:
.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/df_of_pipeline.jpg
:alt: Pipeline
:width: 650
:align: center
A detailed blog post describing the solution can be found Here_.
.. _Here: http://blog.gampel.net/2015/01/neutron-dvr-sdn-way.html
Documentation
-------------
* `Solution Overview Presentation <http://www.slideshare.net/gampel/dragonflow-sdn-based-distributed-virtual-router-for-openstack-neutron>`_
* `Solution Overview Blog Post <http://blog.gampel.net/2015/01/neutron-dvr-sdn-way.html>`_
* `Deep-Dive Introduction 1 Blog Post <http://galsagie.github.io/sdn/openstack/ovs/dragonflow/2015/05/09/dragonflow-1/>`_
* `Deep-Dive Introduction 2 Blog Post <http://galsagie.github.io/sdn/openstack/ovs/dragonflow/2015/05/11/dragonflow-2/>`_
* `Kilo-Release Blog Post <http://blog.gampel.net/2015/01/dragonflow-sdn-based-distributed.html>`_
How to Install
--------------
`Installation Guide <https://github.com/openstack/dragonflow/tree/master/doc/source/centralized_readme.rst>`_
`DevStack Single Node Configuration <https://github.com/openstack/dragonflow/tree/master/doc/source/single-node-conf>`_
`DevStack Multi Node Configuration <https://github.com/openstack/dragonflow/tree/master/doc/source/multi-node-conf>`_
Prerequisites
-------------
Install DevStack with Neutron ML2 as the core plugin
Install OVS 2.3.1 or newer
Features
--------
* APIs for routing IPv4 East-West traffic
* Performance improvement for inter-subnet networking by reducing the number
  of kernel layers traversed (namespaces and their TCP stack overhead)
* Scalability improvement for inter-subnet networking by offloading L3
  East-West routing from the Network Node to all Compute Nodes
* Reliability improvement for inter-subnet networking by removing the
  Network Node from the East-West traffic path
* Simplified virtual routing management
* Support for all type drivers (GRE/VXLAN/VLAN)
* Support for a centralized shared public network (SNAT) based on the legacy
  L3 implementation
* Support for centralized floating IPs (DNAT) based on the legacy L3
  implementation
* Support for HA: if the connection to the controller is lost, fall back to
  the legacy L3 implementation until recovery. All of the legacy L3 HA code
  is reused. (Controller HA will be supported in the next release.)
* Support for centralized IPv6 based on the legacy L3 implementation
TODO
----
* Add support for North-South L3 IPv4 distribution (SNAT and DNAT)
* Add support for IPv6
* Add support for a multi-controller solution

A full description can be found in the project `Blueprints
<https://blueprints.launchpad.net/dragonflow>`_.


@@ -1,48 +0,0 @@
[[local|localrc]]
enable_plugin dragonflow https://github.com/openstack/dragonflow.git
### Compute node management IP
HOST_IP=10.100.100.15
#Credentials
ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack
#MULTINODE CONFIGURATION
FIXED_RANGE=10.0.0.0/24
FIXED_NETWORK_SIZE=256
DATABASE_TYPE=mysql
# The IPs below are the same as the controller management IP
SERVICE_HOST=10.100.100.4
MYSQL_HOST=10.100.100.4
RABBIT_HOST=10.100.100.4
GLANCE_HOSTPORT=10.100.100.4:9292
ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt,g-api,n-novnc,n-cauth
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=4001:5000)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=vxlan vxlan_udp_port=8472)
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True
#dragonflow settings
Q_ENABLE_DRAGONFLOW=True
Q_DF_CONTROLLER_IP=10.100.100.4
#
#Log Output
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/log
# Make VNC work on compute node
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN


@@ -1,52 +0,0 @@
[[local|localrc]]
enable_plugin dragonflow https://github.com/openstack/dragonflow.git
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
DEBUG=False
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
# Management IP of the node
HOST_IP=10.100.100.4
# Management interface id
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
# Public network range
FLOATING_RANGE=10.100.0.0/16
# Change according to public network range
Q_FLOATING_ALLOCATION_POOL=start=10.100.201.100,end=10.100.201.200
# Public network gateway IP address
PUBLIC_NETWORK_GATEWAY=10.100.0.1
ENABLE_DEBUG_LOG_LEVEL=true
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-cpu,n-api,n-crt,n-obj,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-df-l3,q-agt,q-dhcp,q-meta
ENABLED_SERVICES+=,cinder,g-api,g-reg
ENABLED_SERVICES+=,c-api,c-vol,c-sch,c-bak,horizon
# Neutron OVS (vxlan)
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=8001:9000)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=vxlan vxlan_udp_port=8972)
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True
Q_ENABLE_DRAGONFLOW=True
ML2_L3_PLUGIN=dragonflow.neutron.services.l3.l3_controller_plugin.ControllerL3ServicePlugin
DATABASE_PASSWORD=devstack
ADMIN_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack
RABBIT_PASSWORD=devstack
DATABASE_TYPE=mysql
DATABASE_USER=root


@@ -1,52 +0,0 @@
[[local|localrc]]
enable_plugin dragonflow https://github.com/openstack/dragonflow.git
LOGFILE=/opt/stack/logs/stack.sh.log
VERBOSE=True
DEBUG=False
LOG_COLOR=True
SCREEN_LOGDIR=/opt/stack/logs
# Management IP of the node
HOST_IP=10.100.100.4
# Management interface id
FLAT_INTERFACE=eth0
FIXED_RANGE=10.0.0.0/24
NETWORK_GATEWAY=10.0.0.1
FIXED_NETWORK_SIZE=256
# Public network range
FLOATING_RANGE=10.100.0.0/16
# Change according to public network range
Q_FLOATING_ALLOCATION_POOL=start=10.100.201.100,end=10.100.201.200
# Public network gateway IP address
PUBLIC_NETWORK_GATEWAY=10.100.0.1
ENABLE_DEBUG_LOG_LEVEL=true
ENABLED_SERVICES=rabbit,mysql,key
ENABLED_SERVICES+=,n-cpu,n-api,n-crt,n-obj,n-cond,n-sch,n-novnc,n-cauth
ENABLED_SERVICES+=,neutron,q-svc,q-df-l3,q-agt,q-dhcp,q-meta
ENABLED_SERVICES+=,cinder,g-api,g-reg
ENABLED_SERVICES+=,c-api,c-vol,c-sch,c-bak,horizon
# Neutron OVS (vxlan)
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan
Q_ML2_PLUGIN_VXLAN_TYPE_OPTIONS=(vni_ranges=8001:9000)
Q_AGENT_EXTRA_AGENT_OPTS=(tunnel_types=vxlan vxlan_udp_port=8972)
Q_USE_NAMESPACE=True
Q_USE_SECGROUP=True
Q_ENABLE_DRAGONFLOW=True
ML2_L3_PLUGIN=dragonflow.neutron.services.l3.l3_controller_plugin.ControllerL3ServicePlugin
DATABASE_PASSWORD=devstack
ADMIN_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack
RABBIT_PASSWORD=devstack
DATABASE_TYPE=mysql
DATABASE_USER=root

File diff suppressed because it is too large


@@ -1,312 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import signal
import sys
import threading
import eventlet
eventlet.monkey_patch()
from six import moves
from dragonflow.neutron.common import df_ovs_bridge
from oslo_config import cfg
from neutron.agent.common import config
from neutron.agent.linux import ip_lib
from neutron.common import config as common_config
from neutron.common import utils as q_utils
from neutron.i18n import _, _LE, _LI
from neutron.plugins.common import constants as p_const
from neutron.plugins.ml2.drivers.openvswitch.agent import (
ovs_neutron_agent as ona)
from neutron.plugins.ml2.drivers.openvswitch.agent.common import constants
from neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl import (
br_phys, br_tun)
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
agent_additional_opts = [
cfg.StrOpt('L3controller_ip_list',
default='tcp:localhost:6633',
help=("L3 Controler IP list list tcp:ip_addr:port;"
"tcp:ip_addr:port..;..")),
cfg.BoolOpt('enable_l3_controller', default=True,
help=_("L3 SDN Controller")),
cfg.IntOpt('tunnel_map_check_rate', default=5,
help=_("Rate in multiple of the rpc loop")),
]
cfg.CONF.register_opts(agent_additional_opts, "AGENT")
class L2OVSControllerAgent(ona.OVSNeutronAgent):
def __init__(self, bridge_classes, integ_br, tun_br, local_ip,
bridge_mappings, polling_interval, tunnel_types=None,
veth_mtu=None, l2_population=False,
enable_distributed_routing=False,
minimize_polling=False,
ovsdb_monitor_respawn_interval=(
constants.DEFAULT_OVSDBMON_RESPAWN),
arp_responder=False,
prevent_arp_spoofing=False,
use_veth_interconnection=False,
quitting_rpc_timeout=None):
if prevent_arp_spoofing:
LOG.error(_LE("ARP Spoofing prevention is not"
" yet supported in Dragonflow feature disabled"))
prevent_arp_spoofing = False
'''
Sync lock for the race condition between set_controller and
check_ovs_status: when setting the controller, all the flow tables
are deleted before the CANARY_TABLE is set up again.
'''
self.set_controller_lock = threading.Lock()
self.enable_l3_controller = cfg.CONF.AGENT.enable_l3_controller
self.tunnel_map_check_rate = cfg.CONF.AGENT.tunnel_map_check_rate
super(L2OVSControllerAgent, self) \
.__init__(bridge_classes,
integ_br,
tun_br, local_ip,
bridge_mappings,
polling_interval,
tunnel_types,
veth_mtu, l2_population,
enable_distributed_routing,
minimize_polling,
ovsdb_monitor_respawn_interval,
arp_responder,
prevent_arp_spoofing,
use_veth_interconnection,
quitting_rpc_timeout)
# Initialize controller
self.df_available_local_vlans = set(moves.range(p_const.MIN_VLAN_TAG,
p_const.MAX_VLAN_TAG))
self.df_local_to_vlan_map = {}
self.controllers_ip_list = cfg.CONF.AGENT.L3controller_ip_list
self.set_controller_for_br(self.int_br, self.controllers_ip_list)
def set_controller_for_br(self, bridge, ip_address_list):
'''Set the OpenFlow controller on the bridge.
:param bridge: the bridge object.
:param ip_address_list: tcp:ip_address:port;tcp:ip_address2:port
'''
if not self.enable_l3_controller:
LOG.info(_LI("Controller Base l3 is disabled on Agent"))
return
ip_address_ = ip_address_list.split(";")
LOG.debug("Set Controllers on br %s to %s", bridge.br_name,
ip_address_)
with self.set_controller_lock:
bridge.del_controller()
bridge.set_controller(ip_address_)
bridge.set_controllers_connection_mode("out-of-band")
bridge.set_standalone_mode()
bridge.add_flow(priority=0, actions="normal")
bridge.add_flow(table=constants.CANARY_TABLE,
priority=0,
actions="drop")
# add the normal flow higher priority than the drop
for br in self.phys_brs.values():
br.add_flow(priority=3, actions="normal")
# add the vlan flows
cur_ports = self.int_br.get_vif_ports()
# use to initialize once each local vlan
l_vlan_map = set()
for port in cur_ports:
local_vlan_map = self.int_br.db_get_val("Port", port.port_name,
"other_config")
local_vlan = self.int_br.db_get_val("Port", port.port_name,
"tag")
net_uuid = local_vlan_map.get('net_uuid')
if (net_uuid and local_vlan != ona.DEAD_VLAN_TAG and
net_uuid not in l_vlan_map):
l_vlan_map.add(net_uuid)
self.provision_local_vlan2(
local_vlan_map['net_uuid'],
local_vlan_map['network_type'],
local_vlan_map['physical_network'],
local_vlan_map['segmentation_id'])
def check_tunnel_map_table(self):
if p_const.TYPE_VLAN in self.tunnel_types:
# TODO(gampel) check for the vlan flows here
return
if not self.df_local_to_vlan_map:
return
tunnel_flows = self.int_br.dump_flows(
df_ovs_bridge.TUN_TRANSLATE_TABLE)
for tunnel_ip in self.df_local_to_vlan_map:
vlan_action = "mod_vlan_vid:%d" % (
self.df_local_to_vlan_map[tunnel_ip])
if vlan_action not in tunnel_flows:
self.tunnel_sync()
def check_ovs_status(self):
if not self.enable_l3_controller:
return super(L2OVSControllerAgent, self).check_ovs_status()
# Check for the canary flow
# Add lock to avoid race condition of flows
with self.set_controller_lock:
ret = super(L2OVSControllerAgent, self).check_ovs_status()
if not self.iter_num % self.tunnel_map_check_rate:
self.check_tunnel_map_table()
return ret
def _claim_df_tunnel_local_vlan(self, tunnel_ip_hex):
lvid = None
if tunnel_ip_hex in self.df_local_to_vlan_map:
lvid = self.df_local_to_vlan_map[tunnel_ip_hex]
else:
lvid = self.df_available_local_vlans.pop()
self.df_local_to_vlan_map[tunnel_ip_hex] = lvid
return lvid
def _release_df_tunnel_local_vlan(self, tunnel_ip_hex):
lvid = self.df_local_to_vlan_map.pop(tunnel_ip_hex, None)
self.df_available_local_vlans.add(lvid)
def cleanup_tunnel_port(self, br, tun_ofport, tunnel_type):
items = list(self.tun_br_ofports[tunnel_type].items())
for remote_ip, ofport in items:
if ofport == tun_ofport:
tunnel_ip_hex = "0x%s" % self.get_ip_in_hex(remote_ip)
lvid = self.df_local_to_vlan_map[tunnel_ip_hex]
self.int_br.delete_flows(
table=df_ovs_bridge.TUN_TRANSLATE_TABLE,
reg7=tunnel_ip_hex)
br.delete_flows(
table=constants.UCAST_TO_TUN,
dl_vlan=lvid)
self._release_df_tunnel_local_vlan(tunnel_ip_hex)
return super(L2OVSControllerAgent, self).cleanup_tunnel_port(
br,
tun_ofport,
tunnel_type)
def _setup_tunnel_port(self, br, port_name, remote_ip, tunnel_type):
ofport = super(L2OVSControllerAgent, self) \
._setup_tunnel_port(
br,
port_name,
remote_ip,
tunnel_type)
if p_const.TYPE_VLAN not in self.tunnel_types:
tunnel_ip_hex = "0x%s" % self.get_ip_in_hex(remote_ip)
lvid = self._claim_df_tunnel_local_vlan(tunnel_ip_hex)
self.int_br.add_flow(
table=df_ovs_bridge.TUN_TRANSLATE_TABLE,
priority=2000,
reg7=tunnel_ip_hex,
actions="mod_vlan_vid:%s,"
"load:0->NXM_NX_REG7[0..31],"
"resubmit(,%s)" %
(lvid, df_ovs_bridge.TUN_TRANSLATE_TABLE))
br.add_flow(table=constants.UCAST_TO_TUN,
priority=100,
dl_vlan=lvid,
pkt_mark="0x80000000/0x80000000",
actions="strip_vlan,move:NXM_NX_PKT_MARK[0..30]"
"->NXM_NX_TUN_ID[0..30],"
"output:%s" %
(ofport))
if ofport > 0:
ofports = (br_tun.OVSTunnelBridge._ofport_set_to_str
(self.tun_br_ofports[tunnel_type].values()))
if self.enable_l3_controller:
if ofports:
br.add_flow(table=constants.FLOOD_TO_TUN,
actions="move:NXM_NX_PKT_MARK[0..30]"
"->NXM_NX_TUN_ID[0..30],"
"output:%s" %
(ofports))
return ofport
def provision_local_vlan2(self, net_uuid, network_type, physical_network,
segmentation_id):
if network_type == p_const.TYPE_VLAN:
if physical_network in self.phys_brs:
#outbound
# The global vlan id is set in table 60
# from segmentation id/tun id
self.int_br.add_flow(table=df_ovs_bridge.TUN_TRANSLATE_TABLE,
priority=1,
actions="move:NXM_NX_TUN_ID[0..11]"
"->OXM_OF_VLAN_VID[],"
"output:%s" %
(self.int_ofports[physical_network]))
lvid = self.local_vlan_map.get(net_uuid).vlan
# inbound
self.int_br.add_flow(priority=1000,
in_port=self.
int_ofports[physical_network],
dl_vlan=segmentation_id,
actions="mod_vlan_vid:%s,normal" % lvid)
else:
LOG.error(_LE("Cannot provision VLAN network for "
"net-id=%(net_uuid)s - no bridge for "
"physical_network %(physical_network)s"),
{'net_uuid': net_uuid,
'physical_network': physical_network})
def main():
cfg.CONF.register_opts(ip_lib.OPTS)
config.register_root_helper(cfg.CONF)
common_config.init(sys.argv[1:])
common_config.setup_logging()
q_utils.log_opt_values(LOG)
bridge_classes = {
'br_int': df_ovs_bridge.DFOVSAgentBridge,
'br_phys': br_phys.OVSPhysicalBridge,
'br_tun': br_tun.OVSTunnelBridge
}
try:
agent_config = ona.create_agent_config_map(cfg.CONF)
except ValueError as e:
LOG.error(_LE('%s Agent terminated!'), e)
sys.exit(1)
is_xen_compute_host = 'rootwrap-xen-dom0' in cfg.CONF.AGENT.root_helper
if is_xen_compute_host:
# Force ip_lib to always use the root helper to ensure that ip
# commands target xen dom0 rather than domU.
cfg.CONF.set_default('ip_lib_force_root', True)
agent = L2OVSControllerAgent(bridge_classes, **agent_config)
signal.signal(signal.SIGTERM, agent._handle_sigterm)
# Start everything.
LOG.info(_LI("Agent initialized successfully, now running... "))
agent.daemon_loop()
if __name__ == "__main__":
main()


@@ -1,88 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from neutron.agent.l3 import legacy_router
from neutron.i18n import _LE
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class DfDvrRouter(legacy_router.LegacyRouter):
def __init__(self, agent, host, controller, *args, **kwargs):
super(DfDvrRouter, self).__init__(*args, **kwargs)
self.agent = agent
self.host = host
self.controller = controller
def _add_snat_binding_to_controller(self, sn_port):
if sn_port is None:
return
LOG.debug("_add_snat_binding_to_controller")
LOG.debug("subnet = %s" % sn_port['fixed_ips'][0]['subnet_id'])
LOG.debug("ip = %s" % sn_port['fixed_ips'][0]['ip_address'])
LOG.debug("mac = %s" % sn_port['mac_address'])
self.controller.add_snat_binding(
sn_port['fixed_ips'][0]['subnet_id'], sn_port)
def _remove_snat_binding_to_controller(self, sn_port):
if sn_port is None:
LOG.error(_LE("None sn_port"))
return
LOG.debug("_remove_snat_binding_to_controller")
LOG.debug("subnet = %s" % sn_port['fixed_ips'][0]['subnet_id'])
LOG.debug("ip = %s" % sn_port['fixed_ips'][0]['ip_address'])
LOG.debug("mac = %s" % sn_port['mac_address'])
self.controller.remove_snat_binding(
sn_port['fixed_ips'][0]['subnet_id'])
def internal_network_added(self, port):
super(DfDvrRouter, self).internal_network_added(port)
if self.router.get('enable_snat'):
self._add_snat_binding_to_controller(port)
def internal_network_removed(self, port):
super(DfDvrRouter, self).internal_network_removed(port)
if self.router.get('enable_snat'):
self._remove_snat_binding_to_controller(port)
def external_gateway_added(self, ex_gw_port, interface_name):
super(DfDvrRouter, self).external_gateway_added(
ex_gw_port, interface_name)
for p in self.internal_ports:
self._add_snat_binding_to_controller(p)
def external_gateway_updated(self, ex_gw_port, interface_name):
super(DfDvrRouter, self).external_gateway_updated(
ex_gw_port, interface_name)
def external_gateway_removed(self, ex_gw_port, interface_name):
super(DfDvrRouter, self).external_gateway_removed(
ex_gw_port, interface_name)
for p in self.internal_ports:
self._remove_snat_binding_to_controller(p)
def routes_updated(self):
pass


@@ -1,173 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from oslo_config import cfg
from oslo_service import loopingcall
from dragonflow.controller import openflow_controller as of_controller
from dragonflow.neutron.agent.l3 import df_dvr_router
from neutron.agent.l3 import agent
from neutron.agent.l3 import namespaces
from neutron.agent import rpc as agent_rpc
from neutron.common import constants as l3_constants
from neutron.common import topics
from neutron.i18n import _LE, _LI
from oslo_log import log as logging
EXTERNAL_DEV_PREFIX = namespaces.EXTERNAL_DEV_PREFIX
LOG = logging.getLogger(__name__)
NET_CONTROL_L3_OPTS = [
cfg.StrOpt('net_controller_l3_southbound_protocol',
default='OpenFlow',
help=("Southbound protocol to connect the forwarding"
"element Currently supports only OpenFlow")),
cfg.IntOpt('subnet_flows_idle_timeout',
default=300,
help=("The L3 VM to VM traffic (between networks) flows are "
"configured with this idle timeout (in seconds), "
"value of 0 means no timeout")),
cfg.IntOpt('subnet_flows_hard_timeout',
default=0,
help=("The L3 VM to VM traffic (between networks) flows are "
"configured with this hard timeout (in seconds), "
"value of 0 means no timeout"))
]
cfg.CONF.register_opts(NET_CONTROL_L3_OPTS)
class L3ControllerAgent(agent.L3NATAgent):
def __init__(self, host, conf=None):
super(L3ControllerAgent, self).__init__(host, conf)
self.use_ipv6 = False
if cfg.CONF.net_controller_l3_southbound_protocol == "OpenFlow":
# Open Flow Controller
LOG.info(_LI("Using Southbound OpenFlow Protocol "))
self.controller = of_controller.OpenFlowController(cfg, "openflow")
elif cfg.CONF.net_controller_l3_southbound_protocol == "OVSDB":
LOG.error(_LE("Southbound OVSDB Protocol not implemented yet"))
elif cfg.CONF.net_controller_l3_southbound_protocol == "OP-FLEX":
LOG.error(_LE("Southbound OP-FLEX Protocol not implemented yet"))
# Initialize the controller application
self.controller.initialize()
# Sync all ports data from neutron to the L3 Agent
self.sync_ports_on_startup()
# Start the controller application
self.controller.start()
def sync_ports_on_startup(self):
try:
routers = self.plugin_rpc.get_routers(self.context)
except Exception:
LOG.error(_LE("Failed synchronizing routers due to RPC error"))
return
for router in routers:
for interface in router.get('_interfaces', []):
for subnet in interface['subnets']:
self.sync_subnet_port_data(subnet['id'])
def _create_router(self, router_id, router):
args = []
kwargs = {
'router_id': router_id,
'router': router,
'use_ipv6': self.use_ipv6,
'agent_conf': self.conf,
'interface_driver': self.driver,
'controller': self.controller,
'host': self.host,
'agent': self,
}
return df_dvr_router.DfDvrRouter(*args, **kwargs)
def _safe_router_removed(self, router_id):
"""delete a router from the controller & call and return base class"""
self.controller.delete_router(router_id)
return super(L3ControllerAgent, self)._safe_router_removed(router_id)
def _process_router_if_compatible(self, router):
self.controller.sync_router(router)
for interface in router.get('_interfaces', ()):
for subnet_info in interface['subnets']:
self.sync_subnet_port_data(subnet_info['id'])
super(L3ControllerAgent, self)._process_router_if_compatible(router)
def sync_subnet_port_data(self, subnet_id):
ports_data = self.plugin_rpc.get_ports_by_subnet(self.context,
subnet_id)
router_ports = []
if ports_data:
for port in ports_data:
seg_id = port.get('segmentation_id')
if (seg_id is None) or (seg_id == 0):
router_ports.append(port)
self.controller.sync_port(port)
if (seg_id is not None) and (seg_id != 0):
for router_port in router_ports:
router_port['segmentation_id'] = seg_id
self.controller.sync_port(router_port)
def add_arp_entry(self, context, payload):
"""Add arp entry into router namespace. Called from RPC."""
port = payload['arp_table']
self.controller.sync_port(port)
def del_arp_entry(self, context, payload):
"""Delete arp entry from router namespace. Called from RPC."""
port = payload['arp_table']
self.controller.delete_port(port)
class L3ControllerAgentWithStateReport(L3ControllerAgent,
agent.L3NATAgentWithStateReport):
def __init__(self, host, conf=None):
super(L3ControllerAgentWithStateReport, self).__init__(host=host,
conf=conf)
self.state_rpc = agent_rpc.PluginReportStateAPI(topics.PLUGIN)
self.agent_state = {
'binary': 'neutron-l3-controller-agent',
'host': host,
'topic': topics.L3_AGENT,
'configurations': {
'agent_mode': 'legacy',
'use_namespaces': self.conf.use_namespaces,
'router_id': self.conf.router_id,
'handle_internal_only_routers':
self.conf.handle_internal_only_routers,
'external_network_bridge': self.conf.external_network_bridge,
'gateway_external_network_id':
self.conf.gateway_external_network_id,
'interface_driver': self.conf.interface_driver},
'start_flag': True,
'agent_type': l3_constants.AGENT_TYPE_L3}
report_interval = self.conf.AGENT.report_interval
self.use_call = True
if report_interval:
self.heartbeat = loopingcall.FixedIntervalLoopingCall(
self._report_state)
self.heartbeat.start(interval=report_interval)


@@ -1,44 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
import sys
eventlet.monkey_patch()
from oslo_config import cfg
from oslo_service import service
from neutron.agent.common import config
from neutron.agent import l3_agent
from neutron.common import config as common_config
from neutron.common import topics
from neutron import service as neutron_service
def main(manager='dragonflow.neutron.agent.l3.l3_controller_agent.'
'L3ControllerAgentWithStateReport'):
l3_agent.register_opts(cfg.CONF)
common_config.init(sys.argv[1:])
config.setup_logging()
cfg.CONF.set_override('router_delete_namespaces', True)
server = neutron_service.Service.create(
binary='neutron-l3-controller-agent',
topic=topics.L3_AGENT,
report_interval=cfg.CONF.AGENT.report_interval,
manager=manager)
service.launch(cfg.CONF, server).wait()
if __name__ == "__main__":
main()


@@ -1,333 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_utils import importutils
from neutron import context as neutron_context
from neutron import manager
from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api
from neutron.api.rpc.handlers import l3_rpc
from neutron.callbacks import events
from neutron.callbacks import registry
from neutron.callbacks import resources
from neutron.common import constants as q_const
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.db import l3_hamode_db
from neutron.i18n import _LE, _LI, _LW
from neutron.plugins.common import constants
from neutron.plugins.ml2 import driver_api as api
from neutron.db import common_db_mixin
from neutron.db import l3_gwmode_db
from neutron.db import l3_hascheduler_db
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
NET_CONTROL_L3_OPTS = [
cfg.StrOpt('net_controller_l3_southbound_protocol',
default='OpenFlow',
help=("Southbound protocol to connect the forwarding"
"element Currently supports only OpenFlow"))
]
cfg.CONF.register_opts(NET_CONTROL_L3_OPTS)
def _notify_l3_agent_new_port(resource, event, trigger, **kwargs):
LOG.debug('Received %s %s', resource, event)
port = kwargs.get('port')
if port is None:
return
l3plugin = manager.NeutronManager.get_service_plugins().get(
constants.L3_ROUTER_NAT)
mac_address_updated = kwargs.get('mac_address_updated')
update_device_up = kwargs.get('update_device_up')
context = kwargs.get('context')
if context is None:
LOG.warning(_LW(
'Received %(resource)s %(event)s without context [%(port)s]'),
{'resource': resource, 'event': event, 'port': port}
)
return
if mac_address_updated or update_device_up:
l3plugin.add_port(context, port)
def _notify_l3_agent_delete_port(event, resource, trigger, **kwargs):
context = kwargs['context']
port = kwargs['port']
removed_routers = kwargs['removed_routers']
l3plugin = manager.NeutronManager.get_service_plugins().get(
constants.L3_ROUTER_NAT)
l3plugin.remove_port(context, port)
if port['device_owner'] in q_const.ROUTER_INTERFACE_OWNERS:
l3plugin.delete_router_interface(context, port)
for router in removed_routers:
l3plugin.remove_router_from_l3_agent(
context, router['agent_id'], router['router_id'])
def subscribe():
registry.subscribe(
_notify_l3_agent_new_port, resources.PORT, events.AFTER_UPDATE)
registry.subscribe(
_notify_l3_agent_new_port, resources.PORT, events.AFTER_CREATE)
registry.subscribe(
_notify_l3_agent_delete_port, resources.PORT, events.AFTER_DELETE)
def is_vm_port_with_ip_addresses(port_dict):
is_vm_port = "compute:" in port_dict['device_owner']
has_ip_addresses = len(port_dict['fixed_ips']) > 0
return is_vm_port and has_ip_addresses
class ControllerL3ServicePlugin(common_db_mixin.CommonDbMixin,
l3_hamode_db.L3_HA_NAT_db_mixin,
l3_gwmode_db.L3_NAT_db_mixin,
l3_hascheduler_db.L3_HA_scheduler_db_mixin,
l3_rpc.L3RpcCallback):
RPC_API_VERSION = '1.2'
supported_extension_aliases = ["router", "ext-gw-mode",
"l3_agent_scheduler"]
def __init__(self):
self.setup_rpc()
self.router_scheduler = importutils.import_object(
cfg.CONF.router_scheduler_driver)
#self.start_periodic_agent_status_check()
self.ctx = neutron_context.get_admin_context()
cfg.CONF.router_auto_schedule = True
if cfg.CONF.net_controller_l3_southbound_protocol == "OpenFlow":
# Open Flow Controller
LOG.info(_LI("Using Southbound OpenFlow Protocol "))
elif cfg.CONF.net_controller_l3_southbound_protocol == "OVSDB":
LOG.error(_LE("Southbound OVSDB Protocol not implemented yet"))
elif cfg.CONF.net_controller_l3_southbound_protocol == "OP-FLEX":
LOG.error(_LE("Southbound OP-FLEX Protocol not implemented yet"))
super(ControllerL3ServicePlugin, self).__init__()
subscribe()
def setup_rpc(self):
# RPC support
self.topic = topics.L3PLUGIN
self.conn = n_rpc.create_connection(new=True)
self.agent_notifiers.update(
{q_const.AGENT_TYPE_L3: l3_rpc_agent_api.L3AgentNotifyAPI()})
self.endpoints = [self]
self.conn.create_consumer(self.topic, self.endpoints,
fanout=True)
self.conn.consume_in_threads()
def get_plugin_type(self):
return constants.L3_ROUTER_NAT
def get_plugin_description(self):
"""Returns string description of the plugin."""
return "L3 SDN Controller For Neutron"
def add_port(self, context, port_dict):
if is_vm_port_with_ip_addresses(port_dict):
self.add_vm_port(context, port_dict)
def add_vm_port(self, context, port_dict):
notify_port = self._core_plugin.get_port(context,
port_dict['id'])
notify_port['subnets'] = [
self._core_plugin.get_subnet(context, fixed_ip['subnet_id'])
for fixed_ip in notify_port['fixed_ips']
]
router_id = 0
if (notify_port['device_owner'] in
q_const.ROUTER_INTERFACE_OWNERS):
router_id = notify_port['device_id']
segmentation_id = self._get_segmentation_id(context, notify_port)
self._send_new_port_notify(context,
notify_port,
"add",
router_id,
segmentation_id)
def remove_port(self, context, port_dict):
if is_vm_port_with_ip_addresses(port_dict):
self.remove_vm_port(context, port_dict)
def remove_vm_port(self, context, port_dict):
port_dict['subnets'] = [
self._core_plugin.get_subnet(context, fixed_ip['subnet_id'])
for fixed_ip in port_dict['fixed_ips']
]
self._send_new_port_notify(context,
port_dict,
"del",
0,
0)
def _get_segmentation_id(self, context, port):
port_data = self.get_ml2_port_bond_data(context,
port['id'],
port['binding:host_id'])
if port_data is None:
return 0
return port_data.get('segmentation_id', 0)
def remove_router_from_l3_agent(self, context, agent_id, router_id):
self.l3_rpc_notifier.router_deleted(context, router_id)
def delete_router_interface(self, context, notify_port):
self.l3_rpc_notifier.routers_updated(
context,
router_ids=[notify_port['device_id']],
operation="del_interface",
data={'port': notify_port},
)
def _send_new_port_notify(self, context, notify_port, action, router_id,
segmentation_id):
notify_port['segmentation_id'] = segmentation_id
if action == "add":
notify_action = self._add_arp_entry
elif action == "del":
notify_action = self._del_arp_entry
notify_action(context, router_id, notify_port)
return
def _add_arp_entry(self, context, router_id, arp_table, operation=None):
if router_id:
self.l3_rpc_notifier.add_arp_entry(context,
router_id,
arp_table,
operation)
else:
self._agent_notification_arp(context, 'add_arp_entry', arp_table)
def _del_arp_entry(self, context, router_id, arp_table, operation=None):
if router_id:
self.l3_rpc_notifier.del_arp_entry(context,
router_id,
arp_table,
operation)
else:
self._agent_notification_arp(context, 'del_arp_entry', arp_table)
def _agent_notification_arp(self, context, method, data):
"""Notify arp details to all l3 agents.
This is an expansion of a core OpenStack function, used so that we
can get VM port events even if there are no routers.
"""
admin_context = (context.is_admin and
context or context.elevated())
plugin = manager.NeutronManager.get_service_plugins().get(
constants.L3_ROUTER_NAT)
l3_agents = plugin.get_l3_agents(admin_context)
for l3_agent in l3_agents:
log_topic = '%s.%s' % (l3_agent.topic, l3_agent.host)
LOG.debug('Casting message %(method)s with topic %(topic)s',
{'topic': log_topic, 'method': method})
dvr_arptable = {'router_id': 0,
'arp_table': data}
cctxt = self.l3_rpc_notifier.client.prepare(
topic=l3_agent.topic,
server=l3_agent.host,
version='1.2')
cctxt.cast(context, method, payload=dvr_arptable)
def get_ports_by_subnet(self, context, **kwargs):
result = super(ControllerL3ServicePlugin, self).get_ports_by_subnet(
context,
**kwargs)
if result:
for port in result:
port_data = self.get_ml2_port_bond_data(context, port['id'],
port['binding:host_id'])
segmentation_id = 0
if "segmentation_id" in port_data:
segmentation_id = port_data['segmentation_id']
port['segmentation_id'] = segmentation_id
port['subnets'] = [
self._core_plugin.get_subnet(
context, fixed_ip['subnet_id'])
for fixed_ip in port['fixed_ips']
]
return result
def get_ml2_port_bond_data(self, ctx, port_id, device_id):
core_plugin = manager.NeutronManager.get_plugin()
port_context = core_plugin.get_bound_port_context(
ctx, port_id, device_id)
if not port_context:
LOG.warning(_LW("Device %(device)s requested by agent "
"%(agent_id)s not found in database"),
{'device': device_id, 'agent_id': port_id})
return None
port = port_context.current
try:
segment = port_context.network.network_segments[0]
except (KeyError, IndexError):
segment = None
if not segment:
LOG.warning(_LW("Device %(device)s requested by agent "
"on network %(network_id)s is not bound, "
"no segment data available"),
{'device': device_id,
'network_id': port['network_id']})
return {}
entry = {'device': device_id,
'network_id': port['network_id'],
'port_id': port_id,
'mac_address': port['mac_address'],
'admin_state_up': port['admin_state_up'],
'network_type': segment[api.NETWORK_TYPE],
'segmentation_id': segment[api.SEGMENTATION_ID],
'physical_network': segment[api.PHYSICAL_NETWORK],
'fixed_ips': port['fixed_ips'],
'device_owner': port['device_owner']}
LOG.debug(("Returning: %s"), entry)
return entry
def auto_schedule_routers(self, context, host, router_ids):
l3_agent = self.get_enabled_agent_on_host(
context, q_const.AGENT_TYPE_L3, host)
if not l3_agent:
return False
if self.router_scheduler:
unscheduled_rs = self.router_scheduler._get_routers_to_schedule(
context,
self,
router_ids)
self.router_scheduler._bind_routers(context, self,
unscheduled_rs,
l3_agent)
return


@@ -47,6 +47,4 @@ output_file = dragonflow/locale/dragonflow.pot
neutron.ml2.mechanism_drivers =
df = dragonflow.neutron.ml2.mech_driver:DFMechDriver
console_scripts =
neutron-l3-controller-agent = dragonflow.neutron.agent.l3_sdn_agent:main
neutron-l2-controller-agent = dragonflow.neutron.agent.l2.ovs_dragonflow_neutron_agent:main
df-db = dragonflow.db.df_db:main