Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README noting where to find
ongoing work and how to recover the repo if needed at some
future point (as described in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I5c25db1bd49dc9d3e9c4791ba4d2a54b629bd3ce
Tony Breeds 2017-09-12 15:42:29 -06:00
parent e5ddfa1b68
commit 98ecf2f8ff
169 changed files with 14 additions and 18952 deletions
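A rough sketch of the recovery procedure referenced above (the repository
URL is an assumption; substitute the actual retired repo):

    # Check out the state just before this retirement commit
    # (HEAD^1 is its parent, e5ddfa1b68).
    git clone https://git.openstack.org/openstack/deb-networking-l2gw
    cd deb-networking-l2gw
    git checkout HEAD^1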

.coveragerc
@@ -1,7 +0,0 @@
[run]
branch = True
source = networking_l2gw
omit = networking_l2gw/tests/*,networking_l2gw/openstack/*
[report]
ignore_errors = True

.gitignore
@@ -1,53 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?

.gitreview
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/networking-l2gw.git

.testr.conf
@@ -1,8 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./networking_l2gw/tests/unit} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
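The .testr.conf above wires up the test runner; a minimal sketch of
exercising it directly with testrepository (assuming the test
dependencies are installed):

    pip install testrepository
    testr init   # creates the .testrepository store
    testr run    # runs test_command, discovering tests under networking_l2gw/tests/unit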

CONTRIBUTING.rst
@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/networking-l2gw
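In practice the Gerrit workflow above is driven by git-review; a minimal
sketch (the branch name is illustrative):

    pip install git-review
    git checkout -b my-change     # illustrative topic branch
    # ... edit, git add, git commit ...
    git review                    # submits the change to review.openstack.org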

HACKING.rst
@@ -1,4 +0,0 @@
networking-l2gw Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

MANIFEST.in
@@ -1,10 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
include networking_l2gw/db/migration/alembic_migrations/README
include networking_l2gw/db/migration/alembic_migrations/script.py.mako
recursive-include networking_l2gw/db/migration/alembic_migrations/versions *
global-exclude *.pyc

README
@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

README.rst
@@ -1,40 +0,0 @@
===============
networking-l2gw
===============
APIs and implementations to support L2 Gateways in Neutron.
* Free software: Apache license
* Source: http://git.openstack.org/cgit/openstack/networking-l2gw
L2 Gateways
-----------
This project proposes a Neutron API extension that can be used to express
and manage L2 Gateway components. In the simplest terms, L2 Gateways are meant
to bridge two or more networks together to make them look like a single L2
broadcast domain.
Initial implementation
----------------------
There are a number of use cases that can be addressed by an L2 Gateway API.
Most notably, in cloud computing environments, a typical use case is bridging
the virtual with the physical. Translated to Neutron and the OpenStack
world, this means relying on L2 Gateway capabilities to extend Neutron
logical (overlay) networks into physical (provider) networks that are outside
the OpenStack realm. These networks can be, for instance, VLANs that may or
may not be managed by OpenStack.
More information
----------------
For help using or hacking on L2GW, you can send an email to the
`OpenStack Development Mailing List <mailto:openstack-dev@lists.openstack.org>`_;
please use the [L2-Gateway] tag in the subject. Most folks involved hang out on
the IRC channel #openstack-neutron.
Getting started
---------------
* TODO

babel.cfg
@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,58 +0,0 @@
Debian packaging, installation, and configuration of the
neutron-l2gateway plugin.
Prior requirements
The script install_and_config_l2gateway_plugin.sh runs on the OpenStack controller.
It creates and installs the Debian package of networking-l2gw
and enables the neutron-l2gateway service plugin.
Creating the Debian package requires copyright, changelog, control, compat,
and rules files inside the debian folder.
The debian folder must be placed inside the folder to be packaged (networking-l2gw).
The command dpkg-buildpackage -b builds the Debian package of networking-l2gw,
using the files in the debian folder.
Please refer to https://www.debian.org/doc/manuals/maint-guide/dreq.en.html
for further details.
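A condensed sketch of the manual steps the script automates (the package
name and version follow the script defaults; -us -uc, which skip signing,
are an assumption):

    cd networking-l2gw
    dpkg-buildpackage -b -us -uc
    sudo dpkg -i ../networking-l2gw_1.0_all.deb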
Installation procedure example:
The script will prompt for packaging and installation details as shown below.
Press ENTER to accept the default values for the debian/changelog and debian/control files.
#info for debian/changelog file
enter package name for debian/changelog
networking-l2gw
enter package version for debian/changelog
1.0
#info for debian/control file
enter the networking-l2gw source name
networking-l2gw
enter the networking-l2gw package name
networking-l2gw
enter the version number
1.0
enter the maintainer info
user@hp.com
enter the architecture
all
enter the description title
l2gateway package
enter the description details
description details of l2gateway package
#info of neutron.conf file path
Press ENTER to accept the default path /etc/neutron/neutron.conf for the neutron.conf file.
enter neutron.conf file path
/etc/neutron/neutron.conf
After running install_and_config_l2gateway_plugin.sh,
check the neutron-server status:
sudo service neutron-server status
neutron-server start/running, process 17876
Also check service_plugins in the neutron.conf file to confirm that
networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin is enabled.

@@ -1,59 +0,0 @@
Debian packaging and installation of neutron-l2gateway-agent.
Prior requirements
The script install_l2gateway_agent.sh runs on nodes where neutron is installed
and configured (controller, compute, and network nodes).
It creates and installs the Debian package of networking-l2gw
and starts neutron-l2gateway-agent.
Creating the Debian package requires copyright, changelog, control, compat,
and rules files inside the debian folder.
The debian folder must be placed inside the folder to be packaged (networking-l2gw).
The command dpkg-buildpackage -b builds the Debian package of networking-l2gw,
using the files in the debian folder.
Please refer to https://www.debian.org/doc/manuals/maint-guide/dreq.en.html
for further details.
Installation procedure example:
The script will prompt for packaging and installation details as shown below.
Press ENTER to accept the default values for the debian/changelog and debian/control files.
#info for debian/changelog file
enter package name for debian/changelog
networking-l2gw
enter package version for debian/changelog
1.0
#info for debian/control file
enter the networking-l2gw source name
networking-l2gw
enter the networking-l2gw package name
networking-l2gw
enter the version number
1.0
enter the maintainer info
user@hp.com
enter the architecture
all
enter the description title
l2gateway package
enter the description details
description details of l2gateway package
#info for neutron-l2gateway-agent.conf file
enter the networking-l2gw binary path
/usr/bin/neutron-l2gateway-agent
enter the neutron config file path
/etc/neutron/neutron.conf
enter the l2gateway agent config file path
/usr/etc/neutron/l2gateway_agent.ini
enter the l2gateway log file path
/var/log/neutron/l2gateway-agent.log
After running install_l2gateway_agent.sh, check the neutron-l2gateway-agent status:
sudo service neutron-l2gateway-agent status
neutron-l2gateway-agent start/running, process 15276

contrib/install_and_config_l2gateway_plugin.sh
@@ -1,100 +0,0 @@
#!/bin/bash
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License
if [ $(id -u -r) -ne 0 ]
then
echo "Requires root privileges. Please re-run using sudo."
exit 1
fi
apt-get update -y
apt-get install devscripts -y
apt-get install debhelper -y
apt-get install dh-make -y
#Read the package name and version; if not provided, use default values, and
#write them to the debian/changelog file.
cd ..
if [ -f "debian/changelog" ]
then
echo info for debian/changelog file
echo enter package name for debian/changelog
read pck
sed -i 's/PACKAGE/'${pck:-networking-l2gw}'/' debian/changelog
echo enter package version for debian/changelog
read pck_ver
sed -i 's/VERSION/'${pck_ver:-1.0}'/' debian/changelog
fi
#control file contains various values which dpkg, dselect, apt-get, apt-cache, aptitude,
#and other package management tools will use to manage the package.
#It is defined by the Debian Policy Manual, 5 "Control files and their fields".
if [ -f "debian/control" ]
then
echo info for debian/control file
echo enter the networking-l2gw source name
read src_name
echo enter the networking-l2gw package name
read pck_name
echo enter the version number
read ver
echo enter the maintainer info
read maintainer_info
echo enter the architecture
read architecture
echo enter the description title
read description
echo enter the description details
read description_details
sed -i 's/source/'${src_name:-networking-l2gw}'/' debian/control
sed -i 's/package/'${pck_name:-networking-l2gw}'/' debian/control
sed -i 's/version/'${ver:-1.0}'/' debian/control
sed -i 's/maintainer/'${maintainer_info:-user@openstack}'/' debian/control
sed -i 's/arch/'${architecture:-all}'/' debian/control
sed -i 's/desc/'${description:-networking-l2gw}'/' debian/control
sed -i 's/desc_details/'${description_details:-networking-l2gw}'/' debian/control
fi
#dpkg-buildpackage, build binary or source packages from sources.
#-b Specifies a binary-only build, no source files are to be built and/or distributed.
echo building debian package
dpkg-buildpackage -b
cd ../
if [ -z "$pck_name" ]
then
pck_name="networking-l2gw"
fi
if [ -z "$pck_ver" ]
then
pck_ver=1.0
fi
if [ -z "$architecture" ]
then
architecture="all"
fi
echo installing $pck_name\_$pck_ver\_$architecture.deb
dpkg -i $pck_name\_$pck_ver\_$architecture.deb
echo enter neutron.conf file path
read neutron_conf
l2gw_plugin=", networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin"
while read line
do
if [[ $line == *"service_plugins"* ]]
then
if [[ $line != *$l2gw_plugin* ]]
then
serv_plugin=$line$l2gw_plugin
sed -i "s|$line|$serv_plugin|" ${neutron_conf:-/etc/neutron/neutron.conf}
fi
fi
done <${neutron_conf:-/etc/neutron/neutron.conf}
service neutron-server restart

contrib/install_l2gateway_agent.sh
@@ -1,99 +0,0 @@
#!/bin/bash
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License
if [ $(id -u -r) -ne 0 ]
then
echo "Requires root privileges. Please re-run using sudo."
exit 1
fi
#apt-get update -y
apt-get install devscripts -y
apt-get install debhelper -y
apt-get install dh-make -y
#Read the package name and version; if not provided, use default values, and
#write them to the debian/changelog file.
cd ..
if [ -f "debian/changelog" ]
then
echo info for debian/changelog file
echo enter package name for debian/changelog
read pck
sed -i 's/PACKAGE/'${pck:-networking-l2gw}'/' debian/changelog
echo enter package version for debian/changelog
read pck_ver
sed -i 's/VERSION/'${pck_ver:-1.0}'/' debian/changelog
fi
#control file contains various values which dpkg, dselect, apt-get, apt-cache, aptitude,
#and other package management tools will use to manage the package.
#It is defined by the Debian Policy Manual, 5 "Control files and their fields".
if [ -f "debian/control" ]
then
echo info for debian/control file
echo enter the networking-l2gw source name
read src_name
echo enter the networking-l2gw package name
read pck_name
echo enter the version number
read ver
echo enter the maintainer info
read maintainer_info
echo enter the architecture
read architecture
echo enter the description title
read description
echo enter the description details
read description_details
sed -i 's/source/'${src_name:-networking-l2gw}'/' debian/control
sed -i 's/package/'${pck_name:-networking-l2gw}'/' debian/control
sed -i 's/version/'${ver:-1.0}'/' debian/control
sed -i 's/maintainer/'${maintainer_info:-user@openstack}'/' debian/control
sed -i 's/arch/'${architecture:-all}'/' debian/control
sed -i 's/desc/'${description:-networking-l2gw}'/' debian/control
sed -i 's/desc_details/'${description_details:-networking-l2gw}'/' debian/control
fi
#dpkg-buildpackage, build binary or source packages from sources.
#-b Specifies a binary-only build, no source files are to be built and/or distributed.
echo building debian package
dpkg-buildpackage -b
cd ../
if [ -z "$pck_name" ]
then
pck_name="networking-l2gw"
fi
if [ -z "$pck_ver" ]
then
pck_ver=1.0
fi
if [ -z "$architecture" ]
then
architecture="all"
fi
echo installing $pck_name\_$pck_ver\_$architecture.deb
dpkg -i $pck_name\_$pck_ver\_$architecture.deb
echo enter the networking-l2gw binary path
read l2gw_bin_path
echo enter the neutron config file path
read neutron_conf
echo enter the l2gateway config file path
read l2gw_conf
echo enter the l2gateway log file path
read l2gw_log
sed -i 's|l2gw_bin_path|'$l2gw_bin_path'|' networking-l2gw/contrib/neutron-l2gateway-agent.conf
sed -i 's|neutron_conf|'$neutron_conf'|' networking-l2gw/contrib/neutron-l2gateway-agent.conf
sed -i 's|l2gw_conf|'$l2gw_conf'|' networking-l2gw/contrib/neutron-l2gateway-agent.conf
sed -i 's|l2gw_log|'$l2gw_log'|' networking-l2gw/contrib/neutron-l2gateway-agent.conf
cp networking-l2gw/contrib/neutron-l2gateway-agent.conf /etc/init/
service neutron-l2gateway-agent restart

contrib/neutron-l2gateway-agent.conf
@@ -1,16 +0,0 @@
# vim:set ft=upstart ts=2 et:
description "Neutron L2GW Agent"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
chdir /var/run
pre-start script
mkdir -p /var/run/neutron
chown neutron:root /var/run/neutron
end script
exec start-stop-daemon --start --chuid neutron --exec l2gw_bin_path -- --config-file=neutron_conf --config-file=l2gw_conf --log-file l2gw_log

debian/changelog
@@ -1,5 +0,0 @@
PACKAGE (VERSION) UNRELEASED; urgency=medium
* Initial release. (Closes: #XXXXXX)
-- root <root@user> Mon, 23 Mar 2015 02:17:32 -0700

debian/compat
@@ -1 +0,0 @@
9

debian/control
@@ -1,8 +0,0 @@
Source: source
Version: version
Maintainer: maintainer
Package: package
Architecture: arch
Description: desc
desc_details

debian/copyright (empty file)

debian/rules
@@ -1,4 +0,0 @@
#!/usr/bin/make -f
%:
dh $@ --with python2

devstack/README.rst
@@ -1,26 +0,0 @@
======================
Enabling in Devstack
======================
1. Download DevStack
2. Add this repo as an external repository and configure the following flags in ``local.conf``::
[[local|localrc]]
enable_plugin networking-l2gw https://github.com/openstack/networking-l2gw
enable_service l2gw-plugin l2gw-agent
OVSDB_HOSTS=<ovsdb_name>:<ip address>:<port>
3. If you want to override the default service driver for L2Gateway (which uses
the L2Gateway agent with RPC) with an alternative service driver, specify that
alternative driver in the NETWORKING_L2GW_SERVICE_DRIVER parameter
of your ``local.conf``.
For example, to use the ODL service driver for L2Gateway,
include it in ``local.conf`` as below:
NETWORKING_L2GW_SERVICE_DRIVER=L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default
4. Read the settings file for more details.
5. Run ``stack.sh``.

@@ -1,17 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# This script is executed in the OpenStack CI jobs located here:
# http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/networking-l2gw.yaml
#
export OVERRIDE_ENABLED_SERVICES=key,n-api,n-cpu,n-cond,n-sch,n-crt,n-cauth,n-obj,g-api,g-reg,c-sch,c-api,c-vol,q-meta,q-dhcp,rabbit,mysql,dstat,l2gw-plugin,l2gw-agent,q-svc,q-l3,q-agt

devstack/plugin.sh
@@ -1,106 +0,0 @@
#!/bin/bash
# devstack/plugin.sh
# Functions to control the configuration and operation of the l2gw
# Dependencies:
#
# ``functions`` file
# ``DEST`` must be defined
# ``STACK_USER`` must be defined
# ``stack.sh`` calls the entry points in this order:
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace
function install_l2gw {
setup_develop $L2GW_DIR
}
function configure_agent_conf {
sudo cp $L2GW_DIR/etc/l2gateway_agent.ini $L2GW_CONF_FILE
iniset $L2GW_CONF_FILE ovsdb ovsdb_hosts $OVSDB_HOSTS
}
function start_l2gw_agent {
run_process l2gw-agent "$L2GW_AGENT_BINARY --config-file $NEUTRON_CONF --config-file=$L2GW_CONF_FILE"
}
function run_l2gw_alembic_migration {
$NEUTRON_BIN_DIR/neutron-db-manage --config-file $NEUTRON_CONF --config-file /$Q_PLUGIN_CONF_FILE upgrade head
}
function configure_l2gw_plugin {
sudo cp $L2GW_DIR/etc/l2gw_plugin.ini $L2GW_PLUGIN_CONF_FILE
neutron_server_config_add $L2GW_PLUGIN_CONF_FILE
}
function configure_tempest_for_l2gw {
if is_service_enabled tempest; then
iniset $TEMPEST_CONFIG l2gw l2gw_switch "cell08-5930-01::FortyGigE1/0/1|100"
source /opt/stack/new/tempest/.tox/tempest/bin/activate
pip install -r $L2GW_DIR/test-requirements.txt
deactivate
fi
}
# main loop
if is_service_enabled l2gw-plugin; then
if [[ "$1" == "source" ]]; then
# no-op
:
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
install_l2gw
elif [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
_neutron_service_plugin_class_add $L2GW_PLUGIN
elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
configure_tempest_for_l2gw
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
configure_l2gw_plugin
run_l2gw_alembic_migration
if is_service_enabled q-svc; then
echo_summary "Configuring networking-l2gw"
if [ "$NETWORKING_L2GW_SERVICE_DRIVER" ]; then
inicomment $L2GW_PLUGIN_CONF_FILE service_providers service_provider
iniadd $L2GW_PLUGIN_CONF_FILE service_providers service_provider $NETWORKING_L2GW_SERVICE_DRIVER
fi
fi
elif [[ "$1" == "stack" && "$2" == "post-extra" ]]; then
# no-op
:
fi
if [[ "$1" == "unstack" ]]; then
# no-op
:
fi
if [[ "$1" == "clean" ]]; then
# no-op
:
fi
fi
if is_service_enabled l2gw-agent; then
if [[ "$1" == "source" ]]; then
# no-op
:
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
install_l2gw
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
configure_agent_conf
start_l2gw_agent
fi
if [[ "$1" == "unstack" ]]; then
#no-op
:
fi
if [[ "$1" == "clean" ]]; then
#no-op
:
fi
fi
# Restore xtrace
$XTRACE

devstack/settings
@@ -1,18 +0,0 @@
# Devstack settings
L2GW_DIR=$DEST/networking-l2gw
L2GW_AGENT_BINARY="$NEUTRON_BIN_DIR/neutron-l2gateway-agent"
L2GW_PLUGIN=${L2GW_PLUGIN:-"networking_l2gw.services.l2gateway.plugin.L2GatewayPlugin"}
L2GW_CONF_FILE=/etc/neutron/l2gateway_agent.ini
L2GW_PLUGIN_CONF_FILE=/etc/neutron/l2gw_plugin.ini
#NETWORKING_L2GW_SERVICE_DRIVER=L2GW:OpenDaylight:networking_odl.l2gateway.driver.OpenDaylightL2gwDriver:default
#
# Each service you enable has the following meaning:
# l2gw-plugin - Add this config flag to enable l2gw service plugin
# l2gw-agent - Add this config flag to enable l2gw agent
#
# An example of enabling all-in-one l2gw is below.
# enable_service l2gw-plugin l2gw-agent
#
# This can be overridden in the localrc file
OVSDB_HOSTS=${OVSDB_HOSTS:-"ovsdb1:127.0.0.1:6632"}

doc/source/conf.py
@@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'networking-l2gw'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

doc/source/contributing.rst
@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

doc/source/images/L2GW_deployment.png (binary image, 28 KiB; not shown)

doc/source/index.rst
@@ -1,25 +0,0 @@
.. networking-l2gw documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to networking-l2gw's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

doc/source/installation.rst
@@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install networking-l2gw
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv networking-l2gw
$ pip install networking-l2gw

doc/source/readme.rst
@@ -1 +0,0 @@
.. include:: ../../README.rst

@@ -1,152 +0,0 @@
========
Overview
========
.. _whatisl2gw:
1. What is L2 Gateway
=============================
L2 Gateway (L2GW) is an API framework for OpenStack that offers bridging of two or more networks together to make them look like a
single broadcast domain. A typical use case is bridging the virtual with the physical networks.
.. _model:
2. The L2GW model
=================
L2GW introduces various models to describe the relationships between the logical and the physical entities.
========================= ======================================================================
Models                    Description
========================= ======================================================================
l2gateways                logical gateways that represent a set of physical devices
l2gatewaydevices          the l2 gateway devices that make up a logical gateway
l2gatewayinterfaces       the physical ports on the devices
l2gatewayconnections      a connection between a Neutron network and a logical gateway
========================= ======================================================================
.. _usage:
3. L2GW NB API usage
=====================
The L2GW NB REST API definitions are listed below.
3.1 Create l2gateway: neutron-l2gw l2-gateway-create <l2gateway-name> --device name="<device_name>",interface_names="<interface_name1>|[<segid1>];<interface_name2>|[<segid2>]"
Note: segid is an optional parameter; if it is not provided while creating the l2gateway, it must be provided while creating the l2-gateway-connection.
3.2 List l2gateways: neutron-l2gw l2-gateway-list
3.3 Show l2gateway: neutron-l2gw l2-gateway-show <l2gateway-id/l2gateway-name>
3.4 Delete l2gateway: neutron-l2gw l2-gateway-delete <l2gateway-id/l2gateway-name>
3.5 Update l2gateway: neutron-l2gw l2-gateway-update <l2gateway-id/l2gateway-name> --name <new l2gateway-name> --device name=<device_name>,interface_names="<interface_name1>|[<segid1>];<interface_name2>|[<segid2>]"
3.6 Create l2gateway-connection: neutron-l2gw l2-gateway-connection-create <l2gateway-id> <network-id> --default-segmentation-id [seg-id]
3.7 List l2gateway-connection: neutron-l2gw l2-gateway-connection-list
3.8 Show l2gateway-connection: neutron-l2gw l2-gateway-connection-show <l2gateway-connection-id>
3.9 Delete l2gateway-connection: neutron-l2gw l2-gateway-connection-delete <l2gateway-connection-id>
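As a worked example, with placeholder names and IDs (the device and
interface names follow the examples shipped in etc/l2gw_plugin.ini)::

    neutron-l2gw l2-gateway-create gw1 --device name="Switch1",interface_names="FortyGigE1/0/1|100"
    neutron-l2gw l2-gateway-show gw1
    neutron-l2gw l2-gateway-connection-create <l2gateway-id> <network-id> --default-segmentation-id 100
    neutron-l2gw l2-gateway-connection-list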
.. _l2gw_agent:
4. L2GW agent
=============
In an OpenStack deployment, configure the OVSDB parameters in /etc/neutron/l2gateway_agent.ini.
Example:
[ovsdb]
ovsdb_hosts = ovsdb1:127.0.0.1:6632
In DevStack, local.conf will do the trick (refer to networking-l2gw/devstack/README.rst).
The L2GW agent will be listed as part of "neutron agent-list".
Details of the L2GW agent can be seen using the "neutron agent-show <agent-id>" command.
The L2 Gateway agent connects to the OVSDB server to configure and fetch L2 gateways.
.. _l2gw_deployment:
5. L2GW Deployment
==================
.. image:: images/L2GW_deployment.png
:height: 225px
:width: 450px
:align: center
.. _l2gw_release_management:
6. L2GW Package Versioning and Release Management
=================================================
Versioning of L2 Gateway Package
--------------------------------
L2 Gateway package will be uploaded, as networking-l2gw,
to https://pypi.python.org.
In order to upload this package, it will be versioned.
Any subsequent updates will require version updates.
This sub-section describes the versioning and release management of this package.
Keeping the L2 Gateway repository out of the main Neutron repository gives us
flexibility in terms of development and enhancements.
This flexibility is extended for versioning of this project as well - this
means, if we wanted to, we could version this project sequentially.
This means whenever a new fix is released, we could bump up the version to
the next number.
Flexibility comes with a cost. Thinking in terms of the future, assume this API
is deployed by many users along with different releases of Neutron.
Many enhancements/fixes may be introduced to this project.
If we incremented the version/release number sequentially, this might force
uninterested users to upgrade as well.
This may or may not be desirable. Therefore, the following release/versioning
proposal is suggested for this package.
Versioning of L2 Gateway will be aligned closely with Neutron releases.
Neutron releases are formatted as follows::
<year>.<major-release>.<minor-release>
year = 2015, 2014, etc...
major-release = 1 or 2 - only two releases in a year
minor-release = 1,2,3 or b1,b2,b3, or rc1,rc2,rc3, etc
2015.1.1, 2014.2.rc2, etc…
L2 Gateway package is versioned in the same manner with an exception that the
last tuple is used for intermediate patches/fixes between major release.
As an example, the first release will be::
2015.1.X where X will continue to increment as we add fixes to this release
When kilo is released, L2 Gateway repository will also be tagged as kilo/stable
to match with Neutron release.
At this time the version of this package will be tagged to
2015.1.X ("X" will continue to increase as bug fixes are added to kilo/stable).
For liberty release, the version of this package will be changed to 2015.2.Y.
All the new features will be added to 2015.2.Y and all the bug fixes for kilo
will be back-ported to 2015.1.X.
This gives the flexibility of keeping the contents/features of this package
closely aligned with Neutron releases.
Which Version of L2 Gateway Package to use?
-------------------------------------------
Anybody who wants to use the L2 Gateway package can install it by issuing::
pip install networking-l2gw
This will always pick the latest version of the package.
However, users who are already using this package and want to pick
up point fixes for a given release may use a specific version.
For example, a user who wants to pick the latest version of the package that is
suitable for kilo/stable may use the following::
pip install networking-l2gw>=2015.1.X,<2015.2.0
For information on deploying L2GW, refer to networking-l2gw/doc/source/installation.rst and, for DevStack, networking-l2gw/devstack/README.rst.

etc/l2gateway_agent.ini
@@ -1,62 +0,0 @@
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = False
[ovsdb]
# (StrOpt) OVSDB server tuples in the format
# <ovsdb_name>:<ip address>:<port>[,<ovsdb_name>:<ip address>:<port>]
# - ovsdb_name: a symbolic name that helps identifies keys and certificate files
# - ip address: the address or dns name for the ovsdb server
# - port: the port (ssl is supported)
# ovsdb_hosts =
# Example: ovsdb_hosts = 'ovsdb1:16.95.16.1:6632,ovsdb2:16.95.16.2:6632'
# enable_manager = False
# (BoolOpt) connection can be initiated by the ovsdb server.
# By default 'enable_manager' value is False, turn on the variable to True
# to initiate the connection from ovsdb server to l2gw agent.
# manager_table_listening_port = 6632
# (PortOpt) Port on which the l2gateway agent listens for the ovsdb server,
# whenever the agent's IP is entered in the manager table of the ovsdb server.
# By default it is set to port 6632.
# You can use the vtep-ctl utility to populate the manager table of ovsdb.
# For Example: sudo vtep-ctl set-manager tcp:x.x.x.x:6640,
# where x.x.x.x is IP of l2gateway agent and 6640 is a port.
# (StrOpt) Base path to private key file(s).
# Agent will find key file named
# $l2_gw_agent_priv_key_base_path/$ovsdb_name.key
# l2_gw_agent_priv_key_base_path =
# Example: l2_gw_agent_priv_key_base_path = '/home/someuser/keys'
# (StrOpt) Base path to cert file(s).
# Agent will find cert file named
# $l2_gw_agent_cert_base_path/$ovsdb_name.cert
# l2_gw_agent_cert_base_path =
# Example: l2_gw_agent_cert_base_path = '/home/someuser/certs'
# (StrOpt) Base path to ca cert file(s).
# Agent will find ca cert file named
# $l2_gw_agent_ca_cert_base_path/$ovsdb_name.ca_cert
# l2_gw_agent_ca_cert_base_path =
# Example: l2_gw_agent_ca_cert_base_path = '/home/someuser/ca_certs'
# (IntOpt) The L2 gateway agent checks connection state with the OVSDB
# servers.
# The interval is number of seconds between attempts.
# periodic_interval =
# Example: periodic_interval = 20
# (IntOpt) The L2 gateway agent retries to connect to the OVSDB server
# if a socket does not get opened in the first attempt.
# the max_connection_retries is the maximum number of such attempts
# before giving up.
# max_connection_retries =
# Example: max_connection_retries = 10
# (IntOpt) The remote OVSDB server sends echo requests every 4 seconds.
# If there is no echo request on the socket for socket_timeout seconds,
# the agent can safely assume that the connection with the remote OVSDB
# server is lost. By default, socket_timeout is set to 30 seconds.
# socket_timeout =
# Example: socket_timeout = 30
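# ---------------------------------------------------------------------
# For reference, a filled-in configuration assembled from the examples
# above might look like the following (the values are illustrative, not
# shipped defaults):
# [ovsdb]
# ovsdb_hosts = ovsdb1:16.95.16.1:6632,ovsdb2:16.95.16.2:6632
# enable_manager = False
# l2_gw_agent_priv_key_base_path = /home/someuser/keys
# l2_gw_agent_cert_base_path = /home/someuser/certs
# l2_gw_agent_ca_cert_base_path = /home/someuser/ca_certs
# periodic_interval = 20
# max_connection_retries = 10
# socket_timeout = 30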

etc/l2gw_plugin.ini
@@ -1,25 +0,0 @@
[DEFAULT]
# (StrOpt) default interface name of the l2 gateway
# default_interface_name =
# Example: default_interface_name = "FortyGigE1/0/1"
# (StrOpt) default device name of the l2 gateway
# default_device_name =
# Example: default_device_name = "Switch1"
# (IntOpt) quota of the l2 gateway
# quota_l2_gateway =
# Example: quota_l2_gateway = 10
# (IntOpt) The periodic interval at which the plugin
# checks for the monitoring L2 gateway agent
# periodic_monitoring_interval =
# Example: periodic_monitoring_interval = 5
[service_providers]
# Must be in form:
# service_provider=<service_type>:<name>:<driver>[:default]
# List of allowed service types includes L2GW
# Combination of <service type> and <name> must be unique; <driver> must also be unique
# This is multiline option
service_provider=L2GW:l2gw:networking_l2gw.services.l2gateway.service_drivers.rpc_l2gw.L2gwRpcDriver:default

@@ -1,15 +0,0 @@
{
"admin_only": "rule:context_is_admin",
"admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
"create_l2_gateway": "rule:admin_only",
"update_l2_gateway": "rule:admin_only",
"get_l2_gateway": "rule:admin_only",
"delete_l2_gateway": "rule:admin_only",
"get_l2_gateways": "rule:admin_only",
"create_l2_gateway_connection": "rule:admin_only",
"get_l2_gateway_connections": "rule:admin_only",
"get_l2_gateway_connection": "rule:admin_only",
"delete_l2_gateway_connection": "rule:admin_only"
}

networking_l2gw/__init__.py
@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo(
'networking_l2gw').version_string()

networking_l2gw/_i18n.py
@@ -1,32 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n
DOMAIN = "networking_l2gw"
_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)
# The primary translation function using the well-known name "_"
_ = _translators.primary
# The contextual translation function using the name "_C"
_C = _translators.contextual_form
# The plural translation function using the name "_P"
_P = _translators.plural_form
def get_available_languages():
return oslo_i18n.get_available_languages(DOMAIN)

@@ -1,17 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import eventlet
eventlet.monkey_patch()

@@ -1,20 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from networking_l2gw.services.l2gateway import l2gw_agent
def main():
l2gw_agent.main()

networking_l2gw/db/l2gateway/db_query.py
@@ -1,112 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.db import models_v2
from neutron_lib import exceptions
from neutron_lib.plugins.ml2 import api
import sqlalchemy as sa
from sqlalchemy.orm import exc
class L2GatewayCommonDbMixin(object):
def _apply_filters_to_query(self, query, model, filters):
"""Apply filters to query for the models."""
if filters:
for key, value in filters.items():
column = getattr(model, key, None)
if column:
query = query.filter(column.in_(value))
return query
def _model_query(self, context, model):
"""Query model based on filter."""
query = context.session.query(model)
query_filter = None
if not context.is_admin and hasattr(model, 'tenant_id'):
if hasattr(model, 'shared'):
query_filter = ((model.tenant_id == context.tenant_id) |
(model.shared == sa.true()))
else:
query_filter = (model.tenant_id == context.tenant_id)
if query_filter is not None:
query = query.filter(query_filter)
return query
def _get_collection_query(self, context, model, filters=None,
sorts=None, limit=None, marker_obj=None,
page_reverse=False):
"""Get collection query for the models."""
collection = self._model_query(context, model)
collection = self._apply_filters_to_query(collection, model, filters)
return collection
def _get_marker_obj(self, context, resource, limit, marker):
"""Get marker object for the resource."""
if limit and marker:
return getattr(self, '_get_%s' % resource)(context, marker)
return None
def _fields(self, resource, fields):
"""Get fields for the resource for get query."""
if fields:
return dict(((key, item) for key, item in resource.items()
if key in fields))
return resource
def _get_tenant_id_for_create(self, context, resource):
"""Get tenant id for creation of resources."""
if context.is_admin and 'tenant_id' in resource:
tenant_id = resource['tenant_id']
elif ('tenant_id' in resource and
resource['tenant_id'] != context.tenant_id):
reason = _('Cannot create resource for another tenant')
raise exceptions.AdminRequired(reason=reason)
else:
tenant_id = context.tenant_id
return tenant_id
def _get_collection(self, context, model, dict_func, filters=None,
fields=None, sorts=None, limit=None, marker_obj=None,
page_reverse=False):
"""Get collection object based on query for resources."""
query = self._get_collection_query(context, model, filters=filters,
sorts=sorts,
limit=limit,
marker_obj=marker_obj,
page_reverse=page_reverse)
items = [dict_func(c, fields) for c in query]
if limit and page_reverse:
items.reverse()
return items
def _make_segment_dict(self, record):
"""Make a segment dictionary out of a DB record."""
return {api.ID: record.id,
api.NETWORK_TYPE: record.network_type,
api.PHYSICAL_NETWORK: record.physical_network,
api.SEGMENTATION_ID: record.segmentation_id}
def _get_network(self, context, id):
try:
network = self._get_by_id(context, models_v2.Network, id)
except exc.NoResultFound:
raise exceptions.NetworkNotFound(net_id=id)
return network
def _get_by_id(self, context, model, id):
query = self._model_query(context, model)
return query.filter(model.id == id).one()

@@ -1,20 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron.db.migration.models import head
import networking_l2gw.db.l2gateway.l2gateway_models # noqa
import networking_l2gw.db.l2gateway.ovsdb.models # noqa
def get_metadata():
return head.model_base.BASEV2.metadata

networking_l2gw/db/l2gateway/l2gateway_db.py
@@ -1,566 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.callbacks import events
from neutron.callbacks import registry
from neutron.callbacks import resources
from networking_l2gw.db.l2gateway import db_query
from networking_l2gw.db.l2gateway import l2gateway_models as models
from networking_l2gw.extensions import l2gateway
from networking_l2gw.extensions import l2gatewayconnection
from networking_l2gw.services.l2gateway.common import config
from networking_l2gw.services.l2gateway.common import constants
from networking_l2gw.services.l2gateway.common import l2gw_validators
from networking_l2gw.services.l2gateway import exceptions as l2gw_exc
from neutron_lib import exceptions
from neutron_lib.plugins import directory
from oslo_log import log as logging
from oslo_utils import uuidutils
from sqlalchemy.orm import exc as sa_orm_exc
LOG = logging.getLogger(__name__)
class L2GatewayMixin(l2gateway.L2GatewayPluginBase,
db_query.L2GatewayCommonDbMixin,
l2gatewayconnection.L2GatewayConnectionPluginBase):
"""Class L2GatewayMixin for handling l2_gateway resource."""
gateway_resource = constants.GATEWAY_RESOURCE_NAME
connection_resource = constants.CONNECTION_RESOURCE_NAME
config.register_l2gw_opts_helper()
def _get_l2_gateway(self, context, gw_id):
gw = context.session.query(models.L2Gateway).get(gw_id)
if not gw:
raise l2gw_exc.L2GatewayNotFound(gateway_id=gw_id)
return gw
def _get_l2_gateways(self, context):
return context.session.query(models.L2Gateway).all()
def _get_l2_gw_interfaces(self, context, id):
return context.session.query(models.L2GatewayInterface).filter_by(
device_id=id).all()
def _is_vlan_configured_on_any_interface_for_l2gw(self,
context,
l2gw_id):
devices_db = self._get_l2_gateway_devices(context, l2gw_id)
for device_model in devices_db:
interfaces_db = self._get_l2_gw_interfaces(context,
device_model.id)
for int_model in interfaces_db:
query = context.session.query(models.L2GatewayInterface)
int_db = query.filter_by(id=int_model.id).first()
seg_id = int_db[constants.SEG_ID]
if seg_id > 0:
return True
return False
def _get_l2_gateway_devices(self, context, l2gw_id):
return context.session.query(models.L2GatewayDevice).filter_by(
l2_gateway_id=l2gw_id).all()
def _get_l2gw_devices_by_name_andl2gwid(self, context, device_name,
l2gw_id):
return context.session.query(models.L2GatewayDevice).filter_by(
device_name=device_name, l2_gateway_id=l2gw_id).all()
def _get_l2_gateway_connection(self, context, cn_id):
try:
con = context.session.query(models.L2GatewayConnection).get(cn_id)
except sa_orm_exc.NoResultFound:
raise l2gw_exc.L2GatewayConnectionNotFound(id=cn_id)
return con
def _make_l2gw_connections_dict(self, gw_conn, fields=None):
if gw_conn is None:
raise l2gw_exc.L2GatewayConnectionNotFound(id="")
segmentation_id = gw_conn['segmentation_id']
if segmentation_id == 0:
segmentation_id = ""
res = {'id': gw_conn['id'],
'network_id': gw_conn['network_id'],
'l2_gateway_id': gw_conn['l2_gateway_id'],
'tenant_id': gw_conn['tenant_id'],
'segmentation_id': segmentation_id
}
return self._fields(res, fields)
def _make_l2_gateway_dict(self, l2_gateway, fields=None):
device_list = []
for d in l2_gateway.devices:
interface_list = []
for interfaces_db in d.interfaces:
seg_id = interfaces_db[constants.SEG_ID]
if seg_id == 0:
seg_id = ""
interface_list.append({'name':
interfaces_db['interface_name'],
constants.SEG_ID:
seg_id})
aligned_int__list = self._align_interfaces_list(interface_list)
device_list.append({'device_name': d['device_name'],
'id': d['id'],
'interfaces': aligned_int__list})
res = {'id': l2_gateway['id'],
'name': l2_gateway['name'],
'devices': device_list,
'tenant_id': l2_gateway['tenant_id']}
return self._fields(res, fields)
def _set_mapping_info_defaults(self, mapping_info):
if not mapping_info.get(constants.SEG_ID):
mapping_info[constants.SEG_ID] = 0
def _retrieve_gateway_connections(self, context, gateway_id,
mapping_info={}, only_one=False):
filters = {'l2_gateway_id': [gateway_id]}
for k, v in mapping_info.items():
if v and k != constants.SEG_ID:
filters[k] = [v]
query = self._get_collection_query(context,
models.L2GatewayConnection,
filters)
return query.one() if only_one else query.all()
def create_l2_gateway(self, context, l2_gateway):
"""Create a logical gateway."""
self._admin_check(context, 'CREATE')
gw = l2_gateway[self.gateway_resource]
tenant_id = self._get_tenant_id_for_create(context, gw)
devices = gw['devices']
with context.session.begin(subtransactions=True):
gw_db = models.L2Gateway(
id=gw.get('id', uuidutils.generate_uuid()),
tenant_id=tenant_id,
name=gw.get('name'))
context.session.add(gw_db)
l2gw_device_dict = {}
for device in devices:
l2gw_device_dict['l2_gateway_id'] = gw_db.id
device_name = device['device_name']
l2gw_device_dict['device_name'] = device_name
l2gw_device_dict['id'] = uuidutils.generate_uuid()
uuid = self._generate_uuid()
dev_db = models.L2GatewayDevice(id=uuid,
l2_gateway_id=gw_db.id,
device_name=device_name)
context.session.add(dev_db)
for interface_list in device['interfaces']:
int_name = interface_list.get('name')
if constants.SEG_ID in interface_list:
seg_id_list = interface_list.get(constants.SEG_ID)
for seg_ids in seg_id_list:
uuid = self._generate_uuid()
interface_db = self._get_int_model(uuid,
int_name,
dev_db.id,
seg_ids)
context.session.add(interface_db)
else:
uuid = self._generate_uuid()
interface_db = self._get_int_model(uuid,
int_name,
dev_db.id,
0)
context.session.add(interface_db)
context.session.query(models.L2GatewayDevice).all()
return self._make_l2_gateway_dict(gw_db)
def update_l2_gateway(self, context, id, l2_gateway):
"""Update L2Gateway."""
gw = l2_gateway[self.gateway_resource]
devices = gw.get('devices')
dev_db = None
l2gw_db = None
with context.session.begin(subtransactions=True):
l2gw_db = self._get_l2_gateway(context, id)
if not devices and l2gw_db:
l2gw_db.name = gw.get('name')
return self._make_l2_gateway_dict(l2gw_db)
if devices:
for device in devices:
dev_name = device['device_name']
dev_db = (self._get_l2gw_devices_by_name_andl2gwid(
context, dev_name, id))
interface_dict_list = [i for i in device['interfaces']]
interface_db = self._get_l2_gw_interfaces(context,
dev_db[0].id)
self._delete_l2_gateway_interfaces(context, interface_db)
self._update_interfaces_db(context, interface_dict_list,
dev_db)
if l2gw_db:
if gw.get('name'):
l2gw_db.name = gw.get('name')
return self._make_l2_gateway_dict(l2gw_db)
def _update_interfaces_db(self, context, interface_dict_list, device_db):
for interfaces in interface_dict_list:
int_name = interfaces.get('name')
if constants.SEG_ID in interfaces:
seg_id_list = interfaces.get(constants.SEG_ID)
for seg_ids in seg_id_list:
uuid = self._generate_uuid()
int_db = self._get_int_model(uuid,
int_name,
device_db[0].id,
seg_ids)
context.session.add(int_db)
else:
uuid = self._generate_uuid()
interface_db = self._get_int_model(uuid,
int_name,
device_db[0].id,
0)
context.session.add(interface_db)
def get_l2_gateway(self, context, id, fields=None):
"""get the l2 gateway by id."""
self._admin_check(context, 'GET')
gw_db = self._get_l2_gateway(context, id)
return self._make_l2_gateway_dict(gw_db, fields)
def delete_l2_gateway(self, context, id):
"""delete the l2 gateway by id."""
gw_db = self._get_l2_gateway(context, id)
if gw_db:
with context.session.begin(subtransactions=True):
context.session.delete(gw_db)
LOG.debug("l2 gateway '%s' was deleted.", id)
def get_l2_gateways(self, context, filters=None, fields=None,
sorts=None,
limit=None,
marker=None,
page_reverse=False):
"""list the l2 gateways available in the neutron DB."""
self._admin_check(context, 'GET')
marker_obj = self._get_marker_obj(
context, 'l2_gateway', limit, marker)
return self._get_collection(context, models.L2Gateway,
self._make_l2_gateway_dict,
filters=filters, fields=fields,
sorts=sorts, limit=limit,
marker_obj=marker_obj,
page_reverse=page_reverse)
def _update_segmentation_id(self, context, l2gw_id, segmentation_id):
"""Update segmentation id for interfaces."""
device_db = self._get_l2_gateway_devices(context, l2gw_id)
for device_model in device_db:
interface_db = self._get_l2_gw_interfaces(context,
device_model.id)
for interface_model in interface_db:
interface_model.segmentation_id = segmentation_id
def _delete_l2_gateway_interfaces(self, context, int_db_list):
"""delete the l2 interfaces by id."""
with context.session.begin(subtransactions=True):
for interfaces in int_db_list:
context.session.delete(interfaces)
LOG.debug("l2 gateway interfaces was deleted.")
def create_l2_gateway_connection(self, context, l2_gateway_connection):
"""Create l2 gateway connection."""
gw_connection = l2_gateway_connection[self.connection_resource]
l2_gw_id = gw_connection.get('l2_gateway_id')
network_id = gw_connection.get('network_id')
nw_map = {}
nw_map['network_id'] = network_id
nw_map['l2_gateway_id'] = l2_gw_id
segmentation_id = ""
if constants.SEG_ID in gw_connection:
segmentation_id = gw_connection.get(constants.SEG_ID)
nw_map[constants.SEG_ID] = segmentation_id
with context.session.begin(subtransactions=True):
gw_db = self._get_l2_gateway(context, l2_gw_id)
tenant_id = self._get_tenant_id_for_create(context, gw_db)
if self._retrieve_gateway_connections(context,
l2_gw_id,
nw_map):
raise l2gw_exc.L2GatewayConnectionExists(mapping=nw_map,
gateway_id=l2_gw_id)
nw_map['tenant_id'] = tenant_id
connection_id = uuidutils.generate_uuid()
nw_map['id'] = connection_id
if not segmentation_id:
nw_map['segmentation_id'] = "0"
gw_db.network_connections.append(
models.L2GatewayConnection(**nw_map))
gw_db = models.L2GatewayConnection(id=connection_id,
tenant_id=tenant_id,
network_id=network_id,
l2_gateway_id=l2_gw_id,
segmentation_id=segmentation_id)
return self._make_l2gw_connections_dict(gw_db)
def get_l2_gateway_connections(self, context, filters=None,
fields=None,
sorts=None, limit=None, marker=None,
page_reverse=False):
"""List l2 gateway connections."""
self._admin_check(context, 'GET')
marker_obj = self._get_marker_obj(
context, 'l2_gateway_connection', limit, marker)
return self._get_collection(context, models.L2GatewayConnection,
self._make_l2gw_connections_dict,
filters=filters, fields=fields,
sorts=sorts, limit=limit,
marker_obj=marker_obj,
page_reverse=page_reverse)
def get_l2_gateway_connection(self, context, id, fields=None):
"""Get l2 gateway connection."""
self._admin_check(context, 'GET')
"""Get the l2 gateway connection by id."""
gw_db = self._get_l2_gateway_connection(context, id)
return self._make_l2gw_connections_dict(gw_db, fields)
def delete_l2_gateway_connection(self, context, id):
"""Delete the l2 gateway connection by id."""
with context.session.begin(subtransactions=True):
gw_db = self._get_l2_gateway_connection(context, id)
context.session.delete(gw_db)
LOG.debug("l2 gateway '%s' was destroyed.", id)
def _admin_check(self, context, action):
"""Admin role check helper."""
# TODO(selva): this check should only be required if the tenant_id is
# specified in the request; otherwise policy.json already handles it.
# This needs further revision.
if not context.is_admin:
reason = _('Cannot %s resource for non admin tenant') % action
raise exceptions.AdminRequired(reason=reason)
def _generate_uuid(self):
"""Generate uuid helper."""
uuid = uuidutils.generate_uuid()
return uuid
def _get_int_model(self, uuid, interface_name, dev_id, seg_id):
return models.L2GatewayInterface(id=uuid,
interface_name=interface_name,
device_id=dev_id,
segmentation_id=seg_id)
def get_l2gateway_devices_by_gateway_id(self, context, l2_gateway_id):
"""Get l2gateway_devices_by id."""
session = context.session
with session.begin():
return session.query(models.L2GatewayDevice).filter_by(
l2_gateway_id=l2_gateway_id).all()
def get_l2gateway_interfaces_by_device_id(self, context, device_id):
"""Get all l2gateway_interfaces_by device_id."""
session = context.session
with session.begin():
return session.query(models.L2GatewayInterface).filter_by(
device_id=device_id).all()
def validate_device_name(self, context, device_name, l2gw_id):
if device_name:
devices_db = self._get_l2gw_devices_by_name_andl2gwid(context,
device_name,
l2gw_id)
if not devices_db:
raise l2gw_exc.L2GatewayDeviceNameNotFound(device_name=device_name)
def _validate_any_seg_id_empty_in_interface_dict(self, devices):
"""Validate segmentation_id for consistency."""
for device in devices:
interface_list = device['interfaces']
if not interface_list:
raise l2gw_exc.L2GatewayInterfaceRequired()
if constants.SEG_ID in interface_list[0]:
for interfaces in interface_list[1:len(interface_list)]:
if constants.SEG_ID not in interfaces:
raise l2gw_exc.L2GatewaySegmentationRequired()
if constants.SEG_ID not in interface_list[0]:
for interfaces in interface_list[1:len(interface_list)]:
if constants.SEG_ID in interfaces:
raise l2gw_exc.L2GatewaySegmentationRequired()
def _align_interfaces_list(self, interface_list):
"""Align interfaces list based on input dict for multiple seg ids."""
interface_dict = {}
aligned_interface_list = []
for interfaces in interface_list:
actual_name = interfaces.get('name')
if actual_name in interface_dict:
seg_ids = interface_dict.get(actual_name)
seg_id_list = interfaces.get(constants.SEG_ID)
seg_ids.append(str(seg_id_list))
interface_dict.update({actual_name: seg_ids})
else:
seg_id = str(interfaces.get(constants.SEG_ID)).split()
interface_dict.update({actual_name: seg_id})
for name in interface_dict:
aligned_interface_list.append({'segmentation_id':
interface_dict[name],
'name': name})
return aligned_interface_list
def _get_l2_gateway_connections(self, context):
"""Get l2 gateway connections."""
try:
con = context.session.query(models.L2GatewayConnection).all()
except sa_orm_exc.NoResultFound:
raise l2gw_exc.L2GatewayConnectionNotFound(
id="")
return con
def _get_l2gw_ids_by_interface_switch(self, context, interface_name,
switch_name):
"""Get l2 gateway ids by interface and switch."""
connections = self._get_l2_gateway_connections(context)
l2gw_id_list = []
if connections:
for connection in connections:
l2gw_id = connection.l2_gateway_id
devices = self._get_l2_gateway_device_by_name_id(context,
switch_name,
l2gw_id)
if devices:
for device in devices:
interfaces = self._get_l2_gw_interfaces(context,
device.id)
for interface in interfaces:
if interface_name == interface.interface_name:
l2gw_id_list.append(l2gw_id)
else:
LOG.debug("l2 gateway devices are empty")
else:
LOG.debug("l2 gateway connections are empty")
return l2gw_id_list
def _delete_connection_by_l2gw_id(self, context, l2gw_id):
"""Delete the l2 gateway connection by l2gw id."""
with context.session.begin(subtransactions=True):
con_db = self._get_l2_gateway_connection_by_l2gw_id(context,
l2gw_id)
if con_db:
context.session.delete(con_db[0])
LOG.debug("l2 gateway connection was destroyed.")
def _get_l2_gateway_connection_by_l2gw_id(self, context, l2gw_id):
"""Get the l2 gateway connection by l2gw id."""
try:
con = context.session.query(models.L2GatewayConnection).filter_by(
l2_gateway_id=l2gw_id).all()
except sa_orm_exc.NoResultFound:
raise l2gw_exc.L2GatewayConnectionNotFound(
id=l2gw_id)
return con
def _get_l2_gateway_device_by_name_id(self, context, device_name, l2gw_id):
"""Get the l2 gateway device by name and id."""
try:
gw = context.session.query(models.L2GatewayDevice).filter_by(
device_name=device_name, l2_gateway_id=l2gw_id).all()
except sa_orm_exc.NoResultFound:
raise l2gw_exc.L2GatewayDeviceNotFound(
device_id=device_name)
return gw
def validate_l2_gateway_for_create(self, context, l2_gateway):
self._admin_check(context, 'CREATE')
gw = l2_gateway[self.gateway_resource]
devices = gw['devices']
self._validate_any_seg_id_empty_in_interface_dict(devices)
def validate_l2_gateway_for_delete(self, context, l2gw_id):
self._admin_check(context, 'DELETE')
gw_db = self._get_l2_gateway(context, l2gw_id)
if gw_db.network_connections:
raise l2gw_exc.L2GatewayInUse(gateway_id=l2gw_id)
return
def validate_l2_gateway_for_update(self, context, id, l2_gateway):
self._admin_check(context, 'UPDATE')
gw = l2_gateway[self.gateway_resource]
devices = None
dev_db = None
l2gw_db = None
if 'devices' in gw:
devices = gw['devices']
with context.session.begin(subtransactions=True):
l2gw_db = self._get_l2_gateway(context, id)
if l2gw_db.network_connections:
raise l2gw_exc.L2GatewayInUse(gateway_id=id)
if devices:
for device in devices:
dev_name = device['device_name']
dev_db = (self._get_l2gw_devices_by_name_andl2gwid(
context, dev_name, id))
if not dev_db:
raise l2gw_exc.L2GatewayDeviceNotFound(device_id="")
self.validate_device_name(context, dev_name, id)
def validate_l2_gateway_connection_for_create(self, context,
l2_gateway_connection):
self._admin_check(context, 'CREATE')
gw_connection = l2_gateway_connection[self.connection_resource]
l2_gw_id = gw_connection.get('l2_gateway_id')
network_id = gw_connection.get('network_id')
plugin = directory.get_plugin()
plugin.get_network(context, network_id)
nw_map = {}
nw_map['network_id'] = network_id
nw_map['l2_gateway_id'] = l2_gw_id
segmentation_id = ""
if constants.SEG_ID in gw_connection:
segmentation_id = gw_connection.get(constants.SEG_ID)
nw_map[constants.SEG_ID] = segmentation_id
is_vlan = self._is_vlan_configured_on_any_interface_for_l2gw(context,
l2_gw_id)
network_id = l2gw_validators.validate_network_mapping_list(nw_map,
is_vlan)
with context.session.begin(subtransactions=True):
if self._retrieve_gateway_connections(context,
l2_gw_id,
nw_map):
raise l2gw_exc.L2GatewayConnectionExists(mapping=nw_map,
gateway_id=l2_gw_id)
def validate_l2_gateway_connection_for_delete(self, context,
l2_gateway_conn_id):
self._admin_check(context, 'DELETE')
gw_db = self._get_l2_gateway_connection(context,
l2_gateway_conn_id)
if gw_db is None:
raise l2gw_exc.L2GatewayConnectionNotFound(
id=l2_gateway_conn_id)
def l2gw_callback(resource, event, trigger, **kwargs):
l2gwservice = directory.get_plugin(constants.L2GW)
context = kwargs.get('context')
port_dict = kwargs.get('port')
if l2gwservice:
if event == events.AFTER_UPDATE:
l2gwservice.add_port_mac(context, port_dict)
elif event == events.AFTER_DELETE:
l2gwservice.delete_port_mac(context, port_dict)
def subscribe():
interested_events = (events.AFTER_UPDATE,
events.AFTER_DELETE)
for x in interested_events:
registry.subscribe(
l2gw_callback, resources.PORT, x)
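For reference, a hedged sketch (not taken from the deleted file) of the nested request bodies that create_l2_gateway() and create_l2_gateway_connection() above expect. The resource keys 'l2_gateway' and 'l2_gateway_connection', the plugin handle and the admin context are assumptions inferred from the surrounding code, and the uuid strings are placeholders.

# Illustrative only: payload shape inferred from the mixin above.
gateway_body = {
    'l2_gateway': {
        'name': 'gw1',
        'devices': [{
            'device_name': 'tor-switch-1',
            # 'segmentation_id' carries a list when several VLANs are mapped
            'interfaces': [{'name': 'eth0', 'segmentation_id': [100, 200]}],
        }],
    },
}
connection_body = {
    'l2_gateway_connection': {
        'l2_gateway_id': '<gateway uuid>',
        'network_id': '<neutron network uuid>',
        'segmentation_id': 100,
    },
}
# gateway = plugin.create_l2_gateway(admin_ctx, gateway_body)
# connection = plugin.create_l2_gateway_connection(admin_ctx, connection_body)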


@ -1,64 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron_lib.db import model_base
import sqlalchemy as sa
from sqlalchemy import orm
class L2GatewayConnection(model_base.BASEV2, model_base.HasProject,
model_base.HasId):
"""Define an l2 gateway connection between a l2 gateway and a network."""
l2_gateway_id = sa.Column(sa.String(36),
sa.ForeignKey('l2gateways.id',
ondelete='CASCADE'))
network_id = sa.Column(sa.String(36),
sa.ForeignKey('networks.id', ondelete='CASCADE'),
nullable=False)
segmentation_id = sa.Column(sa.Integer)
__table_args__ = (sa.UniqueConstraint(l2_gateway_id,
network_id),)
class L2GatewayInterface(model_base.BASEV2, model_base.HasId):
"""Define an l2 gateway interface."""
interface_name = sa.Column(sa.String(255))
device_id = sa.Column(sa.String(36),
sa.ForeignKey('l2gatewaydevices.id',
ondelete='CASCADE'),
nullable=False)
segmentation_id = sa.Column(sa.Integer)
class L2GatewayDevice(model_base.BASEV2, model_base.HasId):
"""Define an l2 gateway device."""
device_name = sa.Column(sa.String(255), nullable=False)
interfaces = orm.relationship(L2GatewayInterface,
backref='l2gatewaydevices',
cascade='all,delete')
l2_gateway_id = sa.Column(sa.String(36),
sa.ForeignKey('l2gateways.id',
ondelete='CASCADE'),
nullable=False)
class L2Gateway(model_base.BASEV2, model_base.HasId, model_base.HasProject):
"""Define an l2 gateway."""
name = sa.Column(sa.String(255))
devices = orm.relationship(L2GatewayDevice,
backref='l2gateways',
cascade='all,delete')
network_connections = orm.relationship(L2GatewayConnection,
lazy='joined')
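A hedged illustration (not part of the deleted module) of how the relationships above fit together; the id values are placeholders and the session handling is only sketched in comments.

# Builds an in-memory object graph with the models above; persisting it
# would require a real Neutron database session.
gw = L2Gateway(id='gw-uuid', name='gw1', project_id='demo-project')
dev = L2GatewayDevice(id='dev-uuid', device_name='tor-switch-1',
                      l2_gateway_id=gw.id)
dev.interfaces.append(L2GatewayInterface(id='if-uuid',
                                         interface_name='eth0',
                                         segmentation_id=100))
gw.devices.append(dev)
# session.add(gw)   # the cascades above also persist devices and interfaces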


@ -1,570 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from oslo_utils import timeutils
from sqlalchemy import asc
from sqlalchemy.orm import exc
from networking_l2gw.db.l2gateway.ovsdb import models
LOG = logging.getLogger(__name__)
def add_vlan_binding(context, record_dict):
"""Insert a vlan binding of a given physical port."""
session = context.session
with session.begin(subtransactions=True):
binding = models.VlanBindings(
port_uuid=record_dict['port_uuid'],
vlan=record_dict['vlan'],
logical_switch_uuid=record_dict['logical_switch_uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier'])
session.add(binding)
def delete_vlan_binding(context, record_dict):
"""Delete vlan bindings of a given physical port."""
session = context.session
with session.begin(subtransactions=True):
if(record_dict['vlan'] and record_dict['logical_switch_uuid']):
session.query(models.VlanBindings).filter_by(
port_uuid=record_dict['port_uuid'], vlan=record_dict['vlan'],
logical_switch_uuid=record_dict['logical_switch_uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).delete()
def add_physical_locator(context, record_dict):
"""Insert a new physical locator."""
session = context.session
with session.begin(subtransactions=True):
locator = models.PhysicalLocators(
uuid=record_dict['uuid'],
dst_ip=record_dict['dst_ip'],
ovsdb_identifier=record_dict['ovsdb_identifier'])
session.add(locator)
def delete_physical_locator(context, record_dict):
"""Delete physical locator that matches the supplied uuid."""
session = context.session
with session.begin(subtransactions=True):
if(record_dict['uuid']):
session.query(models.PhysicalLocators).filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).delete()
def add_physical_switch(context, record_dict):
"""Insert a new physical switch."""
session = context.session
with session.begin(subtransactions=True):
physical_switch = models.PhysicalSwitches(
uuid=record_dict['uuid'],
name=record_dict['name'],
tunnel_ip=record_dict['tunnel_ip'],
ovsdb_identifier=record_dict['ovsdb_identifier'],
switch_fault_status=record_dict['switch_fault_status'])
session.add(physical_switch)
def delete_physical_switch(context, record_dict):
"""Delete physical switch that matches the supplied uuid."""
session = context.session
with session.begin(subtransactions=True):
if(record_dict['uuid']):
session.query(models.PhysicalSwitches).filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).delete()
def add_logical_switch(context, record_dict):
"""Insert a new logical switch."""
session = context.session
with session.begin(subtransactions=True):
logical_switch = models.LogicalSwitches(
uuid=record_dict['uuid'],
name=record_dict['name'],
key=record_dict['key'],
ovsdb_identifier=record_dict['ovsdb_identifier'])
session.add(logical_switch)
def delete_logical_switch(context, record_dict):
"""delete logical switch that matches the supplied uuid."""
session = context.session
with session.begin(subtransactions=True):
if(record_dict['uuid']):
session.query(models.LogicalSwitches).filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).delete()
def add_physical_port(context, record_dict):
"""Insert a new physical port."""
session = context.session
with session.begin(subtransactions=True):
physical_port = models.PhysicalPorts(
uuid=record_dict['uuid'],
name=record_dict['name'],
physical_switch_id=record_dict['physical_switch_id'],
ovsdb_identifier=record_dict['ovsdb_identifier'],
port_fault_status=record_dict['port_fault_status'])
session.add(physical_port)
def update_physical_ports_status(context, record_dict):
"""Update physical port fault status."""
with context.session.begin(subtransactions=True):
(context.session.query(models.PhysicalPorts).
filter(models.PhysicalPorts.uuid == record_dict['uuid']).
update({'port_fault_status': record_dict['port_fault_status']},
synchronize_session=False))
def update_physical_switch_status(context, record_dict):
"""Update physical switch fault status."""
with context.session.begin(subtransactions=True):
(context.session.query(models.PhysicalSwitches).
filter(models.PhysicalSwitches.uuid == record_dict['uuid']).
update({'switch_fault_status': record_dict['switch_fault_status']},
synchronize_session=False))
def delete_physical_port(context, record_dict):
"""Delete physical port that matches the supplied uuid."""
session = context.session
with session.begin(subtransactions=True):
if(record_dict['uuid']):
session.query(models.PhysicalPorts).filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).delete()
def add_ucast_mac_local(context, record_dict):
"""Insert a new ucast mac local."""
session = context.session
with session.begin(subtransactions=True):
ucast_mac_local = models.UcastMacsLocals(
uuid=record_dict['uuid'],
mac=record_dict['mac'],
logical_switch_id=record_dict['logical_switch_id'],
physical_locator_id=record_dict['physical_locator_id'],
ip_address=record_dict['ip_address'],
ovsdb_identifier=record_dict['ovsdb_identifier'])
session.add(ucast_mac_local)
def delete_ucast_mac_local(context, record_dict):
"""Delete ucast mac local that matches the supplied uuid."""
session = context.session
with session.begin(subtransactions=True):
if(record_dict['uuid']):
session.query(models.UcastMacsLocals).filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).delete()
def add_ucast_mac_remote(context, record_dict):
"""Insert a new ucast mac remote."""
session = context.session
with session.begin(subtransactions=True):
ucast_mac_remote = models.UcastMacsRemotes(
uuid=record_dict['uuid'],
mac=record_dict['mac'],
logical_switch_id=record_dict['logical_switch_id'],
physical_locator_id=record_dict['physical_locator_id'],
ip_address=record_dict['ip_address'],
ovsdb_identifier=record_dict['ovsdb_identifier'])
session.add(ucast_mac_remote)
def update_ucast_mac_remote(context, rec_dict):
"""Update ucast mac remote."""
try:
with context.session.begin(subtransactions=True):
(context.session.query(models.UcastMacsRemotes).filter_by(
uuid=rec_dict['uuid'],
ovsdb_identifier=rec_dict['ovsdb_identifier']).update(
{'physical_locator_id': rec_dict['physical_locator_id'],
'ip_address': rec_dict['ip_address']},
synchronize_session=False))
except exc.NoResultFound:
LOG.debug('no Remote mac found for %s and %s',
rec_dict['uuid'],
rec_dict['ovsdb_identifier'])
def delete_ucast_mac_remote(context, record_dict):
"""Delete ucast mac remote that matches the supplied uuid."""
session = context.session
with session.begin(subtransactions=True):
if(record_dict['uuid']):
session.query(models.UcastMacsRemotes).filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).delete()
def get_physical_port(context, record_dict):
"""Get physical port that matches the uuid and ovsdb_identifier."""
try:
query = context.session.query(models.PhysicalPorts)
physical_port = query.filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no physical port found for %s and %s',
record_dict['uuid'],
record_dict['ovsdb_identifier'])
return
return physical_port
def get_logical_switch(context, record_dict):
"""Get logical switch that matches the uuid and ovsdb_identifier."""
try:
query = context.session.query(models.LogicalSwitches)
logical_switch = query.filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no logical switch found for %s and %s',
record_dict['uuid'],
record_dict['ovsdb_identifier'])
return
return logical_switch
def get_all_logical_switches_by_name(context, name):
"""Get logical switch that matches the supplied name."""
query = context.session.query(models.LogicalSwitches)
return query.filter_by(name=name).all()
def get_ucast_mac_remote(context, record_dict):
"""Get ucast macs remote that matches the uuid and ovsdb_identifier."""
try:
query = context.session.query(models.UcastMacsRemotes)
remote_mac = query.filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no Remote mac found for %s and %s',
record_dict['uuid'],
record_dict['ovsdb_identifier'])
return
return remote_mac
def get_ucast_mac_local(context, record_dict):
"""Get ucast macs local that matches the uuid and ovsdb_identifier."""
try:
query = context.session.query(models.UcastMacsLocals)
local_mac = query.filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no Local mac found for %s and %s',
record_dict['uuid'],
record_dict['ovsdb_identifier'])
return
return local_mac
def get_ucast_mac_remote_by_mac_and_ls(context, record_dict):
"""Get ucast macs remote that matches the MAC address and
ovsdb_identifier.
"""
try:
query = context.session.query(models.UcastMacsRemotes)
remote_mac = query.filter_by(
mac=record_dict['mac'],
ovsdb_identifier=record_dict['ovsdb_identifier'],
logical_switch_id=record_dict['logical_switch_uuid']).one()
except exc.NoResultFound:
LOG.debug('no Remote mac found for %s and %s',
record_dict['mac'],
record_dict['logical_switch_uuid'])
return
return remote_mac
def get_physical_switch(context, record_dict):
"""Get physical switch that matches the uuid and ovsdb_identifier."""
try:
query = context.session.query(models.PhysicalSwitches)
physical_switch = query.filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no physical switch found for %s and %s',
record_dict['uuid'],
record_dict['ovsdb_identifier'])
return
return physical_switch
def get_physical_locator(context, record_dict):
"""Get physical locator that matches the supplied uuid."""
try:
query = context.session.query(models.PhysicalLocators)
physical_locator = query.filter_by(
uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no physical locator found for %s and %s',
record_dict['uuid'],
record_dict['ovsdb_identifier'])
return
return physical_locator
def get_physical_locator_by_dst_ip(context, record_dict):
"""Get physical locator that matches the supplied destination IP."""
try:
query = context.session.query(models.PhysicalLocators)
physical_locator = query.filter_by(
dst_ip=record_dict['dst_ip'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no physical locator found for %s and %s',
record_dict['dst_ip'],
record_dict['ovsdb_identifier'])
return
return physical_locator
def get_logical_switch_by_name(context, record_dict):
"""Get logical switch that matches the supplied name."""
try:
query = context.session.query(models.LogicalSwitches)
logical_switch = query.filter_by(
name=record_dict['logical_switch_name'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no logical switch found for %s and %s',
record_dict['logical_switch_name'],
record_dict['ovsdb_identifier'])
return
return logical_switch
def get_all_vlan_bindings_by_physical_port(context, record_dict):
"""Get vlan bindings that matches the supplied physical port."""
query = context.session.query(models.VlanBindings)
return query.filter_by(
port_uuid=record_dict['uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).all()
def get_vlan_binding(context, record_dict):
"""Get vlan bindings that matches the supplied physical port."""
try:
query = context.session.query(models.VlanBindings)
vlan_binding = query.filter_by(
port_uuid=record_dict['port_uuid'],
vlan=record_dict['vlan'],
logical_switch_uuid=record_dict['logical_switch_uuid'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no vlan binding found for %s and %s',
record_dict['port_uuid'],
record_dict['ovsdb_identifier'])
return
return vlan_binding
def get_physical_switch_by_name(context, name):
"""Get logical switch that matches the supplied name."""
query = context.session.query(models.PhysicalSwitches)
return query.filter_by(name=name).first()
def get_physical_port_by_name_and_ps(context, record_dict):
"""Get vlan bindings that matches the supplied physical port."""
try:
query = context.session.query(models.PhysicalPorts)
physical_port = query.filter_by(
name=record_dict['interface_name'],
physical_switch_id=record_dict['physical_switch_id'],
ovsdb_identifier=record_dict['ovsdb_identifier']).one()
except exc.NoResultFound:
LOG.debug('no physical port found for %s and %s',
record_dict['physical_switch_id'],
record_dict['interface_name'])
return
return physical_port
def get_all_physical_switches_by_ovsdb_id(context, ovsdb_identifier):
"""Get Physical Switches that match the supplied ovsdb identifier."""
query = context.session.query(models.PhysicalSwitches)
return query.filter_by(
ovsdb_identifier=ovsdb_identifier).all()
def get_all_logical_switches_by_ovsdb_id(context, ovsdb_identifier):
"""Get logical Switches that match the supplied ovsdb identifier."""
query = context.session.query(models.LogicalSwitches)
return query.filter_by(
ovsdb_identifier=ovsdb_identifier).all()
def get_all_vlan_bindings_by_logical_switch(context, record_dict):
"""Get Vlan bindings that match the supplied logical switch."""
query = context.session.query(models.VlanBindings)
return query.filter_by(
logical_switch_uuid=record_dict['logical_switch_id'],
ovsdb_identifier=record_dict['ovsdb_identifier']).all()
def add_pending_ucast_mac_remote(context, operation,
ovsdb_identifier,
logical_switch_id,
physical_locator,
mac_remotes):
"""Insert a pending ucast_mac_remote (insert/update/delete)."""
session = context.session
with session.begin(subtransactions=True):
for mac in mac_remotes:
pending_mac = models.PendingUcastMacsRemote(
uuid=mac.get('uuid', None),
mac=mac['mac'],
logical_switch_uuid=logical_switch_id,
vm_ip=mac['ip_address'],
ovsdb_identifier=ovsdb_identifier,
operation=operation,
timestamp=timeutils.utcnow())
if physical_locator:
pending_mac['dst_ip'] = physical_locator.get('dst_ip', None)
pending_mac['locator_uuid'] = physical_locator.get('uuid',
None)
session.add(pending_mac)
def delete_pending_ucast_mac_remote(context, operation,
ovsdb_identifier,
logical_switch_id,
mac_remote):
"""Delete a pending ucast_mac_remote."""
session = context.session
with session.begin(subtransactions=True):
if(mac_remote and logical_switch_id
and ovsdb_identifier and operation):
query = session.query(models.PendingUcastMacsRemote).filter_by(
mac=mac_remote,
ovsdb_identifier=ovsdb_identifier,
logical_switch_uuid=logical_switch_id,
operation=operation)
row_count = query.count()
query.delete()
return row_count
def get_pending_ucast_mac_remote(context, ovsdb_identifier, mac,
logical_switch_uuid):
"""Get pending mac that matches the supplied parameters."""
try:
query = context.session.query(models.PendingUcastMacsRemote)
pending_mac = query.filter_by(
ovsdb_identifier=ovsdb_identifier,
logical_switch_uuid=logical_switch_uuid,
mac=mac).one()
return pending_mac
except exc.NoResultFound:
return
def get_all_pending_remote_macs_in_asc_order(context, ovsdb_identifier):
"""Get all the pending remote macs in ascending order of timestamp."""
session = context.session
with session.begin():
return session.query(
models.PendingUcastMacsRemote
).filter_by(ovsdb_identifier=ovsdb_identifier
).order_by(
asc(models.PendingUcastMacsRemote.timestamp)).all()
def get_all_ucast_mac_remote_by_ls(context, record_dict):
"""Get ucast macs remote that matches ls_id and ovsdb_identifier."""
session = context.session
with session.begin():
return session.query(models.UcastMacsRemotes).filter_by(
ovsdb_identifier=record_dict['ovsdb_identifier'],
logical_switch_id=record_dict['logical_switch_id']).all()
def delete_all_physical_locators_by_ovsdb_identifier(context,
ovsdb_identifier):
"""Delete all physical locators based on ovsdb identifier."""
session = context.session
with session.begin(subtransactions=True):
session.query(models.PhysicalLocators).filter_by(
ovsdb_identifier=ovsdb_identifier).delete()
def delete_all_physical_switches_by_ovsdb_identifier(context,
ovsdb_identifier):
"""Delete all physical switches based on ovsdb identifier."""
session = context.session
with session.begin(subtransactions=True):
session.query(models.PhysicalSwitches).filter_by(
ovsdb_identifier=ovsdb_identifier).delete()
def delete_all_physical_ports_by_ovsdb_identifier(context,
ovsdb_identifier):
"""Delete all physical ports based on ovsdb identifier."""
session = context.session
with session.begin(subtransactions=True):
session.query(models.PhysicalPorts).filter_by(
ovsdb_identifier=ovsdb_identifier).delete()
def delete_all_logical_switches_by_ovsdb_identifier(context,
ovsdb_identifier):
"""Delete all physical switches based on ovsdb identifier."""
session = context.session
with session.begin(subtransactions=True):
session.query(models.LogicalSwitches).filter_by(
ovsdb_identifier=ovsdb_identifier).delete()
def delete_all_ucast_macs_locals_by_ovsdb_identifier(context,
ovsdb_identifier):
"""Delete all ucast mac locals based on ovsdb identifier."""
session = context.session
with session.begin(subtransactions=True):
session.query(models.UcastMacsLocals).filter_by(
ovsdb_identifier=ovsdb_identifier).delete()
def delete_all_ucast_macs_remotes_by_ovsdb_identifier(context,
ovsdb_identifier):
"""Delete all ucast mac remotes based on ovsdb identifier."""
session = context.session
with session.begin(subtransactions=True):
session.query(models.UcastMacsRemotes).filter_by(
ovsdb_identifier=ovsdb_identifier).delete()
def delete_all_vlan_bindings_by_ovsdb_identifier(context,
ovsdb_identifier):
"""Delete all vlan bindings based on ovsdb identifier."""
session = context.session
with session.begin(subtransactions=True):
session.query(models.VlanBindings).filter_by(
ovsdb_identifier=ovsdb_identifier).delete()
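A hedged usage sketch for the helpers above; `context` is assumed to be a Neutron admin context and the uuid values are placeholders for rows reported by the OVSDB monitor.

# The vlan-binding helpers key every lookup on the same flat record dict.
vlan_binding = {
    'port_uuid': '11111111-1111-1111-1111-111111111111',
    'vlan': 100,
    'logical_switch_uuid': '22222222-2222-2222-2222-222222222222',
    'ovsdb_identifier': 'ovsdb1',
}
# add_vlan_binding(context, vlan_binding)
# binding = get_vlan_binding(context, vlan_binding)   # None if no row matches
# delete_vlan_binding(context, vlan_binding)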


@ -1,99 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron_lib.db import model_base
import sqlalchemy as sa
class PhysicalLocators(model_base.BASEV2):
__tablename__ = 'physical_locators'
uuid = sa.Column(sa.String(36), nullable=False, primary_key=True)
dst_ip = sa.Column(sa.String(64), nullable=True)
ovsdb_identifier = sa.Column(sa.String(64), nullable=False,
primary_key=True)
class PhysicalSwitches(model_base.BASEV2):
__tablename__ = 'physical_switches'
uuid = sa.Column(sa.String(36), nullable=False, primary_key=True)
name = sa.Column(sa.String(255), nullable=True)
tunnel_ip = sa.Column(sa.String(64), nullable=True)
ovsdb_identifier = sa.Column(sa.String(64), nullable=False,
primary_key=True)
switch_fault_status = sa.Column(sa.String(length=32), nullable=True)
class PhysicalPorts(model_base.BASEV2):
__tablename__ = 'physical_ports'
uuid = sa.Column(sa.String(36), nullable=False, primary_key=True)
name = sa.Column(sa.String(255), nullable=True)
physical_switch_id = sa.Column(sa.String(36), nullable=True)
ovsdb_identifier = sa.Column(sa.String(64), nullable=False,
primary_key=True)
port_fault_status = sa.Column(sa.String(length=32), nullable=True)
class LogicalSwitches(model_base.BASEV2):
__tablename__ = 'logical_switches'
uuid = sa.Column(sa.String(36), nullable=False, primary_key=True)
name = sa.Column(sa.String(255), nullable=True)
key = sa.Column(sa.Integer, nullable=True)
ovsdb_identifier = sa.Column(sa.String(64), nullable=False,
primary_key=True)
class UcastMacsLocals(model_base.BASEV2):
__tablename__ = 'ucast_macs_locals'
uuid = sa.Column(sa.String(36), nullable=False, primary_key=True)
mac = sa.Column(sa.String(32), nullable=True)
logical_switch_id = sa.Column(sa.String(36), nullable=True)
physical_locator_id = sa.Column(sa.String(36), nullable=True)
ip_address = sa.Column(sa.String(64), nullable=True)
ovsdb_identifier = sa.Column(sa.String(64), nullable=False,
primary_key=True)
class UcastMacsRemotes(model_base.BASEV2):
__tablename__ = 'ucast_macs_remotes'
uuid = sa.Column(sa.String(36), nullable=False, primary_key=True)
mac = sa.Column(sa.String(32), nullable=True)
logical_switch_id = sa.Column(sa.String(36), nullable=True)
physical_locator_id = sa.Column(sa.String(36), nullable=True)
ip_address = sa.Column(sa.String(64), nullable=True)
ovsdb_identifier = sa.Column(sa.String(64), nullable=False,
primary_key=True)
class VlanBindings(model_base.BASEV2):
__tablename__ = 'vlan_bindings'
port_uuid = sa.Column(sa.String(36), nullable=False, primary_key=True)
vlan = sa.Column(sa.Integer, nullable=False, primary_key=True)
logical_switch_uuid = sa.Column(sa.String(36), nullable=False,
primary_key=True)
ovsdb_identifier = sa.Column(sa.String(64), nullable=False,
primary_key=True)
class PendingUcastMacsRemote(model_base.BASEV2, model_base.HasId):
__tablename__ = 'pending_ucast_macs_remotes'
uuid = sa.Column(sa.String(36), nullable=True)
mac = sa.Column(sa.String(32), nullable=False)
logical_switch_uuid = sa.Column(sa.String(36), nullable=False)
locator_uuid = sa.Column(sa.String(36), nullable=True)
dst_ip = sa.Column(sa.String(64))
vm_ip = sa.Column(sa.String(64))
ovsdb_identifier = sa.Column(sa.String(64), nullable=False)
operation = sa.Column(sa.String(8), nullable=False)
timestamp = sa.Column(sa.DateTime, nullable=False)
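A hedged query sketch: most of the tables above use a composite (uuid, ovsdb_identifier) key, so lookups need both values. The `ctx` argument is an assumed Neutron context; the import path matches the one used by the helper module above.

from networking_l2gw.db.l2gateway.ovsdb import models

def find_logical_switch(ctx, ls_uuid, ovsdb_id):
    """Sketch only: fetch one logical switch row by its composite key."""
    return ctx.session.query(models.LogicalSwitches).filter_by(
        uuid=ls_uuid, ovsdb_identifier=ovsdb_id).first()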


@ -1 +0,0 @@
This directory contains the migration scripts for the networking_l2gw project.


@ -1,99 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from logging import config as logging_config
from alembic import context
from neutron_lib.db import model_base
from oslo_config import cfg
from oslo_db.sqlalchemy import session
import sqlalchemy as sa
from sqlalchemy import event
from neutron.db.migration.alembic_migrations import external
from neutron.db.migration.models import head # noqa
MYSQL_ENGINE = None
L2GW_VERSION_TABLE = 'l2gw_alembic_version'
config = context.config
neutron_config = config.neutron_config
logging_config.fileConfig(config.config_file_name)
target_metadata = model_base.BASEV2.metadata
def set_mysql_engine():
try:
mysql_engine = neutron_config.command.mysql_engine
except cfg.NoSuchOptError:
mysql_engine = None
global MYSQL_ENGINE
MYSQL_ENGINE = (mysql_engine or
model_base.BASEV2.__table_args__['mysql_engine'])
def include_object(object, name, type_, reflected, compare_to):
if type_ == 'table' and name in external.TABLES:
return False
else:
return True
def run_migrations_offline():
set_mysql_engine()
kwargs = dict()
if neutron_config.database.connection:
kwargs['url'] = neutron_config.database.connection
else:
kwargs['dialect_name'] = neutron_config.database.engine
kwargs['include_object'] = include_object
kwargs['version_table'] = L2GW_VERSION_TABLE
context.configure(**kwargs)
with context.begin_transaction():
context.run_migrations()
@event.listens_for(sa.Table, 'after_parent_attach')
def set_storage_engine(target, parent):
if MYSQL_ENGINE:
target.kwargs['mysql_engine'] = MYSQL_ENGINE
def run_migrations_online():
set_mysql_engine()
engine = session.create_engine(neutron_config.database.connection)
connection = engine.connect()
context.configure(
connection=connection,
target_metadata=target_metadata,
include_object=include_object,
version_table=L2GW_VERSION_TABLE
)
try:
with context.begin_transaction():
context.run_migrations()
finally:
connection.close()
engine.dispose()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()


@ -1,36 +0,0 @@
# Copyright ${create_date.year} <PUT YOUR NAME/COMPANY HERE>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
% if branch_labels:
branch_labels = ${repr(branch_labels)}
% endif
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}


@ -1,96 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""l2gateway_models
Revision ID: 42438454c556
Revises: 54c9c8fe22bf
Create Date: 2014-11-27 01:57:56.997665
"""
# revision identifiers, used by Alembic.
revision = '42438454c556'
down_revision = '54c9c8fe22bf'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table('l2gateways',
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('name', sa.String(length=255), nullable=True),
sa.Column('tenant_id', sa.String(length=255),
nullable=True),
sa.PrimaryKeyConstraint('id'))
op.create_table('l2gatewaydevices',
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('device_name', sa.String(length=255),
nullable=False),
sa.Column('l2_gateway_id', sa.String(length=36),
nullable=False),
sa.ForeignKeyConstraint(['l2_gateway_id'],
['l2gateways.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'))
op.create_table('l2gatewayinterfaces',
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('interface_name', sa.String(length=255),
nullable=True),
sa.Column('segmentation_id', sa.Integer(),
nullable=True),
sa.Column('device_id', sa.String(length=36),
nullable=False),
sa.ForeignKeyConstraint(['device_id'],
['l2gatewaydevices.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'))
op.create_table('l2gatewayconnections',
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('tenant_id', sa.String(length=255),
nullable=True),
sa.Column('l2_gateway_id', sa.String(length=36),
nullable=True),
sa.Column('network_id', sa.String(length=36),
nullable=False),
sa.Column('segmentation_id', sa.Integer(),
nullable=True),
sa.ForeignKeyConstraint(['l2_gateway_id'],
['l2gateways.id'],
ondelete='CASCADE'),
sa.ForeignKeyConstraint(['network_id'], ['networks.id'],
ondelete='CASCADE'),
sa.UniqueConstraint('l2_gateway_id',
'network_id'),
sa.PrimaryKeyConstraint('id'))
op.create_table('pending_ucast_macs_remotes',
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('uuid', sa.String(length=36), nullable=True),
sa.Column('mac', sa.String(32), nullable=False),
sa.Column('logical_switch_uuid', sa.String(36),
nullable=False),
sa.Column('locator_uuid', sa.String(36),
nullable=True),
sa.Column('dst_ip', sa.String(64)),
sa.Column('vm_ip', sa.String(64)),
sa.Column('ovsdb_identifier', sa.String(64),
nullable=False),
sa.Column('operation', sa.String(8), nullable=False),
sa.Column('timestamp', sa.DateTime, nullable=False))


@ -1,105 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""DB_Models_for_OVSDB_Hardware_VTEP_Schema
Revision ID: 54c9c8fe22bf
Revises: 42438454c556
Create Date: 2015-01-27 02:05:21.599215
"""
# revision identifiers, used by Alembic.
revision = '54c9c8fe22bf'
down_revision = 'start_networking_l2gw'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table('physical_locators',
sa.Column('dst_ip', sa.String(length=64), nullable=True),
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('ovsdb_identifier', sa.String(length=64),
nullable=False),
sa.PrimaryKeyConstraint('uuid', 'ovsdb_identifier'))
op.create_table('physical_switches',
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('name', sa.String(length=255), nullable=True),
sa.Column('tunnel_ip', sa.String(length=64),
nullable=True),
sa.Column('ovsdb_identifier', sa.String(length=64),
nullable=False),
sa.Column('switch_fault_status', sa.String(length=32),
nullable=True),
sa.PrimaryKeyConstraint('uuid', 'ovsdb_identifier'))
op.create_table('physical_ports',
sa.Column('name', sa.String(length=255), nullable=True),
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('physical_switch_id', sa.String(length=36),
nullable=True),
sa.Column('ovsdb_identifier', sa.String(length=64),
nullable=False),
sa.Column('port_fault_status', sa.String(length=32),
nullable=True),
sa.PrimaryKeyConstraint('uuid', 'ovsdb_identifier'))
op.create_table('logical_switches',
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('name', sa.String(length=255), nullable=True),
sa.Column('key', sa.Integer(), nullable=True),
sa.Column('ovsdb_identifier', sa.String(length=64),
nullable=False),
sa.PrimaryKeyConstraint('uuid', 'ovsdb_identifier'))
op.create_table('ucast_macs_locals',
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('mac', sa.String(length=32), nullable=True),
sa.Column('logical_switch_id', sa.String(length=36),
nullable=True),
sa.Column('physical_locator_id', sa.String(length=36),
nullable=True),
sa.Column('ip_address', sa.String(length=64),
nullable=True),
sa.Column('ovsdb_identifier', sa.String(length=64),
nullable=False),
sa.PrimaryKeyConstraint('uuid', 'ovsdb_identifier'))
op.create_table('ucast_macs_remotes',
sa.Column('uuid', sa.String(length=36), nullable=False),
sa.Column('mac', sa.String(length=32), nullable=True),
sa.Column('logical_switch_id', sa.String(length=36),
nullable=True),
sa.Column('physical_locator_id', sa.String(length=36),
nullable=True),
sa.Column('ip_address', sa.String(length=64),
nullable=True),
sa.Column('ovsdb_identifier', sa.String(length=64),
nullable=False),
sa.PrimaryKeyConstraint('uuid', 'ovsdb_identifier'))
op.create_table('vlan_bindings',
sa.Column('port_uuid', sa.String(length=36),
nullable=False),
sa.Column('vlan', sa.Integer(), nullable=False),
sa.Column('logical_switch_uuid', sa.String(length=36),
nullable=False),
sa.Column('ovsdb_identifier', sa.String(length=64),
nullable=False),
sa.PrimaryKeyConstraint('port_uuid', 'ovsdb_identifier',
'vlan', 'logical_switch_uuid'))


@ -1,29 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""kilo
Revision ID: kilo
Revises: 42438454c556
Create Date: 2015-04-16 00:00:00.000000
"""
# revision identifiers, used by Alembic.
revision = 'kilo'
down_revision = '42438454c556'
def upgrade():
"""A no-op migration for marking the Kilo release."""
pass


@ -1,36 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Initial no-op Liberty contract rule.
Revision ID: 79919185aa99
Revises: kilo
Create Date: 2015-07-16 00:00:00.000000
"""
from neutron.db import migration
from neutron.db.migration import cli
# revision identifiers, used by Alembic.
revision = '79919185aa99'
down_revision = 'kilo'
branch_labels = (cli.CONTRACT_BRANCH,)
# milestone identifier, used by neutron-db-manage
neutron_milestone = [migration.LIBERTY, migration.MITAKA]
def upgrade():
pass


@ -1,36 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Initial no-op Liberty expand rule.
Revision ID: 60019185aa99
Revises: kilo
Create Date: 2015-07-16 00:00:00.000000
"""
from neutron.db import migration
from neutron.db.migration import cli
# revision identifiers, used by Alembic.
revision = '60019185aa99'
down_revision = 'kilo'
branch_labels = (cli.EXPAND_BRANCH,)
# milestone identifier, used by neutron-db-manage
neutron_milestone = [migration.LIBERTY, migration.MITAKA]
def upgrade():
pass


@ -1,138 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""rename tenant to project
Revision ID: 2f533f7705dd
Create Date: 2016-07-15 01:53:43.922235
"""
# revision identifiers, used by Alembic.
revision = '2f533f7705dd'
down_revision = '79919185aa99'
depends_on = ('49ce408ac349',)
from alembic import op
import sqlalchemy as sa
from sqlalchemy.engine import reflection
from neutron.db import migration
_INSPECTOR = None
# milestone identifier, used by neutron-db-manage
neutron_milestone = [migration.NEWTON, migration.OCATA]
def get_inspector():
"""Reuse inspector."""
global _INSPECTOR
if _INSPECTOR:
return _INSPECTOR
else:
bind = op.get_bind()
_INSPECTOR = reflection.Inspector.from_engine(bind)
return _INSPECTOR
def get_tables():
"""Returns hardcoded list of tables which have ``tenant_id`` column.
The list is hard-coded to match the state of the schema when this upgrade
script is run.
"""
tables = [
'l2gateways',
'l2gatewayconnections',
]
return tables
def get_columns(table):
"""Returns list of columns for given table."""
inspector = get_inspector()
return inspector.get_columns(table)
def get_data():
"""Returns combined list of tuples: [(table, column)].
The list is built from tables with a tenant_id column.
"""
output = []
tables = get_tables()
for table in tables:
columns = get_columns(table)
for column in columns:
if column['name'] == 'tenant_id':
output.append((table, column))
return output
def alter_column(table, column):
old_name = 'tenant_id'
new_name = 'project_id'
op.alter_column(
table_name=table,
column_name=old_name,
new_column_name=new_name,
existing_type=column['type'],
existing_nullable=column['nullable']
)
def recreate_index(index, table_name):
old_name = index['name']
new_name = old_name.replace('tenant', 'project')
op.drop_index(op.f(old_name), table_name)
op.create_index(new_name, table_name, ['project_id'])
def upgrade():
inspector = get_inspector()
data = get_data()
for table, column in data:
alter_column(table, column)
indexes = inspector.get_indexes(table)
for index in indexes:
if 'tenant_id' in index['name']:
recreate_index(index, table)
def contract_creation_exceptions():
"""Special migration for the blueprint to support Keystone V3.
We drop all tenant_id columns and create project_id columns instead.
"""
return {
sa.Column: ['.'.join([table, 'project_id']) for table in get_tables()],
sa.Index: get_tables()
}


@ -1,37 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add indexes to tenant_id
Revision ID: 49ce408ac349
Create Date: 2016-07-22 10:42:14.495451
"""
from alembic import op
from neutron.db import migration
# revision identifiers, used by Alembic.
revision = '49ce408ac349'
down_revision = '60019185aa99'
# milestone identifier, used by neutron-db-manage
neutron_milestone = [migration.NEWTON, migration.OCATA]
def upgrade():
for table in ['l2gateways', 'l2gatewayconnections']:
op.create_index(op.f('ix_%s_tenant_id' % table),
table, ['tenant_id'], unique=False)


@ -1,30 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""start networking-l2gw chain
Revision ID: start_networking_l2gw
Revises: None
Create Date: 2015-02-04 11:06:18.196062
"""
# revision identifiers, used by Alembic.
revision = 'start_networking_l2gw'
down_revision = None
def upgrade():
pass


@ -1,111 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from neutron_lib.api import extensions as api_extensions
from neutron_lib.api import validators
from neutron.api import extensions
from neutron.api.v2 import resource_helper
from networking_l2gw import extensions as l2gw_extensions
from networking_l2gw.services.l2gateway.common import constants
from networking_l2gw.services.l2gateway.common import l2gw_validators
extensions.append_api_extensions_path(l2gw_extensions.__path__)
RESOURCE_ATTRIBUTE_MAP = {
constants.L2_GATEWAYS: {
'id': {'allow_post': False, 'allow_put': False,
'is_visible': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'devices': {'allow_post': True, 'allow_put': True,
'validate': {'type:l2gwdevice_list': None},
'is_visible': True},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True}
},
}
validators.add_validator('l2gwdevice_list',
l2gw_validators.validate_gwdevice_list)
class L2gateway(api_extensions.ExtensionDescriptor):
"""API extension for Layer-2 Gateway support."""
@classmethod
def get_name(cls):
return "L2 Gateway"
@classmethod
def get_alias(cls):
return "l2-gateway"
@classmethod
def get_description(cls):
return "Connects Neutron networks with external networks at layer 2."
@classmethod
def get_updated(cls):
return "2015-01-01T00:00:00-00:00"
@classmethod
def get_resources(cls):
"""Returns Ext Resources."""
plural_mappings = resource_helper.build_plural_mappings(
{}, RESOURCE_ATTRIBUTE_MAP)
resources = resource_helper.build_resource_info(plural_mappings,
RESOURCE_ATTRIBUTE_MAP,
constants.L2GW)
return resources
def get_extended_resources(self, version):
if version == "2.0":
return RESOURCE_ATTRIBUTE_MAP
else:
return {}
class L2GatewayPluginBase(object):
@abc.abstractmethod
def create_l2_gateway(self, context, l2_gateway):
pass
@abc.abstractmethod
def get_l2_gateway(self, context, id, fields=None):
pass
@abc.abstractmethod
def delete_l2_gateway(self, context, id):
pass
@abc.abstractmethod
def get_l2_gateways(self, context, filters=None, fields=None,
sorts=None, limit=None, marker=None,
page_reverse=False):
pass
@abc.abstractmethod
def update_l2_gateway(self, context, id, l2_gateway):
pass

View File

@ -1,110 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from neutron_lib.api import extensions
from neutron.api.v2 import resource_helper
from networking_l2gw.services.l2gateway.common import constants
RESOURCE_ATTRIBUTE_MAP = {
constants.L2_GATEWAYS_CONNECTION: {
'id': {'allow_post': False, 'allow_put': False,
'is_visible': True},
'l2_gateway_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'network_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True},
'segmentation_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True}
},
}
class L2gatewayconnection(extensions.ExtensionDescriptor):
"""API extension for Layer-2 Gateway connection support."""
@classmethod
def get_name(cls):
return "L2 Gateway connection"
@classmethod
def get_alias(cls):
return "l2-gateway-connection"
@classmethod
def get_description(cls):
return "Connects Neutron networks with external networks at layer 2."
@classmethod
def get_updated(cls):
return "2014-01-01T00:00:00-00:00"
@classmethod
def get_resources(cls):
"""Returns Ext Resources."""
mem_actions = {}
plural_mappings = resource_helper.build_plural_mappings(
{}, RESOURCE_ATTRIBUTE_MAP)
resources = resource_helper.build_resource_info(plural_mappings,
RESOURCE_ATTRIBUTE_MAP,
constants.L2GW,
action_map=mem_actions,
register_quota=True,
translate_name=True)
return resources
def get_extended_resources(self, version):
if version == "2.0":
return RESOURCE_ATTRIBUTE_MAP
else:
return {}
class L2GatewayConnectionPluginBase(object):
@abc.abstractmethod
def delete_l2_gateway_connection(self, context, l2_gateway_id,
network_mapping_list):
pass
@abc.abstractmethod
def create_l2_gateway_connection(self, context, l2_gateway_id,
network_mapping_list):
pass
@abc.abstractmethod
def get_l2_gateway_connections(self, context, filters=None,
fields=None,
sorts=None, limit=None, marker=None,
page_reverse=False):
pass
@abc.abstractmethod
def get_l2_gateway_connection(self, context, id, fields=None):
pass

View File

@ -1,160 +0,0 @@
# Copyright 2015 OpenStack Foundation.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from neutronclient.common import extension
from neutronclient.common import utils
from oslo_serialization import jsonutils
from networking_l2gw._i18n import _
INTERFACE_DELIMITER = ";"
SEGMENTATION_ID_DELIMITER = "#"
INTERFACE_SEG_ID_DELIMITER = "|"
def _format_devices(l2_gateway):
try:
return '\n'.join([jsonutils.dumps(gateway) for gateway in
l2_gateway['devices']])
except (TypeError, KeyError):
return ''
class L2Gateway(extension.NeutronClientExtension):
resource = 'l2_gateway'
resource_plural = 'l2_gateways'
path = 'l2-gateways'
object_path = '/%s' % path
resource_path = '/%s/%%s' % path
versions = ['2.0']
def get_interface(interfaces):
interface_dict = []
for interface in interfaces:
if INTERFACE_SEG_ID_DELIMITER in interface:
int_name = interface.split(INTERFACE_SEG_ID_DELIMITER)[0]
segid = interface.split(INTERFACE_SEG_ID_DELIMITER)[1]
if SEGMENTATION_ID_DELIMITER in segid:
segid = segid.split(SEGMENTATION_ID_DELIMITER)
else:
segid = [segid]
interface_detail = {'name': int_name, 'segmentation_id': segid}
else:
interface_detail = {'name': interface}
interface_dict.append(interface_detail)
return interface_dict
def add_known_arguments(self, parser):
parser.add_argument(
'--device',
metavar='name=name,interface_names=INTERFACE-DETAILS',
action='append', dest='devices', type=utils.str2dict,
help=_('Device name and Interface-names of l2gateway. '
'INTERFACE-DETAILS is of form '
'\"<interface_name1>;[<interface_name2>]'
'[|<seg_id1>[#<seg_id2>]]\" '
'(--device option can be repeated)'))
def args2body(self, parsed_args):
if parsed_args.devices:
devices = parsed_args.devices
interfaces = []
else:
devices = []
device_dict = []
for device in devices:
if 'interface_names' in device.keys():
interface = device['interface_names']
if INTERFACE_DELIMITER in interface:
interface_dict = interface.split(INTERFACE_DELIMITER)
interfaces = get_interface(interface_dict)
else:
interfaces = get_interface([interface])
if 'name' in device.keys():
device = {'device_name': device['name'],
'interfaces': interfaces}
else:
device = {'interfaces': interfaces}
device_dict.append(device)
if parsed_args.name:
l2gw_name = parsed_args.name
body = {'l2_gateway': {'name': l2gw_name,
'devices': device_dict}, }
else:
body = {'l2_gateway': {'devices': device_dict}, }
return body
class L2GatewayCreate(extension.ClientExtensionCreate, L2Gateway):
"""Create l2gateway information."""
shell_command = 'l2-gateway-create'
def add_known_arguments(self, parser):
parser.add_argument(
'name', metavar='<GATEWAY-NAME>',
help=_('Descriptive name for logical gateway.'))
add_known_arguments(self, parser)
def args2body(self, parsed_args):
body = args2body(self, parsed_args)
if parsed_args.tenant_id:
body['l2_gateway']['tenant_id'] = parsed_args.tenant_id
return body
class L2GatewayList(extension.ClientExtensionList, L2Gateway):
"""List l2gateway that belongs to a given tenant."""
shell_command = 'l2-gateway-list'
_formatters = {'devices': _format_devices, }
list_columns = ['id', 'name', 'devices']
pagination_support = True
sorting_support = True
class L2GatewayShow(extension.ClientExtensionShow, L2Gateway):
"""Show information of a given l2gateway."""
shell_command = 'l2-gateway-show'
class L2GatewayDelete(extension.ClientExtensionDelete, L2Gateway):
"""Delete a given l2gateway."""
shell_command = 'l2-gateway-delete'
class L2GatewayUpdate(extension.ClientExtensionUpdate, L2Gateway):
"""Update a given l2gateway."""
shell_command = 'l2-gateway-update'
def add_known_arguments(self, parser):
parser.add_argument(
'--name', metavar='name',
help=_('Descriptive name for logical gateway.'))
add_known_arguments(self, parser)
def args2body(self, parsed_args):
if parsed_args.devices:
body = args2body(self, parsed_args)
else:
body = {'l2_gateway': {'name': parsed_args.name}}
return body
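As an aside (not part of the original file), the delimiter conventions parsed by get_interface() above can be illustrated with a small self-contained sketch; the interface names and segmentation IDs below are made-up examples.
# Illustrative sketch only: mirrors how the delimiters above map an
# INTERFACE-DETAILS string to the dictionaries sent to the server.
INTERFACE_DELIMITER = ";"
SEGMENTATION_ID_DELIMITER = "#"
INTERFACE_SEG_ID_DELIMITER = "|"
def parse_interfaces(spec):
    """Parse a string such as "int1|100#200;int2" (hypothetical input)."""
    result = []
    for interface in spec.split(INTERFACE_DELIMITER):
        if INTERFACE_SEG_ID_DELIMITER in interface:
            name, seg_ids = interface.split(INTERFACE_SEG_ID_DELIMITER, 1)
            result.append({'name': name,
                           'segmentation_id':
                               seg_ids.split(SEGMENTATION_ID_DELIMITER)})
        else:
            result.append({'name': interface})
    return result
print(parse_interfaces("int1|100#200;int2"))
# [{'name': 'int1', 'segmentation_id': ['100', '200']}, {'name': 'int2'}]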

View File

@ -1,101 +0,0 @@
# Copyright 2015 OpenStack Foundation.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from neutronclient.common import extension
from neutronclient.neutron import v2_0 as l2gatewayV20
from networking_l2gw._i18n import _
class L2GatewayConnection(extension.NeutronClientExtension):
resource = 'l2_gateway_connection'
resource_plural = 'l2_gateway_connections'
path = 'l2-gateway-connections'
object_path = '/%s' % path
resource_path = '/%s/%%s' % path
versions = ['2.0']
class L2GatewayConnectionCreate(extension.ClientExtensionCreate,
L2GatewayConnection):
"""Create l2gateway-connection information."""
shell_command = 'l2-gateway-connection-create'
def retrieve_ids(self, client, args):
gateway_id = l2gatewayV20.find_resourceid_by_name_or_id(
client, 'l2_gateway', args.gateway_name)
network_id = l2gatewayV20.find_resourceid_by_name_or_id(
client, 'network', args.network)
return (gateway_id, network_id)
def get_parser(self, parser):
parser = super(l2gatewayV20.CreateCommand,
self).get_parser(parser)
parser.add_argument(
'gateway_name', metavar='<GATEWAY-NAME/UUID>',
help=_('Name or UUID of the logical gateway.'))
parser.add_argument(
'network', metavar='<NETWORK-NAME/UUID>',
help=_('Network name or uuid.'))
parser.add_argument(
'--default-segmentation-id',
dest='seg_id',
help=_('default segmentation-id that will '
'be applied to the interfaces for which '
'segmentation id was not specified '
'in l2-gateway-create command.'))
return parser
def args2body(self, args):
neutron_client = self.get_client()
(gateway_id, network_id) = self.retrieve_ids(neutron_client,
args)
body = {'l2_gateway_connection':
{'l2_gateway_id': gateway_id,
'network_id': network_id}}
if args.seg_id:
body['l2_gateway_connection']['segmentation_id'] = args.seg_id
return body
class L2GatewayConnectionList(extension.ClientExtensionList,
L2GatewayConnection):
"""List l2gateway-connections."""
shell_command = 'l2-gateway-connection-list'
list_columns = ['id', 'l2_gateway_id', 'network_id', 'segmentation_id']
pagination_support = True
sorting_support = True
class L2GatewayConnectionShow(extension.ClientExtensionShow,
L2GatewayConnection):
"""Show information of a given l2gateway-connection."""
shell_command = 'l2-gateway-connection-show'
allow_names = False
class L2GatewayConnectionDelete(extension.ClientExtensionDelete,
L2GatewayConnection):
"""Delete a given l2gateway-connection."""
shell_command = 'l2-gateway-connection-delete'
allow_names = False

View File

@ -1,42 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.common import rpc as n_rpc
import oslo_messaging as messaging
class L2GatewayAgentApi(object):
"""Agent side of the Agent to Plugin RPC API."""
API_VERSION = '1.0'
def __init__(self, topic, host):
self.host = host
target = messaging.Target(topic=topic, version=self.API_VERSION)
self.client = n_rpc.get_client(target)
def update_ovsdb_changes(self, context, activity, ovsdb_data):
cctxt = self.client.prepare()
return cctxt.cast(context,
'update_ovsdb_changes',
activity=activity,
ovsdb_data=ovsdb_data)
def notify_ovsdb_states(self, context, ovsdb_states):
cctxt = self.client.prepare()
return cctxt.cast(context,
'notify_ovsdb_states',
ovsdb_states=ovsdb_states)

View File

@ -1,96 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.agent import rpc as agent_rpc
from neutron_lib import context
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import loopingcall
from oslo_service import periodic_task
from networking_l2gw.services.l2gateway.agent import agent_api
from networking_l2gw.services.l2gateway.common import constants as n_const
from networking_l2gw.services.l2gateway.common import topics
LOG = logging.getLogger(__name__)
class BaseAgentManager(periodic_task.PeriodicTasks):
"""Basic agent manager that handles basic RPCs and report states."""
def __init__(self, conf=None):
conf = getattr(self, "conf", cfg.CONF)
super(BaseAgentManager, self).__init__(conf)
self.l2gw_agent_type = ''
self.gateways = {}
self.plugin_rpc = agent_api.L2GatewayAgentApi(
topics.L2GATEWAY_PLUGIN,
self.conf.host
)
self._get_agent_state()
self.admin_state_up = True
self._setup_state_rpc()
def _get_agent_state(self):
self.agent_state = {
'binary': 'neutron-l2gateway-agent',
'host': self.conf.host,
'topic': topics.L2GATEWAY_AGENT,
'configurations': {
'report_interval': self.conf.AGENT.report_interval,
n_const.L2GW_AGENT_TYPE: self.l2gw_agent_type,
},
'start_flag': True,
'agent_type': n_const.AGENT_TYPE_L2GATEWAY}
def _setup_state_rpc(self):
self.state_rpc = agent_rpc.PluginReportStateAPI(
topics.L2GATEWAY_PLUGIN)
report_interval = self.conf.AGENT.report_interval
if report_interval:
heartbeat = loopingcall.FixedIntervalLoopingCall(
self._report_state)
heartbeat.start(interval=report_interval)
def _report_state(self):
try:
ctx = context.get_admin_context_without_session()
self.state_rpc.report_state(ctx, self.agent_state,
True)
self.agent_state['start_flag'] = False
except Exception:
LOG.exception("Failed reporting state!")
self.handle_report_state_failure()
def handle_report_state_failure(self):
pass
def agent_updated(self, context, payload):
LOG.info("agent_updated by server side %s!", payload)
def set_monitor_agent(self, context, hostname):
"""Handle RPC call from plugin to update agent type.
The plugin tells this agent whether it is the monitoring agent
or a transact agent. This is a fanout cast message.
"""
if hostname == self.conf.host:
self.l2gw_agent_type = n_const.MONITOR
else:
self.l2gw_agent_type = ''
self.agent_state.get('configurations')[n_const.L2GW_AGENT_TYPE
] = self.l2gw_agent_type

View File

@ -1,37 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from networking_l2gw.services.l2gateway.common import constants as n_const
OVSDB_IP = 'ovsdb_ip'
OVSDB_PORT = 'ovsdb_port'
PRIVATE_KEY = 'private_key'
USE_SSL = 'use_ssl'
CERTIFICATE = 'certificate'
CA_CERT = 'ca_cert'
class L2GatewayConfig(object):
def __init__(self, ovsdb_config):
self.use_ssl = False
if ovsdb_config.get(USE_SSL, None):
self.use_ssl = ovsdb_config[USE_SSL]
self.private_key = ovsdb_config[PRIVATE_KEY]
self.certificate = ovsdb_config[CERTIFICATE]
self.ca_cert = ovsdb_config[CA_CERT]
self.ovsdb_identifier = ovsdb_config[n_const.OVSDB_IDENTIFIER]
self.ovsdb_ip = ovsdb_config[OVSDB_IP]
self.ovsdb_port = ovsdb_config[OVSDB_PORT]
self.ovsdb_fd = None
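For illustration only (not part of the original file), a minimal non-SSL configuration dictionary of the shape this class expects could look like the following; the identifier, IP address, and port are placeholder values, and the real mapping is built by the OVSDB manager from the ovsdb_hosts option in l2gateway_agent.ini.
# Placeholder values for illustration; keys come from the constants above.
sample_ovsdb_config = {
    n_const.OVSDB_IDENTIFIER: 'ovsdb1',
    OVSDB_IP: '192.0.2.10',
    OVSDB_PORT: '6632',
}
sample_gateway = L2GatewayConfig(sample_ovsdb_config)
assert sample_gateway.use_ssl is False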

View File

@ -1,36 +0,0 @@
Other approaches for the L2 gateway agent to communicate with the OVSDB server
------------------------------------------------------------------------------
For the L2 gateway agent to communicate with an OVSDB server, the IDL class provided by the Open vSwitch library was explored.
The source code for the IDL class can be found at:
1. https://github.com/osrg/openvswitch/tree/master/python/ovs/db/idl.py
2. Alternatively, the source code can be downloaded from http://openvswitch.org/download/ as a tarball.
Here are the findings (a minimal sketch of the IDL-based approach follows this section).
1. The IDL class maintains a socket connection to a given OVSDB server and keeps a local cache of the OVSDB tables.
2. Whenever the OVSDB tables change, the cache is updated automatically.
3. The caller passes the required data structures to perform a transaction on the OVSDB tables.
Advantages:
1. The logic for maintaining connections with the OVSDB servers is simple.
2. There is no need to write extra logic to perform transactions on the OVSDB tables.
Only the rows to be inserted/modified/deleted have to be supplied.
Disadvantages:
1. To perform a transaction on a table, the caller has to register that table with IDL so that the local cache is maintained. Without registering, a transaction cannot be performed.
2. Because of this, every transact L2 gateway agent would have to maintain an unnecessary local cache of the OVSDB tables (which is updated automatically whenever the OVSDB table state changes).
3. With the current approach, when the Monitor agent receives notifications of OVSDB changes, it immediately sends an RPC to the plugin describing the changes. The IDL class receives event notifications for OVSDB table updates and processes them internally, so we may not be able to invoke RPCs to the plugin if the IDL class is used.
4. It violates our basic separation of transact and monitor agents.
5. After browsing the code, no option was found in this Python binding to provide SSL keys/certificates so as to open an SSL connection/stream to the OVSDB server.
https://github.com/osrg/openvswitch/blob/master/python/ovs/socket_util.py
https://github.com/osrg/openvswitch/blob/master/python/ovs/stream.py
6. We may have to package the openvswitch 2.3.1 library with the agent.
Advantages of the current L2 gateway agent code over the IDL library:
1. It is lightweight and does not maintain a local cache of the OVSDB tables.
2. It complies with the proposed spec/architecture.
3. As it implements RFC 7047 (http://tools.ietf.org/html/rfc7047), code maintenance is simple.
4. Changing the code to use IDL at the last moment is a risk (it involves development from scratch, a change in the agent architecture, and testing).
We can always enhance the agent code to use the IDL class in the future.
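Minimal sketch of the IDL-based approach discussed above, assuming the stock Open vSwitch Python bindings (ovs.db.idl, ovs.poller), a recent version in which Idl accepts a SchemaHelper, and a locally installed hardware_vtep schema file; the schema path and endpoint are placeholders, and this is an illustration rather than the agent's actual implementation.
# Illustrative sketch only; paths and endpoints are placeholders.
import ovs.poller
from ovs.db import idl
SCHEMA_PATH = '/usr/share/openvswitch/vtep.ovsschema'  # placeholder path
helper = idl.SchemaHelper(SCHEMA_PATH)
helper.register_all()  # every table we transact on must be registered
monitor = idl.Idl('tcp:192.0.2.20:6632', helper)  # placeholder endpoint
seqno = monitor.change_seqno
while True:
    monitor.run()  # refresh the local cache of the OVSDB tables
    if monitor.change_seqno != seqno:
        seqno = monitor.change_seqno
        # Here the agent would translate the cached table contents
        # into an RPC to the plugin.
    poller = ovs.poller.Poller()
    monitor.wait(poller)
    poller.block()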

View File

@ -1,62 +0,0 @@
# Copyright (c) 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class API(object):
def __init__(self, context):
self.context = context
@abc.abstractmethod
def transaction(self, check_error=False, log_errors=True, **kwargs):
"""Create a transaction
:param check_error: Allow the transaction to raise an exception?
:type check_error: bool
:param log_errors: Log an error if the transaction fails?
:type log_errors: bool
:returns: A new transaction
:rtype: :class:`Transaction`
"""
@abc.abstractmethod
def db_find(self, table, *conditions, **kwargs):
"""Create a command to return find OVSDB records matching conditions
:param table: The OVS table to query
:type table: string
:param conditions:The conditions to satisfy the query
:type conditions: 3-tuples containing (column, operation, match)
Type of 'match' parameter MUST be identical to column
type
Examples:
atomic: ('tag', '=', 7)
map: ('external_ids', '=', {'iface-id': 'xxx'})
field exists?
('external_ids', '!=', {'iface-id', ''})
set contains?:
('protocols', '{>=}', 'OpenFlow13')
See the ovs-vsctl man page for more operations
:param columns: Limit results to only columns, None means all columns
:type columns: list of column names or None
:returns: :class:`Command` with [{'column', value}, ...] result
"""
@abc.abstractmethod
def get_physical_sw_list(self):
"""Create a command to list Physical Switches."""

View File

@ -1,309 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import os.path
import socket
import ssl
import time
import eventlet
from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import excutils
from networking_l2gw.services.l2gateway.common import constants as n_const
LOG = logging.getLogger(__name__)
OVSDB_UNREACHABLE_MSG = 'Unable to reach OVSDB server %s'
OVSDB_CONNECTED_MSG = 'Connected to OVSDB server %s'
class BaseConnection(object):
"""Connects to OVSDB server.
Connects to an ovsdb server with/without SSL
on a given host and TCP port.
"""
def __init__(self, conf, gw_config, mgr=None):
self.responses = []
self.connected = False
self.mgr = mgr
self.enable_manager = cfg.CONF.ovsdb.enable_manager
if self.enable_manager:
self.manager_table_listening_port = (
cfg.CONF.ovsdb.manager_table_listening_port)
self.ip_ovsdb_mapping = self._get_ovsdb_ip_mapping()
self.s = None
self.check_c_sock = None
self.check_sock_rcv = False
self.ovsdb_dicts = {}
self.ovsdb_fd_states = {}
self.ovsdb_conn_list = []
eventlet.greenthread.spawn(self._rcv_socket)
else:
self.gw_config = gw_config
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if gw_config.use_ssl:
ssl_sock = ssl.wrap_socket(
self.socket,
server_side=False,
keyfile=gw_config.private_key,
certfile=gw_config.certificate,
cert_reqs=ssl.CERT_REQUIRED,
ssl_version=ssl.PROTOCOL_TLSv1,
ca_certs=gw_config.ca_cert)
self.socket = ssl_sock
retryCount = 0
while True:
try:
self.socket.connect((str(gw_config.ovsdb_ip),
int(gw_config.ovsdb_port)))
break
except (socket.error, socket.timeout):
LOG.warning(OVSDB_UNREACHABLE_MSG, gw_config.ovsdb_ip)
if retryCount == conf.max_connection_retries:
# Retried for max_connection_retries times.
# Give up and return so that it can be tried in
# the next periodic interval.
with excutils.save_and_reraise_exception(reraise=True):
LOG.exception("Socket error in connecting to "
"the OVSDB server")
else:
time.sleep(1)
retryCount += 1
# Successfully connected to the socket
LOG.debug(OVSDB_CONNECTED_MSG, gw_config.ovsdb_ip)
self.connected = True
def _get_ovsdb_ip_mapping(self):
ovsdb_ip_mapping = {}
ovsdb_hosts = cfg.CONF.ovsdb.ovsdb_hosts
if ovsdb_hosts != '':
ovsdb_hosts = ovsdb_hosts.split(',')
for host in ovsdb_hosts:
host_splits = str(host).split(':')
ovsdb_identifier = str(host_splits[0]).strip()
ovsdb_ip = str(host_splits[1]).strip()
ovsdb_ip_mapping[ovsdb_ip] = ovsdb_identifier
return ovsdb_ip_mapping
def _is_ssl_configured(self, addr, client_sock):
priv_key_path = cfg.CONF.ovsdb.l2_gw_agent_priv_key_base_path
cert_path = cfg.CONF.ovsdb.l2_gw_agent_cert_base_path
ca_cert_path = cfg.CONF.ovsdb.l2_gw_agent_ca_cert_base_path
use_ssl = priv_key_path and cert_path and ca_cert_path
if use_ssl:
LOG.debug("ssl is enabled with priv_key_path %s, cert_path %s, "
"ca_cert_path %s", priv_key_path,
cert_path, ca_cert_path)
if addr in self.ip_ovsdb_mapping.keys():
ovsdb_id = self.ip_ovsdb_mapping.get(addr)
priv_key_file = priv_key_path + "/" + ovsdb_id + ".key"
cert_file = cert_path + "/" + ovsdb_id + ".cert"
ca_cert_file = ca_cert_path + "/" + ovsdb_id + ".ca_cert"
is_priv_key = os.path.isfile(priv_key_file)
is_cert_file = os.path.isfile(cert_file)
is_ca_cert_file = os.path.isfile(ca_cert_file)
if is_priv_key and is_cert_file and is_ca_cert_file:
ssl_conn_stream = ssl.wrap_socket(
client_sock,
server_side=True,
keyfile=priv_key_file,
certfile=cert_file,
ssl_version=ssl.PROTOCOL_SSLv23,
ca_certs=ca_cert_file)
client_sock = ssl_conn_stream
else:
if not is_priv_key:
LOG.error("Could not find private key in"
" %(path)s dir, expecting in the "
"file name %(file)s ",
{'path': priv_key_path,
'file': ovsdb_id + ".key"})
if not is_cert_file:
LOG.error("Could not find cert in %(path)s dir, "
"expecting in the file name %(file)s",
{'path': cert_path,
'file': ovsdb_id + ".cert"})
if not is_ca_cert_file:
LOG.error("Could not find cacert in %(path)s "
"dir, expecting in the file name "
"%(file)s",
{'path': ca_cert_path,
'file': ovsdb_id + ".ca_cert"})
else:
LOG.error("you have enabled SSL for ovsdb %s, "
"expecting the ovsdb identifier and ovdb IP "
"entry in ovsdb_hosts in l2gateway_agent.ini",
addr)
return client_sock
def _rcv_socket(self):
# Create a socket object.
self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = '' # Bind on all local interfaces
port = self.manager_table_listening_port
# Listen on the port configured for the OVSDB manager table.
self.s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.s.bind((host, port)) # Bind to the port
self.s.listen(5) # Now wait for client connection.
while True:
# Establish connection with client.
c_sock, ip_addr = self.s.accept()
addr = ip_addr[0]
c_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
c_sock = self._is_ssl_configured(addr, c_sock)
LOG.debug("Got connection from %s ", addr)
self.connected = True
if addr in self.ovsdb_fd_states.keys():
del self.ovsdb_fd_states[addr]
if addr in self.ovsdb_conn_list:
self.ovsdb_conn_list.remove(addr)
if addr in self.ovsdb_dicts.keys():
if self.ovsdb_dicts.get(addr):
self.ovsdb_dicts.get(addr).close()
del self.ovsdb_dicts[addr]
self.ovsdb_dicts[addr] = c_sock
eventlet.greenthread.spawn(self._common_sock_rcv_thread, addr)
# The OVSDB server has initiated the socket connection, so wait for
# its echo request. After the first echo request, we will send the
# "monitor" request to the OVSDB server.
def _send_monitor_msg_to_ovsdb_connection(self, addr):
if self.mgr.l2gw_agent_type == n_const.MONITOR:
try:
if (self.mgr.ovsdb_fd) and (addr in self.ovsdb_conn_list):
eventlet.greenthread.spawn_n(
self.mgr.ovsdb_fd._spawn_monitor_table_thread,
addr)
except Exception:
LOG.warning("Could not send monitor message to the "
"OVSDB server.")
self.disconnect(addr)
def _common_sock_rcv_thread(self, addr):
chunks = []
lc = rc = 0
prev_char = None
self.read_on = True
check_monitor_msg = True
self._echo_response(addr)
if self.enable_manager and (addr in self.ovsdb_conn_list):
while self.read_on:
response = self.ovsdb_dicts.get(addr).recv(n_const.BUFFER_SIZE)
self.ovsdb_fd_states[addr] = 'connected'
self.check_sock_rcv = True
eventlet.greenthread.sleep(0)
if check_monitor_msg:
self._send_monitor_msg_to_ovsdb_connection(addr)
check_monitor_msg = False
if response:
response = response.decode('utf8')
message_mark = 0
for i, c in enumerate(response):
if c == '{' and not (prev_char and
prev_char == '\\'):
lc += 1
elif c == '}' and not (prev_char and
prev_char == '\\'):
rc += 1
if rc > lc:
raise Exception("json string not valid")
elif lc == rc and lc != 0:
chunks.append(response[message_mark:i + 1])
message = "".join(chunks)
eventlet.greenthread.spawn_n(
self._on_remote_message, message, addr)
eventlet.greenthread.sleep(0)
lc = rc = 0
message_mark = i + 1
chunks = []
prev_char = c
chunks.append(response[message_mark:])
else:
self.read_on = False
self.disconnect(addr)
def _echo_response(self, addr):
while True:
try:
if self.enable_manager:
eventlet.greenthread.sleep(0)
response = self.ovsdb_dicts.get(addr).recv(
n_const.BUFFER_SIZE)
sock_json_m = jsonutils.loads(response)
sock_handler_method = sock_json_m.get('method', None)
if sock_handler_method == 'echo':
self.check_c_sock = True
self.ovsdb_dicts.get(addr).send(jsonutils.dumps(
{"result": sock_json_m.get("params", None),
"error": None, "id": sock_json_m['id']}))
if (addr not in self.ovsdb_conn_list):
self.ovsdb_conn_list.append(addr)
break
except Exception:
continue
def send(self, message, callback=None, addr=None):
"""Sends a message to the OVSDB server."""
if callback:
self.callbacks[message['id']] = callback
retry_count = 0
bytes_sent = 0
while retry_count <= n_const.MAX_RETRIES:
try:
if self.enable_manager:
bytes_sent = self.ovsdb_dicts.get(addr).send(
jsonutils.dumps(message))
else:
bytes_sent = self.socket.send(jsonutils.dumps(message))
if bytes_sent:
return True
except Exception as ex:
LOG.exception("Exception [%s] occurred while sending "
"message to the OVSDB server", ex)
retry_count += 1
LOG.warning("Could not send message to the "
"OVSDB server.")
self.disconnect(addr)
return False
def disconnect(self, addr=None):
"""disconnects the connection from the OVSDB server."""
if self.enable_manager:
self.ovsdb_dicts.get(addr).close()
del self.ovsdb_dicts[addr]
if addr in self.ovsdb_fd_states.keys():
del self.ovsdb_fd_states[addr]
self.ovsdb_conn_list.remove(addr)
else:
self.socket.close()
self.connected = False
def _response(self, operation_id):
x_copy = None
to_delete = None
for x in self.responses:
if x['id'] == operation_id:
x_copy = copy.deepcopy(x)
to_delete = x
break
if to_delete:
self.responses.remove(to_delete)
return x_copy
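The receive loops above frame messages by counting unescaped braces in the stream; as a standalone illustration (not part of the original file), the same framing logic can be expressed as follows.
def split_json_stream(buffered_text):
    """Split concatenated JSON objects, as received from an OVSDB
    server, into complete messages plus any trailing partial data.
    Mirrors the brace-counting framing used in the receive threads."""
    messages = []
    depth = 0
    start = 0
    prev_char = None
    for i, char in enumerate(buffered_text):
        if char == '{' and prev_char != '\\':
            depth += 1
        elif char == '}' and prev_char != '\\':
            depth -= 1
            if depth == 0:
                messages.append(buffered_text[start:i + 1])
                start = i + 1
        prev_char = char
    return messages, buffered_text[start:]
print(split_json_stream('{"method": "echo", "params": []}{"id": 1'))
# (['{"method": "echo", "params": []}'], '{"id": 1')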

View File

@ -1,58 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ovsdbapp.backend.ovs_idl.transaction import Transaction
from networking_l2gw.services.l2gateway.agent.ovsdb import api
from networking_l2gw.services.l2gateway.agent.ovsdb.native import (
commands as cmd)
from networking_l2gw.services.l2gateway.agent.ovsdb.native import connection
class OvsdbHardwareVtepIdl(api.API):
def __init__(self, context, ovsdb_conn, timeout):
self.context = context
self.timeout = timeout
self.ovsdb_connection = connection.Connection(ovsdb_conn, timeout,
'hardware_vtep')
if self.is_passive(ovsdb_conn):
self.ovsdb_connection.accept()
else:
self.ovsdb_connection.start()
self.idl = self.ovsdb_connection.idl
@property
def _ovs(self):
return list(self._tables['Global'].rows.values())[0]
@property
def _tables(self):
return self.idl.tables
def is_passive(self, ovsdb_conn):
return ovsdb_conn.startswith("punix:") or ovsdb_conn.startswith(
"ptcp:")
def transaction(self, check_error=False, log_errors=True, **kwargs):
return Transaction(self, self.ovsdb_connection,
self.timeout,
check_error, log_errors)
def db_find(self, table, *conditions, **kwargs):
pass
def get_physical_sw_list(self):
return cmd.ListPhysicalSwitchCommand(self)
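Illustrative usage of this class (not part of the original file), assuming a reachable hardware_vtep OVSDB endpoint and that ovsdbapp commands are run via their execute() method; the endpoint string and timeout below are placeholders.
# Illustration only; endpoint and timeout are placeholder values.
from neutron_lib import context
api = OvsdbHardwareVtepIdl(context.get_admin_context(),
                           'tcp:192.0.2.20:6632', timeout=10)
# ListPhysicalSwitchCommand populates self.result inside run_idl(), and
# execute() returns that result once the command has run against the IDL.
switch_names = api.get_physical_sw_list().execute(check_error=True)
print(switch_names)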

View File

@ -1,394 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from contextlib import contextmanager
import os.path
import eventlet
from neutron_lib import context as ctx
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import loopingcall
from networking_l2gw.services.l2gateway.agent import base_agent_manager
from networking_l2gw.services.l2gateway.agent import l2gateway_config
from networking_l2gw.services.l2gateway.agent.ovsdb import ovsdb_common_class
from networking_l2gw.services.l2gateway.agent.ovsdb import ovsdb_monitor
from networking_l2gw.services.l2gateway.agent.ovsdb import ovsdb_writer
from networking_l2gw.services.l2gateway.common import constants as n_const
LOG = logging.getLogger(__name__)
class OVSDBManager(base_agent_manager.BaseAgentManager):
"""OVSDB variant of agent manager.
Listens to state change notifications from OVSDB servers and
handles transactions (RPCs) destined to OVSDB servers.
"""
def __init__(self, conf=None):
super(OVSDBManager, self).__init__(conf)
self._extract_ovsdb_config(conf)
self.enable_manager = cfg.CONF.ovsdb.enable_manager
if self.enable_manager:
self.ovsdb_fd = None
self._sock_open_connection()
self.looping_task_ovsdb_states = (
loopingcall.FixedIntervalLoopingCall(self._send_ovsdb_states))
else:
self.looping_task = loopingcall.FixedIntervalLoopingCall(
self._connect_to_ovsdb_server)
def _extract_ovsdb_config(self, conf):
self.conf = conf or cfg.CONF
ovsdb_hosts = self.conf.ovsdb.ovsdb_hosts
if ovsdb_hosts != '':
ovsdb_hosts = ovsdb_hosts.split(',')
for host in ovsdb_hosts:
self._process_ovsdb_host(host)
# Ensure that max_connection_retries is less than
# the periodic interval.
if (self.conf.ovsdb.max_connection_retries >=
self.conf.ovsdb.periodic_interval):
raise SystemExit("max_connection_retries should be "
"less than periodic interval")
def _process_ovsdb_host(self, host):
try:
host_splits = str(host).split(':')
ovsdb_identifier = str(host_splits[0]).strip()
ovsdb_conf = {n_const.OVSDB_IDENTIFIER: ovsdb_identifier,
'ovsdb_ip': str(host_splits[1]).strip(),
'ovsdb_port': str(host_splits[2]).strip()}
priv_key_path = self.conf.ovsdb.l2_gw_agent_priv_key_base_path
cert_path = self.conf.ovsdb.l2_gw_agent_cert_base_path
ca_cert_path = self.conf.ovsdb.l2_gw_agent_ca_cert_base_path
use_ssl = priv_key_path and cert_path and ca_cert_path
if use_ssl:
LOG.debug("ssl is enabled with priv_key_path %s, cert_path "
"%s, ca_cert_path %s", priv_key_path,
cert_path, ca_cert_path)
priv_key_file = priv_key_path + "/" + ovsdb_identifier + ".key"
cert_file = cert_path + "/" + ovsdb_identifier + ".cert"
ca_cert_file = (ca_cert_path + "/" + ovsdb_identifier
+ ".ca_cert")
is_priv_key = os.path.isfile(priv_key_file)
is_cert_file = os.path.isfile(cert_file)
is_ca_cert_file = os.path.isfile(ca_cert_file)
if not is_priv_key:
LOG.exception("Could not find private key in "
"%(path)s dir, expecting in the "
"file name %(file)s ",
{'path': priv_key_path,
'file': ovsdb_identifier + ".key"})
if not is_cert_file:
LOG.exception("Could not find cert in %(path)s dir, "
"expecting in the file name %(file)s",
{'path': cert_path,
'file': ovsdb_identifier + ".cert"})
if not is_ca_cert_file:
LOG.exception("Could not find cacert in %(path)s "
"dir, expecting in the file name "
"%(file)s",
{'path': ca_cert_path,
'file': ovsdb_identifier + ".ca_cert"})
ssl_ovsdb = {'use_ssl': True,
'private_key':
"/".join([str(priv_key_path),
'.'.join([str(host_splits[0]).
strip(),
'key'])]),
'certificate':
"/".join([str(cert_path),
'.'.join([str(host_splits[0]).
strip(), 'cert'])]),
'ca_cert':
"/".join([str(ca_cert_path),
'.'.join([str(host_splits[0]).
strip(), 'ca_cert'])])
}
ovsdb_conf.update(ssl_ovsdb)
LOG.debug("ovsdb_conf = %s", str(ovsdb_conf))
gateway = l2gateway_config.L2GatewayConfig(ovsdb_conf)
self.gateways[ovsdb_identifier] = gateway
except Exception as ex:
LOG.exception("Exception %(ex)s occurred while processing "
"host %(host)s", {'ex': ex, 'host': host})
def _connect_to_ovsdb_server(self):
"""Initializes the connection to the OVSDB servers."""
ovsdb_states = {}
if self.gateways and self.l2gw_agent_type == n_const.MONITOR:
for key in self.gateways.keys():
gateway = self.gateways.get(key)
ovsdb_fd = gateway.ovsdb_fd
if not (ovsdb_fd and ovsdb_fd.connected):
LOG.debug("OVSDB server %s is disconnected",
str(gateway.ovsdb_ip))
try:
ovsdb_fd = ovsdb_monitor.OVSDBMonitor(
self.conf.ovsdb,
gateway,
self.agent_to_plugin_rpc)
except Exception:
ovsdb_states[key] = 'disconnected'
# Log an error and continue so that it can be
# retried in the next iteration.
LOG.error("OVSDB server %s is not "
"reachable", gateway.ovsdb_ip)
# Continue processing the next element in the list.
continue
gateway.ovsdb_fd = ovsdb_fd
try:
eventlet.greenthread.spawn_n(
ovsdb_fd.set_monitor_response_handler)
except Exception as ex:
raise SystemExit(ex)
if ovsdb_fd and ovsdb_fd.connected:
ovsdb_states[key] = 'connected'
LOG.debug("Calling notify_ovsdb_states")
self.plugin_rpc.notify_ovsdb_states(ctx.get_admin_context(),
ovsdb_states)
def handle_report_state_failure(self):
# Not able to deliver the heart beats to the Neutron server.
# Let us change the mode to Transact so that when the
# Neutron server is connected back, it will make an agent
# Monitor agent and OVSDB data will be read entirely. This way,
# the OVSDB data in Neutron database will be the latest and in
# sync with that in the OVSDB server tables.
if self.l2gw_agent_type == n_const.MONITOR:
self.l2gw_agent_type = ''
self.agent_state.get('configurations')[n_const.L2GW_AGENT_TYPE
] = self.l2gw_agent_type
if not self.enable_manager:
self._stop_looping_task()
self._disconnect_all_ovsdb_servers()
def _send_ovsdb_states(self):
self.plugin_rpc.notify_ovsdb_states(ctx.get_admin_context(),
self.ovsdb_fd.ovsdb_fd_states)
def _disconnect_all_ovsdb_servers(self):
if self.gateways:
for key, gateway in self.gateways.items():
ovsdb_fd = gateway.ovsdb_fd
if ovsdb_fd and ovsdb_fd.connected:
gateway.ovsdb_fd.disconnect()
def set_monitor_agent(self, context, hostname):
"""Handle RPC call from plugin to update agent type.
The plugin tells this agent whether it is the monitoring agent
or a transact agent. This is a fanout cast message.
"""
super(OVSDBManager, self).set_monitor_agent(context, hostname)
# If set to Monitor, then let us start monitoring the OVSDB
# servers without any further delay.
if ((self.l2gw_agent_type == n_const.MONITOR) and not (
self.enable_manager)):
self._start_looping_task()
elif self.enable_manager and self.l2gw_agent_type == n_const.MONITOR:
if self.ovsdb_fd is None:
self._sock_open_connection()
elif ((self.ovsdb_fd) and not (
self.ovsdb_fd.check_monitor_table_thread) and (
self.ovsdb_fd.check_sock_rcv)):
for key in self.ovsdb_fd.ovsdb_dicts.keys():
if key in self.ovsdb_fd.ovsdb_conn_list:
eventlet.greenthread.spawn_n(
self.ovsdb_fd._spawn_monitor_table_thread, key)
self._start_looping_task_ovsdb_states()
elif ((self.enable_manager) and not (
self.l2gw_agent_type == n_const.MONITOR)):
self._stop_looping_task_ovsdb_states()
elif (not (self.l2gw_agent_type == n_const.MONITOR) and not (
self.enable_manager)):
# Otherwise, stop monitoring the OVSDB servers
# and close the open connections if any.
self._stop_looping_task()
self._disconnect_all_ovsdb_servers()
def _stop_looping_task_ovsdb_states(self):
if self.looping_task_ovsdb_states._running:
self.looping_task_ovsdb_states.stop()
def _start_looping_task_ovsdb_states(self):
if not self.looping_task_ovsdb_states._running:
self.looping_task_ovsdb_states.start(
interval=self.conf.ovsdb.periodic_interval)
def _stop_looping_task(self):
if self.looping_task._running:
self.looping_task.stop()
def _start_looping_task(self):
if not self.looping_task._running:
self.looping_task.start(interval=self.conf.ovsdb.
periodic_interval)
def _sock_open_connection(self):
gateway = ''
if self.ovsdb_fd is None:
self.ovsdb_fd = ovsdb_common_class.OVSDB_commom_class(
self.conf.ovsdb,
gateway,
self.agent_to_plugin_rpc, self)
@contextmanager
def _open_connection(self, ovsdb_identifier):
ovsdb_fd = None
gateway = self.gateways.get(ovsdb_identifier)
try:
ovsdb_fd = ovsdb_writer.OVSDBWriter(self.conf.ovsdb,
gateway)
yield ovsdb_fd
finally:
if ovsdb_fd:
ovsdb_fd.disconnect()
def _is_valid_request(self, ovsdb_identifier):
val_req = ovsdb_identifier and ovsdb_identifier in self.gateways.keys()
if not val_req:
LOG.warning(n_const.ERROR_DICT
[n_const.L2GW_INVALID_OVSDB_IDENTIFIER])
return val_req
def delete_network(self, context, ovsdb_identifier, logical_switch_uuid):
"""Handle RPC cast from plugin to delete a network."""
if self.enable_manager and self.l2gw_agent_type == n_const.MONITOR:
self.ovsdb_fd.delete_logical_switch(logical_switch_uuid,
ovsdb_identifier, False)
elif ((self.enable_manager) and (
not self.l2gw_agent_type == n_const.MONITOR)):
self._sock_open_connection()
if ovsdb_identifier in self.ovsdb_fd.ovsdb_conn_list:
self.ovsdb_fd.delete_logical_switch(logical_switch_uuid,
ovsdb_identifier,
False)
elif not self.enable_manager:
if self._is_valid_request(ovsdb_identifier):
with self._open_connection(ovsdb_identifier) as ovsdb_fd:
ovsdb_fd.delete_logical_switch(logical_switch_uuid,
ovsdb_identifier)
def add_vif_to_gateway(self, context, ovsdb_identifier,
logical_switch_dict, locator_dict,
mac_dict):
"""Handle RPC cast from plugin to insert neutron port MACs."""
if self.enable_manager and self.l2gw_agent_type == n_const.MONITOR:
self.ovsdb_fd.insert_ucast_macs_remote(logical_switch_dict,
locator_dict,
mac_dict, ovsdb_identifier,
False)
elif ((self.enable_manager) and (
not self.l2gw_agent_type == n_const.MONITOR)):
self._sock_open_connection()
if ovsdb_identifier in self.ovsdb_fd.ovsdb_conn_list:
self.ovsdb_fd.insert_ucast_macs_remote(
logical_switch_dict,
locator_dict,
mac_dict, ovsdb_identifier, False)
elif not self.enable_manager:
if self._is_valid_request(ovsdb_identifier):
with self._open_connection(ovsdb_identifier) as ovsdb_fd:
ovsdb_fd.insert_ucast_macs_remote(logical_switch_dict,
locator_dict,
mac_dict,
ovsdb_identifier)
def delete_vif_from_gateway(self, context, ovsdb_identifier,
logical_switch_uuid, mac):
"""Handle RPC cast from plugin to delete neutron port MACs."""
if self.enable_manager and self.l2gw_agent_type == n_const.MONITOR:
self.ovsdb_fd.delete_ucast_macs_remote(logical_switch_uuid, mac,
ovsdb_identifier,
False)
elif ((self.enable_manager) and (
not self.l2gw_agent_type == n_const.MONITOR)):
self._sock_open_connection()
if ovsdb_identifier in self.ovsdb_fd.ovsdb_conn_list:
self.ovsdb_fd.delete_ucast_macs_remote(
logical_switch_uuid,
mac, ovsdb_identifier, False)
elif not self.enable_manager:
if self._is_valid_request(ovsdb_identifier):
with self._open_connection(ovsdb_identifier) as ovsdb_fd:
ovsdb_fd.delete_ucast_macs_remote(
logical_switch_uuid, mac, ovsdb_identifier)
def update_vif_to_gateway(self, context, ovsdb_identifier,
locator_dict, mac_dict):
"""Handle RPC cast from plugin to update neutron port MACs.
for VM migration.
"""
if self.enable_manager and self.l2gw_agent_type == n_const.MONITOR:
self.ovsdb_fd.update_ucast_macs_remote(locator_dict,
mac_dict, ovsdb_identifier,
False)
elif ((self.enable_manager) and (
not self.l2gw_agent_type == n_const.MONITOR)):
self._sock_open_connection()
if ovsdb_identifier in self.ovsdb_fd.ovsdb_conn_list:
self.ovsdb_fd.update_ucast_macs_remote(locator_dict,
mac_dict,
ovsdb_identifier, False)
elif not self.enable_manager:
if self._is_valid_request(ovsdb_identifier):
with self._open_connection(ovsdb_identifier) as ovsdb_fd:
ovsdb_fd.update_ucast_macs_remote(
locator_dict, mac_dict, ovsdb_identifier)
def update_connection_to_gateway(self, context, ovsdb_identifier,
logical_switch_dict, locator_dicts,
mac_dicts, port_dicts, op_method):
"""Handle RPC cast from plugin.
Handle RPC cast from plugin to connect/disconnect a network
to/from an L2 gateway.
"""
if self.enable_manager and self.l2gw_agent_type == n_const.MONITOR:
self.ovsdb_fd.update_connection_to_gateway(logical_switch_dict,
locator_dicts,
mac_dicts,
port_dicts,
ovsdb_identifier,
op_method,
False)
elif ((self.enable_manager) and (
not self.l2gw_agent_type == n_const.MONITOR)):
self._sock_open_connection()
if ovsdb_identifier in self.ovsdb_fd.ovsdb_conn_list:
self.ovsdb_fd.update_connection_to_gateway(
logical_switch_dict,
locator_dicts,
mac_dicts,
port_dicts, ovsdb_identifier, op_method, False)
elif not self.enable_manager:
if self._is_valid_request(ovsdb_identifier):
with self._open_connection(ovsdb_identifier) as ovsdb_fd:
ovsdb_fd.update_connection_to_gateway(logical_switch_dict,
locator_dicts,
mac_dicts,
port_dicts,
ovsdb_identifier,
op_method)
def agent_to_plugin_rpc(self, activity, ovsdb_data):
self.plugin_rpc.update_ovsdb_changes(ctx.get_admin_context(),
activity,
ovsdb_data)

View File

@ -1,25 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ovsdbapp.backend.ovs_idl import command as comm
class ListPhysicalSwitchCommand(comm.BaseCommand):
def __init__(self, api):
super(ListPhysicalSwitchCommand, self).__init__(api)
def run_idl(self, txn):
self.result = [x.name for x in
self.api._tables['Physical_Switch'].rows.values()]

View File

@ -1,31 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from ovs.db import idl
from ovsdbapp.backend.ovs_idl import connection as conn
def get_schema_helper_for_vtep():
current_dir = os.path.dirname(os.path.realpath(__file__))
return idl.SchemaHelper(current_dir + '/../vtep/vtep.ovsschema')
class Connection(conn.Connection):
def __init__(self, connection, timeout, schema_name):
idl_ = idl.Idl(connection, get_schema_helper_for_vtep())
super(Connection, self).__init__(idl_, timeout)

View File

@ -1,21 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from networking_l2gw.services.l2gateway.agent.ovsdb import ovsdb_monitor
from networking_l2gw.services.l2gateway.agent.ovsdb import ovsdb_writer
class OVSDB_commom_class(ovsdb_monitor.OVSDBMonitor, ovsdb_writer.OVSDBWriter):
pass

View File

@ -1,70 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class OvsdbObject(object):
def __init__(self, uuid):
self.uuid = uuid
class LogicalSwitch(OvsdbObject):
def __init__(self, uuid, name, description, tunnel_key):
super(LogicalSwitch, self).__init__(uuid)
self.name = name
self.description = description
self.tunnel_key = tunnel_key
class PhysicalLocatorSet(OvsdbObject):
def __init__(self, uuid, locator_uuid_list):
super(PhysicalLocatorSet, self).__init__(uuid)
self.locator_uuid_list = locator_uuid_list
class PhysicalLocator(OvsdbObject):
def __init__(self, uuid,
dst_ip, tunnel_key=None,
encapsulation_type='vxlan_over_ipv4'):
super(PhysicalLocator, self).__init__(uuid)
self.dst_ip = dst_ip
self.encapsulation_type = encapsulation_type
self.tunnel_key = tunnel_key
class UcastMacs(OvsdbObject):
def __init__(self, uuid, mac, ipaddr, logical_switch_uuid, locator_uuid):
super(UcastMacs, self).__init__(uuid)
self.mac = mac
self.ipaddr = ipaddr
self.logical_switch_uuid = logical_switch_uuid
self.locator_uuid = locator_uuid
class McastMacs(OvsdbObject):
def __init__(self, uuid, mac, dst_ip, logical_switch_uuid,
locator_set__uuid):
super(McastMacs, self).__init__(uuid)
self.mac = mac
self.dst_ip = dst_ip
self.logical_switch_uuid = logical_switch_uuid
self.locator_set__uuid = locator_set__uuid
class PhysicalPort(OvsdbObject):
def __init__(self, uuid, name, description, vlan_bindings_dict):
super(PhysicalPort, self).__init__(uuid)
self.name = name
self.description = description
self.vlan_bindings_dict = vlan_bindings_dict

View File

@ -1,611 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
import eventlet
from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import excutils
from networking_l2gw.services.l2gateway.agent.ovsdb import base_connection
from networking_l2gw.services.l2gateway.common import constants as n_const
from networking_l2gw.services.l2gateway.common import ovsdb_schema
from networking_l2gw.services.l2gateway import exceptions
LOG = logging.getLogger(__name__)
class Activity(object):
Initial, Update = range(2)
class OVSDBMonitor(base_connection.BaseConnection):
"""Monitors OVSDB servers."""
def __init__(self, conf, gw_config, callback, mgr=None):
super(OVSDBMonitor, self).__init__(conf, gw_config, mgr=None)
self.mgr = mgr
self.rpc_callback = callback
self.callbacks = {}
self._setup_dispatch_table()
self.read_on = True
self.handlers = {"echo": self._default_echo_handler}
self.sock_timeout = cfg.CONF.ovsdb.socket_timeout
if self.enable_manager:
self.check_monitor_table_thread = False
if not self.enable_manager:
eventlet.greenthread.spawn(self._rcv_thread)
def _spawn_monitor_table_thread(self, addr):
self.set_monitor_response_handler(addr)
self.check_monitor_table_thread = True
def _initialize_data_dict(self):
data_dict = {'new_local_macs': [],
'deleted_local_macs': [],
'modified_local_macs': [],
'new_remote_macs': [],
'deleted_remote_macs': [],
'modified_remote_macs': [],
'new_physical_ports': [],
'deleted_physical_ports': [],
'modified_physical_ports': [],
'new_physical_switches': [],
'deleted_physical_switches': [],
'modified_physical_switches': [],
'new_physical_locators': [],
'deleted_physical_locators': [],
'modified_physical_locators': [],
'new_logical_switches': [],
'deleted_logical_switches': [],
'modified_logical_switches': [],
'new_mlocal_macs': [],
'deleted_mlocal_macs': [],
'modified_mlocal_macs': [],
'new_locator_sets': [],
'deleted_locator_sets': [],
'modified_locator_sets': []}
return data_dict
def _setup_dispatch_table(self):
self.dispatch_table = {'Logical_Switch': self._process_logical_switch,
'Ucast_Macs_Local':
self._process_ucast_macs_local,
'Physical_Locator':
self._process_physical_locator,
'Ucast_Macs_Remote':
self._process_ucast_macs_remote,
'Mcast_Macs_Local':
self._process_mcast_macs_local,
'Physical_Locator_Set':
self._process_physical_locator_set
}
def set_monitor_response_handler(self, addr=None):
"""Monitor OVSDB tables to receive events for any changes in OVSDB."""
if self.connected:
op_id = str(random.getrandbits(128))
props = {'select': {'initial': True,
'insert': True,
'delete': True,
'modify': True}}
monitor_message = {'id': op_id,
'method': 'monitor',
'params': [n_const.OVSDB_SCHEMA_NAME,
None,
{'Logical_Switch': [props],
'Physical_Switch': [props],
'Physical_Port': [props],
'Ucast_Macs_Local': [props],
'Ucast_Macs_Remote': [props],
'Physical_Locator': [props],
'Mcast_Macs_Local': [props],
'Physical_Locator_Set': [props]}
]}
self._set_handler("update", self._update_event_handler)
if not self.send(monitor_message, addr=addr):
# Return so that this will retried in the next iteration
return
try:
response_result = self._process_response(op_id)
except exceptions.OVSDBError:
with excutils.save_and_reraise_exception():
if self.enable_manager:
self.check_monitor_table_thread = False
LOG.exception("Exception while receiving the "
"response for the monitor message")
self._process_monitor_msg(response_result, addr)
def _update_event_handler(self, message, addr):
self._process_update_event(message, addr)
def _process_update_event(self, message, addr):
"""Process update event that is triggered by the OVSDB server."""
LOG.debug("_process_update_event: message = %s ", str(message))
data_dict = self._initialize_data_dict()
if message.get('method') == 'update':
params_list = message.get('params')
param_dict = params_list[1]
self._process_tables(param_dict, data_dict)
self.rpc_callback(Activity.Update,
self._form_ovsdb_data(data_dict, addr))
def _process_tables(self, param_dict, data_dict):
# Process all the tables one by one.
# OVSDB table name is the key in the dictionary.
port_map = {}
for table_name in param_dict.keys():
table_dict = param_dict.get(table_name)
for uuid in table_dict.keys():
uuid_dict = table_dict.get(uuid)
if table_name == 'Physical_Switch':
self._process_physical_switch(uuid,
uuid_dict,
port_map,
data_dict)
elif table_name == 'Physical_Port':
self._process_physical_port(uuid, uuid_dict,
port_map, data_dict)
else:
self.dispatch_table.get(table_name)(uuid, uuid_dict,
data_dict)
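    # Illustrative note (not part of the original file): _process_tables
    # receives a dict keyed by OVSDB table name (params[1] of an 'update'
    # notification, or the 'result' member of the initial monitor reply),
    # where each table maps row UUIDs to their 'old'/'new' column values.
    # A minimal sketch of such a payload, with purely hypothetical UUIDs:
    #
    #   param_dict = {
    #       'Logical_Switch': {
    #           '9f0f...': {'new': {'name': 'ls1', 'tunnel_key': 100,
    #                               'description': ''}},
    #       },
    #       'Physical_Port': {
    #           '1b2c...': {'old': {'name': 'port1'},
    #                       'new': {'name': 'port1',
    #                               'vlan_bindings': ['map', []]}},
    #       },
    #   }
    #
    # Rows with only 'new' are inserts, rows with both 'old' and 'new' are
    # modifications, and rows with only 'old' are deletions, which is how
    # the per-table handlers fill data_dict.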
def _process_response(self, op_id):
result = self._response(op_id)
count = 0
while (not result and count <= n_const.MAX_RETRIES):
count = count + 1
eventlet.greenthread.sleep(0)
result = self._response(op_id)
if not result and count >= n_const.MAX_RETRIES:
raise exceptions.OVSDBError(
message="OVSDB server did not respond within "
"max retry attempts.")
error = result.get("error", None)
if error:
raise exceptions.OVSDBError(
message="Error from the OVSDB server %s" % error
)
return result
def _default_echo_handler(self, message, addr):
"""Message handler for the OVSDB server's echo request."""
self.send({"result": message.get("params", None),
"error": None, "id": message['id']}, addr=addr)
def _set_handler(self, method_name, handler):
self.handlers[method_name] = handler
def _on_remote_message(self, message, addr=None):
"""Processes the message received on the socket."""
try:
json_m = jsonutils.loads(message)
handler_method = json_m.get('method', None)
if handler_method:
self.handlers.get(handler_method)(json_m, addr)
else:
self.responses.append(json_m)
except Exception as e:
LOG.exception("Exception [%s] while handling "
"message", e)
def _rcv_thread(self):
chunks = []
lc = rc = 0
prev_char = None
while self.read_on:
try:
                # self.socket.recv() is a blocking call
                # (if a timeout value is not passed) due to which we cannot
# determine if the remote OVSDB server has died. The remote
# OVSDB server sends echo requests every 4 seconds.
# If there is no echo request on the socket for socket_timeout
                # seconds (by default 30 seconds),
# the agent can safely assume that the connection with the
# remote OVSDB server is lost. Better to retry by reopening
# the socket.
self.socket.settimeout(self.sock_timeout)
response = self.socket.recv(n_const.BUFFER_SIZE)
eventlet.greenthread.sleep(0)
if response:
response = response.decode('utf8')
message_mark = 0
for i, c in enumerate(response):
if c == '{' and not (prev_char and
prev_char == '\\'):
lc += 1
elif c == '}' and not (prev_char and
prev_char == '\\'):
rc += 1
if rc > lc:
raise Exception("json string not valid")
                        elif lc == rc and lc != 0:
chunks.append(response[message_mark:i + 1])
message = "".join(chunks)
eventlet.greenthread.spawn_n(
self._on_remote_message, message)
lc = rc = 0
message_mark = i + 1
chunks = []
prev_char = c
chunks.append(response[message_mark:])
else:
self.read_on = False
self.disconnect()
except Exception as ex:
self.read_on = False
self.disconnect()
LOG.exception("Exception [%s] occurred while receiving"
"message from the OVSDB server", ex)
def disconnect(self, addr=None):
"""disconnects the connection from the OVSDB server."""
self.read_on = False
super(OVSDBMonitor, self).disconnect(addr)
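    # Illustrative note (not part of the original file): _rcv_thread above
    # frames JSON-RPC messages by counting unescaped '{' and '}' characters,
    # since a single recv() may return a partial message or several messages.
    # For example, if two consecutive recv() calls returned the chunks
    #
    #   '{"method": "echo", "params": [], "id": "ec'
    #   'ho"}{"method": "update", "params": [null, {}], "id": null}'
    #
    # the first chunk leaves lc > rc, so it is buffered; the second chunk
    # first completes the echo message (dispatched to _on_remote_message),
    # then completes the update message, and the counters are reset after
    # each dispatch.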
def _process_monitor_msg(self, message, addr=None):
"""Process initial set of records in the OVSDB at startup."""
result_dict = message.get('result')
data_dict = self._initialize_data_dict()
try:
self._process_tables(result_dict, data_dict)
self.rpc_callback(Activity.Initial,
self._form_ovsdb_data(data_dict, addr))
except Exception as e:
LOG.exception("_process_monitor_msg:ERROR %s ", e)
def _get_list(self, resource_list):
return [element.__dict__ for element in resource_list]
def _form_ovsdb_data(self, data_dict, addr):
return {n_const.OVSDB_IDENTIFIER: str(addr) if (
self.enable_manager) else (self.gw_config.ovsdb_identifier),
'new_logical_switches': self._get_list(
data_dict.get('new_logical_switches')),
'new_physical_switches': self._get_list(
data_dict.get('new_physical_switches')),
'new_physical_ports': self._get_list(
data_dict.get('new_physical_ports')),
'new_physical_locators': self._get_list(
data_dict.get('new_physical_locators')),
'new_local_macs': self._get_list(
data_dict.get('new_local_macs')),
'new_remote_macs': self._get_list(
data_dict.get('new_remote_macs')),
'new_mlocal_macs': self._get_list(
data_dict.get('new_mlocal_macs')),
'new_locator_sets': self._get_list(
data_dict.get('new_locator_sets')),
'deleted_logical_switches': self._get_list(
data_dict.get('deleted_logical_switches')),
'deleted_physical_switches': self._get_list(
data_dict.get('deleted_physical_switches')),
'deleted_physical_ports': self._get_list(
data_dict.get('deleted_physical_ports')),
'deleted_physical_locators': self._get_list(
data_dict.get('deleted_physical_locators')),
'deleted_local_macs': self._get_list(
data_dict.get('deleted_local_macs')),
'deleted_remote_macs': self._get_list(
data_dict.get('deleted_remote_macs')),
'deleted_mlocal_macs': self._get_list(
data_dict.get('deleted_mlocal_macs')),
'deleted_locator_sets': self._get_list(
data_dict.get('deleted_locator_sets')),
'modified_logical_switches': self._get_list(
data_dict.get('modified_logical_switches')),
'modified_physical_switches': self._get_list(
data_dict.get('modified_physical_switches')),
'modified_physical_ports': self._get_list(
data_dict.get('modified_physical_ports')),
'modified_physical_locators': self._get_list(
data_dict.get('modified_physical_locators')),
'modified_local_macs': self._get_list(
data_dict.get('modified_local_macs')),
'modified_remote_macs': self._get_list(
data_dict.get('modified_remote_macs')),
'modified_mlocal_macs': self._get_list(
data_dict.get('modified_mlocal_macs')),
'modified_locator_sets': self._get_list(
data_dict.get('modified_locator_sets'))}
def _process_physical_port(self, uuid, uuid_dict, port_map, data_dict):
"""Processes Physical_Port record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
port_fault_status = new_row.get('port_fault_status')
if type(port_fault_status) is list:
port_fault_status = None
port = ovsdb_schema.PhysicalPort(uuid, new_row.get('name'), None,
None,
port_fault_status)
switch_id = port_map.get(uuid, None)
if switch_id:
port.physical_switch_id = switch_id
# Update the vlan bindings
outer_binding_list = new_row.get('vlan_bindings')
# First element is "map"
outer_binding_list.remove(outer_binding_list[0])
vlan_bindings = []
if len(outer_binding_list) > 0:
for binding in outer_binding_list:
if len(binding) > 0:
for element in binding:
vlan = element[0]
inner_most_list = element[1]
ls_id = inner_most_list[1]
vb = ovsdb_schema.VlanBinding(vlan, ls_id).__dict__
vlan_bindings.append(vb)
port.vlan_bindings = vlan_bindings
if old_row:
modified_physical_ports = data_dict.get(
'modified_physical_ports')
modified_physical_ports.append(port)
else:
new_physical_ports = data_dict.get(
'new_physical_ports')
new_physical_ports.append(port)
elif old_row:
# Port is deleted permanently from OVSDB server
port_fault_status = old_row.get('port_fault_status')
if type(port_fault_status) is list:
port_fault_status = None
port = ovsdb_schema.PhysicalPort(uuid, old_row.get('name'), None,
None,
port_fault_status)
deleted_physical_ports = data_dict.get('deleted_physical_ports')
deleted_physical_ports.append(port)
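    # Illustrative note (not part of the original file): in the hardware_vtep
    # wire format, 'vlan_bindings' arrives as an OVSDB "map", e.g. (with
    # hypothetical logical switch UUIDs):
    #
    #   ['map', [[100, ['uuid', '9f0f...']],
    #            [200, ['uuid', '3c4d...']]]]
    #
    # The code above drops the leading 'map' marker and converts each
    # [vlan, ['uuid', <logical_switch_uuid>]] pair into a VlanBinding dict.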
def _process_physical_switch(self, uuid, uuid_dict, port_map, data_dict):
"""Processes Physical_Switch record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
# insert or modify operation
ports = new_row.get('ports')
# First element in the list is either 'set' or 'uuid'
# Let us remove it.
is_set = False
if ports[0] == 'set':
is_set = True
ports.remove(ports[0])
all_ports = []
if not is_set:
all_ports.append(ports[0])
else:
for port_list in ports:
# each port variable is again list
for port in port_list:
for inner_port in port:
if inner_port != 'uuid':
all_ports.append(inner_port)
switch_fault_status = new_row.get('switch_fault_status')
if type(switch_fault_status) is list:
switch_fault_status = None
phys_switch = ovsdb_schema.PhysicalSwitch(
uuid, new_row.get('name'), new_row.get('tunnel_ips'),
switch_fault_status)
            # Now, store the mapping of physical ports to the
            # physical switch so that it can be used while
            # processing the Physical_Port records
for port in all_ports:
port_map[port] = uuid
for pport in data_dict['new_physical_ports']:
if pport.uuid == port:
pport.physical_switch_id = uuid
if old_row:
modified_physical_switches = data_dict.get(
'modified_physical_switches')
modified_physical_switches.append(phys_switch)
else:
new_physical_switches = data_dict.get(
'new_physical_switches')
new_physical_switches.append(phys_switch)
elif old_row:
# Physical switch is deleted permanently from OVSDB
# server
switch_fault_status = old_row.get('switch_fault_status')
if type(switch_fault_status) is list:
switch_fault_status = None
phys_switch = ovsdb_schema.PhysicalSwitch(
uuid, old_row.get('name'), old_row.get('tunnel_ips'),
switch_fault_status)
deleted_physical_switches = data_dict.get(
'deleted_physical_switches')
deleted_physical_switches.append(phys_switch)
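    # Illustrative note (not part of the original file): the 'ports' column
    # of Physical_Switch is either a single reference or an OVSDB "set",
    # e.g. (hypothetical UUIDs):
    #
    #   ['uuid', '1b2c...']                                  # one port
    #   ['set', [['uuid', '1b2c...'], ['uuid', '5e6f...']]]  # several ports
    #
    # which is why the code above strips the leading 'uuid'/'set' marker
    # before collecting the port UUIDs into port_map.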
def _process_logical_switch(self, uuid, uuid_dict, data_dict):
"""Processes Logical_Switch record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
l_switch = ovsdb_schema.LogicalSwitch(uuid,
new_row.get('name'),
new_row.get('tunnel_key'),
new_row.get('description'))
if old_row:
modified_logical_switches = data_dict.get(
'modified_logical_switches')
modified_logical_switches.append(l_switch)
else:
new_logical_switches = data_dict.get(
'new_logical_switches')
new_logical_switches.append(l_switch)
elif old_row:
l_switch = ovsdb_schema.LogicalSwitch(uuid,
old_row.get('name'),
old_row.get('tunnel_key'),
old_row.get('description'))
deleted_logical_switches = data_dict.get(
'deleted_logical_switches')
deleted_logical_switches.append(l_switch)
def _process_ucast_macs_local(self, uuid, uuid_dict, data_dict):
"""Processes Ucast_Macs_Local record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
locator_list = new_row.get('locator')
locator_id = locator_list[1]
logical_switch_list = new_row.get('logical_switch')
logical_switch_id = logical_switch_list[1]
mac_local = ovsdb_schema.UcastMacsLocal(uuid,
new_row.get('MAC'),
logical_switch_id,
locator_id,
new_row.get('ipaddr'))
if old_row:
modified_local_macs = data_dict.get(
'modified_local_macs')
modified_local_macs.append(mac_local)
else:
new_local_macs = data_dict.get(
'new_local_macs')
new_local_macs.append(mac_local)
elif old_row:
# A row from UcastMacLocal is deleted.
logical_switch_list = old_row.get('logical_switch')
l_sw_id = logical_switch_list[1]
mac_local = ovsdb_schema.UcastMacsLocal(uuid,
old_row.get('MAC'),
l_sw_id,
None,
None)
deleted_local_macs = data_dict.get(
'deleted_local_macs')
deleted_local_macs.append(mac_local)
def _process_ucast_macs_remote(self, uuid, uuid_dict, data_dict):
"""Processes Ucast_Macs_Remote record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
locator_list = new_row.get('locator')
locator_id = locator_list[1]
logical_switch_list = new_row.get('logical_switch')
logical_switch_id = logical_switch_list[1]
mac_remote = ovsdb_schema.UcastMacsRemote(uuid,
new_row.get('MAC'),
logical_switch_id,
locator_id,
new_row.get('ipaddr'))
if old_row:
modified_remote_macs = data_dict.get(
'modified_remote_macs')
modified_remote_macs.append(mac_remote)
else:
new_remote_macs = data_dict.get(
'new_remote_macs')
new_remote_macs.append(mac_remote)
elif old_row:
logical_switch_list = old_row.get('logical_switch')
l_sw_id = logical_switch_list[1]
mac_remote = ovsdb_schema.UcastMacsRemote(uuid,
old_row.get('MAC'),
l_sw_id,
None,
None)
deleted_remote_macs = data_dict.get(
'deleted_remote_macs')
deleted_remote_macs.append(mac_remote)
def _process_physical_locator(self, uuid, uuid_dict, data_dict):
"""Processes Physical_Locator record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
dstip = new_row['dst_ip']
locator = ovsdb_schema.PhysicalLocator(uuid, dstip)
if old_row:
modified_physical_locators = data_dict.get(
'modified_physical_locators')
modified_physical_locators.append(locator)
else:
new_physical_locators = data_dict.get(
'new_physical_locators')
new_physical_locators.append(locator)
elif old_row:
dstip = old_row['dst_ip']
locator = ovsdb_schema.PhysicalLocator(uuid, dstip)
deleted_physical_locators = data_dict.get(
'deleted_physical_locators')
deleted_physical_locators.append(locator)
def _process_mcast_macs_local(self, uuid, uuid_dict, data_dict):
"""Processes Mcast_Macs_Local record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
locator_set_list = new_row.get('locator_set')
logical_switch_list = new_row.get('logical_switch')
mcast_local = ovsdb_schema.McastMacsLocal(uuid,
new_row['MAC'],
logical_switch_list[1],
locator_set_list[1],
new_row['ipaddr'])
if old_row:
modified_mlocal_macs = data_dict.get(
'modified_mlocal_macs')
modified_mlocal_macs.append(mcast_local)
else:
new_mlocal_macs = data_dict.get(
'new_mlocal_macs')
new_mlocal_macs.append(mcast_local)
elif old_row:
logical_switch_list = old_row.get('logical_switch')
l_sw_id = logical_switch_list[1]
mcast_local = ovsdb_schema.McastMacsLocal(uuid,
old_row.get('MAC'),
l_sw_id,
None,
None)
deleted_mlocal_macs = data_dict.get(
'deleted_mlocal_macs')
deleted_mlocal_macs.append(mcast_local)
def _process_physical_locator_set(self, uuid, uuid_dict, data_dict):
"""Processes Physical_Locator_Set record from the OVSDB event."""
new_row = uuid_dict.get('new', None)
old_row = uuid_dict.get('old', None)
if new_row:
locator_set = self._form_locator_set(uuid, new_row)
if old_row:
modified_locator_sets = data_dict.get(
'modified_locator_sets')
modified_locator_sets.append(locator_set)
else:
new_locator_sets = data_dict.get(
'new_locator_sets')
new_locator_sets.append(locator_set)
elif old_row:
locator_set = self._form_locator_set(uuid, old_row)
deleted_locator_sets = data_dict.get(
'deleted_locator_sets')
deleted_locator_sets.append(locator_set)
def _form_locator_set(self, uuid, row):
locators = []
locator_set_list = row.get('locators')
if locator_set_list[0] == 'set':
locator_set_list = locator_set_list[1]
for locator in locator_set_list:
locators.append(locator[1])
else:
locators.append(locator_set_list[1])
locator_set = ovsdb_schema.PhysicalLocatorSet(uuid,
locators)
return locator_set
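    # Illustrative note (not part of the original file): the 'locators'
    # column of Physical_Locator_Set follows the same OVSDB encoding, e.g.
    #
    #   ['uuid', '7a8b...']                                  # single locator
    #   ['set', [['uuid', '7a8b...'], ['uuid', '9c0d...']]]  # several
    #
    # so _form_locator_set returns a PhysicalLocatorSet whose .locators is
    # simply the list of referenced Physical_Locator UUIDs.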

View File

@ -1,434 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
import socket
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import excutils
from networking_l2gw.services.l2gateway.agent.ovsdb import base_connection
from networking_l2gw.services.l2gateway.common import constants as n_const
from networking_l2gw.services.l2gateway.common import ovsdb_schema
from networking_l2gw.services.l2gateway import exceptions
LOG = logging.getLogger(__name__)
class OVSDBWriter(base_connection.BaseConnection):
"""Performs transactions to OVSDB server tables."""
def __init__(self, conf, gw_config, mgr=None):
super(OVSDBWriter, self).__init__(conf, gw_config, mgr=None)
self.mgr = mgr
def _process_response(self, op_id):
result = self._response(op_id)
error = result.get("error", None)
if error:
raise exceptions.OVSDBError(
message="Error from the OVSDB server: %s" % error
)
# Check errors in responses of all the subqueries
outcomes = result.get("result", None)
if outcomes:
for outcome in outcomes:
error = outcome.get("error", None)
if error:
raise exceptions.OVSDBError(
message="Error from the OVSDB server: %s" % error)
return result
def _get_reply(self, operation_id, ovsdb_identifier):
count = 0
while count <= n_const.MAX_RETRIES:
response = self._recv_data(ovsdb_identifier)
LOG.debug("Response from OVSDB server = %s", str(response))
if response:
try:
json_m = jsonutils.loads(response)
self.responses.append(json_m)
method_type = json_m.get('method', None)
if method_type == "echo" and self.enable_manager:
self.ovsdb_dicts.get(ovsdb_identifier).send(
jsonutils.dumps(
{"result": json_m.get("params", None),
"error": None, "id": json_m['id']}))
else:
if self._process_response(operation_id):
return True
except Exception as ex:
with excutils.save_and_reraise_exception():
LOG.exception("Exception while receiving the "
"response for the write request:"
" [%s]", ex)
count += 1
        LOG.error("Could not obtain response from the OVSDB server "
                  "for the request")
        raise exceptions.OVSDBError(
            message="Could not obtain response from the OVSDB server "
                    "for the request")
def _send_and_receive(self, query, operation_id, ovsdb_identifier,
rcv_required):
if not self.send(query, addr=ovsdb_identifier):
return
if rcv_required:
self._get_reply(operation_id, ovsdb_identifier)
def delete_logical_switch(self, logical_switch_uuid, ovsdb_identifier,
rcv_required=True):
"""Delete an entry from Logical_Switch OVSDB table."""
commit_dict = {"op": "commit", "durable": True}
op_id = str(random.getrandbits(128))
query = {"method": "transact",
"params": [n_const.OVSDB_SCHEMA_NAME,
{"op": "delete",
"table": "Logical_Switch",
"where": [["_uuid", "==",
["uuid", logical_switch_uuid]]]},
commit_dict],
"id": op_id}
LOG.debug("delete_logical_switch: query: %s", query)
self._send_and_receive(query, op_id, ovsdb_identifier, rcv_required)
def insert_ucast_macs_remote(self, l_switch_dict, locator_dict,
mac_dict, ovsdb_identifier,
rcv_required=True):
"""Insert an entry in Ucast_Macs_Remote OVSDB table."""
# To insert an entry in Ucast_Macs_Remote table, it requires
# corresponding entry in Physical_Locator (Compute node VTEP IP)
# and Logical_Switch (Neutron network) tables.
logical_switch = ovsdb_schema.LogicalSwitch(l_switch_dict['uuid'],
l_switch_dict['name'],
l_switch_dict['key'],
l_switch_dict['description'
])
locator = ovsdb_schema.PhysicalLocator(locator_dict['uuid'],
locator_dict['dst_ip'])
macObject = ovsdb_schema.UcastMacsRemote(mac_dict['uuid'],
mac_dict['mac'],
mac_dict['logical_switch_id'],
mac_dict['physical_locator_id'
],
mac_dict['ip_address'])
# Form the insert query now.
commit_dict = {"op": "commit", "durable": True}
op_id = str(random.getrandbits(128))
params = [n_const.OVSDB_SCHEMA_NAME]
if locator.uuid:
locator_list = ['uuid', locator.uuid]
else:
locator.uuid = ''.join(['a', str(random.getrandbits(128))])
locator_list = ["named-uuid", locator.uuid]
params.append(self._get_physical_locator_dict(locator))
if logical_switch.uuid:
l_switches = ['uuid', logical_switch.uuid]
else:
logical_switch.uuid = ''.join(['a', str(random.getrandbits(128))])
l_switches = ["named-uuid", logical_switch.uuid]
params.append(self._get_logical_switch_dict(logical_switch))
params.append(self._get_ucast_macs_remote_dict(
macObject, locator_list, l_switches))
params.append(commit_dict)
query = {"method": "transact",
"params": params,
"id": op_id}
LOG.debug("insert_ucast_macs_remote: query: %s", query)
self._send_and_receive(query, op_id, ovsdb_identifier, rcv_required)
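    # Illustrative usage sketch (not part of the original file); 'writer' is
    # an OVSDBWriter instance and the UUID, MAC and IP values below are
    # hypothetical. When a dict carries no 'uuid', the method generates a
    # "named-uuid" (prefixed with 'a' so the identifier starts with a
    # letter) and inserts the referenced row in the same transaction:
    #
    #   writer.insert_ucast_macs_remote(
    #       l_switch_dict={'uuid': '', 'name': 'net-uuid', 'key': 100,
    #                      'description': ''},
    #       locator_dict={'uuid': '', 'dst_ip': '20.0.0.1'},
    #       mac_dict={'uuid': '', 'mac': 'fa:16:3e:aa:bb:cc',
    #                 'logical_switch_id': '', 'physical_locator_id': '',
    #                 'ip_address': '10.0.0.5'},
    #       ovsdb_identifier='ovsdb1')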
def update_ucast_macs_remote(self, locator_dict, mac_dict,
ovsdb_identifier,
rcv_required=True):
"""Update an entry in Ucast_Macs_Remote OVSDB table."""
# It is possible that the locator may not exist already.
locator = ovsdb_schema.PhysicalLocator(locator_dict['uuid'],
locator_dict['dst_ip'])
macObject = ovsdb_schema.UcastMacsRemote(mac_dict['uuid'],
mac_dict['mac'],
mac_dict['logical_switch_id'],
mac_dict['physical_locator_id'
],
mac_dict['ip_address'])
# Form the insert query now.
commit_dict = {"op": "commit", "durable": True}
op_id = str(random.getrandbits(128))
params = [n_const.OVSDB_SCHEMA_NAME]
# If the physical_locator does not exist (VM moving to a new compute
# node), then insert a new record in Physical_Locator first.
if locator.uuid:
locator_list = ['uuid', locator.uuid]
else:
locator.uuid = ''.join(['a', str(random.getrandbits(128))])
locator_list = ["named-uuid", locator.uuid]
params.append(self._get_physical_locator_dict(locator))
params.append(self._get_dict_for_update_ucast_mac_remote(
macObject, locator_list))
params.append(commit_dict)
query = {"method": "transact",
"params": params,
"id": op_id}
LOG.debug("update_ucast_macs_remote: query: %s", query)
self._send_and_receive(query, op_id, ovsdb_identifier, rcv_required)
def delete_ucast_macs_remote(self, logical_switch_uuid, macs,
ovsdb_identifier,
rcv_required=True):
"""Delete entries from Ucast_Macs_Remote OVSDB table."""
commit_dict = {"op": "commit", "durable": True}
op_id = str(random.getrandbits(128))
params = [n_const.OVSDB_SCHEMA_NAME]
for mac in macs:
sub_query = {"op": "delete",
"table": "Ucast_Macs_Remote",
"where": [["MAC",
"==",
mac],
["logical_switch",
"==",
["uuid",
logical_switch_uuid]]]}
params.append(sub_query)
params.append(commit_dict)
query = {"method": "transact",
"params": params,
"id": op_id}
LOG.debug("delete_ucast_macs_remote: query: %s", query)
self._send_and_receive(query, op_id, ovsdb_identifier, rcv_required)
def update_connection_to_gateway(self, logical_switch_dict,
locator_dicts, mac_dicts,
port_dicts, ovsdb_identifier,
op_method,
rcv_required=True):
"""Updates Physical Port's VNI to VLAN binding."""
# Form the JSON Query so as to update the physical port with the
# vni-vlan (logical switch uuid to vlan) binding
update_dicts = self._get_bindings_to_update(logical_switch_dict,
locator_dicts,
mac_dicts,
port_dicts,
op_method)
op_id = str(random.getrandbits(128))
query = {"method": "transact",
"params": update_dicts,
"id": op_id}
LOG.debug("update_connection_to_gateway: query = %s", query)
self._send_and_receive(query, op_id, ovsdb_identifier, rcv_required)
def _recv_data(self, ovsdb_identifier):
chunks = []
lc = rc = 0
prev_char = None
while True:
try:
if self.enable_manager:
response = self.ovsdb_dicts.get(ovsdb_identifier).recv(
n_const.BUFFER_SIZE)
else:
response = self.socket.recv(n_const.BUFFER_SIZE)
if response:
response = response.decode('utf8')
for i, c in enumerate(response):
if c == '{' and not (prev_char and
prev_char == '\\'):
lc += 1
elif c == '}' and not (prev_char and
prev_char == '\\'):
rc += 1
                        if lc == rc and lc != 0:
chunks.append(response[0:i + 1])
message = "".join(chunks)
return message
prev_char = c
chunks.append(response)
else:
LOG.warning("Did not receive any reply from the OVSDB "
"server")
return
except (socket.error, socket.timeout):
LOG.warning("Did not receive any reply from the OVSDB "
"server")
return
def _get_bindings_to_update(self, l_switch_dict, locator_dicts,
mac_dicts, port_dicts, op_method):
# For connection-create, there are two cases to be handled
# Case 1: VMs exist in a network on compute nodes.
# Connection request will contain locators, ports, MACs and
# network.
# Case 2: VMs do not exist in a network on compute nodes.
# Connection request will contain only ports and network
#
# For connection-delete, we do not need logical_switch and locators
# information, we just need ports.
locator_list = []
port_list = []
ls_list = []
logical_switch = None
# Convert logical switch dict to a class object
if l_switch_dict:
logical_switch = ovsdb_schema.LogicalSwitch(
l_switch_dict['uuid'],
l_switch_dict['name'],
l_switch_dict['key'],
l_switch_dict['description'])
# Convert locator dicts into class objects
for locator in locator_dicts:
locator_list.append(ovsdb_schema.PhysicalLocator(locator['uuid'],
locator['dst_ip'])
)
# Convert MAC dicts into class objects. mac_dicts is a dictionary with
# locator VTEP IP as the key and list of MACs as the value.
locator_macs = {}
for locator_ip, mac_list in mac_dicts.items():
mac_object_list = []
for mac_dict in mac_list:
mac_object = ovsdb_schema.UcastMacsRemote(
mac_dict['uuid'],
mac_dict['mac'],
mac_dict['logical_switch_id'],
mac_dict['physical_locator_id'],
mac_dict['ip_address'])
mac_object_list.append(mac_object)
locator_macs[locator_ip] = mac_object_list
# Convert port dicts into class objects
for port in port_dicts:
phys_port = ovsdb_schema.PhysicalPort(port['uuid'],
port['name'],
port['physical_switch_id'],
port['vlan_bindings'],
port['port_fault_status'])
port_list.append(phys_port)
bindings = []
bindings.append(n_const.OVSDB_SCHEMA_NAME)
# Form the query.
commit_dict = {"op": "commit", "durable": True}
params = [n_const.OVSDB_SCHEMA_NAME]
# Use logical switch
if logical_switch:
ls_list = self._form_logical_switch(logical_switch, params)
# Use physical locators
if locator_list:
self._form_physical_locators(ls_list, locator_list, locator_macs,
params)
# Use ports
self._form_ports(ls_list, port_list, params, op_method)
params.append(commit_dict)
return params
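    # Illustrative usage sketch (not part of the original file); 'writer' is
    # an OVSDBWriter instance and all values are hypothetical. For the
    # "VMs already exist" case described above, the caller would invoke
    # something like:
    #
    #   writer.update_connection_to_gateway(
    #       logical_switch_dict={'uuid': '', 'name': 'net-uuid', 'key': 100,
    #                            'description': ''},
    #       locator_dicts=[{'uuid': '', 'dst_ip': '20.0.0.1'}],
    #       mac_dicts={'20.0.0.1': [{'uuid': '',
    #                                'mac': 'fa:16:3e:aa:bb:cc',
    #                                'logical_switch_id': '',
    #                                'physical_locator_id': '',
    #                                'ip_address': '10.0.0.5'}]},
    #       port_dicts=[{'uuid': 'port-uuid', 'name': 'port1',
    #                    'physical_switch_id': 'ps-uuid',
    #                    'vlan_bindings': [{'vlan': 100,
    #                                       'logical_switch_uuid': ''}],
    #                    'port_fault_status': None}],
    #       ovsdb_identifier='ovsdb1', op_method='CREATE')
    #
    # For connection-delete only port_dicts matter and op_method is
    # 'DELETE', which turns the vlan_bindings mutation into a delete.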
def _form_logical_switch(self, logical_switch, params):
ls_list = []
if logical_switch.uuid:
ls_list = ['uuid', logical_switch.uuid]
else:
logical_switch.uuid = ''.join(['a', str(random.getrandbits(128))])
ls_list = ["named-uuid", logical_switch.uuid]
params.append(self._get_logical_switch_dict(logical_switch))
return ls_list
def _form_physical_locators(self, ls_list, locator_list, locator_macs,
params):
for locator in locator_list:
if locator.uuid:
loc_list = ['uuid', locator.uuid]
else:
locator.uuid = ''.join(['a', str(random.getrandbits(128))])
loc_list = ["named-uuid", locator.uuid]
params.append(self._get_physical_locator_dict(locator))
macs = locator_macs.get(locator.dst_ip, None)
if macs:
for mac in macs:
query = self._get_ucast_macs_remote_dict(mac,
loc_list,
ls_list)
params.append(query)
def _form_ports(self, ls_list, port_list, params, op_method):
for port in port_list:
port_vlan_bindings = []
outer_list = []
port_vlan_bindings.append("map")
if port.vlan_bindings:
for vlan_binding in port.vlan_bindings:
if vlan_binding.logical_switch_uuid:
outer_list.append([vlan_binding.vlan,
['uuid',
vlan_binding.logical_switch_uuid]])
else:
outer_list.append([vlan_binding.vlan,
ls_list])
port_vlan_bindings.append(outer_list)
if op_method == 'CREATE':
update_dict = {"op": "mutate",
"table": "Physical_Port",
"where": [["_uuid", "==",
["uuid", port.uuid]]],
"mutations": [["vlan_bindings",
"insert",
port_vlan_bindings]]}
elif op_method == 'DELETE':
update_dict = {"op": "mutate",
"table": "Physical_Port",
"where": [["_uuid", "==",
["uuid", port.uuid]]],
"mutations": [["vlan_bindings",
"delete",
port_vlan_bindings]]}
params.append(update_dict)
def _get_physical_locator_dict(self, locator):
return {"op": "insert",
"table": "Physical_Locator",
"uuid-name": locator.uuid,
"row": {"dst_ip": locator.dst_ip,
"encapsulation_type": "vxlan_over_ipv4"}}
def _get_logical_switch_dict(self, logical_switch):
return {"op": "insert",
"uuid-name": logical_switch.uuid,
"table": "Logical_Switch",
"row": {"description": logical_switch.description,
"name": logical_switch.name,
"tunnel_key": int(logical_switch.key)}}
def _get_ucast_macs_remote_dict(self, mac, locator_list,
logical_switch_list):
named_string = str(random.getrandbits(128))
return {"op": "insert",
"uuid-name": ''.join(['a', named_string]),
"table": "Ucast_Macs_Remote",
"row": {"MAC": mac.mac,
"ipaddr": mac.ip_address,
"locator": locator_list,
"logical_switch": logical_switch_list}}
def _get_dict_for_update_ucast_mac_remote(self, mac, locator_list):
return {"op": "update",
"table": "Ucast_Macs_Remote",
"where": [["_uuid", "==",
["uuid", mac.uuid]]],
"row": {"locator": locator_list}}

View File

@ -1,307 +0,0 @@
{
"name": "hardware_vtep",
"cksum": "353943336 11434",
"tables": {
"Global": {
"columns": {
"managers": {
"type": {"key": {"type": "uuid",
"refTable": "Manager"},
"min": 0, "max": "unlimited"}},
"switches": {
"type": {"key": {"type": "uuid", "refTable": "Physical_Switch"},
"min": 0, "max": "unlimited"}},
"other_config": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}
},
"maxRows": 1,
"isRoot": true},
"Physical_Switch": {
"columns": {
"ports": {
"type": {"key": {"type": "uuid", "refTable": "Physical_Port"},
"min": 0, "max": "unlimited"}},
"name": {"type": "string"},
"description": {"type": "string"},
"management_ips": {
"type": {"key": {"type": "string"}, "min": 0, "max": "unlimited"}},
"tunnel_ips": {
"type": {"key": {"type": "string"}, "min": 0, "max": "unlimited"}},
"tunnels": {
"type": {"key": {"type": "uuid", "refTable": "Tunnel"},
"min": 0, "max": "unlimited"}},
"other_config": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"switch_fault_status": {
"type": {
"key": "string", "min": 0, "max": "unlimited"},
"ephemeral": true}},
"indexes": [["name"]]},
"Physical_Port": {
"columns": {
"name": {"type": "string"},
"description": {"type": "string"},
"vlan_bindings": {
"type": {"key": {"type": "integer",
"minInteger": 0, "maxInteger": 4095},
"value": {"type": "uuid", "refTable": "Logical_Switch"},
"min": 0, "max": "unlimited"}},
"acl_bindings": {
"type": {"key": {"type": "integer",
"minInteger": 0, "maxInteger": 4095},
"value": {"type": "uuid", "refTable": "ACL"},
"min": 0, "max": "unlimited"}},
"vlan_stats": {
"type": {"key": {"type": "integer",
"minInteger": 0, "maxInteger": 4095},
"value": {"type": "uuid",
"refTable": "Logical_Binding_Stats"},
"min": 0, "max": "unlimited"},
"ephemeral": true},
"other_config": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"port_fault_status": {
"type": {
"key": "string", "min": 0, "max": "unlimited"},
"ephemeral": true}}},
"Tunnel": {
"columns": {
"local": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator"}}},
"remote": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator"}}},
"bfd_config_local": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"bfd_config_remote": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"bfd_params": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"bfd_status": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"},
"ephemeral": true}}},
"Logical_Binding_Stats": {
"columns": {
"bytes_from_local": {"type": "integer", "ephemeral": true},
"packets_from_local": {"type": "integer", "ephemeral": true},
"bytes_to_local": {"type": "integer", "ephemeral": true},
"packets_to_local": {"type": "integer", "ephemeral": true}}},
"Logical_Switch": {
"columns": {
"name": {"type": "string"},
"description": {"type": "string"},
"tunnel_key": {"type": {"key": "integer", "min": 0, "max": 1}},
"replication_mode": {
"type": {
"key": {
"enum": ["set", ["service_node", "source_node"]],
"type": "string"},"min": 0, "max": 1}},
"other_config": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}},
"isRoot": true,
"indexes": [["name"]]},
"Ucast_Macs_Local": {
"columns": {
"MAC": {"type": "string"},
"logical_switch": {
"type": {"key": {"type": "uuid",
"refTable": "Logical_Switch"}}},
"locator": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator"}}},
"ipaddr": {"type": "string"}},
"isRoot": true},
"Ucast_Macs_Remote": {
"columns": {
"MAC": {"type": "string"},
"logical_switch": {
"type": {"key": {"type": "uuid",
"refTable": "Logical_Switch"}}},
"locator": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator"}}},
"ipaddr": {"type": "string"}},
"isRoot": true},
"Mcast_Macs_Local": {
"columns": {
"MAC": {"type": "string"},
"logical_switch": {
"type": {"key": {"type": "uuid",
"refTable": "Logical_Switch"}}},
"locator_set": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator_Set"}}},
"ipaddr": {"type": "string"}},
"isRoot": true},
"Mcast_Macs_Remote": {
"columns": {
"MAC": {"type": "string"},
"logical_switch": {
"type": {"key": {"type": "uuid",
"refTable": "Logical_Switch"}}},
"locator_set": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator_Set"}}},
"ipaddr": {"type": "string"}},
"isRoot": true},
"Logical_Router": {
"columns": {
"name": {"type": "string"},
"description": {"type": "string"},
"switch_binding": {
"type": {"key": {"type": "string"},
"value": {"type": "uuid",
"refTable": "Logical_Switch"},
"min": 0, "max": "unlimited"}},
"static_routes": {
"type": {"key": {"type": "string"},
"value": {"type" : "string"},
"min": 0, "max": "unlimited"}},
"acl_binding": {
"type": {"key": {"type": "string"},
"value": {"type": "uuid",
"refTable": "ACL"},
"min": 0, "max": "unlimited"}},
"other_config": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"LR_fault_status": {
"type": {
"key": "string", "min": 0, "max": "unlimited"},
"ephemeral": true}},
"isRoot": true,
"indexes": [["name"]]},
"Arp_Sources_Local": {
"columns": {
"src_mac": {"type": "string"},
"locator": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator"}}}},
"isRoot": true},
"Arp_Sources_Remote": {
"columns": {
"src_mac": {"type": "string"},
"locator": {
"type": {"key": {"type": "uuid",
"refTable": "Physical_Locator"}}}},
"isRoot": true},
"Physical_Locator_Set": {
"columns": {
"locators": {
"type": {"key": {"type": "uuid", "refTable": "Physical_Locator"},
"min": 1, "max": "unlimited"},
"mutable": false}}},
"Physical_Locator": {
"columns": {
"encapsulation_type": {
"type": {
"key": {
"enum": ["set", ["vxlan_over_ipv4"]],
"type": "string"}},
"mutable": false},
"dst_ip": {"type": "string", "mutable": false},
"tunnel_key": {"type": {"key": "integer", "min": 0, "max": 1}}},
"indexes": [["encapsulation_type", "dst_ip", "tunnel_key"]]},
"ACL_entry": {
"columns": {
"sequence": {"type": "integer"},
"source_mac": {
"type": {
"key": "string", "min": 0, "max": 1}},
"dest_mac": {
"type": {
"key": "string", "min": 0, "max": 1}},
"ethertype": {
"type": {
"key": "string", "min": 0, "max": 1}},
"source_ip": {
"type": {
"key": "string", "min": 0, "max": 1}},
"source_mask": {
"type": {
"key": "string", "min": 0, "max": 1}},
"dest_ip": {
"type": {
"key": "string", "min": 0, "max": 1}},
"dest_mask": {
"type": {
"key": "string", "min": 0, "max": 1}},
"protocol": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"source_port_min": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"source_port_max": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"dest_port_min": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"dest_port_max": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"tcp_flags": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"tcp_flags_mask": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"icmp_code": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"icmp_type": {
"type": {
"key": "integer", "min": 0, "max": 1}},
"direction": {
"type": {
"key": {"type": "string", "enum": ["set", ["ingress", "egress"]]}}},
"action": {
"type": {
"key": {"type": "string", "enum": ["set", ["permit", "deny"]]}}},
"acle_fault_status": {
"type": {
"key": "string", "min": 0, "max": "unlimited"},
"ephemeral": true}},
"isRoot": true},
"ACL": {
"columns": {
"acl_entries": {
"type": {"key": {"type": "uuid", "refTable": "ACL_entry"},
"min": 1, "max": "unlimited"}},
"acl_name": {"type": "string"},
"acl_fault_status": {
"type": {
"key": "string", "min": 0, "max": "unlimited"},
"ephemeral": true}},
"indexes": [["acl_name"]],
"isRoot": true},
"Manager": {
"columns": {
"target": {"type": "string"},
"max_backoff": {
"type": {"key": {"type": "integer",
"minInteger": 1000},
"min": 0, "max": 1}},
"inactivity_probe": {
"type": {"key": "integer", "min": 0, "max": 1}},
"other_config": {
"type": {"key": "string", "value": "string", "min": 0, "max": "unlimited"}},
"is_connected": {
"type": "boolean",
"ephemeral": true},
"status": {
"type": {"key": "string", "value": "string", "min": 0, "max": "unlimited"},
"ephemeral": true}},
"indexes": [["target"]],
"isRoot": false}},
"version": "1.7.0"}

View File

@ -1,135 +0,0 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
from neutron_lib.plugins import directory
from oslo_config import cfg
from oslo_log import log as logging
from oslo_service import loopingcall
from neutron.db import agents_db
from neutron_lib import context as neutron_context
from networking_l2gw.services.l2gateway.common import config
from networking_l2gw.services.l2gateway.common import constants as srv_const
LOG = logging.getLogger(__name__)
class L2GatewayAgentScheduler(agents_db.AgentDbMixin):
"""L2gateway agent scheduler class.
This maintains active and inactive agents and
    selects the monitor and transact agents.
"""
_plugin = None
_l2gwplugin = None
def __init__(self, agent_rpc, notifier=None):
super(L2GatewayAgentScheduler, self).__init__()
self.notifier = notifier
config.register_l2gw_opts_helper()
self.monitor_interval = cfg.CONF.periodic_monitoring_interval
self.agent_rpc = agent_rpc
@property
def l2gwplugin(self):
if self._l2gwplugin is None:
self._l2gwplugin = directory.get_plugin(srv_const.L2GW)
return self._l2gwplugin
@property
def plugin(self):
if self._plugin is None:
self._plugin = directory.get_plugin()
return self._plugin
def initialize_thread(self):
"""Initialization of L2gateway agent scheduler thread."""
try:
monitor_thread = loopingcall.FixedIntervalLoopingCall(
self.monitor_agent_state)
monitor_thread.start(
interval=self.monitor_interval,
initial_delay=random.randint(self.monitor_interval,
self.monitor_interval * 2))
LOG.debug("Successfully initialized L2gateway agent scheduler"
" thread with loop interval %s", self.monitor_interval)
except Exception:
LOG.error("Cannot initialize agent scheduler thread")
def _select_agent_type(self, context, agents_to_process):
"""Select the Monitor agent."""
# Various cases to be handled:
# 1. Check if there is a single active L2 gateway agent.
# If only one agent is active, then make it the Monitor agent.
        # 2. Else, in the list of the active agents, if no Monitor agent
        #    exists, then make the agent that started first
        #    the Monitor agent.
# 3. If multiple Monitor agents exist (case where the Monitor agent
# gets disconnected from the Neutron server and another agent
# becomes the Monitor agent and then the original Monitor agent
# connects back within the agent downtime value), then we need to
# send the fanout message so that only one becomes the Monitor
# agent.
        # Check if a Monitor agent already exists and it is the only one.
monitor_agents = [x for x in agents_to_process
if x['configurations'].get(srv_const.L2GW_AGENT_TYPE)
== srv_const.MONITOR]
if len(monitor_agents) == 1:
return
        # We either have more than one Monitor agent,
        # or no Monitor agent exists.
# We will decide which agent should be the Monitor agent.
chosen_agent = None
if len(agents_to_process) == 1:
# Only one agent is configured.
# Make it the Monitor agent
chosen_agent = agents_to_process[0]
else:
# Select the agent with the oldest started_at
# timestamp as the Monitor agent.
sorted_active_agents = sorted(agents_to_process,
key=lambda k: k['started_at'])
chosen_agent = sorted_active_agents[0]
self.agent_rpc.set_monitor_agent(context, chosen_agent['host'])
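    # Illustrative note (not part of the original file): with two active
    # agents reporting, e.g.
    #
    #   [{'host': 'agent-a', 'started_at': '2015-01-01 10:00:00',
    #     'configurations': {}},
    #    {'host': 'agent-b', 'started_at': '2015-01-01 10:05:00',
    #     'configurations': {'l2gw_agent_type': 'monitor'}}]
    #
    # exactly one agent already reports itself as the Monitor agent, so the
    # method returns without any RPC. If none (or more than one) did, the
    # agent with the oldest 'started_at' ('agent-a' here) would be chosen
    # and announced via set_monitor_agent().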
def monitor_agent_state(self):
"""Represents L2gateway agent scheduler thread.
Maintains list of active and inactive agents based on
the heartbeat recorded.
"""
context = neutron_context.get_admin_context()
try:
all_agents = self.plugin.get_agents(
context,
filters={'agent_type': [srv_const.AGENT_TYPE_L2GATEWAY]})
except Exception:
LOG.exception("Unable to get the agent list. Continuing...")
return
# Reset the agents that will be processed for selecting the
# Monitor agent
agents_to_process = []
for agent in all_agents:
if not self.is_agent_down(agent['heartbeat_timestamp']):
agents_to_process.append(agent)
if agents_to_process:
self._select_agent_type(context, agents_to_process)
return

View File

@ -1,90 +0,0 @@
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.common import config
from oslo_config import cfg
from networking_l2gw._i18n import _
OVSDB_OPTS = [
cfg.StrOpt('ovsdb_hosts',
default='host1:127.0.0.1:6632',
help=_("OVSDB server name:host/IP:port")),
cfg.StrOpt('l2_gw_agent_priv_key_base_path',
help=_('L2 gateway agent private key')),
cfg.StrOpt('l2_gw_agent_cert_base_path',
help=_('L2 gateway agent public certificate')),
cfg.StrOpt('l2_gw_agent_ca_cert_base_path',
help=_('Trusted issuer CA cert')),
cfg.IntOpt('periodic_interval',
default=20,
help=_('Seconds between periodic task runs')),
cfg.IntOpt('socket_timeout',
default=30,
help=_('Socket timeout in seconds. '
'If there is no echo request on the socket for '
'socket_timeout seconds, the agent can safely '
'assume that the connection with the remote '
'OVSDB server is lost')),
cfg.BoolOpt('enable_manager',
default=False,
help=_('Set to True if ovsdb Manager manages the client')),
cfg.PortOpt('manager_table_listening_port',
default=6632,
                help=_('Port on which the l2gw agent listens when its IP '
                       'is entered in the Manager table of the OVSDB '
                       'server, for example tcp:x.x.x.x:6640, where '
                       'x.x.x.x is the IP of the l2gw agent')),
cfg.IntOpt('max_connection_retries',
default=10,
help=_('Maximum number of retries to open a socket '
'with the OVSDB server'))
]
L2GW_OPTS = [
cfg.StrOpt('default_interface_name',
default='FortyGigE1/0/1',
help=_('default_interface_name of the l2 gateway')),
cfg.StrOpt('default_device_name',
default='Switch1',
help=_('default_device_name of the l2 gateway')),
cfg.IntOpt('quota_l2_gateway',
default=5,
help=_('Number of l2 gateways allowed per tenant, '
'-1 for unlimited')),
cfg.IntOpt('periodic_monitoring_interval',
default=5,
help=_('Periodic interval at which the plugin '
'checks for the monitoring L2 gateway agent')),
cfg.StrOpt('l2gw_callback_class',
default='networking_l2gw.services.l2gateway.ovsdb.'
'data.L2GatewayOVSDBCallbacks',
help=_('L2 gateway plugin callback class where the '
'RPCs from the agent are going to get invoked'))
]
def register_l2gw_opts_helper():
cfg.CONF.register_opts(L2GW_OPTS)
def register_ovsdb_opts_helper(conf):
conf.register_opts(OVSDB_OPTS, 'ovsdb')
# add a logging setup method here for convenience
setup_logging = config.setup_logging
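# Illustrative usage sketch (not part of the original file): a consumer
# registers these option groups and then reads them through oslo.config,
# e.g.
#
#   from oslo_config import cfg
#   from networking_l2gw.services.l2gateway.common import config
#
#   config.register_l2gw_opts_helper()
#   config.register_ovsdb_opts_helper(cfg.CONF)
#   # 'ovsdb_hosts' entries follow the name:host/IP:port convention,
#   # e.g. 'host1:127.0.0.1:6632'
#   hosts = cfg.CONF.ovsdb.ovsdb_hosts
#   timeout = cfg.CONF.ovsdb.socket_timeout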

View File

@ -1,47 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# service type constants:
L2GW = "L2GW"
l2gw = "l2gw"
AGENT_TYPE_L2GATEWAY = 'L2 Gateway agent'
L2GW_INVALID_OVSDB_IDENTIFIER = 101
ERROR_DICT = {L2GW_INVALID_OVSDB_IDENTIFIER: "Invalid ovsdb_identifier in the "
"request"}
MONITOR = 'monitor'
OVSDB_SCHEMA_NAME = 'hardware_vtep'
OVSDB_IDENTIFIER = 'ovsdb_identifier'
L2GW_AGENT_TYPE = 'l2gw_agent_type'
NETWORK_ID = 'network_id'
SEG_ID = 'segmentation_id'
L2GATEWAY_ID = 'l2_gateway_id'
GATEWAY_RESOURCE_NAME = 'l2_gateway'
L2_GATEWAYS = 'l2-gateways'
DEVICE_ID_ATTR = 'device_name'
IFACE_NAME_ATTR = 'interfaces'
CONNECTION_RESOURCE_NAME = 'l2_gateway_connection'
EXT_ALIAS = 'l2-gateway-connection'
L2_GATEWAYS_CONNECTION = "%ss" % EXT_ALIAS
BUFFER_SIZE = 4096
MAX_RETRIES = 1000
L2_GATEWAY_SERVICE_PLUGIN = "Neutron L2 gateway Service Plugin"
PORT_FAULT_STATUS_UP = "UP"
SWITCH_FAULT_STATUS_UP = "UP"
VXLAN = "vxlan"
CREATE = "CREATE"
DELETE = "DELETE"

View File

@ -1,112 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron_lib.api import validators
from neutron_lib import exceptions
from networking_l2gw._i18n import _
from networking_l2gw.services.l2gateway.common import constants
ALLOWED_CONNECTION_ATTRIBUTES = set((constants.NETWORK_ID,
constants.SEG_ID,
constants.L2GATEWAY_ID
))
def validate_gwdevice_list(data, valid_values=None):
"""Validate the list of devices."""
if not data:
# Devices must be provided
msg = _("Cannot create a gateway with an empty device list")
return msg
try:
for device in data:
interface_data = device.get(constants.IFACE_NAME_ATTR)
device_name = device.get(constants.DEVICE_ID_ATTR)
if not device_name:
msg = _("Cannot create a gateway with an empty device_name")
return msg
if not interface_data:
msg = _("Cannot create a gateway with an empty interfaces")
return msg
if not isinstance(interface_data, list):
msg = _("interfaces format is not a type list of dicts")
return msg
for int_dict in interface_data:
if not isinstance(int_dict, dict):
msg = _("interfaces format is not a type dict")
return msg
err_msg = validators.validate_dict(int_dict, None)
if not int_dict.get('name'):
msg = _("Cannot create a gateway with an empty "
"interface name")
return msg
if constants.SEG_ID in int_dict:
seg_id_list = int_dict.get(constants.SEG_ID)
if seg_id_list and type(seg_id_list) is not list:
msg = _("segmentation_id type should be of list type ")
return msg
if not seg_id_list:
msg = _("segmentation_id_list should not be empty")
return msg
for seg_id in seg_id_list:
is_valid_vlan_id(seg_id)
if err_msg:
return err_msg
except TypeError:
return (_("%s: provided data are not iterable") %
validate_gwdevice_list.__name__)
def validate_network_mapping_list(network_mapping, check_vlan):
"""Validate network mapping list in connection."""
if network_mapping.get('segmentation_id'):
if check_vlan:
raise exceptions.InvalidInput(
error_message=_("default segmentation_id should not be"
" provided when segmentation_id is assigned"
" during l2gateway creation"))
seg_id = network_mapping.get(constants.SEG_ID)
is_valid_vlan_id(seg_id)
if not network_mapping.get('segmentation_id'):
if check_vlan is False:
raise exceptions.InvalidInput(
error_message=_("Segmentation id must be specified in create "
"l2gateway connections"))
network_id = network_mapping.get(constants.NETWORK_ID)
if not network_id:
raise exceptions.InvalidInput(
error_message=_("A valid network identifier must be specified "
"when connecting a network to a network "
"gateway. Unable to complete operation"))
connection_attrs = set(network_mapping.keys())
if not connection_attrs.issubset(ALLOWED_CONNECTION_ATTRIBUTES):
        raise exceptions.InvalidInput(
            error_message=_("Invalid keys found among the ones provided "
                            "in request: %(connection_attrs)s.") %
            {'connection_attrs': connection_attrs})
return network_id
def is_valid_vlan_id(seg_id):
try:
int_seg_id = int(seg_id)
except ValueError:
msg = _("Segmentation id must be a valid integer")
raise exceptions.InvalidInput(error_message=msg)
if int_seg_id < 0 or int_seg_id >= 4095:
msg = _("Segmentation id is out of range")
raise exceptions.InvalidInput(error_message=msg)
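# Illustrative note (not part of the original file): a device list that
# passes validate_gwdevice_list() looks like the following (values are
# hypothetical; 'segmentation_id' per interface is optional):
#
#   devices = [{'device_name': 'Switch1',
#               'interfaces': [{'name': 'FortyGigE1/0/1',
#                               'segmentation_id': [100, 200]}]}]
#
# validate_gwdevice_list(devices) returns None on success and an error
# message string otherwise; is_valid_vlan_id() raises InvalidInput instead.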

View File

@ -1,93 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class PhysicalLocator(object):
def __init__(self, uuid, dst_ip):
self.uuid = uuid
self.dst_ip = dst_ip
class PhysicalSwitch(object):
def __init__(self, uuid, name, tunnel_ip, switch_fault_status):
self.uuid = uuid
self.name = name
self.tunnel_ip = tunnel_ip
self.switch_fault_status = switch_fault_status
class PhysicalPort(object):
def __init__(self, uuid, name, phys_switch_id, vlan_binding_dicts,
port_fault_status):
self.uuid = uuid
self.name = name
self.physical_switch_id = phys_switch_id
self.vlan_bindings = []
self.port_fault_status = port_fault_status
if vlan_binding_dicts:
for vlan_binding in vlan_binding_dicts:
v_binding = VlanBinding(vlan_binding['vlan'],
vlan_binding['logical_switch_uuid'])
self.vlan_bindings.append(v_binding)
class LogicalSwitch(object):
def __init__(self, uuid, name, key, description):
self.uuid = uuid
self.name = name
self.key = key
self.description = description
class UcastMacsLocal(object):
def __init__(self, uuid, mac, logical_switch_id, physical_locator_id,
ip_address):
self.uuid = uuid
self.mac = mac
self.logical_switch_id = logical_switch_id
self.physical_locator_id = physical_locator_id
self.ip_address = ip_address
class UcastMacsRemote(object):
def __init__(self, uuid, mac, logical_switch_id, physical_locator_id,
ip_address):
self.uuid = uuid
self.mac = mac
self.logical_switch_id = logical_switch_id
self.physical_locator_id = physical_locator_id
self.ip_address = ip_address
class VlanBinding(object):
def __init__(self, vlan, logical_switch_uuid):
self.vlan = vlan
self.logical_switch_uuid = logical_switch_uuid
class McastMacsLocal(object):
def __init__(self, uuid, mac, logical_switch, locator_set,
ip_address):
self.uuid = uuid
self.mac = mac
self.logical_switch_id = logical_switch
self.locator_set = locator_set
self.ip_address = ip_address
class PhysicalLocatorSet(object):
def __init__(self, uuid, locators):
self.uuid = uuid
self.locators = locators
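# Illustrative note (not part of the original file): these classes are thin
# value holders; for example (hypothetical values):
#
#   port = PhysicalPort(uuid='1b2c...', name='port1',
#                       phys_switch_id='ps-uuid',
#                       vlan_binding_dicts=[{'vlan': 100,
#                                            'logical_switch_uuid': 'ls-1'}],
#                       port_fault_status=None)
#
# converts each vlan binding dict into a VlanBinding object, and the agent
# code serializes these objects back to dicts via their __dict__ attribute.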

View File

@ -1,17 +0,0 @@
# Copyright 2015 OpenStack Foundation
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
L2GATEWAY_PLUGIN = 'l2gateway_plugin'
L2GATEWAY_AGENT = 'l2gateway_agent'

View File

@ -1,51 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron.common import topics as neutron_topics
from neutron.plugins.ml2.drivers.l2pop import rpc as l2pop_rpc
from neutron.plugins.ml2 import managers
from neutron.plugins.ml2 import rpc as rpc
class Tunnel_Calls(object):
"""Common tunnel calls for L2 agent."""
def __init__(self):
self._construct_rpc_stuff()
def _construct_rpc_stuff(self):
self.notifier = rpc.AgentNotifierApi(neutron_topics.AGENT)
self.type_manager = managers.TypeManager()
self.tunnel_rpc_obj = rpc.RpcCallbacks(self.notifier,
self.type_manager)
def trigger_tunnel_sync(self, context, tunnel_ip):
"""Sends tunnel sync RPC message to the neutron
L2 agent.
"""
tunnel_dict = {'tunnel_ip': tunnel_ip,
'tunnel_type': 'vxlan'}
self.tunnel_rpc_obj.tunnel_sync(context,
**tunnel_dict)
def trigger_l2pop_sync(self, context, other_fdb_entries):
"""Sends L2pop ADD RPC message to the neutron L2 agent."""
l2pop_rpc.L2populationAgentNotifyAPI(
).add_fdb_entries(context, other_fdb_entries)
def trigger_l2pop_delete(self, context, other_fdb_entries, host=None):
"""Sends L2pop DELETE RPC message to the neutron L2 agent."""
l2pop_rpc.L2populationAgentNotifyAPI(
).remove_fdb_entries(context, other_fdb_entries, host)
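# Illustrative usage sketch (not part of the original file); the IP below is
# hypothetical and 'context' is assumed to be a neutron request context:
#
#   tunnel_calls = Tunnel_Calls()
#   tunnel_calls.trigger_tunnel_sync(context, '20.0.0.1')
#
# which forwards {'tunnel_ip': '20.0.0.1', 'tunnel_type': 'vxlan'} to the
# ML2 tunnel_sync RPC handler.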

Some files were not shown because too many files have changed in this diff