Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README noting where to find
ongoing work and how to recover the repo if needed at some
future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Iba382dfd8c4949b5881a2213bf73f3ddafe979a1
Tony Breeds 2017-09-12 15:41:59 -06:00
parent 6673921476
commit 80a2beef51
71 changed files with 14 additions and 12029 deletions

@@ -1,7 +0,0 @@
[run]
branch = True
source = networking-arista
omit = networking-arista/tests/*,networking-arista/openstack/*
[report]
ignore_errors = True

.gitignore
@@ -1,53 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/networking-arista.git

@@ -1,6 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
Sukhdev Kapur <sukhdev@arista.com> <sukhdevkapur@gmail.com>
Shashank Hegde <shashank@arista.com>
Andre Pech <apech@arista.com>

@@ -1,8 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
OS_LOG_CAPTURE=1 \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/networking-arista

@@ -1,4 +0,0 @@
networking-arista Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

README
@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
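The recovery instruction above amounts to a short Git session; a sketch (run inside a clone of this repository; `HEAD^1` is the first parent of the retirement commit):

```shell
# View the tree as it was just before retirement (detaches HEAD).
git checkout HEAD^1

# ...inspect the pre-retirement files...

# Return to the previous branch or commit.
git checkout -
```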

@@ -1,13 +0,0 @@
===============================
networking-arista
===============================
Arista Networking drivers
* Free software: Apache license
* Source: http://git.openstack.org/cgit/openstack/networking-arista
Features
--------
* TODO

@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,97 +0,0 @@
# -*- mode: shell-script -*-
function install_lldp() {
echo_summary "Installing LLDP"
install_package lldpd
restart_service lldpd
}
function install_arista_driver() {
echo_summary "Installing Arista Driver"
setup_develop $ARISTA_DIR
}
function configure_arista() {
echo_summary "Configuring Neutron for Arista Driver"
cp $ARISTA_ML2_CONF_SAMPLE $ARISTA_ML2_CONF_FILE
iniset $ARISTA_ML2_CONF_FILE ml2_arista eapi_host $ARISTA_EAPI_HOST
iniset $ARISTA_ML2_CONF_FILE ml2_arista eapi_username $ARISTA_EAPI_USERNAME
iniset $ARISTA_ML2_CONF_FILE ml2_arista eapi_password $ARISTA_EAPI_PASSWORD
iniset $ARISTA_ML2_CONF_FILE ml2_arista api_type $ARISTA_API_TYPE
iniset $ARISTA_ML2_CONF_FILE ml2_arista region_name $ARISTA_REGION_NAME
if [ -n "${ARISTA_USE_FQDN+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE ml2_arista use_fqdn $ARISTA_USE_FQDN
fi
if [ -n "${ARISTA_ML2_SYNC_INTERVAL+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE ml2_arista sync_interval $ARISTA_ML2_SYNC_INTERVAL
fi
if [ -n "${ARISTA_SEC_GROUP_SUPPORT+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE ml2_arista sec_group_support $ARISTA_SEC_GROUP_SUPPORT
fi
if [ -n "${ARISTA_SWITCH_INFO+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE ml2_arista switch_info $ARISTA_SWITCH_INFO
fi
if [ -n "${ARISTA_PRIMARY_L3_HOST+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista primary_l3_host $ARISTA_PRIMARY_L3_HOST
fi
if [ -n "${ARISTA_PRIMARY_L3_HOST_USERNAME+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista primary_l3_host_username $ARISTA_PRIMARY_L3_HOST_USERNAME
fi
if [ -n "${ARISTA_PRIMARY_L3_HOST_PASSWORD+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista primary_l3_host_password $ARISTA_PRIMARY_L3_HOST_PASSWORD
fi
if [ -n "${ARISTA_SECONDARY_L3_HOST+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista secondary_l3_host $ARISTA_SECONDARY_L3_HOST
fi
if [ -n "${ARISTA_SECONDARY_L3_HOST_USERNAME+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista secondary_l3_host_username $ARISTA_SECONDARY_L3_HOST_USERNAME
fi
if [ -n "${ARISTA_SECONDARY_L3_HOST_PASSWORD+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista secondary_l3_host_password $ARISTA_SECONDARY_L3_HOST_PASSWORD
fi
if [ -n "${ARISTA_MLAG_CONFIG+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista mlag_config $ARISTA_MLAG_CONFIG
fi
if [ -n "${ARISTA_USE_VRF+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista use_vrf $ARISTA_USE_VRF
fi
if [ -n "${ARISTA_L3_SYNC_INTERVAL+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE l3_arista l3_sync_interval $ARISTA_L3_SYNC_INTERVAL
fi
if [ -n "${ARISTA_TYPE_DRIVER_SYNC_INTERVAL+x}" ]; then
iniset $ARISTA_ML2_CONF_FILE arista_type_driver sync_interval $ARISTA_TYPE_DRIVER_SYNC_INTERVAL
fi
neutron_server_config_add $ARISTA_ML2_CONF_FILE
}
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
if is_service_enabled "q-agt"; then
install_lldp
fi
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
install_arista_driver
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
configure_arista
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
# no-op
:
fi
if [[ "$1" == "unstack" ]]; then
# no-op
:
fi
if [[ "$1" == "clean" ]]; then
# no-op
:
fi
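The many `[ -n "${VAR+x}" ]` guards in configure_arista use the POSIX `${parameter+word}` expansion, which distinguishes "unset" from "set but empty": an option is written to the config file only when the operator actually set the variable. A minimal sketch of the idiom (`maybe_iniset` is a hypothetical stand-in for devstack's `iniset` helper):

```shell
maybe_iniset() {
    # hypothetical stand-in for devstack's iniset helper
    echo "would set: $1 = $2"
}

unset ARISTA_USE_FQDN
# ${ARISTA_USE_FQDN+x} expands to nothing when unset, so the guard
# fails and nothing is written.
if [ -n "${ARISTA_USE_FQDN+x}" ]; then
    maybe_iniset use_fqdn "$ARISTA_USE_FQDN"
fi

ARISTA_USE_FQDN=""
# Set-but-empty still expands ${ARISTA_USE_FQDN+x} to "x", so the
# option IS written (with an empty value).
if [ -n "${ARISTA_USE_FQDN+x}" ]; then
    maybe_iniset use_fqdn "$ARISTA_USE_FQDN"
fi
```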

@@ -1,11 +0,0 @@
if ! [[ "$Q_ML2_PLUGIN_MECHANISM_DRIVERS" =~ "arista" ]]; then
Q_ML2_PLUGIN_MECHANISM_DRIVERS="$Q_ML2_PLUGIN_MECHANISM_DRIVERS,arista"
fi
ARISTA_DIR=${ARISTA_DIR:-$DEST/networking-arista}
ARISTA_ML2_CONF_SAMPLE=$ARISTA_DIR/etc/ml2_conf_arista.ini
ARISTA_ML2_CONF_FILE=${ARISTA_ML2_CONF_FILE:-"$NEUTRON_CONF_DIR/ml2_conf_arista.ini"}
ARISTA_API_TYPE=${ARISTA_API_TYPE:-"EAPI"}
ARISTA_REGION_NAME=${ARISTA_REGION_NAME:-"$REGION_NAME"}

@@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'networking-arista'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

@@ -1,25 +0,0 @@
.. networking-arista documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to networking-arista's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install networking-arista
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv networking-arista
$ pip install networking-arista

@@ -1 +0,0 @@
.. include:: ../../README.rst

@@ -1,7 +0,0 @@
========
Usage
========
To use networking-arista in a project::
import networking_arista

@@ -1,160 +0,0 @@
# Defines configuration options specific for Arista ML2 Mechanism driver
[ml2_arista]
# (StrOpt) Comma separated list of IP addresses for all CVX instances in
# the high availability CVX cluster. This is a required field with
# a minimum of one address (if CVX is deployed in a non-redundant
# (standalone) manner). If not set, all communications to Arista
# EOS will fail.
#
# eapi_host =
# Example: eapi_host = 192.168.0.1, 192.168.11.1, 192.168.22.1
#
# (StrOpt) EOS command API username. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# eapi_username =
# Example: eapi_username = admin
#
# (StrOpt) EOS command API password. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# eapi_password =
# Example: eapi_password = my_password
#
# (StrOpt) Defines if hostnames are sent to Arista EOS as FQDNs
# ("node1.domain.com") or as short names ("node1"). This is
# optional. If not set, a value of "True" is assumed.
#
# use_fqdn =
# Example: use_fqdn = True
#
# (IntOpt) Sync interval in seconds between Neutron plugin and EOS.
# This field defines how often the synchronization is performed.
# This is an optional field. If not set, a value of 30 seconds
# is assumed.
#
# sync_interval =
# Example: sync_interval = 30
#
# (StrOpt) Defines Region Name that is assigned to this OpenStack Controller.
# This is useful when multiple OpenStack/Neutron controllers are
# managing the same Arista HW clusters. Note that this name must
# match the region name registered (or known) to the keystone
# service. Authentication with Keystone is performed by EOS.
# This is optional. If not set, a value of "RegionOne" is assumed.
#
# region_name =
# Example: region_name = RegionOne
#
# (BoolOpt) Specifies if the Security Groups need to be deployed for baremetal
# deployments. If this flag is set to "True", this means switch_info
# (see below) must be defined. If this flag is not defined, it is
# assumed to be False.
#
# sec_group_support =
# Example: sec_group_support = True
#
# (ListOpt) This is a comma separated list of Arista switches where
# security groups (i.e. ACLs) need to be applied. Each string has
# three values separated by ":" in the following format.
# <switch IP>:<username>:<password>,<switch IP>:<username>:<password>
# This is required if sec_group_support is set to "True"
#
# switch_info =
# Example: switch_info = 172.13.23.55:admin:admin,172.13.23.56:admin:admin
#
# (StrOpt) Tells the plugin to use a specific API interface to communicate
# with CVX. Valid options are:
# EAPI - Use EOS' extensible API.
# JSON - Use EOS' JSON/REST API.
# api_type =
# Example: api_type = EAPI
#
# (ListOpt) This is a comma separated list of physical networks which are
# managed by Arista switches. This list is used by the Arista ML2
# plugin to decide whether it can participate in binding or updating
# a port.
#
# managed_physnets =
# Example: managed_physnets = arista_network
#
# (BoolOpt) Specifies whether the Arista ML2 plugin should bind ports to vxlan
# fabric segments and dynamically allocate vlan segments based on
# the host to connect the port to the vxlan fabric.
#
# manage_fabric =
# Example: manage_fabric = False
[l3_arista]
# (StrOpt) Primary host IP address. This is a required field. If not set, all
# communications to Arista EOS will fail. This is the host where
# primary router is created.
#
# primary_l3_host =
# Example: primary_l3_host = 192.168.10.10
#
# (StrOpt) Primary host username. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# primary_l3_host_username =
# Example: primary_l3_host_username = admin
#
# (StrOpt) Primary host password. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# primary_l3_host_password =
# Example: primary_l3_host_password = my_password
#
# (StrOpt) IP address of the second Arista switch paired as
# MLAG (Multi-chassis Link Aggregation) with the first.
# This is an optional field; however, if the mlag_config flag is set,
# then this is a required field. If not set, all
# communications to Arista EOS will fail. If mlag_config is set
# to False, then this field is ignored.
#
# secondary_l3_host =
# Example: secondary_l3_host = 192.168.10.20
#
# (IntOpt) Connection timeout interval in seconds. This interval
# defines how long an EAPI request from the driver to
# EOS waits before timing out. If not set, a value of 10
# seconds is assumed.
#
# conn_timeout =
# Example: conn_timeout = 10
#
# (BoolOpt) Defines if Arista switches are configured in MLAG mode
# If yes, all L3 configuration is pushed to both switches
# automatically. If this flag is set, ensure that secondary_l3_host
# is set to the second switch's IP.
# This flag is Optional. If not set, a value of "False" is assumed.
#
# mlag_config =
# Example: mlag_config = True
#
# (BoolOpt) Defines if the router is created in the default VRF or a
# specific VRF. This is optional.
# If not set, a value of "False" is assumed.
#
# use_vrf =
# Example: use_vrf = True
#
# (IntOpt) Sync interval in seconds between Neutron plugin and EOS.
# This field defines how often the synchronization is performed.
# This is an optional field. If not set, a value of 180 seconds
# is assumed.
#
# l3_sync_interval =
# Example: l3_sync_interval = 60
[arista_type_driver]
# (IntOpt) VLAN Sync interval in seconds between the type driver and EOS.
# This interval defines how often the VLAN synchronization is
# performed. This is an optional field. If not set, a value of
# 10 seconds is assumed.
#
# sync_interval =
# Example: sync_interval = 10
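Pulled together, a minimal configuration covering the required options above might look like this (values are illustrative, not defaults):

```ini
[ml2_arista]
# Required: without these, all communication with Arista EOS fails.
eapi_host = 192.168.0.1
eapi_username = admin
eapi_password = my_password
# Optional; shown with their documented defaults.
region_name = RegionOne
sync_interval = 30

[l3_arista]
# Required for L3: host where the primary router is created.
primary_l3_host = 192.168.10.10
primary_l3_host_username = admin
primary_l3_host_password = my_password
mlag_config = False
```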

@@ -1,143 +0,0 @@
{
"context_is_admin": "role:admin",
"admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
"context_is_advsvc": "role:advsvc",
"admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s",
"admin_only": "rule:context_is_admin",
"regular_user": "",
"shared": "field:networks:shared=True",
"shared_firewalls": "field:firewalls:shared=True",
"external": "field:networks:router:external=True",
"default": "rule:admin_or_owner",
"create_subnet": "rule:admin_or_network_owner",
"get_subnet": "rule:admin_or_owner or rule:shared",
"update_subnet": "rule:admin_or_network_owner",
"delete_subnet": "rule:admin_or_network_owner",
"create_network": "",
"get_network": "rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc",
"get_network:router:external": "rule:regular_user",
"get_network:segments": "rule:admin_only",
"get_network:provider:network_type": "rule:admin_only",
"get_network:provider:physical_network": "rule:admin_only",
"get_network:provider:segmentation_id": "rule:admin_only",
"get_network:queue_id": "rule:admin_only",
"create_network:shared": "rule:admin_only",
"create_network:router:external": "rule:admin_only",
"create_network:segments": "rule:admin_only",
"create_network:provider:network_type": "rule:admin_only",
"create_network:provider:physical_network": "rule:admin_only",
"create_network:provider:segmentation_id": "rule:admin_only",
"update_network": "rule:admin_or_owner",
"update_network:segments": "rule:admin_only",
"update_network:shared": "rule:admin_only",
"update_network:provider:network_type": "rule:admin_only",
"update_network:provider:physical_network": "rule:admin_only",
"update_network:provider:segmentation_id": "rule:admin_only",
"update_network:router:external": "rule:admin_only",
"delete_network": "rule:admin_or_owner",
"create_port": "",
"create_port:mac_address": "rule:admin_or_network_owner or rule:context_is_advsvc",
"create_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
"create_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
"create_port:binding:host_id": "rule:admin_only",
"create_port:binding:profile": "rule:admin_only",
"create_port:mac_learning_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
"get_port": "rule:admin_or_owner or rule:context_is_advsvc",
"get_port:queue_id": "rule:admin_only",
"get_port:binding:vif_type": "rule:admin_only",
"get_port:binding:vif_details": "rule:admin_only",
"get_port:binding:host_id": "rule:admin_only",
"get_port:binding:profile": "rule:admin_only",
"update_port": "rule:admin_or_owner or rule:context_is_advsvc",
"update_port:fixed_ips": "rule:admin_or_network_owner or rule:context_is_advsvc",
"update_port:port_security_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
"update_port:binding:host_id": "rule:admin_only",
"update_port:binding:profile": "rule:admin_only",
"update_port:mac_learning_enabled": "rule:admin_or_network_owner or rule:context_is_advsvc",
"delete_port": "rule:admin_or_owner or rule:context_is_advsvc",
"get_router:ha": "rule:admin_only",
"create_router": "rule:regular_user",
"create_router:external_gateway_info:enable_snat": "rule:admin_only",
"create_router:distributed": "rule:admin_only",
"create_router:ha": "rule:admin_only",
"get_router": "rule:admin_or_owner",
"get_router:distributed": "rule:admin_only",
"update_router:external_gateway_info:enable_snat": "rule:admin_only",
"update_router:distributed": "rule:admin_only",
"update_router:ha": "rule:admin_only",
"delete_router": "rule:admin_or_owner",
"add_router_interface": "rule:admin_or_owner",
"remove_router_interface": "rule:admin_or_owner",
"create_router:external_gateway_info:external_fixed_ips": "rule:admin_only",
"update_router:external_gateway_info:external_fixed_ips": "rule:admin_only",
"create_firewall": "",
"get_firewall": "rule:admin_or_owner",
"create_firewall:shared": "rule:admin_only",
"get_firewall:shared": "rule:admin_only",
"update_firewall": "rule:admin_or_owner",
"update_firewall:shared": "rule:admin_only",
"delete_firewall": "rule:admin_or_owner",
"create_firewall_policy": "",
"get_firewall_policy": "rule:admin_or_owner or rule:shared_firewalls",
"create_firewall_policy:shared": "rule:admin_or_owner",
"update_firewall_policy": "rule:admin_or_owner",
"delete_firewall_policy": "rule:admin_or_owner",
"create_firewall_rule": "",
"get_firewall_rule": "rule:admin_or_owner or rule:shared_firewalls",
"update_firewall_rule": "rule:admin_or_owner",
"delete_firewall_rule": "rule:admin_or_owner",
"create_qos_queue": "rule:admin_only",
"get_qos_queue": "rule:admin_only",
"update_agent": "rule:admin_only",
"delete_agent": "rule:admin_only",
"get_agent": "rule:admin_only",
"create_dhcp-network": "rule:admin_only",
"delete_dhcp-network": "rule:admin_only",
"get_dhcp-networks": "rule:admin_only",
"create_l3-router": "rule:admin_only",
"delete_l3-router": "rule:admin_only",
"get_l3-routers": "rule:admin_only",
"get_dhcp-agents": "rule:admin_only",
"get_l3-agents": "rule:admin_only",
"get_loadbalancer-agent": "rule:admin_only",
"get_loadbalancer-pools": "rule:admin_only",
"create_floatingip": "rule:regular_user",
"create_floatingip:floating_ip_address": "rule:admin_only",
"update_floatingip": "rule:admin_or_owner",
"delete_floatingip": "rule:admin_or_owner",
"get_floatingip": "rule:admin_or_owner",
"create_network_profile": "rule:admin_only",
"update_network_profile": "rule:admin_only",
"delete_network_profile": "rule:admin_only",
"get_network_profiles": "",
"get_network_profile": "",
"update_policy_profiles": "rule:admin_only",
"get_policy_profiles": "",
"get_policy_profile": "",
"create_metering_label": "rule:admin_only",
"delete_metering_label": "rule:admin_only",
"get_metering_label": "rule:admin_only",
"create_metering_label_rule": "rule:admin_only",
"delete_metering_label_rule": "rule:admin_only",
"get_metering_label_rule": "rule:admin_only",
"get_service_provider": "rule:regular_user",
"get_lsn": "rule:admin_only",
"create_lsn": "rule:admin_only"
}

@@ -1,27 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import gettext
import pbr.version
import six
__version__ = pbr.version.VersionInfo(
'networking_arista').version_string()
if six.PY2:
gettext.install('networking_arista', unicode=1)
else:
gettext.install('networking_arista')

View File

@@ -1,42 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n
DOMAIN = "networking_arista"
_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)
# The primary translation function using the well-known name "_"
_ = _translators.primary
# The contextual translation function using the name "_C"
_C = _translators.contextual_form
# The plural translation function using the name "_P"
_P = _translators.plural_form
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
def get_available_languages():
return oslo_i18n.get_available_languages(DOMAIN)
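The translator factory above wraps the stdlib `gettext` machinery. A stdlib-only sketch of the fallback behaviour the package relies on (as in the `gettext.install` call in the package `__init__` shown earlier): with no compiled `.mo` catalogs on disk, a `NullTranslations` object is installed and `_` returns its argument unchanged.

```python
# Stdlib sketch: gettext.install() binds `_` into builtins, and with no
# translation catalogs available it falls back to the identity mapping.
import gettext

gettext.install('networking_arista')

message = _('EAPI request failed')  # noqa: F821 -- `_` installed into builtins
```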

View File

@@ -1,133 +0,0 @@
# Copyright (c) 2017 Arista Networks, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from oslo_log import log as logging
from oslo_utils import excutils
import requests
from requests import exceptions as requests_exc
from six.moves.urllib import parse
from networking_arista._i18n import _LI, _LW
from networking_arista.common import exceptions as arista_exc
LOG = logging.getLogger(__name__)
# EAPI error message
ERR_CVX_NOT_LEADER = 'only available on cluster leader'
class EAPIClient(object):
def __init__(self, host, username=None, password=None, verify=False,
timeout=None):
self.host = host
self.timeout = timeout
self.url = self._make_url(host)
self.session = requests.Session()
self.session.headers['Content-Type'] = 'application/json'
self.session.headers['Accept'] = 'application/json'
self.session.verify = verify
if username and password:
self.session.auth = (username, password)
@staticmethod
def _make_url(host, scheme='https'):
return parse.urlunsplit(
(scheme, host, '/command-api', '', '')
)
def execute(self, commands, commands_to_log=None):
params = {
'timestamps': False,
'format': 'json',
'version': 1,
'cmds': commands
}
data = {
'id': 'Networking Arista Driver',
'method': 'runCmds',
'jsonrpc': '2.0',
'params': params
}
if commands_to_log:
log_data = dict(data)
log_data['params'] = dict(params)
log_data['params']['cmds'] = commands_to_log
else:
log_data = data
LOG.info(
_LI('EAPI request %(ip)s contains %(data)s'),
{'ip': self.host, 'data': json.dumps(log_data)}
)
# request handling
try:
error = None
response = self.session.post(
self.url,
data=json.dumps(data),
timeout=self.timeout
)
except requests_exc.ConnectionError:
error = _LW('Error while trying to connect to %(ip)s')
except requests_exc.ConnectTimeout:
error = _LW('Timed out while trying to connect to %(ip)s')
except requests_exc.Timeout:
error = _LW('Timed out during an EAPI request to %(ip)s')
except requests_exc.InvalidURL:
error = _LW('Ignoring attempt to connect to invalid URL at %(ip)s')
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.warning(
_LW('Error while processing the EAPI request: %(error)s'),
{'error': e}
)
finally:
if error:
msg = error % {'ip': self.host}
# stop processing since we've encountered request error
LOG.warning(msg)
raise arista_exc.AristaRpcError(msg=msg)
# response handling
try:
resp_data = response.json()
return resp_data['result']
except ValueError:
LOG.info(_LI('Ignoring invalid JSON response'))
except KeyError:
if 'error' in resp_data and resp_data['error']['code'] == 1002:
for d in resp_data['error']['data']:
if not isinstance(d, dict):
continue
elif ERR_CVX_NOT_LEADER in d.get('errors', [''])[0]:
LOG.info(
_LI('%(ip)s is not the CVX leader'),
{'ip': self.host}
)
return
msg = _LI('Unexpected EAPI error')
LOG.info(msg)
raise arista_exc.AristaRpcError(msg=msg)
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.warning(
_LW('Error while processing the EAPI response: %(error)s'),
{'error': e}
)
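The URL and JSON-RPC body that `EAPIClient.execute()` sends can be sketched with the stdlib alone (no request is made here; the host address is illustrative):

```python
# Builds the same command-api URL and runCmds payload as EAPIClient,
# using only the standard library.
import json
from urllib.parse import urlunsplit

def make_url(host, scheme='https'):
    # mirrors EAPIClient._make_url
    return urlunsplit((scheme, host, '/command-api', '', ''))

def make_payload(commands):
    # mirrors the request body built in EAPIClient.execute
    params = {
        'timestamps': False,
        'format': 'json',
        'version': 1,
        'cmds': commands,
    }
    return {
        'id': 'Networking Arista Driver',
        'method': 'runCmds',
        'jsonrpc': '2.0',
        'params': params,
    }

url = make_url('192.0.2.10')
body = json.dumps(make_payload(['show version']))
```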

View File

@@ -1,194 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
from networking_arista._i18n import _
# Arista ML2 Mechanism driver specific configuration knobs.
#
# Following are user configurable options for Arista ML2 Mechanism
# driver. The eapi_username, eapi_password, and eapi_host are
# required options. The region name must be the same as the one
# used by the Keystone service. This option exists to support
# multiple OpenStack/Neutron controllers.
ARISTA_DRIVER_OPTS = [
cfg.StrOpt('eapi_username',
default='',
help=_('Username for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail.')),
cfg.StrOpt('eapi_password',
default='',
secret=True, # do not expose value in the logs
help=_('Password for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail.')),
cfg.StrOpt('eapi_host',
default='',
help=_('Arista EOS IP address. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail.')),
cfg.BoolOpt('use_fqdn',
default=True,
help=_('Defines if hostnames are sent to Arista EOS as FQDNs '
'("node1.domain.com") or as short names ("node1"). '
'This is optional. If not set, a value of "True" '
'is assumed.')),
cfg.IntOpt('sync_interval',
default=30,
help=_('Sync interval in seconds between Neutron plugin and '
'EOS. This interval defines how often the '
'synchronization is performed. This is an optional '
'field. If not set, a value of 30 seconds is '
'assumed.')),
cfg.IntOpt('conn_timeout',
default=10,
help=_('Connection timeout interval in seconds. This interval '
'defines how long an EAPI request from the driver to '
'EOS waits before timing out. If not set, a value of 10 '
'seconds is assumed.')),
cfg.StrOpt('region_name',
default='RegionOne',
help=_('Defines Region Name that is assigned to this OpenStack '
'Controller. This is useful when multiple '
'OpenStack/Neutron controllers are managing the same '
'Arista HW clusters. Note that this name must match '
'the region name registered (or known) to the keystone '
'service. Authentication with Keystone is performed by '
'EOS. This is optional. If not set, a value of '
'"RegionOne" is assumed.')),
cfg.BoolOpt('sec_group_support',
default=False,
help=_('Specifies whether security groups need to be deployed '
'for baremetal deployments. If this flag is set to '
'True, switch_info (see below) must be '
'defined. If this flag is not defined, it is assumed '
'to be False.')),
cfg.ListOpt('switch_info',
default=[],
help=_('This is a comma separated list of Arista switches '
'where security groups (i.e. ACLs) need to be '
'applied. Each string has three values separated '
'by ":" in the following format: '
'<IP of switch>:<username>:<password>, ...... '
'For Example: 172.13.23.55:admin:admin, '
'172.13.23.56:admin:admin, .... '
'This is required if sec_group_support is set to '
'"True"')),
cfg.StrOpt('api_type',
default='JSON',
help=_('Tells the plugin to use a specific API interface '
'to communicate with CVX. Valid options are: '
'EAPI - use EOS\' extensible API. '
'JSON - use EOS\' JSON/REST API.')),
cfg.ListOpt('managed_physnets',
default=[],
help=_('This is a comma separated list of physical networks '
'which are managed by Arista switches. '
'This list will be used by the Arista ML2 plugin '
'to make the decision if it can participate in binding '
'or updating a port. '
'For Example: '
'managed_physnets = arista_network')),
cfg.BoolOpt('manage_fabric',
default=False,
help=_('Specifies whether the Arista ML2 plugin should bind '
'ports to vxlan fabric segments and dynamically '
'allocate vlan segments based on the host to connect '
'the port to the vxlan fabric')),
]
""" Arista L3 Service Plugin specific configuration knobs.
Following are user configurable options for Arista L3 plugin
driver. The eapi_username, eapi_password, and eapi_host are
required options.
"""
ARISTA_L3_PLUGIN = [
cfg.StrOpt('primary_l3_host_username',
default='',
help=_('Username for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.StrOpt('primary_l3_host_password',
default='',
secret=True, # do not expose value in the logs
help=_('Password for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.StrOpt('primary_l3_host',
default='',
help=_('Arista EOS IP address. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.StrOpt('secondary_l3_host',
default='',
help=_('Arista EOS IP address for second Switch MLAGed with '
'the first one. This is an optional field; however, if '
'mlag_config flag is set, then this is required. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.IntOpt('conn_timeout',
default=10,
help=_('Connection timeout interval in seconds. This interval '
'defines how long an EAPI request from the driver to '
'EOS waits before timing out. If not set, a value of 10 '
'seconds is assumed.')),
cfg.BoolOpt('mlag_config',
default=False,
help=_('This flag is used to indicate whether Arista switches are '
'configured in MLAG mode. If yes, all L3 config '
'is pushed to both the switches automatically. '
'If this flag is set to True, ensure to specify IP '
'addresses of both switches. '
'This is optional. If not set, a value of "False" '
'is assumed.')),
cfg.BoolOpt('use_vrf',
default=False,
help=_('A "True" value for this flag indicates to create a '
'router in VRF. If not set, all routers are created '
'in default VRF. '
'This is optional. If not set, a value of "False" '
'is assumed.')),
cfg.IntOpt('l3_sync_interval',
default=180,
help=_('Sync interval in seconds between L3 Service plugin '
'and EOS. This interval defines how often the '
'synchronization is performed. This is an optional '
'field. If not set, a value of 180 seconds is assumed.'))
]
ARISTA_TYPE_DRIVER_OPTS = [
cfg.IntOpt('sync_interval',
default=10,
help=_('VLAN Sync interval in seconds between Neutron plugin '
'and EOS. This interval defines how often the VLAN '
'synchronization is performed. This is an optional '
'field. If not set, a value of 10 seconds is '
'assumed.')),
]
cfg.CONF.register_opts(ARISTA_L3_PLUGIN, "l3_arista")
cfg.CONF.register_opts(ARISTA_DRIVER_OPTS, "ml2_arista")
cfg.CONF.register_opts(ARISTA_TYPE_DRIVER_OPTS, "arista_type_driver")
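The `switch_info` option above carries credentials as `<IP of switch>:<username>:<password>` strings. A hedged, stdlib-only sketch of parsing that format (the helper name is illustrative; the real driver does its own parsing internally):

```python
# Illustrative parser for the switch_info list-option format described
# above: "<IP of switch>:<username>:<password>, ...".
def parse_switch_info(entries):
    """Turn ['ip:user:pass', ...] into {ip: (user, password)}."""
    switches = {}
    for entry in entries:
        ip, user, password = entry.strip().split(':')
        switches[ip] = (user, password)
    return switches

creds = parse_switch_info(['172.13.23.55:admin:admin',
                           '172.13.23.56:admin:admin'])
```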

View File

@@ -1,83 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron_lib.db import constants as db_const
from neutron_lib.db import model_base
import sqlalchemy as sa
UUID_LEN = 36
STR_LEN = 255
class HasTenant(object):
"""Tenant mixin, add to subclasses that have a tenant."""
tenant_id = sa.Column(sa.String(db_const.PROJECT_ID_FIELD_SIZE),
index=True)
class AristaProvisionedNets(model_base.BASEV2, model_base.HasId,
HasTenant):
"""Stores networks provisioned on Arista EOS.
Saves the segmentation ID for each network that is provisioned
on EOS. This information is used during synchronization between
Neutron and EOS.
"""
__tablename__ = 'arista_provisioned_nets'
network_id = sa.Column(sa.String(UUID_LEN))
segmentation_id = sa.Column(sa.Integer)
def eos_network_representation(self, segmentation_type):
return {u'networkId': self.network_id,
u'segmentationTypeId': self.segmentation_id,
u'segmentationType': segmentation_type,
u'tenantId': self.tenant_id,
u'segmentId': self.id,
}
class AristaProvisionedVms(model_base.BASEV2, model_base.HasId,
HasTenant):
"""Stores VMs provisioned on Arista EOS.
All VMs launched on physical hosts connected to Arista
Switches are remembered
"""
__tablename__ = 'arista_provisioned_vms'
vm_id = sa.Column(sa.String(STR_LEN))
host_id = sa.Column(sa.String(STR_LEN))
port_id = sa.Column(sa.String(UUID_LEN))
network_id = sa.Column(sa.String(UUID_LEN))
def eos_port_representation(self):
return {u'portId': self.port_id,
u'deviceId': self.vm_id,
u'hosts': [self.host_id],
u'networkId': self.network_id}
class AristaProvisionedTenants(model_base.BASEV2, model_base.HasId,
HasTenant):
"""Stores Tenants provisioned on Arista EOS.
Tenants list is maintained for sync between Neutron and EOS.
"""
__tablename__ = 'arista_provisioned_tenants'
def eos_tenant_representation(self):
return {u'tenantId': self.tenant_id}
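The EOS-facing representation produced by `eos_network_representation()` above can be shown as plain dicts, with no SQLAlchemy involved (the row values here are made up):

```python
# Plain-dict sketch of AristaProvisionedNets.eos_network_representation();
# field names mirror the model above, the sample values are illustrative.
def eos_network_representation(row, segmentation_type):
    return {u'networkId': row['network_id'],
            u'segmentationTypeId': row['segmentation_id'],
            u'segmentationType': segmentation_type,
            u'tenantId': row['tenant_id'],
            u'segmentId': row['id']}

rep = eos_network_representation(
    {'network_id': 'net-1', 'segmentation_id': 100,
     'tenant_id': 't-1', 'id': 'seg-1'},
    'vlan')
```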

View File

@@ -1,595 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron_lib import constants as n_const
from neutron_lib import context as nctx
from neutron_lib.plugins.ml2 import api as driver_api
import neutron.db.api as db
from neutron.db import db_base_plugin_v2
from neutron.db import securitygroups_db as sec_db
from neutron.db import segments_db
from neutron.plugins.ml2 import models as ml2_models
from networking_arista.common import db as db_models
VLAN_SEGMENTATION = 'vlan'
def remember_tenant(tenant_id):
"""Stores a tenant information in repository.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_writer_session()
with session.begin():
tenant = (session.query(db_models.AristaProvisionedTenants).
filter_by(tenant_id=tenant_id).first())
if not tenant:
tenant = db_models.AristaProvisionedTenants(tenant_id=tenant_id)
session.add(tenant)
def forget_tenant(tenant_id):
"""Removes a tenant information from repository.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_writer_session()
with session.begin():
(session.query(db_models.AristaProvisionedTenants).
filter_by(tenant_id=tenant_id).
delete())
def get_all_tenants():
"""Returns a list of all tenants stored in repository."""
session = db.get_reader_session()
with session.begin():
return session.query(db_models.AristaProvisionedTenants).all()
def num_provisioned_tenants():
"""Returns number of tenants stored in repository."""
session = db.get_reader_session()
with session.begin():
return session.query(db_models.AristaProvisionedTenants).count()
def remember_vm(vm_id, host_id, port_id, network_id, tenant_id):
"""Stores all relevant information about a VM in repository.
:param vm_id: globally unique identifier for VM instance
:param host_id: ID of the host where the VM is placed
:param port_id: globally unique port ID that connects VM to network
:param network_id: globally unique neutron network identifier
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_writer_session()
with session.begin():
vm = db_models.AristaProvisionedVms(
vm_id=vm_id,
host_id=host_id,
port_id=port_id,
network_id=network_id,
tenant_id=tenant_id)
session.add(vm)
def forget_all_ports_for_network(net_id):
"""Removes all ports for a given network fron repository.
:param net_id: globally unique network ID
"""
session = db.get_writer_session()
with session.begin():
(session.query(db_models.AristaProvisionedVms).
filter_by(network_id=net_id).delete())
def update_port(vm_id, host_id, port_id, network_id, tenant_id):
"""Updates the port details in the database.
:param vm_id: globally unique identifier for VM instance
:param host_id: ID of the new host where the VM is placed
:param port_id: globally unique port ID that connects VM to network
:param network_id: globally unique neutron network identifier
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_writer_session()
with session.begin():
port = session.query(db_models.AristaProvisionedVms).filter_by(
port_id=port_id).first()
if port:
# Update the VM's host id
port.host_id = host_id
port.vm_id = vm_id
port.network_id = network_id
port.tenant_id = tenant_id
def forget_port(port_id, host_id):
"""Deletes the port from the database
:param port_id: globally unique port ID that connects VM to network
:param host_id: host to which the port is bound to
"""
session = db.get_writer_session()
with session.begin():
session.query(db_models.AristaProvisionedVms).filter_by(
port_id=port_id,
host_id=host_id).delete()
def remember_network_segment(tenant_id,
network_id, segmentation_id, segment_id):
"""Stores all relevant information about a Network in repository.
:param tenant_id: globally unique neutron tenant identifier
:param network_id: globally unique neutron network identifier
:param segmentation_id: segmentation id that is assigned to the network
:param segment_id: globally unique neutron network segment identifier
"""
session = db.get_writer_session()
with session.begin():
net = db_models.AristaProvisionedNets(
tenant_id=tenant_id,
id=segment_id,
network_id=network_id,
segmentation_id=segmentation_id)
session.add(net)
def forget_network_segment(tenant_id, network_id, segment_id=None):
"""Deletes all relevant information about a Network from repository.
:param tenant_id: globally unique neutron tenant identifier
:param network_id: globally unique neutron network identifier
:param segment_id: globally unique neutron network segment identifier
"""
filters = {
'tenant_id': tenant_id,
'network_id': network_id
}
if segment_id:
filters['id'] = segment_id
session = db.get_writer_session()
with session.begin():
(session.query(db_models.AristaProvisionedNets).
filter_by(**filters).delete())
def get_segmentation_id(tenant_id, network_id):
"""Returns Segmentation ID (VLAN) associated with a network.
:param tenant_id: globally unique neutron tenant identifier
:param network_id: globally unique neutron network identifier
"""
session = db.get_reader_session()
with session.begin():
net = (session.query(db_models.AristaProvisionedNets).
filter_by(tenant_id=tenant_id,
network_id=network_id).first())
return net.segmentation_id if net else None
def is_vm_provisioned(vm_id, host_id, port_id,
network_id, tenant_id):
"""Checks if a VM is already known to EOS
:returns: True, if yes; False otherwise.
:param vm_id: globally unique identifier for VM instance
:param host_id: ID of the host where the VM is placed
:param port_id: globally unique port ID that connects VM to network
:param network_id: globally unique neutron network identifier
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_reader_session()
with session.begin():
num_vm = (session.query(db_models.AristaProvisionedVms).
filter_by(tenant_id=tenant_id,
vm_id=vm_id,
port_id=port_id,
network_id=network_id,
host_id=host_id).count())
return num_vm > 0
def is_port_provisioned(port_id, host_id=None):
"""Checks if a port is already known to EOS
:returns: True, if yes; False otherwise.
:param port_id: globally unique port ID that connects VM to network
:param host_id: host to which the port is bound to
"""
filters = {
'port_id': port_id
}
if host_id:
filters['host_id'] = host_id
session = db.get_reader_session()
with session.begin():
num_ports = (session.query(db_models.AristaProvisionedVms).
filter_by(**filters).count())
return num_ports > 0
def is_network_provisioned(tenant_id, network_id, segmentation_id=None,
segment_id=None):
"""Checks if a networks is already known to EOS
:returns: True, if yes; False otherwise.
:param tenant_id: globally unique neutron tenant identifier
:param network_id: globally unique neutron network identifier
:param segment_id: globally unique neutron network segment identifier
"""
session = db.get_reader_session()
with session.begin():
filters = {'tenant_id': tenant_id,
'network_id': network_id}
if segmentation_id:
filters['segmentation_id'] = segmentation_id
if segment_id:
filters['id'] = segment_id
num_nets = (session.query(db_models.AristaProvisionedNets).
filter_by(**filters).count())
return num_nets > 0
def is_tenant_provisioned(tenant_id):
"""Checks if a tenant is already known to EOS
:returns: True, if yes; False otherwise.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_reader_session()
with session.begin():
num_tenants = (session.query(db_models.AristaProvisionedTenants).
filter_by(tenant_id=tenant_id).count())
return num_tenants > 0
def num_nets_provisioned(tenant_id):
"""Returns number of networks for a given tennat.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_reader_session()
with session.begin():
return (session.query(db_models.AristaProvisionedNets).
filter_by(tenant_id=tenant_id).count())
def num_vms_provisioned(tenant_id):
"""Returns number of VMs for a given tennat.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_reader_session()
with session.begin():
return (session.query(db_models.AristaProvisionedVms).
filter_by(tenant_id=tenant_id).count())
def get_networks(tenant_id):
"""Returns all networks for a given tenant in EOS-compatible format.
See AristaRPCWrapper.get_network_list() for return value format.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_reader_session()
with session.begin():
model = db_models.AristaProvisionedNets
# hack for pep8 E711: comparison to None should be
# 'if cond is not None'
none = None
all_nets = []
if tenant_id != 'any':
all_nets = (session.query(model).
filter(model.tenant_id == tenant_id,
model.segmentation_id != none))
else:
all_nets = (session.query(model).
filter(model.segmentation_id != none))
res = dict(
(net.network_id, net.eos_network_representation(
VLAN_SEGMENTATION))
for net in all_nets
)
return res
def get_vms(tenant_id):
"""Returns all VMs for a given tenant in EOS-compatible format.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_reader_session()
with session.begin():
model = db_models.AristaProvisionedVms
# hack for pep8 E711: comparison to None should be
# 'if cond is not None'
none = None
all_ports = (session.query(model).
filter(model.tenant_id == tenant_id,
model.host_id != none,
model.vm_id != none,
model.network_id != none,
model.port_id != none))
ports = {}
for port in all_ports:
if port.port_id not in ports:
ports[port.port_id] = port.eos_port_representation()
else:
ports[port.port_id]['hosts'].append(port.host_id)
vm_dict = dict()
def eos_vm_representation(port):
return {u'vmId': port['deviceId'],
u'baremetal_instance': False,
u'ports': [port]}
for port in ports.values():
deviceId = port['deviceId']
if deviceId in vm_dict:
vm_dict[deviceId]['ports'].append(port)
else:
vm_dict[deviceId] = eos_vm_representation(port)
return vm_dict
def are_ports_attached_to_network(net_id):
"""Checks if a given network is used by any port, excluding dhcp port.
:param net_id: globally unique network ID
"""
session = db.get_reader_session()
with session.begin():
model = db_models.AristaProvisionedVms
return session.query(model).filter_by(network_id=net_id).filter(
~model.vm_id.startswith('dhcp')).count() > 0
def get_ports(tenant_id=None):
"""Returns all ports of VMs in EOS-compatible format.
:param tenant_id: globally unique neutron tenant identifier
"""
session = db.get_reader_session()
with session.begin():
model = db_models.AristaProvisionedVms
# hack for pep8 E711: comparison to None should be
# 'if cond is not None'
none = None
if tenant_id:
all_ports = (session.query(model).
filter(model.tenant_id == tenant_id,
model.host_id != none,
model.vm_id != none,
model.network_id != none,
model.port_id != none))
else:
all_ports = (session.query(model).
filter(model.tenant_id != none,
model.host_id != none,
model.vm_id != none,
model.network_id != none,
model.port_id != none))
ports = {}
for port in all_ports:
if port.port_id not in ports:
ports[port.port_id] = port.eos_port_representation()
else:
# eos_port_representation() already records the first host
ports[port.port_id]['hosts'].append(port.host_id)
return ports
def get_tenants():
"""Returns list of all tenants in EOS-compatible format."""
session = db.get_reader_session()
with session.begin():
model = db_models.AristaProvisionedTenants
all_tenants = session.query(model)
res = dict(
(tenant.tenant_id, tenant.eos_tenant_representation())
for tenant in all_tenants
)
return res
def _make_port_dict(record):
"""Make a dict from the BM profile DB record."""
return {'port_id': record.port_id,
'host_id': record.host,
'vnic_type': record.vnic_type,
'profile': record.profile}
def get_all_baremetal_ports():
"""Returns a list of all ports that belong to baremetal hosts."""
session = db.get_reader_session()
with session.begin():
query = session.query(ml2_models.PortBinding)
bm_ports = query.filter_by(vnic_type='baremetal').all()
return {bm_port.port_id: _make_port_dict(bm_port)
for bm_port in bm_ports}
def get_all_portbindings():
"""Returns a list of all ports bindings."""
session = db.get_reader_session()
with session.begin():
query = session.query(ml2_models.PortBinding)
ports = query.all()
return {port.port_id: _make_port_dict(port)
for port in ports}
def get_port_binding_level(filters):
"""Returns entries from PortBindingLevel based on the specified filters."""
session = db.get_reader_session()
with session.begin():
return (session.query(ml2_models.PortBindingLevel).
filter_by(**filters).all())
class NeutronNets(db_base_plugin_v2.NeutronDbPluginV2,
sec_db.SecurityGroupDbMixin):
"""Access to Neutron DB.
Provides access to the Neutron Data bases for all provisioned
networks as well ports. This data is used during the synchronization
of DB between ML2 Mechanism Driver and Arista EOS
Names of the networks and ports are not stroed in Arista repository
They are pulled from Neutron DB.
"""
def __init__(self):
self.admin_ctx = nctx.get_admin_context()
def get_network_name(self, tenant_id, network_id):
network = self._get_network(tenant_id, network_id)
network_name = None
if network:
network_name = network[0]['name']
return network_name
def get_all_networks_for_tenant(self, tenant_id):
filters = {'tenant_id': [tenant_id]}
return super(NeutronNets,
self).get_networks(self.admin_ctx, filters=filters) or []
def get_all_networks(self):
return super(NeutronNets, self).get_networks(self.admin_ctx) or []
def get_all_ports_for_tenant(self, tenant_id):
filters = {'tenant_id': [tenant_id]}
return super(NeutronNets,
self).get_ports(self.admin_ctx, filters=filters) or []
def get_shared_network_owner_id(self, network_id):
filters = {'id': [network_id]}
nets = self.get_networks(self.admin_ctx, filters=filters) or []
segments = segments_db.get_network_segments(self.admin_ctx,
network_id)
if not nets or not segments:
return
if (nets[0]['shared'] and
segments[0][driver_api.NETWORK_TYPE] == n_const.TYPE_VLAN):
return nets[0]['tenant_id']
def get_network_segments(self, network_id, dynamic=False, context=None):
context = context if context is not None else self.admin_ctx
segments = segments_db.get_network_segments(context, network_id,
filter_dynamic=dynamic)
if dynamic:
for segment in segments:
segment['is_dynamic'] = True
return segments
def get_all_network_segments(self, network_id, context=None):
segments = self.get_network_segments(network_id, context=context)
segments += self.get_network_segments(network_id, dynamic=True,
context=context)
return segments
def get_segment_by_id(self, context, segment_id):
return segments_db.get_segment_by_id(context,
segment_id)
def get_network_from_net_id(self, network_id, context=None):
filters = {'id': [network_id]}
ctxt = context if context else self.admin_ctx
return super(NeutronNets,
self).get_networks(ctxt, filters=filters) or []
def _get_network(self, tenant_id, network_id):
filters = {'tenant_id': [tenant_id],
'id': [network_id]}
return super(NeutronNets,
self).get_networks(self.admin_ctx, filters=filters) or []
def get_subnet_info(self, subnet_id):
return self.get_subnet(subnet_id)
def get_subnet_ip_version(self, subnet_id):
subnet = self.get_subnet(subnet_id)
return subnet['ip_version'] if 'ip_version' in subnet else None
def get_subnet_gateway_ip(self, subnet_id):
subnet = self.get_subnet(subnet_id)
return subnet['gateway_ip'] if 'gateway_ip' in subnet else None
def get_subnet_cidr(self, subnet_id):
subnet = self.get_subnet(subnet_id)
return subnet['cidr'] if 'cidr' in subnet else None
def get_network_id(self, subnet_id):
subnet = self.get_subnet(subnet_id)
return subnet['network_id'] if 'network_id' in subnet else None
def get_network_id_from_port_id(self, port_id):
port = self.get_port(port_id)
return port['network_id'] if 'network_id' in port else None
def get_subnet(self, subnet_id):
return super(NeutronNets,
self).get_subnet(self.admin_ctx, subnet_id) or {}
def get_port(self, port_id):
return super(NeutronNets,
self).get_port(self.admin_ctx, port_id) or {}
def get_all_security_gp_to_port_bindings(self):
return super(NeutronNets, self)._get_port_security_group_bindings(
self.admin_ctx) or []
def get_security_gp_to_port_bindings(self, sec_gp_id):
filters = {'security_group_id': [sec_gp_id]}
return super(NeutronNets, self)._get_port_security_group_bindings(
self.admin_ctx, filters=filters) or []
def get_security_group(self, sec_gp_id):
return super(NeutronNets,
self).get_security_group(self.admin_ctx, sec_gp_id) or []
def get_security_groups(self):
sgs = super(NeutronNets,
self).get_security_groups(self.admin_ctx) or []
sgs_all = {}
if sgs:
for s in sgs:
sgs_all[s['id']] = s
return sgs_all
def get_security_group_rule(self, sec_gpr_id):
return super(NeutronNets,
self).get_security_group_rule(self.admin_ctx,
sec_gpr_id) or []
def validate_network_rbac_policy_change(self, resource, event, trigger,
context, object_type, policy,
**kwargs):
return super(NeutronNets, self).validate_network_rbac_policy_change(
resource, event, trigger, context, object_type, policy, kwargs)
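The port-to-VM grouping performed inside `get_vms()` above can be replayed on plain dicts, with no SQLAlchemy session (the sample port data is made up):

```python
# Stdlib sketch of the grouping in get_vms(): ports sharing a deviceId
# are collected under one VM entry, mirroring eos_vm_representation().
def group_ports_by_vm(ports):
    vms = {}
    for port in ports:
        device_id = port[u'deviceId']
        if device_id in vms:
            vms[device_id][u'ports'].append(port)
        else:
            vms[device_id] = {u'vmId': device_id,
                              u'baremetal_instance': False,
                              u'ports': [port]}
    return vms

vms = group_ports_by_vm([
    {u'portId': 'p1', u'deviceId': 'vm1', u'hosts': ['h1'],
     u'networkId': 'n1'},
    {u'portId': 'p2', u'deviceId': 'vm1', u'hosts': ['h1'],
     u'networkId': 'n2'},
])
```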

View File

@ -1,55 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Exceptions used by Arista ML2 Mechanism Driver."""
from neutron_lib import exceptions
from networking_arista._i18n import _
class AristaRpcError(exceptions.NeutronException):
message = _('%(msg)s')
class AristaConfigError(exceptions.NeutronException):
message = _('%(msg)s')
class AristaServicePluginRpcError(exceptions.NeutronException):
message = _('%(msg)s')
class AristaServicePluginConfigError(exceptions.NeutronException):
message = _('%(msg)s')
class AristaSecurityGroupError(exceptions.NeutronException):
message = _('%(msg)s')
class VlanUnavailable(exceptions.NeutronException):
"""An exception indicating VLAN creation failed because it's not available.
A specialization of the NeutronException indicating network creation failed
because a specified VLAN is unavailable on the physical network.
:param vlan_id: The VLAN ID.
:param physical_network: The physical network.
"""
message = _("Unable to create the network. "
"The VLAN %(vlan_id)s on physical network "
"%(physical_network)s is not available.")
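Each exception class above supplies only a class-level `message` template; neutron_lib's NeutronException interpolates it with the keyword arguments passed at raise time. A standalone sketch of that mechanism, with a hypothetical base class re-implemented here purely for illustration (the real one lives in neutron_lib.exceptions):

```python
# FakeNeutronException is an invented stand-in for
# neutron_lib.exceptions.NeutronException.
class FakeNeutronException(Exception):
    message = 'An unknown exception occurred.'

    def __init__(self, **kwargs):
        # Interpolate the class-level template with the raise-time kwargs.
        super().__init__(self.message % kwargs)

class AristaRpcError(FakeNeutronException):
    message = '%(msg)s'

try:
    raise AristaRpcError(msg='Unable to reach EOS')
except AristaRpcError as e:
    print(e)  # Unable to reach EOS
```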

View File

@ -1 +0,0 @@
Alembic database migration scripts for the networking-arista package.

View File

@ -1 +0,0 @@
Alembic database migration scripts for the networking-arista package.

View File

@ -1 +0,0 @@
Generic single-database configuration.

View File

@ -1,123 +0,0 @@
# Copyright (c) 2015 Arista Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from logging.config import fileConfig
from alembic import context
from neutron_lib.db import model_base
from oslo_config import cfg
from oslo_db.sqlalchemy import session
import sqlalchemy as sa
from sqlalchemy import event
from neutron.db.migration.alembic_migrations import external
from neutron.db.migration.models import head # noqa
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
neutron_config = config.neutron_config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = model_base.BASEV2.metadata
MYSQL_ENGINE = None
ARISTA_VERSION_TABLE = 'arista_alembic_version'
def set_mysql_engine():
try:
mysql_engine = neutron_config.command.mysql_engine
except cfg.NoSuchOptError:
mysql_engine = None
global MYSQL_ENGINE
MYSQL_ENGINE = (mysql_engine or
model_base.BASEV2.__table_args__['mysql_engine'])
def include_object(object, name, type_, reflected, compare_to):
if type_ == 'table' and name in external.TABLES:
return False
else:
return True
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL or an Engine.
Calls to context.execute() here emit the given string to the
script output.
"""
set_mysql_engine()
kwargs = dict()
if neutron_config.database.connection:
kwargs['url'] = neutron_config.database.connection
else:
kwargs['dialect_name'] = neutron_config.database.engine
kwargs['include_object'] = include_object
kwargs['version_table'] = ARISTA_VERSION_TABLE
context.configure(**kwargs)
with context.begin_transaction():
context.run_migrations()
@event.listens_for(sa.Table, 'after_parent_attach')
def set_storage_engine(target, parent):
if MYSQL_ENGINE:
target.kwargs['mysql_engine'] = MYSQL_ENGINE
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
set_mysql_engine()
engine = session.create_engine(neutron_config.database.connection)
connection = engine.connect()
context.configure(
connection=connection,
target_metadata=target_metadata,
include_object=include_object,
version_table=ARISTA_VERSION_TABLE,
)
try:
with context.begin_transaction():
context.run_migrations()
finally:
connection.close()
engine.dispose()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

View File

@ -1,36 +0,0 @@
# Copyright ${create_date.year} Arista Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}

View File

@ -1,29 +0,0 @@
# Copyright (c) 2015 Arista Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Initial db version
Revision ID: 296b4e0236e0
Create Date: 2015-10-23 14:37:49.594974
"""
# revision identifiers, used by Alembic.
revision = '296b4e0236e0'
down_revision = None
def upgrade():
pass

View File

@ -1,32 +0,0 @@
# Copyright (c) 2015 Arista Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Initial db version
Revision ID: 47036dc8697a
Create Date: 2015-10-23 14:37:49.594974
"""
from neutron.db.migration import cli
# revision identifiers, used by Alembic.
revision = '47036dc8697a'
down_revision = '296b4e0236e0'
branch_labels = (cli.CONTRACT_BRANCH,)
def upgrade():
pass

View File

@ -1,32 +0,0 @@
# Copyright (c) 2015 Arista Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Initial db version
Revision ID: 1c6993ce7db0
Create Date: 2015-10-23 14:37:49.594974
"""
from neutron.db.migration import cli
# revision identifiers, used by Alembic.
revision = '1c6993ce7db0'
down_revision = '296b4e0236e0'
branch_labels = (cli.EXPAND_BRANCH,)
def upgrade():
pass

View File

@ -1,457 +0,0 @@
# Copyright 2014 Arista Networks, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import hashlib
import socket
import struct
from oslo_config import cfg
from oslo_log import log as logging
from networking_arista._i18n import _, _LI
from networking_arista.common import api
from networking_arista.common import exceptions as arista_exc
LOG = logging.getLogger(__name__)
cfg.CONF.import_group('l3_arista', 'networking_arista.common.config')
EOS_UNREACHABLE_MSG = _('Unable to reach EOS')
DEFAULT_VLAN = 1
MLAG_SWITCHES = 2
VIRTUAL_ROUTER_MAC = '00:11:22:33:44:55'
IPV4_BITS = 32
IPV6_BITS = 128
# This string-format-at-a-distance confuses pylint :(
# pylint: disable=too-many-format-args
router_in_vrf = {
'router': {'create': ['vrf definition {0}',
'rd {1}',
'exit'],
'delete': ['no vrf definition {0}']},
'interface': {'add': ['ip routing vrf {1}',
'vlan {0}',
'exit',
'interface vlan {0}',
'vrf forwarding {1}',
'ip address {2}'],
'remove': ['no interface vlan {0}']}}
router_in_default_vrf = {
    'router': {'create': [],  # Placeholder for now.
               'delete': []},  # Placeholder for now.
'interface': {'add': ['ip routing',
'vlan {0}',
'exit',
'interface vlan {0}',
'ip address {2}'],
'remove': ['no interface vlan {0}']}}
router_in_default_vrf_v6 = {
'router': {'create': [],
'delete': []},
'interface': {'add': ['ipv6 unicast-routing',
'vlan {0}',
'exit',
'interface vlan {0}',
'ipv6 enable',
'ipv6 address {2}'],
'remove': ['no interface vlan {0}']}}
additional_cmds_for_mlag = {
'router': {'create': ['ip virtual-router mac-address {0}'],
'delete': []},
'interface': {'add': ['ip virtual-router address {0}'],
'remove': []}}
additional_cmds_for_mlag_v6 = {
'router': {'create': [],
'delete': []},
'interface': {'add': ['ipv6 virtual-router address {0}'],
'remove': []}}
class AristaL3Driver(object):
"""Wraps Arista JSON RPC.
All communications between Neutron and EOS are over JSON RPC.
EOS - operating system used on Arista hardware
Command API - JSON RPC API provided by Arista EOS
"""
def __init__(self):
self._servers = []
self._hosts = []
self._interfaceDict = None
self._validate_config()
host = cfg.CONF.l3_arista.primary_l3_host
self._hosts.append(host)
self._servers.append(self._make_eapi_client(host))
self._mlag_configured = cfg.CONF.l3_arista.mlag_config
self._use_vrf = cfg.CONF.l3_arista.use_vrf
if self._mlag_configured:
host = cfg.CONF.l3_arista.secondary_l3_host
self._hosts.append(host)
self._servers.append(self._make_eapi_client(host))
self._additionalRouterCmdsDict = additional_cmds_for_mlag['router']
self._additionalInterfaceCmdsDict = (
additional_cmds_for_mlag['interface'])
if self._use_vrf:
self.routerDict = router_in_vrf['router']
self._interfaceDict = router_in_vrf['interface']
else:
self.routerDict = router_in_default_vrf['router']
self._interfaceDict = router_in_default_vrf['interface']
@staticmethod
def _make_eapi_client(host):
return api.EAPIClient(
host,
username=cfg.CONF.l3_arista.primary_l3_host_username,
password=cfg.CONF.l3_arista.primary_l3_host_password,
verify=False,
timeout=cfg.CONF.l3_arista.conn_timeout
)
def _validate_config(self):
if cfg.CONF.l3_arista.get('primary_l3_host') == '':
msg = _('Required option primary_l3_host is not set')
LOG.error(msg)
raise arista_exc.AristaServicePluginConfigError(msg=msg)
if cfg.CONF.l3_arista.get('mlag_config'):
if cfg.CONF.l3_arista.get('use_vrf'):
                # This is an invalid/unsupported configuration
                msg = _('VRFs are not supported in MLAG config mode')
LOG.error(msg)
raise arista_exc.AristaServicePluginConfigError(msg=msg)
if cfg.CONF.l3_arista.get('secondary_l3_host') == '':
msg = _('Required option secondary_l3_host is not set')
LOG.error(msg)
raise arista_exc.AristaServicePluginConfigError(msg=msg)
if cfg.CONF.l3_arista.get('primary_l3_host_username') == '':
msg = _('Required option primary_l3_host_username is not set')
LOG.error(msg)
raise arista_exc.AristaServicePluginConfigError(msg=msg)
def create_router_on_eos(self, router_name, rdm, server):
"""Creates a router on Arista HW Device.
:param router_name: globally unique identifier for router/VRF
:param rdm: A value generated by hashing router name
:param server: Server endpoint on the Arista switch to be configured
"""
cmds = []
rd = "%s:%s" % (rdm, rdm)
for c in self.routerDict['create']:
cmds.append(c.format(router_name, rd))
if self._mlag_configured:
mac = VIRTUAL_ROUTER_MAC
for c in self._additionalRouterCmdsDict['create']:
cmds.append(c.format(mac))
self._run_openstack_l3_cmds(cmds, server)
def delete_router_from_eos(self, router_name, server):
"""Deletes a router from Arista HW Device.
:param router_name: globally unique identifier for router/VRF
:param server: Server endpoint on the Arista switch to be configured
"""
cmds = []
for c in self.routerDict['delete']:
cmds.append(c.format(router_name))
if self._mlag_configured:
for c in self._additionalRouterCmdsDict['delete']:
cmds.append(c)
self._run_openstack_l3_cmds(cmds, server)
def _select_dicts(self, ipv):
if self._use_vrf:
self._interfaceDict = router_in_vrf['interface']
else:
if ipv == 6:
                # for IPv6 use IPv6 commands
self._interfaceDict = router_in_default_vrf_v6['interface']
self._additionalInterfaceCmdsDict = (
additional_cmds_for_mlag_v6['interface'])
else:
self._interfaceDict = router_in_default_vrf['interface']
self._additionalInterfaceCmdsDict = (
additional_cmds_for_mlag['interface'])
def add_interface_to_router(self, segment_id,
router_name, gip, router_ip, mask, server):
"""Adds an interface to existing HW router on Arista HW device.
:param segment_id: VLAN Id associated with interface that is added
:param router_name: globally unique identifier for router/VRF
:param gip: Gateway IP associated with the subnet
:param router_ip: IP address of the router
:param mask: subnet mask to be used
:param server: Server endpoint on the Arista switch to be configured
"""
if not segment_id:
segment_id = DEFAULT_VLAN
cmds = []
for c in self._interfaceDict['add']:
if self._mlag_configured:
                # In VARP config, use the router IP; otherwise, use the
                # gateway IP address.
ip = router_ip
else:
ip = gip + '/' + mask
cmds.append(c.format(segment_id, router_name, ip))
if self._mlag_configured:
for c in self._additionalInterfaceCmdsDict['add']:
cmds.append(c.format(gip))
self._run_openstack_l3_cmds(cmds, server)
def delete_interface_from_router(self, segment_id, router_name, server):
"""Deletes an interface from existing HW router on Arista HW device.
:param segment_id: VLAN Id associated with interface that is added
:param router_name: globally unique identifier for router/VRF
:param server: Server endpoint on the Arista switch to be configured
"""
if not segment_id:
segment_id = DEFAULT_VLAN
cmds = []
for c in self._interfaceDict['remove']:
cmds.append(c.format(segment_id))
self._run_openstack_l3_cmds(cmds, server)
def create_router(self, context, tenant_id, router):
"""Creates a router on Arista Switch.
Deals with multiple configurations - such as Router per VRF,
a router in default VRF, Virtual Router in MLAG configurations
"""
if router:
router_name = self._arista_router_name(tenant_id, router['name'])
hashed = hashlib.sha256(router_name.encode('utf-8'))
rdm = str(int(hashed.hexdigest(), 16) % 65536)
mlag_peer_failed = False
for s in self._servers:
try:
self.create_router_on_eos(router_name, rdm, s)
mlag_peer_failed = False
except Exception:
if self._mlag_configured and not mlag_peer_failed:
                        # In a paired-switch setup, it is OK to fail on
                        # one switch
mlag_peer_failed = True
else:
msg = (_('Failed to create router %s on EOS') %
router_name)
LOG.exception(msg)
raise arista_exc.AristaServicePluginRpcError(msg=msg)
def delete_router(self, context, tenant_id, router_id, router):
"""Deletes a router from Arista Switch."""
if router:
router_name = self._arista_router_name(tenant_id, router['name'])
mlag_peer_failed = False
for s in self._servers:
try:
self.delete_router_from_eos(router_name, s)
mlag_peer_failed = False
except Exception:
if self._mlag_configured and not mlag_peer_failed:
                        # In a paired-switch setup, it is OK to fail on
                        # one switch
mlag_peer_failed = True
else:
                        msg = (_('Failed to delete router %s on EOS') %
                               router_name)
LOG.exception(msg)
raise arista_exc.AristaServicePluginRpcError(msg=msg)
def update_router(self, context, router_id, original_router, new_router):
"""Updates a router which is already created on Arista Switch.
TODO: (Sukhdev) - to be implemented in next release.
"""
pass
def add_router_interface(self, context, router_info):
"""Adds an interface to a router created on Arista HW router.
This deals with both IPv6 and IPv4 configurations.
"""
if router_info:
self._select_dicts(router_info['ip_version'])
cidr = router_info['cidr']
subnet_mask = cidr.split('/')[1]
router_name = self._arista_router_name(router_info['tenant_id'],
router_info['name'])
if self._mlag_configured:
# For MLAG, we send a specific IP address as opposed to cidr
# For now, we are using x.x.x.253 and x.x.x.254 as virtual IP
mlag_peer_failed = False
for i, server in enumerate(self._servers):
# Get appropriate virtual IP address for this router
router_ip = self._get_router_ip(cidr, i,
router_info['ip_version'])
try:
self.add_interface_to_router(router_info['seg_id'],
router_name,
router_info['gip'],
router_ip, subnet_mask,
server)
mlag_peer_failed = False
except Exception:
if not mlag_peer_failed:
mlag_peer_failed = True
else:
msg = (_('Failed to add interface to router '
'%s on EOS') % router_name)
LOG.exception(msg)
raise arista_exc.AristaServicePluginRpcError(
msg=msg)
else:
for s in self._servers:
self.add_interface_to_router(router_info['seg_id'],
router_name,
router_info['gip'],
None, subnet_mask, s)
def remove_router_interface(self, context, router_info):
"""Removes previously configured interface from router on Arista HW.
This deals with both IPv6 and IPv4 configurations.
"""
if router_info:
router_name = self._arista_router_name(router_info['tenant_id'],
router_info['name'])
mlag_peer_failed = False
for s in self._servers:
try:
self.delete_interface_from_router(router_info['seg_id'],
router_name, s)
if self._mlag_configured:
mlag_peer_failed = False
except Exception:
if self._mlag_configured and not mlag_peer_failed:
mlag_peer_failed = True
else:
                        msg = (_('Failed to remove interface from router '
                                 '%s on EOS') % router_name)
LOG.exception(msg)
raise arista_exc.AristaServicePluginRpcError(msg=msg)
def _run_openstack_l3_cmds(self, commands, server):
"""Execute/sends a CAPI (Command API) command to EOS.
        In this method, the list of commands is wrapped with prefix and
        postfix commands to make it understandable by EOS.
:param commands : List of command to be executed on EOS.
:param server: Server endpoint on the Arista switch to be configured
"""
command_start = ['enable', 'configure']
command_end = ['exit']
full_command = command_start + commands + command_end
LOG.info(_LI('Executing command on Arista EOS: %s'), full_command)
try:
# this returns array of return values for every command in
# full_command list
ret = server.execute(full_command)
LOG.info(_LI('Results of execution on Arista EOS: %s'), ret)
except Exception:
msg = (_('Error occurred while trying to execute '
'commands %(cmd)s on EOS %(host)s') %
{'cmd': full_command, 'host': server})
LOG.exception(msg)
raise arista_exc.AristaServicePluginRpcError(msg=msg)
def _arista_router_name(self, tenant_id, name):
"""Generate an arista specific name for this router.
        Use a unique name so that OpenStack-created routers/SVIs
        can be distinguished from user-created routers/SVIs
        on Arista HW.
"""
return 'OS' + '-' + tenant_id + '-' + name
def _get_binary_from_ipv4(self, ip_addr):
"""Converts IPv4 address to binary form."""
return struct.unpack("!L", socket.inet_pton(socket.AF_INET,
ip_addr))[0]
def _get_binary_from_ipv6(self, ip_addr):
"""Converts IPv6 address to binary form."""
hi, lo = struct.unpack("!QQ", socket.inet_pton(socket.AF_INET6,
ip_addr))
return (hi << 64) | lo
def _get_ipv4_from_binary(self, bin_addr):
"""Converts binary address to Ipv4 format."""
return socket.inet_ntop(socket.AF_INET, struct.pack("!L", bin_addr))
def _get_ipv6_from_binary(self, bin_addr):
"""Converts binary address to Ipv6 format."""
hi = bin_addr >> 64
        lo = bin_addr & 0xFFFFFFFFFFFFFFFF  # low 64 bits, not 32
return socket.inet_ntop(socket.AF_INET6, struct.pack("!QQ", hi, lo))
def _get_router_ip(self, cidr, ip_count, ip_ver):
"""For a given IP subnet and IP version type, generate IP for router.
        This method takes the network address (cidr) and selects an
        IP address that should be assigned to the virtual router running
        on multiple switches. It uses the upper addresses of the subnet
        as router IPs. Each instance of the router, on each switch,
        requires a unique IP address. For example, in the IPv4 case on a
        /24 subnet, it will pick X.X.X.254 as the first address,
        X.X.X.253 for the next, and so on.
"""
start_ip = MLAG_SWITCHES + ip_count
network_addr, prefix = cidr.split('/')
if ip_ver == 4:
bits = IPV4_BITS
ip = self._get_binary_from_ipv4(network_addr)
elif ip_ver == 6:
bits = IPV6_BITS
ip = self._get_binary_from_ipv6(network_addr)
mask = (pow(2, bits) - 1) << (bits - int(prefix))
network_addr = ip & mask
router_ip = pow(2, bits - int(prefix)) - start_ip
router_ip = network_addr | router_ip
if ip_ver == 4:
return self._get_ipv4_from_binary(router_ip) + '/' + prefix
else:
return self._get_ipv6_from_binary(router_ip) + '/' + prefix
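The address selection in _get_router_ip() can be sketched standalone for the IPv4 case: each switch in the MLAG pair gets its own address from the top of the subnet (.254, .253, ...). The helper below is an illustration extracted from the logic above, not the driver's actual API:

```python
import socket
import struct

MLAG_SWITCHES = 2
IPV4_BITS = 32

def get_router_ip_v4(cidr, ip_count):
    # ip_count is the index of the switch in the MLAG pair (0 or 1).
    start_ip = MLAG_SWITCHES + ip_count
    network_addr, prefix = cidr.split('/')
    # Convert the dotted-quad network address to a 32-bit integer.
    ip = struct.unpack('!L', socket.inet_pton(socket.AF_INET,
                                              network_addr))[0]
    bits = IPV4_BITS
    mask = (pow(2, bits) - 1) << (bits - int(prefix))
    # Pick a host number counting down from the top of the subnet.
    host = pow(2, bits - int(prefix)) - start_ip
    router_ip = (ip & mask) | host
    return socket.inet_ntop(socket.AF_INET,
                            struct.pack('!L', router_ip)) + '/' + prefix

print(get_router_ip_v4('10.0.0.0/24', 0))  # 10.0.0.254/24 (first switch)
print(get_router_ip_v4('10.0.0.0/24', 1))  # 10.0.0.253/24 (second switch)
```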

View File

@ -1,277 +0,0 @@
# Copyright 2014 Arista Networks, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import threading
from neutron_lib import constants as n_const
from neutron_lib import context as nctx
from neutron_lib.plugins import constants as plugin_constants
from oslo_config import cfg
from oslo_log import helpers as log_helpers
from oslo_log import log as logging
from oslo_utils import excutils
from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api
from neutron.api.rpc.handlers import l3_rpc
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron.db import db_base_plugin_v2
from neutron.db import extraroute_db
from neutron.db import l3_agentschedulers_db
from neutron.db import l3_gwmode_db
from neutron.plugins.ml2.driver_context import NetworkContext # noqa
from networking_arista._i18n import _LE, _LI
from networking_arista.common import db_lib
from networking_arista.l3Plugin import arista_l3_driver
LOG = logging.getLogger(__name__)
class AristaL3ServicePlugin(db_base_plugin_v2.NeutronDbPluginV2,
extraroute_db.ExtraRoute_db_mixin,
l3_gwmode_db.L3_NAT_db_mixin,
l3_agentschedulers_db.L3AgentSchedulerDbMixin):
"""Implements L3 Router service plugin for Arista hardware.
Creates routers in Arista hardware, manages them, adds/deletes interfaces
to the routes.
"""
supported_extension_aliases = ["router", "ext-gw-mode",
"extraroute"]
def __init__(self, driver=None):
self.driver = driver or arista_l3_driver.AristaL3Driver()
self.ndb = db_lib.NeutronNets()
self.setup_rpc()
self.sync_timeout = cfg.CONF.l3_arista.l3_sync_interval
self.sync_lock = threading.Lock()
self._synchronization_thread()
def setup_rpc(self):
# RPC support
self.topic = topics.L3PLUGIN
self.conn = n_rpc.create_connection()
self.agent_notifiers.update(
{n_const.AGENT_TYPE_L3: l3_rpc_agent_api.L3AgentNotifyAPI()})
self.endpoints = [l3_rpc.L3RpcCallback()]
self.conn.create_consumer(self.topic, self.endpoints,
fanout=False)
self.conn.consume_in_threads()
def get_plugin_type(self):
return plugin_constants.L3
def get_plugin_description(self):
"""Returns string description of the plugin."""
return ("Arista L3 Router Service Plugin for Arista Hardware "
"based routing")
def _synchronization_thread(self):
with self.sync_lock:
self.synchronize()
self.timer = threading.Timer(self.sync_timeout,
self._synchronization_thread)
self.timer.start()
def stop_synchronization_thread(self):
if self.timer:
self.timer.cancel()
self.timer = None
@log_helpers.log_method_call
def create_router(self, context, router):
"""Create a new router entry in DB, and create it Arista HW."""
tenant_id = router['router']['tenant_id']
# Add router to the DB
new_router = super(AristaL3ServicePlugin, self).create_router(
context,
router)
# create router on the Arista Hw
try:
self.driver.create_router(context, tenant_id, new_router)
return new_router
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Error creating router on Arista HW router=%s "),
new_router)
super(AristaL3ServicePlugin, self).delete_router(
context, new_router['id'])
@log_helpers.log_method_call
def update_router(self, context, router_id, router):
"""Update an existing router in DB, and update it in Arista HW."""
# Read existing router record from DB
original_router = super(AristaL3ServicePlugin, self).get_router(
context, router_id)
# Update router DB
new_router = super(AristaL3ServicePlugin, self).update_router(
context, router_id, router)
# Modify router on the Arista Hw
try:
self.driver.update_router(context, router_id,
original_router, new_router)
return new_router
except Exception:
LOG.error(_LE("Error updating router on Arista HW router=%s "),
new_router)
@log_helpers.log_method_call
def delete_router(self, context, router_id):
"""Delete an existing router from Arista HW as well as from the DB."""
router = super(AristaL3ServicePlugin, self).get_router(context,
router_id)
tenant_id = router['tenant_id']
# Delete router on the Arista Hw
try:
self.driver.delete_router(context, tenant_id, router_id, router)
except Exception as e:
LOG.error(_LE("Error deleting router on Arista HW "
"router %(r)s exception=%(e)s"),
{'r': router, 'e': e})
super(AristaL3ServicePlugin, self).delete_router(context, router_id)
@log_helpers.log_method_call
def add_router_interface(self, context, router_id, interface_info):
"""Add a subnet of a network to an existing router."""
new_router = super(AristaL3ServicePlugin, self).add_router_interface(
context, router_id, interface_info)
# Get network info for the subnet that is being added to the router.
# Check if the interface information is by port-id or subnet-id
add_by_port, add_by_sub = self._validate_interface_info(interface_info)
if add_by_sub:
subnet = self.get_subnet(context, interface_info['subnet_id'])
elif add_by_port:
port = self.get_port(context, interface_info['port_id'])
subnet_id = port['fixed_ips'][0]['subnet_id']
subnet = self.get_subnet(context, subnet_id)
network_id = subnet['network_id']
# To create SVI's in Arista HW, the segmentation Id is required
# for this network.
ml2_db = NetworkContext(self, context, {'id': network_id})
seg_id = ml2_db.network_segments[0]['segmentation_id']
# Package all the info needed for Hw programming
router = super(AristaL3ServicePlugin, self).get_router(context,
router_id)
router_info = copy.deepcopy(new_router)
router_info['seg_id'] = seg_id
router_info['name'] = router['name']
router_info['cidr'] = subnet['cidr']
router_info['gip'] = subnet['gateway_ip']
router_info['ip_version'] = subnet['ip_version']
try:
self.driver.add_router_interface(context, router_info)
return new_router
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Error Adding subnet %(subnet)s to "
"router %(router_id)s on Arista HW"),
{'subnet': subnet, 'router_id': router_id})
super(AristaL3ServicePlugin, self).remove_router_interface(
context,
router_id,
interface_info)
@log_helpers.log_method_call
def remove_router_interface(self, context, router_id, interface_info):
"""Remove a subnet of a network from an existing router."""
new_router = (
super(AristaL3ServicePlugin, self).remove_router_interface(
context, router_id, interface_info))
# Get network information of the subnet that is being removed
subnet = self.get_subnet(context, new_router['subnet_id'])
network_id = subnet['network_id']
# For SVI removal from Arista HW, segmentation ID is needed
ml2_db = NetworkContext(self, context, {'id': network_id})
seg_id = ml2_db.network_segments[0]['segmentation_id']
router = super(AristaL3ServicePlugin, self).get_router(context,
router_id)
router_info = copy.deepcopy(new_router)
router_info['seg_id'] = seg_id
router_info['name'] = router['name']
try:
self.driver.remove_router_interface(context, router_info)
return new_router
except Exception as exc:
LOG.error(_LE("Error removing interface %(interface)s from "
"router %(router_id)s on Arista HW"
"Exception =(exc)s"),
{'interface': interface_info, 'router_id': router_id,
'exc': exc})
def synchronize(self):
"""Synchronizes Router DB from Neturon DB with EOS.
Walks through the Neturon Db and ensures that all the routers
created in Netuton DB match with EOS. After creating appropriate
routers, it ensures to add interfaces as well.
Uses idempotent properties of EOS configuration, which means
same commands can be repeated.
"""
LOG.info(_LI('Syncing Neutron Router DB <-> EOS'))
ctx = nctx.get_admin_context()
routers = super(AristaL3ServicePlugin, self).get_routers(ctx)
for r in routers:
tenant_id = r['tenant_id']
ports = self.ndb.get_all_ports_for_tenant(tenant_id)
try:
self.driver.create_router(self, tenant_id, r)
except Exception:
continue
# Figure out which interfaces are added to this router
for p in ports:
if p['device_id'] == r['id']:
net_id = p['network_id']
subnet_id = p['fixed_ips'][0]['subnet_id']
subnet = self.ndb.get_subnet_info(subnet_id)
ml2_db = NetworkContext(self, ctx, {'id': net_id})
seg_id = ml2_db.network_segments[0]['segmentation_id']
r['seg_id'] = seg_id
r['cidr'] = subnet['cidr']
r['gip'] = subnet['gateway_ip']
r['ip_version'] = subnet['ip_version']
try:
self.driver.add_router_interface(self, r)
except Exception:
LOG.error(_LE("Error Adding interface %(subnet_id)s "
"to router %(router_id)s on Arista HW"),
{'subnet_id': subnet_id, 'router_id': r})
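The plugin's _synchronization_thread() above runs one sync immediately and then re-arms a threading.Timer so the sync repeats every l3_sync_interval seconds. A minimal standalone sketch of that re-arming pattern, with an invented Sync class standing in for the plugin:

```python
import threading

class Sync:
    def __init__(self, interval):
        self.interval = interval
        self.lock = threading.Lock()
        self.runs = 0
        self.timer = None
        self._tick()  # first sync happens synchronously, as above

    def _tick(self):
        with self.lock:
            self.runs += 1          # stands in for synchronize()
        # Re-arm: each run schedules the next one.
        self.timer = threading.Timer(self.interval, self._tick)
        self.timer.start()

    def stop(self):
        # Mirrors stop_synchronization_thread() above.
        if self.timer:
            self.timer.cancel()
            self.timer = None

s = Sync(interval=60)
print(s.runs)  # 1 -- the first sync runs before the timer is armed
s.stop()
```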

File diff suppressed because it is too large Load Diff

View File

@ -1,603 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from oslo_config import cfg
from oslo_log import log as logging
from networking_arista._i18n import _, _LI
from networking_arista.common import api
from networking_arista.common import db_lib
from networking_arista.common import exceptions as arista_exc
LOG = logging.getLogger(__name__)
EOS_UNREACHABLE_MSG = _('Unable to reach EOS')
# Note 'None,null' means default rule - i.e. deny everything
SUPPORTED_SG_PROTOCOLS = [None, 'tcp', 'udp', 'icmp']
acl_cmd = {
'acl': {'create': ['ip access-list {0}'],
'in_rule': ['permit {0} {1} any range {2} {3}'],
'out_rule': ['permit {0} any {1} range {2} {3}'],
'in_icmp_custom1': ['permit icmp {0} any {1}'],
'out_icmp_custom1': ['permit icmp any {0} {1}'],
'in_icmp_custom2': ['permit icmp {0} any {1} {2}'],
'out_icmp_custom2': ['permit icmp any {0} {1} {2}'],
'default': [],
'delete_acl': ['no ip access-list {0}'],
'del_in_icmp_custom1': ['ip access-list {0}',
'no permit icmp {1} any {2}',
'exit'],
'del_out_icmp_custom1': ['ip access-list {0}',
'no permit icmp any {1} {2}',
'exit'],
'del_in_icmp_custom2': ['ip access-list {0}',
'no permit icmp {1} any {2} {3}',
'exit'],
'del_out_icmp_custom2': ['ip access-list {0}',
'no permit icmp any {1} {2} {3}',
'exit'],
'del_in_acl_rule': ['ip access-list {0}',
'no permit {1} {2} any range {3} {4}',
'exit'],
'del_out_acl_rule': ['ip access-list {0}',
'no permit {1} any {2} range {3} {4}',
'exit']},
'apply': {'ingress': ['interface {0}',
'ip access-group {1} in',
'exit'],
'egress': ['interface {0}',
'ip access-group {1} out',
'exit'],
'rm_ingress': ['interface {0}',
'no ip access-group {1} in',
'exit'],
'rm_egress': ['interface {0}',
'no ip access-group {1} out',
'exit']}}
class AristaSecGroupSwitchDriver(object):
"""Wraps Arista JSON RPC.
All communications between Neutron and EOS are over JSON RPC.
EOS - operating system used on Arista hardware
Command API - JSON RPC API provided by Arista EOS
"""
def __init__(self, neutron_db):
self._ndb = neutron_db
self._servers = []
self._hosts = {}
self.sg_enabled = cfg.CONF.ml2_arista.get('sec_group_support')
self._validate_config()
for s in cfg.CONF.ml2_arista.switch_info:
switch_ip, switch_user, switch_pass = s.split(":")
if switch_pass == "''":
switch_pass = ''
self._hosts[switch_ip] = (
{'user': switch_user, 'password': switch_pass})
self._servers.append(self._make_eapi_client(switch_ip))
self.aclCreateDict = acl_cmd['acl']
self.aclApplyDict = acl_cmd['apply']
def _make_eapi_client(self, host):
return api.EAPIClient(
host,
username=self._hosts[host]['user'],
password=self._hosts[host]['password'],
verify=False,
timeout=cfg.CONF.ml2_arista.conn_timeout
)
def _validate_config(self):
if not self.sg_enabled:
return
if len(cfg.CONF.ml2_arista.get('switch_info')) < 1:
msg = _('Required option - when "sec_group_support" is enabled, '
'at least one switch must be specified ')
LOG.exception(msg)
raise arista_exc.AristaConfigError(msg=msg)
def _create_acl_on_eos(self, in_cmds, out_cmds, protocol, cidr,
from_port, to_port, direction):
"""Creates an ACL on Arista HW Device.
:param name: Name for the ACL
:param server: Server endpoint on the Arista switch to be configured
"""
if protocol == 'icmp':
# ICMP rules require special processing
if ((from_port and to_port) or
(not from_port and not to_port)):
rule = 'icmp_custom2'
elif from_port and not to_port:
rule = 'icmp_custom1'
else:
msg = _('Invalid ICMP rule specified')
LOG.exception(msg)
raise arista_exc.AristaSecurityGroupError(msg=msg)
rule_type = 'in'
cmds = in_cmds
if direction == 'egress':
rule_type = 'out'
cmds = out_cmds
final_rule = rule_type + '_' + rule
acl_dict = self.aclCreateDict[final_rule]
# A port of None is problematic - replace it with 0
if not from_port:
from_port = 0
if not to_port:
to_port = 0
for c in acl_dict:
if rule == 'icmp_custom2':
cmds.append(c.format(cidr, from_port, to_port))
else:
cmds.append(c.format(cidr, from_port))
return in_cmds, out_cmds
else:
# Non ICMP rules processing here
acl_dict = self.aclCreateDict['in_rule']
cmds = in_cmds
if direction == 'egress':
acl_dict = self.aclCreateDict['out_rule']
cmds = out_cmds
if not protocol:
acl_dict = self.aclCreateDict['default']
for c in acl_dict:
cmds.append(c.format(protocol, cidr,
from_port, to_port))
return in_cmds, out_cmds
def _delete_acl_from_eos(self, name, server):
"""deletes an ACL from Arista HW Device.
:param name: Name for the ACL
:param server: Server endpoint on the Arista switch to be configured
"""
cmds = []
for c in self.aclCreateDict['delete_acl']:
cmds.append(c.format(name))
self._run_openstack_sg_cmds(cmds, server)
def _delete_acl_rule_from_eos(self, name,
protocol, cidr,
from_port, to_port,
direction, server):
"""deletes an ACL from Arista HW Device.
:param name: Name for the ACL
:param server: Server endpoint on the Arista switch to be configured
"""
cmds = []
if protocol == 'icmp':
# ICMP rules require special processing
if ((from_port and to_port) or
(not from_port and not to_port)):
rule = 'icmp_custom2'
elif from_port and not to_port:
rule = 'icmp_custom1'
else:
msg = _('Invalid ICMP rule specified')
LOG.exception(msg)
raise arista_exc.AristaSecurityGroupError(msg=msg)
rule_type = 'del_in'
if direction == 'egress':
rule_type = 'del_out'
final_rule = rule_type + '_' + rule
acl_dict = self.aclCreateDict[final_rule]
# A port of None is problematic - replace it with 0
if not from_port:
from_port = 0
if not to_port:
to_port = 0
for c in acl_dict:
if rule == 'icmp_custom2':
cmds.append(c.format(name, cidr, from_port, to_port))
else:
cmds.append(c.format(name, cidr, from_port))
else:
acl_dict = self.aclCreateDict['del_in_acl_rule']
if direction == 'egress':
acl_dict = self.aclCreateDict['del_out_acl_rule']
for c in acl_dict:
cmds.append(c.format(name, protocol, cidr,
from_port, to_port))
self._run_openstack_sg_cmds(cmds, server)
def _apply_acl_on_eos(self, port_id, name, direction, server):
"""Creates an ACL on Arista HW Device.
:param port_id: The port where the ACL needs to be applied
:param name: Name for the ACL
:param direction: must contain "ingress" or "egress"
:param server: Server endpoint on the Arista switch to be configured
"""
cmds = []
for c in self.aclApplyDict[direction]:
cmds.append(c.format(port_id, name))
self._run_openstack_sg_cmds(cmds, server)
def _remove_acl_from_eos(self, port_id, name, direction, server):
"""Remove an ACL from a port on Arista HW Device.
:param port_id: The port where the ACL needs to be applied
:param name: Name for the ACL
:param direction: must contain "ingress" or "egress"
:param server: Server endpoint on the Arista switch to be configured
"""
cmds = []
acl_cmd = self.aclApplyDict['rm_ingress']
if direction == 'egress':
acl_cmd = self.aclApplyDict['rm_egress']
for c in acl_cmd:
cmds.append(c.format(port_id, name))
self._run_openstack_sg_cmds(cmds, server)
def _create_acl_rule(self, in_cmds, out_cmds, sgr):
"""Creates an ACL on Arista Switch.
For a given Security Group (ACL), it adds additional rule
Deals with multiple configurations - such as multiple switches
"""
# Only deal with valid protocols - skip the rest
if not sgr or sgr['protocol'] not in SUPPORTED_SG_PROTOCOLS:
return in_cmds, out_cmds
remote_ip = sgr['remote_ip_prefix']
if not remote_ip:
remote_ip = 'any'
min_port = sgr['port_range_min']
if not min_port:
min_port = 0
max_port = sgr['port_range_max']
if not max_port and sgr['protocol'] != 'icmp':
max_port = 65535
in_cmds, out_cmds = self._create_acl_on_eos(in_cmds, out_cmds,
sgr['protocol'],
remote_ip,
min_port,
max_port,
sgr['direction'])
return in_cmds, out_cmds
def create_acl_rule(self, sgr):
"""Creates an ACL on Arista Switch.
For a given Security Group (ACL), it adds an additional rule.
Deals with multiple configurations - such as multiple switches
"""
# Do nothing if Security Groups are not enabled
if not self.sg_enabled:
return
name = self._arista_acl_name(sgr['security_group_id'],
sgr['direction'])
cmds = []
for c in self.aclCreateDict['create']:
cmds.append(c.format(name))
in_cmds, out_cmds = self._create_acl_rule(cmds, cmds, sgr)
cmds = in_cmds
if sgr['direction'] == 'egress':
cmds = out_cmds
cmds.append('exit')
for s in self._servers:
try:
self._run_openstack_sg_cmds(cmds, s)
except Exception:
msg = (_('Failed to create ACL rule on EOS %s') % s)
LOG.exception(msg)
raise arista_exc.AristaSecurityGroupError(msg=msg)
def delete_acl_rule(self, sgr):
"""Deletes an ACL rule on Arista Switch.
For a given Security Group (ACL), it removes a rule.
Deals with multiple configurations - such as multiple switches
"""
# Do nothing if Security Groups are not enabled
if not self.sg_enabled:
return
# Only deal with valid protocols - skip the rest
if not sgr or sgr['protocol'] not in SUPPORTED_SG_PROTOCOLS:
return
# Build separate ACLs for ingress and egress
name = self._arista_acl_name(sgr['security_group_id'],
sgr['direction'])
remote_ip = sgr['remote_ip_prefix']
if not remote_ip:
remote_ip = 'any'
min_port = sgr['port_range_min']
if not min_port:
min_port = 0
max_port = sgr['port_range_max']
if not max_port and sgr['protocol'] != 'icmp':
max_port = 65535
for s in self._servers:
try:
self._delete_acl_rule_from_eos(name,
sgr['protocol'],
remote_ip,
min_port,
max_port,
sgr['direction'],
s)
except Exception:
msg = (_('Failed to delete ACL on EOS %s') % s)
LOG.exception(msg)
raise arista_exc.AristaSecurityGroupError(msg=msg)
def _create_acl_shell(self, sg_id):
"""Creates an ACL on Arista Switch.
For a given Security Group (ACL), it adds additional rule
Deals with multiple configurations - such as multiple switches
"""
# Build separate ACLs for ingress and egress
direction = ['ingress', 'egress']
cmds = []
for d in range(len(direction)):
name = self._arista_acl_name(sg_id, direction[d])
cmds.append([])
for c in self.aclCreateDict['create']:
cmds[d].append(c.format(name))
return cmds[0], cmds[1]
def create_acl(self, sg):
"""Creates an ACL on Arista Switch.
Deals with multiple configurations - such as multiple switches
"""
# Do nothing if Security Groups are not enabled
if not self.sg_enabled:
return
if not sg:
msg = _('Invalid or Empty Security Group Specified')
raise arista_exc.AristaSecurityGroupError(msg=msg)
in_cmds, out_cmds = self._create_acl_shell(sg['id'])
for sgr in sg['security_group_rules']:
in_cmds, out_cmds = self._create_acl_rule(in_cmds, out_cmds, sgr)
in_cmds.append('exit')
out_cmds.append('exit')
for s in self._servers:
try:
self._run_openstack_sg_cmds(in_cmds, s)
self._run_openstack_sg_cmds(out_cmds, s)
except Exception:
msg = (_('Failed to create ACL on EOS %s') % s)
LOG.exception(msg)
raise arista_exc.AristaSecurityGroupError(msg=msg)
def delete_acl(self, sg):
"""Deletes an ACL from Arista Switch.
Deals with multiple configurations - such as multiple switches
"""
# Do nothing if Security Groups are not enabled
if not self.sg_enabled:
return
if not sg:
msg = _('Invalid or Empty Security Group Specified')
raise arista_exc.AristaSecurityGroupError(msg=msg)
direction = ['ingress', 'egress']
for d in range(len(direction)):
name = self._arista_acl_name(sg['id'], direction[d])
for s in self._servers:
try:
self._delete_acl_from_eos(name, s)
except Exception:
msg = (_('Failed to delete ACL on EOS %s') % s)
LOG.exception(msg)
raise arista_exc.AristaSecurityGroupError(msg=msg)
def apply_acl(self, sgs, switch_id, port_id, switch_info):
"""Creates an ACL on Arista Switch.
Applies ACLs to the baremetal ports only. The port/switch
details is passed through the parameters.
Deals with multiple configurations - such as multiple switches
param sgs: List of Security Groups
param switch_id: Switch ID of TOR where ACL needs to be applied
param port_id: Port ID of port where ACL needs to be applied
param switch_info: IP address of the TOR
"""
# Do nothing if Security Groups are not enabled
if not self.sg_enabled:
return
# We do not support more than one security group on a port
if not sgs or len(sgs) > 1:
msg = (_('Only one Security Group Supported on a port %s') % sgs)
raise arista_exc.AristaSecurityGroupError(msg=msg)
sg = self._ndb.get_security_group(sgs[0])
# We already have ACLs on the TORs.
# Here we need to find out which ACL is applicable - i.e.
# Ingress ACL, egress ACL or both
direction = ['ingress', 'egress']
server = self._make_eapi_client(switch_info)
for d in range(len(direction)):
name = self._arista_acl_name(sg['id'], direction[d])
try:
self._apply_acl_on_eos(port_id, name, direction[d], server)
except Exception:
msg = (_('Failed to apply ACL on port %s') % port_id)
LOG.exception(msg)
raise arista_exc.AristaSecurityGroupError(msg=msg)
def remove_acl(self, sgs, switch_id, port_id, switch_info):
"""Removes an ACL from Arista Switch.
Removes ACLs from the baremetal ports only. The port/switch
details are passed through the parameters.
param sgs: List of Security Groups
param switch_id: Switch ID of TOR where ACL needs to be removed
param port_id: Port ID of port where ACL needs to be removed
param switch_info: IP address of the TOR
"""
# Do nothing if Security Groups are not enabled
if not self.sg_enabled:
return
# We do not support more than one security group on a port
if not sgs or len(sgs) > 1:
msg = (_('Only one Security Group Supported on a port %s') % sgs)
raise arista_exc.AristaSecurityGroupError(msg=msg)
sg = self._ndb.get_security_group(sgs[0])
# We already have ACLs on the TORs.
# Here we need to find out which ACL is applicable - i.e.
# Ingress ACL, egress ACL or both
direction = []
for sgr in sg['security_group_rules']:
# Only deal with valid protocols - skip the rest
if not sgr or sgr['protocol'] not in SUPPORTED_SG_PROTOCOLS:
continue
if sgr['direction'] not in direction:
direction.append(sgr['direction'])
# FIXME: this is a temporary hack for testing only.
# Assumes the credentials of all switches are the same as
# specified in the config file.
server = self._make_eapi_client(switch_info)
for d in range(len(direction)):
name = self._arista_acl_name(sg['id'], direction[d])
try:
self._remove_acl_from_eos(port_id, name, direction[d], server)
except Exception:
msg = (_('Failed to remove ACL on port %s') % port_id)
LOG.exception(msg)
# No need to raise exception for ACL removal
# raise arista_exc.AristaSecurityGroupError(msg=msg)
def _run_openstack_sg_cmds(self, commands, server):
"""Execute/sends a CAPI (Command API) command to EOS.
In this method, the list of commands is wrapped with prefix and
postfix commands - to make it understandable by EOS.
:param commands: List of commands to be executed on EOS.
:param server: Server endpoint on the Arista switch to be configured
"""
command_start = ['enable', 'configure']
command_end = ['exit']
full_command = command_start + commands + command_end
LOG.info(_LI('Executing command on Arista EOS: %s'), full_command)
try:
# this returns array of return values for every command in
# full_command list
ret = server.execute(full_command)
LOG.info(_LI('Results of execution on Arista EOS: %s'), ret)
except Exception:
msg = (_('Error occurred while trying to execute '
'commands %(cmd)s on EOS %(host)s') %
{'cmd': full_command, 'host': server})
LOG.exception(msg)
raise arista_exc.AristaServicePluginRpcError(msg=msg)
def _arista_acl_name(self, name, direction):
"""Generate an arista specific name for this ACL.
Use a unique name so that OpenStack created ACLs
can be distinguishged from the user created ACLs
on Arista HW.
"""
in_out = 'IN'
if direction == 'egress':
in_out = 'OUT'
return 'SG' + '-' + in_out + '-' + name
def perform_sync_of_sg(self):
"""Perform sync of the security groups between ML2 and EOS.
This is an unconditional sync to ensure that all security
ACLs are pushed to all the switches, in case of a switch
or neutron reboot.
"""
# Do nothing if Security Groups are not enabled
if not self.sg_enabled:
return
arista_ports = db_lib.get_ports()
neutron_sgs = self._ndb.get_security_groups()
sg_bindings = self._ndb.get_all_security_gp_to_port_bindings()
sgs = []
sgs_dict = {}
arista_port_ids = arista_ports.keys()
# Get the list of Security Groups of interest to us
for s in sg_bindings:
if s['port_id'] in arista_port_ids:
if not s['security_group_id'] in sgs:
sgs_dict[s['port_id']] = (
{'security_group_id': s['security_group_id']})
sgs.append(s['security_group_id'])
# Create the ACLs on Arista Switches
for idx in range(len(sgs)):
self.create_acl(neutron_sgs[sgs[idx]])
# Get Baremetal port profiles, if any
bm_port_profiles = db_lib.get_all_baremetal_ports()
if bm_port_profiles:
for bm in bm_port_profiles.values():
if bm['port_id'] in sgs_dict:
sg = sgs_dict[bm['port_id']]['security_group_id']
profile = json.loads(bm['profile'])
link_info = profile['local_link_information']
for l in link_info:
if not l:
# skip all empty entries
continue
self.apply_acl([sg], l['switch_id'],
l['port_id'], l['switch_info'])
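To make the command templates above concrete, here is a small standalone sketch of how the ACL naming and the `acl_cmd` format strings expand into EOS CLI lines. `arista_acl_name` is a hypothetical standalone rendition of the driver's `_arista_acl_name`; the template is the `in_rule` entry from `acl_cmd`.

```python
# SG-IN-<id> / SG-OUT-<id> marks OpenStack-created ACLs on the switch,
# mirroring _arista_acl_name above.
def arista_acl_name(sg_id, direction):
    in_out = 'OUT' if direction == 'egress' else 'IN'
    return 'SG' + '-' + in_out + '-' + sg_id

# One 'in_rule' template from acl_cmd, expanded the way
# _create_acl_on_eos does for a non-ICMP ingress rule:
# protocol, source cidr, then the destination port range.
in_rule = 'permit {0} {1} any range {2} {3}'
cmd = in_rule.format('tcp', '10.0.0.0/24', 22, 22)
```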


@@ -1,148 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron_lib.db import api as db_api
from oslo_log import log
from six import moves
from neutron.db.models.plugins.ml2 import vlanallocation
from networking_arista._i18n import _LI
from networking_arista.common import exceptions as arista_exc
from networking_arista.ml2.arista_ml2 import EOS_UNREACHABLE_MSG
LOG = log.getLogger(__name__)
class VlanSyncService(object):
"""Sync vlan assignment from CVX into the OpenStack db."""
def __init__(self, rpc_wrapper):
self._rpc = rpc_wrapper
self._force_sync = True
self._vlan_assignment_uuid = None
self._assigned_vlans = dict()
def force_sync(self):
self._force_sync = True
def _parse_vlan_ranges(self, vlan_pool, return_as_ranges=False):
vlan_ids = set()
if return_as_ranges:
vlan_ids = list()
if not vlan_pool:
return vlan_ids
vlan_ranges = vlan_pool.split(',')
for vlan_range in vlan_ranges:
endpoints = vlan_range.split('-')
if len(endpoints) == 2:
vlan_min = int(endpoints[0])
vlan_max = int(endpoints[1])
if return_as_ranges:
vlan_ids.append((vlan_min, vlan_max))
else:
vlan_ids |= set(moves.range(vlan_min, vlan_max + 1))
elif len(endpoints) == 1:
single_vlan = int(endpoints[0])
if return_as_ranges:
vlan_ids.append((single_vlan, single_vlan))
else:
vlan_ids.add(single_vlan)
return vlan_ids
def get_network_vlan_ranges(self):
return self._assigned_vlans
def _sync_required(self):
try:
if not self._force_sync and self._region_in_sync():
LOG.info(_LI('VLANs are in sync!'))
return False
except arista_exc.AristaRpcError:
LOG.warning(EOS_UNREACHABLE_MSG)
self._force_sync = True
return True
def _region_in_sync(self):
eos_vlan_assignment_uuid = self._rpc.get_vlan_assignment_uuid()
return (self._vlan_assignment_uuid and
(self._vlan_assignment_uuid['uuid'] ==
eos_vlan_assignment_uuid['uuid']))
def _set_vlan_assignment_uuid(self):
try:
self._vlan_assignment_uuid = self._rpc.get_vlan_assignment_uuid()
except arista_exc.AristaRpcError:
self._force_sync = True
def do_synchronize(self):
if not self._sync_required():
return
self.synchronize()
self._set_vlan_assignment_uuid()
def synchronize(self):
LOG.info(_LI('Syncing VLANs with EOS'))
try:
self._rpc.register_with_eos()
vlan_pool = self._rpc.get_vlan_allocation()
except arista_exc.AristaRpcError:
LOG.warning(EOS_UNREACHABLE_MSG)
self._force_sync = True
return
self._assigned_vlans = {
'default': self._parse_vlan_ranges(vlan_pool['assignedVlans'],
return_as_ranges=True),
}
assigned_vlans = (
self._parse_vlan_ranges(vlan_pool['assignedVlans']))
available_vlans = frozenset(
self._parse_vlan_ranges(vlan_pool['availableVlans']))
used_vlans = frozenset(
self._parse_vlan_ranges(vlan_pool['allocatedVlans']))
self._force_sync = False
session = db_api.get_writer_session()
with session.begin(subtransactions=True):
allocs = (
session.query(vlanallocation.VlanAllocation).with_lockmode(
'update'))
for alloc in allocs:
if alloc.physical_network != 'default':
session.delete(alloc)
try:
assigned_vlans.remove(alloc.vlan_id)
except KeyError:
session.delete(alloc)
continue
if alloc.allocated and alloc.vlan_id in available_vlans:
alloc.update({"allocated": False})
elif not alloc.allocated and alloc.vlan_id in used_vlans:
alloc.update({"allocated": True})
for vlan_id in sorted(assigned_vlans):
allocated = vlan_id in used_vlans
alloc = vlanallocation.VlanAllocation(
physical_network='default',
vlan_id=vlan_id,
allocated=allocated)
session.add(alloc)
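The VLAN pool strings handled by `_parse_vlan_ranges` above look like `'100-102,200'`. A standalone sketch of the set-form parsing (hypothetical helper name, same logic as the method's non-range branch):

```python
def parse_vlan_ranges(vlan_pool):
    """Parse a vlan pool string like '100-102,200' into a set of ids."""
    vlan_ids = set()
    if not vlan_pool:
        return vlan_ids
    for vlan_range in vlan_pool.split(','):
        endpoints = vlan_range.split('-')
        if len(endpoints) == 2:
            # inclusive range, e.g. '100-102' -> {100, 101, 102}
            vlan_ids |= set(range(int(endpoints[0]), int(endpoints[1]) + 1))
        elif len(endpoints) == 1:
            vlan_ids.add(int(endpoints[0]))
    return vlan_ids
```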


@@ -1,71 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading
from oslo_config import cfg
from oslo_log import log
from neutron.plugins.ml2.drivers import type_vlan
from networking_arista._i18n import _LI
from networking_arista.common import db_lib
from networking_arista.common import exceptions as exc
from networking_arista.ml2 import arista_ml2
from networking_arista.ml2.drivers import driver_helpers
LOG = log.getLogger(__name__)
cfg.CONF.import_group('arista_type_driver', 'networking_arista.common.config')
class AristaVlanTypeDriver(type_vlan.VlanTypeDriver):
"""Manage state for VLAN networks with ML2.
The VlanTypeDriver implements the 'vlan' network_type. VLAN
network segments provide connectivity between VMs and other
devices using any connected IEEE 802.1Q conformant
physical_network segmented into virtual networks via IEEE 802.1Q
headers. Up to 4094 VLAN network segments can exist on each
available physical_network.
"""
def __init__(self):
super(AristaVlanTypeDriver, self).__init__()
ndb = db_lib.NeutronNets()
self.rpc = arista_ml2.AristaRPCWrapperEapi(ndb)
self.sync_service = driver_helpers.VlanSyncService(self.rpc)
self.network_vlan_ranges = dict()
self.sync_timeout = cfg.CONF.arista_type_driver['sync_interval']
def initialize(self):
self.rpc.check_supported_features()
self.rpc.check_vlan_type_driver_commands()
self._synchronization_thread()
LOG.info(_LI("AristaVlanTypeDriver initialization complete"))
def _synchronization_thread(self):
self.sync_service.do_synchronize()
self.network_vlan_ranges = self.sync_service.get_network_vlan_ranges()
self.timer = threading.Timer(self.sync_timeout,
self._synchronization_thread)
self.timer.start()
def allocate_fully_specified_segment(self, session, **raw_segment):
alloc = session.query(self.model).filter_by(**raw_segment).first()
if not alloc:
raise exc.VlanUnavailable(**raw_segment)
return super(AristaVlanTypeDriver,
self).allocate_fully_specified_segment(
session, **raw_segment)

File diff suppressed because it is too large

@@ -1,117 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from neutron_lib.callbacks import events
from neutron_lib.callbacks import registry
from neutron_lib.callbacks import resources
from oslo_log import helpers as log_helpers
from oslo_log import log as logging
from oslo_utils import excutils
from networking_arista._i18n import _LE
LOG = logging.getLogger(__name__)
class AristaSecurityGroupHandler(object):
"""Security Group Handler for Arista networking hardware.
Registers for the notification of security group updates.
Once a notification is received, it takes appropriate actions by updating
Arista hardware appropriately.
"""
def __init__(self, client):
self.client = client
self.subscribe()
@log_helpers.log_method_call
def create_security_group(self, resource, event, trigger, **kwargs):
sg = kwargs.get('security_group')
try:
self.client.create_security_group(sg)
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to create a security group %(sg_id)s "
"in Arista Driver: %(err)s"),
{"sg_id": sg["id"], "err": e})
try:
self.client.delete_security_group(sg)
except Exception:
LOG.exception(_LE("Failed to delete security group %s"),
sg['id'])
@log_helpers.log_method_call
def delete_security_group(self, resource, event, trigger, **kwargs):
sg = kwargs.get('security_group')
try:
self.client.delete_security_group(sg)
except Exception as e:
LOG.error(_LE("Failed to delete security group %(sg_id)s "
"in Arista Driver: %(err)s"),
{"sg_id": sg["id"], "err": e})
@log_helpers.log_method_call
def update_security_group(self, resource, event, trigger, **kwargs):
sg = kwargs.get('security_group')
try:
self.client.update_security_group(sg)
except Exception as e:
LOG.error(_LE("Failed to update security group %(sg_id)s "
"in Arista Driver: %(err)s"),
{"sg_id": sg["id"], "err": e})
@log_helpers.log_method_call
def create_security_group_rule(self, resource, event, trigger, **kwargs):
sgr = kwargs.get('security_group_rule')
try:
self.client.create_security_group_rule(sgr)
except Exception as e:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Failed to create a security group %(sgr_id)s "
"rule in Arista Driver: %(err)s"),
{"sgr_id": sgr["id"], "err": e})
try:
self.client.delete_security_group_rule(sgr)
except Exception:
LOG.exception(_LE("Failed to delete security group "
"rule %s"), sgr['id'])
@log_helpers.log_method_call
def delete_security_group_rule(self, resource, event, trigger, **kwargs):
sgr_id = kwargs.get('security_group_rule_id')
try:
self.client.delete_security_group_rule(sgr_id)
except Exception as e:
LOG.error(_LE("Failed to delete security group %(sgr_id)s "
"rule in Arista Driver: %(err)s"),
{"sgr_id": sgr_id, "err": e})
def subscribe(self):
# Subscribe to the events related to security groups and rules
registry.subscribe(
self.create_security_group, resources.SECURITY_GROUP,
events.AFTER_CREATE)
registry.subscribe(
self.update_security_group, resources.SECURITY_GROUP,
events.AFTER_UPDATE)
registry.subscribe(
self.delete_security_group, resources.SECURITY_GROUP,
events.BEFORE_DELETE)
registry.subscribe(
self.create_security_group_rule, resources.SECURITY_GROUP_RULE,
events.AFTER_CREATE)
registry.subscribe(
self.delete_security_group_rule, resources.SECURITY_GROUP_RULE,
events.BEFORE_DELETE)
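The create handlers above follow a best-effort rollback pattern: if the driver call fails, the half-created object is deleted before the exception propagates. A generic sketch of that pattern (hypothetical names, not the handler's actual API):

```python
def create_with_rollback(create, delete, obj):
    """Call create(obj); on failure, best-effort delete(obj), then re-raise.

    Mirrors create_security_group/create_security_group_rule above, which
    wrap the rollback delete in its own try/except.
    """
    try:
        create(obj)
    except Exception:
        try:
            delete(obj)
        except Exception:
            pass  # rollback is best-effort; the original error wins
        raise
```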


@@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base
class TestCase(base.BaseTestCase):
"""Test case base class for all unit tests."""


@@ -1,28 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_networking_arista
----------------------------------
Tests for `networking_arista` module.
"""
from networking_arista.tests import base
class TestNetworking_arista(base.TestCase):
def test_something(self):
pass


@@ -1,19 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
cfg.CONF.use_stderr = False


@@ -1,257 +0,0 @@
# Copyright (c) 2017 Arista Networks, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
import requests
from requests import exceptions as requests_exc
import testtools
from networking_arista.common import api
class TestEAPIClientInit(testtools.TestCase):
def test_basic_init(self):
host_ip = '10.20.30.40'
client = api.EAPIClient(host_ip)
self.assertEqual(client.host, host_ip)
self.assertEqual(client.url, 'https://10.20.30.40/command-api')
self.assertDictContainsSubset(
{'Content-Type': 'application/json', 'Accept': 'application/json'},
client.session.headers
)
def test_init_enable_verify(self):
client = api.EAPIClient('10.0.0.1', verify=True)
self.assertTrue(client.session.verify)
def test_init_auth(self):
client = api.EAPIClient('10.0.0.1', username='user', password='pass')
self.assertEqual(client.session.auth, ('user', 'pass'))
def test_init_timeout(self):
client = api.EAPIClient('10.0.0.1', timeout=99)
self.assertEqual(client.timeout, 99)
def test_make_url(self):
url = api.EAPIClient._make_url('1.2.3.4')
self.assertEqual(url, 'https://1.2.3.4/command-api')
def test_make_url_http(self):
url = api.EAPIClient._make_url('5.6.7.8', 'http')
self.assertEqual(url, 'http://5.6.7.8/command-api')
class TestEAPIClientExecute(testtools.TestCase):
def setUp(self):
super(TestEAPIClientExecute, self).setUp()
mock.patch('requests.Session.post').start()
self.mock_log = mock.patch.object(api, 'LOG').start()
self.mock_json_dumps = mock.patch.object(api.json, 'dumps').start()
self.addCleanup(mock.patch.stopall)
self.client = api.EAPIClient('10.0.0.1', timeout=99)
def _test_execute_helper(self, commands, commands_to_log=None):
expected_data = {
'id': 'Networking Arista Driver',
'method': 'runCmds',
'jsonrpc': '2.0',
'params': {
'timestamps': False,
'format': 'json',
'version': 1,
'cmds': commands
}
}
self.client.session.post.assert_called_once_with(
'https://10.0.0.1/command-api',
data=self.mock_json_dumps.return_value,
timeout=99
)
self.mock_log.info.assert_has_calls(
[
mock.call(
mock.ANY,
{
'ip': '10.0.0.1',
'data': self.mock_json_dumps.return_value
}
)
]
)
log_data = dict(expected_data)
log_data['params'] = dict(expected_data['params'])
log_data['params']['cmds'] = commands_to_log or commands
self.mock_json_dumps.assert_has_calls(
[
mock.call(log_data),
mock.call(expected_data)
]
)
def test_command_prep(self):
commands = ['enable']
self.client.execute(commands)
self._test_execute_helper(commands)
def test_commands_to_log(self):
commands = ['config', 'secret']
commands_to_log = ['config', '******']
self.client.execute(commands, commands_to_log)
self._test_execute_helper(commands, commands_to_log)
def _test_execute_error_helper(self, raise_exception, expected_exception,
warning_has_params=False):
commands = ['config']
self.client.session.post.side_effect = raise_exception
self.assertRaises(
expected_exception,
self.client.execute,
commands
)
self._test_execute_helper(commands)
if warning_has_params:
args = (mock.ANY, mock.ANY)
else:
args = (mock.ANY,)
self.mock_log.warning.assert_called_once_with(*args)
def test_request_connection_error(self):
self._test_execute_error_helper(
requests_exc.ConnectionError,
api.arista_exc.AristaRpcError
)
def test_request_connect_timeout(self):
self._test_execute_error_helper(
requests_exc.ConnectTimeout,
api.arista_exc.AristaRpcError
)
def test_request_timeout(self):
self._test_execute_error_helper(
requests_exc.Timeout,
api.arista_exc.AristaRpcError
)
def test_request_connect_InvalidURL(self):
self._test_execute_error_helper(
requests_exc.InvalidURL,
api.arista_exc.AristaRpcError
)
def test_request_other_exception(self):
class OtherException(Exception):
pass
self._test_execute_error_helper(
OtherException,
OtherException,
warning_has_params=True
)
def _test_response_helper(self, response_data):
mock_response = mock.MagicMock(requests.Response)
mock_response.json.return_value = response_data
self.client.session.post.return_value = mock_response
def test_response_success(self):
mock_response = mock.MagicMock(requests.Response)
mock_response.json.return_value = {'result': mock.sentinel}
self.client.session.post.return_value = mock_response
retval = self.client.execute(['enable'])
self.assertEqual(retval, mock.sentinel)
def test_response_json_error(self):
mock_response = mock.MagicMock(requests.Response)
mock_response.json.side_effect = ValueError
self.client.session.post.return_value = mock_response
retval = self.client.execute(['enable'])
self.assertIsNone(retval)
self.mock_log.info.assert_has_calls([mock.call(mock.ANY)])
def _test_response_format_error_helper(self, bad_response):
mock_response = mock.MagicMock(requests.Response)
mock_response.json.return_value = bad_response
self.client.session.post.return_value = mock_response
self.assertRaises(
api.arista_exc.AristaRpcError,
self.client.execute,
['enable']
)
self.mock_log.info.assert_has_calls([mock.call(mock.ANY)])
def test_response_format_error(self):
self._test_response_format_error_helper({})
def test_response_unknown_error_code(self):
self._test_response_format_error_helper(
{'error': {'code': 999}}
)
def test_response_known_error_code(self):
self._test_response_format_error_helper(
{'error': {'code': 1002, 'data': []}}
)
def test_response_known_error_code_data_is_not_dict(self):
self._test_response_format_error_helper(
{'error': {'code': 1002, 'data': ['some text']}}
)
def test_response_not_cvx_leader(self):
mock_response = mock.MagicMock(requests.Response)
mock_response.json.return_value = {
'error': {
'code': 1002,
'data': [{'errors': [api.ERR_CVX_NOT_LEADER]}]
}
}
self.client.session.post.return_value = mock_response
retval = self.client.execute(['enable'])
self.assertIsNone(retval)
def test_response_other_exception(self):
class OtherException(Exception):
pass
mock_response = mock.MagicMock(requests.Response)
mock_response.json.return_value = 'text'
self.client.session.post.return_value = mock_response
self.assertRaises(
TypeError,
self.client.execute,
['enable']
)
self.mock_log.warning.assert_has_calls(
[
mock.call(mock.ANY, {'error': mock.ANY})
]
)
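The assertions above pin down the eAPI wire format the client must produce. A minimal sketch of that request construction — hypothetical helper names, not the driver's actual implementation — might look like:

```python
# Illustrative sketch only: make_url and build_payload are hypothetical
# helpers mirroring what the tests assert, not networking_arista's API.
import json


def make_url(host, scheme='https'):
    # Tests expect '<scheme>://<host>/command-api'.
    return '%s://%s/command-api' % (scheme, host)


def build_payload(commands):
    # JSON-RPC 2.0 'runCmds' request body, matching the fields checked
    # in _test_execute_helper above.
    return {
        'id': 'Networking Arista Driver',
        'method': 'runCmds',
        'jsonrpc': '2.0',
        'params': {
            'timestamps': False,
            'format': 'json',
            'version': 1,
            'cmds': commands,
        },
    }


body = json.dumps(build_payload(['enable']))
```

Note that the tests serialize the payload twice (once with secrets masked for logging, once for the wire), which is why `mock_json_dumps.assert_has_calls` expects two calls.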

@@ -1,19 +0,0 @@

# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
cfg.CONF.use_stderr = False

@@ -1,456 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from oslo_config import cfg
from neutron.tests import base
from networking_arista.l3Plugin import arista_l3_driver as arista
def setup_arista_config(value='', vrf=False, mlag=False):
cfg.CONF.set_override('primary_l3_host', value, "l3_arista")
cfg.CONF.set_override('primary_l3_host_username', value, "l3_arista")
if vrf:
cfg.CONF.set_override('use_vrf', vrf, "l3_arista")
if mlag:
cfg.CONF.set_override('secondary_l3_host', value, "l3_arista")
cfg.CONF.set_override('mlag_config', mlag, "l3_arista")
class AristaL3DriverTestCasesDefaultVrf(base.BaseTestCase):
"""Test cases to test the RPC between Arista Driver and EOS.
Tests all methods used to send commands between Arista L3 Driver and EOS
to program routing functions in Default VRF.
"""
def setUp(self):
super(AristaL3DriverTestCasesDefaultVrf, self).setUp()
setup_arista_config('value')
self.drv = arista.AristaL3Driver()
self.drv._servers = []
self.drv._servers.append(mock.MagicMock())
def test_no_exception_on_correct_configuration(self):
self.assertIsNotNone(self.drv)
def test_create_router_on_eos(self):
router_name = 'test-router-1'
route_domain = '123:123'
self.drv.create_router_on_eos(router_name, route_domain,
self.drv._servers[0])
cmds = ['enable', 'configure', 'exit']
self.drv._servers[0].execute.assert_called_once_with(cmds)
def test_delete_router_from_eos(self):
router_name = 'test-router-1'
self.drv.delete_router_from_eos(router_name, self.drv._servers[0])
cmds = ['enable', 'configure', 'exit']
self.drv._servers[0].execute.assert_called_once_with(cmds)
def test_add_interface_to_router_on_eos(self):
router_name = 'test-router-1'
segment_id = '123'
router_ip = '10.10.10.10'
gw_ip = '10.10.10.1'
mask = '255.255.255.0'
self.drv.add_interface_to_router(segment_id, router_name, gw_ip,
router_ip, mask, self.drv._servers[0])
cmds = ['enable', 'configure', 'ip routing',
'vlan %s' % segment_id, 'exit',
'interface vlan %s' % segment_id,
'ip address %s/%s' % (gw_ip, mask), 'exit']
self.drv._servers[0].execute.assert_called_once_with(cmds)
def test_delete_interface_from_router_on_eos(self):
router_name = 'test-router-1'
segment_id = '123'
self.drv.delete_interface_from_router(segment_id, router_name,
self.drv._servers[0])
cmds = ['enable', 'configure', 'no interface vlan %s' % segment_id,
'exit']
self.drv._servers[0].execute.assert_called_once_with(cmds)
class AristaL3DriverTestCasesUsingVRFs(base.BaseTestCase):
"""Test cases to test the RPC between Arista Driver and EOS.
Tests all methods used to send commands between Arista L3 Driver and EOS
to program routing functions using multiple VRFs.
Note that the configuration commands are different when VRFs are used.
"""
def setUp(self):
super(AristaL3DriverTestCasesUsingVRFs, self).setUp()
setup_arista_config('value', vrf=True)
self.drv = arista.AristaL3Driver()
self.drv._servers = []
self.drv._servers.append(mock.MagicMock())
def test_no_exception_on_correct_configuration(self):
self.assertIsNotNone(self.drv)
def test_create_router_on_eos(self):
max_vrfs = 5
routers = ['testRouter-%s' % n for n in range(max_vrfs)]
domains = ['10%s' % n for n in range(max_vrfs)]
for (r, d) in zip(routers, domains):
self.drv.create_router_on_eos(r, d, self.drv._servers[0])
cmds = ['enable', 'configure',
'vrf definition %s' % r,
'rd %(rd)s:%(rd)s' % {'rd': d}, 'exit', 'exit']
self.drv._servers[0].execute.assert_called_with(cmds)
def test_delete_router_from_eos(self):
max_vrfs = 5
routers = ['testRouter-%s' % n for n in range(max_vrfs)]
for r in routers:
self.drv.delete_router_from_eos(r, self.drv._servers[0])
cmds = ['enable', 'configure', 'no vrf definition %s' % r,
'exit']
self.drv._servers[0].execute.assert_called_with(cmds)
def test_add_interface_to_router_on_eos(self):
router_name = 'test-router-1'
segment_id = '123'
router_ip = '10.10.10.10'
gw_ip = '10.10.10.1'
mask = '255.255.255.0'
self.drv.add_interface_to_router(segment_id, router_name, gw_ip,
router_ip, mask, self.drv._servers[0])
cmds = ['enable', 'configure',
'ip routing vrf %s' % router_name,
'vlan %s' % segment_id, 'exit',
'interface vlan %s' % segment_id,
'vrf forwarding %s' % router_name,
'ip address %s/%s' % (gw_ip, mask), 'exit']
self.drv._servers[0].execute.assert_called_once_with(cmds)
def test_delete_interface_from_router_on_eos(self):
router_name = 'test-router-1'
segment_id = '123'
self.drv.delete_interface_from_router(segment_id, router_name,
self.drv._servers[0])
cmds = ['enable', 'configure', 'no interface vlan %s' % segment_id,
'exit']
self.drv._servers[0].execute.assert_called_once_with(cmds)
class AristaL3DriverTestCasesMlagConfig(base.BaseTestCase):
"""Test cases to test the RPC between Arista Driver and EOS.
Tests all methods used to send commands between Arista L3 Driver and EOS
to program routing functions in Default VRF using MLAG configuration.
MLAG configuration means that the commands will be sent to both
primary and secondary Arista Switches.
"""
def setUp(self):
super(AristaL3DriverTestCasesMlagConfig, self).setUp()
setup_arista_config('value', mlag=True)
self.drv = arista.AristaL3Driver()
self.drv._servers = []
self.drv._servers.append(mock.MagicMock())
self.drv._servers.append(mock.MagicMock())
def test_no_exception_on_correct_configuration(self):
self.assertIsNotNone(self.drv)
def test_create_router_on_eos(self):
router_name = 'test-router-1'
route_domain = '123:123'
router_mac = '00:11:22:33:44:55'
for s in self.drv._servers:
self.drv.create_router_on_eos(router_name, route_domain, s)
cmds = ['enable', 'configure',
'ip virtual-router mac-address %s' % router_mac, 'exit']
s.execute.assert_called_with(cmds)
def test_delete_router_from_eos(self):
router_name = 'test-router-1'
for s in self.drv._servers:
self.drv.delete_router_from_eos(router_name, s)
cmds = ['enable', 'configure', 'exit']
s.execute.assert_called_once_with(cmds)
def test_add_interface_to_router_on_eos(self):
router_name = 'test-router-1'
segment_id = '123'
router_ip = '10.10.10.10'
gw_ip = '10.10.10.1'
mask = '255.255.255.0'
for s in self.drv._servers:
self.drv.add_interface_to_router(segment_id, router_name, gw_ip,
router_ip, mask, s)
cmds = ['enable', 'configure', 'ip routing',
'vlan %s' % segment_id, 'exit',
'interface vlan %s' % segment_id,
'ip address %s' % router_ip,
'ip virtual-router address %s' % gw_ip, 'exit']
s.execute.assert_called_once_with(cmds)
def test_delete_interface_from_router_on_eos(self):
router_name = 'test-router-1'
segment_id = '123'
for s in self.drv._servers:
self.drv.delete_interface_from_router(segment_id, router_name, s)
cmds = ['enable', 'configure', 'no interface vlan %s' % segment_id,
'exit']
s.execute.assert_called_once_with(cmds)
class AristaL3DriverTestCases_v4(base.BaseTestCase):
"""Test cases to test the RPC between Arista Driver and EOS.
Tests all methods used to send commands between Arista L3 Driver and EOS
to program routing functions in Default VRF using IPv4.
"""
def setUp(self):
super(AristaL3DriverTestCases_v4, self).setUp()
setup_arista_config('value')
self.drv = arista.AristaL3Driver()
self.drv._servers = []
self.drv._servers.append(mock.MagicMock())
def test_no_exception_on_correct_configuration(self):
self.assertIsNotNone(self.drv)
def test_add_v4_interface_to_router(self):
gateway_ip = '10.10.10.1'
cidrs = ['10.10.10.0/24', '10.11.11.0/24']
# Add a couple of IPv4 subnets to the router
for cidr in cidrs:
router = {'name': 'test-router-1',
'tenant_id': 'ten-a',
'seg_id': '123',
'cidr': "%s" % cidr,
'gip': "%s" % gateway_ip,
'ip_version': 4}
self.assertFalse(self.drv.add_router_interface(None, router))
def test_delete_v4_interface_from_router(self):
gateway_ip = '10.10.10.1'
cidrs = ['10.10.10.0/24', '10.11.11.0/24']
# Remove a couple of IPv4 subnets from the router
for cidr in cidrs:
router = {'name': 'test-router-1',
'tenant_id': 'ten-a',
'seg_id': '123',
'cidr': "%s" % cidr,
'gip': "%s" % gateway_ip,
'ip_version': 4}
self.assertFalse(self.drv.remove_router_interface(None, router))
class AristaL3DriverTestCases_v6(base.BaseTestCase):
"""Test cases to test the RPC between Arista Driver and EOS.
Tests all methods used to send commands between Arista L3 Driver and EOS
to program routing functions in Default VRF using IPv6.
"""
def setUp(self):
super(AristaL3DriverTestCases_v6, self).setUp()
setup_arista_config('value')
self.drv = arista.AristaL3Driver()
self.drv._servers = []
self.drv._servers.append(mock.MagicMock())
def test_no_exception_on_correct_configuration(self):
self.assertIsNotNone(self.drv)
def test_add_v6_interface_to_router(self):
gateway_ip = '3FFE::1'
cidrs = ['3FFE::/16', '2001::/16']
# Add a couple of IPv6 subnets to the router
for cidr in cidrs:
router = {'name': 'test-router-1',
'tenant_id': 'ten-a',
'seg_id': '123',
'cidr': "%s" % cidr,
'gip': "%s" % gateway_ip,
'ip_version': 6}
self.assertFalse(self.drv.add_router_interface(None, router))
def test_delete_v6_interface_from_router(self):
gateway_ip = '3FFE::1'
cidrs = ['3FFE::/16', '2001::/16']
# Remove a couple of IPv6 subnets from the router
for cidr in cidrs:
router = {'name': 'test-router-1',
'tenant_id': 'ten-a',
'seg_id': '123',
'cidr': "%s" % cidr,
'gip': "%s" % gateway_ip,
'ip_version': 6}
self.assertFalse(self.drv.remove_router_interface(None, router))
class AristaL3DriverTestCases_MLAG_v6(base.BaseTestCase):
"""Test cases to test the RPC between Arista Driver and EOS.
Tests all methods used to send commands between Arista L3 Driver and EOS
to program routing functions in Default VRF on MLAG'ed switches using IPv6.
"""
def setUp(self):
super(AristaL3DriverTestCases_MLAG_v6, self).setUp()
setup_arista_config('value', mlag=True)
self.drv = arista.AristaL3Driver()
self.drv._servers = []
self.drv._servers.append(mock.MagicMock())
self.drv._servers.append(mock.MagicMock())
def test_no_exception_on_correct_configuration(self):
self.assertIsNotNone(self.drv)
def test_add_v6_interface_to_router(self):
gateway_ip = '3FFE::1'
cidrs = ['3FFE::/16', '2001::/16']
# Add a couple of IPv6 subnets to the router
for cidr in cidrs:
router = {'name': 'test-router-1',
'tenant_id': 'ten-a',
'seg_id': '123',
'cidr': "%s" % cidr,
'gip': "%s" % gateway_ip,
'ip_version': 6}
self.assertFalse(self.drv.add_router_interface(None, router))
def test_delete_v6_interface_from_router(self):
gateway_ip = '3FFE::1'
cidrs = ['3FFE::/16', '2001::/16']
# Remove a couple of IPv6 subnets from the router
for cidr in cidrs:
router = {'name': 'test-router-1',
'tenant_id': 'ten-a',
'seg_id': '123',
'cidr': "%s" % cidr,
'gip': "%s" % gateway_ip,
'ip_version': 6}
self.assertFalse(self.drv.remove_router_interface(None, router))
class AristaL3DriverTestCasesMlag_one_switch_failed(base.BaseTestCase):
"""Test cases to test with non redundant hardare in redundancy mode.
In the following test cases, the driver is configured in MLAG (redundancy
mode) but, one of the switches is mocked to throw exceptoin to mimic
failure of the switch. Ensure that the the operation does not fail when
one of the switches fails.
"""
def setUp(self):
super(AristaL3DriverTestCasesMlag_one_switch_failed, self).setUp()
setup_arista_config('value', mlag=True)
self.drv = arista.AristaL3Driver()
self.drv._servers = []
self.drv._servers.append(mock.MagicMock())
self.drv._servers.append(mock.MagicMock())
def test_create_router_when_one_switch_fails(self):
router = {}
router['name'] = 'test-router-1'
tenant = '123'
# Make one of the switches throw an exception - i.e. fail
self.drv._servers[0].execute = mock.Mock(side_effect=Exception)
with mock.patch.object(arista.LOG, 'exception') as log_exception:
self.drv.create_router(None, tenant, router)
log_exception.assert_called_once_with(mock.ANY)
def test_delete_router_when_one_switch_fails(self):
router = {}
router['name'] = 'test-router-1'
tenant = '123'
router_id = '345'
# Make one of the switches throw an exception - i.e. fail
self.drv._servers[1].execute = mock.Mock(side_effect=Exception)
with mock.patch.object(arista.LOG, 'exception') as log_exception:
self.drv.delete_router(None, tenant, router_id, router)
log_exception.assert_called_once_with(mock.ANY)
def test_add_router_interface_when_one_switch_fails(self):
router = {}
router['name'] = 'test-router-1'
router['tenant_id'] = 'ten-1'
router['seg_id'] = '100'
router['ip_version'] = 4
router['cidr'] = '10.10.10.0/24'
router['gip'] = '10.10.10.1'
# Make one of the switches throw an exception - i.e. fail
self.drv._servers[1].execute = mock.Mock(side_effect=Exception)
with mock.patch.object(arista.LOG, 'exception') as log_exception:
self.drv.add_router_interface(None, router)
log_exception.assert_called_once_with(mock.ANY)
def test_remove_router_interface_when_one_switch_fails(self):
router = {}
router['name'] = 'test-router-1'
router['tenant_id'] = 'ten-1'
router['seg_id'] = '100'
router['ip_version'] = 4
router['cidr'] = '10.10.10.0/24'
router['gip'] = '10.10.10.1'
# Make one of the switches throw an exception - i.e. fail
self.drv._servers[0].execute = mock.Mock(side_effect=Exception)
with mock.patch.object(arista.LOG, 'exception') as log_exception:
self.drv.remove_router_interface(None, router)
log_exception.assert_called_once_with(mock.ANY)
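The four failure tests above all verify the same fault-tolerance contract: a failing MLAG peer is logged and skipped, and the overall operation still succeeds. A sketch of that fan-out pattern (a hypothetical helper, not the driver's actual method) could be:

```python
# Hypothetical sketch of the pattern the MLAG failure tests verify:
# send commands to every switch, log any per-switch failure, continue.
import logging

LOG = logging.getLogger(__name__)


def run_on_all_switches(servers, cmds):
    """Execute cmds on each server; a failing peer must not abort the rest."""
    failed = []
    for server in servers:
        try:
            server.execute(cmds)
        except Exception:
            # Mirrors the tests' expectation that LOG.exception is called
            # exactly once when one switch fails.
            LOG.exception('eAPI command failed on one switch; continuing')
            failed.append(server)
    return failed
```

This is why the tests patch `arista.LOG` and assert `log_exception.assert_called_once_with(mock.ANY)` rather than expecting an exception to propagate.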

@@ -1,19 +0,0 @@
# Copyright 2011 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
cfg.CONF.use_stderr = False

@@ -1,111 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import mock
from mock import patch
from neutron_lib.db import api as db_api
from oslo_config import cfg
from neutron.db.models.plugins.ml2 import vlanallocation
from neutron.tests.unit import testlib_api
from networking_arista.ml2.drivers.driver_helpers import VlanSyncService
from networking_arista.ml2.drivers.type_arista_vlan import AristaVlanTypeDriver
import networking_arista.tests.unit.ml2.utils as utils
EAPI_SEND_FUNC = ('networking_arista.ml2.arista_ml2.AristaRPCWrapperEapi'
'._send_eapi_req')
class AristaTypeDriverTest(testlib_api.SqlTestCase):
def setUp(self):
super(AristaTypeDriverTest, self).setUp()
utils.setup_arista_wrapper_config(cfg)
@patch(EAPI_SEND_FUNC)
def test_initialize_type_driver(self, mock_send_eapi_req):
type_driver = AristaVlanTypeDriver()
type_driver.sync_service._force_sync = False
type_driver.sync_service._vlan_assignment_uuid = {'uuid': 1}
type_driver.sync_service._rpc = mock.MagicMock()
rpc = type_driver.sync_service._rpc
rpc.get_vlan_assignment_uuid.return_value = {'uuid': 1}
type_driver.initialize()
cmds = ['show openstack agent uuid',
'show openstack instances',
'show openstack agent uuid',
'show openstack features']
calls = [mock.call(cmds=[cmd], commands_to_log=[cmd])
for cmd in cmds]
mock_send_eapi_req.assert_has_calls(calls)
type_driver.timer.cancel()
class VlanSyncServiceTest(testlib_api.SqlTestCase):
"""Test that VLANs are synchronized between EOS and Neutron."""
def _ensure_in_db(self, assigned, allocated, available):
session = db_api.get_reader_session()
with session.begin():
vlans = session.query(vlanallocation.VlanAllocation).all()
for vlan in vlans:
self.assertIn(vlan.vlan_id, assigned)
if vlan.vlan_id in available:
self.assertFalse(vlan.allocated)
elif vlan.vlan_id in allocated:
self.assertTrue(vlan.allocated)
def test_synchronization_test(self):
rpc = mock.MagicMock()
rpc.get_vlan_allocation.return_value = {
'assignedVlans': '1-10,21-30',
'availableVlans': '1-5,21,23,25,27,29',
'allocatedVlans': '6-10,22,24,26,28,30'
}
assigned = list(itertools.chain(range(1, 11), range(21, 31)))
available = [1, 2, 3, 4, 5, 21, 23, 25, 27, 29]
allocated = list(set(assigned) - set(available))
sync_service = VlanSyncService(rpc)
sync_service.synchronize()
self._ensure_in_db(assigned, allocated, available)
# Call synchronize again which returns different data
rpc.get_vlan_allocation.return_value = {
'assignedVlans': '51-60,71-80',
'availableVlans': '51-55,71,73,75,77,79',
'allocatedVlans': '56-60,72,74,76,78,80'
}
assigned = list(itertools.chain(range(51, 61), range(71, 81)))
available = [51, 52, 53, 54, 55, 71, 73, 75, 77, 79]
allocated = list(set(assigned) - set(available))
sync_service = VlanSyncService(rpc)
sync_service.synchronize()
self._ensure_in_db(assigned, allocated, available)
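The `get_vlan_allocation` data above uses EOS-style range strings such as `'1-10,21-30'`. A hypothetical parser (illustrative only; the driver's own helper may differ) shows how such strings expand into the VLAN-id sets the sync logic compares against Neutron's DB rows:

```python
# Hypothetical helper: expand an EOS-style VLAN range string into a set
# of integer VLAN ids, e.g. '1-10,21-30' -> {1..10, 21..30}.
def parse_vlan_ranges(ranges):
    vlan_ids = set()
    for part in ranges.split(','):
        if not part:
            continue  # tolerate empty segments from trailing commas
        if '-' in part:
            start, end = part.split('-')
            vlan_ids.update(range(int(start), int(end) + 1))
        else:
            vlan_ids.add(int(part))
    return vlan_ids
```

With this expansion, the test's bookkeeping follows directly: `allocated` is the set difference of `assignedVlans` and `availableVlans`.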

File diff suppressed because it is too large

@@ -1,39 +0,0 @@
# Copyright (c) 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def setup_arista_wrapper_config(cfg, host='host', user='user'):
cfg.CONF.keystone_authtoken = fake_keystone_info_class()
cfg.CONF.set_override('eapi_host', host, "ml2_arista")
cfg.CONF.set_override('eapi_username', user, "ml2_arista")
cfg.CONF.set_override('sync_interval', 10, "ml2_arista")
cfg.CONF.set_override('conn_timeout', 20, "ml2_arista")
cfg.CONF.set_override('switch_info', ['switch1:user:pass'], "ml2_arista")
cfg.CONF.set_override('sec_group_support', False, "ml2_arista")
class fake_keystone_info_class(object):
"""To generate fake Keystone Authentication token information
Arista Driver expects Keystone auth info. This fake information
is for testing only
"""
auth_uri = False
auth_protocol = 'abc'
auth_host = 'host'
auth_port = 5000
admin_user = 'neutron'
admin_password = 'fun'
admin_tenant_name = 'tenant_name'

@@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
alembic>=0.8.10 # MIT
neutron-lib>=1.9.0 # Apache-2.0
oslo.i18n!=3.15.2,>=2.1.0 # Apache-2.0
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
oslo.log>=3.22.0 # Apache-2.0
oslo.service>=1.10.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
requests>=2.14.2 # Apache-2.0
six>=1.9.0 # MIT
SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT

@@ -1,66 +0,0 @@
[metadata]
name = networking_arista
summary = Arista Networking drivers
description-file =
README.rst
author = Arista Networks
author-email = openstack-dev@arista.com
home-page = https://github.com/openstack/networking-arista/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.5
[files]
packages =
networking_arista
data_files =
/etc/neutron/plugins/ml2 =
etc/ml2_conf_arista.ini
[global]
setup-hooks =
pbr.hooks.setup_hook
[entry_points]
neutron.ml2.mechanism_drivers =
arista = networking_arista.ml2.mechanism_arista:AristaDriver
arista_ml2 = networking_arista.ml2.mechanism_arista:AristaDriver
neutron.service_plugins =
arista_l3 = networking_arista.l3Plugin.l3_arista:AristaL3ServicePlugin
neutron.db.alembic_migrations =
networking-arista = networking_arista.db.migration:alembic_migrations
neutron.ml2.type_drivers =
arista_vlan = networking_arista.ml2.drivers.type_arista_vlan:AristaVlanTypeDriver
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = networking_arista/locale
domain = networking-arista
[update_catalog]
domain = networking-arista
output_dir = networking_arista/locale
input_file = networking_arista/locale/networking-arista.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = networking_arista/locale/networking-arista.pot
[wheel]
universal = 1

@@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)

@@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
mock>=2.0 # BSD
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx>=1.6.2 # BSD
oslosphinx>=4.7.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
testresources>=0.2.4 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD

tox.ini

@@ -1,40 +0,0 @@
[tox]
envlist = py27,py35,pep8
minversion = 1.6
skipsdist = True
[testenv]
usedevelop = True
install_command = pip install -c {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} -r requirements.txt -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
PYTHONWARNINGS=default::DeprecationWarning
deps = -r{toxinidir}/test-requirements.txt
-egit+https://git.openstack.org/openstack/neutron.git#egg=neutron
whitelist_externals = sh
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:pep8]
commands =
flake8
neutron-db-manage --subproject networking-arista check_migration
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'
[testenv:docs]
commands = python setup.py build_sphinx
[flake8]
# H803 skipped on purpose per list discussion.
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125,H803
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build
[hacking]
import_exceptions = networking_arista._i18n