Merge remote-tracking branch 'origin/master' into merge-branch

Change-Id: Ic237c09131f6579f3df1a3a10ba1e5f7a3d42bde
This commit is contained in:
Kyle Mestery 2015-08-11 10:39:07 +00:00
commit a7b91632fc
156 changed files with 4018 additions and 2935 deletions


@ -1,13 +1,36 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Testing Neutron
===============
Overview
--------
Neutron relies on unit, functional, fullstack and API tests to ensure its
quality, as described below. In addition to in-tree testing, `Tempest`_ is
responsible for validating Neutron's integration with other OpenStack
components via scenario tests, and `Rally`_ is responsible for benchmarking.
.. _Tempest: http://docs.openstack.org/developer/tempest/
.. _Rally: http://rally.readthedocs.org/en/latest/
@ -16,45 +39,59 @@ Unit Tests
~~~~~~~~~~
Unit tests (neutron/test/unit/) are meant to cover as much code as
possible. They are designed to test the various pieces of the Neutron tree to
make sure any new changes don't break existing functionality. Unit tests have
no system requirements and make no changes to the system they run on. They
use an in-memory SQLite database to test DB interaction.
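As an illustration of the in-memory database approach, a unit test might look
like the following minimal sketch. The table and names here are hypothetical;
real Neutron unit tests go through the test base classes under neutron/tests/
and SQLAlchemy models rather than raw sqlite3:

```python
import sqlite3
import unittest


class NetworkDbTestCase(unittest.TestCase):
    """Hypothetical example of an in-memory-database-backed unit test."""

    def setUp(self):
        # ':memory:' gives each test a fresh, throwaway database, so
        # nothing on the host system is touched or left behind.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE networks (id TEXT, name TEXT)")
        self.addCleanup(self.conn.close)

    def test_insert_and_fetch(self):
        self.conn.execute("INSERT INTO networks VALUES ('uuid-1', 'net1')")
        row = self.conn.execute("SELECT name FROM networks").fetchone()
        self.assertEqual(("net1",), row)
```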
Functional Tests
~~~~~~~~~~~~~~~~
Functional tests (neutron/tests/functional/) are intended to
validate actual system interaction. Mocks should be used sparingly,
if at all. Care should be taken to ensure that existing system
resources are not modified and that resources created in tests are
properly cleaned up both on test success and failure.
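One way to get cleanup on both success and failure is unittest's ``addCleanup``
hook, which runs whether the test body passes, fails or errors. A minimal
sketch, in which a temp file stands in for a real system resource such as a
veth pair or a network namespace:

```python
import os
import tempfile
import unittest


class VethResourceTestCase(unittest.TestCase):
    """Sketch only: the 'resource' here is just a temporary file."""

    def _create_resource(self):
        fd, path = tempfile.mkstemp()
        os.close(fd)
        # Cleanups registered here run after the test finishes, whether
        # it passed, failed or errored - unlike teardown code placed at
        # the end of the test body, which a failure would skip.
        self.addCleanup(os.unlink, path)
        return path

    def test_resource_exists(self):
        path = self._create_resource()
        self.assertTrue(os.path.exists(path))
```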
Fullstack Tests
~~~~~~~~~~~~~~~
Fullstack tests (neutron/tests/fullstack/) target Neutron as a whole.
The test infrastructure itself manages the Neutron server and its agents.
Fullstack tests are a form of integration testing and fill a void between
unit/functional tests and Tempest. More information may be found
`here. <fullstack_testing.html>`_
API Tests
~~~~~~~~~
API tests (neutron/tests/api/) are intended to ensure the function
and stability of the Neutron API. As much as possible, changes to
this path should not be made at the same time as changes to the code
to limit the potential for introducing backwards-incompatible changes,
although the same patch that introduces a new API should include an API
test.
Since API tests target a deployed Neutron daemon that is not test-managed,
they should not depend on controlling the runtime configuration
of the target daemon. API tests should be black-box - no assumptions should
be made about implementation. Only the contract defined by Neutron's REST API
should be validated, and all interaction with the daemon should be via
a REST client.
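The black-box rule means a test speaks to the daemon only over HTTP. A sketch
of composing such a call with the standard library; the endpoint address and
token below are placeholders, and ``GET /v2.0/networks`` is the Neutron
network-list resource:

```python
import urllib.request


def build_list_networks_request(endpoint, token):
    """Compose a GET /v2.0/networks request using only the REST contract;
    no Neutron internals are imported or inspected."""
    return urllib.request.Request(
        url=endpoint.rstrip("/") + "/v2.0/networks",
        headers={"X-Auth-Token": token, "Accept": "application/json"},
        method="GET",
    )


# urllib.request.urlopen(...) would then perform the actual call against a
# deployed daemon; assertions are made only on the response, never on
# server-side state.
req = build_list_networks_request("http://192.0.2.10:9696", "<token>")
# req.full_url is "http://192.0.2.10:9696/v2.0/networks"
```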
neutron/tests/api was copied from the Tempest project. The Tempest networking
API directory was frozen and any new tests belong to the Neutron repository.
Development Process
-------------------
It is expected that any new changes that are proposed for merge
come with tests for that feature or code area. Ideally any bugs
fixes that are submitted also have tests to prove that they stay
fixed! In addition, before proposing for merge, all of the
current tests should be passing.
Structure of the Unit Test Tree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The structure of the unit test tree should match the structure of the
@ -66,7 +103,7 @@ code tree, e.g. ::
Unit test modules should have the same path under neutron/tests/unit/
as the module they target has under neutron/, and their name should be
the name of the target module prefixed by `test_`. This requirement
is intended to make it easier for developers to find the unit tests
for a given module.
@ -83,26 +120,11 @@ tree is structured according to the above requirements: ::
./tools/check_unit_test_structure.sh
Where appropriate, exceptions can be added to the above script. If
code is not part of the Neutron namespace, for example, it's probably
reasonable to exclude its unit tests from the check.
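The path rule above is mechanical enough to sketch. This hypothetical helper
mirrors what the check script verifies; it is not the script's actual
implementation:

```python
import os.path


def expected_unit_test_path(module_path):
    """Map a module under neutron/ to its expected unit test path,
    e.g. neutron/agent/dhcp.py -> neutron/tests/unit/agent/test_dhcp.py."""
    rel = os.path.relpath(module_path, "neutron")
    directory, filename = os.path.split(rel)
    return os.path.join("neutron", "tests", "unit", directory,
                        "test_" + filename)
```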
Running Tests
-------------
There are three mechanisms for running tests: run_tests.sh, tox,
@ -143,16 +165,11 @@ some rough edges when it comes to diagnosing errors and failures, and there is
no easy way to set a breakpoint in the Neutron code, and enter an
interactive debugging session while using testr.
Note that nose2's predecessor, `nose`_, does not understand the
`load_tests protocol`_ introduced in Python 2.7. This limitation will result in
errors being reported for modules that depend on load_tests
(usually due to use of `testscenarios`_). nose, therefore, is not supported,
while nose2 is.
.. _nose2: http://nose2.readthedocs.org/en/latest/index.html
.. _nose: https://nose.readthedocs.org/en/latest/index.html
@ -167,7 +184,7 @@ environments for running test cases. It uses `Testr`_ for managing the running
of the test cases.
Tox handles the creation of a series of `virtualenvs`_ that target specific
versions of Python.
Testr handles the parallel execution of series of test cases as well as
the tracking of long-running tests and other things.
@ -183,7 +200,7 @@ see this wiki page:
.. _virtualenvs: https://pypi.python.org/pypi/virtualenv
PEP8 and Unit Tests
+++++++++++++++++++
Running pep8 and unit tests is as easy as executing this in the root
directory of the Neutron source code::
@ -204,7 +221,7 @@ To run only the unit tests::
tox -e py27
Functional Tests
++++++++++++++++
To run functional tests that do not require sudo privileges or
specific-system dependencies::
@ -215,23 +232,23 @@ To run all the functional tests, including those requiring sudo
privileges and system-specific dependencies, the procedure defined by
tools/configure_for_func_testing.sh should be followed.
IMPORTANT: configure_for_func_testing.sh relies on DevStack to perform
extensive modification to the underlying host. Execution of the
script requires sudo privileges and it is recommended that the
following commands be invoked only on a clean and disposable VM.
A VM that has had DevStack previously installed on it is also fine. ::
git clone https://git.openstack.org/openstack-dev/devstack ../devstack
./tools/configure_for_func_testing.sh ../devstack -i
tox -e dsvm-functional
The '-i' option is optional and instructs the script to use DevStack
to install and configure all of Neutron's package dependencies. It is
not necessary to provide this option if DevStack has already been used
to deploy Neutron to the target host.
Fullstack Tests
+++++++++++++++
To run all the full-stack tests, you may use: ::
@ -239,7 +256,7 @@ To run all the full-stack tests, you may use: ::
Since full-stack tests often require the same resources and
dependencies as the functional tests, using the configuration script
tools/configure_for_func_testing.sh is advised (as described above).
When running full-stack tests on a clean VM for the first time, we
advise running ./stack.sh successfully to make sure all of Neutron's
dependencies are met. Full-stack based Neutron daemons produce logs to a
@ -248,47 +265,47 @@ sub-folder in /tmp/fullstack-logs (for example, a test named
so that will be a good place to look if your test is failing.
API Tests
+++++++++
To run the api tests, deploy Tempest and Neutron with DevStack and
then run the following command: ::
tox -e api
If tempest.conf cannot be found at the default location used by
DevStack (/opt/stack/tempest/etc) it may be necessary to set
TEMPEST_CONFIG_DIR before invoking tox: ::
export TEMPEST_CONFIG_DIR=[path to dir containing tempest.conf]
tox -e api
Running Individual Tests
~~~~~~~~~~~~~~~~~~~~~~~~
For running individual test modules, cases or tests, you just need to pass
the dot-separated path you want as an argument to it.
For example, the following would run only a single test or test case::
$ ./run_tests.sh neutron.tests.unit.test_manager
$ ./run_tests.sh neutron.tests.unit.test_manager.NeutronManagerTestCase
$ ./run_tests.sh neutron.tests.unit.test_manager.NeutronManagerTestCase.test_service_plugin_is_loaded
or::
$ tox -e py27 neutron.tests.unit.test_manager
$ tox -e py27 neutron.tests.unit.test_manager.NeutronManagerTestCase
$ tox -e py27 neutron.tests.unit.test_manager.NeutronManagerTestCase.test_service_plugin_is_loaded
Coverage
--------
Neutron has a fast growing code base and there are plenty of areas that
need better coverage.
To get a grasp of the areas where tests are needed, you can check
current unit test coverage by running::
$ ./run_tests.sh -c
@ -296,7 +313,7 @@ Debugging
---------
By default, calls to pdb.set_trace() will be ignored when tests
are run. For pdb statements to work, invoke run_tests as follows::
$ ./run_tests.sh -d [test module path]
@ -311,7 +328,7 @@ after a tox run and reused for debugging::
$ . .tox/venv/bin/activate
$ python -m testtools.run [test module path]
Tox packages and installs the Neutron source tree in a given venv
on every invocation, but if modifications need to be made between
invocation (e.g. adding more pdb statements), it is recommended
that the source tree be installed in the venv in editable mode::
@ -323,7 +340,7 @@ Editable mode ensures that changes made to the source tree are
automatically reflected in the venv, and that such changes are not
overwritten during the next tox run.
Post-mortem Debugging
~~~~~~~~~~~~~~~~~~~~~
Setting OS_POST_MORTEM_DEBUGGER in the shell environment will ensure
@ -341,7 +358,7 @@ with pdb::
$ OS_POST_MORTEM_DEBUGGER=pudb ./run_tests.sh -d [test module path]
References
~~~~~~~~~~
.. [#pudb] PUDB debugger:
https://pypi.python.org/pypi/pudb


@ -1,9 +1,31 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Client command extension support
================================
The client command extension adds support for extending the neutron client while
considering ease of creation.
The full document can be found in the python-neutronclient repository:
http://docs.openstack.org/developer/python-neutronclient/devref/client_command_extensions.html


@ -1,3 +1,26 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Contributing new extensions to Neutron
======================================
@ -546,6 +569,6 @@ Other repo-split items
Decomposition Phase II Progress Chart
-------------------------------------
TBD.


@ -15,6 +15,16 @@
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Setting Up a Development Environment
====================================


@ -1,9 +1,31 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Full Stack Testing
==================
Why?
----
The idea behind "fullstack" testing is to fill a gap between unit + functional
tests and Tempest. Tempest tests are expensive to run, difficult to run in
@ -14,7 +36,7 @@ environment and provide a rapidly reproducible way to verify code as you're
still writing it.
How?
----
Full stack tests set up their own Neutron processes (Server & agents). They
assume a working Rabbit and MySQL server before the run starts. Instructions
@ -44,7 +66,7 @@ interconnected.
.. image:: images/fullstack-multinode-simulation.png
When?
-----
1) You'd like to test the interaction between Neutron components (Server
and agents) and have already tested each component in isolation via unit or
@ -59,27 +81,24 @@ When?
agent during the test.
Short Term Goals
----------------
* Multinode & Stability:
- Interconnect the internal and external bridges
- Convert the L3 HA failover functional test to a full stack test
- Write a test for DHCP HA / Multiple DHCP agents per network
* Write DVR tests
* Write additional L3 HA tests
* Write a test that validates L3 HA + l2pop integration after
https://bugs.launchpad.net/neutron/+bug/1365476 is fixed.
* Write a test that validates DVR + L3 HA integration after
https://bugs.launchpad.net/neutron/+bug/1365473 is fixed.
None of these tasks currently have owners. Feel free to send patches!
After these tests are merged, it should be fair to start asking contributors to
add full stack tests when appropriate in the patches themselves and not after
the fact, as there will probably be something to copy and paste from.
Long Term Goals
---------------
* Currently we configure the OVS agent with VLANs segmentation (Only because
it's easier). This allows us to validate most functionality, but we might


@ -15,6 +15,15 @@
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Developer Guide
===============


@ -1,3 +1,26 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Neutron public API
==================


@ -1,3 +1,26 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Official Sub-Projects
=====================
@ -62,7 +85,7 @@ The official source of all repositories that exist under the Neutron project is:
http://governance.openstack.org/reference/projects/neutron.html
Affiliated projects
~~~~~~~~~~~~~~~~~~~
This table shows the affiliated projects that integrate with Neutron,
in one form or another. These projects typically leverage the pluggable
@ -131,7 +154,7 @@ repo but are summarized here to describe the functionality they provide.
+-------------------------------+-----------------------+
Functionality legend
++++++++++++++++++++
- l2: a Layer 2 service;
- ml2: an ML2 mechanism driver;
@ -145,7 +168,7 @@ Functionality legend
.. _networking-arista:
Arista
++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-arista
* Launchpad: https://launchpad.net/networking-arista
@ -154,7 +177,7 @@ Arista
.. _networking-bagpipe-l2:
BaGPipe
+++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-bagpipe-l2
* Launchpad: https://launchpad.net/bagpipe-l2
@ -163,14 +186,14 @@ BaGPipe
.. _networking-bgpvpn:
BGPVPN
++++++
* Git: https://git.openstack.org/cgit/openstack/networking-bgpvpn
.. _networking-bigswitch:
Big Switch Networks
+++++++++++++++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-bigswitch
* Pypi: https://pypi.python.org/pypi/bsnstacklib
@ -178,7 +201,7 @@ Big Switch Networks
.. _networking-brocade:
Brocade
+++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-brocade
* Launchpad: https://launchpad.net/networking-brocade
@ -187,7 +210,7 @@ Brocade
.. _networking-cisco:
Cisco
+++++
* Git: https://git.openstack.org/cgit/stackforge/networking-cisco
* Launchpad: https://launchpad.net/networking-cisco
@ -196,7 +219,7 @@ Cisco
.. _dragonflow:
DragonFlow
++++++++++
* Git: https://git.openstack.org/cgit/openstack/dragonflow
* Launchpad: https://launchpad.net/dragonflow
@ -205,7 +228,7 @@ DragonFlow
.. _networking-edge-vpn:
Edge VPN
++++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-edge-vpn
* Launchpad: https://launchpad.net/edge-vpn
@ -213,7 +236,7 @@ Edge VPN
.. _networking-fujitsu:
FUJITSU
+++++++
* Git: https://git.openstack.org/cgit/openstack/networking-fujitsu
* Launchpad: https://launchpad.net/networking-fujitsu
@ -222,7 +245,7 @@ FUJITSU
.. _networking-hyperv:
Hyper-V
+++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-hyperv
* Launchpad: https://launchpad.net/networking-hyperv
@ -231,7 +254,7 @@ Hyper-V
.. _group-based-policy:
Group Based Policy
++++++++++++++++++
* Git: https://git.openstack.org/cgit/stackforge/group-based-policy
* Launchpad: https://launchpad.net/group-based-policy
@ -240,7 +263,7 @@ Group Based Policy
.. _networking-ibm:
IBM SDNVE
+++++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-ibm
* Launchpad: https://launchpad.net/networking-ibm
@ -248,7 +271,7 @@ IBM SDNVE
.. _networking-l2gw:
L2 Gateway
++++++++++
* Git: https://git.openstack.org/cgit/openstack/networking-l2gw
* Launchpad: https://launchpad.net/networking-l2gw
@ -256,7 +279,7 @@ L2 Gateway
.. _networking-midonet:
MidoNet
+++++++
* Git: https://git.openstack.org/cgit/openstack/networking-midonet
* Launchpad: https://launchpad.net/networking-midonet
@ -265,7 +288,7 @@ MidoNet
.. _networking-mlnx:
Mellanox
++++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-mlnx
* Launchpad: https://launchpad.net/networking-mlnx
@ -273,7 +296,7 @@ Mellanox
.. _networking-nec:
NEC
+++
* Git: https://git.openstack.org/cgit/stackforge/networking-nec
* Launchpad: https://launchpad.net/networking-nec
@ -282,14 +305,14 @@ NEC
.. _nuage-openstack-neutron:
Nuage
+++++
* Git: https://github.com/nuage-networks/nuage-openstack-neutron
.. _networking-odl:
OpenDayLight
++++++++++++
* Git: https://git.openstack.org/cgit/openstack/networking-odl
* Launchpad: https://launchpad.net/networking-odl
@ -297,7 +320,7 @@ OpenDayLight
.. _networking-ofagent:
OpenFlow Agent (ofagent)
++++++++++++++++++++++++
* Git: https://git.openstack.org/cgit/openstack/networking-ofagent
* Launchpad: https://launchpad.net/networking-ofagent
@ -306,7 +329,7 @@ OpenFlow Agent (ofagent)
.. _networking-ovn:
Open Virtual Network
++++++++++++++++++++
* Git: https://git.openstack.org/cgit/openstack/networking-ovn
* Launchpad: https://launchpad.net/networking-ovn
@ -315,7 +338,7 @@ Open Virtual Network
.. _networking-ovs-dpdk:
Open DPDK
+++++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-ovs-dpdk
* Launchpad: https://launchpad.net/networking-ovs-dpdk
@ -323,7 +346,7 @@ Open DPDK
.. _networking-plumgrid:
PLUMgrid
++++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-plumgrid
* Launchpad: https://launchpad.net/networking-plumgrid
@ -332,7 +355,7 @@ PLUMgrid
.. _neutron-powervm:
PowerVM
+++++++
* Git: https://git.openstack.org/cgit/stackforge/neutron-powervm
* Launchpad: https://launchpad.net/neutron-powervm
@ -341,7 +364,7 @@ PowerVM
.. _networking-portforwarding:
PortForwarding
++++++++++++++
* Git: https://git.openstack.org/cgit/stackforge/networking-portforwarding
* Launchpad: https://launchpad.net/networking-portforwarding
@ -349,22 +372,22 @@ PortForwarding
.. _networking-sfc:
SFC
+++
* Git: https://git.openstack.org/cgit/openstack/networking-sfc
.. _networking-vsphere:
vSphere
+++++++
* Git: https://git.openstack.org/cgit/openstack/networking-vsphere
* Launchpad: https://launchpad.net/networking-vsphere
.. _vmware-nsx:
VMware NSX
++++++++++
* Git: https://git.openstack.org/cgit/openstack/vmware-nsx
* Launchpad: https://launchpad.net/vmware-nsx
@ -373,7 +396,7 @@ VMware NSX
.. _octavia:
Octavia
+++++++
* Git: https://git.openstack.org/cgit/openstack/octavia
* Launchpad: https://launchpad.net/octavia


@ -111,16 +111,17 @@ running.
At the root of the results - there should be the following:
* console.html.gz - contains the output of stdout of the test run
* local.conf / localrc - contains the setup used for this run
* logs - contains the detailed test logs of the test run
The above "logs" must be a directory, which contains the following:
* Log files for each screen session that DevStack creates and launches an
OpenStack component in
* Test result files
* testr_results.html.gz
* tempest.txt.gz
List of existing plugins and drivers
------------------------------------


@ -68,6 +68,11 @@
# as forwarders.
# dnsmasq_dns_servers =
# Base log dir for dnsmasq logging. The log contains DHCP and DNS log
# information and is useful for debugging issues with either DHCP or DNS.
# If this option is not set, the dnsmasq log is disabled.
# dnsmasq_base_log_dir =
# Limit number of leases to prevent a denial-of-service.
# dnsmasq_lease_max = 16777216

etc/neutron.conf Executable file → Normal file


@ -11,9 +11,3 @@
# ID of the project that MidoNet admin user belongs to
# project_id = 77777777-7777-7777-7777-777777777777


@ -21,8 +21,12 @@
# (IntOpt) use specific TOS for vxlan interface protocol packets
# tos =
#
# (StrOpt) multicast group or group range to use for broadcast emulation.
# Specifying a range allows different VNIs to use different group addresses,
# reducing or eliminating spurious broadcast traffic to the tunnel endpoints.
# Ranges are specified by using CIDR notation. To reserve a unique group for
# each possible (24-bit) VNI, use a /8 such as 239.0.0.0/8.
# This setting must be the same on all the agents.
# vxlan_group = 224.0.0.1
#
# (StrOpt) Local IP address to use for VXLAN endpoints (required)
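The VNI-to-group arithmetic implied by the comment above can be sketched as
follows. This is illustrative only, not the agent's actual code:

```python
import ipaddress


def group_for_vni(group_cidr, vni):
    """Pick a multicast group for a VNI from a CIDR range; with a /8
    such as 239.0.0.0/8, every 24-bit VNI gets a distinct group."""
    net = ipaddress.ip_network(group_cidr)
    # A single address (e.g. the default 224.0.0.1) maps every VNI to
    # the same group; a wider range spreads VNIs across groups, so a
    # tunnel endpoint only receives broadcasts for VNIs it joined.
    return str(net[vni % net.num_addresses])
```

With ``vxlan_group = 239.0.0.0/8``, VNI ``0x0A0B0C`` would land on
``239.10.11.12``; with the single default address, all VNIs share one group.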


@ -1,100 +0,0 @@
# Defines configuration options specific for Arista ML2 Mechanism driver
[ml2_arista]
# (StrOpt) EOS IP address. This is a required field. If not set, all
# communications to Arista EOS will fail
#
# eapi_host =
# Example: eapi_host = 192.168.0.1
#
# (StrOpt) EOS command API username. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# eapi_username =
# Example: arista_eapi_username = admin
#
# (StrOpt) EOS command API password. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# eapi_password =
# Example: eapi_password = my_password
#
# (StrOpt) Defines if hostnames are sent to Arista EOS as FQDNs
# ("node1.domain.com") or as short names ("node1"). This is
# optional. If not set, a value of "True" is assumed.
#
# use_fqdn =
# Example: use_fqdn = True
#
# (IntOpt) Sync interval in seconds between Neutron plugin and EOS.
# This field defines how often the synchronization is performed.
# This is an optional field. If not set, a value of 180 seconds
# is assumed.
#
# sync_interval =
# Example: sync_interval = 60
#
# (StrOpt) Defines Region Name that is assigned to this OpenStack Controller.
# This is useful when multiple OpenStack/Neutron controllers are
# managing the same Arista HW clusters. Note that this name must
# match with the region name registered (or known) to keystone
service. Authentication with Keystone is performed by EOS.
# This is optional. If not set, a value of "RegionOne" is assumed.
#
# region_name =
# Example: region_name = RegionOne
[l3_arista]
# (StrOpt) primary host IP address. This is a required field. If not set, all
# communications to Arista EOS will fail. This is the host where
# primary router is created.
#
# primary_l3_host =
# Example: primary_l3_host = 192.168.10.10
#
# (StrOpt) Primary host username. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# primary_l3_host_username =
# Example: primary_l3_host_username = admin
#
# (StrOpt) Primary host password. This is a required field.
# If not set, all communications to Arista EOS will fail.
#
# primary_l3_host_password =
# Example: primary_l3_host_password = my_password
#
# (StrOpt) IP address of the second Arista switch paired as
# MLAG (Multi-chassis Link Aggregation) with the first.
# This is an optional field; however, if the mlag_config flag is
# set, then this is a required field. If not set, all
# communications to Arista EOS will fail. If mlag_config is set
# to False, then this field is ignored.
#
# secondary_l3_host =
# Example: secondary_l3_host = 192.168.10.20
#
# (BoolOpt) Defines if Arista switches are configured in MLAG mode
# If yes, all L3 configuration is pushed to both switches
# automatically. If this flag is set, ensure that secondary_l3_host
# is set to the second switch's IP.
# This flag is optional. If not set, a value of "False" is assumed.
#
# mlag_config =
# Example: mlag_config = True
#
# (BoolOpt) Defines if the router is created in the default VRF
# or a specific VRF. This is an optional field.
# If not set, a value of "False" is assumed.
#
# use_vrf =
# Example: use_vrf = True
#
# (IntOpt) Sync interval in seconds between Neutron plugin and EOS.
# This field defines how often the synchronization is performed.
# This is an optional field. If not set, a value of 180 seconds
# is assumed.
#
# l3_sync_interval =
# Example: l3_sync_interval = 60


@ -39,7 +39,7 @@
# Name of the default interface name to be used on network-gateway. This value
# will be used for any device associated with a network gateway for which an
# interface name was not specified
# default_interface_name = breth0
# nsx_default_interface_name = breth0
# Reconnect connection to nsx if not used within this amount of time.
# conn_idle_timeout = 900


@ -141,12 +141,6 @@ class BaseOVS(object):
return self.ovsdb.db_get(table, record, column).execute(
check_error=check_error, log_errors=log_errors)
def db_list(self, table, records=None, columns=None,
check_error=True, log_errors=True, if_exists=False):
return (self.ovsdb.db_list(table, records=records, columns=columns,
if_exists=if_exists).
execute(check_error=check_error, log_errors=log_errors))
class OVSBridge(BaseOVS):
def __init__(self, br_name):
@ -326,20 +320,24 @@ class OVSBridge(BaseOVS):
"Exception: %(exception)s"),
{'cmd': args, 'exception': e})
def get_ports_attributes(self, table, columns=None, ports=None,
check_error=True, log_errors=True,
if_exists=False):
port_names = ports or self.get_port_name_list()
return (self.ovsdb.db_list(table, port_names, columns=columns,
if_exists=if_exists).
execute(check_error=check_error, log_errors=log_errors))
# returns a VIF object for each VIF port
def get_vif_ports(self):
edge_ports = []
port_names = self.get_port_name_list()
port_info = self.db_list(
'Interface', columns=['name', 'external_ids', 'ofport'])
by_name = {x['name']: x for x in port_info}
for name in port_names:
if not by_name.get(name):
#NOTE(dprince): some ports (like bonds) won't have all
# these attributes so we skip them entirely
continue
external_ids = by_name[name]['external_ids']
ofport = by_name[name]['ofport']
port_info = self.get_ports_attributes(
'Interface', columns=['name', 'external_ids', 'ofport'],
if_exists=True)
for port in port_info:
name = port['name']
external_ids = port['external_ids']
ofport = port['ofport']
if "iface-id" in external_ids and "attached-mac" in external_ids:
p = VifPort(name, ofport, external_ids["iface-id"],
external_ids["attached-mac"], self)
@ -356,9 +354,8 @@ class OVSBridge(BaseOVS):
return edge_ports
def get_vif_port_to_ofport_map(self):
port_names = self.get_port_name_list()
results = self.db_list(
'Interface', port_names, ['name', 'external_ids', 'ofport'],
results = self.get_ports_attributes(
'Interface', columns=['name', 'external_ids', 'ofport'],
if_exists=True)
port_map = {}
for r in results:
@ -373,9 +370,8 @@ class OVSBridge(BaseOVS):
def get_vif_port_set(self):
edge_ports = set()
port_names = self.get_port_name_list()
results = self.db_list(
'Interface', port_names, ['name', 'external_ids', 'ofport'],
results = self.get_ports_attributes(
'Interface', columns=['name', 'external_ids', 'ofport'],
if_exists=True)
for result in results:
if result['ofport'] == UNASSIGNED_OFPORT:
@ -413,22 +409,18 @@ class OVSBridge(BaseOVS):
in the "Interface" table queried by the get_vif_port_set() method.
"""
port_names = self.get_port_name_list()
results = self.db_list('Port', port_names, ['name', 'tag'],
if_exists=True)
results = self.get_ports_attributes(
'Port', columns=['name', 'tag'], if_exists=True)
return {p['name']: p['tag'] for p in results}
def get_vifs_by_ids(self, port_ids):
interface_info = self.db_list(
interface_info = self.get_ports_attributes(
"Interface", columns=["name", "external_ids", "ofport"])
by_id = {x['external_ids'].get('iface-id'): x for x in interface_info}
intfs_on_bridge = self.ovsdb.list_ports(self.br_name).execute(
check_error=True)
result = {}
for port_id in port_ids:
result[port_id] = None
if (port_id not in by_id or
by_id[port_id]['name'] not in intfs_on_bridge):
if port_id not in by_id:
LOG.info(_LI("Port %(port_id)s not present in bridge "
"%(br_name)s"),
{'port_id': port_id, 'br_name': self.br_name})


@ -54,6 +54,11 @@ DNSMASQ_OPTS = [
"This option is deprecated and "
"will be removed in a future release."),
deprecated_for_removal=True),
cfg.StrOpt('dnsmasq_base_log_dir',
help=_("Base log dir for dnsmasq logging. "
"The log contains DHCP and DNS log information and "
"is useful for debugging issues with either DHCP or "
"DNS. If this option is not set, dnsmasq logging is disabled.")),
cfg.IntOpt(
'dnsmasq_lease_max',
default=(2 ** 24),


@ -117,6 +117,11 @@ class FirewallDriver(object):
"""Update rules in a security group."""
raise NotImplementedError()
def security_group_updated(self, action_type, sec_group_ids,
device_ids=None):
"""Called when a security group is updated."""
raise NotImplementedError()
class NoopFirewallDriver(FirewallDriver):
"""Noop Firewall Driver.
@ -152,3 +157,7 @@ class NoopFirewallDriver(FirewallDriver):
def update_security_group_rules(self, sg_id, rules):
pass
def security_group_updated(self, action_type, sec_group_ids,
device_ids=None):
pass


@ -538,6 +538,12 @@ class L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
LOG.debug('Processing :%r', routers)
for r in routers:
ns_manager.keep_router(r['id'])
if r.get('distributed'):
# need to keep fip namespaces as well
ext_net_id = (r['external_gateway_info'] or {}).get(
'network_id')
if ext_net_id:
ns_manager.keep_ext_net(ext_net_id)
update = queue.RouterUpdate(r['id'],
queue.PRIORITY_SYNC_ROUTERS_TASK,
router=r,

neutron/agent/l3/dvr_local_router.py Executable file → Normal file

@ -17,6 +17,7 @@ import netaddr
from oslo_log import log as logging
from oslo_utils import excutils
import six
from neutron.agent.l3 import dvr_fip_ns
from neutron.agent.l3 import dvr_router_base
@ -206,6 +207,8 @@ class DvrLocalRouter(dvr_router_base.DvrRouterBase):
"""
net = netaddr.IPNetwork(ip_cidr)
if net.version == 6:
if isinstance(ip_cidr, six.text_type):
ip_cidr = ip_cidr.encode() # Needed for Python 3.x
# the crc32 & 0xffffffff is for Python 2.6 and 3.0 compatibility
snat_idx = binascii.crc32(ip_cidr) & 0xffffffff
# xor-fold the hash to reserve upper range to extend smaller values
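The IPv6 branch above encodes the CIDR string before hashing because `binascii.crc32` requires bytes on Python 3, then masks the result for Python 2/3 parity. A minimal standalone sketch of just that step (the function name is illustrative, not Neutron's):

```python
import binascii

def snat_hash(ip_cidr):
    """Hash a CIDR string to an unsigned 32-bit value.

    crc32 returns a signed int on Python 2; masking with 0xffffffff
    normalizes the result to the same unsigned value on both versions.
    """
    if isinstance(ip_cidr, str):
        ip_cidr = ip_cidr.encode()  # crc32 needs bytes on Python 3
    return binascii.crc32(ip_cidr) & 0xffffffff

idx = snat_hash('2001:db8::/64')
```

The result is deterministic and independent of whether the caller passes text or bytes.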


@ -0,0 +1,104 @@
# Copyright 2015 IBM Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
class ItemAllocator(object):
"""Manages allocation of items from a pool
Some of the allocations such as link local addresses used for routing
inside the fip namespaces need to persist across agent restarts to maintain
consistency. Persisting such allocations in the neutron database is
unnecessary and would degrade performance. ItemAllocator uses the local
file system to track allocations made for objects of a given class.
The persistent datastore is a file. The records are one per line of
the format: key<delimiter>value. For example, if the delimiter is a ','
(the default value) then the records will be: key,value (one per line)
"""
def __init__(self, state_file, ItemClass, item_pool, delimiter=','):
"""Read the file with previous allocations recorded.
See the note in the allocate method for more detail.
"""
self.ItemClass = ItemClass
self.state_file = state_file
self.allocations = {}
self.remembered = {}
self.pool = item_pool
for line in self._read():
key, saved_value = line.strip().split(delimiter)
self.remembered[key] = self.ItemClass(saved_value)
self.pool.difference_update(self.remembered.values())
def allocate(self, key):
"""Try to allocate an item of ItemClass type.
I expect this to work in all cases because I expect the pool size to be
large enough for any situation. Nonetheless, there is some defensive
programming in here.
Since the allocations are persisted, there is the chance to leak
allocations which should have been released but were not. This leak
could eventually exhaust the pool.
So, if a new allocation is needed, the code first checks to see if
there are any remembered allocations for the key. If not, it checks
the free pool. If the free pool is empty then it dumps the remembered
allocations to free the pool. This final desperate step will not
happen often in practice.
"""
if key in self.remembered:
self.allocations[key] = self.remembered.pop(key)
return self.allocations[key]
if not self.pool:
# Desperate times. Try to get more in the pool.
self.pool.update(self.remembered.values())
self.remembered.clear()
if not self.pool:
# More than 256 routers on a compute node!
raise RuntimeError("Cannot allocate item of type:"
" %s from pool using file %s"
% (self.ItemClass, self.state_file))
self.allocations[key] = self.pool.pop()
self._write_allocations()
return self.allocations[key]
def release(self, key):
self.pool.add(self.allocations.pop(key))
self._write_allocations()
def _write_allocations(self):
current = ["%s,%s\n" % (k, v) for k, v in self.allocations.items()]
remembered = ["%s,%s\n" % (k, v) for k, v in self.remembered.items()]
current.extend(remembered)
self._write(current)
def _write(self, lines):
with open(self.state_file, "w") as f:
f.writelines(lines)
def _read(self):
if not os.path.exists(self.state_file):
return []
with open(self.state_file) as f:
return f.readlines()
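The persistent datastore described in the docstring is just one `key<delimiter>value` record per line. A minimal sketch of that round-trip (file name and helper names are illustrative):

```python
import os
import tempfile

def write_allocations(path, allocations, delimiter=','):
    # One record per line: key<delimiter>value
    with open(path, 'w') as f:
        f.writelines('%s%s%s\n' % (k, delimiter, v)
                     for k, v in allocations.items())

def read_allocations(path, delimiter=','):
    # Missing state file means no prior allocations to remember
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return dict(line.strip().split(delimiter) for line in f)

state = os.path.join(tempfile.mkdtemp(), 'fip-linklocal-networks')
write_allocations(state, {'router-1': '169.254.31.28/31'})
restored = read_allocations(state)
```

This is why restarts survive without touching the Neutron database: the agent re-reads the file into `self.remembered` and subtracts those values from the free pool.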


@ -13,7 +13,8 @@
# under the License.
import netaddr
import os
from neutron.agent.l3.item_allocator import ItemAllocator
class LinkLocalAddressPair(netaddr.IPNetwork):
@ -26,7 +27,7 @@ class LinkLocalAddressPair(netaddr.IPNetwork):
netaddr.IPNetwork("%s/%s" % (self.broadcast, self.prefixlen)))
class LinkLocalAllocator(object):
class LinkLocalAllocator(ItemAllocator):
"""Manages allocation of link local IP addresses.
These link local addresses are used for routing inside the fip namespaces.
@ -37,73 +38,13 @@ class LinkLocalAllocator(object):
Persisting these in the database is unnecessary and would degrade
performance.
"""
def __init__(self, state_file, subnet):
"""Read the file with previous allocations recorded.
See the note in the allocate method for more detail.
def __init__(self, data_store_path, subnet):
"""Create the necessary pool and item allocator
using ',' as the delimiter and LinkLocalAddressPair as the
item class
"""
self.state_file = state_file
subnet = netaddr.IPNetwork(subnet)
self.allocations = {}
self.remembered = {}
for line in self._read():
key, cidr = line.strip().split(',')
self.remembered[key] = LinkLocalAddressPair(cidr)
self.pool = set(LinkLocalAddressPair(s) for s in subnet.subnet(31))
self.pool.difference_update(self.remembered.values())
def allocate(self, key):
"""Try to allocate a link local address pair.
I expect this to work in all cases because I expect the pool size to be
large enough for any situation. Nonetheless, there is some defensive
programming in here.
Since the allocations are persisted, there is the chance to leak
allocations which should have been released but were not. This leak
could eventually exhaust the pool.
So, if a new allocation is needed, the code first checks to see if
there are any remembered allocations for the key. If not, it checks
the free pool. If the free pool is empty then it dumps the remembered
allocations to free the pool. This final desperate step will not
happen often in practice.
"""
if key in self.remembered:
self.allocations[key] = self.remembered.pop(key)
return self.allocations[key]
if not self.pool:
# Desperate times. Try to get more in the pool.
self.pool.update(self.remembered.values())
self.remembered.clear()
if not self.pool:
# More than 256 routers on a compute node!
raise RuntimeError(_("Cannot allocate link local address"))
self.allocations[key] = self.pool.pop()
self._write_allocations()
return self.allocations[key]
def release(self, key):
self.pool.add(self.allocations.pop(key))
self._write_allocations()
def _write_allocations(self):
current = ["%s,%s\n" % (k, v) for k, v in self.allocations.items()]
remembered = ["%s,%s\n" % (k, v) for k, v in self.remembered.items()]
current.extend(remembered)
self._write(current)
def _write(self, lines):
with open(self.state_file, "w") as f:
f.writelines(lines)
def _read(self):
if not os.path.exists(self.state_file):
return []
with open(self.state_file) as f:
return f.readlines()
pool = set(LinkLocalAddressPair(s) for s in subnet.subnet(31))
super(LinkLocalAllocator, self).__init__(data_store_path,
LinkLocalAddressPair,
pool)
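The pool above is the link-local subnet carved into /31 pairs, one pair per router. Neutron builds it with netaddr; the stdlib `ipaddress` module shows the same idea (the /28 range here is illustrative, smaller than the agent's default):

```python
import ipaddress

# Carve a link-local range into /31 point-to-point pairs.
subnet = ipaddress.ip_network('169.254.192.0/28')
pool = list(subnet.subnets(new_prefix=31))

pair = pool[0]
addresses = list(pair)  # the two usable addresses of a /31 pair
```

Each allocated pair gives the two endpoint addresses used for routing between a router namespace and the shared fip namespace.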


@ -95,6 +95,9 @@ class NamespaceManager(object):
def keep_router(self, router_id):
self._ids_to_keep.add(router_id)
def keep_ext_net(self, ext_net_id):
self._ids_to_keep.add(ext_net_id)
def get_prefix_and_id(self, ns_name):
"""Get the prefix and id from the namespace name.


@ -36,7 +36,7 @@ from neutron.common import exceptions
from neutron.common import ipv6_utils
from neutron.common import utils as commonutils
from neutron.extensions import extra_dhcp_opt as edo_ext
from neutron.i18n import _LI, _LW
from neutron.i18n import _LI, _LW, _LE
LOG = logging.getLogger(__name__)
@ -379,6 +379,20 @@ class Dnsmasq(DhcpLocalProcess):
if self.conf.dhcp_broadcast_reply:
cmd.append('--dhcp-broadcast')
if self.conf.dnsmasq_base_log_dir:
try:
if not os.path.exists(self.conf.dnsmasq_base_log_dir):
os.makedirs(self.conf.dnsmasq_base_log_dir)
log_filename = os.path.join(
self.conf.dnsmasq_base_log_dir,
self.network.id, 'dhcp_dns_log')
cmd.append('--log-queries')
cmd.append('--log-dhcp')
cmd.append('--log-facility=%s' % log_filename)
except OSError:
LOG.error(_LE('Error while creating dnsmasq base log dir: %s'),
self.conf.dnsmasq_base_log_dir)
return cmd
def spawn_process(self):
@ -408,6 +422,11 @@ class Dnsmasq(DhcpLocalProcess):
def _release_lease(self, mac_address, ip, client_id):
"""Release a DHCP lease."""
if netaddr.IPAddress(ip).version == constants.IP_VERSION_6:
# Note(SridharG) dhcp_release is only supported for IPv4
# addresses. For more details, please refer to the man page.
return
cmd = ['dhcp_release', self.interface_name, ip, mac_address]
if client_id:
cmd.append(client_id)


@ -0,0 +1,89 @@
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import netaddr
from oslo_log import log as logging
from neutron.agent.linux import utils as linux_utils
from neutron.i18n import _LE
LOG = logging.getLogger(__name__)
class IpConntrackManager(object):
"""Smart wrapper for ip conntrack."""
def __init__(self, execute=None, namespace=None):
self.execute = execute or linux_utils.execute
self.namespace = namespace
@staticmethod
def _generate_conntrack_cmd_by_rule(rule, namespace):
ethertype = rule.get('ethertype')
protocol = rule.get('protocol')
direction = rule.get('direction')
cmd = ['conntrack', '-D']
if protocol:
cmd.extend(['-p', str(protocol)])
cmd.extend(['-f', str(ethertype).lower()])
cmd.append('-d' if direction == 'ingress' else '-s')
cmd_ns = []
if namespace:
cmd_ns.extend(['ip', 'netns', 'exec', namespace])
cmd_ns.extend(cmd)
return cmd_ns
def _get_conntrack_cmds(self, device_info_list, rule, remote_ip=None):
conntrack_cmds = []
cmd = self._generate_conntrack_cmd_by_rule(rule, self.namespace)
ethertype = rule.get('ethertype')
for device_info in device_info_list:
zone_id = device_info.get('zone_id')
if not zone_id:
continue
ips = device_info.get('fixed_ips', [])
for ip in ips:
net = netaddr.IPNetwork(ip)
if str(net.version) not in ethertype:
continue
ip_cmd = [str(net.ip), '-w', zone_id]
if remote_ip and str(
netaddr.IPNetwork(remote_ip).version) in ethertype:
ip_cmd.extend(['-s', str(remote_ip)])
conntrack_cmds.append(cmd + ip_cmd)
return conntrack_cmds
def _delete_conntrack_state(self, device_info_list, rule, remote_ip=None):
conntrack_cmds = self._get_conntrack_cmds(device_info_list,
rule, remote_ip)
for cmd in conntrack_cmds:
try:
self.execute(cmd, run_as_root=True,
check_exit_code=True,
extra_ok_codes=[1])
except RuntimeError:
LOG.exception(
_LE("Failed to execute conntrack command %s"), str(cmd))
def delete_conntrack_state_by_rule(self, device_info_list, rule):
self._delete_conntrack_state(device_info_list, rule)
def delete_conntrack_state_by_remote_ips(self, device_info_list,
ethertype, remote_ips):
rule = {'ethertype': str(ethertype).lower(), 'direction': 'ingress'}
if remote_ips:
for remote_ip in remote_ips:
self._delete_conntrack_state(
device_info_list, rule, remote_ip)
else:
self._delete_conntrack_state(device_info_list, rule)
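A standalone sketch of how `_generate_conntrack_cmd_by_rule` assembles the `conntrack -D` invocation (reimplemented here for illustration; argument order follows the code above):

```python
def conntrack_cmd(rule, namespace=None):
    """Build a conntrack delete command for a security group rule."""
    cmd = ['conntrack', '-D']
    protocol = rule.get('protocol')
    if protocol:
        cmd.extend(['-p', str(protocol)])
    # -f selects the address family: 'ipv4' or 'ipv6'
    cmd.extend(['-f', str(rule.get('ethertype')).lower()])
    # Ingress state is matched on destination, egress on source.
    cmd.append('-d' if rule.get('direction') == 'ingress' else '-s')
    if namespace:
        cmd = ['ip', 'netns', 'exec', namespace] + cmd
    return cmd

cmd = conntrack_cmd({'ethertype': 'IPv4', 'protocol': 'tcp',
                     'direction': 'ingress'}, namespace='qrouter-x')
```

The caller then appends the device IP and `-w <zone_id>` per fixed IP, so stale flows are removed only in the port's conntrack zone.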


@ -723,6 +723,14 @@ class IpNetnsCommand(IpCommandBase):
return False
def vxlan_in_use(segmentation_id, namespace=None):
"""Return True if VXLAN VNID is in use by an interface, else False."""
ip_wrapper = IPWrapper(namespace=namespace)
interfaces = ip_wrapper.netns.execute(["ip", "-d", "link", "list"],
check_exit_code=True)
return 'vxlan id %s ' % segmentation_id in interfaces
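The check in `vxlan_in_use` is a plain substring match over the `ip -d link list` output. A sketch with the command execution factored out (the sample output below is abridged and illustrative):

```python
def vxlan_in_use(segmentation_id, link_output):
    # 'ip -d link list' prints 'vxlan id <vni> ...' for each VXLAN device.
    # The trailing space prevents VNID 10 from matching VNID 100 by prefix.
    return 'vxlan id %s ' % segmentation_id in link_output

sample = ('7: vxlan-1001: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450\n'
          '    vxlan id 1001 group 239.1.1.1 dev eth0 port 32768 61000\n')
```

The trailing space in the needle is the important detail: without it, a query for VNID 100 would match the line for VNID 1001.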
def device_exists(device_name, namespace=None):
"""Return True if the device exists in the namespace."""
try:


@ -20,6 +20,7 @@ from oslo_log import log as logging
import six
from neutron.agent import firewall
from neutron.agent.linux import ip_conntrack
from neutron.agent.linux import ipset_manager
from neutron.agent.linux import iptables_comments as ic
from neutron.agent.linux import iptables_manager
@ -56,6 +57,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
# TODO(majopela, shihanzhang): refactor out ipset to a separate
# driver composed over this one
self.ipset = ipset_manager.IpsetManager(namespace=namespace)
self.ipconntrack = ip_conntrack.IpConntrackManager(namespace=namespace)
# list of port which has security group
self.filtered_ports = {}
self.unfiltered_ports = {}
@ -72,6 +74,9 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
self.pre_sg_members = None
self.enable_ipset = cfg.CONF.SECURITYGROUP.enable_ipset
self._enabled_netfilter_for_bridges = False
self.updated_rule_sg_ids = set()
self.updated_sg_members = set()
self.devices_with_udpated_sg_members = collections.defaultdict(list)
def _enable_netfilter_for_bridges(self):
# we only need to set these values once, but it has to be when
@ -102,6 +107,22 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
def ports(self):
return dict(self.filtered_ports, **self.unfiltered_ports)
def _update_remote_security_group_members(self, sec_group_ids):
for sg_id in sec_group_ids:
for device in self.filtered_ports.values():
if sg_id in device.get('security_group_source_groups', []):
self.devices_with_udpated_sg_members[sg_id].append(device)
def security_group_updated(self, action_type, sec_group_ids,
device_ids=None):
if action_type == 'sg_rule':
self.updated_rule_sg_ids.update(sec_group_ids)
elif action_type == 'sg_member':
if device_ids:
self.updated_sg_members.update(device_ids)
else:
self._update_remote_security_group_members(sec_group_ids)
def update_security_group_rules(self, sg_id, sg_rules):
LOG.debug("Update rules of security group (%s)", sg_id)
self.sg_rules[sg_id] = sg_rules
@ -688,6 +709,79 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
if not sg_has_members:
del self.sg_members[sg_id]
def _find_deleted_sg_rules(self, sg_id):
del_rules = list()
for pre_rule in self.pre_sg_rules.get(sg_id, []):
if pre_rule not in self.sg_rules.get(sg_id, []):
del_rules.append(pre_rule)
return del_rules
def _find_devices_on_security_group(self, sg_id):
device_list = list()
for device in self.filtered_ports.values():
if sg_id in device.get('security_groups', []):
device_list.append(device)
return device_list
def _clean_deleted_sg_rule_conntrack_entries(self):
deleted_sg_ids = set()
for sg_id in self.updated_rule_sg_ids:
del_rules = self._find_deleted_sg_rules(sg_id)
if not del_rules:
continue
device_list = self._find_devices_on_security_group(sg_id)
for rule in del_rules:
self.ipconntrack.delete_conntrack_state_by_rule(
device_list, rule)
deleted_sg_ids.add(sg_id)
for id in deleted_sg_ids:
self.updated_rule_sg_ids.remove(id)
def _clean_updated_sg_member_conntrack_entries(self):
updated_device_ids = set()
for device in self.updated_sg_members:
sec_group_change = False
device_info = self.filtered_ports.get(device)
pre_device_info = self._pre_defer_filtered_ports.get(device)
if not (device_info or pre_device_info):
continue
for sg_id in pre_device_info.get('security_groups', []):
if sg_id not in device_info.get('security_groups', []):
sec_group_change = True
break
if not sec_group_change:
continue
for ethertype in [constants.IPv4, constants.IPv6]:
self.ipconntrack.delete_conntrack_state_by_remote_ips(
[device_info], ethertype, set())
updated_device_ids.add(device)
for id in updated_device_ids:
self.updated_sg_members.remove(id)
def _clean_deleted_remote_sg_members_conntrack_entries(self):
deleted_sg_ids = set()
for sg_id, devices in self.devices_with_udpated_sg_members.items():
for ethertype in [constants.IPv4, constants.IPv6]:
pre_ips = self._get_sg_members(
self.pre_sg_members, sg_id, ethertype)
cur_ips = self._get_sg_members(
self.sg_members, sg_id, ethertype)
ips = (pre_ips - cur_ips)
if devices and ips:
self.ipconntrack.delete_conntrack_state_by_remote_ips(
devices, ethertype, ips)
deleted_sg_ids.add(sg_id)
for id in deleted_sg_ids:
self.devices_with_udpated_sg_members.pop(id, None)
def _remove_conntrack_entries_from_sg_updates(self):
self._clean_deleted_sg_rule_conntrack_entries()
self._clean_updated_sg_member_conntrack_entries()
self._clean_deleted_remote_sg_members_conntrack_entries()
def _get_sg_members(self, sg_info, sg_id, ethertype):
return set(sg_info.get(sg_id, {}).get(ethertype, []))
def filter_defer_apply_off(self):
if self._defer_apply:
self._defer_apply = False
@ -696,6 +790,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
self._setup_chains_apply(self.filtered_ports,
self.unfiltered_ports)
self.iptables.defer_apply_off()
self._remove_conntrack_entries_from_sg_updates()
self._remove_unused_security_group_info()
self._pre_defer_filtered_ports = None
self._pre_defer_unfiltered_ports = None


@ -80,7 +80,7 @@ class PluginReportStateAPI(object):
agent_state['uuid'] = uuidutils.generate_uuid()
kwargs = {
'agent_state': {'agent_state': agent_state},
'time': datetime.utcnow().isoformat(),
'time': datetime.utcnow().strftime(constants.ISO8601_TIME_FORMAT),
}
method = cctxt.call if use_call else cctxt.cast
return method(context, 'report_state', **kwargs)
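The switch from `isoformat()` to an explicit format string gives the timestamp a stable width: `isoformat()` drops the fractional part when microseconds are zero, which a fixed-format parser on the receiving side may reject. Assuming the constant's value is `'%Y-%m-%dT%H:%M:%S.%f'` (not shown in this hunk):

```python
from datetime import datetime

ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%f'  # assumed value of the constant

t = datetime(2015, 8, 11, 10, 39, 7)  # microsecond == 0
variable = t.isoformat()                # omits fractional seconds
fixed = t.strftime(ISO8601_TIME_FORMAT)  # always six fractional digits
```

With `strftime` every report_state message carries the same timestamp shape regardless of the microsecond value.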


@ -198,22 +198,25 @@ class SecurityGroupAgentRpc(object):
"rule updated %r"), security_groups)
self._security_group_updated(
security_groups,
'security_groups')
'security_groups',
'sg_rule')
def security_groups_member_updated(self, security_groups):
LOG.info(_LI("Security group "
"member updated %r"), security_groups)
self._security_group_updated(
security_groups,
'security_group_source_groups')
'security_group_source_groups',
'sg_member')
def _security_group_updated(self, security_groups, attribute):
def _security_group_updated(self, security_groups, attribute, action_type):
devices = []
sec_grp_set = set(security_groups)
for device in self.firewall.ports.values():
if sec_grp_set & set(device.get(attribute, [])):
devices.append(device['device'])
if devices:
self.firewall.security_group_updated(action_type, sec_grp_set)
if self.defer_refresh_firewall:
LOG.debug("Adding %s devices to the list of devices "
"for which firewall needs to be refreshed",
@ -307,6 +310,8 @@ class SecurityGroupAgentRpc(object):
LOG.debug("Refreshing firewall for all filtered devices")
self.refresh_firewall()
else:
self.firewall.security_group_updated('sg_member', [],
updated_devices)
# If a device is both in new and updated devices
# avoid reprocessing it
updated_devices = ((updated_devices | devices_to_refilter) -


@ -26,6 +26,7 @@ from neutron.api.v2 import attributes
from neutron.common import constants
from neutron.common import exceptions as n_exc
from neutron.common import utils
from neutron.db import api as db_api
from neutron.extensions import portbindings
from neutron.i18n import _LW
from neutron import manager
@ -157,6 +158,7 @@ class DhcpRpcCallback(object):
network['ports'] = plugin.get_ports(context, filters=filters)
return network
@db_api.retry_db_errors
def release_dhcp_port(self, context, **kwargs):
"""Release the port currently being used by a DHCP agent."""
host = kwargs.get('host')
@ -169,6 +171,7 @@ class DhcpRpcCallback(object):
plugin = manager.NeutronManager.get_plugin()
plugin.delete_ports_by_device_id(context, device_id, network_id)
@db_api.retry_db_errors
def release_port_fixed_ip(self, context, **kwargs):
"""Release the fixed_ip associated the subnet on a port."""
host = kwargs.get('host')
@ -203,6 +206,7 @@ class DhcpRpcCallback(object):
LOG.warning(_LW('Updating lease expiration is now deprecated. Issued '
'from host %s.'), host)
@db_api.retry_db_errors
@resource_registry.mark_resources_dirty
def create_dhcp_port(self, context, **kwargs):
"""Create and return dhcp port information.
@ -224,6 +228,7 @@ class DhcpRpcCallback(object):
plugin = manager.NeutronManager.get_plugin()
return self._port_action(plugin, context, port, 'create_port')
@db_api.retry_db_errors
def update_dhcp_port(self, context, **kwargs):
"""Update the dhcp port."""
host = kwargs.get('host')
@ -233,5 +238,6 @@ class DhcpRpcCallback(object):
'from %(host)s.',
{'port': port,
'host': host})
port['port'][portbindings.HOST_ID] = host
plugin = manager.NeutronManager.get_plugin()
return self._port_action(plugin, context, port, 'update_port')


@ -23,6 +23,7 @@ from neutron.common import constants
from neutron.common import exceptions
from neutron.common import utils
from neutron import context as neutron_context
from neutron.db import api as db_api
from neutron.extensions import l3
from neutron.extensions import portbindings
from neutron.i18n import _LE
@ -43,7 +44,8 @@ class L3RpcCallback(object):
# 1.4 Added L3 HA update_router_state. This method was later removed,
# since it was unused. The RPC version was not changed
# 1.5 Added update_ha_routers_states
target = oslo_messaging.Target(version='1.5')
# 1.6 Added process_prefix_update to support IPv6 Prefix Delegation
target = oslo_messaging.Target(version='1.6')
@property
def plugin(self):
@ -58,6 +60,7 @@ class L3RpcCallback(object):
plugin_constants.L3_ROUTER_NAT]
return self._l3plugin
@db_api.retry_db_errors
def sync_routers(self, context, **kwargs):
"""Sync routers according to filters to a specific agent.
@ -104,33 +107,70 @@ class L3RpcCallback(object):
router.get('gw_port_host'),
p, router['id'])
else:
self._ensure_host_set_on_port(context, host,
router.get('gw_port'),
router['id'])
self._ensure_host_set_on_port(
context, host,
router.get('gw_port'),
router['id'],
ha_router_port=router.get('ha'))
for interface in router.get(constants.INTERFACE_KEY, []):
self._ensure_host_set_on_port(context, host,
interface, router['id'])
self._ensure_host_set_on_port(
context,
host,
interface,
router['id'],
ha_router_port=router.get('ha'))
interface = router.get(constants.HA_INTERFACE_KEY)
if interface:
self._ensure_host_set_on_port(context, host, interface,
router['id'])
def _ensure_host_set_on_port(self, context, host, port, router_id=None):
def _ensure_host_set_on_port(self, context, host, port, router_id=None,
ha_router_port=False):
if (port and host is not None and
(port.get('device_owner') !=
constants.DEVICE_OWNER_DVR_INTERFACE and
port.get(portbindings.HOST_ID) != host or
port.get(portbindings.VIF_TYPE) ==
portbindings.VIF_TYPE_BINDING_FAILED)):
# All ports, including ports created for SNAT'ing for
# DVR are handled here
try:
self.plugin.update_port(context, port['id'],
{'port': {portbindings.HOST_ID: host}})
except exceptions.PortNotFound:
LOG.debug("Port %(port)s not found while updating "
"agent binding for router %(router)s.",
{"port": port['id'], "router": router_id})
# Ports owned by non-HA routers are bound again if they're
# already bound but the router moved to another host.
if not ha_router_port:
# All ports, including ports created for SNAT'ing for
# DVR are handled here
try:
self.plugin.update_port(
context,
port['id'],
{'port': {portbindings.HOST_ID: host}})
except exceptions.PortNotFound:
LOG.debug("Port %(port)s not found while updating "
"agent binding for router %(router)s.",
{"port": port['id'], "router": router_id})
# Ports owned by HA routers should only be bound once, if
# they are unbound. These ports are moved when an agent reports
# that one of its routers moved to the active state.
else:
if not port.get(portbindings.HOST_ID):
active_host = (
self.l3plugin.get_active_host_for_ha_router(
context, router_id))
if active_host:
host = active_host
# If there is currently no active router instance (for
# example, it's a new router), the host that requested
# the routers (essentially a random host) will do. The
# port binding will be corrected when an active instance
# is elected.
try:
self.plugin.update_port(
context,
port['id'],
{'port': {portbindings.HOST_ID: host}})
except exceptions.PortNotFound:
LOG.debug("Port %(port)s not found while updating "
"agent binding for router %(router)s.",
{"port": port['id'], "router": router_id})
elif (port and
port.get('device_owner') ==
constants.DEVICE_OWNER_DVR_INTERFACE):
@ -196,6 +236,7 @@ class L3RpcCallback(object):
filters = {'fixed_ips': {'subnet_id': [subnet_id]}}
return self.plugin.get_ports(context, filters=filters)
@db_api.retry_db_errors
def get_agent_gateway_port(self, context, **kwargs):
"""Get Agent Gateway port for FIP.
@ -224,3 +265,10 @@ class L3RpcCallback(object):
LOG.debug('Updating HA routers states on host %s: %s', host, states)
self.l3plugin.update_routers_states(context, states, host)
def process_prefix_update(self, context, **kwargs):
subnets = kwargs.get('subnets')
for subnet_id, prefix in subnets.items():
self.plugin.update_subnet(context, subnet_id,
{'subnet': {'cidr': prefix}})


@ -367,6 +367,16 @@ def _validate_regex_or_none(data, valid_values=None):
return _validate_regex(data, valid_values)
def _validate_subnetpool_id(data, valid_values=None):
if data != constants.IPV6_PD_POOL_ID:
return _validate_uuid_or_none(data, valid_values)
def _validate_subnetpool_id_or_none(data, valid_values=None):
if data is not None:
return _validate_subnetpool_id(data, valid_values)
def _validate_uuid(data, valid_values=None):
if not uuidutils.is_uuid_like(data):
msg = _("'%s' is not a valid UUID") % data
@ -613,6 +623,8 @@ validators = {'type:dict': _validate_dict,
'type:subnet': _validate_subnet,
'type:subnet_list': _validate_subnet_list,
'type:subnet_or_none': _validate_subnet_or_none,
'type:subnetpool_id': _validate_subnetpool_id,
'type:subnetpool_id_or_none': _validate_subnetpool_id_or_none,
'type:uuid': _validate_uuid,
'type:uuid_or_none': _validate_uuid_or_none,
'type:uuid_list': _validate_uuid_list,
@ -743,7 +755,7 @@ RESOURCE_ATTRIBUTE_MAP = {
'allow_put': False,
'default': ATTR_NOT_SPECIFIED,
'required_by_policy': False,
'validate': {'type:uuid_or_none': None},
'validate': {'type:subnetpool_id_or_none': None},
'is_visible': True},
'prefixlen': {'allow_post': True,
'allow_put': False,

View File

@ -23,7 +23,10 @@ class CallbackFailure(Exception):
self.errors = errors
def __str__(self):
return ','.join(str(error) for error in self.errors)
if isinstance(self.errors, list):
return ','.join(str(error) for error in self.errors)
else:
return str(self.errors)
class NotificationError(object):

View File

@ -36,10 +36,12 @@ from neutron.i18n import _LE
LOG = logging.getLogger(__name__)
NS_MANGLING_PATTERN = ('(%s|%s|%s|%s)' % (dhcp.NS_PREFIX,
LB_NS_PREFIX = 'qlbaas-'
NS_MANGLING_PATTERN = ('(%s|%s|%s|%s|%s)' % (dhcp.NS_PREFIX,
l3_agent.NS_PREFIX,
dvr.SNAT_NS_PREFIX,
dvr_fip_ns.FIP_NS_PREFIX) +
dvr_fip_ns.FIP_NS_PREFIX,
LB_NS_PREFIX) +
attributes.UUID_PATTERN)
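The pattern above concatenates the known namespace prefixes with a UUID regex so that namespace cleanup only touches Neutron-managed namespaces. A standalone sketch of the same idea, with the prefix strings and UUID pattern assumed from the constants referenced above:

```python
import re

# Assumed values for the prefixes and UUID pattern referenced above.
NS_PREFIXES = ('qdhcp-', 'qrouter-', 'snat-', 'fip-', 'qlbaas-')
UUID_PATTERN = '-'.join('[0-9a-fA-F]{%d}' % n for n in (8, 4, 4, 4, 12))

NS_MANGLING_PATTERN = ('(%s)' % '|'.join(NS_PREFIXES)) + UUID_PATTERN


def is_managed_namespace(name):
    """Return True if the name matches a Neutron-managed namespace."""
    return re.match(NS_MANGLING_PATTERN, name) is not None
```

Names that carry a known prefix but no valid UUID suffix are rejected, which keeps the cleanup tool away from user-created namespaces.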

View File

@ -127,13 +127,17 @@ def arp_header_match_supported():
def vf_management_supported():
required_caps = (
ip_link_support.IpLinkConstants.IP_LINK_CAPABILITY_STATE,
ip_link_support.IpLinkConstants.IP_LINK_CAPABILITY_SPOOFCHK)
try:
vf_section = ip_link_support.IpLinkSupport.get_vf_mgmt_section()
if not ip_link_support.IpLinkSupport.vf_mgmt_capability_supported(
vf_section,
ip_link_support.IpLinkConstants.IP_LINK_CAPABILITY_STATE):
LOG.debug("ip link command does not support vf capability")
return False
for cap in required_caps:
if not ip_link_support.IpLinkSupport.vf_mgmt_capability_supported(
vf_section, cap):
LOG.debug("ip link command does not support "
"vf capability '%(cap)s'", {'cap': cap})
return False
except ip_link_support.UnsupportedIpLinkCommand:
LOG.exception(_LE("Unexpected exception while checking supported "
"ip link command"))

View File

@ -67,6 +67,8 @@ HA_NETWORK_NAME = 'HA network tenant %s'
HA_SUBNET_NAME = 'HA subnet tenant %s'
HA_PORT_NAME = 'HA port tenant %s'
MINIMUM_AGENTS_FOR_HA = 2
HA_ROUTER_STATE_ACTIVE = 'active'
HA_ROUTER_STATE_STANDBY = 'standby'
IPv4 = 'IPv4'
IPv6 = 'IPv6'
@ -141,6 +143,9 @@ IPV6_LLA_PREFIX = 'fe80::/64'
# indicate that IPv6 Prefix Delegation should be used to allocate subnet CIDRs
IPV6_PD_POOL_ID = 'prefix_delegation'
# Special provisional prefix for IPv6 Prefix Delegation
PROVISIONAL_IPV6_PD_PREFIX = '::/64'
# Linux interface max length
DEVICE_NAME_MAX_LEN = 15
@ -183,3 +188,6 @@ RPC_NAMESPACE_STATE = None
DEFAULT_NETWORK_MTU = 0
ROUTER_MARK_MASK = "0xffff"
# Time format
ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%f'
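This format string (microseconds via `%f`, no timezone designator) round-trips cleanly through `datetime`; a minimal sketch:

```python
from datetime import datetime

ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%f'

ts = datetime(2015, 8, 11, 10, 39, 7, 123456)
text = ts.strftime(ISO8601_TIME_FORMAT)   # '2015-08-11T10:39:07.123456'
parsed = datetime.strptime(text, ISO8601_TIME_FORMAT)
assert parsed == ts  # formatting and parsing are inverses here
```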

View File

@ -449,6 +449,21 @@ class SubnetAllocationError(NeutronException):
message = _("Failed to allocate subnet: %(reason)s")
class AddressScopePrefixConflict(Conflict):
message = _("Failed to associate address scope: subnetpools "
"within an address scope must have unique prefixes")
class IllegalSubnetPoolAssociationToAddressScope(BadRequest):
message = _("Illegal subnetpool association: subnetpool %(subnetpool_id)s "
"cannot be associated with address scope"
" %(address_scope_id)s")
class IllegalSubnetPoolUpdate(BadRequest):
message = _("Illegal subnetpool update: %(reason)s")
class MinPrefixSubnetAllocationError(BadRequest):
message = _("Unable to allocate subnet with prefix length %(prefixlen)s, "
"minimum allowed prefix is %(min_prefixlen)s")

View File

@ -77,3 +77,10 @@ def is_eui64_address(ip_address):
# '0xfffe' addition is used to build EUI-64 from MAC (RFC4291)
# Look for it in the middle of the EUI-64 part of address
return ip.version == 6 and not ((ip & 0xffff000000) ^ 0xfffe000000)
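The mask expression above checks for the `0xfffe` marker that RFC 4291 EUI-64 interface identifiers embed in the middle of the low 64 bits. A stdlib-only sketch of the same bit logic (the original uses netaddr; `ipaddress` stands in for it here):

```python
import ipaddress


def is_eui64_address(ip_address):
    # Same check as above: 'not (x ^ y)' on integers is
    # equivalent to 'x == y'.
    ip = ipaddress.ip_address(ip_address)
    # The 0xfffe marker sits in bits 24-39, i.e. the middle of the
    # EUI-64 part built from a MAC address per RFC 4291.
    return ip.version == 6 and (int(ip) & 0xffff000000) == 0xfffe000000
```

A SLAAC address derived from MAC `00:11:22:33:44:55` carries `ff:fe` in the middle of its interface ID and satisfies the check; an address like `fe80::1` does not.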
def is_ipv6_pd_enabled(subnet):
"""Returns True if the subnetpool_id of the given subnet is equal to
constants.IPV6_PD_POOL_ID
"""
return subnet.get('subnetpool_id') == constants.IPV6_PD_POOL_ID

View File

@ -233,6 +233,10 @@ def get_hostname():
return socket.gethostname()
def get_first_host_ip(net, ip_version):
return str(netaddr.IPAddress(net.first + 1, ip_version))
def compare_elements(a, b):
"""Check whether a and b contain the same elements.

View File

@ -51,6 +51,21 @@ class AddressScopeDbMixin(ext_address_scope.AddressScopePluginBase):
except exc.NoResultFound:
raise ext_address_scope.AddressScopeNotFound(address_scope_id=id)
def is_address_scope_owned_by_tenant(self, context, id):
"""Check whether the address scope is owned by the tenant.
AddressScopeNotFound is raised if the
- address scope id doesn't exist or
- the (unshared) address scope id is not owned by this tenant.
@return Returns true if the user is admin or the tenant owns the
address scope. Returns false if the address scope is shared
and not owned by the tenant.
"""
address_scope = self._get_address_scope(context, id)
return context.is_admin or (
address_scope.tenant_id == context.tenant_id)
def create_address_scope(self, context, address_scope):
"""Create an address scope."""
a_s = address_scope['address_scope']
@ -101,5 +116,7 @@ class AddressScopeDbMixin(ext_address_scope.AddressScopePluginBase):
def delete_address_scope(self, context, id):
with context.session.begin(subtransactions=True):
if self._get_subnetpools_by_address_scope_id(context, id):
raise ext_address_scope.AddressScopeInUse(address_scope_id=id)
address_scope = self._get_address_scope(context, id)
context.session.delete(address_scope)

View File

@ -122,7 +122,7 @@ class AgentSchedulerDbMixin(agents_db.AgentDbMixin):
self.periodic_agent_loop = loopingcall.FixedIntervalLoopingCall(
function)
# TODO(enikanorov): make interval configurable rather than computed
interval = max(cfg.CONF.agent_down_time / 2, 1)
interval = max(cfg.CONF.agent_down_time // 2, 1)
# add random initial delay to allow agents to check in after the
# neutron server first starts. random to offset multiple servers
initial_delay = random.randint(interval, interval * 2)
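The change from `/` to `//` above matters under Python 3, where `/` is always true division and yields a float; the looping-call interval would otherwise silently become e.g. `37.5` instead of `37`. A minimal illustration (the value 75 for `agent_down_time` is assumed here):

```python
agent_down_time = 75  # seconds; the option's default value is assumed

# Floor division keeps the interval an integer on both Python 2 and 3.
interval = max(agent_down_time // 2, 1)
assert interval == 37 and isinstance(interval, int)

# True division under Python 3 would have produced a float instead:
assert agent_down_time / 2 == 37.5
```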

View File

@ -116,7 +116,8 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
'prefixes': [prefix['cidr']
for prefix in subnetpool['prefixes']],
'ip_version': subnetpool['ip_version'],
'default_quota': subnetpool['default_quota']}
'default_quota': subnetpool['default_quota'],
'address_scope_id': subnetpool['address_scope_id']}
return self._fields(res, fields)
def _make_port_dict(self, port, fields=None,
@ -163,6 +164,12 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
# NOTE(tidwellr): see note in _get_all_subnets()
return context.session.query(models_v2.SubnetPool).all()
def _get_subnetpools_by_address_scope_id(self, context, address_scope_id):
# NOTE(vikram.choudhary): see note in _get_all_subnets()
subnetpool_qry = context.session.query(models_v2.SubnetPool)
return subnetpool_qry.filter_by(
address_scope_id=address_scope_id).all()
def _get_port(self, context, id):
try:
port = self._get_by_id(context, models_v2.Port, id)

View File

@ -24,6 +24,7 @@ from oslo_utils import uuidutils
from sqlalchemy import and_
from sqlalchemy import event
from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api
from neutron.api.v2 import attributes
from neutron.callbacks import events
from neutron.callbacks import exceptions
@ -32,6 +33,7 @@ from neutron.callbacks import resources
from neutron.common import constants
from neutron.common import exceptions as n_exc
from neutron.common import ipv6_utils
from neutron.common import utils
from neutron import context as ctx
from neutron.db import api as db_api
from neutron.db import db_base_plugin_common
@ -394,7 +396,7 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
# NOTE(salv-orlando): There is slight chance of a race, when
# a subnet-update and a router-interface-add operation are
# executed concurrently
if cur_subnet:
if cur_subnet and not ipv6_utils.is_ipv6_pd_enabled(s):
alloc_qry = context.session.query(models_v2.IPAllocation)
allocated = alloc_qry.filter_by(
ip_address=cur_subnet['gateway_ip'],
@ -439,6 +441,29 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
if ip_ver == 6:
self._validate_ipv6_attributes(s, cur_subnet)
def _validate_subnet_for_pd(self, subnet):
"""Validates that subnet parameters are correct for IPv6 PD"""
if (subnet.get('ip_version') != constants.IP_VERSION_6):
reason = _("Prefix Delegation can only be used with IPv6 "
"subnets.")
raise n_exc.BadRequest(resource='subnets', msg=reason)
mode_list = [constants.IPV6_SLAAC,
constants.DHCPV6_STATELESS,
attributes.ATTR_NOT_SPECIFIED]
ra_mode = subnet.get('ipv6_ra_mode')
if ra_mode not in mode_list:
reason = _("IPv6 RA Mode must be SLAAC or Stateless for "
"Prefix Delegation.")
raise n_exc.BadRequest(resource='subnets', msg=reason)
address_mode = subnet.get('ipv6_address_mode')
if address_mode not in mode_list:
reason = _("IPv6 Address Mode must be SLAAC or Stateless for "
"Prefix Delegation.")
raise n_exc.BadRequest(resource='subnets', msg=reason)
def _update_router_gw_ports(self, context, network, subnet):
l3plugin = manager.NeutronManager.get_service_plugins().get(
service_constants.L3_ROUTER_NAT)
@ -543,6 +568,17 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
subnetpool_id = self._get_subnetpool_id(s)
if subnetpool_id:
self.ipam.validate_pools_with_subnetpool(s)
if subnetpool_id == constants.IPV6_PD_POOL_ID:
if has_cidr:
# We do not currently support requesting a specific
# cidr with IPv6 prefix delegation. Set the subnetpool_id
# to None and allow the request to continue as normal.
subnetpool_id = None
self._validate_subnet(context, s)
else:
prefix = constants.PROVISIONAL_IPV6_PD_PREFIX
subnet['subnet']['cidr'] = prefix
self._validate_subnet_for_pd(s)
else:
if not has_cidr:
msg = _('A cidr must be specified in the absence of a '
@ -552,6 +588,16 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
return self._create_subnet(context, subnet, subnetpool_id)
def _update_allocation_pools(self, subnet):
"""Gets new allocation pools and formats them correctly"""
allocation_pools = self.ipam.generate_allocation_pools(
subnet['cidr'],
subnet['gateway_ip'])
return [{'start': str(netaddr.IPAddress(p.first,
subnet['ip_version'])),
'end': str(netaddr.IPAddress(p.last, subnet['ip_version']))}
for p in allocation_pools]
def update_subnet(self, context, id, subnet):
"""Update the subnet with new info.
@ -559,6 +605,7 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
dns lease or we support gratuitous DHCP offers
"""
s = subnet['subnet']
new_cidr = s.get('cidr')
db_subnet = self._get_subnet(context, id)
# Fill 'ip_version' and 'allocation_pools' fields with the current
# value since _validate_subnet() expects subnet spec has 'ip_version'
@ -567,6 +614,7 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
s['cidr'] = db_subnet.cidr
s['id'] = db_subnet.id
s['tenant_id'] = db_subnet.tenant_id
s['subnetpool_id'] = db_subnet.subnetpool_id
self._validate_subnet(context, s, cur_subnet=db_subnet)
db_pools = [netaddr.IPRange(p['first_ip'], p['last_ip'])
for p in db_subnet.allocation_pools]
@ -575,11 +623,27 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
if s.get('allocation_pools') is not None:
# Convert allocation pools to IPRange to simplify future checks
range_pools = self.ipam.pools_to_ip_range(s['allocation_pools'])
self.ipam.validate_allocation_pools(range_pools, s['cidr'])
s['allocation_pools'] = range_pools
if s.get('gateway_ip') is not None:
update_ports_needed = False
if new_cidr and ipv6_utils.is_ipv6_pd_enabled(s):
# This is an ipv6 prefix delegation-enabled subnet being given an
# updated cidr by the process_prefix_update RPC
s['cidr'] = new_cidr
update_ports_needed = True
net = netaddr.IPNetwork(s['cidr'], s['ip_version'])
# Update gateway_ip and allocation pools based on new cidr
s['gateway_ip'] = utils.get_first_host_ip(net, s['ip_version'])
s['allocation_pools'] = self._update_allocation_pools(s)
# If either gateway_ip or allocation_pools were specified
gateway_ip = s.get('gateway_ip')
if gateway_ip is not None or s.get('allocation_pools') is not None:
if gateway_ip is None:
gateway_ip = db_subnet.gateway_ip
pools = range_pools if range_pools is not None else db_pools
self.ipam.validate_gw_out_of_pools(s["gateway_ip"], pools)
self.ipam.validate_gw_out_of_pools(gateway_ip, pools)
with context.session.begin(subtransactions=True):
subnet, changes = self.ipam.update_db_subnet(context, id, s,
@ -587,6 +651,31 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
result = self._make_subnet_dict(subnet, context=context)
# Keep up with fields that changed
result.update(changes)
if update_ports_needed:
# Find ports that have not yet been updated
# with an IP address by Prefix Delegation, and update them
ports = self.get_ports(context)
routers = []
for port in ports:
fixed_ips = []
new_port = {'port': port}
for ip in port['fixed_ips']:
if ip['subnet_id'] == s['id']:
fixed_ip = {'subnet_id': s['id']}
if "router_interface" in port['device_owner']:
routers.append(port['device_id'])
fixed_ip['ip_address'] = s['gateway_ip']
fixed_ips.append(fixed_ip)
if fixed_ips:
new_port['port']['fixed_ips'] = fixed_ips
self.update_port(context, port['id'], new_port)
# Send router_update to l3_agent
if routers:
l3_rpc_notifier = l3_rpc_agent_api.L3AgentNotifyAPI()
l3_rpc_notifier.routers_updated(context, routers)
return result
def _subnet_check_ip_allocations(self, context, subnet_id):
@ -685,11 +774,63 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
subnetpool_prefix = models_v2.SubnetPoolPrefix(**prefix_args)
context.session.add(subnetpool_prefix)
def _validate_address_scope_id(self, context, address_scope_id,
subnetpool_id, sp_prefixes):
"""Validate the address scope before associating.
A subnetpool can be associated with an address scope if
- the tenant user is the owner of both the subnetpool and the
address scope
- the admin is associating the subnetpool with a shared
address scope
- there is no prefix conflict with the existing subnetpools
associated with the address scope.
"""
if not attributes.is_attr_set(address_scope_id):
return
if not self.is_address_scope_owned_by_tenant(context,
address_scope_id):
raise n_exc.IllegalSubnetPoolAssociationToAddressScope(
subnetpool_id=subnetpool_id, address_scope_id=address_scope_id)
subnetpools = self._get_subnetpools_by_address_scope_id(
context, address_scope_id)
new_set = netaddr.IPSet(sp_prefixes)
for sp in subnetpools:
if sp.id == subnetpool_id:
continue
sp_set = netaddr.IPSet([prefix['cidr'] for prefix in sp.prefixes])
if sp_set.intersection(new_set):
raise n_exc.AddressScopePrefixConflict()
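The intersection test above rejects a subnetpool whose prefixes overlap those of any other pool in the same address scope. A stdlib sketch of the same overlap check (the original uses `netaddr.IPSet`; `ipaddress.ip_network.overlaps` is used here instead):

```python
import ipaddress


def prefixes_conflict(new_prefixes, existing_prefixes):
    """Return True if any new prefix overlaps an existing one."""
    new_nets = [ipaddress.ip_network(p) for p in new_prefixes]
    old_nets = [ipaddress.ip_network(p) for p in existing_prefixes]
    # Pairwise overlap check; netaddr's IPSet intersection does the
    # equivalent work in one set operation.
    return any(n.overlaps(o) for n in new_nets for o in old_nets)
```

`10.0.0.0/16` conflicts with `10.0.1.0/24` (the latter is contained in the former), while disjoint /16s do not.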
def _check_subnetpool_update_allowed(self, context, subnetpool_id,
address_scope_id):
"""Check whether the subnetpool can be updated.
If the subnetpool is associated with a shared address scope not
owned by the tenant, the subnetpool cannot be updated.
"""
if not self.is_address_scope_owned_by_tenant(context,
address_scope_id):
msg = _("subnetpool %(subnetpool_id)s cannot be updated when"
" associated with shared address scope "
"%(address_scope_id)s") % {
'subnetpool_id': subnetpool_id,
'address_scope_id': address_scope_id}
raise n_exc.IllegalSubnetPoolUpdate(reason=msg)
def create_subnetpool(self, context, subnetpool):
"""Create a subnetpool"""
sp = subnetpool['subnetpool']
sp_reader = subnet_alloc.SubnetPoolReader(sp)
if sp_reader.address_scope_id is attributes.ATTR_NOT_SPECIFIED:
sp_reader.address_scope_id = None
self._validate_address_scope_id(context, sp_reader.address_scope_id,
id, sp_reader.prefixes)
tenant_id = self._get_tenant_id_for_create(context, sp)
with context.session.begin(subtransactions=True):
pool_args = {'tenant_id': tenant_id,
@ -701,7 +842,8 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
'min_prefixlen': sp_reader.min_prefixlen,
'max_prefixlen': sp_reader.max_prefixlen,
'shared': sp_reader.shared,
'default_quota': sp_reader.default_quota}
'default_quota': sp_reader.default_quota,
'address_scope_id': sp_reader.address_scope_id}
subnetpool = models_v2.SubnetPool(**pool_args)
context.session.add(subnetpool)
for prefix in sp_reader.prefixes:
@ -738,7 +880,7 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
for key in ['id', 'name', 'ip_version', 'min_prefixlen',
'max_prefixlen', 'default_prefixlen', 'shared',
'default_quota']:
'default_quota', 'address_scope_id']:
self._write_key(key, updated, model, new_pool)
return updated
@ -759,6 +901,12 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
updated = self._updated_subnetpool_dict(orig_sp, new_sp)
updated['tenant_id'] = orig_sp.tenant_id
reader = subnet_alloc.SubnetPoolReader(updated)
if orig_sp.address_scope_id:
self._check_subnetpool_update_allowed(context, id,
orig_sp.address_scope_id)
self._validate_address_scope_id(context, reader.address_scope_id,
id, reader.prefixes)
orig_sp.update(self._filter_non_model_columns(
reader.subnetpool,
models_v2.SubnetPool))

View File

@ -276,7 +276,7 @@ class FlavorManager(common_db_mixin.CommonDbMixin):
sp = service_profile['service_profile']
with context.session.begin(subtransactions=True):
driver_klass = self._load_dummy_driver(sp['driver'])
# 'get_service_type' must be a static method so it cant be changed
# 'get_service_type' must be a static method so it can't be changed
svc_type = DummyServiceDriver.get_service_type()
sp_db = ServiceProfile(id=uuidutils.generate_uuid(),

View File

@ -168,7 +168,8 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
context.session.add_all(new_pools)
# Call static method with self to redefine in child
# (non-pluggable backend)
self._rebuild_availability_ranges(context, [s])
if not ipv6_utils.is_ipv6_pd_enabled(s):
self._rebuild_availability_ranges(context, [s])
# Gather new pools for result
result_pools = [{'start': p[0], 'end': p[1]} for p in pools]
del s['allocation_pools']
@ -185,8 +186,6 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
context, subnet_id, s)
if "allocation_pools" in s:
self._validate_allocation_pools(s['allocation_pools'],
s['cidr'])
changes['allocation_pools'] = (
self._update_subnet_allocation_pools(context, subnet_id, s))
@ -199,7 +198,8 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
Verifies the specified CIDR does not overlap with the ones defined
for the other subnets specified for this network, or with any other
CIDR if overlapping IPs are disabled.
CIDR if overlapping IPs are disabled. Does not apply to subnets with
temporary IPv6 Prefix Delegation CIDRs (::/64).
"""
new_subnet_ipset = netaddr.IPSet([new_subnet_cidr])
# Disallow subnets with prefix length 0 as they will lead to
@ -217,7 +217,8 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
else:
subnet_list = self._get_all_subnets(context)
for subnet in subnet_list:
if (netaddr.IPSet([subnet.cidr]) & new_subnet_ipset):
if ((netaddr.IPSet([subnet.cidr]) & new_subnet_ipset) and
subnet.cidr != constants.PROVISIONAL_IPV6_PD_PREFIX):
# don't give out details of the overlapping subnet
err_msg = (_("Requested subnet with cidr: %(cidr)s for "
"network: %(network_id)s overlaps with another "
@ -242,7 +243,7 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
new_subnetpool_id != subnet.subnetpool_id):
raise n_exc.NetworkSubnetPoolAffinityError()
def _validate_allocation_pools(self, ip_pools, subnet_cidr):
def validate_allocation_pools(self, ip_pools, subnet_cidr):
"""Validate IP allocation pools.
Verify start and end address for each allocation pool are valid,
@ -330,13 +331,16 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
return subnet
raise n_exc.InvalidIpForNetwork(ip_address=fixed['ip_address'])
def generate_pools(self, cidr, gateway_ip):
return ipam_utils.generate_pools(cidr, gateway_ip)
def _prepare_allocation_pools(self, allocation_pools, cidr, gateway_ip):
"""Returns allocation pools represented as list of IPRanges"""
if not attributes.is_attr_set(allocation_pools):
return ipam_utils.generate_pools(cidr, gateway_ip)
return self.generate_pools(cidr, gateway_ip)
ip_range_pools = self.pools_to_ip_range(allocation_pools)
self._validate_allocation_pools(ip_range_pools, cidr)
self.validate_allocation_pools(ip_range_pools, cidr)
if gateway_ip:
self.validate_gw_out_of_pools(gateway_ip, ip_range_pools)
return ip_range_pools
@ -355,7 +359,8 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
return True
subnet = self._get_subnet(context, subnet_id)
return not ipv6_utils.is_auto_address_subnet(subnet)
return not (ipv6_utils.is_auto_address_subnet(subnet) and
not ipv6_utils.is_ipv6_pd_enabled(subnet))
def _get_changed_ips_for_port(self, context, original_ips,
new_ips, device_owner):

View File

@ -243,7 +243,8 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
subnet = self._get_subnet_for_fixed_ip(context, fixed, network_id)
is_auto_addr_subnet = ipv6_utils.is_auto_address_subnet(subnet)
if 'ip_address' in fixed:
if ('ip_address' in fixed and
subnet['cidr'] != constants.PROVISIONAL_IPV6_PD_PREFIX):
# Ensure that the IP's are unique
if not IpamNonPluggableBackend._check_unique_ip(
context, network_id,
@ -268,6 +269,7 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
# listed explicitly here by subnet ID) are associated
# with the port.
if (device_owner in constants.ROUTER_INTERFACE_OWNERS_SNAT or
ipv6_utils.is_ipv6_pd_enabled(subnet) or
not is_auto_addr_subnet):
fixed_ip_set.append({'subnet_id': subnet['id']})
@ -433,7 +435,7 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
def allocate_subnet(self, context, network, subnet, subnetpool_id):
subnetpool = None
if subnetpool_id:
if subnetpool_id and not subnetpool_id == constants.IPV6_PD_POOL_ID:
subnetpool = self._get_subnetpool(context, subnetpool_id)
self._validate_ip_version_with_subnetpool(subnet, subnetpool)
@ -452,7 +454,7 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
subnet,
subnetpool)
if subnetpool_id:
if subnetpool_id and not subnetpool_id == constants.IPV6_PD_POOL_ID:
driver = subnet_alloc.SubnetAllocator(subnetpool, context)
ipam_subnet = driver.allocate_subnet(subnet_request)
subnet_request = ipam_subnet.get_details()

View File

@ -137,8 +137,8 @@ class IpamPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
return allocated
def _ipam_update_allocation_pools(self, context, ipam_driver, subnet):
self._validate_allocation_pools(subnet['allocation_pools'],
subnet['cidr'])
self.validate_allocation_pools(subnet['allocation_pools'],
subnet['cidr'])
factory = ipam_driver.get_subnet_request_factory()
subnet_request = factory.get_request(context, subnet, None)

View File

@ -30,6 +30,7 @@ from neutron.callbacks import registry
from neutron.callbacks import resources
from neutron.common import constants as l3_constants
from neutron.common import exceptions as n_exc
from neutron.common import ipv6_utils
from neutron.common import rpc as n_rpc
from neutron.common import utils
from neutron.db import model_base
@ -470,6 +471,9 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
msg = (_("Router already has a port on subnet %s")
% subnet_id)
raise n_exc.BadRequest(resource='router', msg=msg)
# Ignore temporary Prefix Delegation CIDRs
if subnet_cidr == l3_constants.PROVISIONAL_IPV6_PD_PREFIX:
continue
sub_id = ip['subnet_id']
cidr = self._core_plugin._get_subnet(context.elevated(),
sub_id)['cidr']
@ -579,7 +583,8 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
fixed_ip = {'ip_address': subnet['gateway_ip'],
'subnet_id': subnet['id']}
if subnet['ip_version'] == 6:
if (subnet['ip_version'] == 6 and not
ipv6_utils.is_ipv6_pd_enabled(subnet)):
# Add new prefix to an existing ipv6 port with the same network id
# if one exists
port = self._find_ipv6_router_port_by_network(router,
@ -963,6 +968,9 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
port['fixed_ips'] = [
{'ip_address': fip['floating_ip_address']}]
if fip.get('subnet_id'):
port['fixed_ips'] = [
{'subnet_id': fip['subnet_id']}]
external_port = self._core_plugin.create_port(context.elevated(),
{'port': port})
@ -1197,7 +1205,7 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
for p in each_port_having_fixed_ips())
filters = {'network_id': [id for id in network_ids]}
fields = ['id', 'cidr', 'gateway_ip',
'network_id', 'ipv6_ra_mode']
'network_id', 'ipv6_ra_mode', 'subnetpool_id']
subnets_by_network = dict((id, []) for id in network_ids)
for subnet in self._core_plugin.get_subnets(context, filters, fields):
@ -1215,7 +1223,8 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
subnet_info = {'id': subnet['id'],
'cidr': subnet['cidr'],
'gateway_ip': subnet['gateway_ip'],
'ipv6_ra_mode': subnet['ipv6_ra_mode']}
'ipv6_ra_mode': subnet['ipv6_ra_mode'],
'subnetpool_id': subnet['subnetpool_id']}
for fixed_ip in port['fixed_ips']:
if fixed_ip['subnet_id'] == subnet['id']:
port['subnets'].append(subnet_info)

View File

@ -475,7 +475,7 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin,
ports_to_populate += interfaces
self._populate_subnets_for_ports(context, ports_to_populate)
self._process_interfaces(routers_dict, interfaces)
return routers_dict.values()
return list(routers_dict.values())
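Wrapping the result in `list()` matters on Python 3, where `dict.values()` returns a live view rather than a list; callers that index, slice, or serialize the result need a real list, and a view would also reflect later mutations of the dict. A small illustration:

```python
routers_dict = {'r1': {'id': 'r1'}, 'r2': {'id': 'r2'}}

values_view = routers_dict.values()       # live view on Python 3
routers = list(routers_dict.values())     # point-in-time snapshot

routers_dict['r3'] = {'id': 'r3'}
assert len(values_view) == 3   # the view sees the new entry
assert len(routers) == 2       # the snapshot does not

# Lists support indexing; dict views do not.
assert routers[0]['id'] in ('r1', 'r2')
```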
def _get_vm_port_hostid(self, context, port_id, port=None):
"""Return the portbinding host_id."""
@ -661,9 +661,9 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin,
router.
"""
# Check this is a valid VM port
if ("compute:" not in port_dict['device_owner'] or
not port_dict['fixed_ips']):
# Check this is a valid VM or service port
if not (n_utils.is_dvr_serviced(port_dict['device_owner']) and
port_dict['fixed_ips']):
return
ip_address = port_dict['fixed_ips'][0]['ip_address']
subnet = port_dict['fixed_ips'][0]['subnet_id']

View File

@ -29,6 +29,7 @@ from neutron.db import l3_dvr_db
from neutron.db import model_base
from neutron.db import models_v2
from neutron.extensions import l3_ext_ha_mode as l3_ha
from neutron.extensions import portbindings
from neutron.i18n import _LI
VR_ID_RANGE = set(range(1, 255))
@ -80,9 +81,11 @@ class L3HARouterAgentPortBinding(model_base.BASEV2):
ondelete='CASCADE'))
agent = orm.relationship(agents_db.Agent)
state = sa.Column(sa.Enum('active', 'standby', name='l3_ha_states'),
default='standby',
server_default='standby')
state = sa.Column(sa.Enum(constants.HA_ROUTER_STATE_ACTIVE,
constants.HA_ROUTER_STATE_STANDBY,
name='l3_ha_states'),
default=constants.HA_ROUTER_STATE_STANDBY,
server_default=constants.HA_ROUTER_STATE_STANDBY)
class L3HARouterNetwork(model_base.BASEV2):
@ -452,6 +455,20 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin):
bindings = self.get_ha_router_port_bindings(context, [router_id])
return [(binding.agent, binding.state) for binding in bindings]
def get_active_host_for_ha_router(self, context, router_id):
bindings = self.get_l3_bindings_hosting_router_with_ha_states(
context, router_id)
# TODO(amuller): In case we have two or more actives, this method
# needs to return the last agent to become active. This requires
# timestamps for state changes. Otherwise, if a host goes down
# and another takes over, we'll have two actives. In this case,
# if an interface is added to a router, its binding might be wrong
# and l2pop would not work correctly.
return next(
(agent.host for agent, state in bindings
if state == constants.HA_ROUTER_STATE_ACTIVE),
None)
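The method above leans on `next()` with a default: the generator yields the host of the first binding in the active state, and `None` comes back when no binding is active. A standalone sketch with hypothetical `(host, state)` pairs standing in for the `(agent, state)` bindings:

```python
HA_ROUTER_STATE_ACTIVE = 'active'

# Hypothetical bindings; in the code above these are (agent, state) pairs.
bindings = [('node-1', 'standby'), ('node-2', 'active'), ('node-3', 'standby')]

active_host = next(
    (host for host, state in bindings
     if state == HA_ROUTER_STATE_ACTIVE),
    None)
assert active_host == 'node-2'

# With no active binding, the default kicks in instead of StopIteration.
assert next((h for h, s in [] if s == HA_ROUTER_STATE_ACTIVE), None) is None
```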
def _process_sync_ha_data(self, context, routers, host):
routers_dict = dict((router['id'], router) for router in routers)
@ -503,3 +520,22 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin):
bindings = self.get_ha_router_port_bindings(
context, router_ids=states.keys(), host=host)
self._set_router_states(context, bindings, states)
self._update_router_port_bindings(context, states, host)
def _update_router_port_bindings(self, context, states, host):
admin_ctx = context.elevated()
device_filter = {'device_id': states.keys(),
'device_owner':
[constants.DEVICE_OWNER_ROUTER_INTF]}
ports = self._core_plugin.get_ports(admin_ctx, filters=device_filter)
active_ports = (port for port in ports
if states[port['device_id']] == constants.HA_ROUTER_STATE_ACTIVE)
for port in active_ports:
port[portbindings.HOST_ID] = host
try:
self._core_plugin.update_port(admin_ctx, port['id'],
{attributes.PORT: port})
except (orm.exc.StaleDataError, orm.exc.ObjectDeletedError):
# Take concurrently deleted interfaces into account
pass

View File

@ -25,6 +25,10 @@ LBAAS_TABLES = ['vips', 'sessionpersistences', 'pools', 'healthmonitors',
FWAAS_TABLES = ['firewall_rules', 'firewalls', 'firewall_policies']
DRIVER_TABLES = [
# Arista ML2 driver Models moved to openstack/networking-arista
'arista_provisioned_nets',
'arista_provisioned_vms',
'arista_provisioned_tenants',
# Models moved to openstack/networking-cisco
'cisco_ml2_apic_contracts',
'cisco_ml2_apic_names',

View File

@ -1,3 +1,3 @@
1c844d1677f7
45f955889773
1b4c6e320f79
2a16083502f3
kilo

View File

@ -0,0 +1,36 @@
# Copyright 2015 Huawei Technologies India Pvt. Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""address scope support in subnetpool
Revision ID: 1b4c6e320f79
Revises: 1c844d1677f7
Create Date: 2015-07-03 09:48:39.491058
"""
# revision identifiers, used by Alembic.
revision = '1b4c6e320f79'
down_revision = '1c844d1677f7'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('subnetpools',
sa.Column('address_scope_id',
sa.String(length=36),
nullable=True))

View File

@ -16,14 +16,14 @@
"""add order to dnsnameservers
Revision ID: 1c844d1677f7
Revises: 2a16083502f3
Revises: 26c371498592
Create Date: 2015-07-21 22:59:03.383850
"""
# revision identifiers, used by Alembic.
revision = '1c844d1677f7'
down_revision = '2a16083502f3'
down_revision = '26c371498592'
from alembic import op
import sqlalchemy as sa

View File

@ -0,0 +1,35 @@
# Copyright (c) 2015 Thales Services SAS
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""subnetpool hash
Revision ID: 26c371498592
Revises: 45f955889773
Create Date: 2015-06-02 21:18:19.942076
"""
# revision identifiers, used by Alembic.
revision = '26c371498592'
down_revision = '45f955889773'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column(
'subnetpools',
sa.Column('hash', sa.String(36), nullable=False, server_default=''))

View File

@ -52,7 +52,6 @@ from neutron.plugins.brocade.db import models as brocade_models # noqa
from neutron.plugins.cisco.db.l3 import l3_models # noqa
from neutron.plugins.cisco.db import n1kv_models_v2 # noqa
from neutron.plugins.cisco.db import network_models_v2 # noqa
from neutron.plugins.ml2.drivers.arista import db # noqa
from neutron.plugins.ml2.drivers.brocade.db import ( # noqa
models as ml2_brocade_models)
from neutron.plugins.ml2.drivers.cisco.nexus import ( # noqa


@ -244,6 +244,8 @@ class SubnetPool(model_base.BASEV2, HasId, HasTenant):
max_prefixlen = sa.Column(sa.Integer, nullable=False)
shared = sa.Column(sa.Boolean, nullable=False)
default_quota = sa.Column(sa.Integer, nullable=True)
hash = sa.Column(sa.String(36), nullable=False, server_default='')
address_scope_id = sa.Column(sa.String(36), nullable=True)
prefixes = orm.relationship(SubnetPoolPrefix,
backref='subnetpools',
cascade='all, delete, delete-orphan',


@ -151,7 +151,7 @@ class SecurityGroupDbMixin(ext_sg.SecurityGroupPluginBase):
if not default_sg:
self._ensure_default_security_group(context, tenant_id)
with context.session.begin(subtransactions=True):
with db_api.autonested_transaction(context.session):
security_group_db = SecurityGroup(id=s.get('id') or (
uuidutils.generate_uuid()),
description=s['description'],
@ -441,7 +441,7 @@ class SecurityGroupDbMixin(ext_sg.SecurityGroupPluginBase):
raise ext_sg.SecurityGroupInvalidIcmpValue(
field=field, attr=attr, value=rule[attr])
if (rule['port_range_min'] is None and
rule['port_range_max']):
rule['port_range_max'] is not None):
raise ext_sg.SecurityGroupMissingIcmpType(
value=rule['port_range_max'])
@ -663,9 +663,8 @@ class SecurityGroupDbMixin(ext_sg.SecurityGroupPluginBase):
'description': _('Default security group')}
}
try:
with db_api.autonested_transaction(context.session):
ret = self.create_security_group(
context, security_group, default_sg=True)
ret = self.create_security_group(
context, security_group, default_sg=True)
except exception.DBDuplicateEntry as ex:
LOG.debug("Duplicate default security group %s was "
"not created", ex.value)
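The ICMP validation hunk above replaces a bare truthiness test on `rule['port_range_max']` with an explicit `is not None` check. A minimal standalone sketch (not Neutron code; the helper name is hypothetical) of why that matters: ICMP type/code 0 is a valid value, so the truthy form silently skipped the error for a code of 0 given without a type.

```python
def missing_icmp_type(port_range_min, port_range_max):
    """Return True when an ICMP code (max) is given without a type (min).

    The buggy form `port_range_min is None and port_range_max` treats a
    port_range_max of 0 (e.g. ICMP echo reply) the same as None, so the
    invalid rule slips through; `is not None` catches it.
    """
    return port_range_min is None and port_range_max is not None

assert missing_icmp_type(None, 0) is True      # caught only with `is not None`
assert missing_icmp_type(None, None) is False  # neither field set: nothing to flag
assert missing_icmp_type(8, 0) is False        # type and code both set: valid
```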


@ -205,6 +205,11 @@ class SecurityGroupServerRpcMixin(sg_db.SecurityGroupDbMixin):
if rule_dict not in sg_info['security_groups'][security_group_id]:
sg_info['security_groups'][security_group_id].append(
rule_dict)
# Update the security groups info if they don't have any rules
sg_ids = self._select_sg_ids_for_ports(context, ports)
for (sg_id, ) in sg_ids:
if sg_id not in sg_info['security_groups']:
sg_info['security_groups'][sg_id] = []
sg_info['sg_member_ips'] = remote_security_group_info
# the provider rules do not belong to any security group, so these
@ -223,6 +228,15 @@ class SecurityGroupServerRpcMixin(sg_db.SecurityGroupDbMixin):
sg_info['sg_member_ips'][sg_id][ethertype].add(ip)
return sg_info
def _select_sg_ids_for_ports(self, context, ports):
if not ports:
return []
sg_binding_port = sg_db.SecurityGroupPortBinding.port_id
sg_binding_sgid = sg_db.SecurityGroupPortBinding.security_group_id
query = context.session.query(sg_binding_sgid)
query = query.filter(sg_binding_port.in_(ports.keys()))
return query.all()
def _select_rules_for_ports(self, context, ports):
if not ports:
return []
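The hunk above seeds `sg_info['security_groups']` with an empty rule list for every security group bound to a port, even groups with no rules. A minimal in-memory sketch (plain dicts, not the SQLAlchemy query used above) of the seeding step, so agents can distinguish "group with no rules" from "group missing entirely":

```python
# Rules gathered so far, keyed by security group id (hypothetical data).
sg_rules = {'sg-a': [{'direction': 'ingress'}]}

# Security group ids bound to the ports being queried (hypothetical data);
# in the patch these come from a SecurityGroupPortBinding query.
bound_sg_ids = ['sg-a', 'sg-b']

# Seed an empty rule list for bound groups that contributed no rules.
for sg_id in bound_sg_ids:
    if sg_id not in sg_rules:
        sg_rules[sg_id] = []

assert sg_rules['sg-b'] == []                      # present, explicitly rule-less
assert sg_rules['sg-a'] == [{'direction': 'ingress'}]  # existing rules untouched
```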


@ -23,7 +23,7 @@ import six
ADDRESS_SCOPE = 'address_scope'
ADDRESS_SCOPES = '%ss' % ADDRESS_SCOPE
ADDRESS_SCOPE_ID = 'address_scope_id'
# Attribute Map
RESOURCE_ATTRIBUTE_MAP = {
@ -50,6 +50,13 @@ RESOURCE_ATTRIBUTE_MAP = {
'is_visible': True,
'required_by_policy': True,
'enforce_policy': True},
},
attr.SUBNETPOOLS: {
ADDRESS_SCOPE_ID: {'allow_post': True,
'allow_put': True,
'default': attr.ATTR_NOT_SPECIFIED,
'validate': {'type:uuid_or_none': None},
'is_visible': True}
}
}
@ -58,9 +65,10 @@ class AddressScopeNotFound(nexception.NotFound):
message = _("Address scope %(address_scope_id)s could not be found")
class AddressScopeDeleteError(nexception.BadRequest):
message = _("Unable to delete address scope %(address_scope_id)s : "
"%(reason)s")
class AddressScopeInUse(nexception.InUse):
message = _("Unable to complete operation on "
"address scope %(address_scope_id)s. There are one or more"
" subnet pools in use on the address scope")
class AddressScopeUpdateError(nexception.BadRequest):


@ -124,6 +124,10 @@ RESOURCE_ATTRIBUTE_MAP = {
'validate': {'type:ip_address_or_none': None},
'is_visible': True, 'default': None,
'enforce_policy': True},
'subnet_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid_or_none': None},
'is_visible': False, # Use False for input only attr
'default': None},
'floating_network_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},


@ -91,7 +91,8 @@ VIF_TYPES = [VIF_TYPE_UNBOUND, VIF_TYPE_BINDING_FAILED, VIF_TYPE_OVS,
VNIC_NORMAL = 'normal'
VNIC_DIRECT = 'direct'
VNIC_MACVTAP = 'macvtap'
VNIC_TYPES = [VNIC_NORMAL, VNIC_DIRECT, VNIC_MACVTAP]
VNIC_BAREMETAL = 'baremetal'
VNIC_TYPES = [VNIC_NORMAL, VNIC_DIRECT, VNIC_MACVTAP, VNIC_BAREMETAL]
EXTENDED_ATTRIBUTES_2_0 = {
'ports': {


@ -17,6 +17,7 @@ import math
import operator
import netaddr
from oslo_db import exception as db_exc
from oslo_utils import uuidutils
from neutron.api.v2 import attributes
@ -46,10 +47,23 @@ class SubnetAllocator(driver.Pool):
subnetpool, it's required to ensure non-overlapping cidrs in the same
subnetpool.
"""
# FIXME(cbrandily): not working with Galera
(self._context.session.query(models_v2.SubnetPool.id).
filter_by(id=self._subnetpool['id']).
with_lockmode('update').first())
current_hash = (self._context.session.query(models_v2.SubnetPool.hash)
.filter_by(id=self._subnetpool['id']).scalar())
if current_hash is None:
# NOTE(cbrandily): subnetpool has been deleted
raise n_exc.SubnetPoolNotFound(
subnetpool_id=self._subnetpool['id'])
new_hash = uuidutils.generate_uuid()
# NOTE(cbrandily): the update prevents two concurrent subnet allocations
# from succeeding: at most one transaction will succeed, the others will
# be rolled back and caught in neutron.db.v2.base
query = self._context.session.query(models_v2.SubnetPool).filter_by(
id=self._subnetpool['id'], hash=current_hash)
count = query.update({'hash': new_hash})
if not count:
raise db_exc.RetryRequest()
def _get_allocated_cidrs(self):
query = self._context.session.query(models_v2.Subnet)
@ -212,6 +226,7 @@ class SubnetPoolReader(object):
self._read_prefix_bounds(subnetpool)
self._read_attrs(subnetpool,
['tenant_id', 'name', 'shared'])
self._read_address_scope(subnetpool)
self.subnetpool = {'id': self.id,
'name': self.name,
'tenant_id': self.tenant_id,
@ -223,6 +238,7 @@ class SubnetPoolReader(object):
'default_prefix': self.default_prefix,
'default_prefixlen': self.default_prefixlen,
'default_quota': self.default_quota,
'address_scope_id': self.address_scope_id,
'shared': self.shared}
def _read_attrs(self, subnetpool, keys):
@ -299,6 +315,10 @@ class SubnetPoolReader(object):
self.ip_version = ip_version
self.prefixes = self._compact_subnetpool_prefix_list(prefix_list)
def _read_address_scope(self, subnetpool):
self.address_scope_id = subnetpool.get('address_scope_id',
attributes.ATTR_NOT_SPECIFIED)
def _compact_subnetpool_prefix_list(self, prefix_list):
"""Compact any overlapping prefixes in prefix_list and return the
result


@ -8,16 +8,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: German (http://www.transifex.com/projects/p/neutron/language/"
"Language-Team: German (http://www.transifex.com/openstack/neutron/language/"
"de/)\n"
"Language: de\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#, python-format


@ -3,23 +3,28 @@
# This file is distributed under the same license as the neutron project.
#
# Translators:
# jhonangel jose mireles rodriguez <jhonangelmireles@gmail.com>, 2015
# Pablo Sanchez <furybeat@gmail.com>, 2015
msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Spanish (http://www.transifex.com/projects/p/neutron/language/"
"Language-Team: Spanish (http://www.transifex.com/openstack/neutron/language/"
"es/)\n"
"Language: es\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#, python-format
msgid "%(action)s failed (client error): %(exc)s"
msgstr "%(action)s falló (error de cliente): %(exc)s"
#, python-format
msgid "%(method)s %(url)s"
msgstr "%(method)s %(url)s"
@ -29,6 +34,10 @@ msgid "%(plugin_key)s: %(function_name)s with args %(args)s ignored"
msgstr ""
"Se ha ignorado %(plugin_key)s: %(function_name)s con los argumentos %(args)s "
#, python-format
msgid "%(prog)s version %(version)s"
msgstr "%(prog)s versión %(version)s"
#, python-format
msgid "%(url)s returned a fault: %(exception)s"
msgstr "%(url)s ha devuelto un error: %(exception)s"
@ -37,6 +46,22 @@ msgstr "%(url)s ha devuelto un error: %(exception)s"
msgid "%(url)s returned with HTTP %(status)d"
msgstr "Se ha devuelto %(url)s con HTTP %(status)d"
#, python-format
msgid "%d probe(s) deleted"
msgstr "Se ha eliminado el Analizador(es) %d"
#, python-format
msgid "Adding network %(net)s to agent %(agent)s on host %(host)s"
msgstr "Agregando red %(net)s al agente %(agent)s en el host %(host)s"
#, python-format
msgid "Agent %s already present"
msgstr "El agente %s ya está presente."
#, python-format
msgid "Agent Gateway port does not exist, so create one: %s"
msgstr "El puerto pasarela del agente no existe, por lo tanto crear uno: %s"
msgid "Agent initialized successfully, now running... "
msgstr ""
"El agente se ha inicializado satisfactoriamente, ahora se está ejecutando... "
@ -73,6 +98,21 @@ msgstr "Se ha intentado eliminar el filtro de puerto que no está filtrado %r"
msgid "Attempted to update port filter which is not filtered %s"
msgstr "Se ha intentado actualizar el filtro de puerto que no está filtrado %s"
msgid "Bad resource for forming a list request"
msgstr "Mal recurso para la formación de una solicitud de lista"
#, python-format
msgid ""
"Cannot apply dhcp option %(opt)s because it's ip_version %(version)d is not "
"in port's address IP versions"
msgstr ""
"No se puede aplicar la opción dhcp %(opt)s porque su ip_version %(version)d "
"no está en la versión IP de la dirección del puerto"
#, python-format
msgid "Centralizing distributed router %s is not supported"
msgstr "No se soporta centralizar el enrutador distribuido %s"
#, python-format
msgid "Cleaning bridge: %s"
msgstr "LImpiando puente: %s"
@ -84,6 +124,11 @@ msgstr "Archivo de configuración de pegar: %s"
msgid "DHCP agent started"
msgstr "Se ha iniciado al agente DHCP"
#, python-format
msgid "Default provider is not specified for service type %s"
msgstr ""
"El proveedor por defecto no esta especificado para el tipo de servicio %s"
#, python-format
msgid "Deleting port: %s"
msgstr "Destruyendo puerto: %s"
@ -92,6 +137,10 @@ msgstr "Destruyendo puerto: %s"
msgid "Destroying IPset: %s"
msgstr "Destruyendo IPset: %s"
#, python-format
msgid "Destroying IPsets with prefix: %s"
msgstr "Destruyendo IPset con prefijo: %s"
#, python-format
msgid "Device %s already exists"
msgstr "El dispositivo %s ya existe"
@ -100,9 +149,18 @@ msgstr "El dispositivo %s ya existe"
msgid "Device %s not defined on plugin"
msgstr "El dispositivo %s no está definido en el plug-in"
msgid "Disabled allowed-address-pairs extension."
msgstr "La extensión allowed-address-pairs se ha inhabilitado."
msgid "Disabled security-group extension."
msgstr "La extensión security-group se ha inhabilitado."
msgid "Disabled vlantransparent extension."
msgstr "La extensión vlantransparent se ha inhabilitado."
msgid "Fake SDNVE controller initialized"
msgstr "Inicializado controlador falso SDNVE "
#, python-format
msgid "Found invalid IP address in pool: %(start)s - %(end)s:"
msgstr ""
@ -119,10 +177,35 @@ msgstr ""
"Se ha encontrado una agrupación mayor que el CIDR de subred: %(start)s - "
"%(end)s"
#, python-format
msgid ""
"Found port (%(port_id)s, %(ip)s) having IP allocation on subnet %(subnet)s, "
"cannot delete"
msgstr ""
"Se encontró el puerto (%(port_id)s, %(ip)s) con la asignación de IP en la "
"subred %(subnet)s, no se puede eliminar."
#, python-format
msgid "HTTP exception thrown: %s"
msgstr "Excepción de HTTP emitida: %s"
#, python-format
msgid ""
"Heartbeat received from %(type)s agent on host %(host)s, uuid %(uuid)s after "
"%(delta)s"
msgstr ""
"Heartbeat recibido del agente %(type)s en el host %(host)s, uuid %(uuid)s "
"después de %(delta)s"
msgid "IPset cleanup completed successfully"
msgstr "La limpieza de IPset se ha completado satisfactoriamente"
msgid "IPv6 is not enabled on this system."
msgstr "IPv6 no esta habitado en el sistema."
msgid "Initializing CRD client... "
msgstr "Inicialización de cliente CRD..."
msgid "Initializing extension manager."
msgstr "Inicializando gestor de ampliación."
@ -140,10 +223,26 @@ msgstr "Se ha iniciado el daemon RPC de agente de LinuxBridge."
msgid "Loaded extension: %s"
msgstr "Ampliación cargada: %s"
#, python-format
msgid "Loaded quota_driver: %s."
msgstr "Se ha cargado quota_driver %s."
#, python-format
msgid "Loading Metering driver %s"
msgstr "Cargando controlador de medición %s"
#, python-format
msgid "Loading Plugin: %s"
msgstr "Cargando complementos: %s"
#, python-format
msgid "Loading core plugin: %s"
msgstr "Cargando complemento principal: %s"
#, python-format
msgid "Loading interface driver %s"
msgstr "Cargando controlador de interfaz %s"
msgid "Logging enabled!"
msgstr "Registro habilitado."
@ -159,17 +258,35 @@ msgid "Mapping physical network %(physical_network)s to bridge %(bridge)s"
msgstr ""
"Correlacionando la red física %(physical_network)s con el puente %(bridge)s"
#, python-format
msgid ""
"Mapping physical network %(physical_network)s to interface %(interface)s"
msgstr ""
"Co-relacionando la red física %(physical_network)s con la interfaz "
"%(interface)s"
#, python-format
msgid "Network VLAN ranges: %s"
msgstr "Rangos de VLAN de red: %s"
#, python-format
msgid "Neutron service started, listening on %(host)s:%(port)s"
msgstr "Se ha iniciado el servicio Neutron, escuchando en %(host)s:%(port)s"
#, python-format
msgid "No %s Plugin loaded"
msgstr "No se ha cargado ningún plug-in de %s"
msgid "No ip allocation set"
msgstr "No se ha configurado la asignación IP"
msgid "No ports here to refresh firewall"
msgstr "No hay puertos aqui para actualizar firewall"
#, python-format
msgid "Nova event response: %s"
msgstr "Respuesta de evento Nova: %s"
msgid "OVS cleanup completed successfully"
msgstr "La limpieza de OVS se ha completado satisfactoriamente"
@ -177,14 +294,26 @@ msgstr "La limpieza de OVS se ha completado satisfactoriamente"
msgid "Port %(device)s updated. Details: %(details)s"
msgstr "Se ha actualizado el puerto %(device)s. Detalles: %(details)s"
#, python-format
msgid "Port %(port_id)s not present in bridge %(br_name)s"
msgstr "El puerto %(port_id)s no está presente en el puente %(br_name)s"
#, python-format
msgid "Port %s updated."
msgstr "El puerto %s se ha actualizado."
#, python-format
msgid "Ports %s removed"
msgstr "Se ha eliminado los puertos %s"
#, python-format
msgid "Preparing filters for devices %s"
msgstr "Preparando filtros para dispositivos %s"
#, python-format
msgid "Process runs with uid/gid: %(uid)s/%(gid)s"
msgstr "El proceso se ejecuta con uid/gid: %(uid)s/%(gid)s"
msgid "Provider rule updated"
msgstr "Se ha actualizado regla de proveedor"
@ -192,6 +321,9 @@ msgstr "Se ha actualizado regla de proveedor"
msgid "RPC agent_id: %s"
msgstr "agent_id de RPC: %s"
msgid "RPC was already started in parent process by plugin."
msgstr "RPC ya fue iniciado en el proceso padre por el complemento."
#, python-format
msgid "Reclaiming vlan = %(vlan_id)s from net-id = %(net_uuid)s"
msgstr "Reclamando vlan = %(vlan_id)s de net-id = %(net_uuid)s"
@ -203,6 +335,14 @@ msgstr "Renovar reglas de cortafuegos"
msgid "Remove device filter for %r"
msgstr "Eliminar filtro de dispositivo para %r"
#, python-format
msgid "Removing iptables rule for IPset: %s"
msgstr "Eliminando regla de iptables para IPset: %s"
#, python-format
msgid "Router %(router_id)s transitioned to %(state)s"
msgstr "El enrutador %(router_id)s ha hecho la transición a %(state)s"
#, python-format
msgid ""
"Router %s is not managed by this agent. It was possibly deleted concurrently."
@ -210,6 +350,10 @@ msgstr ""
"Router %s no es controlado por este agente.Fue posiblemente borrado "
"concurrentemente"
#, python-format
msgid "SNAT interface port list does not exist, so create one: %s"
msgstr "El puerto de la interfaz SNAT no existe, por lo tanto crear uno: %s"
#, python-format
msgid "Security group member updated %r"
msgstr "Se ha actualizado el miembro de grupo de seguridad %r"
@ -218,6 +362,32 @@ msgstr "Se ha actualizado el miembro de grupo de seguridad %r"
msgid "Security group rule updated %r"
msgstr "Se ha actualizado la regla de grupo de seguridad %r"
#, python-format
msgid "Service %s is supported by the core plugin"
msgstr "El complemento principal soporta el servicio %s"
msgid "Set a new controller if needed."
msgstr "Si es necesario configurar un nuevo controlador."
#, python-format
msgid "Set the controller to a new controller: %s"
msgstr "Configurar el controlador a un nuevo controlador: %s"
#, python-format
msgid ""
"Skipping method %s as firewall is disabled or configured as "
"NoopFirewallDriver."
msgstr ""
"Saltando el método %s, ya que el cortafuegos esta inhabilitado o configurado "
"como NoopFirewallDriver."
msgid ""
"Skipping periodic DHCP agent status check because automatic network "
"rescheduling is disabled."
msgstr ""
"Omitiendo la verificación de estado del agente DHCP porque la re-"
"planificación automática de red esta inhabilitada."
#, python-format
msgid "Skipping port %s as no IP is configure on it"
msgstr "Saltando el puerto %s, ya que no hay ninguna IP configurada en él"
@ -226,6 +396,9 @@ msgid "Specified IP addresses do not match the subnet IP version"
msgstr ""
"Las direcciones IP especificadas no coinciden con la versión de IP de subred "
msgid "Stopping linuxbridge agent."
msgstr "Deteniendo agente linuxbridge."
msgid "Synchronizing state"
msgstr "Sincronizando estado"


@ -9,16 +9,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: French (http://www.transifex.com/projects/p/neutron/language/"
"Language-Team: French (http://www.transifex.com/openstack/neutron/language/"
"fr/)\n"
"Language: fr\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
#, python-format
@ -45,6 +45,10 @@ msgstr "%(url)s a retourné une erreur : %(exception)s."
msgid "%(url)s returned with HTTP %(status)d"
msgstr "%(url)s retourné avec HTTP %(status)d"
#, python-format
msgid "%d probe(s) deleted"
msgstr "Sonde(s) %d supprimées"
#, python-format
msgid "Adding %s to list of bridges."
msgstr "Ajout %s à la liste de ponts."
@ -75,8 +79,12 @@ msgstr ""
"Autorisation de tri activée car la mise en page native nécessite le tri natif"
#, python-format
msgid "Ancillary Port %s added"
msgstr "Port auxiliaire %s ajouté"
msgid "Ancillary Ports %s added"
msgstr "Ports auxillaires %s ajoutés"
#, python-format
msgid "Ancillary ports %s removed"
msgstr "Ports auxillaires %s supprimés"
#, python-format
msgid "Assigning %(vlan_id)s as local vlan for net-id=%(net_uuid)s"
@ -118,8 +126,8 @@ msgid "Config paste file: %s"
msgstr "Config du fichier de collage : %s"
#, python-format
msgid "Configuration for device %s completed."
msgstr "Configuration complète de l'équipement %s"
msgid "Controller IPs: %s"
msgstr "IPs du controlleur: %s"
msgid "DHCP agent started"
msgstr "Agent DHCP démarré"
@ -228,10 +236,6 @@ msgstr "Mappage du réseau physique %(physical_network)s sur le pont %(bridge)s"
msgid "Network VLAN ranges: %s"
msgstr "Plages de réseau local virtuel de réseau : %s"
#, python-format
msgid "Network name changed to %s"
msgstr "Nom du réseau changé en %s"
#, python-format
msgid "Neutron service started, listening on %(host)s:%(port)s"
msgstr "Service Neutron démarré, en écoute sur %(host)s:%(port)s"
@ -268,8 +272,8 @@ msgid "Port %s was deleted concurrently"
msgstr "Le port %s a été effacé en même temps"
#, python-format
msgid "Port name changed to %s"
msgstr "Nom de port changé en %s"
msgid "Ports %s removed"
msgstr "Ports %s supprimés"
#, python-format
msgid "Preparing filters for devices %s"


@ -8,16 +8,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Italian (http://www.transifex.com/projects/p/neutron/language/"
"Language-Team: Italian (http://www.transifex.com/openstack/neutron/language/"
"it/)\n"
"Language: it\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#, python-format


@ -8,16 +8,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Japanese (http://www.transifex.com/projects/p/neutron/"
"language/ja/)\n"
"Language-Team: Japanese (http://www.transifex.com/openstack/neutron/language/"
"ja/)\n"
"Language: ja\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=1; plural=0;\n"
#, python-format


@ -7,16 +7,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Korean (Korea) (http://www.transifex.com/projects/p/neutron/"
"Language-Team: Korean (Korea) (http://www.transifex.com/openstack/neutron/"
"language/ko_KR/)\n"
"Language: ko_KR\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=1; plural=0;\n"
#, python-format


@ -1,19 +1,19 @@
# Translations template for neutron.
# Copyright (C) 2014 ORGANIZATION
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the neutron project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2014.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2015.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: neutron 2014.2.dev608.g787bba2\n"
"Project-Id-Version: neutron 7.0.0.0b3.dev96\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2014-06-09 06:08+0000\n"
"POT-Creation-Date: 2015-08-10 06:11+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"


@ -6,16 +6,16 @@
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: neutron 7.0.0.0b2.dev396\n"
"Project-Id-Version: neutron 7.0.0.0b3.dev96\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
#: neutron/manager.py:136
msgid "Error, plugin is not set"
@ -76,17 +76,17 @@ msgstr ""
msgid "Internal error"
msgstr ""
#: neutron/agent/common/ovs_lib.py:225 neutron/agent/common/ovs_lib.py:325
#: neutron/agent/common/ovs_lib.py:219 neutron/agent/common/ovs_lib.py:319
#, python-format
msgid "Unable to execute %(cmd)s. Exception: %(exception)s"
msgstr ""
#: neutron/agent/common/ovs_lib.py:246
#: neutron/agent/common/ovs_lib.py:240
#, python-format
msgid "Timed out retrieving ofport on port %(pname)s. Exception: %(exception)s"
msgstr ""
#: neutron/agent/common/ovs_lib.py:575
#: neutron/agent/common/ovs_lib.py:567
#, python-format
msgid "OVS flows could not be applied on bridge %s"
msgstr ""
@ -119,13 +119,13 @@ msgstr ""
msgid "Network %s info call failed."
msgstr ""
#: neutron/agent/dhcp/agent.py:576 neutron/agent/l3/agent.py:632
#: neutron/agent/metadata/agent.py:315
#: neutron/agent/dhcp/agent.py:576 neutron/agent/l3/agent.py:638
#: neutron/agent/metadata/agent.py:319
#: neutron/plugins/hyperv/agent/l2_agent.py:94
#: neutron/plugins/ibm/agent/sdnve_neutron_agent.py:109
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:807
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:847
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:130
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:314
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:313
#: neutron/services/metering/agents/metering_agent.py:283
msgid "Failed reporting state!"
msgstr ""
@ -168,24 +168,24 @@ msgstr ""
msgid "Failed synchronizing routers due to RPC error"
msgstr ""
#: neutron/agent/l3/dvr_local_router.py:181
#: neutron/agent/l3/dvr_local_router.py:182
msgid "DVR: Failed updating arp entry"
msgstr ""
#: neutron/agent/l3/dvr_local_router.py:263
#: neutron/agent/l3/dvr_local_router.py:266
msgid "DVR: error adding redirection logic"
msgstr ""
#: neutron/agent/l3/dvr_local_router.py:265
#: neutron/agent/l3/dvr_local_router.py:268
msgid "DVR: removed snat failed"
msgstr ""
#: neutron/agent/l3/dvr_local_router.py:386
#: neutron/agent/l3/dvr_local_router.py:389
#, python-format
msgid "No FloatingIP agent gateway port returned from server for 'network-id': %s"
msgstr ""
#: neutron/agent/l3/dvr_local_router.py:391
#: neutron/agent/l3/dvr_local_router.py:394
msgid "Missing subnet/agent_gateway_port"
msgstr ""
@ -208,11 +208,11 @@ msgstr ""
msgid "Failed to process or handle event for line %s"
msgstr ""
#: neutron/agent/l3/namespace_manager.py:121
#: neutron/agent/l3/namespace_manager.py:124
msgid "RuntimeError in obtaining namespace list for namespace cleanup."
msgstr ""
#: neutron/agent/l3/namespace_manager.py:142
#: neutron/agent/l3/namespace_manager.py:145
#, python-format
msgid "Failed to destroy stale namespace %s"
msgstr ""
@ -237,15 +237,20 @@ msgstr ""
msgid "Error while handling pidfile: %s"
msgstr ""
#: neutron/agent/linux/daemon.py:190
#: neutron/agent/linux/daemon.py:189
msgid "Fork failed"
msgstr ""
#: neutron/agent/linux/daemon.py:243
#: neutron/agent/linux/daemon.py:242
#, python-format
msgid "Pidfile %s already exist. Daemon already running?"
msgstr ""
#: neutron/agent/linux/dhcp.py:393
#, python-format
msgid "Error while create dnsmasq base log dir: %s"
msgstr ""
#: neutron/agent/linux/external_process.py:225
#, python-format
msgid ""
@ -275,6 +280,11 @@ msgstr ""
msgid "Failed unplugging interface '%s'"
msgstr ""
#: neutron/agent/linux/ip_conntrack.py:76
#, python-format
msgid "Failed execute conntrack command %s"
msgstr ""
#: neutron/agent/linux/ip_lib.py:247
#, python-format
msgid "Failed deleting ingress connection state of floatingip %s"
@ -298,7 +308,7 @@ msgstr ""
msgid "Exceeded %s second limit waiting for address to leave the tentative state."
msgstr ""
#: neutron/agent/linux/ip_lib.py:819
#: neutron/agent/linux/ip_lib.py:827
#, python-format
msgid "Failed sending gratuitous ARP to %(addr)s on %(iface)s in namespace %(ns)s"
msgstr ""
@ -341,7 +351,7 @@ msgstr ""
msgid "Unable to convert value in %s"
msgstr ""
#: neutron/agent/metadata/agent.py:117
#: neutron/agent/metadata/agent.py:121
#: neutron/agent/metadata/namespace_proxy.py:57
msgid "Unexpected error."
msgstr ""
@ -423,13 +433,13 @@ msgid ""
"message %s"
msgstr ""
#: neutron/api/rpc/handlers/l3_rpc.py:74
#: neutron/api/rpc/handlers/l3_rpc.py:75
msgid ""
"No plugin for L3 routing registered! Will reply to l3 agent with empty "
"router dictionary."
msgstr ""
#: neutron/api/v2/base.py:377
#: neutron/api/v2/base.py:389
#, python-format
msgid "Unable to undo add for %(resource)s %(id)s"
msgstr ""
@ -455,7 +465,7 @@ msgstr ""
msgid "Error, unable to destroy IPset: %s"
msgstr ""
#: neutron/cmd/netns_cleanup.py:147
#: neutron/cmd/netns_cleanup.py:149
#, python-format
msgid "Error unable to destroy namespace: %s"
msgstr ""
@ -535,11 +545,11 @@ msgstr ""
msgid "Unexpected exception while checking supported feature via command: %s"
msgstr ""
#: neutron/cmd/sanity/checks.py:138
#: neutron/cmd/sanity/checks.py:142
msgid "Unexpected exception while checking supported ip link command"
msgstr ""
#: neutron/cmd/sanity/checks.py:302
#: neutron/cmd/sanity/checks.py:306
#, python-format
msgid ""
"Failed to import required modules. Ensure that the python-openvswitch "
@ -571,12 +581,12 @@ msgstr ""
msgid "Exception encountered during network rescheduling"
msgstr ""
#: neutron/db/db_base_plugin_v2.py:224 neutron/plugins/ml2/plugin.py:562
#: neutron/db/db_base_plugin_v2.py:226 neutron/plugins/ml2/plugin.py:571
#, python-format
msgid "An exception occurred while creating the %(resource)s:%(item)s"
msgstr ""
#: neutron/db/db_base_plugin_v2.py:835
#: neutron/db/db_base_plugin_v2.py:982
#, python-format
msgid "Unable to generate mac address after %s attempts"
msgstr ""
@ -616,11 +626,11 @@ msgstr ""
msgid "Exception encountered during router rescheduling."
msgstr ""
#: neutron/db/l3_db.py:517
#: neutron/db/l3_db.py:521
msgid "Router port must have at least one fixed IP"
msgstr ""
#: neutron/db/l3_db.py:546
#: neutron/db/l3_db.py:550
msgid "Cannot have multiple IPv4 subnets on router port"
msgstr ""
@ -696,24 +706,24 @@ msgstr ""
msgid "Did not find tenant: %r"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:234
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:233
#, python-format
msgid "Delete net failed after deleting the network in DB: %s"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:351
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:350
#, python-format
msgid "Delete port operation failed in SDN-VE after deleting the port from DB: %s"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:416
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:415
#, python-format
msgid ""
"Delete subnet operation failed in SDN-VE after deleting the subnet from "
"DB: %s"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:497
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:496
#: neutron/services/l3_router/l3_sdnve.py:92
#, python-format
msgid ""
@ -721,13 +731,13 @@ msgid ""
" %s"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:541
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:540
msgid ""
"SdnvePluginV2._add_router_interface_only: failed to add the interface in "
"the roll back. of a remove_router_interface operation"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:679
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:678
#: neutron/services/l3_router/l3_sdnve.py:203
#, python-format
msgid "Delete floatingip failed in SDN-VE: %s"
@ -741,13 +751,13 @@ msgid ""
msgstr ""
#: neutron/plugins/ibm/agent/sdnve_neutron_agent.py:256
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1714
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1739
#, python-format
msgid "%s Agent terminated!"
msgstr ""
#: neutron/plugins/ml2/db.py:242 neutron/plugins/ml2/db.py:326
#: neutron/plugins/ml2/plugin.py:1361
#: neutron/plugins/ml2/plugin.py:1370
#, python-format
msgid "Multiple ports have port_id starting with %s"
msgstr ""
@ -806,106 +816,121 @@ msgstr ""
msgid "Extension driver '%(name)s' failed in %(method)s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:286
#: neutron/plugins/ml2/plugin.py:295
#, python-format
msgid "Failed to commit binding results for %(port)s after %(max)s tries"
msgstr ""
#: neutron/plugins/ml2/plugin.py:442
#: neutron/plugins/ml2/plugin.py:451
#, python-format
msgid "Serialized vif_details DB value '%(value)s' for port %(port)s is invalid"
msgstr ""
#: neutron/plugins/ml2/plugin.py:453
#: neutron/plugins/ml2/plugin.py:462
#, python-format
msgid "Serialized profile DB value '%(value)s' for port %(port)s is invalid"
msgstr ""
#: neutron/plugins/ml2/plugin.py:539
#: neutron/plugins/ml2/plugin.py:548
#, python-format
msgid "Could not find %s to delete."
msgstr ""
#: neutron/plugins/ml2/plugin.py:542
#: neutron/plugins/ml2/plugin.py:551
#, python-format
msgid "Could not delete %(res)s %(id)s."
msgstr ""
#: neutron/plugins/ml2/plugin.py:575
#: neutron/plugins/ml2/plugin.py:584
#, python-format
msgid ""
"mechanism_manager.create_%(res)s_postcommit failed for %(res)s: "
"'%(failed_id)s'. Deleting %(res)ss %(resource_ids)s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:621
#: neutron/plugins/ml2/plugin.py:630
#, python-format
msgid "mechanism_manager.create_network_postcommit failed, deleting network '%s'"
msgstr ""
#: neutron/plugins/ml2/plugin.py:691
#: neutron/plugins/ml2/plugin.py:700
#, python-format
msgid "Exception auto-deleting port %s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:704
#: neutron/plugins/ml2/plugin.py:713
#, python-format
msgid "Exception auto-deleting subnet %s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:785
#: neutron/plugins/ml2/plugin.py:794
msgid "mechanism_manager.delete_network_postcommit failed"
msgstr ""
#: neutron/plugins/ml2/plugin.py:806
#: neutron/plugins/ml2/plugin.py:815
#, python-format
msgid "mechanism_manager.create_subnet_postcommit failed, deleting subnet '%s'"
msgstr ""
#: neutron/plugins/ml2/plugin.py:925
#: neutron/plugins/ml2/plugin.py:934
#, python-format
msgid "Exception deleting fixed_ip from port %s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:934
#: neutron/plugins/ml2/plugin.py:943
msgid "mechanism_manager.delete_subnet_postcommit failed"
msgstr ""
#: neutron/plugins/ml2/plugin.py:999
#: neutron/plugins/ml2/plugin.py:1008
#, python-format
msgid "mechanism_manager.create_port_postcommit failed, deleting port '%s'"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1011
#: neutron/plugins/ml2/plugin.py:1020
#, python-format
msgid "_bind_port_if_needed failed, deleting port '%s'"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1042
#: neutron/plugins/ml2/plugin.py:1051
#, python-format
msgid "_bind_port_if_needed failed. Deleting all ports from create bulk '%s'"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1176
#: neutron/plugins/ml2/plugin.py:1185
#, python-format
msgid "mechanism_manager.update_port_postcommit failed for port %s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1223
#: neutron/plugins/ml2/plugin.py:1232
#, python-format
msgid "No Host supplied to bind DVR Port %s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1342
#: neutron/plugins/ml2/plugin.py:1351
#, python-format
msgid "mechanism_manager.delete_port_postcommit failed for port %s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1374
#: neutron/plugins/ml2/plugin.py:1383
#, python-format
msgid "Binding info for DVR port %s not found"
msgstr ""
#: neutron/plugins/ml2/rpc.py:154
#, python-format
msgid "Failed to get details for device %s"
msgstr ""
#: neutron/plugins/ml2/rpc.py:242
#, python-format
msgid "Failed to update device %s up"
msgstr ""
#: neutron/plugins/ml2/rpc.py:256
#, python-format
msgid "Failed to update device %s down"
msgstr ""
#: neutron/plugins/ml2/drivers/type_gre.py:79
msgid "Failed to parse tunnel_id_ranges. Service terminated!"
msgstr ""
@ -918,12 +943,6 @@ msgstr ""
msgid "Failed to parse vni_ranges. Service terminated!"
msgstr ""
#: neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_ext_driver.py:76
#: neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_ext_driver.py:83
#, python-format
msgid "Policy Profile %(profile)s does not exist."
msgstr ""
#: neutron/plugins/ml2/drivers/cisco/ucsm/mech_cisco_ucsm.py:206
#, python-format
msgid ""
@ -931,51 +950,65 @@ msgid ""
"%(network)s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:186
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:102
#, python-format
msgid ""
"Interface %(intf)s for physical network %(net)s does not exist. Agent "
"terminated!"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:213
#, python-format
msgid "Failed creating vxlan interface for %(segmentation_id)s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:336
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:287
#, python-format
msgid ""
"Unable to create VXLAN interface for VNI %s because it is in use by "
"another interface."
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:375
#, python-format
msgid "Unable to add %(interface)s to %(bridge_name)s! Exception: %(e)s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:349
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:388
#, python-format
msgid "Unable to add vxlan interface for network %s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:356
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:395
#, python-format
msgid "No mapping for physical network %s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:365
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:404
#, python-format
msgid "Unknown network_type %(network_type)s for network %(network_id)s."
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:456
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:495
#, python-format
msgid "Cannot delete bridge %s, does not exist"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:534
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:574
msgid "No valid Segmentation ID to perform UCAST test."
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:817
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:857
msgid "Unable to obtain MAC address for unique ID. Agent terminated!"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1022
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:271
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1062
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:282
#, python-format
msgid "Error in agent loop. Devices info: %s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1050
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1090
#: neutron/plugins/ml2/drivers/mlnx/agent/eswitch_neutron_agent.py:40
#, python-format
msgid "Parsing physical_interface_mappings failed: %s. Agent terminated!"
@ -993,16 +1026,16 @@ msgstr ""
msgid "Failed to get devices for %s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:178
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:187
#, python-format
msgid "Failed to set device %s state"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:331
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:342
msgid "Failed on Agent configuration parse. Agent terminated!"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:343
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:354
msgid "Agent Initialization Failed"
msgstr ""
@ -1038,123 +1071,128 @@ msgid ""
"a different subnet %(orig_subnet)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:413
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:414
msgid "No tunnel_type specified, cannot create tunnels"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:416
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:439
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:417
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:440
#, python-format
msgid "tunnel_type %s not supported by agent"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:432
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:433
msgid "No tunnel_ip specified, cannot delete tunnels"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:436
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:437
msgid "No tunnel_type specified, cannot delete tunnels"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:582
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:583
#, python-format
msgid "No local VLAN available for net-id=%s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:613
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:614
#, python-format
msgid ""
"Cannot provision %(network_type)s network for net-id=%(net_uuid)s - "
"tunneling disabled"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:621
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:622
#, python-format
msgid ""
"Cannot provision flat network for net-id=%(net_uuid)s - no bridge for "
"physical_network %(physical_network)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:631
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:632
#, python-format
msgid ""
"Cannot provision VLAN network for net-id=%(net_uuid)s - no bridge for "
"physical_network %(physical_network)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:640
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:641
#, python-format
msgid ""
"Cannot provision unknown network type %(network_type)s for net-"
"id=%(net_uuid)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:700
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:701
#, python-format
msgid ""
"Cannot reclaim unknown network type %(network_type)s for net-"
"id=%(net_uuid)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:907
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:788
#, python-format
msgid "Configuration for devices %s failed!"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:925
msgid ""
"Failed to create OVS patch port. Cannot have tunneling enabled on this "
"agent, since this version of OVS does not support tunnels or patch ports."
" Agent terminated!"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:966
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:984
#, python-format
msgid ""
"Bridge %(bridge)s for physical network %(physical_network)s does not "
"exist. Agent terminated!"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1155
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1171
#, python-format
msgid "Failed to set-up %(type)s tunnel port to %(ip)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1347
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1369
#, python-format
msgid ""
"process_network_ports - iteration:%d - failure while retrieving port "
"details from server"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1383
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1405
#, python-format
msgid ""
"process_ancillary_network_ports - iteration:%d - failure while retrieving"
" port details from server"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1533
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1557
msgid "Error while synchronizing tunnels"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1600
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1627
msgid "Error while processing VIF ports"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1708
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1733
msgid "Agent failed to create agent config map"
msgstr ""
#: neutron/plugins/oneconvergence/plugin.py:238
#: neutron/plugins/oneconvergence/plugin.py:237
msgid "Failed to create subnet, deleting it from neutron"
msgstr ""
#: neutron/plugins/oneconvergence/plugin.py:302
#: neutron/plugins/oneconvergence/plugin.py:301
#, python-format
msgid "Deleting newly created neutron port %s"
msgstr ""
#: neutron/plugins/oneconvergence/plugin.py:373
#: neutron/plugins/oneconvergence/plugin.py:372
msgid "Failed to create floatingip"
msgstr ""
#: neutron/plugins/oneconvergence/plugin.py:412
#: neutron/plugins/oneconvergence/plugin.py:411
msgid "Failed to create router"
msgstr ""
@ -1207,6 +1245,11 @@ msgstr ""
msgid "Request failed from Controller side with Status=%s"
msgstr ""
#: neutron/quota/resource.py:199
#, python-format
msgid "Model class %s does not have a tenant_id attribute"
msgstr ""
#: neutron/scheduler/l3_agent_scheduler.py:287
#, python-format
msgid "Not enough candidates, a HA router needs at least %s agents"
@ -1242,38 +1285,6 @@ msgstr ""
msgid "Failed fwaas process services sync"
msgstr ""
#: neutron/services/l3_router/l3_arista.py:114
#, python-format
msgid "Error creating router on Arista HW router=%s "
msgstr ""
#: neutron/services/l3_router/l3_arista.py:137
#, python-format
msgid "Error updating router on Arista HW router=%s "
msgstr ""
#: neutron/services/l3_router/l3_arista.py:152
#, python-format
msgid "Error deleting router on Arista HW router %(r)s exception=%(e)s"
msgstr ""
#: neutron/services/l3_router/l3_arista.py:198
#, python-format
msgid "Error Adding subnet %(subnet)s to router %(router_id)s on Arista HW"
msgstr ""
#: neutron/services/l3_router/l3_arista.py:232
#, python-format
msgid ""
"Error removing interface %(interface)s from router %(router_id)s on "
"Arista HWException =(exc)s"
msgstr ""
#: neutron/services/l3_router/l3_arista.py:278
#, python-format
msgid "Error Adding interface %(subnet_id)s to router %(router_id)s on Arista HW"
msgstr ""
#: neutron/services/l3_router/l3_sdnve.py:62
#, python-format
msgid "Create router failed in SDN-VE with error %s"


@ -6,16 +6,16 @@
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: neutron 7.0.0.0b2.dev396\n"
"Project-Id-Version: neutron 7.0.0.0b3.dev96\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
#: neutron/manager.py:118
#, python-format
@ -32,17 +32,6 @@ msgstr ""
msgid "Loading Plugin: %s"
msgstr ""
#: neutron/quota.py:221
msgid ""
"ConfDriver is used as quota_driver because the loaded plugin does not "
"support 'quotas' table."
msgstr ""
#: neutron/quota.py:232
#, python-format
msgid "Loaded quota_driver: %s."
msgstr ""
#: neutron/service.py:186
#, python-format
msgid "Neutron service started, listening on %(host)s:%(port)s"
@ -93,29 +82,29 @@ msgstr ""
msgid "Security group rule updated %r"
msgstr ""
#: neutron/agent/securitygroups_rpc.py:204
#: neutron/agent/securitygroups_rpc.py:205
#, python-format
msgid "Security group member updated %r"
msgstr ""
#: neutron/agent/securitygroups_rpc.py:226
#: neutron/agent/securitygroups_rpc.py:229
msgid "Provider rule updated"
msgstr ""
#: neutron/agent/securitygroups_rpc.py:238
#: neutron/agent/securitygroups_rpc.py:241
#, python-format
msgid "Remove device filter for %r"
msgstr ""
#: neutron/agent/securitygroups_rpc.py:248
#: neutron/agent/securitygroups_rpc.py:251
msgid "Refresh firewall rules"
msgstr ""
#: neutron/agent/securitygroups_rpc.py:252
#: neutron/agent/securitygroups_rpc.py:255
msgid "No ports here to refresh firewall"
msgstr ""
#: neutron/agent/common/ovs_lib.py:432 neutron/agent/common/ovs_lib.py:465
#: neutron/agent/common/ovs_lib.py:424 neutron/agent/common/ovs_lib.py:457
#, python-format
msgid "Port %(port_id)s not present in bridge %(br_name)s"
msgstr ""
@ -132,13 +121,13 @@ msgstr ""
msgid "Synchronizing state complete"
msgstr ""
#: neutron/agent/dhcp/agent.py:585 neutron/agent/l3/agent.py:646
#: neutron/agent/dhcp/agent.py:585 neutron/agent/l3/agent.py:652
#: neutron/services/metering/agents/metering_agent.py:286
#, python-format
msgid "agent_updated by server side %s!"
msgstr ""
#: neutron/agent/l3/agent.py:567 neutron/agent/l3/agent.py:636
#: neutron/agent/l3/agent.py:573 neutron/agent/l3/agent.py:642
msgid "L3 agent started"
msgstr ""
@ -159,7 +148,7 @@ msgstr ""
msgid "Process runs with uid/gid: %(uid)s/%(gid)s"
msgstr ""
#: neutron/agent/linux/dhcp.py:802
#: neutron/agent/linux/dhcp.py:816
#, python-format
msgid ""
"Cannot apply dhcp option %(opt)s because it's ip_version %(version)d is "
@ -171,12 +160,12 @@ msgstr ""
msgid "Device %s already exists"
msgstr ""
#: neutron/agent/linux/iptables_firewall.py:140
#: neutron/agent/linux/iptables_firewall.py:161
#, python-format
msgid "Attempted to update port filter which is not filtered %s"
msgstr ""
#: neutron/agent/linux/iptables_firewall.py:151
#: neutron/agent/linux/iptables_firewall.py:172
#, python-format
msgid "Attempted to remove port filter which is not filtered %r"
msgstr ""
@ -190,7 +179,7 @@ msgstr ""
msgid "Loaded extension: %s"
msgstr ""
#: neutron/api/v2/base.py:95
#: neutron/api/v2/base.py:96
msgid "Allow sorting is enabled because native pagination requires native sorting"
msgstr ""
@ -234,9 +223,9 @@ msgstr ""
#: neutron/cmd/eventlet/plugins/hyperv_neutron_agent.py:43
#: neutron/plugins/ibm/agent/sdnve_neutron_agent.py:262
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1060
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:346
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1611
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1100
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:357
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1636
msgid "Agent initialized successfully, now running... "
msgstr ""
@ -288,7 +277,7 @@ msgstr ""
msgid "Adding network %(net)s to agent %(agent)s on host %(host)s"
msgstr ""
#: neutron/db/db_base_plugin_v2.py:656 neutron/plugins/ml2/plugin.py:882
#: neutron/db/db_base_plugin_v2.py:744 neutron/plugins/ml2/plugin.py:891
#, python-format
msgid ""
"Found port (%(port_id)s, %(ip)s) having IP allocation on subnet "
@ -300,23 +289,23 @@ msgstr ""
msgid "Found invalid IP address in pool: %(start)s - %(end)s:"
msgstr ""
#: neutron/db/ipam_backend_mixin.py:227
#: neutron/db/ipam_backend_mixin.py:230
#, python-format
msgid ""
"Validation for CIDR: %(new_cidr)s failed - overlaps with subnet "
"%(subnet_id)s (CIDR: %(cidr)s)"
msgstr ""
#: neutron/db/ipam_backend_mixin.py:265
#: neutron/db/ipam_backend_mixin.py:268
msgid "Specified IP addresses do not match the subnet IP version"
msgstr ""
#: neutron/db/ipam_backend_mixin.py:269
#: neutron/db/ipam_backend_mixin.py:272
#, python-format
msgid "Found pool larger than subnet CIDR:%(start)s - %(end)s"
msgstr ""
#: neutron/db/ipam_backend_mixin.py:290
#: neutron/db/ipam_backend_mixin.py:293
#, python-format
msgid "Found overlapping ranges: %(l_range)s and %(r_range)s"
msgstr ""
@ -327,7 +316,7 @@ msgid ""
"rescheduling is disabled."
msgstr ""
#: neutron/db/l3_db.py:1190
#: neutron/db/l3_db.py:1198
#, python-format
msgid "Skipping port %s as no IP is configure on it"
msgstr ""
@ -351,14 +340,14 @@ msgstr ""
msgid "SNAT already bound to a service node."
msgstr ""
#: neutron/db/l3_hamode_db.py:188
#: neutron/db/l3_hamode_db.py:191
#, python-format
msgid ""
"Attempt %(count)s to allocate a VRID in the network %(network)s for the "
"router %(router)s"
msgstr ""
#: neutron/db/l3_hamode_db.py:271
#: neutron/db/l3_hamode_db.py:274
#, python-format
msgid ""
"Number of active agents lower than max_l3_agents_per_router. L3 agents "
@ -469,11 +458,11 @@ msgstr ""
msgid "Fake SDNVE controller: get controller"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:147
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:146
msgid "Set a new controller if needed."
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:153
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:152
#, python-format
msgid "Set the controller to a new controller: %s"
msgstr ""
@ -568,26 +557,26 @@ msgstr ""
msgid "Got %(alias)s extension from driver '%(drv)s'"
msgstr ""
#: neutron/plugins/ml2/plugin.py:141
#: neutron/plugins/ml2/plugin.py:150
msgid "Modular L2 Plugin initialization complete"
msgstr ""
#: neutron/plugins/ml2/plugin.py:292
#: neutron/plugins/ml2/plugin.py:301
#, python-format
msgid "Attempt %(count)s to bind port %(port)s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:688
#: neutron/plugins/ml2/plugin.py:697
#, python-format
msgid "Port %s was deleted concurrently"
msgstr ""
#: neutron/plugins/ml2/plugin.py:700
#: neutron/plugins/ml2/plugin.py:709
#, python-format
msgid "Subnet %s was deleted concurrently"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1387
#: neutron/plugins/ml2/plugin.py:1396
#, python-format
msgid ""
"Binding info for port %s was not found, it might have been deleted "
@ -625,42 +614,12 @@ msgstr ""
msgid "VlanTypeDriver initialization complete"
msgstr ""
#: neutron/plugins/ml2/drivers/arista/mechanism_arista.py:112
#, python-format
msgid "Network %s is not created as it is not found in Arista DB"
msgstr ""
#: neutron/plugins/ml2/drivers/arista/mechanism_arista.py:125
#, python-format
msgid "Network name changed to %s"
msgstr ""
#: neutron/plugins/ml2/drivers/arista/mechanism_arista.py:157
#, python-format
msgid "Network %s is not updated as it is not found in Arista DB"
msgstr ""
#: neutron/plugins/ml2/drivers/arista/mechanism_arista.py:266
#, python-format
msgid "VM %s is not created as it is not found in Arista DB"
msgstr ""
#: neutron/plugins/ml2/drivers/arista/mechanism_arista.py:280
#, python-format
msgid "Port name changed to %s"
msgstr ""
#: neutron/plugins/ml2/drivers/arista/mechanism_arista.py:354
#, python-format
msgid "VM %s is not updated as it is not found in Arista DB"
msgstr ""
#: neutron/plugins/ml2/drivers/freescale/mechanism_fslsdn.py:40
msgid "Initializing CRD client... "
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py:32
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:784
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:802
#, python-format
msgid ""
"Skipping ARP spoofing rules for port '%s' because it has port security "
@ -672,84 +631,87 @@ msgstr ""
msgid "Clearing orphaned ARP spoofing entries for devices %s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:791
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:831
msgid "Stopping linuxbridge agent."
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:821
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:861
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:100
#: neutron/plugins/oneconvergence/agent/nvsd_neutron_agent.py:89
#, python-format
msgid "RPC agent_id: %s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:888
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:210
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1226
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:928
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:219
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1246
#, python-format
msgid "Port %(device)s updated. Details: %(details)s"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:926
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:966
#, python-format
msgid "Device %s not defined on plugin"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:933
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1273
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1290
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:973
#, python-format
msgid "Attachment %s removed"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:945
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:236
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1302
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:985
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:247
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1324
#, python-format
msgid "Port %s updated."
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1003
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1043
msgid "LinuxBridge Agent RPC Daemon Started!"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1013
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:252
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1500
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1053
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:263
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1524
msgid "Agent out of sync with plugin!"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1053
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1093
#: neutron/plugins/ml2/drivers/mlnx/agent/eswitch_neutron_agent.py:43
#, python-format
msgid "Interface mappings: %s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:192
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:180
#, python-format
msgid "Device %(device)s spoofcheck %(spoofcheck)s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:201
#, python-format
msgid "No device with MAC %s defined on agent."
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:217
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:228
#, python-format
msgid "Device with MAC %s not defined on plugin"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:224
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:235
#, python-format
msgid "Removing device with mac_address %s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:245
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:256
msgid "SRIOV NIC Agent RPC Daemon Started!"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:334
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:345
#, python-format
msgid "Physical Devices mappings: %s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:335
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:346
#, python-format
msgid "Exclude Devices: %s"
msgstr ""
@ -763,62 +725,72 @@ msgstr ""
msgid "L2 Agent operating in DVR Mode with MAC %s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:591
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:592
#, python-format
msgid "Assigning %(vlan_id)s as local vlan for net-id=%(net_uuid)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:655
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:656
#, python-format
msgid "Reclaiming vlan = %(vlan_id)s from net-id = %(net_uuid)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:777
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:793
#, python-format
msgid "Configuration for device %s completed."
msgid "Configuration for devices up %(up)s and devices down %(down)s completed."
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:816
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:834
#, python-format
msgid "port_unbound(): net_uuid %s not in local_vlan_map"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:882
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:900
#, python-format
msgid "Adding %s to list of bridges."
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:960
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:978
#, python-format
msgid "Mapping physical network %(physical_network)s to bridge %(bridge)s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1116
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1132
#, python-format
msgid "Port '%(port_name)s' has lost its vlan tag '%(vlan_tag)d'!"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1220
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1240
#, python-format
msgid ""
"Port %s was not found on the integration bridge and will therefore not be"
" processed"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1261
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1279
#, python-format
msgid "Ancillary Port %s added"
msgid "Ancillary Ports %s added"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1529
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1296
#, python-format
msgid "Ports %s removed"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1312
#, python-format
msgid "Ancillary ports %s removed"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1553
msgid "Agent tunnel out of sync with plugin!"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1630
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1655
msgid "Agent caught SIGTERM, quitting daemon loop."
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1634
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1659
msgid "Agent caught SIGHUP, resetting."
msgstr ""
@ -830,6 +802,27 @@ msgstr ""
msgid "NVSD Agent initialized successfully, now running... "
msgstr ""
#: neutron/quota/__init__.py:180
msgid ""
"ConfDriver is used as quota_driver because the loaded plugin does not "
"support 'quotas' table."
msgstr ""
#: neutron/quota/__init__.py:191
#, python-format
msgid "Loaded quota_driver: %s."
msgstr ""
#: neutron/quota/resource_registry.py:168
#, python-format
msgid "Creating instance of CountableResource for resource:%s"
msgstr ""
#: neutron/quota/resource_registry.py:174
#, python-format
msgid "Creating instance of TrackedResource for resource:%s"
msgstr ""
#: neutron/scheduler/dhcp_agent_scheduler.py:110
#, python-format
msgid "Agent %s already present"
@ -844,10 +837,6 @@ msgstr ""
msgid "Default provider is not specified for service type %s"
msgstr ""
#: neutron/services/l3_router/l3_arista.py:247
msgid "Syncing Neutron Router DB <-> EOS"
msgstr ""
#: neutron/services/metering/agents/metering_agent.py:96
#, python-format
msgid "Loading Metering driver %s"


@ -6,45 +6,27 @@
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: neutron 7.0.0.0b2.dev396\n"
"Project-Id-Version: neutron 7.0.0.0b3.dev96\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
#: neutron/policy.py:116
#, python-format
msgid "Unable to find data type descriptor for attribute %s"
msgstr ""
#: neutron/quota.py:227
msgid ""
"The quota driver neutron.quota.ConfDriver is deprecated as of Liberty. "
"neutron.db.quota_db.DbQuotaDriver should be used in its place"
msgstr ""
#: neutron/quota.py:241
#, python-format
msgid "%s is already registered."
msgstr ""
#: neutron/quota.py:341
msgid ""
"Registering resources to apply quota limits to using the quota_items "
"option is deprecated as of Liberty.Resource REST controllers should take "
"care of registering resources with the quota engine."
msgstr ""
#: neutron/agent/rpc.py:119
#: neutron/agent/rpc.py:121
msgid "DVR functionality requires a server upgrade."
msgstr ""
#: neutron/agent/rpc.py:142
#: neutron/agent/rpc.py:199
msgid "Tunnel synchronization requires a server upgrade."
msgstr ""
@ -59,17 +41,17 @@ msgid ""
"falling back to old security_group_rules_for_devices which scales worse."
msgstr ""
#: neutron/agent/common/ovs_lib.py:382
#: neutron/agent/common/ovs_lib.py:378
#, python-format
msgid "Found not yet ready openvswitch port: %s"
msgstr ""
#: neutron/agent/common/ovs_lib.py:385
#: neutron/agent/common/ovs_lib.py:381
#, python-format
msgid "Found failed openvswitch port: %s"
msgstr ""
#: neutron/agent/common/ovs_lib.py:447
#: neutron/agent/common/ovs_lib.py:439
#, python-format
msgid "ofport: %(ofport)s for VIF: %(vif)s is not a positive integer"
msgstr ""
@ -101,8 +83,8 @@ msgid ""
"port %(port_id)s, for router %(router_id)s will be considered"
msgstr ""
#: neutron/agent/dhcp/agent.py:570 neutron/agent/l3/agent.py:627
#: neutron/agent/metadata/agent.py:310
#: neutron/agent/dhcp/agent.py:570 neutron/agent/l3/agent.py:633
#: neutron/agent/metadata/agent.py:314
#: neutron/services/metering/agents/metering_agent.py:278
msgid ""
"Neutron server does not support state report. State report for this agent"
@ -163,11 +145,11 @@ msgstr ""
msgid "Attempted to get traffic counters of chain %s which does not exist"
msgstr ""
#: neutron/agent/metadata/agent.py:133
#: neutron/agent/metadata/agent.py:137
msgid "Server does not support metadata RPC, fallback to using neutron client"
msgstr ""
#: neutron/agent/metadata/agent.py:246
#: neutron/agent/metadata/agent.py:250
msgid ""
"The remote metadata server responded with Forbidden. This response "
"usually occurs when shared secrets do not match."
@ -285,7 +267,7 @@ msgstr ""
msgid "No active L3 agents found for SNAT"
msgstr ""
#: neutron/db/securitygroups_rpc_base.py:361
#: neutron/db/securitygroups_rpc_base.py:375
#, python-format
msgid "No valid gateway port on subnet %s is found for IPv6 RA"
msgstr ""
@ -339,7 +321,7 @@ msgstr ""
msgid "Interface %s not found in the heleos back-end, likely already deleted"
msgstr ""
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:428
#: neutron/plugins/ibm/sdnve_neutron_plugin.py:427
#, python-format
msgid "Ignoring admin_state_up=False for router=%r. Overriding with True"
msgstr ""
@ -349,28 +331,28 @@ msgstr ""
msgid "Could not expand segment %s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:523
#: neutron/plugins/ml2/plugin.py:532
#, python-format
msgid ""
"In _notify_port_updated(), no bound segment for port %(port_id)s on "
"network %(network_id)s"
msgstr ""
#: neutron/plugins/ml2/plugin.py:773
#: neutron/plugins/ml2/plugin.py:782
msgid "A concurrent port creation has occurred"
msgstr ""
#: neutron/plugins/ml2/plugin.py:1446
#: neutron/plugins/ml2/plugin.py:1455
#, python-format
msgid "Port %s not found during update"
msgstr ""
#: neutron/plugins/ml2/rpc.py:76
#: neutron/plugins/ml2/rpc.py:78
#, python-format
msgid "Device %(device)s requested by agent %(agent_id)s not found in database"
msgstr ""
#: neutron/plugins/ml2/rpc.py:90
#: neutron/plugins/ml2/rpc.py:92
#, python-format
msgid ""
"Device %(device)s requested by agent %(agent_id)s on network "
@ -429,38 +411,45 @@ msgstr ""
msgid "Port %(port)s updated by agent %(agent)s isn't bound to any segment"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:91
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:94
msgid "VXLAN is enabled, a valid local_ip must be provided"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:105
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:116
msgid "Invalid Network ID, will lead to incorrect bridge name"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:112
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:123
msgid "Invalid VLAN ID, will lead to incorrect subinterface name"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:119
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:130
msgid "Invalid Interface ID, will lead to incorrect tap device name"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:128
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:139
#, python-format
msgid "Invalid Segmentation ID: %s, will lead to incorrect vxlan device name"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:520
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:556
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:153
#, python-format
msgid ""
"Invalid VXLAN Group: %s, must be an address or network (in CIDR notation)"
" in a multicast range"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:559
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:596
#, python-format
msgid ""
"Option \"%(option)s\" must be supported by command \"%(command)s\" to "
"enable %(mode)s mode"
msgstr ""
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:550
#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:590
msgid ""
"VXLAN muticast group must be provided in vxlan_group option to enable "
"VXLAN muticast group(s) must be provided in vxlan_group option to enable "
"VXLAN MCAST mode"
msgstr ""
@ -470,21 +459,26 @@ msgstr ""
msgid "Cannot find vf index for pci slot %s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/eswitch_manager.py:285
#: neutron/plugins/ml2/drivers/mech_sriov/agent/eswitch_manager.py:309
#, python-format
msgid "device pci mismatch: %(device_mac)s - %(pci_slot)s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py:126
#: neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py:142
#, python-format
msgid "Cannot find vfs %(vfs)s in device %(dev_name)s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py:142
#: neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py:158
#, python-format
msgid "failed to parse vf link show line %(line)s: for %(device)s"
msgstr ""
#: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:178
#, python-format
msgid "Failed to set spoofcheck for device %s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_dvr_neutron_agent.py:163
#, python-format
msgid ""
@ -500,38 +494,38 @@ msgid ""
"message: %s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:534
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:535
#, python-format
msgid "Action %s not supported"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:938
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:956
#, python-format
msgid ""
"Creating an interface named %(name)s exceeds the %(limit)d character "
"limitation. It was shortened to %(new_name)s to fit."
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1133
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1149
#, python-format
msgid "VIF port: %s has no ofport configured, and might not be able to transmit"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1244
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1261
#, python-format
msgid "Device %s not defined on plugin"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1404
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1426
#, python-format
msgid "Invalid remote IP: %s"
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1447
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1469
msgid "OVS is restarted. OVSNeutronAgent will reset bridges and recover ports."
msgstr ""
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1450
#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1472
msgid ""
"OVS is dead. OVSNeutronAgent will keep running and checking OVS status "
"periodically."
@ -541,6 +535,24 @@ msgstr ""
msgid "No Token, Re-login"
msgstr ""
#: neutron/quota/__init__.py:186
msgid ""
"The quota driver neutron.quota.ConfDriver is deprecated as of Liberty. "
"neutron.db.quota.driver.DbQuotaDriver should be used in its place"
msgstr ""
#: neutron/quota/__init__.py:259
msgid ""
"Registering resources to apply quota limits to using the quota_items "
"option is deprecated as of Liberty.Resource REST controllers should take "
"care of registering resources with the quota engine."
msgstr ""
#: neutron/quota/resource_registry.py:215
#, python-format
msgid "%s is already registered"
msgstr ""
#: neutron/scheduler/dhcp_agent_scheduler.py:58
#, python-format
msgid "DHCP agent %s is not active"

File diff suppressed because it is too large


@ -8,16 +8,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Portuguese (Brazil) (http://www.transifex.com/projects/p/"
"Language-Team: Portuguese (Brazil) (http://www.transifex.com/openstack/"
"neutron/language/pt_BR/)\n"
"Language: pt_BR\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=2; plural=(n > 1);\n"
#, python-format
@ -74,10 +74,6 @@ msgstr ""
msgid "Allowable flat physical_network names: %s"
msgstr "Nomes permitidos de rede flat physical_network : %s"
#, python-format
msgid "Ancillary Port %s added"
msgstr "Porta auxiliar %s adicionada"
msgid "Arbitrary flat physical_network names allowed"
msgstr "Nomes arbitrários de rede flat physical_network permitidos"
@ -215,10 +211,6 @@ msgstr "Inicialização de plug-in L2 modular concluída"
msgid "Network VLAN ranges: %s"
msgstr "Intervalos de VLAN de rede: %s"
#, python-format
msgid "Network name changed to %s"
msgstr "Nome da rede alterado para %s"
#, python-format
msgid "Neutron service started, listening on %(host)s:%(port)s"
msgstr "Serviço Neutron iniciado, escutando em %(host)s:%(port)s"


@ -8,16 +8,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Chinese (China) (http://www.transifex.com/projects/p/neutron/"
"Language-Team: Chinese (China) (http://www.transifex.com/openstack/neutron/"
"language/zh_CN/)\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=1; plural=0;\n"
#, python-format
@ -101,10 +101,6 @@ msgstr "已尝试更新未过滤的端口过滤器 %s"
msgid "Config paste file: %s"
msgstr "配置粘贴文件:%s"
#, python-format
msgid "Configuration for device %s completed."
msgstr "设备 %s 的配置已完成。"
#, python-format
msgid "Configured mechanism driver names: %s"
msgstr "配置装置驱动名称: %s"
@ -227,10 +223,6 @@ msgstr "L2插件模块初始化完成"
msgid "Network VLAN ranges: %s"
msgstr "网络 VLAN 范围:%s"
#, python-format
msgid "Network name changed to %s"
msgstr "网络名改变为 %s"
#, python-format
msgid "Neutron service started, listening on %(host)s:%(port)s"
msgstr "Neutron服务启动正在%(host)s:%(port)s上监听"
@ -266,10 +258,6 @@ msgstr "端口 %s 已更新。"
msgid "Port %s was deleted concurrently"
msgstr "端口 %s 被同时删除"
#, python-format
msgid "Port name changed to %s"
msgstr "端口名改变为 %s"
#, python-format
msgid "Preparing filters for devices %s"
msgstr "正在为设备 %s 准备过滤器"


@ -7,16 +7,16 @@ msgid ""
msgstr ""
"Project-Id-Version: Neutron\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2015-07-27 06:07+0000\n"
"PO-Revision-Date: 2015-07-25 03:05+0000\n"
"POT-Creation-Date: 2015-08-10 06:10+0000\n"
"PO-Revision-Date: 2015-08-01 03:37+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Chinese (Taiwan) (http://www.transifex.com/projects/p/neutron/"
"Language-Team: Chinese (Taiwan) (http://www.transifex.com/openstack/neutron/"
"language/zh_TW/)\n"
"Language: zh_TW\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 1.3\n"
"Generated-By: Babel 2.0\n"
"Plural-Forms: nplurals=1; plural=0;\n"
#, python-format


@ -19,6 +19,7 @@ import netaddr
from oslo_log import log as logging
from oslo_serialization import jsonutils
import requests
import six
from neutron.common import exceptions as n_exc
from neutron.extensions import providernet
@ -32,6 +33,22 @@ from neutron.plugins.cisco.extensions import n1kv
LOG = logging.getLogger(__name__)
def safe_b64_encode(s):
if six.PY3:
method = base64.encodebytes
else:
method = base64.encodestring
if isinstance(s, six.text_type):
s = s.encode('utf-8')
encoded_string = method(s).rstrip()
if six.PY3:
return encoded_string.decode('utf-8')
else:
return encoded_string
class Client(object):
"""
@ -502,7 +519,7 @@ class Client(object):
"""
username = c_cred.Store.get_username(host_ip)
password = c_cred.Store.get_password(host_ip)
auth = base64.encodestring("%s:%s" % (username, password)).rstrip()
auth = safe_b64_encode("%s:%s" % (username, password))
header = {"Authorization": "Basic %s" % auth}
return header
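The hunk above swaps a direct `base64.encodestring` call for the new `safe_b64_encode` helper so the Basic-auth header can be built on both Python 2 and 3 (`encodestring` was renamed to `encodebytes` in Python 3 and only accepts bytes). A minimal, Python 3-only sketch of the same idea; `make_auth_header` is an illustrative stand-in for the client method that builds the header, not its actual name:

```python
import base64

def safe_b64_encode(s):
    # Python 3: base64.encodestring was renamed to encodebytes and
    # only accepts bytes, so text must be encoded first.
    if isinstance(s, str):
        s = s.encode('utf-8')
    # encodebytes appends a trailing newline; rstrip() removes it.
    return base64.encodebytes(s).rstrip().decode('utf-8')

def make_auth_header(username, password):
    # Mirrors the Basic-auth header construction in the hunk above.
    auth = safe_b64_encode("%s:%s" % (username, password))
    return {"Authorization": "Basic %s" % auth}

print(make_auth_header("admin", "secret"))
# {'Authorization': 'Basic YWRtaW46c2VjcmV0'}
```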


@ -21,3 +21,8 @@ from neutron.common import exceptions
class MechanismDriverError(exceptions.NeutronException):
"""Mechanism driver call failed."""
message = _("%(method)s failed.")
class ExtensionDriverError(exceptions.InvalidInput):
"""Extension driver call failed."""
message = _("Extension %(driver)s failed.")


@ -1,12 +0,0 @@
Arista Neutron ML2 Mechanism Driver
This mechanism driver implements the ML2 Driver API and is used to manage virtual and physical networks using Arista hardware.
Note: The initial version of this driver supports VLANs only.
For more details on usage, please refer to:
https://wiki.openstack.org/wiki/Arista-neutron-ml2-driver
The back-end of the driver has been moved to:
https://github.com/stackforge/networking-arista


@ -1,128 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg
# Arista ML2 Mechanism driver specific configuration knobs.
#
# Following are user configurable options for Arista ML2 Mechanism
# driver. The eapi_username, eapi_password, and eapi_host are
# required options. Region Name must be the same as the one used by the
# Keystone service. This option is available to support multiple
# OpenStack/Neutron controllers.
ARISTA_DRIVER_OPTS = [
cfg.StrOpt('eapi_username',
default='',
help=_('Username for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail.')),
cfg.StrOpt('eapi_password',
default='',
secret=True, # do not expose value in the logs
help=_('Password for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail.')),
cfg.StrOpt('eapi_host',
default='',
help=_('Arista EOS IP address. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail.')),
cfg.BoolOpt('use_fqdn',
default=True,
help=_('Defines if hostnames are sent to Arista EOS as FQDNs '
'("node1.domain.com") or as short names ("node1"). '
'This is optional. If not set, a value of "True" '
'is assumed.')),
cfg.IntOpt('sync_interval',
default=180,
help=_('Sync interval in seconds between Neutron plugin and '
'EOS. This interval defines how often the '
'synchronization is performed. This is an optional '
'field. If not set, a value of 180 seconds is '
'assumed.')),
cfg.StrOpt('region_name',
default='RegionOne',
help=_('Defines Region Name that is assigned to this OpenStack '
'Controller. This is useful when multiple '
'OpenStack/Neutron controllers are managing the same '
'Arista HW clusters. Note that this name must match '
'with the region name registered (or known) to keystone '
'service. Authentication with Keysotne is performed by '
'EOS. This is optional. If not set, a value of '
'"RegionOne" is assumed.'))
]
""" Arista L3 Service Plugin specific configuration knobs.
Following are user configurable options for Arista L3 plugin
driver. The eapi_username, eapi_password, and eapi_host are
required options.
"""
ARISTA_L3_PLUGIN = [
cfg.StrOpt('primary_l3_host_username',
default='',
help=_('Username for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.StrOpt('primary_l3_host_password',
default='',
secret=True, # do not expose value in the logs
help=_('Password for Arista EOS. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.StrOpt('primary_l3_host',
default='',
help=_('Arista EOS IP address. This is a required field. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.StrOpt('secondary_l3_host',
default='',
help=_('Arista EOS IP address for second Switch MLAGed with '
'the first one. This an optional field, however, if '
'mlag_config flag is set, then this is required. '
'If not set, all communications to Arista EOS '
'will fail')),
cfg.BoolOpt('mlag_config',
default=False,
help=_('This flag is used to indicate if Arista Switches are '
'configured in MLAG mode. If yes, all L3 config '
'is pushed to both the switches automatically. '
'If this flag is set to True, ensure to specify IP '
'addresses of both switches. '
'This is optional. If not set, a value of "False" '
'is assumed.')),
cfg.BoolOpt('use_vrf',
default=False,
help=_('A "True" value for this flag indicates to create a '
'router in VRF. If not set, all routers are created '
'in default VRF. '
'This is optional. If not set, a value of "False" '
'is assumed.')),
cfg.IntOpt('l3_sync_interval',
default=180,
help=_('Sync interval in seconds between L3 Service plugin '
'and EOS. This interval defines how often the '
'synchronization is performed. This is an optional '
'field. If not set, a value of 180 seconds is assumed'))
]
cfg.CONF.register_opts(ARISTA_L3_PLUGIN, "l3_arista")
cfg.CONF.register_opts(ARISTA_DRIVER_OPTS, "ml2_arista")


@ -1,80 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sqlalchemy as sa
from neutron.db import model_base
from neutron.db import models_v2
UUID_LEN = 36
STR_LEN = 255
class AristaProvisionedNets(model_base.BASEV2, models_v2.HasId,
models_v2.HasTenant):
"""Stores networks provisioned on Arista EOS.
Saves the segmentation ID for each network that is provisioned
on EOS. This information is used during synchronization between
Neutron and EOS.
"""
__tablename__ = 'arista_provisioned_nets'
network_id = sa.Column(sa.String(UUID_LEN))
segmentation_id = sa.Column(sa.Integer)
def eos_network_representation(self, segmentation_type):
return {u'networkId': self.network_id,
u'segmentationTypeId': self.segmentation_id,
u'segmentationType': segmentation_type}
class AristaProvisionedVms(model_base.BASEV2, models_v2.HasId,
models_v2.HasTenant):
"""Stores VMs provisioned on Arista EOS.
All VMs launched on physical hosts connected to Arista
Switches are remembered
"""
__tablename__ = 'arista_provisioned_vms'
vm_id = sa.Column(sa.String(STR_LEN))
host_id = sa.Column(sa.String(STR_LEN))
port_id = sa.Column(sa.String(UUID_LEN))
network_id = sa.Column(sa.String(UUID_LEN))
def eos_vm_representation(self):
return {u'vmId': self.vm_id,
u'host': self.host_id,
u'ports': {self.port_id: [{u'portId': self.port_id,
u'networkId': self.network_id}]}}
def eos_port_representation(self):
return {u'vmId': self.vm_id,
u'host': self.host_id,
u'portId': self.port_id,
u'networkId': self.network_id}
class AristaProvisionedTenants(model_base.BASEV2, models_v2.HasId,
models_v2.HasTenant):
"""Stores Tenants provisioned on Arista EOS.
Tenants list is maintained for sync between Neutron and EOS.
"""
__tablename__ = 'arista_provisioned_tenants'
def eos_tenant_representation(self):
return {u'tenantId': self.tenant_id}
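The removed models carry `eos_*_representation` helpers that translate a Neutron DB row into the JSON-style dict EOS expects. A plain-object sketch of the network case (no SQLAlchemy; the class name and attributes mirror the removed model but this is not the real table definition):

```python
# Plain-object sketch of the removed AristaProvisionedNets row; the real
# model is a SQLAlchemy table, but the EOS representation is just a dict.
class ProvisionedNetSketch(object):
    def __init__(self, network_id, segmentation_id):
        self.network_id = network_id
        self.segmentation_id = segmentation_id

    def eos_network_representation(self, segmentation_type):
        return {u'networkId': self.network_id,
                u'segmentationTypeId': self.segmentation_id,
                u'segmentationType': segmentation_type}

net = ProvisionedNetSketch('net-1', 1001)
print(net.eos_network_representation('vlan'))
# {'networkId': 'net-1', 'segmentationTypeId': 1001, 'segmentationType': 'vlan'}
```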


@ -1,35 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Exceptions used by Arista ML2 Mechanism Driver."""
from neutron.common import exceptions
class AristaRpcError(exceptions.NeutronException):
message = _('%(msg)s')
class AristaConfigError(exceptions.NeutronException):
message = _('%(msg)s')
class AristaServicePluginRpcError(exceptions.NeutronException):
message = _('%(msg)s')
class AristaServicePluginConfigError(exceptions.NeutronException):
message = _('%(msg)s')


@ -1,470 +0,0 @@
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import threading
from networking_arista.common import db_lib
from networking_arista.ml2 import arista_ml2
from oslo_config import cfg
from oslo_log import log as logging
from neutron.common import constants as n_const
from neutron.i18n import _LI
from neutron.plugins.common import constants as p_const
from neutron.plugins.ml2.common import exceptions as ml2_exc
from neutron.plugins.ml2 import driver_api
from neutron.plugins.ml2.drivers.arista import config # noqa
from neutron.plugins.ml2.drivers.arista import db
from neutron.plugins.ml2.drivers.arista import exceptions as arista_exc
LOG = logging.getLogger(__name__)
EOS_UNREACHABLE_MSG = _('Unable to reach EOS')
class AristaDriver(driver_api.MechanismDriver):
"""Ml2 Mechanism driver for Arista networking hardware.
Remembers all networks and VMs that are provisioned on Arista Hardware.
Does not send network provisioning request if the network has already been
provisioned before for the given port.
"""
def __init__(self, rpc=None):
self.rpc = rpc or arista_ml2.AristaRPCWrapper()
self.db_nets = db.AristaProvisionedNets()
self.db_vms = db.AristaProvisionedVms()
self.db_tenants = db.AristaProvisionedTenants()
self.ndb = db_lib.NeutronNets()
confg = cfg.CONF.ml2_arista
self.segmentation_type = db_lib.VLAN_SEGMENTATION
self.timer = None
self.eos = arista_ml2.SyncService(self.rpc, self.ndb)
self.sync_timeout = confg['sync_interval']
self.eos_sync_lock = threading.Lock()
def initialize(self):
self.rpc.register_with_eos()
self._cleanup_db()
self.rpc.check_cli_commands()
# Registering with EOS updates self.rpc.region_updated_time. Clear it
# to force an initial sync
self.rpc.clear_region_updated_time()
self._synchronization_thread()
def create_network_precommit(self, context):
"""Remember the tenant, and network information."""
network = context.current
segments = context.network_segments
if segments[0][driver_api.NETWORK_TYPE] != p_const.TYPE_VLAN:
# If network type is not VLAN, do nothing
return
network_id = network['id']
tenant_id = network['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
segmentation_id = segments[0]['segmentation_id']
with self.eos_sync_lock:
db_lib.remember_tenant(tenant_id)
db_lib.remember_network(tenant_id,
network_id,
segmentation_id)
def create_network_postcommit(self, context):
"""Provision the network on the Arista Hardware."""
network = context.current
network_id = network['id']
network_name = network['name']
tenant_id = network['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
segments = context.network_segments
vlan_id = segments[0]['segmentation_id']
shared_net = network['shared']
with self.eos_sync_lock:
if db_lib.is_network_provisioned(tenant_id, network_id):
try:
network_dict = {
'network_id': network_id,
'segmentation_id': vlan_id,
'network_name': network_name,
'shared': shared_net}
self.rpc.create_network(tenant_id, network_dict)
except arista_exc.AristaRpcError:
LOG.info(EOS_UNREACHABLE_MSG)
raise ml2_exc.MechanismDriverError()
else:
LOG.info(_LI('Network %s is not created as it is not found in '
'Arista DB'), network_id)
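The driver above splits each operation into a `*_precommit` hook, which only does local bookkeeping inside the Neutron DB transaction, and a `*_postcommit` hook, which talks to the external backend after the transaction commits. A schematic, dependency-free sketch of that split (the dict and list below stand in for `db_lib` and the EOS RPC wrapper; this is not the driver's actual code):

```python
# Schematic of the ML2 precommit/postcommit split the driver above follows.
class MechanismDriverSketch(object):
    def __init__(self):
        self.db = {}             # stands in for db_lib bookkeeping
        self.backend_calls = []  # stands in for the EOS RPC wrapper

    def create_network_precommit(self, net):
        # Runs inside the Neutron DB transaction: only local bookkeeping,
        # no external calls that could stall or fail the transaction.
        self.db[net['id']] = net['segmentation_id']

    def create_network_postcommit(self, net):
        # Runs after commit: safe to call the external backend, and a
        # failure here can be raised to trigger resource cleanup.
        if net['id'] in self.db:
            self.backend_calls.append(('create_network', net['id']))

driver = MechanismDriverSketch()
net = {'id': 'net-1', 'segmentation_id': 100}
driver.create_network_precommit(net)
driver.create_network_postcommit(net)
print(driver.backend_calls)  # [('create_network', 'net-1')]
```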
def update_network_precommit(self, context):
"""At the moment we only support network name change
Any other change in network is not supported at this time.
We do not store the network names, therefore, no DB store
action is performed here.
"""
new_network = context.current
orig_network = context.original
if new_network['name'] != orig_network['name']:
LOG.info(_LI('Network name changed to %s'), new_network['name'])
def update_network_postcommit(self, context):
"""At the moment we only support network name change
If network name is changed, a new network create request is
sent to the Arista Hardware.
"""
new_network = context.current
orig_network = context.original
if ((new_network['name'] != orig_network['name']) or
(new_network['shared'] != orig_network['shared'])):
network_id = new_network['id']
network_name = new_network['name']
tenant_id = new_network['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
vlan_id = new_network['provider:segmentation_id']
shared_net = new_network['shared']
with self.eos_sync_lock:
if db_lib.is_network_provisioned(tenant_id, network_id):
try:
network_dict = {
'network_id': network_id,
'segmentation_id': vlan_id,
'network_name': network_name,
'shared': shared_net}
self.rpc.create_network(tenant_id, network_dict)
except arista_exc.AristaRpcError:
LOG.info(EOS_UNREACHABLE_MSG)
raise ml2_exc.MechanismDriverError()
else:
LOG.info(_LI('Network %s is not updated as it is not found'
' in Arista DB'), network_id)
def delete_network_precommit(self, context):
"""Delete the network infromation from the DB."""
network = context.current
network_id = network['id']
tenant_id = network['tenant_id']
with self.eos_sync_lock:
if db_lib.is_network_provisioned(tenant_id, network_id):
db_lib.forget_network(tenant_id, network_id)
def delete_network_postcommit(self, context):
"""Send network delete request to Arista HW."""
network = context.current
segments = context.network_segments
if segments[0][driver_api.NETWORK_TYPE] != p_const.TYPE_VLAN:
# If network type is not VLAN, do nothing
return
network_id = network['id']
tenant_id = network['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
with self.eos_sync_lock:
# Succeed deleting network in case EOS is not accessible.
# EOS state will be updated by sync thread once EOS gets
# alive.
try:
self.rpc.delete_network(tenant_id, network_id)
# if necessary, delete tenant as well.
self.delete_tenant(tenant_id)
except arista_exc.AristaRpcError:
LOG.info(EOS_UNREACHABLE_MSG)
raise ml2_exc.MechanismDriverError()
def create_port_precommit(self, context):
"""Remember the infromation about a VM and its ports
A VM information, along with the physical host information
is saved.
"""
port = context.current
device_id = port['device_id']
device_owner = port['device_owner']
host = context.host
# device_id and device_owner are set on VM boot
is_vm_boot = device_id and device_owner
if host and is_vm_boot:
port_id = port['id']
network_id = port['network_id']
tenant_id = port['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
with self.eos_sync_lock:
if not db_lib.is_network_provisioned(tenant_id, network_id):
# Ignore this request if network is not provisioned
return
db_lib.remember_tenant(tenant_id)
db_lib.remember_vm(device_id, host, port_id,
network_id, tenant_id)
def create_port_postcommit(self, context):
"""Plug a physical host into a network.
Send provisioning request to Arista Hardware to plug a host
into appropriate network.
"""
port = context.current
device_id = port['device_id']
device_owner = port['device_owner']
host = context.host
# device_id and device_owner are set on VM boot
is_vm_boot = device_id and device_owner
if host and is_vm_boot:
port_id = port['id']
port_name = port['name']
network_id = port['network_id']
tenant_id = port['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
with self.eos_sync_lock:
hostname = self._host_name(host)
vm_provisioned = db_lib.is_vm_provisioned(device_id,
host,
port_id,
network_id,
tenant_id)
# If network does not exist under this tenant,
# it may be a shared network. Get shared network owner Id
net_provisioned = (
db_lib.is_network_provisioned(tenant_id, network_id) or
self.ndb.get_shared_network_owner_id(network_id)
)
if vm_provisioned and net_provisioned:
try:
self.rpc.plug_port_into_network(device_id,
hostname,
port_id,
network_id,
tenant_id,
port_name,
device_owner)
except arista_exc.AristaRpcError:
LOG.info(EOS_UNREACHABLE_MSG)
raise ml2_exc.MechanismDriverError()
else:
LOG.info(_LI('VM %s is not created as it is not found in '
'Arista DB'), device_id)
def update_port_precommit(self, context):
"""Update the name of a given port.
At the moment we only support port name change.
Any other change to port is not supported at this time.
We do not store the port names, therefore, no DB store
action is performed here.
"""
new_port = context.current
orig_port = context.original
if new_port['name'] != orig_port['name']:
LOG.info(_LI('Port name changed to %s'), new_port['name'])
new_port = context.current
device_id = new_port['device_id']
device_owner = new_port['device_owner']
host = context.host
# device_id and device_owner are set on VM boot
is_vm_boot = device_id and device_owner
if host and host != orig_port['binding:host_id'] and is_vm_boot:
port_id = new_port['id']
network_id = new_port['network_id']
tenant_id = new_port['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
with self.eos_sync_lock:
db_lib.update_vm_host(device_id, host, port_id,
network_id, tenant_id)
def update_port_postcommit(self, context):
"""Update the name of a given port in EOS.
At the moment we only support port name change.
Any other change to port is not supported at this time.
"""
port = context.current
orig_port = context.original
device_id = port['device_id']
device_owner = port['device_owner']
host = context.host
is_vm_boot = device_id and device_owner
if host and is_vm_boot:
port_id = port['id']
port_name = port['name']
network_id = port['network_id']
tenant_id = port['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
with self.eos_sync_lock:
hostname = self._host_name(host)
segmentation_id = db_lib.get_segmentation_id(tenant_id,
network_id)
vm_provisioned = db_lib.is_vm_provisioned(device_id,
host,
port_id,
network_id,
tenant_id)
# If network does not exist under this tenant,
# it may be a shared network. Get shared network owner Id
net_provisioned = (
db_lib.is_network_provisioned(tenant_id, network_id,
segmentation_id) or
self.ndb.get_shared_network_owner_id(network_id)
)
if vm_provisioned and net_provisioned:
try:
orig_host = orig_port['binding:host_id']
if host != orig_host:
# The port moved to a different host. So delete the
# old port on the old host before creating a new
# port on the new host.
self._delete_port(port, orig_host, tenant_id)
self.rpc.plug_port_into_network(device_id,
hostname,
port_id,
network_id,
tenant_id,
port_name,
device_owner)
except arista_exc.AristaRpcError:
LOG.info(EOS_UNREACHABLE_MSG)
raise ml2_exc.MechanismDriverError()
else:
LOG.info(_LI('VM %s is not updated as it is not found in '
'Arista DB'), device_id)
def delete_port_precommit(self, context):
"""Delete information about a VM and host from the DB."""
port = context.current
host_id = context.host
device_id = port['device_id']
tenant_id = port['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
network_id = port['network_id']
port_id = port['id']
with self.eos_sync_lock:
if db_lib.is_vm_provisioned(device_id, host_id, port_id,
network_id, tenant_id):
db_lib.forget_vm(device_id, host_id, port_id,
network_id, tenant_id)
def delete_port_postcommit(self, context):
"""Unplug a physical host from a network.
Send provisioning request to Arista Hardware to unplug a host
from appropriate network.
"""
port = context.current
host = context.host
tenant_id = port['tenant_id']
if not tenant_id:
tenant_id = context._plugin_context.tenant_id
with self.eos_sync_lock:
self._delete_port(port, host, tenant_id)
def _delete_port(self, port, host, tenant_id):
"""Deletes the port from EOS.
param port: Port which is to be deleted
param host: The host on which the port existed
param tenant_id: The tenant to which the port belongs. Sometimes
the tenant id in the port dict is not present (as in
the case of HA router).
"""
device_id = port['device_id']
port_id = port['id']
network_id = port['network_id']
device_owner = port['device_owner']
try:
if not db_lib.is_network_provisioned(tenant_id, network_id):
# If we do not have network associated with this, ignore it
return
hostname = self._host_name(host)
if device_owner == n_const.DEVICE_OWNER_DHCP:
self.rpc.unplug_dhcp_port_from_network(device_id,
hostname,
port_id,
network_id,
tenant_id)
else:
self.rpc.unplug_host_from_network(device_id,
hostname,
port_id,
network_id,
tenant_id)
# if necessary, delete tenant as well.
self.delete_tenant(tenant_id)
except arista_exc.AristaRpcError:
LOG.info(EOS_UNREACHABLE_MSG)
raise ml2_exc.MechanismDriverError()
def delete_tenant(self, tenant_id):
"""Delete a tenant from the DB.
A tenant is deleted only if there is no network or VM configured
for this tenant.
"""
objects_for_tenant = (db_lib.num_nets_provisioned(tenant_id) +
db_lib.num_vms_provisioned(tenant_id))
if not objects_for_tenant:
db_lib.forget_tenant(tenant_id)
try:
self.rpc.delete_tenant(tenant_id)
except arista_exc.AristaRpcError:
LOG.info(EOS_UNREACHABLE_MSG)
raise ml2_exc.MechanismDriverError()
def _host_name(self, hostname):
fqdns_used = cfg.CONF.ml2_arista['use_fqdn']
return hostname if fqdns_used else hostname.split('.')[0]
def _synchronization_thread(self):
with self.eos_sync_lock:
self.eos.do_synchronize()
self.timer = threading.Timer(self.sync_timeout,
self._synchronization_thread)
self.timer.start()
def stop_synchronization_thread(self):
if self.timer:
self.timer.cancel()
self.timer = None
def _cleanup_db(self):
"""Clean up any unnecessary entries in our DB."""
db_tenants = db_lib.get_tenants()
for tenant in db_tenants:
neutron_nets = self.ndb.get_all_networks_for_tenant(tenant)
neutron_nets_id = []
for net in neutron_nets:
neutron_nets_id.append(net['id'])
db_nets = db_lib.get_networks(tenant)
for net_id in db_nets.keys():
if net_id not in neutron_nets_id:
db_lib.forget_network(tenant, net_id)
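The cleanup pass above is a live-set versus stored-set comparison: any network id still recorded in the driver DB but no longer known to Neutron is forgotten. A minimal sketch with plain containers (function and variable names here are illustrative, not part of the driver):

```python
def find_stale_networks(db_nets, neutron_nets):
    """Return network ids present in the driver DB but gone from Neutron.

    db_nets: mapping of network id -> stored record (as from get_networks)
    neutron_nets: list of network dicts (as from get_all_networks_for_tenant)
    """
    live_ids = {net['id'] for net in neutron_nets}
    return [net_id for net_id in db_nets if net_id not in live_ids]

stale = find_stale_networks(
    {'net-a': {}, 'net-b': {}},
    [{'id': 'net-a'}])
print(stale)  # ['net-b']
```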


@ -30,7 +30,11 @@ vxlan_opts = [
cfg.IntOpt('tos',
help=_("TOS for vxlan interface protocol packets.")),
cfg.StrOpt('vxlan_group', default=DEFAULT_VXLAN_GROUP,
help=_("Multicast group for vxlan interface.")),
help=_("Multicast group(s) for vxlan interface. A range of "
"group addresses may be specified by using CIDR "
"notation. To reserve a unique group for each possible "
"(24-bit) VNI, use a /8 such as 239.0.0.0/8. This "
"setting must be the same on all the agents.")),
cfg.IPOpt('local_ip', version=4,
help=_("Local IP address of the VXLAN endpoints.")),
cfg.BoolOpt('l2_population', default=False,


@ -26,11 +26,13 @@ import time
import eventlet
eventlet.monkey_patch()
import netaddr
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_service import loopingcall
from oslo_service import service
from oslo_utils import excutils
from six import moves
from neutron.agent.linux import bridge_lib
@ -77,6 +79,7 @@ class NetworkSegment(object):
class LinuxBridgeManager(object):
def __init__(self, interface_mappings):
self.interface_mappings = interface_mappings
self.validate_interface_mappings()
self.ip = ip_lib.IPWrapper()
# VXLAN related parameters:
self.local_ip = cfg.CONF.VXLAN.local_ip
@ -93,6 +96,14 @@ class LinuxBridgeManager(object):
# Store network mapping to segments
self.network_map = {}
def validate_interface_mappings(self):
for physnet, interface in self.interface_mappings.items():
if not ip_lib.device_exists(interface):
LOG.error(_LE("Interface %(intf)s for physical network %(net)s"
" does not exist. Agent terminated!"),
{'intf': interface, 'net': physnet})
sys.exit(1)
def interface_exists_on_bridge(self, bridge, interface):
directory = '/sys/class/net/%s/brif' % bridge
for filename in os.listdir(directory):
@ -128,6 +139,22 @@ class LinuxBridgeManager(object):
LOG.warning(_LW("Invalid Segmentation ID: %s, will lead to "
"incorrect vxlan device name"), segmentation_id)
def get_vxlan_group(self, segmentation_id):
try:
# Ensure the configured group address/range is valid and multicast
net = netaddr.IPNetwork(cfg.CONF.VXLAN.vxlan_group)
if not (net.network.is_multicast() and
net.broadcast.is_multicast()):
raise ValueError()
# Map the segmentation ID to (one of) the group address(es)
return str(net.network +
(int(segmentation_id) & int(net.hostmask)))
except (netaddr.core.AddrFormatError, ValueError):
LOG.warning(_LW("Invalid VXLAN Group: %s, must be an address "
"or network (in CIDR notation) in a multicast "
"range"),
cfg.CONF.VXLAN.vxlan_group)
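The mapping above masks the segmentation ID with the host bits of the configured multicast range to choose a group address, so a /8 such as 239.0.0.0/8 gives every 24-bit VNI its own group. A sketch of the same arithmetic using only the stdlib `ipaddress` module (the agent itself uses `netaddr`, and this omits the agent's broadcast-address check):

```python
import ipaddress

def vni_to_group(vxlan_group, segmentation_id):
    # Parse the configured group address/range, e.g. "239.0.0.0/8".
    net = ipaddress.ip_network(vxlan_group)
    if not net.network_address.is_multicast:
        raise ValueError("vxlan_group must be in a multicast range")
    # The VNI's low bits (masked by the range's host bits) pick the group.
    return str(net.network_address + (segmentation_id & int(net.hostmask)))

print(vni_to_group("239.0.0.0/8", 1))         # 239.0.0.1
print(vni_to_group("239.0.0.0/8", 0x010001))  # 239.1.0.1
```

With a single address (a /32) every VNI maps to that one group, which matches the pre-change behaviour of passing `vxlan_group` straight through.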
def get_all_neutron_bridges(self):
neutron_bridge_list = []
bridge_list = os.listdir(BRIDGE_FS)
@ -241,14 +268,26 @@ class LinuxBridgeManager(object):
'segmentation_id': segmentation_id})
args = {'dev': self.local_int}
if self.vxlan_mode == lconst.VXLAN_MCAST:
args['group'] = cfg.CONF.VXLAN.vxlan_group
args['group'] = self.get_vxlan_group(segmentation_id)
if cfg.CONF.VXLAN.ttl:
args['ttl'] = cfg.CONF.VXLAN.ttl
if cfg.CONF.VXLAN.tos:
args['tos'] = cfg.CONF.VXLAN.tos
if cfg.CONF.VXLAN.l2_population:
args['proxy'] = True
int_vxlan = self.ip.add_vxlan(interface, segmentation_id, **args)
try:
int_vxlan = self.ip.add_vxlan(interface, segmentation_id,
**args)
except RuntimeError:
with excutils.save_and_reraise_exception() as ctxt:
# perform this check after an attempt rather than before
# to avoid excessive lookups and a possible race condition.
if ip_lib.vxlan_in_use(segmentation_id):
ctxt.reraise = False
LOG.error(_LE("Unable to create VXLAN interface for "
"VNI %s because it is in use by another "
"interface."), segmentation_id)
return None
int_vxlan.link.set_up()
LOG.debug("Done creating vxlan interface %s", interface)
return interface
@ -526,10 +565,11 @@ class LinuxBridgeManager(object):
test_iface = None
for seg_id in moves.range(1, p_const.MAX_VXLAN_VNI + 1):
if not ip_lib.device_exists(
self.get_vxlan_device_name(seg_id)):
test_iface = self.ensure_vxlan(seg_id)
break
if (ip_lib.device_exists(self.get_vxlan_device_name(seg_id))
or ip_lib.vxlan_in_use(seg_id)):
continue
test_iface = self.ensure_vxlan(seg_id)
break
else:
LOG.error(_LE('No valid Segmentation ID to perform UCAST test.'))
return False
@ -547,7 +587,7 @@ class LinuxBridgeManager(object):
def vxlan_mcast_supported(self):
if not cfg.CONF.VXLAN.vxlan_group:
LOG.warning(_LW('VXLAN multicast group must be provided in '
LOG.warning(_LW('VXLAN multicast group(s) must be provided in '
'vxlan_group option to enable VXLAN MCAST mode'))
return False
if not ip_lib.iproute_arg_supported(


@ -164,6 +164,17 @@ class EmbSwitch(object):
raise exc.InvalidPciSlotError(pci_slot=pci_slot)
return self.pci_dev_wrapper.set_vf_state(vf_index, state)
def set_device_spoofcheck(self, pci_slot, enabled):
"""Set device spoofchecking
@param pci_slot: Virtual Function address
@param enabled: True to enable spoofcheck, False to disable
"""
vf_index = self.pci_slot_map.get(pci_slot)
if vf_index is None:
raise exc.InvalidPciSlotError(pci_slot=pci_slot)
return self.pci_dev_wrapper.set_vf_spoofcheck(vf_index, enabled)
def get_pci_device(self, pci_slot):
"""Get mac address for given Virtual Function address
@ -252,6 +263,19 @@ class ESwitchManager(object):
embedded_switch.set_device_state(pci_slot,
admin_state_up)
def set_device_spoofcheck(self, device_mac, pci_slot, enabled):
"""Set device spoofcheck
Sets device spoofchecking (enabled or disabled)
@param device_mac: device mac
@param pci_slot: pci slot
@param enabled: device spoofchecking
"""
embedded_switch = self._get_emb_eswitch(device_mac, pci_slot)
if embedded_switch:
embedded_switch.set_device_spoofcheck(pci_slot,
enabled)
def _discover_devices(self, device_mappings, exclude_devices):
"""Discover which Virtual functions to manage.


@ -106,6 +106,22 @@ class PciDeviceIPWrapper(ip_lib.IPWrapper):
raise exc.IpCommandError(dev_name=self.dev_name,
reason=e)
def set_vf_spoofcheck(self, vf_index, enabled):
"""Sets VF spoofcheck
@param vf_index: vf index
@param enabled: True to enable spoof checking,
False to disable
"""
setting = "on" if enabled else "off"
try:
self._as_root('', "link", ("set", self.dev_name, "vf",
str(vf_index), "spoofchk", setting))
except Exception as e:
raise exc.IpCommandError(dev_name=self.dev_name,
reason=str(e))
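The wrapper above shells out to `ip link set <dev> vf <index> spoofchk on|off`. Building the argument vector can be sketched as follows (the helper name is hypothetical; the agent routes this through its privileged `_as_root` wrapper rather than invoking `ip` directly):

```python
def spoofchk_args(dev_name, vf_index, enabled):
    # Argument vector for: ip link set <dev> vf <idx> spoofchk on|off
    setting = "on" if enabled else "off"
    return ["ip", "link", "set", dev_name, "vf", str(vf_index),
            "spoofchk", setting]

print(spoofchk_args("eth0", 3, True))
# ['ip', 'link', 'set', 'eth0', 'vf', '3', 'spoofchk', 'on']
```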
def _get_vf_link_show(self, vf_list, link_show_out):
"""Get link show output for VFs


@ -33,7 +33,7 @@ from neutron.common import constants as n_constants
from neutron.common import topics
from neutron.common import utils as n_utils
from neutron import context
from neutron.i18n import _LE, _LI
from neutron.i18n import _LE, _LI, _LW
from neutron.plugins.ml2.drivers.mech_sriov.agent.common import config # noqa
from neutron.plugins.ml2.drivers.mech_sriov.agent.common \
import exceptions as exc
@ -169,8 +169,17 @@ class SriovNicSwitchAgent(object):
# If one of the above operations fails => resync with plugin
return (resync_a | resync_b)
def treat_device(self, device, pci_slot, admin_state_up):
def treat_device(self, device, pci_slot, admin_state_up, spoofcheck=True):
if self.eswitch_mgr.device_exists(device, pci_slot):
try:
self.eswitch_mgr.set_device_spoofcheck(device, pci_slot,
spoofcheck)
except Exception:
LOG.warning(_LW("Failed to set spoofcheck for device %s"),
device)
LOG.info(_LI("Device %(device)s spoofcheck %(spoofcheck)s"),
{"device": device, "spoofcheck": spoofcheck})
try:
self.eswitch_mgr.set_device_state(device, pci_slot,
admin_state_up)
@ -210,9 +219,11 @@ class SriovNicSwitchAgent(object):
LOG.info(_LI("Port %(device)s updated. Details: %(details)s"),
{'device': device, 'details': device_details})
profile = device_details['profile']
spoofcheck = device_details.get('port_security_enabled', True)
self.treat_device(device_details['device'],
profile.get('pci_slot'),
device_details['admin_state_up'])
device_details['admin_state_up'],
spoofcheck)
else:
LOG.info(_LI("Device with MAC %s not defined on plugin"),
device)


@ -614,7 +614,7 @@ class OVSDVRNeutronAgent(object):
# ports available on this agent anymore
self.local_dvr_map.pop(sub_uuid, None)
if network_type == p_const.TYPE_VLAN:
br = self.phys_br[physical_network]
br = self.phys_brs[physical_network]
if network_type in constants.TUNNEL_NETWORK_TYPES:
br = self.tun_br
if ip_version == 4:
@ -626,7 +626,7 @@ class OVSDVRNeutronAgent(object):
ovsport.remove_subnet(sub_uuid)
if lvm.network_type == p_const.TYPE_VLAN:
br = self.phys_br[physical_network]
br = self.phys_brs[physical_network]
if lvm.network_type in constants.TUNNEL_NETWORK_TYPES:
br = self.tun_br
br.delete_dvr_process(vlan_tag=lvm.vlan, vif_mac=port.vif_mac)


@ -314,11 +314,13 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
def _restore_local_vlan_map(self):
cur_ports = self.int_br.get_vif_ports()
port_info = self.int_br.db_list(
"Port", columns=["name", "other_config", "tag"])
port_names = [p.port_name for p in cur_ports]
port_info = self.int_br.get_ports_attributes(
"Port", columns=["name", "other_config", "tag"], ports=port_names)
by_name = {x['name']: x for x in port_info}
for port in cur_ports:
# if a port was deleted between get_vif_ports and db_lists, we
# if a port was deleted between get_vif_ports and
# get_ports_attributes, we
# will get a KeyError
try:
local_vlan_map = by_name[port.port_name]['other_config']
@ -594,7 +596,7 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
if network_type in constants.TUNNEL_NETWORK_TYPES:
if self.enable_tunneling:
# outbound broadcast/multicast
ofports = self.tun_br_ofports[network_type].values()
ofports = list(self.tun_br_ofports[network_type].values())
if ofports:
self.tun_br.install_flood_to_tun(lvid,
segmentation_id,
@ -741,8 +743,9 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
def _bind_devices(self, need_binding_ports):
devices_up = []
devices_down = []
port_info = self.int_br.db_list(
"Port", columns=["name", "tag"])
port_names = [p['vif_port'].port_name for p in need_binding_ports]
port_info = self.int_br.get_ports_attributes(
"Port", columns=["name", "tag"], ports=port_names)
tags_by_name = {x['name']: x['tag'] for x in port_info}
for port_detail in need_binding_ports:
lvm = self.local_vlan_map.get(port_detail['network_id'])
@ -754,13 +757,14 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
device = port_detail['device']
# Do not bind a port if it's already bound
cur_tag = tags_by_name.get(port.port_name)
if cur_tag != lvm.vlan:
self.int_br.delete_flows(in_port=port.ofport)
if self.prevent_arp_spoofing:
self.setup_arp_spoofing_protection(self.int_br,
port, port_detail)
if cur_tag != lvm.vlan:
self.int_br.set_db_attribute(
"Port", port.port_name, "tag", lvm.vlan)
if port.ofport != -1:
# NOTE(yamamoto): Remove possible drop_port flow
# installed by port_dead.
self.int_br.delete_flows(in_port=port.ofport)
# update plugin about port status
# FIXME(salv-orlando): Failures while updating device status
@ -1041,16 +1045,13 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
# ofport-based rules, so make arp_spoofing protection a conditional
# until something else uses ofport
if not self.prevent_arp_spoofing:
return
return []
previous = self.vifname_to_ofport_map
current = self.int_br.get_vif_port_to_ofport_map()
# if any ofport numbers have changed, re-process the devices as
# added ports so any rules based on ofport numbers are updated.
moved_ports = self._get_ofport_moves(current, previous)
if moved_ports:
self.treat_devices_added_or_updated(moved_ports,
ovs_restarted=False)
# delete any stale rules based on removed ofports
ofports_deleted = set(previous.values()) - set(current.values())
@ -1059,6 +1060,7 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
# store map for next iteration
self.vifname_to_ofport_map = current
return moved_ports
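The stale-rule handling above hinges on `_get_ofport_moves`, which compares the previous and current vif-name-to-ofport maps; its body is outside this hunk, so the following is a plausible sketch of that comparison (a port that was removed entirely is not a "move", only one present in both maps with a changed ofport):

```python
def get_ofport_moves(current, previous):
    """Return names of ports present in both maps whose ofport changed."""
    port_moves = []
    for name, ofport in previous.items():
        if name in current and current[name] != ofport:
            port_moves.append(name)
    return port_moves

# A port deleted and re-added to the bridge typically comes back
# with a new ofport number:
print(get_ofport_moves({'tap1': 5, 'tap2': 2}, {'tap1': 1, 'tap2': 2}))
# ['tap1']
```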
@staticmethod
def _get_ofport_moves(current, previous):
@ -1252,9 +1254,6 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
details['fixed_ips'],
details['device_owner'],
ovs_restarted)
if self.prevent_arp_spoofing:
self.setup_arp_spoofing_protection(self.int_br,
port, details)
if need_binding:
details['vif_port'] = port
need_binding_devices.append(details)
@ -1516,6 +1515,8 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
tunnel_sync = True
ovs_restarted = False
while self._check_and_handle_signal():
port_info = {}
ancillary_port_info = {}
start = time.time()
LOG.debug("Agent rpc_loop - iteration:%d started",
self.iter_num)
@ -1571,7 +1572,10 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
reg_ports = (set() if ovs_restarted else ports)
port_info = self.scan_ports(reg_ports, updated_ports_copy)
self.process_deleted_ports(port_info)
self.update_stale_ofport_rules()
ofport_changed_ports = self.update_stale_ofport_rules()
if ofport_changed_ports:
port_info.setdefault('updated', set()).update(
ofport_changed_ports)
LOG.debug("Agent rpc_loop - iteration:%(iter_num)d - "
"port information retrieved. "
"Elapsed:%(elapsed).3f",
@ -1624,8 +1628,6 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin,
# Put the ports back in self.updated_port
self.updated_ports |= updated_ports_copy
sync = True
ancillary_port_info = (ancillary_port_info if self.ancillary_brs
else {})
port_stats = self.get_port_stats(port_info, ancillary_port_info)
self.loop_count_and_wait(start, port_stats)


@ -15,6 +15,7 @@
from oslo_config import cfg
from oslo_log import log
from oslo_utils import excutils
import six
import stevedore
@ -764,10 +765,10 @@ class ExtensionManager(stevedore.named.NamedExtensionManager):
try:
getattr(driver.obj, method_name)(plugin_context, data, result)
except Exception:
LOG.exception(
_LE("Extension driver '%(name)s' failed in %(method)s"),
{'name': driver.name, 'method': method_name}
)
with excutils.save_and_reraise_exception():
LOG.info(_LI("Extension driver '%(name)s' failed in "
"%(method)s"),
{'name': driver.name, 'method': method_name})
def process_create_network(self, plugin_context, data, result):
"""Notify all extension drivers during network creation."""
@ -799,23 +800,30 @@ class ExtensionManager(stevedore.named.NamedExtensionManager):
self._call_on_ext_drivers("process_update_port", plugin_context,
data, result)
def _call_on_dict_driver(self, method_name, session, base_model, result):
for driver in self.ordered_ext_drivers:
try:
getattr(driver.obj, method_name)(session, base_model, result)
except Exception:
LOG.error(_LE("Extension driver '%(name)s' failed in "
"%(method)s"),
{'name': driver.name, 'method': method_name})
raise ml2_exc.ExtensionDriverError(driver=driver.name)
LOG.debug("%(method)s succeeded for driver %(driver)s",
{'method': method_name, 'driver': driver.name})
def extend_network_dict(self, session, base_model, result):
"""Notify all extension drivers to extend network dictionary."""
for driver in self.ordered_ext_drivers:
driver.obj.extend_network_dict(session, base_model, result)
LOG.debug("Extended network dict for driver '%(drv)s'",
{'drv': driver.name})
self._call_on_dict_driver("extend_network_dict", session, base_model,
result)
def extend_subnet_dict(self, session, base_model, result):
"""Notify all extension drivers to extend subnet dictionary."""
for driver in self.ordered_ext_drivers:
driver.obj.extend_subnet_dict(session, base_model, result)
LOG.debug("Extended subnet dict for driver '%(drv)s'",
{'drv': driver.name})
self._call_on_dict_driver("extend_subnet_dict", session, base_model,
result)
def extend_port_dict(self, session, base_model, result):
"""Notify all extension drivers to extend port dictionary."""
for driver in self.ordered_ext_drivers:
driver.obj.extend_port_dict(session, base_model, result)
LOG.debug("Extended port dict for driver '%(drv)s'",
{'drv': driver.name})
self._call_on_dict_driver("extend_port_dict", session, base_model,
result)


@ -55,6 +55,7 @@ from neutron.db import extradhcpopt_db
from neutron.db import models_v2
from neutron.db import netmtu_db
from neutron.db.quota import driver # noqa
from neutron.db import securitygroups_db
from neutron.db import securitygroups_rpc_base as sg_db_rpc
from neutron.db import vlantransparent_db
from neutron.extensions import allowedaddresspairs as addr_pair
@ -74,6 +75,7 @@ from neutron.plugins.ml2 import driver_context
from neutron.plugins.ml2 import managers
from neutron.plugins.ml2 import models
from neutron.plugins.ml2 import rpc
from neutron.quota import resource_registry
LOG = log.getLogger(__name__)
@ -126,6 +128,13 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
self._aliases = aliases
return self._aliases
@resource_registry.tracked_resources(
network=models_v2.Network,
port=models_v2.Port,
subnet=models_v2.Subnet,
subnetpool=models_v2.SubnetPool,
security_group=securitygroups_db.SecurityGroup,
security_group_rule=securitygroups_db.SecurityGroupRule)
def __init__(self):
# First load drivers, then initialize DB, then initialize drivers
self.type_manager = managers.TypeManager()
@ -1125,6 +1134,10 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
mech_context = driver_context.PortContext(
self, context, updated_port, network, binding, levels,
original_port=original_port)
new_host_port = self._get_host_port_if_changed(
mech_context, attrs)
need_port_update_notify |= self._process_port_binding(
mech_context, attrs)
# For DVR router interface ports we need to retrieve the
# DVRPortbinding context instead of the normal port context.
# The normal Portbinding context does not have the status
@ -1151,10 +1164,6 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
self.mechanism_manager.update_port_precommit(mech_context)
bound_mech_contexts.append(mech_context)
new_host_port = self._get_host_port_if_changed(
mech_context, attrs)
need_port_update_notify |= self._process_port_binding(
mech_context, attrs)
# Notifications must be sent after the above transaction is complete
kwargs = {
'context': context,
@ -1267,8 +1276,6 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
raise e.errors[0].error
raise exc.ServicePortInUse(port_id=port_id, reason=e)
@oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES,
retry_on_deadlock=True)
def delete_port(self, context, id, l3_port_check=True):
self._pre_delete_port(context, id, l3_port_check)
# TODO(armax): get rid of the l3 dependency in the with block
@ -1485,7 +1492,7 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2,
port_ids_to_devices = dict(
(self._device_to_port_id(context, device), device)
for device in devices)
port_ids = port_ids_to_devices.keys()
port_ids = list(port_ids_to_devices.keys())
ports = db.get_ports_and_sgs(context, port_ids)
for port in ports:
# map back to original requested id


@ -17,6 +17,7 @@ from oslo_config import cfg
from oslo_db import api as oslo_db_api
from oslo_db import exception as oslo_db_exception
from oslo_log import log
from oslo_utils import excutils
from sqlalchemy import event
from neutron.db import api as db_api
@ -191,14 +192,12 @@ class TrackedResource(BaseResource):
@lockutils.synchronized('dirty_tenants')
def _db_event_handler(self, mapper, _conn, target):
tenant_id = target.get('tenant_id')
if not tenant_id:
# NOTE: This is an unexpected error condition. Log anomaly but do
# not raise as this might have unexpected effects on other
# operations
LOG.error(_LE("Model class %s does not have tenant_id attribute"),
target)
return
try:
tenant_id = target['tenant_id']
except AttributeError:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Model class %s does not have a tenant_id "
"attribute"), target)
self._dirty_tenants.add(tenant_id)
# Retry the operation if a duplicate entry exception is raised. This
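The change above replaces a log-and-swallow path with `excutils.save_and_reraise_exception`, which logs inside the `except` block and then re-raises the original exception. A rough stdlib-only approximation of that pattern (class and helper names here are illustrative, not the oslo.utils implementation, and several of its features are omitted):

```python
import sys

class SaveAndReraise(object):
    """Minimal sketch of oslo.utils' save_and_reraise_exception.

    Captures the exception currently being handled on entry and
    re-raises it on exit, unless ``reraise`` is set to False inside
    the with-block.
    """
    def __init__(self):
        self.reraise = True

    def __enter__(self):
        self.exc = sys.exc_info()[1]  # exception being handled, if any
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc is None and self.reraise and self.exc is not None:
            raise self.exc
        return False

def read_tenant_id(target):
    try:
        return target.tenant_id
    except AttributeError:
        with SaveAndReraise():
            # Log the anomaly; the AttributeError still propagates.
            print("Model class %r does not have a tenant_id attribute"
                  % target)
```

The net effect in the quota handler is that a model without `tenant_id` now fails loudly instead of being silently skipped.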


@ -59,7 +59,7 @@ class AutoScheduler(object):
continue
for net_id in net_ids:
agents = plugin.get_dhcp_agents_hosting_networks(
context, [net_id], active=True)
context, [net_id])
if len(agents) >= agents_per_network:
continue
if any(dhcp_agent.id == agent.id for agent in agents):
@ -131,7 +131,7 @@ class DhcpFilter(base_resource_filter.BaseResourceFilter):
# subnets whose enable_dhcp is false
with context.session.begin(subtransactions=True):
network_hosted_agents = plugin.get_dhcp_agents_hosting_networks(
context, [network['id']], active=True)
context, [network['id']])
if len(network_hosted_agents) >= agents_per_network:
LOG.debug('Network %s is already hosted by enough agents.',
network['id'])


@ -232,8 +232,16 @@ class L3Scheduler(object):
def _schedule_router(self, plugin, context, router_id,
candidates=None):
sync_router = plugin.get_router(context, router_id)
candidates = candidates or self._get_candidates(
plugin, context, sync_router)
if not candidates:
return
router_distributed = sync_router.get('distributed', False)
if router_distributed:
for chosen_agent in candidates:
self.bind_router(context, router_id, chosen_agent)
# For Distributed routers check for SNAT Binding before
# calling the schedule_snat_router
snat_bindings = plugin.get_snat_bindings(context, [router_id])
@ -241,21 +249,13 @@ class L3Scheduler(object):
if not snat_bindings and router_gw_exists:
# If GW exists for DVR routers and no SNAT binding
# call the schedule_snat_router
return plugin.schedule_snat_router(
plugin.schedule_snat_router(
context, router_id, sync_router)
if not router_gw_exists and snat_bindings:
elif not router_gw_exists and snat_bindings:
# If DVR router and no Gateway but SNAT Binding exists then
# call the unbind_snat_servicenode to unbind the snat service
# from agent
plugin.unbind_snat_servicenode(context, router_id)
return
candidates = candidates or self._get_candidates(
plugin, context, sync_router)
if not candidates:
return
if router_distributed:
for chosen_agent in candidates:
self.bind_router(context, router_id, chosen_agent)
elif sync_router.get('ha', False):
chosen_agents = self._bind_ha_router(plugin, context,
router_id, candidates)

neutron/server/__init__.py Executable file → Normal file

@ -1,280 +0,0 @@
# Copyright 2014 Arista Networks, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import threading
from networking_arista.common import db_lib
from networking_arista.l3Plugin import arista_l3_driver
from oslo_config import cfg
from oslo_log import helpers as log_helpers
from oslo_log import log as logging
from oslo_utils import excutils
from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api
from neutron.api.rpc.handlers import l3_rpc
from neutron.common import constants as n_const
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron import context as nctx
from neutron.db import db_base_plugin_v2
from neutron.db import extraroute_db
from neutron.db import l3_agentschedulers_db
from neutron.db import l3_gwmode_db
from neutron.i18n import _LE, _LI
from neutron.plugins.common import constants
from neutron.plugins.ml2.driver_context import NetworkContext # noqa
LOG = logging.getLogger(__name__)
class AristaL3ServicePlugin(db_base_plugin_v2.NeutronDbPluginV2,
extraroute_db.ExtraRoute_db_mixin,
l3_gwmode_db.L3_NAT_db_mixin,
l3_agentschedulers_db.L3AgentSchedulerDbMixin):
"""Implements L3 Router service plugin for Arista hardware.
Creates routers in Arista hardware, manages them, and adds/deletes
interfaces to the routers.
"""
supported_extension_aliases = ["router", "ext-gw-mode",
"extraroute"]
def __init__(self, driver=None):
self.driver = driver or arista_l3_driver.AristaL3Driver()
self.ndb = db_lib.NeutronNets()
self.setup_rpc()
self.sync_timeout = cfg.CONF.l3_arista.l3_sync_interval
self.sync_lock = threading.Lock()
self._synchronization_thread()
def setup_rpc(self):
# RPC support
self.topic = topics.L3PLUGIN
self.conn = n_rpc.create_connection(new=True)
self.agent_notifiers.update(
{n_const.AGENT_TYPE_L3: l3_rpc_agent_api.L3AgentNotifyAPI()})
self.endpoints = [l3_rpc.L3RpcCallback()]
self.conn.create_consumer(self.topic, self.endpoints,
fanout=False)
self.conn.consume_in_threads()
def get_plugin_type(self):
return constants.L3_ROUTER_NAT
def get_plugin_description(self):
"""Returns string description of the plugin."""
return ("Arista L3 Router Service Plugin for Arista Hardware "
"based routing")
def _synchronization_thread(self):
with self.sync_lock:
self.synchronize()
self.timer = threading.Timer(self.sync_timeout,
self._synchronization_thread)
self.timer.start()

def stop_synchronization_thread(self):
if self.timer:
self.timer.cancel()
self.timer = None

@log_helpers.log_method_call
def create_router(self, context, router):
"""Create a new router entry in DB, and create it Arista HW."""
tenant_id = self._get_tenant_id_for_create(context, router['router'])
# Add router to the DB
with context.session.begin(subtransactions=True):
new_router = super(AristaL3ServicePlugin, self).create_router(
context,
router)
# create router on the Arista Hw
try:
self.driver.create_router(context, tenant_id, new_router)
return new_router
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Error creating router on Arista HW router=%s "),
new_router)
super(AristaL3ServicePlugin, self).delete_router(context,
new_router['id'])

@log_helpers.log_method_call
def update_router(self, context, router_id, router):
"""Update an existing router in DB, and update it in Arista HW."""
with context.session.begin(subtransactions=True):
# Read existing router record from DB
original_router = super(AristaL3ServicePlugin, self).get_router(
context, router_id)
# Update router DB
new_router = super(AristaL3ServicePlugin, self).update_router(
context, router_id, router)
# Modify router on the Arista Hw
try:
self.driver.update_router(context, router_id,
original_router, new_router)
return new_router
except Exception:
LOG.error(_LE("Error updating router on Arista HW router=%s "),
new_router)

@log_helpers.log_method_call
def delete_router(self, context, router_id):
"""Delete an existing router from Arista HW as well as from the DB."""
router = super(AristaL3ServicePlugin, self).get_router(context,
router_id)
tenant_id = router['tenant_id']
# Delete router on the Arista Hw
try:
self.driver.delete_router(context, tenant_id, router_id, router)
except Exception as e:
LOG.error(_LE("Error deleting router on Arista HW "
"router %(r)s exception=%(e)s"),
{'r': router, 'e': e})
with context.session.begin(subtransactions=True):
super(AristaL3ServicePlugin, self).delete_router(context,
router_id)

@log_helpers.log_method_call
def add_router_interface(self, context, router_id, interface_info):
"""Add a subnet of a network to an existing router."""
new_router = super(AristaL3ServicePlugin, self).add_router_interface(
context, router_id, interface_info)
# Get network info for the subnet that is being added to the router.
# Check if the interface information is by port-id or subnet-id
add_by_port, add_by_sub = self._validate_interface_info(interface_info)
if add_by_sub:
subnet = self.get_subnet(context, interface_info['subnet_id'])
elif add_by_port:
port = self.get_port(context, interface_info['port_id'])
subnet_id = port['fixed_ips'][0]['subnet_id']
subnet = self.get_subnet(context, subnet_id)
network_id = subnet['network_id']
        # To create SVIs in Arista HW, the segmentation ID is required
        # for this network.
ml2_db = NetworkContext(self, context, {'id': network_id})
seg_id = ml2_db.network_segments[0]['segmentation_id']
# Package all the info needed for Hw programming
router = super(AristaL3ServicePlugin, self).get_router(context,
router_id)
router_info = copy.deepcopy(new_router)
router_info['seg_id'] = seg_id
router_info['name'] = router['name']
router_info['cidr'] = subnet['cidr']
router_info['gip'] = subnet['gateway_ip']
router_info['ip_version'] = subnet['ip_version']
try:
self.driver.add_router_interface(context, router_info)
return new_router
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE("Error Adding subnet %(subnet)s to "
"router %(router_id)s on Arista HW"),
{'subnet': subnet, 'router_id': router_id})
super(AristaL3ServicePlugin, self).remove_router_interface(
context,
router_id,
interface_info)

@log_helpers.log_method_call
def remove_router_interface(self, context, router_id, interface_info):
"""Remove a subnet of a network from an existing router."""
new_router = (
super(AristaL3ServicePlugin, self).remove_router_interface(
context, router_id, interface_info))
# Get network information of the subnet that is being removed
subnet = self.get_subnet(context, new_router['subnet_id'])
network_id = subnet['network_id']
# For SVI removal from Arista HW, segmentation ID is needed
ml2_db = NetworkContext(self, context, {'id': network_id})
seg_id = ml2_db.network_segments[0]['segmentation_id']
router = super(AristaL3ServicePlugin, self).get_router(context,
router_id)
router_info = copy.deepcopy(new_router)
router_info['seg_id'] = seg_id
router_info['name'] = router['name']
try:
self.driver.remove_router_interface(context, router_info)
return new_router
except Exception as exc:
LOG.error(_LE("Error removing interface %(interface)s from "
"router %(router_id)s on Arista HW"
"Exception =(exc)s"),
{'interface': interface_info, 'router_id': router_id,
'exc': exc})

def synchronize(self):
"""Synchronizes Router DB from Neturon DB with EOS.
Walks through the Neturon Db and ensures that all the routers
created in Netuton DB match with EOS. After creating appropriate
routers, it ensures to add interfaces as well.
Uses idempotent properties of EOS configuration, which means
same commands can be repeated.
"""
LOG.info(_LI('Syncing Neutron Router DB <-> EOS'))
ctx = nctx.get_admin_context()
routers = super(AristaL3ServicePlugin, self).get_routers(ctx)
for r in routers:
tenant_id = r['tenant_id']
ports = self.ndb.get_all_ports_for_tenant(tenant_id)
try:
self.driver.create_router(self, tenant_id, r)
except Exception:
continue
# Figure out which interfaces are added to this router
for p in ports:
if p['device_id'] == r['id']:
net_id = p['network_id']
subnet_id = p['fixed_ips'][0]['subnet_id']
subnet = self.ndb.get_subnet_info(subnet_id)
ml2_db = NetworkContext(self, ctx, {'id': net_id})
seg_id = ml2_db.network_segments[0]['segmentation_id']
r['seg_id'] = seg_id
r['cidr'] = subnet['cidr']
r['gip'] = subnet['gateway_ip']
r['ip_version'] = subnet['ip_version']
try:
self.driver.add_router_interface(self, r)
except Exception:
LOG.error(_LE("Error Adding interface %(subnet_id)s "
"to router %(router_id)s on Arista HW"),
{'subnet_id': subnet_id, 'router_id': r})


@@ -30,6 +30,7 @@ from neutron.db import l3_gwmode_db
 from neutron.db import l3_hamode_db
 from neutron.db import l3_hascheduler_db
 from neutron.plugins.common import constants
+from neutron.quota import resource_registry


 class L3RouterPlugin(common_db_mixin.CommonDbMixin,
@@ -52,6 +53,8 @@ class L3RouterPlugin(common_db_mixin.CommonDbMixin,
                                     "extraroute", "l3_agent_scheduler",
                                     "l3-ha"]

+    @resource_registry.tracked_resources(router=l3_db.Router,
+                                         floatingip=l3_db.FloatingIP)
     def __init__(self):
         self.setup_rpc()
         self.router_scheduler = importutils.import_object(

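The Arista plugin's periodic EOS synchronization (`_synchronization_thread` above) is built on a `threading.Timer` that re-arms itself after each run, serialized by a lock so that sync cycles never overlap. A minimal standalone sketch of that pattern follows; the class name, the `callback` parameter, and the `stopped` flag are illustrative additions, not part of the plugin:

```python
import threading


class PeriodicSync(object):
    """Self re-arming timer loop, mirroring the structure of
    AristaL3ServicePlugin._synchronization_thread(). Only the
    Timer/Lock shape comes from the plugin; everything else here
    is a simplified stand-in.
    """

    def __init__(self, interval, callback):
        self.interval = interval      # seconds between runs (cf. l3_sync_interval)
        self.callback = callback      # per-cycle work (cf. synchronize())
        self.lock = threading.Lock()  # serializes cycles (cf. sync_lock)
        self.timer = None
        self.stopped = False

    def _run(self):
        with self.lock:
            if self.stopped:
                return
            self.callback()
            # Re-arm only after the current run finishes, so cycles
            # never overlap -- the same trick the plugin uses.
            self.timer = threading.Timer(self.interval, self._run)
            self.timer.start()

    def start(self):
        self._run()

    def stop(self):
        with self.lock:
            self.stopped = True
            if self.timer:
                self.timer.cancel()
                self.timer = None
```

The plugin's own `stop_synchronization_thread()` only cancels the pending timer; the explicit `stopped` flag added here also guards against a cycle that is already in flight re-arming the timer after shutdown.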