Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README noting where to find
ongoing work and how to recover the repo if needed at some
future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I07321aa86a539aae3917ae75af4ee58f487edb8e
Tony Breeds 2017-09-12 15:42:19 -06:00
parent f4a24c1e16
commit 62ebaae905
77 changed files with 14 additions and 7345 deletions

@@ -1,7 +0,0 @@
[run]
branch = True
source = hyperv
omit = hyperv/tests/*,hyperv/openstack/*
[report]
ignore_errors = True

.gitignore
@@ -1,53 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/networking-hyperv.git

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/networking-hyperv

@@ -1,4 +0,0 @@
networking-hyperv Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

README
@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,100 +0,0 @@
========================
Team and repository tags
========================
.. image:: http://governance.openstack.org/badges/networking-hyperv.svg
:target: http://governance.openstack.org/reference/tags/index.html
.. Change things from this point on
=================
networking-hyperv
=================
This project tracks the work to integrate Hyper-V networking with Neutron.
It contains the Hyper-V Neutron Agent, Security Groups Driver, and
ML2 Mechanism Driver, which are used to properly bind Neutron ports on a
Hyper-V host.
This project resulted from the neutron core vendor decomposition.
Supports Python 2.7, Python 3.3, Python 3.4, and Python 3.5.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/networking-hyperv
* Source: https://git.openstack.org/cgit/openstack/networking-hyperv
* Bugs: http://bugs.launchpad.net/networking-hyperv
How to Install
--------------
Run the following command to install the agent on the system:
::
C:\networking-hyperv> python setup.py install
To use the ``neutron-hyperv-agent``, the Neutron Controller will have to be
properly configured. For this, the config option ``core_plugin`` in the
``/etc/neutron/neutron.conf`` file must be set as follows:
::
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
Additionally, ``hyperv`` will have to be added as a mechanism driver in the
``/etc/neutron/plugins/ml2/ml2_conf.ini`` configuration file:
::
mechanism_drivers = openvswitch,hyperv
In order for these changes to take effect, the ``neutron-server`` service will
have to be restarted.
Finally, make sure the ``tenant_network_types`` field contains network types
supported by Hyper-V: local, flat, vlan, gre.
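Taken together, the controller-side configuration described above might look like this (an illustrative sketch; only the option names shown earlier come from this README, and the exact section layout may differ by release):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,hyperv
tenant_network_types = flat,vlan,gre
```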
Tests
-----
You will have to install the test dependencies first to be able to run the
tests.
::
C:\networking-hyperv> pip install -r requirements.txt
C:\networking-hyperv> pip install -r test-requirements.txt
You can run the unit tests with the following command.
::
C:\networking-hyperv> nosetests hyperv\tests
How to contribute
-----------------
To contribute to this project, please go through the following steps.
1. Clone the project and keep your working tree updated.
2. Make modifications on your working tree.
3. Run unit tests.
4. If the tests pass, commit your code.
5. Submit your code via ``git review -v``.
6. Check that Jenkins and the Microsoft Hyper-V CI pass on your patch.
7. If there are issues with your commit, amend, and submit it again via
``git review -v``.
8. Wait for the patch to be reviewed.
Features
--------
* Supports Flat, VLAN, GRE / NVGRE network types.
* Supports Neutron Security Groups.
* Contains ML2 Mechanism Driver.
* Parallel port processing.

@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,74 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'networking-hyperv'
copyright = '2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
'%s Documentation' % project,
'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

@@ -1,25 +0,0 @@
.. networking-hyperv documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to networking-hyperv's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install networking-hyperv
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv networking-hyperv
$ pip install networking-hyperv

@@ -1 +0,0 @@
.. include:: ../../README.rst

@@ -1,7 +0,0 @@
========
Usage
========
To use networking-hyperv in a project::
import hyperv.neutron

@@ -1,110 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
========================================
Hyper-V Neutron Agent NVGRE network type
========================================
https://blueprints.launchpad.net/networking-hyperv/+spec/hyper-v-nvgre
Hyper-V Network Virtualization (HNV) was first introduced in Windows Hyper-V /
Server 2012 and has the purpose of enabling the virtualization of Layer 2 and
Layer 3 networking models. One of the HNV configuration approaches is called
NVGRE (Network Virtualization through GRE). [1]
Problem Description
===================
NVGRE can be used between Windows Hyper-V / Server 2012 and Windows Hyper-V /
Server 2012 R2 VMs, but the usage can be extended to other hypervisors which
support GRE by using OpenVSwitch.
Proposed Change
===============
In order to implement this feature, there are a few considerations that
need to be kept in mind:
* NVGRE does not exist prior to Windows / Hyper-V Server 2012. The
implementation will have to make sure it won't break the Hyper-V Neutron
Agent on a Windows / Hyper-V 2008 R2 compute node.
* HNV is not enabled by default in Windows / Hyper-V Server 2012.
* The vSwitch used for the NVGRE tunneling must have the AllowManagementOS
flag turned off.
* Additional information is needed from Neutron in order for the feature
to behave as expected. In order to retrieve the information, Neutron
credentials are necessary.
* The network's segmentation_id, or its NVGRE equivalent, the VirtualSubnetId,
has to be higher than 4095. Hyper-V cannot create Customer Routes or
Lookup Records if the SegmentationId is lower than or equal to 4095.
* The NVGRE network cannot have a gateway ending in '.1', as Hyper-V does not
allow it. Any other gateway (including networks without a gateway) is
acceptable.
* Only one subnet per network. The reason is that additional Customer Routes
cannot be created for the same VirtualSubnetId. Adding new routes for the
same VirtualSubnetId will cause exceptions.
* Lookup Records should be added for the metadata address (default is
169.254.169.254) in order for instances to properly fetch their metadata.
* Lookup Records should be added for 0.0.0.0, as they are required in order
to receive DHCP offers.
* ProviderAddress, ProviderRoute, CustomerRoute and LookupRecord WMI objects
are not persistent, which means they will not exist after the host restarts.
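As a rough illustration, the VirtualSubnetId and gateway constraints above can be expressed as a small validation helper (hypothetical code, not part of the agent; ``validate_nvgre_network`` is an invented name):

```python
def validate_nvgre_network(segmentation_id, gateway=None):
    """Check the NVGRE constraints listed above; return a list of violations."""
    errors = []
    # Hyper-V cannot create Customer Routes or Lookup Records for a
    # VirtualSubnetId lower than or equal to 4095.
    if segmentation_id <= 4095:
        errors.append("segmentation_id must be higher than 4095")
    # Hyper-V does not allow an NVGRE gateway ending in '.1'; having no
    # gateway at all is acceptable.
    if gateway is not None and gateway.endswith(".1"):
        errors.append("gateway must not end in '.1'")
    return errors

violations = validate_nvgre_network(100, "10.0.0.1")  # both rules violated
```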
Configuration
-------------
A few configuration options can be set in order for the feature to function
properly. These configuration options are to be set in the [NVGRE] section
of the .conf file:
* enable_support (default=False). Enables Hyper-V NVGRE as a network type for
the agent.
* provider_vlan_id (default=0). The VLAN ID set to the physical network.
* provider_tunnel_ip. Specifies the local IP which will be used for NVGRE
tunneling.
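In the agent's configuration file, the options above would be grouped roughly like this (a sketch; only the option names come from this spec, and the IP value is purely illustrative):

```ini
[NVGRE]
enable_support = True
provider_vlan_id = 0
provider_tunnel_ip = 10.0.0.10
```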
Work Items
----------
* NVGRE Utils classes, which use the ``//./root/StandardCimv2`` WMI namespace.
They will be responsible for creating the WMI objects required for the
feature to function properly (ProviderAddress, ProviderRoute, CustomerRoute,
and LookupRecord objects), while considering the limitations described above.
* Create a local database in order to persist the above objects and load them
when the agent starts. The database should be kept clean.
* Create a method to synchronize LookupRecords with other Hyper-V Neutron
Agents that have NVGRE enabled, as they must exist on both ends of the NVGRE
tunnel.
* A class that retrieves the necessary information from Neutron in order to
correctly create the mentioned WMI objects.
* The Hyper-V Neutron Agent should report the following agent configuration, if
NVGRE is supported and enabled:
- ``tunneling_ip``: the host's IP which is used as a ProviderAddress.
- ``tunnel_types``: NVGRE
* The ``HypervMechanismDriver.get_allowed_network_types`` method should check
the agent's reported ``tunnel_types`` and include them in the return value.
* Implement NVGRE network type in Neutron.
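The mechanism driver work item above can be sketched as follows (illustrative only; the real ``HypervMechanismDriver`` in this repo may use different signatures and base type lists):

```python
class HypervMechanismDriverSketch(object):
    """Sketch of extending allowed network types with agent tunnel types."""

    # Network types the driver supports regardless of the agent.
    BASE_NETWORK_TYPES = ["local", "flat", "vlan"]

    def get_allowed_network_types(self, agent=None):
        types = list(self.BASE_NETWORK_TYPES)
        if agent:
            # The agent reports e.g.
            # {'configurations': {'tunnel_types': ['nvgre']}}
            types.extend(
                agent.get("configurations", {}).get("tunnel_types", []))
        return types

driver = HypervMechanismDriverSketch()
```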
References
==========
[1] https://technet.microsoft.com/en-us/library/JJ134174.aspx

@@ -1,252 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Scale Hyper-V Neutron Agent
===========================
https://blueprints.launchpad.net/networking-hyperv/+spec/scale-hyperv-neutron-agent
A typical medium-sized hybrid cloud deployment consists of more than
50 Hyper-V compute nodes along with other compute nodes such as KVM or ESX.
The rate at which VMs are spawned or updated in such a deployment is
around 25 operations/minute, and these operations consist of spawning,
updating, and deleting VMs and their properties (such as security group
rules). At this rate the possibility of concurrent spawn or update operations
on a given compute node is high. What is typically observed is a spawn rate of
~2 VM(s)/minute. Since WMI is not that performant, a VM port binding in the
Hyper-V Neutron agent takes roughly 10x the time of the equivalent KVM
iptables operation. The situation worsens as the number of SG rules to apply
to a given port increases (with the number of SG members) and many ports are
queued for processing. Under such a scenario the Neutron agent running on the
Hyper-V compute node fails to finish binding security rules to the VM port in
time, and the VM remains inaccessible on the allocated IP address.
This blueprint addresses the Neutron Hyper-V Agent's port binding rate by
introducing port binding concurrency.
Problem Description
===================
In an enterprise-class cloud environment, the possibility of a single compute
node receiving more than one VM spawn request grows. It is the nova scheduler
that chooses the compute node on which the VM will be spawned. The Neutron
component on the compute node runs as an independent task which does the
port-related configuration for the spawned VM. Today, the Neutron agent runs
in a single-threaded environment: the main thread is responsible for doing the
port binding (i.e. VLAN configuration and applying port rules) for the spawned
VM and sending the agent keep-alive message to the controller, while green
threads are responsible for processing the port updates (i.e. updating port
acls/rules). The threading mechanism is implemented using Python's green
thread library; a green thread by nature operates in run-until-completion-or-preemption mode,
which means that a green thread will not yield the CPU until it completes its
job or is preempted explicitly.
The above-mentioned nature of green threads impacts Hyper-V scale.
The problem starts when a compute node already has around 15 VMs hosted and a
security group update is in process; at the same time the Neutron agent's
daemon loop wakes up and finds that ports were added for which binding
is pending. Because the update thread is holding the CPU, the port binding
main thread will not get a turn to execute, resulting in delayed port binding.
Since the nova-compute service runs in isolation, independent of Neutron, it
will not wait for Neutron to complete port binding and will power on the VM.
The booted VM will start sending DHCP discovery requests, which ultimately get
dropped, resulting in the VM not getting a DHCP IP address.
The problem becomes worse with a growing number of VMs, because more VMs in
the network mean more time to complete a port update, and the list of added
ports pending binding also grows due to the arrival of new VMs.
Proposed Change
===============
This blueprint proposes a solution to the above-discussed problem in two parts.
**Part 1.** The Hyper-V Neutron Agent and the nova-compute service can be
synchronized, which can help solve the first part of the problem: the VMs
currently start before the Neutron ports are properly bound. By waiting for
the ports to be processed, the VMs will be able to properly acquire the DHCP
replies.
Currently, Neutron generates a `vif plugged` notification when a port has been
reported as `up` (``update_device_up``), which is already done by the Neutron
Hyper-V Agent when it finishes processing a port. The implementation for
waiting for the mentioned notification event in the nova Hyper-V driver will
be addressed by blueprint [1].
**Part 2.** The second part of the proposal is to improve the logic behind
the Neutron Hyper-V Agent's port processing. In this regard, there are a couple
of things that can be done.
**a.** Replace WMI. Performance-wise, WMI is notoriously bad. In order to
address this, the PyMI module has been created and will be used instead [2].
PyMI is a drop-in replacement for WMI, as it maintains the same interface via
its WMI wrapper, meaning that PyMI can be used on any previous, current, and
future branches of networking-hyperv. It has been observed that PyMI
reduces the execution time by roughly 2.0-2.2X compared to the old WMI.
**b.** Implement vNIC creation / deletion event listeners. Currently, the
agent periodically polls for all the present vNICs on the host (which can be
an expensive operation when there are hundreds of vNICs) and then queries the
Neutron server for port details for all of them. This is repeated if port
binding fails for even one of them.
By implementing the vNIC creation / deletion event listeners, querying all the
vNICs is no longer necessary. Furthermore, the Neutron server will not have to
be queried for all of the vNICs when a single one of them failed to be bound,
reducing the load on the Neutron server.
**c.** Parallel port binding. Currently, the ports are processed
sequentially. Processing them in parallel can lead to a performance boost.
Plus, PyMI was built with parallelism in mind, as opposed to the old
WMI, meaning that the performance gain from using both PyMI and parallel port
binding will be even greater.
We will be using native threads for the purpose of port binding, as they can
span multiple processors (green threads do not). On a host with 32 cores,
using 10 native threads as workers plus PyMI performs ~6X better than the
previous single-threaded processing using PyMI, leading to a total ~12X
improvement over single-threaded processing using WMI.
It is worth noting that there is only a very small performance gain (~5%)
between 10 and 20 native thread workers. As a recommendation, for the best
experience the number of workers should be set between 10 and 15, or to the
number of cores on the host, whichever is lowest.
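The native-thread worker approach described above can be sketched with Python's standard thread pool (a minimal illustration; ``bind_port`` is an invented name standing in for the real WMI/PyMI binding work):

```python
import concurrent.futures


def bind_port(port_id):
    # Placeholder for the real binding work (VLAN setup, security group
    # rules via PyMI); here it just returns a marker for the bound port.
    return "bound-%s" % port_id


def process_ports(port_ids, worker_count=10):
    # Native threads (unlike green threads) can run on multiple cores;
    # worker_count=10 mirrors the recommended default discussed above.
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=worker_count) as pool:
        return list(pool.map(bind_port, port_ids))


results = process_ports(["port-a", "port-b", "port-c"])
```

``ThreadPoolExecutor.map`` preserves the input order, so results line up with the submitted ports even though they are bound concurrently.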
Data Model Impact
-----------------
None
REST API Impact
---------------
None
Security Impact
---------------
None
Notifications Impact
--------------------
None
Other End User Impact
---------------------
None
Performance Impact
------------------
This blueprint will improve the Hyper-V neutron agent performance.
IPv6 Impact
-----------
None
Other Deployer Impact
---------------------
The number of native thread workers can be set via the ``worker_count``
configuration option in ``neutron-hyperv-agent.conf``. By default, it is set
to 10.
Developer Impact
----------------
None
Community Impact
----------------
Scaling OpenStack Neutron is always a challenge, and this change will allow
Hyper-V Neutron to scale to around 1000 VMs with 10 tenants.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<cbelu@cloudbasesolutions.com>
Other contributors:
<sonu.sudhakaran@hp.com>
<vinod.kumar5@hp.com>
<krishna.kanth-mallela@hp.com>
Work Items
----------
* Implementing vNIC creation / deletion event listeners.
* Implementing Native Thread workers.
* Writing unit tests.
* Functionality testing.
* Scale testing.
Dependencies
============
* Nova to process neutron vif notification.
Testing
=======
The changes will be tested by deploying a cloud with around 20 compute nodes
and spawning 1000 VMs at a concurrency of 6 VMs per minute for the overall
cloud, with 10 tenants, each having their own network.
Tempest Tests
-------------
TBD
Functional Tests
----------------
TBD
API Tests
---------
None
Documentation Impact
====================
None
User Documentation
------------------
Nova boot time may increase due to the Neutron-to-Nova notification; the
delay can be seen when there is a large number of security group rules
associated with a port.
Developer Documentation
-----------------------
None
References
==========
[1] Hyper-V Spawn on Neutron Event nova blueprint:
https://blueprints.launchpad.net/nova/+spec/hyper-v-spawn-on-neutron-event
[2] PyMI github repository:
https://github.com/cloudbase/PyMI/

@@ -1,15 +0,0 @@
# Copyright (c) 2016 Cloudbase Solutions Srl
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__import__('pkg_resources').declare_namespace(__name__)

@@ -1,30 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='neutron')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical


@ -1,25 +0,0 @@
# Copyright (c) 2015 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
# eventlet monkey patching the os modules causes subprocess.Popen to fail
# on Windows when using pipes due to missing non-blocking IO support.
#
# bug report on eventlet:
# https://bitbucket.org/eventlet/eventlet/issue/132/
# eventletmonkey_patch-breaks
eventlet.monkey_patch(os=False)


@ -1,38 +0,0 @@
# Copyright 2013 Cloudbase Solutions SRL
# Copyright 2013 Pedro Navarro Perez
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import inspect
from oslo_concurrency import lockutils
def get_port_synchronized_decorator(lock_prefix):
synchronized = lockutils.synchronized_with_prefix(lock_prefix)
def _port_synchronized(f):
# This decorator synchronizes operations targeting the same port.
# The decorated method is expected to accept the port_id argument.
def wrapper(*args, **kwargs):
call_args = inspect.getcallargs(f, *args, **kwargs)
port_id = (call_args.get('port_id') or
call_args.get('port', {}).get('id'))
lock_name = lock_prefix + ('port-lock-%s' % port_id)
@synchronized(lock_name)
def inner():
return f(*args, **kwargs)
return inner()
return wrapper
return _port_synchronized
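As an illustration, here is a minimal, dependency-free sketch of the same per-port locking pattern (names such as `port_synchronized` and `bind_port` are hypothetical; the real code builds on `oslo_concurrency.lockutils` and extracts `port_id` via `inspect.getcallargs`):

```python
import threading

# Hypothetical stand-in for lockutils: one shared lock per lock name.
_locks = {}

def _get_lock(name):
    # dict.setdefault is atomic under CPython, so all callers of the
    # same name end up sharing a single Lock instance.
    return _locks.setdefault(name, threading.Lock())

def port_synchronized(f):
    # Serializes operations targeting the same port: the lock name is
    # derived from the port_id argument, mirroring the decorator above.
    def wrapper(port_id, *args, **kwargs):
        with _get_lock('port-lock-%s' % port_id):
            return f(port_id, *args, **kwargs)
    return wrapper

@port_synchronized
def bind_port(port_id):
    return 'bound %s' % port_id
```

Calls for different ports proceed concurrently; two calls for the same port serialize on the same lock.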


@ -1,135 +0,0 @@
# Copyright 2017 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""This module contains the contract class for each agent."""
import abc
import threading
import time
from neutron.common import topics
from neutron_lib import context as neutron_context
from os_win import utilsfactory
from oslo_log import log as logging
import oslo_messaging
import six
from hyperv.common.i18n import _LE # noqa
from hyperv.neutron import config
LOG = logging.getLogger(__name__)
CONF = config.CONF
@six.add_metaclass(abc.ABCMeta)
class BaseAgent(object):
"""Contract class for all the neutron agents."""
_AGENT_BINARY = None
_AGENT_TYPE = None
_AGENT_TOPIC = None
target = oslo_messaging.Target(version='1.3')
def __init__(self):
"""Initializes the local configuration of the current agent."""
self._agent_id = None
self._topic = topics.AGENT
self._cache_lock = threading.Lock()
self._refresh_cache = False
self._host = CONF.get("host")
self._agent_state = {}
self._context = neutron_context.get_admin_context_without_session()
self._utils = utilsfactory.get_networkutils()
self._utils.init_caches()
# The following attributes will be initialized by the
# `_setup_rpc` method.
self._client = None
self._connection = None
self._endpoints = []
self._plugin_rpc = None
self._sg_plugin_rpc = None
self._state_rpc = None
agent_config = CONF.get("AGENT", {})
self._polling_interval = agent_config.get('polling_interval', 2)
@abc.abstractmethod
def _get_agent_configurations(self):
"""Get configurations for the current agent."""
pass
def _set_agent_state(self):
"""Set the state for the agent."""
self._agent_state = {
'agent_type': self._AGENT_TYPE,
'binary': self._AGENT_BINARY,
'configurations': self._get_agent_configurations(),
'host': self._host,
'start_flag': True,
'topic': self._AGENT_TOPIC,
}
@abc.abstractmethod
def _setup_rpc(self):
"""Setup the RPC client for the current agent."""
pass
@abc.abstractmethod
def _work(self):
"""Override this with your desired procedures."""
pass
def _prologue(self):
"""Executed once before the daemon loop."""
pass
def daemon_loop(self):
"""Process all the available ports."""
self._prologue()
while True:
start = time.time()
try:
self._work()
except Exception:
LOG.exception(_LE("Error in agent event loop"))
# Sleep until the end of polling interval
elapsed = (time.time() - start)
if elapsed < self._polling_interval:
time.sleep(self._polling_interval - elapsed)
else:
LOG.debug("Loop iteration exceeded interval "
"(%(polling_interval)s vs. %(elapsed)s)",
{'polling_interval': self._polling_interval,
'elapsed': elapsed})
def _report_state(self):
try:
self._state_rpc.report_state(self._context,
self._agent_state)
self._agent_state.pop('start_flag', None)
except Exception:
LOG.exception(_LE("Failed reporting state!"))
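The sleep accounting in `daemon_loop` reduces to "sleep only for the remainder of the polling interval, or not at all when the work overran it". A standalone sketch of that calculation (the helper name is hypothetical):

```python
def remaining_sleep(polling_interval, elapsed):
    """Return how long the loop should sleep so that each iteration
    starts roughly polling_interval seconds after the previous one.
    A result of zero means the iteration exceeded the interval, which
    the agent logs as a debug message instead of sleeping."""
    return max(0.0, polling_interval - elapsed)
```

With the default `polling_interval` of 2 seconds, an iteration that took 0.5 s sleeps 1.5 s, and one that took 3 s does not sleep.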


@ -1,234 +0,0 @@
# Copyright 2017 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import hashlib
import hmac
import sys
import httplib2
from neutron.agent import rpc as agent_rpc
from neutron.common import config as common_config
from neutron.common import topics
from neutron.conf.agent import common as neutron_config
from neutron.conf.agent.metadata import config as meta_config
from neutron import wsgi
from neutron_lib import constants
from neutron_lib import context
from oslo_log import log as logging
from oslo_service import loopingcall
from oslo_utils import encodeutils
from oslo_utils import uuidutils
import six
import six.moves.urllib.parse as urlparse
import webob
from hyperv.common.i18n import _, _LW, _LE # noqa
from hyperv.neutron.agent import base as base_agent
from hyperv.neutron import config
from hyperv.neutron import neutron_client
CONF = config.CONF
LOG = logging.getLogger(__name__)
class _MetadataProxyHandler(object):
def __init__(self):
self._context = context.get_admin_context_without_session()
self._neutron_client = neutron_client.NeutronAPIClient()
@webob.dec.wsgify(RequestClass=webob.Request)
def __call__(self, req):
try:
return self._proxy_request(req)
except Exception:
LOG.exception(_LE("Unexpected error."))
msg = _('An unknown error has occurred. '
'Please try your request again.')
explanation = six.text_type(msg)
return webob.exc.HTTPInternalServerError(explanation=explanation)
def _get_port_profile_id(self, request):
"""Get the port profile ID from the request path."""
# Note(alexcoman): The port profile ID can be found as suffix
# in request path.
port_profile_id = request.path.split("/")[-1].strip()
if uuidutils.is_uuid_like(port_profile_id):
LOG.debug("The instance id was found in request path.")
return port_profile_id
LOG.debug("Failed to get the instance id from the request.")
return None
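The lookup above is simply "take the last path segment if it looks like a UUID". A stdlib-only sketch (the function name is hypothetical; the real code uses `oslo_utils.uuidutils.is_uuid_like`, which accepts a few more spellings):

```python
import uuid

def get_port_profile_id(path):
    # Mirrors _get_port_profile_id: the port profile ID, when present,
    # is the last segment of the request path.
    candidate = path.split('/')[-1].strip()
    try:
        uuid.UUID(candidate)
    except ValueError:
        return None
    return candidate
```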
def _get_instance_id(self, port_profile_id):
tenant_id = None
instance_id = None
ports = self._neutron_client.get_network_ports()
for port in ports:
vif_details = port.get("binding:vif_details", {})
profile_id = vif_details.get("port_profile_id")
if profile_id and profile_id == port_profile_id:
tenant_id = port["tenant_id"]
# Note(alexcoman): The port["device_id"] is actually the
# Nova instance_id.
instance_id = port["device_id"]
break
else:
LOG.debug("Failed to get the port information.")
return tenant_id, instance_id
def _sign_instance_id(self, instance_id):
secret = CONF.metadata_proxy_shared_secret
secret = encodeutils.to_utf8(secret)
instance_id = encodeutils.to_utf8(instance_id)
return hmac.new(secret, instance_id, hashlib.sha256).hexdigest()
def _get_headers(self, port_profile_id):
tenant_id, instance_id = self._get_instance_id(port_profile_id)
if not (tenant_id and instance_id):
return None
headers = {
'X-Instance-ID': instance_id,
'X-Tenant-ID': tenant_id,
'X-Instance-ID-Signature': self._sign_instance_id(instance_id),
}
return headers
def _proxy_request(self, request):
LOG.debug("Request: %s", request)
port_profile_id = self._get_port_profile_id(request)
if not port_profile_id:
return webob.exc.HTTPNotFound()
headers = self._get_headers(port_profile_id)
if not headers:
return webob.exc.HTTPNotFound()
LOG.debug("Trying to proxy the request.")
nova_url = '%s:%s' % (CONF.nova_metadata_host,
CONF.nova_metadata_port)
allow_insecure = CONF.nova_metadata_insecure
http_request = httplib2.Http(
ca_certs=CONF.auth_ca_cert,
disable_ssl_certificate_validation=allow_insecure
)
if CONF.nova_client_cert and CONF.nova_client_priv_key:
http_request.add_certificate(
key=CONF.nova_client_priv_key,
cert=CONF.nova_client_cert,
domain=nova_url)
url = urlparse.urlunsplit((
CONF.nova_metadata_protocol, nova_url,
request.path_info, request.query_string, ''))
response, content = http_request.request(
url.replace(port_profile_id, ""),
method=request.method, headers=headers,
body=request.body)
LOG.debug("Response [%s]: %s", response.status, content)
if response.status == 200:
request.response.content_type = response['content-type']
request.response.body = content
return request.response
elif response.status == 403:
LOG.warning(_LW(
'The remote metadata server responded with Forbidden. This '
'response usually occurs when shared secrets do not match.'
))
return webob.exc.HTTPForbidden()
elif response.status == 400:
return webob.exc.HTTPBadRequest()
elif response.status == 404:
return webob.exc.HTTPNotFound()
elif response.status == 409:
return webob.exc.HTTPConflict()
elif response.status == 500:
message = _(
"Remote metadata server experienced an internal server error."
)
LOG.warning(message)
return webob.exc.HTTPInternalServerError(explanation=message)
else:
message = _("The HNV Metadata proxy experienced an internal"
" server error.")
LOG.warning(_('Unexpected response code: %s') % response.status)
return webob.exc.HTTPInternalServerError(explanation=message)
class MetadataProxy(base_agent.BaseAgent):
_AGENT_BINARY = 'neutron-hnv-metadata-proxy'
_AGENT_TYPE = constants.AGENT_TYPE_METADATA
_AGENT_TOPIC = 'N/A'
def __init__(self):
super(MetadataProxy, self).__init__()
self._set_agent_state()
self._setup_rpc()
def _setup_rpc(self):
"""Setup the RPC client for the current agent."""
self._state_rpc = agent_rpc.PluginReportStateAPI(topics.REPORTS)
report_interval = CONF.AGENT.report_interval
if report_interval:
heartbeat = loopingcall.FixedIntervalLoopingCall(
self._report_state)
heartbeat.start(interval=report_interval)
def _get_agent_configurations(self):
return {
'nova_metadata_ip': CONF.nova_metadata_host,
'nova_metadata_port': CONF.nova_metadata_port,
'log_agent_heartbeats': CONF.AGENT.log_agent_heartbeats,
}
def _work(self):
"""Start the neutron-hnv-metadata-proxy agent."""
server = wsgi.Server(
name=self._AGENT_BINARY,
num_threads=CONF.AGENT.worker_count)
server.start(
application=_MetadataProxyHandler(),
port=CONF.bind_port,
host=CONF.bind_host)
server.wait()
def run(self):
self._prologue()
try:
self._work()
except Exception:
LOG.exception(_LE("Error in agent."))
def register_config_opts():
neutron_config.register_agent_state_opts_helper(CONF)
meta_config.register_meta_conf_opts(
meta_config.METADATA_PROXY_HANDLER_OPTS)
def main():
"""The entry point for neutron-hnv-metadata-proxy."""
register_config_opts()
common_config.init(sys.argv[1:])
neutron_config.setup_logging()
proxy = MetadataProxy()
proxy.run()
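The `X-Instance-ID-Signature` header computed by `_sign_instance_id` can be reproduced, and verified on the receiving side, with only the standard library. A sketch with hypothetical function names:

```python
import hashlib
import hmac

def sign_instance_id(shared_secret, instance_id):
    # Same scheme as _sign_instance_id: HMAC-SHA256 over the instance
    # ID, keyed with metadata_proxy_shared_secret, hex-encoded.
    return hmac.new(shared_secret.encode('utf-8'),
                    instance_id.encode('utf-8'),
                    hashlib.sha256).hexdigest()

def verify_signature(shared_secret, instance_id, signature):
    # Constant-time comparison, to avoid leaking the signature through
    # timing differences when the shared secrets do not match.
    return hmac.compare_digest(
        sign_instance_id(shared_secret, instance_id), signature)
```

A mismatched secret on either side yields the 403 Forbidden response handled in `_proxy_request` above.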


@ -1,115 +0,0 @@
# Copyright 2017 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""This module contains the L2 Agent needed for HNV."""
import platform
import sys
from neutron.common import config as common_config
from neutron.conf.agent import common as neutron_config
from oslo_log import log as logging
from hyperv.common.i18n import _LI # noqa
from hyperv.neutron import _common_utils as c_util
from hyperv.neutron.agent import layer2 as hyperv_base
from hyperv.neutron import config
from hyperv.neutron import constants as h_const
from hyperv.neutron import neutron_client
LOG = logging.getLogger(__name__)
CONF = config.CONF
_port_synchronized = c_util.get_port_synchronized_decorator('n-hv-agent-')
class HNVAgent(hyperv_base.Layer2Agent):
_AGENT_BINARY = "neutron-hnv-agent"
_AGENT_TYPE = h_const.AGENT_TYPE_HNV
def __init__(self):
super(HNVAgent, self).__init__()
# Handle updates from service
self._agent_id = 'hnv_%s' % platform.node()
self._neutron_client = neutron_client.NeutronAPIClient()
def _get_agent_configurations(self):
return {
'logical_network': CONF.HNV.logical_network,
'vswitch_mappings': self._physical_network_mappings,
'devices': 1,
'l2_population': False,
'tunnel_types': [],
'bridge_mappings': {},
'enable_distributed_routing': False,
}
def _provision_network(self, port_id, net_uuid, network_type,
physical_network, segmentation_id):
"""Provision the network with the received information."""
LOG.info(_LI("Provisioning network %s"), net_uuid)
vswitch_name = self._get_vswitch_name(network_type, physical_network)
vswitch_map = {
'network_type': network_type,
'vswitch_name': vswitch_name,
'ports': [],
'vlan_id': segmentation_id}
self._network_vswitch_map[net_uuid] = vswitch_map
def _port_bound(self, port_id, network_id, network_type, physical_network,
segmentation_id):
"""Bind the port to the received network."""
super(HNVAgent, self)._port_bound(port_id, network_id, network_type,
physical_network, segmentation_id)
LOG.debug("Getting the profile id for the current port.")
profile_id = self._neutron_client.get_port_profile_id(port_id)
LOG.debug("Trying to set port profile id %r for the current port %r.",
profile_id, port_id)
self._utils.set_vswitch_port_profile_id(
switch_port_name=port_id,
profile_id=profile_id,
profile_data=h_const.PROFILE_DATA,
profile_name=h_const.PROFILE_NAME,
net_cfg_instance_id=h_const.NET_CFG_INSTANCE_ID,
cdn_label_id=h_const.CDN_LABEL_ID,
cdn_label_string=h_const.CDN_LABEL_STRING,
vendor_id=h_const.VENDOR_ID,
vendor_name=h_const.VENDOR_NAME)
@_port_synchronized
def _treat_vif_port(self, port_id, network_id, network_type,
physical_network, segmentation_id,
admin_state_up):
if admin_state_up:
self._port_bound(port_id, network_id, network_type,
physical_network, segmentation_id)
else:
self._port_unbound(port_id)
def main():
"""The entry point for the HNV Agent."""
neutron_config.register_agent_state_opts_helper(CONF)
common_config.init(sys.argv[1:])
neutron_config.setup_logging()
hnv_agent = HNVAgent()
# Start everything.
LOG.info(_LI("Agent initialized successfully, now running... "))
hnv_agent.daemon_loop()


@ -1,303 +0,0 @@
# Copyright 2013 Cloudbase Solutions SRL
# Copyright 2013 Pedro Navarro Perez
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import platform
import sys
from neutron.agent.l2.extensions import qos as qos_extension
from neutron.agent import rpc as agent_rpc
from neutron.agent import securitygroups_rpc as sg_rpc
from neutron.common import config as common_config
from neutron.common import topics
from neutron.conf.agent import common as neutron_config
from os_win import exceptions
from os_win import utilsfactory
from oslo_log import log as logging
import oslo_messaging
from hyperv.common.i18n import _, _LI, _LW, _LE # noqa
from hyperv.neutron import _common_utils as c_util
from hyperv.neutron.agent import layer2 as hyperv_base
from hyperv.neutron import config
from hyperv.neutron import constants as h_constant
from hyperv.neutron import exception
from hyperv.neutron import nvgre_ops
from hyperv.neutron import trunk_driver
CONF = config.CONF
LOG = logging.getLogger(__name__)
_port_synchronized = c_util.get_port_synchronized_decorator('n-hv-agent-')
class HyperVSecurityAgent(sg_rpc.SecurityGroupAgentRpc):
def __init__(self, context, plugin_rpc):
super(HyperVSecurityAgent, self).__init__(context, plugin_rpc)
if sg_rpc.is_firewall_enabled():
self._setup_rpc()
@property
def use_enhanced_rpc(self):
return True
def _setup_rpc(self):
self.topic = topics.AGENT
self.endpoints = [HyperVSecurityCallbackMixin(self)]
consumers = [[topics.SECURITY_GROUP, topics.UPDATE]]
self.connection = agent_rpc.create_consumers(self.endpoints,
self.topic,
consumers)
class HyperVSecurityCallbackMixin(sg_rpc.SecurityGroupAgentRpcCallbackMixin):
target = oslo_messaging.Target(version='1.3')
def __init__(self, sg_agent):
super(HyperVSecurityCallbackMixin, self).__init__()
self.sg_agent = sg_agent
class HyperVNeutronAgent(hyperv_base.Layer2Agent):
_AGENT_BINARY = "neutron-hyperv-agent"
_AGENT_TYPE = h_constant.AGENT_TYPE_HYPERV
def __init__(self):
super(HyperVNeutronAgent, self).__init__()
self._agent_id = 'hyperv_%s' % platform.node()
self._qos_ext = None
self._nvgre_enabled = False
self._metricsutils = utilsfactory.get_metricsutils()
self._port_metric_retries = {}
agent_conf = CONF.get('AGENT', {})
security_conf = CONF.get('SECURITYGROUP', {})
self._enable_metrics_collection = agent_conf.get(
'enable_metrics_collection', False)
self._metrics_max_retries = agent_conf.get('metrics_max_retries', 100)
self._enable_security_groups = security_conf.get(
'enable_security_group', False)
self._init_nvgre()
def _get_agent_configurations(self):
configurations = {'vswitch_mappings': self._physical_network_mappings}
if CONF.NVGRE.enable_support:
configurations['arp_responder_enabled'] = False
configurations['tunneling_ip'] = CONF.NVGRE.provider_tunnel_ip
configurations['devices'] = 1
configurations['l2_population'] = False
configurations['tunnel_types'] = [h_constant.TYPE_NVGRE]
configurations['enable_distributed_routing'] = False
configurations['bridge_mappings'] = {}
return configurations
def _setup(self):
"""Setup the layer two agent."""
super(HyperVNeutronAgent, self)._setup()
self._sg_plugin_rpc = sg_rpc.SecurityGroupServerRpcApi(topics.PLUGIN)
self._sec_groups_agent = HyperVSecurityAgent(self._context,
self._sg_plugin_rpc)
self._vlan_driver = trunk_driver.HyperVTrunkDriver(self._context)
if CONF.NVGRE.enable_support:
self._consumers.append([h_constant.TUNNEL, topics.UPDATE])
self._consumers.append([h_constant.LOOKUP, h_constant.UPDATE])
def _setup_qos_extension(self):
"""Setup the QOS extension if it is required."""
if not CONF.AGENT.enable_qos_extension:
return
self._qos_ext = qos_extension.QosAgentExtension()
self._qos_ext.consume_api(self)
self._qos_ext.initialize(self._connection, 'hyperv')
def _init_nvgre(self):
# If NVGRE is enabled, self._nvgre_ops is required in order to properly
# set the agent state (see the _get_agent_configurations method).
if not CONF.NVGRE.enable_support:
return
if not CONF.NVGRE.provider_tunnel_ip:
err_msg = _('enable_nvgre_support is set to True, but '
'provider tunnel IP is not configured. '
'Check neutron.conf config file.')
LOG.error(err_msg)
raise exception.NetworkingHyperVException(err_msg)
self._nvgre_enabled = True
self._nvgre_ops = nvgre_ops.HyperVNvgreOps(
list(self._physical_network_mappings.values()))
self._nvgre_ops.init_notifier(self._context, self._client)
self._nvgre_ops.tunnel_update(self._context,
CONF.NVGRE.provider_tunnel_ip,
h_constant.TYPE_NVGRE)
def _provision_network(self, port_id, net_uuid, network_type,
physical_network, segmentation_id):
"""Provision the network with the received information."""
LOG.info(_LI("Provisioning network %s"), net_uuid)
vswitch_name = self._get_vswitch_name(network_type, physical_network)
if network_type == h_constant.TYPE_VLAN:
# Nothing to do
pass
elif network_type == h_constant.TYPE_FLAT:
# Nothing to do
pass
elif network_type == h_constant.TYPE_LOCAL:
# TODO(alexpilotti): Check that the switch type is private
# or create it if not existing.
pass
elif network_type == h_constant.TYPE_NVGRE and self._nvgre_enabled:
self._nvgre_ops.bind_nvgre_network(segmentation_id, net_uuid,
vswitch_name)
else:
raise exception.NetworkingHyperVException(
(_("Cannot provision unknown network type "
"%(network_type)s for network %(net_uuid)s") %
dict(network_type=network_type, net_uuid=net_uuid)))
vswitch_map = {
'network_type': network_type,
'vswitch_name': vswitch_name,
'ports': [],
'vlan_id': segmentation_id}
self._network_vswitch_map[net_uuid] = vswitch_map
def _port_bound(self, port_id, network_id, network_type, physical_network,
segmentation_id):
"""Bind the port to the received network."""
super(HyperVNeutronAgent, self)._port_bound(
port_id, network_id, network_type, physical_network,
segmentation_id
)
vswitch_map = self._network_vswitch_map[network_id]
if network_type == h_constant.TYPE_VLAN:
self._vlan_driver.bind_vlan_port(port_id, segmentation_id)
elif network_type == h_constant.TYPE_NVGRE and self._nvgre_enabled:
self._nvgre_ops.bind_nvgre_port(
segmentation_id, vswitch_map['vswitch_name'], port_id)
elif network_type == h_constant.TYPE_FLAT:
pass # Nothing to do
elif network_type == h_constant.TYPE_LOCAL:
pass # Nothing to do
else:
LOG.error(_LE('Unsupported network type %s'), network_type)
if self._enable_metrics_collection:
self._utils.add_metrics_collection_acls(port_id)
self._port_metric_retries[port_id] = self._metrics_max_retries
def _port_enable_control_metrics(self):
if not self._enable_metrics_collection:
return
for port_id in list(self._port_metric_retries.keys()):
try:
if self._utils.is_metrics_collection_allowed(port_id):
self._metricsutils.enable_port_metrics_collection(port_id)
LOG.info(_LI('Port metrics enabled for port: %s'), port_id)
del self._port_metric_retries[port_id]
elif self._port_metric_retries[port_id] < 1:
self._metricsutils.enable_port_metrics_collection(port_id)
LOG.error(_LE('Port metrics raw enabling for port: %s'),
port_id)
del self._port_metric_retries[port_id]
else:
self._port_metric_retries[port_id] -= 1
except exceptions.NotFound:
# the vNIC no longer exists. it might have been removed or
# the VM it was attached to was destroyed.
LOG.warning(_LW("Port %s no longer exists. Cannot enable "
"metrics."), port_id)
del self._port_metric_retries[port_id]
@_port_synchronized
def _treat_vif_port(self, port_id, network_id, network_type,
physical_network, segmentation_id,
admin_state_up):
if admin_state_up:
self._port_bound(port_id, network_id, network_type,
physical_network, segmentation_id)
# check if security groups is enabled.
# if not, teardown the security group rules
if self._enable_security_groups:
self._sec_groups_agent.refresh_firewall([port_id])
else:
self._utils.remove_all_security_rules(port_id)
else:
self._port_unbound(port_id)
self._sec_groups_agent.remove_devices_filter([port_id])
def _process_added_port(self, device_details):
super(HyperVNeutronAgent, self)._process_added_port(
device_details)
if CONF.AGENT.enable_qos_extension:
self._qos_ext.handle_port(self._context, device_details)
def _process_removed_port(self, device):
super(HyperVNeutronAgent, self)._process_removed_port(device)
try:
self._sec_groups_agent.remove_devices_filter([device])
except Exception:
LOG.exception(_LE("Exception encountered while processing"
" port %s."), device)
# Re-add the port as "removed", so it can be reprocessed.
self._removed_ports.add(device)
raise
def _work(self):
"""Process the information regarding the available ports."""
super(HyperVNeutronAgent, self)._work()
if self._nvgre_enabled:
self._nvgre_ops.refresh_nvgre_records()
self._port_enable_control_metrics()
def tunnel_update(self, context, **kwargs):
LOG.info(_LI('tunnel_update received: kwargs: %s'), kwargs)
tunnel_ip = kwargs.get('tunnel_ip')
if tunnel_ip == CONF.NVGRE.provider_tunnel_ip:
# the notification should be ignored if it originates from this
# node.
return
tunnel_type = kwargs.get('tunnel_type')
self._nvgre_ops.tunnel_update(context, tunnel_ip, tunnel_type)
def lookup_update(self, context, **kwargs):
self._nvgre_ops.lookup_update(kwargs)
def main():
"""The entry point for the Hyper-V Neutron Agent."""
neutron_config.register_agent_state_opts_helper(CONF)
common_config.init(sys.argv[1:])
neutron_config.setup_logging()
hyperv_agent = HyperVNeutronAgent()
# Start everything.
LOG.info(_LI("Agent initialized successfully, now running... "))
hyperv_agent.daemon_loop()
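The retry bookkeeping in `_port_enable_control_metrics` boils down to a per-port countdown. A simplified, hypothetical sketch of one iteration for a single port (the real code also enables the metrics via `metricsutils` and handles vanished vNICs):

```python
def step_metrics(port_metric_retries, port_id, allowed):
    """One _port_enable_control_metrics step for one port: enable and
    forget the port when collection is allowed or retries ran out,
    otherwise decrement the counter and revisit it next loop."""
    retries = port_metric_retries[port_id]
    if allowed or retries < 1:
        del port_metric_retries[port_id]
        return 'enabled'
    port_metric_retries[port_id] = retries - 1
    return 'pending'
```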


@ -1,404 +0,0 @@
# Copyright 2017 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""This module contains all the available contract classes."""
import abc
import collections
import re
import eventlet
from eventlet import tpool
from neutron.agent import rpc as agent_rpc
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron_lib import constants as n_const
from os_win import exceptions as os_win_exc
from oslo_concurrency import lockutils
from oslo_log import log as logging
from oslo_service import loopingcall
import six
from hyperv.common.i18n import _, _LI, _LE # noqa
from hyperv.neutron.agent import base as base_agent
from hyperv.neutron import config
from hyperv.neutron import constants
LOG = logging.getLogger(__name__)
CONF = config.CONF
_synchronized = lockutils.synchronized_with_prefix('n-hv-agent-')
class Layer2Agent(base_agent.BaseAgent):
"""Contract class for all the layer two agents."""
_AGENT_TOPIC = n_const.L2_AGENT_TOPIC
def __init__(self):
super(Layer2Agent, self).__init__()
self._network_vswitch_map = {}
# The following sets contain ports that are to be processed.
self._added_ports = set()
self._removed_ports = set()
# The following sets contain ports that have been processed.
self._bound_ports = set()
self._unbound_ports = set()
self._physical_network_mappings = collections.OrderedDict()
self._consumers = []
self._event_callback_pairs = []
# Setup the current agent.
self._setup()
self._set_agent_state()
self._setup_rpc()
def _setup(self):
"""Setup the layer two agent."""
agent_config = CONF.get("AGENT", {})
self._worker_count = agent_config.get('worker_count')
self._phys_net_map = agent_config.get(
'physical_network_vswitch_mappings', [])
self._local_network_vswitch = agent_config.get(
'local_network_vswitch', 'private')
self._load_physical_network_mappings(self._phys_net_map)
self._endpoints.append(self)
self._event_callback_pairs.extend([
(self._utils.EVENT_TYPE_CREATE, self._process_added_port_event),
(self._utils.EVENT_TYPE_DELETE, self._process_removed_port_event)
])
tpool.set_num_threads(self._worker_count)
def _setup_qos_extension(self):
"""Setup the QOS extension if it is required."""
pass
def _setup_rpc(self):
"""Setup the RPC client for the current agent."""
self._plugin_rpc = agent_rpc.PluginApi(topics.PLUGIN)
self._state_rpc = agent_rpc.PluginReportStateAPI(topics.PLUGIN)
self._client = n_rpc.get_client(self.target)
self._consumers.extend([
[topics.PORT, topics.UPDATE], [topics.NETWORK, topics.DELETE],
[topics.PORT, topics.DELETE]
])
self.connection = agent_rpc.create_consumers(
self._endpoints, self._topic, self._consumers,
start_listening=False
)
self._setup_qos_extension()
self.connection.consume_in_threads()
report_interval = CONF.AGENT.report_interval
if report_interval:
heartbeat = loopingcall.FixedIntervalLoopingCall(
self._report_state)
heartbeat.start(interval=report_interval)
def _process_added_port_event(self, port_name):
"""Callback for added ports."""
LOG.info(_LI("Hyper-V VM vNIC added: %s"), port_name)
self._added_ports.add(port_name)
def _process_removed_port_event(self, port_name):
LOG.info(_LI("Hyper-V VM vNIC removed: %s"), port_name)
self._removed_ports.add(port_name)
def _load_physical_network_mappings(self, phys_net_vswitch_mappings):
"""Load all the information regarding the physical network."""
for mapping in phys_net_vswitch_mappings:
parts = mapping.split(':')
if len(parts) != 2:
LOG.debug('Invalid physical network mapping: %s', mapping)
else:
pattern = re.escape(parts[0].strip()).replace('\\*', '.*')
pattern = pattern + '$'
vswitch = parts[1].strip()
self._physical_network_mappings[pattern] = vswitch
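Each `physical_network_vswitch_mappings` entry has the form `<physical network pattern>:<vswitch name>`, where `*` acts as a wildcard. A standalone sketch of the translation and lookup performed here (function names hypothetical):

```python
import re
from collections import OrderedDict

def load_mappings(entries):
    # '*' in the physical network name becomes '.*', everything else is
    # escaped literally, and the pattern is anchored at the end.
    mappings = OrderedDict()
    for entry in entries:
        parts = entry.split(':')
        if len(parts) != 2:
            continue  # the agent logs and skips invalid entries
        pattern = re.escape(parts[0].strip()).replace('\\*', '.*') + '$'
        mappings[pattern] = parts[1].strip()
    return mappings

def lookup(mappings, phys_net):
    # First matching pattern wins; with no match, fall back to a
    # vswitch named after the physical network itself.
    for pattern, vswitch in mappings.items():
        if re.match(pattern, phys_net or ''):
            return vswitch
    return phys_net
```

For example, the mapping `physnet*:external` sends both `physnet1` and `physnet2` to the `external` vswitch.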
def _get_vswitch_name(self, network_type, physical_network):
"""Get the vswitch name for the received network information."""
if network_type != constants.TYPE_LOCAL:
vswitch_name = self._get_vswitch_for_physical_network(
physical_network)
else:
vswitch_name = self._local_network_vswitch
return vswitch_name
def _get_vswitch_for_physical_network(self, phys_network_name):
"""Get the vswitch name for the received network name."""
for pattern in self._physical_network_mappings:
if phys_network_name is None:
phys_network_name = ''
if re.match(pattern, phys_network_name):
return self._physical_network_mappings[pattern]
# Not found in the mappings, the vswitch has the same name
return phys_network_name
def _get_network_vswitch_map_by_port_id(self, port_id):
"""Get the vswitch name for the received port id."""
for network_id, vswitch in six.iteritems(self._network_vswitch_map):
if port_id in vswitch['ports']:
return (network_id, vswitch)
# If the port was not found, just return (None, None)
return (None, None)
def _update_port_status_cache(self, device, device_bound=True):
"""Update the ports status cache."""
with self._cache_lock:
if device_bound:
self._bound_ports.add(device)
self._unbound_ports.discard(device)
else:
self._bound_ports.discard(device)
self._unbound_ports.add(device)
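The two-set status cache manipulated above keeps each port in at most one of the bound/unbound sets, under a lock shared with the notifier thread. A minimal standalone sketch of the same pattern (class and names are illustrative, not the agent's):

```python
import threading

class PortStatusCache(object):
    """Track which ports should be reported UP (bound) or DOWN (unbound)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.bound = set()
        self.unbound = set()

    def update(self, device, device_bound=True):
        # A port can only ever live in one of the two sets: adding it to
        # one always discards it from the other.
        with self._lock:
            if device_bound:
                self.bound.add(device)
                self.unbound.discard(device)
            else:
                self.bound.discard(device)
                self.unbound.add(device)

cache = PortStatusCache()
cache.update('port-1')                      # reported as bound
cache.update('port-1', device_bound=False)  # later unbound
assert cache.unbound == {'port-1'} and not cache.bound
```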
def _create_event_listeners(self):
"""Create and bind the event listeners."""
LOG.debug("Create the event listeners.")
for event_type, callback in self._event_callback_pairs:
LOG.debug("Create listener for %r event", event_type)
listener = self._utils.get_vnic_event_listener(event_type)
eventlet.spawn_n(listener, callback)
def _prologue(self):
"""Executed once before the daemon loop."""
self._added_ports = self._utils.get_vnic_ids()
self._create_event_listeners()
def _reclaim_local_network(self, net_uuid):
LOG.info(_LI("Reclaiming local network %s"), net_uuid)
del self._network_vswitch_map[net_uuid]
def _port_bound(self, port_id, network_id, network_type, physical_network,
segmentation_id):
"""Bind the port to the recived network."""
LOG.debug("Binding port %s", port_id)
if network_id not in self._network_vswitch_map:
self._provision_network(
port_id, network_id, network_type,
physical_network, segmentation_id)
vswitch_map = self._network_vswitch_map[network_id]
vswitch_map['ports'].append(port_id)
LOG.debug("Trying to connect the current port to vswitch %r.",
vswitch_map['vswitch_name'])
self._utils.connect_vnic_to_vswitch(
vswitch_name=vswitch_map['vswitch_name'],
switch_port_name=port_id,
)
def _port_unbound(self, port_id, vnic_deleted=False):
LOG.debug(_("Trying to unbind the port %r"), port_id)
vswitch = self._get_network_vswitch_map_by_port_id(port_id)
net_uuid, vswitch_map = vswitch
if not net_uuid:
LOG.debug('Port %s was not found on this agent.', port_id)
return
LOG.debug("Unbinding port %s", port_id)
self._utils.remove_switch_port(port_id, vnic_deleted)
vswitch_map['ports'].remove(port_id)
if not vswitch_map['ports']:
self._reclaim_local_network(net_uuid)
def _process_added_port(self, device_details):
self._treat_vif_port(
port_id=device_details['port_id'],
network_id=device_details['network_id'],
network_type=device_details['network_type'],
physical_network=device_details['physical_network'],
segmentation_id=device_details['segmentation_id'],
admin_state_up=device_details['admin_state_up'])
def process_added_port(self, device_details):
"""Process the new ports.
        Wraps _process_added_port and handles both the success and
        exception cases.
"""
device = device_details['device']
port_id = device_details['port_id']
reprocess = True
try:
self._process_added_port(device_details)
LOG.debug("Updating cached port %s status as UP.", port_id)
self._update_port_status_cache(device, device_bound=True)
LOG.info("Port %s processed.", port_id)
except os_win_exc.HyperVvNicNotFound:
LOG.debug('vNIC %s not found. This can happen if the VM was '
'destroyed.', port_id)
except os_win_exc.HyperVPortNotFoundException:
LOG.debug('vSwitch port %s not found. This can happen if the VM '
'was destroyed.', port_id)
except Exception as ex:
# NOTE(claudiub): in case of a non-transient error, the port will
# be processed over and over again, and will not be reported as
# bound (e.g.: InvalidParameterValue when setting QoS), until the
# port is deleted. These issues have to be investigated and solved
LOG.exception(_LE("Exception encountered while processing "
"port %(port_id)s. Exception: %(ex)s"),
dict(port_id=port_id, ex=ex))
else:
# no exception encountered, no need to reprocess.
reprocess = False
if reprocess:
# Readd the port as "added", so it can be reprocessed.
self._added_ports.add(device)
# Force cache refresh.
self._refresh_cache = True
return False
return True
def _treat_devices_added(self):
"""Process the new devices."""
try:
devices_details_list = self._plugin_rpc.get_devices_details_list(
self._context, self._added_ports, self._agent_id)
except Exception as exc:
LOG.debug("Unable to get ports details for "
"devices %(devices)s: %(exc)s",
{'devices': self._added_ports, 'exc': exc})
return
for device_details in devices_details_list:
device = device_details['device']
LOG.info(_LI("Adding port %s"), device)
if 'port_id' in device_details:
LOG.info(_LI("Port %(device)s updated. "
"Details: %(device_details)s"),
{'device': device, 'device_details': device_details})
eventlet.spawn_n(self.process_added_port, device_details)
else:
LOG.debug(_("Missing port_id from device details: "
"%(device)s. Details: %(device_details)s"),
{'device': device, 'device_details': device_details})
LOG.debug(_("Remove the port from added ports set, so it "
"doesn't get reprocessed."))
self._added_ports.discard(device)
def _process_removed_port(self, device):
"""Process the removed ports."""
LOG.debug(_("Trying to remove the port %r"), device)
self._update_port_status_cache(device, device_bound=False)
self._port_unbound(device, vnic_deleted=True)
LOG.debug(_("The port was successfully removed."))
self._removed_ports.discard(device)
def _treat_devices_removed(self):
"""Process the removed devices."""
for device in self._removed_ports.copy():
eventlet.spawn_n(self._process_removed_port, device)
@_synchronized('n-plugin-notifier')
def _notify_plugin_on_port_updates(self):
if not (self._bound_ports or self._unbound_ports):
return
with self._cache_lock:
bound_ports = self._bound_ports.copy()
unbound_ports = self._unbound_ports.copy()
self._plugin_rpc.update_device_list(
self._context, list(bound_ports), list(unbound_ports),
self._agent_id, self._host)
with self._cache_lock:
self._bound_ports = self._bound_ports.difference(bound_ports)
self._unbound_ports = self._unbound_ports.difference(
unbound_ports)
def _work(self):
"""Process the information regarding the available ports."""
if self._refresh_cache:
# Inconsistent cache might cause exceptions. For example,
# if a port has been removed, it will be known in the next
# loop. Using the old switch port can cause exceptions.
LOG.debug("Refreshing os_win caches...")
self._utils.update_cache()
self._refresh_cache = False
eventlet.spawn_n(self._notify_plugin_on_port_updates)
# notify plugin about port deltas
if self._added_ports:
LOG.debug(_("Agent loop has new devices!"))
self._treat_devices_added()
if self._removed_ports:
LOG.debug(_("Agent loop has lost devices..."))
self._treat_devices_removed()
def port_update(self, context, port=None, network_type=None,
segmentation_id=None, physical_network=None):
LOG.debug("port_update received: %s", port['id'])
if self._utils.vnic_port_exists(port['id']):
self._treat_vif_port(
port_id=port['id'],
network_id=port['network_id'],
network_type=network_type,
physical_network=physical_network,
segmentation_id=segmentation_id,
admin_state_up=port['admin_state_up'],
)
else:
LOG.debug("No port %s defined on agent.", port['id'])
def port_delete(self, context, port_id=None):
"""Delete the received port."""
LOG.debug("port_delete event received for %r", port_id)
def network_delete(self, context, network_id=None):
LOG.debug("network_delete received. "
"Deleting network %s", network_id)
# The network may not be defined on this agent
if network_id in self._network_vswitch_map:
self._reclaim_local_network(network_id)
else:
LOG.debug("Network %s not defined on agent.", network_id)
@abc.abstractmethod
def _provision_network(self, port_id, net_uuid, network_type,
physical_network, segmentation_id):
"""Provision the network with the received information."""
pass
@abc.abstractmethod
def _treat_vif_port(self, port_id, network_id, network_type,
physical_network, segmentation_id,
admin_state_up):
pass


@@ -1,165 +0,0 @@
# Copyright 2015 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneauth1 import loading as ks_loading
from oslo_config import cfg
from hyperv.common.i18n import _
CONF = cfg.CONF
HYPERV_AGENT_GROUP_NAME = 'AGENT'
HYPERV_AGENT_GROUP = cfg.OptGroup(
HYPERV_AGENT_GROUP_NAME,
title='Hyper-V Neutron Agent Options',
help=('Configuration options for the neutron-hyperv-agent (L2 agent).')
)
HYPERV_AGENT_OPTS = [
cfg.ListOpt(
'physical_network_vswitch_mappings',
default=[],
help=_('List of <physical_network>:<vswitch> '
'where the physical networks can be expressed with '
               'wildcards, e.g.: "*:external"')),
cfg.StrOpt(
'local_network_vswitch',
default='private',
help=_('Private vswitch name used for local networks')),
cfg.IntOpt('polling_interval', default=2, min=1,
help=_("The number of seconds the agent will wait between "
"polling for local device changes.")),
cfg.IntOpt('worker_count', default=10, min=1,
help=_("The number of worker threads allowed to run in "
"parallel to process port binding.")),
cfg.IntOpt('worker_retry', default=3, min=0,
help=_("The number of times worker process will retry "
"port binding.")),
cfg.BoolOpt('enable_metrics_collection',
default=False,
help=_('Enables metrics collections for switch ports by using '
'Hyper-V\'s metric APIs. Collected data can by '
'retrieved by other apps and services, e.g.: '
'Ceilometer. Requires Hyper-V / Windows Server 2012 '
'and above')),
cfg.IntOpt('metrics_max_retries',
default=100, min=0,
help=_('Specifies the maximum number of retries to enable '
'Hyper-V\'s port metrics collection. The agent will try '
'to enable the feature once every polling_interval '
'period for at most metrics_max_retries or until it '
                      'succeeds.')),
cfg.IPOpt('neutron_metadata_address',
default='169.254.169.254',
help=_('Specifies the address which will serve the metadata for'
' the instance.')),
cfg.BoolOpt('enable_qos_extension',
default=False,
help=_('Enables the QoS extension.')),
]
NVGRE_GROUP_NAME = 'NVGRE'
NVGRE_GROUP = cfg.OptGroup(
NVGRE_GROUP_NAME,
title='Hyper-V NVGRE Options',
help=('Configuration options for NVGRE.')
)
NVGRE_OPTS = [
cfg.BoolOpt('enable_support',
default=False,
help=_('Enables Hyper-V NVGRE. '
'Requires Windows Server 2012 or above.')),
cfg.IntOpt('provider_vlan_id',
default=0, min=0, max=4096,
help=_('Specifies the VLAN ID of the physical network, required'
' for setting the NVGRE Provider Address.')),
cfg.IPOpt('provider_tunnel_ip',
default=None,
help=_('Specifies the tunnel IP which will be used and '
'reported by this host for NVGRE networks.')),
]
NEUTRON_GROUP_NAME = 'neutron'
NEUTRON_GROUP = cfg.OptGroup(
NEUTRON_GROUP_NAME,
title='Neutron Options',
help=('Configuration options for neutron (network connectivity as a '
'service).')
)
NEUTRON_OPTS = [
cfg.StrOpt('url',
default='http://127.0.0.1:9696',
help='URL for connecting to neutron'),
cfg.IntOpt('url_timeout',
default=30, min=1,
help='timeout value for connecting to neutron in seconds'),
cfg.StrOpt('admin_username',
help='username for connecting to neutron in admin context'),
cfg.StrOpt('admin_password',
help='password for connecting to neutron in admin context',
secret=True),
cfg.StrOpt('admin_tenant_name',
help='tenant name for connecting to neutron in admin context'),
cfg.StrOpt('admin_auth_url',
default='http://localhost:5000/v2.0',
help='auth url for connecting to neutron in admin context'),
cfg.StrOpt('auth_strategy',
default='keystone',
help='auth strategy for connecting to neutron in admin context')
]
HNV_GROUP_NAME = 'HNV'
HNV_GROUP = cfg.OptGroup(
HNV_GROUP_NAME,
title='HNV Options',
help='Configuration options for the Windows Network Controller.'
)
HNV_OPTS = [
cfg.StrOpt(
"logical_network", default=None,
help=("Logical network to use as a medium for tenant network "
"traffic.")),
]
def register_opts():
CONF.register_group(HYPERV_AGENT_GROUP)
CONF.register_opts(HYPERV_AGENT_OPTS, group=HYPERV_AGENT_GROUP_NAME)
CONF.register_group(NVGRE_GROUP)
CONF.register_opts(NVGRE_OPTS, group=NVGRE_GROUP_NAME)
CONF.register_group(NEUTRON_GROUP)
CONF.register_opts(NEUTRON_OPTS, group=NEUTRON_GROUP_NAME)
ks_loading.register_session_conf_options(CONF, NEUTRON_GROUP)
ks_loading.register_auth_conf_options(CONF, NEUTRON_GROUP)
CONF.register_group(HNV_GROUP)
CONF.register_opts(HNV_OPTS, group=HNV_GROUP_NAME)
register_opts()


@@ -1,48 +0,0 @@
# Copyright 2013 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Topic for tunnel notifications between the plugin and agent
AGENT_TOPIC = 'q-agent-notifier'
AGENT_TYPE_HYPERV = 'HyperV agent'
AGENT_TYPE_HNV = "HNV agent"
VIF_TYPE_HYPERV = 'hyperv'
TUNNEL = 'tunnel'
LOOKUP = 'lookup'
UPDATE = 'update'
# Special vlan_id value in ovs_vlan_allocations table indicating flat network
FLAT_VLAN_ID = -1
TYPE_FLAT = 'flat'
TYPE_LOCAL = 'local'
TYPE_VLAN = 'vlan'
TYPE_NVGRE = 'gre'
IPV4_DEFAULT = '0.0.0.0'
# Windows Server 2016 Network Controller related constants.
# NOTE(claudiub): These constants HAVE to be defined exactly like this,
# otherwise networking using the Windows Server 2016 Network Controller won't
# work.
# https://docs.microsoft.com/en-us/windows-server/networking/sdn/manage/create-a-tenant-vm # noqa
NET_CFG_INSTANCE_ID = "{56785678-a0e5-4a26-bc9b-c0cba27311a3}"
CDN_LABEL_STRING = "OpenStackCdn"
CDN_LABEL_ID = 1111
PROFILE_NAME = "OpenStackProfile"
VENDOR_ID = "{1FA41B39-B444-4E43-B35A-E1F7985FD548}"
VENDOR_NAME = "NetworkController"
PROFILE_DATA = 1


@@ -1,19 +0,0 @@
# Copyright 2016 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class NetworkingHyperVException(Exception):
pass


@@ -1,62 +0,0 @@
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from hyperv.neutron import constants
def get_topic_name(prefix, table, operation):
"""Create a topic name.
The topic name needs to be synced between the agents.
The agent will send a fanout message to all of the listening agents
so that the agents in turn can perform their updates accordingly.
:param prefix: Common prefix for the agent message queues.
:param table: The table in question (TUNNEL, LOOKUP).
:param operation: The operation that invokes notification (UPDATE)
:returns: The topic name.
"""
return '%s-%s-%s' % (prefix, table, operation)
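Using the constants defined elsewhere in this tree (`AGENT_TOPIC = 'q-agent-notifier'`, `TUNNEL = 'tunnel'`, `UPDATE = 'update'`), the resulting fanout topic names can be previewed with a small self-contained copy of the helper:

```python
def get_topic_name(prefix, table, operation):
    # Same '%s-%s-%s' join as hyperv.neutron.hyperv_agent_notifier.
    return '%s-%s-%s' % (prefix, table, operation)

# Values taken from hyperv.neutron.constants in this repo.
assert get_topic_name('q-agent-notifier', 'tunnel', 'update') == \
    'q-agent-notifier-tunnel-update'
assert get_topic_name('q-agent-notifier', 'lookup', 'update') == \
    'q-agent-notifier-lookup-update'
```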
class AgentNotifierApi(object):
"""Agent side of the OpenVSwitch rpc API."""
def __init__(self, topic, client):
self._client = client
self.topic_tunnel_update = get_topic_name(topic,
constants.TUNNEL,
constants.UPDATE)
self.topic_lookup_update = get_topic_name(topic,
constants.LOOKUP,
constants.UPDATE)
def _fanout_cast(self, context, topic, method, **info):
cctxt = self._client.prepare(topic=topic, fanout=True)
cctxt.cast(context, method, **info)
def tunnel_update(self, context, tunnel_ip, tunnel_type):
self._fanout_cast(context,
self.topic_tunnel_update,
'tunnel_update',
tunnel_ip=tunnel_ip,
tunnel_type=tunnel_type)
def lookup_update(self, context, lookup_ip, lookup_details):
self._fanout_cast(context,
self.topic_lookup_update,
'lookup_update',
lookup_ip=lookup_ip,
lookup_details=lookup_details)


@@ -1,35 +0,0 @@
Hyper-V Neutron Agent and ML2 Mechanism Driver for ML2 Plugin
=============================================================

This mechanism driver is used by ``neutron-server`` in order to bind neutron
ports to Hyper-V hosts. In order to use it, ``neutron-server`` must use the
Ml2Plugin as a ``core_plugin``. The service's configuration file
(``/etc/neutron/neutron.conf``) must contain the following line:

::

    core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

In order to use this ML2 Mechanism Driver, ``networking-hyperv`` must be
installed on the Neutron Controller:

::

    pip install networking-hyperv

Additionally, the ML2 Plugin must be configured to use the Hyper-V Mechanism
Driver by adding it to the ``mechanism_drivers`` field in
``/etc/neutron/plugins/ml2/ml2_conf.ini``:

::

    [ml2]
    mechanism_drivers = openvswitch,hyperv
    # any other mechanism_drivers can be added to the list.

After editing a configuration file, the ``neutron-server`` service must be
restarted in order for the changes to take effect.

Currently, the mechanism driver supports the following network types: local,
flat, vlan, gre. Neutron ports on networks with types other than these will
not be bound on Hyper-V hosts.


@@ -1,53 +0,0 @@
# Copyright (c) 2015 Cloudbase Solutions Srl
# Copyright (c) 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from neutron.plugins.ml2.drivers import mech_agent
from neutron_lib.api.definitions import portbindings
from hyperv.neutron import constants
class HypervMechanismDriver(mech_agent.SimpleAgentMechanismDriverBase):
"""Attach to networks using Hyper-V L2 Agent.
The HypervMechanismDriver integrates the Ml2 Plugin with the
Hyperv L2 Agent. Port binding with this driver requires the Hyper-V
agent to be running on the port's host, and that agent to have
connectivity to at least one segment of the port's network.
"""
def __init__(self):
super(HypervMechanismDriver, self).__init__(
constants.AGENT_TYPE_HYPERV,
constants.VIF_TYPE_HYPERV,
{portbindings.CAP_PORT_FILTER: False})
def get_allowed_network_types(self, agent=None):
network_types = [constants.TYPE_LOCAL, constants.TYPE_FLAT,
constants.TYPE_VLAN]
if agent is not None:
tunnel_types = agent.get('configurations', {}).get('tunnel_types')
if tunnel_types:
network_types.extend(tunnel_types)
return network_types
def get_mappings(self, agent):
return agent['configurations'].get('vswitch_mappings', {})
def physnet_in_mappings(self, physnet, mappings):
return any(re.match(pattern, physnet) for pattern in mappings)


@@ -1,111 +0,0 @@
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from keystoneauth1 import loading as ks_loading
from neutronclient.v2_0 import client as clientv20
from oslo_log import log as logging
from hyperv.common.i18n import _LW, _LE # noqa
from hyperv.neutron import config
from hyperv.neutron import constants
CONF = config.CONF
LOG = logging.getLogger(__name__)
class NeutronAPIClient(object):
def __init__(self):
self._init_client()
def _init_client(self):
session = ks_loading.load_session_from_conf_options(
CONF, config.NEUTRON_GROUP)
auth_plugin = ks_loading.load_auth_from_conf_options(
CONF, config.NEUTRON_GROUP)
self._client = clientv20.Client(
session=session,
auth=auth_plugin)
def get_network_subnets(self, network_id):
try:
net = self._client.show_network(network_id)
return net['network']['subnets']
except Exception as ex:
LOG.error(_LE("Could not retrieve network %(network_id)s . Error: "
"%(ex)s"), {'network_id': network_id, 'ex': ex})
return []
def get_network_subnet_cidr_and_gateway(self, subnet_id):
try:
subnet = self._client.show_subnet(subnet_id)['subnet']
return (str(subnet['cidr']), str(subnet['gateway_ip']))
except Exception as ex:
LOG.error(_LE("Could not retrieve subnet %(subnet_id)s . Error: "
"%(ex)s: "), {'subnet_id': subnet_id, 'ex': ex})
return None, None
def get_port_ip_address(self, port_id):
try:
port = self._client.show_port(port_id)
fixed_ips = port['port']['fixed_ips'][0]
return fixed_ips['ip_address']
except Exception as ex:
LOG.error(_LE("Could not retrieve port %(port_id)s . Error: "
"%(ex)s"), {'port_id': port_id, 'ex': ex})
return None
def get_tunneling_agents(self):
try:
agents = self._client.list_agents()
tunneling_agents = [
a for a in agents['agents'] if constants.TYPE_NVGRE in
a.get('configurations', {}).get('tunnel_types', [])]
tunneling_ip_agents = [
a for a in tunneling_agents if
a.get('configurations', {}).get('tunneling_ip')]
if len(tunneling_ip_agents) < len(tunneling_agents):
LOG.warning(_LW('Some agents have NVGRE tunneling enabled, but'
' do not provide tunneling_ip. Ignoring those '
'agents.'))
return dict([(a['host'], a['configurations']['tunneling_ip'])
for a in tunneling_ip_agents])
except Exception as ex:
LOG.error(_LE("Could not get tunneling agents. Error: %s"), ex)
return {}
def get_network_ports(self, **kwargs):
try:
return self._client.list_ports(**kwargs)['ports']
except Exception as ex:
LOG.error(_LE("Exception caught: %s"), ex)
return []
def get_port_profile_id(self, port_id):
try:
port = self._client.show_port(port_id)
return "{%s}" % (port["port"]["binding:vif_details"]
["port_profile_id"])
except Exception:
LOG.exception(_LE("Failed to retrieve profile id for port %s."),
port_id)
return {}


@@ -1,203 +0,0 @@
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_win import utilsfactory
from oslo_log import log as logging
import six
import uuid
from hyperv.common.i18n import _LI, _LW, _LE # noqa
from hyperv.neutron import config
from hyperv.neutron import constants
from hyperv.neutron import hyperv_agent_notifier
from hyperv.neutron import neutron_client
CONF = config.CONF
LOG = logging.getLogger(__name__)
class HyperVNvgreOps(object):
def __init__(self, physical_networks):
self.topic = constants.AGENT_TOPIC
self._vswitch_ips = {}
self._tunneling_agents = {}
self._nvgre_ports = []
self._network_vsids = {}
self._hyperv_utils = utilsfactory.get_networkutils()
self._nvgre_utils = utilsfactory.get_nvgreutils()
self._n_client = neutron_client.NeutronAPIClient()
self._init_nvgre(physical_networks)
def init_notifier(self, context, rpc_client):
self.context = context
self._notifier = hyperv_agent_notifier.AgentNotifierApi(
self.topic, rpc_client)
def _init_nvgre(self, physical_networks):
for network in physical_networks:
LOG.info(_LI("Adding provider route and address for network: %s"),
network)
self._nvgre_utils.create_provider_route(network)
self._nvgre_utils.create_provider_address(
network, CONF.NVGRE.provider_vlan_id)
ip_addr, length = self._nvgre_utils.get_network_iface_ip(network)
self._vswitch_ips[network] = ip_addr
def _refresh_tunneling_agents(self):
self._tunneling_agents.update(self._n_client.get_tunneling_agents())
def lookup_update(self, kwargs):
lookup_ip = kwargs.get('lookup_ip')
lookup_details = kwargs.get('lookup_details')
LOG.info(_LI("Lookup Received: %(lookup_ip)s, %(lookup_details)s"),
{'lookup_ip': lookup_ip, 'lookup_details': lookup_details})
if not lookup_ip or not lookup_details:
return
self._register_lookup_record(lookup_ip,
lookup_details['customer_addr'],
lookup_details['mac_addr'],
lookup_details['customer_vsid'])
def tunnel_update(self, context, tunnel_ip, tunnel_type):
if tunnel_type != constants.TYPE_NVGRE:
return
self._notifier.tunnel_update(context, CONF.NVGRE.provider_tunnel_ip,
tunnel_type)
def _register_lookup_record(self, prov_addr, cust_addr, mac_addr, vsid):
LOG.info(_LI('Creating LookupRecord: VSID: %(vsid)s MAC: %(mac_addr)s '
'Customer IP: %(cust_addr)s Provider IP: %(prov_addr)s'),
dict(vsid=vsid,
mac_addr=mac_addr,
cust_addr=cust_addr,
prov_addr=prov_addr))
self._nvgre_utils.create_lookup_record(
prov_addr, cust_addr, mac_addr, vsid)
def bind_nvgre_port(self, segmentation_id, network_name, port_id):
mac_addr = self._hyperv_utils.get_vnic_mac_address(port_id)
provider_addr = self._nvgre_utils.get_network_iface_ip(network_name)[0]
customer_addr = self._n_client.get_port_ip_address(port_id)
if not provider_addr or not customer_addr:
LOG.warning(_LW('Cannot bind NVGRE port. Could not determine '
'provider address (%(prov_addr)s) or customer '
'address (%(cust_addr)s).'),
{'prov_addr': provider_addr,
'cust_addr': customer_addr})
return
LOG.info(_LI('Binding VirtualSubnetID %(segmentation_id)s '
'to switch port %(port_id)s'),
dict(segmentation_id=segmentation_id, port_id=port_id))
self._hyperv_utils.set_vswitch_port_vsid(segmentation_id, port_id)
# normal lookup record.
self._register_lookup_record(
provider_addr, customer_addr, mac_addr, segmentation_id)
# lookup record for dhcp requests.
self._register_lookup_record(
self._vswitch_ips[network_name], constants.IPV4_DEFAULT,
mac_addr, segmentation_id)
LOG.info('Fanning out LookupRecord...')
self._notifier.lookup_update(self.context,
provider_addr,
{'customer_addr': customer_addr,
'mac_addr': mac_addr,
'customer_vsid': segmentation_id})
def bind_nvgre_network(self, segmentation_id, net_uuid, vswitch_name):
subnets = self._n_client.get_network_subnets(net_uuid)
if len(subnets) > 1:
LOG.warning(_LW("Multiple subnets in the same network is not "
"supported."))
subnet = subnets[0]
try:
cidr, gw = self._n_client.get_network_subnet_cidr_and_gateway(
subnet)
cust_route_string = vswitch_name + cidr + str(segmentation_id)
rdid_uuid = str(uuid.uuid5(uuid.NAMESPACE_X500, cust_route_string))
self._create_customer_routes(segmentation_id, cidr, gw, rdid_uuid)
except Exception as ex:
LOG.error(_LE("Exception caught: %s"), ex)
self._network_vsids[net_uuid] = segmentation_id
self.refresh_nvgre_records(network_id=net_uuid)
self._notifier.tunnel_update(
self.context, CONF.NVGRE.provider_tunnel_ip, segmentation_id)
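The route-domain ID used in `bind_nvgre_network` is derived with `uuid.uuid5`, a name-based (SHA-1) UUID, so the same vswitch/CIDR/VSID triple always yields the same RDID across agent restarts. A minimal illustration of that property (the input values are made up):

```python
import uuid

def make_rdid(vswitch_name, cidr, segmentation_id):
    # Mirrors the derivation in bind_nvgre_network: a name-based UUID
    # over the concatenated customer-route string.
    route_string = vswitch_name + cidr + str(segmentation_id)
    return str(uuid.uuid5(uuid.NAMESPACE_X500, route_string))

# Deterministic: recomputing with the same inputs yields the same RDID.
assert make_rdid('external', '10.0.0.0/24', 1001) == \
    make_rdid('external', '10.0.0.0/24', 1001)
# Different inputs yield different RDIDs.
assert make_rdid('external', '10.0.0.0/24', 1001) != \
    make_rdid('external', '10.0.0.0/24', 1002)
```

Determinism matters here because customer routes are cleared and recreated on rebinds; a random UUID would leak stale route domains.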
def _create_customer_routes(self, segmentation_id, cidr, gw, rdid_uuid):
self._nvgre_utils.clear_customer_routes(segmentation_id)
# create cidr -> 0.0.0.0/0 customer route
self._nvgre_utils.create_customer_route(
segmentation_id, cidr, constants.IPV4_DEFAULT, rdid_uuid)
if not gw:
LOG.info(_LI('Subnet does not have gateway configured. '
'Skipping.'))
elif gw.split('.')[-1] == '1':
LOG.error(_LE('Subnet has unsupported gateway IP ending in 1: '
'%s. Any other gateway IP is supported.'), gw)
else:
# create 0.0.0.0/0 -> gateway customer route
self._nvgre_utils.create_customer_route(
segmentation_id, '%s/0' % constants.IPV4_DEFAULT, gw,
rdid_uuid)
# create metadata address -> gateway customer route
metadata_addr = '%s/32' % CONF.AGENT.neutron_metadata_address
self._nvgre_utils.create_customer_route(
segmentation_id, metadata_addr, gw, rdid_uuid)
def refresh_nvgre_records(self, **kwargs):
self._refresh_tunneling_agents()
ports = self._n_client.get_network_ports(**kwargs)
# process ports that were not processed yet.
# process ports that are bound to tunneling_agents.
ports = [p for p in ports if p['id'] not in self._nvgre_ports and
p['binding:host_id'] in self._tunneling_agents and
p['network_id'] in six.iterkeys(self._network_vsids)]
for port in ports:
tunneling_ip = self._tunneling_agents[port['binding:host_id']]
customer_addr = port['fixed_ips'][0]['ip_address']
mac_addr = port['mac_address'].replace(':', '')
segmentation_id = self._network_vsids[port['network_id']]
try:
self._register_lookup_record(
tunneling_ip, customer_addr, mac_addr, segmentation_id)
self._nvgre_ports.append(port['id'])
except Exception as ex:
LOG.error(_LE("Exception while adding lookup_record: %(ex)s. "
"VSID: %(vsid)s MAC: %(mac_address)s Customer "
"IP:%(cust_addr)s Provider IP: %(prov_addr)s"),
dict(ex=ex,
vsid=segmentation_id,
mac_address=mac_addr,
cust_addr=customer_addr,
prov_addr=tunneling_ip))


@@ -1,90 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.agent.l2.extensions import qos
from neutron.services.qos import qos_consts
from os_win.utils.network import networkutils
from oslo_log import log as logging
from hyperv.common.i18n import _LI, _LW # noqa
LOG = logging.getLogger(__name__)
class QosHyperVAgentDriver(qos.QosAgentDriver):
_SUPPORTED_QOS_RULES = [qos_consts.RULE_TYPE_BANDWIDTH_LIMIT,
qos_consts.RULE_TYPE_MINIMUM_BANDWIDTH]
def initialize(self):
self._utils = networkutils.NetworkUtils()
def create(self, port, qos_policy):
"""Apply QoS rules on port for the first time.
:param port: port object.
:param qos_policy: the QoS policy to be applied on port.
"""
LOG.info(_LI("Setting QoS policy %(qos_policy)s on "
"port %(port)s"),
dict(qos_policy=qos_policy,
port=port))
policy_data = self._get_policy_values(qos_policy)
self._utils.set_port_qos_rule(port["port_id"], policy_data)
def update(self, port, qos_policy):
"""Apply QoS rules on port.
:param port: port object.
:param qos_policy: the QoS policy to be applied on port.
"""
LOG.info(_LI("Updating QoS policy %(qos_policy)s on "
"port %(port)s"),
dict(qos_policy=qos_policy,
port=port))
policy_data = self._get_policy_values(qos_policy)
self._utils.set_port_qos_rule(port["port_id"], policy_data)
def delete(self, port, qos_policy=None):
"""Remove QoS rules from port.
:param port: port object.
:param qos_policy: the QoS policy to be removed from port.
"""
LOG.info(_LI("Deleting QoS policy %(qos_policy)s on "
"port %(port)s"),
dict(qos_policy=qos_policy,
port=port))
self._utils.remove_port_qos_rule(port["port_id"])
def _get_policy_values(self, qos_policy):
result = {}
for qos_rule in qos_policy.rules:
if qos_rule.rule_type not in self._SUPPORTED_QOS_RULES:
LOG.warning(_LW("Unsupported QoS rule: %(qos_rule)s"),
dict(qos_rule=qos_rule))
continue
result['min_kbps'] = getattr(qos_rule, 'min_kbps',
result.get('min_kbps'))
result['max_kbps'] = getattr(qos_rule, 'max_kbps',
result.get('max_kbps'))
result['max_burst_kbps'] = getattr(qos_rule, 'max_burst_kbps',
result.get('max_burst_kbps'))
return result


@@ -1,447 +0,0 @@
# Copyright 2014 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
from neutron.agent import firewall
from os_win import exceptions
from os_win.utils.network import networkutils
from os_win import utilsfactory
from oslo_log import log as logging
import six
from hyperv.common.i18n import _LE, _LI # noqa
from hyperv.neutron import _common_utils as c_utils
import threading
LOG = logging.getLogger(__name__)
INGRESS_DIRECTION = 'ingress'
EGRESS_DIRECTION = 'egress'
DIRECTION_IP_PREFIX = {'ingress': 'source_ip_prefix',
'egress': 'dest_ip_prefix'}
ACL_PROP_MAP = {
'direction': {'ingress': networkutils.NetworkUtils._ACL_DIR_IN,
'egress': networkutils.NetworkUtils._ACL_DIR_OUT},
'ethertype': {'IPv4': networkutils.NetworkUtils._ACL_TYPE_IPV4,
'IPv6': networkutils.NetworkUtils._ACL_TYPE_IPV6},
'protocol': {'tcp': networkutils.NetworkUtils._TCP_PROTOCOL,
'udp': networkutils.NetworkUtils._UDP_PROTOCOL,
'icmp': networkutils.NetworkUtils._ICMP_PROTOCOL,
'ipv6-icmp': networkutils.NetworkUtils._ICMPV6_PROTOCOL,
'icmpv6': networkutils.NetworkUtils._ICMPV6_PROTOCOL},
'action': {'allow': networkutils.NetworkUtils._ACL_ACTION_ALLOW,
'deny': networkutils.NetworkUtils._ACL_ACTION_DENY},
'default': "ANY",
'address_default': {'IPv4': '0.0.0.0/0', 'IPv6': '::/0'}
}
_ports_synchronized = c_utils.get_port_synchronized_decorator('n-hv-driver-')
class HyperVSecurityGroupsDriverMixin(object):
"""Security Groups Driver.
Security Groups implementation for Hyper-V VMs.
"""
def __init__(self):
self._utils = utilsfactory.get_networkutils()
self._sg_gen = SecurityGroupRuleGeneratorR2()
self._sec_group_rules = {}
self._security_ports = {}
self._sg_members = {}
self._sg_rule_templates = {}
self.cache_lock = threading.Lock()
# TODO(claudiub): remove this on the next os-win release.
clear_cache = lambda port_id: self._utils._sg_acl_sds.pop(port_id,
None)
self._utils.clear_port_sg_acls_cache = clear_cache
def _select_sg_rules_for_port(self, port, direction):
sg_ids = port.get('security_groups', [])
port_rules = []
fixed_ips = port.get('fixed_ips', [])
for sg_id in sg_ids:
for rule in self._sg_rule_templates.get(sg_id, []):
if rule['direction'] != direction:
continue
remote_group_id = rule.get('remote_group_id')
if not remote_group_id:
grp_rule = rule.copy()
grp_rule.pop('security_group_id', None)
port_rules.append(grp_rule)
continue
ethertype = rule['ethertype']
for ip in self._sg_members[remote_group_id][ethertype]:
if ip in fixed_ips:
continue
ip_rule = rule.copy()
direction_ip_prefix = DIRECTION_IP_PREFIX[direction]
ip_rule[direction_ip_prefix] = str(
netaddr.IPNetwork(ip).cidr)
# NOTE(claudiub): avoid returning fields that are not
# directly used in setting the security group rules
# properly (remote_group_id, security_group_id), as they
# only make testing for rule's identity harder.
ip_rule.pop('security_group_id', None)
ip_rule.pop('remote_group_id', None)
port_rules.append(ip_rule)
return port_rules
def filter_defer_apply_on(self):
"""Defer application of filtering rule."""
pass
def filter_defer_apply_off(self):
"""Turn off deferral of rules and apply the rules now."""
pass
def update_security_group_rules(self, sg_id, sg_rules):
LOG.debug("Update rules of security group (%s)", sg_id)
with self.cache_lock:
self._sg_rule_templates[sg_id] = sg_rules
def update_security_group_members(self, sg_id, sg_members):
LOG.debug("Update members of security group (%s)", sg_id)
with self.cache_lock:
self._sg_members[sg_id] = sg_members
def _generate_rules(self, ports):
newports = {}
for port in ports:
_rules = []
_rules.extend(self._select_sg_rules_for_port(port,
INGRESS_DIRECTION))
_rules.extend(self._select_sg_rules_for_port(port,
EGRESS_DIRECTION))
newports[port['id']] = _rules
return newports
def prepare_port_filter(self, port):
if not port.get('port_security_enabled'):
LOG.info(_LI('Port %s does not have security enabled. '
'Skipping rules creation.'), port['id'])
return
        LOG.debug('Creating %s port rules', len(port['security_group_rules']))
# newly created port, add default rules.
if port['device'] not in self._security_ports:
LOG.debug('Creating default reject rules.')
self._sec_group_rules[port['id']] = []
def_sg_rules = self._sg_gen.create_default_sg_rules()
self._add_sg_port_rules(port, def_sg_rules)
# Add provider rules
provider_rules = port['security_group_rules']
self._create_port_rules(port, provider_rules)
newrules = self._generate_rules([port])
self._create_port_rules(port, newrules[port['id']])
self._security_ports[port['device']] = port
self._sec_group_rules[port['id']] = newrules[port['id']]
@_ports_synchronized
def _create_port_rules(self, port, rules):
sg_rules = self._sg_gen.create_security_group_rules(rules)
old_sg_rules = self._sec_group_rules[port['id']]
add, rm = self._sg_gen.compute_new_rules_add(old_sg_rules, sg_rules)
self._add_sg_port_rules(port, list(set(add)))
self._remove_sg_port_rules(port, list(set(rm)))
@_ports_synchronized
def _remove_port_rules(self, port, rules):
sg_rules = self._sg_gen.create_security_group_rules(rules)
self._remove_sg_port_rules(port, list(set(sg_rules)))
def _add_sg_port_rules(self, port, sg_rules):
if not sg_rules:
return
old_sg_rules = self._sec_group_rules[port['id']]
try:
self._utils.create_security_rules(port['id'], sg_rules)
old_sg_rules.extend(sg_rules)
except exceptions.NotFound:
# port no longer exists.
# NOTE(claudiub): In the case of a rebuild / shelve, the
# neutron port is not deleted, and it can still be in the cache.
# We need to make sure the port's caches are cleared since it is
# not valid anymore. The port will be reprocessed in the next
# loop iteration.
self._sec_group_rules.pop(port['id'], None)
self._security_ports.pop(port.get('device'), None)
raise
except Exception:
LOG.exception(_LE('Exception encountered while adding rules for '
'port: %s'), port['id'])
raise
def _remove_sg_port_rules(self, port, sg_rules):
if not sg_rules:
return
old_sg_rules = self._sec_group_rules[port['id']]
try:
self._utils.remove_security_rules(port['id'], sg_rules)
for rule in sg_rules:
if rule in old_sg_rules:
old_sg_rules.remove(rule)
except exceptions.NotFound:
# port no longer exists.
self._sec_group_rules.pop(port['id'], None)
self._security_ports.pop(port.get('device'), None)
raise
except Exception:
LOG.exception(_LE('Exception encountered while removing rules for '
'port: %s'), port['id'])
raise
def apply_port_filter(self, port):
        LOG.info(_LI('Applying port filter.'))
def update_port_filter(self, port):
if not port.get('port_security_enabled'):
LOG.info(_LI('Port %s does not have security enabled. '
'Removing existing rules if any.'), port['id'])
self._security_ports.pop(port.get('device'), None)
existing_rules = self._sec_group_rules.pop(port['id'], None)
if existing_rules:
self._utils.remove_all_security_rules(port['id'])
return
LOG.info(_LI('Updating port rules.'))
if port['device'] not in self._security_ports:
LOG.info(_LI("Device %(port)s not yet added. Adding."),
{'port': port['id']})
self.prepare_port_filter(port)
return
old_port = self._security_ports[port['device']]
old_provider_rules = old_port['security_group_rules']
added_provider_rules = port['security_group_rules']
# Generate the rules
added_rules = self._generate_rules([port])
# Expand wildcard rules
expanded_rules = self._sg_gen.expand_wildcard_rules(
added_rules[port['id']])
# Consider added provider rules (if any)
new_rules = [r for r in added_provider_rules
if r not in old_provider_rules]
# Build new rules to add
new_rules.extend([r for r in added_rules[port['id']]
if r not in self._sec_group_rules[port['id']]])
# Remove non provider rules
remove_rules = [r for r in self._sec_group_rules[port['id']]
if r not in added_rules[port['id']]]
        # Remove old provider rules that are no longer present
remove_rules.extend([r for r in old_provider_rules
if r not in added_provider_rules])
# Avoid removing or adding rules which are contained in wildcard rules
new_rules = [r for r in new_rules if r not in expanded_rules]
remove_rules = [r for r in remove_rules if r not in expanded_rules]
LOG.info(_("Creating %(new)s new rules, removing %(old)s "
"old rules."),
{'new': len(new_rules),
'old': len(remove_rules)})
self._create_port_rules(port, new_rules)
self._remove_port_rules(old_port, remove_rules)
self._security_ports[port['device']] = port
self._sec_group_rules[port['id']] = added_rules[port['id']]
def remove_port_filter(self, port):
LOG.info(_LI('Removing port filter'))
self._security_ports.pop(port['device'], None)
self._sec_group_rules.pop(port['id'], None)
self._utils.clear_port_sg_acls_cache(port['id'])
def security_group_updated(self, action_type, sec_group_ids,
device_id=None):
pass
@property
def ports(self):
return self._security_ports
class SecurityGroupRuleGenerator(object):
def create_security_group_rules(self, rules):
security_group_rules = []
for rule in rules:
security_group_rules.extend(self.create_security_group_rule(rule))
return security_group_rules
def create_security_group_rule(self, rule):
# TODO(claudiub): implement
pass
def _get_rule_remote_address(self, rule):
if rule['direction'] == 'ingress':
ip_prefix = 'source_ip_prefix'
else:
ip_prefix = 'dest_ip_prefix'
if ip_prefix in rule:
return rule[ip_prefix]
return ACL_PROP_MAP['address_default'][rule['ethertype']]
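The fallback logic of `_get_rule_remote_address` reduces to a dict lookup with a per-ethertype match-all default; a minimal standalone sketch (the default prefixes are copied from `ACL_PROP_MAP` above):

```python
ADDRESS_DEFAULT = {'IPv4': '0.0.0.0/0', 'IPv6': '::/0'}

def get_rule_remote_address(rule):
    # Ingress rules filter on the source prefix, egress on the destination;
    # absent a prefix, fall back to the match-all default for the ethertype.
    prefix_key = ('source_ip_prefix' if rule['direction'] == 'ingress'
                  else 'dest_ip_prefix')
    return rule.get(prefix_key, ADDRESS_DEFAULT[rule['ethertype']])
```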
class SecurityGroupRuleGeneratorR2(SecurityGroupRuleGenerator):
def create_security_group_rule(self, rule):
local_port = self._get_rule_port_range(rule)
direction = ACL_PROP_MAP['direction'][rule['direction']]
remote_address = self._get_rule_remote_address(rule)
remote_address = remote_address.split('/128', 1)[0]
protocol = self._get_rule_protocol(rule)
if protocol == ACL_PROP_MAP['default']:
# ANY protocols must be split up, to make stateful rules.
protocols = list(set(ACL_PROP_MAP['protocol'].values()))
else:
protocols = [protocol]
sg_rules = [SecurityGroupRuleR2(direction=direction,
local_port=local_port,
protocol=proto,
remote_addr=remote_address)
for proto in protocols]
return sg_rules
def create_default_sg_rules(self):
ip_type_pairs = [(ACL_PROP_MAP['ethertype'][ip],
ACL_PROP_MAP['address_default'][ip])
for ip in six.iterkeys(ACL_PROP_MAP['ethertype'])]
action = ACL_PROP_MAP['action']['deny']
port = ACL_PROP_MAP['default']
sg_rules = []
for direction in ACL_PROP_MAP['direction'].values():
for protocol in set(ACL_PROP_MAP['protocol'].values()):
for acl_type, address in ip_type_pairs:
sg_rules.append(SecurityGroupRuleR2(direction=direction,
local_port=port,
protocol=protocol,
remote_addr=address,
action=action))
return sg_rules
def compute_new_rules_add(self, old_rules, new_rules):
add_rules = [r for r in new_rules if r not in old_rules]
return add_rules, []
def expand_wildcard_rules(self, rules):
wildcard_rules = [
r for r in rules
if self._get_rule_protocol(r) == ACL_PROP_MAP['default']]
rules = []
for r in wildcard_rules:
rule_copy = r.copy()
if rule_copy['direction'] == 'ingress':
ip_prefix = 'source_ip_prefix'
else:
ip_prefix = 'dest_ip_prefix'
if ip_prefix not in rule_copy:
rule_copy[ip_prefix] = (
ACL_PROP_MAP['address_default'][rule_copy['ethertype']])
for proto in list(set(ACL_PROP_MAP['protocol'].keys())):
rule_to_add = rule_copy.copy()
rule_to_add['protocol'] = proto
                rules.append(rule_to_add)
return rules
def _get_rule_port_range(self, rule):
if 'port_range_min' in rule and 'port_range_max' in rule:
return '%s-%s' % (rule['port_range_min'],
rule['port_range_max'])
return ACL_PROP_MAP['default']
def _get_rule_protocol(self, rule):
protocol = self._get_rule_prop_or_default(rule, 'protocol')
if protocol == 'icmp' and rule.get('ethertype') == 'IPv6':
# If protocol is ICMP and ethertype is IPv6 the protocol has
# to be ICMPv6.
return ACL_PROP_MAP['protocol']['ipv6-icmp']
if protocol in six.iterkeys(ACL_PROP_MAP['protocol']):
return ACL_PROP_MAP['protocol'][protocol]
return protocol
def _get_rule_prop_or_default(self, rule, prop):
if prop in rule:
return rule[prop]
return ACL_PROP_MAP['default']
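The protocol-resolution step in `_get_rule_protocol` is worth isolating: an `icmp` rule on an IPv6 ethertype is rewritten to ICMPv6 before the ACL lookup. The numeric values below are the IANA protocol numbers, used here as stand-ins for the `os_win` constants (an assumption for illustration):

```python
PROTOCOL = {'tcp': 6, 'udp': 17, 'icmp': 1, 'ipv6-icmp': 58, 'icmpv6': 58}
DEFAULT = 'ANY'

def get_rule_protocol(rule):
    """Resolve a security group rule's protocol to its ACL value,
    mirroring _get_rule_protocol above."""
    protocol = rule.get('protocol', DEFAULT)
    # An 'icmp' rule on an IPv6 ethertype really means ICMPv6.
    if protocol == 'icmp' and rule.get('ethertype') == 'IPv6':
        return PROTOCOL['ipv6-icmp']
    if protocol in PROTOCOL:
        return PROTOCOL[protocol]
    return protocol
```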
class SecurityGroupRuleBase(object):
_FIELDS = []
def __eq__(self, obj):
for f in self._FIELDS:
if not hasattr(obj, f) or getattr(obj, f) != getattr(self, f):
return False
return True
def __str__(self):
return str(self.to_dict())
def __repr__(self):
return str(self)
def to_dict(self):
return dict((field, getattr(self, field)) for field in self._FIELDS)
class SecurityGroupRuleR2(SecurityGroupRuleBase):
_FIELDS = ["Direction", "Action", "LocalPort", "Protocol",
"RemoteIPAddress", "Stateful", "IdleSessionTimeout"]
IdleSessionTimeout = 0
Weight = 65500
def __init__(self, direction, local_port, protocol, remote_addr,
action=ACL_PROP_MAP['action']['allow']):
is_not_icmp = protocol not in [ACL_PROP_MAP['protocol']['icmp'],
ACL_PROP_MAP['protocol']['ipv6-icmp']]
self.Direction = direction
self.Action = action
self.LocalPort = str(local_port) if is_not_icmp else ''
self.Protocol = protocol
self.RemoteIPAddress = remote_addr
        self.Stateful = (is_not_icmp and
                         action != ACL_PROP_MAP['action']['deny'])
self._cached_hash = hash((direction, action, self.LocalPort,
protocol, remote_addr))
def __lt__(self, obj):
return self.Protocol > obj.Protocol
def __hash__(self):
return self._cached_hash
class HyperVSecurityGroupsDriver(HyperVSecurityGroupsDriverMixin,
firewall.FirewallDriver):
pass
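The bookkeeping in `update_port_filter` — add what is desired but not yet applied, remove what is applied but no longer desired, and leave anything covered by an expanded wildcard rule alone — can be sketched with placeholder rule values (the string rules below are illustrative, not real rule dicts):

```python
def diff_rules(current, desired, wildcard_expanded):
    """Compute (rules_to_add, rules_to_remove), skipping rules that are
    already covered by an expanded wildcard rule in either direction."""
    new_rules = [r for r in desired if r not in current]
    remove_rules = [r for r in current if r not in desired]
    new_rules = [r for r in new_rules if r not in wildcard_expanded]
    remove_rules = [r for r in remove_rules if r not in wildcard_expanded]
    return new_rules, remove_rules
```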


@@ -1,147 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron.api.rpc.callbacks import events
from neutron.api.rpc.handlers import resources_rpc
from neutron.services.trunk import constants as t_const
from neutron.services.trunk.rpc import agent as trunk_rpc
from os_win import constants as os_win_const
from os_win import utilsfactory
from oslo_log import log as logging
import oslo_messaging
from hyperv.common.i18n import _LI, _LE # noqa
LOG = logging.getLogger(__name__)
class HyperVTrunkDriver(trunk_rpc.TrunkSkeleton):
"""Driver responsible for handling trunk/subport/port events.
Receives data model events from the neutron server and uses them to setup
VLAN trunks for Hyper-V vSwitch ports.
"""
def __init__(self, context):
super(HyperVTrunkDriver, self).__init__()
self._context = context
self._utils = utilsfactory.get_networkutils()
self._trunk_rpc = trunk_rpc.TrunkStub()
# Map between trunk.id and trunk.
self._trunks = {}
def handle_trunks(self, trunks, event_type):
"""Trunk data model change from the server."""
LOG.debug("Trunks event received: %(event_type)s. Trunks: %(trunks)s",
{'event_type': event_type, 'trunks': trunks})
if event_type == events.DELETED:
# The port trunks have been deleted. Remove them from cache.
for trunk in trunks:
self._trunks.pop(trunk.id, None)
else:
for trunk in trunks:
self._trunks[trunk.id] = trunk
self._setup_trunk(trunk)
def handle_subports(self, subports, event_type):
"""Subport data model change from the server."""
LOG.debug("Subports event received: %(event_type)s. "
"Subports: %(subports)s",
{'event_type': event_type, 'subports': subports})
# update the cache.
if event_type == events.CREATED:
for subport in subports:
trunk = self._trunks.get(subport['trunk_id'])
if trunk:
trunk.sub_ports.append(subport)
elif event_type == events.DELETED:
for subport in subports:
trunk = self._trunks.get(subport['trunk_id'])
if trunk and subport in trunk.sub_ports:
trunk.sub_ports.remove(subport)
# update the bound trunks.
affected_trunk_ids = set([s['trunk_id'] for s in subports])
for trunk_id in affected_trunk_ids:
trunk = self._trunks.get(trunk_id)
if trunk:
self._setup_trunk(trunk)
def bind_vlan_port(self, port_id, segmentation_id):
trunk = self._fetch_trunk(port_id)
if not trunk:
# No trunk found. No VLAN IDs to set in trunk mode.
self._set_port_vlan(port_id, segmentation_id)
return
self._setup_trunk(trunk, segmentation_id)
def _fetch_trunk(self, port_id, context=None):
context = context or self._context
try:
trunk = self._trunk_rpc.get_trunk_details(context, port_id)
LOG.debug("Found trunk for port_id %(port_id)s: %(trunk)s",
{'port_id': port_id, 'trunk': trunk})
# cache it.
self._trunks[trunk.id] = trunk
return trunk
except resources_rpc.ResourceNotFound:
return None
except oslo_messaging.RemoteError as ex:
if 'CallbackNotFound' not in str(ex):
raise
LOG.debug("Trunk plugin disabled on server. Assuming port %s is "
"not a trunk.", port_id)
return None
def _setup_trunk(self, trunk, vlan_id=None):
"""Sets up VLAN trunk and updates the trunk status."""
LOG.info(_LI('Binding trunk port: %s.'), trunk)
try:
# bind sub_ports to host.
self._trunk_rpc.update_subport_bindings(self._context,
trunk.sub_ports)
vlan_trunk = [s.segmentation_id for s in trunk.sub_ports]
self._set_port_vlan(trunk.port_id, vlan_id, vlan_trunk)
self._trunk_rpc.update_trunk_status(self._context, trunk.id,
t_const.ACTIVE_STATUS)
except Exception:
# something broke
LOG.exception(_LE("Failure setting up subports for %s"),
trunk.port_id)
self._trunk_rpc.update_trunk_status(self._context, trunk.id,
t_const.DEGRADED_STATUS)
def _set_port_vlan(self, port_id, vlan_id, vlan_trunk=None):
LOG.info(_LI('Binding VLAN ID: %(vlan_id)s, VLAN trunk: '
'%(vlan_trunk)s to switch port %(port_id)s'),
dict(vlan_id=vlan_id, vlan_trunk=vlan_trunk, port_id=port_id))
op_mode = (os_win_const.VLAN_MODE_TRUNK if vlan_trunk else
os_win_const.VLAN_MODE_ACCESS)
self._utils.set_vswitch_port_vlan_id(
vlan_id,
port_id,
operation_mode=op_mode,
vlan_trunk=vlan_trunk)
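The mode selection in `_set_port_vlan` is simple: any subport segmentation IDs put the vSwitch port in trunk mode, otherwise it stays in access mode. A sketch, with the string constants standing in for the `os_win_const` values (an assumption for illustration):

```python
VLAN_MODE_ACCESS = 'access'  # stand-in for os_win_const.VLAN_MODE_ACCESS
VLAN_MODE_TRUNK = 'trunk'    # stand-in for os_win_const.VLAN_MODE_TRUNK

def vlan_settings(vlan_id, subport_segmentation_ids):
    """Return (operation_mode, vlan_id, vlan_trunk) as they would be
    passed to set_vswitch_port_vlan_id."""
    vlan_trunk = list(subport_segmentation_ids)
    mode = VLAN_MODE_TRUNK if vlan_trunk else VLAN_MODE_ACCESS
    return mode, vlan_id, vlan_trunk
```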


@@ -1,149 +0,0 @@
# Copyright 2015 Cloudbase Solutions Srl
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Base test case for tests that do not rely on Tempest."""
import contextlib
import logging as std_logging
import os
import os.path
import traceback
import eventlet.timeout
import fixtures
import mock
from os_win import utilsfactory
from oslo_utils import strutils
import testtools
from hyperv.neutron import config
CONF = config.CONF
LOG_FORMAT = "%(asctime)s %(levelname)8s [%(name)s] %(message)s"
def bool_from_env(key, strict=False, default=False):
value = os.environ.get(key)
return strutils.bool_from_string(value, strict=strict, default=default)
class BaseTestCase(testtools.TestCase):
def setUp(self):
super(BaseTestCase, self).setUp()
self.addCleanup(CONF.reset)
self.addCleanup(mock.patch.stopall)
if bool_from_env('OS_DEBUG'):
_level = std_logging.DEBUG
else:
_level = std_logging.INFO
capture_logs = bool_from_env('OS_LOG_CAPTURE')
if not capture_logs:
std_logging.basicConfig(format=LOG_FORMAT, level=_level)
self.log_fixture = self.useFixture(
fixtures.FakeLogger(
format=LOG_FORMAT,
level=_level,
nuke_handlers=capture_logs,
))
test_timeout = int(os.environ.get('OS_TEST_TIMEOUT', 0))
if test_timeout == -1:
test_timeout = 0
if test_timeout > 0:
self.useFixture(fixtures.Timeout(test_timeout, gentle=True))
if bool_from_env('OS_STDOUT_CAPTURE'):
stdout = self.useFixture(fixtures.StringStream('stdout')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))
if bool_from_env('OS_STDERR_CAPTURE'):
stderr = self.useFixture(fixtures.StringStream('stderr')).stream
self.useFixture(fixtures.MonkeyPatch('sys.stderr', stderr))
self.addOnException(self.check_for_systemexit)
def check_for_systemexit(self, exc_info):
if isinstance(exc_info[1], SystemExit):
self.fail("A SystemExit was raised during the test. %s"
% traceback.format_exception(*exc_info))
@contextlib.contextmanager
def assert_max_execution_time(self, max_execution_time=5):
with eventlet.timeout.Timeout(max_execution_time, False):
yield
return
self.fail('Execution of this test timed out')
def assertOrderedEqual(self, expected, actual):
expect_val = self.sort_dict_lists(expected)
actual_val = self.sort_dict_lists(actual)
self.assertEqual(expect_val, actual_val)
def sort_dict_lists(self, dic):
for key, value in dic.items():
if isinstance(value, list):
dic[key] = sorted(value)
elif isinstance(value, dict):
dic[key] = self.sort_dict_lists(value)
return dic
def assertDictSupersetOf(self, expected_subset, actual_superset):
"""Checks that actual dict contains the expected dict.
After checking that the arguments are of the right type, this checks
that each item in expected_subset is in, and matches, what is in
actual_superset. Separate tests are done, so that detailed info can
be reported upon failure.
"""
if not isinstance(expected_subset, dict):
self.fail("expected_subset (%s) is not an instance of dict" %
type(expected_subset))
if not isinstance(actual_superset, dict):
self.fail("actual_superset (%s) is not an instance of dict" %
type(actual_superset))
for k, v in expected_subset.items():
self.assertIn(k, actual_superset)
self.assertEqual(v, actual_superset[k],
"Key %(key)s expected: %(exp)r, actual %(act)r" %
{'key': k, 'exp': v, 'act': actual_superset[k]})
def config(self, **kw):
"""Override some configuration values.
The keyword arguments are the names of configuration options to
override and their values.
If a group argument is supplied, the overrides are applied to
the specified configuration option group.
All overrides are automatically cleared at the end of the current
test by the fixtures cleanup process.
"""
group = kw.pop('group', None)
for k, v in kw.items():
CONF.set_override(k, v, group)
class HyperVBaseTestCase(BaseTestCase):
def setUp(self):
super(HyperVBaseTestCase, self).setUp()
utilsfactory_patcher = mock.patch.object(utilsfactory, '_get_class')
utilsfactory_patcher.start()
self.addCleanup(utilsfactory_patcher.stop)
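The order-insensitive comparison behind `assertOrderedEqual` reduces to the recursive normalization below, extracted as a standalone function:

```python
def sort_dict_lists(dic):
    """Sort list values in place, recursing into nested dicts, so two
    dicts that differ only in list ordering compare equal."""
    for key, value in dic.items():
        if isinstance(value, list):
            dic[key] = sorted(value)
        elif isinstance(value, dict):
            dic[key] = sort_dict_lists(value)
    return dic
```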


@@ -1,95 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for Neutron base agent.
"""
import mock
from hyperv.neutron.agent import base as agent_base
from hyperv.tests import base as test_base
class _BaseAgent(agent_base.BaseAgent):
def _get_agent_configurations(self):
pass
def _setup_rpc(self):
pass
def _work(self):
pass
class TestBaseAgent(test_base.HyperVBaseTestCase):
def setUp(self):
super(TestBaseAgent, self).setUp()
self._agent = _BaseAgent()
self._agent._agent_id = mock.sentinel.agent_id
self._agent._context = mock.sentinel.admin_context
self._agent._utils = mock.MagicMock()
self._agent._client = mock.MagicMock()
self._agent._plugin_rpc = mock.Mock()
self._agent._connection = mock.MagicMock()
self._agent._state_rpc = mock.MagicMock()
def test_set_agent_state(self):
self._agent._agent_state = {}
self._agent._host = mock.sentinel.host
self._agent._set_agent_state()
expected_keys = ["binary", "host", "configurations", "agent_type",
"topic", "start_flag"]
self.assertEqual(sorted(expected_keys),
sorted(self._agent._agent_state.keys()))
self.assertEqual(mock.sentinel.host, self._agent._agent_state["host"])
@mock.patch('time.time')
@mock.patch('time.sleep')
@mock.patch.object(_BaseAgent, '_work')
@mock.patch.object(_BaseAgent, '_prologue')
def test_daemon_loop(self, mock_prologue, mock_work,
mock_sleep, mock_time):
mock_work.side_effect = [Exception()]
mock_time.side_effect = [1, 3, KeyboardInterrupt]
self.assertRaises(KeyboardInterrupt, self._agent.daemon_loop)
mock_prologue.assert_called_once_with()
def test_report_state(self):
self._agent._agent_state = {'start_flag': True}
self._agent._report_state()
self.assertNotIn('start_flag', self._agent._agent_state)
def test_report_state_exception(self):
self._agent._agent_state = {'start_flag': True}
self._agent._state_rpc.report_state.side_effect = Exception
self._agent._report_state()
self._agent._state_rpc.report_state.assert_called_once_with(
self._agent._context, {'start_flag': True})
self.assertTrue(self._agent._agent_state['start_flag'])


@@ -1,286 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import mock
from neutron.agent import rpc as agent_rpc
from neutron.common import topics
from neutron import wsgi
from oslo_config import cfg
import webob
from hyperv.neutron.agent import base as base_agent
from hyperv.neutron.agent import hnv_metadata_agent
from hyperv.tests import base as test_base
CONF = cfg.CONF
class TestMetadataProxyHandler(test_base.BaseTestCase):
@mock.patch("hyperv.neutron.neutron_client.NeutronAPIClient")
@mock.patch("neutron_lib.context.get_admin_context_without_session")
def _get_proxy(self, mock_get_context, mock_neutron_client):
return hnv_metadata_agent._MetadataProxyHandler()
def setUp(self):
super(TestMetadataProxyHandler, self).setUp()
hnv_metadata_agent.register_config_opts()
self._proxy = self._get_proxy()
self._neutron_client = self._proxy._neutron_client
@mock.patch.object(hnv_metadata_agent._MetadataProxyHandler,
"_proxy_request")
def test_call(self, mock_proxy_request):
mock_proxy_request.side_effect = [mock.sentinel.response,
ValueError("_proxy_request_error")]
self.assertEqual(mock.sentinel.response,
self._proxy(mock.sentinel.request))
mock_proxy_request.assert_called_once_with(mock.sentinel.request)
self.assertIsInstance(self._proxy(mock.sentinel.request),
webob.exc.HTTPInternalServerError)
def test_get_port_profile_id(self):
url = "http://169.254.169.254/"
port_profile_id = "9d0bab3e-1abf-11e7-a7ef-5cc5d4a321db"
request = mock.Mock(path=url + port_profile_id)
request_invalid = mock.Mock(path=url)
self.assertEqual(port_profile_id,
self._proxy._get_port_profile_id(request))
self.assertIsNone(self._proxy._get_port_profile_id(request_invalid))
def test_get_instance_id(self):
self._neutron_client.get_network_ports.return_value = [
{},
{"binding:vif_details": {"port_profile_id": None}},
{"binding:vif_details": {
"port_profile_id": mock.sentinel.port_profile_id},
"tenant_id": mock.sentinel.tenant_id,
"device_id": mock.sentinel.instance_id},
]
self.assertEqual(
(mock.sentinel.tenant_id, mock.sentinel.instance_id),
self._proxy._get_instance_id(mock.sentinel.port_profile_id))
self._neutron_client.get_network_ports.return_value = []
self.assertEqual(
(None, None),
self._proxy._get_instance_id(mock.sentinel.port_profile_id))
def test_sign_instance_id(self):
self.config(metadata_proxy_shared_secret="secret")
self.assertEqual(
"0329a06b62cd16b33eb6792be8c60b158d89a2ee3a876fce9a881ebb488c0914",
self._proxy._sign_instance_id("test")
)
@mock.patch.object(hnv_metadata_agent._MetadataProxyHandler,
"_sign_instance_id")
@mock.patch.object(hnv_metadata_agent._MetadataProxyHandler,
"_get_instance_id")
def test_get_headers(self, mock_get_instance_id, mock_sign_instance_id):
mock_get_instance_id.side_effect = [
(mock.sentinel.tenant_id, mock.sentinel.instance_id),
(None, None),
]
expected_headers = {
'X-Instance-ID': mock.sentinel.instance_id,
'X-Tenant-ID': mock.sentinel.tenant_id,
'X-Instance-ID-Signature': mock_sign_instance_id.return_value,
}
self.assertEqual(
expected_headers,
self._proxy._get_headers(mock.sentinel.port))
mock_get_instance_id.assert_called_once_with(mock.sentinel.port)
self.assertIsNone(self._proxy._get_headers(mock.sentinel.port))
@mock.patch("httplib2.Http")
@mock.patch.object(hnv_metadata_agent._MetadataProxyHandler,
"_get_headers")
def _test_proxy_request(self, mock_get_headers, mock_http,
valid_path=True, valid_profile_id=True,
response_code=200, method='GET'):
nova_url = '%s:%s' % (CONF.nova_metadata_ip,
CONF.nova_metadata_port)
path = "/9d0bab3e-1abf-11e7-a7ef-5cc5d4a321db" if valid_path else "/"
headers = {"X-Not-Empty": True} if valid_profile_id else {}
mock_get_headers.return_value = headers
http_response = mock.MagicMock(status=response_code)
http_response.__getitem__.return_value = "text/plain"
http_request = mock_http.return_value
http_request.request.return_value = (http_response,
mock.sentinel.content)
        mock_response = mock.Mock(content_type=None, body=None)
        mock_request = mock.Mock(path=path, path_info=path, query_string='',
                                 headers={}, method=method,
                                 body=mock.sentinel.body)
        mock_request.response = mock_response
response = self._proxy._proxy_request(mock_request)
if not (valid_path and valid_profile_id):
http_request.add_certificate.assert_not_called()
http_request.request.assert_not_called()
return response
if CONF.nova_client_cert and CONF.nova_client_priv_key:
http_request.add_certificate.assert_called_once_with(
key=CONF.nova_client_priv_key,
cert=CONF.nova_client_cert,
domain=nova_url)
http_request.request.assert_called_once_with(
"http://127.0.0.1:8775/", method=method, headers=headers,
body=mock.sentinel.body)
return response
def test_proxy_request_200(self):
self.config(nova_client_cert=mock.sentinel.nova_client_cert,
nova_client_priv_key=mock.sentinel.priv_key)
response = self._test_proxy_request()
self.assertEqual("text/plain", response.content_type)
self.assertEqual(mock.sentinel.content, response.body)
def test_proxy_request_400(self):
self.assertIsInstance(
self._test_proxy_request(response_code=400),
webob.exc.HTTPBadRequest)
def test_proxy_request_403(self):
self.assertIsInstance(
self._test_proxy_request(response_code=403),
webob.exc.HTTPForbidden)
def test_proxy_request_409(self):
self.assertIsInstance(
self._test_proxy_request(response_code=409),
webob.exc.HTTPConflict)
def test_proxy_request_404(self):
self.assertIsInstance(
self._test_proxy_request(valid_path=False),
webob.exc.HTTPNotFound)
self.assertIsInstance(
self._test_proxy_request(valid_profile_id=False),
webob.exc.HTTPNotFound)
self.assertIsInstance(
self._test_proxy_request(response_code=404),
webob.exc.HTTPNotFound)
def test_proxy_request_500(self):
self.assertIsInstance(
self._test_proxy_request(response_code=500),
webob.exc.HTTPInternalServerError)
def test_proxy_request_other_code(self):
self.assertIsInstance(
self._test_proxy_request(response_code=527),
webob.exc.HTTPInternalServerError)
def test_proxy_request_post(self):
response = self._test_proxy_request(method='POST')
self.assertEqual("text/plain", response.content_type)
self.assertEqual(mock.sentinel.content, response.body)
class TestMetadataProxy(test_base.HyperVBaseTestCase):
@mock.patch.object(hnv_metadata_agent.MetadataProxy, "_setup_rpc")
@mock.patch.object(base_agent.BaseAgent, "_set_agent_state")
def _get_agent(self, mock_set_agent_state, mock_setup_rpc):
return hnv_metadata_agent.MetadataProxy()
def setUp(self):
super(TestMetadataProxy, self).setUp()
hnv_metadata_agent.register_config_opts()
self._agent = self._get_agent()
@mock.patch('oslo_service.loopingcall.FixedIntervalLoopingCall')
@mock.patch.object(agent_rpc, 'PluginReportStateAPI')
def test_setup_rpc(self, mock_plugin_report_state_api,
mock_looping_call):
report_interval = 10
self.config(report_interval=report_interval, group="AGENT")
self._agent._setup_rpc()
mock_plugin_report_state_api.assert_called_once_with(topics.REPORTS)
mock_looping_call.assert_called_once_with(self._agent._report_state)
mock_heartbeat = mock_looping_call.return_value
mock_heartbeat.start.assert_called_once_with(interval=report_interval)
def test_get_agent_configurations(self):
fake_ip = '10.10.10.10'
fake_port = 9999
self.config(nova_metadata_ip=fake_ip,
nova_metadata_port=fake_port)
configuration = self._agent._get_agent_configurations()
self.assertEqual(fake_ip, configuration["nova_metadata_ip"])
self.assertEqual(fake_port, configuration["nova_metadata_port"])
self.assertEqual(CONF.AGENT.log_agent_heartbeats,
configuration["log_agent_heartbeats"])
@mock.patch.object(hnv_metadata_agent, "_MetadataProxyHandler")
@mock.patch.object(wsgi, "Server")
def test_work(self, mock_server, mock_proxy_handler):
self._agent._work()
mock_server.assert_called_once_with(
name=self._agent._AGENT_BINARY,
num_threads=CONF.AGENT.worker_count)
server = mock_server.return_value
server.start.assert_called_once_with(
application=mock_proxy_handler.return_value,
port=CONF.bind_port,
host=CONF.bind_host)
server.wait.assert_called_once_with()
@mock.patch.object(hnv_metadata_agent.MetadataProxy, "_work")
@mock.patch.object(hnv_metadata_agent.MetadataProxy, "_prologue")
def test_run(self, mock_prologue, mock_work):
mock_work.side_effect = ValueError
self._agent.run()
mock_prologue.assert_called_once_with()
mock_work.assert_called_once_with()
class TestMain(test_base.BaseTestCase):
@mock.patch.object(hnv_metadata_agent, 'MetadataProxy')
@mock.patch.object(hnv_metadata_agent, 'common_config')
@mock.patch.object(hnv_metadata_agent, 'meta_config')
@mock.patch.object(hnv_metadata_agent, 'neutron_config')
def test_main(self, mock_config, mock_meta_config, mock_common_config,
mock_proxy):
hnv_metadata_agent.main()
mock_config.register_agent_state_opts_helper.assert_called_once_with(
CONF)
mock_meta_config.register_meta_conf_opts.assert_called_once_with(
hnv_metadata_agent.meta_config.METADATA_PROXY_HANDLER_OPTS)
mock_common_config.init.assert_called_once_with(sys.argv[1:])
mock_config.setup_logging.assert_called_once_with()
mock_proxy.assert_called_once_with()
mock_proxy.return_value.run.assert_called_once_with()
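The status-code tests above (200, 400, 403, 404, 409, 500, and an unknown 527) pin down a dispatch from the Nova-side response status to the error returned to the instance. A minimal stdlib-only sketch of that mapping — `classify_response` is a hypothetical helper, and `http.HTTPStatus` stands in for the `webob.exc` classes the real handler returns:

```python
from http import HTTPStatus

# Known upstream statuses map to a matching error; anything unrecognized
# is reported as an internal server error, as the 527 test expects.
_ERROR_MAP = {
    400: HTTPStatus.BAD_REQUEST,
    403: HTTPStatus.FORBIDDEN,
    404: HTTPStatus.NOT_FOUND,
    409: HTTPStatus.CONFLICT,
    500: HTTPStatus.INTERNAL_SERVER_ERROR,
}


def classify_response(status_code):
    """Return the status the proxy should report back to the instance."""
    if status_code == 200:
        return HTTPStatus.OK
    return _ERROR_MAP.get(status_code, HTTPStatus.INTERNAL_SERVER_ERROR)
```

An invalid path or missing profile id is handled before any request is made, which is why those tests assert the 404 result together with `request.assert_not_called()`.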


@ -1,140 +0,0 @@
# Copyright 2017 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for the Neutron HNV L2 Agent.
"""
import sys
import mock
from hyperv.neutron.agent import hnv_neutron_agent as hnv_agent
from hyperv.neutron import constants
from hyperv.tests import base as test_base
class TestHNVAgent(test_base.HyperVBaseTestCase):
@mock.patch.object(hnv_agent.HNVAgent, "_setup")
@mock.patch.object(hnv_agent.HNVAgent, "_setup_rpc")
@mock.patch.object(hnv_agent.HNVAgent, "_set_agent_state")
def _get_agent(self, mock_set_agent_state, mock_setup_rpc, mock_setup):
return hnv_agent.HNVAgent()
def setUp(self):
super(TestHNVAgent, self).setUp()
self.agent = self._get_agent()
self.agent._neutron_client = mock.Mock()
def test_get_agent_configurations(self):
self.config(logical_network=mock.sentinel.logical_network,
group="HNV")
self.agent._physical_network_mappings = mock.sentinel.mappings
agent_configurations = self.agent._get_agent_configurations()
expected_keys = ["logical_network", "vswitch_mappings",
"devices", "l2_population", "tunnel_types",
"bridge_mappings", "enable_distributed_routing"]
self.assertEqual(sorted(expected_keys),
sorted(agent_configurations.keys()))
self.assertEqual(mock.sentinel.mappings,
agent_configurations["vswitch_mappings"])
self.assertEqual(str(mock.sentinel.logical_network),
agent_configurations["logical_network"])
@mock.patch.object(hnv_agent.HNVAgent, "_get_vswitch_name")
def test_provision_network(self, mock_get_vswitch_name):
self.agent._provision_network(mock.sentinel.port_id,
mock.sentinel.net_uuid,
mock.sentinel.network_type,
mock.sentinel.physical_network,
mock.sentinel.segmentation_id)
mock_get_vswitch_name.assert_called_once_with(
mock.sentinel.network_type,
mock.sentinel.physical_network)
vswitch_map = self.agent._network_vswitch_map[mock.sentinel.net_uuid]
self.assertEqual(mock.sentinel.network_type,
vswitch_map['network_type'])
self.assertEqual(mock_get_vswitch_name.return_value,
vswitch_map['vswitch_name'])
self.assertEqual(mock.sentinel.segmentation_id,
vswitch_map['vlan_id'])
@mock.patch.object(hnv_agent.hyperv_base.Layer2Agent, '_port_bound')
def test_port_bound(self, mock_super_port_bound):
self.agent._port_bound(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id)
mock_super_port_bound.assert_called_once_with(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id)
mock_neutron_client = self.agent._neutron_client
mock_neutron_client.get_port_profile_id.assert_called_once_with(
mock.sentinel.port_id)
self.agent._utils.set_vswitch_port_profile_id.assert_called_once_with(
switch_port_name=mock.sentinel.port_id,
profile_id=mock_neutron_client.get_port_profile_id.return_value,
profile_data=constants.PROFILE_DATA,
profile_name=constants.PROFILE_NAME,
net_cfg_instance_id=constants.NET_CFG_INSTANCE_ID,
cdn_label_id=constants.CDN_LABEL_ID,
cdn_label_string=constants.CDN_LABEL_STRING,
vendor_id=constants.VENDOR_ID,
vendor_name=constants.VENDOR_NAME)
@mock.patch.object(hnv_agent.HNVAgent, '_port_bound')
def test_treat_vif_port_state_up(self, mock_port_bound):
self.agent._treat_vif_port(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id, True)
mock_port_bound.assert_called_once_with(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id)
@mock.patch.object(hnv_agent.HNVAgent, '_port_unbound')
def test_treat_vif_port_state_down(self, mock_port_unbound):
self.agent._treat_vif_port(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id, False)
mock_port_unbound.assert_called_once_with(mock.sentinel.port_id)
class TestMain(test_base.BaseTestCase):
@mock.patch.object(hnv_agent, 'HNVAgent')
@mock.patch.object(hnv_agent, 'common_config')
@mock.patch.object(hnv_agent, 'neutron_config')
def test_main(self, mock_config, mock_common_config, mock_hnv_agent):
hnv_agent.main()
mock_config.register_agent_state_opts_helper.assert_called_once_with(
hnv_agent.CONF)
mock_common_config.init.assert_called_once_with(sys.argv[1:])
mock_config.setup_logging.assert_called_once_with()
mock_hnv_agent.assert_called_once_with()
mock_hnv_agent.return_value.daemon_loop.assert_called_once_with()
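Both agent test classes verify the same `_treat_vif_port` contract: `admin_state_up` selects between binding (with the full network details) and unbinding (by port id alone). A stdlib-only sketch of that dispatch, with hypothetical callables standing in for the agent's bound methods:

```python
def treat_vif_port(port_id, network_id, network_type, physical_network,
                   segmentation_id, admin_state_up, bind, unbind):
    # An administratively-up port is bound with its full network details;
    # a down port is unbound by id only, mirroring what the
    # test_treat_vif_port_state_up/_down cases assert.
    if admin_state_up:
        bind(port_id, network_id, network_type, physical_network,
             segmentation_id)
    else:
        unbind(port_id)


calls = []
treat_vif_port('port', 'net', 'vlan', 'phys', 42, True,
               bind=lambda *a: calls.append(('bound', a)),
               unbind=lambda *a: calls.append(('unbound', a)))
treat_vif_port('port', 'net', 'vlan', 'phys', 42, False,
               bind=lambda *a: calls.append(('bound', a)),
               unbind=lambda *a: calls.append(('unbound', a)))
```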


@ -1,449 +0,0 @@
# Copyright 2013 Cloudbase Solutions SRL
# Copyright 2013 Pedro Navarro Perez
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for Windows Hyper-V virtual switch neutron driver
"""
import sys
import mock
from neutron.common import topics
from os_win import exceptions
from hyperv.neutron.agent import hyperv_neutron_agent as hyperv_agent
from hyperv.neutron.agent import layer2
from hyperv.neutron import constants
from hyperv.neutron import exception
from hyperv.tests import base
class TestHyperVSecurityAgent(base.BaseTestCase):
@mock.patch.object(hyperv_agent.HyperVSecurityAgent, '__init__',
lambda *args, **kwargs: None)
def setUp(self):
super(TestHyperVSecurityAgent, self).setUp()
self.agent = hyperv_agent.HyperVSecurityAgent()
@mock.patch.object(hyperv_agent, 'HyperVSecurityCallbackMixin')
@mock.patch.object(hyperv_agent.agent_rpc, 'create_consumers')
def test_setup_rpc(self, mock_create_consumers, mock_HyperVSecurity):
self.agent._setup_rpc()
self.assertEqual(topics.AGENT, self.agent.topic)
self.assertEqual([mock_HyperVSecurity.return_value],
self.agent.endpoints)
self.assertEqual(mock_create_consumers.return_value,
self.agent.connection)
mock_create_consumers.assert_called_once_with(
self.agent.endpoints, self.agent.topic,
[[topics.SECURITY_GROUP, topics.UPDATE]])
class TestHyperVNeutronAgent(base.HyperVBaseTestCase):
_FAKE_PORT_ID = 'fake_port_id'
@mock.patch.object(hyperv_agent.HyperVNeutronAgent, "_setup")
@mock.patch.object(hyperv_agent.HyperVNeutronAgent, "_setup_rpc")
@mock.patch.object(hyperv_agent.HyperVNeutronAgent, "_set_agent_state")
def _get_agent(self, mock_set_agent_state, mock_setup_rpc, mock_setup):
return hyperv_agent.HyperVNeutronAgent()
def setUp(self):
super(TestHyperVNeutronAgent, self).setUp()
self.agent = self._get_agent()
self.agent._qos_ext = mock.MagicMock()
self.agent._plugin_rpc = mock.Mock()
self.agent._metricsutils = mock.MagicMock()
self.agent._utils = mock.MagicMock()
self.agent._sec_groups_agent = mock.MagicMock()
self.agent._context = mock.Mock()
self.agent._client = mock.MagicMock()
self.agent._connection = mock.MagicMock()
self.agent._agent_id = mock.Mock()
self.agent._utils = mock.MagicMock()
self.agent._nvgre_ops = mock.MagicMock()
self.agent._vlan_driver = mock.MagicMock()
self.agent._refresh_cache = False
self.agent._added_ports = set()
def test_get_agent_configurations(self):
self.agent._physical_network_mappings = mock.sentinel.mappings
fake_ip = '10.10.10.10'
self.config(enable_support=True,
provider_tunnel_ip=fake_ip,
group="NVGRE")
agent_configurations = self.agent._get_agent_configurations()
expected_keys = ["vswitch_mappings", "arp_responder_enabled",
"tunneling_ip", "devices", "l2_population",
"tunnel_types", "enable_distributed_routing",
"bridge_mappings"]
self.assertEqual(sorted(expected_keys),
sorted(agent_configurations.keys()))
self.assertEqual(mock.sentinel.mappings,
agent_configurations["vswitch_mappings"])
self.assertEqual(fake_ip,
agent_configurations["tunneling_ip"])
@mock.patch("hyperv.neutron.trunk_driver.HyperVTrunkDriver")
@mock.patch("neutron.agent.securitygroups_rpc.SecurityGroupServerRpcApi")
@mock.patch("hyperv.neutron.agent.hyperv_neutron_agent"
".HyperVSecurityAgent")
@mock.patch.object(layer2.Layer2Agent, "_setup")
def test_setup(self, mock_setup, mock_security_agent, mock_sg_rpc,
mock_trunk_driver):
self.agent._context = mock.sentinel.admin_context
self.agent._consumers = []
self.config(enable_support=True, group="NVGRE")
self.agent._setup()
expected_consumers = [[constants.TUNNEL, topics.UPDATE],
[constants.LOOKUP, constants.UPDATE]]
mock_setup.assert_called_once_with()
mock_sg_rpc.assert_called_once_with(topics.PLUGIN)
mock_security_agent.assert_called_once_with(
mock.sentinel.admin_context, mock_sg_rpc.return_value)
mock_trunk_driver.assert_called_once_with(mock.sentinel.admin_context)
self.assertEqual(expected_consumers, self.agent._consumers)
@mock.patch("neutron.agent.l2.extensions.qos.QosAgentExtension")
def test_setup_qos_extension(self, mock_qos_agent):
self.config(enable_qos_extension=True, group="AGENT")
mock_qos_agent_extension = mock.Mock()
mock_qos_agent.return_value = mock_qos_agent_extension
self.agent._setup_qos_extension()
mock_qos_agent_extension.consume_api.assert_called_once_with(
self.agent)
mock_qos_agent_extension.initialize.assert_called_once_with(
self.agent._connection, "hyperv")
@mock.patch.object(hyperv_agent.nvgre_ops, 'HyperVNvgreOps')
def test_init_nvgre_disabled(self, mock_hyperv_nvgre_ops):
self.agent._init_nvgre()
self.assertFalse(mock_hyperv_nvgre_ops.called)
self.assertFalse(self.agent._nvgre_enabled)
@mock.patch.object(hyperv_agent.nvgre_ops, 'HyperVNvgreOps')
def test_init_nvgre_no_tunnel_ip(self, mock_hyperv_nvgre_ops):
self.config(enable_support=True, group='NVGRE')
self.assertRaises(exception.NetworkingHyperVException,
self.agent._init_nvgre)
@mock.patch.object(hyperv_agent.nvgre_ops, 'HyperVNvgreOps')
def test_init_nvgre_enabled(self, mock_hyperv_nvgre_ops):
self.config(enable_support=True, group='NVGRE')
fake_ip = '10.10.10.10'
self.config(provider_tunnel_ip=fake_ip,
group='NVGRE')
self.agent._init_nvgre()
mock_hyperv_nvgre_ops.assert_called_once_with(
list(self.agent._physical_network_mappings.values()))
self.assertTrue(self.agent._nvgre_enabled)
self.agent._nvgre_ops.init_notifier.assert_called_once_with(
self.agent._context, self.agent._client)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
"_get_vswitch_name")
def test_provision_network_exception(self, mock_get_vswitch_name):
self.assertRaises(exception.NetworkingHyperVException,
self.agent._provision_network,
mock.sentinel.FAKE_PORT_ID,
mock.sentinel.FAKE_NET_UUID,
mock.sentinel.FAKE_NETWORK_TYPE,
mock.sentinel.FAKE_PHYSICAL_NETWORK,
mock.sentinel.FAKE_SEGMENTATION_ID)
mock_get_vswitch_name.assert_called_once_with(
mock.sentinel.FAKE_NETWORK_TYPE,
mock.sentinel.FAKE_PHYSICAL_NETWORK)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
"_get_vswitch_name")
def test_provision_network_vlan(self, mock_get_vswitch_name):
self.agent._provision_network(mock.sentinel.FAKE_PORT_ID,
mock.sentinel.FAKE_NET_UUID,
constants.TYPE_VLAN,
mock.sentinel.FAKE_PHYSICAL_NETWORK,
mock.sentinel.FAKE_SEGMENTATION_ID)
mock_get_vswitch_name.assert_called_once_with(
constants.TYPE_VLAN,
mock.sentinel.FAKE_PHYSICAL_NETWORK)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
"_get_vswitch_name")
def test_provision_network_nvgre(self, mock_get_vswitch_name):
self.agent._nvgre_enabled = True
vswitch_name = mock_get_vswitch_name.return_value
self.agent._provision_network(mock.sentinel.FAKE_PORT_ID,
mock.sentinel.FAKE_NET_UUID,
constants.TYPE_NVGRE,
mock.sentinel.FAKE_PHYSICAL_NETWORK,
mock.sentinel.FAKE_SEGMENTATION_ID)
mock_get_vswitch_name.assert_called_once_with(
constants.TYPE_NVGRE,
mock.sentinel.FAKE_PHYSICAL_NETWORK)
self.agent._nvgre_ops.bind_nvgre_network.assert_called_once_with(
mock.sentinel.FAKE_SEGMENTATION_ID,
mock.sentinel.FAKE_NET_UUID,
vswitch_name)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
"_get_vswitch_name")
def test_provision_network_flat(self, mock_get_vswitch_name):
self.agent._provision_network(mock.sentinel.FAKE_PORT_ID,
mock.sentinel.FAKE_NET_UUID,
constants.TYPE_FLAT,
mock.sentinel.FAKE_PHYSICAL_NETWORK,
mock.sentinel.FAKE_SEGMENTATION_ID)
mock_get_vswitch_name.assert_called_once_with(
constants.TYPE_FLAT,
mock.sentinel.FAKE_PHYSICAL_NETWORK)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
"_get_vswitch_name")
def test_provision_network_local(self, mock_get_vswitch_name):
self.agent._provision_network(mock.sentinel.FAKE_PORT_ID,
mock.sentinel.FAKE_NET_UUID,
constants.TYPE_LOCAL,
mock.sentinel.FAKE_PHYSICAL_NETWORK,
mock.sentinel.FAKE_SEGMENTATION_ID)
mock_get_vswitch_name.assert_called_once_with(
constants.TYPE_LOCAL,
mock.sentinel.FAKE_PHYSICAL_NETWORK)
def _test_port_bound(self, enable_metrics):
self.agent._enable_metrics_collection = enable_metrics
port = mock.MagicMock()
net_uuid = 'my-net-uuid'
self.agent._port_bound(port, net_uuid, 'vlan', None, None)
self.assertEqual(enable_metrics,
self.agent._utils.add_metrics_collection_acls.called)
def test_port_bound_enable_metrics(self):
self._test_port_bound(True)
def test_port_bound_no_metrics(self):
self._test_port_bound(False)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_provision_network')
def _check_port_bound_net_type(self, mock_provision_network, network_type):
net_uuid = 'my-net-uuid'
fake_map = {'vswitch_name': mock.sentinel.vswitch_name,
'ports': []}
def fake_prov_network(*args, **kwargs):
self.agent._network_vswitch_map[net_uuid] = fake_map
mock_provision_network.side_effect = fake_prov_network
self.agent._port_bound(mock.sentinel.port_id, net_uuid, network_type,
mock.sentinel.physical_network,
mock.sentinel.segmentation_id)
self.assertIn(mock.sentinel.port_id, fake_map['ports'])
mock_provision_network.assert_called_once_with(
mock.sentinel.port_id, net_uuid, network_type,
mock.sentinel.physical_network, mock.sentinel.segmentation_id)
self.agent._utils.connect_vnic_to_vswitch.assert_called_once_with(
vswitch_name=mock.sentinel.vswitch_name,
switch_port_name=mock.sentinel.port_id)
def test_port_bound_vlan(self):
self._check_port_bound_net_type(network_type=constants.TYPE_VLAN)
self.agent._vlan_driver.bind_vlan_port.assert_called_once_with(
mock.sentinel.port_id, mock.sentinel.segmentation_id)
def test_port_bound_nvgre(self):
self.agent._nvgre_enabled = True
self._check_port_bound_net_type(network_type=constants.TYPE_NVGRE)
self.agent._nvgre_ops.bind_nvgre_port.assert_called_once_with(
mock.sentinel.segmentation_id, mock.sentinel.vswitch_name,
mock.sentinel.port_id)
def test_port_enable_control_metrics_ok(self):
self.agent._enable_metrics_collection = True
self.agent._port_metric_retries[self._FAKE_PORT_ID] = (
self.agent._metrics_max_retries)
self.agent._utils.is_metrics_collection_allowed.return_value = True
self.agent._port_enable_control_metrics()
enable_port_metrics_collection = (
self.agent._metricsutils.enable_port_metrics_collection)
enable_port_metrics_collection.assert_called_with(self._FAKE_PORT_ID)
self.assertNotIn(self._FAKE_PORT_ID, self.agent._port_metric_retries)
def test_port_enable_control_metrics_maxed(self):
self.agent._enable_metrics_collection = True
self.agent._metrics_max_retries = 3
self.agent._port_metric_retries[self._FAKE_PORT_ID] = 3
self.agent._utils.is_metrics_collection_allowed.return_value = False
for _ in range(4):
self.assertIn(self._FAKE_PORT_ID,
self.agent._port_metric_retries)
self.agent._port_enable_control_metrics()
self.assertNotIn(self._FAKE_PORT_ID, self.agent._port_metric_retries)
def test_port_enable_control_metrics_no_vnic(self):
self.agent._enable_metrics_collection = True
self.agent._port_metric_retries[self._FAKE_PORT_ID] = 3
self.agent._utils.is_metrics_collection_allowed.side_effect = (
exceptions.NotFound(resource=self._FAKE_PORT_ID))
self.agent._port_enable_control_metrics()
self.assertNotIn(self._FAKE_PORT_ID, self.agent._port_metric_retries)
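The three metrics tests above pin down a bounded-retry scheme: a tracked port is retried until metrics collection is allowed, dropped once its retry budget is spent, and dropped immediately if its vNIC has disappeared. A minimal sketch under those assumptions (the names are hypothetical, not the driver's API, and `LookupError` stands in for `os_win`'s `NotFound`):

```python
def enable_control_metrics(retries, is_allowed, enable):
    """One polling pass over ports awaiting metrics enablement.

    retries: dict mapping port_id -> remaining attempts.
    is_allowed(port_id): True once the port is ready; raises
        LookupError when the vNIC no longer exists.
    enable(port_id): turns metrics collection on for the port.
    """
    for port_id in list(retries):
        try:
            if is_allowed(port_id):
                enable(port_id)
                del retries[port_id]      # success: stop tracking
            elif retries[port_id] < 1:
                del retries[port_id]      # budget exhausted: give up
            else:
                retries[port_id] -= 1     # not ready yet: retry later
        except LookupError:
            del retries[port_id]          # vNIC gone: nothing to do
```

With a budget of 3, a never-ready port survives exactly three passes and is dropped on the fourth, matching the loop in `test_port_enable_control_metrics_maxed`.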
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_port_unbound')
def test_vif_port_state_down(self, mock_port_unbound):
self.agent._treat_vif_port(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id, False)
mock_port_unbound.assert_called_once_with(mock.sentinel.port_id)
sg_agent = self.agent._sec_groups_agent
sg_agent.remove_devices_filter.assert_called_once_with(
[mock.sentinel.port_id])
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_port_bound')
def _check_treat_vif_port_state_up(self, mock_port_bound):
self.agent._treat_vif_port(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id, True)
mock_port_bound.assert_called_once_with(
mock.sentinel.port_id, mock.sentinel.network_id,
mock.sentinel.network_type, mock.sentinel.physical_network,
mock.sentinel.segmentation_id)
def test_treat_vif_port_sg_enabled(self):
self.agent._enable_security_groups = True
self._check_treat_vif_port_state_up()
sg_agent = self.agent._sec_groups_agent
sg_agent.refresh_firewall.assert_called_once_with(
[mock.sentinel.port_id])
def test_treat_vif_port_sg_disabled(self):
self.agent._enable_security_groups = False
self._check_treat_vif_port_state_up()
self.agent._utils.remove_all_security_rules.assert_called_once_with(
mock.sentinel.port_id)
def _get_fake_port_details(self):
return {
'device': mock.sentinel.device,
'port_id': mock.sentinel.port_id,
'network_id': mock.sentinel.network_id,
'network_type': mock.sentinel.network_type,
'physical_network': mock.sentinel.physical_network,
'segmentation_id': mock.sentinel.segmentation_id,
'admin_state_up': mock.sentinel.admin_state_up
}
@mock.patch.object(layer2.Layer2Agent, "_process_added_port")
def test_process_added_port(self, mock_process):
self.config(enable_qos_extension=True, group="AGENT")
self.agent._process_added_port(mock.sentinel.device_details)
mock_process.assert_called_once_with(mock.sentinel.device_details)
self.agent._qos_ext.handle_port.assert_called_once_with(
self.agent._context, mock.sentinel.device_details)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_port_unbound')
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_update_port_status_cache')
def test_process_removed_port_exception(self, mock_update_port_cache,
mock_port_unbound):
self.agent._removed_ports = set([mock.sentinel.port_id])
remove_devices = self.agent._sec_groups_agent.remove_devices_filter
remove_devices.side_effect = exception.NetworkingHyperVException
self.assertRaises(exception.NetworkingHyperVException,
self.agent._process_removed_port,
mock.sentinel.port_id)
mock_update_port_cache.assert_called_once_with(
mock.sentinel.port_id, device_bound=False)
self.assertIn(mock.sentinel.port_id, self.agent._removed_ports)
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_port_unbound')
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_update_port_status_cache')
def test_process_removed_port(self, mock_update_port_cache,
mock_port_unbound):
self.agent._removed_ports = set([mock.sentinel.port_id])
self.agent._process_removed_port(mock.sentinel.port_id)
mock_update_port_cache.assert_called_once_with(
mock.sentinel.port_id, device_bound=False)
mock_port_unbound.assert_called_once_with(mock.sentinel.port_id,
vnic_deleted=True)
sg_agent = self.agent._sec_groups_agent
sg_agent.remove_devices_filter.assert_called_once_with(
[mock.sentinel.port_id])
self.assertNotIn(mock.sentinel.port_id, self.agent._removed_ports)
@mock.patch.object(layer2.Layer2Agent, "_work")
@mock.patch.object(hyperv_agent.HyperVNeutronAgent,
'_port_enable_control_metrics')
def test_work(self, mock_port_enable_metrics, mock_work):
self.agent._nvgre_enabled = True
self.agent._work()
mock_work.assert_called_once_with()
self.agent._nvgre_ops.refresh_nvgre_records.assert_called_once_with()
mock_port_enable_metrics.assert_called_with()
class TestMain(base.BaseTestCase):
@mock.patch.object(hyperv_agent, 'HyperVNeutronAgent')
@mock.patch.object(hyperv_agent, 'common_config')
@mock.patch.object(hyperv_agent, 'neutron_config')
def test_main(self, mock_config, mock_common_config, mock_hyperv_agent):
hyperv_agent.main()
mock_config.register_agent_state_opts_helper.assert_called_once_with(
hyperv_agent.CONF)
mock_common_config.init.assert_called_once_with(sys.argv[1:])
mock_config.setup_logging.assert_called_once_with()
mock_hyperv_agent.assert_called_once_with()
mock_hyperv_agent.return_value.daemon_loop.assert_called_once_with()


@ -1,552 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for Neutron layer 2 agent.
"""
import collections
import ddt
import eventlet
import mock
import neutron
from neutron.common import topics
from neutron.conf.agent import common as neutron_config
from os_win import exceptions as os_win_exc
from hyperv.neutron.agent import layer2 as agent_base
from hyperv.neutron import config
from hyperv.neutron import constants
from hyperv.neutron import exception
from hyperv.tests import base as test_base
CONF = config.CONF
class _Layer2Agent(agent_base.Layer2Agent):
def _get_agent_configurations(self):
pass
def _report_state(self):
pass
def _provision_network(self, port_id, net_uuid, network_type,
physical_network, segmentation_id):
pass
def _treat_vif_port(self, port_id, network_id, network_type,
physical_network, segmentation_id,
admin_state_up):
pass
@ddt.ddt
class TestLayer2Agent(test_base.HyperVBaseTestCase):
_FAKE_PORT_ID = 'fake_port_id'
@mock.patch.object(_Layer2Agent, "_setup")
@mock.patch.object(_Layer2Agent, "_setup_rpc")
@mock.patch.object(_Layer2Agent, "_set_agent_state")
def _get_agent(self, mock_set_agent_state, mock_setup_rpc, mock_setup):
return _Layer2Agent()
def setUp(self):
super(TestLayer2Agent, self).setUp()
neutron_config.register_agent_state_opts_helper(CONF)
self._agent = self._get_agent()
self._agent._qos_ext = mock.MagicMock()
self._agent._plugin_rpc = mock.Mock()
self._agent._metricsutils = mock.MagicMock()
self._agent._utils = mock.MagicMock()
self._agent._context = mock.Mock()
self._agent._client = mock.MagicMock()
self._agent._connection = mock.MagicMock()
self._agent._agent_id = mock.Mock()
self._agent._utils = mock.MagicMock()
self._agent._nvgre_ops = mock.MagicMock()
self._agent._vlan_driver = mock.MagicMock()
self._agent._physical_network_mappings = collections.OrderedDict()
self._agent._config = mock.MagicMock()
self._agent._endpoints = mock.MagicMock()
self._agent._event_callback_pairs = mock.MagicMock()
self._agent._network_vswitch_map = {}
def _get_fake_port_details(self):
return {
'device': mock.sentinel.device,
'port_id': mock.sentinel.port_id,
'network_id': mock.sentinel.network_id,
'network_type': mock.sentinel.network_type,
'physical_network': mock.sentinel.physical_network,
'segmentation_id': mock.sentinel.segmentation_id,
'admin_state_up': mock.sentinel.admin_state_up
}
@mock.patch.object(agent_base.Layer2Agent, '_process_removed_port_event',
mock.sentinel._process_removed_port_event)
@mock.patch.object(agent_base.Layer2Agent, '_process_added_port_event',
mock.sentinel._process_added_port_event)
@mock.patch.object(eventlet.tpool, 'set_num_threads')
@mock.patch.object(agent_base.Layer2Agent,
'_load_physical_network_mappings')
def test_setup(self, mock_load_phys_net_mapp,
mock_set_num_threads):
self.config(
group="AGENT",
worker_count=12,
physical_network_vswitch_mappings=["fake_mappings"],
local_network_vswitch="local_network_vswitch")
self._agent._event_callback_pairs = []
self._agent._setup()
mock_load_phys_net_mapp.assert_called_once_with(["fake_mappings"])
self._agent._endpoints.append.assert_called_once_with(self._agent)
self.assertIn((self._agent._utils.EVENT_TYPE_CREATE,
mock.sentinel._process_added_port_event),
self._agent._event_callback_pairs)
self.assertIn((self._agent._utils.EVENT_TYPE_DELETE,
mock.sentinel._process_removed_port_event),
self._agent._event_callback_pairs)
@mock.patch('oslo_service.loopingcall.FixedIntervalLoopingCall')
@mock.patch.object(agent_base.Layer2Agent, '_setup_qos_extension')
@mock.patch.object(neutron.agent.rpc, 'create_consumers')
@mock.patch.object(neutron.common.rpc, 'get_client')
@mock.patch.object(neutron.agent.rpc, 'PluginReportStateAPI')
@mock.patch.object(neutron.agent.rpc, 'PluginApi')
def test_setup_rpc(self, mock_plugin_api, mock_plugin_report_state_api,
mock_get_client, mock_create_consumers,
mock_setup_qos_extension, mock_looping_call):
self.config(group="AGENT",
report_interval=1)
consumers = [[topics.PORT, topics.UPDATE],
[topics.NETWORK, topics.DELETE],
[topics.PORT, topics.DELETE]]
mock_heartbeat = mock.MagicMock()
mock_create_consumers.return_value = self._agent._connection
mock_looping_call.return_value = mock_heartbeat
self._agent._setup_rpc()
mock_plugin_api.assert_called_once_with(topics.PLUGIN)
mock_plugin_report_state_api.assert_called_once_with(topics.PLUGIN)
mock_get_client.assert_called_once_with(self._agent.target)
self.assertEqual(self._agent._consumers, consumers)
mock_create_consumers.assert_called_once_with(
self._agent._endpoints, self._agent._topic, self._agent._consumers,
start_listening=False)
mock_setup_qos_extension.assert_called_once_with()
self._agent._connection.consume_in_threads.assert_called_once_with()
mock_looping_call.assert_called_once_with(self._agent._report_state)
mock_heartbeat.start.assert_called_once_with(
interval=CONF.AGENT.report_interval)
def test_process_added_port_event(self):
self._agent._added_ports = set()
self._agent._process_added_port_event(mock.sentinel.port_id)
self.assertIn(mock.sentinel.port_id, self._agent._added_ports)
def test_process_removed_port_event(self):
self._agent._removed_ports = set([])
self._agent._process_removed_port_event(mock.sentinel.port_id)
self.assertIn(mock.sentinel.port_id, self._agent._removed_ports)
def test_load_physical_network_mappings(self):
test_mappings = [
'fakenetwork1:fake_vswitch', 'fakenetwork2:fake_vswitch_2',
'*:fake_vswitch_3', 'bad_mapping'
]
expected = [
('fakenetwork1$', 'fake_vswitch'),
('fakenetwork2$', 'fake_vswitch_2'),
('.*$', 'fake_vswitch_3')
]
self._agent._physical_network_mappings = collections.OrderedDict()
self._agent._load_physical_network_mappings(test_mappings)
self.assertEqual(
sorted(expected),
sorted(self._agent._physical_network_mappings.items())
)
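`test_load_physical_network_mappings` fixes the parsing rules: each `pattern:vswitch` entry becomes an anchored, escaped regex key (`*` turning into `.*`), and entries without exactly one colon, like `'bad_mapping'`, are skipped. A sketch of a loader satisfying exactly those assertions (a hypothetical standalone helper, not the agent's method):

```python
import collections
import re


def load_physical_network_mappings(mappings):
    # Each "pattern:vswitch" entry becomes an anchored regex: the name is
    # escaped, any '*' wildcard becomes '.*', and '$' is appended so
    # 'fakenetwork1' cannot accidentally match 'fakenetwork10'.
    parsed = collections.OrderedDict()
    for entry in mappings:
        parts = entry.split(':')
        if len(parts) != 2:
            continue  # malformed entries such as 'bad_mapping' are ignored
        pattern = re.escape(parts[0].strip()).replace('\\*', '.*') + '$'
        parsed[pattern] = parts[1].strip()
    return parsed


mappings = load_physical_network_mappings([
    'fakenetwork1:fake_vswitch', 'fakenetwork2:fake_vswitch_2',
    '*:fake_vswitch_3', 'bad_mapping'])
```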
def test_get_vswitch_for_physical_network_with_default_switch(self):
test_mappings = [
'fakenetwork:fake_vswitch',
'fakenetwork2$:fake_vswitch_2',
'fakenetwork*:fake_vswitch_3'
]
self._agent._physical_network_mappings = collections.OrderedDict()
self._agent._load_physical_network_mappings(test_mappings)
get_vswitch = self._agent._get_vswitch_for_physical_network
self.assertEqual('fake_vswitch', get_vswitch('fakenetwork'))
self.assertEqual('fake_vswitch_2', get_vswitch('fakenetwork2$'))
self.assertEqual('fake_vswitch_3', get_vswitch('fakenetwork3'))
self.assertEqual('fake_vswitch_3', get_vswitch('fakenetwork35'))
self.assertEqual('fake_network1', get_vswitch('fake_network1'))
def test_get_vswitch_for_physical_network_without_default_switch(self):
test_mappings = [
'fakenetwork:fake_vswitch',
'fakenetwork2:fake_vswitch_2'
]
self._agent._load_physical_network_mappings(test_mappings)
get_vswitch = self._agent._get_vswitch_for_physical_network
self.assertEqual('fake_vswitch', get_vswitch("fakenetwork"))
self.assertEqual('fake_vswitch_2', get_vswitch("fakenetwork2"))
def test_get_vswitch_for_physical_network_none(self):
get_vswitch = self._agent._get_vswitch_for_physical_network
test_mappings = [
'fakenetwork:fake_vswitch',
'fakenetwork2:fake_vswitch_2'
]
self._agent._load_physical_network_mappings(test_mappings)
self.assertEqual('', get_vswitch(None))
test_mappings = [
'fakenetwork:fake_vswitch',
'fakenetwork2:fake_vswitch_2',
'*:fake_vswitch_3'
]
self._agent._load_physical_network_mappings(test_mappings)
self.assertEqual('fake_vswitch_3', get_vswitch(None))
def test_get_vswitch_name_local(self):
self._agent._local_network_vswitch = 'test_local_switch'
ret = self._agent._get_vswitch_name(
constants.TYPE_LOCAL, mock.sentinel.FAKE_PHYSICAL_NETWORK)
self.assertEqual('test_local_switch', ret)
@mock.patch.object(agent_base.Layer2Agent,
"_get_vswitch_for_physical_network")
def test_get_vswitch_name_vlan(self, mock_get_vswitch_for_phys_net):
ret = self._agent._get_vswitch_name(
constants.TYPE_VLAN, mock.sentinel.FAKE_PHYSICAL_NETWORK)
self.assertEqual(mock_get_vswitch_for_phys_net.return_value, ret)
mock_get_vswitch_for_phys_net.assert_called_once_with(
mock.sentinel.FAKE_PHYSICAL_NETWORK)
def test_get_network_vswitch_map_by_port_id(self):
net_uuid = 'net-uuid'
self._agent._network_vswitch_map = {
net_uuid: {'ports': [self._FAKE_PORT_ID]}
}
network, port_map = self._agent._get_network_vswitch_map_by_port_id(
self._FAKE_PORT_ID)
self.assertEqual(net_uuid, network)
self.assertEqual({'ports': [self._FAKE_PORT_ID]}, port_map)
def test_get_network_vswitch_map_by_port_id_not_found(self):
net_uuid = 'net-uuid'
self._agent._network_vswitch_map = {net_uuid: {'ports': []}}
network, port_map = self._agent._get_network_vswitch_map_by_port_id(
self._FAKE_PORT_ID)
self.assertIsNone(network)
self.assertIsNone(port_map)
def test_update_port_status_cache_added(self):
self._agent._unbound_ports = set([mock.sentinel.bound_port])
self._agent._update_port_status_cache(mock.sentinel.bound_port)
self.assertEqual(set([mock.sentinel.bound_port]),
self._agent._bound_ports)
self.assertEqual(set([]), self._agent._unbound_ports)
def test_update_port_status_cache_removed(self):
self._agent._bound_ports = set([mock.sentinel.unbound_port])
self._agent._update_port_status_cache(mock.sentinel.unbound_port,
device_bound=False)
self.assertEqual(set([]), self._agent._bound_ports)
self.assertEqual(set([mock.sentinel.unbound_port]),
self._agent._unbound_ports)
@mock.patch('eventlet.spawn_n')
def test_create_event_listeners(self, mock_spawn):
self._agent._event_callback_pairs = [
(mock.sentinel.event_type, mock.sentinel.callback)]
self._agent._create_event_listeners()
self._agent._utils.get_vnic_event_listener.assert_called_once_with(
mock.sentinel.event_type)
mock_spawn.assert_called_once_with(
self._agent._utils.get_vnic_event_listener.return_value,
mock.sentinel.callback)
@mock.patch.object(agent_base.Layer2Agent,
'_create_event_listeners')
def test_prologue(self, mock_create_listeners):
self._agent._prologue()
# self._added_ports = self._utils.get_vnic_ids()
mock_create_listeners.assert_called_once_with()
def test_reclaim_local_network(self):
self._agent._network_vswitch_map = {}
self._agent._network_vswitch_map[mock.sentinel.net_id] = (
mock.sentinel.vswitch)
self._agent._reclaim_local_network(mock.sentinel.net_id)
self.assertNotIn(mock.sentinel.net_id,
self._agent._network_vswitch_map)
def test_port_bound_no_metrics(self):
self._agent.enable_metrics_collection = False
port = mock.sentinel.port
net_uuid = 'my-net-uuid'
self._agent._network_vswitch_map[net_uuid] = {
'ports': [],
'vswitch_name': []
}
self._agent._port_bound(str(port), net_uuid, 'vlan', None, None)
self.assertFalse(self._agent._utils.add_metrics_collection_acls.called)
@mock.patch.object(agent_base.Layer2Agent,
'_provision_network')
def _check_port_bound_net_type(self, mock_provision_network, network_type):
net_uuid = 'my-net-uuid'
fake_map = {'vswitch_name': mock.sentinel.vswitch_name,
'ports': []}
def fake_prov_network(*args, **kwargs):
self._agent._network_vswitch_map[net_uuid] = fake_map
mock_provision_network.side_effect = fake_prov_network
self._agent._port_bound(mock.sentinel.port_id,
net_uuid, network_type,
mock.sentinel.physical_network,
mock.sentinel.segmentation_id)
self.assertIn(mock.sentinel.port_id, fake_map['ports'])
mock_provision_network.assert_called_once_with(
mock.sentinel.port_id, net_uuid, network_type,
mock.sentinel.physical_network, mock.sentinel.segmentation_id)
self._agent._utils.connect_vnic_to_vswitch.assert_called_once_with(
mock.sentinel.vswitch_name, mock.sentinel.port_id)
@mock.patch.object(agent_base.Layer2Agent,
'_get_network_vswitch_map_by_port_id')
def _check_port_unbound(self, mock_get_vswitch_map_by_port_id, ports=None,
net_uuid=None):
vswitch_map = {
'network_type': 'vlan',
'vswitch_name': 'fake-vswitch',
'ports': ports,
'vlan_id': 1}
network_vswitch_map = (net_uuid, vswitch_map)
mock_get_vswitch_map_by_port_id.return_value = network_vswitch_map
with mock.patch.object(
self._agent._utils,
'remove_switch_port') as mock_remove_switch_port:
self._agent._port_unbound(self._FAKE_PORT_ID, vnic_deleted=False)
if net_uuid:
mock_remove_switch_port.assert_called_once_with(
self._FAKE_PORT_ID, False)
else:
self.assertFalse(mock_remove_switch_port.called)
@mock.patch.object(agent_base.Layer2Agent,
'_reclaim_local_network')
def test_port_unbound(self, mock_reclaim_local_network):
net_uuid = 'my-net-uuid'
self._check_port_unbound(ports=[self._FAKE_PORT_ID],
net_uuid=net_uuid)
mock_reclaim_local_network.assert_called_once_with(net_uuid)
def test_port_unbound_port_not_found(self):
self._check_port_unbound()
@ddt.data(os_win_exc.HyperVvNicNotFound(vnic_name='fake_vnic'),
os_win_exc.HyperVPortNotFoundException(port_name='fake_port'),
Exception)
@mock.patch.object(_Layer2Agent, '_treat_vif_port')
def test_process_added_port_failed(self, side_effect, mock_treat_vif_port):
mock_treat_vif_port.side_effect = side_effect
self._agent._added_ports = set()
details = self._get_fake_port_details()
self._agent.process_added_port(details)
self.assertIn(mock.sentinel.device, self._agent._added_ports)
def test_treat_devices_added_returns_true_for_missing_device(self):
self._agent._added_ports = set([mock.sentinel.port_id])
attrs = {'get_devices_details_list.side_effect': Exception()}
self._agent._plugin_rpc.configure_mock(**attrs)
self._agent._treat_devices_added()
self.assertIn(mock.sentinel.port_id, self._agent._added_ports)
@mock.patch('eventlet.spawn_n')
def test_treat_devices_added_updates_known_port(self, mock_spawn):
self._agent._added_ports = set([mock.sentinel.device])
fake_port_details = self._get_fake_port_details()
kwargs = {'get_devices_details_list.return_value': [fake_port_details]}
self._agent._plugin_rpc.configure_mock(**kwargs)
self._agent._treat_devices_added()
mock_spawn.assert_called_once_with(
self._agent.process_added_port, fake_port_details)
self.assertNotIn(mock.sentinel.device, self._agent._added_ports)
def test_treat_devices_added_missing_port_id(self):
self._agent._added_ports = set([mock.sentinel.port_id])
details = {'device': mock.sentinel.port_id}
attrs = {'get_devices_details_list.return_value': [details]}
self._agent._plugin_rpc.configure_mock(**attrs)
self._agent._treat_devices_added()
self.assertNotIn(mock.sentinel.port_id, self._agent._added_ports)
@mock.patch.object(agent_base.Layer2Agent,
'_port_unbound')
@mock.patch.object(agent_base.Layer2Agent,
'_update_port_status_cache')
def test_process_removed_port_exception(self, mock_update_port_cache,
mock_port_unbound):
self._agent._removed_ports = set([mock.sentinel.port_id])
raised_exc = exception.NetworkingHyperVException
mock_port_unbound.side_effect = raised_exc
self.assertRaises(raised_exc,
self._agent._process_removed_port,
mock.sentinel.port_id)
mock_update_port_cache.assert_called_once_with(
mock.sentinel.port_id, device_bound=False)
self.assertIn(mock.sentinel.port_id, self._agent._removed_ports)
@mock.patch.object(agent_base.Layer2Agent,
'_port_unbound')
@mock.patch.object(agent_base.Layer2Agent,
'_update_port_status_cache')
def test_process_removed_port(self, mock_update_port_cache,
mock_port_unbound):
self._agent._removed_ports = set([mock.sentinel.port_id])
self._agent._process_removed_port(mock.sentinel.port_id)
mock_update_port_cache.assert_called_once_with(
mock.sentinel.port_id, device_bound=False)
mock_port_unbound.assert_called_once_with(mock.sentinel.port_id,
vnic_deleted=True)
self.assertNotIn(mock.sentinel.port_id, self._agent._removed_ports)
@mock.patch('eventlet.spawn_n')
def test_treat_devices_removed(self, mock_spawn):
mock_removed_ports = [mock.sentinel.port0, mock.sentinel.port1]
self._agent._removed_ports = set(mock_removed_ports)
self._agent._treat_devices_removed()
mock_spawn.assert_has_calls(
[mock.call(self._agent._process_removed_port, port)
for port in mock_removed_ports],
any_order=True)
def test_notify_plugin_no_updates(self):
self._agent._bound_ports = set()
self._agent._unbound_ports = set()
self._agent._notify_plugin_on_port_updates()
self.assertFalse(self._agent._plugin_rpc.update_device_list.called)
def test_notify_plugin(self):
self._agent._bound_ports = set([mock.sentinel.bound_port])
self._agent._unbound_ports = set([mock.sentinel.unbound_port])
self._agent._notify_plugin_on_port_updates()
self._agent._plugin_rpc.update_device_list.assert_called_once_with(
self._agent._context, [mock.sentinel.bound_port],
[mock.sentinel.unbound_port], self._agent._agent_id,
self._agent._host)
self.assertEqual(set([]), self._agent._bound_ports)
self.assertEqual(set([]), self._agent._unbound_ports)
@mock.patch.object(agent_base.Layer2Agent, '_treat_devices_removed')
@mock.patch.object(agent_base.Layer2Agent, '_treat_devices_added')
@mock.patch('eventlet.spawn_n')
def test_work(self, mock_spawn, mock_treat_dev_added,
mock_treat_dev_removed):
self._agent._refresh_cache = True
self._agent._added_ports = set([mock.sentinel.bound_port])
self._agent._removed_ports = set([mock.sentinel.unbound_port])
self._agent._work()
self._agent._utils.update_cache.assert_called_once_with()
self.assertFalse(self._agent._refresh_cache)
mock_spawn.assert_called_once_with(
self._agent._notify_plugin_on_port_updates)
mock_treat_dev_added.assert_called_once_with()
mock_treat_dev_removed.assert_called_once_with()
def test_port_update_not_found(self):
self._agent._utils.vnic_port_exists.return_value = False
port = {'id': mock.sentinel.port_id}
self._agent.port_update(self._agent._context, port)
def test_port_update(self):
self._agent._utils.vnic_port_exists.return_value = True
port = {'id': mock.sentinel.port_id,
'network_id': mock.sentinel.network_id,
'admin_state_up': mock.sentinel.admin_state_up}
self._agent.port_update(self._agent._context, port,
mock.sentinel.network_type,
mock.sentinel.segmentation_id,
mock.sentinel.physical_network)
@mock.patch.object(agent_base.Layer2Agent,
'_reclaim_local_network')
def test_network_delete(self, mock_reclaim_local_network):
self._agent._network_vswitch_map = {}
self._agent._network_vswitch_map[mock.sentinel.net_id] = (
mock.sentinel.vswitch)
self._agent.network_delete(mock.sentinel.context, mock.sentinel.net_id)
mock_reclaim_local_network.assert_called_once_with(
mock.sentinel.net_id)
@mock.patch.object(agent_base.Layer2Agent,
'_reclaim_local_network')
def test_network_delete_not_defined(self, mock_reclaim_local_network):
self._agent.network_delete(mock.sentinel.context, mock.sentinel.net_id)
self.assertFalse(mock_reclaim_local_network.called)

@@ -1,76 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for Windows Hyper-V QoS Driver.
"""
import mock
from neutron.services.qos import qos_consts
from hyperv.neutron.qos import qos_driver
from hyperv.tests import base
class TestQosHyperVAgentDriver(base.BaseTestCase):
@mock.patch.object(qos_driver.QosHyperVAgentDriver, '__init__',
lambda *args, **kwargs: None)
def setUp(self):
super(TestQosHyperVAgentDriver, self).setUp()
self.driver = qos_driver.QosHyperVAgentDriver()
self.driver._utils = mock.Mock()
@mock.patch.object(qos_driver, 'networkutils')
def test_initialize(self, mock_networkutils):
self.driver.initialize()
mock_networkutils.NetworkUtils.assert_called_once_with()
@mock.patch.object(qos_driver.QosHyperVAgentDriver, '_get_policy_values')
def test_create(self, mock_get_policy_values):
self.driver.create({'port_id': mock.sentinel.port_id},
mock.sentinel.qos_policy)
mock_get_policy_values.assert_called_once_with(
mock.sentinel.qos_policy)
self.driver._utils.set_port_qos_rule.assert_called_once_with(
mock.sentinel.port_id, mock_get_policy_values.return_value)
@mock.patch.object(qos_driver.QosHyperVAgentDriver, '_get_policy_values')
def test_update(self, mock_get_policy_values):
self.driver.update({'port_id': mock.sentinel.port_id},
mock.sentinel.qos_policy)
mock_get_policy_values.assert_called_once_with(
mock.sentinel.qos_policy)
self.driver._utils.set_port_qos_rule.assert_called_once_with(
mock.sentinel.port_id, mock_get_policy_values.return_value)
def test_delete(self):
self.driver.delete({'port_id': mock.sentinel.port_id})
self.driver._utils.remove_port_qos_rule.assert_called_once_with(
mock.sentinel.port_id)
def test_get_policy_values(self):
qos_rule_0 = mock.Mock(spec=['min_kbps', 'rule_type'])
qos_rule_0.rule_type = qos_consts.RULE_TYPE_MINIMUM_BANDWIDTH
qos_rule_1 = mock.Mock(spec=['max_kbps', 'max_burst_kbps',
'rule_type'])
qos_rule_1.rule_type = qos_consts.RULE_TYPE_BANDWIDTH_LIMIT
qos_policy = mock.Mock(rules=[qos_rule_0, qos_rule_1])
expected_val = dict(min_kbps=qos_rule_0.min_kbps,
max_kbps=qos_rule_1.max_kbps,
max_burst_kbps=qos_rule_1.max_burst_kbps)
policy_val = self.driver._get_policy_values(qos_policy)
self.assertEqual(expected_val, policy_val)

@@ -1,43 +0,0 @@
# Copyright 2013 Cloudbase Solutions SRL
# Copyright 2013 Pedro Navarro Perez
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from hyperv.neutron import _common_utils
from hyperv.tests import base
import mock
class TestCommonUtils(base.BaseTestCase):
@mock.patch.object(_common_utils.lockutils, 'synchronized_with_prefix')
def test_create_synchronized_decorator(self, mock_sync_with_prefix):
fake_method_side_effect = mock.Mock()
lock_prefix = 'test-'
port_synchronized = _common_utils.get_port_synchronized_decorator(
lock_prefix)
@port_synchronized
def fake_method(fake_arg, port_id):
fake_method_side_effect(fake_arg, port_id)
mock_synchronized = mock_sync_with_prefix.return_value
mock_synchronized.return_value = lambda x: x
expected_lock_name = 'test-port-lock-%s' % mock.sentinel.port_id
fake_method(fake_arg=mock.sentinel.arg, port_id=mock.sentinel.port_id)
mock_sync_with_prefix.assert_called_once_with(lock_prefix)
mock_synchronized.assert_called_once_with(expected_lock_name)
fake_method_side_effect.assert_called_once_with(
mock.sentinel.arg, mock.sentinel.port_id)

@@ -1,49 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for the networking-hyperv config module.
"""
import mock
from hyperv.neutron import config
from hyperv.tests import base
class TestConfig(base.HyperVBaseTestCase):
@mock.patch.object(config, 'ks_loading')
@mock.patch.object(config, 'CONF')
def test_register_opts(self, mock_CONF, mock_ks_loading):
config.register_opts()
all_groups = [config.HYPERV_AGENT_GROUP, config.NVGRE_GROUP,
config.NEUTRON_GROUP, config.HNV_GROUP]
mock_CONF.register_group.assert_has_calls([
mock.call(group) for group in all_groups])
all_opts = [
(config.HYPERV_AGENT_OPTS, config.HYPERV_AGENT_GROUP_NAME),
(config.NVGRE_OPTS, config.NVGRE_GROUP_NAME),
(config.NEUTRON_OPTS, config.NEUTRON_GROUP_NAME),
(config.HNV_OPTS, config.HNV_GROUP_NAME)]
mock_CONF.register_opts.assert_has_calls([
mock.call(opts, group=group) for opts, group in all_opts])
mock_ks_loading.register_session_conf_options.assert_called_once_with(
mock_CONF, config.NEUTRON_GROUP)
mock_ks_loading.register_auth_conf_options.assert_called_once_with(
mock_CONF, config.NEUTRON_GROUP)

@@ -1,65 +0,0 @@
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit Tests for Hyper-V Agent Notifier.
"""
import mock
from hyperv.neutron import constants
from hyperv.neutron import hyperv_agent_notifier
from hyperv.tests import base
class TestAgentNotifierApi(base.BaseTestCase):
def setUp(self):
super(TestAgentNotifierApi, self).setUp()
self.notifier = hyperv_agent_notifier.AgentNotifierApi(
topic=constants.AGENT_TOPIC, client=mock.MagicMock())
def test_tunnel_update(self):
expected_topic = hyperv_agent_notifier.get_topic_name(
constants.AGENT_TOPIC, constants.TUNNEL, constants.UPDATE)
self.notifier.tunnel_update(mock.sentinel.context,
mock.sentinel.tunnel_ip,
constants.TYPE_NVGRE)
self.notifier._client.prepare.assert_called_once_with(
topic=expected_topic, fanout=True)
prepared_client = self.notifier._client.prepare.return_value
prepared_client.cast.assert_called_once_with(
mock.sentinel.context, 'tunnel_update',
tunnel_ip=mock.sentinel.tunnel_ip,
tunnel_type=constants.TYPE_NVGRE)
def test_lookup_update(self):
expected_topic = hyperv_agent_notifier.get_topic_name(
constants.AGENT_TOPIC, constants.LOOKUP, constants.UPDATE)
self.notifier.lookup_update(mock.sentinel.context,
mock.sentinel.lookup_ip,
mock.sentinel.lookup_details)
self.notifier._client.prepare.assert_called_once_with(
topic=expected_topic, fanout=True)
prepared_client = self.notifier._client.prepare.return_value
prepared_client.cast.assert_called_once_with(
mock.sentinel.context, 'lookup_update',
lookup_ip=mock.sentinel.lookup_ip,
lookup_details=mock.sentinel.lookup_details)

@@ -1,70 +0,0 @@
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for the Hyper-V Mechanism Driver.
"""
import mock
from hyperv.neutron import constants
from hyperv.neutron.ml2 import mech_hyperv
from hyperv.tests import base
class TestHypervMechanismDriver(base.BaseTestCase):
def setUp(self):
super(TestHypervMechanismDriver, self).setUp()
self.mech_hyperv = mech_hyperv.HypervMechanismDriver()
def test_get_allowed_network_types(self):
agent = {'configurations': {'tunnel_types': []}}
actual_net_types = self.mech_hyperv.get_allowed_network_types(agent)
network_types = [constants.TYPE_LOCAL, constants.TYPE_FLAT,
constants.TYPE_VLAN]
self.assertEqual(network_types, actual_net_types)
def test_get_allowed_network_types_nvgre(self):
agent = {'configurations': {'tunnel_types': [constants.TYPE_NVGRE]}}
actual_net_types = self.mech_hyperv.get_allowed_network_types(agent)
network_types = [constants.TYPE_LOCAL, constants.TYPE_FLAT,
constants.TYPE_VLAN, constants.TYPE_NVGRE]
self.assertEqual(network_types, actual_net_types)
def test_get_mappings(self):
agent = {'configurations': {
'vswitch_mappings': [mock.sentinel.mapping]}}
mappings = self.mech_hyperv.get_mappings(agent)
self.assertEqual([mock.sentinel.mapping], mappings)
def test_physnet_in_mappings(self):
physnet = 'test_physnet'
match_mapping = '.*'
different_mapping = 'fake'
pattern_matched = self.mech_hyperv.physnet_in_mappings(
physnet, [match_mapping])
self.assertTrue(pattern_matched)
pattern_matched = self.mech_hyperv.physnet_in_mappings(
physnet, [different_mapping])
self.assertFalse(pattern_matched)
pattern_matched = self.mech_hyperv.physnet_in_mappings(
physnet, [different_mapping, match_mapping])
self.assertTrue(pattern_matched)
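The check `test_physnet_in_mappings` pins down is a one-liner; this sketch reconstructs it from the three assertions above (a physical network counts as mapped when any mapping pattern regex-matches its name):

```python
import re

def physnet_in_mappings(physnet, mappings):
    # True if any vswitch-mapping pattern matches the physical network.
    return any(re.match(pattern, physnet) for pattern in mappings)
```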

@@ -1,174 +0,0 @@
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for the neutron client.
"""
import mock
from hyperv.neutron import config
from hyperv.neutron import constants
from hyperv.neutron import neutron_client
from hyperv.tests import base
CONF = config.CONF
class TestNeutronClient(base.BaseTestCase):
_FAKE_CIDR = '10.0.0.0/24'
_FAKE_GATEWAY = '10.0.0.1'
_FAKE_HOST = 'fake_host'
def setUp(self):
super(TestNeutronClient, self).setUp()
self._neutron = neutron_client.NeutronAPIClient()
self._neutron._client = mock.MagicMock()
@mock.patch.object(neutron_client.clientv20, "Client")
@mock.patch.object(neutron_client, "ks_loading")
def test_init_client(self, mock_ks_loading, mock_client):
self._neutron._init_client()
self.assertEqual(mock_client.return_value, self._neutron._client)
mock_ks_loading.load_session_from_conf_options.assert_called_once_with(
CONF, config.NEUTRON_GROUP)
mock_ks_loading.load_auth_from_conf_options.assert_called_once_with(
CONF, config.NEUTRON_GROUP)
session = mock_ks_loading.load_session_from_conf_options.return_value
plugin = mock_ks_loading.load_auth_from_conf_options.return_value
mock_client.assert_called_once_with(
session=session,
auth=plugin)
def test_get_network_subnets(self):
self._neutron._client.show_network.return_value = {
'network': {
'subnets': [mock.sentinel.fake_subnet]
}
}
subnets = self._neutron.get_network_subnets(mock.sentinel.net_id)
self._neutron._client.show_network.assert_called_once_with(
mock.sentinel.net_id)
self.assertEqual([mock.sentinel.fake_subnet], subnets)
def test_get_network_subnets_exception(self):
self._neutron._client.show_network.side_effect = Exception("Fail")
subnets = self._neutron.get_network_subnets(mock.sentinel.net_id)
self.assertEqual([], subnets)
def test_get_network_subnet_cidr(self):
self._neutron._client.show_subnet.return_value = {
'subnet': {
'cidr': self._FAKE_CIDR,
'gateway_ip': self._FAKE_GATEWAY,
}
}
cidr, gw = self._neutron.get_network_subnet_cidr_and_gateway(
mock.sentinel.subnet_id)
self._neutron._client.show_subnet.assert_called_once_with(
mock.sentinel.subnet_id)
self.assertEqual(self._FAKE_CIDR, cidr)
self.assertEqual(self._FAKE_GATEWAY, gw)
def test_get_network_subnet_cidr_exception(self):
self._neutron._client.show_subnet.side_effect = Exception("Fail")
cidr, gw = self._neutron.get_network_subnet_cidr_and_gateway(
mock.sentinel.subnet_id)
self.assertIsNone(cidr)
self.assertIsNone(gw)
def test_get_port_ip_address(self):
self._neutron._client.show_port.return_value = {
'port': {
'fixed_ips': [{'ip_address': mock.sentinel.ip_addr}]
}
}
ip_addr = self._neutron.get_port_ip_address(mock.sentinel.fake_port_id)
self._neutron._client.show_port.assert_called_once_with(
mock.sentinel.fake_port_id)
self.assertEqual(mock.sentinel.ip_addr, ip_addr)
def test_get_port_ip_address_exception(self):
self._neutron._client.show_port.side_effect = Exception("Fail")
ip_addr = self._neutron.get_port_ip_address(mock.sentinel.fake_port_id)
self.assertIsNone(ip_addr)
def test_get_tunneling_agents(self):
non_tunnel_agent = {}
ignored_agent = {'configurations': {
'tunnel_types': [constants.TYPE_NVGRE]}
}
tunneling_agent = {
'configurations': {'tunnel_types': [constants.TYPE_NVGRE],
'tunneling_ip': mock.sentinel.tunneling_ip},
'host': self._FAKE_HOST
}
self._neutron._client.list_agents.return_value = {
'agents': [non_tunnel_agent, ignored_agent, tunneling_agent]
}
actual = self._neutron.get_tunneling_agents()
self.assertEqual({self._FAKE_HOST: mock.sentinel.tunneling_ip}, actual)
def test_get_tunneling_agents_exception(self):
self._neutron._client.list_agents.side_effect = Exception("Fail")
actual = self._neutron.get_tunneling_agents()
self.assertEqual({}, actual)
def test_get_network_ports(self):
self._neutron._client.list_ports.return_value = {
'ports': [mock.sentinel.port]
}
actual = self._neutron.get_network_ports(key='value')
self._neutron._client.list_ports.assert_called_once_with(key='value')
self.assertEqual([mock.sentinel.port], actual)
def test_get_network_ports_exception(self):
self._neutron._client.list_ports.side_effect = Exception("Fail")
actual = self._neutron.get_network_ports()
self.assertEqual([], actual)
def test_get_port_profile_id(self):
fake_profile_id = 'fake_profile_id'
self._neutron._client.show_port.return_value = {
'port': {
'binding:vif_details': {'port_profile_id': fake_profile_id}
}
}
actual = self._neutron.get_port_profile_id(mock.sentinel.port_id)
self.assertEqual('{%s}' % fake_profile_id, actual)
self._neutron._client.show_port.assert_called_once_with(
mock.sentinel.port_id)
def test_get_port_profile_id_failed(self):
self._neutron._client.show_port.side_effect = Exception("Fail")
actual = self._neutron.get_port_profile_id(mock.sentinel.port_id)
self.assertEqual({}, actual)
self._neutron._client.show_port.assert_called_once_with(
mock.sentinel.port_id)
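The formatting the two tests above check can be sketched as a small helper, reconstructed from the expected `'{%s}'` result (the brace-wrapped GUID style); the function name is an assumption:

```python
def format_port_profile_id(port_details):
    # Pull the port profile id out of the port's binding:vif_details
    # and wrap it in braces, as the tests above expect.
    vif_details = port_details['port']['binding:vif_details']
    return '{%s}' % vif_details['port_profile_id']
```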

@@ -1,274 +0,0 @@
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for Windows Hyper-V NVGRE driver.
"""
import mock
from hyperv.neutron import config
from hyperv.neutron import constants
from hyperv.neutron import nvgre_ops
from hyperv.tests import base
CONF = config.CONF
class TestHyperVNvgreOps(base.HyperVBaseTestCase):
FAKE_MAC_ADDR = 'fa:ke:ma:ca:dd:re:ss'
FAKE_CIDR = '10.0.0.0/24'
FAKE_VSWITCH_NAME = 'fake_vswitch'
def setUp(self):
super(TestHyperVNvgreOps, self).setUp()
self.context = 'context'
self.ops = nvgre_ops.HyperVNvgreOps([])
self.ops._vswitch_ips[mock.sentinel.network_name] = (
mock.sentinel.ip_addr)
self.ops.context = self.context
self.ops._notifier = mock.MagicMock()
self.ops._hyperv_utils = mock.MagicMock()
self.ops._nvgre_utils = mock.MagicMock()
self.ops._n_client = mock.MagicMock()
self.ops._db = mock.MagicMock()
@mock.patch.object(nvgre_ops.hyperv_agent_notifier, 'AgentNotifierApi')
def test_init_notifier(self, mock_notifier):
self.ops.init_notifier(mock.sentinel.context, mock.sentinel.rpc_client)
mock_notifier.assert_called_once_with(
constants.AGENT_TOPIC,
mock.sentinel.rpc_client)
self.assertEqual(mock_notifier.return_value, self.ops._notifier)
self.assertEqual(mock.sentinel.context, self.ops.context)
def test_init_nvgre(self):
self.ops._nvgre_utils.get_network_iface_ip.return_value = (
mock.sentinel.ip_addr, mock.sentinel.length)
self.ops._init_nvgre([mock.sentinel.physical_network])
self.assertEqual(self.ops._vswitch_ips[mock.sentinel.physical_network],
mock.sentinel.ip_addr)
self.ops._nvgre_utils.create_provider_route.assert_called_once_with(
mock.sentinel.physical_network)
self.ops._nvgre_utils.create_provider_address.assert_called_once_with(
mock.sentinel.physical_network, CONF.NVGRE.provider_vlan_id)
def test_refresh_tunneling_agents(self):
self.ops._n_client.get_tunneling_agents.return_value = {
mock.sentinel.host: mock.sentinel.host_ip
}
self.ops._refresh_tunneling_agents()
self.assertEqual(mock.sentinel.host_ip,
self.ops._tunneling_agents[mock.sentinel.host])
@mock.patch.object(nvgre_ops.HyperVNvgreOps, '_register_lookup_record')
def test_lookup_update(self, mock_register_record):
args = {'lookup_ip': mock.sentinel.lookup_ip,
'lookup_details': {
'customer_addr': mock.sentinel.customer_addr,
'mac_addr': mock.sentinel.mac_addr,
'customer_vsid': mock.sentinel.vsid}
}
self.ops.lookup_update(args)
mock_register_record.assert_called_once_with(
mock.sentinel.lookup_ip,
mock.sentinel.customer_addr,
mock.sentinel.mac_addr,
mock.sentinel.vsid)
def test_tunnel_update_nvgre(self):
self.ops.tunnel_update(
mock.sentinel.context,
mock.sentinel.tunnel_ip,
tunnel_type=constants.TYPE_NVGRE)
self.ops._notifier.tunnel_update.assert_called_once_with(
mock.sentinel.context,
CONF.NVGRE.provider_tunnel_ip,
constants.TYPE_NVGRE)
def test_tunnel_update(self):
self.ops.tunnel_update(
mock.sentinel.context,
mock.sentinel.tunnel_ip,
mock.sentinel.tunnel_type)
self.assertFalse(self.ops._notifier.tunnel_update.called)
@mock.patch.object(nvgre_ops.HyperVNvgreOps, '_register_lookup_record')
def test_lookup_update_no_details(self, mock_register_record):
self.ops.lookup_update({})
self.assertFalse(mock_register_record.called)
def test_register_lookup_record(self):
self.ops._register_lookup_record(
mock.sentinel.provider_addr, mock.sentinel.customer_addr,
mock.sentinel.mac_addr, mock.sentinel.vsid)
self.ops._nvgre_utils.create_lookup_record.assert_called_once_with(
mock.sentinel.provider_addr, mock.sentinel.customer_addr,
mock.sentinel.mac_addr, mock.sentinel.vsid)
@mock.patch.object(nvgre_ops.HyperVNvgreOps, '_register_lookup_record')
def test_bind_nvgre_port(self, mock_register_record):
self.ops._nvgre_utils.get_network_iface_ip.return_value = (
mock.sentinel.provider_addr, mock.sentinel.prefix_len)
mac_addr = self.ops._hyperv_utils.get_vnic_mac_address.return_value
customer_addr = self.ops._n_client.get_port_ip_address.return_value
self.ops.bind_nvgre_port(mock.sentinel.vsid,
mock.sentinel.network_name,
mock.sentinel.port_id)
self.ops._hyperv_utils.set_vswitch_port_vsid.assert_called_once_with(
mock.sentinel.vsid, mock.sentinel.port_id)
mock_register_record.assert_has_calls([
mock.call(mock.sentinel.provider_addr, customer_addr, mac_addr,
mock.sentinel.vsid),
mock.call(mock.sentinel.ip_addr, constants.IPV4_DEFAULT, mac_addr,
mock.sentinel.vsid)])
self.ops._notifier.lookup_update.assert_called_once_with(
self.context, mock.sentinel.provider_addr, {
'customer_addr': customer_addr,
'mac_addr': mac_addr,
'customer_vsid': mock.sentinel.vsid
})
def test_bind_nvgre_port_no_provider_addr(self):
self.ops._nvgre_utils.get_network_iface_ip = mock.MagicMock(
return_value=(None, None))
self.ops.bind_nvgre_port(mock.sentinel.vsid,
mock.sentinel.network_name,
mock.sentinel.port_id)
self.assertFalse(self.ops._hyperv_utils.set_vswitch_port_vsid.called)
@mock.patch.object(nvgre_ops.HyperVNvgreOps, 'refresh_nvgre_records')
@mock.patch.object(nvgre_ops.HyperVNvgreOps, '_create_customer_routes')
def test_bind_nvgre_network(self, mock_create_routes,
mock_refresh_records):
fake_ip = '10.10.10.10'
self.config(provider_tunnel_ip=fake_ip, group='NVGRE')
self.ops._n_client.get_network_subnets.return_value = [
mock.sentinel.subnet, mock.sentinel.subnet2]
get_cidr = self.ops._n_client.get_network_subnet_cidr_and_gateway
get_cidr.return_value = (self.FAKE_CIDR, mock.sentinel.gateway)
self.ops.bind_nvgre_network(
mock.sentinel.vsid, mock.sentinel.net_uuid,
self.FAKE_VSWITCH_NAME)
self.assertEqual(mock.sentinel.vsid,
self.ops._network_vsids[mock.sentinel.net_uuid])
self.ops._n_client.get_network_subnets.assert_called_once_with(
mock.sentinel.net_uuid)
get_cidr.assert_called_once_with(mock.sentinel.subnet)
mock_create_routes.assert_called_once_with(
mock.sentinel.vsid, self.FAKE_CIDR,
mock.sentinel.gateway, mock.ANY)
mock_refresh_records.assert_called_once_with(
network_id=mock.sentinel.net_uuid)
self.ops._notifier.tunnel_update.assert_called_once_with(
self.context, fake_ip, mock.sentinel.vsid)
def _check_create_customer_routes(self, gateway=None):
self.ops._create_customer_routes(
mock.sentinel.vsid, mock.sentinel.cidr,
gateway, mock.sentinel.rdid)
self.ops._nvgre_utils.clear_customer_routes.assert_called_once_with(
mock.sentinel.vsid)
self.ops._nvgre_utils.create_customer_route.assert_called_once_with(
mock.sentinel.vsid, mock.sentinel.cidr, constants.IPV4_DEFAULT,
mock.sentinel.rdid)
def test_create_customer_routes_no_gw(self):
self._check_create_customer_routes()
def test_create_customer_routes_bad_gw(self):
gateway = '10.0.0.1'
self._check_create_customer_routes(gateway=gateway)
def test_create_customer_routes(self):
gateway = '10.0.0.2'
self.ops._create_customer_routes(
mock.sentinel.vsid, mock.sentinel.cidr,
gateway, mock.sentinel.rdid)
metadata_addr = '%s/32' % CONF.AGENT.neutron_metadata_address
self.ops._nvgre_utils.create_customer_route.assert_has_calls([
mock.call(mock.sentinel.vsid, mock.sentinel.cidr,
constants.IPV4_DEFAULT, mock.sentinel.rdid),
mock.call(mock.sentinel.vsid, '%s/0' % constants.IPV4_DEFAULT,
gateway, mock.ANY),
mock.call(mock.sentinel.vsid, metadata_addr,
gateway, mock.ANY)], any_order=True)
@mock.patch.object(nvgre_ops.HyperVNvgreOps, '_register_lookup_record')
def test_refresh_nvgre_records(self, mock_register_record):
self.ops._nvgre_ports.append(mock.sentinel.processed_port_id)
self.ops._tunneling_agents[mock.sentinel.host_id] = (
mock.sentinel.agent_ip)
self.ops._network_vsids[mock.sentinel.net_id] = (
mock.sentinel.vsid)
processed_port = {'id': mock.sentinel.processed_port_id}
no_host_port = {'id': mock.sentinel.port_no_host_id,
'binding:host_id': mock.sentinel.odd_host_id}
other_net_id_port = {'id': mock.sentinel.port_other_net_id,
'binding:host_id': mock.sentinel.host_id,
'network_id': mock.sentinel.odd_net_id}
port = {'id': mock.sentinel.port_id,
'binding:host_id': mock.sentinel.host_id,
'network_id': mock.sentinel.net_id,
'mac_address': self.FAKE_MAC_ADDR,
'fixed_ips': [{'ip_address': mock.sentinel.customer_addr}]
}
self.ops._n_client.get_network_ports.return_value = [
processed_port, no_host_port, other_net_id_port, port]
self.ops.refresh_nvgre_records()
expected_mac = self.FAKE_MAC_ADDR.replace(':', '')
mock_register_record.assert_has_calls([
mock.call(mock.sentinel.agent_ip, mock.sentinel.customer_addr,
expected_mac, mock.sentinel.vsid),
# mock.call(mock.sentinel.agent_ip, constants.METADATA_ADDR,
# expected_mac, mock.sentinel.vsid)
])
self.assertIn(mock.sentinel.port_id, self.ops._nvgre_ports)
@mock.patch.object(nvgre_ops.HyperVNvgreOps, '_register_lookup_record')
def test_refresh_nvgre_records_exception(self, mock_register_record):
self.ops._tunneling_agents[mock.sentinel.host_id] = (
mock.sentinel.agent_ip)
self.ops._network_vsids[mock.sentinel.net_id] = (mock.sentinel.vsid)
port = mock.MagicMock()
self.ops._n_client.get_network_ports.return_value = [port]
mock_register_record.side_effect = TypeError
self.ops.refresh_nvgre_records()
self.assertNotIn(mock.sentinel.port_id, self.ops._nvgre_ports)


@ -1,654 +0,0 @@
# Copyright 2014 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for the Hyper-V Security Groups Driver.
"""
import mock
from os_win import exceptions
from hyperv.neutron import security_groups_driver as sg_driver
from hyperv.tests import base
class SecurityGroupRuleTestHelper(base.HyperVBaseTestCase):
_FAKE_DIRECTION = 'egress'
_FAKE_ETHERTYPE = 'IPv4'
_FAKE_ETHERTYPE_IPV6 = 'IPv6'
_FAKE_PROTOCOL = 'tcp'
_FAKE_ACTION = sg_driver.ACL_PROP_MAP['action']['allow']
_FAKE_DEST_IP_PREFIX = '10.0.0.0/24'
_FAKE_SOURCE_IP_PREFIX = '10.0.1.0/24'
_FAKE_MEMBER_IP = '10.0.0.1'
_FAKE_IPV6_LEN128_IP = 'fddd:cafd:e664:0:f816:3eff:fe8d:59d2/128'
_FAKE_SG_ID = 'fake_sg_id'
_FAKE_PORT_MIN = 9001
_FAKE_PORT_MAX = 9011
def _create_security_rule(self, **rule_updates):
rule = {
'direction': self._FAKE_DIRECTION,
'ethertype': self._FAKE_ETHERTYPE,
'protocol': self._FAKE_PROTOCOL,
'dest_ip_prefix': self._FAKE_DEST_IP_PREFIX,
'source_ip_prefix': self._FAKE_SOURCE_IP_PREFIX,
'port_range_min': self._FAKE_PORT_MIN,
'port_range_max': self._FAKE_PORT_MAX,
'security_group_id': self._FAKE_SG_ID
}
rule.update(rule_updates)
return rule
    @classmethod
    def _acl(cls, key1, key2):
return sg_driver.ACL_PROP_MAP[key1][key2]
class TestHyperVSecurityGroupsDriver(SecurityGroupRuleTestHelper):
_FAKE_DEVICE = 'fake_device'
_FAKE_ID = 'fake_id'
_FAKE_PARAM_NAME = 'fake_param_name'
_FAKE_PARAM_VALUE = 'fake_param_value'
def setUp(self):
super(TestHyperVSecurityGroupsDriver, self).setUp()
self._driver = sg_driver.HyperVSecurityGroupsDriver()
self._driver._utils = mock.MagicMock()
self._driver._sg_gen = mock.MagicMock()
def test__select_sg_rules_for_port(self):
mock_port = self._get_port()
mock_port['fixed_ips'] = [mock.MagicMock()]
mock_port['security_groups'] = [self._FAKE_SG_ID]
fake_sg_template = self._create_security_rule()
fake_sg_template['direction'] = 'ingress'
self._driver._sg_rule_templates[self._FAKE_SG_ID] = [fake_sg_template]
# Test without remote_group_id
rule_list = self._driver._select_sg_rules_for_port(mock_port,
'ingress')
self.assertNotIn('security_group_id', rule_list[0])
# Test with remote_group_id
fake_sg_template['remote_group_id'] = self._FAKE_SG_ID
self._driver._sg_members[self._FAKE_SG_ID] = {self._FAKE_ETHERTYPE:
[self._FAKE_MEMBER_IP]}
rule_list = self._driver._select_sg_rules_for_port(mock_port,
'ingress')
self.assertEqual('10.0.0.1/32', rule_list[0]['source_ip_prefix'])
self.assertNotIn('security_group_id', rule_list[0])
self.assertNotIn('remote_group_id', rule_list[0])
# Test for fixed 'ip' existing in 'sg_members'
self._driver._sg_members[self._FAKE_SG_ID][self._FAKE_ETHERTYPE] = [
'10.0.0.2']
mock_port['fixed_ips'] = ['10.0.0.2']
rule_list = self._driver._select_sg_rules_for_port(mock_port,
'ingress')
self.assertEqual([], rule_list)
# Test for 'egress' direction
fake_sg_template['direction'] = 'egress'
fix_ip = [self._FAKE_MEMBER_IP, '10.0.0.2']
self._driver._sg_members[self._FAKE_SG_ID][self._FAKE_ETHERTYPE] = (
fix_ip)
rule_list = self._driver._select_sg_rules_for_port(mock_port,
'egress')
self.assertEqual('10.0.0.1/32', rule_list[0]['dest_ip_prefix'])
# Test for rules with a different direction
rule_list = self._driver._select_sg_rules_for_port(mock_port,
'ingress')
self.assertEqual([], rule_list)
def test_update_security_group_rules(self):
mock_rule = [self._create_security_rule()]
self._driver.update_security_group_rules(self._FAKE_ID, mock_rule)
self.assertEqual(mock_rule,
self._driver._sg_rule_templates[self._FAKE_ID])
def test_update_security_group_members(self):
mock_member = ['10.0.0.1/32']
self._driver.update_security_group_members(self._FAKE_ID, mock_member)
self.assertEqual(mock_member, self._driver._sg_members[self._FAKE_ID])
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_select_sg_rules_for_port')
def test__generate_rules(self, mock_select_sg_rules):
mock_rule = [self._create_security_rule()]
mock_port = self._get_port()
mock_select_sg_rules.return_value = mock_rule
ports = self._driver._generate_rules([mock_port])
# Expected result
mock_rule.append(mock_rule[0])
expected = {self._FAKE_ID: mock_rule}
self.assertEqual(expected, ports)
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_generate_rules')
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_create_port_rules')
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_add_sg_port_rules')
def test_prepare_port_filter(self, mock_add_rules, mock_create_rules,
mock_gen_rules):
mock_port = self._get_port()
mock_create_default = self._driver._sg_gen.create_default_sg_rules
fake_rule = self._create_security_rule()
self._driver._get_rule_remote_address = mock.MagicMock(
return_value=self._FAKE_SOURCE_IP_PREFIX)
mock_gen_rules.return_value = {mock_port['id']: [fake_rule]}
self._driver.prepare_port_filter(mock_port)
self.assertEqual(mock_port,
self._driver._security_ports[self._FAKE_DEVICE])
mock_gen_rules.assert_called_with([self._driver._security_ports
[self._FAKE_DEVICE]])
mock_add_rules.assert_called_once_with(
mock_port, mock_create_default.return_value)
self._driver._create_port_rules.assert_called_with(
mock_port, [fake_rule])
def test_prepare_port_filter_security_disabled(self):
mock_port = self._get_port()
mock_port.pop('port_security_enabled')
self._driver.prepare_port_filter(mock_port)
self.assertNotIn(mock_port['device'], self._driver._security_ports)
self.assertNotIn(mock_port['id'], self._driver._sec_group_rules)
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_generate_rules')
def test_update_port_filter(self, mock_gen_rules):
mock_port = self._get_port()
new_mock_port = self._get_port()
new_mock_port['id'] += '2'
new_mock_port['security_group_rules'][0]['ethertype'] += "2"
fake_rule_new = self._create_security_rule()
self._driver._get_rule_remote_address = mock.MagicMock(
return_value=self._FAKE_SOURCE_IP_PREFIX)
mock_gen_rules.return_value = {new_mock_port['id']: [fake_rule_new]}
self._driver._security_ports[mock_port['device']] = mock_port
self._driver._sec_group_rules[new_mock_port['id']] = []
self._driver._create_port_rules = mock.MagicMock()
self._driver._remove_port_rules = mock.MagicMock()
self._driver.update_port_filter(new_mock_port)
self._driver._remove_port_rules.assert_called_once_with(
mock_port, mock_port['security_group_rules'])
self._driver._create_port_rules.assert_called_once_with(
new_mock_port, [new_mock_port['security_group_rules'][0],
fake_rule_new])
self.assertEqual(new_mock_port,
self._driver._security_ports[new_mock_port['device']])
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'prepare_port_filter')
def test_update_port_filter_new_port(self, mock_method):
mock_port = self._get_port()
new_mock_port = self._get_port()
new_mock_port['id'] += '2'
new_mock_port['device'] += '2'
new_mock_port['security_group_rules'][0]['ethertype'] += "2"
self._driver._security_ports[mock_port['device']] = mock_port
self._driver.update_port_filter(new_mock_port)
self.assertNotIn(new_mock_port['device'], self._driver._security_ports)
mock_method.assert_called_once_with(new_mock_port)
def test_update_port_filter_security_disabled(self):
mock_port = self._get_port()
mock_port['port_security_enabled'] = False
self._driver.update_port_filter(mock_port)
self.assertFalse(self._driver._utils.remove_all_security_rules.called)
self.assertNotIn(mock_port['device'], self._driver._security_ports)
self.assertNotIn(mock_port['id'], self._driver._sec_group_rules)
def test_update_port_filter_security_disabled_existing_rules(self):
mock_port = self._get_port()
mock_port.pop('port_security_enabled')
self._driver._sec_group_rules[mock_port['id']] = mock.ANY
self._driver._security_ports[mock_port['device']] = mock.sentinel.port
self._driver.update_port_filter(mock_port)
self._driver._utils.remove_all_security_rules.assert_called_once_with(
mock_port['id'])
self.assertNotIn(mock_port['device'], self._driver._security_ports)
self.assertNotIn(mock_port['id'], self._driver._sec_group_rules)
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_generate_rules')
def test_update_port_filter_existing_wildcard_rules(self, mock_gen_rules):
mock_port = self._get_port()
new_mock_port = self._get_port()
new_mock_port['id'] += '2'
new_mock_port['security_group_rules'][0]['ethertype'] += "2"
fake_rule_new = self._create_security_rule(direction='egress',
protocol='udp')
fake_wildcard_rule_new = self._create_security_rule(protocol='ANY',
direction='egress')
fake_expanded_rules = [
self._create_security_rule(direction='egress', protocol='tcp'),
self._create_security_rule(direction='egress', protocol='udp'),
self._create_security_rule(direction='egress', protocol='icmp')]
self._driver._sg_gen.expand_wildcard_rules.return_value = (
fake_expanded_rules)
mock_gen_rules.return_value = {
new_mock_port['id']: [fake_rule_new, fake_wildcard_rule_new]}
self._driver._security_ports[mock_port['device']] = mock_port
self._driver._sec_group_rules[new_mock_port['id']] = [
self._create_security_rule(direction='egress', protocol='icmp')]
filtered_new_rules = [new_mock_port['security_group_rules'][0],
fake_wildcard_rule_new]
filtered_remove_rules = mock_port['security_group_rules']
self._driver._create_port_rules = mock.MagicMock()
self._driver._remove_port_rules = mock.MagicMock()
self._driver.update_port_filter(new_mock_port)
self._driver._sg_gen.expand_wildcard_rules.assert_called_once_with(
[fake_rule_new, fake_wildcard_rule_new])
self._driver._remove_port_rules.assert_called_once_with(
mock_port, filtered_remove_rules)
self._driver._create_port_rules.assert_called_once_with(
new_mock_port, filtered_new_rules)
self.assertEqual(new_mock_port,
self._driver._security_ports[new_mock_port['device']])
def test_remove_port_filter(self):
mock_port = self._get_port()
mock_rule = mock.MagicMock()
self._driver._sec_group_rules[self._FAKE_ID] = [mock_rule]
self._driver._security_ports[mock_port['device']] = mock_port
self._driver.remove_port_filter(mock_port)
self.assertNotIn(mock_port['device'], self._driver._security_ports)
self.assertNotIn(mock_port['id'], self._driver._sec_group_rules)
        self._driver._utils.clear_port_sg_acls_cache.assert_called_once_with(
            mock_port['id'])
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_add_sg_port_rules')
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_remove_sg_port_rules')
def test_create_port_rules(self, mock_remove, mock_add):
mock_port = self._get_port()
mock_rule = mock.MagicMock()
self._driver._sec_group_rules[self._FAKE_ID] = [mock_rule]
self._driver._sg_gen.create_security_group_rules.return_value = [
mock_rule]
self._driver._sg_gen.compute_new_rules_add.return_value = (
[mock_rule, mock_rule], [mock_rule, mock_rule])
self._driver._create_port_rules(mock_port, [mock_rule])
self._driver._sg_gen.compute_new_rules_add.assert_called_once_with(
[mock_rule], [mock_rule])
mock_remove.assert_called_once_with(mock_port, [mock_rule])
mock_add.assert_called_once_with(mock_port, [mock_rule])
@mock.patch.object(sg_driver.HyperVSecurityGroupsDriver,
'_remove_sg_port_rules')
def test_remove_port_rules(self, mock_remove):
mock_port = self._get_port()
mock_rule = mock.MagicMock()
self._driver._sec_group_rules[self._FAKE_ID] = [mock_rule]
self._driver._sg_gen.create_security_group_rules.return_value = [
mock_rule]
self._driver._remove_port_rules(mock_port, [mock_rule])
mock_remove.assert_called_once_with(mock_port, [mock_rule])
def test_add_sg_port_rules_exception(self):
mock_port = self._get_port()
mock_rule = mock.MagicMock()
self._driver._sec_group_rules[self._FAKE_ID] = []
self._driver._utils.create_security_rules.side_effect = (
exceptions.HyperVException(msg='Generated Exception for testing.'))
self.assertRaises(exceptions.HyperVException,
self._driver._add_sg_port_rules,
mock_port, [mock_rule])
self.assertNotIn(mock_rule,
self._driver._sec_group_rules[self._FAKE_ID])
def test_add_sg_port_rules_port_not_found(self):
mock_port = self._get_port()
self._driver._sec_group_rules[self._FAKE_ID] = []
self._driver._security_ports[self._FAKE_DEVICE] = mock.sentinel.port
self._driver._utils.create_security_rules.side_effect = (
exceptions.NotFound(resource='port_id'))
self.assertRaises(exceptions.NotFound,
self._driver._add_sg_port_rules,
mock_port, [mock.sentinel.rule])
self.assertNotIn(self._FAKE_ID, self._driver._sec_group_rules)
self.assertNotIn(self._FAKE_DEVICE, self._driver._security_ports)
def test_add_sg_port_rules(self):
mock_port = self._get_port()
mock_rule = mock.MagicMock()
self._driver._sec_group_rules[self._FAKE_ID] = []
self._driver._add_sg_port_rules(mock_port, [mock_rule])
self._driver._utils.create_security_rules.assert_called_once_with(
self._FAKE_ID, [mock_rule])
self.assertIn(mock_rule, self._driver._sec_group_rules[self._FAKE_ID])
def test_add_sg_port_rules_empty(self):
mock_port = self._get_port()
self._driver._add_sg_port_rules(mock_port, [])
self.assertFalse(self._driver._utils.create_security_rules.called)
def test_remove_sg_port_rules_exception(self):
mock_port = self._get_port()
mock_rule = mock.MagicMock()
self._driver._sec_group_rules[self._FAKE_ID] = [mock_rule]
self._driver._utils.remove_security_rules.side_effect = (
exceptions.HyperVException(msg='Generated Exception for testing.'))
self.assertRaises(exceptions.HyperVException,
self._driver._remove_sg_port_rules,
mock_port, [mock_rule])
self.assertIn(mock_rule, self._driver._sec_group_rules[self._FAKE_ID])
def test_remove_sg_port_rules_port_not_found(self):
mock_port = self._get_port()
self._driver._sec_group_rules[self._FAKE_ID] = []
self._driver._security_ports[self._FAKE_DEVICE] = mock.sentinel.port
self._driver._utils.remove_security_rules.side_effect = (
exceptions.NotFound(resource='port_id'))
self.assertRaises(exceptions.NotFound,
self._driver._remove_sg_port_rules,
mock_port, [mock.sentinel.rule])
self.assertNotIn(self._FAKE_ID, self._driver._sec_group_rules)
self.assertNotIn(self._FAKE_DEVICE, self._driver._security_ports)
def test_remove_sg_port_rules(self):
mock_port = self._get_port()
mock_rule = mock.MagicMock()
self._driver._sec_group_rules[self._FAKE_ID] = [mock_rule]
self._driver._remove_sg_port_rules(
mock_port, [mock_rule, mock.sentinel.other_rule])
self._driver._utils.remove_security_rules.assert_called_once_with(
self._FAKE_ID, [mock_rule, mock.sentinel.other_rule])
self.assertNotIn(mock_rule,
self._driver._sec_group_rules[self._FAKE_ID])
def test_remove_sg_port_rules_empty(self):
mock_port = self._get_port()
self._driver._remove_sg_port_rules(mock_port, [])
self.assertFalse(self._driver._utils.remove_security_rules.called)
def _get_port(self):
return {
'device': self._FAKE_DEVICE,
'id': self._FAKE_ID,
'security_group_rules': [mock.MagicMock()],
'port_security_enabled': True
}
class SecurityGroupRuleR2BaseTestCase(SecurityGroupRuleTestHelper):
def _create_sg_rule(self, protocol=None, action=None, direction='egress'):
protocol = protocol or self._FAKE_PROTOCOL
action = action or self._FAKE_ACTION
        remote_addr = (self._FAKE_DEST_IP_PREFIX if direction == 'egress'
                       else self._FAKE_SOURCE_IP_PREFIX)
return sg_driver.SecurityGroupRuleR2(
self._acl('direction', self._FAKE_DIRECTION),
'%s-%s' % (self._FAKE_PORT_MIN, self._FAKE_PORT_MAX),
protocol, remote_addr, action)
class SecurityGroupRuleGeneratorTestCase(SecurityGroupRuleR2BaseTestCase):
def setUp(self):
super(SecurityGroupRuleGeneratorTestCase, self).setUp()
self.sg_gen = sg_driver.SecurityGroupRuleGenerator()
@mock.patch.object(sg_driver.SecurityGroupRuleGenerator,
'create_security_group_rule')
def test_create_security_group_rules(self, mock_create_sec_group_rule):
sg_rule = self._create_sg_rule()
mock_create_sec_group_rule.return_value = [sg_rule]
expected = [sg_rule] * 2
rules = [self._create_security_rule()] * 2
actual = self.sg_gen.create_security_group_rules(rules)
self.assertEqual(expected, actual)
def test_convert_any_address_to_same_ingress(self):
rule = self._create_security_rule()
rule['direction'] = 'ingress'
actual = self.sg_gen._get_rule_remote_address(rule)
self.assertEqual(self._FAKE_SOURCE_IP_PREFIX, actual)
def test_convert_any_address_to_same_egress(self):
rule = self._create_security_rule()
rule['direction'] += '2'
actual = self.sg_gen._get_rule_remote_address(rule)
self.assertEqual(self._FAKE_DEST_IP_PREFIX, actual)
def test_convert_any_address_to_ipv4(self):
rule = self._create_security_rule()
del rule['dest_ip_prefix']
actual = self.sg_gen._get_rule_remote_address(rule)
self.assertEqual(self._acl('address_default', 'IPv4'), actual)
def test_convert_any_address_to_ipv6(self):
rule = self._create_security_rule()
del rule['dest_ip_prefix']
rule['ethertype'] = self._FAKE_ETHERTYPE_IPV6
actual = self.sg_gen._get_rule_remote_address(rule)
self.assertEqual(self._acl('address_default', 'IPv6'), actual)
class SecurityGroupRuleGeneratorR2TestCase(SecurityGroupRuleR2BaseTestCase):
def setUp(self):
super(SecurityGroupRuleGeneratorR2TestCase, self).setUp()
self.sg_gen = sg_driver.SecurityGroupRuleGeneratorR2()
def test_create_security_group_rule(self):
expected = [self._create_sg_rule()]
rule = self._create_security_rule()
actual = self.sg_gen.create_security_group_rule(rule)
self.assertEqual(expected, actual)
def test_create_security_group_rule_len128(self):
expected = [self._create_sg_rule()]
expected[0].RemoteIPAddress = self._FAKE_IPV6_LEN128_IP.split(
'/128', 1)[0]
rule = self._create_security_rule()
rule['dest_ip_prefix'] = self._FAKE_IPV6_LEN128_IP
actual = self.sg_gen.create_security_group_rule(rule)
self.assertEqual(expected, actual)
def test_create_security_group_rule_any(self):
sg_rule1 = self._create_sg_rule(self._acl('protocol', 'tcp'))
sg_rule2 = self._create_sg_rule(self._acl('protocol', 'udp'))
sg_rule3 = self._create_sg_rule(self._acl('protocol', 'icmp'))
sg_rule4 = self._create_sg_rule(self._acl('protocol', 'ipv6-icmp'))
rule = self._create_security_rule()
rule['protocol'] = sg_driver.ACL_PROP_MAP["default"]
actual = self.sg_gen.create_security_group_rule(rule)
expected = [sg_rule1, sg_rule2, sg_rule3, sg_rule4]
self.assertEqual(sorted(expected), sorted(actual))
def test_create_default_sg_rules(self):
actual = self.sg_gen.create_default_sg_rules()
self.assertEqual(16, len(actual))
def test_compute_new_rules_add(self):
new_rule = self._create_sg_rule()
old_rule = self._create_sg_rule()
old_rule.Direction = mock.sentinel.FAKE_DIRECTION
add_rules, remove_rules = self.sg_gen.compute_new_rules_add(
[old_rule], [new_rule, old_rule])
self.assertEqual([new_rule], add_rules)
def test_expand_wildcard_rules(self):
egress_wildcard_rule = self._create_security_rule(
protocol='ANY',
direction='egress')
ingress_wildcard_rule = self._create_security_rule(
protocol='ANY',
direction='ingress')
normal_rule = self._create_security_rule()
rules = [egress_wildcard_rule, ingress_wildcard_rule, normal_rule]
actual_expanded_rules = self.sg_gen.expand_wildcard_rules(rules)
expanded_rules = []
for proto in sg_driver.ACL_PROP_MAP['protocol'].keys():
expanded_rules.extend(
[self._create_security_rule(protocol=proto,
direction='egress'),
self._create_security_rule(protocol=proto,
direction='ingress')])
diff_expanded_rules = [r for r in expanded_rules
if r not in actual_expanded_rules]
self.assertEqual([], diff_expanded_rules)
def test_expand_no_wildcard_rules(self):
normal_rule = self._create_security_rule(direction='egress')
another_normal_rule = self._create_security_rule(direction='ingress')
actual_expanded_rules = self.sg_gen.expand_wildcard_rules(
[normal_rule, another_normal_rule])
self.assertEqual([], actual_expanded_rules)
def test_get_rule_port_range(self):
rule = self._create_security_rule()
expected = '%s-%s' % (self._FAKE_PORT_MIN, self._FAKE_PORT_MAX)
actual = self.sg_gen._get_rule_port_range(rule)
self.assertEqual(expected, actual)
def test_get_rule_port_range_default(self):
rule = self._create_security_rule()
del rule['port_range_min']
expected = sg_driver.ACL_PROP_MAP['default']
actual = self.sg_gen._get_rule_port_range(rule)
self.assertEqual(expected, actual)
def test_get_rule_protocol_icmp_ipv6(self):
self._check_get_rule_protocol(
expected=self._acl('protocol', 'ipv6-icmp'),
protocol='icmp',
ethertype='IPv6')
def test_get_rule_protocol_icmp(self):
self._check_get_rule_protocol(expected=self._acl('protocol', 'icmp'),
protocol='icmp')
def test_get_rule_protocol_no_icmp(self):
self._check_get_rule_protocol(expected='tcp',
protocol='tcp')
def _check_get_rule_protocol(self, expected, **rule_updates):
rule = self._create_security_rule(**rule_updates)
actual = self.sg_gen._get_rule_protocol(rule)
self.assertEqual(expected, actual)
class SecurityGroupRuleR2TestCase(SecurityGroupRuleR2BaseTestCase):
def test_sg_rule_to_dict(self):
expected = {'Direction': self._acl('direction', self._FAKE_DIRECTION),
'Action': self._FAKE_ACTION,
'Protocol': self._FAKE_PROTOCOL,
'LocalPort': '%s-%s' % (self._FAKE_PORT_MIN,
self._FAKE_PORT_MAX),
'RemoteIPAddress': self._FAKE_DEST_IP_PREFIX,
'Stateful': True,
'IdleSessionTimeout': 0}
sg_rule = self._create_sg_rule()
self.assertEqual(expected, sg_rule.to_dict())
def test_localport(self):
sg_rule = self._create_sg_rule()
expected = '%s-%s' % (self._FAKE_PORT_MIN, self._FAKE_PORT_MAX)
self.assertEqual(expected, sg_rule.LocalPort)
def test_localport_icmp(self):
sg_rule = self._create_sg_rule(self._acl('protocol', 'icmp'))
self.assertEqual('', sg_rule.LocalPort)
def test_stateful_icmp(self):
sg_rule = self._create_sg_rule(self._acl('protocol', 'icmp'))
self.assertFalse(sg_rule.Stateful)
def test_stateful_ipv6_icmp(self):
sg_rule = self._create_sg_rule(self._acl('protocol', 'ipv6-icmp'))
self.assertFalse(sg_rule.Stateful)
def test_stateful_deny(self):
sg_rule = self._create_sg_rule(action=self._acl('action', 'deny'))
self.assertFalse(sg_rule.Stateful)
def test_stateful_true(self):
sg_rule = self._create_sg_rule()
self.assertTrue(sg_rule.Stateful)
def test_rule_uniqueness(self):
sg_rule = self._create_sg_rule()
sg_rule2 = self._create_sg_rule(self._acl('protocol', 'icmp'))
self.assertEqual([sg_rule], list(set([sg_rule] * 2)))
self.assertEqual(sorted([sg_rule, sg_rule2]),
sorted(list(set([sg_rule, sg_rule2]))))


@ -1,153 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit tests for the Hyper-V Trunk Driver.
"""
import mock
from neutron.api.rpc.callbacks import events
from neutron.api.rpc.handlers import resources_rpc
from neutron.services.trunk import constants as t_const
from os_win import constants as os_win_const
import oslo_messaging
import testtools
from hyperv.neutron import trunk_driver
from hyperv.tests import base
class TestHyperVTrunkDriver(base.HyperVBaseTestCase):
@mock.patch.object(trunk_driver.trunk_rpc, 'TrunkStub',
lambda *args, **kwargs: None)
@mock.patch.object(trunk_driver.trunk_rpc.TrunkSkeleton, '__init__',
lambda *args, **kwargs: None)
def setUp(self):
super(TestHyperVTrunkDriver, self).setUp()
self.trunk_driver = trunk_driver.HyperVTrunkDriver(
mock.sentinel.context)
self.trunk_driver._utils = mock.MagicMock()
self.trunk_driver._trunk_rpc = mock.MagicMock()
def test_handle_trunks_deleted(self):
mock_trunk = mock.MagicMock()
self.trunk_driver._trunks[mock_trunk.id] = mock_trunk
self.trunk_driver.handle_trunks([mock_trunk], events.DELETED)
self.assertNotIn(mock_trunk.id, self.trunk_driver._trunks)
@mock.patch.object(trunk_driver.HyperVTrunkDriver, '_setup_trunk')
def test_handle_trunks_created(self, mock_setup_trunk):
sub_ports = []
mock_trunk = mock.MagicMock(sub_ports=sub_ports)
self.trunk_driver.handle_trunks([mock_trunk], events.CREATED)
self.assertEqual(mock_trunk, self.trunk_driver._trunks[mock_trunk.id])
        mock_setup_trunk.assert_called_once_with(mock_trunk)

    @mock.patch.object(trunk_driver.HyperVTrunkDriver, '_set_port_vlan')
    @mock.patch.object(trunk_driver.HyperVTrunkDriver, '_fetch_trunk')
    def test_bind_vlan_port_not_trunk(self, mock_fetch_trunk, mock_set_vlan):
        mock_fetch_trunk.return_value = None

        self.trunk_driver.bind_vlan_port(mock.sentinel.port_id,
                                         mock.sentinel.segmentation_id)

        mock_fetch_trunk.assert_called_once_with(mock.sentinel.port_id)
        mock_set_vlan.assert_called_once_with(mock.sentinel.port_id,
                                              mock.sentinel.segmentation_id)

    @mock.patch.object(trunk_driver.HyperVTrunkDriver, '_setup_trunk')
    @mock.patch.object(trunk_driver.HyperVTrunkDriver, '_fetch_trunk')
    def test_bind_vlan_port(self, mock_fetch_trunk, mock_setup_trunk):
        self.trunk_driver.bind_vlan_port(mock.sentinel.port_id,
                                         mock.sentinel.segmentation_id)

        mock_fetch_trunk.assert_called_once_with(mock.sentinel.port_id)
        mock_setup_trunk.assert_called_once_with(mock_fetch_trunk.return_value,
                                                 mock.sentinel.segmentation_id)

    def test_fetch_trunk(self):
        mock_trunk = (
            self.trunk_driver._trunk_rpc.get_trunk_details.return_value)

        trunk = self.trunk_driver._fetch_trunk(mock.sentinel.port_id,
                                               mock.sentinel.context)

        self.assertEqual(mock_trunk, trunk)
        self.assertEqual(mock_trunk, self.trunk_driver._trunks[mock_trunk.id])
        self.trunk_driver._trunk_rpc.get_trunk_details.assert_called_once_with(
            mock.sentinel.context, mock.sentinel.port_id)

    def test_fetch_trunk_resource_not_found(self):
        self.trunk_driver._trunk_rpc.get_trunk_details.side_effect = (
            resources_rpc.ResourceNotFound)

        trunk = self.trunk_driver._fetch_trunk(mock.sentinel.port_id)

        self.assertIsNone(trunk)

    def test_fetch_trunk_resource_remote_error(self):
        self.trunk_driver._trunk_rpc.get_trunk_details.side_effect = (
            oslo_messaging.RemoteError('expected CallbackNotFound'))

        trunk = self.trunk_driver._fetch_trunk(mock.sentinel.port_id)

        self.assertIsNone(trunk)

    def test_fetch_trunk_resource_remote_error_reraised(self):
        self.trunk_driver._trunk_rpc.get_trunk_details.side_effect = (
            oslo_messaging.RemoteError)

        self.assertRaises(oslo_messaging.RemoteError,
                          self.trunk_driver._fetch_trunk,
                          mock.sentinel.port_id)

    @mock.patch.object(trunk_driver.HyperVTrunkDriver, '_set_port_vlan')
    def test_setup_trunk(self, mock_set_vlan):
        mock_subport = mock.MagicMock()
        mock_trunk = mock.MagicMock(sub_ports=[mock_subport])
        trunk_rpc = self.trunk_driver._trunk_rpc
        trunk_rpc.update_trunk_status.side_effect = [
            testtools.ExpectedException, None]

        self.trunk_driver._setup_trunk(mock_trunk, mock.sentinel.vlan_id)

        trunk_rpc.update_subport_bindings.assert_called_once_with(
            self.trunk_driver._context, [mock_subport])
        mock_set_vlan.assert_called_once_with(
            mock_trunk.port_id, mock.sentinel.vlan_id,
            [mock_subport.segmentation_id])
        # NOTE: the original used `mock_set_vlan.has_calls(...)`, which is a
        # no-op attribute lookup on a mock; the intended assertion is on the
        # trunk status updates (ACTIVE attempted first, then DEGRADED).
        trunk_rpc.update_trunk_status.assert_has_calls([
            mock.call(self.trunk_driver._context, mock_trunk.id, status)
            for status in [t_const.ACTIVE_STATUS, t_const.DEGRADED_STATUS]])

    def _check_set_port_vlan(self, vlan_trunk, operation_mode):
        self.trunk_driver._set_port_vlan(mock.sentinel.port_id,
                                         mock.sentinel.vlan_id,
                                         vlan_trunk)

        # NOTE: the original called the mock instead of asserting on it.
        set_vlan_id = self.trunk_driver._utils.set_vswitch_port_vlan_id
        set_vlan_id.assert_called_once_with(
            mock.sentinel.vlan_id, mock.sentinel.port_id,
            operation_mode=operation_mode,
            vlan_trunk=vlan_trunk)

    def test_set_port_vlan_trunk_mode(self):
        self._check_set_port_vlan(mock.sentinel.vlan_trunk,
                                  os_win_const.VLAN_MODE_TRUNK)

    def test_set_port_vlan_access_mode(self):
        self._check_set_port_vlan(None, os_win_const.VLAN_MODE_ACCESS)


@@ -1,20 +0,0 @@
# Copyright 2017 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version

version_info = pbr.version.VersionInfo('networking-hyperv')
__version__ = version_info.version_string()


@@ -1,283 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Winstackers Release Notes documentation build configuration file, created by
# sphinx-quickstart on Tue Nov 3 17:40:50 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'openstackdocstheme',
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Networking-Hyperv Release Notes'
copyright = u'2015, Winstackers Developers'
# openstackdocstheme options
repository_name = 'openstack/networking-hyperv'
bug_project = 'networking-hyperv'
bug_tag = ''
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from hyperv.version import version_info
# The full version, including alpha/beta/rc tags.
release = version_info.version_string_with_vcs()
# The short X.Y version.
version = version_info.canonical_version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'NetworkingHypervReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'NetworkingHypervReleaseNotes.tex',
u'Networking-Hyperv Release Notes Documentation',
u'Winstackers Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'networkinghypervreleasenotes',
u'Networking-Hyperv Release Notes Documentation',
[u'Winstackers Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'NetworkingHypervReleaseNotes',
u'Networking-Hyperv Release Notes Documentation',
u'Winstackers Developers', 'NetworkingHypervReleaseNotes',
'Neutron L2 agent and mechanism driver.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']


@@ -1,8 +0,0 @@
================================
networking-hyperv Release Notes
================================
.. toctree::
   :maxdepth: 1

   unreleased


@@ -1,5 +0,0 @@
=============================
Current Series Release Notes
=============================
.. release-notes::


@@ -1,17 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
Babel!=2.4.0,>=2.3.4 # BSD
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
neutron-lib>=1.7.0 # Apache-2.0
os-win>=2.0.0 # Apache-2.0
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
oslo.i18n!=3.15.2,>=2.1.0 # Apache-2.0
oslo.log>=3.22.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
python-neutronclient>=6.3.0 # Apache-2.0


@@ -1,63 +0,0 @@
[metadata]
name = networking-hyperv
summary = This project tracks the work to integrate the Hyper-V networking with Neutron. This project contains the Hyper-V Neutron Agent Mixin, Security Groups Driver, ML2 Mechanism Driver and the utils modules they use in order to properly bind neutron ports on a Hyper-V host. This project resulted from the neutron core vendor decomposition.
description-file =
    README.rst
license = Apache License, Version 2.0
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = https://github.com/openstack/networking-hyperv
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: Microsoft :: Windows
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.5
keywords = openstack neutron hyper-v networking

[files]
packages =
    hyperv

[entry_points]
console_scripts =
    neutron-hyperv-agent = hyperv.neutron.agent.hyperv_neutron_agent:main
    neutron-hnv-agent = hyperv.neutron.agent.hnv_neutron_agent:main
    neutron-hnv-metadata-proxy = hyperv.neutron.agent.hnv_metadata_agent:main
neutron.qos.agent_drivers =
    hyperv = hyperv.neutron.qos.qos_driver:QosHyperVAgentDriver
neutron.ml2.mechanism_drivers =
    hyperv = hyperv.neutron.ml2.mech_hyperv:HypervMechanismDriver
neutron.agent.firewall_drivers =
    hyperv = hyperv.neutron.security_groups_driver:HyperVSecurityGroupsDriver
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = hyperv/locale
domain = networking-hyperv
[update_catalog]
domain = networking-hyperv
output_dir = hyperv/locale
input_file = hyperv/locale/networking-hyperv.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = hyperv/locale/networking-hyperv.pot


@@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)


@@ -1,21 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
ddt>=1.0.1 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
mock>=2.0 # BSD
python-subunit>=0.0.18 # Apache-2.0/BSD
docutils>=0.11 # OSI-Approved Open Source, Public Domain
sphinx>=1.6.2 # BSD
oslosphinx>=4.7.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
openstackdocstheme>=1.11.0 # Apache-2.0
# releasenotes
reno!=2.3.1,>=1.8.0 # Apache-2.0


@@ -1,85 +0,0 @@
#!/usr/bin/env bash
# Client constraint file contains this client version pin that is in conflict
# with installing the client from source. We should remove the version pin in
# the constraints file before applying it for from-source installation.
# The script also has a secondary purpose to install certain special
# dependencies directly from git.
# Wrapper for pip install that always uses constraints.
function pip_install() {
    pip install -c"$localfile" -U "$@"
}
# Grab the library from git using either zuul-cloner or pip. The former is
# there to take advantage of the setup done by the gate infrastructure
# and honour any/all Depends-On headers in the commit message.
function install_from_git() {
    ZUUL_CLONER=/usr/zuul-env/bin/zuul-cloner
    GIT_HOST=git.openstack.org
    PROJ=$1
    EGG=$2
    edit-constraints "$localfile" -- "$EGG"
    if [ -x "$ZUUL_CLONER" ]; then
        SRC_DIR="$VIRTUAL_ENV/src"
        mkdir -p "$SRC_DIR"
        cd "$SRC_DIR" >/dev/null
        ZUUL_CACHE_DIR=${ZUUL_CACHE_DIR:-/opt/git} $ZUUL_CLONER \
            --branch "$BRANCH_NAME" \
            "git://$GIT_HOST" "$PROJ"
        pip_install -e "$PROJ/."
        cd - >/dev/null
    else
        pip_install -e"git+https://$GIT_HOST/$PROJ@$BRANCH_NAME#egg=${EGG}"
    fi
}
CONSTRAINTS_FILE="$1"
shift 1
# This script will either complete with a return code of 0 or the return code
# of whatever failed.
set -e
# NOTE(tonyb): Place this in the tox environment's log dir so it will get
# published to logs.openstack.org for easy debugging.
mkdir -p "$VIRTUAL_ENV/log/"
localfile="$VIRTUAL_ENV/log/upper-constraints.txt"
if [[ "$CONSTRAINTS_FILE" != http* ]]; then
    CONSTRAINTS_FILE="file://$CONSTRAINTS_FILE"
fi
# NOTE(tonyb): need to add curl to bindep.txt if the project supports bindep
curl "$CONSTRAINTS_FILE" --insecure --progress-bar --output "$localfile"
pip_install openstack-requirements
# This is the main purpose of the script: Allow local installation of
# the current repo. It is listed in constraints file and thus any
# install will be constrained and we need to unconstrain it.
edit-constraints "$localfile" -- "$CLIENT_NAME"
declare -a passthrough_args
while [ $# -gt 0 ]; do
    case "$1" in
        # If we have any special os:<repo_name>:<egg_name> deps then process them
        os:*)
            declare -a pkg_spec
            IFS=: pkg_spec=($1)
            install_from_git "${pkg_spec[1]}" "${pkg_spec[2]}"
            ;;
        # Otherwise just pass the other deps through to the constrained pip install
        *)
            passthrough_args+=("$1")
            ;;
    esac
    shift 1
done
# If we *only* had special args then there isn't any need to run pip.
if [ -n "$passthrough_args" ]; then
    pip_install "${passthrough_args[@]}"
fi

43
tox.ini

@@ -1,43 +0,0 @@
[tox]
minversion = 2.0
envlist = py27,py35,pypy,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
    BRANCH_NAME=master
    CLIENT_NAME=networking-hyperv
    PYTHONWARNINGS=default::DeprecationWarning
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
       os:openstack/neutron:neutron
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:pep8]
commands = flake8
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:releasenotes]
commands =
    sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:debug]
commands = oslo_debug_helper {posargs}
[flake8]
show-source = True
ignore =
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build