Retire repo

This patch completes step 2 of the infra project retirement process
found here:

https://docs.opendev.org/opendev/infra-manual/latest/drivers.html#step-2-remove-project-content

Reference: http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015600.html

Depends-on: https://review.opendev.org/737566
Change-Id: Id3a5477860323547a4e17155061f597a8c96640b
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
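Step 2 of the retirement process removes all project content but leaves it reachable in Git history. A minimal sketch of how a reader can recover the pre-retirement tree (throwaway repo; file names, identities, and commit messages here are illustrative, not from this change):

```shell
# Build a toy repo with a "retire" commit, then restore the old tree.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email ci@example.com   # illustrative identity
git config user.name ci
echo "project sources" > README.rst
git add README.rst
git commit -q -m "add sources"
# Simulate the retirement commit: only a notice remains.
echo "This project is no longer maintained." > README.rst
git add README.rst
git commit -q -m "Retire repo"
# The retired tip only carries the notice; the full tree is one commit back:
git checkout -q "HEAD^1" -- .
cat README.rst   # prints "project sources"
```

The same `git checkout HEAD^1` command is what the replacement README below points readers at.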
@@ -1,7 +0,0 @@
-[run]
-branch = True
-source = dragonflow
-omit = dragonflow/tests/*
-
-[report]
-ignore_errors = True
@@ -1,66 +0,0 @@
-*.py[cod]
-
-# C extensions
-*.so
-
-# Packages
-*.egg
-*.eggs
-*.egg-info
-dist
-build
-eggs
-parts
-bin
-var
-sdist
-develop-eggs
-.installed.cfg
-lib
-lib64
-
-# Installer logs
-pip-log.txt
-
-# Unit test / coverage reports
-.coverage
-.tox
-.stestr
-.venv
-
-# Translations
-*.mo
-
-# Mr Developer
-.mr.developer.cfg
-.project
-.pydevproject
-
-# Complexity
-output/*.html
-output/*/index.html
-
-# Sphinx
-doc/build
-
-# pbr generates these
-AUTHORS
-ChangeLog
-
-# Editors
-*~
-.*.swp
-.*sw?
-.idea
-
-# Vagrant
-.vagrant
-
-# etcd Configuration
-/devstack/etcd.override
-
-# Configurations
-etc/*.sample
-
-# Releasenotes
-releasenotes/build
.mailmap
@@ -1,3 +0,0 @@
-# Format is:
-# <preferred e-mail> <other e-mail 1>
-# <preferred e-mail> <other e-mail 2>
@@ -1,3 +0,0 @@
-[DEFAULT]
-test_path=${OS_TEST_PATH:-./dragonflow/tests/unit}
-top_dir=./
.zuul.yaml
@@ -1,135 +0,0 @@
-- project:
-    templates:
-      - check-requirements
-      - openstack-python-jobs-neutron
-      - openstack-python35-jobs-neutron
-      - openstack-python36-jobs-neutron
-      - build-openstack-docs-pti
-    check:
-      jobs:
-        - dragonflow-tox-lower-constraints
-        - dragonflow-dsvm-fullstack-redis
-        - dragonflow-dsvm-fullstack-etcd-zmq
-        - openstack-tox-pep8:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-        - openstack-tox-py27:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-        - openstack-tox-py35:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-        - openstack-tox-py36:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-    gate:
-      jobs:
-        - dragonflow-tox-lower-constraints
-        - dragonflow-dsvm-fullstack-redis
-        - openstack-tox-pep8:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-        - openstack-tox-py27:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-        - openstack-tox-py35:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-        - openstack-tox-py36:
-            required-projects:
-              - openstack/networking-sfc
-              - openstack/neutron-dynamic-routing
-
-    experimental:
-      jobs:
-        - dragonflow-tempest:
-            voting: false
-            irrelevant-files:
-              - ^(test-|)requirements.txt$
-              - ^setup.cfg$
-        - dragonflow-dsvm-rally:
-            voting: false
-        - dragonflow-openstack-ansible-cross-repo:
-            voting: false
-        - kuryr-kubernetes-tempest-dragonflow:
-            voting: false
-
-- job:
-    name: dragonflow-dsvm-fullstack-redis
-    parent: legacy-dsvm-base
-    run: zuul/dragonflow-dsvm-fullstack-redis/run.yaml
-    post-run: zuul/dragonflow-dsvm-fullstack-redis/post.yaml
-    timeout: 10800
-    required-projects:
-      - openstack/devstack-gate
-      - openstack/dragonflow
-      - openstack/neutron
-      - openstack/networking-sfc
-      - openstack/neutron-dynamic-routing
-
-- job:
-    name: dragonflow-dsvm-fullstack-etcd-zmq
-    parent: legacy-dsvm-base
-    run: zuul/dragonflow-dsvm-fullstack-etcd-zmq/run.yaml
-    post-run: zuul/dragonflow-dsvm-fullstack-etcd-zmq/post.yaml
-    timeout: 10800
-    required-projects:
-      - openstack/devstack-gate
-      - openstack/dragonflow
-      - openstack/neutron
-      - openstack/networking-sfc
-      - openstack/neutron-dynamic-routing
-
-- job:
-    name: dragonflow-dsvm-rally
-    parent: legacy-dsvm-base
-    run: zuul/dragonflow-dsvm-rally/run.yaml
-    post-run: zuul/dragonflow-dsvm-rally/post.yaml
-    timeout: 10800
-    required-projects:
-      - openstack/devstack-gate
-      - openstack/dragonflow
-      - openstack/neutron
-      - openstack/networking-sfc
-      - openstack/neutron-dynamic-routing
-      - openstack/rally
-
-- job:
-    name: dragonflow-tempest
-    parent: legacy-dsvm-base
-    run: zuul/tempest-dsvm-dragonflow/run.yaml
-    post-run: zuul/tempest-dsvm-dragonflow/post.yaml
-    timeout: 10800
-    required-projects:
-      - openstack/devstack-gate
-      - openstack/dragonflow
-      - openstack/neutron
-      - openstack/networking-sfc
-      - openstack/neutron-dynamic-routing
-      - openstack/tempest
-      - openstack/neutron-tempest-plugin
-
-- job:
-    name: dragonflow-openstack-ansible-cross-repo
-    parent: openstack-ansible-cross-repo-functional
-    required-projects:
-      - openstack/requirements
-      - openstack/openstack-ansible-os_neutron
-    vars:
-      tox_env: dragonflow
-      osa_test_repo: openstack/openstack-ansible-os_neutron
-
-- job:
-    name: dragonflow-tox-lower-constraints
-    parent: openstack-tox-lower-constraints
-    required-projects:
-      - openstack/networking-sfc
-      - openstack/neutron-dynamic-routing
-      - openstack/neutron
@@ -1,16 +0,0 @@
-If you would like to contribute to the development of OpenStack,
-you must follow the steps in this page:
-
-   https://docs.openstack.org/infra/manual/developers.html
-
-Once those steps have been completed, changes to OpenStack
-should be submitted for review via the Gerrit tool, following
-the workflow documented at:
-
-   https://docs.openstack.org/infra/manual/developers.html#development-workflow
-
-Pull requests submitted through GitHub will be ignored.
-
-Bugs should be filed on Launchpad, not GitHub:
-
-   https://bugs.launchpad.net/dragonflow
Dockerfile
@@ -1,25 +0,0 @@
-FROM ubuntu:16.04
-
-# Install dependencies and some useful tools.
-ENV DRAGONFLOW_PACKAGES git \
-    python-pip python-psutil python-subprocess32 \
-    python-dev libpython-dev
-
-# Ignore questions when installing with apt-get:
-ENV DEBIAN_FRONTEND noninteractive
-
-RUN apt-get update && apt-get install -y $DRAGONFLOW_PACKAGES
-
-# Create config folder
-ENV DRAGONFLOW_ETCDIR /etc/dragonflow
-RUN mkdir -p $DRAGONFLOW_ETCDIR /opt/dragonflow /var/run/dragonflow
-
-# Copy Dragonflow sources to the container
-COPY . /opt/dragonflow/
-
-# Install Dragonflow on the container
-WORKDIR /opt/dragonflow
-RUN pip install -e .
-
-ENTRYPOINT ["/opt/dragonflow/tools/run_dragonflow.sh"]
-
@@ -1,21 +0,0 @@
-FROM fedora:latest
-
-RUN dnf install -y git python3-pip python3-psutil python3-devel \
-    "@C Development Tools and Libraries"
-
-RUN alternatives --install /usr/bin/python python /usr/bin/python3 1
-RUN alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
-
-# Create config folder
-ENV DRAGONFLOW_ETCDIR /etc/dragonflow
-RUN mkdir -p $DRAGONFLOW_ETCDIR /opt/dragonflow /var/run/dragonflow
-
-# Copy Dragonflow sources to the container
-COPY . /opt/dragonflow/
-
-# Install Dragonflow on the container
-WORKDIR /opt/dragonflow
-RUN pip install -e .
-
-ENTRYPOINT ["/opt/dragonflow/tools/run_dragonflow.sh"]
-
@@ -1,4 +0,0 @@
-dragonflow Style Commandments
-=============================
-
-Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
LICENSE
@@ -1,176 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
README.rst
@@ -1,130 +1,10 @@
-========================
-Team and repository tags
-========================
-
-.. image:: https://governance.openstack.org/tc/badges/dragonflow.svg
-    :target: https://governance.openstack.org/tc/reference/tags/index.html
-
-.. Change things from this point on
-
-Distributed SDN-based Neutron Implementation
-
-* Free software: Apache license
-* Homepage: http://www.dragonflow.net/
-* Source: https://opendev.org/openstack/dragonflow
-* Bugs: https://bugs.launchpad.net/dragonflow
-* Documentation: https://docs.openstack.org/dragonflow/latest/
-* Release notes: https://docs.openstack.org/developer/dragonflow/releasenotes.html
-
-.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/df_logo.png
-    :alt: Solution Overview
-    :width: 500
-    :height: 350
-    :align: center
-
-Overview
---------
-
-Dragonflow implements Neutron using a lightweight embedded SDN Controller.
-
-Our project mission is *to Implement advanced networking services in a manner
-that is efficient, elegant and resource-nimble*
-
-Distributed Dragonflow
-======================
-
-Comprehensive agentless implementation of the Neutron APIs and advanced
-network services, such as fully distributed Switching, Routing, DHCP
-and more.
-
-This configuration is the current focus of Dragonflow.
-Overview and details are available in the `Distributed Dragonflow Section`_
-
-.. _Distributed Dragonflow Section: https://docs.openstack.org/dragonflow/latest/distributed_dragonflow.html
-
-.. image:: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/images/dragonflow_distributed_architecture.png
-    :alt: Solution Overview
-    :width: 600
-    :height: 525
-    :align: center
-
-Mitaka Version Features
-=======================
-
-* L2 core API
-
-  IPv4, IPv6
-  GRE/VxLAN/STT/Geneve tunneling protocols
-  L2 Population
-
-* Distributed L3 Virtual Router
-
-* Distributed DHCP
-
-* Distributed DNAT
-
-* Security Groups Using OVS and Connection tracking
-
-* Pluggable Distributed Database
-
-  Supported databases:
-
-  Stable:
-
-  ETCD, RAMCloud, Redis, Zookeeper
-
-  In progress:
-
-  RethinkDB
-
-* Pluggable Publish-Subscribe
-
-  ZeroMQ, Redis
-
-* Selective DB Distribution
-
-  Tenant Based Selective data distribution to the compute nodes
-
-Experimental Mitaka Features
-============================
-
-* Local Controller Reliability
-
-In progress
-===========
-
-* IGMP Distributed application
-* Allowed Address Pairs
-* Port Security
-* DHCP DOS protection
-* Distributed Meta Data Service
-* Kuryr integration
-* Local Controller HA
-* ML2 Driver, hierarchical Port Binding
-* VLAN L2 Networking support
-* Smart broadcast/multicast
-
-In planning
-===========
-
-* Distributed Load Balancing (East/West)
-* DNS service
-* Port Fault detection
-* Dynamic service chaining (service Injection)
-* SFC support
-* Distributed FWaaS
-* Distributed SNAT
-* VPNaaS
-
-Configurations
-==============
-
-To generate the sample dragonflow configuration files, run the following
-command from the top level of the dragonflow directory:
-
-    tox -e genconfig
-
-If a 'tox' environment is unavailable, then you can run the following script
-instead to generate the configuration files:
-
-    ./tools/generate_config_file_samples.sh
+This project is no longer maintained.
+
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
+
+For any further questions, please email
+openstack-discuss@lists.openstack.org or join #openstack-dev on
+Freenode.
@ -1,171 +0,0 @@
|
||||||
#!/bin/bash
|
|
||||||
#
|
|
||||||
#
|
|
||||||
# ``plugin.sh`` calls the following methods in the sourced driver:
|
|
||||||
#
|
|
||||||
# - nb_db_driver_install_server
|
|
||||||
# - nb_db_driver_install_client
|
|
||||||
# - nb_db_driver_start_server
|
|
||||||
# - nb_db_driver_stop_server
|
|
||||||
# - nb_db_driver_clean
|
|
||||||
# - nb_db_driver_configure
|
|
||||||
|
|
||||||
HOSTNAME=`hostname -f`
|
|
||||||
|
|
||||||
if is_ubuntu ; then
|
|
||||||
UBUNTU_RELEASE_BASE_NUM=`lsb_release -r | awk '{print $2}' | cut -d '.' -f 1`
|
|
||||||
fi
|
|
||||||
|
|
||||||
CASSANDRA_HOME="/etc/cassandra"
|
|
||||||
CASSANDRA_DATA_HOME="/var/lib/cassandra"
|
|
||||||
CASSANDRA_DEB_SOURCE_FILE="/etc/apt/sources.list.d/cassandra.list"
|
|
||||||
CASSANDRA_RPM_SOURCE_FILE="/etc/yum.repos.d/cassandra.repo"
|
|
||||||
|
|
||||||
CASSANDRA_DEFAULT_KEYSPACE="openstack"
|
|
||||||
# By default, the cassandra uses one replication for the all-in-one deployment
|
|
||||||
CASSANDRA_DEFAULT_REPLICATION=1
|
|
||||||
CASSANDRA_DEFAULT_CONSISTENCY_LEVEL="one"
|
|
||||||
|
|
||||||
# Cassandra service startup/cleanup duration
|
|
||||||
CASSANDRA_SERVICE_CHECK_REPLAY=5
|
|
||||||
|
|
||||||
# The seeds of cassandra (the cassandra hosts to form a cluster) should
|
|
||||||
# be specified in the configuration file. In order to generate the ip list
|
|
||||||
# of the cluster, string manipulation is needed here to get the right
|
|
||||||
# format of the seeds.
|
|
||||||
CASSANDRA_CLUSTER=$REMOTE_DB_HOSTS
|
|
||||||
CASSANDRA_NUM_OF_HOSTS_IN_CLUSTER=${CASSANDRA_NUM_OF_HOSTS:-1}
|
|
||||||
CASSANDRA_TEMP_FILE="/tmp/cassandra_hosts"
|
|
||||||
echo $CASSANDRA_CLUSTER > $CASSANDRA_TEMP_FILE
|
|
||||||
IPS=''
|
|
||||||
for ((i=1;i<=$CASSANDRA_NUM_OF_HOSTS_IN_CLUSTER;i++))
|
|
||||||
do
|
|
||||||
ip=`cut -d ',' -f $i < $CASSANDRA_TEMP_FILE | cut -d ':' -f 1`
|
|
||||||
IPS=$IPS','$ip
|
|
||||||
done
|
|
||||||
CASSANDRA_CLUSTER_IPS=${IPS#*","}
|
|
||||||
rm $CASSANDRA_TEMP_FILE
|
|
||||||
# End
|
|
||||||
|
|
||||||
if is_ubuntu; then
|
|
||||||
CASSANDRA_CONF_DIR="$CASSANDRA_HOME"
|
|
||||||
elif is_fedora; then
|
|
||||||
CASSANDRA_CONF_DIR="$CASSANDRA_HOME/conf"
|
|
||||||
else
|
|
||||||
die $LINENO "Other distributions are not supported"
|
|
||||||
fi
|
|
||||||
CASSANDRA_CONF_FILE="$CASSANDRA_CONF_DIR/cassandra.yaml"
|
|
||||||
|
|
||||||
function _cassandra_create_keyspace {
|
|
||||||
keyspace="CREATE KEYSPACE IF NOT EXISTS $CASSANDRA_DEFAULT_KEYSPACE "
|
|
||||||
replica="WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : $CASSANDRA_DEFAULT_REPLICATION } "
|
|
||||||
durable="AND DURABLE_WRITES = true;"
|
|
||||||
cqlsh $HOST_IP -e "$keyspace$replica$durable"
|
|
||||||
}
|
|
||||||
|
|
||||||
function _cassandra_drop_keyspace {
|
|
||||||
cqlsh $HOST_IP -e "DROP KEYSPACE IF EXISTS $CASSANDRA_DEFAULT_KEYSPACE;"
|
|
||||||
}
|
|
||||||
|
|
||||||
function nb_db_driver_install_server {
|
|
||||||
if is_service_enabled df-cassandra-server ; then
|
|
||||||
echo "Installing Cassandra server"
|
|
||||||
if is_ubuntu; then
|
|
||||||
sudo tee -a $CASSANDRA_DEB_SOURCE_FILE >/dev/null <<'EOF'
|
|
||||||
deb http://debian.datastax.com/datastax-ddc 3.9 main
|
|
||||||
EOF
|
|
||||||
curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -
|
|
||||||
sudo apt-get update -y
|
|
||||||
install_package openjdk-8-jre-headless
|
|
||||||
elif is_fedora; then
|
|
||||||
sudo tee -a $CASSANDRA_RPM_SOURCE_FILE >/dev/null <<'EOF'
|
|
||||||
[datastax-ddc]
|
|
||||||
name = DataStax Repo for Apache Cassandra
|
|
||||||
baseurl = http://rpm.datastax.com/datastax-ddc/3.9
|
|
||||||
enabled = 1
|
|
||||||
gpgcheck = 0
|
|
||||||
EOF
|
|
||||||
sudo yum update -y
|
|
||||||
install_package java-1.8.0-openjdk-headless
|
|
||||||
fi
|
|
||||||
|
|
||||||
install_package datastax-ddc
|
|
||||||
echo "Configuring Cassandra"
|
|
||||||
sudo sed -i "s/127.0.0.1/${CASSANDRA_CLUSTER_IPS}/g" $CASSANDRA_CONF_FILE
|
|
||||||
sudo sed -i "/^listen_address:/c listen_address: ${HOST_IP}" $CASSANDRA_CONF_FILE
|
|
||||||
sudo sed -i "/^rpc_address:/c rpc_address:" $CASSANDRA_CONF_FILE
|
|
||||||
        sudo sed -i "/^broadcast_address:/c broadcast_address:" $CASSANDRA_CONF_FILE
        # change ownership for data directory
        sudo chown -R cassandra:cassandra $CASSANDRA_DATA_HOME
        # start cassandra service
        nb_db_driver_start_server
        # initialize keyspace
        _cassandra_create_keyspace
    fi
}

function nb_db_driver_install_client {
    echo 'Cassandra client sdk is in the requirements file.'
}

function nb_db_driver_status_server
{
    if is_service_enabled df-cassandra-server ; then
        TEMP_PIDS=`pgrep -f "cassandra"`
        if [ -z "$TEMP_PIDS" ]; then
            return 1
        fi
    fi
    return 0
}

function _check_cassandra_status {
    times=0
    # Cassandra initially needs a long time to start up / clean up
    sleep 20

    # Check that the Cassandra cluster is Up and Normal
    result=$(nodetool -h $HOST_IP status | grep $HOST_IP | grep 'UN' | wc -l)
    while [[ $result -lt 1 ]]
    do
        sleep 10
        result=$(nodetool -h $HOST_IP status | grep $HOST_IP | grep 'UN' | wc -l)
        times=`expr $times + 1`
        # Use -gt: ">" inside [[ ]] is a string comparison
        if [[ $times -gt $CASSANDRA_SERVICE_CHECK_REPLAY ]];
        then
            echo "Cassandra Restart Error!"
            return 1
        fi
    done
    return 0
}

function nb_db_driver_start_server {
    if is_service_enabled df-cassandra-server ; then
        sudo /etc/init.d/cassandra restart
        _check_cassandra_status
    fi
}

function nb_db_driver_stop_server {
    if is_service_enabled df-cassandra-server ; then
        sudo /etc/init.d/cassandra stop
    fi
}

function nb_db_driver_clean {
    nb_db_driver_start_server
    _cassandra_drop_keyspace
    nb_db_driver_stop_server

    if is_ubuntu || is_fedora; then
        uninstall_package -y datastax-ddc
    fi
    sudo rm -rf ${CASSANDRA_HOME}
    sudo rm -rf ${CASSANDRA_DATA_HOME}
}

function nb_db_driver_configure {
    # set consistency level
    iniset $DRAGONFLOW_CONF df-cassandra consistency_level "$CASSANDRA_DEFAULT_CONSISTENCY_LEVEL"
}
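The `_check_cassandra_status` loop above is a generic poll-until-healthy pattern. A minimal standalone sketch of the same idea, assuming nothing Cassandra-specific (the `wait_until_ready` name and its arguments are illustrative, not part of the plugin):

```shell
# Hypothetical helper, not part of the original plugin: run a check command
# repeatedly until it succeeds or the retry budget is exhausted.
wait_until_ready() {
    local check_cmd=$1
    local max_tries=${2:-5}
    local delay=${3:-10}
    local tries=0
    until eval "$check_cmd"; do
        tries=$((tries + 1))
        # Numeric comparison with -gt, as in the corrected loop above
        if [ "$tries" -gt "$max_tries" ]; then
            echo "service did not become ready" >&2
            return 1
        fi
        sleep "$delay"
    done
    return 0
}
```

The original loop uses fixed 10-second sleeps; making the delay a parameter keeps the sketch testable.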
@@ -1,24 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

export OVERRIDE_ENABLED_SERVICES=key,n-api,n-api-meta,n-cpu,n-cond,n-sch,n-crt,n-cauth,n-obj,g-api,g-reg,rabbit,mysql,dstat,df-controller,q-svc,df-metadata,q-qos,placement-api,q-trunk
export DEVSTACK_LOCAL_CONFIG+=$'\n'"DF_RUNNING_IN_GATE=True"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"EXTERNAL_HOST_IP=172.24.4.100"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"OVS_INSTALL_FROM_GIT=False"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"OVS_BRANCH=v2.9.1"
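The `DEVSTACK_LOCAL_CONFIG+=$'\n'"KEY=value"` lines above accumulate one setting per line for devstack's generated local.conf. A small bash illustration of the append pattern (the settings are copied from the script; the line-count check exists only for demonstration):

```shell
# Each append adds a leading newline plus one KEY=value setting, so the
# variable ends up holding one setting per line. Requires bash (+= and $'\n').
DEVSTACK_LOCAL_CONFIG=""
DEVSTACK_LOCAL_CONFIG+=$'\n'"DF_RUNNING_IN_GATE=True"
DEVSTACK_LOCAL_CONFIG+=$'\n'"EXTERNAL_HOST_IP=172.24.4.100"

# Count how many KEY=value lines were collected
line_count=$(printf '%s\n' "$DEVSTACK_LOCAL_CONFIG" | grep -c '=')
echo "$line_count"
```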
@@ -1,21 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

export OVERRIDE_ENABLED_SERVICES+=,df-etcd,etcd3,df-zmq-publisher-service
export DEVSTACK_LOCAL_CONFIG+=$'\n'"ENABLE_ACTIVE_DETECTION=False"
@@ -1,21 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

export DEVSTACK_LOCAL_CONFIG+=$'\n'"ENABLE_DF_SFC=True"
export DEVSTACK_LOCAL_CONFIG+=$'\n'"NEUTRON_CREATE_INITIAL_NETWORKS=False"
@@ -1,23 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

source /opt/stack/new/dragonflow/devstack/devstackgaterc-common
source /opt/stack/new/dragonflow/devstack/devstackgaterc-fullstack-common
source /opt/stack/new/dragonflow/devstack/devstackgaterc-etcd-zmq
export DEVSTACK_LOCAL_CONFIG+=$'\n'"DF_PUB_SUB=True"
@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

source /opt/stack/new/dragonflow/devstack/devstackgaterc-common
source /opt/stack/new/dragonflow/devstack/devstackgaterc-fullstack-common
source /opt/stack/new/dragonflow/devstack/devstackgaterc-redis
@@ -1,22 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

source /opt/stack/new/dragonflow/devstack/devstackgaterc-common
source /opt/stack/new/dragonflow/devstack/devstackgaterc-redis
export DEVSTACK_LOCAL_CONFIG+=$'\n'"NEUTRON_CREATE_INITIAL_NETWORKS=False"
@@ -1,21 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

export OVERRIDE_ENABLED_SERVICES+=,df-redis,df-redis-server
export DEVSTACK_LOCAL_CONFIG+=$'\n'"DF_REDIS_PUBSUB=True"
@@ -1,24 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# https://opendev.org/openstack/dragonflow/src/branch/master/.zuul.yaml
#

source /opt/stack/new/dragonflow/devstack/devstackgaterc-common
source /opt/stack/new/dragonflow/devstack/devstackgaterc-redis
source /opt/stack/new/dragonflow/devstack/tempest-filter
export OVERRIDE_ENABLED_SERVICES+=,tempest,df-bgp
export DEVSTACK_LOCAL_CONFIG+=$'\n'"NEUTRON_CREATE_INITIAL_NETWORKS=True"
@@ -1,22 +0,0 @@
description "etcd 2.0 distributed key-value store"
author "Scott Lowe <scott.lowe@scottlowe.org>"

start on (net-device-up
          and local-filesystems
          and runlevel [2345])
stop on runlevel [016]

respawn
respawn limit 10 5

script
    if [ -f "/etc/default/etcd" ]; then
        . /etc/default/etcd
    fi

    if [ ! -d "/var/etcd" ]; then
        mkdir /var/etcd
    fi
    cd /var/etcd
    exec /usr/local/bin/etcd >>/var/log/etcd.log 2>&1
end script
@@ -1,15 +0,0 @@
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
#User=etcd
ExecStart=/usr/local/bin/etcd
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
@@ -1,41 +0,0 @@
# [member]
#ETCD_NAME=default
#ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
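The commented `KEY="value"` lines above become environment variables for etcd once uncommented, via the unit's `EnvironmentFile=-/etc/etcd/etcd.conf` directive. As a rough sketch of how such a file can be consumed from plain shell (the `load_env_file` helper is hypothetical, not devstack or systemd code, and handles only simple double-quoted values):

```shell
# Hypothetical loader for an EnvironmentFile-style config: comments and
# blank lines are skipped, KEY="value" lines are exported with the
# surrounding double quotes stripped.
load_env_file() {
    local file=$1 key val line
    while IFS= read -r line; do
        case "$line" in
            \#*|"") continue ;;   # skip comments and blank lines
            *=*) ;;               # keep KEY=value lines
            *) continue ;;        # ignore anything else
        esac
        key=${line%%=*}
        val=${line#*=}
        val=${val%\"}
        val=${val#\"}
        export "$key=$val"
    done < "$file"
}
```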
@@ -1,35 +0,0 @@
#!/bin/bash
#
#
# ``plugin.sh`` calls the following methods in the sourced driver:
#
# - nb_db_driver_install_server
# - nb_db_driver_install_client
# - nb_db_driver_start_server
# - nb_db_driver_stop_server
# - nb_db_driver_clean
# - nb_db_driver_configure

function nb_db_driver_install_server {
    :
}

function nb_db_driver_install_client {
    :
}

function nb_db_driver_start_server {
    :
}

function nb_db_driver_stop_server {
    :
}

function nb_db_driver_clean {
    :
}

function nb_db_driver_configure {
    :
}
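As the header comment says, `plugin.sh` consumes a driver file like this stub by sourcing it and then invoking the hook functions by name. A hedged sketch of that dispatch (the `run_nb_db_driver_hooks` helper is illustrative; the real `plugin.sh` sources the driver directly and calls each hook at the appropriate devstack phase):

```shell
# Illustrative dispatcher: source a driver file, then call each named hook
# in order, stopping at the first failure.
run_nb_db_driver_hooks() {
    local driver_file=$1; shift
    source "$driver_file"
    local hook
    for hook in "$@"; do
        "$hook" || return 1
    done
}
```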
@@ -1,7 +0,0 @@
#!/bin/bash

function configure_pubsub_service_plugin {
    NEUTRON_CONF=${NEUTRON_CONF:-"/etc/neutron/neutron.conf"}
    PUB_SUB_DRIVER=${PUB_SUB_DRIVER:-"etcd_pubsub_driver"}
    iniset $DRAGONFLOW_CONF df pub_sub_driver $PUB_SUB_DRIVER
}
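`iniset` here is devstack's INI-editing helper (defined in devstack's `inc/ini-config`). A simplified, hypothetical stand-in that shows the effect of such a call; unlike the real helper, this version does not scope the option lookup to its section and relies on GNU sed:

```shell
# Simplified sketch of an iniset-style helper: ensure [section] exists,
# then set or update "option = value". Not the real devstack function.
simple_iniset() {
    local file=$1 section=$2 option=$3 value=$4
    if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
        printf '[%s]\n' "$section" >> "$file"
    fi
    if grep -q "^$option *=" "$file"; then
        # Update the existing option in place
        sed -i "s|^$option *=.*|$option = $value|" "$file"
    else
        # Insert the option right after the section header
        sed -i "/^\[$section\]/a $option = $value" "$file"
    fi
}
```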
@@ -1,49 +0,0 @@
# NOTE(xiaohhui): By default, devstack will set Q_AGENT to openvswitch.
# Here we override that, because there is no agent in DF. However, most of
# the functions in lib/neutron_plugins/openvswitch_agent are still needed,
# so that file is still sourced here, and override functions can be added
# in this file.
Q_AGENT=${Q_AGENT:-" "}

source $TOP_DIR/lib/neutron_plugins/openvswitch_agent

# This function is invoked by DevStack's Neutron plugin setup
# code and is being overridden here since the DF devstack
# plugin will handle the install.
function neutron_plugin_install_agent_packages {
    :
}

# Workaround for devstack/systemd, which will try to define q-agt even though
# it's disabled.
AGENT_BINARY=$(which true)

if is_service_enabled df-l3-agent ; then
    AGENT_L3_BINARY=${AGENT_L3_BINARY:-"$(get_python_exec_prefix)/df-l3-agent"}
    enable_service q-l3
fi

if is_service_enabled df-metadata ; then
    disable_service q-meta
fi

DRAGONFLOW_CONF=/etc/neutron/dragonflow.ini
DRAGONFLOW_PUBLISHER_CONF=/etc/neutron/dragonflow_publisher.ini
DRAGONFLOW_DATAPATH=/etc/neutron/dragonflow_datapath_layout.yaml
Q_PLUGIN_EXTRA_CONF_PATH=/etc/neutron
Q_PLUGIN_EXTRA_CONF_FILES=(dragonflow.ini)

Q_ML2_PLUGIN_MECHANISM_DRIVERS=${Q_ML2_PLUGIN_MECHANISM_DRIVERS:-"df"}
if [[ -z ${ML2_L3_PLUGIN} ]]; then
    if is_service_enabled q-l3 ; then
        ML2_L3_PLUGIN="df-l3"
    else
        ML2_L3_PLUGIN="df-l3-agentless"
    fi
fi

if [[ "$ENABLE_DPDK" == "True" ]]; then
    # By default, dragonflow uses the OVS kernel datapath. If you want to use
    # the user space datapath powered by DPDK, please use 'netdev'.
    OVS_DATAPATH_TYPE=netdev
fi
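The `ML2_L3_PLUGIN` selection above keeps an operator-provided value and otherwise derives a default from whether the q-l3 service is enabled. The same logic in isolation (`pick_l3_plugin` and the `True`/`False` flag are illustrative stand-ins for `is_service_enabled q-l3`):

```shell
# Illustrative fallback-selection helper: an explicit value wins, otherwise
# choose between the agent-backed and agentless L3 plugin.
pick_l3_plugin() {
    local explicit=$1 l3_enabled=$2
    if [ -n "$explicit" ]; then
        echo "$explicit"
    elif [ "$l3_enabled" = "True" ]; then
        echo "df-l3"
    else
        echo "df-l3-agentless"
    fi
}
```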
@@ -1,203 +0,0 @@
#!/bin/bash

DPDK_VERSION=16.07.2
DPDK_DIR=$DEST/dpdk/dpdk-stable-${DPDK_VERSION}
DPDK_TARGET=x86_64-native-linuxapp-gcc
DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET

PCI_BUS_INFO=`sudo ethtool -i ${DPDK_NIC_NAME} | grep bus-info`
DPDK_PCI_TARGET=${PCI_BUS_INFO#*:}

OVS_DIR=/usr/local/var/run/openvswitch
OVSDB_SOCK=/usr/local/var/run/openvswitch/db.sock

# includes ovs_setup.sh
source $DEST/dragonflow/devstack/ovs_setup.sh

function _neutron_ovs_configure_dependencies {
    # Configure TUN
    if [ ! -e /sys/class/misc/tun ]; then
        sudo modprobe tun
    fi
    if [ ! -e /dev/net/tun ]; then
        sudo mkdir -p /dev/net
        sudo mknod /dev/net/tun c 10 200
    fi

    # Configure huge-pages
    sudo sysctl -w vm.nr_hugepages=${DPDK_NUM_OF_HUGEPAGES}
    sudo mkdir -p /dev/hugepages
    sudo mount -t hugetlbfs none /dev/hugepages

    # Configure UIO
    sudo modprobe uio || true
    sudo insmod $DPDK_BUILD/kmod/${DPDK_BIND_DRIVER}.ko || true
}

function _configure_ovs_dpdk {
    # Configure user space datapath
    iniset $DRAGONFLOW_CONF df vif_type vhostuser
    iniset $DRAGONFLOW_CONF df vhost_sock_dir ${OVS_DIR}

    # Disable kernel TCP/IP stack
    sudo iptables -A INPUT -i ${DPDK_NIC_NAME} -j DROP
    sudo iptables -A FORWARD -i ${DPDK_NIC_NAME} -j DROP

    # Set up DPDK NIC
    sudo ip link set ${DPDK_NIC_NAME} down
    sudo $DPDK_DIR/tools/dpdk-devbind.py --bind=${DPDK_BIND_DRIVER} ${DPDK_PCI_TARGET}
}

function _install_dpdk {
    if is_fedora; then
        install_package kernel-devel
    elif is_ubuntu; then
        install_package build-essential
    fi

    if [ ! -d $DEST/dpdk ]; then
        mkdir -p $DEST/dpdk
        pushd $DEST/dpdk
        if [ ! -e $DEST/dpdk/dpdk-${DPDK_VERSION}.tar.xz ]
        then
            wget http://fast.dpdk.org/rel/dpdk-${DPDK_VERSION}.tar.xz
        fi
        tar xvJf dpdk-${DPDK_VERSION}.tar.xz
        cd $DPDK_DIR
        sudo make install T=$DPDK_TARGET DESTDIR=install
        popd
    fi
}

function _uninstall_dpdk {
    sudo $DPDK_DIR/tools/dpdk-devbind.py -u ${DPDK_PCI_TARGET}
    sudo rmmod $DPDK_BUILD/kmod/${DPDK_BIND_DRIVER}.ko
    sudo modprobe -r uio
    sudo ip link set ${DPDK_NIC_NAME} up

    # Enable kernel TCP/IP stack
    sudo iptables -D INPUT -i ${DPDK_NIC_NAME} -j DROP
    sudo iptables -D FORWARD -i ${DPDK_NIC_NAME} -j DROP

    pushd $DPDK_DIR
    sudo make uninstall
    popd
    sudo rm -rf $DPDK_DIR
}

function install_ovs {
    _install_dpdk
    _neutron_ovs_configure_dependencies
    _neutron_ovs_clone_ovs

    # If OVS is already installed, remove it, because we're about to re-install
    # it from source.
    for package in openvswitch openvswitch-switch openvswitch-common; do
        if is_package_installed $package ; then
            uninstall_package $package
        fi
    done

    install_package autoconf automake libtool gcc patch make

    pushd $DEST/ovs
    ./boot.sh
    ./configure --with-dpdk=$DPDK_BUILD
    make
    sudo make install
    sudo pip install ./python
    popd
}

function uninstall_ovs {
    sudo pip uninstall -y ovs
    pushd $DEST/ovs
    sudo make uninstall
    popd

    _uninstall_dpdk
}

function start_ovs {
    # First time, only DB creation/clearing
    sudo mkdir -p /var/run/openvswitch

    # Start OVSDB
    sudo ovsdb-server --remote=punix:$OVSDB_SOCK \
        --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
        --pidfile --detach

    # Start vswitchd
    sudo ovs-vsctl --db=unix:$OVSDB_SOCK --no-wait init
    sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
    sudo ovs-vswitchd unix:$OVSDB_SOCK --pidfile --detach
}

function configure_ovs {
    _configure_ovs_dpdk

    if is_service_enabled df-controller ; then
        # setup external bridge if necessary
        check_dnat=$(echo $DF_APPS_LIST | grep "dnat")
        if [[ "$check_dnat" != "" ]]; then
            echo "Setup external bridge for DNAT"
            sudo ovs-vsctl add-br $PUBLIC_BRIDGE || true
            sudo ip link set dev $PUBLIC_BRIDGE up || true
            sudo ip addr add $PUBLIC_NETWORK_GATEWAY/$PUBLIC_NETWORK_PREFIXLEN dev $PUBLIC_BRIDGE || true
        fi

        _neutron_ovs_base_setup_bridge $INTEGRATION_BRIDGE
        sudo ovs-vsctl --no-wait set bridge $INTEGRATION_BRIDGE fail-mode=secure other-config:disable-in-band=true

        # Configure Open vSwitch to connect the dpdk-enabled physical NIC
        # to the OVS bridge. For example:
        sudo ovs-vsctl add-port ${INTEGRATION_BRIDGE} dpdk0 -- set interface \
            dpdk0 type=dpdk ofport_request=1
    fi
}

function cleanup_ovs {
    # Remove the patch ports
    for port in $(sudo ovs-vsctl show | grep Port | awk '{print $2}' | cut -d '"' -f 2 | grep patch); do
        sudo ovs-vsctl del-port ${port}
    done

    # remove all OVS ports that look like Neutron created ports
    for port in $(sudo ovs-vsctl list port | grep -o -e tap[0-9a-f\-]* -e q[rg]-[0-9a-f\-]*); do
        sudo ovs-vsctl del-port ${port}
    done

    # Remove all the vxlan ports
    for port in $(sudo ovs-vsctl list port | grep name | grep vxlan | awk '{print $3}' | cut -d '"' -f 2); do
        sudo ovs-vsctl del-port ${port}
    done
}

function stop_ovs {
    sudo ovs-dpctl dump-dps | sudo xargs -n1 ovs-dpctl del-dp
    sudo killall ovsdb-server
    sudo killall ovs-vswitchd
}

function init_ovs {
    # clean up from previous (possibly aborted) runs
    # create required data files

    # Assumption: this is a dedicated test system and there is nothing
    # important in the ovs databases. We're going to trash them and
    # create new ones on each devstack run.

    base_dir=/usr/local/etc/openvswitch
    sudo mkdir -p $base_dir

    for db in conf.db ; do
        if [ -f $base_dir/$db ] ; then
            sudo rm -f $base_dir/$db
        fi
    done
    sudo rm -f $base_dir/.*.db.~lock~

    echo "Creating OVS Database"
    sudo ovsdb-tool create $base_dir/conf.db \
        /usr/local/share/openvswitch/vswitch.ovsschema
}
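The `DPDK_PCI_TARGET=${PCI_BUS_INFO#*:}` line above relies on bash parameter expansion to strip everything through the first colon of ethtool's `bus-info:` line. Demonstrated here with a canned line instead of a live NIC (the PCI address is made up); note the leading space the original expansion leaves behind, trimmed here with a second expansion:

```shell
# Canned ethtool "driver info" line; on a real system this comes from
# `sudo ethtool -i $DPDK_NIC_NAME | grep bus-info`.
PCI_BUS_INFO="bus-info: 0000:00:08.0"

# Strip everything up to and including the first ':'
DPDK_PCI_TARGET=${PCI_BUS_INFO#*:}
# Trim the leading space left after the colon
DPDK_PCI_TARGET=${DPDK_PCI_TARGET# }
echo "$DPDK_PCI_TARGET"
```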
@@ -1,271 +0,0 @@
#!/bin/bash

function _neutron_ovs_install_ovs_deps_fedora {
    install_package -y rpm-build rpmrebuild
    # So apparently we need to compile to learn the requirements...
    set `rpmspec -q --buildrequires rhel/openvswitch-fedora.spec`
    set "$@" `rpmspec -q --buildrequires rhel/openvswitch-kmod-fedora.spec`
    # Use -gt: ">" inside [ ] is an output redirection, not a comparison
    if [ $# -gt 0 ]; then
        install_package -y $@
    fi
}

function _neutron_ovs_get_rpm_basename {
    PACKAGE=$1
    SPEC=${2:-rhel/openvswitch-fedora.spec}
    BASENAME=`rpmspec -q $SPEC --provides | awk "/^$PACKAGE\s*=/ {print \\\$1\"-\"\\\$3}" | head -1`
    echo `rpmspec -q $SPEC | grep "^$BASENAME"`
}

function _neutron_ovs_get_rpm_file {
    BASENAME=`_neutron_ovs_get_rpm_basename "$@"`
    find $HOME/rpmbuild/RPMS/ -name "$BASENAME.rpm" | head -1
}

function _neutron_ovs_clone_ovs {
    if [ -d $DEST/ovs ]; then
        pushd $DEST/ovs
        git checkout $OVS_BRANCH
        git pull || true
        popd
    else
        pushd $DEST
        git clone $OVS_REPO -b $OVS_BRANCH
        popd
    fi
}

function _neutron_ovs_install_ovs_fedora {
    _neutron_ovs_clone_ovs

    mkdir -p $DEST/ovs/build-dragonflow
    pushd $DEST/ovs/build-dragonflow

    pushd ..
    ./boot.sh
    popd

    ../configure

    make dist
    VERSION=`awk '/^Version:/ { print $2 }' ../rhel/openvswitch-fedora.spec | head -1`

    mkdir -p $HOME/rpmbuild/SOURCES
    cp openvswitch-${VERSION}.tar.gz $HOME/rpmbuild/SOURCES/
    tar -xzf openvswitch-${VERSION}.tar.gz -C $HOME/rpmbuild/SOURCES
    pushd $HOME/rpmbuild/SOURCES/openvswitch-${VERSION}
    _neutron_ovs_install_ovs_deps_fedora
    rpmbuild -bb --without check rhel/openvswitch-fedora.spec
    rpmbuild -bb -D "kversion `uname -r`" rhel/openvswitch-kmod-fedora.spec
    OVS_RPM_BASENAME=$(_neutron_ovs_get_rpm_file openvswitch)
    rpmrebuild --change-spec-requires="awk '\$1 == \"Requires:\" && \$2 == \"/bin/python\" {\$2 = \"/usr/bin/python\"} {print \$0}'" -p $OVS_RPM_BASENAME
    OVS_PY_RPM_BASENAME=""
    OVS_KMOD_RPM_BASENAME=$(_neutron_ovs_get_rpm_file openvswitch-kmod rhel/openvswitch-kmod-fedora.spec)
    install_package -y $OVS_RPM_BASENAME $OVS_PY_RPM_BASENAME $OVS_KMOD_RPM_BASENAME
    sudo pip install ./python
    popd

    popd
}

function _neutron_ovs_install_ovs_deps_ubuntu {
    install_package -y build-essential fakeroot devscripts equivs dkms linux-libc-dev linux-headers-$(uname -r)
    sudo mk-build-deps -i -t "/usr/bin/apt-get --no-install-recommends -y"
}

function _neutron_ovs_install_ovs_ubuntu {
    _neutron_ovs_clone_ovs

    pushd $DEST/ovs
    _neutron_ovs_install_ovs_deps_ubuntu
    DEB_BUILD_OPTIONS='nocheck' fakeroot debian/rules binary
    sudo dpkg -i ../openvswitch-datapath-dkms*.deb
    sudo dpkg -i ../libopenvswitch*.deb ../openvswitch-common*.deb ../openvswitch-switch*.deb
    sudo pip install ./python
    popd
}

function _neutron_ovs_install_ovs {
    local _is_ovs_installed=false

    if [ "$OVS_INSTALL_FROM_GIT" == "True" ]; then
        echo "Installing OVS and dependent packages from git"
        # If OVS is already installed, remove it, because we're about to
        # re-install it from source.
        for package in openvswitch openvswitch-switch openvswitch-common; do
            if is_package_installed $package ; then
                _is_ovs_installed=true
                break
            fi
        done
        if [ "$_is_ovs_installed" = true ]; then
            cleanup_ovs
            stop_ovs
            uninstall_ovs
        fi

        install_package -y autoconf automake libtool gcc patch make

        if is_ubuntu; then
            _neutron_ovs_install_ovs_ubuntu
        elif is_fedora; then
            _neutron_ovs_install_ovs_fedora
        else
            echo "Unsupported system. Trying to install via package manager"
            install_package $(get_packages "openvswitch")
        fi
    else
        echo "Installing OVS and dependent packages via package manager"
        install_package $(get_packages "openvswitch")
    fi
}

function install_ovs {
    unload_module_if_loaded openvswitch

    _neutron_ovs_install_ovs

    # reload module
    load_module_if_not_loaded openvswitch
}

function start_ovs {
    echo "Starting OVS"
    SERVICE_NAME=openvswitch # Default value
    if is_fedora; then
        SERVICE_NAME=openvswitch
    elif is_ubuntu; then
        SERVICE_NAME=openvswitch-switch
    fi

    restart_service $SERVICE_NAME
    sleep 5

    local _pwd=$(pwd)
    cd $DATA_DIR/ovs

    if ! ovs_service_status $OVS_DB_SERVICE; then
        die "$OVS_DB_SERVICE is not running"
    fi

    if is_service_enabled df-controller ; then
        if ! ovs_service_status $OVS_VSWITCHD_SERVICE; then
            die "$OVS_VSWITCHD_SERVICE is not running"
        fi
    fi

    cd $_pwd
}

function configure_ovs {
    if is_service_enabled df-controller ; then
        # setup external bridge if necessary
        check_dnat=$(echo $DF_APPS_LIST | grep "dnat")
        if [[ "$check_dnat" != "" ]]; then
            if [[ "$DF_REINSTALL_OVS" == "True" ]]; then
                # Create the bridge only if it does not already exist
                if ! sudo ovs-vsctl br-exists $PUBLIC_BRIDGE; then
                    echo "Setup external bridge for DNAT"
                    sudo ovs-vsctl add-br $PUBLIC_BRIDGE || true
                    sudo ip link set dev $PUBLIC_BRIDGE up || true
                    sudo ip addr add $PUBLIC_NETWORK_GATEWAY/$PUBLIC_NETWORK_PREFIXLEN dev $PUBLIC_BRIDGE || true
                fi
            fi
        fi

        _neutron_ovs_base_setup_bridge $INTEGRATION_BRIDGE
        sudo ovs-vsctl --no-wait set bridge $INTEGRATION_BRIDGE fail-mode=secure other-config:disable-in-band=true
        if [ -n "$OVS_INTEGRATION_BRIDGE_PROTOCOLS" ]; then
            sudo ovs-vsctl set bridge $INTEGRATION_BRIDGE protocols=$OVS_INTEGRATION_BRIDGE_PROTOCOLS
        fi
    fi

    if [ -n "$OVS_MANAGER" ]; then
        sudo ovs-vsctl set-manager $OVS_MANAGER
    fi
}

function cleanup_ovs {
    # Remove the patch ports
    for port in $(sudo ovs-vsctl show | grep Port | awk '{print $2}' | cut -d '"' -f 2 | grep patch); do
        sudo ovs-vsctl del-port ${port}
    done

    # remove all OVS ports that look like Neutron created ports
    for port in $(sudo ovs-vsctl list port | grep -o -e tap[0-9a-f\-]* -e q[rg]-[0-9a-f\-]*); do
        sudo ovs-vsctl del-port ${port}
    done

    # Remove all the vxlan ports
    for port in $(sudo ovs-vsctl list port | grep name | grep vxlan | awk '{print $3}' | cut -d '"' -f 2); do
        sudo ovs-vsctl del-port ${port}
    done
}

function uninstall_ovs {
    sudo pip uninstall -y ovs
    PACKAGES="openvswitch openvswitch-kmod openvswitch-switch openvswitch-common openvswitch-datapath-dkms"
    for package in $PACKAGES; do
        if is_package_installed $package ; then
            uninstall_package $package
        fi
    done

    # If the ovs dir is not found, just return.
    pushd $DEST/ovs || return 0
    make distclean || true
    popd
}

# stop_ovs_dp() - Stop OVS datapath
function stop_ovs_dp {
    dp=$(sudo ovs-dpctl dump-dps)
    if [ -n "$dp" ]; then
        sudo ovs-dpctl del-dp $dp
    fi

    # Here we just remove vport_<tunnel_type>, because this is a minimal
    # requirement to remove openvswitch. To do a deep clean, geneve, vxlan,
    # ip_gre and gre also need to be removed.
    for module in vport_geneve vport_vxlan vport_gre openvswitch; do
        unload_module_if_loaded $module
    done
}

function stop_ovs
{
    stop_ovs_dp

    SERVICE_NAME=openvswitch # Default value
    if is_fedora; then
        SERVICE_NAME=openvswitch
    elif is_ubuntu; then
        SERVICE_NAME=openvswitch-switch
    fi
    stop_service $SERVICE_NAME
}

function init_ovs {
    # clean up from previous (possibly aborted) runs
    # create required data files

    # Assumption: this is a dedicated test system and there is nothing
    # important in the ovs databases. We're going to trash them and
    # create new ones on each devstack run.

    base_dir=$DATA_DIR/ovs
    mkdir -p $base_dir

    for db in conf.db ; do
|
|
||||||
if [ -f $base_dir/$db ] ; then
|
|
||||||
rm -f $base_dir/$db
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
rm -f $base_dir/.*.db.~lock~
|
|
||||||
|
|
||||||
echo "Creating OVS Database"
|
|
||||||
ovsdb-tool create $base_dir/conf.db $OVS_VSWITCH_OCSSCHEMA_FILE
|
|
||||||
}
|
|
|
@ -1,668 +0,0 @@
#@IgnoreInspection BashAddShebang

# dragonflow.sh - Devstack extras script to install Dragonflow

# Enable DPDK for the Open vSwitch user space datapath
ENABLE_DPDK=${ENABLE_DPDK:-False}
DPDK_NUM_OF_HUGEPAGES=${DPDK_NUM_OF_HUGEPAGES:-1024}
DPDK_BIND_DRIVER=${DPDK_BIND_DRIVER:-igb_uio}
DPDK_NIC_NAME=${DPDK_NIC_NAME:-eth1}

# The git repo to use
OVS_REPO=${OVS_REPO:-https://github.com/openvswitch/ovs.git}
OVS_REPO_NAME=$(basename ${OVS_REPO} | cut -f1 -d'.')

# The branch to use from $OVS_REPO
OVS_BRANCH=${OVS_BRANCH:-branch-2.6}

# EXTERNAL_HOST_IP defaults to empty
EXTERNAL_HOST_IP=${EXTERNAL_HOST_IP:-}

DEFAULT_TUNNEL_TYPES="vxlan,geneve,gre"
DEFAULT_APPS_LIST="portbinding,l2,l3_proactive,dhcp,dnat,sg,portqos,classifier,tunneling,provider"

if [[ $ENABLE_DF_SFC == "True" ]]; then
    DEFAULT_APPS_LIST="$DEFAULT_APPS_LIST,fc,sfc"
fi

if is_service_enabled df-metadata ; then
    DEFAULT_APPS_LIST="$DEFAULT_APPS_LIST,metadata_service"
fi

if is_service_enabled q-trunk ; then
    DEFAULT_APPS_LIST="$DEFAULT_APPS_LIST,trunk"
fi

ENABLE_ACTIVE_DETECTION=${ENABLE_ACTIVE_DETECTION:-True}
if [[ "$ENABLE_ACTIVE_DETECTION" == "True" ]]; then
    DEFAULT_APPS_LIST="$DEFAULT_APPS_LIST,active_port_detection"
fi

ENABLE_LIVE_MIGRATION=${ENABLE_LIVE_MIGRATION:-True}
if [[ "$ENABLE_LIVE_MIGRATION" == "True" ]]; then
    DEFAULT_APPS_LIST="$DEFAULT_APPS_LIST,migration"
fi

if [[ ! -z ${EXTERNAL_HOST_IP} ]]; then
    DEFAULT_APPS_LIST="$DEFAULT_APPS_LIST,chassis_snat"
fi

ENABLE_AGING_APP=${ENABLE_AGING_APP:-True}
if [[ "$ENABLE_AGING_APP" == "True" ]]; then
    DEFAULT_APPS_LIST="aging,$DEFAULT_APPS_LIST"
fi

if is_service_enabled df-skydive ; then
    SKYDIVE_ENDPOINT=${SKYDIVE_ENDPOINT:-$SERVICE_HOST:8082}
fi

DF_APPS_LIST=${DF_APPS_LIST:-$DEFAULT_APPS_LIST}
TUNNEL_TYPES=${TUNNEL_TYPE:-$DEFAULT_TUNNEL_TYPES}

# OVS related pid files
#----------------------
OVS_DB_SERVICE="ovsdb-server"
OVS_VSWITCHD_SERVICE="ovs-vswitchd"
OVS_DIR="/var/run/openvswitch"
OVS_DB_PID=$OVS_DIR"/"$OVS_DB_SERVICE".pid"
OVS_VSWITCHD_PID=$OVS_DIR"/"$OVS_VSWITCHD_SERVICE".pid"
OVS_VSWITCH_OCSSCHEMA_FILE=${OVS_VSWITCH_OCSSCHEMA_FILE:-"/usr/share/openvswitch/vswitch.ovsschema"}

# Neutron notifier
ENABLE_NEUTRON_NOTIFIER=${ENABLE_NEUTRON_NOTIFIER:-"False"}

# Set value of TUNNEL_ENDPOINT_IP if unset
TUNNEL_ENDPOINT_IP=${TUNNEL_ENDPOINT_IP:-$HOST_IP}

ENABLE_DF_SFC=${ENABLE_DF_SFC:-"False"}
if [[ $ENABLE_DF_SFC == "True" ]]; then
    NEUTRON_SFC_DRIVERS=dragonflow
    NEUTRON_FLOWCLASSIFIER_DRIVERS=dragonflow
fi

ACTION=$1
STAGE=$2

# Pluggable DB drivers
#----------------------
function is_df_db_driver_selected {
    if [[ "$ACTION" == "stack" && "$STAGE" == "pre-install" ]]; then
        test -n "$NB_DRIVER_CLASS"
        return $?
    fi
    return 1
}

if is_service_enabled df-etcd ; then
    is_df_db_driver_selected && die $LINENO "More than one database service is set for Dragonflow."
    source $DEST/dragonflow/devstack/etcd_driver
    NB_DRIVER_CLASS="etcd_nb_db_driver"
    REMOTE_DB_PORT=${REMOTE_DB_PORT:-2379}
fi
if is_service_enabled df-ramcloud ; then
    is_df_db_driver_selected && die $LINENO "More than one database service is set for Dragonflow."
    source $DEST/dragonflow/devstack/ramcloud_driver
    NB_DRIVER_CLASS="ramcloud_nb_db_driver"
fi
if is_service_enabled df-zookeeper ; then
    is_df_db_driver_selected && die $LINENO "More than one database service is set for Dragonflow."
    source $DEST/dragonflow/devstack/zookeeper_driver
    NB_DRIVER_CLASS="zookeeper_nb_db_driver"
fi

if is_service_enabled df-cassandra ; then
    is_df_db_driver_selected && die $LINENO "More than one database service is set for Dragonflow."
    source $DEST/dragonflow/devstack/cassandra_driver
    NB_DRIVER_CLASS="cassandra_nb_db_driver"
fi

if is_service_enabled df-rethinkdb ; then
    is_df_db_driver_selected && die $LINENO "More than one database service is set for Dragonflow."
    source $DEST/dragonflow/devstack/rethinkdb_driver
    NB_DRIVER_CLASS="rethinkdb_nb_db_driver"
fi

if is_service_enabled df-redis ; then
    is_df_db_driver_selected && die $LINENO "More than one database service is set for Dragonflow."
    source $DEST/dragonflow/devstack/redis_driver
    NB_DRIVER_CLASS="redis_nb_db_driver"
    DF_REDIS_PUBSUB=${DF_REDIS_PUBSUB:-"True"}
else
    DF_REDIS_PUBSUB="False"
fi

# How to connect to the database storing the virtual topology.
REMOTE_DB_IP=${REMOTE_DB_IP:-$HOST_IP}
REMOTE_DB_PORT=${REMOTE_DB_PORT:-4001}
REMOTE_DB_HOSTS=${REMOTE_DB_HOSTS:-"$REMOTE_DB_IP:$REMOTE_DB_PORT"}

# is_df_db_driver_selected returns a meaningful value only during
# pre-install, so only perform this check at that stage.
if [[ "$ACTION" == "stack" && "$STAGE" == "pre-install" ]]; then
    is_df_db_driver_selected || die $LINENO "No database service is set for Dragonflow."
fi

# Pub/Sub Service
#----------------
# To be called to initialise params common to all pubsub drivers
function init_pubsub {
    DF_PUB_SUB="True"
}

if is_service_enabled df-zmq-publisher-service ; then
    init_pubsub
    enable_service df-publisher-service
    source $DEST/dragonflow/devstack/zmq_pubsub_driver
fi

if is_service_enabled df-etcd-pubsub-service ; then
    init_pubsub
    source $DEST/dragonflow/devstack/etcd_pubsub_driver
fi

if [[ "$DF_REDIS_PUBSUB" == "True" ]]; then
    init_pubsub
    source $DEST/dragonflow/devstack/redis_pubsub_driver
fi

# Dragonflow installation uses functions from these files
source $TOP_DIR/lib/neutron_plugins/ovs_base

if [[ "$ENABLE_DPDK" == "True" ]]; then
    source $DEST/dragonflow/devstack/ovs_dpdk_setup.sh
else
    source $DEST/dragonflow/devstack/ovs_setup.sh
fi

# Entry Points
# ------------

function configure_df_metadata_service {
    if is_service_enabled df-metadata ; then
        NOVA_CONF=${NOVA_CONF:-"/etc/nova/nova.conf"}
        iniset $NOVA_CONF neutron service_metadata_proxy True
        iniset $NOVA_CONF neutron metadata_proxy_shared_secret $METADATA_PROXY_SHARED_SECRET
        iniset $NEUTRON_CONF DEFAULT metadata_proxy_shared_secret $METADATA_PROXY_SHARED_SECRET
        iniset $DRAGONFLOW_CONF df_metadata ip "$DF_METADATA_SERVICE_IP"
        iniset $DRAGONFLOW_CONF df_metadata port "$DF_METADATA_SERVICE_PORT"
        iniset $DRAGONFLOW_CONF df_metadata metadata_interface "$DF_METADATA_SERVICE_INTERFACE"
        pushd $DRAGONFLOW_DIR
        # TODO(snapiri) When we add more switch backends, this should be conditional
        tools/ovs_metadata_service_deployment.sh install $INTEGRATION_BRIDGE $DF_METADATA_SERVICE_INTERFACE $DF_METADATA_SERVICE_IP $DF_METADATA_SERVICE_PORT
        popd
    fi
}

function configure_qos {
    Q_SERVICE_PLUGIN_CLASSES+=",qos"
    Q_ML2_PLUGIN_EXT_DRIVERS+=",qos"
    iniset /$Q_PLUGIN_CONF_FILE ml2 extension_drivers "$Q_ML2_PLUGIN_EXT_DRIVERS"
}

function configure_trunk {
    Q_SERVICE_PLUGIN_CLASSES+=",trunk"
    Q_ML2_PLUGIN_EXT_DRIVERS+=",trunk"
    iniset /$Q_PLUGIN_CONF_FILE ml2 extension_drivers "$Q_ML2_PLUGIN_EXT_DRIVERS"
}

function configure_bgp {
    setup_develop $DEST/neutron-dynamic-routing
    _neutron_service_plugin_class_add df-bgp
    # Since we are using a plugin outside neutron-dynamic-routing, we need to
    # specify api_extensions_path explicitly.
    iniset $NEUTRON_CONF DEFAULT api_extensions_path "$DEST/neutron-dynamic-routing/neutron_dynamic_routing/extensions"
}

function configure_sfc {
    setup_develop $DEST/networking-sfc
}

function init_neutron_sample_config {
    # NOTE: We must make sure that the neutron config file exists before
    # going further with the ovs setup
    if [ ! -f $NEUTRON_CONF ] ; then
        sudo install -d -o $STACK_USER $NEUTRON_CONF_DIR
        pushd $NEUTRON_DIR
        tools/generate_config_file_samples.sh
        popd
        cp $NEUTRON_DIR/etc/neutron.conf.sample $NEUTRON_CONF
    fi
}

function configure_df_skydive {
    iniset $DRAGONFLOW_CONF df_skydive analyzer_endpoint "$SKYDIVE_ENDPOINT"
    if [[ -n "$DF_SKYDIVE_USER" ]]; then
        iniset $DRAGONFLOW_CONF df_skydive user "$DF_SKYDIVE_USER"
    fi
    local DF_SKYDIVE_PASSWORD=${DF_SKYDIVE_PASSWORD:-$ADMIN_PASSWORD}
    iniset $DRAGONFLOW_CONF df_skydive password "$DF_SKYDIVE_PASSWORD"
    if [[ -n "$DF_SKYDIVE_UPDATE_INTERVAL" ]]; then
        iniset $DRAGONFLOW_CONF df_skydive update_interval "$DF_SKYDIVE_UPDATE_INTERVAL"
    fi
}


function configure_df_plugin {
    echo "Configuring Neutron for Dragonflow"

    # Generate the DF config file
    pushd $DRAGONFLOW_DIR
    tools/generate_config_file_samples.sh
    popd
    mkdir -p $Q_PLUGIN_EXTRA_CONF_PATH
    sudo mkdir -p /var/run/dragonflow
    sudo chown $STACK_USER /var/run/dragonflow
    cp $DRAGONFLOW_DIR/etc/dragonflow.ini.sample $DRAGONFLOW_CONF
    cp $DRAGONFLOW_DIR/etc/dragonflow_datapath_layout.yaml $DRAGONFLOW_DATAPATH

    if is_service_enabled q-svc ; then
        if is_service_enabled q-qos ; then
            configure_qos
        fi

        if [[ "$DR_MODE" == "df-bgp" ]]; then
            configure_bgp
        fi

        if is_service_enabled q-trunk ; then
            configure_trunk
        fi

        if [[ "$ENABLE_DF_SFC" == "True" ]]; then
            configure_sfc
        fi

        # NOTE(gsagie) needed for tempest
        export NETWORK_API_EXTENSIONS=$(python -c \
            'from dragonflow.common import extensions ;\
            print ",".join(extensions.SUPPORTED_API_EXTENSIONS)')

        # Set neutron-server related settings
        iniset $DRAGONFLOW_CONF df monitor_table_poll_time "$DF_MONITOR_TABLE_POLL_TIME"
        iniset $DRAGONFLOW_CONF df publisher_rate_limit_timeout "$PUBLISHER_RATE_LIMIT_TIMEOUT"
        iniset $DRAGONFLOW_CONF df publisher_rate_limit_count "$PUBLISHER_RATE_LIMIT_COUNT"
        iniset $NEUTRON_CONF DEFAULT core_plugin "$Q_PLUGIN_CLASS"
        iniset $NEUTRON_CONF DEFAULT service_plugins "$Q_SERVICE_PLUGIN_CLASSES"

        iniset $DRAGONFLOW_CONF df auto_detect_port_behind_port "$DF_AUTO_DETECT_PORT_BEHIND_PORT"
        iniset $DRAGONFLOW_CONF df_loadbalancer auto_enable_vip_ports "$DF_LBAAS_AUTO_ENABLE_VIP_PORTS"

        if is_service_enabled q-dhcp ; then
            iniset $DRAGONFLOW_CONF df use_centralized_ipv6_DHCP "True"
        else
            iniset $NEUTRON_CONF DEFAULT dhcp_agent_notification "False"
        fi

        if [[ "$DF_RUNNING_IN_GATE" == "True" ]]; then
            iniset $NEUTRON_CONF quotas default_quota "-1"
            iniset $NEUTRON_CONF quotas quota_network "-1"
            iniset $NEUTRON_CONF quotas quota_subnet "-1"
            iniset $NEUTRON_CONF quotas quota_port "-1"
            iniset $NEUTRON_CONF quotas quota_router "-1"
            iniset $NEUTRON_CONF quotas quota_floatingip "-1"
            iniset $NEUTRON_CONF quotas quota_security_group_rule "-1"
        fi

        # Load dragonflow.ini into neutron-server
        neutron_server_config_add_new $DRAGONFLOW_CONF
    fi

    iniset $DRAGONFLOW_CONF df remote_db_hosts "$REMOTE_DB_HOSTS"
    iniset $DRAGONFLOW_CONF df nb_db_class "$NB_DRIVER_CLASS"
    iniset $DRAGONFLOW_CONF df enable_neutron_notifier "$ENABLE_NEUTRON_NOTIFIER"
    iniset $DRAGONFLOW_CONF df enable_dpdk "$ENABLE_DPDK"
    iniset $DRAGONFLOW_CONF df management_ip "$HOST_IP"
    iniset $DRAGONFLOW_CONF df local_ip "$TUNNEL_ENDPOINT_IP"
    iniset $DRAGONFLOW_CONF df tunnel_types "$TUNNEL_TYPES"
    iniset $DRAGONFLOW_CONF df integration_bridge "$INTEGRATION_BRIDGE"
    iniset $DRAGONFLOW_CONF df apps_list "$DF_APPS_LIST"
    iniset $DRAGONFLOW_CONF df_l2_app l2_responder "$DF_L2_RESPONDER"
    iniset $DRAGONFLOW_CONF df enable_df_pub_sub "$DF_PUB_SUB"
    iniset $DRAGONFLOW_CONF df_zmq ipc_socket "$DF_ZMQ_IPC_SOCKET"
    if [[ ! -z ${EXTERNAL_HOST_IP} ]]; then
        iniset $DRAGONFLOW_CONF df external_host_ip "$EXTERNAL_HOST_IP"
        iniset $DRAGONFLOW_CONF df_snat_app external_network_bridge "$PUBLIC_BRIDGE"
    fi

    iniset $DRAGONFLOW_CONF df enable_selective_topology_distribution \
        "$DF_SELECTIVE_TOPO_DIST"
    configure_df_metadata_service

    if is_service_enabled df-skydive ; then
        configure_df_skydive
    fi
}

function install_zeromq {
    if is_fedora; then
        install_package zeromq
    elif is_ubuntu; then
        install_package libzmq3-dev
    elif is_suse; then
        install_package libzmq3-dev
    fi
    # Necessary directory for the socket location.
    sudo mkdir -p /var/run/openstack
    sudo chown $STACK_USER /var/run/openstack
}

function install_df {
    install_zeromq

    if function_exists nb_db_driver_install_server; then
        nb_db_driver_install_server
    fi

    if function_exists nb_db_driver_install_client; then
        nb_db_driver_install_client
    fi

    setup_package $DRAGONFLOW_DIR
}

# The following returns "0" when the service is live.
# Zero (0) is considered a TRUE value in bash.
function ovs_service_status {
    TEMP_PID=$OVS_DIR"/"$1".pid"
    if [ -e $TEMP_PID ]; then
        TEMP_PID_VALUE=$(cat $TEMP_PID 2>/dev/null)
        if [ -e /proc/$TEMP_PID_VALUE ]; then
            return 0
        fi
    fi
    # The service is dead
    return 1
}

function is_module_loaded {
    lsmod | grep -q $1
}

function load_module_if_not_loaded {
    local module=$1
    local fatal=$2

    if is_module_loaded $module; then
        echo "Module already loaded: $module"
    else
        if [ "$(trueorfalse True fatal)" == "True" ]; then
            sudo modprobe $module || (die $LINENO "FAILED TO LOAD $module")
        else
            sudo modprobe $module || (echo "FAILED TO LOAD $module")
        fi
    fi
}

function unload_module_if_loaded {
    local module=$1

    if is_module_loaded $module; then
        sudo rmmod $module || (die $LINENO "FAILED TO UNLOAD $module")
    else
        echo "Module is not loaded: $module"
    fi
}

# cleanup_nb_db() - Clean all the keys in the northbound database
function cleanup_nb_db {
    # Clean the db only on the master node
    if is_service_enabled q-svc ; then
        if [[ "$DF_Q_SVC_MASTER" == "True" ]]; then
            df-db clean
        fi
    fi
}

# init_nb_db() - Create all the tables in the northbound database
function init_nb_db {
    # Init the db only on the master node
    if is_service_enabled q-svc ; then
        if [[ "$DF_Q_SVC_MASTER" == "True" ]]; then
            df-db init
        fi
    fi
}

# drop_nb_db() - Drop all the tables in the northbound database
function drop_nb_db {
    # Drop the db only on the master node
    if is_service_enabled q-svc ; then
        if [[ "$DF_Q_SVC_MASTER" == "True" ]]; then
            df-db dropall
        fi
    fi
}

# start_df() - Start running processes, including screen
function start_df {
    echo "Starting Dragonflow"

    if is_service_enabled df-controller ; then
        sudo ovs-vsctl --no-wait set-controller $INTEGRATION_BRIDGE tcp:127.0.0.1:6633
        run_process df-controller "$DF_LOCAL_CONTROLLER_BINARY --config-file $NEUTRON_CONF --config-file $DRAGONFLOW_CONF"
    fi
}

# stop_df() - Stop running processes (non-screen)
function stop_df {
    if is_service_enabled df-controller ; then
        stop_process df-controller
    fi

    cleanup_nb_db
    drop_nb_db

    if function_exists nb_db_driver_stop_server; then
        nb_db_driver_stop_server
    fi
}

function disable_libvirt_apparmor {
    if ! sudo aa-status --enabled ; then
        return 0
    fi
    # NOTE(arosen): This is used as a work around to allow newer versions
    # of libvirt to work with ovs configured ports. See LP#1466631.
    # Requires the apparmor-utils package.
    install_package apparmor-utils
    # Disable apparmor for libvirtd
    sudo aa-complain /etc/apparmor.d/usr.sbin.libvirtd
}

function verify_os_ken_version {
    # Verify that os_ken is installed and its version is at least 0.3.0.
    # Does not return on failure.
    OS_KEN_VER_LINE=`osken --version 2>&1 | head -n 1`
    OS_KEN_VER=`echo $OS_KEN_VER_LINE | cut -d' ' -f2`
    echo "Found os_ken version $OS_KEN_VER ($OS_KEN_VER_LINE)"
    if [ `vercmp_numbers "$OS_KEN_VER" "0.3.0"` -lt 0 ]; then
        die $LINENO "os_ken version $OS_KEN_VER too low. Version 0.3.0+ is required for Dragonflow."
    fi
}

function start_pubsub_service {
    if is_service_enabled df-publisher-service ; then
        echo "Starting Dragonflow publisher service"
        run_process df-publisher-service "$DF_PUBLISHER_SERVICE_BINARY --config-file $NEUTRON_CONF --config-file $DRAGONFLOW_CONF --config-file $DRAGONFLOW_PUBLISHER_CONF"
    fi
}

function stop_pubsub_service {
    if is_service_enabled df-publisher-service ; then
        stop_process df-publisher-service
    fi
}

function start_df_metadata_agent {
    if is_service_enabled df-metadata ; then
        echo "Starting Dragonflow metadata service"
        run_process df-metadata "$DF_METADATA_SERVICE --config-file $NEUTRON_CONF --config-file $DRAGONFLOW_CONF"
    fi
}

function stop_df_metadata_agent {
    if is_service_enabled df-metadata ; then
        echo "Stopping Dragonflow metadata service"
        stop_process df-metadata
        pushd $DRAGONFLOW_DIR
        # TODO(snapiri) When we add more switch backends, this should be conditional
        tools/ovs_metadata_service_deployment.sh remove $INTEGRATION_BRIDGE $DF_METADATA_SERVICE_INTERFACE
        popd
    fi
}

function start_df_bgp_service {
    if is_service_enabled df-bgp ; then
        echo "Starting Dragonflow BGP dynamic routing service"
        run_process df-bgp "$DF_BGP_SERVICE --config-file $NEUTRON_CONF --config-file $DRAGONFLOW_CONF"
    fi
}

function start_df_skydive {
    if is_service_enabled df-skydive ; then
        echo "Starting Dragonflow skydive service"
        run_process df-skydive "$DF_SKYDIVE_SERVICE --config-file $NEUTRON_CONF --config-file $DRAGONFLOW_CONF"
    fi
}

function stop_df_skydive {
    if is_service_enabled df-skydive ; then
        echo "Stopping Dragonflow skydive service"
        stop_process df-skydive
    fi
}

function setup_rootwrap_filters {
    if [[ "$DF_INSTALL_DEBUG_ROOTWRAP_CONF" == "True" ]]; then
        echo "Adding rootwrap filters"
        sudo mkdir -p -m 755 $NEUTRON_CONF_DIR/etc/rootwrap.d
        sudo cp -p $DRAGONFLOW_DIR/etc/rootwrap.d/* $NEUTRON_CONF_DIR/etc/rootwrap.d
    fi
}

function stop_df_bgp_service {
    if is_service_enabled df-bgp ; then
        echo "Stopping Dragonflow BGP dynamic routing service"
        stop_process df-bgp
    fi
}

function handle_df_stack_install {
    if [[ "$OFFLINE" != "True" ]]; then
        if ! is_neutron_enabled ; then
            install_neutron
        fi
        install_df
        if [[ "$DF_REINSTALL_OVS" == "True" ]]; then
            install_ovs
        fi
    fi
    setup_develop $DRAGONFLOW_DIR
    if [[ "$DF_REINSTALL_OVS" == "True" ]]; then
        init_ovs
        # We have to start at install time, because Neutron's post-config
        # phase runs ovs-vsctl.
        start_ovs
    fi
    if function_exists nb_db_driver_start_server; then
        nb_db_driver_start_server
    fi
    disable_libvirt_apparmor
}

function handle_df_stack_post_install {
    init_neutron_sample_config
    configure_ovs
    configure_df_plugin
    # Configure the nb db driver
    if function_exists nb_db_driver_configure; then
        nb_db_driver_configure
    fi
    # Initialize the nb db
    init_nb_db

    if [[ "$DF_PUB_SUB" == "True" ]]; then
        # Implemented by the pub/sub plugin
        configure_pubsub_service_plugin
        # Defaults, in case no Pub/Sub service was selected
        if [ -z $PUB_SUB_DRIVER ]; then
            die $LINENO "pub-sub enabled, but no pub-sub driver selected"
        fi
    fi

    if is_service_enabled nova; then
        configure_neutron_nova
    fi

    if is_service_enabled df-publisher-service; then
        start_pubsub_service
    fi

    start_df
    start_df_metadata_agent
    start_df_bgp_service
    setup_rootwrap_filters
    start_df_skydive
    install_package jq
}

function handle_df_stack {
    if [[ "$STAGE" == "install" ]]; then
        handle_df_stack_install
    elif [[ "$STAGE" == "post-config" ]]; then
        handle_df_stack_post_install
    fi
}

function handle_df_unstack {
    stop_df_skydive
    stop_df_bgp_service
    stop_df_metadata_agent
    stop_df
    if function_exists nb_db_driver_clean; then
        nb_db_driver_clean
    fi
    if [[ "$DF_REINSTALL_OVS" == "True" ]]; then
        cleanup_ovs
        stop_ovs
        uninstall_ovs
    fi
    if is_service_enabled df-publisher-service; then
        stop_pubsub_service
    fi
}


# Main loop
if [[ "$Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER" == "True" ]]; then

    if is_plugin_enabled octavia; then
        # Only define this function if dragonflow is used
        function octavia_create_network_interface_device {
            INTERFACE=$1
            MGMT_PORT_ID=$2
            MGMT_PORT_MAC=$3
            if [ -z "$INTERFACE" ]; then
                die "octavia_create_network_interface_device for dragonflow: Interface not given (1st parameter)"
            fi
            if [ -z "$MGMT_PORT_ID" ]; then
                die "octavia_create_network_interface_device for dragonflow: Management port ID not given (2nd parameter)"
            fi
            if [ -z "$MGMT_PORT_MAC" ]; then
                die "octavia_create_network_interface_device for dragonflow: Management port MAC not given (3rd parameter)"
            fi
            sudo ovs-vsctl -- --may-exist add-port $INTEGRATION_BRIDGE $INTERFACE \
                -- set Interface $INTERFACE type=internal \
                -- set Interface $INTERFACE external-ids:iface-status=active \
                -- set Interface $INTERFACE external-ids:attached-mac=$MGMT_PORT_MAC \
                -- set Interface $INTERFACE external-ids:iface-id=$MGMT_PORT_ID \
                -- set Interface $INTERFACE external-ids:skip_cleanup=true
        }

        function octavia_delete_network_interface_device {
            : # Do nothing
        }
    fi

    if [[ "$ACTION" == "stack" ]]; then
        handle_df_stack
    elif [[ "$ACTION" == "unstack" ]]; then
        handle_df_unstack
    fi
fi
@ -1,85 +0,0 @@
#!/bin/bash
#
#
# ``plugin.sh`` calls the following methods in the sourced driver:
#
# - nb_db_driver_install_server
# - nb_db_driver_install_client
# - nb_db_driver_start_server
# - nb_db_driver_stop_server
# - nb_db_driver_clean
# - nb_db_driver_configure

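The hook names above form the whole contract between ``plugin.sh`` and a pluggable northbound-DB driver: ``plugin.sh`` guards each call with ``function_exists``, so a driver only needs to define the hooks it cares about. A minimal no-op driver satisfying the contract could look like the sketch below (the ``echo`` messages and empty bodies are illustrative, not part of the original tree):

```shell
# Sketch of a minimal northbound-DB driver implementing the plugin.sh
# contract. Every hook is optional because plugin.sh checks
# function_exists before calling it.

function nb_db_driver_install_server {
    echo "example driver: nothing to install on the server side"
}

function nb_db_driver_install_client {
    echo "example driver: nothing to install on the client side"
}

function nb_db_driver_start_server {
    echo "example driver: no server process to start"
}

function nb_db_driver_stop_server {
    echo "example driver: no server process to stop"
}

function nb_db_driver_clean {
    :   # no state to clean up
}

function nb_db_driver_configure {
    :   # no extra configuration needed
}
```

Such a file would be dropped into ``devstack/`` and sourced from the driver-selection block in ``plugin.sh``, the same way ``etcd_driver`` and ``redis_driver`` are.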
RAMCLOUD=$DEST/ramcloud
|
|
||||||
RAMCLOUD_LIB=$RAMCLOUD/lib
|
|
||||||
RAMCLOUD_BIN=$RAMCLOUD/bin
|
|
||||||
RAMCLOUD_MASTER_IP=${RAMCLOUD_MASTER_IP:-"$HOST_IP"}
|
|
||||||
RAMCLOUD_COORDINATOR_IP=${RAMCLOUD_COORDINATOR_IP:-"$HOST_IP"}
|
|
||||||
RAMCLOUD_MASTER_PORT=${RAMCLOUD_MASTER_PORT:-'21221'}
|
|
||||||
RAMCLOUD_COORDINATOR_PORT=${RAMCLOUD_COORDINATOR_PORT:-'21222'}
|
|
||||||
RAMCLOUD_TRANSPORT=${RAMCLOUD_TRANSPORT:-'fast+udp'}
|
|
||||||
|
|
||||||
LIB_BOOST_MAJOR_VERSION=1
|
|
||||||
LIB_BOOST_MINOR_VERSION=54
function nb_db_driver_install_server {
    if is_service_enabled df-rcmaster ; then
        echo "Installing Dependencies"
        if is_ubuntu; then
            boost_program=libboost-program-options"$LIB_BOOST_MAJOR_VERSION"."$LIB_BOOST_MINOR_VERSION"
            boost_filesystem=libboost-filesystem"$LIB_BOOST_MAJOR_VERSION"."$LIB_BOOST_MINOR_VERSION"
            protobuf_lib=libprotobuf8
        elif is_suse || is_oraclelinux; then
            boost_program=libboost_program_options"$LIB_BOOST_MAJOR_VERSION"_"$LIB_BOOST_MINOR_VERSION"_0
            boost_filesystem=libboost_filesystem"$LIB_BOOST_MAJOR_VERSION"_"$LIB_BOOST_MINOR_VERSION"_0
            protobuf_lib=libprotobuf8
        elif is_fedora; then
            if [[ "$os_RELEASE" -ge "21" ]]; then
                echo "Boost version 54 is not available for Fedora > 20"
                # TODO(gampel) add support for Fedora > 20
            else
                boost_program=boost_program_options"$LIB_BOOST_MAJOR_VERSION"_"$LIB_BOOST_MINOR_VERSION"_0
                boost_filesystem=boost_filesystem"$LIB_BOOST_MAJOR_VERSION"_"$LIB_BOOST_MINOR_VERSION"_0
            fi
            protobuf_lib=protobuf
        fi
        install_package "$boost_program" "$boost_filesystem" "$protobuf_lib"
        echo "Installing RAMCloud server"
        git_clone https://github.com/dsivov/RamCloudBin.git $RAMCLOUD
        echo export LD_LIBRARY_PATH="$RAMCLOUD_LIB":"$LD_LIBRARY_PATH" | tee -a $HOME/.bashrc
    fi
}

function nb_db_driver_install_client {
    echo "Installing RAMCloud client"
    git_clone https://github.com/dsivov/RamCloudBin.git $RAMCLOUD
}

function nb_db_driver_start_server {
    if is_service_enabled df-rccoordinator ; then
        $RAMCLOUD_BIN/coordinator -C ${RAMCLOUD_TRANSPORT}:host=${RAMCLOUD_COORDINATOR_IP},port=${RAMCLOUD_COORDINATOR_PORT} &> /dev/null || true &
    fi
    if is_service_enabled df-rcmaster ; then
        sleep 10
        $RAMCLOUD_BIN/server -L ${RAMCLOUD_TRANSPORT}:host=${RAMCLOUD_MASTER_IP},port=${RAMCLOUD_MASTER_PORT} -C ${RAMCLOUD_TRANSPORT}:host=${RAMCLOUD_COORDINATOR_IP},port=${RAMCLOUD_COORDINATOR_PORT} &> /dev/null || true &
        echo "Sleeping for 20 secs to give the DB time to start"
        sleep 20
    fi
}

function nb_db_driver_stop_server {
    if is_service_enabled df-rccoordinator ; then
        sudo killall coordinator &> /dev/null || true
    fi
    if is_service_enabled df-rcmaster ; then
        sudo killall server &> /dev/null || true
    fi
}

function nb_db_driver_clean {
    sudo rm -rf $RAMCLOUD
}

function nb_db_driver_configure {
    :
}
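Each of the deleted driver files implements the same northbound-database driver contract: DevStack's ``plugin.sh`` sources the selected driver and then calls its ``nb_db_driver_*`` hooks. A minimal sketch of that dispatch pattern (the stub bodies and the sourcing comment below are illustrative assumptions, not the retired ``plugin.sh`` code):

```shell
#!/bin/bash
# Stubs standing in for a sourced driver file such as ramcloud_driver.
# A real plugin.sh would instead do something like:
#   source $DRAGONFLOW_DIR/devstack/$NB_DRIVER_CLASS
nb_db_driver_install_server() { echo "install server"; }
nb_db_driver_start_server()   { echo "start server"; }

# Dispatch: call each hook of the driver contract in order.
for hook in nb_db_driver_install_server nb_db_driver_start_server; do
    "$hook"
done
```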
@ -1,174 +0,0 @@
#!/bin/bash
#
#
# ``plugin.sh`` calls the following methods in the sourced driver:
#
# - nb_db_driver_install_server
# - nb_db_driver_install_client
# - nb_db_driver_start_server
# - nb_db_driver_stop_server
# - nb_db_driver_clean
# - nb_db_driver_configure

REDIS_VERSION=3.0.6

function _redis_env {
    # REMOTE_DB_* are initialized after sourcing
    export REDIS_SERVER_LIST=$(echo $REMOTE_DB_HOSTS | sed 's/,/ /g')
    export REMOTE_PORT_START=$(echo $REDIS_SERVER_LIST | awk '{print $1}' | cut -d: -f2)
    export REDIS_SERVER_IPS=$(echo $REDIS_SERVER_LIST | awk -F: 'BEGIN {RS=" "} { print $1 }' | sort | uniq)
    export NODE_COUNT_END=5
    export REMOTE_PORT_END=`expr $REMOTE_PORT_START + $NODE_COUNT_END`
    export REDIS_PORT=`seq $REMOTE_PORT_START $REMOTE_PORT_END`
}

function _configure_redis {
    _redis_env
    pushd /opt/redis3/conf
    sudo sh -c "grep -q ulimit /etc/profile ||
        echo ulimit -SHn 40960 >> /etc/profile"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse"
    sudo sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
    sudo sh -c "echo 2048 > /proc/sys/net/core/somaxconn"
    sudo sh -c "grep -q vm.overcommit_memory /etc/sysctl.conf ||
        echo vm.overcommit_memory = 1 >> /etc/sysctl.conf"
    sudo sh -c "sysctl -p"
    for redisserver in $REDIS_SERVER_IPS; do
        for port in $REDIS_PORT; do
            echo "Configuring Redis instance: "$redisserver:$port
            cp redis.conf /opt/redis3/conf/redis-$port.conf
            sed -i "s/6379/$port/g" redis-$port.conf
            sed -i "s/daemonize no/daemonize yes/g" redis-$port.conf
            sed -i "s/dump.rdb/dump-$port.rdb/g" redis-$port.conf
            sed -i "s/# cluster-enabled yes/cluster-enabled yes/g" redis-$port.conf
            sed -i "s/# cluster-config-file/cluster-config-file/g" redis-$port.conf
            sed -i "s/pubsub 32mb 8mb 60/pubsub 0 0 0/g" redis-$port.conf
        done
    done
    popd
}

function nb_db_driver_install_server {
    if [ "$DF_REDIS_INSTALL_FROM_RUBY" == "True" ]; then
        echo "Installing Redis cluster"
        if [ ! -f "$DEST/redis/redis-$REDIS_VERSION/redis" ]; then
            mkdir -p $DEST/redis
            if [ ! -f "$DEST/redis/redis-$REDIS_VERSION.tar.gz" ]; then
                wget http://download.redis.io/releases/redis-$REDIS_VERSION.tar.gz -O $DEST/redis/redis-$REDIS_VERSION.tar.gz
            fi
            tar xzvf $DEST/redis/redis-$REDIS_VERSION.tar.gz -C $DEST/redis
            pushd $DEST/redis/redis-$REDIS_VERSION
            make

            cd src
            sudo make PREFIX=/opt/redis3 install
            sudo mkdir -p /opt/redis3/conf
            sudo cp $DEST/redis/redis-$REDIS_VERSION/redis.conf /opt/redis3/conf

            sudo ln -sf /opt/redis3/conf /etc/redis3
            sudo cp $DEST/redis/redis-$REDIS_VERSION/src/redis-trib.rb /opt/redis3/bin/

            sudo chown -hR $STACK_USER /opt/redis3/
            if is_ubuntu || is_fedora; then
                _configure_redis
            fi

            install_package -y ruby

            if ! sudo gem list redis | grep -q redis; then
                sudo gem source -a $DF_RUBY_SOURCE_ADD
                if [ -n "$DF_RUBY_SOURCE_REMOVE" ]; then
                    sudo gem source -r $DF_RUBY_SOURCE_REMOVE
                fi
                sudo gem install redis
            fi

            popd
        fi
    else
        if is_ubuntu; then
            install_package -y redis-server
        elif is_fedora; then
            install_package -y redis
        fi
    fi
}

function nb_db_driver_install_client {
    sudo pip install "redis>=3.3.2"
    sudo pip install "hiredis>=1.0.0"
}

function nb_db_driver_start_server {
    _redis_env
    create=
    if is_service_enabled df-redis-server ; then
        if is_ubuntu || is_fedora; then
            # determine whether the cluster needs to be (re)created
            for redisserver in $REDIS_SERVER_LIST; do
                for port in $REDIS_PORT; do
                    test -f /opt/redis3/conf/nodes-$port.conf || { create=true; break 2 ; }
                done
            done
            # start redis
            for redisserver in $REDIS_SERVER_IPS; do
                for port in $REDIS_PORT; do
                    echo $redisserver:$port
                    pushd /opt/redis3/
                    [ "$create" ] && {
                        sudo rm nodes* -rf
                    }
                    ./bin/redis-server ./conf/redis-$port.conf &
                    redis_cluster="$redis_cluster"" ""$redisserver:$port"
                    popd
                done
            done
            # create cluster
            [ "$create" ] && {
                echo "Creating the Redis cluster: "$redis_cluster
                pushd /opt/redis3/bin/
                echo "yes" | sudo ./redis-trib.rb create --replicas 1 $redis_cluster
                popd
            }
        fi
    fi
}

function nb_db_driver_stop_server {
    _redis_env
    if is_service_enabled df-redis-server ; then
        if is_ubuntu || is_fedora; then
            for redisserver in $REDIS_SERVER_IPS; do
                for port in $REDIS_PORT; do
                    echo "Shutting down Redis: "$redisserver:$port
                    sudo /opt/redis3/bin/redis-cli -p $port shutdown
                    pushd /opt/redis3/
                    sudo rm -rf nodes*.conf
                    sudo rm -rf dump*.rdb
                    sudo netstat -apn | grep $port | awk '{print $7}' | cut -d '/' -f1 | xargs sudo kill -9
                    popd
                done
            done
        fi
    fi
}

function nb_db_driver_clean {
    sudo rm -rf /opt/redis3
    if [ "$DF_REDIS_INSTALL_FROM_RUBY" == "True" ]; then
        sudo gem uninstall redis
    else
        if is_ubuntu; then
            uninstall_package -y redis-server
        elif is_fedora; then
            uninstall_package -y redis
        fi
    fi
}

function nb_db_driver_configure {
    :
}
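The ``_redis_env`` helper above derives the server list, the set of IPs, and a six-port range from ``REMOTE_DB_HOSTS``. That parsing can be exercised standalone (the ``REMOTE_DB_HOSTS`` value below is a made-up example, not a real deployment):

```shell
#!/bin/bash
# Hypothetical host list in the same "ip:port,ip:port" format the driver expects.
REMOTE_DB_HOSTS="192.0.2.10:6379,192.0.2.11:6379"

REDIS_SERVER_LIST=$(echo $REMOTE_DB_HOSTS | sed 's/,/ /g')   # split on commas
REMOTE_PORT_START=$(echo $REDIS_SERVER_LIST | awk '{print $1}' | cut -d: -f2)
REDIS_SERVER_IPS=$(echo $REDIS_SERVER_LIST | awk -F: 'BEGIN {RS=" "} { print $1 }' | sort | uniq)
NODE_COUNT_END=5
REMOTE_PORT_END=$(expr $REMOTE_PORT_START + $NODE_COUNT_END)  # six ports total

echo "ports $REMOTE_PORT_START..$REMOTE_PORT_END on:" $REDIS_SERVER_IPS
```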
@ -1,7 +0,0 @@
#!/bin/bash

function configure_pubsub_service_plugin {
    NEUTRON_CONF=${NEUTRON_CONF:-"/etc/neutron/neutron.conf"}
    PUB_SUB_DRIVER=${PUB_SUB_DRIVER:-"redis_db_pubsub_driver"}
    iniset $DRAGONFLOW_CONF df pub_sub_driver $PUB_SUB_DRIVER
}
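``configure_pubsub_service_plugin`` relies on DevStack's ``iniset`` helper to write ``pub_sub_driver`` into the ``[df]`` section of the Dragonflow config. A rough standalone approximation of that effect against a throwaway file (a sketch, not DevStack's actual ``iniset`` implementation):

```shell
#!/bin/bash
conf=$(mktemp)
printf '[df]\n' > "$conf"

# Naive iniset: append the option directly under the section header.
section=df
option=pub_sub_driver
value=redis_db_pubsub_driver
sed -i "/^\[$section\]$/a $option = $value" "$conf"

grep "$option" "$conf"
```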
@ -1,88 +0,0 @@
#!/bin/bash
#
#
# ``plugin.sh`` calls the following methods in the sourced driver:
#
# - nb_db_driver_install_server
# - nb_db_driver_install_client
# - nb_db_driver_start_server
# - nb_db_driver_stop_server
# - nb_db_driver_clean
RETHINKDB_IP=${RETHINKDB_IP:-"$HOST_IP"}
RETHINKDB_PORT=${RETHINKDB_PORT:-'4001'}

function nb_db_driver_install_server {
    if is_service_enabled df-rethinkdb-server ; then
        echo "Installing RethinkDB Server"
        if is_ubuntu || is_fedora; then
            if is_ubuntu; then
                source /etc/lsb-release && echo "deb http://download.rethinkdb.com/apt $DISTRIB_CODENAME main" | sudo tee /etc/apt/sources.list.d/rethinkdb.list
                wget -qO- http://download.rethinkdb.com/apt/pubkey.gpg | sudo apt-key add -
                sudo apt-get update
                sudo apt-get install rethinkdb
            elif is_fedora; then
                sudo wget https://download.rethinkdb.com/centos/7/$(uname -m)/rethinkdb.repo \
                    -O /etc/yum.repos.d/rethinkdb.repo
                sudo dnf install -y rethinkdb
            fi
            echo "Configuring RethinkDB server"
            sudo sh -c "cat > /etc/rethinkdb/instances.d/dragonflow.conf" << EOF
bind=all
driver-port=${RETHINKDB_PORT}
EOF
            echo "Starting RethinkDB"
            start_service rethinkdb
            until pids=$(pidof rethinkdb); do
                echo "sleep 1, waiting for rethinkdb to start"
                sleep 1
            done
            echo "sleep 5, waiting for rethinkdb to start"
            echo 'Creating dragonflow database'
            sleep 5
            python -c "
import rethinkdb as r
r.connect('$RETHINKDB_IP', $RETHINKDB_PORT).repl()
try:
    r.db_drop('dragonflow').run()
except r.errors.ReqlOpFailedError:
    pass  # Database probably doesn't exist
r.db_create('dragonflow').run()
"
            stop_service rethinkdb
        else
            die $LINENO "RethinkDB is currently supported only on Ubuntu and Fedora."
        fi
    fi
}

function nb_db_driver_install_client {
    # We can't actually install rethinkdb due to licensing issues
    python -c 'import rethinkdb' > /dev/null || die "The rethinkdb Python client is not installed. Please install it manually."
    echo >&2 "WARNING: You have to install the Python rethinkdb client yourself"
}

function nb_db_driver_start_server {
    if is_service_enabled df-rethinkdb-server ; then
        start_service rethinkdb
        until pids=$(pidof rethinkdb); do
            sleep 1
            echo "sleep 1, waiting for rethinkdb to start"
        done
    fi
}

function nb_db_driver_stop_server {
    if is_service_enabled df-rethinkdb-server ; then
        stop_service rethinkdb
    fi
}

function nb_db_driver_status_server {
    TEMP_PIDS=`ps cax | grep rethinkdb`
    if [ -z "$TEMP_PIDS" ]; then
        return 1
    fi
    return 0
}
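The RethinkDB driver polls ``pidof`` in an ``until`` loop to wait for the daemon to come up. The same readiness-polling pattern can be shown self-contained, with a background job and a marker file standing in for the service (all names below are illustrative):

```shell
#!/bin/bash
marker=$(mktemp -u)           # path only; the "service" creates it when ready
(sleep 1; touch "$marker") &  # background job standing in for the daemon

READY=no
tries=0
until [ -f "$marker" ]; do
    sleep 0.2
    tries=$((tries + 1))
    [ "$tries" -gt 50 ] && break   # bound the wait instead of spinning forever
done
[ -f "$marker" ] && READY=yes
echo "ready=$READY after $tries polls"
rm -f "$marker"
```

Bounding the loop is a small improvement over the original pattern, which would spin forever if the service never started.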
@ -1,66 +0,0 @@
if is_ubuntu ; then
    UBUNTU_RELEASE_BASE_NUM=`lsb_release -r | awk '{print $2}' | cut -d '.' -f 1`
fi

DRAGONFLOW_REPO=${DRAGONFLOW_REPO:-git://github.com/openstack/dragonflow.git}
DRAGONFLOW_DIR=$DEST/dragonflow
DRAGONFLOW_BRANCH=${DRAGONFLOW_BRANCH:-master}

DF_INSTALL_DEBUG_ROOTWRAP_CONF=${DF_INSTALL_DEBUG_ROOTWRAP_CONF:-"True"}

DF_L3_BINARY=$NEUTRON_BIN_DIR/df-l3-agent
DF_LOCAL_CONTROLLER_BINARY=$NEUTRON_BIN_DIR/df-local-controller
DF_PUBLISHER_SERVICE_BINARY=$NEUTRON_BIN_DIR/df-publisher-service
DF_ZMQ_IPC_SOCKET=${DF_ZMQ_IPC_SOCKET:-"/var/run/zmq_pubsub/zmq-publisher-socket"}

DF_AUTO_DETECT_PORT_BEHIND_PORT=${DF_AUTO_DETECT_PORT_BEHIND_PORT:-"False"}
DF_LBAAS_AUTO_ENABLE_VIP_PORTS=${DF_LBAAS_AUTO_ENABLE_VIP_PORTS:-"True"}

# df-metadata
DF_METADATA_SERVICE=${DF_METADATA_SERVICE:-"$NEUTRON_BIN_DIR/df-metadata-service"}
DF_METADATA_SERVICE_IP=${DF_METADATA_SERVICE_IP:-"169.254.169.254"}
DF_METADATA_SERVICE_PORT=${DF_METADATA_SERVICE_PORT:-"18080"}
DF_METADATA_SERVICE_INTERFACE=${DF_METADATA_SERVICE_INTERFACE:-"tap-metadata"}
METADATA_PROXY_SHARED_SECRET=${METADATA_PROXY_SHARED_SECRET:-"secret"}

# df-bgp
DF_BGP_SERVICE=${DF_BGP_SERVICE:-"$NEUTRON_BIN_DIR/df-bgp-service"}
# This can be overridden in the localrc file, set df-bgp to enable
DR_MODE=${DR_MODE:-no-bgp}

# df-skydive
DF_SKYDIVE_SERVICE=${DF_SKYDIVE_SERVICE:-"$NEUTRON_BIN_DIR/df-skydive-service"}


DF_L2_RESPONDER=${DF_L2_RESPONDER:-'True'}

DF_MONITOR_TABLE_POLL_TIME=${DF_MONITOR_TABLE_POLL_TIME:-30}
DF_PUB_SUB=${DF_PUB_SUB:-"False"}
DF_Q_SVC_MASTER=${DF_Q_SVC_MASTER:-"True"}

PUBLISHER_RATE_LIMIT_TIMEOUT=${PUBLISHER_RATE_LIMIT_TIMEOUT:-180}
PUBLISHER_RATE_LIMIT_COUNT=${PUBLISHER_RATE_LIMIT_COUNT:-1}

if is_fedora && [ $os_RELEASE -ge 23 ] ; then
    OVS_INSTALL_FROM_GIT=${OVS_INSTALL_FROM_GIT:-"False"}
elif is_ubuntu && [ $UBUNTU_RELEASE_BASE_NUM -ge 16 ] ; then
    OVS_INSTALL_FROM_GIT=${OVS_INSTALL_FROM_GIT:-"False"}
else
    OVS_INSTALL_FROM_GIT=${OVS_INSTALL_FROM_GIT:-"True"}
fi
OVS_MANAGER=${OVS_MANAGER:-"ptcp:6640:0.0.0.0"}
OVS_INTEGRATION_BRIDGE_PROTOCOLS=${OVS_INTEGRATION_BRIDGE_PROTOCOLS:-"OpenFlow10,OpenFlow13"}

DF_REDIS_INSTALL_FROM_RUBY=${DF_REDIS_INSTALL_FROM_RUBY:-"True"}

DF_RUBY_SOURCE_ADD=${DF_RUBY_SOURCE_ADD:-"https://rubygems.org/"}
DF_RUBY_SOURCE_REMOVE=${DF_RUBY_SOURCE_REMOVE:-""}
DF_SELECTIVE_TOPO_DIST=${DF_SELECTIVE_TOPO_DIST:-"False"}
Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER=${Q_ENABLE_DRAGONFLOW_LOCAL_CONTROLLER:-"True"}

# OVS bridges related defaults
DF_REINSTALL_OVS=${DF_REINSTALL_OVS:-True}  # Remove the OVS installed by Neutron
                                            # and reinstall a newer version
INTEGRATION_BRIDGE=${INTEGRATION_BRIDGE:-br-int}
PUBLIC_NETWORK_GATEWAY=${PUBLIC_NETWORK_GATEWAY:-172.24.4.1}
PUBLIC_NETWORK_PREFIXLEN=${PUBLIC_NETWORK_PREFIXLEN:-24}
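The ``OVS_INSTALL_FROM_GIT`` default above is version-gated: recent Fedora and Ubuntu releases ship a new enough OVS package, while everything else builds from git. The branching can be checked in isolation (the ``is_fedora``/``is_ubuntu`` stubs and the release number are stand-ins for DevStack's distro detection):

```shell
#!/bin/bash
# Stand-ins for DevStack's distro detection; pretend we are on Fedora 24.
is_fedora() { return 0; }
is_ubuntu() { return 1; }
os_RELEASE=24
UBUNTU_RELEASE_BASE_NUM=0

if is_fedora && [ $os_RELEASE -ge 23 ] ; then
    OVS_INSTALL_FROM_GIT=${OVS_INSTALL_FROM_GIT:-"False"}
elif is_ubuntu && [ $UBUNTU_RELEASE_BASE_NUM -ge 16 ] ; then
    OVS_INSTALL_FROM_GIT=${OVS_INSTALL_FROM_GIT:-"False"}
else
    OVS_INSTALL_FROM_GIT=${OVS_INSTALL_FROM_GIT:-"True"}
fi
echo "OVS_INSTALL_FROM_GIT=$OVS_INSTALL_FROM_GIT"
```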
@ -1,86 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

#
# This script is executed in the OpenStack CI job that runs DevStack + tempest.
# It is also used by the rally job. You can find the CI job configuration here:
#
# http://opendev.org/openstack-infra/project-config/tree/jenkins/jobs/dragonflow.yaml
#

# Begin list of exclusions.
r="^(?!.*"

# exclude the slow tag (part of the default for 'full')
r="$r(?:.*\[.*\bslow\b.*\])"

# exclude things that just aren't enabled:
r="$r|(?:tempest\.api\.network\.admin\.test_quotas\.QuotasTest\.test_lbaas_quotas.*)"
r="$r|(?:tempest\.api\.network\.admin\.test_agent_management\.*)"
r="$r|(?:tempest\.api\.network\.test_load_balancer.*)"
r="$r|(?:tempest\.scenario\.test_load_balancer.*)"
r="$r|(?:tempest\.api\.network\.admin\.test_load_balancer.*)"
r="$r|(?:tempest\.api\.network\.admin\.test_lbaas.*)"
r="$r|(?:tempest\.api\.network\.test_fwaas_extensions.*)"
r="$r|(?:tempest\.api\.network\.test_vpnaas_extensions.*)"
r="$r|(?:tempest\.api\.network\.test_metering_extensions.*)"
r="$r|(?:tempest\.thirdparty\.boto\.test_s3.*)"

# skip tests unrelated to dragonflow
r="$r|(?:tempest\.api\.identity*)"
r="$r|(?:tempest\.api\.image*)"
r="$r|(?:tempest\.api\.volume*)"
r="$r|(?:tempest\.api\.compute\.images*)"
r="$r|(?:tempest\.api\.compute\.keypairs*)"
r="$r|(?:tempest\.api\.compute\.certificates*)"
r="$r|(?:tempest\.api\.compute\.flavors*)"
r="$r|(?:tempest\.api\.compute\.test_quotas*)"
r="$r|(?:tempest\.api\.compute\.test_versions*)"
r="$r|(?:tempest\.api\.compute\.volumes*)"
r="$r|(?:tempest\.scenario\.test_volume_boot_pattern.*)"

# Failing tests that need to be re-visited
r="$r|(?:tempest\.api\.network.test_allowed_address_pair.*)"
r="$r|(?:tempest\.api\.network.admin.test_external_network_extension.*)"
r="$r|(?:tempest\.scenario\.test_network_v6\.TestGettingAddress\.test_dualnet_multi_prefix_dhcpv6_stateless*)"
r="$r|(?:tempest\.scenario\.test_network_advanced_server_ops.*)"
r="$r|(?:tempest\.scenario\.test_minimum_basic.*)"
r="$r|(?:tempest\.scenario\.test_network_v6.*)"

r="$r|(?:tempest\.scenario\.test_shelve_instance.*)"
r="$r|(?:tempest\.scenario\.test_snapshot_pattern.*)"

r="$r|(?:tempest\.api\.network\.test_routers_negative.*)"

# These tests are used for the DHCP agent scheduler, which is not used by default in Dragonflow
r="$r|(?:tempest\.api\.network.admin.test_dhcp_agent_scheduler.DHCPAgentSchedulersTestJSON.test_add_remove_network_from_dhcp_agent)"
r="$r|(?:tempest\.api\.network.admin.test_dhcp_agent_scheduler.DHCPAgentSchedulersTestJSON.test_list_networks_hosted_by_one_dhcp)"

# Current list of failing tests that need to be triaged, have bugs filed, and
# fixed as appropriate.
# Disable cross tenant traffic + security groups - bug #1740739
r="$r|(?:tempest\.scenario\.test_security_groups_basic_ops\.TestSecurityGroupsBasicOps\.test_cross_tenant_traffic)"

# End list of exclusions.
r="$r)"

# Start of include list
r="$r("
# only run tempest.api/scenario/thirdparty tests (part of the default for 'full')
r="$r((tempest\.(api|scenario|thirdparty)).*$)"
# Add Dynamic routing (BGP) tests
r="$r|(^neutron_dynamic_routing\.tests\.tempest\.scenario\.)"
# End of include list
r="$r)"

export DEVSTACK_GATE_TEMPEST_REGEX="$r"
export DEVSTACK_GATE_TEMPEST_ALL_PLUGINS="1"
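The regex built above combines a negative lookahead (the exclusion list) with an anchored include group. A trimmed-down version of the same construction can be verified with ``grep -P`` (PCRE support in grep is assumed, and the test names below are made up):

```shell
#!/bin/bash
# Exclusion: anything tagged [slow]; inclusion: tempest.api/scenario tests.
r="^(?!.*"
r="$r(?:.*\[.*\bslow\b.*\])"
r="$r)"
r="$r("
r="$r((tempest\.(api|scenario)).*$)"
r="$r)"

echo "tempest.api.network.test_ports"     | grep -qP "$r" && echo "included"
echo "tempest.api.network.test_big[slow]" | grep -qP "$r" || echo "excluded"
```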
@ -1,13 +0,0 @@
#!/bin/bash

function configure_pubsub_service_plugin {
    NEUTRON_CONF=${NEUTRON_CONF:-"/etc/neutron/neutron.conf"}
    PUB_SUB_DRIVER=${PUB_SUB_DRIVER:-"zmq_pubsub_driver"}
    iniset $DRAGONFLOW_CONF df pub_sub_driver $PUB_SUB_DRIVER
    DF_PUBLISHER_DRIVER=${DF_PUBLISHER_DRIVER:-"zmq_bind_pubsub_driver"}
    iniset $DRAGONFLOW_PUBLISHER_CONF df pub_sub_driver $DF_PUBLISHER_DRIVER

    ZMQ_IPC_SOCKET_DIR=`dirname $DF_ZMQ_IPC_SOCKET`
    sudo mkdir -p $ZMQ_IPC_SOCKET_DIR
    sudo chown $STACK_USER $ZMQ_IPC_SOCKET_DIR
}
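The ZMQ variant also has to create the directory for the IPC socket before the publisher can bind it. The ``dirname``/``mkdir -p`` step works the same without sudo when pointed at a scratch path (the socket path below is a stand-in for the real ``DF_ZMQ_IPC_SOCKET`` default under ``/var/run``):

```shell
#!/bin/bash
# Stand-in socket path; the real default lives under /var/run and needs sudo.
DF_ZMQ_IPC_SOCKET="/tmp/zmq_pubsub_demo/zmq-publisher-socket"

ZMQ_IPC_SOCKET_DIR=$(dirname "$DF_ZMQ_IPC_SOCKET")  # strip the socket filename
mkdir -p "$ZMQ_IPC_SOCKET_DIR"                      # ensure the directory exists
echo "socket dir: $ZMQ_IPC_SOCKET_DIR"
```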
@ -1,104 +0,0 @@
#!/bin/bash
#
#
# ``plugin.sh`` calls the following methods in the sourced driver:
#
# - nb_db_driver_install_server
# - nb_db_driver_install_client
# - nb_db_driver_start_server
# - nb_db_driver_stop_server
# - nb_db_driver_clean
# - nb_db_driver_configure

HOSTNAME=`hostname -f`

if is_ubuntu ; then
    UBUNTU_RELEASE_BASE_NUM=`lsb_release -r | awk '{print $2}' | cut -d '.' -f 1`
fi

function _zookeeper_env {
    export ZOOKEEPER_DATA_DIR="/var/lib/zookeeper"
    export ZOOKEEPER_LOG_DIR="/var/log/zookeeper"
    export ZOOKEEPER_DIR="/etc/zookeeper"
    local SERVER_LIST=$(echo $REMOTE_DB_HOSTS | sed 's/,/ /g')
    export ZOOKEEPER_SERVER_PORT=$(echo $SERVER_LIST | awk -F: 'BEGIN {RS=" "} { print $2 }' | sort | uniq | tail -1)
}

function update_key_in_file {
    key=$1; shift
    value=$1; shift
    file=$1; shift
    local result=`grep -c "^ *$key *=" $file 2> /dev/null`
    if [ $result -gt 0 ]; then
        sudo sed -i "/^ *$key *=/c $key=$value" $file
    else
        sudo sh -c "echo \"$key=$value\" >> $file"
    fi
}

function nb_db_driver_install_server {
    if is_service_enabled df-zookeeper-server ; then
        _zookeeper_env
        echo "Installing Zookeeper server"
        sudo mkdir -p $ZOOKEEPER_DATA_DIR
        sudo mkdir -p $ZOOKEEPER_LOG_DIR
        if is_ubuntu; then
            ZOOKEEPER_CONF_DIR="${ZOOKEEPER_DIR}/conf"
            install_package zookeeperd
            ZOOKEEPER_CONF_FILE="${ZOOKEEPER_CONF_DIR}/zoo.cfg"
        elif is_fedora; then
            ZOOKEEPER_CONF_DIR="${ZOOKEEPER_DIR}"
            install_package zookeeper
            ZOOKEEPER_CONF_SAMPLE_FILE="${ZOOKEEPER_CONF_DIR}/zoo_sample.cfg"
            ZOOKEEPER_CONF_FILE="${ZOOKEEPER_CONF_DIR}/zoo.cfg"
            sudo cp $ZOOKEEPER_CONF_SAMPLE_FILE $ZOOKEEPER_CONF_FILE
        else
            die $LINENO "Other distributions are not supported"
        fi

        echo "Configuring Zookeeper"
        if [ -f $ZOOKEEPER_CONF_FILE ] ; then
            update_key_in_file dataDir "${ZOOKEEPER_DATA_DIR}" $ZOOKEEPER_CONF_FILE
            update_key_in_file dataLogDir "${ZOOKEEPER_LOG_DIR}" $ZOOKEEPER_CONF_FILE
            update_key_in_file clientPort "${ZOOKEEPER_SERVER_PORT}" $ZOOKEEPER_CONF_FILE
            update_key_in_file "server.1" "${HOSTNAME}:2888:3888" $ZOOKEEPER_CONF_FILE
        fi
        sudo systemctl restart zookeeper
        sudo sh -c "echo 1 >$ZOOKEEPER_CONF_DIR/myid"
    fi
}

function nb_db_driver_clean {
    if is_ubuntu; then
        uninstall_package -y zookeeperd
        uninstall_package -y zookeeper
        uninstall_package -y libzookeeper-java
    elif is_fedora; then
        uninstall_package -y zookeeper
    fi
    if [ -f "/etc/systemd/system/zookeeper.service" ] ; then
        sudo systemctl daemon-reload
    fi
}

function nb_db_driver_install_client {
    echo 'Zookeeper client sdk is in the requirements file.'
}

function nb_db_driver_start_server {
    if is_service_enabled df-zookeeper-server ; then
        _zookeeper_env
        start_service zookeeper
    fi
}

function nb_db_driver_stop_server {
    if is_service_enabled df-zookeeper-server ; then
        _zookeeper_env
        stop_service zookeeper
    fi
}

function nb_db_driver_configure {
    :
}
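``update_key_in_file`` above either rewrites an existing ``key=value`` line in place or appends a new one, which keeps repeated runs idempotent. The same logic can be exercised against a temp file without sudo (a sketch mirroring the function, not a verbatim copy):

```shell
#!/bin/bash
f=$(mktemp)
printf 'clientPort=2181\ntickTime=2000\n' > "$f"

key=clientPort
value=4001
# Replace the line if the key already exists, otherwise append it.
if grep -q "^ *$key *=" "$f"; then
    sed -i "/^ *$key *=/c $key=$value" "$f"
else
    echo "$key=$value" >> "$f"
fi
grep "$key" "$f"
```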
Before Width: | Height: | Size: 79 KiB |
Before Width: | Height: | Size: 66 KiB |
Before Width: | Height: | Size: 72 KiB |
Before Width: | Height: | Size: 222 KiB |
Before Width: | Height: | Size: 94 KiB |
Before Width: | Height: | Size: 106 KiB |
Before Width: | Height: | Size: 59 KiB |
Before Width: | Height: | Size: 44 KiB |
Before Width: | Height: | Size: 66 KiB |
Before Width: | Height: | Size: 57 KiB |
Before Width: | Height: | Size: 50 KiB |
Before Width: | Height: | Size: 79 KiB |
Before Width: | Height: | Size: 37 KiB |
@ -1,249 +0,0 @@
|
||||||
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
|
|
||||||
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:java="http://www.yworks.com/xml/yfiles-common/1.0/java" xmlns:sys="http://www.yworks.com/xml/yfiles-common/markup/primitives/2.0" xmlns:x="http://www.yworks.com/xml/yfiles-common/markup/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:y="http://www.yworks.com/xml/graphml" xmlns:yed="http://www.yworks.com/xml/yed/3" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://www.yworks.com/xml/schema/graphml/1.1/ygraphml.xsd">
|
|
||||||
<!--Created by yEd 3.14.4-->
|
|
||||||
<key attr.name="Description" attr.type="string" for="graph" id="d0"/>
|
|
||||||
<key for="port" id="d1" yfiles.type="portgraphics"/>
|
|
||||||
<key for="port" id="d2" yfiles.type="portgeometry"/>
|
|
||||||
<key for="port" id="d3" yfiles.type="portuserdata"/>
|
|
||||||
<key attr.name="url" attr.type="string" for="node" id="d4"/>
|
|
||||||
<key attr.name="description" attr.type="string" for="node" id="d5"/>
|
|
||||||
<key for="node" id="d6" yfiles.type="nodegraphics"/>
|
|
||||||
<key for="graphml" id="d7" yfiles.type="resources"/>
|
|
||||||
<key attr.name="url" attr.type="string" for="edge" id="d8"/>
|
|
||||||
<key attr.name="description" attr.type="string" for="edge" id="d9"/>
|
|
||||||
<key for="edge" id="d10" yfiles.type="edgegraphics"/>
|
|
||||||
<graph edgedefault="directed" id="G">
|
|
||||||
<data key="d0"/>
|
|
||||||
<node id="n0">
|
|
||||||
<data key="d5"/>
|
|
||||||
<data key="d6">
|
|
||||||
<y:GenericNode configuration="com.yworks.entityRelationship.small_entity">
|
|
||||||
<y:Geometry height="40.0" width="80.0" x="281.0" y="500.0"/>
|
|
||||||
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
|
|
||||||
<y:BorderStyle color="#000000" type="line" width="1.0"/>
|
|
||||||
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" height="17.96875" modelName="custom" textColor="#000000" visible="true" width="40.36328125" x="19.818359375" y="11.015625">CPU-0<y:LabelModel>
|
|
||||||
<y:SmartNodeLabelModel distance="4.0"/>
|
|
||||||
</y:LabelModel>
|
|
||||||
<y:ModelParameter>
|
|
||||||
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
|
|
||||||
</y:ModelParameter>
|
|
||||||
</y:NodeLabel>
|
|
||||||
<y:StyleProperties>
|
|
||||||
<y:Property class="java.lang.Boolean" name="y.view.ShadowNodePainter.SHADOW_PAINTING" value="true"/>
|
|
||||||
</y:StyleProperties>
|
|
||||||
</y:GenericNode>
|
|
||||||
</data>
|
|
||||||
</node>
|
|
||||||
<node id="n1">
|
|
||||||
<data key="d5"/>
|
|
||||||
<data key="d6">
|
|
||||||
<y:GenericNode configuration="com.yworks.entityRelationship.small_entity">
|
|
||||||
<y:Geometry height="40.0" width="80.0" x="281.0" y="744.0"/>
|
|
||||||
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
|
|
||||||
<y:BorderStyle color="#000000" type="line" width="1.0"/>
|
|
||||||
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" height="17.96875" modelName="custom" textColor="#000000" visible="true" width="41.705078125" x="19.1474609375" y="11.015625">CPU-N<y:LabelModel>
|
|
||||||
<y:SmartNodeLabelModel distance="4.0"/>
|
|
||||||
</y:LabelModel>
|
|
||||||
<y:ModelParameter>
|
|
||||||
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
|
|
||||||
</y:ModelParameter>
|
|
||||||
</y:NodeLabel>
|
|
||||||
<y:StyleProperties>
|
|
||||||
<y:Property class="java.lang.Boolean" name="y.view.ShadowNodePainter.SHADOW_PAINTING" value="true"/>
|
|
||||||
</y:StyleProperties>
|
|
||||||
</y:GenericNode>
|
|
||||||
</data>
|
|
||||||
</node>
|
|
||||||
<node id="n2">
|
|
||||||
<data key="d5"/>
|
|
||||||
<data key="d6">
|
|
||||||
<y:GenericNode configuration="com.yworks.entityRelationship.small_entity">
|
|
||||||
<y:Geometry height="40.0" width="138.0" x="404.0" y="500.0"/>
|
|
||||||
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
|
|
||||||
<y:BorderStyle color="#000000" type="line" width="1.0"/>
|
|
||||||
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" height="17.96875" modelName="custom" textColor="#000000" visible="true" width="100.71484375" x="18.642578125" y="11.015625">Neutron Service<y:LabelModel>
|
|
||||||
<y:SmartNodeLabelModel distance="4.0"/>
|
|
||||||
</y:LabelModel>
|
|
||||||
<y:ModelParameter>
|
|
||||||
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
|
|
||||||
</y:ModelParameter>
|
|
||||||
</y:NodeLabel>
|
|
||||||
<y:StyleProperties>
|
|
||||||
<y:Property class="java.lang.Boolean" name="y.view.ShadowNodePainter.SHADOW_PAINTING" value="true"/>
|
|
||||||
</y:StyleProperties>
|
|
||||||
</y:GenericNode>
|
|
||||||
</data>
|
|
||||||
</node>
|
|
||||||
<node id="n3">
|
|
||||||
<data key="d5"/>
|
|
||||||
<data key="d6">
|
|
||||||
<y:GenericNode configuration="com.yworks.entityRelationship.small_entity">
|
|
||||||
<y:Geometry height="40.0" width="138.0" x="404.0" y="744.0"/>
|
|
||||||
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
|
|
||||||
<y:BorderStyle color="#000000" type="line" width="1.0"/>
|
|
||||||
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" height="17.96875" modelName="custom" textColor="#000000" visible="true" width="100.71484375" x="18.642578125" y="11.015625">Neutron Service<y:LabelModel>
|
|
||||||
<y:SmartNodeLabelModel distance="4.0"/>
|
|
||||||
</y:LabelModel>
|
|
||||||
<y:ModelParameter>
|
|
||||||
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
|
|
||||||
</y:ModelParameter>
|
|
||||||
</y:NodeLabel>
|
|
||||||
<y:StyleProperties>
|
|
||||||
<y:Property class="java.lang.Boolean" name="y.view.ShadowNodePainter.SHADOW_PAINTING" value="true"/>
|
|
||||||
</y:StyleProperties>
|
|
||||||
</y:GenericNode>
|
|
||||||
</data>
|
|
||||||
</node>
|
|
||||||
<node id="n4">
|
|
||||||
<data key="d5"/>
|
|
||||||
<data key="d6">
|
|
||||||
<y:GenericNode configuration="com.yworks.flowchart.process">
|
|
||||||
<y:Geometry height="40.0" width="80.0" x="567.0" y="622.0"/>
|
|
||||||
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
|
|
||||||
<y:BorderStyle color="#000000" type="line" width="1.0"/>
|
|
||||||
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" height="31.9375" modelName="custom" textColor="#000000" visible="true" width="59.30078125" x="10.349609375" y="4.03125">Publisher
|
|
||||||
Service<y:LabelModel>
|
|
||||||
<y:SmartNodeLabelModel distance="4.0"/>
|
|
||||||
</y:LabelModel>
|
|
||||||
<y:ModelParameter>
|
|
||||||
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
</y:GenericNode>
</data>
</node>
<node id="n5">
<data key="d5"/>
<data key="d6">
<y:GenericNode configuration="com.yworks.flowchart.annotation">
<y:Geometry height="40.0" width="43.0" x="686.0" y="622.0"/>
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" height="31.9375" modelName="custom" textColor="#000000" visible="true" width="28.216796875" x="7.3916015625" y="4.03125">TCP
Port<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:StyleProperties>
<y:Property class="java.lang.Byte" name="com.yworks.flowchart.style.orientation" value="0"/>
<y:Property class="java.lang.Byte" name="LAYER_STYLE_PROPERTY_KEY" value="1"/>
</y:StyleProperties>
</y:GenericNode>
</data>
</node>
<node id="n6" yfiles.foldertype="group">
<data key="d4"/>
<data key="d5"/>
<data key="d6">
<y:ProxyAutoBoundsNode>
<y:Realizers active="0">
<y:GroupNode>
<y:Geometry height="331.0" width="453.0" x="255.0" y="464.0"/>
<y:Fill color="#CAECFF84" transparent="false"/>
<y:BorderStyle color="#666699" type="dotted" width="1.0"/>
<y:NodeLabel alignment="right" autoSizePolicy="node_width" backgroundColor="#99CCFF" borderDistance="0.0" fontFamily="Dialog" fontSize="15" fontStyle="plain" hasLineColor="false" height="21.4609375" modelName="internal" modelPosition="t" textColor="#000000" visible="true" width="453.0" x="0.0" y="0.0">Neutron API Server</y:NodeLabel>
<y:Shape type="roundrectangle"/>
<y:State closed="false" closedHeight="50.0" closedWidth="50.0" innerGraphDisplayEnabled="false"/>
<y:Insets bottom="15" bottomF="15.0" left="15" leftF="15.0" right="15" rightF="15.0" top="15" topF="15.0"/>
<y:BorderInsets bottom="0" bottomF="0.0" left="0" leftF="0.0" right="0" rightF="0.0" top="0" topF="0.0"/>
</y:GroupNode>
<y:GroupNode>
<y:Geometry height="50.0" width="50.0" x="255.0" y="464.0"/>
<y:Fill color="#CAECFF84" transparent="false"/>
<y:BorderStyle color="#666699" type="dotted" width="1.0"/>
<y:NodeLabel alignment="right" autoSizePolicy="node_width" backgroundColor="#99CCFF" borderDistance="0.0" fontFamily="Dialog" fontSize="15" fontStyle="plain" hasLineColor="false" height="21.4609375" modelName="internal" modelPosition="t" textColor="#000000" visible="true" width="50.0" x="0.0" y="0.0">2</y:NodeLabel>
<y:Shape type="roundrectangle"/>
<y:State closed="false" closedHeight="50.0" closedWidth="50.0" innerGraphDisplayEnabled="false"/>
<y:Insets bottom="15" bottomF="15.0" left="15" leftF="15.0" right="15" rightF="15.0" top="15" topF="15.0"/>
<y:BorderInsets bottom="0" bottomF="0.0" left="0" leftF="0.0" right="0" rightF="0.0" top="0" topF="0.0"/>
</y:GroupNode>
</y:Realizers>
</y:ProxyAutoBoundsNode>
</data>
<graph edgedefault="directed" id="n6:"/>
</node>
<edge id="e0" source="n0" target="n2">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="none"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e1" source="n1" target="n3">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="none"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e2" source="n0" target="n1">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="dotted" width="3.0"/>
<y:Arrows source="none" target="none"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e3" source="n2" target="n3">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="dotted" width="3.0"/>
<y:Arrows source="none" target="none"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e4" source="n2" target="n4">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e5" source="n3" target="n4">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e6" source="n4" target="n5">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
</graph>
<data key="d7">
<y:Resources/>
</data>
</graphml>
Before Width: | Height: | Size: 14 KiB |
@ -1,627 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:java="http://www.yworks.com/xml/yfiles-common/1.0/java" xmlns:sys="http://www.yworks.com/xml/yfiles-common/markup/primitives/2.0" xmlns:x="http://www.yworks.com/xml/yfiles-common/markup/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:y="http://www.yworks.com/xml/graphml" xmlns:yed="http://www.yworks.com/xml/yed/3" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://www.yworks.com/xml/schema/graphml/1.1/ygraphml.xsd">
<!--Created by yEd 3.14.4-->
<key attr.name="Description" attr.type="string" for="graph" id="d0"/>
<key for="port" id="d1" yfiles.type="portgraphics"/>
<key for="port" id="d2" yfiles.type="portgeometry"/>
<key for="port" id="d3" yfiles.type="portuserdata"/>
<key attr.name="url" attr.type="string" for="node" id="d4"/>
<key attr.name="description" attr.type="string" for="node" id="d5"/>
<key for="node" id="d6" yfiles.type="nodegraphics"/>
<key for="graphml" id="d7" yfiles.type="resources"/>
<key attr.name="url" attr.type="string" for="edge" id="d8"/>
<key attr.name="description" attr.type="string" for="edge" id="d9"/>
<key for="edge" id="d10" yfiles.type="edgegraphics"/>
<graph edgedefault="directed" id="G">
<data key="d0"/>
<node id="n0">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="357.4523506164551" y="89.88400077819824"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n1">
<data key="d5"/>
<data key="d6">
<y:GenericNode configuration="com.yworks.flowchart.start2">
<y:Geometry height="17.0" width="20.0" x="485.6308765411377" y="109.5"/>
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="8.0" y="6.5">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
</y:GenericNode>
</data>
</node>
<node id="n2">
<data key="d5"/>
<data key="d6">
<y:GenericNode configuration="com.yworks.flowchart.start2">
<y:Geometry height="17.0" width="20.0" x="598.7141036987305" y="109.5"/>
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="8.0" y="6.5">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
</y:GenericNode>
</data>
</node>
<node id="n3">
<data key="d5"/>
<data key="d6">
<y:GenericNode configuration="com.yworks.flowchart.start2">
<y:Geometry height="17.0" width="20.0" x="711.7973308563232" y="109.5"/>
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="8.0" y="6.5">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
</y:GenericNode>
</data>
</node>
<node id="n4">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="357.4523506164551" y="335.7680015563965"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n5">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="422.5476493835449" y="335.7680015563965"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n6">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="759.7852592468262" y="335.7680015563965"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n7">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="824.880558013916" y="335.7680015563965"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n8">
<data key="d5"/>
<data key="d6">
<y:GenericNode configuration="com.yworks.flowchart.start2">
<y:Geometry height="17.0" width="20.0" x="552.7382469177246" y="355.38400077819824"/>
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="8.0" y="6.5">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
</y:GenericNode>
</data>
</node>
<node id="n9">
<data key="d5"/>
<data key="d6">
<y:GenericNode configuration="com.yworks.flowchart.start2">
<y:Geometry height="17.0" width="20.0" x="600.7261753082275" y="355.38400077819824"/>
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="8.0" y="6.5">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
</y:GenericNode>
</data>
</node>
<node id="n10">
<data key="d5"/>
<data key="d6">
<y:GenericNode configuration="com.yworks.flowchart.start2">
<y:Geometry height="17.0" width="20.0" x="648.7141036987305" y="355.38400077819824"/>
<y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="8.0" y="6.5">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
</y:GenericNode>
</data>
</node>
<node id="n11">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="824.880558013916" y="89.88400077819824"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n12">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="487.64294815063477" y="335.7680015563965"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n13">
<data key="d5"/>
<data key="d6">
<y:SVGNode>
<y:Geometry height="56.231998443603516" width="35.095298767089844" x="696.7020320892334" y="335.7680015563965"/>
<y:Fill color="#CCCCFF" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="12" fontStyle="plain" hasBackgroundColor="false" hasLineColor="false" hasText="false" height="4.0" modelName="custom" textColor="#000000" visible="true" width="4.0" x="15.547649383544922" y="60.231998443603516">
<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="-0.5" nodeRatioX="0.0" nodeRatioY="0.5" offsetX="0.0" offsetY="4.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:SVGNodeProperties usingVisualBounds="true"/>
<y:SVGModel svgBoundsPolicy="0">
<y:SVGContent refid="1"/>
</y:SVGModel>
</y:SVGNode>
</data>
</node>
<node id="n14">
<data key="d4"/>
<data key="d5"/>
<data key="d6">
<y:UMLClassNode>
<y:Geometry height="28.0" width="502.5235061645508" x="359.46442222595215" y="392.0"/>
<y:Fill color="#FFCC00" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="13" fontStyle="bold" hasBackgroundColor="false" hasLineColor="false" height="19.1328125" modelName="custom" textColor="#000000" visible="true" width="323.26171875" x="89.63089370727539" y="3.0">Compute Nodes with Dragonflow Controller<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="-0.03703090122767855" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:UML clipContent="true" constraint="" omitDetails="false" stereotype="" use3DEffect="true">
<y:AttributeLabel/>
<y:MethodLabel/>
</y:UML>
</y:UMLClassNode>
</data>
</node>
<node id="n15">
<data key="d4"/>
<data key="d5"/>
<data key="d6">
<y:UMLClassNode>
<y:Geometry height="28.0" width="504.34498023986816" x="355.6308765411377" y="61.88400077819824"/>
<y:Fill color="#FFCC00" transparent="false"/>
<y:BorderStyle color="#000000" type="line" width="1.0"/>
<y:NodeLabel alignment="center" autoSizePolicy="content" fontFamily="Dialog" fontSize="13" fontStyle="bold" hasBackgroundColor="false" hasLineColor="false" height="19.1328125" modelName="custom" textColor="#000000" visible="true" width="153.271484375" x="175.53674793243408" y="3.0">Neutron API Servers<y:LabelModel>
<y:SmartNodeLabelModel distance="4.0"/>
</y:LabelModel>
<y:ModelParameter>
<y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="-0.03703090122767855" offsetX="0.0" offsetY="0.0" upX="0.0" upY="-1.0"/>
</y:ModelParameter>
</y:NodeLabel>
<y:UML clipContent="true" constraint="" omitDetails="false" stereotype="" use3DEffect="true">
<y:AttributeLabel/>
<y:MethodLabel/>
</y:UML>
</y:UMLClassNode>
</data>
</node>
<edge id="e0" source="n4" target="n0">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e1" source="n4" target="n11">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e2" source="n5" target="n0">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e3" source="n5" target="n11">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e4" source="n6" target="n0">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e5" source="n7" target="n11">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e6" source="n6" target="n11">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e7" source="n7" target="n0">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e8" source="n12" target="n0">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e9" source="n12" target="n11">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
<y:BendStyle smoothed="false"/>
</y:PolyLineEdge>
</data>
</edge>
<edge id="e10" source="n13" target="n0">
<data key="d9"/>
<data key="d10">
<y:PolyLineEdge>
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
<y:LineStyle color="#000000" type="line" width="1.0"/>
<y:Arrows source="none" target="standard"/>
|
|
||||||
<y:BendStyle smoothed="false"/>
|
|
||||||
</y:PolyLineEdge>
|
|
||||||
</data>
|
|
||||||
</edge>
|
|
||||||
<edge id="e11" source="n13" target="n11">
|
|
||||||
<data key="d9"/>
|
|
||||||
<data key="d10">
|
|
||||||
<y:PolyLineEdge>
|
|
||||||
<y:Path sx="0.0" sy="0.0" tx="0.0" ty="0.0"/>
|
|
||||||
<y:LineStyle color="#000000" type="line" width="1.0"/>
|
|
||||||
<y:Arrows source="none" target="standard"/>
|
|
||||||
<y:BendStyle smoothed="false"/>
|
|
||||||
</y:PolyLineEdge>
|
|
||||||
</data>
|
|
||||||
</edge>
|
|
||||||
</graph>
|
|
||||||
<data key="d7">
<y:Resources>
<y:Resource id="1"><?xml version="1.0" encoding="utf-8"?>
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
x="0px" y="0px" width="36px" height="57px" viewBox="0 -0.741 36 57" enable-background="new 0 -0.741 36 57"
xml:space="preserve">
<defs>
</defs>
<linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="230.1768" y1="798.6021" x2="180.3346" y2="798.6021" gradientTransform="matrix(1 0 0 1 -195.2002 -770.8008)">
<stop offset="0" style="stop-color:#4D4D4D"/>
<stop offset="1" style="stop-color:#8D8D8D"/>
</linearGradient>
<rect y="0.943" fill="url(#SVGID_1_)" width="34.977" height="53.716"/>
<linearGradient id="SVGID_2_" gradientUnits="userSpaceOnUse" x1="224.6807" y1="798.6021" x2="200.6973" y2="798.6021" gradientTransform="matrix(1 0 0 1 -195.2002 -770.8008)">
<stop offset="0.0319" style="stop-color:#848484"/>
<stop offset="0.1202" style="stop-color:#8C8C8C"/>
<stop offset="0.308" style="stop-color:#969696"/>
<stop offset="0.5394" style="stop-color:#999999"/>
<stop offset="0.5501" style="stop-color:#9C9C9C"/>
<stop offset="0.6256" style="stop-color:#B0B0B0"/>
<stop offset="0.7118" style="stop-color:#BEBEBE"/>
<stop offset="0.8178" style="stop-color:#C7C7C7"/>
<stop offset="1" style="stop-color:#C9C9C9"/>
</linearGradient>
<path fill="url(#SVGID_2_)" d="M5.497,0.943c7.945-1.258,16.04-1.258,23.983,0c0,17.905,0,35.811,0,53.716
c-7.943,1.258-16.039,1.258-23.983,0C5.497,36.753,5.497,18.848,5.497,0.943z"/>
<path fill="#515151" d="M5.497,14.621c7.995,0,15.989,0,23.983,0c0,13.346,0,26.693,0,40.037c-7.943,1.258-16.039,1.258-23.983,0
C5.497,41.314,5.497,27.967,5.497,14.621z"/>
<path opacity="0.43" fill="#565656" d="M5.497,4.745c7.982-0.628,16.001-0.628,23.983,0c0,2.707,0,5.413,0,8.12
c-7.994,0-15.989,0-23.983,0C5.497,10.158,5.497,7.452,5.497,4.745z"/>
<path opacity="0.43" fill="none" stroke="#4D4D4D" stroke-width="0.0999" stroke-miterlimit="10" d="M5.497,4.745
c7.982-0.628,16.001-0.628,23.983,0c0,2.707,0,5.413,0,8.12c-7.994,0-15.989,0-23.983,0C5.497,10.158,5.497,7.452,5.497,4.745z"/>
<polygon opacity="0.43" fill="#565656" stroke="#4D4D4D" stroke-width="0.0135" stroke-miterlimit="10" enable-background="new " points="
6.496,5.746 9.869,5.606 9.869,6.661 6.496,6.799 "/>
<rect x="31.307" y="2.517" fill="#E7ED00" stroke="#717171" stroke-width="0.1926" stroke-miterlimit="10" width="3.692" height="1.505"/>
<rect x="31.307" y="5.8" fill="#C8FF00" stroke="#717171" stroke-width="0.1926" stroke-miterlimit="10" width="3.692" height="1.507"/>
<linearGradient id="SVGID_3_" gradientUnits="userSpaceOnUse" x1="29.4414" y1="35.1235" x2="5.4995" y2="35.1235">
<stop offset="0" style="stop-color:#808080"/>
<stop offset="0.1907" style="stop-color:#828282"/>
<stop offset="0.2955" style="stop-color:#8A8A8A"/>
<stop offset="0.3795" style="stop-color:#989898"/>
<stop offset="0.4524" style="stop-color:#ACACAC"/>
<stop offset="0.5175" style="stop-color:#C5C5C5"/>
<stop offset="0.5273" style="stop-color:#C9C9C9"/>
<stop offset="0.5914" style="stop-color:#C9C9C9"/>
<stop offset="0.9681" style="stop-color:#C9C9C9"/>
</linearGradient>
|
|
||||||
<path fill="url(#SVGID_3_)" d="M5.5,14.822c0,13.22,0,26.438,0,39.66c7.931,1.256,16.012,1.256,23.941,0c0-13.222,0-26.439,0-39.66
|
|
||||||
C21.461,14.822,13.48,14.822,5.5,14.822z M28.396,18.703c-0.74,0.01-1.482,0.02-2.225,0.029c0-0.951,0-1.901-0.001-2.85
|
|
||||||
c0.742-0.003,1.483-0.005,2.224-0.008C28.396,16.817,28.396,17.76,28.396,18.703z M16.354,42.496c0-0.961,0-1.924,0-2.885
|
|
||||||
c0.744,0.006,1.489,0.006,2.233,0c0,0.961,0,1.924,0,2.885C17.843,42.503,17.098,42.503,16.354,42.496z M18.587,43.568
|
|
||||||
c0,0.955,0,1.91,0,2.866c-0.744,0.009-1.489,0.009-2.234,0c0-0.956,0-1.911,0-2.866C17.098,43.574,17.843,43.574,18.587,43.568z
|
|
||||||
M18.586,27.742c0,0.961,0,1.922,0,2.886c-0.744,0.004-1.488,0.004-2.231,0c0-0.964,0-1.925,0-2.886
|
|
||||||
C17.099,27.746,17.842,27.746,18.586,27.742z M16.354,26.671c0-0.955,0-1.91,0-2.865c0.743,0.002,1.487,0.002,2.23,0
|
|
||||||
c0,0.955,0,1.91,0,2.865C17.842,26.675,17.099,26.675,16.354,26.671z M16.354,34.583c0-0.961,0-1.924,0-2.885
|
|
||||||
c0.744,0.004,1.488,0.004,2.231,0c0,0.961,0,1.924,0,2.885C17.842,34.588,17.099,34.588,16.354,34.583z M18.586,35.656
|
|
||||||
c0,0.961,0,1.924,0.001,2.885c-0.745,0.008-1.489,0.008-2.233,0c0-0.961,0-1.924,0-2.885C17.099,35.66,17.842,35.66,18.586,35.656z
|
|
||||||
M15.307,30.619c-0.742-0.01-1.484-0.021-2.227-0.039c0-0.957,0-1.916,0-2.875c0.742,0.014,1.485,0.023,2.226,0.029
|
|
||||||
C15.307,28.695,15.307,29.656,15.307,30.619z M15.307,31.689c0,0.961,0,1.924,0,2.885c-0.742-0.012-1.485-0.025-2.227-0.047
|
|
||||||
c0-0.959,0.001-1.92,0.001-2.877C13.822,31.667,14.565,31.68,15.307,31.689z M15.307,35.644c0,0.959,0,1.922-0.001,2.883
|
|
||||||
c-0.742-0.012-1.485-0.031-2.228-0.056c0-0.959,0.001-1.918,0.001-2.877C13.821,35.617,14.564,35.633,15.307,35.644z M15.306,39.597
|
|
||||||
c0,0.96,0,1.922,0,2.883c-0.742-0.016-1.486-0.037-2.228-0.064c0-0.959,0-1.916,0.001-2.877
|
|
||||||
C13.82,39.564,14.563,39.585,15.306,39.597z M19.637,39.597c0.742-0.012,1.484-0.033,2.227-0.059c0,0.959,0,1.918,0,2.875
|
|
||||||
c-0.741,0.029-1.483,0.052-2.227,0.064C19.637,41.519,19.637,40.559,19.637,39.597z M19.637,38.527c0-0.961,0-1.924,0-2.883
|
|
||||||
c0.74-0.012,1.482-0.027,2.225-0.05c0,0.959,0,1.918,0.002,2.876C21.121,38.496,20.377,38.515,19.637,38.527z M19.637,34.572
|
|
||||||
c0-0.961,0-1.922-0.002-2.883c0.741-0.01,1.483-0.021,2.225-0.039c0.002,0.957,0.002,1.916,0.002,2.875
|
|
||||||
C21.119,34.547,20.376,34.564,19.637,34.572z M19.635,30.619c0-0.963,0-1.924,0-2.885c0.74-0.006,1.483-0.017,2.225-0.029
|
|
||||||
c0,0.959,0,1.916,0,2.875C21.118,30.599,20.376,30.609,19.635,30.619z M19.633,26.666c0-0.955,0-1.909,0-2.864
|
|
||||||
c0.741-0.005,1.483-0.013,2.227-0.021c0,0.951,0,1.903,0,2.856C21.118,26.65,20.375,26.66,19.633,26.666z M19.633,22.732
|
|
||||||
c-0.001-0.963-0.001-1.924-0.001-2.885c0.741-0.002,1.483-0.006,2.226-0.012c0,0.959,0.002,1.918,0.002,2.877
|
|
||||||
C21.116,22.72,20.374,22.728,19.633,22.732z M18.586,22.736c-0.744,0.002-1.487,0.002-2.23,0c0-0.963,0-1.924,0-2.887
|
|
||||||
c0.743,0.002,1.487,0.002,2.23,0C18.586,20.813,18.586,21.773,18.586,22.736z M15.309,22.732c-0.742-0.004-1.483-0.012-2.226-0.02
|
|
||||||
c0-0.959,0.001-1.918,0.001-2.877c0.742,0.006,1.484,0.01,2.226,0.012C15.31,20.808,15.309,21.769,15.309,22.732z M15.309,23.801
|
|
||||||
c0,0.955,0,1.91,0,2.864c-0.742-0.006-1.483-0.016-2.227-0.027c0-0.953,0-1.906,0-2.859C13.825,23.789,14.566,23.796,15.309,23.801z
|
|
||||||
M12.036,26.617c-0.742-0.017-1.483-0.033-2.225-0.055c0-0.947,0-1.895,0.001-2.841c0.741,0.019,1.483,0.031,2.225,0.042
|
|
||||||
C12.037,24.716,12.036,25.666,12.036,26.617z M12.035,27.683c0,0.957,0,1.916,0,2.873c-0.742-0.021-1.483-0.047-2.225-0.076
|
|
||||||
c0-0.953,0-1.904,0-2.857C10.552,27.646,11.293,27.667,12.035,27.683z M12.035,31.621c0,0.957-0.001,1.914-0.001,2.871
|
|
||||||
c-0.742-0.023-1.483-0.055-2.224-0.092c0-0.953,0-1.906,0-2.859C10.551,31.572,11.292,31.6,12.035,31.621z M12.033,35.56
|
|
||||||
c0,0.956-0.001,1.914-0.001,2.871c-0.742-0.031-1.484-0.066-2.225-0.111c0-0.953,0.001-1.906,0.001-2.858
|
|
||||||
C10.549,35.5,11.291,35.533,12.033,35.56z M12.031,39.498c0,0.955,0,1.914-0.001,2.869c-0.742-0.035-1.484-0.078-2.225-0.129
|
|
||||||
c0-0.953,0-1.904,0.001-2.857C10.547,39.426,11.289,39.465,12.031,39.498z M12.03,43.435c0,0.951-0.001,1.901-0.001,2.854
|
|
||||||
c-0.742-0.041-1.484-0.09-2.225-0.149c0-0.944,0.001-1.892,0.001-2.838C10.546,43.353,11.288,43.4,12.03,43.435z M13.077,43.482
|
|
||||||
c0.743,0.031,1.486,0.053,2.228,0.067c0,0.956,0,1.91,0,2.864c-0.742-0.016-1.486-0.041-2.229-0.074
|
|
||||||
C13.077,45.389,13.077,44.435,13.077,43.482z M15.305,47.486c0,0.961,0,1.922,0,2.883c-0.743-0.019-1.487-0.047-2.23-0.084
|
|
||||||
c0-0.959,0-1.918,0.001-2.875C13.818,47.443,14.562,47.468,15.305,47.486z M16.353,47.504c0.745,0.009,1.49,0.009,2.234,0
|
|
||||||
c0.001,0.96,0.001,1.924,0.001,2.883c-0.745,0.011-1.49,0.011-2.235,0C16.353,49.427,16.353,48.464,16.353,47.504z M19.639,47.486
|
|
||||||
c0.741-0.018,1.483-0.043,2.227-0.076c0,0.957,0.002,1.916,0.002,2.875c-0.742,0.037-1.486,0.065-2.229,0.084
|
|
||||||
C19.639,49.406,19.639,48.447,19.639,47.486z M19.637,46.414c0-0.954,0-1.908,0-2.864c0.742-0.015,1.484-0.036,2.229-0.067
|
|
||||||
c0,0.953,0,1.905,0,2.857C21.122,46.373,20.379,46.398,19.637,46.414z M22.911,43.435c0.741-0.035,1.483-0.082,2.224-0.135
|
|
||||||
c0,0.945,0,1.895,0.002,2.838c-0.74,0.059-1.482,0.107-2.226,0.15C22.911,45.336,22.911,44.386,22.911,43.435z M22.911,42.369
|
|
||||||
c-0.001-0.957-0.001-1.914-0.002-2.871c0.741-0.032,1.483-0.069,2.225-0.117c0,0.954,0.001,1.906,0.001,2.857
|
|
||||||
C24.395,42.289,23.652,42.333,22.911,42.369z M22.909,38.431c0-0.957-0.001-1.915-0.001-2.871c0.742-0.027,1.482-0.061,2.224-0.098
|
|
||||||
c0.001,0.951,0.001,1.904,0.001,2.857C24.393,38.363,23.65,38.4,22.909,38.431z M22.908,34.494c0-0.957-0.002-1.916-0.002-2.871
|
|
||||||
c0.742-0.021,1.482-0.051,2.225-0.079c0,0.952,0,1.903,0.001,2.856C24.391,34.437,23.648,34.468,22.908,34.494z M22.906,30.556
|
|
||||||
c0-0.957,0-1.916-0.002-2.873c0.742-0.016,1.484-0.037,2.226-0.061c0,0.953,0.001,1.904,0.001,2.857
|
|
||||||
C24.391,30.509,23.648,30.535,22.906,30.556z M22.904,26.617c0-0.951,0-1.901,0-2.854c0.74-0.011,1.482-0.025,2.224-0.042
|
|
||||||
c0,0.946,0.001,1.894,0.001,2.841C24.389,26.583,23.646,26.601,22.904,26.617z M22.902,22.699c0-0.957,0-1.916,0-2.874
|
|
||||||
c0.742-0.007,1.482-0.014,2.225-0.023c0.001,0.953,0.001,1.906,0.001,2.859C24.387,22.676,23.646,22.689,22.902,22.699z
|
|
||||||
M22.902,18.76C22.9,17.802,22.9,16.845,22.9,15.887c0.742,0,1.481-0.003,2.225-0.004c0.001,0.953,0.001,1.906,0.002,2.858
|
|
||||||
C24.385,18.75,23.643,18.756,22.902,18.76z M21.855,18.767c-0.742,0.004-1.482,0.007-2.225,0.009c0-0.961,0-1.922,0-2.884
|
|
||||||
c0.741,0,1.482-0.001,2.225-0.002C21.855,16.849,21.855,17.808,21.855,18.767z M18.585,18.779c-0.743,0.001-1.486,0.001-2.229,0
|
|
||||||
c0-0.961,0-1.923,0-2.885c0.742,0,1.486,0,2.229,0C18.585,16.855,18.585,17.817,18.585,18.779z M15.31,18.777
|
|
||||||
c-0.742-0.002-1.483-0.005-2.225-0.009c0-0.959,0-1.918,0-2.877c0.742,0,1.483,0.001,2.225,0.002
|
|
||||||
C15.31,16.854,15.31,17.815,15.31,18.777z M12.039,18.76c-0.742-0.005-1.483-0.011-2.225-0.019c0-0.953,0-1.905,0.001-2.858
|
|
||||||
c0.742,0.001,1.483,0.004,2.224,0.004C12.039,16.845,12.039,17.803,12.039,18.76z M12.039,19.827c0,0.957-0.001,1.915-0.001,2.872
|
|
||||||
c-0.741-0.01-1.483-0.021-2.224-0.035c0-0.953,0-1.906,0-2.859C10.555,19.813,11.296,19.819,12.039,19.827z M8.768,22.64
|
|
||||||
c-0.741-0.018-1.482-0.035-2.223-0.057c0-0.943,0-1.887,0-2.831c0.741,0.013,1.482,0.025,2.223,0.036
|
|
||||||
C8.768,20.739,8.768,21.689,8.768,22.64z M8.767,23.697c0,0.944,0,1.89,0,2.832c-0.741-0.024-1.482-0.053-2.223-0.084
|
|
||||||
c0-0.938,0-1.873,0-2.811C7.284,23.658,8.026,23.679,8.767,23.697z M8.766,27.587c0,0.949-0.001,1.898-0.001,2.85
|
|
||||||
c-0.74-0.033-1.481-0.068-2.222-0.111c0-0.942,0-1.887,0-2.83C7.284,27.529,8.025,27.56,8.766,27.587z M8.765,31.494
|
|
||||||
c0,0.951-0.001,1.9-0.001,2.852c-0.74-0.04-1.481-0.087-2.221-0.139c0-0.943,0-1.887,0-2.831C7.283,31.42,8.023,31.459,8.765,31.494
|
|
||||||
z M8.763,35.404c0,0.949,0,1.899,0,2.851c-0.741-0.052-1.481-0.104-2.22-0.168c0-0.942,0-1.886,0-2.829
|
|
||||||
C7.282,35.31,8.022,35.361,8.763,35.404z M8.762,39.312c0,0.949,0,1.899-0.001,2.852c-0.741-0.059-1.48-0.123-2.219-0.195
|
|
||||||
c0-0.943,0-1.889,0-2.83C7.281,39.203,8.021,39.26,8.762,39.312z M8.76,43.219c0,0.944,0,1.888-0.001,2.832
|
|
||||||
c-0.74-0.065-1.479-0.14-2.218-0.224c0-0.938,0-1.875,0-2.812C7.281,43.092,8.02,43.16,8.76,43.219z M8.759,47.109
|
|
||||||
c0,0.951,0,1.9,0,2.851c-0.741-0.073-1.48-0.158-2.219-0.253c0-0.942,0-1.887,0-2.828C7.279,46.964,8.019,47.039,8.759,47.109z
|
|
||||||
M9.804,47.201c0.741,0.06,1.483,0.111,2.224,0.154c0,0.955,0,1.912,0,2.868c-0.742-0.045-1.484-0.103-2.225-0.166
|
|
||||||
C9.804,49.107,9.804,48.154,9.804,47.201z M12.027,51.291c0,0.957,0,1.916,0,2.873c-0.742-0.053-1.484-0.114-2.225-0.188
|
|
||||||
c0-0.951,0.001-1.904,0.001-2.857C10.544,51.187,11.285,51.244,12.027,51.291z M13.075,51.353c0.743,0.039,1.486,0.067,2.229,0.086
|
|
||||||
c0,0.961,0,1.922,0,2.885c-0.743-0.021-1.487-0.053-2.229-0.094C13.075,53.269,13.075,52.312,13.075,51.353z M16.353,51.459
|
|
||||||
c0.745,0.009,1.49,0.009,2.235,0c0,0.961,0,1.924,0,2.885c-0.745,0.013-1.491,0.013-2.235,0
|
|
||||||
C16.353,53.382,16.353,52.42,16.353,51.459z M19.639,51.439c0.741-0.019,1.485-0.049,2.229-0.086c0,0.959,0,1.92,0.001,2.877
|
|
||||||
c-0.743,0.041-1.485,0.072-2.229,0.094C19.639,53.361,19.639,52.4,19.639,51.439z M22.913,51.291
|
|
||||||
c0.743-0.047,1.483-0.104,2.226-0.172c0,0.953,0,1.906,0,2.857c-0.74,0.073-1.481,0.135-2.224,0.188
|
|
||||||
C22.914,53.205,22.914,52.248,22.913,51.291z M22.913,50.224c-0.001-0.956-0.001-1.912-0.001-2.869
|
|
||||||
c0.742-0.043,1.484-0.095,2.225-0.154c0,0.953,0,1.906,0.002,2.857C24.396,50.123,23.654,50.179,22.913,50.224z M26.184,47.109
|
|
||||||
c0.739-0.066,1.479-0.145,2.217-0.229c0,0.942,0,1.887,0,2.83c-0.736,0.092-1.478,0.177-2.217,0.252
|
|
||||||
C26.184,49.009,26.184,48.06,26.184,47.109z M26.184,46.051c-0.002-0.944-0.002-1.888-0.002-2.832
|
|
||||||
c0.739-0.06,1.48-0.127,2.219-0.202c0,0.938,0,1.873,0,2.811C27.662,45.912,26.923,45.986,26.184,46.051z M26.182,42.162
|
|
||||||
c0-0.95-0.002-1.9-0.002-2.85c0.74-0.052,1.48-0.109,2.219-0.176c0.002,0.943,0.002,1.887,0.002,2.83
|
|
||||||
C27.662,42.039,26.921,42.105,26.182,42.162z M26.18,38.253c0-0.95,0-1.9-0.002-2.852c0.742-0.041,1.482-0.093,2.221-0.146
|
|
||||||
c0,0.942,0,1.887,0,2.829C27.66,38.15,26.92,38.203,26.18,38.253z M26.178,34.345c0-0.949,0-1.898,0-2.852
|
|
||||||
c0.74-0.034,1.481-0.073,2.221-0.117c0,0.943,0,1.887,0,2.83C27.659,34.258,26.918,34.305,26.178,34.345z M26.177,30.437
|
|
||||||
c0-0.949,0-1.9-0.001-2.85c0.741-0.027,1.481-0.059,2.221-0.092c0,0.943,0.002,1.888,0.002,2.83
|
|
||||||
C27.659,30.367,26.918,30.404,26.177,30.437z M26.176,26.529c-0.001-0.942-0.001-1.888-0.001-2.832
|
|
||||||
c0.742-0.018,1.482-0.039,2.222-0.063c0,0.938,0,1.873,0,2.811C27.657,26.476,26.917,26.503,26.176,26.529z M26.174,22.64
|
|
||||||
c0-0.951-0.001-1.901-0.001-2.851c0.741-0.01,1.483-0.022,2.224-0.035c0,0.943,0,1.886,0,2.831
|
|
||||||
C27.657,22.605,26.915,22.623,26.174,22.64z M8.769,15.881c0,0.95,0,1.9-0.001,2.85c-0.741-0.008-1.482-0.018-2.223-0.028
|
|
||||||
c0-0.943,0-1.887,0-2.83C7.286,15.876,8.028,15.878,8.769,15.881z M6.54,50.758c0.738,0.097,1.478,0.183,2.218,0.258
|
|
||||||
c0,0.95,0,1.901,0,2.853c-0.741-0.084-1.48-0.178-2.218-0.28C6.54,52.646,6.54,51.701,6.54,50.758z M26.184,53.869
|
|
||||||
c0-0.95,0-1.899,0-2.853c0.739-0.075,1.479-0.163,2.217-0.259c0.002,0.941,0.002,1.889,0.002,2.83
|
|
||||||
C27.663,53.693,26.925,53.785,26.184,53.869z"/>
|
|
||||||
<path id="highlight_2_" opacity="0.17" fill="#FFFFFF" enable-background="new " d="M0,0.943h5.497c0,0,6.847-0.943,11.974-0.943
|
|
||||||
C22.6,0,29.48,0.943,29.48,0.943h5.496v41.951c0,0-12.076-0.521-18.623-2.548C9.807,38.32,0,30.557,0,30.557V0.943z"/>
</svg>
</y:Resource>
</y:Resources>
</data>
</graphml>
Before Width: | Height: | Size: 34 KiB |
@ -1,8 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

sphinx!=1.6.6,!=1.6.7,>=1.6.2,<2.0.0;python_version=='2.7' # BSD
sphinx!=1.6.6,!=1.6.7,!=2.1.0,>=1.6.2;python_version>='3.4' # BSD
openstackdocstheme>=1.18.1 # Apache-2.0
reno>=2.5.0 # Apache-2.0
@ -1,78 +0,0 @@
=================
dragonflow-status
=================

Synopsis
========

::

  dragonflow-status <category> <command> [<args>]

Description
===========

:program:`dragonflow-status` is a tool that provides routines for checking the
status of a Dragonflow deployment.

Options
=======

The standard pattern for executing a :program:`dragonflow-status` command is::

    dragonflow-status <category> <command> [<args>]

Run without arguments to see a list of available command categories::

    dragonflow-status

Categories are:

* ``upgrade``

Detailed descriptions are below.

You can also run with a category argument such as ``upgrade`` to see a list of
all commands in that category::

    dragonflow-status upgrade

These sections describe the available categories and arguments for
:program:`dragonflow-status`.

Upgrade
~~~~~~~

.. _dragonflow-status-checks:

``dragonflow-status upgrade check``
  Performs a release-specific readiness check before restarting services with
  new code. This command expects to have complete configuration and access
  to databases and services.

  **Return Codes**

  .. list-table::
     :widths: 20 80
     :header-rows: 1

     * - Return code
       - Description
     * - 0
       - All upgrade readiness checks passed successfully and there is nothing
         to do.
     * - 1
       - At least one check encountered an issue and requires further
         investigation. This is considered a warning but the upgrade may be OK.
     * - 2
       - There was an upgrade status check failure that needs to be
         investigated. This should be considered something that stops an
         upgrade.
     * - 255
       - An unexpected error occurred.

  **History of Checks**

  **4.0.0 (Stein)**

  * Placeholder to be filled in with checks as they are added in Stein.
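The return codes above lend themselves to simple automation around upgrades. A minimal sketch of that idea (the wrapper below is ours, not part of Dragonflow; only the exit codes come from the table above):

```python
import subprocess

# Exit codes documented for `dragonflow-status upgrade check`.
CHECKS_PASSED, CHECKS_WARNING, CHECKS_FAILED, UNEXPECTED_ERROR = 0, 1, 2, 255

def upgrade_decision(code):
    """Translate an upgrade-check exit code into an operator-facing decision."""
    if code == CHECKS_PASSED:
        return 'proceed'
    if code == CHECKS_WARNING:
        return 'proceed after reviewing warnings'
    if code == CHECKS_FAILED:
        return 'stop: resolve failures before upgrading'
    return 'stop: investigate unexpected error'

def run_upgrade_check(runner=subprocess.call):
    # `runner` is injectable so the decision logic can be exercised without
    # a real deployment; by default it would invoke the actual CLI.
    return upgrade_decision(runner(['dragonflow-status', 'upgrade', 'check']))
```

Gating deployment tooling on the decision string (rather than the raw code) keeps the policy in one place.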
@ -1,7 +0,0 @@
CLI Guide
=========

.. toctree::
   :maxdepth: 1

   dragonflow-status
@ -1,84 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    #'sphinx.ext.intersphinx',
    'openstackdocstheme',
    'reno.sphinxext'
]

# openstackdocstheme options
repository_name = 'openstack/dragonflow'
bug_project = 'dragonflow'
bug_tag = ''

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'dragonflow'
copyright = u'2013, OpenStack Foundation'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
html_theme = 'openstackdocs'
#html_theme_path = []
# html_static_path = ['static']

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

html_last_updated_fmt = '%Y-%m-%d %H:%M'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]

# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}
@ -1,12 +0,0 @@
==============
Configurations
==============

Since Newton, configuration files should be generated as follows::

    tox -e genconfig

If a ``tox`` environment is unavailable, you can instead run the following
script to generate the configuration files::

    ./tools/generate_config_file_samples.sh
@ -1,4 +0,0 @@
==========
Containers
==========
@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst
@ -1,43 +0,0 @@
..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

      Convention for heading levels:
      =======  Heading 0 (reserved for the title in a document)
      -------  Heading 1
      ~~~~~~~  Heading 2
      +++++++  Heading 3
      '''''''  Heading 4
      (Avoid deeper levels because they do not render well.)


Developer references
====================

This section contains developer-oriented documents covering the actual code
present in Dragonflow and its testing infrastructure.

Specs
-----

.. toctree::
   :maxdepth: 3

   models


Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@ -1,248 +0,0 @@
..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

      Convention for heading levels:
      =======  Heading 0 (reserved for the title in a document)
      -------  Heading 1
      ~~~~~~~  Heading 2
      +++++++  Heading 3
      '''''''  Heading 4
      (Avoid deeper levels because they do not render well.)

Models
======

Dragonflow, like many other projects interfacing with a database, uses a
model layer to allow uniform and easy access to the data stored in the
(north-bound) database. The current model framework is the fruit of the
:doc:`../specs/nb_api_refactor` spec.

Creating new models
-------------------

Each new model should be defined as a subclass of `ModelBase` and decorated
with the `construct_nb_db_model` decorator. Consider the following example:

.. code-block:: python

   @model_framework.construct_nb_db_model
   class Movie(model_framework.ModelBase):
       table_name = 'movies'

       title = fields.StringField(required=True)
       year = fields.IntField()
       director = fields.ReferenceField(Director)
       awards = fields.ListField(str)

The above example defines a new `Movie` model that contains 5 fields:

#. `id` - Object identifier, derived from `ModelBase`, present in all model
   objects.
#. `title` - A string containing the movie title, marked as mandatory.
#. `year` - The year the movie was released.
#. `director` - A reference field to an object of director type (covered
   later).
#. `awards` - A list of all the awards the movie received.

The class definition also contains a `table_name` attribute that stores the
name of the table in which our model is stored in the north-bound database.


Initializing such an object is done by passing the field values as keyword
arguments:

.. code-block:: python

   a_space_oddyssey = Movie(
       id='movie-id-2001',
       title='2001: A Space Oddyssey',
       year=1968,
       director=Director(id='stanley-kubrick'),
       awards=[
           'Academy Award for Best Visual Effects',
       ],
   )


Since we expect to write our data to the database as a JSON document, the
above object will be translated to:

.. code-block:: json

   {
       "id": "movie-id-2001",
       "title": "2001: A Space Oddyssey",
       "year": 1968,
       "director": "stanley-kubrick",
       "awards": ["Academy Award for Best Visual Effects"]
   }
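The flattening convention shown above (a reference collapsing to its target's `id` in the stored document) can be sketched in plain Python. This is an illustration only; `Ref` and `to_json` below are hypothetical helpers, not Dragonflow's actual serializer:

```python
import json

class Ref:
    """Stand-in for a reference field value: serializes to the target's id."""
    def __init__(self, id):
        self.id = id

def to_json(obj):
    # Collapse reference values to their ids, keep everything else as-is.
    doc = {k: (v.id if isinstance(v, Ref) else v) for k, v in vars(obj).items()}
    return json.dumps(doc)

class Movie:
    def __init__(self, **kw):
        vars(self).update(kw)

doc = to_json(Movie(id='movie-id-2001', year=1968, director=Ref('stanley-kubrick')))
```

The stored document carries only `"stanley-kubrick"` for the director, which is why a lookup is needed on the way back (see References below the Registry section).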


Registry
--------

The model framework provides another decorator, `@register_model`, which adds
the class to an internal lookup table.

This allows iterating over the registered models, and retrieving models by
class name or table name, e.g.:

.. code-block:: python

   for model in iterate_models():
       instances = db_store.get_all(model)

or

.. code-block:: python

   movie_class = get_model('movies')
   movie = movie_class(**params)
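Such a registry is essentially a lookup table keyed by table name, populated by the decorator. A self-contained sketch of the idea (the internals here are ours, only the `register_model` / `get_model` / `iterate_models` names come from the text above):

```python
_registry = {}

def register_model(cls):
    """Class decorator: index the model class by its table name."""
    _registry[cls.table_name] = cls
    return cls

def get_model(table_name):
    return _registry[table_name]

def iterate_models():
    return iter(_registry.values())

@register_model
class Movie:
    table_name = 'movies'
    def __init__(self, **kw):
        vars(self).update(kw)

# Retrieve the class by table name, then instantiate it.
movie = get_model('movies')(id='movie-id-2001', title='2001: A Space Oddyssey')
```

Returning `cls` from the decorator is what lets the class statement itself stay unchanged.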

References
----------

Oftentimes one model is related to another object in some manner. Consider the
movie example above: each movie has a director, so somewhere in our code we
have a director model, in the form of:

.. code-block:: python

   class Director(ModelBase):
       table_name = 'directors'

       full_name = fields.StringField()

In order to allow easy association and lookup, we can define a reference field
that will retrieve the actual object (by its ID) behind the scenes. Suppose
we have the following object in our database:

.. code-block:: python

   kubrick = Director(id='stanley-kubrick', full_name='Stanley Kubrick')

We can now access the `kubrick` object through `a_space_oddyssey.director`,
e.g.:

.. code-block:: python

   >>> a_space_oddyssey.director.full_name
   'Stanley Kubrick'


The fetching is done behind the scenes (first from the local cache, then from
the north-bound database).
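The behind-the-scenes lookup can be sketched as a Python descriptor that stores an id and resolves it on attribute access. This is an illustration of the mechanism only; the classes and the lookup below are hypothetical, not Dragonflow code:

```python
class ReferenceField:
    """Descriptor: stores a target id, resolves the object on access."""
    def __init__(self, lookup):
        self._lookup = lookup  # id -> object (cache first, then NB database)

    def __set_name__(self, owner, name):
        self._attr = '_' + name

    def __set__(self, obj, target_id):
        setattr(obj, self._attr, target_id)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return self._lookup(getattr(obj, self._attr))

# A toy "database" of directors, keyed by id.
directors = {'stanley-kubrick': type('D', (), {'full_name': 'Stanley Kubrick'})()}

class Movie:
    director = ReferenceField(directors.get)

movie = Movie()
movie.director = 'stanley-kubrick'  # assign the id, read back the object
```

In practice the lookup would consult the local cache and fall back to the north-bound database, as the text describes.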


Events
------

Each model can define an arbitrary set of events, which can be used to invoke
callbacks on various conditions. Events are inherited from parent classes, and
are specified in the `events=` parameter of the `construct_nb_db_model`
decorator:

.. code-block:: python

   @construct_nb_db_model(events={'premiered'})
   class Movie(ModelBase):
       # ...

For each event, two class methods are defined:

* `register_{event_name}(callback)` - adds a callback to be invoked each time
  the event is emitted.
* `unregister_{event_name}(callback)` - removes the callback from being
  called.

Additionally, an instance method named `emit_{event_name}(*args, **kwargs)` is
added.

Emit can only be called on an instance, and the originating instance is passed
as the first parameter to all the callbacks, followed by `*args` and
`**kwargs`. So a call

.. code-block:: python

   a_space_oddyssey.emit_premiered(1, 2, 3, a='a', b='b')

would be translated to a sequence of

.. code-block:: python

   callback(a_space_oddyssey, 1, 2, 3, a='a', b='b')

The convention of parameters is specific to each event.

The register calls can also be used as decorators for some extra syntactic
sugar:

.. code-block:: python

   @Movie.register_premiered
   def on_premiere(movie):
       print('{title} has premiered'.format(title=movie.title))
|
|
||||||
|
|
||||||
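A minimal, self-contained sketch of what the generated event machinery does.
This is illustrative only; in Dragonflow the methods are generated by the
decorator, not hand-written, and this `Movie` class is a stand-in:

```python
# Minimal, hand-written sketch of the generated event machinery
# (illustrative only -- the real methods come from the
# construct_nb_db_model decorator, and this Movie class is a stand-in).
class Movie(object):
    _premiered_callbacks = []

    def __init__(self, title):
        self.title = title

    @classmethod
    def register_premiered(cls, callback):
        cls._premiered_callbacks.append(callback)
        return callback  # returning the callback enables decorator usage

    @classmethod
    def unregister_premiered(cls, callback):
        cls._premiered_callbacks.remove(callback)

    def emit_premiered(self, *args, **kwargs):
        # The emitting instance is always passed first to each callback.
        for callback in self._premiered_callbacks:
            callback(self, *args, **kwargs)


seen = []

@Movie.register_premiered
def on_premiere(movie, *args, **kwargs):
    seen.append((movie.title, args, kwargs))

Movie('2001: A Space Odyssey').emit_premiered(1968, country='UK')
print(seen)  # [('2001: A Space Odyssey', (1968,), {'country': 'UK'})]
```

After `unregister_premiered(on_premiere)`, further emits no longer reach the
callback.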
Indexes
-------

To allow easy retrieval and lookup of in-memory objects, we use the DbStore
module to fetch by IDs and other properties. The new DbStore takes note of a
model's indexes and creates lookups to allow faster retrieval. Indexes,
similar to events, are passed in the `indexes=` parameter of the
`construct_nb_db_model` decorator and are specified as a dictionary where the
key is the index name and the value is the field indexed by (or a tuple of
fields, if the index is multi-key). For example, if we'd like to add an index
by year, we can define it as:

.. code-block:: python

    @construct_nb_db_model(indexes={'by_year': 'year'})
    class Movie(ModelBase):
        # ...

then query db_store by providing the index and the keys:

.. code-block:: python

    all_1968_movies = db_store.get_all(
        Movie(year=1968),
        index=Movie.get_index('by_year'),
    )

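The index idea can be illustrated with a toy store: a dict keyed by the
indexed field gives constant-time bucket lookup. The real DbStore API is
richer; the names below are illustrative stand-ins:

```python
# Toy index-backed store: a dict keyed by the indexed field gives O(1)
# bucket lookup. The real DbStore API is richer; names here are
# illustrative stand-ins.
from collections import defaultdict

class SimpleStore(object):
    def __init__(self, index_field):
        self._index_field = index_field
        self._buckets = defaultdict(list)

    def add(self, obj):
        # File each object under the value of its indexed field.
        self._buckets[getattr(obj, self._index_field)].append(obj)

    def get_all(self, key):
        return list(self._buckets[key])


class Movie(object):
    def __init__(self, title, year):
        self.title = title
        self.year = year


store = SimpleStore('year')
store.add(Movie('2001: A Space Odyssey', 1968))
store.add(Movie('Barry Lyndon', 1975))
titles = [m.title for m in store.get_all(1968)]
print(titles)  # ['2001: A Space Odyssey']
```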
Hooks
-----

We can also define several entry points to be called on various CRUD events
in the north-bound API. For example, if we wished to better track access to
our movie and director objects, we could define a common class:

.. code-block:: python

    class AccessMixin(MixinBase):
        created_at = fields.DateTimeField()
        updated_at = fields.DateTimeField()

        def on_create_pre(self):
            super(AccessMixin, self).on_create_pre()
            self.created_at = datetime.datetime.now()

        def on_update_pre(self, orig):
            super(AccessMixin, self).on_update_pre(orig)
            self.updated_at = datetime.datetime.now()

The above code updates the relevant fields on create/update operations, so if
we add it as a parent class of our Movie or Director classes we'll receive
the new functionality:

.. code-block:: python

    # ...
    class Movie(ModelBase, AccessMixin):
        # ...
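A runnable sketch of the same hook pattern, using plain attributes in place
of the Dragonflow field types (the caller of `on_create_pre` is assumed to be
the NB API):

```python
# Runnable sketch of the CRUD-hook mixin pattern above, with plain
# attributes standing in for the Dragonflow field types.
import datetime

class MixinBase(object):
    def on_create_pre(self):
        pass  # base hook; subclasses cooperate via super()

class AccessMixin(MixinBase):
    def on_create_pre(self):
        super(AccessMixin, self).on_create_pre()
        self.created_at = datetime.datetime.now()

class Movie(AccessMixin):
    def __init__(self, title):
        self.title = title

movie = Movie('2001: A Space Odyssey')
movie.on_create_pre()  # the NB API would call this before create
print(isinstance(movie.created_at, datetime.datetime))  # True
```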
@ -1,57 +0,0 @@
================
Distributed DHCP
================

Current Neutron Reference Implementation
========================================
The DHCP server is implemented using the Dnsmasq server,
running in a namespace on the network node, per tenant subnet
that is configured with DHCP enabled.

Currently, high availability is achieved by running multiple Dnsmasq
servers on multiple network nodes.

There is a namespace with a Dnsmasq server per tenant subnet.

Problems with the current DHCP implementation:

1) Management and scalability

   - Need to configure and manage multiple Dnsmasq instances

2) Centralized solution dependent on the network node

DHCP agent
----------
Same concept as the L3 agent and namespaces for virtual routers:
black boxes that implement the functionality are used as the IaaS
backbone implementation.


Distributed DHCP In Dragonflow
==============================
Dragonflow distributes the DHCP policy/configuration using the pluggable DB.
Each local controller installs DHCP redirection OpenFlow rules, so that DHCP
packets are handled by the local controller.
Those rules are installed only for local ports that are
attached to a virtual network with DHCP enabled.

The controller sets the flow metadata in the redirection rules
to the local port's unique key, as a hint to allow fast port info lookup
for the reactive DHCP packets handled by the DHCP application.

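The metadata-hint lookup can be illustrated with a toy map from each local
port's unique key to its port info; the table contents and field names below
are illustrative only, not Dragonflow data structures:

```python
# Toy sketch of the metadata-hint lookup: the controller keeps a map
# from each local port's unique key to its port info, so the flow
# metadata on a redirected DHCP packet resolves the port in one lookup.
# The table contents and field names here are illustrative only.
ports_by_unique_key = {
    7: {'id': 'port-a', 'ip': '10.0.0.12', 'mac': 'fa:16:3e:00:00:01'},
}

def lookup_port(flow_metadata):
    # The flow metadata was set to the port's unique key by the
    # redirect rule, so one dict lookup recovers the port info.
    return ports_by_unique_key.get(flow_metadata)

print(lookup_port(7)['ip'])  # 10.0.0.12
```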
The local DHCP application handles the redirected DHCP packets and answers as
a DHCP server. DHCP traffic is handled directly at the compute node and never
goes on the network.

The following diagrams demonstrate this process:

.. image:: ../images/dhcp1.jpg
    :alt: Distributed DHCP 1
    :width: 600
    :height: 525
    :align: center

.. image:: ../images/dhcp2.jpg
    :alt: Distributed DHCP 2
    :width: 600
    :height: 525
    :align: center

@ -1,143 +0,0 @@
======================
Distributed Dragonflow
======================

Dragonflow is a distributed SDN controller for OpenStack® Neutron™
supporting distributed Switching, Routing, DHCP and more.

Our project mission is to implement advanced networking services in a
manner that is efficient, elegant and simple.

Dragonflow is designed to support large scale deployments with a focus on
latency and performance, as well as providing advanced innovative
services that run locally on each compute node, with container technology
in mind.

Mission Statement
-----------------

* Implement Neutron APIs using SDN principles, while keeping both
  Plug-in and Implementation fully under OpenStack project and
  governance.
* 100% open source; contributors are welcome to partner and share
  a mutual vision.
* Lightweight and simple in terms of code size and complexity, so
  new users / contributors have a simple and fast ramp-up.
* Aim for performance-intensive environments, where latency is a
  big deal, while being small and intuitive enough to run on
  small ones as well.
* Completely pluggable design, easy to extend and enhance.
* We *truly* believe in a distributed control plane.

Key Design Guidelines
---------------------
* Pluggable database, determines scale, lookup performance and latency
* Policy-level/Topology abstraction synchronization to the Compute Node
* Local Dragonflow Controller uses a reactive model (where it makes sense)
* Loadable Network Services Framework

High Level Architecture
-----------------------

.. image:: ../images/dragonflow_distributed_architecture.png
    :alt: Solution Overview
    :width: 600
    :height: 455
    :align: center

^^^^^^^^
Overview
^^^^^^^^
A Dragonflow environment consists of a local controller running on each of
the compute nodes in the setup.

These controllers all sync the network topology and policy using a pluggable
DB solution.
The controllers then map the policy into OpenFlow flows using the local
Dragonflow applications that communicate with the local Open vSwitch.

The DB is populated by the Dragonflow Neutron plugin, which converts Neutron
API objects to our model.

The following sections each describe a specific topic/functionality in
Dragonflow.

Dragonflow Supported Features
=============================
1) L2 core API, IPv4, IPv6

   Supports GRE/VxLAN/Geneve tunneling protocols

2) Distributed virtual router (L3)

   Supports a hybrid of proactive and reactive flow installation

3) Distributed DHCP

4) Pluggable distributed database

   ETCD, RethinkDB, RAMCloud, OVSDB

Dragonflow Pipeline
===================
`Dragonflow Pipeline <https://docs.openstack.org/dragonflow/latest/pipeline.html>`_

Dragonflow Pluggable DB
=======================
`Pluggable DB
<https://docs.openstack.org/dragonflow/latest/pluggable_db.html>`_

Distributed DHCP Application
============================

`Distributed DHCP Application
<https://docs.openstack.org/dragonflow/latest/distributed_dhcp.html>`_

Containers and Dragonflow
=========================
`Dragonflow and Containers <https://docs.openstack.org/dragonflow/latest/containers.html>`_

Dragonflow Roadmap
==================

The following topics are areas we are examining for future features and
roadmap of the Dragonflow project:

- Containers
- Distributed SNAT/DNAT
- Reactive DB
- Topology Service Injection / Service Chaining
- Smart NICs
- Hierarchical Port Binding (SDN ToR)
- Inter Cloud Connectivity (Border Gateway / L2GW)
- Fault Detection

How to Install
--------------

- `Installation Guide <https://docs.openstack.org/dragonflow/latest/readme.html>`_
- `DevStack Single Node Configuration
  <https://github.com/openstack/dragonflow/tree/master/doc/source/single-node-conf>`_
- `DevStack Multi Node Configuration
  <https://github.com/openstack/dragonflow/tree/master/doc/source/multi-node-conf>`_

Dragonflow Talks
----------------
- `Dragonflow - Neutron done the SDN Way - OpenStack Austin Summit
  <https://www.openstack.org/videos/video/dragonflow-neutron-done-the-sdn-way>`_
- `Dragonflow Introduction Video - OpenStack Tokyo Summit
  <https://www.youtube.com/watch?v=wo1Q-BL3nII>`_

More Useful Reading
-------------------
- `Distributed DHCP Service in Dragonflow
  <http://blog.gampel.net/2015/09/dragonflow-distributed-dhcp-for.html>`_
- `Centralized vs. Distributed SDN Controller in Dragonflow
  <http://blog.gampel.net/2015/08/centralized-vs-distributed-sdn-control.html>`_
- `Dragonflow in OpenStack Liberty
  <http://galsagie.github.io/2015/10/14/dragonflow-liberty/>`_
- `Dragonflow Distributed Database
  <http://galsagie.github.io/2015/08/03/df-distributed-db/>`_
- `Topology Service Injection
  <http://galsagie.github.io/2015/11/10/topology-service-injection/>`_
- `Dragonflow Security Groups Design at Scale
  <http://galsagie.github.io/2015/12/28/dragonflow-security-groups/>`_
- `Neutron DB Consistency
  <http://galsagie.github.io/2016/02/14/neutron-db-consistency/>`_

@ -1,125 +0,0 @@
==============
DOCKER INSTALL
==============

Building the image
------------------
* Run the following command

.. code-block:: bash

    docker build --tag dragonflow .


Running the image
-----------------

Preparation work
~~~~~~~~~~~~~~~~
* Create a network to be used by the containers. Use any subnet you find fit;
  the subnet here is just an example.

.. code-block:: bash

    export DRAGONFLOW_NET_NAME=dragonflow_net
    docker network create --subnet=172.18.0.0/16 $DRAGONFLOW_NET_NAME

Running etcd node
~~~~~~~~~~~~~~~~~
* Run the following commands:

.. code-block:: bash

    mkdir -p /tmp/etcd
    chcon -Rt svirt_sandbox_file_t /tmp/etcd
    export NODE1=172.18.0.2  # Any free IP in the subnet
    export DATA_DIR=/tmp/etcd
    docker run --detach --net $DRAGONFLOW_NET_NAME --ip ${NODE1} --volume=${DATA_DIR}:/etcd-data --name etcd quay.io/coreos/etcd:latest /usr/local/bin/etcd --data-dir=/etcd-data --name node1 --initial-advertise-peer-urls http://${NODE1}:2380 --listen-peer-urls http://${NODE1}:2380 --advertise-client-urls http://${NODE1}:2379 --listen-client-urls http://${NODE1}:2379 --initial-cluster node1=http://${NODE1}:2380

* Make sure the IP was properly assigned to the container:

.. code-block:: bash

    docker inspect --format "{{ .NetworkSettings.Networks.${DRAGONFLOW_NET_NAME}.IPAddress }}" etcd

Running controller node
~~~~~~~~~~~~~~~~~~~~~~~
This section assumes you have OVS set up. Make sure ovsdb-server listens on
TCP port 6640. This can be done with the following command. Note you may need
to allow this via `selinux`.

.. code-block:: bash

    sudo ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640

* Run the following commands:

.. code-block:: bash

    export DRAGONFLOW_IP=172.18.0.3  # Any free IP in the subnet
    export MANAGEMENT_IP=$(docker inspect --format "{{ .NetworkSettings.Networks.${DRAGONFLOW_NET_NAME}.Gateway }}" etcd)  # Assuming you put OVS on the host
    docker run --name dragonflow --net $DRAGONFLOW_NET_NAME --ip ${DRAGONFLOW_IP} dragonflow:latest --dragonflow_ip ${DRAGONFLOW_IP} --db_ip ${NODE1}:2379 --management_ip ${MANAGEMENT_IP}

* Make sure the IP was properly assigned to the container:

.. code-block:: bash

    docker inspect --format "{{ .NetworkSettings.Networks.${DRAGONFLOW_NET_NAME}.IPAddress }}" dragonflow

There are two configuration files that Dragonflow needs, and creates
automatically if they do not exist:

* `/etc/dragonflow/dragonflow.ini`

* `/etc/dragonflow/dragonflow_datapath_layout.yaml`

If these files exist, they are used as-is, and are not overwritten. You can
add these files using e.g.
`-v local-dragonflow-conf.ini:/etc/dragonflow/dragonflow.ini`.


Running a REST API Service
~~~~~~~~~~~~~~~~~~~~~~~~~~

The docker entrypoint accepts verbs. To start the container with the REST API
service, running on HTTP port 8080, use the verb `rest`.

.. code-block:: bash

    export DRAGONFLOW_IP=172.18.0.4  # Any free IP in the subnet
    docker run --name dragonflow-rest --net $DRAGONFLOW_NET_NAME --ip ${DRAGONFLOW_IP} -i -t dragonflow:latest --dragonflow_ip ${DRAGONFLOW_IP} --db_ip ${NODE1}:2379 rest

The schema will be available at `http://$DRAGONFLOW_IP:8080/schema.json`.


Running the container without any service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The docker entrypoint accepts verbs. To start the container without any
service, use the verb `bash`.

.. code-block:: bash

    export DRAGONFLOW_IP=172.18.0.5  # Any free IP in the subnet
    docker run --name dragonflow-bash --net $DRAGONFLOW_NET_NAME --ip ${DRAGONFLOW_IP} -i -t dragonflow:latest --dragonflow_ip ${DRAGONFLOW_IP} --db_ip ${NODE1}:2379 bash

This will start the container with Dragonflow installed, but no service.
This is useful in order to test any standalone binaries or code that should
use Dragonflow as a library, separate from the controller node.


Using the container as a base for another container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The docker entrypoint script accepts verbs. To only run the configuration and
use the container with another main process, run the following command in
your entrypoint:

.. code-block:: bash

    /opt/dragonflow/tools/run_dragonflow.sh --dragonflow_ip <DRAGONFLOW_IP> --db_ip <DB_IP>:2379 noop

Note that running a container with the noop verb without a live process as
the entrypoint will cause the container to exit immediately.

@ -1,84 +0,0 @@
========
Features
========

Dragonflow offers the following virtual network services:

* Layer-2 (switching)

  Native implementation. Replaces the conventional Open vSwitch (OVS)
  agent.

* Layer-3 (routing)

  Native implementation or conventional layer-3 agent. The native
  implementation supports distributed routing.
  Distributed DNAT support is in progress.
  SNAT is centralized at the networking node.

* DHCP

  Distributed DHCP application that serves DHCP offers/acks locally at
  each compute node.

* Metadata

  Distributed metadata proxy application running locally at each
  compute node.

* DPDK

  Dragonflow aims to support OVS DPDK as the datapath alternative; this
  depends on the supported features in OVS DPDK and the VIF binding
  script support in the Neutron plugin.

The following Neutron API extensions will be supported:

+----------------------------------+---------------------------+-------------+
| Extension Name                   | Extension Alias           | TODO        |
+==================================+===========================+=============+
| agent                            | agent                     | Done        |
+----------------------------------+---------------------------+-------------+
| Auto Allocated Topology Services | auto-allocated-topology   | Done        |
+----------------------------------+---------------------------+-------------+
| Availability Zone                | availability_zone         | Done        |
+----------------------------------+---------------------------+-------------+
| HA Router extension *            | l3-ha                     | Done        |
+----------------------------------+---------------------------+-------------+
| L3 Agent Scheduler *             | l3_agent_scheduler        | Done        |
+----------------------------------+---------------------------+-------------+
| Neutron external network         | external-net              | Done        |
+----------------------------------+---------------------------+-------------+
| Neutron Extra DHCP opts          | extra_dhcp_opt            | Done        |
+----------------------------------+---------------------------+-------------+
| Neutron Extra Route              | extraroute                | Done        |
+----------------------------------+---------------------------+-------------+
| Neutron L3 Router                | router                    | Done        |
+----------------------------------+---------------------------+-------------+
| Network MTU                      | net-mtu                   | Done        |
+----------------------------------+---------------------------+-------------+
| Port Binding                     | binding                   | Done        |
+----------------------------------+---------------------------+-------------+
| Provider Network                 | provider                  | Done        |
+----------------------------------+---------------------------+-------------+
| Quality of Service               | qos                       | Done        |
+----------------------------------+---------------------------+-------------+
| Quota management support         | quotas                    | Done        |
+----------------------------------+---------------------------+-------------+
| RBAC Policies                    | rbac-policies             | Done        |
+----------------------------------+---------------------------+-------------+
| Security Group                   | security-group            | Done        |
+----------------------------------+---------------------------+-------------+
| Subnet Allocation                | subnet_allocation         | Done        |
+----------------------------------+---------------------------+-------------+
| Tap as a Service                 | taas                      | In Progress |
+----------------------------------+---------------------------+-------------+
| Service Function Chaining        | sfc                       | In Progress |
+----------------------------------+---------------------------+-------------+
| BGP dynamic routing              | bgp                       | In Progress |
+----------------------------------+---------------------------+-------------+
| Firewall service v2              | fwaas_v2                  | In Progress |
+----------------------------------+---------------------------+-------------+

(\*) Only applicable when the conventional layer-3 agent is enabled.

@ -1,87 +0,0 @@
..
    Copyright (c) 2016 OpenStack Foundation

    Licensed under the Apache License, Version 2.0 (the "License"); you may
    not use this file except in compliance with the License. You may obtain
    a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    License for the specific language governing permissions and limitations
    under the License.

Guru Meditation Reports
=======================

Dragonflow contains a mechanism whereby developers and system administrators
can generate a report about the state of a running Dragonflow executable.
This report is called a *Guru Meditation Report* (*GMR* for short).

Generating a GMR
----------------

A *GMR* can be generated by sending the *USR2* signal to any Dragonflow
process with support (see below).
The *GMR* will then be output to standard error for that particular process.

For example, suppose that ``df-local-controller`` has process id ``2525``, and
was run with ``2>/var/log/dragonflow/df-controller.log``. Then,
``kill -USR2 2525`` will trigger the Guru Meditation report to be printed to
``/var/log/dragonflow/df-controller.log``.

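The signal-driven report idea can be illustrated in plain Python. This is
only a sketch of the mechanism; the real implementation lives in
`oslo.reports` and produces the full sectioned report:

```python
# Illustrative sketch of the signal-driven report mechanism (not
# oslo.reports itself, which is what Dragonflow actually uses): a
# handler on USR2 dumps a report for the running process to stderr.
import os
import signal
import sys
import traceback

reports = []

def dump_report(signum, frame):
    # A real GMR has Package/Threads/Green Threads/Configuration
    # sections; here the current stack stands in for all of them.
    lines = ['==== Guru Meditation-style report ====']
    lines.extend(traceback.format_stack(frame))
    reports.append('\n'.join(lines))
    print(reports[-1], file=sys.stderr)

signal.signal(signal.SIGUSR2, dump_report)
os.kill(os.getpid(), signal.SIGUSR2)  # same effect as `kill -USR2 <pid>`
```

Because the handler writes to stderr, redirecting the process with
``2>some.log`` lands the report in that log file, as described above.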
Structure of a GMR
------------------

The *GMR* is designed to be extensible; any particular executable may add its
own sections. However, the base *GMR* consists of several sections:

Package
    Shows information about the package to which this process belongs,
    including version information

Threads
    Shows stack traces and thread ids for each of the threads within this
    process

Green Threads
    Shows stack traces for each of the green threads within this process
    (green threads don't have thread ids)

Configuration
    Lists all the configuration options currently accessible via the CONF
    object for the current process

Adding Support for GMRs to New Executables
------------------------------------------

Adding support for a *GMR* to a given executable is fairly easy.

First import the module, as well as the Dragonflow version module:

.. code-block:: python

    from oslo_reports import guru_meditation_report as gmr

    from dragonflow import version

Then, register any additional sections (optional):

.. code-block:: python

    TextGuruMeditation.register_section('Some Special Section',
                                        some_section_generator)

Finally (under main), before running the "main loop" of the executable,
register the *GMR* hook:

.. code-block:: python

    TextGuruMeditation.setup_autorun(version)

Extending the GMR
-----------------

As mentioned above, additional sections can be added to the GMR for a
particular executable.
For more information, see the inline documentation under :mod:`oslo.reports`.

@ -1,63 +0,0 @@
.. dragonflow documentation master file, created by
   sphinx-quickstart on Tue Jul 9 22:26:36 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Dragonflow's documentation
=====================================

Contents:

.. toctree::
   :maxdepth: 2

   readme
   installation
   testing_and_debugging
   distributed_dragonflow
   docker_install
   distributed_dhcp
   pluggable_db
   pluggable_pubsub
   pipeline
   containers
   gmr
   configuration
   usage
   features
   manual_deployment
   contributing
   releasenotes_create
   reviewers_guide
   osprofiler

Dragonflow Specs
================

.. toctree::
   :maxdepth: 1

   specs/index

CLI Reference
=============

.. toctree::
   :maxdepth: 1

   cli/index

Developer References
====================

.. toctree::
   :maxdepth: 1

   devrefs/index

Indices and tables
==================

* :ref:`genindex`
* :ref:`search`

@ -1,94 +0,0 @@
============
Installation
============

``git clone https://git.openstack.org/openstack-dev/devstack``

Copy one of the following as your `local.conf` to your devstack folder:

- `DevStack Single Node Configuration <https://github.com/openstack/dragonflow/tree/master/doc/source/single-node-conf>`_

- `DevStack Multi Node Configuration <https://github.com/openstack/dragonflow/tree/master/doc/source/multi-node-conf>`_

Run ``./stack.sh``

Once the script has finished successfully, Dragonflow is
ready to move packets. You can see `Testing and Debugging
<testing_and_debugging.rst>`_ to test and troubleshoot the deployment.

=============================
Automated setup using Vagrant
=============================

This will create a 3-node devstack (controller + two computes), where
Dragonflow is used as the Open vSwitch backend.

Vagrant allows you to configure the provider on which the virtual machines
are created. VirtualBox is the default provider used to launch the VMs on a
developer computer, but other providers can be used: libvirt, VMware, AWS,
OpenStack, containers, etc.

Quick Start
-----------

1. Install a hypervisor if not already installed

   1.1. For VirtualBox - https://www.virtualbox.org/wiki/Downloads

   1.2. For libvirt - use your Linux distribution's manuals

2. Install Vagrant - https://www.vagrantup.com/downloads.html

3. Configure

   ::

       git clone https://git.openstack.org/openstack/dragonflow
       cd dragonflow
       vagrant plugin install vagrant-cachier
       vagrant plugin install vagrant-vbguest

4. | For a full install with a controller node and 2 compute nodes, follow
     step 4.1;
   | For a minimal install with an All-In-One setup, follow step 4.2

   4.1. Adjust the settings in `vagrant/provisioning/dragonflow.conf.yml` if
        needed (5GB RAM is the minimum to get 1 VM running on the controller
        node)

        * Launch the VMs: `vagrant up`

        * This may take a while; once it is finished:

          * You can ssh into the virtual machines:
            `vagrant ssh devstack_controller`, `vagrant ssh devstack_compute1`
            or `vagrant ssh devstack_compute2`

          * You can access the horizon dashboard at
            http://controller.devstack.dev

          * The dragonflow folder is shared between the host and the two
            nodes (at /home/vagrant/dragonflow)

          * When you are done with the setup, you can remove the VMs:
            `vagrant destroy`

   4.2. Adjust the settings in `vagrant/provisioning/dragonflow.conf.yml` if
        needed

        * Launch the VM: `vagrant up devstack_aio`

        * This may take a while; once it is finished:

          * You can ssh into the virtual machine: `vagrant ssh devstack_aio`

          * You can access the horizon dashboard at
            http://allinone.devstack.dev

          * The dragonflow folder is shared between the host and the VM (at
            /home/vagrant/dragonflow)

          * When you are done with the setup, you can remove the VM:
            `vagrant destroy devstack_aio`

@ -1,259 +0,0 @@
..
    Copyright (c) 2016 OpenStack Foundation

    Licensed under the Apache License, Version 2.0 (the "License"); you may
    not use this file except in compliance with the License. You may obtain
    a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    License for the specific language governing permissions and limitations
    under the License.

Dragonflow Manual Deployment
============================

Dragonflow consists of several components:

#. Dragonflow neutron plugins (set up in the neutron-server configuration)
#. Dragonflow local controller running on each compute node
#. Dragonflow metadata service running on each compute node
#. Dragonflow publisher service running alongside neutron-server (if the
   zeromq pub/sub driver is enabled)
#. Dragonflow L3 agent running on each network node
#. Dragonflow northbound database (depends on which database you set up in
   the dragonflow configuration)

Source Code
-----------

https://github.com/openstack/dragonflow

Dependencies
------------

#. Open vSwitch 2.5+
#. Northbound database (etcd, Zookeeper or Redis)

Basic Configurations
--------------------

#. Generate the plugin configuration

   ::

       bash tools/generate_config_file_samples.sh
       cp etc/dragonflow.ini.sample /etc/neutron/dragonflow.ini

#. Modify the configuration

/etc/neutron/neutron.conf
~~~~~~~~~~~~~~~~~~~~~~~~~

::

    [DEFAULT]
    metadata_proxy_shared_secret = secret
    dhcp_agent_notification = False
    notify_nova_on_port_data_changes = True
    notify_nova_on_port_status_changes = True
    allow_overlapping_ips = True
    service_plugins = df-l3,qos
    core_plugin = neutron_lib.plugins.ml2.plugin.Ml2Plugin

/etc/neutron/plugins/ml2/ml2_conf.ini
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

    [ml2]
    tenant_network_types = geneve
    extension_drivers = port_security,qos
    mechanism_drivers = df

    [ml2_type_flat]
    flat_networks = *

    [ml2_type_geneve]
    vni_ranges = 1:10000

/etc/nova/nova.conf
~~~~~~~~~~~~~~~~~~~

::

    [neutron]
    service_metadata_proxy = True
    metadata_proxy_shared_secret = secret

/etc/neutron/dragonflow.ini
~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

    [df]
    metadata_interface = tap-metadata
    enable_selective_topology_distribution = True
    apps_list = l2,l3_proactive,dhcp,dnat,sg,portsec,portqos
    integration_bridge = br-int
    tunnel_type = geneve

    [df_dnat_app]
    ex_peer_patch_port = patch-int
    int_peer_patch_port = patch-ex
    external_network_bridge = br-ex

    [df_l2_app]
    l2_responder = True

    [df_metadata]
    port = 18080
    ip = 169.254.169.254

Northbound Database
-------------------

Dragonflow supports etcd, Redis, Zookeeper and RAMCloud. You need to deploy
one of them in your environment and expose the necessary TCP port.

Next, change the configuration accordingly. For example, for etcd:

/etc/neutron/dragonflow.ini:

::

    [df]
    nb_db_class = etcd_nb_db_driver
    remote_db_hosts = [{etcd_ip}:{etcd_port}]

Pub/Sub Driver
--------------

Dragonflow supports etcd, Redis and ZeroMQ. Change the configuration
accordingly. For example, for etcd:

/etc/neutron/dragonflow.ini:

::

    [df]
    enable_df_pub_sub = True
    pub_sub_driver = etcd_pubsub_driver
    publisher_rate_limit_count = 1
    publisher_rate_limit_timeout = 180
    monitor_table_poll_time = 30

Dragonflow Plugin (on neutron-server node)
------------------------------------------

Installation
~~~~~~~~~~~~

#. Install dragonflow dependencies: ``pip install -r requirements.txt``
#. Install dragonflow: ``python setup.py install``

Service Start
~~~~~~~~~~~~~

neutron-server is the only service for this part.

Dragonflow Publisher Service (on neutron-server node)
-----------------------------------------------------

Installation
~~~~~~~~~~~~

::

    mkdir -p /var/run/zmq_pubsub
    chown -R neutron:neutron /var/run/zmq_pubsub

Service Start
~~~~~~~~~~~~~

::

    python /usr/local/bin/df-publisher-service --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dragonflow.ini

Dragonflow Local Controller (on compute node)
---------------------------------------------

Installation
~~~~~~~~~~~~

#. Install dragonflow dependencies: ``pip install -r requirements.txt``
#. Install dragonflow: ``python setup.py install``
#. Initialize ZeroMQ:

   ::

       mkdir -p /var/run/zmq_pubsub
       chown -R neutron:neutron /var/run/zmq_pubsub

#. Initialize OVS:

   ::

       ovs-vsctl add-br br-ex
       ovs-vsctl add-port br-ex {external_nic}
       ovs-vsctl add-br br-int
       ovs-vsctl add-port br-int {internal_nic}
       ovs-vsctl --no-wait set bridge br-int fail-mode=secure other-config:disable-in-band=true
       ovs-vsctl set bridge br-int protocols=OpenFlow10,OpenFlow13
       ovs-vsctl set-manager ptcp:6640:0.0.0.0

Configuration
~~~~~~~~~~~~~

/etc/neutron/dragonflow.ini:

::

    [df]
    local_ip = {compute_node_ip}

Service Start
~~~~~~~~~~~~~

::

    python /usr/local/bin/df-local-controller --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dragonflow.ini

Dragonflow Metadata Service (on compute node)
---------------------------------------------

Service Start
~~~~~~~~~~~~~

::

    python /usr/local/bin/df-metadata-service --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dragonflow.ini

Dragonflow L3 Service (on network node)
---------------------------------------

Installation
~~~~~~~~~~~~

#. Install dragonflow dependencies: ``pip install -r requirements.txt``
#. Install dragonflow: ``python setup.py install``

Configuration
~~~~~~~~~~~~~

/etc/neutron/l3_agent.ini:

::

    [DEFAULT]
    external_network_bridge =
    interface_driver = openvswitch
    ovs_use_veth = False

Service Start
~~~~~~~~~~~~~

::

    python /usr/local/bin/df-l3-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-file /etc/neutron/dragonflow.ini
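As a quick sanity check after editing, the `[df]` options shown in this manual can be read back with Python's standard-library `configparser`. This is an illustrative sketch only (the option names come from the examples above; the helper `load_df_options` is not part of Dragonflow):

```python
import configparser

# Sample text mirroring the dragonflow.ini options shown in the manual above.
SAMPLE = """
[df]
integration_bridge = br-int
tunnel_type = geneve
apps_list = l2,l3_proactive,dhcp,dnat,sg,portsec,portqos

[df_metadata]
port = 18080
ip = 169.254.169.254
"""

def load_df_options(text):
    """Parse a dragonflow.ini-style string and return the key [df] settings."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return {
        "bridge": cfg.get("df", "integration_bridge"),
        "tunnel_type": cfg.get("df", "tunnel_type"),
        "apps": cfg.get("df", "apps_list").split(","),
        "metadata_port": cfg.getint("df_metadata", "port"),
    }

opts = load_df_options(SAMPLE)
print(opts["bridge"], opts["tunnel_type"], opts["metadata_port"])
```

In a real deployment you would point `cfg.read()` at `/etc/neutron/dragonflow.ini` instead of the inline sample.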
@ -1,55 +0,0 @@
[[local|localrc]]

LOGFILE=$DEST/logs/stack.sh.log

DF_SELECTIVE_TOPO_DIST=True
DF_PUB_SUB=True
ENABLE_NEUTRON_NOTIFIER=False

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

disable_all_services
enable_service n-cpu
enable_service df-cassandra
enable_service df-controller
enable_service n-novnc
enable_service placement-client

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Compute node control plane and data plane ip address
HOST_IP=<compute_node's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<compute_node's_data_plane_IP_Address>

# Set this to the address of the main DevStack host running the rest of the
# OpenStack services. (Controller node)
SERVICE_HOST=<IP address of host running everything else>
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST

# Specify Cassandra server or cluster
# When deploying a Cassandra cluster, you can use ',' to specify multiple servers.
REMOTE_DB_HOSTS=$SERVICE_HOST:9042
CASSANDRA_NUM_OF_HOSTS=1

# Make VNC work on compute node
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN

[[post-config|$NEUTRON_CONF]]
[df]
enable_df_pub_sub = True
pub_sub_driver = "zmq_pubsub_driver"

# Currently Active Port Detection and ZMQ collide (https://bugs.launchpad.net/dragonflow/+bug/1716933)
ENABLE_ACTIVE_DETECTION=False
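The comment above notes that `REMOTE_DB_HOSTS` may list multiple `host:port` servers separated by commas. As a hedged illustration of what that form implies (the helper below is hypothetical, not Dragonflow code), parsing such a value looks like:

```python
def parse_db_hosts(value):
    """Split a REMOTE_DB_HOSTS-style string ("host:port,host:port")
    into (host, port) tuples. rpartition tolerates IPv6-style colons
    in the host part by splitting on the last ':' only."""
    hosts = []
    for entry in value.split(","):
        host, _, port = entry.strip().rpartition(":")
        hosts.append((host, int(port)))
    return hosts

pairs = parse_db_hosts("10.0.0.1:9042,10.0.0.2:9042")
print(pairs)
```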
@ -1,48 +0,0 @@
[[local|localrc]]

LOGFILE=$DEST/logs/stack.sh.log

DF_SELECTIVE_TOPO_DIST=True
DF_PUB_SUB=True
ENABLE_NEUTRON_NOTIFIER=False

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow http://git.openstack.org/openstack/dragonflow
enable_service df-cassandra
enable_service df-cassandra-server
enable_service df-controller

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service heat
disable_service tempest

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp

# Control node control plane and data plane ip address
HOST_IP=<controller's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<controller's_data_plane_IP_Address>

# Specify Cassandra server or cluster
# When deploying a Cassandra cluster, you can use ',' to specify multiple servers.
REMOTE_DB_HOSTS=$HOST_IP:9042
CASSANDRA_NUM_OF_HOSTS=1

# The built-in PUB/SUB mechanism is mandatory for the Zookeeper backend.
enable_service df-zmq-publisher-service

# Currently Active Port Detection and ZMQ collide (https://bugs.launchpad.net/dragonflow/+bug/1716933)
ENABLE_ACTIVE_DETECTION=False
@ -1,43 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

disable_all_services
enable_service n-cpu
enable_service df-controller
enable_service df-etcd

enable_service n-novnc
enable_service placement-client

# Compute node control plane and data plane ip address
HOST_IP=<compute_node's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<compute_node's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Set this to the address of the main DevStack host running the rest of the
# OpenStack services. (Controller node)
SERVICE_HOST=<IP address of host running everything else>
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
REMOTE_DB_HOSTS="$SERVICE_HOST:2379"

# Make VNC work on compute node
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN

[[post-config|$NEUTRON_CONF]]
[df]
enable_df_pub_sub = True
pub_sub_driver = etcd_pubsub_driver
@ -1,32 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow http://git.openstack.org/openstack/dragonflow
enable_service df-etcd
enable_service etcd3
enable_service df-controller
enable_service df-etcd-pubsub-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service heat
disable_service tempest

# Control node control plane and data plane ip address
HOST_IP=<controller's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<controller's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp
@ -1,49 +0,0 @@
# Sample DevStack local.conf.
#
# This sample file is intended to be used when adding an additional compute node
# to your test environment. It runs a very minimal set of services.
#
# For this configuration to work, you *must* set the SERVICE_HOST option to the
# IP address of the main DevStack host and HOST_IP to the local IP of the compute node.
#

[[local|localrc]]

DATABASE_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=devstack
ADMIN_PASSWORD=devstack

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

disable_all_services
enable_service n-cpu
enable_service df-controller
enable_service df-ramcloud

enable_service n-novnc
enable_service placement-client

# Compute node control plane and data plane ip address
HOST_IP=<compute_node's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<compute_node's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

REMOTE_DB_PORT=21222

SERVICE_HOST=<Controller_node_IP_Address>
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
REMOTE_DB_HOSTS="$SERVICE_HOST:4001"

# Make VNC work on compute node
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
@ -1,34 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow http://git.openstack.org/openstack/dragonflow

enable_service df-controller
enable_service df-ramcloud
enable_service df-rccoordinator
enable_service df-rcmaster
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service q-dhcp

disable_service tempest
disable_service heat
disable_service q-agt

# Control node control plane and data plane ip address
HOST_IP=<controller's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<controller's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Used by the RAMCloud init scripts
REMOTE_DB_PORT=21222
@ -1,53 +0,0 @@
#
# Sample DevStack local.conf.
#
# This sample file is intended to be used when adding an additional compute node
# to your test environment. It runs a very minimal set of services.
#
# For this configuration to work, you *must* set the SERVICE_HOST option to the
# IP address of the main DevStack host.
#

[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

disable_all_services
enable_service n-cpu
enable_service df-controller
enable_service df-redis

enable_service n-novnc
enable_service placement-client

# Compute node control plane and data plane ip address
HOST_IP=<compute_node's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<compute_node's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Set this to the address of the main DevStack host running the rest of the
# OpenStack services. (Controller node)
SERVICE_HOST=<IP address of host running everything else>
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
REMOTE_DB_HOSTS="$SERVICE_HOST:4001"

# Make VNC work on compute node
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN

[[post-config|$NEUTRON_CONF]]
[df]
enable_df_pub_sub = True
pub_sub_driver = "redis_db_pubsub_driver"
@ -1,33 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

DF_REDIS_PUBSUB=True
enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-redis
enable_service df-redis-server
enable_service df-controller
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service heat
disable_service tempest

# Control node control plane and data plane ip address
HOST_IP=<controller's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<controller's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp
@ -1,50 +0,0 @@
# Sample DevStack local.conf.
#
# This sample file is intended to be used when adding an additional compute node
# to your test environment. It runs a very minimal set of services.
#
# For this configuration to work, you *must* set the SERVICE_HOST option to the
# IP address of the main DevStack host and HOST_IP to the local IP of the compute node.
#

[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

disable_all_services
enable_service n-cpu
enable_service neutron
enable_service df-controller
enable_service df-rethinkdb

enable_service n-novnc
enable_service placement-client

# Compute node control plane and data plane ip address
HOST_IP=<compute_node's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<compute_node's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

REMOTE_DB_PORT=28015

SERVICE_HOST=<Controller_node_IP_Address>
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
REMOTE_DB_HOSTS="$SERVICE_HOST:4001"

# Make VNC work on compute node
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN
@ -1,33 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

enable_service df-rethinkdb
enable_service df-rethinkdb-server
enable_service df-controller
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service q-dhcp

disable_service tempest
disable_service heat
disable_service q-agt

# Control node control plane and data plane ip address
HOST_IP=<controller's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<controller's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Used by the RethinkDB init scripts
REMOTE_DB_PORT=28015
@ -1,53 +0,0 @@
[[local|localrc]]

LOGFILE=$DEST/logs/stack.sh.log

#OFFLINE=True
#RECLONE=False

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

disable_all_services
enable_service n-cpu
enable_service df-zookeeper
enable_service df-controller
enable_service n-novnc
enable_service placement-client

# Compute node control plane and data plane ip address
HOST_IP=<compute_node's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<compute_node's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Set this to the address of the main DevStack host running the rest of the
# OpenStack services. (Controller node)
SERVICE_HOST=<IP address of host running everything else>
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST

# Specify Zookeeper server or cluster
# When deploying a Zookeeper cluster, you can use ',' to specify multiple servers.
REMOTE_DB_HOSTS=$SERVICE_HOST:2181

# Make VNC work on compute node
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN

[[post-config|$NEUTRON_CONF]]
[df]
enable_df_pub_sub = True
pub_sub_driver = "zmq_pubsub_driver"

# Currently Active Port Detection and ZMQ collide (https://bugs.launchpad.net/dragonflow/+bug/1716933)
ENABLE_ACTIVE_DETECTION=False
@ -1,48 +0,0 @@
[[local|localrc]]

LOGFILE=$DEST/logs/stack.sh.log

#OFFLINE=True
#RECLONE=False

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-zookeeper
enable_service df-zookeeper-server
enable_service df-controller
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
enable_service cinder
disable_service heat
disable_service tempest

# Control node control plane and data plane ip address
HOST_IP=<controller's_management_IP_Address>
TUNNEL_ENDPOINT_IP=<controller's_data_plane_IP_Address>

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp

# Specify Zookeeper server or cluster
# When deploying a Zookeeper cluster, you can use ',' to specify multiple servers.
REMOTE_DB_HOSTS=$HOST_IP:2181

# The built-in PUB/SUB mechanism is mandatory for the Zookeeper backend.
enable_service df-zmq-publisher-service

# Currently Active Port Detection and ZMQ collide (https://bugs.launchpad.net/dragonflow/+bug/1716933)
ENABLE_ACTIVE_DETECTION=False
@ -1,68 +0,0 @@
..
    Licensed under the Apache License, Version 2.0 (the "License"); you may
    not use this file except in compliance with the License. You may obtain
    a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    License for the specific language governing permissions and limitations
    under the License.

==========
OSProfiler
==========

OSProfiler provides a tiny but powerful library that is used by
most (soon to be all) OpenStack projects and their python clients. It
provides functionality to generate one trace per request, which goes
through all involved services. This trace can then be extracted and used
to build a tree of calls, which can be quite handy for a variety of
reasons (for example, isolating cross-project performance issues).

More about OSProfiler:
https://docs.openstack.org/osprofiler/latest/

Dragonflow supports using OSProfiler to trace the performance of each
key internal processing step, including the RESTful API, RPC, cluster actions,
node actions, DB operations, etc.

Enabling OSProfiler
~~~~~~~~~~~~~~~~~~~

To configure DevStack to enable OSProfiler, edit the
``${DEVSTACK_DIR}/local.conf`` file and add::

    enable_plugin panko https://git.openstack.org/openstack/panko
    enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
    enable_plugin osprofiler https://git.openstack.org/openstack/osprofiler

to the ``[[local|localrc]]`` section.

.. note:: The order in which the plugins are enabled matters.

Using OSProfiler
~~~~~~~~~~~~~~~~

After you have successfully deployed your development environment, the
following profiler configuration will be added automatically to
``dragonflow.conf``::

    [profiler]
    enabled = True
    trace_sqlalchemy = True
    hmac_keys = SECRET_KEY

``hmac_keys`` is the secret key(s) used to sign context data for
performance profiling. The default value is 'SECRET_KEY'; you can change it
to any random string(s).

Run any command with ``--os-profile SECRET_KEY``::

    $ openstack --os-profile SECRET_KEY floating ip create public
    # it will print a <Trace ID>

Get a pretty HTML page with the traces::

    $ osprofiler trace show --html <Trace ID>
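The role of ``hmac_keys`` can be illustrated with Python's standard-library ``hmac`` module: trace context is authenticated with an HMAC derived from the shared key, so only parties holding the same key produce matching signatures. This is a conceptual sketch only; the payload format below is invented, and only the signing idea matches OSProfiler:

```python
import hashlib
import hmac

def sign(payload: bytes, key: str) -> str:
    """Return a hex HMAC-SHA256 signature of payload under key.
    Conceptual sketch of hmac_keys-style context signing, not
    OSProfiler's actual wire format."""
    return hmac.new(key.encode(), payload, hashlib.sha256).hexdigest()

# Invented payload for illustration.
payload = b'{"base_id": "trace-id", "parent_id": "span-id"}'
sig = sign(payload, "SECRET_KEY")

# A receiver with the same key can verify; a different key cannot.
same = hmac.compare_digest(sig, sign(payload, "SECRET_KEY"))
other = hmac.compare_digest(sig, sign(payload, "WRONG_KEY"))
print(same, other)
```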
@ -1,43 +0,0 @@
========
Pipeline
========

The following diagrams explain the Dragonflow pipeline in more detail.

.. image:: ../images/pipeline1.jpg
    :alt: Dragonflow Pipeline
    :width: 600
    :height: 525
    :align: center

.. image:: ../images/pipeline2.jpg
    :alt: Dragonflow Pipeline
    :width: 600
    :height: 525
    :align: center

.. image:: ../images/pipeline3.jpg
    :alt: Dragonflow Pipeline
    :width: 600
    :height: 525
    :align: center

.. image:: ../images/pipeline4.jpg
    :alt: Dragonflow Pipeline
    :width: 600
    :height: 525
    :align: center

.. image:: ../images/pipeline5.jpg
    :alt: Dragonflow Pipeline
    :width: 600
    :height: 525
    :align: center

.. image:: ../images/pipeline6.jpg
    :alt: Dragonflow Pipeline
    :width: 600
    :height: 525
    :align: center
@ -1,142 +0,0 @@
============
Pluggable DB
============

Instead of implementing a proprietary DB solution for Dragonflow or picking
one open source framework over another, we designed the DB layer in
Dragonflow to be pluggable.

The DB framework is the mechanism that syncs network policy and topology
between the CMS and the local controllers, and hence controls the
performance, latency and scale of the environments Dragonflow is deployed in.

This gives the operator/admin the flexibility to choose, and later change,
the DB solution that best fits their setup.
It also allows, with very minimal integration, a way to leverage the well
tested and mature feature set of these DB frameworks (clustering, HA,
security, consistency, low latency and more).

This also allows the operator/admin to pick the correct balance between the
performance and latency requirements of their setup and the resource overhead
of the DB framework.

Adding support for another DB framework is an easy process: all you need to
do is implement the DB driver API class and add an installation script for
the DB framework server and client.

The following diagram depicts the pluggable DB architecture in Dragonflow and
the currently supported DB frameworks:

.. image:: ../images/db1.jpg
   :alt: Pluggable DB architecture
   :width: 600
   :height: 525
   :align: center

Classes in the DB Layer
=======================

The following sections describe the two main classes that are part of the
DB layer.

Applicative N/B DB Adapter Layer
--------------------------------
This component is the translation layer between the data model elements and
the DB driver, which is generic.

This class should be used by all Dragonflow users that need to interact
with the DB (write/read); for example, the Dragonflow Neutron plugin, the
Dragonflow local controller, and external applications.

This component was added for one main reason:
we didn't want to expose the DB driver to the internal data schema/model of
Dragonflow, and we didn't want every new feature in Dragonflow to trigger
changes in the various DB drivers.

This component has an interface to add/set/delete elements in our model (such
as logical switches, logical routers and so on) and translates these APIs
into simple, generic key/value operations that are carried out by the DB
driver.

This component also defines the Dragonflow data model objects and the fields
each one of the logical elements has.

The N/B DB Adapter has a reference to a DB Driver instance which is used to
interact with the DB framework.
We have identified that different DB frameworks might have different features
and capabilities; this layer is in charge of understanding the features
exposed by the driver and using them if possible.

DB Driver API
-------------
The DB Driver is an interface class that lists the methods that must be
implemented in order for a certain DB framework to work with Dragonflow as a
backend.

The DB driver is a very minimalistic interface that uses a simple key/value
approach and fits almost all DB frameworks.

In order for Dragonflow to be able to leverage "advanced" features of the DB,
the driver has a way to indicate whether a specific feature is implemented,
and if it is, to provide an API to consume it.

Using this mechanism, the applicative DB adapter can choose the best way to
manage its interaction with the DB.

For example, the driver can state whether it supports publish-subscribe on
its tables. If it does, the local controller registers a callback method with
the driver to receive DB notifications and, instead of polling the DB for
changes, waits for the driver to send them.

If the driver doesn't support publish-subscribe, the controller keeps
polling the DB framework looking for changes.

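To make the driver contract concrete, here is a minimal sketch of such a
key/value driver. The class and method names are illustrative stand-ins, not
Dragonflow's actual driver API:

```python
class InMemoryDbDriver(object):
    """Illustrative key/value DB driver with a capability flag."""

    def __init__(self):
        self._tables = {}

    def create_table(self, table):
        self._tables.setdefault(table, {})

    def get_key(self, table, key):
        # Returns None when the key is absent.
        return self._tables.get(table, {}).get(key)

    def set_key(self, table, key, value):
        self._tables.setdefault(table, {})[key] = value

    def delete_key(self, table, key):
        self._tables.get(table, {}).pop(key, None)

    def support_publish_subscribe(self):
        # This driver cannot push notifications, so the adapter layer
        # would fall back to polling for changes.
        return False
```

An adapter layer can check ``support_publish_subscribe()`` and either
register a change callback or start a polling loop, as described above.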

Modes of DB
===========
There are three different modes for the interaction between Dragonflow and
the DB.

Full Proactive
--------------
In this mode, all the DB data (policy and topology) is synced to all the
local Dragonflow controllers (one per compute node).
Dragonflow saves all the data synced from the DB in a local in-memory cache
in order to do fast lookups.

Selective Proactive
-------------------
.. image:: ../images/db2.jpg
   :alt: Pluggable DB architecture
   :width: 600
   :height: 525
   :align: center

We have identified that in today's virtualized environments with tenant
isolation, full proactive mode is not really needed.
We only need to synchronize each compute node (local controller) with the
relevant data, depending on the local ports of that compute node.
This mode is called selective proactive.

The following diagram depicts why this is needed:

.. image:: ../images/db3.jpg
   :alt: Pluggable DB architecture
   :width: 600
   :height: 525
   :align: center

We can see from the diagram that each compute node has VMs from one network,
and in the topology we can see that the networks are isolated, meaning VMs
from one network cannot communicate with VMs from another.

It is obvious, then, that each compute node only needs to get the topology
and policy of the network and VMs that are local.
(If there were a router connecting these two networks, this statement would
no longer be correct, but we kept the example simple in order to demonstrate
that in today's setups there are many isolated topologies.)

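The idea behind selective proactive sync can be sketched as a simple filter:
given the ports local to a compute node, only the networks those ports
reference need to be synced. The data layout below is invented purely for
illustration:

```python
def topology_to_sync(local_ports, all_networks):
    """Return only the networks referenced by this node's local ports."""
    needed = {port["network_id"] for port in local_ports}
    return [net for net in all_networks if net["id"] in needed]
```

With a router connecting networks, the filter would have to follow those
connections transitively; the sketch covers only the isolated-topology case
described above.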

Reactive
--------
|
@ -1,140 +0,0 @@
==========================================
Pluggable Publish-Subscribe Infrastructure
==========================================

This document describes the pluggable API for publish-subscribe and
publish-subscribe drivers. For the design, see the `spec
publish_subscribe_abstraction`__.

__ SPEC_

Instead of relying on the DB driver to support reliable publish-subscribe, we
allow pub/sub mechanisms to be integrated into Dragonflow in a pluggable way.

There are several Neutron API servers and many compute nodes. Every compute
node registers as a subscriber to every Neutron API server, which acts as a
publisher.

This can be seen in the following diagram:

.. image:: ../images/pubsub_topology.png

Additionally, the Neutron server service is forked once per core on the
server.

Since some publishers need to bind to a TCP socket, and since we will want to
run monitoring services that must run only once per server (and not once per
core), we provide a *publisher service*.

.. image:: ../images/pubsub_neutron_API_server.png

Therefore, the communication between the Neutron service and the publisher
service requires an inter-process communication (IPC) solution.

This can also be solved using a publish-subscribe mechanism.

Therefore, there are two publish-subscribe implementations: a network-based
implementation between the Neutron server and the compute node, and an
IPC-based implementation between the Neutron services and the publisher
service.

===
API
===

For simplicity, the API for both implementations is the same. It can be found
in ``dragonflow/db/pub_sub_api.py`` (`Link`__).
It is recommended to read the code to fully understand the API.

__ PUB_SUB_API_

For both network and IPC based communication, a driver has to implement
``dragonflow.db.pub_sub_api.PubSubApi`` (`Link`__). In both cases,
``get_publisher`` and ``get_subscriber`` return a
``dragonflow.db.pub_sub_api.PublisherApi`` and a
``dragonflow.db.pub_sub_api.SubscriberApi``, respectively.

__ PUB_SUB_API_

The class ``dragonflow.db.pub_sub_api.SubscriberAgentBase`` provides a
starting point for implementing subscribers. Since the publisher API only
requires an initialisation method and an event-sending method, both very
implementation specific, no such base class is provided.

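As a rough sketch of the shape of this API (the class and method names below
are simplified stand-ins, not the actual ``PubSubApi`` signatures), a driver
hands out a publisher that sends events and a subscriber that dispatches them
to registered callbacks:

```python
class Publisher(object):
    """Toy publisher: delivers events to callbacks on a shared bus."""

    def __init__(self, bus):
        self._bus = bus

    def send_event(self, table, event):
        for callback in self._bus.get(table, []):
            callback(event)


class Subscriber(object):
    """Toy subscriber: registers callbacks per table/topic."""

    def __init__(self, bus):
        self._bus = bus

    def register_topic(self, table, callback):
        self._bus.setdefault(table, []).append(callback)


class InProcessPubSubApi(object):
    """Toy driver: both ends share an in-process 'bus' dictionary."""

    def __init__(self):
        self._bus = {}

    def get_publisher(self):
        return Publisher(self._bus)

    def get_subscriber(self):
        return Subscriber(self._bus)
```

A real driver replaces the in-process bus with a network or IPC transport,
but the publisher/subscriber split stays the same.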

=============
Configuration
=============

The following parameters allow configuration of the publish-subscribe
mechanism. Only parameters which need to be handled by the publish-subscribe
drivers are listed here. For a full list, refer to
``dragonflow/conf/df_common_params.py`` (`Link`__).

__ COMMON_PARAMS_

1. pub_sub_driver - The alias of the class implementing ``PubSubApi`` for
   network-based pub/sub.

2. publisher_port - The port to which the network publisher should bind. It
   is also the port to which the network subscribers connect.

3. publisher_transport - The transport protocol (e.g. TCP, UDP) over which
   pub/sub network communication is passed.

4. publisher_bind_address - The local address to which the network publisher
   should bind. '*' means all addresses.

Some publish-subscribe drivers do not need to use a publisher service.

This can be the case if, for example, the publisher does not bind to the
communication socket.

All publishers are created using the pub_sub_driver.

========================
Reference Implementation
========================

ZeroMQ is used as the base for the reference implementation.

The reference implementation can be found in
``dragonflow/db/pubsub_drivers/zmq_pubsub_driver.py`` (`Link`__).

__ ZMQ_DRIVER_

In it, there are two implementations of ``PubSubApi``:

1. ZMQPubSub - for the network implementation.
2. ZMQPubSubMultiproc - for the IPC implementation.

In both cases, extensions of ``ZMQPublisherAgentBase`` and
``ZMQSubscriberAgentBase`` are returned.

In the case of the subscriber, the only difference is in the implementation
of ``connect``: the IPC implementation connects over ZMQ's *ipc* protocol,
while the network implementation connects over the transport protocol
provided via *publisher_transport*.

In the case of the publisher, the differences are in the implementations of
``initialize``, ``_connect``, and ``send_event``. The difference in
``_connect`` exists for the same reasons as in the subscribers. The
difference in ``initialize`` is that the multi-proc publisher uses the lazy
initialization pattern. This also accounts for the difference in
``send_event``.

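The lazy initialization pattern mentioned above can be sketched as follows (a
toy stand-in, not the actual ZMQ code): the publisher defers opening its
socket until the first ``send_event``, so each forked worker process ends up
creating its own connection:

```python
class LazyPublisher(object):
    """Toy publisher that opens its 'socket' only on first use."""

    def __init__(self, connect):
        self._connect = connect   # factory that opens the real socket
        self._socket = None

    def send_event(self, event):
        if self._socket is None:
            # First use in this process: open the connection now, so a
            # socket is never shared across a fork boundary.
            self._socket = self._connect()
        self._socket.append(event)   # stand-in for socket.send()
```

The connection factory runs once per process, no matter how many events are
sent afterwards.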

==========
References
==========

.. _SPEC: https://raw.githubusercontent.com/openstack/dragonflow/master/doc/source/specs/publish_subscribe_abstraction.rst
.. _PUB_SUB_API: https://github.com/openstack/dragonflow/tree/master/dragonflow/db/pub_sub_api.py
.. _COMMON_PARAMS: https://github.com/openstack/dragonflow/blob/master/dragonflow/conf/df_common_params.py
.. _ZMQ_DRIVER: https://github.com/openstack/dragonflow/tree/master/dragonflow/db/pubsub_drivers/zmq_pubsub_driver.py

@ -1,123 +0,0 @@
Installation guide for Dragonflow.
Keep in mind that Dragonflow is still in beta.

Prerequisites
-------------

Requires Open vSwitch 2.5.0.

Quick Installation
------------------

1) Clone Devstack

   ``git clone https://git.openstack.org/openstack-dev/devstack``

2) Copy one of the following as your ``local.conf`` to your devstack folder

   `DevStack Single Node Configuration <https://github.com/openstack/dragonflow/tree/master/doc/source/single-node-conf>`_

   `DevStack Multi Node Configuration <https://github.com/openstack/dragonflow/tree/master/doc/source/multi-node-conf>`_

3) Edit local.conf according to your configuration.
   See `Detailed Installation`_ for more details, or the
   `Devstack <https://docs.openstack.org/devstack/latest/configuration.html>`_
   configuration manual.

DHCP configuration (IPv4 Only Environment)
------------------------------------------

No configuration is needed.

DHCP configuration (mixed IPv4/IPv6 or pure IPv6)
-------------------------------------------------

::

    enable_service q-dhcp

If q-dhcp is installed on a different node from q-svc, please add the
following flag to the neutron.conf on the q-svc node::

    use_centralized_ipv6_DHCP=True

Meta data and cloud init
------------------------

In order to enable the VMs to get configuration such as public keys,
hostnames, etc., you need to enable the metadata service. You can do so
by adding the following lines to the local.conf file (before running the
'stack.sh' command)::

    enable_service q-meta
    enable_service q-dhcp

For the metadata service to work correctly, another "hidden" service
must be started. It is called meta-service-proxy, and it is
used to forward metadata client requests to the real metadata service.
By default, it is started by the regular q-dhcp service for each tenant.
As a result, the 'q-meta' and 'q-dhcp' services must both be enabled.

Database configuration
----------------------

Choose one of the following database drivers in your local.conf.

Etcd Database::

    enable_service df-etcd

Ram Cloud Database::

    enable_service df-ramcloud
    enable_service df-rccoordinator
    enable_service df-rcmaster

Zookeeper Database::

    enable_service df-zookeeper
    enable_service df-zookeeper-server

Redis Database::

    enable_service df-redis
    enable_service df-redis-server

Detailed Installation
---------------------

Important parameters that need to be set in ``local.conf``:

::

    HOST_IP <- The management IP address of the current node
    FIXED_RANGE <- The overlay network address and mask
    FIXED_NETWORK_SIZE <- Size of the overlay network
    NETWORK_GATEWAY <- Default gateway for the overlay network
    FLOATING_RANGE <- Network address and range for floating IP addresses (in the public network)
    Q_FLOATING_ALLOCATION_POOL <- Range from which to allocate floating IPs (within FLOATING_RANGE)
    PUBLIC_NETWORK_GATEWAY <- Default gateway for the public network
    SERVICE_HOST <- Management IP address of the controller node
    MYSQL_HOST <- Management IP address of the controller node
    RABBIT_HOST <- Management IP address of the controller node
    GLANCE_HOSTPORT <- Management IP address of the controller node (leave the port as-is)

You can find example configuration files in the multi-node-conf or the
single-node-conf directories.

==========================================
Automated setup using Vagrant + Virtualbox
==========================================

`Vagrant Installation Guide <https://docs.openstack.org/dragonflow/latest/installation.html>`_

Troubleshooting
---------------
You can check the northbound database by using the db-df utility; see details
in `Testing and Debugging <testing_and_debugging.rst>`_.

@ -1,40 +0,0 @@
================================
Creating release notes with Reno
================================

Release notes for Dragonflow are generated semi-automatically from source
with Reno.

Reno allows you to add a release note. It creates a yaml structure for you to
fill in. The items are explained `here <https://docs.openstack.org/reno/latest/user/usage.html#editing-a-release-note>`_.
If an item is not needed, it can be removed from the structure.

Basic Usage
-----------

To create a new release note, run:

::

    tox -e venv -- reno new <my-new-feature>

This creates a release notes file. You can identify the file from the output:

::

    Created new notes file in releasenotes/notes/asdf-1a11d0cca0cb76fa.yaml

You can now edit this file to fit your release note's needs.

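For illustration, a filled-in note might look like the following (the section
names follow reno's conventions; the text itself is invented):

```yaml
---
features:
  - |
    Added a new option ``example_option`` to the ``[df]`` section.
upgrade:
  - |
    Deployments that relied on the old default must now set
    ``example_option`` explicitly.
```

Unused sections can simply be deleted from the file.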

Don't forget to add this file to the commit with ``git add``.

The release notes are built automatically by the gate system. There is a tox
environment to generate all the release notes manually. To do so, run:

::

    tox -e releasenotes

Easy enough!

For more information, see the `reno usage documentation <https://docs.openstack.org/reno/latest/user/usage.html>`_.

@ -1,106 +0,0 @@
==============================
Contributors & Reviewers Guide
==============================

In this document, we try to guide contributors on what should be included
in a patch.
This guide is also helpful for reviewers, covering what to look for when
accepting a patch for Dragonflow.

Checklist
=========

The following items are expected for every patch:

# Commit message:
    A title explaining what is done. The body of the commit message should
    concisely explain *what this change does* (if not trivial and covered by
    the title) and *why this change is done* (if not obvious). Triviality and
    obviousness are left to the reviewer's discretion.

# Tests:
    Every change must be covered by tests. Unit tests are often the bare
    minimum and good enough, but a fullstack or tempest test will also do
    in a pinch.

# Documentation:
    Every non-trivial function (say, longer than 10 lines, but left to the
    reviewer's discretion) must contain a pydoc. If a feature's design is
    changed (e.g. flow structure), then the relevant spec or dev-ref must
    be added or updated.

# Referenced Bug:
    All but the most trivial changes should be linked with a Related-Bug,
    Partial-Bug, or Closes-Bug declaration. In case of extremely trivial
    fixes, TrivialFix may be stated instead, but it is at the reviewer's
    discretion whether the change is truly a trivial fix.

# Release Notes:
    For NB API changes, configuration changes, new drivers, and new
    applications, a relevant release note should be added. It is recommended
    to use reno; see TBD.

Spec & DevRef
=============

A spec should cover what the proposed feature is about, the impact it has
on the user, etc. It is the high-level design document. In essence,
it should show the *spirit* of the implementation. It should convey
the general idea of what the feature does, how packets are handled,
and where the information comes from. The spec should also include
data-model changes, since this is the basis for the Dragonflow API.

A devref should cover how the proposed feature is supported. It is a
low-level design document explaining how the feature is implemented. It
should cover design decisions too low-level to be included in the spec. It
should also cover the southbound implementation, including the rationale.
The general guideline is: if a new contributor reads this document, they
should be able to understand the code of the application.

The difference between a spec and a devref is difficult to formalize. In
essence, the spec should give a high-level design, while the dev-ref should
give a low-level design of the feature. The guiding thought is that the spec
should remain unchanged unless there is a massive feature overhaul, while the
dev-ref may change due to bug fixes, since it covers the low-level specifics.

Note that when writing the dev-ref, the code is also available. Rather
than explain the code, try to explain what the code is supposed to do, what
the end result is supposed to look like, and, most importantly, why the code
looks that way.

Specs are usually reviewed and accepted before the implementation begins.
Dev-refs are usually reviewed and accepted as part of the implementation or
an implementation change.

Bugs & Blueprints
=================

For any issue with the existing implementation, a bug report is expected.

For any new feature request or enhancement of an existing feature, a bug
report with the [RFE] tag is expected.
Blueprint creation is not required.

A bug report should have a descriptive title and a detailed description.
Submitting a good bug report is not a trivial task, so we try to outline
some guidelines that may help:

* First explain the functionality issue

  We have seen many bug reports which were just a stack-trace dump, with no
  explanation of the effect it has on the user. It is difficult to understand
  whether e.g. an exception is benign, or there is a real issue behind it. It
  is also helpful to explain what the expected behaviour is; it's possible we
  just misunderstood the feature.

* Explain how to reproduce

  It is very difficult to mark a bug as solved if we don't know how you
  reached it. Reproduction steps go a long way towards making a bug clear
  and easy to tackle.
  It is also very helpful to have a copy of the deployment configuration,
  e.g. a config file or (in the case of devstack) a local.conf file.

* One issue per bug

  We are not afraid of bug reports, and they are easier to manage if each bug
  is a single atomic issue we need to fix (there are some exceptions to this
  guideline, but they are usually very rare).

@ -1,327 +0,0 @@
heat_template_version: 2015-04-30

description: |
  SFC example deployment
  The script deploys 2 Fedora VMs:
  * A VM with a UDP echo server that listens on port 2345 and replies to
    any datagram it receives back to the sender.
  * A VM acting as a service function, that receives all port 2345 UDP
    packets originating from the first VM, and replaces all instances of
    sf_filter with sf_sub.

  How to deploy:
  $ openstack stack create -t doc/source/sfc-example/sfc-example.yaml stackname
  Wait a few minutes
  $ openstack stack show stackname
  Look for the server_fip address
  e.g.:
  server_fip=$(openstack stack show -f yaml stackname |
               shyaml get-value outputs.0.output_value)
  $ echo dragonflow | nc -u $server_fip 2345
  DRAGONFLOW

  The service function VM needs a few minutes to install dependencies.

parameters:
  key_name:
    type: string
    label: Keypair name
    default: stack
  image_id:
    type: string
    label: Image ID
    default: Fedora-Cloud-Base-25-1.3.x86_64
  provider_net:
    type: string
    label: Provider net to use
    default: public
  sf_filter:
    type: string
    label: Filter to look for in returned messages
    default: dragonflow
  sf_sub:
    type: string
    label: The text to plug in instead of filtered messages
    default: DRAGONFLOW

resources:
  flavor:
    type: OS::Nova::Flavor
    properties:
      name: sfc-test-flavor
      disk: 3
      ram: 1024
      vcpus: 1

  private_net:
    type: OS::Neutron::Net
    properties:
      name: sfc-test-net

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: sfc-test-subnet
      network_id: { get_resource: private_net }
      cidr: 20.0.0.0/24
      gateway_ip: 20.0.0.1
      enable_dhcp: true
      allocation_pools:
        - start: 20.0.0.10
          end: 20.0.0.100

  router:
    type: OS::Neutron::Router
    properties:
      name: sfc-test-router
      external_gateway_info:
        network: { get_param: provider_net }

  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }

  sec_group:
    type: OS::Neutron::SecurityGroup
    properties:
      name: sfc-test-sg
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          protocol: tcp
        - remote_ip_prefix: 0.0.0.0/0
          protocol: udp
        - remote_ip_prefix: 0.0.0.0/0
          protocol: icmp

  source_vm_port:
    type: OS::Neutron::Port
    properties:
      name: sfc-test-src-vm-port
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
      security_groups:
        - { get_resource: sec_group }

  source_vm:
    type: OS::Nova::Server
    properties:
      name: sfc-test-src-vm
      admin_pass: test
      key_name: { get_param: key_name }
      flavor: { get_resource: flavor }
      image: { get_param: image_id }
      networks:
        - port: { get_resource: source_vm_port }
      user_data_format: RAW
      user_data: |
        #cloud-config
        write_files:
          - content: |
              import socket
              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
              sock.bind(('', 2345))
              while True:
                  data, address = sock.recvfrom(1024)
                  sock.sendto(data, address)
            path: /tmp/echo.py
        runcmd:
          - python3 /tmp/echo.py

  source_fip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: { get_param: provider_net }
      port_id: { get_resource: source_vm_port }

  sf_port_ctrl:
    type: OS::Neutron::Port
    properties:
      name: sfc-test-sf-port-ctrl
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
      security_groups:
        - { get_resource: sec_group }

  sf_port_ingress:
    type: OS::Neutron::Port
    properties:
      name: sfc-test-sf-port-ingress
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
      port_security_enabled: false

  sf_port_egress:
    type: OS::Neutron::Port
    properties:
      name: sfc-test-sf-port-egress
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
      port_security_enabled: false

  sf_vm:
    type: OS::Nova::Server
    properties:
      name: sfc-test-sf
      admin_pass: test
      key_name: { get_param: key_name }
      flavor: { get_resource: flavor }
      image: { get_param: image_id }
      networks:
        - port: { get_resource: sf_port_ctrl }
        - port: { get_resource: sf_port_ingress }
        - port: { get_resource: sf_port_egress }
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #cloud-config
            write_files:
              - content: |
                  import os
                  from os_ken.base import app_manager
                  from os_ken.controller import ofp_event
                  from os_ken.controller.handler import CONFIG_DISPATCHER
                  from os_ken.controller.handler import MAIN_DISPATCHER
                  from os_ken.controller.handler import set_ev_cls
                  from os_ken.lib.packet import packet
                  from os_ken.lib.packet import ethernet
                  from os_ken.lib.packet import ipv4
                  from os_ken.lib.packet import mpls
                  from os_ken.lib.packet import udp
                  from os_ken.ofproto import ofproto_v1_3
                  FILTER = os.environ.get('SF_FILTER')
                  SUB = os.environ.get('SF_SUB')
                  class SimpleServiceFunction(app_manager.OsKenApp):
                      OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
                      @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
                      def switch_features_handler(self, ev):
                          msg = ev.msg
                          dp = msg.datapath
                          ofp_parser = dp.ofproto_parser
                          message = dp.ofproto_parser.OFPFlowMod(
                              datapath=dp,
                              table_id=0,
                              command=dp.ofproto.OFPFC_ADD,
                              priority=100,
                              match=ofp_parser.OFPMatch(in_port=1, eth_type=0x8847),
                              instructions=[
                                  ofp_parser.OFPInstructionActions(
                                      dp.ofproto.OFPIT_APPLY_ACTIONS,
                                      [
                                          ofp_parser.OFPActionOutput(
                                              ofproto_v1_3.OFPP_CONTROLLER,
                                              ofproto_v1_3.OFPCML_NO_BUFFER,
|
|
||||||
)
|
|
||||||
],
|
|
||||||
),
|
|
||||||
],
|
|
||||||
)
|
|
||||||
dp.send_msg(message)
|
|
||||||
@set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
|
|
||||||
def packet_in_handler(self, ev):
|
|
||||||
msg = ev.msg
|
|
||||||
dp = msg.datapath
|
|
||||||
ofp_parser = dp.ofproto_parser
|
|
||||||
pkt = packet.Packet(msg.data)
|
|
||||||
payload = pkt.protocols[-1]
|
|
||||||
if isinstance(payload, (bytes, bytearray)):
|
|
||||||
new_payload = payload.decode(
|
|
||||||
'utf-8'
|
|
||||||
).replace(
|
|
||||||
FILTER,
|
|
||||||
SUB,
|
|
||||||
).encode('utf-8')
|
|
||||||
new_pkt = packet.Packet()
|
|
||||||
new_pkt.add_protocol(pkt.get_protocol(ethernet.ethernet))
|
|
||||||
new_pkt.add_protocol(pkt.get_protocol(mpls.mpls))
|
|
||||||
pkt_ip = pkt.get_protocol(ipv4.ipv4)
|
|
||||||
pkt_ip.csum = 0
|
|
||||||
pkt_ip.total_length = 0
|
|
||||||
new_pkt.add_protocol(pkt_ip)
|
|
||||||
pkt_udp = pkt.get_protocol(udp.udp)
|
|
||||||
pkt_udp.csum = 0
|
|
||||||
new_pkt.add_protocol(pkt_udp)
|
|
||||||
new_pkt.add_protocol(new_payload)
|
|
||||||
new_pkt.serialize()
|
|
||||||
pkt = new_pkt
|
|
||||||
actions = [ofp_parser.OFPActionOutput(port=2)]
|
|
||||||
out = ofp_parser.OFPPacketOut(
|
|
||||||
datapath=dp,
|
|
||||||
buffer_id=ofproto_v1_3.OFP_NO_BUFFER,
|
|
||||||
in_port=ofproto_v1_3.OFPP_CONTROLLER,
|
|
||||||
data=pkt.data,
|
|
||||||
actions=actions,
|
|
||||||
)
|
|
||||||
dp.send_msg(out)
|
|
||||||
path: /tmp/controller.py
|
|
||||||
- content: |
|
|
||||||
#!/bin/bash
|
|
||||||
dnf install -y openvswitch python3-ryu
|
|
||||||
systemctl start openvswitch
|
|
||||||
ovs-vsctl add-br br-sf
|
|
||||||
ovs-vsctl set-controller br-sf tcp:127.0.0.1:6653
|
|
||||||
ovs-vsctl add-port br-sf eth1
|
|
||||||
ovs-vsctl add-port br-sf eth2
|
|
||||||
ovs-ofctl del-flows br-sf
|
|
||||||
ip link set dev eth1 up
|
|
||||||
ip link set dev eth2 up
|
|
||||||
SF_FILTER=$filter SF_SUB=$sub ryu-manager-3 /tmp/controller.py
|
|
||||||
path: /tmp/run.sh
|
|
||||||
runcmd:
|
|
||||||
- sudo bash -x /tmp/run.sh
|
|
||||||
params:
|
|
||||||
$filter: { get_param: sf_filter }
|
|
||||||
$sub: { get_param: sf_sub }
|
|
||||||
|
|
||||||
sf_fip:
|
|
||||||
type: OS::Neutron::FloatingIP
|
|
||||||
properties:
|
|
||||||
floating_network: { get_param: provider_net }
|
|
||||||
port_id: { get_resource: sf_port_ctrl }
|
|
||||||
|
|
||||||
port_pair:
|
|
||||||
type: OS::Neutron::PortPair
|
|
||||||
properties:
|
|
||||||
name: sfc-test-pp
|
|
||||||
ingress: { get_resource: sf_port_ingress }
|
|
||||||
egress: { get_resource: sf_port_egress }
|
|
||||||
service_function_parameters:
|
|
||||||
correlation: mpls
|
|
||||||
depends_on: sf_vm
|
|
||||||
|
|
||||||
port_pair_group:
|
|
||||||
type: OS::Neutron::PortPairGroup
|
|
||||||
properties:
|
|
||||||
name: sfc-test-ppg
|
|
||||||
port_pairs:
|
|
||||||
- { get_resource: port_pair }
|
|
||||||
|
|
||||||
flow_classifier:
|
|
||||||
type: OS::Neutron::FlowClassifier
|
|
||||||
properties:
|
|
||||||
name: sfc-test-fc
|
|
||||||
logical_source_port: { get_resource: source_vm_port }
|
|
||||||
ethertype: IPv4
|
|
||||||
protocol: udp
|
|
||||||
source_port_range_min: 2345
|
|
||||||
source_port_range_max: 2345
|
|
||||||
|
|
||||||
port_chain:
|
|
||||||
type: OS::Neutron::PortChain
|
|
||||||
properties:
|
|
||||||
name: sfc-test-pc
|
|
||||||
flow_classifiers:
|
|
||||||
- { get_resource: flow_classifier }
|
|
||||||
port_pair_groups:
|
|
||||||
- { get_resource: port_pair_group }
|
|
||||||
|
|
||||||
outputs:
|
|
||||||
server_fip:
|
|
||||||
description: Floating IP of the echo server
|
|
||||||
value: { get_attr: [source_fip, floating_ip_address] }
|
|
|
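The service function above does nothing more than a UTF-8 decode/replace/encode of the UDP payload on each packet-in, so the transformation can be exercised in isolation. The sketch below mirrors that rewrite step; `rewrite_payload` is an illustrative helper name, not part of the template:

```python
def rewrite_payload(payload: bytes, sf_filter: str, sf_sub: str) -> bytes:
    """Mirror the controller's packet-in rewrite: decode the UDP payload,
    substitute the configured filter string, and re-encode it."""
    return payload.decode('utf-8').replace(sf_filter, sf_sub).encode('utf-8')
```

With the chain deployed, a client sending a UDP datagram containing `sf_filter` to the echo server's floating IP on port 2345 should receive the echo with `sf_sub` substituted, which is exactly what this helper computes.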
@ -1,47 +0,0 @@
[[local|localrc]]

LOGFILE=$DEST/logs/stack.sh.log

#OFFLINE=True
#RECLONE=False

DF_SELECTIVE_TOPO_DIST=True
DF_PUB_SUB=True
ENABLE_NEUTRON_NOTIFIER=False

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-cassandra
enable_service df-cassandra-server
enable_service df-controller

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service heat
disable_service tempest

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Specify the Cassandra server or cluster.
# When deploying a Cassandra cluster, use ',' to specify multiple servers.
REMOTE_DB_HOSTS=$HOST_IP:9042
CASSANDRA_NUM_OF_HOSTS=1

# The built-in PUB/SUB mechanism is mandatory for the Cassandra backend.
enable_service df-zmq-publisher-service

# Currently Active Port Detection and ZMQ collide (https://bugs.launchpad.net/dragonflow/+bug/1716933)
ENABLE_ACTIVE_DETECTION=False
@ -1,29 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-etcd
enable_service etcd3
enable_service df-controller
enable_service df-etcd-pubsub-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service heat
disable_service tempest

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp
@ -1,35 +0,0 @@
[[local|localrc]]

# These MUST come before 'enable_plugin dragonflow', as dragonflow
# assumes the skydive analyzer is already installed
enable_plugin skydive https://github.com/skydive-project/skydive.git
enable_service skydive-agent skydive-analyzer

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-etcd
enable_service etcd3
enable_service df-controller
enable_service df-etcd-pubsub-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service heat
disable_service tempest

enable_service df-skydive

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp
@ -1,31 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

enable_service df-controller
enable_service df-ramcloud
enable_service df-rccoordinator
enable_service df-rcmaster
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service q-dhcp

disable_service tempest
disable_service heat
disable_service q-agt

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Used by the RAMCloud init scripts
REMOTE_DB_PORT=21222
@ -1,30 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

DF_REDIS_PUBSUB=True
enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-redis
enable_service df-redis-server
enable_service df-controller
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service heat
disable_service tempest

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp
@ -1,30 +0,0 @@
[[local|localrc]]

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow

enable_service df-rethinkdb
enable_service df-rethinkdb-server
enable_service df-controller
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
disable_service q-dhcp

disable_service tempest
disable_service heat
disable_service q-agt

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Used by the RethinkDB init scripts
REMOTE_DB_PORT=28015
@ -1,45 +0,0 @@
[[local|localrc]]

LOGFILE=$DEST/logs/stack.sh.log

#OFFLINE=True
#RECLONE=False

DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password

enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-zookeeper
enable_service df-zookeeper-server
enable_service df-controller
enable_service df-publisher-service

disable_service n-net
enable_service q-svc
enable_service df-l3-agent
enable_service cinder
disable_service heat
disable_service tempest

# We have to disable the neutron L2 agent. DF does not use the L2 agent.
disable_service q-agt

# We have to disable the neutron dhcp agent. DF does not use the dhcp agent.
disable_service q-dhcp

# Enable df-metadata (Dragonflow metadata service proxy) once nova is being used.
enable_service df-metadata

# Specify the Zookeeper server or cluster.
# When deploying a Zookeeper cluster, use ',' to specify multiple servers.
REMOTE_DB_HOSTS=$HOST_IP:2181

# The built-in PUB/SUB mechanism is mandatory for the Zookeeper backend.
enable_service df-zmq-publisher-service

# Currently Active Port Detection and ZMQ collide (https://bugs.launchpad.net/dragonflow/+bug/1716933)
ENABLE_ACTIVE_DETECTION=False
@ -1,95 +0,0 @@
..
 This work is licensed under a Creative Commons Attribution 3.0 Unported
 License.

 https://creativecommons.org/licenses/by/3.0/legalcode

=====================
Allowed address pairs
=====================

https://blueprints.launchpad.net/dragonflow/+spec/allowed-address-pairs

This blueprint describes how to support allowed address pairs in
Dragonflow.

Problem Description
===================
The allowed address pairs feature lets a port carry additional IP/MAC
address pairs, so that traffic matching those values is allowed on the port.

In the Neutron reference implementation, the IP address in an allowed
address pair may be a prefix, and that prefix need not belong to the port's
fixed IP subnet. Supporting this full flexibility would greatly increase the
effort required, and we currently see no requirement for it. In Dragonflow,
we will therefore only support allowed address pairs whose IP addresses (not
IP address prefixes) are in the same subnet as the port's fixed IP.

In the current implementation, security modules such as port security and
security groups require that packets sent or received by a VM port carry
that port's fixed IP/MAC address. Likewise, L2 and L3 forwarding only
considers those fixed addresses. These modules must be changed to support
allowed address pairs.

Proposed Change
===============
A VM port may send or receive packets using the addresses configured in its
allowed address pairs. In many respects an allowed address pair plays the
same role as the port's fixed IP/MAC address pair, and the functional
modules should handle it in the same way.

Port Security
-------------
The port security module should admit packets carrying the fixed IP/MAC
address pair as well as packets carrying any address pair configured in the
port's allowed address pairs field. This is already covered by the
mac-spoofing-protection blueprint.

Security Group
--------------
The security group module translates the remote group field of a rule into
flows based on the IP addresses of the VM ports associated with that remote
group. To support allowed address pairs, those IP addresses must include
both the fixed IP addresses and the IP addresses in allowed address pairs.
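That address collection step can be sketched as follows; `remote_group_ips` and the port dictionaries are illustrative only, not Dragonflow's actual data model:

```python
def remote_group_ips(ports):
    """Collect every address a remote-group rule must match: the fixed IPs
    of the member ports plus the IPs in their allowed address pairs."""
    ips = []
    for port in ports:
        ips.extend(f['ip_address'] for f in port.get('fixed_ips', []))
        ips.extend(p['ip_address'] for p in port.get('allowed_address_pairs', []))
    return ips
```

The resulting list is what the module would translate into flow matches for the remote group.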

L2/L3 Lookup
------------
Several VM ports may share the same IP address (and, in some scenarios, the
same MAC address) in their allowed address pairs. In the L2/L3 lookup tables
we could simply flood packets destined to such an address to every VM port
that lists it in its allowed address pairs field, but that wastes bandwidth
when only a few of those VMs actually use the address.

An alternative is to send those packets only to the ports of the VMs that
are actually using the IP/MAC. Such VMs can be identified by listening for
gratuitous ARP packets for the IP/MAC on their ports, or by periodically
sending ARP requests to the IP and observing the corresponding replies. Once
the active VMs have been detected, the local controllers save this
information in the NB DB and publish it. When the L2/L3 apps receive the
notification, they install flows that forward packets to the ports of the
active VMs, just as they do for fixed IP/MAC addresses.

In particular, if only one VM among those listing the IP/MAC in the allowed
address pairs field of their ports may actually use it, the flow
installation in the L2/L3 apps becomes simpler. Since this is the most
common use of allowed address pairs (for example, VRRP), we only support
this situation in Dragonflow as a first step.

In Dragonflow, we propose to support both the "broadcast" approach and the
"detection" approach described above, with a configuration option letting
users choose between them.
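The detection bookkeeping described above amounts to a map from each shared IP to the port that most recently claimed it. A minimal sketch, with all names illustrative rather than Dragonflow's actual API:

```python
def handle_gratuitous_arp(active_ports, shared_ip, mac, port_id):
    """Record which port most recently claimed a shared IP (e.g. via a
    gratuitous ARP seen on that port); the L2/L3 apps would then install
    forwarding flows toward this port and publish the change via the NB DB."""
    active_ports[shared_ip] = {'mac': mac, 'port_id': port_id}
    return active_ports[shared_ip]
```

On a VRRP failover, the new master's gratuitous ARP overwrites the previous entry, which is what redirects the installed flows.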

ARP Responder
-------------
Because the allowed address pairs of several VM ports may contain the same
IP address with different MAC addresses, the ARP responder cannot generally
know which MAC address should be used to answer an ARP request for that IP.
We could simply continue to broadcast those ARP requests, or, once the
active VM mentioned above has been detected, answer them with the detected
MAC address of the active VM's port.


References
==========
[1] https://specs.openstack.org/openstack/neutron-specs/specs/api/allowed_address_pairs.html
[2] https://www.ietf.org/rfc/rfc3768.txt