Retire repository

All Fuel repositories in the openstack namespace have been retired;
retire the remaining Fuel repos in the x namespace as well, since they
are unused now.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011675.html

A related change is https://review.opendev.org/699752.

Change-Id: I5a23c6a1f8cd3b055b686e16714a8ff5ccf23d86
Andreas Jaeger 2019-12-18 19:48:13 +01:00
parent 5b0e745abc
commit 81eb842140
68 changed files with 10 additions and 6564 deletions
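
For reference, the retirement process linked above amounts to deleting every
tracked file and leaving only a retirement notice. A minimal sketch of the
commands, assuming a master checkout and the standard git-review workflow
(paraphrased from the infra manual, not quoted from it):

::

    git checkout master
    git rm -r '*'        # remove all tracked content
    # write the retirement notice (the README.rst added below)
    git add README.rst
    git commit           # commit message as above, including the Change-Id
    git review           # submit the change to Gerrit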

.gitignore (vendored, 9 deletions)

@@ -1,9 +0,0 @@
.tox
.build
*.pyc
*.rpm
*.sublime-workspace
BUILD/*
output
suppack/xenserver-suppack
*~

LICENSE (202 deletions)

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Makefile (78 deletions)

@@ -1,78 +0,0 @@
BRANDING=branding.inc
include ${BRANDING}

OPENSTACK_RELEASE:=mitaka

PLUGIN_VERSION:=$(shell ./get_plugin_version.sh ${BRANDING} | cut -d' ' -f1)
PLUGIN_REVISION:=$(shell ./get_plugin_version.sh ${BRANDING} | cut -d' ' -f2)

RPM_NAME=${PLUGIN_NAME}-${PLUGIN_VERSION}-${PLUGIN_VERSION}.${PLUGIN_REVISION}-1.noarch.rpm
MD5_FILENAME=${PLUGIN_NAME}-${PLUGIN_VERSION}.${PLUGIN_REVISION}_md5.txt
BUILDROOT=BUILD
DOC_NAMES=user-guide ${PLUGIN_REVISION}-test-plan ${PLUGIN_REVISION}-test-report

.SUFFIXES:

build: rpm docs md5
rpm: output/${RPM_NAME}
md5: output/${MD5_FILENAME}
docs: md5 $(DOC_NAMES:%=output/${PLUGIN_NAME}-${PLUGIN_VERSION}-%.pdf)

REQUIRED_ISOS=$(PLATFORMS:%=suppack/xcp_%/xenapi-plugins-${OPENSTACK_RELEASE}.iso)

iso: $(REQUIRED_ISOS)

$(REQUIRED_ISOS): plugin_source/deployment_scripts/patchset/xenhost
	suppack/build-xenserver-suppack.sh ${OPENSTACK_RELEASE}

${BUILDROOT}/${PLUGIN_NAME}/branded: ${BRANDING} ${REQUIRED_ISOS} plugin_source
	mkdir -p ${BUILDROOT}/${PLUGIN_NAME}
	cp -r plugin_source/* ${BUILDROOT}/${PLUGIN_NAME}
	find ${BUILDROOT}/${PLUGIN_NAME} -type f -print0 | \
		xargs -0 -i sed -i \
		-e "s/@HYPERVISOR_NAME@/${HYPERVISOR_NAME}/g" \
		-e "s/@HYPERVISOR_LOWER@/${HYPERVISOR_LOWER}/g" \
		-e "s/@PLUGIN_NAME@/${PLUGIN_NAME}/g" \
		-e "s/@PLUGIN_VERSION@/${PLUGIN_VERSION}/g" \
		-e "s/@PLUGIN_REVISION@/${PLUGIN_REVISION}/g" \
		-e s/@VERSION_HOTFIXES@/'${VERSION_HOTFIXES}'/g {}
	cp -r suppack/xcp_* ${BUILDROOT}/${PLUGIN_NAME}/deployment_scripts/
	touch ${BUILDROOT}/${PLUGIN_NAME}/branded

output/${RPM_NAME}: ${BUILDROOT}/${PLUGIN_NAME}/branded
	mkdir -p output
	(cd ${BUILDROOT}; which flake8 > /dev/null && flake8 ${PLUGIN_NAME}/deployment_scripts --exclude=XenAPI.py)
	(cd ${BUILDROOT}; fpb --check ${PLUGIN_NAME})
	(cd ${BUILDROOT}; fpb --build ${PLUGIN_NAME})
	cp ${BUILDROOT}/${PLUGIN_NAME}/${RPM_NAME} $@

${BUILDROOT}/doc/source ${BUILDROOT}/doc/Makefile: ${BRANDING} doc/Makefile doc/source
	mkdir -p ${BUILDROOT}/doc
	cp -r doc/Makefile doc/source ${BUILDROOT}/doc
	find ${BUILDROOT}/doc -type f -print0 | \
		xargs -0 -i sed -i \
		-e "s/@HYPERVISOR_NAME@/${HYPERVISOR_NAME}/g" \
		-e "s/@PLUGIN_NAME@/${PLUGIN_NAME}/g" \
		-e "s/@PLUGIN_VERSION@/${PLUGIN_VERSION}/g" \
		-e "s/@PLUGIN_REVISION@/${PLUGIN_REVISION}/g" \
		-e "s/@PLUGIN_MD5@/`cat output/${MD5_FILENAME} | cut -d' ' -f1`/g" {}

${BUILDROOT}/doc/build/latex/%.pdf: ${BUILDROOT}/doc/Makefile ${shell find ${BUILDROOT}/doc/source}
	make -C ${BUILDROOT}/doc latexpdf

output/${PLUGIN_NAME}-${PLUGIN_VERSION}-${PLUGIN_REVISION}-%.pdf: ${BUILDROOT}/doc/build/latex/%.pdf
	mkdir -p output
	cp $^ $@

output/${PLUGIN_NAME}-${PLUGIN_VERSION}-%.pdf: ${BUILDROOT}/doc/build/latex/%.pdf
	mkdir -p output
	cp $^ $@

output/${MD5_FILENAME}: output/${RPM_NAME}
	md5sum $^ > $@

clean:
	rm -rf ${BUILDROOT} output suppack/xcp_* suppack/build
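
Taken together, the default ``build`` goal produces the plugin RPM, the PDF
documents and an md5 file under ``output/``. A hypothetical invocation on a
build host, assuming the fpb (fuel-plugin-builder), flake8 and Sphinx/LaTeX
toolchain is installed:

::

    make          # default goal: rpm + docs + md5, results land in output/
    make iso      # only build the XenServer supplemental pack ISOs
    make clean    # remove BUILD/, output/ and the generated suppack trees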

(project README, 24 deletions)

@@ -1,24 +0,0 @@
XenServer Fuel Plugin
=====================
Intro
=====
XenServer Fuel Plugin helps to deploy Mirantis OpenStack over XenServer hosts and makes sure the compute nodes use the xenapi driver rather than qemu.
Usage
=====
Please run `make latexpdf` and look at the User Guide `fuel-plugin-xenserver.pdf` generated under `doc/build/latex/fuel-plugin-xenserver`.
How to build plugin
===================
pip install git+https://github.com/openstack/fuel-plugins
pip show fuel-plugin-builder | grep ^Version # make sure the version is >= 4.0.1
git clone https://git.openstack.org/openstack/fuel-plugin-xenserver
cd fuel-plugin-xenserver
make

README.rst (new file, 10 additions)

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

branding.inc (10 deletions)

@@ -1,10 +0,0 @@
HYPERVISOR_NAME=XenServer
HYPERVISOR_LOWER=xenserver
PLUGIN_NAME=fuel-plugin-xenserver
VERSION_HOTFIXES={"6.5.0":["XS65ESP1013"]}
PLATFORMS=1.9.0 2.1.0 2.2.0
PLUGIN_BRANCHES=9.0 8.0 7.0 6.1
PLUGIN_VERSION_6_1=1.0
PLUGIN_VERSION_7_0=2.0
PLUGIN_VERSION_8_0=3.1
PLUGIN_VERSION_9_0=4.0
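
The per-branch ``PLUGIN_VERSION_*`` variables above are resolved at build time
by ``get_plugin_version.sh`` (shown at the end of this change) using bash
indirect expansion. A minimal sketch of that lookup, assuming the file has
been sourced into the shell:

::

    branch=9.0
    var_name=PLUGIN_VERSION_${branch//./_}   # -> PLUGIN_VERSION_9_0
    echo "${!var_name}"                      # -> 4.0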

doc/Makefile (192 deletions)

@@ -1,192 +0,0 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  applehelp  to make an Apple Help Book"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"
	@echo "  coverage   to run coverage check of the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/fuel-plugin-xenserver.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/fuel-plugin-xenserver.qhc"

applehelp:
	$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
	@echo
	@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
	@echo "N.B. You won't be able to view it unless you put it in" \
	      "~/Library/Documentation/Help or install it in your application" \
	      "bundle."

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/fuel-plugin-xenserver"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/fuel-plugin-xenserver"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

coverage:
	$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
	@echo "Testing of coverage in the sources finished, look at the " \
	      "results in $(BUILDDIR)/coverage/python.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

(Five binary image files removed, not shown: 86, 85, 99, 54 and 62 KiB;
presumably the doc/source/_static screenshots and topology diagrams referenced
in the documentation below.)

doc/source/conf.py (292 deletions)

@@ -1,292 +0,0 @@
# -*- coding: utf-8 -*-
#
# fuel-plugin-xenserver documentation build configuration file, created by
# sphinx-quickstart on Fri Nov 27 11:57:49 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import shlex
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
test_plan = 'test-plan'
test_report = 'test-report'
# General information about the project.
project = u'fuel-plugin-xenserver'
copyright = u'2016, John Hua (john.hua@citrix.com)'
author = u'John Hua (john.hua@citrix.com)'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '@PLUGIN_VERSION@'
# The full version, including alpha/beta/rc tags.
release = '@PLUGIN_VERSION@'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'fuel-plugin-xenserverdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    'classoptions': ',openany,oneside',
    'babel': '\\usepackage[english]{babel}',

    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',

    # Latex figure (float) alignment
    #'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'user-guide.tex', u'User Guide for XenServer Fuel Plugin',
     author, 'manual'),
    (test_plan, 'test-plan.tex', u'Test Plan for XenServer Fuel Plugin',
     author, 'howto'),
    (test_report, 'test-report.tex', u'Test Report for XenServer Fuel Plugin',
     author, 'howto'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'fuel-plugin-xenserver', u'fuel-plugin-xenserver Documentation',
     [author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'fuel-plugin-xenserver', u'fuel-plugin-xenserver Documentation',
     author, 'fuel-plugin-xenserver', 'One line description of project.',
     'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
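
The ``@PLUGIN_VERSION@``-style tokens in this file are not valid Sphinx values
on their own; they are replaced by the top-level Makefile's sed pass before
the documentation is built. A minimal sketch of that substitution (the version
numbers are illustrative):

::

    sed -i -e "s/@PLUGIN_VERSION@/4.0/g" \
           -e "s/@PLUGIN_REVISION@/0/g" BUILD/doc/source/conf.py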

doc/source/description.rst (37 deletions)

@@ -1,37 +0,0 @@
XenServer Plugin for Fuel 9.0
=============================
Requirements
------------
=========================  ============================
Requirement                Version/Comment
=========================  ============================
Fuel                       9.0
XenServer                  7.0 and 7.1
XenServer plugin for Fuel  @PLUGIN_VERSION@
=========================  ============================
* This plugin will not install XenServer or configure the Virtual
Machines used to run the OpenStack services. Installation of
XenServer and configuration of these Virtual Machines must be
performed manually. We recommend that all hotfixes are applied.
* File-based storage (EXT / NFS) must be used. If using local storage
then select "Enable thin provisioning" at host installation time.
* Each hypervisor must have the same access credentials as Fuel
does not support per-node settings.
* One Virtual Machine, which will be used to run Nova (the compute
node), must exist on each hypervisor. This must be created as an
HVM guest (in XenCenter, use the "Other Install Media" template) and
configured to PXE boot from the PXE network used by Fuel.
* XenCenter is expected to be used to configure VMs, and is required
by the HIMN tool in the installation steps.
Limitations
-----------
* The plugin is **only** compatible with OpenStack environments deployed with
**Neutron with VLAN segmentation** as network configuration in the
environment configuration options. The plugin will disable incompatible
options when the XenServer Release is selected.

doc/source/guide.rst (86 deletions)

@@ -1,86 +0,0 @@
XenServer Fuel Plugin User Guide
================================
Once the Fuel XenServer plugin has been installed (following
`Installation Guide`_), you can create *OpenStack* environments that
use XenServer as the underlying hypervisor.
Prepare infrastructure
----------------------
1. Everyone will have different infrastructure requirements. The additional requirements placed by XenServer are:
- Compute nodes must be run as a Virtual Machine, with one VM per XenServer hypervisor
- Ensure that the connectivity through to this virtual machine is the same as all other service nodes, as with standard Mirantis OpenStack setups
- An internal network is added by the instructions below, to provide communication between the host and the compute VM.
- Other service nodes (e.g. storage node) can also be created as virtual machines, but this is not required
2. Download and install XenServer and the HIMN tool, a XenCenter plugin, as described in the Installation Guide. Use it for future VM creation and network configuration.
3. While many networking setups are expected to work, the following setup is known to work:
- Physical machines with three ethernet devices:
- eth0 / “Access network”: Used to access the XenServer hosts and the Fuel Masters web interface
- eth1 / “Control network”: OpenStack control plane (management and storage), the PXE network and the public network; all separated by VLAN tags. The public network is also on this network, and if a VLAN is required this is applied by the switch for untagged traffic.
- eth2 / “VLAN network”: This version of the plugin only supports VLAN segmentation for Neutron networking. This device carries all of the VLANs to be used by Neutron for VM traffic.
- One virtual network
- VLAN 'pxe' on eth1 / “PXE network”: Used for node bootstrapping.
4. To simplify the setup, the fuel master can also be installed on the XenServer hosts (so XenServer hosts can fully control the network setup), but this is not required.
One example deployment is shown below.
.. image:: _static/topology00.png
:width: 100%
Select Environment
------------------
#. Create a new environment with the Fuel UI wizard. Select "Mitaka on Ubuntu 14.04" from the OpenStack Release dropdown list, untick QEMU-KVM and tick XenServer. At this point you will see that most options are disabled in the wizard.
.. image:: _static/fmwizard00.png
:width: 100%
#. Create new VMs in XenCenter for the compute nodes
#. Select all Compute virtual Machines, Right click on one of the
Virtual Machines and select "Manage internal management network"
#. Use the dialog to add the Host Internal Management
Network to the compute virtual machines
.. image:: _static/HIMN_dialog.jpg
:width: 100%
#. Add new VMs to the new environment according to `Fuel User Guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/add-nodes.html>`_ and configure them properly. A typical topology of 3 controller nodes + 3 compute nodes + 1 storage node is recommended.
#. Check that the MAC addresses of the networks assigned in the "Interface
Configuration" tab correspond to the correct physical or virtual interfaces.
Note that no networks should be assigned to the HIMN interface on compute nodes,
which will normally show as the last interface for these nodes.
#. Go to the Settings tab, "Compute" section. Enter the common access credentials for all the XenServer hosts that were previously used to create the new VMs.
.. image:: _static/fmsetting00.png
:width: 100%
#. If the XenServer host already has compatible Nova plugins installed, untick the checkbox to install the supplemental packs. In normal cases, the XenServer host will not have compatible Nova plugins installed, so leave the checkbox enabled.
Finish environment configuration
--------------------------------
#. Run `network verification check <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/verify-networks.html>`_
#. Press the `Deploy button <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/deploy-environment/deploy-changes.html>`_ once you are done with the environment configuration.
#. After deployment is done, you will see in Horizon that all hypervisors are xen.
.. image:: _static/fmhorizon00.png
:width: 100%
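
Besides Horizon, the hypervisor type can also be checked from a controller
node with the nova client. A hedged example: the openrc path is the one Fuel
normally deploys, the hypervisor ID is illustrative, and the exact type string
may read "xen" or "XenServer" depending on the client version:

::

    source /root/openrc
    nova hypervisor-list
    nova hypervisor-show 1 | grep hypervisor_type   # expect a xen type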

doc/source/index.rst (41 deletions)

@@ -1,41 +0,0 @@
Intro
=====
This document will guide you through the steps to install, configure and use the XenServer Plugin for Fuel.
XenServer is an Open Source hypervisor with commercial support options
provided by Citrix. This plugin provides a new Release definition in
Mirantis OpenStack to allow easy installation of production
environments based on XenServer with Fuel.
XenServer is freely available from `xenserver.org
<http://xenserver.org/open-source-virtualization-download.html>`_ and
can also be downloaded directly from `citrix.com
<http://www.citrix.com/downloads/xenserver.html>`_ if you have a My
Citrix account.
Documentation for XenServer can be found on `docs.vmd.citrix.com
<http://docs.citrix.com/en-us/xenserver/xenserver-7-0.html>`_ and for how
XenServer works within OpenStack at docs.openstack.org in the
`OpenStack Configuration Reference
<http://docs.openstack.org/juno/config-reference/content/introduction-to-xen.html>`_
guide.
.. include:: description.rst
.. include:: terms.rst
.. include:: installation.rst
.. include:: guide.rst
.. include:: troubleshooting.rst
.. include:: relnotes.rst
Further reading
===============
Here are some of the resources available to learn more about Xen:
* Citrix XenServer official documentation: http://docs.vmd.citrix.com/XenServer
* What is Xen? by Xen.org: http://xen.org/files/Marketing/WhatisXen.pdf
* Xen Hypervisor project: http://www.xenproject.org/developers/teams/hypervisor.html
* Xapi project: http://www.xenproject.org/developers/teams/xapi.html
* Further XenServer and OpenStack information: http://wiki.openstack.org/XenServer

doc/source/installation.rst (46 deletions)

@@ -1,46 +0,0 @@
Installation Guide
==================
Install the Plugin
------------------
To install the XenServer Fuel plugin:
#. Download it from the `Fuel Plugins Catalog`_
#. Copy the *rpm* file to the Fuel Master node:
::
[root@home ~]# scp fuel-plugin-xenserver-4.0-4.0.*-1.noarch.rpm root@fuel:/tmp
#. Log into Fuel Master node and install the plugin using the
`Fuel CLI <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/cli.html>`_:
::
[root@fuel-master ~]# fuel plugins --install /tmp/fuel-plugin-xenserver-4.0-4.0.*-1.noarch.rpm
#. Verify that the plugin is installed correctly. Note that the
version displayed may differ in the final part of the version
number (the patch level). All versions with the same major and
minor version number are compatible with this guide.
::
[root@fuel-master ~]# fuel plugins
id | name | version | package_version
---|-----------------------|---------|----------------
1 | fuel-plugin-xenserver | 4.0.0 | 4.0.0
Add Management Network tool
---------------------------
#. Download the HIMN tool `xencenter-himn-plugin <https://github.com/citrix-openstack/xencenter-himn-plugin>`_
#. Stop XenCenter if it is running
#. Install the HIMN tool
#. Re-start XenCenter
.. _Fuel Plugins Catalog: https://www.mirantis.com/validated-solution-integrations/fuel-plugins/

doc/source/relnotes.rst (38 deletions)

@@ -1,38 +0,0 @@
Release Notes
=============
Version 4.0
-------------
* Works with Fuel 9.2
* Add Ceph as backend for Glance
* Add image caching clean up script
* Add Ceilometer support
* Add vif hot plug support
* Add live migration support
* Add Conntrack support
Version 3.1
-------------
* Uses new features of FPB 4.0
* Uses ssh-key instead of password
* Fix image cache
* Fix static routes with udev rules
Version 3.0
-------------
* Works with Fuel 8.0
* Add Neutron support
Version 2.0
-------------
* Works with Fuel 7.0
Version 1.0
-------------
* Initial release of the plugin.
* Works with Fuel 6.1

doc/source/test-plan.rst (472 deletions)

@@ -1,472 +0,0 @@
Test Plan for XenServer Fuel Plugin
===================================
XenServer Fuel Plugin
=====================
XenServer Fuel Plugin will help to deploy Mirantis OpenStack using the
XenServer hypervisor to host virtual machines, making all the necessary
changes to Mirantis OpenStack to use the xenapi Nova compute driver.
Developers Specification
=========================
See the developers' specification in the source code repository at
https://git.openstack.org/openstack/fuel-plugin-xenserver
Limitations
-----------
This version of XenServer Fuel Plugin has not been certified to work with the
Ceilometer, MongoDB or Murano additional services. Future versions of the
plugin will relax these restrictions.
Test strategy
=============
Acceptance criteria
-------------------
All tests that do not depend on additional services must pass.
Test environment, infrastructure and tools
------------------------------------------
All tests need to be run under a cluster of at least 4 XenServer machines
with 3 physical NICs. As HA and multihost are enabled, a topology of 3
Controller Nodes + 3 Compute Nodes + 1 Storage Node will be recommended to be
created as VMs on XenServer machines. Easy setup and management of those
XenServers and VM Nodes can be achieved using XenCenter and a plugin,
described below, to add an internal management network to VMs.
To simplify setup, the fuel master is also installed on the XenServer hosts
(so XenServer hosts can fully control the network setup), but this is not
required.
While many networking setups are expected to work, the following setup is
used by this test plan:
* eth0 / “Access network”: Used to access the XenServer hosts and the Fuel
Masters web interface
* eth1 / “Control network”: OpenStack control plane (management and storage),
the PXE network and the public network; all separated by VLAN tags. The
public network is also on this network, and if a VLAN is required this is
applied by the switch for untagged traffic.
* eth2 / “VLAN network”: This version of the plugin only supports VLAN
segmentation for Neutron networking. This device carries all of the VLANs to
be used by Neutron for VM traffic.
.. image:: _static/topology00.png
:width: 80%
For the hardware configuration see Mirantis OpenStack Planning Guide at
https://docs.mirantis.com/openstack/fuel/fuel-9.0/mos-planning-guide.html
Product compatibility matrix
----------------------------
The plugin is compatible with MOS 9.0 and XenServer versions 7.0
and 7.1, with all hotfixes applied.
Prerequisites
===============
Prepare XenServers
------------------
#. Install and start XenCenter on your Windows PC
#. Add new servers with a common root password in XenCenter
#. Plug three physical NICs into each XenServer machine; make sure all
NIC 0s are attached to the access network, all NIC 1s to the public
network and all NIC 2s to the isolated VLAN network.
It is recommended to rename these networks using XenCenter to make the
network topology clear.
#. Add a further network, with a vlan tag that will be used for PXE.
Prepare Fuel Master
-------------------
#. Upload the Fuel ISO to an NFS/Samba server and make it accessible to your
XenServer hosts.
#. Select a XenServer and click the “New Storage” button; in the popup window,
select CIFS/NFS ISO library and input the NFS/Samba server path.
#. Create a new VM in XenCenter using the “Other Install Media” template (to
ensure an HVM domain is created) with the PXE network as eth0 and the access
network as eth1. In the Console tab, insert the Fuel ISO and install.
#. In the Fuel menu, enable eth1 with DHCP so the Fuel Master can be accessed
over the access network.
#. Select the Fuel Master in XenCenter and switch to the Console tab; log in
with the prompted user name and password.
#. Visit http://ip_of_fuel_master:8000 in a browser.
Type of testing
===============
Install XenServer Fuel Plugin
-----------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- install_xfp
* - Description
- Verify that XenServer Fuel Plugin can be installed into Fuel Master,
and the new OpenStack release is registered.
* - Steps
-
#. Run ``ls /tmp/fuel-plugin-xenserver-4.0-4.0.*-1.noarch.rpm`` to confirm the exact version of fuel plugin to be installed.
#. Run ``fuel plugins --install /tmp/<rpm filename>`` to install the plugin
#. Run ``fuel plugins`` to list the plugins installed
* - Expected Result
- The table output by Fuel shows the version identified in the first step as installed
Prepare Nodes
-------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- prepare_nodes
* - Description
- Verify all controller/compute/storage nodes are ready for PXE install.
* - Steps
-
#. Create 3 new VMs in XenCenter in different XenServers and name them
Controller1, Controller2, Controller3
#. Create 3 new VMs in XenCenter in different XenServers and name them
Compute1, Compute2, Compute3
#. Create 1 new VM in XenCenter and name it Storage1
#. Add PXE network as eth0, Public/Management/Storage network as
eth1 and VLAN network as eth2 to each of new VMs created above.
* - Expected Result
- All nodes are shown in XenCenter with PXE network as eth0 and VLAN
network as eth1.
Install XenCenter HIMN plugin
-----------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- install_xcp
* - Description
- Verify XenCenter HIMN plugin is installed to Windows.
* - Steps
-
#. Download SetupHIMN from http://ca.downloads.xensource.com/OpenStack/Plugins/
#. Install MSI to your XenCenter
#. Restart XenCenter
* - Expected Result
- Right click on any selected VMs, there will be a menu item “Manage
internal management network”.
Add Host Internal Management Network to Compute Nodes
-----------------------------------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- add_himn
* - Description
- Verify (or add) Host Internal Management Network is added to all
Compute Nodes.
* - Steps
-
#. Select Compute1, Compute2, Compute3 in XenCenter
#. Right click on above nodes and select “Manage internal management
network” menu.
#. In the popup window, after status detection, make sure all selected
Compute nodes are checked on. Click on “Manage internal management
network” button.
#. After processing, the status column should be shown as management
network is added with new generated MAC address
#. Close the management network window
* - Expected Result
- The wizard will report success; however, the networks may not be
visible in XenCenter.
Create an OpenStack environment with XenServer Fuel Plugin
----------------------------------------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- create_env
* - Description
- Verify that an OpenStack environment created with XenServer Fuel
Plugin can have XenServer options and options of
hypervisor/network/storage/additional services are disabled.
* - Steps
-
#. Create a new OpenStack environment in the Fuel Web UI and select
“Mitaka on Ubuntu 14.04” in the OpenStack release
dropdown list
#. Untick QEMU and tick XenServer. Network defaults to “Neutron
with VLAN segmentation” and Storage defaults to Cinder. Other
options are disabled.
#. In Nodes Tab, add all 3 Controller Nodes, 3 Compute Nodes and 1
Storage Node.
#. Select all Compute Nodes and click “Configure Interfaces”, drag
Storage/Management network from default eth0 to eth1, Private
network to eth2. Leave PXE on eth0. No networks should be
assigned to the final interface.
#. Select all Controller and Storage Nodes and click “Configure
Interfaces”, drag Storage/Management network from default eth0 to
eth1, Private network to eth2. Leave PXE on eth0.
#. In the Networks Tab, set the VLAN tags to match the network
interfaces configured previously, and make sure the network ranges do
not conflict with other systems in the same lab. Then click the
“Verify Networks” button.
#. In the Settings Tab under the side tab “Compute”, input the
credentials that apply to all your XenServer hosts.
#. Click “Deploy Changes” button
* - Expected Result
- Deploy of nodes all succeed
Verify hypervisor type
----------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- verify_hypervisor
* - Description
- Verify that all hypervisors are identified by OpenStack as "XenServer".
* - Steps
-
#. Login to Horizon with admin user when OpenStack deployment is
finished.
#. Enter into Admin->Hypervisors
* - Expected Result
- The Type column should show XenServer for all hypervisors.
Create guest instances
----------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- create_instances
* - Description
- Verify that new environment can create guest instances.
* - Steps
-
#. Create an instance with image of TestVM and flavor of m1.tiny in
either of Horizon or Controller Node.
#. Find the instance in XenCenter and switch to Console Tab.
#. Login with the username and password that prompted in the terminal
screen.
#. Ping out to 8.8.8.8
* - Expected Result
- Guest instances can ping out.
Verify Fuel Health Checks
-------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- verify_health_checks
* - Description
- Ensure that all applicable health checks pass
* - Steps
-
#. Within the Fuel Master, select the appropriate environment
#. Run all health checks and wait for completion
* - Expected Result
- All pass
Mandatory Tests
===============
Install plugin and deploy environment
-------------------------------------
Covered above.
Modifying env with enabled plugin (removing/adding compute nodes)
-----------------------------------------------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- modify_env_compute_nodes
* - Description
- Adding/removing compute nodes to an existing environment
* - Steps
-
#. Create one more compute node following the procedure in step
prepare_nodes
#. Add compute node to an existing environment
#. Redeploy cluster
#. Run Health Check
#. Remove a compute node
#. Redeploy cluster
#. Run Health Check
* - Expected Result
- All pass
Modifying env with enabled plugin (removing/adding controller nodes)
--------------------------------------------------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- modify_env_controller_nodes
* - Description
- Adding/removing controller nodes to an existing environment
* - Steps
-
#. Create one more controller node following the procedure in step
prepare_nodes
#. Add controller node to an existing environment
#. Redeploy cluster
#. Run Health Check
#. Remove a controller node (not the primary controller node)
#. Redeploy cluster
#. Run Health Check
* - Expected Result
- All pass
Create mirror and update (setup) of core repos
---------------------------------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- create_mirror_update_core_repos
* - Description
- Fuel create mirror and update (setup) of core repos
* - Steps
-
#. Launch the following command on the Fuel Master node: ``fuel-mirror create -G mos -P ubuntu``
#. Launch the following command on the Fuel Master node: ``fuel-mirror apply -G mos -P ubuntu -e ENV_ID``, ENV_ID is the id of the deployed cluster
#. Check if MOS repositories have been changed to local
#. Run Health Check
* - Expected Result
-
#. Health Checks are passed.
#. MOS repositories have been changed to local
#. XenServer Fuel plugin doesn't launch any services, so the check of process PID and status can be skipped
Uninstall of plugin with deployed environment
---------------------------------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- uninstall_plugin_with_deployed_env
* - Description
- Verify XenServer Fuel Plugin cannot be uninstalled before all
dependant environments are removed.
* - Steps
-
#. Run ``fuel plugins`` to identify the exact version of the plugin installed
#. Run ``fuel plugins --remove fuel-plugin-xenserver==<version>`` to remove the plugin
* - Expected Result
- 400 Client Error: Bad Request (Can't delete plugin which is enabled
for some environment.)
Uninstall of plugin
-------------------
.. tabularcolumns:: |p{3cm}|p{13cm}|
.. list-table::
:header-rows: 0
* - Test Case ID
- uninstall_plugin
* - Description
- Verify XenServer Fuel Plugin can be uninstalled as well as XenServer
OpenStack release after all dependant environments are removed.
* - Steps
-
#. Run ``fuel plugins`` to identify the exact version of the plugin installed
#. Run ``fuel plugins --remove fuel-plugin-xenserver==<version>`` to remove the plugin
#. Run ``fuel plugins``
* - Expected Result
- Plugin is removed.
Appendix
========
* XenServer Fuel Plugin Repository: https://git.openstack.org/cgit/openstack/fuel-plugin-xenserver
* XenCenter HIMN Plugin GitHub: https://github.com/citrix-openstack/xencenter-himn-plugin
* Plugin download server: http://ca.downloads.xensource.com/OpenStack/Plugins/
Revision history
================
.. list-table::
:header-rows: 1
* - Version
- Revision Date
- Editor
- Comment
* - 1.0
- 18.09.2015
- John Hua (john.hua@citrix.com)
- First draft.
* - 2.0
- 18.11.2015
- John Hua (john.hua@citrix.com)
- Revised for Fuel 7.0
* - 3.0
- 22.03.2016
- John Hua (john.hua@citrix.com)
- Revised for Fuel 8.0
* - 3.1
- 22.03.2016
- John Hua (john.hua@citrix.com)
- Revised for plugin 4.0.0
* - 4.0
- 12.08.2016
- John Hua (john.hua@citrix.com)
- Revised for Fuel 9.0
* -
- 03.08.2017
- Huan Xie (huan.xie@citrix.com)
- Revised for Mirantis Fuel 9.2

doc/source/test-report.rst (226 deletions)

@@ -1,226 +0,0 @@
Test Report for XenServer Fuel Plugin
=====================================
Revision history
================
.. tabularcolumns:: |p{1.5cm}|p{2.5cm}|p{7cm}|p{4.5cm}|
.. list-table::
:header-rows: 1
* - Version
- Revision Date
- Editor
- Comment
* - 1.0
- 25.09.2015
- John Hua (john.hua@citrix.com)
- First draft.
* - 2.0
- 8.11.2015
- John Hua (john.hua@citrix.com)
- Revised for Mirantis Fuel 7.0
* - 3.0
- 13.04.2016
- John Hua (john.hua@citrix.com),
Jianghua Wang (jianghua.wang@citrix.com)
- Revised for Mirantis Fuel 8.0
* - 3.1
- 19.04.2016
- John Hua (john.hua@citrix.com)
- Rewrite in RST
* - 4.0
- 12.08.2016
- John Hua (john.hua@citrix.com)
- Revised for Mirantis Fuel 9.0
* -
- 03.08.2017
- Huan Xie (huan.xie@citrix.com)
- Revised for Mirantis Fuel 9.2
Document purpose
================
This document provides test run results for the @HYPERVISOR_NAME@ Fuel
Plugin version @PLUGIN_VERSION@.@PLUGIN_REVISION@ on Mirantis
OpenStack 9.0.
Test environment
================
The following is the hardware configuration for target nodes used for
verification. For other configuration settings, please see the test plan.
.. list-table::
:header-rows: 1
* - Node Type
- vCPU
- Memory
- Disk
* - Controller
- 4
- 6GB
- 80GB
* - Compute
- 4
- 4GB
- 60GB
* - Storage
- 4
- 4GB
- 60GB
Plugin's RPM
------------
.. list-table::
:header-rows: 1
* - RPM Name
- MD5 Checksum
* - @PLUGIN_NAME@-@PLUGIN_VERSION@-@PLUGIN_VERSION@.@PLUGIN_REVISION@-1.noarch.rpm
- @PLUGIN_MD5@
Interoperability with other plugins
-----------------------------------
No other plugins were tested for interoperability.
Test coverage and metrics
-------------------------
* Test Coverage: 100%
* Tests Passed: 100%
* Tests Failed: 0%
Test results summary
====================
System Testing
--------------
.. list-table::
:header-rows: 1
* - Parameter
- Value
* - Total quantity of executed test cases
- 13
* - Total quantity of not executed test cases
- 0
* - Quantity of automated test cases
- 0
* - Quantity of not automated test cases
- 0
Detailed test run results
-------------------------
.. tabularcolumns:: |p{1cm}|p{4cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{7cm}|
.. list-table::
:header-rows: 1
* - #
- Test case ID
- Passed
- Failed
- Skipped
- Comment
* - 1
- Install XenServer Fuel Plugin
- Yes
-
-
-
* - 2
- Prepare Nodes
- Yes
-
-
-
* - 3
- Install XenCenter HIMN plugin
- Yes
-
-
-
* - 4
- Add Host Internal Management Network to Compute Nodes
- Yes
-
-
-
* - 5
- Create an OpenStack environment with XenServer Fuel Plugin
- Yes
-
-
-
* - 6
- Verify hypervisor type
- Yes
-
-
-
* - 7
- Create guest instances
- Yes
-
-
-
* - 8
- Verify Fuel Health Checks
- Yes
-
-
-
* - 9
- Add/Remove compute node
- Yes
-
-
-
* - 10
- Add/Remove controller node
- Yes
-
-
-
* - 11
- Create mirror and update (setup) of core repos
- Yes
-
-
-
* - 12
- Uninstall of plugin with deployed environment
- Yes
-
-
-
* - 13
- Uninstall of plugin
- Yes
-
-
-
* - Total
-
- 13
- 0
- 0
-
* - Total,%
-
- 100
- 0
- 0
-
Known issues
============
No issues were found during testing.


@ -1,19 +0,0 @@
Troubleshooting
===============
#. Logging
In addition to the Astute log, the XenServer Fuel Plugin writes its own logs
under ``/var/log/fuel-plugin-xenserver`` on all Compute and Controller nodes.
The HIMN tool mentioned in the Installation Guide also has its own log
under ``%LOCALAPPDATA%/Temp/XCHIMN.log``.
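
For example, to inspect the plugin logs on a Compute node
(``compute_post_deployment.log`` is one of the log files the plugin
writes; the exact set of files may differ)::

    ls /var/log/fuel-plugin-xenserver
    tail -n 50 /var/log/fuel-plugin-xenserver/compute_post_deployment.log
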
#. XenServer Bug Reports
You can upload the collected logs to https://cis.citrix.com or send them to
support if they are requested.
Please see the following blog post for advice on writing XenServer bug
reports: `Writing Good Bug Reports for XenServer
<https://www.citrix.com/blogs/2012/07/16/writing-good-bug-reports-for-xenserver/>`_


@ -1,41 +0,0 @@
#!/bin/bash
set -eu
# Source the branding file
source <(sed -e 's/\(.*\)=\(.*\)/\1="\2"/' "${1}")
# Find the branch with the shortest delta to HEAD: a release branch whose
# merge-base with master equals HEAD's merge-base lies on HEAD's own history
my_merge_base=$(git merge-base HEAD origin/master)
shortest_branch=master
for branch in $PLUGIN_BRANCHES; do
# Verify that the named branch actually exists
set +e
git rev-parse --verify origin/$branch >/dev/null 2>/dev/null
branch_test_exit_code=$?
set -e
if [ $branch_test_exit_code -gt 0 ]; then
continue
fi
branch_merge_base=$(git merge-base origin/master origin/$branch)
if [ "$branch_merge_base" == "$my_merge_base" ]; then
shortest_branch=$branch
fi
done
if [ "$shortest_branch" == 'master' ]; then
shortest_branch=$(echo $PLUGIN_BRANCHES | cut -d' ' -f1)
var_name=PLUGIN_VERSION_${shortest_branch//./_}
branch_major=$(echo ${!var_name} | cut -d'.' -f1)
branch_major=${branch_major}.90
branch_minor=$(git rev-list HEAD --count)
else
var_name=PLUGIN_VERSION_${shortest_branch//./_}
branch_merge_base=$(git merge-base origin/master origin/$shortest_branch)
branch_major=${!var_name}
branch_minor=$(git rev-list HEAD ^$branch_merge_base --count)
fi
echo $branch_major $branch_minor


@ -1,24 +0,0 @@
- name: 'hypervisor:@HYPERVISOR_LOWER@'
label: '@HYPERVISOR_NAME@'
description: 'Select this option if you run OpenStack on @HYPERVISOR_NAME@ Hypervisor'
incompatible:
- name: 'hypervisor:vmware'
description: ''
- name: 'hypervisor:qemu'
description: ''
- name: 'additional_service:sahara'
description: ''
- name: 'additional_service:murano'
description: ''
- name: 'additional_service:mongo'
description: ''
- name: 'additional_service:ironic'
description: ''
- name: 'storage:volumes_ceph'
description: ''
- name: 'storage:ephemeral_ceph'
description: ''
- name: 'storage:block:ceph'
description: ''
- name: 'storage:ephemeral:ceph'
description: ''


@ -1,229 +0,0 @@
#============================================================================
# This library is free software; you can redistribute it and/or
# modify it under the terms of version 2.1 of the GNU Lesser General Public
# License as published by the Free Software Foundation.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#============================================================================
# Copyright (C) 2006-2007 XenSource Inc.
#============================================================================
#
# Parts of this file are based upon xmlrpclib.py, the XML-RPC client
# interface included in the Python distribution.
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by Fredrik Lundh
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
# --------------------------------------------------------------------
import gettext
import xmlrpclib
import httplib
import socket
translation = gettext.translation('xen-xm', fallback = True)
API_VERSION_1_1 = '1.1'
API_VERSION_1_2 = '1.2'
class Failure(Exception):
def __init__(self, details):
self.details = details
def __str__(self):
try:
return str(self.details)
except Exception, exn:
import sys
print >>sys.stderr, exn
return "Xen-API failure: %s" % str(self.details)
def _details_map(self):
return dict([(str(i), self.details[i])
for i in range(len(self.details))])
_RECONNECT_AND_RETRY = (lambda _ : ())
class UDSHTTPConnection(httplib.HTTPConnection):
"""HTTPConnection subclass to allow HTTP over Unix domain sockets. """
def connect(self):
path = self.host.replace("_", "/")
self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
self.sock.connect(path)
class UDSHTTP(httplib.HTTP):
_connection_class = UDSHTTPConnection
class UDSTransport(xmlrpclib.Transport):
def __init__(self, use_datetime=0):
self._use_datetime = use_datetime
self._extra_headers=[]
def add_extra_header(self, key, value):
self._extra_headers += [ (key,value) ]
def make_connection(self, host):
return UDSHTTP(host)
def send_request(self, connection, handler, request_body):
connection.putrequest("POST", handler)
for key, value in self._extra_headers:
connection.putheader(key, value)
class Session(xmlrpclib.ServerProxy):
"""A server proxy and session manager for communicating with xapi using
the Xen-API.
Example:
session = Session('http://localhost/')
session.login_with_password('me', 'mypassword')
session.xenapi.VM.start(vm_uuid)
session.xenapi.session.logout()
"""
def __init__(self, uri, transport=None, encoding=None, verbose=0,
allow_none=1):
xmlrpclib.ServerProxy.__init__(self, uri, transport, encoding,
verbose, allow_none)
self.transport = transport
self._session = None
self.last_login_method = None
self.last_login_params = None
self.API_version = API_VERSION_1_1
def xenapi_request(self, methodname, params):
if methodname.startswith('login'):
self._login(methodname, params)
return None
elif methodname == 'logout' or methodname == 'session.logout':
self._logout()
return None
else:
retry_count = 0
while retry_count < 3:
full_params = (self._session,) + params
result = _parse_result(getattr(self, methodname)(*full_params))
if result == _RECONNECT_AND_RETRY:
retry_count += 1
if self.last_login_method:
self._login(self.last_login_method,
self.last_login_params)
else:
raise xmlrpclib.Fault(401, 'You must log in')
else:
return result
raise xmlrpclib.Fault(
500, 'Tried 3 times to get a valid session, but failed')
def _login(self, method, params):
result = _parse_result(getattr(self, 'session.%s' % method)(*params))
if result == _RECONNECT_AND_RETRY:
raise xmlrpclib.Fault(
500, 'Received SESSION_INVALID when logging in')
self._session = result
self.last_login_method = method
self.last_login_params = params
self.API_version = self._get_api_version()
def _logout(self):
try:
if self.last_login_method.startswith("slave_local"):
return _parse_result(self.session.local_logout(self._session))
else:
return _parse_result(self.session.logout(self._session))
finally:
self._session = None
self.last_login_method = None
self.last_login_params = None
self.API_version = API_VERSION_1_1
def _get_api_version(self):
pool = self.xenapi.pool.get_all()[0]
host = self.xenapi.pool.get_master(pool)
major = self.xenapi.host.get_API_version_major(host)
minor = self.xenapi.host.get_API_version_minor(host)
return "%s.%s" % (major, minor)
def __getattr__(self, name):
if name == 'handle':
return self._session
elif name == 'xenapi':
return _Dispatcher(self.API_version, self.xenapi_request, None)
elif name.startswith('login') or name.startswith('slave_local'):
return lambda *params: self._login(name, params)
else:
return xmlrpclib.ServerProxy.__getattr__(self, name)
def xapi_local():
return Session("http://_var_xapi_xapi/", transport=UDSTransport())
def _parse_result(result):
if type(result) != dict or 'Status' not in result:
raise xmlrpclib.Fault(500, 'Missing Status in response from server: %s' % result)
if result['Status'] == 'Success':
if 'Value' in result:
return result['Value']
else:
raise xmlrpclib.Fault(500,
'Missing Value in response from server')
else:
if 'ErrorDescription' in result:
if result['ErrorDescription'][0] == 'SESSION_INVALID':
return _RECONNECT_AND_RETRY
else:
raise Failure(result['ErrorDescription'])
else:
raise xmlrpclib.Fault(
500, 'Missing ErrorDescription in response from server')
# Based upon _Method from xmlrpclib.
class _Dispatcher:
def __init__(self, API_version, send, name):
self.__API_version = API_version
self.__send = send
self.__name = name
def __repr__(self):
if self.__name:
return '<XenAPI._Dispatcher for %s>' % self.__name
else:
return '<XenAPI._Dispatcher>'
def __getattr__(self, name):
if self.__name is None:
return _Dispatcher(self.__API_version, self.__send, name)
else:
return _Dispatcher(self.__API_version, self.__send, "%s.%s" % (self.__name, name))
def __call__(self, *args):
return self.__send(self.__name, args)


@ -1,689 +0,0 @@
#!/usr/bin/env python
import ConfigParser
from distutils.version import LooseVersion
import ipaddress
import netifaces
import os
import re
import shutil
from socket import inet_ntoa
import stat
from struct import pack
import utils
from utils import HIMN_IP
INT_BRIDGE = 'br-int'
MESH_BRIDGE = 'br-mesh'
AUTO_START_SERVICE = 'mos-vxlan.service'
AUTO_START_SERVICE_TEMPLATE = 'mos-vxlan-template.service'
AUTO_SCRIPT = 'fuel-xs-vxlan.sh'
XS_PLUGIN_ISO = 'xenapi-plugins-mitaka.iso'
CONNTRACK_CONF_SAMPLE =\
'/usr/share/doc/conntrack-tools-1.4.2/doc/stats/conntrackd.conf'
utils.setup_logging('compute_post_deployment.log')
LOG = utils.LOG
def get_endpoints(astute):
"""Return the IP addresses of the storage/mgmt endpoints."""
endpoints = astute['network_scheme']['endpoints']
endpoints = dict([(
k.replace('br-', ''),
endpoints[k]['IP'][0]
) for k in endpoints])
LOG.info('storage network: {storage}'.format(**endpoints))
LOG.info('mgmt network: {mgmt}'.format(**endpoints))
return endpoints
def install_xenapi_sdk():
"""Install XenAPI Python SDK"""
utils.execute('cp', 'XenAPI.py', utils.DIST_PACKAGES_DIR)
def create_novacompute_conf(himn, username, password, public_ip, services_ssl):
"""Fill nova-compute.conf with HIMN IP and root password. """
mgmt_if = netifaces.ifaddresses('br-mgmt')
if mgmt_if and mgmt_if.get(netifaces.AF_INET) \
and mgmt_if.get(netifaces.AF_INET)[0]['addr']:
mgmt_ip = mgmt_if.get(netifaces.AF_INET)[0]['addr']
else:
utils.reportError('Cannot get IP Address on Management Network')
filename = '/etc/nova/nova-compute.conf'
cf = ConfigParser.ConfigParser()
try:
cf.read(filename)
cf.set('DEFAULT', 'compute_driver', 'xenapi.XenAPIDriver')
cf.set('DEFAULT', 'force_config_drive', 'True')
if not cf.has_section('vnc'):
cf.add_section('vnc')
scheme = "https" if services_ssl else "http"
cf.set('vnc', 'novncproxy_base_url',
'%s://%s:6080/vnc_auto.html' % (scheme, public_ip))
cf.set('vnc', 'vncserver_proxyclient_address', mgmt_ip)
if not cf.has_section('xenserver'):
cf.add_section('xenserver')
cf.set('xenserver', 'connection_url', 'http://%s' % himn)
cf.set('xenserver', 'connection_username', username)
cf.set('xenserver', 'connection_password', password)
cf.set('xenserver', 'vif_driver',
'nova.virt.xenapi.vif.XenAPIOpenVswitchDriver')
cf.set('xenserver', 'ovs_integration_bridge', INT_BRIDGE)
cf.set('xenserver', 'cache_images', 'all')
with open(filename, 'w') as configfile:
cf.write(configfile)
except Exception:
utils.reportError('Cannot set configurations to %s' % filename)
LOG.info('%s created' % filename)
def route_to_compute(endpoints, himn_xs, himn_local, username):
"""Route storage/mgmt requests to compute nodes. """
def _net(ip):
return '.'.join(ip.split('.')[:-1] + ['0'])
def _mask(cidr):
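# prefix length -> dotted netmask, e.g. _mask('24') -> '255.255.255.0'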
return inet_ntoa(pack('>I', 0xffffffff ^ (1 << 32 - int(cidr)) - 1))
def _routed(net, mask, gw):
return re.search(r'%s\s+%s\s+%s\s+' % (
net.replace('.', r'\.'),
gw.replace('.', r'\.'),
mask
), out)
out = utils.ssh(himn_xs, username, 'route', '-n')
utils.ssh(himn_xs, username,
('printf "#!/bin/bash\nsleep 5\n" >'
'/etc/udev/scripts/reroute.sh'))
endpoint_names = ['storage', 'mgmt']
for endpoint_name in endpoint_names:
endpoint = endpoints.get(endpoint_name)
if endpoint:
ip, cidr = endpoint.split('/')
net, mask = _net(ip), _mask(cidr)
if not _routed(net, mask, himn_local):
params = ['route', 'add', '-net', '"%s"' % net, 'netmask',
'"%s"' % mask, 'gw', himn_local]
utils.ssh(himn_xs, username, *params)
# Always add the route to the udev script, even if it is currently active
cmd = (
"printf 'if !(/sbin/route -n | /bin/grep -q -F \"{net}\"); "
"then\n"
"/sbin/route add -net \"{net}\" netmask "
"\"{mask}\" gw {himn_local};\n"
"fi\n' >> /etc/udev/scripts/reroute.sh"
)
cmd = cmd.format(net=net, mask=mask, himn_local=himn_local)
utils.ssh(himn_xs, username, cmd)
else:
LOG.info('%s network ip is missing' % endpoint_name)
utils.ssh(himn_xs, username, 'chmod +x /etc/udev/scripts/reroute.sh')
utils.ssh(himn_xs, username,
('echo \'SUBSYSTEM=="net" ACTION=="add" '
'KERNEL=="xenapi" RUN+="/etc/udev/scripts/reroute.sh"\' '
'> /etc/udev/rules.d/90-reroute.rules'))
def parse_uuid(output):
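# extract the uuid from output like '... already exists, uuid: abcd-1234'
# -> 'abcd-1234' (the example input here is illustrative)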
uuid = None
index = output.strip().find('uuid:')
if index >= 0:
start = index + len('uuid:')
uuid = output[start:].strip()
return uuid
def install_suppack(himn, username, package, xcp_version):
"""Install xapi driver supplemental pack. """
tmp = utils.ssh(himn, username, 'mktemp', '-d')
real_pack = "xcp_%s/%s" % (xcp_version, package)
if not os.path.exists(real_pack):
utils.reportError('Package folder %s does not exist' % real_pack)
utils.scp(himn, username, tmp, real_pack)
if LooseVersion(xcp_version) < LooseVersion('2.2.0'):
utils.ssh(himn, username, 'xe-install-supplemental-pack',
tmp + '/' + package, prompt='Y\n')
else:
errcode, uuid, errmsg = \
utils.ssh_detailed(himn, username, 'xe', 'update-upload',
'file-name=' + tmp + '/' + package,
allowed_return_codes=[0, 1])
if errcode == 0:
utils.ssh(himn, username, 'xe', 'update-apply',
'uuid=' + uuid.strip())
else:
LOG.debug("Install supplemental pack failed, err: %s", errmsg)
if "The uploaded update already exists" in errmsg:
uuid = parse_uuid(errmsg)
if uuid is None:
raise utils.ExecutionError(errmsg)
# Check whether this update is already applied
out = utils.ssh(himn, username, 'xe', 'update-list',
'uuid=' + uuid, '--minimal')
# Apply this update if it cannot be found by uuid
if not out:
utils.ssh(himn, username, 'xe', 'update-apply',
'uuid=' + uuid)
utils.ssh(himn, username, 'rm', tmp, '-rf')
def forward_from_himn(eth):
"""Forward packets from HIMN to storage/mgmt network. """
# make the change persistent
utils.execute('sed', '-i',
's/.*net\.ipv4\.ip_forward.*=.*/net.ipv4.ip_forward=1/g',
'/etc/sysctl.conf')
# make it take effect immediately.
utils.execute('sysctl', 'net.ipv4.ip_forward=1')
endpoint_names = ['br-storage', 'br-mgmt']
for endpoint_name in endpoint_names:
utils.execute('iptables', '-t', 'nat', '-A', 'POSTROUTING',
'-o', endpoint_name, '-j', 'MASQUERADE')
utils.execute('iptables', '-A', 'FORWARD',
'-i', endpoint_name, '-o', eth,
'-m', 'state', '--state', 'RELATED,ESTABLISHED',
'-j', 'ACCEPT')
utils.execute('iptables', '-A', 'FORWARD',
'-i', eth, '-o', endpoint_name,
'-j', 'ACCEPT')
utils.execute('iptables', '-A', 'INPUT', '-i', eth, '-j', 'ACCEPT')
utils.execute('iptables', '-t', 'filter', '-S', 'FORWARD')
utils.execute('iptables', '-t', 'nat', '-S', 'POSTROUTING')
utils.execute('service', 'iptables-persistent', 'save')
def forward_port(eth_in, eth_out, target_host, target_port):
"""Forward packets from eth_in to eth_out on target_host:target_port. """
utils.execute('iptables', '-t', 'nat', '-A', 'PREROUTING',
'-i', eth_in, '-p', 'tcp', '--dport', target_port,
'-j', 'DNAT', '--to', target_host)
utils.execute('iptables', '-A', 'FORWARD',
'-i', eth_out, '-o', eth_in,
'-m', 'state', '--state', 'RELATED,ESTABLISHED',
'-j', 'ACCEPT')
utils.execute('iptables', '-A', 'FORWARD',
'-i', eth_in, '-o', eth_out,
'-j', 'ACCEPT')
utils.execute('iptables', '-t', 'filter', '-S', 'FORWARD')
utils.execute('iptables', '-t', 'nat', '-S', 'POSTROUTING')
utils.execute('service', 'iptables-persistent', 'save')
def install_logrotate_script(himn, username):
"Install console logrotate script"
utils.scp(himn, username, '/root/', 'rotate_xen_guest_logs.sh')
utils.ssh(himn, username, 'mkdir -p /var/log/xen/guest')
utils.ssh(himn, username, '''crontab - << CRONTAB
* * * * * /root/rotate_xen_guest_logs.sh >/dev/null 2>&1
CRONTAB''')
def install_image_cache_cleanup():
tool_path = '/usr/bin/destroy_cached_images'
tool_conf = '/etc/nova/nova-compute.conf'
# install this tool.
try:
src_file = 'tools/destroy_cached_images.py'
target_file = tool_path
shutil.copy(src_file, target_file)
os.chown(target_file, 0, 0)
os.chmod(target_file, stat.S_IRWXU)
except Exception:
utils.reportError("Failed to install file %s" % target_file)
# create a daily clean-up cron job
cron_entry = '5 3 * * * {} --config-file={} >/dev/null 2>&1'.format(
tool_path,
tool_conf)
user = 'root'
utils.add_cron_job(user, cron_entry)
LOG.info('Added crontab successfully: %s' % cron_entry)
def modify_neutron_rootwrap_conf(himn, username, password):
"""Set xenapi configurations"""
filename = '/etc/neutron/rootwrap.conf'
cf = ConfigParser.ConfigParser()
try:
cf.read(filename)
cf.set('xenapi', 'xenapi_connection_url', 'http://%s' % himn)
cf.set('xenapi', 'xenapi_connection_username', username)
cf.set('xenapi', 'xenapi_connection_password', password)
with open(filename, 'w') as configfile:
cf.write(configfile)
except Exception:
utils.reportError("Failed to modify file %s" % filename)
LOG.info('Modified file %s successfully', filename)
def modify_neutron_ovs_agent_conf(int_br, br_mappings=None, local_ip=None):
filename = '/etc/neutron/plugins/ml2/openvswitch_agent.ini'
cf = ConfigParser.ConfigParser()
try:
cf.read(filename)
cf.set('agent', 'root_helper',
'neutron-rootwrap-xen-dom0 /etc/neutron/rootwrap.conf')
cf.set('agent', 'root_helper_daemon', '')
cf.set('agent', 'minimize_polling', False)
cf.set('ovs', 'integration_bridge', int_br)
if br_mappings:
cf.set('ovs', 'bridge_mappings', br_mappings)
if local_ip:
cf.set('ovs', 'local_ip', local_ip)
with open(filename, 'w') as configfile:
cf.write(configfile)
except Exception:
utils.reportError("Failed to modify %s" % filename)
LOG.info('Modified %s successfully', filename)
def get_network_ethX(bridge_name):
# find out the ethX in DomU which connects to the private network;
# br-aux is the auxiliary bridge, and in the normal case there is a
# patch port between br-prv and br-aux
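# NOTE: this helper relies on the module-level 'astute' dict that is
# assigned in __main__ before these functions are called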
values = astute['network_scheme']['transformations']
for item in values:
if item['action'] == 'add-port' and item['bridge'] == bridge_name:
return item['name']
# If the given bridge cannot be found, the public and private networks
# share the same interface and the checkbox
# "Assign public network to all nodes" is checked, so we need to use
# br-ex to find the ethX in DomU
for item in values:
if item['action'] == 'add-port' and item['bridge'] == 'br-ex':
return item['name']
def find_dom0_bridge(himn, username, bridge_name):
ethX = get_network_ethX(bridge_name)
if not ethX:
utils.reportError("Cannot find eth used for private network")
ethX = ethX.split('.')[0]
# find the ethX mac in /sys/class/net/ethX/address
with open('/sys/class/net/%s/address' % ethX, 'r') as fo:
mac = fo.readline()
network_uuid = utils.ssh(himn, username,
('xe vif-list params=network-uuid '
'minimal=true MAC=%s') % mac)
bridge = utils.ssh(himn, username,
('xe network-param-get param-name=bridge '
'uuid=%s') % network_uuid)
return bridge
def find_physical_network_mappings(astute, himn, username):
# find corresponding bridge in Dom0
bridge = find_dom0_bridge(himn, username, 'br-aux')
# find physical network name
phynet_setting = astute['quantum_settings']['L2']['phys_nets']
physnet = phynet_setting.keys()[0]
return physnet + ':' + bridge
def restart_services(service_name):
utils.execute('stop', service_name)
utils.execute('start', service_name)
def enable_linux_bridge(himn, username):
# OVS under XS6.5 blacklists the Linux bridge module in Dom0, but
# neutron-openvswitch-agent on the compute node relies on Linux
# bridges, so we remove this restriction here
utils.ssh(himn, username, 'rm -f /etc/modprobe.d/blacklist-bridge*')
def patch_ceilometer():
"""Add patches which are not merged to upstream
Order of patches applied:
ceilometer-poll-cpu-util.patch
ceilometer-rates-always-zero.patch
ceilometer-support-network-bytes.patch
ceilometer-add-purge_inspection_cache.patch
"""
patchfile_list = [
'ceilometer-poll-cpu-util.patch',
'ceilometer-rates-always-zero.patch',
'ceilometer-support-network-bytes.patch',
'ceilometer-add-purge_inspection_cache.patch',
]
for patch_file in patchfile_list:
utils.patch(utils.DIST_PACKAGES_DIR, patch_file, 1)
def patch_compute_xenapi():
"""Add patches which are not merged to upstream
Order of patches applied:
support-disable-image-cache.patch
speed-up-config-drive.patch
ovs-interim-bridge.patch
neutron-security-group.patch
live-migration-iscsi.patch
support-vif-hotplug.patch
fix-rescue-vm.patch
live-migration-vifmapping.patch
"""
patchfile_list = [
# Change-Id: I5ebff2c1f7534b06233a4d41d7f5f2e5e3b60b5a
'support-disable-image-cache.patch',
# Change-Id: I359e17d6d5838f4028df0bd47e4825de420eb383
'speed-up-config-drive.patch',
# Change-Id: I0cfc0284e1fcd1a6169d31a7ad410716037e5cc2
'ovs-interim-bridge.patch',
# Change-Id: Id9b39aa86558a9f7099caedabd2d517bf8ad3d68
'neutron-security-group.patch',
# Change-Id: I88d1d384ab7587c428e517d184258bb517dfb4ab
'live-migration-iscsi.patch',
# Change-Id: I22f3fe52d07100592015007653c7f8c47c25d22c
'support-vif-hotplug.patch',
# Change-Id: I32c66733330bc9877caea7e2a2290c02b3906708
'fix-rescue-vm.patch',
# Change-Id: If0fb5d764011521916fbbe15224f524a220052f3
'live-migration-vifmapping.patch',
# TODO(huanxie): below patch isn't merged into upstream yet,
# it only affects XS7.1 and later
# Change-Id: I31850b25e2f32eb65a00fbb824b08646c9ed340a
'assert_can_migrated.patch',
]
for patch_file in patchfile_list:
utils.patch(utils.DIST_PACKAGES_DIR, patch_file, 1)
def patch_neutron_ovs_agent():
"""Apply neutron patch
Add conntrack-tools patch to support conntrack in Dom0
"""
utils.patch('/usr/bin', 'fix-xenapi-returncode.patch', 2)
def reconfig_multipath():
"""Ignore local disks for multipathd
Change devnode rule from "^hd[a-z]" to "^(hd|xvd)[a-z]"
"""
multipath_conf = '/etc/multipath.conf'
if os.path.exists(multipath_conf):
utils.execute('sed', '-i', r's/"\^hd\[a-z\]"/"^(hd|xvd)[a-z]"/',
multipath_conf)
else:
with open(multipath_conf, "w") as f:
f.write('# Generated by %s:\n' % utils.PLUGIN_NAME)
f.write('blacklist {\ndevnode "^(hd|xvd)[a-z]"\n}')
utils.execute('service', 'multipath-tools', 'restart')
def check_and_setup_ceilometer(himn, username, password):
"""Set xenapi configuration for ceilometer service"""
filename = '/etc/ceilometer/ceilometer.conf'
if not os.path.exists(filename):
utils.reportError("The file: %s doesn't exist" % filename)
return
patch_ceilometer()
cf = ConfigParser.ConfigParser()
try:
cf.read(filename)
cf.set('DEFAULT', 'hypervisor_inspector', 'xenapi')
cf.set('xenapi', 'connection_url', 'http://%s' % himn)
cf.set('xenapi', 'connection_username', username)
cf.set('xenapi', 'connection_password', password)
with open(filename, 'w') as configfile:
cf.write(configfile)
LOG.info('Modify file %s successfully', filename)
except Exception:
utils.reportError("Fail to modify file %s", filename)
return
restart_services('ceilometer-polling')
def enable_conntrack_service(himn, username):
# use conntrack statistic mode, so change conntrackd.conf
errcode, out, err = utils.ssh_detailed(
himn, username, 'ls', '/etc/conntrackd/conntrackd.conf.back',
allowed_return_codes=[0, 2])
if errcode == 2:
# Only make conntrackd.conf.back if it doesn't exist
utils.ssh(himn, username,
'mv',
'/etc/conntrackd/conntrackd.conf',
'/etc/conntrackd/conntrackd.conf.back')
utils.ssh(himn, username,
'cp',
CONNTRACK_CONF_SAMPLE,
'/etc/conntrackd/conntrackd.conf')
# Rotate log file for conntrack
utils.scp(himn, username,
'/etc/logrotate.d', 'etc/logrotate.d/conntrackd')
# Restart conntrackd service
utils.ssh(himn, username, 'service', 'conntrackd', 'restart')
def configure_dom0_iptables(himn, username):
xs_chain = 'XenServer-Neutron-INPUT'
# Check XenServer specific chain, create if not exist
commands = ('iptables -t filter -L %s' % xs_chain,
'iptables -t filter --new %s' % xs_chain,
'iptables -t filter -I INPUT -j %s' % xs_chain)
execute_iptables_commands(himn, username, commands)
# Check XenServer rule for ovs native mode, create if not exist
commands = ('iptables -t filter -C %s -p tcp -m tcp --dport 6640 -j ACCEPT'
% xs_chain,
'iptables -t filter -I %s -p tcp --dport 6640 -j ACCEPT'
% xs_chain)
execute_iptables_commands(himn, username, commands)
# Check XenServer rule for vxlan, create if not exist
commands = ('iptables -t filter -C %s -p udp -m multiport --dports 4789 '
'-j ACCEPT' % xs_chain,
'iptables -t filter -I %s -p udp -m multiport --dport 4789 -j '
'ACCEPT' % xs_chain)
execute_iptables_commands(himn, username, commands)
# Persist iptables rules
utils.ssh(himn, username, 'service', 'iptables', 'save')
def execute_iptables_commands(himn, username, command_list):
# Execute first command and continue based on first command result
exitcode, _, _ = utils.ssh_detailed(
himn, username, command_list[0], allowed_return_codes=[0, 1])
if exitcode == 1:
for command in command_list[1:]:
LOG.info('Execute iptables command %s', command)
utils.ssh(himn, username, command)
def create_dom0_mesh_bridge(himn, username, dom0_bridge, mesh_info):
# Create br-mesh and veth pair if not exist in Dom0
exitcode, out, _ = utils.ssh_detailed(himn, username,
'ip', 'addr', 'show', MESH_BRIDGE,
allowed_return_codes=[0, 1])
if exitcode == 1:
# create the mesh bridge since it does not exist in Dom0
create_mesh_bridge = True
else:
# if the mesh bridge exists in Dom0, check its IP and re-configure it
# if it is not the one we want to set
bridge_info = out.split()
try:
index_inet = bridge_info.index('inet')
# get inet info like '192.168.2.2/24'
ipaddr = bridge_info[index_inet + 1].split('/')[0]
current_ip = ipaddress.ip_address(unicode(ipaddr))
configured_ip = ipaddress.ip_address(unicode(mesh_info['ipaddr']))
if current_ip == configured_ip:
LOG.info('Bridge %s already exists in Dom0' % MESH_BRIDGE)
return
else:
create_mesh_bridge = True
remove_old_mesh_bridge(himn, username, MESH_BRIDGE)
except ValueError:
create_mesh_bridge = True
remove_old_mesh_bridge(himn, username, MESH_BRIDGE)
if create_mesh_bridge:
LOG.debug("Create mesh bridge in Dom0")
utils.scp(himn, username, '/etc/sysconfig/network-scripts/',
AUTO_SCRIPT)
utils.ssh(himn, username, 'chmod', '+x',
'/etc/sysconfig/network-scripts/%s' % AUTO_SCRIPT)
start_param = '%(bridge)s %(ip)s %(netmask)s %(broadcast)s %(tag)s' \
% {'bridge': dom0_bridge,
'ip': mesh_info['ipaddr'],
'netmask': mesh_info['netmask'],
'broadcast': mesh_info['broadcast'],
'tag': mesh_info['tag']}
with open(AUTO_START_SERVICE_TEMPLATE) as f:
contents = f.read()
contents = contents.replace('@MESH_INFO@', start_param)
with open(AUTO_START_SERVICE, 'w') as f:
f.write(contents)
utils.scp(himn, username, '/etc/systemd/system', AUTO_START_SERVICE)
utils.ssh(himn, username, 'systemctl', 'daemon-reload')
utils.ssh(himn, username, 'systemctl', 'enable', AUTO_START_SERVICE)
utils.ssh(himn, username, 'systemctl', 'start', AUTO_START_SERVICE)
def disable_local_mesh_bridge(bridge):
iface_list = netifaces.interfaces()
if bridge in iface_list:
utils.execute('ifconfig', bridge, '0.0.0.0')
utils.execute('ip', 'link', 'set', bridge, 'down')
filename = '/etc/network/interfaces.d/ifcfg-%s' % bridge
if os.path.isfile(filename):
utils.execute('rm', '-f', filename)
def get_mesh_info(astute, bridge):
mesh_nets = astute['network_scheme']['endpoints'][bridge]['IP'][0]
mesh_ip = mesh_nets.split('/')[0]
ipv4_net = ipaddress.ip_network(unicode(mesh_nets), strict=False)
mesh_broadcast = str(ipv4_net.broadcast_address)
network_netmask = str(ipv4_net.with_netmask).split('/')
mesh_netmask = network_netmask[1]
mesh_network = network_netmask[0]
mesh_eth = get_network_ethX(bridge)
mesh_tag = "''"
# use find() so an untagged interface (no '.') yields -1 instead of raising ValueError
index = mesh_eth.find('.')
if index > 0:
mesh_tag = mesh_eth[index+1:]
mesh_info = {'ipaddr': mesh_ip, 'network': mesh_network,
'netmask': mesh_netmask, 'broadcast': mesh_broadcast,
'tag': mesh_tag}
return mesh_info
def remove_old_mesh_bridge(himn, username, bridge):
exitcode, _, _ = utils.ssh_detailed(himn, username, 'ip', 'link', 'show',
bridge, allowed_return_codes=[0, 1])
if exitcode == 0:
# Allow return code 5 so this does not fail when
# mos-vxlan.service does not exist
utils.ssh_detailed(himn, username, 'systemctl', 'stop',
AUTO_START_SERVICE, allowed_return_codes=[0, 5])
utils.ssh_detailed(himn, username, 'systemctl', 'disable',
AUTO_START_SERVICE, allowed_return_codes=[0, 1])
utils.ssh(himn, username, 'rm', '-f',
'/etc/systemd/system/%s' % AUTO_START_SERVICE)
utils.ssh(himn, username, 'rm', '-f',
'/etc/sysconfig/network-scripts/%s' % AUTO_SCRIPT)
utils.ssh(himn, username, 'systemctl', 'daemon-reload')
if __name__ == '__main__':
install_xenapi_sdk()
astute = utils.get_astute()
if astute:
username, password, install_xapi = utils.get_options(astute)
endpoints = get_endpoints(astute)
himn_eth, himn_local = utils.init_eth()
public_ip = utils.astute_get(
astute, ('network_metadata', 'vips', 'public', 'ipaddr'))
services_ssl = utils.astute_get(
astute, ('public_ssl', 'services'))
if username and password and endpoints and himn_local:
route_to_compute(endpoints, HIMN_IP, himn_local, username)
xcp_version = utils.get_xcp_version(HIMN_IP, username)
if install_xapi:
install_suppack(HIMN_IP, username, XS_PLUGIN_ISO, xcp_version)
enable_linux_bridge(HIMN_IP, username)
forward_from_himn(himn_eth)
# port forwarding for novnc
forward_port('br-mgmt', himn_eth, HIMN_IP, '80')
create_novacompute_conf(HIMN_IP, username, password, public_ip,
services_ssl)
patch_compute_xenapi()
restart_services('nova-compute')
install_logrotate_script(HIMN_IP, username)
# enable conntrackd service in Dom0
enable_conntrack_service(HIMN_IP, username)
# configure iptables in Dom0 to support ovs native mode and VxLAN
configure_dom0_iptables(HIMN_IP, username)
# neutron-l2-agent in compute node
modify_neutron_rootwrap_conf(HIMN_IP, username, password)
l2_net_type = astute['quantum_settings']['predefined_networks'][
'admin_internal_net']['L2']['network_type']
br_mappings = None
if l2_net_type == 'vlan':
br_mappings = find_physical_network_mappings(astute, HIMN_IP,
username)
remove_old_mesh_bridge(HIMN_IP, username, MESH_BRIDGE)
ip = None
if l2_net_type == 'tun':
dom0_priv_bridge = find_dom0_bridge(HIMN_IP, username,
MESH_BRIDGE)
mesh_info = get_mesh_info(astute, MESH_BRIDGE)
ip = mesh_info['ipaddr']
disable_local_mesh_bridge(MESH_BRIDGE)
create_dom0_mesh_bridge(HIMN_IP, username, dom0_priv_bridge,
mesh_info)
modify_neutron_ovs_agent_conf(INT_BRIDGE, br_mappings=br_mappings,
local_ip=ip)
patch_neutron_ovs_agent()
restart_services('neutron-openvswitch-agent')
reconfig_multipath()
# Add xenapi specific setup for ceilometer if service is enabled.
is_ceilometer_enabled = utils.astute_get(astute,
('ceilometer', 'enabled'))
if is_ceilometer_enabled:
check_and_setup_ceilometer(HIMN_IP, username, password)
else:
LOG.info('Skip ceilometer setup as this service is '
'disabled.')
install_image_cache_cleanup()


@ -1,89 +0,0 @@
#!/usr/bin/env python
from distutils.version import LooseVersion
import json
import os
import stat
import utils
from utils import HIMN_IP
XS_RSA = '/root/.ssh/xs_rsa'
VERSION_HOTFIXES = '@VERSION_HOTFIXES@'
MIN_XCP_VERSION = '2.1.0'
utils.setup_logging('compute_pre_test.log')
LOG = utils.LOG
def ssh_copy_id(host, username, password):
ssh_askpass = "askpass.sh"
s = ('#!/bin/sh\n'
'echo "{password}"').format(password=password)
with open(ssh_askpass, 'w') as f:
f.write(s)
os.chmod(ssh_askpass, stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
if os.path.exists(XS_RSA):
os.remove(XS_RSA)
if os.path.exists(XS_RSA + ".pub"):
os.remove(XS_RSA + ".pub")
utils.execute('ssh-keygen', '-f', XS_RSA, '-t', 'rsa', '-N', '')
env = {
"HOME": "/root",
"SSH_ASKPASS": os.path.abspath(ssh_askpass),
"DISPLAY": ":.",
}
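# run ssh-copy-id detached from the tty (setsid) so that ssh falls back
# to the SSH_ASKPASS helper above to supply the password non-interactively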
utils.execute("setsid", "ssh-copy-id", "-o", "StrictHostKeyChecking=no",
"-i", XS_RSA, "%s@%s" % (username, host), env=env)
def check_host_compatibility(himn, username):
xcp_version = utils.get_xcp_version(himn, username)
if LooseVersion(xcp_version) < LooseVersion(MIN_XCP_VERSION):
utils.reportError('Platform version %s should be equal to or greater than %s'
% (xcp_version, MIN_XCP_VERSION))
return
version_hotfixes = json.loads(VERSION_HOTFIXES)
ver = utils.ssh(himn, username,
('xe host-param-get uuid=$(xe host-list --minimal) '
'param-name=software-version param-key=product_version'))
hotfixes = version_hotfixes.get(ver)
if not hotfixes:
return
for hotfix in hotfixes:
if not hotfix:
continue
installed = utils.ssh(himn, username,
'xe patch-list name-label=%s --minimal' % hotfix)
if not installed:
utils.reportError('Hotfix %s has not been installed' % hotfix)
def check_local_sr(himn, username):
sr_type = utils.ssh(himn, username,
('xe sr-param-get param-name=type '
'uuid=`xe pool-list params=default-SR --minimal`'))
if sr_type != "ext" and sr_type != "nfs":
utils.reportError(('Default SR type should be EXT or NFS. If using '
'local storage, please make sure thin provisioning '
'is enabled on your host during installation.'))
if __name__ == '__main__':
astute = utils.get_astute()
if astute:
username, password, install_xapi = utils.get_options(astute)
himn_eth, himn_local = utils.init_eth()
if username and password and himn_local:
ssh_copy_id(HIMN_IP, username, password)
check_host_compatibility(HIMN_IP, username)
check_local_sr(HIMN_IP, username)


@ -1,58 +0,0 @@
#!/usr/bin/env python
import ConfigParser
import os
import shutil
import utils
utils.setup_logging('controller_post_deployment.log')
LOG = utils.LOG
def mod_novnc():
astute = utils.get_astute()
if astute:
filename = '/etc/nova/nova.conf'
orig_filename = filename + ".orig"
if not os.path.exists(orig_filename):
shutil.copyfile(filename, orig_filename)
cf = ConfigParser.ConfigParser()
try:
cf.read(orig_filename)
if not cf.has_section('cache'):
cf.add_section('cache')
cf.set('cache', 'enable', 'True')
memcached_servers = cf.get('keystone_authtoken',
'memcached_servers')
cf.set('cache', 'memcached_servers', memcached_servers)
cf.set('DEFAULT', 'memcached_servers', memcached_servers)
with open(filename, 'w') as configfile:
cf.write(configfile)
LOG.info('%s created' % filename)
utils.execute('service', 'nova-novncproxy', 'restart')
utils.execute('service', 'nova-consoleauth', 'restart')
except Exception:
utils.reportError('Cannot set configurations to %s' % filename)
def patch_nova_conductor():
"""Add patches which are not merged to upstream
Order of patches applied:
live-migration-vifmapping-controller.patch
"""
patchfile_list = [
# Change-Id: If0fb5d764011521916fbbe15224f524a220052f3
'live-migration-vifmapping-controller.patch',
]
for patch_file in patchfile_list:
utils.patch(utils.DIST_PACKAGES_DIR, patch_file, 1)
# Restart related service
utils.execute('service', 'nova-conductor', 'restart')
if __name__ == '__main__':
patch_nova_conductor()
mod_novnc()


@ -1,7 +0,0 @@
/var/log/conntrackd*.log {
daily
maxsize 50M
rotate 7
copytruncate
missingok
}


@ -1,63 +0,0 @@
#!/bin/bash
OP=$1
COUNT=$#
function create_mesh_bridge {
local dom0_bridge=$1
local mesh_ip=$2
local mesh_netmask=$3
local mesh_broadcast=$4
local tag=$5
ip link show br-mesh
exitcode=$?
if [ "$exitcode" == "1" ]; then
brctl addbr br-mesh
brctl setfd br-mesh 0
brctl stp br-mesh off
ip link set br-mesh up
ip link delete mesh_ovs
ip link add mesh_ovs type veth peer name mesh_linux
ip link set mesh_ovs up
ip link set mesh_ovs promisc on
ip link set mesh_linux up
ip link set mesh_linux promisc on
brctl addif br-mesh mesh_linux
ovs-vsctl -- --if-exists del-port mesh_ovs -- add-port $dom0_bridge mesh_ovs
ip addr add $mesh_ip/$mesh_netmask broadcast $mesh_broadcast dev br-mesh
if [ -n "$tag" ]; then
ovs-vsctl -- set Port mesh_ovs tag=$tag
fi
fi
}
function delete_mesh_bridge {
ip link show br-mesh
exitcode=$?
if [ "$exitcode" == "0" ]; then
ip link set br-mesh down
ip link set mesh_ovs down
ip link delete mesh_ovs
ovs-vsctl -- --if-exists del-port mesh_ovs
brctl delbr br-mesh
fi
}
if [ "$OP" == "start" ]; then
if [ $COUNT -lt 6 ]; then
echo "usage: fuel-xs-vxlan.sh start BRIDGE IP NETMASK BROADCAST VLAN_TAG"
echo "Exit due to lack of parameters!"
exit 0
fi
dom0_bridge=$2
mesh_ip=$3
mesh_netmask=$4
mesh_broadcast=$5
tag=$6
create_mesh_bridge $dom0_bridge $mesh_ip $mesh_netmask $mesh_broadcast $tag
elif [ "$OP" == "stop" ]; then
delete_mesh_bridge
fi


@ -1,14 +0,0 @@
[Unit]
Description=Configure Mirantis OpenStack mesh bridge
Requires=xcp-networkd.service openvswitch-xapi-sync.service
After=xcp-networkd.service openvswitch-xapi-sync.service
AssertPathExists=/etc/sysconfig/network-scripts/
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/bash /etc/sysconfig/network-scripts/fuel-xs-vxlan.sh start @MESH_INFO@
ExecStop=/bin/bash /etc/sysconfig/network-scripts/fuel-xs-vxlan.sh stop
[Install]
WantedBy=multi-user.target
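# NOTE: @MESH_INFO@ above is replaced at deployment time by
# compute_post_deployment.py with '<bridge> <ip> <netmask> <broadcast> <tag>'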


@ -1,29 +0,0 @@
diff --git a/nova/virt/xenapi/vmops.py b/nova/virt/xenapi/vmops.py
index 82a9aef..d5048cd 100644
--- a/nova/virt/xenapi/vmops.py
+++ b/nova/virt/xenapi/vmops.py
@@ -2278,10 +2278,11 @@ class VMOps(object):
self._call_live_migrate_command(
"VM.assert_can_migrate", vm_ref, dest_check_data)
except self._session.XenAPI.Failure as exc:
- reason = exc.details[0]
- msg = _('assert_can_migrate failed because: %s') % reason
- LOG.debug(msg, exc_info=True)
- raise exception.MigrationPreCheckError(reason=msg)
+ reason = '%s' % exc.details[0]
+ if reason.strip().upper() != "VIF_NOT_IN_MAP":
+ msg = _('assert_can_migrate failed because: %s') % reason
+ LOG.debug(msg, exc_info=True)
+ raise exception.MigrationPreCheckError(reason=msg)
return dest_check_data
def _ensure_pv_driver_info_for_live_migration(self, instance, vm_ref):
@@ -2500,6 +2501,8 @@ class VMOps(object):
def post_live_migration_at_destination(self, context, instance,
network_info, block_migration,
block_device_info):
+ # Hook interim bridge with ovs bridge
+ self._post_start_actions(instance)
# FIXME(johngarbutt): we should block all traffic until we have
# applied security groups, however this requires changes to XenServer
self._prepare_instance_filter(instance, network_info)


@ -1,64 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE: XenServer still only supports Python 2.4 in its dom0 userspace
# which means the Nova xenapi plugins must use only Python 2.4 features
"""Fetch Bandwidth data from VIF network devices."""
import utils
import pluginlib_nova
import re
pluginlib_nova.configure_logging('bandwidth')
def _read_proc_net():
f = open('/proc/net/dev', 'r')
try:
return f.readlines()
finally:
f.close()
def _get_bandwidth_from_proc():
devs = [l.strip() for l in _read_proc_net()]
# ignore headers
devs = devs[2:]
vif_pattern = re.compile(r"^vif(\d+)\.(\d+)")
dlist = [d.split(':', 1) for d in devs if vif_pattern.match(d)]
devmap = dict()
for name, stats in dlist:
slist = stats.split()
dom, vifnum = name[3:].split('.', 1)
dev = devmap.get(dom, {})
# Note, we deliberately swap in and out, as instance traffic
# shows up inverted due to going through the bridge. (mdragon)
dev[vifnum] = dict(bw_in=int(slist[8]), bw_out=int(slist[0]))
devmap[dom] = dev
return devmap
def fetch_all_bandwidth(session):
return _get_bandwidth_from_proc()
if __name__ == '__main__':
utils.register_plugin_calls(fetch_all_bandwidth)


@ -1,15 +0,0 @@
diff --git a/ceilometer/compute/virt/xenapi/inspector.py b/ceilometer/compute/virt/xenapi/inspector.py
index 6048e3a..d5676b1 100644
--- a/ceilometer/compute/virt/xenapi/inspector.py
+++ b/ceilometer/compute/virt/xenapi/inspector.py
@@ -238,3 +238,10 @@ class XenapiInspector(virt_inspector.Inspector):
write_bytes_rate=write_rate,
write_requests_rate=0)
yield(disk, disk_rate_info)
+
+ def purge_inspection_cache(self):
+ # Empty function to fit MOS 9.2, where get_samples will invoke
+ # self.inspector.purge_inspection_cache(); we will evaluate whether
+ # to support caching for XenAPI, but that first requires the caching
+ # code to land upstream.
+ pass


@ -1,30 +0,0 @@
diff --git a/ceilometer/compute/virt/xenapi/inspector.py b/ceilometer/compute/virt/xenapi/inspector.py
index 19405dd..62960da 100644
--- a/ceilometer/compute/virt/xenapi/inspector.py
+++ b/ceilometer/compute/virt/xenapi/inspector.py
@@ -120,18 +120,15 @@ class XenapiInspector(virt_inspector.Inspector):
def inspect_cpu_util(self, instance, duration=None):
instance_name = util.instance_name(instance)
vm_ref = self._lookup_by_name(instance_name)
- metrics_ref = self._call_xenapi("VM.get_metrics", vm_ref)
- metrics_rec = self._call_xenapi("VM_metrics.get_record",
- metrics_ref)
- vcpus_number = metrics_rec['VCPUs_number']
- vcpus_utils = metrics_rec['VCPUs_utilisation']
- if len(vcpus_utils) == 0:
- msg = _("Could not get VM %s CPU Utilization") % instance_name
+ vcpus_number = int(self._call_xenapi("VM.get_VCPUs_max", vm_ref))
+ if vcpus_number <= 0:
+ msg = _("Could not get VM %s CPU number") % instance_name
raise XenapiException(msg)
-
utils = 0.0
- for num in range(int(vcpus_number)):
- utils += vcpus_utils.get(str(num))
+ for index in range(vcpus_number):
+ utils += float(self._call_xenapi("VM.query_data_source",
+ vm_ref,
+ "cpu%d" % index))
utils = utils / int(vcpus_number) * 100
return virt_inspector.CPUUtilStats(util=utils)


@ -1,150 +0,0 @@
diff --git a/ceilometer/compute/virt/xenapi/inspector.py b/ceilometer/compute/virt/xenapi/inspector.py
index 9632cba..18ed5d7 100644
--- a/ceilometer/compute/virt/xenapi/inspector.py
+++ b/ceilometer/compute/virt/xenapi/inspector.py
@@ -160,18 +160,19 @@ class XenapiInspector(virt_inspector.Inspector):
if vif_refs:
for vif_ref in vif_refs:
vif_rec = self._call_xenapi("VIF.get_record", vif_ref)
- vif_metrics_ref = self._call_xenapi(
- "VIF.get_metrics", vif_ref)
- vif_metrics_rec = self._call_xenapi(
- "VIF_metrics.get_record", vif_metrics_ref)
+
+ rx_rate = float(self._call_xenapi(
+ "VM.query_data_source", vm_ref,
+ "vif_%s_rx" % vif_rec['device']))
+ tx_rate = float(self._call_xenapi(
+ "VM.query_data_source", vm_ref,
+ "vif_%s_tx" % vif_rec['device']))
interface = virt_inspector.Interface(
name=vif_rec['uuid'],
mac=vif_rec['MAC'],
fref=None,
parameters=None)
- rx_rate = float(vif_metrics_rec['io_read_kbs']) * units.Ki
- tx_rate = float(vif_metrics_rec['io_write_kbs']) * units.Ki
stats = virt_inspector.InterfaceRateStats(rx_rate, tx_rate)
yield (interface, stats)
@@ -182,16 +183,14 @@ class XenapiInspector(virt_inspector.Inspector):
if vbd_refs:
for vbd_ref in vbd_refs:
vbd_rec = self._call_xenapi("VBD.get_record", vbd_ref)
- vbd_metrics_ref = self._call_xenapi("VBD.get_metrics",
- vbd_ref)
- vbd_metrics_rec = self._call_xenapi("VBD_metrics.get_record",
- vbd_metrics_ref)
disk = virt_inspector.Disk(device=vbd_rec['device'])
- # Stats provided from XenServer are in KB/s,
- # converting it to B/s.
- read_rate = float(vbd_metrics_rec['io_read_kbs']) * units.Ki
- write_rate = float(vbd_metrics_rec['io_write_kbs']) * units.Ki
+ read_rate = float(self._call_xenapi(
+ "VM.query_data_source", vm_ref,
+ "vbd_%s_read" % vbd_rec['device']))
+ write_rate = float(self._call_xenapi(
+ "VM.query_data_source", vm_ref,
+ "vbd_%s_write" % vbd_rec['device']))
disk_rate_info = virt_inspector.DiskRateStats(
read_bytes_rate=read_rate,
read_requests_rate=0,
diff --git a/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py b/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py
index caa1c93..7e8f827 100644
--- a/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py
+++ b/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py
@@ -142,75 +142,42 @@ class TestXenapiInspection(base.BaseTestCase):
fake_instance = {'OS-EXT-SRV-ATTR:instance_name': 'fake_instance_name',
'id': 'fake_instance_id'}
- def fake_xenapi_request(method, args):
- vif_rec = {
- 'metrics': 'vif_metrics_ref',
- 'uuid': 'vif_uuid',
- 'MAC': 'vif_mac',
- }
-
- vif_metrics_rec = {
- 'io_read_kbs': '1',
- 'io_write_kbs': '2',
- }
- if method == 'VM.get_by_name_label':
- return ['vm_ref']
- elif method == 'VM.get_VIFs':
- return ['vif_ref']
- elif method == 'VIF.get_record':
- return vif_rec
- elif method == 'VIF.get_metrics':
- return 'vif_metrics_ref'
- elif method == 'VIF_metrics.get_record':
- return vif_metrics_rec
- else:
- return None
+ vif_rec = {
+ 'metrics': 'vif_metrics_ref',
+ 'uuid': 'vif_uuid',
+ 'MAC': 'vif_mac',
+ 'device': '0',
+ }
+ side_effects = [['vm_ref'], ['vif_ref'], vif_rec, 1024.0, 2048.0]
session = self.inspector.session
with mock.patch.object(session, 'xenapi_request',
- side_effect=fake_xenapi_request):
+ side_effect=side_effects):
interfaces = list(self.inspector.inspect_vnic_rates(fake_instance))
self.assertEqual(1, len(interfaces))
vnic0, info0 = interfaces[0]
self.assertEqual('vif_uuid', vnic0.name)
self.assertEqual('vif_mac', vnic0.mac)
- self.assertEqual(1024, info0.rx_bytes_rate)
- self.assertEqual(2048, info0.tx_bytes_rate)
+ self.assertEqual(1024.0, info0.rx_bytes_rate)
+ self.assertEqual(2048.0, info0.tx_bytes_rate)
def test_inspect_disk_rates(self):
fake_instance = {'OS-EXT-SRV-ATTR:instance_name': 'fake_instance_name',
'id': 'fake_instance_id'}
- def fake_xenapi_request(method, args):
- vbd_rec = {
- 'device': 'xvdd'
- }
-
- vbd_metrics_rec = {
- 'io_read_kbs': '1',
- 'io_write_kbs': '2'
- }
- if method == 'VM.get_by_name_label':
- return ['vm_ref']
- elif method == 'VM.get_VBDs':
- return ['vbd_ref']
- elif method == 'VBD.get_record':
- return vbd_rec
- elif method == 'VBD.get_metrics':
- return 'vbd_metrics_ref'
- elif method == 'VBD_metrics.get_record':
- return vbd_metrics_rec
- else:
- return None
+ vbd_rec = {
+ 'device': 'xvdd'
+ }
+ side_effects = [['vm_ref'], ['vbd_ref'], vbd_rec, 1024.0, 2048.0]
session = self.inspector.session
with mock.patch.object(session, 'xenapi_request',
- side_effect=fake_xenapi_request):
+ side_effect=side_effects):
disks = list(self.inspector.inspect_disk_rates(fake_instance))
self.assertEqual(1, len(disks))
disk0, info0 = disks[0]
self.assertEqual('xvdd', disk0.device)
- self.assertEqual(1024, info0.read_bytes_rate)
- self.assertEqual(2048, info0.write_bytes_rate)
+ self.assertEqual(1024.0, info0.read_bytes_rate)
+ self.assertEqual(2048.0, info0.write_bytes_rate)


@ -1,123 +0,0 @@
diff --git a/ceilometer/compute/virt/xenapi/inspector.py b/ceilometer/compute/virt/xenapi/inspector.py
index 9632cba..bbd5dc2 100644
--- a/ceilometer/compute/virt/xenapi/inspector.py
+++ b/ceilometer/compute/virt/xenapi/inspector.py
@@ -21,6 +21,11 @@ try:
except ImportError:
api = None
+try:
+ import cPickle as pickle
+except ImportError:
+ import pickle
+
from ceilometer.compute.pollsters import util
from ceilometer.compute.virt import inspector as virt_inspector
from ceilometer.i18n import _
@@ -97,14 +102,29 @@ class XenapiInspector(virt_inspector.Inspector):
def __init__(self):
super(XenapiInspector, self).__init__()
self.session = get_api_session()
+ self.host_ref = self._get_host_ref()
+ self.host_uuid = self._get_host_uuid()
def _get_host_ref(self):
"""Return the xenapi host on which nova-compute runs on."""
return self.session.xenapi.session.get_this_host(self.session.handle)
+ def _get_host_uuid(self):
+ return self.session.xenapi.host.get_uuid(self.host_ref)
+
def _call_xenapi(self, method, *args):
return self.session.xenapi_request(method, args)
+ def _call_plugin(self, plugin, fn, args):
+ args['host_uuid'] = self.host_uuid
+ return self.session.xenapi.host.call_plugin(
+ self.host_ref, plugin, fn, args)
+
+ def _call_plugin_serialized(self, plugin, fn, *args, **kwargs):
+ params = {'params': pickle.dumps(dict(args=args, kwargs=kwargs))}
+ rv = self._call_plugin(plugin, fn, params)
+ return pickle.loads(rv)
+
def _lookup_by_name(self, instance_name):
vm_refs = self._call_xenapi("VM.get_by_name_label", instance_name)
n = len(vm_refs)
@@ -153,6 +173,31 @@ class XenapiInspector(virt_inspector.Inspector):
memory_usage = (total_mem - free_mem * units.Ki) / units.Mi
return virt_inspector.MemoryUsageStats(usage=memory_usage)
+ def inspect_vnics(self, instance):
+ instance_name = util.instance_name(instance)
+ vm_ref = self._lookup_by_name(instance_name)
+ dom_id = self._call_xenapi("VM.get_domid", vm_ref)
+ vif_refs = self._call_xenapi("VM.get_VIFs", vm_ref)
+ bw_all = self._call_plugin_serialized('bandwidth',
+ 'fetch_all_bandwidth')
+ if vif_refs:
+ for vif_ref in vif_refs:
+ vif_rec = self._call_xenapi("VIF.get_record", vif_ref)
+
+ interface = virt_inspector.Interface(
+ name=vif_rec['uuid'],
+ mac=vif_rec['MAC'],
+ fref=None,
+ parameters=None)
+ bw_vif = bw_all[dom_id][vif_rec['device']]
+
+ # Todo <jianghuaw>: Currently the plugin can't support
+ # rx_packets and tx_packets, temporarily set them as -1.
+ stats = virt_inspector.InterfaceStats(
+ rx_bytes=bw_vif['bw_in'], rx_packets='-1',
+ tx_bytes=bw_vif['bw_out'], tx_packets='-1')
+ yield (interface, stats)
+
def inspect_vnic_rates(self, instance, duration=None):
instance_name = util.instance_name(instance)
vm_ref = self._lookup_by_name(instance_name)
diff --git a/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py b/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py
index caa1c93..fae1eef 100644
--- a/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py
+++ b/ceilometer/tests/unit/compute/virt/xenapi/test_inspector.py
@@ -138,6 +138,40 @@ class TestXenapiInspection(base.BaseTestCase):
memory_stat = self.inspector.inspect_memory_usage(fake_instance)
self.assertEqual(fake_stat, memory_stat)
+ def test_inspect_vnics(self):
+ fake_instance = {
+ 'OS-EXT-SRV-ATTR:instance_name': 'fake_instance_name',
+ 'id': 'fake_instance_id'}
+ vif_rec = {
+ 'uuid': 'vif_uuid',
+ 'MAC': 'vif_mac',
+ 'device': '0',
+ }
+ request_returns = [['vm_ref'], '10', ['vif_ref'], vif_rec]
+ bandwidth_returns = [{
+ '10': {
+ '0': {
+ 'bw_in': 1024, 'bw_out': 2048
+ }
+ }
+ }]
+ session = self.inspector.session
+ with mock.patch.object(session, 'xenapi_request',
+ side_effect=request_returns):
+ with mock.patch.object(self.inspector,
+ '_call_plugin_serialized',
+ side_effect=bandwidth_returns):
+
+ interfaces = list(
+ self.inspector.inspect_vnics(fake_instance))
+
+ self.assertEqual(1, len(interfaces))
+ vnic0, info0 = interfaces[0]
+ self.assertEqual('vif_uuid', vnic0.name)
+ self.assertEqual('vif_mac', vnic0.mac)
+ self.assertEqual(1024, info0.rx_bytes)
+ self.assertEqual(2048, info0.tx_bytes)
+
def test_inspect_vnic_rates(self):
fake_instance = {'OS-EXT-SRV-ATTR:instance_name': 'fake_instance_name',
'id': 'fake_instance_id'}


@ -1,71 +0,0 @@
diff --git a/nova/virt/xenapi/vif.py b/nova/virt/xenapi/vif.py
index 6b07a62..ac271e3 100644
--- a/nova/virt/xenapi/vif.py
+++ b/nova/virt/xenapi/vif.py
@@ -463,14 +463,15 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
# Create Linux bridge qbrXXX
linux_br_name = self._create_linux_bridge(vif_rec)
- LOG.debug("create veth pair for interim bridge %(interim_bridge)s and "
- "linux bridge %(linux_bridge)s",
- {'interim_bridge': bridge_name,
- 'linux_bridge': linux_br_name})
- self._create_veth_pair(tap_name, patch_port1)
- self._brctl_add_if(linux_br_name, tap_name)
- # Add port to interim bridge
- self._ovs_add_port(bridge_name, patch_port1)
+ if not self._device_exists(tap_name):
+ LOG.debug("create veth pair for interim bridge %(interim_bridge)s "
+ "and linux bridge %(linux_bridge)s",
+ {'interim_bridge': bridge_name,
+ 'linux_bridge': linux_br_name})
+ self._create_veth_pair(tap_name, patch_port1)
+ self._brctl_add_if(linux_br_name, tap_name)
+ # Add port to interim bridge
+ self._ovs_add_port(bridge_name, patch_port1)
def get_vif_interim_net_name(self, vif):
return ("net-" + vif['id'])[:network_model.NIC_NAME_LEN]
diff --git a/nova/virt/xenapi/vmops.py b/nova/virt/xenapi/vmops.py
index 182873f..e44117e 100644
--- a/nova/virt/xenapi/vmops.py
+++ b/nova/virt/xenapi/vmops.py
@@ -603,15 +603,18 @@ class VMOps(object):
# for neutron event regardless of whether or not it is
# migrated to another host, if unplug VIFs locally, the
# port status may not changed in neutron side and we
- # cannot get the vif plug event from neturon
+ # cannot get the vif plug event from neutron
+ # rescue is True in rescued instance and the port in neutron side
+ # won't change, so we don't wait event from neutron
timeout = CONF.vif_plugging_timeout
- events = self._get_neutron_events(network_info,
- power_on, first_boot)
+ events = self._get_neutron_events(network_info, power_on,
+ first_boot, rescue)
try:
with self._virtapi.wait_for_instance_event(
instance, events, deadline=timeout,
error_callback=self._neutron_failed_callback):
- LOG.debug("wait for instance event:%s", events)
+ LOG.debug("wait for instance event:%s", events,
+ instance=instance)
setup_network_step(undo_mgr, vm_ref)
if rescue:
attach_orig_disks_step(undo_mgr, vm_ref)
@@ -647,11 +650,13 @@ class VMOps(object):
if CONF.vif_plugging_is_fatal:
raise exception.VirtualInterfaceCreateException()
- def _get_neutron_events(self, network_info, power_on, first_boot):
+ def _get_neutron_events(self, network_info, power_on, first_boot, rescue):
# Only get network-vif-plugged events with VIF's status is not active.
# With VIF whose status is active, neutron may not notify such event.
+ # Don't get network-vif-plugged events from rescued VM or migrated VM
timeout = CONF.vif_plugging_timeout
- if (utils.is_neutron() and power_on and timeout and first_boot):
+ if (utils.is_neutron() and power_on and timeout and first_boot and
+ not rescue):
return [('network-vif-plugged', vif['id'])
for vif in network_info if vif.get('active', True) is False]
else:

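The hunk above threads a new rescue flag into the event gate. The resulting predicate, as a standalone sketch (the argument names are stand-ins for utils.is_neutron(), CONF.vif_plugging_timeout, and so on):

def should_wait_for_vif_events(is_neutron, power_on, timeout, first_boot, rescue):
    # Wait for network-vif-plugged only on the first boot of a powered-on,
    # non-rescued instance under neutron; a rescued VM's neutron port never
    # changes state, so no event would ever arrive.
    return bool(is_neutron and power_on and timeout and first_boot and not rescue)

assert should_wait_for_vif_events(True, True, 300, True, False)
assert not should_wait_for_vif_events(True, True, 300, True, True)  # rescue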

@ -1,84 +0,0 @@
diff --git a/bin/neutron-rootwrap-xen-dom0 b/bin/neutron-rootwrap-xen-dom0
index 829b9c1..9210e85 100755
--- a/bin/neutron-rootwrap-xen-dom0
+++ b/bin/neutron-rootwrap-xen-dom0
@@ -29,7 +29,6 @@ from oslo_serialization import jsonutils as json
import os
import select
import sys
-import traceback
import XenAPI
@@ -45,7 +44,7 @@ def parse_args():
exec_name = sys.argv.pop(0)
# argv[0] required; path to conf file
if len(sys.argv) < 2:
- print("%s: No command specified" % exec_name)
+ sys.stderr.write("%s: No command specified" % exec_name)
sys.exit(RC_NOCOMMAND)
config_file = sys.argv.pop(0)
@@ -59,7 +58,7 @@ def _xenapi_section_name(config):
if len(sections) == 1:
return sections[0]
- print("Multiple [xenapi] sections or no [xenapi] section found!")
+ sys.stderr.write("Multiple [xenapi] sections or no [xenapi] section found!")
sys.exit(RC_BADCONFIG)
@@ -74,13 +73,14 @@ def load_configuration(exec_name, config_file):
username = config.get(section, "xenapi_connection_username")
password = config.get(section, "xenapi_connection_password")
except ConfigParser.Error:
- print("%s: Incorrect configuration file: %s" % (exec_name, config_file))
+ sys.stderr.write("%s: Incorrect configuration file: %s" %
+ (exec_name, config_file))
sys.exit(RC_BADCONFIG)
if not url or not password:
msg = ("%s: Must specify xenapi_connection_url, "
"xenapi_connection_username (optionally), and "
"xenapi_connection_password in %s") % (exec_name, config_file)
- print(msg)
+ sys.stderr.write(msg)
sys.exit(RC_BADCONFIG)
return dict(
filters_path=filters_path,
@@ -105,7 +105,7 @@ def filter_command(exec_name, filters_path, user_args, exec_dirs):
filter_match = wrapper.match_filter(
filters, user_args, exec_dirs=exec_dirs)
if not filter_match:
- print("Unauthorized command: %s" % ' '.join(user_args))
+ sys.stderr.write("Unauthorized command: %s" % ' '.join(user_args))
sys.exit(RC_UNAUTHORIZED)
@@ -118,11 +118,17 @@ def run_command(url, username, password, user_args, cmd_input):
result = session.xenapi.host.call_plugin(
host, 'netwrap', 'run_command',
{'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
- return json.loads(result)
+ _out = json.loads(result)
+ returncode = _out.get('returncode')
+ _stdout = _out.get('out')
+ _stderr = _out.get('err')
+ sys.stdout.write(_stdout)
+ sys.stderr.write(_stderr)
+ sys.exit(returncode)
finally:
session.xenapi.session.logout()
except Exception as e:
- traceback.print_exc()
+ sys.stderr.write("Failed to execute command in Dom0, %s" % e)
sys.exit(RC_XENAPI_ERROR)
@@ -142,4 +148,4 @@ def main():
if __name__ == '__main__':
- print(main())
+ main()

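After this change the wrapper no longer prints the raw plugin reply; it unpacks the JSON blob and mirrors it onto its own stdout, stderr, and exit status. A sketch of that propagation step in isolation (propagate is a hypothetical name):

import json
import sys

def propagate(result_json):
    # Plugin reply: {'returncode': int, 'out': str, 'err': str}
    result = json.loads(result_json)
    sys.stdout.write(result.get('out', ''))
    sys.stderr.write(result.get('err', ''))
    sys.exit(result.get('returncode', 0))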

@ -1,64 +0,0 @@
diff --git a/nova/virt/xenapi/client/session.py b/nova/virt/xenapi/client/session.py
index 70e9bec..80bf235 100644
--- a/nova/virt/xenapi/client/session.py
+++ b/nova/virt/xenapi/client/session.py
@@ -102,8 +102,9 @@ class XenAPISession(object):
self.host_ref = self._get_host_ref()
self.product_version, self.product_brand = \
self._get_product_version_and_brand()
-
self._verify_plugin_version()
+ self.platform_version = self._get_platform_version()
+ self._cached_xsm_sr_relaxed = None
apply_session_helpers(self)
@@ -177,6 +178,15 @@ class XenAPISession(object):
return product_version, product_brand
+ def _get_platform_version(self):
+ """Return a tuple of (major, minor, rev) for the host version"""
+ software_version = self._get_software_version()
+ platform_version_str = software_version.get('platform_version',
+ '0.0.0')
+ platform_version = versionutils.convert_version_to_tuple(
+ platform_version_str)
+ return platform_version
+
def _get_software_version(self):
return self.call_xenapi('host.get_software_version', self.host_ref)
@@ -328,3 +338,19 @@ class XenAPISession(object):
"""
return self.call_xenapi('%s.get_all_records' % record_type).items()
+
+ def is_xsm_sr_check_relaxed(self):
+ if self._cached_xsm_sr_relaxed is None:
+ config_value = self.call_plugin('config_file', 'get_val',
+ key='relax-xsm-sr-check')
+ if not config_value:
+ version_str = '.'.join(str(v) for v in self.platform_version)
+ if versionutils.is_compatible('2.1.0', version_str,
+ same_major=False):
+ self._cached_xsm_sr_relaxed = True
+ else:
+ self._cached_xsm_sr_relaxed = False
+ else:
+ self._cached_xsm_sr_relaxed = config_value.lower() == 'true'
+
+ return self._cached_xsm_sr_relaxed
diff --git a/nova/virt/xenapi/vmops.py b/nova/virt/xenapi/vmops.py
index 51d9627..1c93eac 100644
--- a/nova/virt/xenapi/vmops.py
+++ b/nova/virt/xenapi/vmops.py
@@ -2257,7 +2257,7 @@ class VMOps(object):
if len(self._get_iscsi_srs(ctxt, instance_ref)) > 0:
# XAPI must support the relaxed SR check for live migrating with
# iSCSI VBDs
- if not self._is_xsm_sr_check_relaxed():
+ if not self._session.is_xsm_sr_check_relaxed():
raise exception.MigrationError(reason=_('XAPI supporting '
'relax-xsm-sr-check=true required'))

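The default in is_xsm_sr_check_relaxed() above hinges on a tuple comparison: versionutils.convert_version_to_tuple('2.1.0') gives (2, 1, 0), and is_compatible('2.1.0', version_str, same_major=False) is effectively "platform version >= 2.1.0". A dependency-free approximation of that gate:

def version_tuple(version_str):
    return tuple(int(p) for p in version_str.split('.'))

def xsm_sr_relaxed_by_default(platform_version_str):
    # Platform versions from 2.1.0 on relax the check by default.
    return version_tuple(platform_version_str) >= (2, 1, 0)

assert xsm_sr_relaxed_by_default('2.1.0')
assert not xsm_sr_relaxed_by_default('1.9.0')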

@ -1,49 +0,0 @@
diff --git a/nova/objects/migrate_data.py b/nova/objects/migrate_data.py
index 44dce4f..07a16ea 100644
--- a/nova/objects/migrate_data.py
+++ b/nova/objects/migrate_data.py
@@ -210,7 +210,9 @@ class LibvirtLiveMigrateData(LiveMigrateData):
@obj_base.NovaObjectRegistry.register
class XenapiLiveMigrateData(LiveMigrateData):
- VERSION = '1.0'
+ # Version 1.0: Initial version
+ # Version 1.1: Added vif_uuid_map
+ VERSION = '1.1'
fields = {
'block_migration': fields.BooleanField(nullable=True),
@@ -219,6 +221,7 @@ class XenapiLiveMigrateData(LiveMigrateData):
'sr_uuid_map': fields.DictOfStringsField(),
'kernel_file': fields.StringField(),
'ramdisk_file': fields.StringField(),
+ 'vif_uuid_map': fields.DictOfStringsField(),
}
def to_legacy_dict(self, pre_migration_result=False):
@@ -233,6 +236,8 @@ class XenapiLiveMigrateData(LiveMigrateData):
live_result = {
'sr_uuid_map': ('sr_uuid_map' in self and self.sr_uuid_map
or {}),
+ 'vif_uuid_map': ('vif_uuid_map' in self and self.vif_uuid_map
+ or {}),
}
if pre_migration_result:
legacy['pre_live_migration_result'] = live_result
@@ -252,6 +257,16 @@ class XenapiLiveMigrateData(LiveMigrateData):
if 'pre_live_migration_result' in legacy:
self.sr_uuid_map = \
legacy['pre_live_migration_result']['sr_uuid_map']
+ self.vif_uuid_map = \
+ legacy['pre_live_migration_result'].get('vif_uuid_map', {})
+
+ def obj_make_compatible(self, primitive, target_version):
+ super(XenapiLiveMigrateData, self).obj_make_compatible(
+ primitive, target_version)
+ target_version = versionutils.convert_version_to_tuple(target_version)
+ if target_version < (1, 1):
+ if 'vif_uuid_map' in primitive:
+ del primitive['vif_uuid_map']
@obj_base.NovaObjectRegistry.register

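obj_make_compatible() above is the standard versioned-object downgrade hook: when the receiving side only understands 1.0, the 1.1-only field is stripped from the serialized primitive. A self-contained sketch of that behaviour (make_compatible stands in for the method):

def make_compatible(primitive, target_version):
    major, minor = (int(v) for v in target_version.split('.')[:2])
    if (major, minor) < (1, 1):
        primitive.pop('vif_uuid_map', None)  # unknown to 1.0 peers
    return primitive

prim = {'block_migration': True, 'vif_uuid_map': {'port-uuid': 'net_ref'}}
assert 'vif_uuid_map' not in make_compatible(dict(prim), '1.0')
assert 'vif_uuid_map' in make_compatible(dict(prim), '1.1')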

@ -1,253 +0,0 @@
diff --git a/nova/objects/migrate_data.py b/nova/objects/migrate_data.py
index 44dce4f..07a16ea 100644
--- a/nova/objects/migrate_data.py
+++ b/nova/objects/migrate_data.py
@@ -210,7 +210,9 @@ class LibvirtLiveMigrateData(LiveMigrateData):
@obj_base.NovaObjectRegistry.register
class XenapiLiveMigrateData(LiveMigrateData):
- VERSION = '1.0'
+ # Version 1.0: Initial version
+ # Version 1.1: Added vif_uuid_map
+ VERSION = '1.1'
fields = {
'block_migration': fields.BooleanField(nullable=True),
@@ -219,6 +221,7 @@ class XenapiLiveMigrateData(LiveMigrateData):
'sr_uuid_map': fields.DictOfStringsField(),
'kernel_file': fields.StringField(),
'ramdisk_file': fields.StringField(),
+ 'vif_uuid_map': fields.DictOfStringsField(),
}
def to_legacy_dict(self, pre_migration_result=False):
@@ -233,6 +236,8 @@ class XenapiLiveMigrateData(LiveMigrateData):
live_result = {
'sr_uuid_map': ('sr_uuid_map' in self and self.sr_uuid_map
or {}),
+ 'vif_uuid_map': ('vif_uuid_map' in self and self.vif_uuid_map
+ or {}),
}
if pre_migration_result:
legacy['pre_live_migration_result'] = live_result
@@ -252,6 +257,16 @@ class XenapiLiveMigrateData(LiveMigrateData):
if 'pre_live_migration_result' in legacy:
self.sr_uuid_map = \
legacy['pre_live_migration_result']['sr_uuid_map']
+ self.vif_uuid_map = \
+ legacy['pre_live_migration_result'].get('vif_uuid_map', {})
+
+ def obj_make_compatible(self, primitive, target_version):
+ super(XenapiLiveMigrateData, self).obj_make_compatible(
+ primitive, target_version)
+ target_version = versionutils.convert_version_to_tuple(target_version)
+ if target_version < (1, 1):
+ if 'vif_uuid_map' in primitive:
+ del primitive['vif_uuid_map']
@obj_base.NovaObjectRegistry.register
diff --git a/nova/virt/xenapi/driver.py b/nova/virt/xenapi/driver.py
index 899c083..1639861 100644
--- a/nova/virt/xenapi/driver.py
+++ b/nova/virt/xenapi/driver.py
@@ -569,6 +569,7 @@ class XenAPIDriver(driver.ComputeDriver):
# any volume that was attached to the destination during
# live migration. XAPI should take care of all other cleanup.
self._vmops.rollback_live_migration_at_destination(instance,
+ network_info,
block_device_info)
def pre_live_migration(self, context, instance, block_device_info,
@@ -594,6 +595,16 @@ class XenAPIDriver(driver.ComputeDriver):
"""
self._vmops.post_live_migration(context, instance, migrate_data)
+ def post_live_migration_at_source(self, context, instance, network_info):
+ """Unplug VIFs from networks at source.
+
+ :param context: security context
+ :param instance: instance object reference
+ :param network_info: instance network information
+ """
+ self._vmops.post_live_migration_at_source(context, instance,
+ network_info)
+
def post_live_migration_at_destination(self, context, instance,
network_info,
block_migration=False,
diff --git a/nova/virt/xenapi/vif.py b/nova/virt/xenapi/vif.py
index ac271e3..6925426 100644
--- a/nova/virt/xenapi/vif.py
+++ b/nova/virt/xenapi/vif.py
@@ -88,6 +88,9 @@ class XenVIFDriver(object):
raise exception.NovaException(
reason=_("Failed to unplug vif %s") % vif)
+ def get_vif_interim_net_name(self, vif_id):
+ return ("net-" + vif_id)[:network_model.NIC_NAME_LEN]
+
def hot_plug(self, vif, instance, vm_ref, vif_ref):
"""hotplug virtual interface to running instance.
:param nova.network.model.VIF vif:
@@ -126,10 +129,20 @@ class XenVIFDriver(object):
"""
pass
+ def create_vif_interim_network(self, vif):
+ pass
+
+ def delete_network_and_bridge(self, instance, vif):
+ pass
+
class XenAPIBridgeDriver(XenVIFDriver):
"""VIF Driver for XenAPI that uses XenAPI to create Networks."""
+ # NOTE(huanxie): This driver uses linux bridge as backend for XenServer,
+ # it only supports nova network, for using neutron, you should use
+ # XenAPIOpenVswitchDriver
+
def plug(self, instance, vif, vm_ref=None, device=None):
if not vm_ref:
vm_ref = vm_utils.lookup(self._session, instance['name'])
@@ -279,8 +292,7 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
4. delete linux bridge qbr and related ports if exist
"""
super(XenAPIOpenVswitchDriver, self).unplug(instance, vif, vm_ref)
-
- net_name = self.get_vif_interim_net_name(vif)
+ net_name = self.get_vif_interim_net_name(vif['id'])
network = network_utils.find_network_with_name_label(
self._session, net_name)
if network is None:
@@ -292,6 +304,16 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
# source and target VM will be connected to the same
# interim network.
return
+ self.delete_network_and_bridge(instance, vif)
+
+ def delete_network_and_bridge(self, instance, vif):
+ net_name = self.get_vif_interim_net_name(vif['id'])
+ network = network_utils.find_network_with_name_label(
+ self._session, net_name)
+ if network is None:
+ LOG.debug("Didn't find network by name %s", net_name,
+ instance=instance)
+ return
LOG.debug('destroying patch port pair for vif: vif_id=%(vif_id)s',
{'vif_id': vif['id']})
bridge_name = self._session.network.get_bridge(network)
@@ -473,11 +495,8 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
# Add port to interim bridge
self._ovs_add_port(bridge_name, patch_port1)
- def get_vif_interim_net_name(self, vif):
- return ("net-" + vif['id'])[:network_model.NIC_NAME_LEN]
-
def create_vif_interim_network(self, vif):
- net_name = self.get_vif_interim_net_name(vif)
+ net_name = self.get_vif_interim_net_name(vif['id'])
network_rec = {'name_label': net_name,
'name_description': "interim network for vif",
'other_config': {}}
diff --git a/nova/virt/xenapi/vmops.py b/nova/virt/xenapi/vmops.py
index e44117e..82a9aef 100644
--- a/nova/virt/xenapi/vmops.py
+++ b/nova/virt/xenapi/vmops.py
@@ -2388,11 +2388,41 @@ class VMOps(object):
self._generate_vdi_map(
sr_uuid_map[sr_uuid], vm_ref, sr_ref))
vif_map = {}
+ vif_uuid_map = None
+ if 'vif_uuid_map' in migrate_data:
+ vif_uuid_map = migrate_data.vif_uuid_map
+ if vif_uuid_map:
+ vif_map = self._generate_vif_network_map(vm_ref, vif_uuid_map)
+ LOG.debug("Generated vif_map for live migration: %s", vif_map)
options = {}
self._session.call_xenapi(command_name, vm_ref,
migrate_send_data, True,
vdi_map, vif_map, options)
+ def _generate_vif_network_map(self, vm_ref, vif_uuid_map):
+ # Generate a mapping dictionary of src_vif_ref: dest_network_ref
+ vif_map = {}
+ # vif_uuid_map is dictionary of neutron_vif_uuid: dest_network_ref
+ vifs = self._session.VM.get_VIFs(vm_ref)
+ for vif in vifs:
+ other_config = self._session.VIF.get_other_config(vif)
+ neutron_id = other_config.get('nicira-iface-id')
+ if neutron_id is None or neutron_id not in vif_uuid_map.keys():
+ raise exception.MigrationError(
+ reason=_('No mapping for source network %s') % (
+ neutron_id))
+ network_ref = vif_uuid_map[neutron_id]
+ vif_map[vif] = network_ref
+ return vif_map
+
+ def create_interim_networks(self, network_info):
+ # Creating an interim bridge in destination host before live_migration
+ vif_map = {}
+ for vif in network_info:
+ network_ref = self.vif_driver.create_vif_interim_network(vif)
+ vif_map.update({vif['id']: network_ref})
+ return vif_map
+
def pre_live_migration(self, context, instance, block_device_info,
network_info, disk_info, migrate_data):
if migrate_data is None:
@@ -2401,6 +2431,11 @@ class VMOps(object):
migrate_data.sr_uuid_map = self.connect_block_device_volumes(
block_device_info)
+ migrate_data.vif_uuid_map = self.create_interim_networks(network_info)
+ LOG.debug("pre_live_migration, vif_uuid_map: %(vif_map)s, "
+ "sr_uuid_map: %(sr_map)s",
+ {'vif_map': migrate_data.vif_uuid_map,
+ 'sr_map': migrate_data.sr_uuid_map}, instance=instance)
return migrate_data
def live_migrate(self, context, instance, destination_hostname,
@@ -2457,6 +2492,11 @@ class VMOps(object):
migrate_data.kernel_file,
migrate_data.ramdisk_file)
+ def post_live_migration_at_source(self, context, instance, network_info):
+ LOG.debug('post_live_migration_at_source, delete networks and bridges',
+ instance=instance)
+ self._delete_networks_and_bridges(instance, network_info)
+
def post_live_migration_at_destination(self, context, instance,
network_info, block_migration,
block_device_info):
@@ -2471,7 +2511,7 @@ class VMOps(object):
vm_ref = self._get_vm_opaque_ref(instance)
vm_utils.strip_base_mirror_from_vdis(self._session, vm_ref)
- def rollback_live_migration_at_destination(self, instance,
+ def rollback_live_migration_at_destination(self, instance, network_info,
block_device_info):
bdms = block_device_info['block_device_mapping'] or []
@@ -2488,6 +2528,20 @@ class VMOps(object):
LOG.exception(_LE('Failed to forget the SR for volume %s'),
params['id'], instance=instance)
+ # delete VIF and network in destination host
+ LOG.debug('rollback_live_migration_at_destination, delete networks '
+ 'and bridges', instance=instance)
+ self._delete_networks_and_bridges(instance, network_info)
+
+ def _delete_networks_and_bridges(self, instance, network_info):
+ # Unplug VIFs and delete networks
+ for vif in network_info:
+ try:
+ self.vif_driver.delete_network_and_bridge(instance, vif)
+ except Exception:
+ LOG.exception(_LE('Failed to delete networks and bridges with '
+ 'VIF %s'), vif['id'], instance=instance)
+
def get_per_instance_usage(self):
"""Get usage info about each active instance."""
usage = {}

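The mapping logic in _generate_vif_network_map() above keys everything on the neutron port id that XenServer stores in each VIF's other_config as 'nicira-iface-id'. Extracted as a sketch that takes plain dicts instead of XenAPI session calls (vif_other_configs is a hypothetical stand-in for VM.get_VIFs plus VIF.get_other_config):

def generate_vif_network_map(vif_other_configs, vif_uuid_map):
    # vif_other_configs: {src_vif_ref: other_config dict}
    # vif_uuid_map:      {neutron_vif_uuid: dest_network_ref}
    vif_map = {}
    for vif_ref, other_config in vif_other_configs.items():
        neutron_id = other_config.get('nicira-iface-id')
        if neutron_id is None or neutron_id not in vif_uuid_map:
            raise ValueError('No mapping for source network %s' % neutron_id)
        vif_map[vif_ref] = vif_uuid_map[neutron_id]
    return vif_map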

@ -1,87 +0,0 @@
#!/usr/bin/env python
# Copyright 2012 OpenStack Foundation
# Copyright 2012 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# XenAPI plugin for executing network commands (ovs, iptables, etc) on dom0
#
import gettext
gettext.install('neutron', unicode=1)
try:
import json
except ImportError:
import simplejson as json
import subprocess
import XenAPIPlugin
ALLOWED_CMDS = [
'ip',
'ipset',
'iptables-save',
'iptables-restore',
'ip6tables-save',
'ip6tables-restore',
'sysctl',
# NOTE(yamamoto): of_interface=native doesn't use ovs-ofctl
'ovs-ofctl',
'ovs-vsctl',
'ovsdb-client',
'conntrack',
]
class PluginError(Exception):
"""Base Exception class for all plugin errors."""
def __init__(self, *args):
Exception.__init__(self, *args)
def _run_command(cmd, cmd_input):
"""Abstracts out the basics of issuing system commands. If the command
returns anything in stderr, a PluginError is raised with that information.
Otherwise, the output from stdout is returned.
"""
pipe = subprocess.PIPE
proc = subprocess.Popen(cmd, shell=False, stdin=pipe, stdout=pipe,
stderr=pipe, close_fds=True)
(out, err) = proc.communicate(cmd_input)
return proc.returncode, out, err
def run_command(session, args):
cmd = json.loads(args.get('cmd'))
if cmd and cmd[0] not in ALLOWED_CMDS:
msg = _("Dom0 execution of '%s' is not permitted") % cmd[0]
raise PluginError(msg)
returncode, out, err = _run_command(
cmd, json.loads(args.get('cmd_input', 'null')))
if not err:
err = ""
if not out:
out = ""
# This runs in Dom0, will return to neutron-ovs-agent in compute node
result = {'returncode': returncode,
'out': out,
'err': err}
return json.dumps(result)
if __name__ == "__main__":
XenAPIPlugin.dispatch({"run_command": run_command})

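From the caller's side (compare the rootwrap diff earlier), a round trip through this plugin looks roughly like the sketch below; session and host are assumed to come from an authenticated XenAPI connection:

import json

def run_in_dom0(session, host, user_args, cmd_input=None):
    # Arguments are JSON-encoded on the way in; the reply is one JSON blob
    # with 'returncode', 'out' and 'err', as built by run_command above.
    result = session.xenapi.host.call_plugin(
        host, 'netwrap', 'run_command',
        {'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
    return json.loads(result)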

@ -1,236 +0,0 @@
diff --git a/nova/virt/xenapi/vif.py b/nova/virt/xenapi/vif.py
index b9660be..a474d23 100644
--- a/nova/virt/xenapi/vif.py
+++ b/nova/virt/xenapi/vif.py
@@ -230,11 +230,11 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
def unplug(self, instance, vif, vm_ref):
"""unplug vif:
- 1. unplug and destroy vif.
- 2. delete the patch port pair between the integration bridge and
- the interim network.
- 3. destroy the interim network
- 4. delete the OVS bridge service for the interim network
+ 1. delete the patch port pair between the integration bridge and
+ the qbr linux bridge(if exist) and the interim network.
+ 2. destroy the interim network
+ 3. delete the OVS bridge service for the interim network
+ 4. delete linux bridge qbr and related ports if exist
"""
super(XenAPIOpenVswitchDriver, self).unplug(instance, vif, vm_ref)
@@ -253,12 +253,10 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
LOG.debug('destroying patch port pair for vif: vif_id=%(vif_id)s',
{'vif_id': vif['id']})
bridge_name = self._session.network.get_bridge(network)
- patch_port1, patch_port2 = self._get_patch_port_pair_names(vif['id'])
+ patch_port1, tap_name = self._get_patch_port_pair_names(vif['id'])
try:
# delete the patch port pair
self._ovs_del_port(bridge_name, patch_port1)
- self._ovs_del_port(CONF.xenserver.ovs_integration_bridge,
- patch_port2)
except Exception as e:
LOG.warn(_LW("Failed to delete patch port pair for vif %(if)s,"
" exception:%(exception)s"),
@@ -277,6 +275,18 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
# won't be destroyed automatically by XAPI. So let's destroy it
# at here.
self._ovs_del_br(bridge_name)
+
+ qbr_name = self._get_qbr_name(vif['id'])
+ qvb_name, qvo_name = self._get_veth_pair_names(vif['id'])
+ if self._device_exists(qbr_name):
+ # delete tap port, qvb port and qbr
+ LOG.debug(
+ "destroy linux bridge %(qbr)s when unplug vif %(vif)s",
+ {'qbr': qbr_name, 'vif': vif['id']})
+ self._delete_linux_port(qbr_name, tap_name)
+ self._delete_linux_port(qbr_name, qvb_name)
+ self._delete_linux_bridge(qbr_name)
+ self._ovs_del_port(CONF.xenserver.ovs_integration_bridge, qvo_name)
except Exception as e:
LOG.warn(_LW("Failed to delete bridge for vif %(if)s, "
"exception:%(exception)s"),
@@ -284,6 +294,88 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
raise exception.VirtualInterfaceUnplugException(
reason=_("Failed to delete bridge"))
+ def _get_qbr_name(self, iface_id):
+ return ("qbr" + iface_id)[:network_model.NIC_NAME_LEN]
+
+ def _get_veth_pair_names(self, iface_id):
+ return (("qvb%s" % iface_id)[:network_model.NIC_NAME_LEN],
+ ("qvo%s" % iface_id)[:network_model.NIC_NAME_LEN])
+
+ def _device_exists(self, device):
+ """Check if ethernet device exists."""
+ try:
+ cmd = 'ip_link_get_dev'
+ args = {'device_name': device}
+ self._exec_dom0_cmd(cmd, args)
+ return True
+ except Exception:
+ # Swallow exception from plugin, since this indicates the device
+ # doesn't exist
+ return False
+
+ def _delete_net_dev(self, dev):
+ """Delete a network device only if it exists."""
+ if self._device_exists(dev):
+ LOG.debug("delete network device '%s'", dev)
+ args = {'device_name': dev}
+ self._exec_dom0_cmd('ip_link_del_dev', args)
+
+ def _create_veth_pair(self, dev1_name, dev2_name):
+ """Create a pair of veth devices with the specified names,
+ deleting any previous devices with those names.
+ """
+ LOG.debug("Create veth pair, port1:%(qvb)s, port2:%(qvo)s",
+ {'qvb': dev1_name, 'qvo': dev2_name})
+ for dev in [dev1_name, dev2_name]:
+ self._delete_net_dev(dev)
+ args = {'dev1_name': dev1_name, 'dev2_name': dev2_name}
+ self._exec_dom0_cmd('ip_link_add_veth_pair', args)
+ for dev in [dev1_name, dev2_name]:
+ args = {'device_name': dev, 'option': 'up'}
+ self._exec_dom0_cmd('ip_link_set_dev', args)
+ args = {'device_name': dev, 'option': 'on'}
+ self._exec_dom0_cmd('ip_link_set_promisc', args)
+
+ def _create_linux_bridge(self, vif_rec):
+ """create a qbr linux bridge for neutron security group
+ """
+ iface_id = vif_rec['other_config']['nicira-iface-id']
+ linux_br_name = self._get_qbr_name(iface_id)
+ if not self._device_exists(linux_br_name):
+ LOG.debug("Create linux bridge %s", linux_br_name)
+ self._brctl_add_br(linux_br_name)
+ self._brctl_set_fd(linux_br_name, '0')
+ self._brctl_set_stp(linux_br_name, 'off')
+ args = {'device_name': linux_br_name, 'option': 'up'}
+ self._exec_dom0_cmd('ip_link_set_dev', args)
+
+ qvb_name, qvo_name = self._get_veth_pair_names(iface_id)
+ if not self._device_exists(qvo_name):
+ self._create_veth_pair(qvb_name, qvo_name)
+ self._brctl_add_if(linux_br_name, qvb_name)
+ self._ovs_add_port(CONF.xenserver.ovs_integration_bridge, qvo_name)
+ self._ovs_map_external_ids(qvo_name, vif_rec)
+ return linux_br_name
+
+ def _delete_linux_port(self, qbr_name, port_name):
+ try:
+ # delete port in linux bridge
+ self._brctl_del_if(qbr_name, port_name)
+ self._delete_net_dev(port_name)
+ except Exception:
+ LOG.debug("Fail to delete linux port %(port_name)s on bridge"
+ "%(qbr_name)s",
+ {'port_name': port_name, 'qbr_name': qbr_name})
+
+ def _delete_linux_bridge(self, qbr_name):
+ try:
+ # delete linux bridge qbrxxx
+ args = {'device_name': qbr_name, 'option': 'down'}
+ self._exec_dom0_cmd('ip_link_set_dev', args)
+ self._brctl_del_br(qbr_name)
+ except Exception:
+ LOG.debug("Fail to delete linux bridge %s", qbr_name)
+
def post_start_actions(self, instance, vif_ref):
"""Do needed actions post vif start:
plug the interim ovs bridge to the integration bridge;
@@ -295,19 +387,25 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
bridge_name = self._session.network.get_bridge(network_ref)
network_uuid = self._session.network.get_uuid(network_ref)
iface_id = vif_rec['other_config']['nicira-iface-id']
- patch_port1, patch_port2 = self._get_patch_port_pair_names(iface_id)
+ patch_port1, tap_name = self._get_patch_port_pair_names(iface_id)
LOG.debug('plug_ovs_bridge: port1=%(port1)s, port2=%(port2)s,'
'network_uuid=%(uuid)s, bridge_name=%(bridge_name)s',
- {'port1': patch_port1, 'port2': patch_port2,
+ {'port1': patch_port1, 'port2': tap_name,
'uuid': network_uuid, 'bridge_name': bridge_name})
if bridge_name is None:
raise exception.VirtualInterfacePlugException(
_("Failed to find bridge for vif"))
- self._ovs_add_patch_port(bridge_name, patch_port1, patch_port2)
- self._ovs_add_patch_port(CONF.xenserver.ovs_integration_bridge,
- patch_port2, patch_port1)
- self._ovs_map_external_ids(patch_port2, vif_rec)
+ # Create Linux bridge qbrXXX
+ linux_br_name = self._create_linux_bridge(vif_rec)
+ LOG.debug("create veth pair for interim bridge %(interim_bridge)s and "
+ "linux bridge %(linux_bridge)s",
+ {'interim_bridge': bridge_name,
+ 'linux_bridge': linux_br_name})
+ self._create_veth_pair(tap_name, patch_port1)
+ self._brctl_add_if(linux_br_name, tap_name)
+ # Add port to interim bridge
+ self._ovs_add_port(bridge_name, patch_port1)
def get_vif_interim_net_name(self, vif):
return ("net-" + vif['id'])[:network_model.NIC_NAME_LEN]
@@ -335,14 +433,13 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
return network_ref
def _get_patch_port_pair_names(self, iface_id):
- return (("pp1-%s" % iface_id)[:network_model.NIC_NAME_LEN],
- ("pp2-%s" % iface_id)[:network_model.NIC_NAME_LEN])
+ return (("vif%s" % iface_id)[:network_model.NIC_NAME_LEN],
+ ("tap%s" % iface_id)[:network_model.NIC_NAME_LEN])
- def _ovs_add_patch_port(self, bridge_name, port_name, peer_port_name):
- cmd = 'ovs_add_patch_port'
+ def _ovs_add_port(self, bridge_name, port_name):
+ cmd = 'ovs_add_port'
args = {'bridge_name': bridge_name,
- 'port_name': port_name,
- 'peer_port_name': peer_port_name
+ 'port_name': port_name
}
self._exec_dom0_cmd(cmd, args)
@@ -378,6 +475,40 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
self._ovs_set_if_external_id(interface, 'xs-vif-uuid', vif_uuid)
self._ovs_set_if_external_id(interface, 'iface-status', status)
+ def _brctl_add_if(self, bridge_name, interface_name):
+ cmd = 'brctl_add_if'
+ args = {'bridge_name': bridge_name,
+ 'interface_name': interface_name}
+ self._exec_dom0_cmd(cmd, args)
+
+ def _brctl_del_if(self, bridge_name, interface_name):
+ cmd = 'brctl_del_if'
+ args = {'bridge_name': bridge_name,
+ 'interface_name': interface_name}
+ self._exec_dom0_cmd(cmd, args)
+
+ def _brctl_del_br(self, bridge_name):
+ cmd = 'brctl_del_br'
+ args = {'bridge_name': bridge_name}
+ self._exec_dom0_cmd(cmd, args)
+
+ def _brctl_add_br(self, bridge_name):
+ cmd = 'brctl_add_br'
+ args = {'bridge_name': bridge_name}
+ self._exec_dom0_cmd(cmd, args)
+
+ def _brctl_set_fd(self, bridge_name, fd):
+ cmd = 'brctl_set_fd'
+ args = {'bridge_name': bridge_name,
+ 'fd': fd}
+ self._exec_dom0_cmd(cmd, args)
+
+ def _brctl_set_stp(self, bridge_name, stp_opt):
+ cmd = 'brctl_set_stp'
+ args = {'bridge_name': bridge_name,
+ 'option': stp_opt}
+ self._exec_dom0_cmd(cmd, args)
+
def _exec_dom0_cmd(self, cmd, cmd_args):
args = {'cmd': cmd,
'args': cmd_args

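All the qbr/qvb/qvo/tap/vif names in this patch derive from the neutron interface id, truncated to the kernel's interface-name limit. A sketch with the limit hard-coded (nova takes NIC_NAME_LEN from nova.network.model; 14 is assumed here as its usual value):

NIC_NAME_LEN = 14  # assumed value of network_model.NIC_NAME_LEN

def device_names(iface_id):
    # One entry per device role used by the patch above.
    return dict((prefix, (prefix + iface_id)[:NIC_NAME_LEN])
                for prefix in ('qbr', 'qvb', 'qvo', 'tap', 'vif'))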

@ -1,294 +0,0 @@
diff --git a/nova/exception.py b/nova/exception.py
index 40b82bf..f9cd12f 100644
--- a/nova/exception.py
+++ b/nova/exception.py
@@ -187,6 +187,11 @@ class VirtualInterfacePlugException(NovaException):
msg_fmt = _("Virtual interface plugin failed")
+class VirtualInterfaceUnplugException(NovaException):
+ msg_fmt = _("Virtual interface unplugin failed: "
+ "%(reason)s")
+
+
class GlanceConnectionFailed(NovaException):
msg_fmt = _("Connection to glance host %(server)s failed: "
"%(reason)s")
diff --git a/nova/virt/xenapi/client/objects.py b/nova/virt/xenapi/client/objects.py
index 5cc91eb..7f7215c 100644
--- a/nova/virt/xenapi/client/objects.py
+++ b/nova/virt/xenapi/client/objects.py
@@ -100,6 +100,12 @@ class VDI(XenAPISessionObject):
super(VDI, self).__init__(session, "VDI")
+class VIF(XenAPISessionObject):
+ """Virtual Network Interface."""
+ def __init__(self, session):
+ super(VIF, self).__init__(session, "VIF")
+
+
class SR(XenAPISessionObject):
"""Storage Repository."""
def __init__(self, session):
diff --git a/nova/virt/xenapi/client/session.py b/nova/virt/xenapi/client/session.py
index 8f277ff..70e9bec 100644
--- a/nova/virt/xenapi/client/session.py
+++ b/nova/virt/xenapi/client/session.py
@@ -65,6 +65,7 @@ def apply_session_helpers(session):
session.VM = cli_objects.VM(session)
session.SR = cli_objects.SR(session)
session.VDI = cli_objects.VDI(session)
+ session.VIF = cli_objects.VIF(session)
session.VBD = cli_objects.VBD(session)
session.PBD = cli_objects.PBD(session)
session.PIF = cli_objects.PIF(session)
diff --git a/nova/virt/xenapi/vif.py b/nova/virt/xenapi/vif.py
index 5c7a350..b9660be 100644
--- a/nova/virt/xenapi/vif.py
+++ b/nova/virt/xenapi/vif.py
@@ -23,6 +23,7 @@ from oslo_log import log as logging
from nova import exception
from nova.i18n import _
from nova.i18n import _LW
+from nova.network import model as network_model
from nova.virt.xenapi import network_utils
from nova.virt.xenapi import vm_utils
@@ -185,11 +186,18 @@ class XenAPIBridgeDriver(XenVIFDriver):
def unplug(self, instance, vif, vm_ref):
super(XenAPIBridgeDriver, self).unplug(instance, vif, vm_ref)
+ def post_start_actions(self, instance, vif_ref):
+ """no further actions needed for this driver type"""
+ pass
+
class XenAPIOpenVswitchDriver(XenVIFDriver):
"""VIF driver for Open vSwitch with XenAPI."""
def plug(self, instance, vif, vm_ref=None, device=None):
+ """create an interim network for this vif; and build
+ the vif_rec which will be used by xapi to create VM vif
+ """
if not vm_ref:
vm_ref = vm_utils.lookup(self._session, instance['name'])
@@ -203,10 +211,10 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
if not device:
device = 0
- # with OVS model, always plug into an OVS integration bridge
- # that is already created
- network_ref = network_utils.find_network_with_bridge(
- self._session, CONF.xenserver.ovs_integration_bridge)
+ # Create an interim network for each VIF, so dom0 has a single
+ # bridge for each device (the emulated and PV ethernet devices
+ # will both be on this bridge).
+ network_ref = self.create_vif_interim_network(vif)
vif_rec = {}
vif_rec['device'] = str(device)
vif_rec['network'] = network_ref
@@ -221,4 +229,157 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
return self._create_vif(vif, vif_rec, vm_ref)
def unplug(self, instance, vif, vm_ref):
+ """unplug vif:
+ 1. unplug and destroy vif.
+ 2. delete the patch port pair between the integration bridge and
+ the interim network.
+ 3. destroy the interim network
+ 4. delete the OVS bridge service for the interim network
+ """
super(XenAPIOpenVswitchDriver, self).unplug(instance, vif, vm_ref)
+
+ net_name = self.get_vif_interim_net_name(vif)
+ network = network_utils.find_network_with_name_label(
+ self._session, net_name)
+ if network is None:
+ return
+ vifs = self._session.network.get_VIFs(network)
+ if vifs:
+ # only remove the interim network when it's empty.
+ # for resize/migrate on local host, vifs on both of the
+ # source and target VM will be connected to the same
+ # interim network.
+ return
+ LOG.debug('destroying patch port pair for vif: vif_id=%(vif_id)s',
+ {'vif_id': vif['id']})
+ bridge_name = self._session.network.get_bridge(network)
+ patch_port1, patch_port2 = self._get_patch_port_pair_names(vif['id'])
+ try:
+ # delete the patch port pair
+ self._ovs_del_port(bridge_name, patch_port1)
+ self._ovs_del_port(CONF.xenserver.ovs_integration_bridge,
+ patch_port2)
+ except Exception as e:
+ LOG.warn(_LW("Failed to delete patch port pair for vif %(if)s,"
+ " exception:%(exception)s"),
+ {'if': vif, 'exception': e}, instance=instance)
+ raise exception.VirtualInterfaceUnplugException(
+ reason=_("Failed to delete patch port pair"))
+
+ LOG.debug('destroying network: network=%(network)s,'
+ 'bridge=%(br)s',
+ {'network': network, 'br': bridge_name})
+ try:
+ self._session.network.destroy(network)
+ # delete bridge if it still exists.
+ # As there is patch port existing on this bridge when destroying
+ # the VM vif (which happens when shutdown the VM), the bridge
+ # won't be destroyed automatically by XAPI. So let's destroy it
+ # at here.
+ self._ovs_del_br(bridge_name)
+ except Exception as e:
+ LOG.warn(_LW("Failed to delete bridge for vif %(if)s, "
+ "exception:%(exception)s"),
+ {'if': vif, 'exception': e}, instance=instance)
+ raise exception.VirtualInterfaceUnplugException(
+ reason=_("Failed to delete bridge"))
+
+ def post_start_actions(self, instance, vif_ref):
+ """Do needed actions post vif start:
+ plug the interim ovs bridge to the integration bridge;
+ set external_ids to the int-br port which will service
+ for this vif.
+ """
+ vif_rec = self._session.VIF.get_record(vif_ref)
+ network_ref = vif_rec['network']
+ bridge_name = self._session.network.get_bridge(network_ref)
+ network_uuid = self._session.network.get_uuid(network_ref)
+ iface_id = vif_rec['other_config']['nicira-iface-id']
+ patch_port1, patch_port2 = self._get_patch_port_pair_names(iface_id)
+ LOG.debug('plug_ovs_bridge: port1=%(port1)s, port2=%(port2)s,'
+ 'network_uuid=%(uuid)s, bridge_name=%(bridge_name)s',
+ {'port1': patch_port1, 'port2': patch_port2,
+ 'uuid': network_uuid, 'bridge_name': bridge_name})
+ if bridge_name is None:
+ raise exception.VirtualInterfacePlugException(
+ _("Failed to find bridge for vif"))
+
+ self._ovs_add_patch_port(bridge_name, patch_port1, patch_port2)
+ self._ovs_add_patch_port(CONF.xenserver.ovs_integration_bridge,
+ patch_port2, patch_port1)
+ self._ovs_map_external_ids(patch_port2, vif_rec)
+
+ def get_vif_interim_net_name(self, vif):
+ return ("net-" + vif['id'])[:network_model.NIC_NAME_LEN]
+
+ def create_vif_interim_network(self, vif):
+ net_name = self.get_vif_interim_net_name(vif)
+ network_rec = {'name_label': net_name,
+ 'name_description': "interim network for vif",
+ 'other_config': {}}
+ network_ref = network_utils.find_network_with_name_label(
+ self._session, net_name)
+ if network_ref:
+ # already exist, just return
+ # in some scenarios, e.g. resize/migrate, it won't create a new
+ # interim network.
+ return network_ref
+ try:
+ network_ref = self._session.network.create(network_rec)
+ except Exception as e:
+ LOG.warn(_LW("Failed to create interim network for vif %(if)s, "
+ "exception:%(exception)s"),
+ {'if': vif, 'exception': e})
+ raise exception.VirtualInterfacePlugException(
+ _("Failed to create the interim network for vif"))
+ return network_ref
+
+ def _get_patch_port_pair_names(self, iface_id):
+ return (("pp1-%s" % iface_id)[:network_model.NIC_NAME_LEN],
+ ("pp2-%s" % iface_id)[:network_model.NIC_NAME_LEN])
+
+ def _ovs_add_patch_port(self, bridge_name, port_name, peer_port_name):
+ cmd = 'ovs_add_patch_port'
+ args = {'bridge_name': bridge_name,
+ 'port_name': port_name,
+ 'peer_port_name': peer_port_name
+ }
+ self._exec_dom0_cmd(cmd, args)
+
+ def _ovs_del_port(self, bridge_name, port_name):
+ cmd = 'ovs_del_port'
+ args = {'bridge_name': bridge_name,
+ 'port_name': port_name
+ }
+ self._exec_dom0_cmd(cmd, args)
+
+ def _ovs_del_br(self, bridge_name):
+ cmd = 'ovs_del_br'
+ args = {'bridge_name': bridge_name}
+ self._exec_dom0_cmd(cmd, args)
+
+ def _ovs_set_if_external_id(self, interface, extneral_id, value):
+ cmd = 'ovs_set_if_external_id'
+ args = {'interface': interface,
+ 'extneral_id': extneral_id,
+ 'value': value}
+ self._exec_dom0_cmd(cmd, args)
+
+ def _ovs_map_external_ids(self, interface, vif_rec):
+ '''set external ids on the integration bridge vif
+ '''
+ mac = vif_rec['MAC']
+ iface_id = vif_rec['other_config']['nicira-iface-id']
+ vif_uuid = vif_rec['uuid']
+ status = 'active'
+
+ self._ovs_set_if_external_id(interface, 'attached-mac', mac)
+ self._ovs_set_if_external_id(interface, 'iface-id', iface_id)
+ self._ovs_set_if_external_id(interface, 'xs-vif-uuid', vif_uuid)
+ self._ovs_set_if_external_id(interface, 'iface-status', status)
+
+ def _exec_dom0_cmd(self, cmd, cmd_args):
+ args = {'cmd': cmd,
+ 'args': cmd_args
+ }
+ self._session.call_plugin_serialized('xenhost', 'network_config', args)
diff --git a/nova/virt/xenapi/vmops.py b/nova/virt/xenapi/vmops.py
index ab9368c..51d9627 100644
--- a/nova/virt/xenapi/vmops.py
+++ b/nova/virt/xenapi/vmops.py
@@ -348,6 +348,18 @@ class VMOps(object):
if bad_volumes_callback and bad_devices:
bad_volumes_callback(bad_devices)
+ # Do some operations which have to be done after start:
+ # e.g. The vif's interim bridge won't be created until VM starts.
+ # So the operations on the interim bridge have to be done after
+ # start.
+ self._post_start_actions(instance)
+
+ def _post_start_actions(self, instance):
+ vm_ref = vm_utils.lookup(self._session, instance['name'])
+ vif_refs = self._session.call_xenapi("VM.get_VIFs", vm_ref)
+ for vif_ref in vif_refs:
+ self.vif_driver.post_start_actions(instance, vif_ref)
+
def _get_vdis_for_instance(self, context, instance, name_label,
image_meta, image_type, block_device_info):
"""Create or connect to all virtual disks for this instance."""
@@ -1935,13 +1947,20 @@ class VMOps(object):
"""Creates vifs for an instance."""
LOG.debug("Creating vifs", instance=instance)
+ vif_refs = []
# this function raises if vm_ref is not a vm_opaque_ref
self._session.call_xenapi("VM.get_domid", vm_ref)
for device, vif in enumerate(network_info):
LOG.debug('Create VIF %s', vif, instance=instance)
- self.vif_driver.plug(instance, vif, vm_ref=vm_ref, device=device)
+ vif_ref = self.vif_driver.plug(instance, vif,
+ vm_ref=vm_ref, device=device)
+ vif_refs.append(vif_ref)
+
+ LOG.debug('Created the vif_refs: %(vifs)s for VM name: %(name)s',
+ {'vifs': vif_refs, 'name': instance['name']},
+ instance=instance)
def plug_vifs(self, instance, network_info):
"""Set up VIF networking on the host."""

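create_vif_interim_network() above is deliberately idempotent: on a same-host resize or migrate the source and target VM share one interim network, so an existing network with the matching name label is reused. A sketch of that shape with the XenAPI calls abstracted away (find_network/create_network are hypothetical stand-ins):

def ensure_interim_network(vif_id, find_network, create_network,
                           nic_name_len=14):  # assumed NIC_NAME_LEN
    net_name = ('net-' + vif_id)[:nic_name_len]
    network_ref = find_network(net_name)
    if network_ref:
        return network_ref  # reuse: same-host resize/migrate
    return create_network({'name_label': net_name,
                           'name_description': 'interim network for vif',
                           'other_config': {}})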

@ -1,28 +0,0 @@
diff --git a/nova/virt/xenapi/vm_utils.py b/nova/virt/xenapi/vm_utils.py
index f88cb3a..982ad37 100644
--- a/nova/virt/xenapi/vm_utils.py
+++ b/nova/virt/xenapi/vm_utils.py
@@ -137,6 +137,7 @@ MBR_SIZE_BYTES = MBR_SIZE_SECTORS * SECTOR_SIZE
KERNEL_DIR = '/boot/guest'
MAX_VDI_CHAIN_SIZE = 16
PROGRESS_INTERVAL_SECONDS = 300
+DD_BLOCKSIZE = 65536
# Fudge factor to allow for the VHD chain to be slightly larger than
# the partitioned space. Otherwise, legitimate images near their
@@ -1159,6 +1160,7 @@ def generate_configdrive(session, instance, vm_ref, userdevice,
utils.execute('dd',
'if=%s' % tmp_file,
'of=%s' % dev_path,
+ 'bs=%d' % DD_BLOCKSIZE,
'oflag=direct,sync',
run_as_root=True)
@@ -2426,6 +2428,7 @@ def _copy_partition(session, src_ref, dst_ref, partition, virtual_size):
utils.execute('dd',
'if=%s' % src_path,
'of=%s' % dst_path,
+ 'bs=%d' % DD_BLOCKSIZE,
'count=%d' % num_blocks,
'iflag=direct,sync',
'oflag=direct,sync',

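The only change here is an explicit block size: without bs=, dd copies in 512-byte blocks, which is painfully slow against direct-I/O block devices. A sketch of the resulting command line (device paths are made up for illustration):

DD_BLOCKSIZE = 65536  # 64 KiB blocks instead of dd's 512-byte default

cmd = ['dd',
       'if=/dev/mapper/src-vdi',   # hypothetical source path
       'of=/dev/xvdd',             # hypothetical destination path
       'bs=%d' % DD_BLOCKSIZE,
       'oflag=direct,sync']
# nova runs this through utils.execute(..., run_as_root=True);
# subprocess.check_call(cmd) would be the plain-Python equivalent.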

@ -1,22 +0,0 @@
diff --git a/nova/virt/xenapi/vm_utils.py b/nova/virt/xenapi/vm_utils.py
index 583a913..f88cb3a 100644
--- a/nova/virt/xenapi/vm_utils.py
+++ b/nova/virt/xenapi/vm_utils.py
@@ -1179,7 +1179,7 @@ def _create_kernel_image(context, session, instance, name_label, image_id,
Returns: A list of dictionaries that describe VDIs
"""
filename = ""
- if CONF.xenserver.cache_images:
+ if CONF.xenserver.cache_images != 'none':
args = {}
args['cached-image'] = image_id
args['new-image-uuid'] = str(uuid.uuid4())
@@ -1569,7 +1569,7 @@ def _fetch_disk_image(context, session, instance, name_label, image_id,
# Let the plugin copy the correct number of bytes.
args['image-size'] = str(vdi_size)
- if CONF.xenserver.cache_images:
+ if CONF.xenserver.cache_images != 'none':
args['cached-image'] = image_id
filename = session.call_plugin('kernel', 'copy_vdi', args)

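cache_images is a string option ('all', 'some' or 'none'), so the old truthiness test treated the literal string 'none' as enabled, since any non-empty string is truthy. The corrected predicate, as a sketch:

def caching_enabled(cache_images):
    # 'none' must disable caching; 'all' and 'some' enable it.
    return cache_images != 'none'

assert caching_enabled('all')
assert caching_enabled('some')
assert not caching_enabled('none')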

@ -1,231 +0,0 @@
diff --git a/nova/virt/xenapi/driver.py b/nova/virt/xenapi/driver.py
index 77483fa..899c083 100644
--- a/nova/virt/xenapi/driver.py
+++ b/nova/virt/xenapi/driver.py
@@ -104,6 +104,13 @@ OVERHEAD_PER_VCPU = 1.5
class XenAPIDriver(driver.ComputeDriver):
"""A connection to XenServer or Xen Cloud Platform."""
+ capabilities = {
+ "has_imagecache": False,
+ "supports_recreate": False,
+ "supports_migrate_to_same_host": False,
+ "supports_attach_interface": True,
+ "supports_device_tagging": False,
+ }
def __init__(self, virtapi, read_only=False):
super(XenAPIDriver, self).__init__(virtapi)
@@ -681,3 +688,39 @@ class XenAPIDriver(driver.ComputeDriver):
:returns: dict of nova uuid => dict of usage info
"""
return self._vmops.get_per_instance_usage()
+
+ def attach_interface(self, instance, image_meta, vif):
+ """Use hotplug to add a network interface to a running instance.
+
+ The counter action to this is :func:`detach_interface`.
+
+ :param context: The request context.
+ :param nova.objects.instance.Instance instance:
+ The instance which will get an additional network interface.
+ :param nova.objects.ImageMeta image_meta:
+ The metadata of the image of the instance.
+ :param nova.network.model.VIF vif:
+ The object which has the information about the interface to attach.
+
+ :raise nova.exception.NovaException: If the attach fails.
+
+ :return: None
+ """
+ self._vmops.attach_interface(instance, vif)
+
+ def detach_interface(self, instance, vif):
+ """Use hotunplug to remove a network interface from a running instance.
+
+ The counter action to this is :func:`attach_interface`.
+
+ :param context: The request context.
+ :param nova.objects.instance.Instance instance:
+ The instance which gets a network interface removed.
+ :param nova.network.model.VIF vif:
+ The object which has the information about the interface to detach.
+
+ :raise nova.exception.NovaException: If the detach fails.
+
+ :return: None
+ """
+ self._vmops.detach_interface(instance, vif)
diff --git a/nova/virt/xenapi/vif.py b/nova/virt/xenapi/vif.py
index a474d23..6b07a62 100644
--- a/nova/virt/xenapi/vif.py
+++ b/nova/virt/xenapi/vif.py
@@ -20,6 +20,7 @@
from oslo_config import cfg
from oslo_log import log as logging
+from nova.compute import power_state
from nova import exception
from nova.i18n import _
from nova.i18n import _LW
@@ -77,6 +78,8 @@ class XenVIFDriver(object):
LOG.debug("vif didn't exist, no need to unplug vif %s",
vif, instance=instance)
return
+ # hot unplug the VIF first
+ self.hot_unplug(vif, instance, vm_ref, vif_ref)
self._session.call_xenapi('VIF.destroy', vif_ref)
except Exception as e:
LOG.warn(
@@ -85,6 +88,44 @@ class XenVIFDriver(object):
raise exception.NovaException(
reason=_("Failed to unplug vif %s") % vif)
+ def hot_plug(self, vif, instance, vm_ref, vif_ref):
+ """hotplug virtual interface to running instance.
+ :param nova.network.model.VIF vif:
+ The object which has the information about the interface to attach.
+ :param nova.objects.instance.Instance instance:
+ The instance which will get an additional network interface.
+ :param string vm_ref:
+ The instance's reference from hypervisor's point of view.
+ :param string vif_ref:
+ The interface's reference from hypervisor's point of view.
+ :return: None
+ """
+ pass
+
+ def hot_unplug(self, vif, instance, vm_ref, vif_ref):
+ """hot unplug virtual interface from running instance.
+ :param nova.network.model.VIF vif:
+ The object which has the information about the interface to detach.
+ :param nova.objects.instance.Instance instance:
+ The instance which will remove additional network interface.
+ :param string vm_ref:
+ The instance's reference from hypervisor's point of view.
+ :param string vif_ref:
+ The interface's reference from hypervisor's point of view.
+ :return: None
+ """
+ pass
+
+ def post_start_actions(self, instance, vif_ref):
+ """post actions when the instance is power on.
+ :param nova.objects.instance.Instance instance:
+ The instance which will execute extra actions after power on
+ :param string vif_ref:
+ The interface's reference from hypervisor's point of view.
+ :return: None
+ """
+ pass
+
class XenAPIBridgeDriver(XenVIFDriver):
"""VIF Driver for XenAPI that uses XenAPI to create Networks."""
@@ -186,10 +227,6 @@ class XenAPIBridgeDriver(XenVIFDriver):
def unplug(self, instance, vif, vm_ref):
super(XenAPIBridgeDriver, self).unplug(instance, vif, vm_ref)
- def post_start_actions(self, instance, vif_ref):
- """no further actions needed for this driver type"""
- pass
-
class XenAPIOpenVswitchDriver(XenVIFDriver):
"""VIF driver for Open vSwitch with XenAPI."""
@@ -226,7 +263,12 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
# OVS on the hypervisor monitors this key and uses it to
# set the iface-id attribute
vif_rec['other_config'] = {'nicira-iface-id': vif['id']}
- return self._create_vif(vif, vif_rec, vm_ref)
+ vif_ref = self._create_vif(vif, vif_rec, vm_ref)
+
+ # call XenAPI to plug vif
+ self.hot_plug(vif, instance, vm_ref, vif_ref)
+
+ return vif_ref
def unplug(self, instance, vif, vm_ref):
"""unplug vif:
@@ -294,6 +336,29 @@ class XenAPIOpenVswitchDriver(XenVIFDriver):
raise exception.VirtualInterfaceUnplugException(
reason=_("Failed to delete bridge"))
+ def hot_plug(self, vif, instance, vm_ref, vif_ref):
+ # hot plug vif only when VM's power state is running
+ LOG.debug("Hot plug vif, vif: %s", vif, instance=instance)
+ state = vm_utils.get_power_state(self._session, vm_ref)
+ if state != power_state.RUNNING:
+ LOG.debug("Skip hot plug VIF, VM is not running, vif: %s", vif,
+ instance=instance)
+ return
+
+ self._session.VIF.plug(vif_ref)
+ self.post_start_actions(instance, vif_ref)
+
+ def hot_unplug(self, vif, instance, vm_ref, vif_ref):
+ # hot unplug vif only when VM's power state is running
+ LOG.debug("Hot unplug vif, vif: %s", vif, instance=instance)
+ state = vm_utils.get_power_state(self._session, vm_ref)
+ if state != power_state.RUNNING:
+ LOG.debug("Skip hot unplug VIF, VM is not running, vif: %s", vif,
+ instance=instance)
+ return
+
+ self._session.VIF.unplug(vif_ref)
+
def _get_qbr_name(self, iface_id):
return ("qbr" + iface_id)[:network_model.NIC_NAME_LEN]
diff --git a/nova/virt/xenapi/vmops.py b/nova/virt/xenapi/vmops.py
index 1c93eac..182873f 100644
--- a/nova/virt/xenapi/vmops.py
+++ b/nova/virt/xenapi/vmops.py
@@ -2522,3 +2522,47 @@ class VMOps(object):
volume_utils.forget_sr(self._session, sr_uuid_map[sr_ref])
return sr_uuid_map
+
+ def attach_interface(self, instance, vif):
+ LOG.debug("Attach interface, vif info: %s", vif, instance=instance)
+ vm_ref = self._get_vm_opaque_ref(instance)
+
+ @utils.synchronized('xenapi-vif-' + vm_ref)
+ def _attach_interface(instance, vm_ref, vif):
+ # find device for use with XenAPI
+ allowed_devices = self._session.VM.get_allowed_VIF_devices(vm_ref)
+ if allowed_devices is None or len(allowed_devices) == 0:
+ raise exception.InterfaceAttachFailed(
+ _('attach network interface %(vif_id)s to instance '
+ '%(instance_uuid)s failed, no allowed devices.'),
+ vif_id=vif['id'], instance_uuid=instance.uuid)
+ device = allowed_devices[0]
+ try:
+ # plug VIF
+ self.vif_driver.plug(instance, vif, vm_ref=vm_ref,
+ device=device)
+ # set firewall filtering
+ self.firewall_driver.setup_basic_filtering(instance, [vif])
+ except exception.NovaException:
+ with excutils.save_and_reraise_exception():
+ LOG.exception(_LE('attach network interface %s failed.'),
+ vif['id'], instance=instance)
+ try:
+ self.vif_driver.unplug(instance, vif, vm_ref)
+ except exception.NovaException:
+ # if unplug failed, no need to raise exception
+ LOG.warning(_LW('Unplug VIF %s failed.'),
+ vif['id'], instance=instance)
+
+ _attach_interface(instance, vm_ref, vif)
+
+ def detach_interface(self, instance, vif):
+ LOG.debug("Detach interface, vif info: %s", vif, instance=instance)
+
+ try:
+ vm_ref = self._get_vm_opaque_ref(instance)
+ self.vif_driver.unplug(instance, vif, vm_ref)
+ except exception.NovaException:
+ with excutils.save_and_reraise_exception():
+ LOG.exception(_LE('detach network interface %s failed.'),
+ vif['id'], instance=instance)

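The attach path above picks the first device index XenAPI will allow, plugs the VIF, and rolls back on failure without letting the rollback mask the original error. The control flow in isolation (the three callables are hypothetical stand-ins for the session and driver calls):

def attach_interface(instance, vif, get_allowed_devices, plug, unplug):
    allowed = get_allowed_devices(instance)
    if not allowed:
        raise RuntimeError('no allowed devices for vif %s' % vif['id'])
    device = allowed[0]
    try:
        plug(instance, vif, device)
    except Exception:
        try:
            unplug(instance, vif)  # best-effort rollback
        except Exception:
            pass  # a failed unplug must not mask the original error
        raise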

@ -1,625 +0,0 @@
#!/usr/bin/env python
# Copyright 2011 OpenStack Foundation
# Copyright 2011 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE: XenServer still only supports Python 2.4 in its dom0 userspace
# which means the Nova xenapi plugins must use only Python 2.4 features
# TODO(sfinucan): Resolve all 'noqa' items once the above is no longer true
#
# XenAPI plugin for host operations
#
try:
import json
except ImportError:
import simplejson as json
import logging
import re
import sys
import time
import utils
import pluginlib_nova as pluginlib
import XenAPI
import XenAPIPlugin
try:
import xmlrpclib
except ImportError:
import six.moves.xmlrpc_client as xmlrpclib
pluginlib.configure_logging("xenhost")
_ = pluginlib._
host_data_pattern = re.compile(r"\s*(\S+) \([^\)]+\) *: ?(.*)")
config_file_path = "/usr/etc/xenhost.conf"
DEFAULT_TRIES = 23
DEFAULT_SLEEP = 10
def jsonify(fnc):
def wrapper(*args, **kwargs):
return json.dumps(fnc(*args, **kwargs))
return wrapper
class TimeoutError(StandardError):
pass
def _run_command(cmd, cmd_input=None):
"""Wrap utils.run_command to raise PluginError on failure
"""
try:
return utils.run_command(cmd, cmd_input=cmd_input)
except utils.SubprocessException, e: # noqa
raise pluginlib.PluginError(e.err)
def _resume_compute(session, compute_ref, compute_uuid):
"""Resume compute node on slave host after pool join.
This has to happen regardless of the success or failure of the join
operation.
"""
try:
# session is valid if the join operation has failed
session.xenapi.VM.start(compute_ref, False, True)
except XenAPI.Failure:
# if session is invalid, e.g. xapi has restarted, then the pool
# join has been successful, wait for xapi to become alive again
for c in range(0, DEFAULT_TRIES):
try:
_run_command(["xe", "vm-start", "uuid=%s" % compute_uuid])
return
except pluginlib.PluginError:
logging.exception('Waited %d seconds for the slave to '
'become available.' % (c * DEFAULT_SLEEP))
time.sleep(DEFAULT_SLEEP)
raise pluginlib.PluginError('Unrecoverable error: the host has '
'not come back for more than %d seconds'
% (DEFAULT_SLEEP * (DEFAULT_TRIES + 1)))
@jsonify
def set_host_enabled(self, arg_dict):
"""Sets this host's ability to accept new instances.
It will otherwise continue to operate normally.
"""
enabled = arg_dict.get("enabled")
if enabled is None:
raise pluginlib.PluginError(
_("Missing 'enabled' argument to set_host_enabled"))
host_uuid = arg_dict['host_uuid']
if enabled == "true":
result = _run_command(["xe", "host-enable", "uuid=%s" % host_uuid])
elif enabled == "false":
result = _run_command(["xe", "host-disable", "uuid=%s" % host_uuid])
else:
raise pluginlib.PluginError(_("Illegal enabled status: %s") % enabled)
# Should be empty string
if result:
raise pluginlib.PluginError(result)
# Return the current enabled status
cmd = ["xe", "host-param-get", "uuid=%s" % host_uuid, "param-name=enabled"]
host_enabled = _run_command(cmd)
if host_enabled == "true":
status = "enabled"
else:
status = "disabled"
return {"status": status}
def _write_config_dict(dct):
conf_file = file(config_file_path, "w")
json.dump(dct, conf_file)
conf_file.close()
def _get_config_dict():
"""Returns a dict containing the key/values in the config file.
If the file doesn't exist, it is created, and an empty dict
is returned.
"""
try:
conf_file = file(config_file_path)
config_dct = json.load(conf_file)
conf_file.close()
except IOError:
# File doesn't exist
config_dct = {}
# Create the file
_write_config_dict(config_dct)
return config_dct
@jsonify
def get_config(self, arg_dict):
"""Return the value stored for the specified key, or None if no match."""
conf = _get_config_dict()
params = arg_dict["params"]
try:
dct = json.loads(params)
except Exception:
dct = params
key = dct["key"]
ret = conf.get(key)
if ret is None:
# Can't jsonify None
return "None"
return ret
@jsonify
def set_config(self, arg_dict):
"""Write the specified key/value pair, overwriting any existing value."""
conf = _get_config_dict()
params = arg_dict["params"]
try:
dct = json.loads(params)
except Exception:
dct = params
key = dct["key"]
val = dct["value"]
if val is None:
# Delete the key, if present
conf.pop(key, None)
else:
conf.update({key: val})
_write_config_dict(conf)
def iptables_config(session, args):
# command should be either save or restore
logging.debug("iptables_config:enter")
logging.debug("iptables_config: args=%s", args)
cmd_args = pluginlib.exists(args, 'cmd_args')
logging.debug("iptables_config: cmd_args=%s", cmd_args)
process_input = pluginlib.optional(args, 'process_input')
logging.debug("iptables_config: process_input=%s", process_input)
cmd = json.loads(cmd_args)
cmd = map(str, cmd)
# either execute iptable-save or iptables-restore
# command must be only one of these two
# process_input must be used only with iptables-restore
if len(cmd) > 0 and cmd[0] in ('iptables-save',
'iptables-restore',
'ip6tables-save',
'ip6tables-restore'):
result = _run_command(cmd, process_input)
ret_str = json.dumps(dict(out=result, err=''))
logging.debug("iptables_config:exit")
return ret_str
# else don't do anything and return an error
else:
raise pluginlib.PluginError(_("Invalid iptables command"))
def _ovs_add_patch_port(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
port_name = pluginlib.exists(args, 'port_name')
peer_port_name = pluginlib.exists(args, 'peer_port_name')
cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port',
port_name, '--', 'add-port', bridge_name, port_name,
'--', 'set', 'interface', port_name,
'type=patch', 'options:peer=%s' % peer_port_name]
return _run_command(cmd_args)
def _ovs_del_port(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
port_name = pluginlib.exists(args, 'port_name')
cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port',
bridge_name, port_name]
return _run_command(cmd_args)
def _ovs_del_br(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
cmd_args = ['ovs-vsctl', '--', '--if-exists',
'del-br', bridge_name]
return _run_command(cmd_args)
def _ovs_set_if_external_id(args):
interface = pluginlib.exists(args, 'interface')
extneral_id = pluginlib.exists(args, 'extneral_id')
value = pluginlib.exists(args, 'value')
cmd_args = ['ovs-vsctl', 'set', 'Interface', interface,
'external-ids:%s=%s' % (extneral_id, value)]
return _run_command(cmd_args)
def _ovs_add_port(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
port_name = pluginlib.exists(args, 'port_name')
cmd_args = ['ovs-vsctl', '--', '--if-exists', 'del-port', port_name,
'--', 'add-port', bridge_name, port_name]
return _run_command(cmd_args)
def _ip_link_get_dev(args):
device_name = pluginlib.exists(args, 'device_name')
cmd_args = ['ip', 'link', 'show', device_name]
return _run_command(cmd_args)
def _ip_link_del_dev(args):
device_name = pluginlib.exists(args, 'device_name')
cmd_args = ['ip', 'link', 'delete', device_name]
return _run_command(cmd_args)
def _ip_link_add_veth_pair(args):
dev1_name = pluginlib.exists(args, 'dev1_name')
dev2_name = pluginlib.exists(args, 'dev2_name')
cmd_args = ['ip', 'link', 'add', dev1_name, 'type', 'veth', 'peer',
'name', dev2_name]
return _run_command(cmd_args)
def _ip_link_set_dev(args):
device_name = pluginlib.exists(args, 'device_name')
option = pluginlib.exists(args, 'option')
cmd_args = ['ip', 'link', 'set', device_name, option]
return _run_command(cmd_args)
def _ip_link_set_promisc(args):
device_name = pluginlib.exists(args, 'device_name')
option = pluginlib.exists(args, 'option')
cmd_args = ['ip', 'link', 'set', device_name, 'promisc', option]
return _run_command(cmd_args)
def _brctl_add_br(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
cmd_args = ['brctl', 'addbr', bridge_name]
return _run_command(cmd_args)
def _brctl_del_br(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
cmd_args = ['brctl', 'delbr', bridge_name]
return _run_command(cmd_args)
def _brctl_set_fd(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
fd = pluginlib.exists(args, 'fd')
cmd_args = ['brctl', 'setfd', bridge_name, fd]
return _run_command(cmd_args)
def _brctl_set_stp(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
option = pluginlib.exists(args, 'option')
cmd_args = ['brctl', 'stp', bridge_name, option]
return _run_command(cmd_args)
def _brctl_add_if(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
if_name = pluginlib.exists(args, 'interface_name')
cmd_args = ['brctl', 'addif', bridge_name, if_name]
return _run_command(cmd_args)
def _brctl_del_if(args):
bridge_name = pluginlib.exists(args, 'bridge_name')
if_name = pluginlib.exists(args, 'interface_name')
cmd_args = ['brctl', 'delif', bridge_name, if_name]
return _run_command(cmd_args)
ALLOWED_NETWORK_CMDS = {
# allowed cmds to config OVS bridge
'ovs_add_patch_port': _ovs_add_patch_port,
'ovs_add_port': _ovs_add_port,
'ovs_del_port': _ovs_del_port,
'ovs_del_br': _ovs_del_br,
'ovs_set_if_external_id': _ovs_set_if_external_id,
'ip_link_add_veth_pair': _ip_link_add_veth_pair,
'ip_link_del_dev': _ip_link_del_dev,
'ip_link_get_dev': _ip_link_get_dev,
'ip_link_set_dev': _ip_link_set_dev,
'ip_link_set_promisc': _ip_link_set_promisc,
'brctl_add_br': _brctl_add_br,
'brctl_add_if': _brctl_add_if,
'brctl_del_br': _brctl_del_br,
'brctl_del_if': _brctl_del_if,
'brctl_set_fd': _brctl_set_fd,
'brctl_set_stp': _brctl_set_stp
}
def network_config(session, args):
"""network config functions"""
cmd = pluginlib.exists(args, 'cmd')
if not isinstance(cmd, basestring):
msg = _("invalid command '%s'") % str(cmd)
raise pluginlib.PluginError(msg)
if cmd not in ALLOWED_NETWORK_CMDS:
msg = _("Dom0 execution of '%s' is not permitted") % cmd
raise pluginlib.PluginError(msg)
cmd_args = pluginlib.exists(args, 'args')
return ALLOWED_NETWORK_CMDS[cmd](cmd_args)
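# Example (editor's sketch): an agent in DomU would call, via
# host.call_plugin, something like
#   network_config(session, {'cmd': 'ovs_add_port',
#                            'args': {'bridge_name': 'xapi1',
#                                     'port_name': 'vif1.0'}})
# which dispatches to _ovs_add_port and runs
#   ovs-vsctl -- --if-exists del-port vif1.0 -- add-port xapi1 vif1.0
# The bridge and port names here are illustrative only.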
def _power_action(action, arg_dict):
# Host must be disabled first
host_uuid = arg_dict['host_uuid']
result = _run_command(["xe", "host-disable", "uuid=%s" % host_uuid])
if result:
raise pluginlib.PluginError(result)
# All running VMs must be shutdown
result = _run_command(["xe", "vm-shutdown", "--multiple",
"resident-on=%s" % host_uuid])
if result:
raise pluginlib.PluginError(result)
cmds = {"reboot": "host-reboot",
"startup": "host-power-on",
"shutdown": "host-shutdown"}
result = _run_command(["xe", cmds[action], "uuid=%s" % host_uuid])
# Should be empty string
if result:
raise pluginlib.PluginError(result)
return {"power_action": action}
@jsonify
def host_reboot(self, arg_dict):
"""Reboots the host."""
return _power_action("reboot", arg_dict)
@jsonify
def host_shutdown(self, arg_dict):
"""Reboots the host."""
return _power_action("shutdown", arg_dict)
@jsonify
def host_start(self, arg_dict):
"""Starts the host.
Currently not feasible, since the host runs on the same machine as
Xen.
"""
return _power_action("startup", arg_dict)
@jsonify
def host_join(self, arg_dict):
"""Join a remote host into a pool.
The pool's master is the host where the plugin is called from. The
following constraints apply:
- The host must have no VMs running, except nova-compute, which
will be shut down (and restarted upon pool-join) automatically,
- The host must have no shared storage currently set up,
- The host must have the same license as the master,
- The host must have the same supplemental packs as the master.
"""
session = XenAPI.Session(arg_dict.get("url"))
session.login_with_password(arg_dict.get("user"),
arg_dict.get("password"))
compute_ref = session.xenapi.VM.get_by_uuid(arg_dict.get('compute_uuid'))
session.xenapi.VM.clean_shutdown(compute_ref)
try:
if arg_dict.get("force"):
session.xenapi.pool.join(arg_dict.get("master_addr"),
arg_dict.get("master_user"),
arg_dict.get("master_pass"))
else:
session.xenapi.pool.join_force(arg_dict.get("master_addr"),
arg_dict.get("master_user"),
arg_dict.get("master_pass"))
finally:
_resume_compute(session, compute_ref, arg_dict.get("compute_uuid"))
@jsonify
def host_data(self, arg_dict):
"""Runs the commands on the xenstore host to return the current status
information.
"""
host_uuid = arg_dict['host_uuid']
resp = _run_command(["xe", "host-param-list", "uuid=%s" % host_uuid])
parsed_data = parse_response(resp)
# We have the raw dict of values. Extract those that we need,
# and convert the data types as needed.
ret_dict = cleanup(parsed_data)
# Add any config settings
config = _get_config_dict()
ret_dict.update(config)
return ret_dict
def parse_response(resp):
data = {}
for ln in resp.splitlines():
if not ln:
continue
mtch = host_data_pattern.match(ln.strip())
try:
k, v = mtch.groups()
data[k] = v
except AttributeError:
# Not a valid line; skip it
continue
return data
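# Example (editor's sketch, assuming host_data_pattern matches the usual
# "name ( flags): value" layout of xe parameter listings):
#   "memory-total ( RO): 34349113344" -> data["memory-total"] = "34349113344"
# Lines that do not match (mtch is None) raise AttributeError on .groups()
# and are skipped.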
@jsonify
def host_uptime(self, arg_dict):
"""Returns the result of the uptime command on the xenhost."""
return {"uptime": _run_command(['uptime'])}
def cleanup(dct):
"""Take the raw KV pairs returned and translate them into the
appropriate types, discarding any we don't need.
"""
def safe_int(val):
"""Integer values will either be string versions of numbers,
or empty strings. Convert the latter to nulls.
"""
try:
return int(val)
except ValueError:
return None
def strip_kv(ln):
return [val.strip() for val in ln.split(":", 1)]
out = {}
# sbs = dct.get("supported-bootloaders", "")
# out["host_supported-bootloaders"] = sbs.split("; ")
# out["host_suspend-image-sr-uuid"] = dct.get("suspend-image-sr-uuid", "")
# out["host_crash-dump-sr-uuid"] = dct.get("crash-dump-sr-uuid", "")
# out["host_local-cache-sr"] = dct.get("local-cache-sr", "")
out["enabled"] = dct.get("enabled", "true") == "true"
out["host_memory"] = omm = {}
omm["total"] = safe_int(dct.get("memory-total", ""))
omm["overhead"] = safe_int(dct.get("memory-overhead", ""))
omm["free"] = safe_int(dct.get("memory-free", ""))
omm["free-computed"] = safe_int(
dct.get("memory-free-computed", ""))
# out["host_API-version"] = avv = {}
# avv["vendor"] = dct.get("API-version-vendor", "")
# avv["major"] = safe_int(dct.get("API-version-major", ""))
# avv["minor"] = safe_int(dct.get("API-version-minor", ""))
out["enabled"] = dct.get("enabled", True)
out["host_uuid"] = dct.get("uuid", None)
out["host_name-label"] = dct.get("name-label", "")
out["host_name-description"] = dct.get("name-description", "")
# out["host_host-metrics-live"] = dct.get(
# "host-metrics-live", "false") == "true"
out["host_hostname"] = dct.get("hostname", "")
out["host_ip_address"] = dct.get("address", "")
oc = dct.get("other-config", "")
out["host_other-config"] = ocd = {}
if oc:
for oc_fld in oc.split("; "):
ock, ocv = strip_kv(oc_fld)
ocd[ock] = ocv
capabilities = dct.get("capabilities", "")
out["host_capabilities"] = capabilities.replace(";", "").split()
# out["host_allowed-operations"] = dct.get(
# "allowed-operations", "").split("; ")
# lsrv = dct.get("license-server", "")
# out["host_license-server"] = ols = {}
# if lsrv:
# for lspart in lsrv.split("; "):
# lsk, lsv = lspart.split(": ")
# if lsk == "port":
# ols[lsk] = safe_int(lsv)
# else:
# ols[lsk] = lsv
# sv = dct.get("software-version", "")
# out["host_software-version"] = osv = {}
# if sv:
# for svln in sv.split("; "):
# svk, svv = strip_kv(svln)
# osv[svk] = svv
cpuinf = dct.get("cpu_info", "")
out["host_cpu_info"] = ocp = {}
if cpuinf:
for cpln in cpuinf.split("; "):
cpk, cpv = strip_kv(cpln)
if cpk in ("cpu_count", "family", "model", "stepping"):
ocp[cpk] = safe_int(cpv)
else:
ocp[cpk] = cpv
# out["host_edition"] = dct.get("edition", "")
# out["host_external-auth-service-name"] = dct.get(
# "external-auth-service-name", "")
return out
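# Worked example (editor's note): given parsed xe output such as
#   {"enabled": "true", "memory-total": "4096", "memory-free": "",
#    "uuid": "abc", "capabilities": "xen-3.0-x86_64; hvm-3.0-x86_32;"}
# cleanup() yields
#   {"enabled": True, "host_memory": {"total": 4096, "free": None, ...},
#    "host_uuid": "abc",
#    "host_capabilities": ["xen-3.0-x86_64", "hvm-3.0-x86_32"], ...}
# i.e. strings become typed values and empty numerics become None.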
def query_gc(session, sr_uuid, vdi_uuid):
result = _run_command(["/opt/xensource/sm/cleanup.py",
"-q", "-u", sr_uuid])
# Example output: "Currently running: True"
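# len("Currently running: ") == 19, hence the slice below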
return result[19:].strip() == "True"
def get_pci_device_details(session):
"""Returns a string that is a list of pci devices with details.
This string is obtained by running the command lspci. With the -vmm
option, it dumps PCI device data in machine-readable form. This verbose
format displays a sequence of records separated by a blank line. We also
use the "-n" option to get vendor_id and device_id as numeric values and
the "-k" option to get the kernel driver in use, if any.
"""
return _run_command(["lspci", "-vmmnk"])
def get_pci_type(session, pci_device):
"""Returns the type of the PCI device (type-PCI, type-VF or type-PF).
pci-device -- The address of the pci device
"""
# We need to add the domain if it is missing
if pci_device.count(':') == 1:
pci_device = "0000:" + pci_device
output = _run_command(["ls", "/sys/bus/pci/devices/" + pci_device + "/"])
if "physfn" in output:
return "type-VF"
if "virtfn" in output:
return "type-PF"
return "type-PCI"
if __name__ == "__main__":
# Support both serialized and non-serialized plugin approaches
_, methodname = xmlrpclib.loads(sys.argv[1])
if methodname in ['query_gc', 'get_pci_device_details', 'get_pci_type',
'network_config']:
utils.register_plugin_calls(query_gc,
get_pci_device_details,
get_pci_type,
network_config)
XenAPIPlugin.dispatch(
{"host_data": host_data,
"set_host_enabled": set_host_enabled,
"host_shutdown": host_shutdown,
"host_reboot": host_reboot,
"host_start": host_start,
"host_join": host_join,
"get_config": get_config,
"set_config": set_config,
"iptables_config": iptables_config,
"host_uptime": host_uptime})


@ -1,123 +0,0 @@
#!/usr/bin/env python
from glanceclient import Client
from keystoneauth1 import loading
from keystoneauth1 import session
import os
from time import sleep
import utils
import yaml
utils.setup_logging('primary_controller_post_deployment.log')
LOG = utils.LOG
def get_keystone_creds():
return {
'username': os.environ.get('OS_USERNAME'),
'password': os.environ.get('OS_PASSWORD'),
'auth_url': os.environ.get('OS_AUTH_URL'),
'tenant_name': os.environ.get('OS_TENANT_NAME'),
}
def get_keystone_session():
loader = loading.get_plugin_loader('password')
creds = get_keystone_creds()
auth = loader.load_from_options(**creds)
return session.Session(auth=auth)
def list_images(sess):
LOG.info('Listing images:')
glance = Client('2', session=sess)
images = glance.images.list()
for image in images:
LOG.info(('+ {name} container_format:{container_format} '
'disk_format:{disk_format} visibility:{visibility} '
'file:{file}').format(**image))
def del_images(sess, image_name):
glance = Client('2', session=sess)
images = glance.images.list()
for image in images:
if image.name == image_name:
glance.images.delete(image.id)
LOG.info('Image %s has been deleted' % image_name)
def add_image(sess, image_name, vm_mode, image_file):
glance = Client('2', session=sess)
image = glance.images.create(name=image_name, container_format="ovf",
disk_format="vhd", visibility="public",
vm_mode=vm_mode)
with open(image_file, 'rb') as f:
glance.images.upload(image.id, f)
LOG.info('Image %s (mode: %s, file: %s) has been added' %
(image_name, vm_mode, image_file))
def wait_ocf_resource_started(timeout, interval):
"""Wait until all ocf resources are started"""
LOG.info("Waiting for all ocf resources to start")
remain_time = timeout
while remain_time > 0:
resources = utils.execute('pcs', 'resource', 'show')
if resources:
exists_not_started = any([("Started" not in line)
for line in resources.split('\n')
if "ocf::fuel" in line])
# All started
if not exists_not_started:
return
sleep(interval)
remain_time -= interval
utils.reportError("Timeout for waiting resources to start")
def mod_ceilometer():
rc, out, err = utils.detailed_execute(
'pcs', 'resource', 'show', 'p_ceilometer-agent-central',
allowed_return_codes=[0, 1])
"""Wait until all ocf resources are started, otherwise there is risk for race
condition: If run "pcs resource restart" while some resources are still in
restarting or initiating stage, it may result into failures for both.
"""
if rc == 0:
wait_ocf_resource_started(300, 10)
LOG.info("Patching ceilometer pipeline.yaml to exclude \
network.servers.*")
# Exclude network.services.* to avoid error 404
pipeline = '/etc/ceilometer/pipeline.yaml'
if not os.path.exists(pipeline):
utils.reportError('%s not found' % pipeline)
with open(pipeline) as f:
ceilometer = yaml.safe_load(f)
sources = utils.astute_get(ceilometer, ('sources',))
if len(sources) != 1:
utils.reportError('ceilometer pipeline must have exactly one source')
source = sources[0]
meters = utils.astute_get(source, ('meters',))
new_meter = '!network.services.*'
if new_meter not in meters:
meters.append(new_meter)
with open(pipeline, "w") as f:
yaml.safe_dump(ceilometer, f)
restart_info = utils.execute(
'pcs', 'resource', 'restart', 'p_ceilometer-agent-central')
LOG.info(restart_info)
if __name__ == '__main__':
sess = get_keystone_session()
list_images(sess)
del_images(sess, "TestVM")
add_image(sess, "TestVM", "xen", "cirros-0.3.4-x86_64-disk.vhd.tgz")
list_images(sess)
mod_ceilometer()


@ -1,69 +0,0 @@
#!/bin/bash
set -eu
# Script to rotate console logs
#
# Should be run on Dom0, with cron, every minute:
# * * * * * /root/rotate_xen_guest_logs.sh
#
# Should clear out the guest logs on every boot
# because the domain ids may get re-used for a
# different tenant after the reboot
#
# /var/log/xen/guest should be mounted into a
# small loopback device to stop any guest being
# able to fill dom0 file system
log_dir="/var/log/xen/guest"
kb=1024
max_size_bytes=$(($kb*$kb))
truncated_size_bytes=$((5*$kb))
syslog_tag='rotate_xen_guest_logs'
log_file_base="${log_dir}/console."
# Only delete log files older than this number of minutes
# to avoid a race where Xen creates the domain and starts
# logging before the XAPI VM start returns (and allows us
# to preserve the log file using last_dom_id)
min_logfile_age=10
# Ensure logging is setup correctly for all domains
xenstore-write /local/logconsole/@ "${log_file_base}%d"
# Grab the list of logs now to prevent a race where the domain is
# started after we get the valid last_dom_ids, but before the logs are
# deleted. Add spaces to ensure we can do containment tests below
current_logs=$(find "$log_dir" -type f)
# Ensure the last_dom_id is set + updated for all running VMs
for vm in $(xe vm-list power-state=running --minimal | tr ',' ' '); do
xe vm-param-set uuid=$vm other-config:last_dom_id=$(xe vm-param-get uuid=$vm param-name=dom-id)
done
# Get the last_dom_id for all VMs
valid_last_dom_ids=$(xe vm-list params=other-config --minimal | tr ';,' '\n\n' | grep last_dom_id | sed -e 's/last_dom_id: //g' | xargs)
echo "Valid dom IDs: $valid_last_dom_ids" | /usr/bin/logger -t $syslog_tag
# Remove old console files that do not correspond to valid last_dom_id's
allowed_consoles=".*console.\(${valid_last_dom_ids// /\\|}\)$"
delete_logs=`find "$log_dir" -type f -mmin +${min_logfile_age} -not -regex "$allowed_consoles"`
for log in $delete_logs; do
if echo "$current_logs" | grep -q -w "$log"; then
echo "Deleting: $log" | /usr/bin/logger -t $syslog_tag
rm $log
fi
done
# Truncate all remaining logs
for log in `find "$log_dir" -type f -regex '.*console.*' -size +${max_size_bytes}c`; do
echo "Truncating log: $log" | /usr/bin/logger -t $syslog_tag
tmp="$log.tmp"
tail -c $truncated_size_bytes "$log" > "$tmp"
mv -f "$tmp" "$log"
# Notify xen that it needs to reload the file
domid="${log##*.}"
xenstore-write /local/logconsole/$domid "$log"
xenstore-rm /local/logconsole/$domid
done


@ -1,69 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""This tool is used to clean up Glance images that are cached in the SR."""
import sys
from oslo_config import cfg
import nova.conf
from nova import config
from nova import utils
from nova.virt.xenapi.client import session
from nova.virt.xenapi import vm_utils
destroy_opts = [
cfg.BoolOpt('all_cached',
default=False,
help='Destroy all cached images instead of just unused cached'
' images.'),
cfg.BoolOpt('dry_run',
default=False,
help='Don\'t actually delete the VDIs.')
]
CONF = nova.conf.CONF
CONF.register_cli_opts(destroy_opts)
def main():
"""By default, this script will only cleanup unused cached images.
Options:
--all_cached - Destroy all cached images instead of just unused cached
images.
--dry_run - Don't actually destroy the VDIs.
"""
config.parse_args(sys.argv)
utils.monkey_patch()
_session = session.XenAPISession(CONF.xenserver.connection_url,
CONF.xenserver.connection_username,
CONF.xenserver.connection_password)
sr_ref = vm_utils.safe_find_sr(_session)
destroyed = vm_utils.destroy_cached_images(
_session, sr_ref, all_cached=CONF.all_cached,
dry_run=CONF.dry_run)
if '--verbose' in sys.argv:
print('\n'.join(destroyed))
print("Destroyed %d cached VDIs" % len(destroyed))
if __name__ == "__main__":
main()


@ -1,313 +0,0 @@
#!/usr/bin/env python
import logging
import netifaces
import os
import subprocess
import tempfile
import yaml
XS_RSA = '/root/.ssh/xs_rsa'
ASTUTE_PATH = '/etc/astute.yaml'
ASTUTE_SECTION = '@PLUGIN_NAME@'
PLUGIN_NAME = '@PLUGIN_NAME@'
LOG_ROOT = '/var/log/@PLUGIN_NAME@'
HIMN_IP = '169.254.0.1'
DIST_PACKAGES_DIR = '/usr/lib/python2.7/dist-packages/'
LOG = logging.getLogger('@PLUGIN_NAME@')
LOG.setLevel(logging.DEBUG)
class ExecutionError(Exception):
pass
class FatalException(Exception):
pass
def reportError(err):
LOG.error(err)
raise FatalException(err)
def detailed_execute(*cmd, **kwargs):
cmd = map(str, cmd)
_env = kwargs.get('env')
env_prefix = ''
if _env:
env_prefix = ''.join(['%s=%s ' % (k, _env[k]) for k in _env])
env = dict(os.environ)
env.update(_env)
else:
env = None
LOG.info(env_prefix + ' '.join(cmd))
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, # nosec
stdout=subprocess.PIPE,
stderr=subprocess.PIPE, env=env)
prompt = kwargs.get('prompt')
if prompt:
(out, err) = proc.communicate(prompt)
else:
(out, err) = proc.communicate()
if out:
# Truncate "\n" if it is the last char
out = out.strip()
LOG.debug(out)
if err:
LOG.info(err)
if proc.returncode is not None and proc.returncode != 0:
if proc.returncode in kwargs.get('allowed_return_codes', [0]):
LOG.info('Swallowed acceptable return code of %d',
proc.returncode)
else:
LOG.warn('proc.returncode: %s', proc.returncode)
raise ExecutionError(err)
return proc.returncode, out, err
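# Usage sketch (editor's addition): return code, stdout and stderr come
# back as a tuple, and non-zero exits raise unless whitelisted, e.g.
#   rc, out, err = detailed_execute('ls', '/tmp',
#                                   allowed_return_codes=[0, 2])
# An optional env dict is merged over os.environ, and prompt (if given)
# is written to the child's stdin.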
def execute(*cmd, **kwargs):
_, out, _ = detailed_execute(*cmd, **kwargs)
return out
def patch(directory, patch_file, level):
cwd = os.getcwd()
patchset_dir = os.path.join(cwd, "patchset")
patches_applied = os.path.join(patchset_dir, "patches_applied")
patched = False
if os.path.exists(patches_applied):
with open(patches_applied) as f:
patches = f.read().split('\n')
patched = patch_file in patches
if not patched:
# use '--forward' to ignore patches that seem to be reversed or
# already applied.
ret_code, out, err = detailed_execute(
'patch', '--forward', '-d', directory, '-p%s' % level,
'-i', os.path.join(patchset_dir, patch_file),
allowed_return_codes=[0, 1])
if ret_code == 1:
skip_reason = 'Reversed (or previously applied) patch detected!'
if skip_reason in out or skip_reason in err:
LOG.info('Skipping patching %s: not needed anymore.'
% patch_file)
else:
raise ExecutionError('Failed patching %s' % patch_file)
else:
LOG.info('%s is applied successfully.' % patch_file)
with open(patches_applied, "a") as f:
f.write(patch_file + "\n")
else:
LOG.info("%s is already applied - skipping" % patch_file)
def ssh(host, username, *cmd, **kwargs):
cmd = map(str, cmd)
return execute('ssh', '-i', XS_RSA,
'-o', 'StrictHostKeyChecking=no',
'%s@%s' % (username, host), *cmd, **kwargs)
def ssh_detailed(host, username, *cmd, **kwargs):
cmd = map(str, cmd)
return detailed_execute('ssh', '-i', XS_RSA,
'-o', 'StrictHostKeyChecking=no',
'%s@%s' % (username, host), *cmd, **kwargs)
def scp(host, username, target_path, filename):
return execute('scp', '-i', XS_RSA,
'-o', 'StrictHostKeyChecking=no', filename,
'%s@%s:%s' % (username, host, target_path))
def setup_logging(filename):
LOG_FILE = os.path.join(LOG_ROOT, filename)
if not os.path.exists(LOG_ROOT):
os.mkdir(LOG_ROOT)
logging.basicConfig(
filename=LOG_FILE, level=logging.WARNING,
format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
def get_astute(astute_path=ASTUTE_PATH):
"""Return the root object read from astute.yaml"""
if not os.path.exists(astute_path):
reportError('%s not found' % astute_path)
with open(astute_path) as f:
astute = yaml.safe_load(f)
return astute
def astute_get(dct, keys, default=None, fail_if_missing=True):
"""A safe dictionary getter"""
for key in keys:
if key in dct:
dct = dct[key]
else:
if fail_if_missing:
reportError('Value of "%s" is missing' % key)
return default
return dct
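# Worked example (editor's note):
#   astute_get({'a': {'b': 1}}, ('a', 'b'))        # -> 1
#   astute_get({'a': {}}, ('a', 'b'),
#              default=0, fail_if_missing=False)   # -> 0
# With fail_if_missing=True (the default) a missing key aborts via
# reportError instead of returning the default.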
def get_options(astute, astute_section=ASTUTE_SECTION):
"""Return username and password filled in plugin."""
if astute_section not in astute:
reportError('%s not found' % astute_section)
options = astute[astute_section]
LOG.info('username: {username}'.format(**options))
# do not write the plaintext password into the log
LOG.info('password: ******')
LOG.info('install_xapi: {install_xapi}'.format(**options))
return options['username'], options['password'], \
options['install_xapi']
def eth_to_mac(eth):
return netifaces.ifaddresses(eth).get(netifaces.AF_LINK)[0]['addr']
def detect_himn_ip(eths=None):
if eths is None:
eths = netifaces.interfaces()
for eth in eths:
ip = netifaces.ifaddresses(eth).get(netifaces.AF_INET)
if ip is None:
continue
himn_local = ip[0]['addr']
himn_xs = '.'.join(himn_local.split('.')[:-1] + ['1'])
if HIMN_IP == himn_xs:
return eth, ip
return None, None
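# Example (editor's note): if eth1 carries 169.254.0.42, the derived
# gateway '.'.join(['169', '254', '0'] + ['1']) == '169.254.0.1' equals
# HIMN_IP, so (eth1, its AF_INET address list) is returned; interfaces on
# other subnets derive a different ".1" address and are skipped.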
def find_eth_xenstore():
domid = execute('xenstore-read', 'domid')
himn_mac = execute(
'xenstore-read',
'/local/domain/%s/vm-data/himn_mac' % domid)
LOG.info('himn_mac: %s' % himn_mac)
eths = [eth for eth in netifaces.interfaces()
if eth_to_mac(eth) == himn_mac]
if len(eths) != 1:
reportError('Cannot find eth matches himn_mac')
return eths[0]
def detect_eth_dhclient():
for eth in netifaces.interfaces():
# Don't try dhclient on devices that already have an IP address
ip = netifaces.ifaddresses(eth).get(netifaces.AF_INET)
if ip:
continue
# DHCP replies from HIMN should be super fast
execute('timeout', '2s', 'dhclient', eth,
allowed_return_codes=[0, 124])
try:
_, ip = detect_himn_ip([eth])
if ip is not None:
return eth
finally:
execute('dhclient', '-r', eth)
def init_eth():
"""Initialize the net interface connected to HIMN
Returns:
the HIMN-facing device name and the local IP address on it.
"""
eth, ip = detect_himn_ip()
if not ip:
eth = None
try:
eth = find_eth_xenstore()
except Exception:
LOG.debug('Failed to find MAC through xenstore', exc_info=True)
if eth is None:
eth = detect_eth_dhclient()
if eth is None:
reportError('Failed to detect HIMN ethernet device')
LOG.info('himn_eth: %s' % eth)
execute('dhclient', eth)
fname = '/etc/network/interfaces.d/ifcfg-' + eth
s = ('auto {eth}\n'
'iface {eth} inet dhcp\n'
'post-up route del default dev {eth}').format(eth=eth)
with open(fname, 'w') as f:
f.write(s)
LOG.info('%s created' % fname)
execute('ifdown', eth)
execute('ifup', eth)
ip = netifaces.ifaddresses(eth).get(netifaces.AF_INET)
himn_local = ip[0]['addr']
himn_xs = '.'.join(himn_local.split('.')[:-1] + ['1'])
if HIMN_IP != himn_xs:
# Not on the HIMN - we failed here.
LOG.info('himn_local: DHCP returned incorrect IP %s' %
ip[0]['addr'])
ip = None
if not ip:
reportError('HIMN failed to get IP address from Hypervisor')
LOG.info('himn_local: %s' % ip[0]['addr'])
return eth, ip[0]['addr']
def add_cron_job(user, job_entry):
crontab_cmd = 'crontab'
out = execute(crontab_cmd, '-l', '-u', user)
entries = []
if out is not None:
entries = out.split('\n')
if job_entry not in entries:
# avoid duplicated entries
entries.append(job_entry)
else:
entries = [job_entry]
entries_str = '\n'.join(entries)
# the ending '\n' is mandatory for crontab.
entries_str += '\n'
temp_fd, temp_path = tempfile.mkstemp()
with open(temp_path, 'w') as tmp_file:
tmp_file.write(entries_str)
execute(crontab_cmd, '-u', user, temp_path)
os.close(temp_fd)
os.remove(temp_path)
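# Usage sketch (editor's addition), mirroring the rotation script's
# suggested schedule:
#   add_cron_job('root', '* * * * * /root/rotate_xen_guest_logs.sh')
# The entry is appended only if absent, and the temp file handed to
# "crontab -u root <file>" replaces the whole crontab.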
def get_xcp_version(himn, username):
xcp_ver = ssh(himn, username,
('xe host-param-get uuid=$(xe host-list --minimal) '
'param-name=software-version '
'param-key=platform_version'))
return xcp_ver


@ -1,44 +0,0 @@
- id: 'install-pv-tool'
version: 2.0.0
role: ['compute']
required_for: ['compute-pre-test']
requires: ['network_configuration_end']
type: shell
parameters:
cmd: 'dpkg -i ./xe-guest-utilities_7.0.0-24_all.deb'
timeout: 60
- id: 'compute-pre-test'
version: 2.0.0
role: ['compute']
requires: ['install-pv-tool']
type: shell
parameters:
cmd: ./compute_pre_test.py
timeout: 120
- id: 'compute-post-deployment'
version: 2.0.0
role: ['compute']
required_for: ['post_deployment_end']
requires: ['post_deployment_start']
type: shell
parameters:
cmd: ./compute_post_deployment.py
timeout: 300
- id: 'controller-post-deployment'
version: 2.0.0
role: ['primary-controller', 'controller']
required_for: ['primary-controller-post-deployment']
requires: ['post_deployment_start']
type: shell
parameters:
cmd: ./controller_post_deployment.py
timeout: 600
- id: 'primary-controller-post-deployment'
version: 2.0.0
role: ['primary-controller']
required_for: ['post_deployment_end']
requires: ['controller-post-deployment']
type: shell
parameters:
cmd: source /root/openrc && ./primary_controller_post_deployment.py
timeout: 600


@ -1,32 +0,0 @@
attributes:
metadata:
restrictions:
- "settings:common.libvirt_type.value == 'kvm'"
- "settings:storage.volumes_ceph.value == true"
- "settings:storage.ephemeral_ceph.value == true"
- "settings:additional_components.sahara.value == true"
- "settings:additional_components.murano.value == true"
- "settings:additional_components.mongo.value == true"
- "settings:additional_components.ironic.value == true"
group: 'compute'
username:
value: 'root'
label: 'Username'
description: ''
weight: 10
type: "text"
password:
value: ''
label: 'Password'
description: ''
weight: 20
type: "password"
regex:
source: '\S'
error: "Password cannot be empty"
install_xapi:
value: true
label: 'Install Nova Plugins'
description: ''
weight: 30
type: "checkbox"


@ -1,33 +0,0 @@
# Plugin name
name: @PLUGIN_NAME@
# Human-readable name for your plugin
title: @HYPERVISOR_NAME@ Plugin
# Plugin version
version: '@PLUGIN_VERSION@.@PLUGIN_REVISION@'
# Description
description: Enable Mirantis OpenStack to integrate with @HYPERVISOR_NAME@
# Required fuel version
fuel_version: ['9.0']
# Specify license of your plugin
licenses: ['Apache License Version 2.0']
# Specify author or company name
authors: ['Citrix']
# A link to the plugin's page
homepage: 'https://git.openstack.org/cgit/openstack/fuel-plugin-xenserver'
# Specify a group which your plugin implements, possible options:
# network, storage, storage::cinder, storage::glance, hypervisor
groups: ['hypervisor']
# Change `false` to `true` if the plugin can be installed in the environment
# after the deployment.
is_hotpluggable: false
# The plugin is compatible with releases in the list
releases:
- os: ubuntu
version: 'mitaka-9.0'
mode: ['multinode', 'ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
# Version of plugin package
package_version: '4.0.0'


@ -1,6 +0,0 @@
#!/bin/bash
# Add here any actions that are required before the plugin build,
# such as building packages or downloading packages from mirrors.
# The script should return 0 if there were no errors.


@ -1,169 +0,0 @@
Copyright 2015 Citrix Systems
Fuel Plugin for Xenserver
==============================
The XenServer plugin provides the ability to use XenServer as the
hypervisor for Mirantis OpenStack.
Compatible with Fuel version 9.0.
Problem description
===================
There is currently no supported way for Citrix XenServer customers to
use Mirantis OpenStack in their environments. The XenServer Fuel plugin
aims to provide that support.
Proposed change
===============
The XenServer Fuel plugin will deliver new features and patches to the
Compute nodes as well as the XenServer hosts, customize the user
interface (as XenServer isn't a built-in hypervisor), and reconfigure
the OpenStack environment from qemu-based to XenServer-based.
Alternatives
------------
N/A - the aim is to implement a Fuel plugin.
Data model impact
-----------------
None, although a new Release will be installed into the existing model.
REST API impact
---------------
None.
Upgrade impact
--------------
When upgrading the Fuel Master node to Fuel Version higher than 9.0,
plugin compatibility should be checked, and a new plugin installed if
necessary.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
Once the plugin is installed, the user can select the XenServer
Release and then configure the access credentials in the Settings tab
of the Fuel Web UI.
Performance Impact
------------------
None.
Plugin impact
-------------
The plugin will:
* Install a customized Release definition (based on Ubuntu) which will
only be usable to install an environment with XenServer
* Connect to XenServer over a new network (an internal management
network)
* Require that the network is already set up on the host (to be
documented)
* Set up the network to use DHCP, which will allow access to
XenServer over a fixed link-local IP 169.254.0.1
* Configure Nova to use the XenAPI driver and use this interface to
connect
* Configure the Compute VM to provide a NATed interface to Dom0 so
that storage traffic (for example, when downloading the initial
image) which originates in Dom0 can be routed through the Compute VM
as a gateway
Other deployer impact
---------------------
The plugin requires the Compute nodes to be created on the XenServer
hosts as XenServer's Nova plugin requires access to the virtual disks
as they are being created.
Developer impact
----------------
None.
Infrastructure impact
---------------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Huan Xie <huan.xie@citrix.com> (developer)
Other contributors:
Bob Ball <bob.ball@citrix.com> (developer, reviewer)
Jianghua Wang <jianghua.wang@citrix.com> (developer)
John Hua <john.hua@citrix.com> (developer)
Work Items
----------
* Upgrade the XenServer Fuel 8.0 plugin to work with Fuel 9.0.
* Test XenServer plugin.
* Create the documentation.
Dependencies
============
* Fuel 9.0
Testing
=======
* Prepare a test plan.
* Test the plugin according to the test plan.
Documentation Impact
====================
* Create the following documentation:
* User Guide.
* Test Plan.
* Test Report.
References
==========
* Citrix XenServer official documentation: http://docs.vmd.citrix.com/XenServer
* What is Xen? by Xen.org: http://xen.org/files/Marketing/WhatisXen.pdf
* Xen Hypervisor project: http://www.xenproject.org/developers/teams/hypervisor.html
* Xapi project: http://www.xenproject.org/developers/teams/xapi.html
* Further XenServer and OpenStack information: http://wiki.openstack.org/XenServer


@ -1,30 +0,0 @@
# build-xenserver-suppack.sh
This script is used to build iso for XenServer Dom0 xapi plugin.
It first builds both the Nova and Neutron Dom0 plugin RPM packages,
and then bundles them into one ISO.
## usage:
#####./build-xenserver-suppack.sh OS_RELEASE PLATFORM_VERSION XS_BUILD XS_PLUGIN_VERSION
* OS_RELEASE: OpenStack branch that's used for building this plugin
* PLATFORM_VERSION: Xen Cloud Platform (XCP) version targeted by this plugin
* XS_BUILD: XenServer build number
* XS_PLUGIN_VERSION: OpenStack XenServer Dom0 plugin version
*NOTE: If no input parameters are given, the following default values are used:*
* OS_RELEASE: mitaka
* PLATFORM_VERSION: 1.9
* XS_BUILD: 90233c
* XS_PLUGIN_VERSION: 2015.1
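For example, running with the defaults made explicit:
`./build-xenserver-suppack.sh mitaka 1.9 90233c 2015.1`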


@ -1,235 +0,0 @@
#!/bin/bash
set -eux
# =============================================
# Usage of this script:
# ./build-xenserver-suppack.sh os-release xs-plugin-version key
# Or
# ./build-xenserver-suppack.sh
#
# You can provide explicit input parameters or use the default ones:
# OpenStack release
# XenServer OpenStack plugin version
# Key for building supplemental packages
# Keyfile for building supplemental packages
#
# Prerequisite:
# For Dundee:
# No
# For Ely:
# 1. Secret key is imported to the VM which is used for building the suppack
# 2. Public keyfile is downloaded to this folder in the building VM
# 3. Below packages should be installed in advance:
# expect-5.45-14.el7_1.x86_64
# libarchive-3.1.2-7.el7.x86_64
# rpm-sign-4.11.3-17.el7.x86_64
THIS_FILE=$(readlink -f $0)
FUELPLUG_UTILS_ROOT=$(dirname $THIS_FILE)
BUILDROOT=${FUELPLUG_UTILS_ROOT}/build
SUPPACK_CREEDENCE=${FUELPLUG_UTILS_ROOT}/xcp_1.9.0
SUPPACK_DUNDEE=${FUELPLUG_UTILS_ROOT}/xcp_2.1.0
SUPPACK_ELY=${FUELPLUG_UTILS_ROOT}/xcp_2.2.0
rm -rf $BUILDROOT $SUPPACK_CREEDENCE $SUPPACK_DUNDEE $SUPPACK_ELY
mkdir -p $SUPPACK_CREEDENCE
mkdir -p $SUPPACK_DUNDEE
mkdir -p $SUPPACK_ELY
mkdir -p $BUILDROOT && cd $BUILDROOT
# =============================================
# Configurable items
# OpenStack release
OS_RELEASE=${1:-"mitaka"}
# nova and neutron xenserver dom0 plugin version
XS_PLUGIN_VERSION=${2:-"13.0.0"}
# key of the public/secret OpenStack GPG key
SUPPACK_KEY=${3:-"Citrix OpenStack (XenServer Updates) <openstack@citrix.com>"}
# keyfile
SUPPACK_KEYFILE=${4:-"RPM-GPG-KEY-XS-OPENSTACK"}
# branch info
GITBRANCH="stable/$OS_RELEASE"
# repository info
NOVA_GITREPO="https://git.openstack.org/openstack/nova"
NEUTRON_GITREPO="https://git.openstack.org/openstack/neutron"
RPM_BUILDER_REPO="https://github.com/citrix-openstack/xenserver-nova-suppack-builder"
# Update system and install dependencies
export DEBIAN_FRONTEND=noninteractive
# =============================================
# Install suppack builder for Dundee (XCP 2.1.0)
RPM_ROOT=http://coltrane.uk.xensource.com/usr/groups/release/XenServer-7.x/XS-7.0/RTM-125380/binary-packages/RPMS/domain0/RPMS/noarch
wget $RPM_ROOT/supp-pack-build-2.1.0-xs55.noarch.rpm -O supp-pack-build.rpm
wget $RPM_ROOT/xcp-python-libs-1.9.0-159.noarch.rpm -O xcp-python-libs.rpm
# Don't install the RPM as we may not have root.
rpm2cpio supp-pack-build.rpm | cpio -idm
rpm2cpio xcp-python-libs.rpm | cpio -idm
# ==============================================
# Install suppack builder for Ely (XCP 2.2.0)
RPM_ROOT=http://coltrane.uk.xensource.com/release/XenServer-7.x/XS-7.1/RC/137005.signed/binary-packages/RPMS/domain0/RPMS/noarch/
wget $RPM_ROOT/python-libarchive-c-2.5-1.el7.centos.noarch.rpm -O python-libarchive.rpm
wget $RPM_ROOT/update-package-1.1.2-1.noarch.rpm -O update-package.rpm
rpm2cpio python-libarchive.rpm | cpio -idm
rpm2cpio update-package.rpm | cpio -idm
# Work around dodgy requirements for xcp.supplementalpack.setup function
# Note that either root or a virtual env is needed here. venvs are better :)
cp -f usr/bin/* .
# If we are in a venv, we can potentially work with genisoimage and not mkisofs
venv_prefix=$(python -c 'import sys; print sys.prefix if hasattr(sys, "real_prefix") else ""')
set +e
mkisofs=`which mkisofs`
set -e
if [ -n "$venv_prefix" -a -z "$mkisofs" ]; then
# Some systems (e.g. debian) only have genisoimage.
set +e
genisoimage=`which genisoimage`
set -e
[ -n "$genisoimage" ] && ln -s $genisoimage $venv_prefix/bin/mkisofs
fi
# Now we must have mkisofs as the supp pack builder just invokes it
which mkisofs || (echo "mkisofs not installed" && exit 1)
# =============================================
# Check out rpm packaging repo
rm -rf xenserver-nova-suppack-builder
git clone -b $GITBRANCH --single-branch --depth 1 $RPM_BUILDER_REPO xenserver-nova-suppack-builder
# =============================================
# Create nova rpm file
rm -rf nova
git clone -b $GITBRANCH --single-branch --depth 1 "$NOVA_GITREPO" nova
pushd nova
# patch xenhost as this file is not merged into this release
cp $FUELPLUG_UTILS_ROOT/../plugin_source/deployment_scripts/patchset/xenhost plugins/xenserver/xenapi/etc/xapi.d/plugins/
# patch bandwidth as this file is not merged into this release
cp $FUELPLUG_UTILS_ROOT/../plugin_source/deployment_scripts/patchset/bandwidth plugins/xenserver/xenapi/etc/xapi.d/plugins/
popd
cp -r xenserver-nova-suppack-builder/plugins/xenserver/xenapi/* nova/plugins/xenserver/xenapi/
pushd nova/plugins/xenserver/xenapi/contrib
./build-rpm.sh $XS_PLUGIN_VERSION
popd
RPMFILE=$(find $FUELPLUG_UTILS_ROOT -name "openstack-xen-plugins-*.noarch.rpm" -print)
# =============================================
# Create neutron rpm file
rm -rf neutron
git clone -b $GITBRANCH --single-branch --depth 1 "$NEUTRON_GITREPO" neutron
pushd neutron
cp $FUELPLUG_UTILS_ROOT/../plugin_source/deployment_scripts/patchset/netwrap neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/
popd
cp -r xenserver-nova-suppack-builder/neutron/* \
neutron/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/
pushd neutron/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/contrib
./build-rpm.sh $XS_PLUGIN_VERSION
popd
NEUTRON_RPMFILE=$(find $FUELPLUG_UTILS_ROOT -name "openstack-neutron-xen-plugins-*.noarch.rpm" -print)
# =============================================
# Find conntrack-tools related RPMs
EXTRA_RPMS=""
EXTRA_RPMS="$EXTRA_RPMS $(find $FUELPLUG_UTILS_ROOT -name "conntrack-tools-*.rpm" -print)"
EXTRA_RPMS="$EXTRA_RPMS $(find $FUELPLUG_UTILS_ROOT -name "libnetfilter_cthelper-*.rpm" -print)"
EXTRA_RPMS="$EXTRA_RPMS $(find $FUELPLUG_UTILS_ROOT -name "libnetfilter_cttimeout-*.rpm" -print)"
EXTRA_RPMS="$EXTRA_RPMS $(find $FUELPLUG_UTILS_ROOT -name "libnetfilter_queue-*.rpm" -print)"
# =============================================
# Create Supplemental pack for Creedence and Dundee
tee buildscript.py << EOF
import sys
sys.path.append('$BUILDROOT/usr/lib/python2.7/site-packages')
from xcp.supplementalpack import *
from optparse import OptionParser
parser = OptionParser()
parser.add_option('--pdn', dest="product_name")
parser.add_option('--pdv', dest="product_version")
parser.add_option('--hvn', dest="hypervisor_name")
parser.add_option('--desc', dest="description")
parser.add_option('--bld', dest="build")
parser.add_option('--out', dest="outdir")
(options, args) = parser.parse_args()
xcp = Requires(originator='xcp', name='main', test='ge',
product=options.hypervisor_name, version=options.product_version,
build=options.build)
setup(originator='xcp', name=options.product_name, product=options.hypervisor_name,
version=options.product_version, build=options.build, vendor='',
description=options.description, packages=args, requires=[xcp],
outdir=options.outdir, output=['iso'])
EOF
python buildscript.py \
--pdn=xenapi-plugins-$OS_RELEASE \
--pdv="1.9.0" \
--hvn="XCP" \
--desc="OpenStack Plugins" \
--bld=0 \
--out=$SUPPACK_CREEDENCE \
$RPMFILE \
$NEUTRON_RPMFILE
python buildscript.py \
--pdn=xenapi-plugins-$OS_RELEASE \
--pdv="2.1.0" \
--hvn="XCP" \
--desc="OpenStack Plugins" \
--bld=0 \
--out=$SUPPACK_DUNDEE \
$RPMFILE \
$NEUTRON_RPMFILE \
$EXTRA_RPMS
# =============================================
# Create Supplemental pack for Ely
# KEY for building supplemental pack
SUPPACK_KEY="Citrix OpenStack (XenServer Updates) <openstack@citrix.com>"
CONNTRACK_UUID=`uuidgen`
XENAPI_PLUGIN_UUID=`uuidgen`
tee buildscript_ely.py << EOF
import sys
sys.path.append('$BUILDROOT/usr/lib/python2.7/site-packages')
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.exit(
load_entry_point('update-package', 'console_scripts', 'build-update')()
)
EOF
python buildscript_ely.py \
--uuid $XENAPI_PLUGIN_UUID \
-l "openstack-xenapi-plugins" \
-v 1.0 \
-d "OpenStack plugins supplemental pack" \
-o $SUPPACK_ELY/xenapi-plugins-$OS_RELEASE.iso \
-k "$SUPPACK_KEY" \
--keyfile "$FUELPLUG_UTILS_ROOT/$SUPPACK_KEYFILE" --no-passphrase \
$RPMFILE $NEUTRON_RPMFILE $EXTRA_RPMS