Retire repository

The Fuel (in the openstack namespace) and fuel-ccp (in the x namespace)
repositories are unused and ready to retire.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: I3ed5ff99845dc94e0b531dadfa20fda9eb1cc848
Andreas Jaeger 2019-12-18 09:50:23 +01:00
parent 3baf0bee63
commit b47e51b862
273 changed files with 8 additions and 61158 deletions

.gitignore vendored
View File

@@ -1,22 +0,0 @@
*.pyc
# vim swap files
.*.swp
# services' runtime files
*.log
*.pid
.idea/
.DS_Store
*.egg-info
draft/
/.testrepository
/.tox
/doc/build
/doc/source/index.rst
ChangeLog
AUTHORS

View File

@@ -1,4 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

View File

@@ -1,3 +0,0 @@
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

View File

@@ -1,54 +1,10 @@
-========================
-Team and repository tags
-========================
+This project is no longer maintained.
-.. image:: http://governance.openstack.org/badges/fuel-specs.svg
-    :target: http://governance.openstack.org/reference/tags/index.html
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
-.. Change things from this point on
-==================================
-Fuel Specifications
-==================================
-This git repository is used to hold approved design specifications for additions
-to the Fuel project. Reviews of the specs are done in gerrit, using a similar
-workflow to how we review and merge changes to the code itself.
-The layout of this repository is::
-    specs/<release>/
-You can find an example spec in `doc/source/specs/template.rst`.
-Specifications are proposed for a given release by adding them to the
-`specs/<release>` directory and posting it for review. The implementation
-status of a blueprint for a given release can be found by looking at the
-blueprint in launchpad. Not all approved blueprints will get fully implemented.
-Specifications have to be re-proposed for every release. The review may be
-quick, but even if something was previously approved, it should be re-reviewed
-to make sure it still makes sense as written.
-Prior to the Juno development cycle, this repository was not used for spec
-reviews. Reviews prior to Juno were completed entirely through Launchpad
-blueprints::
-    http://blueprints.launchpad.net/fuel
-Please note, Launchpad blueprints are still used for tracking the
-current status of blueprints. For more information, see::
-    https://wiki.openstack.org/wiki/Blueprints
-For more information about working with gerrit, see::
-    http://docs.openstack.org/infra/manual/developers.html#development-workflow
-To validate that the specification is syntactically correct (i.e. get more
-confidence in the Jenkins result), please execute the following command::
-    $ tox
-After running ``tox``, the documentation will be available for viewing in HTML
-format in the ``doc/build/`` directory.
 For any further questions, please email
 openstack-discuss@lists.openstack.org or join #openstack-dev on
 Freenode.

View File

@@ -1,301 +0,0 @@
# -*- coding: utf-8 -*-
#
# Tempest documentation build configuration file, created by
# sphinx-quickstart on Tue May 21 17:43:32 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import subprocess
import sys
import os
import os.path
import glob
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
              'sphinx.ext.intersphinx',
              'sphinx.ext.todo',
              'sphinx.ext.viewcode',
              'oslosphinx',
              'sphinxcontrib.httpdomain',
              ]
todo_include_todos = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel Specs'
copyright = u'2014, Fuel'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['fuel-specs.']
# -- Options for man page output ----------------------------------------------
man_pages = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'nature'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
           "-n1"]
html_last_updated_fmt = subprocess.Popen(
    git_cmd, stdout=subprocess.PIPE).communicate()[0]
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_domain_indices = False
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Fuel-Specsdoc'
confdir = os.path.dirname(os.path.abspath(__file__))
workdir = os.path.join(confdir, "..", "..")
releases = [os.path.basename(dirname) for dirname in
            glob.iglob("{0}/specs/[0-9]*.[0-9]*".format(workdir))]
with open(os.path.join(confdir, 'header.rst.template')) as f:
    header = f.read()
with open(os.path.join(confdir, 'footer.rst.template')) as f:
    footer = f.read()
with open(os.path.join(confdir, 'index.rst'), 'w') as f:
    f.write(header)
    for specdir in sorted(releases):
        f.write("""
{0} approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/{0}/*
""".format(specdir))
    f.write(footer)
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'fuel-specs.tex', u'Fuel Specs',
u'Fuel Team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'fuel-specs', u'Fuel Design Specs',
u'Fuel Team', 'fuel-specs', 'Design specifications for the fuel project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# -- Options for Epub output ---------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = u'Fuel Specs'
epub_author = u'Fuel Team'
epub_publisher = u'Fuel Team'
epub_copyright = u'2014, Fuel'
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
#epub_scheme = ''
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#epub_identifier = ''
# A unique identification for the text.
#epub_uid = ''
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files shat should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
#epub_exclude_files = []
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True

View File

@@ -1,5 +0,0 @@
==================
Indices and tables
==================
* :ref:`search`

View File

@@ -1,22 +0,0 @@
.. fuel-specs documentation master file

=====================
Fuel Policy Documents
=====================

.. toctree::
   :glob:
   :maxdepth: 1

   policy/*

===========================
Fuel Project Specifications
===========================

.. toctree::
   :glob:
   :maxdepth: 1

   specs/*

Binary image file removed (114 KiB); diff not shown.
View File

@@ -1 +0,0 @@
../../policy

View File

@@ -1 +0,0 @@
../../specs

53 more binary image files removed (plus two oversized image diffs, suppressed); contents not shown.
View File

@@ -1,206 +0,0 @@
================
Team Structure
================
This document describes the structure of the Fuel team and how it is used to
organize code review, design discussions, and overall technical decision making
process in the Fuel project.
Problem Description
===================
Code review is the primary tool for day to day interactions between Fuel
contributors. Problems with code review process can be grouped into two
buckets.
It is hard to get code reviewed and merged:
1. It is hard to find subject matter experts and core reviewers for the
specific part of codebase, especially if you are new to the project.
2. Contributor sometimes receives contradicting opinions from different
reviewers, including cores.
3. Without an assigned core reviewer, it is hard to guide a feature through
architectural negotiations and code review process to landing the code into
master.
4. Some commits are waiting for a long time for a reviewer.
Quality of code review itself could be better:
5. Reviews are not thorough enough. Instead of examining the whole patch set
and identifying all problems in one shot, a reviewer may leave a -1 vote
after identifying only one minor problem. This increases the number of patch
sets per commit and demotivates contributors.
6. Some of the core reviewers have decreased their involvement, and so the
number of reviews has dropped dramatically. However, they still occasionally
merge code.
7. As a legacy of the past, we still have old core reviewers who are able to
merge code in all Fuel repos. All new cores have core rights only for a single
repo, which is their primary area of expertise.
Having well defined areas of ownership in Fuel components addresses most of
these problems: from making it easier to identify the right reviewers for your
code, to prioritizing code review work so that core reviewers can focus more
attention on a smaller number of commits.
Proposed Policy
===============
Definitions
-----------
Contributor:
Submitter of a code review, who doesn't necessarily work on Fuel regularly
and may not be familiar with the team structure or with the Fuel codebase.
Maintainer:
Subject matter expert in a certain area of Fuel code, to which they regularly
contribute and for which they review other contributors' code. For
example, the network checker or the Nailgun agent would have their own lists
of maintainers.
List of maintainers for different parts of a Fuel git repository is
provided in a MAINTAINERS file at the top level of that repository. A
repository that contains multiple components may have multiple MAINTAINERS
files in the component subdirectories.
Core Reviewer:
A maintainer who has sustained a high level of contribution and high quality
of code reviews and was promoted to the core reviewers team by consensus of
the other core reviewers of the same Fuel component.
Fuel PTL:
Project Team Lead in its OpenStack standard definition. Delegates most of
the review and design work to component teams, resolves technical disputes
across components.
Code Review Workflow
--------------------
Typical commit goes through the following code review stages:
0. Contributor makes sure their commit receives no negative votes from CI. When
possible, contributor also invites peers to review their commit, preferably
from different locations to help spread out the knowledge of the new code.
1. Contributor finds the maintainers for the areas of the code modified by
their commit in the MAINTAINERS file, and invites them to the review.
2. Once a maintainer is ready to add a +1 code review vote to the commit, they
invite core reviewers of the modified component to the review.
3. A commit that has a +2 vote from a core reviewer can be merged by another
core reviewer (or by the same core reviewer if the repository has only 2 or
fewer core reviewers).
Governance Process
------------------
Fuel PTL is elected twice a year following the same cycle and rules as other
OpenStack projects: all committers to all Fuel projects (fuel-* and
python-fuelclient) over the last year can vote and can self-nominate.
Fuel aggregates features provided by Fuel components.
Components can be either Fuel-driven (like Nailgun, Astute, UI) or
generic, in the sense that Fuel is not the only use case for such components
(e.g. Keystone, potentially Neutron, Ironic, Glance, etc.). Component
teams are independent but should interact with each other while
working on features.
Core team of a component is responsible for code review in their component.
It is totally up to a component team (not the Fuel team as a whole)
to decide whether they resolve review conflicts by consensus or delegate
their voices to a formal or informal component lead. It should be up to a
component team how they share review responsibilities and how they make
architecture and planning decisions.
Core reviewers are approved by consensus of existing core reviewers, following
the same process as with other OpenStack projects. Core reviewers can
voluntarily step down, or be removed by consensus of existing core reviewers.
Separate core reviewers list is maintained for each Fuel git repository.
Maintainers are defined by the contents of the MAINTAINERS files in Fuel git
repositories, following the standard code review process. Any contributor can
propose an update of a MAINTAINERS file; a core reviewer can approve an update
that has a +2 from another core reviewer; if the update adds new maintainers,
it must also have +1 votes from all added maintainers.
Since components could be generic, there must be two levels of design.
By-component design specs describe component changes that are not necessarily
related to Fuel, and these specs are out of the scope of this policy.
Fuel design specs describe Fuel features that usually require coordinated
changes in multiple components. Each Fuel spec must be reviewed
and approved (+2) by subject matter experts from at least the following
backgrounds (even if the respective section is empty):
* Web UI
* Nailgun&Orchestration
* Fuel Library
It is up to the Fuel-specs core team to involve other SMEs to review a particular
spec if specific expertise is required.
Alternatives
============
Flat project structure
----------------------
Many other OpenStack projects keep a flat team structure: one elected PTL and
a single list of core reviewers for the whole project. The advantage is a
simpler and more straightforward governance process. The disadvantages are
described in the problem description.
Implementation
==============
Author(s)
---------
Primary author:
mihgen (Mike Scherbakov)
Other contributors:
angdraug (Dmitry Borodaenko)
kozhukalov (Vladimir Kozhukalov)
Milestones
----------
The current policy was put in place for Mitaka, and updated for Newton.
Work Items
----------
N/A
References
==========
* OpenStack Governance process:
https://wiki.openstack.org/wiki/Governance
* Code review process in Fuel and related issues (by Mike Scherbakov):
http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html
* Fuel Review Inbox (by Dmitry Borodaenko):
http://git.openstack.org/cgit/openstack/gerrit-dash-creator/tree/dashboards/fuel.dash
* Fuel contribution statistics (Stackalytics):
http://stackalytics.com/report/contribution/fuel-group/90
* Open Reviews for Fuel (by Russell Bryant):
http://russellbryant.net/openstack-stats/fuel-openreviews.html
.. note::
This work is licensed under a Creative Commons Attribution 3.0
Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

View File

@@ -1,7 +0,0 @@
docutils==0.9.1
oslosphinx
pbr>=0.6,!=0.7,<1.0
sphinx>=1.1.2,!=1.2.0,<1.3
testrepository>=0.0.18
testtools>=0.9.34
sphinxcontrib-httpdomain

View File

@@ -1,23 +0,0 @@
[metadata]
name = fuel-specs
summary = Fuel Project Development Specs
description-file =
README.rst
author = OpenStack
author-email = fuel-dev@lists.launchpad.net
home-page = https://wiki.openstack.org/wiki/Fuel
classifier =
Intended Audience :: Developers
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[pbr]
warnerrors = True
[wheel]
universal = 1

View File

@@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)

View File

@@ -1,272 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================================
Containerized Control Plane on Kubernetes
=========================================
https://blueprints.launchpad.net/fuel/+spec/ccp
This is a meta specification describing in detail the creation of a new
experimental project under the Fuel project umbrella that provides users with
containerized OpenStack deployment on top of Kubernetes, codename
"Fuel CCP".
--------------------
Problem description
--------------------
Containerized Control Plane (CCP) is the initiative to package OpenStack
services in containers and use a standard container management framework to
run and manage them. It includes, but is not limited to, the following areas:
* OpenStack containerization and container image building tooling. OpenStack
components are planned to be installed into container images from source
code (not using deb/rpm packages).
* CI/CD to produce properly layered and versioned containers for the supported
stable and current master branches of OpenStack projects
* OpenStack deployment in containers on top of Kubernetes with HA for OpenStack
services and their dependencies (e.g. MySQL, RabbitMQ, etc.)
* Tooling for deploying and operating OpenStack clusters with support for the
upgrades, patching, scaling and changing configuration
Fuel CCP governance will be a separate experimental project under the openstack
git namespace with its own specs and core team. There is no intention right now
to apply for the Big Tent. The nearest example of the same governance is a 3rd
party Fuel plugin developed outside Mirantis, which isn't under the Big Tent
and isn't controlled by the main Fuel core team.
A separate Launchpad project will be used for blueprint and bug management.
Fuel's main IRC channels will be initially used for communication (#fuel) and
there will be weekly sub-team meetings as part of the main Fuel weekly IRC
meetings.
CCP is to be a set of repositories with Docker image definitions together with
Kubernetes application definitions: a single git repository per
OpenStack component plus a few repositories with related software and tooling,
such as 3rd party CI config and the installer. We're going to start with the
minimal set of repositories for the "core" OpenStack implementation plus a
logging and monitoring implementation based on the existing Stacklight [0]_
[1]_ expertise. The CI system will use the upstream infra CI as much as
possible and a 3rd party CI for running end-to-end deployment tests.
The initial list of repositories for the CCP initiative:
* fuel-ccp (main repo, image build tool, app framework, tooling)
* fuel-ccp-specs
* fuel-ccp-installer
* fuel-ccp-tests
* fuel-ccp-ci-config (~ project-config for 3rd party CI)
* fuel-ccp-debian-base
* fuel-ccp-openstack-base
* fuel-ccp-entrypoint
* fuel-ccp-mariadb
* fuel-ccp-keystone
* fuel-ccp-glance
* fuel-ccp-memcached
* fuel-ccp-horizon
* fuel-ccp-neutron (incl. ovs)
* fuel-ccp-rabbitmq
* fuel-ccp-nova
* fuel-ccp-stacklight (LMA stack)
Each repository will have its own core reviewers team, and there will be one
general core reviewers team with permissions in all repositories.
----------------
Proposed changes
----------------
None. There will be no changes to the existing Fuel projects now.
Web UI
======
None. There is no intention to integrate with Web UI on the early stages of
the initiative.
Nailgun
=======
None. There is no intention to integrate with Nailgun on the early stages of
the initiative.
Data model
----------
None
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
None
------------
Alternatives
------------
This spec itself describes an alternative experimental approach to OpenStack
deployment, but there are a few questions to answer about alternatives.
1. Why not use Kolla's container images?
There is a set of fundamental requirements for container images that are
currently not covered by, or are in conflict with, some Kolla development
principles. This list will be maintained and discussed with the Kolla
community under the following specification published to the Kolla project:
I18b319cb796192a1e61ecd516a485dc82d52652f
2. Why not contribute to Kolla-Kubernetes?
It's based on the Kolla container images, while we need to solve the list of
requirements described in I18b319cb796192a1e61ecd516a485dc82d52652f
--------------
Upgrade impact
--------------
Defining a migration path from the current OpenStack version to the
Kubernetes/CCP-based one will be a separate activity.
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
None
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
Separate 3rd party CI will be used to run end-to-end tests.
--------------------
Documentation impact
--------------------
Separate documentation will be needed for the CCP initiative.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
slukjanov
Other contributors:
None
Mandatory design review:
* Vladimir Kozhukalov <vkozhukalov@mirantis.com>
* Sergii Golovatiuk <sgolovatiuk@mirantis.com>
* Bulat Gaifullin <bgaifullin@mirantis.com>
* Julia Aranovich <jkirnosova@mirantis.com>
Work Items
==========
* Create separate CCP Launchpad project
* Create a set of CCP repositories
* Set up 3rd party CCP CI
Dependencies
============
None
-----------
Testing, QA
-----------
It is not planned to use current Fuel QA resources. All tests
will be run on a separate CI (partly upstream, partly 3rd party)
and the test code is to be written by the CCP sub-team.
Acceptance criteria
===================
* A set of CCP repositories are ready to use for development
* CCP Launchpad project is ready to use for tracking CCP bugs and BPs
* CCP 3rd party CI is available to add CCP testing jobs
----------
References
----------
.. [0] https://www.mirantis.com/blog/stacklight-logging-monitoring-alerting-lma-toolchain-mirantis-openstack/
.. [1] https://www.youtube.com/watch?v=JF1BKgH9uco

View File

@@ -1,619 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================================================================
Support custom CA bundle file to use in verifying the vCenter server cert
=========================================================================
https://blueprints.launchpad.net/fuel/+spec/custom-ca-bundle-verify-vcenter-cert
After implementing this blueprint, the user can specify a CA bundle file to
use in verifying the vCenter server certificate for nova-compute [4]_ and
cinder-volume [3]_. We also improve the use cases for the Glance vSphere
backend and the CA bundle file.
--------------------
Problem description
--------------------
The VMware driver for cinder-volume and nova-compute establishes connections
to vCenter over HTTPS, and the VMware driver supports vCenter server
certificate verification as part of the connection process.
Currently, for cinder-volume [3]_ we use the ``vmware_insecure = True`` [1]_
option and for nova-compute [4]_ we set ``insecure = True`` [2]_; therefore
the vCenter server certificate is not verified.
In the Fuel Web UI it is not possible to select a certificate for
cinder-volume [3]_ and nova-compute [4]_.
For the Glance vSphere backend we can specify a custom CA bundle file, which
covers the case where vCenter uses a self-signed certificate. But if the
vCenter server certificate was issued by a known CA (e.g. GeoTrust) and we
don't specify a custom CA bundle file, certificate verification is turned off,
because by default we set ``vmware_insecure = True`` [5]_.
The use cases covered by this blueprint for cinder-volume [3]_,
nova-compute [4]_ and the Glance vSphere backend:
1. ``Case 1.`` Bypass vCenter certificate verification (default). Certificate
verification is turned off. This case is useful for faster deployment and for
testing environments.
2. ``Case 2.`` vCenter is using a self-signed certificate. In this case the
user must upload a custom CA bundle file.
3. ``Case 3.`` The vCenter server certificate was issued by a known CA
(e.g. GeoTrust). In this case the user has to leave the CA certificate bundle
upload field empty.
----------------
Proposed changes
----------------
The following changes need to be done to implement this feature:
* [Web UI] Add file upload support that allows certificate upload on the
VMware tab [0]_.
* [Web UI] Implement restrictions [6]_ support on VMware tab [0]_.
* [Nailgun] Add a field that allows the user to upload the CA certificate that
issued the vCenter's TLS/SSL certificate.
* [Nailgun] Add a checkbox "Bypass vCenter certificate verification".
* [Fuel Library] Fetch the CA certificate bundle and deploy services using the
certificate.
Web UI
======
On the VMware tab [0]_, in the availability zone section, we need to add the
ability to upload a certificate and restrictions [6]_ support.
Availability zone section on the VMware tab [0]_:
.. image:: ../../images/10.0/custom-ca-bundle-verify-vcenter-cert/fuel_web_ui_vmware_tab.png
:width: 100 %
For the ``case 1`` availability zone section on VMware tab [0]_ will look like:
.. image:: ../../images/10.0/custom-ca-bundle-verify-vcenter-cert/fuel_web_ui_vmware_tab_case1.png
:width: 100 %
For the ``case 2`` availability zone section on VMware tab [0]_ will look like:
.. image:: ../../images/10.0/custom-ca-bundle-verify-vcenter-cert/fuel_web_ui_vmware_tab_case2.png
:width: 100 %
For the ``case 3`` availability zone section on VMware tab [0]_ will look like:
.. image:: ../../images/10.0/custom-ca-bundle-verify-vcenter-cert/fuel_web_ui_vmware_tab_case3.png
:width: 100 %
Description of the above cases can be found in section ``Problem description``.
It will use the same logic for the Glance vSphere backend (Glance section on
VMware tab [0]_).
Nailgun
=======
Data model
----------
Nailgun should be able to serialize the CA certificate data and pass it into
the astute.yaml file; for ``case 2`` astute.yaml looks like:
.. code-block:: yaml

    /etc/astute.yaml
    ...
    vcenter:
      computes:
      - availability_zone_name: vcenter
        datastore_regex: .*
        service_name: vmcluster1
        target_node: controllers
        vc_cluster: Cluster1
        vc_host: 172.16.0.254
        vc_password: Qwer!1234
        vc_user: administrator@vsphere.local
        vc_insecure: false
        vc_ca_file:
          content: RSA
          name: vcenter-ca.pem
      - availability_zone_name: vcenter
        datastore_regex: .*
        service_name: vmcluster2
        target_node: controllers
        vc_cluster: Cluster2
        vc_host: 172.16.0.254
        vc_password: Qwer!1234
        vc_user: administrator@vsphere.local
        vc_insecure: false
        vc_ca_file:
          content: RSA
          name: vcenter-ca.pem
    ...
    cinder:
      ...
      instances:
      - availability_zone_name: vcenter
        vc_host: 172.16.0.254
        vc_password: Qwer!1234
        vc_user: administrator@vsphere.local
        vc_insecure: false
        vc_ca_file:
          content: RSA
          name: vcenter-ca.pem
    ...
    glance:
      ...
      vc_insecure: false
      vc_ca_file:
        content: RSA
        name: vcenter-ca.pem
      vc_datacenter: Datacenter
      vc_datastore: nfs
      vc_host: 172.16.0.254
      vc_password: Qwer!1234
      vc_user: administrator@vsphere.local
    ...
REST API
--------
GET ``/api/clusters/%cluster_id%/vmware_attributes/`` method should return data
with the following structure:
.. code-block:: json
[{
"pk": 1,
"editable": {
"metadata": [
{
"fields": [
{
"type": "text",
"description": "Availability zone name",
"name": "az_name",
"label": "AZ name"
},
{
"type": "text",
"description": "vCenter host or IP",
"name": "vcenter_host",
"label": "vCenter host"
},
{
"type": "text",
"description": "vCenter username",
"name": "vcenter_username",
"label": "vCenter username"
},
{
"type": "password",
"description": "vCenter password",
"name": "vcenter_password",
"label": "vCenter password"
},
{
"type": "checkbox",
"name": "vcenter_insecure",
"label": "Bypass vCenter certificate verification"
},
{
"type": "file",
"description": "vCenter CA file",
"name": "vcenter_ca_file",
"label": "CA file",
"restrictions": [
{
"message": "Bypass vCenter certificate verification should be disabled.",
"condition": "currentVCenter:vcenter_insecure == true"
}
]
},
{
"fields": [
{
"type": "text",
"description": "vSphere Cluster",
"name": "vsphere_cluster",
"label": "vSphere Cluster",
"regex": {
"source": "\\S",
"error": "Empty cluster"
}
},
{
"type": "text",
"description": "Service name",
"name": "service_name",
"label": "Service name"
},
{
"type": "text",
"description": "Datastore regex",
"name": "datastore_regex",
"label": "Datastore regex"
},
{
"type": "select",
"description": "Target node for nova-compute service",
"name": "target_node",
"label": "Target node"
}
],
"type": "array",
"name": "nova_computes"
}
],
"type": "array",
"name": "availability_zones"
},
{
"fields": [
{
"type": "text",
"description": "VLAN interface",
"name": "esxi_vlan_interface",
"label": "VLAN interface"
}
],
"type": "object",
"name": "network"
},
{
"fields": [
{
"type": "text",
"description": "VCenter host or IP",
"name": "vcenter_host",
"label": "VCenter Host",
"regex": {
"source": "\\S",
"error": "Empty host"
}
},
{
"type": "text",
"description": "vCenter username",
"name": "vcenter_username",
"label": "vCenter username",
"regex": {
"source": "\\S",
"error": "Empty username"
}
},
{
"type": "password",
"description": "vCenter password",
"name": "vcenter_password",
"label": "vCenter password",
"regex": {
"source": "\\S",
"error": "Empty password"
}
},
{
"type": "text",
"description": "Datacenter",
"name": "datacenter",
"label": "Datacenter",
"regex": {
"source": "\\S",
"error": "Empty datacenter"
}
},
{
"type": "text",
"description": "Datastore",
"name": "datastore",
"label": "Datastore",
"regex": {
"source": "\\S",
"error": "Empty datastore"
}
},
{
"type": "checkbox",
"name": "vcenter_insecure",
"label": "Bypass vCenter certificate verification"
},
{
"type": "file",
"description": "File containing the trusted CA bundle that issued the vCenter server certificate. If empty, the vCenter's certificate is not verified.",
"name": "ca_file",
"label": "CA file",
"restrictions": [
{
"message": "Bypass vCenter certificate verification should be disabled.",
"condition": "Glance:vcenter_insecure == true"
}
]
}
],
"type": "object",
"name": "glance",
"restrictions": [
{
"action": "hide",
"condition": "settings:storage.images_vcenter.value == false or settings:common.use_vcenter.value == false"
}
]
}
],
"value": {
"availability_zones": [
{
"az_name": "Zone 1",
"vcenter_host": "1.2.3.4",
"vcenter_username": "admin",
"vcenter_password": "secret",
"vcenter_insecure": "true",
"vcenter_ca_file": "file_blob",
"nova_computes": [
{
"vsphere_cluster": "cluster1",
"service_name": "Compute 1",
"datastore_regex": "",
"target_node": {
"current": {
"id": "test_target_node"
}
}
},
{
"vsphere_cluster": "cluster2",
"service_name": "Compute 3",
"datastore_regex": "",
"target_node": {
"current": {
"id": "test_target_node"
}
}
}
]
},
{
"az_name": "Zone 2",
"vcenter_host": "1.2.3.6",
"vcenter_username": "user$",
"vcenter_password": "pass$word",
"vcenter_insecure": "true",
"vcenter_ca_file": "file_blob",
"nova_computes": [
{
"vsphere_cluster": "cluster1",
"service_name": "Compute 4",
"datastore_regex": "^openstack-[0-9]$"
},
{
"vsphere_cluster": "",
"service_name": "Compute 7",
"datastore_regex": ""
}
]
}
],
"glance": {
"vcenter_host": "1.2.3.4",
"vcenter_username": "admin",
"vcenter_password": "secret",
"datacenter": "test_datacenter",
"datastore": "test_datastore",
"vcenter_insecure": "true",
"ca_file": "file_blob",
},
"network": {
"esxi_vlan_interface": "eth0"
}
}
}
}]
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
Specification might affect plugins that connect to vCenter server:
* Fuel VMware DVS plugin [8]_.
* Fuel VMware NSXv plugin [7]_.
Fuel Library
============
Changes to Puppet manifests:
* vmware::cinder::vmdk
* vmware::compute_vmware
* vmware::ceilometer::compute_vmware
* vmware::controller
* vmware::ceilometer
* parse_vcenter_settings function
------------
Alternatives
------------
None
--------------
Upgrade impact
--------------
None
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
* The user can upload, on the VMware tab [0]_, the CA certificate that issued
the vCenter's TLS/SSL certificate.
* The user can check or uncheck ``Bypass vCenter certificate verification`` on
the VMware tab [0]_.
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
Document how to use the ``CA file`` field and the ``Bypass vCenter certificate
verification`` checkbox on the VMware tab, in the availability zone section and
in the Glance section.
--------------
Implementation
--------------
Assignee(s)
===========
======================= ==============================================
Primary assignee - Alexander Arzhanov <aarzhanov@mirantis.com>
Developers - Alexander Arzhanov <aarzhanov@mirantis.com>
- Anton Zemlyanov <azemlyanov@mirantis.com>
- Andriy Popovych <apopovych@mirantis.com>
QA engineers - Ilya Bumarskov <ibumarskov@mirantis.com>
Mandatory design review - Igor Zinovik <izinovik@mirantis.com>
- Sergii Golovatiuk <sgolovatiuk@mirantis.com>
======================= ==============================================
Work Items
==========
* [Web UI] Add file upload support that allows certificate upload on the
VMware tab [0]_.
* [Web UI] Implement restrictions [6]_ support on VMware tab [0]_.
* [Nailgun] Add a field that allows the user to upload the CA certificate that
issued the vCenter's TLS/SSL certificate. Changes needed in:
* openstack.yaml
* vmware_attributes.json
* base_serializers.py
* [Nailgun] Add checkbox ``Bypass vCenter certificate verification``.
* [Fuel Library] Fetch the CA certificate bundle and deploy services using the
certificate. Changes needed in:
* vmware::cinder::vmdk
* vmware::compute_vmware
* vmware::ceilometer::compute_vmware
* vmware::controller
* vmware::ceilometer
* parse_vcenter_settings function
Dependencies
============
None
------------
Testing, QA
------------
It is necessary to check these scenarios:
* insecure connections for nova-compute [4]_, cinder-volume [3]_ and the
Glance vSphere backend.
* secure connections for nova-compute [4]_, cinder-volume [3]_ and the Glance
vSphere backend (with a CA bundle file for vCenter).
Acceptance criteria
===================
The user can upload the CA certificate for vCenter, and after deployment the
nova-compute [4]_, cinder-volume [3]_ and Glance vSphere backend services work.
If the user does not upload a CA certificate for vCenter and enables the
``Bypass vCenter certificate verification`` checkbox, everything works too.
----------
References
----------
.. [0] https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings
.. [1] https://github.com/openstack/fuel-library/blob/master/deployment/puppet/vmware/templates/cinder-volume.conf.erb#L81
.. [2] https://github.com/openstack/fuel-library/blob/master/deployment/puppet/vmware/templates/nova-compute.conf.erb#L17
.. [3] configured with VMwareVcVmdkDriver
.. [4] configured with VMwareVCDriver
.. [5] https://github.com/openstack/puppet-glance/blob/master/manifests/backend/vsphere.pp#L112
.. [6] https://wiki.openstack.org/wiki/Fuel/Plugins#What_are_restrictions.3F
.. [7] https://github.com/openstack/fuel-plugin-nsxv
.. [8] https://github.com/openstack/fuel-plugin-vmware-dvs

View File

@@ -1,261 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================================
Create a client module for fuel-devops3.0
=========================================
https://blueprints.launchpad.net/fuel/+spec/fuel-devops-client-as-a-module
Fuel-devops should have an API class to provide a complete interface for
interacting with the environment.
--------------------
Problem description
--------------------
In the current implementation there is a list of functions for interacting
with environments:
* devops/helpers/helpers.py:
* get_nodes
* get_slave_ip
* get_admin_ip
* get_node_remote
* get_admin_remote
* get_private_keys
* devops/helpers/ntp.py:
* sync_time
These main functions are written in a procedural style, which makes them hard
to extend or add to.
Also, some of these functions are duplicated in fuel-qa
fuelweb_test/models/environment.py
----------------
Proposed changes
----------------
To reduce dependency issues and allow re-use of the management layer of
virtual/baremetal labs:
- separate all the code that manages environment nodes/networks
into a 'devops' module
- separate all the code that provides a logical layer (SSH manager,
filters for specific node roles, access to the services
started on the environment nodes) into a fuel-devops client module.
The fuel-devops client module should provide a complete interface for
interacting with the environment: managing nodes, mapping devops and nailgun
nodes into a single object, accessing nodes via SSH, snapshotting/reverting
nodes, bootstrapping the admin node and so on.
It should encapsulate some of the methods from the fuel-devops Environment
object and the fuel-qa EnvironmentModel object (which can then be deprecated).
Schema of DevopsClient usage::
+---------+ +----------+
| | | |
| fuel-qa | | shell.py |
| | | |
+-----+---+ +-----+----+
| |
+--------+ +------------+
| |
v v
+--------------+
| |
| DevopsClient |
| |
+----+-----+---+
| |
| +----------+------------------+
| | |
v v v
+--------------------+ +---------------+ +----------+
| | | | | |
| devops.Environment | | NailgunClient | | NtpGroup |
| | | | | |
+--------------------+ +---------------+ +----------+
NailgunClient should be added to replace get_nodes method.
NtpGroup should be added to replace sync_time method.
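For illustration, a minimal sketch of such a facade is shown below. The class
and method names mirror the helpers listed above, while the constructor and
the calls on the wrapped objects are assumptions, not the actual
implementation:

.. code-block:: python

    class DevopsClient(object):
        """Facade over devops.Environment, NailgunClient and NtpGroup.

        Sketch only: the call names on the wrapped objects are assumptions.
        """

        def __init__(self, env, nailgun, ntp_group):
            self.env = env              # devops.Environment
            self.nailgun = nailgun      # NailgunClient, replaces get_nodes()
            self.ntp_group = ntp_group  # NtpGroup, replaces sync_time()

        def get_admin_ip(self):
            # was devops/helpers/helpers.py:get_admin_ip (assumed helper)
            return self.env.get_admin_ip()

        def get_node_remote(self, node_name):
            # was devops/helpers/helpers.py:get_node_remote; returns an
            # SSH connection to the node (assumed helper)
            return self.env.get_node_remote(node_name)

        def sync_time(self, node_names=None):
            # was devops/helpers/ntp.py:sync_time
            return self.ntp_group.sync_time(node_names)

fuel-qa and shell.py would then depend only on this class instead of the
individual helper functions.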
Web UI
======
None
Nailgun
=======
None
Data model
----------
None
REST API
--------
No FUEL REST API changes.
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
None
------------
Alternatives
------------
N/A
--------------
Upgrade impact
--------------
N/A
---------------
Security impact
---------------
N/A
--------------------
Notifications impact
--------------------
N/A
---------------
End user impact
---------------
N/A
------------------
Performance impact
------------------
N/A
-----------------
Deployment impact
-----------------
N/A
----------------
Developer impact
----------------
N/A
---------------------
Infrastructure impact
---------------------
N/A
--------------------
Documentation impact
--------------------
* fuel-qa
* fuel-devops
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
* Anton Studenov (astudenov): astudenov@mirantis.com
Other contributors:
* Dennis Dmitriev (ddmitriev): ddmitriev@mirantis.com
Mandatory design review:
Anastasiia Urlapova, Denys Dmytriiev
Work Items
==========
* Implement DevopsClient and move get_admin_ip/get_node_remote/etc
to this class
* Change Shell to use DevopsClient instead of direct access to
Environment
* Refactor ntp.py to be independent of get_admin/get_slave_remote functions
* Deprecate get_admin_ip/get_node_remote/etc functions
Dependencies
============
None
------------
Testing, QA
------------
None
Acceptance criteria
===================
DevopsClient provides all necessary methods to interact with devops
environment.
----------
References
----------
None

View File

@@ -1,190 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================================
Modify release repositories using Fuel client
=============================================
https://blueprints.launchpad.net/fuel/+spec/fuelclient-modify-release-repos
--------------------
Problem description
--------------------
Currently we use the fuel-mirror tool both to build partial mirrors
and to modify default release repos. We'd better use
packetary for building partial mirrors and fuelclient for
modifying repos.
----------------
Proposed changes
----------------
The proposal is to implement an option in fuelclient that
could be used to modify repos in Fuel releases.
Then we could get rid of fuel-mirror totally.
Web UI
======
None
Nailgun
=======
Get and put handlers for release attributes metadata
must be implemented.
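A hypothetical sketch of the GET/PUT shape follows; Nailgun's real base
classes, routing and storage differ, so everything here is an in-memory
stand-in for illustration only:

.. code-block:: python

    # Hypothetical sketch: an in-memory stand-in for the release storage.
    RELEASES = {1: {'attributes_metadata': {'repos': []}}}

    class ReleaseReposHandler(object):

        def GET(self, release_id):
            # Return the repos section of the release attributes metadata.
            return RELEASES[release_id]['attributes_metadata']['repos']

        def PUT(self, release_id, repos):
            # 'repos' is the list parsed from the YAML file passed to
            # `fuel2 release repos update <release_id> <-f repos.yaml>`.
            RELEASES[release_id]['attributes_metadata']['repos'] = repos
            return repos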
Data model
----------
None
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
There will be the following commands:

.. code-block:: bash

    fuel2 release list
    fuel2 release repos list <release_id>
    fuel2 release repos update <release_id> <-f repos.yaml>
Plugins
=======
None
Fuel Library
============
None
------------
Alternatives
------------
Continue to use fuel-mirror.
--------------
Upgrade impact
--------------
None
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
It will be easy to modify default release repos using Fuel client.
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
Sections in the documentation that mention fuel-mirror should
be removed. Instead there should be references to the packetary
and fuelclient docs. The fuelclient section should be modified
to reflect this additional repository manipulation
functionality.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
Vladimir Kozhukalov <vkozhukalov@mirantis.com>
Mandatory design review:
Bulat Gaifullin <bgaifullin@mirantis.com>
Roman Prikhodchenko <rprikhodchenko@mirantis.com>
Work Items
==========
* Implement release repos get and put handlers in nailgun.
* Implement release repos update subcommand in fuelclient.
Dependencies
============
None
------------
Testing, QA
------------
There should be a functional test that checks this new feature.
Acceptance criteria
===================
It must be possible to update release repos using the fuel2
command. The command should accept a YAML file with the list of repositories.
----------
References
----------
None

View File

@@ -1,548 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
======================================
Fuel Graph Concept Extension And Usage
======================================
https://blueprints.launchpad.net/fuel/+spec/graph-concept-extension
This extension of the Fuel graph concept introduces the ability to execute
graphs for different purposes.
-------------------
Problem description
-------------------
Currently, the Fuel graph concept is tied to the deployment process. For
example, we can't use graphs for provisioning, deletion or verification.
Those actions are hardcoded in Nailgun and Astute, and there's no way to
extend them easily.
Meanwhile, we want to see every action as a graph in order to make it pluggable
and extendable, since end users usually want to change these actions somehow.
For instance, some of them want to use the torrent protocol for image delivery
instead of HTTP, and there's no way to change that so far.
Another problem is that we can't verify advanced network configuration in
bootstrap mode. The problem lies in our approach, where network-checker is
responsible only for the basic configuration, while we need the l23network
manifest to be applied in order to verify the network against the real
configuration. Having everything in graphs allows reusing that puppet
manifest, and hence preparing the network for verification.
There're plenty of places where we have hardcoded actions instead of
declarative ones. Moving them into graphs will help to clean and simplify
our code base, as well as provide opportunity to customize them manually
or via plugins.
----------------
Proposed changes
----------------
#. **Transaction Manager**
Nailgun should have a general transaction manager for running a single graph
as well as a bunch of graphs within one transaction.
The transaction manager must be used by the new RESTful API endpoint
for executing graphs. See REST API section for details.
#. **Default Graphs for Basic Actions**
At minimum we want to see the following actions as graphs:
* Deployment (done)
* Provisioning
* Verification
* Deletion
Hence, fuel-library should provide tasks for those graphs the same
way it provides them for deployment. The proposed way is to separate
them on the filesystem (drop them into different directories) and sync them
one by one by passing an additional argument to the Fuel CLI. Example:

.. code-block:: console

    fuel rel --sync-deployment-tasks --dir /etc/puppet/ --graph provision
#. **Scenarios**
Scenarios are a way to run specified graphs one by one, each on a pre-defined
set of nodes. A set of nodes can be specified either explicitly or by
using a YAQL expression.
Scenarios are a good way to provide high-level orchestration flows such
as "Deploy Changes" in a declarative manner.
#. **New Astute tasks**
In order to support existing scenarios as graphs we need to implement the
following tasks in task-based format in Astute:
* ``erase_node`` - run mcollective erase_node action
* ``master_shell`` - execute a task on the master node with a particular
node context
* ``move_to_bootstrap`` - re-register the node with a bootstrap profile in
cobbler
#. **New method of node status updates**
In order to get rid of the hardcoded state machine of node statuses, we
need to provide a way to set node statuses in a data-driven format.
Hence, it's proposed to add a set of callbacks: ``on_success``, ``on_error``
and ``on_stop`` (see the sketch after this list).
.. code-block:: yaml

    graph_metadata:
      on_success:
        node_attributes:
          status: ready
      on_error:
        node_attributes:
          status: error
          error_type: deploy
      on_stop: null
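To make the intent concrete, here is a minimal sketch of how a transaction
manager could apply these declarative callbacks; the function and the node
representation are assumptions:

.. code-block:: python

    # Sketch only: applies the graph_metadata callbacks above to nodes.
    # The dict shapes mirror the YAML; the rest is an assumption.
    def apply_callbacks(graph_metadata, nodes, outcome):
        """outcome is one of 'success', 'error', 'stop'."""
        callback = graph_metadata.get('on_' + outcome)
        if not callback:                 # e.g. on_stop: null
            return
        for node in nodes:
            for attr, value in callback.get('node_attributes', {}).items():
                node[attr] = value       # e.g. status: ready

    nodes = [{'status': 'deploying'}]
    meta = {'on_success': {'node_attributes': {'status': 'ready'}}}
    apply_callbacks(meta, nodes, 'success')
    assert nodes == [{'status': 'ready'}]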
Web UI
======
Custom graphs management in Fuel UI was described and implemented within
[1], although the ability to execute a sequence of graphs is introduced in this
spec as an extension.
Working in the 'Custom Scenarios' deployment mode, the user should be able to
specify a sequence of space-separated graph types that they want to execute.
Also, it is necessary to use the new ``/api/v1/graphs/execute/`` handler (that
works with the transaction manager) in Fuel UI to run one or more graphs.
Nailgun
=======
Data model
----------
#. Having everything defined as a graph, plus a mechanism to run several
graphs within a single transaction, simply means we can't rely on a task's
name anymore. It makes more sense to distinguish runs by two criteria:
``graph_type`` and ``dry_run``. So it's proposed to extend the ``tasks``
table with those columns and mark ``tasks.name`` as a deprecated column.
#. Transient node statuses shouldn't be persisted in the database. That means
the ``nodes::status`` attribute should contain either ``discover``,
``provisioned`` or ``deployed``. The statuses ``provisioning``, ``deploying``
and ``error`` should be calculated based on node attributes:
* ``provisioning`` = ``discover`` + ``progress >= 0``
* ``deploying`` = ``provisioned`` + ``progress >= 0``
* ``error`` = ``error_type`` is not ``null``
When any action is committed, ``progress`` should be reset to
``100``.
``error_type`` should not be limited to a pre-defined set of types.
#. In order to implement scenarios, we need to design a database schema for
the new entity. Here's the proposed solution (a SQLAlchemy sketch follows
this list):
.. code-block:: text
.
SCENARIOS_ACTS
SCENARIOS +--------------------+
+-----------+ | + id (pk) |
| + id (pk) |<------------| + scenario_id (fk) |
| + name | | + order |
+-----------+ | + graph_type |
| + nodes |
+--------------------+
where:
* ``scenarios::name`` is a unique identifier to be used by clients for
running scenarios;
* ``scenarios_acts::scenario_id`` is a foreign key to ``scenarios``;
* ``scenarios_acts::order`` is the execution order within the scenario;
* ``scenarios_acts::graph_type`` is the graph type to run;
* ``scenarios_acts::nodes`` is a JSON column that may contain either a
hardcoded JSON array of node IDs or a JSON object with a ``yaql_exp`` key
for getting node IDs on the fly.
Executing a scenario means running its graphs on the corresponding sets of
nodes within a single transaction.
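The schema above maps naturally onto two tables. The following SQLAlchemy
sketch mirrors the diagram; column types, lengths and the generic JSON column
are assumptions for illustration only.

.. code-block:: python

    # Sketch of the proposed schema; types and constraints are assumptions.
    from sqlalchemy import JSON, Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()


    class Scenario(Base):
        __tablename__ = 'scenarios'

        id = Column(Integer, primary_key=True)
        # Unique identifier used by clients to run the scenario.
        name = Column(String(255), unique=True, nullable=False)
        acts = relationship('ScenarioAct', order_by='ScenarioAct.order')


    class ScenarioAct(Base):
        __tablename__ = 'scenarios_acts'

        id = Column(Integer, primary_key=True)
        scenario_id = Column(Integer, ForeignKey('scenarios.id'),
                             nullable=False)
        order = Column(Integer, nullable=False)  # execution order in scenario
        graph_type = Column(String(255), nullable=False)
        # Either a JSON array of node IDs or {"yaql_exp": "..."} that is
        # evaluated at execution time to select nodes on the fly.
        nodes = Column(JSON)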
REST API
--------
#. **Graphs Execution**
.. http:post:: /graphs/execute
Execute passed graphs.
**Request:**
.. code-block:: http

    POST /graphs/execute HTTP/1.1

    {
      "cluster": <cluster-id>,
      "graphs": [
        {
          "type": "graph-type-1",
          "nodes": [1, 2, 3, 4],
          "tasks": ["task-a", "task-b"]
        },
        {
          "type": "graph-type-2",
          "nodes": [3, 4],
          "tasks": ["task-c", "task-d"]
        }
      ],
      "dry_run": false,
      "force": false
    }
where:
* ``cluster`` -- cluster id;
* ``graphs`` -- list of graphs to be executed, with optional ``nodes``
and ``tasks`` params;
* ``dry_run`` (optional, default: false) -- run graphs in dry run mode;
* ``force`` (optional, default: false) -- execute tasks anyway; don't
take into account previous runs.
**Response:**
.. code-block:: http
HTTP/1.1 202 Accepted
{
"task_uuid": "transaction-uuid",
...
}
where:
* ``task_uuid`` -- unique ID of accepted transaction
As the graph concept was extended, some endpoints should be renamed to avoid
confusion. In the following requests the deployment/deploy word
should be removed:
* ``GET /releases/<release_id>/deployment_graphs/``
* ``GET/POST/PUT/PATCH/DELETE /releases/<release_id>/deployment_graphs/<graph_type>/``
* ``GET /releases/<release_id>/deployment_tasks/``
* ``GET /clusters/<cluster_id>/deployment_graphs/``
* ``GET /clusters/<cluster_id>/deployment_tasks/``
* ``GET/POST/PUT/PATCH/DELETE /clusters/<cluster_id>/deployment_graphs/<graph_type>/``
* ``GET /plugins/<plugin_id>/deployment_graphs/``
* ``GET/POST/PUT/PATCH/DELETE /plugins/<plugin_id>/deployment_graphs/<graph_type>/``
* ``GET /clusters/<cluster_id>/deploy_tasks/graph.gv``
#. **Scenarios**
.. http:post:: /scenarios
Create a new workflow.
**Request:**
.. code-block:: http

    POST /scenarios HTTP/1.1

    {
      "name": "deploy-changes",
      "scenario": [
        {
          "graph_type": "provision",
          "nodes": {
            "yaql_exp": "select nodes for provisioning"
          }
        },
        {
          "graph_type": "deployment",
          "nodes": ...
        },
        ...
      ]
    }
.. http:get:: /scenarios
List available scenarios.
**Response:**
.. code-block:: http
HTTP/1.1 200 Ok
[
{
"id": 1,
"name": "deploy-changes",
"scenario": [
... scenario's acts ...
]
},
{
"id": 2,
...
}
]
.. http:post:: /scenarios/:name/execute
Run a scenario with the given ``name``. If successful, a transaction ID
is returned.
**Response:**
.. code-block:: http
HTTP/1.1 202 Accepted
{
"task_uuid": "transaction uuid"
}
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
For listing/uploading/downloading, the common custom graph commands [0]
will be used.
The graph execution command should stay practically the same; however, it is
necessary to be able to define several graph types to run them one by one.
Also, it should be possible to enforce execution of tasks without skipping
them, and to run only specific tasks ignoring dependencies (an illustrative
API-level sketch follows the option list below).
.. code-block:: console
fuel2 graph execute --env 1 [--nodes 1 2 3]
[--graph-types gtype1 gtype2]
[--task-names task1 task2]
[--force]
[--dry-run]
where:
* ``--nodes`` executes only on the given nodes;
* ``--graph-types`` executes the given graphs within one transaction;
* ``--task-names`` executes only the given tasks, skipping others;
* ``--force`` executes tasks regardless of previous runs;
* ``--dry-run`` executes in dry-run mode (doesn't affect nodes).
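For illustration, the CLI invocation above roughly corresponds to a single
call to the new ``/graphs/execute`` endpoint. The endpoint address, port and
token handling below are placeholders; only the payload shape follows the
REST API section of this spec.

.. code-block:: python

    # Hedged example: endpoint address and token are placeholders.
    import requests

    payload = {
        "cluster": 1,
        "graphs": [
            {"type": "gtype1", "nodes": [1, 2, 3]},
            {"type": "gtype2", "nodes": [1, 2, 3]},
        ],
        "dry_run": False,
        "force": False,
    }
    response = requests.post(
        "http://10.20.0.2:8000/api/v1/graphs/execute",
        json=payload,
        headers={"X-Auth-Token": "<keystone token>"},
    )
    response.raise_for_status()
    # The returned transaction UUID can be polled to track progress.
    print(response.json()["task_uuid"])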
Plugins
=======
None
Fuel Library
============
* Compose the default provisioning and deletion graphs.
* Compose the default verification graph. This graph should contain tasks
for configuring and checking the network.
* All default graphs should be loaded during the Fuel installation with
the corresponding graph types.
------------
Alternatives
------------
None for the whole approach.
For the verification tool:
* Use the standard network verification mechanism, although in this
case we have to deal with a non-realistic network configuration.
* Use the connectivity checker plugin [2] to verify the network during
the deployment, but it would take more time to rework.
--------------
Upgrade impact
--------------
Some API endpoints are renamed, which breaks backward compatibility.
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
Ability to:
* execute different graphs for different purposes.
* check the realistic network configuration design before the deployment
process.
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
The whole mechanism is more flexible. The provisioning part is configurable
and easier to debug. Thanks to the verification graph mechanism, error
detection before the deployment stage may save a lot of time when
reconfiguration is necessary.
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
* API, CLI and UI documentations should be extended according to the
appropriate changes.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
bgaifullin
Other contributors:
vsharshov (astute)
sbogatkin (library: deletion, provisioning)
lefremova (library: verification)
ikutukov (client)
Mandatory design review:
ashtokolov
vkuklin
Work Items
==========
* Implement transaction manager that runs a bunch of graphs one by one,
each with own context generated on top of changes committed by previous
graph.
* Implement new Astute tasks for moving nodes to bootstrap, running shell
tasks on master node with context of other roles and removing nodes.
* Implement new graphs to run provisioning, deployment, deletion and
verification.
* Implement CLI interface to run graphs in one transaction.
* Implement Fuel UI to run graphs in one transaction as well as scenarios.
Dependencies
============
Custom graph management on UI [1].
-----------
Testing, QA
-----------
* New logic in Nailgun should be covered by unit and integration tests.
* Functional tests that execute verification and provisioning graphs on
bootstrap nodes should be introduced.
Acceptance criteria
===================
* The Fuel graph concept is extended so we can use the graph mechanism
for different purposes.
* A network checking tool for realistic configurations is introduced in Fuel
via execution of an appropriate verification graph on bootstrap nodes.
Thus, as a cloud operator, I can investigate production-specific
network defects before the deployment.
* Provisioning and deletion mechanisms also work via execution of the
corresponding graphs.
* While the default graphs for the base actions are loaded during the Fuel
installation, the user may specify and execute custom graphs.
----------
References
----------
[0] Allow user to run custom graph on cluster
https://blueprints.launchpad.net/fuel/+spec/custom-graph-execution
[1] Custom graph management on UI
https://blueprints.launchpad.net/fuel/+spec/ui-custom-graph
[2] Connectivity checker plugin
https://github.com/xenolog/fuel-plugin-connectivity-checker

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============================================================
fuel-devops: Support master node installation as node extension
===============================================================
https://blueprints.launchpad.net/fuel/+spec/master-node-installation-as-devops-extension
In the scope of [1], node role extensions were introduced. This spec proposes
to use them for bootstrapping the Fuel master node.
--------------------
Problem description
--------------------
There are two places in the code where the installation of the Fuel master node is done:
* fuel-qa/fuelweb_test/models/environment.py::EnvironmentModel::setup_environment
* fuel-devops/devops/shell.py::Shell::do_admin_setup
These two places do the same thing, but they have different implementations.
This is not optimal from the development and architecture points of view.
----------------
Proposed changes
----------------
Unify the fuel-qa and fuel-devops methods to get a single way to set up the
master node in fuel-devops, instead of depending on unsuitable
get_admin_remote() methods.
The process should also be split into the following steps:
1. Sending scancodes of keys into the boot menu
2. Waiting for the SSH port to open
3. Waiting for the deploy-end phrase to appear in the logs (optionally
waiting for docker containers to be unpacked)
Example of the required steps to bootstrap the admin node::

    admin_node = env.get_node(name='admin')
    admin_node.kernel_cmd = "<custom kernel command>"
    admin_node.bootstrap_and_wait()
    admin_node.deploy_wait()
Web UI
======
None
Nailgun
=======
None
Data model
----------
None
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
None
------------
Alternatives
------------
None
--------------
Upgrade impact
--------------
None
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
None
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
None
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
* Anton Studenov (astudenov): astudenov@mirantis.com
Other contributors:
* Dennis Dmitriev (ddmitriev): ddmitriev@mirantis.com
* Dmitry Tyzhnenko (dtyzhnenko): dtyzhnenko@mirantis.com
* Kirill Rozin (krozin): krozin@mirantis.com
Mandatory design review:
None
Work Items
==========
- Investigate the existing code
- Move/Rewrite fuel-devops/helpers/node_manager.py to extension files
- Remove node_manager.py and use extension code in shell.py
- Update fuel-qa/fuelweb_test/models/environment.py to use node extension
Dependencies
============
https://blueprints.launchpad.net/fuel/+spec/template-based-virtual-devops-environments
------------
Testing, QA
------------
None
Acceptance criteria
===================
- Setup of the Fuel master node is done inside the ``setup`` method of
node_extension for Fuel versions 5.0, 6.1 and 7.0.
- The API remains backward-compatible with previous versions.
----------
References
----------
[1] - https://blueprints.launchpad.net/fuel/+spec/template-based-virtual-devops-environments

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
========================================================
Support extensions of NIC and Node attributes in plugins
========================================================
https://blueprints.launchpad.net/fuel/+spec/nics-and-nodes-attributes-via-plugin
A plugin developer should be able to extend NIC, Bond and Node properties
via a plugin.
-------------------
Problem description
-------------------
Plugins should have a mechanism for providing additional attributes for NICs,
bonds and nodes. In the future this can be useful when a plugin provides some
technology which should work "per interface" or "per node". For example,
in the case of Contrail we need to support VFs for vRouter on each network
interface.
----------------
Proposed changes
----------------
Extend the Fuel plugin framework with functionality for merging in additional
NIC, Bond and Node attributes provided by plugins.
Web UI
======
* UI should properly represent schema and data for NIC, BOND and Node
attributes provided by plugin on ``Configure Interfaces`` and ``Node``
screens.
* The client can receive the default state of core and plugin NIC and BOND
attributes via the ``/nodes/interfaces/default_assignment/`` and
``/nodes/bonds/attributes/defaults`` API calls.
* ``/nodes/:id/attributes`` should operate with both core and plugin Node
attributes.
* ``Load defaults`` button on ``Configure Interfaces`` screen should return
default data for NIC attributes.
* ``Load defaults`` button on ``Node`` details dialog should return default
data for Node attributes.
* In case of bond creation, all slave interfaces should have the same set of
attributes with identical structure, depending on the availability conditions
for different types of bonds [0]_.
* The current mechanism for attribute availability during bonding (e.g. DPDK)
stays the same and is implemented in the UI.
Nailgun
=======
Data model
----------
New default core attributes for NICs and BONDs should be described in the
``openstack.yaml`` file. They will be mapped onto ``nic_attributes`` and
``bond_attributes`` in the Release.
Plugin-related information with default NIC, BOND and Node attributes
will be stored in the ``nic_attributes_metadata``, ``bond_attributes_metadata``
and ``node_attributes_metadata`` attributes of the Plugin model (this can
change based on the Plugins v5 spec [1]_).
Additional models ``NodeNICInterfaceClusterPlugin``, ``NodeClusterPlugin`` and
``NodeBondInterfaceClusterPlugin`` will be used to store the actual state of
plugin-related NIC, BOND and Node attribute data per each interface, bond or
node. By default, the ``attributes`` fields of these models should be filled
with data from ``Plugin.nic_attributes_metadata``,
``Plugin.node_attributes_metadata`` and ``Plugin.bond_attributes_metadata``
respectively.
Fuel core NIC, BOND and Node ``attributes`` [2]_ can be stored in the
``attributes`` field of each related table. Core NIC ``attributes`` will be
filled with default attributes from the Release (taken from
``nic_attributes``), and values will be generated in the same way as for
``interface_properties``. Data from
``NodeNICInterfaceClusterPlugin.attributes`` will be mixed with
``NodeNICInterface.attributes`` based on the enabled or disabled state of
plugins during the ``/nodes/:id/interfaces/`` API call. And vice versa:
data from the client will be split and stored between these two tables. The
same logic will be used for BOND and Node attributes (see the sketch below).
``NodeNICInterface.meta`` will be used to store read-only metadata and filled
with ``Node.meta`` values.
If a plugin does not provide additional NIC, BOND or Node attributes, then
relations with empty ``attributes`` should not exist.
A plugin can override core interface attributes. If two plugins override the
same attribute, a conflict exception should be raised.
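As an illustration of the mixing logic, a simplified Python sketch follows.
The model and field names come from this spec; the query and
conflict-handling details are assumptions.

.. code-block:: python

    # Simplified sketch of merging plugin NIC attributes into core ones.
    def mixed_nic_attributes(interface, plugin_links):
        """interface: NodeNICInterface; plugin_links: enabled
        NodeNICInterfaceClusterPlugin records for this interface."""
        attributes = dict(interface.attributes)  # core attributes
        owners = {}                              # attribute name -> plugin id
        for link in plugin_links:
            for name, payload in link.attributes.items():
                if name in owners:
                    # Two plugins override the same attribute: conflict.
                    raise ValueError(
                        'attribute %r set by plugins %r and %r'
                        % (name, owners[name], link.cluster_plugin_id))
                owners[name] = link.cluster_plugin_id
                attributes[name] = payload       # plugins may override core
        return attributes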
Nailgun DB tables changes:
**Plugin**
``nic_attributes_metadata``
Plugin attributes data taken from ``nic_attributes`` yaml
``bond_attributes_metadata``
Plugin attributes data taken from ``bond_attributes`` yaml
``node_attributes_metadata``
Plugin attributes data taken from ``node_attributes`` yaml
**NodeNICInterface**
``attributes``
NIC attributes in DSL format
``meta``
Read-only metadata
**NodeNICInterfaceClusterPlugin**
``id``
unique identifier
``attributes``
Actual state of plugin NIC attributes data
``cluster_plugin_id``
Foreign key on cluster_plugins table
``interface_id``
Foreign key on node_nic_interfaces table
``node_id``
Foreign key on nodes table
Example of the ``attributes`` field:

.. code-block:: json

    {
      "attribute_a": {
        "label": "NIC attribute A",
        "weight": 10,
        "description": "Some description",
        "type": "text",
        "value": "test"
      },
      "attribute_b": {
        "label": "NIC attribute B",
        "weight": 20,
        "description": "Some description",
        "type": "checkbox",
        "value": false
      }
    }
**NodeBondInterface**
``attributes``
BOND attributes in DSL format
**NodeBondInterfaceClusterPlugin**
``id``
Unique identifier
``attributes``
Actual state of plugin Bond attributes data
``cluster_plugin_id``
Foreign key on cluster_plugins table
``bond_id``
Foreign key on node_bond_interfaces table
``node_id``
Foreign key on nodes table
**NodeClusterPlugin**
``id``
Unique identifier
``attributes``
Actual state of plugin Node attributes data
``cluster_plugin_id``
Foreign key on cluster_plugins table
``node_id``
Foreign key on nodes table
**Release**
``nic_attributes``
Attributes with default values for NICs
``bond_attributes``
Attributes with default values for BONDs
Data from ``attributes`` in ``NodeNICInterface``,
``NodeNICInterfaceClusterPlugin``, ``NodeBondInterface``,
``NodeBondInterfaceClusterPlugin``, ``Node`` and ``NodeClusterPlugin`` should
be serialized during deployment and sent to Astute with the other attributes.
This is how an astute.yaml part will look for additional NIC attributes:
.. code-block:: yaml

    interfaces:
      enp0s1:
        vendor_specific:
          driver: e1000
          bus_info: "0000:00:01.0"
          attribute_a: "spam"
          attribute_b: false
      enp0s2:
        vendor_specific:
          driver: e1000
          bus_info: "0000:00:02.0"
          attribute_a: "egg"
          attribute_b: true
for BOND attributes:

.. code-block:: yaml

    transformations:
      - bridge: br-mgmt
        name: bond0
        interfaces:
          - enp0s1
          - enp0s2
        bond_properties:
          mode: balance-rr
        interface_properties:
          vendor_specific:
            disable_offloading: true
            attribute_a: "test"
            attribute_b: true
        action: add-bond
for Node attributes:

.. code-block:: yaml

    plugin_section_a:
      attribute_a: "test"
      attribute_b: false
REST API
--------
There will be new API calls that provide default attributes for BONDs and Nodes:
===== ============================================ ===========================
HTTP URL Description
===== ============================================ ===========================
GET /api/v1/nodes/:id/bonds/attributes/defaults/ Get default bond attributes
for specific release
GET /api/v1/nodes/:id/attributes/defaults/ Get default node attributes
for specific release
===== ============================================ ===========================
The response format for GET ``/nodes/:id/bonds/attributes/defaults``:

.. code-block:: json

    {
      "additional_attributes": {
        "metadata": {
          "label": "Plugins attributes section for bonds",
          "weight": 50
        },
        "attribute_a": {
          "label": "BOND attribute A",
          "weight": 10,
          "description": "Some description",
          "type": "text",
          "value": "test"
        },
        "attribute_b": {
          "label": "BOND attribute B",
          "weight": 20,
          "description": "Some description",
          "type": "checkbox",
          "value": false
        }
      }
    }
GET ``/nodes/:id/interfaces/`` method should return data with the following
structure:
.. code-block:: json

    [
      {
        "id": 1,
        "type": "ether",
        "name": "enp0s1",
        "assigned_networks": [],
        "driver": "igb",
        "mac": "00:25:90:6a:b1:10",
        "state": null,
        "max_speed": 1000,
        "current_speed": 1000,
        "pxe": false,
        "bus_info": "0000:01:00.0",
        "meta": {
          "sriov": {
            "available": true,
            "pci_id": "12345"
          },
          "dpdk": {
            "available": true
          },
          "offloading_modes": [
            {
              "state": null,
              "name": "tx-checksumming",
              "sub": [
                {
                  "state": null,
                  "name": "tx-checksum-sctp",
                  "sub": []
                }
              ]
            }
          ]
        },
        "attributes": {
          "offloading": {
            "metadata": {
              "label": "Offloading",
              "weight": 10
            },
            "disable_offloading": {
              "label": "Disable offloading",
              "weight": 10,
              "type": "checkbox",
              "value": false
            },
            "offloading_modes": {
              "label": "Offloading modes",
              "weight": 20,
              "description": "Offloading modes",
              "type": "offloading_modes",
              "value": {
                "tx-checksumming": true,
                "tx-checksum-sctp": false
              }
            }
          },
          "mtu": {
            "metadata": {
              "label": "MTU",
              "weight": 20
            },
            "mtu_value": {
              "label": "MTU",
              "weight": 10,
              "type": "text",
              "value": ""
            }
          },
          "sriov": {
            "metadata": {
              "group": "sriov",
              "label": "SRIOV",
              "weight": 30
            },
            "sriov_enabled": {
              "label": "SRIOV enabled",
              "type": "checkbox",
              "enabled": true,
              "weight": 10
            },
            "sriov_numvfs": {
              "label": "virtual_functions",
              "type": "number",
              "min": "0",
              "max": "10",  // taken from sriov_totalvfs
              "value": "5",
              "weight": 20
            },
            "physnet": {
              "label": "physical_network",
              "type": "text",
              "value": "",
              "weight": 30
            }
          },
          "dpdk": {
            "metadata": {
              "group": "nfv",
              "label": "DPDK",
              "weight": 40
            },
            "dpdk_enabled": {
              "label": "DPDK enabled",
              "type": "checkbox",
              "enabled": false,
              "weight": 10
            }
          },
          "additional_attributes": {
            "metadata": {
              "label": "All plugins attributes section",
              "weight": 50
            },
            "attribute_a": {
              "label": "NIC attribute A",
              "weight": 10,
              "description": "Some description",
              "type": "text",
              "value": "test",
              "nic_plugin_id": 1
            },
            "attribute_b": {
              "label": "NIC attribute B",
              "weight": 20,
              "description": "Some description",
              "type": "checkbox",
              "value": false,
              "nic_plugin_id": 1
            }
          }
        }
      },
      {
        "type": "bond",
        "name": "bond0",
        "state": null,
        "assigned_networks": [],
        "bond_properties": {
          "type__": "linux",
          "mode": "balance-rr"
        },
        "mac": null,
        "mode": "balance-rr",
        "slaves": [],
        "attributes": {
          "mode": {
            "metadata": {
              "label": "Mode",
              "weight": 10
            },
            "mode_value": {
              "label": "Mode",
              "weight": 10,
              "type": "select",
              "values": [
                {"label": "balance-rr", "data": "balance-rr"},
                {"label": "some-label-1", "data": "some-value-1"},
                {"label": "some-label-n", "data": "some-value-n"}
              ],
              "value": "balance-rr"
            }
          },
          "offloading": {
            "metadata": {
              "label": "Offloading",
              "weight": 20
            },
            "disable_offloading": {
              "label": "Disable offloading",
              "weight": 10,
              "type": "checkbox",
              "value": false
            },
            "offloading_modes": {
              "label": "Offloading modes",
              "weight": 20,
              "description": "Offloading modes",
              "type": "offloading_modes",
              "value": {
                "tx-checksumming": true,
                "tx-checksum-sctp": false
              }
            }
          },
          "mtu": {
            "metadata": {
              "label": "MTU",
              "weight": 30
            },
            "mtu_value": {
              "label": "MTU",
              "weight": 10,
              "type": "text",
              "value": ""
            }
          },
          "additional_attributes": {
            "metadata": {
              "label": "All plugins attributes section",
              "weight": 40
            },
            "attribute_a": {
              "label": "BOND attribute A",
              "weight": 10,
              "description": "Some description",
              "type": "text",
              "value": "test",
              "bond_plugin_id": 1
            },
            "attribute_b": {
              "label": "BOND attribute B",
              "weight": 20,
              "description": "Some description",
              "type": "checkbox",
              "value": false,
              "bond_plugin_id": 1
            }
          }
        }
      }
    ]
In case of Node attributes, GET ``/nodes/:id/attributes/``:
.. code-block:: json

    {
      "cpu_pinning": {},
      "hugepages": {},
      "plugin_section_a": {
        "metadata": {
          "group": "some_new_section",
          "label": "Section A"
        },
        "attribute_a": {
          "label": "Node attribute A",
          "description": "Some description",
          "type": "text",
          "value": "test"
        },
        "attribute_b": {
          "label": "Node attribute B",
          "description": "Some description",
          "type": "checkbox",
          "value": false
        }
      }
    }
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
* NIC, BOND and Node attributes can be described in additional optional
config yaml files.
* Basic skeleton description for NICs in ``nic_attributes`` yaml file:
.. code-block:: yaml

    attribute_a:
      label: "NIC attribute A"
      description: "Some description"
      type: "text"
      value: ""
    attribute_b:
      label: "NIC attribute B"
      description: "Some description"
      type: "checkbox"
      value: false
For Bond in the ``bond_attributes`` yaml file:

.. code-block:: yaml

    attribute_a:
      label: "Bond attribute A"
      description: "Some description"
      type: "text"
      value: ""
    attribute_b:
      label: "Bond attribute B"
      description: "Some description"
      type: "checkbox"
      value: false
For Node in the ``node_attributes`` yaml file:

.. code-block:: yaml

    plugin_section_a:
      metadata:
        group: "some_new_section"
        label: "Section A"
      attribute_a:
        label: "Node attribute A for section A"
        description: "Some description"
        type: "text"
      attribute_b:
        label: "Node attribute B for section A"
        description: "Some description"
        type: "checkbox"
Actually, NIC and Node attributes should have a structure similar to the one
in the ``openstack.yaml`` file.
* Fuel plugin builder should validate the schema for NIC and Node attributes
in the relevant config files if they exist.
Fuel Library
============
None
------------
Alternatives
------------
None
--------------
Upgrade impact
--------------
Provide migrations to transform NIC and Bond ``interface_properties`` into
``nic_attributes`` and ``bond_attributes`` respectively.
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
All the plugin NIC attributes will use the same UI representation as core
attributes, so there is no direct UI impact. UI code should be adapted to
work with ``attributes`` instead of ``interface_properties``.
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
Describe in docs how plugin developers can provide additional NICs and Nodes
attributes via plugins.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
* Andriy Popovych <apopovych@mirantis.com>
Other contributors:
* Anton Zemlyanov <azemlyanov@mirantis.com>
QA assignee:
* Ilya Bumarskov <ibumarskov@mirantis.com>
Mandatory design review:
* Aleksey Kasatkin <akasatkin@mirantis.com>
* Vitaly Kramskikh <vkramskikh@mirantis.com>
Work Items
==========
* [Nailgun] Provide changes in DB model and new plugin config files sync.
* [Nailgun] Implement API handlers for Node default attributes.
* [Nailgun] Provide mixing for core and plugin Node attributes.
* [Nailgun] Provide serialization of plugin-related attributes for astute.
* [Nailgun network extension] Implement API handlers for Bond and default
attributes.
* [Nailgun network extension] Provide mixing of core and plugin NICs and
Bonds attributes and proper data storing.
* [Nailgun network extension] Change current API for NICs to support plugin
attributes.
* [Nailgun network extension] Refresh NICs attributes with default data.
* [UI] Handle plugin Bond, NICs and Nodes attributes on ``Node`` details
dialog and ``Configure Interfaces`` screens.
* [FPB] Templates and validation for optional yaml files: ``nic_attributes``,
``bond_attributes`` and ``node_attributes``.
Dependencies
============
* Plugins v5 [1]_
* Based on implementation of Node attributes [2]_
* Based on network manager extension [3]_
------------
Testing, QA
------------
* Extend TestRail with WEB UI cases for the configuring NIC, Bond and Node
attributes.
* Extend TestRail with API/CLI cases for the configuring NIC, Bond and Node
attributes.
* Manually test that FPB provides validation for additional attributes in
the relevant config files.
Acceptance criteria
===================
* Plugin developers can provide new attributes per network interface, bond
and node via plugin.
----------
References
----------
.. [0] https://github.com/openstack/fuel-web/blob/stable/mitaka/nailgun/nailgun/fixtures/openstack.yaml#L378-L409
.. [1] https://blueprints.launchpad.net/fuel/+spec/plugins-v5
.. [2] https://blueprints.launchpad.net/fuel/+spec/support-numa-cpu-pinning
.. [3] https://blueprints.launchpad.net/fuel/+spec/network-manager-extension

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Puppet noop run for Fuel puppet deployment tasks
================================================
https://blueprints.launchpad.net/fuel/+spec/puppet-noop-run
--------------------
Problem description
--------------------
Currently, Fuel environment re-deployment re-runs all Fuel tasks without any
check for customizations which could have been applied to different OpenStack
and Fuel components: changed files and config values, started and stopped
services, etc. If such changes weren't also applied to the Fuel deployment
tasks (manifests, scripts), which is the most frequent case for users, a new
tasks run (in case of re-deployment or update) could lead to losing the
applied customizations.
----------------
Proposed changes
----------------
Before a re-deployment or update run on a successfully deployed cluster, it
should be possible to get a report about the customizations applied to the
Fuel cluster or a particular node that differ from the previously performed
deployment and could possibly be overridden by a new Fuel tasks run. Those
customizations should be stored in a report file or database in a readable
(or parsable) format, and it should be possible to get them using the REST
API.
A noop run of Fuel tasks could be used as a mechanism for detecting any set
of customizations applied to services, configuration files, etc. in the
cluster. A tasks noop run is able to show changes in the metaparameters of
files (e.g. owner, mode, content), services (e.g. status, service provider),
OpenStack configs (e.g. missing options, incorrect option values) and
other (even custom) resources. This approach could easily be implemented
for all types of tasks: Puppet tasks could be executed with the '--noop'
option, other types could simply be skipped.
Specifically, a Puppet noop run will help to detect the required types of
customizations in a Fuel environment. Puppet stores a report of each run
(even a noop run) in the /var/lib/puppet/reports/<node-fqdn>/ folder in YAML
format. Each Puppet operation is tracked there with a detailed description.
The most important information is whether a resource was changed or not,
which is shown by the 'changed' parameter, so every Puppet operation can
easily be checked by status. Another approach is to generate the Puppet
report in JSON format; enabling this feature requires adding
'--logdest /path/to/file.json' to the end of the puppet apply command. In
that case it's possible to store a report for all Fuel Puppet tasks in one
file, or a separate report per running task.
The implementation of this approach requires the following changes in Fuel:
* Puppet tasks: all Fuel Puppet tasks should support the noop action. Some
tasks may fail with the '--noop' option. Such failures will be stored in the
report, but they won't stop the tasks noop run. They also won't affect
cluster/node status.
* Astute: additional task types will be created for the noop run; those types
adapt all existing tasks to run in noop mode: puppet and shell tasks will be
executed in noop mode, tasks of other types will be skipped. Logging output
for a noop tasks run will be reduced: the debug and verbose options will not
be used for a Puppet noop run.
* Task history: the noop run report should be stored in the deployment tasks
history.
* Nailgun: the noop run report should be available through the Nailgun API
for each particular node in an environment.
* Fuel CLI: it should be possible to run any custom graph for a particular
environment or node with the noop option.
A noop run for any cluster or set of nodes shouldn't change their statuses.
A noop run is not a part of deployment; it should work similarly to
additional checks (the way OSTF works).
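As an illustration, customizations could be extracted from the stored Puppet
reports roughly as follows. The ``resource_statuses``/``changed`` layout
follows the spec text above; the report path and the assumption that Ruby
object tags have been stripped before parsing are illustrative only.

.. code-block:: python

    # Illustrative sketch: assumes Puppet's YAML report has been pre-cleaned
    # of Ruby object tags (e.g. !ruby/object:...) so safe_load can parse it.
    import yaml


    def changed_resources(report_path):
        with open(report_path) as report_file:
            report = yaml.safe_load(report_file)
        # Every resource whose 'changed' flag is set differs from the
        # manifest, i.e. it carries a customization that a real (non-noop)
        # run would overwrite.
        return sorted(
            title
            for title, status in report.get('resource_statuses', {}).items()
            if status.get('changed')
        )


    for resource in changed_resources(
            '/var/lib/puppet/reports/node-1.domain.tld/last_run_report.yaml'):
        print(resource)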
Web UI
======
None
Nailgun
=======
* The Nailgun API should handle a noop_run parameter in requests to deploy,
re-deploy or execute a graph of tasks for the cluster. If 'noop_run' is set,
Nailgun should execute the requested actions as a noop run for the cluster.
* Nailgun shouldn't change the cluster state (e.g. deployed -> deploying)
during and after a noop run, even if it has failed.
Data model
----------
None
REST API
--------
Described in Nailgun section.
Orchestration
=============
RPC Protocol
------------
A 'noop_run' flag will be added to the RPC protocol. The resulting message
sent by Nailgun should look like:
.. code-block:: json

    {
      "api_version": "1",
      "method": "task_deploy",
      "respond_to": "transaction_resp",
      "args": {
        "task_uuid": "10",
        "tasks_graph": "cluster_graph",
        ...,
        "dry_run": false,
        "noop_run": true
      }
    }
Fuel Client
===========
The Fuel client should support the following noop actions:
* Run any graph with a 'noop' option, which would ask Nailgun to format
the message to Astute properly so that Astute runs only 'noop' tasks.
* Start a noop run for a particular environment, node, task or
set of tasks (custom graph).
* Get the report from each noop run.
Plugins
=======
Fuel Puppet tasks in plugins should also support the Puppet noop run with the
new log destination.
Fuel Library
============
None
------------
Alternatives
------------
Manual detection of customizations applied to the cluster.
--------------
Upgrade impact
--------------
None
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
End users will be able to check their environment for customizations before
cluster re-deployment, update or upgrade. They will be notified about the
differences between the current cluster/node state and the original one
(after the last deployment). This will help reduce the risk of missing
important customizations applied to the cluster/nodes.
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
Documentation will have to be updated to reflect changes.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
Denis Egorenko
Other contributors:
Ivan Berezovskiy
Mandatory design review:
Vladimir Kuklin
Vladimir Sharshov
QA engineer:
Timur Nurlygayanov
Work Items
==========
* Update Fuel Astute to support the noop run for all types of tasks.
* Add support for keeping the Puppet noop run report in a parsable format
(YAML or JSON) and make it available for download through an API call or
using the Fuel client.
* Update the Fuel client to be able to apply a custom graph to a particular
environment or set of nodes with the noop option.
* Update Nailgun to ignore noop run errors. They shouldn't affect cluster or
node state/status.
Dependencies
============
None
------------
Testing, QA
------------
* Nailgun's unit and integration tests will be extended to test new feature.
* Astute's unit and integration tests will be extended to test new feature.
* Fuel Client's unit and integration tests will be extended to test new feature.
Acceptance criteria
===================
* It should be possible to execute a noop run only on a successfully deployed
environment.
* It should be possible to check custom changes in services, files, OpenStack
component configuration and other Puppet resources applied to a cluster or
a particular node using a simple Fuel client command.
* It should be possible to get the report of a noop run using the REST API.
* A noop run shouldn't affect the cluster deployment status.
----------
References
----------
1. LP Blueprint https://blueprints.launchpad.net/fuel/+spec/puppet-noop-run

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================
Release as a plugin
===================
Blueprint: https://blueprints.launchpad.net/fuel/+spec/release-as-a-plugin
As a deployment engineer, I want to express a Fuel release as a Fuel plugin
so that I can define, maintain and deploy various flavors of customized
OpenStack deployments in a clean, isolated way, externalized from the common
Fuel provisioning layer.
-------------------
Problem description
-------------------
The nailgun repo still holds onto one of the remaining parts of the data
model: the release fixture. This fixture is used to describe everything about
the deployment from the ground up and is where every change can possibly be
expressed.
----------------
Proposed changes
----------------
By moving the release fixture ``openstack.yaml`` completely into the plugin
framework, we're opening the road to the following changes:
* Making the ``fuel-library`` repo a plugin.
* Shipping multiple OpenStack version release packages, each in its own
plugin.
* As next steps, allowing Fuel to ship with different releases bundled, or
with no pre-bundled releases at all (a lightweight version).
Web UI
======
Support for the case when there is ``no release`` is required.
Basic releases will be shipped as pre-installed plugins; it becomes possible
to uninstall them completely, leaving Fuel without releases at all.
The UI should support the case when no releases are installed, either not
allowing the user to get past the release selection step of the cluster
creation wizard, or not allowing the wizard to start at all.
A message about what to do if no releases are installed should be displayed
to the user.
Nailgun
=======
Pre-installed plugins
---------------------
Providing releases as plugins supposes that the Fuel bundled release fixture
should be shipped and pre-installed as a plugin package.
For details, please see the [1] spec.
Data model
----------
Nailgun's data model will be left intact except for the changes in the
incoming release configuration.
The release name and version are determined in metadata with the following
fields:
.. code-block:: yaml

    releases:
      - release_name: 'ExampleRelease'              #required
        description: 'Example Release Description'  #required
        operating_system: 'ubuntu'                   #required, or its alias "os"
        version: '0.0.1'                             #required
REST API
--------
No changes in REST API
Orchestration
=============
RPC Protocol
------------
None
Fuel Client
===========
Plugins installation is not changed.
Plugins
=======
Plugin adapters
---------------
The Fuel plugin adapter should now be able to understand the new format of
``releases:`` records declared in the plugin ``metadata.yaml``.
The new release loader should be integrated with the plugin adapters.
Release package
---------------
If ``is_release`` is defined and set to ``true`` for a record in the
``releases:`` section, this is the hallmark of a release definition, and
``release_name`` is required.
The ``is_hotpluggable`` flag is not available for release plugins and will
be ignored.
A release could contain any data matching the FPB and Fuel validation
schemas, without any restrictions related to the OS version or to bundling
something other than the OS into the release plugin.
To make updates/upgrades simpler, it's supposed that a plugin could contain
either releases or release extensions, but not both at the same time.
The FPB validator should provide a warning if more than one release is
defined or the plugin name differs from the release name.
As a result, the plugin developer is free to define any folder and file
structure that is comfortable to work with.
Example of ``metadata.yaml``:
.. code-block:: yaml

    ...
    name: 'ExampleRelease'
    version: '10.0.0'
    package_version: '5.0.0'  # plugin package version
    releases:
      - release_name: 'ExampleRelease'
        description: 'Example Release Description'  #required
        operating_system: 'ubuntu'                   #required, or its alias "os"
        version: 'mitaka-10.0'                       #required
        # is_release should be true for plugins that define releases
        is_release: true
        # base_release_path allows defining a template from which all the
        # data tree will be inherited by overriding keys.
        base_release_path: ubuntu-10.0.0/_base.yaml
        networks_path: ubuntu-10.0.0/metadata/networks.yaml
        volumes_path: ubuntu-10.0.0/metadata/volumes.yaml
        roles_path: ubuntu-10.0.0/metadata/roles.yaml
        network_roles_path: ubuntu-10.0.0/metadata/network_roles.yaml
        components_path: ubuntu-10.0.0/metadata/components.yaml
        attributes_path: ubuntu-10.0.0/attributes/attributes.yaml
        vmware_attributes_path: ubuntu-10.0.0/attributes/vmware.yaml
        node_attributes_path: ubuntu-10.0.0/attributes/node.yaml
        nic_attributes_path: ubuntu-10.0.0/attributes/nic.yaml
        bond_attributes_path: ubuntu-10.0.0/attributes/bond.yaml
        graphs:
          - type: default
            tasks_path: ubuntu-10.0.0/graphs/deployment_graph.yaml
          - type: provisioning
            tasks_path: ubuntu-10.0.0/graphs/provisioning_graph.yaml
          - type: deletion
            tasks_path: ubuntu-10.0.0/graphs/deletion_graph.yaml
          - type: network_verification
            tasks_path: ubuntu-10.0.0/graphs/network_verification_graph.yaml
        deployment_scripts_path: ubuntu-10.0.0/deployment_scripts/
        repository_path: ubuntu-10.0.0/repositories
Attributes except deployment scripts, the repository path and graphs will be
ignored for an old-fashioned plugin release (one extending existing release
functionality).
Graph types are required in the release description in order to provide a
good experience to plugin developers and deployment engineers for the
``Deploy changes`` action.
`Graph concept extension <https://review.openstack.org/#/c/343256/>`_.
Fuel Plugin Builder
-------------------
FPB should be able to check the new release schema and that files are
correctly linked as file and folder paths.
It should also provide appropriate warnings in case of deprecated syntax.
Plugins Package v5.0.0 will be supported starting from Fuel v9.1.0;
appropriate validation should be defined.
Under the hood, FPB will perform three operations (a loader sketch follows
this list):
* Data file discovery and loading, making a data tree from plugin files and
rendered configuration templates.
During processing of the metadata file, all attributes with the ``_path``
suffix will be considered special and processed using the following
conditions:
* if a ``some_key_path`` key points to a file or file-like object and it is
possible to load data from it (YAML/JSON), the key will be replaced with the
version without the suffix, ``some_key``, and the data will be placed under
this key in the data tree.
* if a key with the ``_path`` suffix points to a folder like
``./release/fuel-10.0/``, it will be left intact.
* if a key with the ``_path`` suffix is a glob expression like
``release/graphs/\*.yaml``, a file search will be run.
All files matching the glob will be merged into one list if they all have a
list root, or their properties will be merged into a dict if their root is a
dict. In the case of mixed roots the loader will fail.
After the data is merged (as with data from a single file), it will be placed
under the key without the ``_path`` suffix, and the original key will be
removed from the data tree.
* Data tree validation.
* Plugin building and packaging (identical to the current functionality).
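A simplified Python sketch of the ``_path`` key expansion follows; the merge
rules mirror the list above, while error handling, template rendering and
format detection are reduced to a minimum and are assumptions, not the final
FPB implementation.

.. code-block:: python

    # Simplified loader sketch; YAML parsing covers JSON input here as well.
    import glob
    import os

    import yaml


    def expand_paths(tree, root):
        result = {}
        for key, value in tree.items():
            if key.endswith('_path'):
                result.update(_load_path(key, value, root))
            elif isinstance(value, dict):
                result[key] = expand_paths(value, root)
            elif isinstance(value, list):
                # e.g. the 'graphs' list, whose items carry tasks_path keys.
                result[key] = [expand_paths(item, root)
                               if isinstance(item, dict) else item
                               for item in value]
            else:
                result[key] = value
        return result


    def _load_path(key, value, root):
        target = os.path.join(root, value)
        if os.path.isdir(target):
            return {key: value}                  # folders are left intact
        matches = sorted(glob.glob(target)) or [target]
        docs = [yaml.safe_load(open(path)) for path in matches]
        if all(isinstance(doc, list) for doc in docs):
            merged = [item for doc in docs for item in doc]
        elif all(isinstance(doc, dict) for doc in docs):
            merged = {k: v for doc in docs for k, v in doc.items()}
        else:
            raise ValueError('mixed list/dict roots for %r' % key)
        return {key[:-len('_path')]: merged}     # drop the suffix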
Deprecation
-----------
The ``modes`` release parameter is deprecated and will be removed in future
versions.
``tasks.yaml`` is no longer supported.
The ``fuel_version`` field is currently not processed by any business logic
in Nailgun and should be deprecated.
Fuel Library
============
In perspective, the current Fuel Library should become a plugin.
------------
Alternatives
------------
There is an alternative implementation offered by bgaifullin@mirantis.com:
releases are provided as separate packages not related to plugins.
Each release can be registered in Nailgun using the API.
That means it is not required to update the plugins model; we only need to
move openstack.yaml to the fuel-library side.
The release package should include openstack.yaml and deployment tasks.
The plugins model would be kept as is, and plugins would only extend releases
which are registered in Nailgun, instead of declaring new releases.
On the other hand, releases and plugins have quite similar data structures
that differ in the ways they are managed by business logic and how they are
delivered.
It seems sane to make their delivery and management as close as possible
as well.
--------------
Upgrade impact
--------------
It will be possible to ship release upgrades as a plugin.
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
Fuel Plugin Builder
===================
The Fuel Plugin Builder validator should be able to validate the new
releases parameter structure.
---------------
End user impact
---------------
None
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
This feature highly affects Fuel plugins and library developers.
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
Add documentation about fuel plugins format.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
ikutukov@mirantis.com
Other contributors:
Mandatory design review:
bgaifulin@mirantis.com
ikalnitsky@mirantis.com
Work Items
==========
* Bump the plugins package version to v5.0.0.
* Add support for the new manifest version to the ongoing Fuel release.
* Add support for the new manifest version to the ongoing FPB release.
Dependencies
============
None
-----------
Testing, QA
-----------
* Manual testing
* Automated testing with fuel library as the release.
Acceptance criteria
===================
* It is possible to deploy a configuration with a specific set of plugins and
packages.
* It is possible to perform only discovery/provisioning and manage the
host OS plus underlay storage and networking.
* A vanilla Fuel 9.1 installation is possible without any release plugins,
but cluster creation is blocked with a UI notice explaining the situation.
----------
References
----------
[1] - https://blueprints.launchpad.net/fuel/+spec/release-description-in-fuel-library

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
S3 API/Keystone Integration
===========================
The operator should be able to decide whether the S3 API/Keystone integration
in Ceph RADOS Gateway is enabled or not through a checkbox in Fuel.
The administrator should be informed about the trade-off associated with
enabling the integration.
--------------------
Problem description
--------------------
Ceph RADOS Gateway offers multiple backends for client authentication for
both the OpenStack Object Storage v1 API (aka Swift API) and the S3 API.
Unfortunately, request authentication in the S3 API is very different from
its counterpart in OpenStack. Instead of providing tokens, a client
application may always access the object store with a frequently varying
zero-knowledge proof. This assures extra security guarantees but, combined
with the principle that Keystone cannot reveal the credentials it stores,
also increases load and latency, as each S3 request is reflected in a request
to Keystone. This is an architectural limitation that cannot be addressed by
introducing caching, as in the case of the Swift API.
Thus, enabling the S3/Keystone integration in RadosGW is a decision
associated with a fundamental trade-off and should be made after careful
consideration. Nevertheless, the administrator should be able to decide to
turn on the integration through the graphical user interface.
----------------
Proposed changes
----------------
Enabling the S3 API/Keystone integration requires changes in the Ceph
configuration files (an illustrative client-side sketch follows).
On the controller side:
* Put "rgw_s3_auth_use_keystone = True" into the section of
/etc/ceph/ceph.conf dedicated to RadosGW.
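For illustration only: once the option is enabled, an S3 client signs its
requests with EC2-style credentials issued by Keystone (for example via
``openstack ec2 credentials create``), and RadosGW validates each request
against Keystone. The endpoint and credentials below are placeholders.

.. code-block:: python

    # Placeholder endpoint and credentials; RadosGW validates the EC2-style
    # signature against Keystone on every request, which is the source of
    # the extra load described in this spec.
    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://radosgw.example.com:8080',
        aws_access_key_id='<EC2 access key from Keystone>',
        aws_secret_access_key='<EC2 secret key from Keystone>',
    )
    print(s3.list_buckets()['Buckets'])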
Web UI
======
Interaction with the Web UI may be similar to the following scenario:
1. The administrator navigates to the Storage section of the Settings tab.
2. The administrator is presented with an option "Enable S3 API
Authentication via Keystone" (or another appropriate existing one) and a
hint: "Please note that enabling this will increase the load on the Keystone
service. Please consult the documentation (link) and Mirantis Support on
mitigating the risks related to the load."
3. If the user checks the option from step 2, the S3 API on RadosGW is
configured for authentication via Keystone.
Nailgun
=======
Nailgun-agent
-------------
None
Bootstrap
---------
None
Data model
----------
None
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
Only payload changes
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
See items in Proposed changes section.
------------
Alternatives
------------
None
--------------
Upgrade impact
--------------
None
---------------
Security impact
---------------
Users will be able to authenticate requests made through the S3 API based
solely on credentials stored and handled by Keystone.
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
None
------------------
Performance impact
------------------
Load on Keystone may be significantly increased. Latency of requests to the
object store made through the S3 API will be increased.
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
TBD
--------------
Implementation
--------------
Assignee(s)
===========
TBD
Work Items
==========
* Enable S3 API/Keystone integration in fuel-library (already done)
* UI changes
* Manual testing
Dependencies
============
None
------------
Testing, QA
------------
* Automated API/CLI test cases for configuring S3 authentication via
Keystone.
Acceptance criteria
===================
* The operator should be able to enable and disable the S3 API/Keystone
integration in RadosGW through the Web UI.
----------
References
----------
1. https://bugs.launchpad.net/mos/+bug/1540426
2. https://bugs.launchpad.net/fuel/+bug/1446704

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================
Manage Custom Workflows from Fuel UI
====================================
https://blueprints.launchpad.net/fuel/+spec/ui-custom-graph
This blueprint extends Fuel UI with the ability to list, remove, upload,
download, and run custom workflows (custom graphs), which are sets of
arbitrary deployment actions such as cluster maintenance, security updates
and even upgrades [1].
--------------------
Problem description
--------------------
Currently, Fuel UI gives the user no instruments to view, remove, upload,
download, or execute custom deployment graphs. At the same time, the ability
to do this would help to handle complex life-cycle management (LCM) use cases
such as bug fixing or other cluster updates.
----------------
Proposed changes
----------------
Web UI
======
Workflows tab
-------------
The cluster page in Fuel UI should be extended with a new 'Workflows' tab.
The 'Workflows' tab should contain a table with all graphs available for
the cluster.
Rows in the workflows table should be grouped by the graph type attribute.
The workflows table should have the following columns:
* 'Name' - to display a graph name
* 'Level' - to display a graph level
* 'Actions' - contains a 'Delete' button to remove a graph
(this action is available for cluster-level graphs only)
* 'Download' - contains 'JSON' and 'YAML' buttons
(to download graph tasks in JSON or YAML format)
+-------------------+---------------+-----------+-----------+
| Name | Level | Actions | Download |
+===================+===============+===========+===========+
| Type "default" | | | JSON/YAML |
+-------------------+---------------+-----------+-----------+
| - | Release | | JSON/YAML |
+-------------------+---------------+-----------+-----------+
| - | Environment | Delete | JSON/YAML |
+-------------------+---------------+-----------+-----------+
| Type "9.0-mu-1" | | | JSON/YAML |
+-------------------+---------------+-----------+-----------+
| mu-1-plugin | Plugin | | JSON/YAML |
| | Fuel Contrail | | |
| | plugin | | |
+-------------------+---------------+-----------+-----------+
| mu-1-release | Environment | Delete | JSON/YAML |
+-------------------+---------------+-----------+-----------+
| Type "upgrade" | | | JSON/YAML |
+-------------------+---------------+-----------+-----------+
| - | Release | | JSON/YAML |
+-------------------+---------------+-----------+-----------+
| my-plugin-graph | Plugin | | JSON/YAML |
| | Fuel Contrail | | |
| | plugin | | |
+-------------------+---------------+-----------+-----------+
| upgrade-graph | Environment | Delete | JSON/YAML |
+-------------------+---------------+-----------+-----------+
Note that the workflows table should not include graphs of disabled cluster
plugins.
Graphs of the 'default' type should go first in the table. Inside each group,
graphs should be sorted by their level (release, plugin, and then environment
graphs).
The workflows table should support filtering by deployment graph level and by
graph type. Both filters should support multiple value selection.
To delete a graph, the user has to confirm the action in a confirmation
pop-up by entering the graph type.
The user should also be able to download a JSON or YAML file with the merged
tasks of the resulting graph by its type (tasks of graphs that have this type
at different levels are merged together).
The 'Workflows' tab should also have an 'Upload New Workflow' button to
launch a pop-up with a form for uploading a new graph for the current cluster
(the new graph's level will be 'cluster', shown on the UI as 'Environment').
To do this, the user should fill in the following fields:
* graph verbose name (optional; a graph can have an empty verbose name;
if not empty, the graph name should be limited to 255 symbols)
* graph type (mandatory; should be unique within cluster-level graphs related
to the current cluster; the input should be validated against the
`^[a-zA-Z0-9-_]+$` regexp and limited to 255 symbols)
* a file with graph tasks data in JSON format (optional; the graph can be
empty)
Dashboard tab
-------------
A Fuel UI user should be able to start execution of a custom graph of a
particular type.
The top block on the 'Dashboard' tab that represents deployment modes should
be extended with a new 'Custom Workflows' mode.
Working in this deployment mode, the user should specify the graph type to
execute. All cluster graphs except the 'default' type are available in this
deployment mode; the 'default' graph is already executed by clicking the
existing 'Deploy Changes' button in Fuel UI.
The user should also click the 'Select Nodes' button to open the standard
'Select Nodes' pop-up to specify nodes for the selected graph's execution.
All cluster nodes are selected in the pop-up by default.
Graph execution cannot be launched from Fuel UI if no cluster nodes are
selected. A graph also cannot be executed if any of the selected nodes is
offline.
When execution of the selected graph starts, an appropriate task
(aka transaction) comes to the UI. Fuel UI should display a progress bar on
the Dashboard to represent the progress of the graph execution. By clicking
on the progress bar, the deployment history [2] of the task should be shown.
Nailgun
=======
Data model
----------
No changes required.
REST API
--------
Existing API should be used by Fuel UI:
* `GET /graphs/?cluster_id=<cluster_id>` to get all graphs available for
a particular cluster (graphs of different levels)
Response data is returned in the following format:
.. code-block:: json

    [
      {
        "id": 1,
        "name": null,
        "relations": [{
          "type": "default",
          "model": "cluster",
          "model_id": 1
        }],
        "tasks": [...]
      },
      {
        "id": 2,
        "name": "some name",
        "relations": [{
          "type": "default",
          "model": "release",
          "model_id": 1
        }],
        "tasks": [...]
      },
      {
        "id": 3,
        "name": "my plugin graph",
        "relations": [{
          "type": "plugin123",
          "model": "plugin",
          "model_id": 12
        }],
        "tasks": [...]
      },
      ...
    ]
* `GET /clusters/<cluster_id>/deployment_tasks/?graph_type=<graph_type>`
to get merged tasks of a particular graph
* `DELETE /graphs/<graph_id>` to remove a graph.
* `POST /clusters/<cluster_id>/deployment_graphs/<graph_type>` to create
a new graph for the current cluster (the graph level will be 'cluster').
Data in the following format should be sent by Fuel UI:
.. code-block:: json

    {
      "name": "my graph name",
      "tasks": [...]
    }
* `PUT /cluster/<cluster_id>/deploy/?graph_type=<graph_type>`
with empty data to run a graph on all cluster nodes
* `PUT /cluster/<cluster_id>/deploy/?graph_type=<graph_type>&nodes=<node_ids>`
with empty data to run a graph on a subset of nodes
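For illustration, starting a custom graph through this API might look as
follows (a sketch using python-requests; the master node address, token
handling, and ids are assumptions):

.. code-block:: python

    import requests

    FUEL_API = 'http://10.20.0.2:8000/api'  # assumed master node address
    HEADERS = {'X-Auth-Token': 'TOKEN'}     # Keystone token obtained beforehand

    # Run the graph of type 'my_custom_type' on nodes 1 and 2 of cluster 5
    resp = requests.put(
        FUEL_API + '/cluster/5/deploy/',
        params={'graph_type': 'my_custom_type', 'nodes': '1,2'},
        headers=HEADERS)
    resp.raise_for_status()
    print(resp.json())  # the resulting deployment task (transaction)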
Orchestration
=============
RPC Protocol
------------
No changes required.
Fuel Client
===========
No changes required.
Plugins
=======
No changes required.
Fuel Library
============
No changes required.
------------
Alternatives
------------
None.
--------------
Upgrade impact
--------------
None.
---------------
Security impact
---------------
None.
--------------------
Notifications impact
--------------------
None.
---------------
End user impact
---------------
Ability to perform maintenance of a cluster, including applying bugfixes and
security updates, or even upgrading.
------------------
Performance impact
------------------
None.
-----------------
Deployment impact
-----------------
None.
----------------
Developer impact
----------------
None.
---------------------
Infrastructure impact
---------------------
None.
--------------------
Documentation impact
--------------------
Fuel UI user guide should be updated to include information about the feature.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
jkirnosova
Other contributors:
bdudko (visual design)
kpimenova (JavaScript code)
bgaifullin, ikutukov (Nailgun code)
Mandatory design review:
vkramskikh
ikutukov
Work Items
==========
#. Add a new 'Workflows' tab with all cluster graphs listing.
#. Add controls to upload a new cluster graph.
#. Add controls to run custom graph on cluster nodes.
Dependencies
============
None.
------------
Testing, QA
------------
* Manual testing.
* UI functional tests should cover the changes.
Acceptance criteria
===================
Fuel UI user is able to list, remove, download, upload deployment graphs and
run the graph of the selected type on the subset of nodes or on the whole
cluster.
----------
References
----------
[1] Allow user to run custom graph on cluster
https://blueprints.launchpad.net/fuel/+spec/custom-graph-execution
[2] Deployment task execution history in Fuel UI
https://blueprints.launchpad.net/fuel/+spec/ui-deployment-history


@ -1,363 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================================
Deployment task execution history in Fuel UI
============================================
https://blueprints.launchpad.net/fuel/+spec/ui-deployment-history
Show deployment execution graphs for all cluster deployments in Fuel UI.
This would allow the End User to perform maintenance of a cluster with the
ability to troubleshoot and audit what is happening to the cluster.
--------------------
Problem description
--------------------
Currently, it is almost impossible for a Fuel UI user to see and understand
details of the deployment execution processes that happened to a cluster.
User has no information about the processes running on nodes during
deployment, the tasks sequence, or their statuses, just the final result of a
successful or failed environment deployment.
This makes maintenance and troubleshooting of a cluster via Fuel UI difficult
and time-consuming.
This change proposes to show all the information about particular deployments
for each particular cluster and its nodes.
----------------
Proposed changes
----------------
Web UI
======
#. Deployment progress bar on a cluster Dashboard should be clickable if
'deployment' task is running in the cluster. By clicking on it a graph with
the deployment data should appear.
#. Cluster page should be extended with the new 'History' tab that contains
'Deployment History' section with data of finished cluster deployments.
User should be able to choose a particular deployment on the tab
(deployments are referred to by their `id` and `time_start` attributes)
and get the process details and see graphs for all the deployed nodes.
Deployment should be represented as a graph where x-axis is a timeline and
y-axis indicates cluster nodes. Each node section contains a sequence of
deployment tasks related to the node. An additional section shows tasks
executed on the master node. Node sections are grouped by node roles.
Each bar on the graph corresponds to a particular deployment task. When
hovered, it shows a popover with the following info about the task:
* start time
* end time
* status (failed tasks should have a special layout)
Task bar should be clickable and display a popover with the following task
data:
* name
* node ID
* node name (user-assigned name during the deployment)
* node roles
* start time
* end time
* status
* message (actual for failed tasks with 'error' status)
Timeline should support zooming for better UX.
Tasks in 'skipped' status should not be reflected on the timeline, as they do
not participate in deployment. The same applies to tasks in 'pending' status:
they are not shown on the timeline because they have not started yet.
Deployment timeline should have a control to switch to table representation.
It is a table that displays a list of deployment tasks and it is divided into
sections. Each section includes tasks executed on nodes of specific roles and
has an appropriate role combination as a title.
Deployment table has the following columns:
* task name
* node ID
* node name (user-assigned name during the deployment)
* task status
* start time
* end time
* details (link to a full list of the task properties)
The list of tasks in the table should be sorted by node ID, then by start time
attribute.
Link in the 'Details' column should open a pop-up with all the task
attributes listed.
All tasks, including skipped and pending, should be shown in a table view.
Deployment tasks table should support filtering by:
* task name
* node (the filter options are node name and ID pairs)
* node role
* task status
These filters should support selection of multiple values (user may want to
see tasks for several nodes or with a specific set of statuses).
Filters panel should have 'Reset' button to reset applied filters.
When switching to deployment table view on the Dashboard, tasks in the table
should be filtered by 'ready', 'running' and 'error' statuses by default.
History of a particular cluster deployment comes from
`GET /api/transactions/<deployment_id>/deployment_history/` response.
The response is a list of deployment tasks in the following format (only
attributes used in Fuel UI are described):
.. code-block:: json
{
"task_name": "upload_configuration",
"node_id": "6",
"node_roles": ["compute", "cinder"],
"node_name": "Node X",
"status": "running",
"time_start": "2016-06-24T06:37:51.735185",
"time_end": null,
"message": "",
...
}
where
* `task_name` - name of a deployment task
* `node_id` - id of node where a task was executed OR 'master' string if
a task was executed on the master node
* `node_roles` - list of the deployed node roles (an empty list in case of
master node)
* `node_name` - name that the node had at the moment of the deployment start
(should be 'Master Node' in case of the master node)
* `status` - status of a task and has one of the following values:
'pending', 'ready', 'running', 'error', or 'skipped'
* `time_start` - timestamp when a task was started (Null if a task is not
started yet)
* `time_end` - timestamp when a task was finished (Null if a task is not
started or not finished yet)
* `message` - text message that the finished task returns
`node_id` attribute can be set to 'null' or '-'. A null value means that
the task represents a synchronization process on nodes and refers to the
Virtual Sync Node. The '-' value means that the task was not executed on any
node.
Fuel UI should not display such tasks on the timeline or in the deployment
table; only tasks related to cluster nodes or the master node should be shown.
Ids of all cluster deployments come from the response of
`GET /api/transactions?cluster_id=<cluster_id>&tasks_names=deployment` API
call.
`GET /api/transactions/?cluster_id=<cluster_id>&tasks_names=deployment&
statuses=running` API call should be used on the cluster Dashboard to get id
of the running deployment.
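As an illustration, the Dashboard polling logic could be sketched as follows
(a sketch using python-requests; the master node address and token handling
are assumptions):

.. code-block:: python

    import requests

    FUEL_API = 'http://10.20.0.2:8000/api'  # assumed master node address
    HEADERS = {'X-Auth-Token': 'TOKEN'}     # Keystone token obtained beforehand

    # Find the currently running deployment of cluster 5, if any
    running = requests.get(
        FUEL_API + '/transactions/',
        params={'cluster_id': 5, 'tasks_names': 'deployment',
                'statuses': 'running'},
        headers=HEADERS).json()

    if running:
        # Fetch its deployment history and print a short per-task summary
        history = requests.get(
            '{0}/transactions/{1}/deployment_history/'.format(
                FUEL_API, running[0]['id']),
            headers=HEADERS).json()
        for task in history:
            print(task['task_name'], task['node_id'], task['status'])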
The deployment history view should also have an 'Export CSV' button so that
User can download the full history of a particular deployment in CSV format.
The exported CSV data should include all deployment history tasks with all
their attributes.
Nailgun
=======
Data model
----------
#. Model of a cluster deployment (named 'transaction') should be extended with
   a `time_start` attribute, which will be used in Fuel UI to distinguish
   cluster deployments.
#. Model of a deployment task from a deployment history should be extended
with `node_name` and 'node_roles' attributes.
#. The content of the `custom` attribute of a deployment task should be merged
   into the root, and the task should not contain the `custom` property.
REST API
--------
#. Filtering of results by task names and/or statuses needs to be added for
   the `GET /api/transactions/` method. The following API calls should be
   supported:
* `GET /api/transactions/?cluster_id=<cluster_id>&tasks_names=deployment`
* `GET /api/transactions/?cluster_id=<cluster_id>&tasks_names=deployment&
statuses=running`
#. `GET /api/transactions/<transaction_id>/deployment_history/` should return
data in CSV format if it was called with `{Accept: text/csv}` header.
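   For example, a CSV export could be requested like this (a sketch; the host
   and the transaction id are assumptions):

   .. code-block:: text

       curl -H "X-Auth-Token: $TOKEN" -H "Accept: text/csv" \
            http://10.20.0.2:8000/api/transactions/42/deployment_history/ \
            -o deployment_history.csv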
Orchestration
=============
RPC Protocol
------------
No changes required.
Fuel Client
===========
None.
Plugins
=======
No changes required.
Fuel Library
============
No changes required.
------------
Alternatives
------------
None.
--------------
Upgrade impact
--------------
Migration should be prepared according to the changes in data models.
---------------
Security impact
---------------
None.
--------------------
Notifications impact
--------------------
None.
---------------
End user impact
---------------
Ability to troubleshoot and perform maintenance of a cluster more easily.
------------------
Performance impact
------------------
None.
-----------------
Deployment impact
-----------------
None.
----------------
Developer impact
----------------
None.
---------------------
Infrastructure impact
---------------------
None.
--------------------
Documentation impact
--------------------
Fuel UI user guide should be updated to include information about the feature.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
jkirnosova
Other contributors:
bdudko (visual design)
bgaifullin, ikutukov, dguryanov (Nailgun)
Mandatory design review:
vkramskikh
ashtokolov
Work Items
==========
* Display a deployment graph of a current deployment on the Dashboard tab.
* Display history graphs of all finished cluster deployments in a new
History tab.
* Support both display modes for deployment information: a timeline graph and
table view.
* Add filters toolbar for table representation of deployment history.
* Support CSV export of deployment history.
Dependencies
============
None.
------------
Testing, QA
------------
* Manual testing.
* UI functional tests should cover the changes.
Acceptance criteria
===================
Fuel UI user should be able to run several deployments for a cluster and see
the deployment tasks history in the cluster page, including real-time
information about a current deployment.
----------
References
----------
* Store Deployment Tasks Execution History in DB
https://blueprints.launchpad.net/fuel/+spec/store-deployment-tasks-history


@ -1,323 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================================
Use Packetary for downloading MOS rpm/deb packages
==================================================
https://blueprints.launchpad.net/fuel/+spec/use-packetary-in-fuel
The current scheme of working with repositories in Fuel is quite messy,
rigid, and incompatible with upcoming architectural changes. We are
going to rework the whole approach to downloading rpm/deb packages
and handling of local/remote mirrors by introducing the packetary
tool both in the ISO build process and on the Fuel master node side.
--------------------
Problem description
--------------------
When building the ISO there is no need to create a full upstream mirror
locally and then put it on the ISO. Instead, we have a minimal
list of required packages and can use tools such as ``yumdownloader``
to recursively resolve package dependencies and
download this minimal consistent tree.
Currently we use ``yumdownloader``/``reposync`` and ``debmirror``
for downloading rpm/deb packages while
building the Fuel ISO image. To mix packages from
different RPM repos on the Fuel master node we use the EXTRA_RPM_REPOS
variable. We are forced to deal with several tools at the same time
that provide user interfaces and functionality which are not
fully compatible with data structures that we currently use in Fuel.
Besides, we still build Fuel packages together with the ISO which
does not scale well. We have a specific service for building packages
not only from merged source code but also from the code that is
currently on review. The idea behind this is to use these packages
to run deployment tests before a patch is even merged. Some cases,
however, assume putting these custom packages on a custom ISO,
but our current build code does not allow downloading deb
packages from these custom repositories during ISO build.
The EXTRA_RPM_REPOS variable works only in the rpm case. Custom
deb repos can only be used during deployment itself.
The existing approach has the following disadvantages:
* The code for fetching RPM/DEB repositories is strictly tied to the set of
internal configuration values.
* The code for creation of local repositories structure on a Fuel master node
does not support having multiple OpenStack releases within an ISO.
* There is no possibility to include a set of user-defined extra DEB
repositories to a product ISO, and automatically add them to Nailgun.
The easiest way to address all these issues is to use Packetary [1]_ for
the ISO build process.
The thing is that neither ``yumdownloader`` nor ``debmirror`` provides the
level of convenience and flexibility that Packetary does. Packetary allows
downloading everything that we need by running it just once, passing
input data (yaml) in exactly the same format that we use for Fuel-menu
and for Nailgun [2]_. Namely, it is a flat list of repositories with their
priorities. All downloaded packages can either be merged into a single
repository or into a set of repositories, depending on what one needs.
The process is fully data driven.
So, using Packetary we could make the ISO build process really flexible.
One could put packages from an arbitrary number of custom repositories into
the ISO. We could even check whether this particular set of repositories
is consistent, i.e. that there are no conflicting dependencies.
----------------
Proposed changes
----------------
We propose to replace current tools mentioned above with Packetary
which will process a user-defined list of RPM/DEB repositories and perform the
following actions.
At the ISO image build stage:
* download the specified RPM/DEB packages/repositories (and, if required,
  create new repositories based on the list of packages)
* put these repositories on the ISO along with the user-defined config file
  (exactly the file that was used while downloading packages)
  to set the yum/apt repository configuration to use the locally
  copied repositories.
At the base OS provisioning stage:
* put these repositories from the ISO to user-defined target paths on the Fuel
  master node
At the master node deployment stage:
* configure yum/apt repositories using this config file that was used on the
build stage and then was put on the ISO
* configure default repositories in fuel-menu and nailgun using the same
config file
How are we planning to integrate this new approach into Fuel CI?
* We are planning to remove from fuel-main all those data structures
  that are related to Fuel infrastructure. There won't be variables like
  USE_MIRROR=* that assume having hardcoded mirror urls for various
  locations. The build system is to become fully data driven. We will
  provide just a few very basic defaults like the current CentOS upstream and
  maybe the current MOS urls.
* We will use a repository configuration template structure that reflects
  the standard repository structure (not the exact file content).
  This file is to be rendered using environment variables set by the Jenkins
  ISO build job. These environment variables could be exposed to the custom
  job web interface.
.. code-block:: yaml
- name: "os"
path: "upstream"
uri: "{{CENTOS_URL}}/os/x86_64"
priority: 99
- name: "updates"
path: "upstream"
uri: "{{CENTOS_URL}}/updates/x86_64"
priority: 10
- name: "extras"
path: "upstream"
uri: "{{CENTOS_URL}}/extras/x86_64"
priority: 99
- name: "centosplus"
path: "upstream"
uri: "{{CENTOS_URL}}/centosplus/x86_64"
priority: 99
- name: "mos"
path: "mos-centos"
uri: "{{MOS_URL}}/x86_64"
priority: 5
- name: "mos-updates"
path: "mos-centos-updates"
uri: "{{MOS_UPDATES_URL}}/x86_64"
priority: 1
* This data structure, however, does not contain custom
  repositories. To cover the case of custom repositories we
  will expose in the Jenkins web interface a form for uploading a custom
  yaml file. A user can then prepare the yaml file using her favorite text
  editor (and probably some utilities) and use this file to run a custom
  build job, as sketched below.
Web UI
======
None
Nailgun
=======
Data model
----------
None
REST API
--------
None
Orchestration
=============
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
None
------------
Alternatives
------------
Provide repositories for different OpenStack versions as "pluggable" build
artifacts (RPMs) which include:
* a repository itself (packages + metadata)
* local yum/apt configuration (if required)
* post-install script to add repository to Nailgun (if needed)
However, this approach imposes a significant impact on CI systems, and does
not solve the extra repos issue.
--------------
Upgrade impact
--------------
Proposed changes allow to simplify the upgrade procedure by unifying the Fuel
repositories workflow.
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
Users will be required to create or modify the yaml configuration file to
include their own set of RPM/DEB repositories. If one needs just to
change the mirror base url, it will be possible to use environment
variables.
------------------
Performance impact
------------------
ISO build process should become faster or remain the same.
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
Using packetary allows us to cover such cases as:
* mixing upstream and testing repos at the deployment stage
* using custom repos (and custom packages)
Fuel 9.0+ ISO build environments should have packetary and all its
dependencies installed. Packetary could be installed using pip.
--------------------
Documentation impact
--------------------
None
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
Vladimir Kozhukalov <vkozhukalov@mirantis.com>
Other contributors:
Bulat Gaifullin <bgaifullin@mirantis.com>
Mandatory design review:
Vitaly Parakhin <vparakhin@mirantis.com>
Alexandra Fedorova <afedorova@mirantis.com>
Work Items
==========
* Add necessary functionality to Packetary
* Create a patch to fuel-main to introduce Packetary to the build process
* Create Jenkins jobs (product and custom)
Dependencies
============
None
------------
Testing, QA
------------
The ISO should pass the same set of system and deployment tests.
Acceptance criteria
===================
1. Build script should use Packetary as a tool to download packages during
ISO build.
2. ISO build when using Packetary should not be longer than it is now.
3. It should be possible to define repos during ISO build using a flat
prioritized list.
4. It should be possible to use several custom repos at the same time.
----------
References
----------
.. [1] `Packetary <https://github.com/openstack/packetary>`_
.. [2] `Unify the input data <https://github.com/openstack/fuel-specs/blob/master/specs/9.0/unify-the-input-data.rst>`_


@ -1,237 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Graph Based Upgrades
==========================================
https://blueprints.launchpad.net/fuel/+spec/graph-based-upgrade
This spec aims to improve Octane by making use of Fuel's graph execution engine
to run upgrade-related commands and procedures.
--------------------
Problem description
--------------------
Currently, Octane executes upgrade-related commands on cluster nodes
via SSH. This is not the best way to handle upgrades, given the
capability to execute custom graphs on both new and old environments.
This will allow to enhance upgrade experience in following ways:
- More consistent with Fuel way of logging of upgrade commands
- Utilization of existing graph execution framework for upgrades
- Puppet usage for upgrade tasks (where it is possible to do so)
----------------
Proposed changes
----------------
For each upgrade action, there will be one optional CLI argument:
--with-graph, which will enforce graph-based approach to said
operation.
Graphs and puppet manifests will be stored in the same repository,
in "deployment/" directory (similar to fuel-library).
There will be two types of graphs: "seed" and "orig" for new and old
environments respectively.
Graphs will be uploaded during upgrade action execution.
Graphs will be uploaded to concrete environments (similar to
fuel2 graph upload --env <id> execution)
Web UI
======
None
Nailgun
=======
This change depends on `converted serializers`_ extension.
.. _converted serializers: https://github.com/openstack/fuel-nailgun-extension-converted-serializers
Data model
----------
None
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
None
------------
Alternatives
------------
Stick with current Octane implementation based on existing Python code.
--------------
Upgrade impact
--------------
All the changes will be introduced into fuel-octane repo.
To make use of this feature, the user will have to
run all upgrade actions with "--with-graph" flag.
---------------
Security impact
---------------
Graph-based upgrades require that master node has additional directories
configured for rsync remote access:
- (Read only) /var/www/nailgun/octane_code (contains Puppet-related Octane
files)
- (Read/write) /var/www/nailgun/octane_data (will hold temporary/backup data
from other nodes)
Note: due to the nature of upgrade process, the second directory may contain
sensitive data. Contents of this directory are not to be cleaned automatically.
It will be operator's responsibility to remove files with sensitive information
from this directory.
E.g. during upgrade-db step, OpenStack database's contents will be dumped to a
file on the original environment's node, synced to /var/www/nailgun/octane_data
on the master node and then synced to the seed environment's node.
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
During upgrade process, user will have an option to
execute upgrades with graphs.
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
Documentation will have to be adjusted to mention new
"--with-graph" approach to upgrades.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
rsokolkov
Other contributors:
nikishov-da
paulche
Mandatory design review:
akscram
vkuklin
Work Items
==========
Implement following commands with graph support:
- upgrade-db
- upgrade-ceph
- upgrade-control
- preugrade-compute
- osd-upgrade
Dependencies
============
None
------------
Testing, QA
------------
Existing test cases will adopt graph-based CLI workflow.
Acceptance criteria
===================
It is possible to successfully execute the upgrade process using task graphs.
----------
References
----------
None


@ -1,271 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================================
Improve NFV workload performance with multiq support
====================================================
https://blueprints.launchpad.net/fuel/+spec/multiqueue-support-nfv
--------------------
Problem description
--------------------
Todays high-end servers have more processors, and guests running on them
often have an increasing number of vCPUs. In a single virtio-net queue, the
scale of the protocol stack in a guest is restricted, as the network
performance does not scale as the number of vCPUs increases. Guests cannot
transmit or retrieve packets in parallel, as virtio-net has only one TX and
RX queue.
Multiqueue virtio-net provides the greatest performance benefit when:
* Traffic packets are relatively large.
* The guest is active on many connections at the same time, with traffic
running between guests, guest to host, or guest to an external system.
* The number of queues is equal to the number of vCPUs. This is because
multi-queue support optimizes RX interrupt affinity and TX queue selection
in order to make a specific queue private to a specific vCPU.
There is an implemented spec `Libvirt: virtio-net multiqueue`_ that adds
support for the multiqueue feature in OpenStack; now we need to add support
at the packages level. To achieve this we need the following packages
together:
* qemu 2.5
* libvirt 1.3.1
* openvswitch 2.5
* dpdk 2.2
These packages are available in Ubuntu 16.04 out of the box (thus MOS-10 will
not need any additional actions except QA), but need backporting in the case
of Ubuntu 14.04 (which is the base system in MOS 9.x). The good thing is that
the backported packages were already tested as part of the optional 'NFV
feature support'.
----------------
Proposed changes
----------------
Integrate the packages from the 'feature/nfv' branch into the main 9.0
development branch and verify that they work as expected.
To enable the feature from the OpenStack side, an additional parameter should
be added to image properties, as shown below:

.. code-block:: text

hw_vif_multiqueue_enabled=true|false (default false)
Currently, the number of queues will match the number of vCPUs defined for
the instance.
.. note:: Virtio-net multiqueue should be enabled in the guest OS manually,
   using ethtool. For example:

   .. code-block:: text

      ethtool -L <NIC> combined <num_of_queues>
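For instance, with the standard OpenStack client the image property could be
set as follows (a sketch; the image name is an assumption):

.. code-block:: text

    openstack image set --property hw_vif_multiqueue_enabled=true <image>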
Web UI
======
None
Nailgun
=======
None
Data model
----------
None
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
Networking-related plugins might face issues with new packages.
Fuel Library
============
None
------------
Alternatives
------------
There is no other way than upgrading to the packages that provide the
multiqueue functionality.
--------------
Upgrade impact
--------------
Upgrading QEMU requires that every guest VM be stopped and started again (not
rebooted).
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
None
------------------
Performance impact
------------------
Improves NFV performance.
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
None
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
`Dmitry Teselkin`_
Other contributors:
`Ivan Suzdal`_
Mandatory design review:
`Dmitry Klenov`_
Work Items
==========
* Move every package from 'feature/nfv' into 9.0 branch, merge and build
packages.
Dependencies
============
None
------------
Testing, QA
------------
* Verify that the new set of packages doesn't introduce any regressions.
* Verify that vhost-user networking works in OpenStack.
Acceptance criteria
===================
* The following packages are available in the 9.2 repository:
* qemu - 2.5
* libvirt - 1.3.1
* openvswitch - 2.5
* dpdk - 2.2
* dependencies for the packages above
* MOS 9.2 uses updated packages by default
* Multiqueue support with vhost user in OpenStack
----------
References
----------
.. _`Dmitry Teselkin`: https://launchpad.net/~teselkin-d
.. _`Ivan Suzdal`: https://launchpad.net/~isuzdal
.. _`Dmitry Klenov`: https://launchpad.net/~dklenov
.. _`Vladimir Khlyunev`: https://launchpad.net/~vkhlyunev
.. _`Libvirt: virtio-net multiqueue`: https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/libvirt-virtiomq.html


@ -1,294 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Role decomposition
==========================================
https://blueprints.launchpad.net/fuel/+spec/role-decomposition
--------------------
Problem description
--------------------
Currently a role encompasses many tasks that cannot be separated from each
other. Deployers should have the flexibility to distribute services across
nodes in any combination they see fit.
----------------
Proposed changes
----------------
Task placement will be determined based on a new unit: the `tag`. Each release
or plugin role may contain a specific set of tags (or not; in that case the
role name will be considered as the tag name). Task definitions will contain a
list of tags which will be used to match them to nodes.
This requires a new task resolver on the nailgun side and work on decoupling
of tasks on the fuel-library side.
Web UI
======
None
Nailgun
=======
A new tags-based resolver should be introduced. Tags are simple entities that
are used only for task resolution (as opposed to the old role-driven
resolution approach). User is not able to operate with a node's tags directly;
instead, he should create a new role containing the tags he is interested in
and assign the created role to the node. A sketch of the resolution logic
follows below.
A ``primary-tags`` field should be introduced for the node model to store the
primary set of tags for the node.
Tags will be fetched from the roles metadata during the serialization process
and will not be stored for each node directly (there is no `tags` field in the
node db model).
It should be possible to create roles for clusters, so both a release and a
cluster created with this release can define a role with the same name.
The idea is that cluster roles have a higher priority than release roles:
only the cluster role will be used if both a cluster role and a release role
exist.
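A simplified sketch of such tag-based resolution (illustrative only, not the
actual Nailgun implementation):

.. code-block:: python

    def resolve_nodes(task, nodes, roles_metadata):
        """Return the nodes whose role-derived tags match the task's tags.

        `task` is a task definition with a 'tags' list, `nodes` is a list
        of dicts with a 'roles' list, and `roles_metadata` maps a role
        name to its metadata.
        """
        matched = []
        for node in nodes:
            node_tags = set()
            for role in node['roles']:
                # A role without an explicit set of tags contributes
                # its own name as a tag
                meta = roles_metadata.get(role, {})
                node_tags.update(meta.get('tags', [role]))
            if node_tags & set(task.get('tags', [])):
                matched.append(node)
        return matched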
Data model
----------
An additional field named ``tags`` will be added to the release metadata to
provide the ability to specify a set of tags for release roles.
`Tag` should have only one field:
- `has_primary` property
Example:
.. code-block:: yaml
roles_metadata:
controller:
name: "Controller"
tags:
- controller
- mysql
tags_metadata:
controller:
name: "controller"
has_primary: true
mysql:
name: "mysql"
has_primary: true
New JSON fields ``volumes_metadata`` and ``roles_metadata`` should be
introduced for cluster model.
New JSON field ``tags_metadata`` should be introduced for cluster, release,
plugin models.
``primary_roles`` column should be renamed to ``primary_tags`` for node model.
REST API
--------
Nailgun API should be extended to support role's creation for clusters to
make cluster's specific roles not visible for other clusters and avoid
mishmash.
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
Fuel Client should be extended to support role's creation for clusters.
Plugins
=======
As plugins have the ability to define their own roles, it will be possible to
specify tags for any particular role introduced by a plugin. Note that it is
possible, but not obligatory, to specify tags for a role (in this case the
role name will be used for task resolution).
Fuel Library
============
The blueprint's scope includes detaching the following components:
- Neutron (incl. L3 agents, LBaaS, etc)
- Keystone
- MySQL DB
- RabbitMQ
The following `tags` will be introduced for the controller role:
- neutron
- keystone
- mysql
- rabbitmq
- controller
The fuel-library tasks should be rewritten for the corresponding components to
support the new approach with tags.
All tasks related only to a specific tag should be marked with this tag (the
`role` or `groups` field should be replaced with `tags`).
The version of the library tasks where the `role` field has been replaced with
`tags` shall be bumped.
Example: the keystone task is to be changed from:
.. code-block:: yaml
- id: keystone
type: puppet
groups: [controller]
to:

.. code-block:: yaml
- id: keystone
type: puppet
groups: [controller]
tags: [keystone]
As we have a lot of places in the fuel-library code where we collect the set
of ip addresses for a particular component by node role, we should rewrite
these data access methods to work with `tags` and provide a fallback
mechanism to support the old-style role-based approach.
Initially, we are going to have one pacemaker cluster for all nodes
whose assigned `tags` require it. For example, if we have 'node-1'
with the tag 'mysql' and 'node-2' with the tag 'rabbitmq', then a single
pacemaker cluster with the resources 'rabbitmq' and 'mysql' acting on the
corresponding nodes will be created.
There is no detached plugin for neutron, so additional efforts should
be spent to collect the mandatory tasks for the neutron task group and test
it.
------------
Alternatives
------------
None
--------------
Upgrade impact
--------------
We should consider changes in tag assignment between minor releases.
For example, this may be embedded into the db migration process.
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
User will be able to create roles with a specific set of tags.
Initially, User has only the default set of roles and their tags. If he wants,
for example, to create a detached role with 'mysql', he should create a new
cluster role containing only the 'mysql' tag.
User is able to modify roles (and their sets of tags) at any moment except
during the deployment process.
------------------
Performance impact
------------------
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
Describe how to create custom roles (with a custom set of tags).
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
* Viacheslav Valyavskiy <vvalyavskiy@mirantis.com>
Other contributors:
* Mikhail Zhnichkov <mzhnichkov@mirantis.com>
Mandatory design review:
* Vladimir Kuklin <vkuklin@mirantis.com>
* Stanislaw Bogatkin <sbogatkin@mirantis.com>
Work Items
==========
#. Introduce operations with roles for cluster(API, DB)
#. New tags based resolver in nailgun
#. Extend fuel-client to support operations with roles
for cluster
#. Role/Tag decomposition in Fuel-library
#. Update composition data access methods in fuel-library
#. Decouple Neutron component in fuel-library
Dependencies
============
None
------------
Testing, QA
------------
* Create new test cases for the new operations with tags
* Extend fuel-qa test suite with new API tests for the operations with tags
Acceptance criteria
===================
User is able to deploy services currently tied to the controller (e.g.
Keystone, Neutron, MySQL) on separate nodes via CLI (Web UI support has a
nice-to-have priority).
----------
References
----------
None


@ -1,232 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=======================================
Security Groups support for Neutron/OVS
=======================================
https://blueprints.launchpad.net/fuel/+spec/security-groups-support-for-ovs
It is required to implement a radio button in Fuel to switch the Neutron
Firewall driver. Both the `IPTables-based Firewall Driver` and the
`Open vSwitch Firewall Driver` should be available. IPTables functionality
should be used by default.
-------------------
Problem description
-------------------
Until now, only one firewall was implemented in OpenStack's Neutron project:
an iptables-based firewall. Now that there is a second option, to natively
utilize OVS for implementing security groups instead of the former
iptables/linux bridge solution, we should have an attribute in Fuel for
selecting the firewall driver.
----------------
Proposed changes
----------------
We should add a cluster attribute for selecting the firewall driver and apply
the appropriate settings in the nova and neutron configs.
Web UI
======
None
Nailgun
=======
* Change openstack.yaml as described in the
:ref:`Data model<security-groups-data-model>` section.
* Add the security_groups attribute to the white list for the installation
info.
.. _security-groups-data-model:
Data model
----------
* openstack.yaml changes::
attributes_metadata:
editable:
common:
security_groups:
value: "iptables_hybrid"
values:
- data: "openvswitch"
label: "Open vSwitch Firewall Driver"
description: "Choose this driver for OVS based security groups implementation. NOTE: Open vSwitch Firewall Driver requires kernel version >= 4.3 for non-dpdk case"
- data: "iptables_hybrid"
label: "IPTables-based Firewall Driver (No firewall for DPDK case)"
description: "Choose this driver for iptables/linux bridge based security groups implementation."
label: "Security Groups"
group: "security"
weight: 20
type: "radio"
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
Fuel-library should apply the firewall settings in the neutron config.
* neutron/plugins/ml2/openvswitch_agent.ini: set the OVS firewall driver in
  the `securitygroup` section (e.g. as sketched below).
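For example, choosing the OVS driver would be expected to result in a section
like the following (a sketch, not the exact rendered config)::

    [securitygroup]
    firewall_driver = openvswitch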
**If IPTables-based Firewall Driver was chosen in dpdk case,**
**security groups should be disabled.**
------------
Alternatives
------------
None
--------------
Upgrade impact
--------------
A data migration should be prepared according to the changes in the data
models.
After the upgrade procedure, switching the Neutron Firewall driver is
forbidden. An appropriate warning should be added to the release notes.
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
None
------------------
Performance impact
------------------
Performance impact is not expected.
-----------------
Deployment impact
-----------------
Rerunning the deployment with a changed Neutron Firewall driver is forbidden.
An appropriate warning should be added to the release notes.
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
None
--------------------
Documentation impact
--------------------
The user guide should be updated according to the described feature.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
Anastasia Balobashina <atolochkova@mirantis.com>
Mikhail Polenchuk <mpolenchuk@mirantis.com>
Mandatory design review:
Vladimir Eremin <veremin@mirantis.com>
Work Items
==========
* Change openstack.yaml as described in the
:ref:`Data model <security-groups-data-model>` section.
* Apply firewall settings in neutron and nova configs
* Test manually.
* Verify the :ref:`acceptance criteria <security-groups-acceptance-criteria>`.
Dependencies
============
None
-----------
Testing, QA
-----------
* Test cases for configuring and deployment of environment with the OVS based
security groups, VLAN/VXLAN segmentation, but without enabled DPDK.
* Test cases for configuring and deployment of environment with the OVS based
security groups, VLAN/VXLAN segmentation and enabled DPDK.
* Web UI test cases for configuring the OVS based security group.
* Functional testing.
* Performance testing.
.. _security-groups-acceptance-criteria:
Acceptance criteria
===================
* OVS based security group is tested and working with MOS + OVS and MOS +
OVS/DPDK.
* The OVS performance should be equivalent or better to iptables in kernel at
1000 VM and 2000 VM scale.
* OVS/DPDK performance should result in no more than 15% performance
degradation vs no security groups at 1000 VM and 2000 VM scale.
* Scale limit testing: Test the maximum number of flows supported per OVS,
get a model such that we know when OVS based security groups will fail.
* Default should still utilize iptables as OVS based security groups are new
and not well tested yet.
* When OVS/DPDK is used on the host OS then we must automatically configure to
use OVS based security groups. Iptables based security groups do not work
with OVS/DPDK.
* The radio button in UI to choose a firewall_driver.
----------
References
----------
[0] - http://docs.openstack.org/developer/neutron/devref/openvswitch_firewall.html


@ -1,250 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================
VXLAN support for OVS-DPDK
==========================
https://blueprints.launchpad.net/fuel/+spec/vxlan-support-for-ovs-dpdk
We want to utilize VXLAN-based networking with OVS/DPDK for high-performance,
scalable tenant networking.
-------------------
Problem description
-------------------
Currently, OVS-DPDK supports only VLAN segmentation. With VXLAN-based network
segmentation being adopted more widely, supporting VXLAN with OVS-DPDK is very
important for all NFV use cases.
----------------
Proposed changes
----------------
Currently, when using the VLAN-based networking with OVS-DPDK, we use the
`br-prv` bridge but we do not assign an IP to it.
To implement a VXLAN-based segmentation with DPDK, we should use a
`br-mesh` bridge whose configuration corresponds to the one of the `br-prv`
bridge in case of VLAN with DPDK.
Web UI
======
None
Nailgun
=======
The following changes are required in Nailgun:
* Remove restrictions for using DPDK in VXLAN-based segmentation case.
* Fix the Nailgun network serializer to generate transformations as
described in the :ref:`Data model <data-model>` section.
.. _data-model:
Data model
----------
When operator enables DPDK for a particular interface with VXLAN-based
segmentation to use it for the Private network, ``astute.yaml`` will be
extended as follows:
* The network ``transformations`` field should include a vendor-specific
attribute ``datapath_type: netdev`` for the `br-mesh` bridge::
network_scheme:
transformations:
- action: add-br
name: br-mesh
provider: ovs
vendor_specific:
vlan_id: netgroup['vlan']
datapath_type: netdev
* An interface should be added directly into the OVS `br-mesh` bridge using
the ``add-port`` action with ``provider: dpdkovs``::
network_scheme:
transformations:
- action: add-port
name: enp1s0f0
bridge: br-mesh
provider: dpdkovs
**No VLAN tag can be used here.**
* A bond should be added directly into the OVS `br-mesh` bridge using the
``add-bond`` action with ``provider: dpdkovs``::
network_scheme:
transformations:
- action: add-bond
bridge: br-mesh
provider: dpdkovs
bond_properties:
mode: balance-rr
interfaces:
- enp1s0f0
- enp1s0f1
name: bond0
**No VLAN tag can be used here.**
REST API
--------
None
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
None
Plugins
=======
None
Fuel Library
============
To achieve VLAN-tagged VXLAN, the vendor specific attribute ``vlan_id``
for ``add-br`` should be converted to
``ovs-vsctl set port br-mesh tag=<vlan_id>``.
------------
Alternatives
------------
Continue using the VLAN-based network segmentation.
--------------
Upgrade impact
--------------
None
---------------
Security impact
---------------
None
--------------------
Notifications impact
--------------------
None
---------------
End user impact
---------------
None
------------------
Performance impact
------------------
Performance impact is not expected.
-----------------
Deployment impact
-----------------
This feature requires using the VXLAN segmentation and a dedicated
DPDK-capable network interface for the Private network.
----------------
Developer impact
----------------
None
---------------------
Infrastructure impact
---------------------
* The feature will be tested on a virtual environment.
* The performance testing will be conducted on a hardware environment.
--------------------
Documentation impact
--------------------
The user guide should be updated according to the described feature.
--------------
Implementation
--------------
Assignee(s)
===========
Primary assignee:
Anastasia Balobashina <atolochkova@mirantis.com>
Mandatory design review:
Aleksey Kasatkin <akasatkin@mirantis.com>
Sergey Matov <smatov@mirantis.com>
Work Items
==========
* Remove restrictions for using DPDK in VXLAN-based segmentation case.
* Fix the network serializer so that the transformations are configured
as described in the :ref:`Data model <data-model>` section.
* Convert the vendor specific attribute ``vlan_id`` for ``add-br`` to
``ovs-vsctl set port br-mesh tag=<vlan_id>``.
* Test manually.
* Create a system test for DPDK.
* Verify the :ref:`acceptance criteria <acceptance-criteria>`.
Dependencies
============
None
-----------
Testing, QA
-----------
* API/CLI test cases for configuring the DPDK with VXLAN segmentation.
* Web UI test cases for configuring the DPDK with VXLAN segmentation.
* Test case for DPDK with VXLAN segmentation being discovered and configured
properly.
* Test case for using the multiple-node network groups.
* Functional testing.
* Performance testing.
.. _acceptance-criteria:
Acceptance criteria
===================
* Ability to run a DPDK application on top of OVS/DPDK + VXLAN-enabled host
* A 3 Mpps packet rate on 64-byte UDP traffic on a single PMD thread,
  multiplied by the number of DPDK cores.
* Ability to work on the 40 Gb and 2x10 Gb cards from Intel's Fortville
  family.
----------
References
----------
None


@ -1,300 +0,0 @@
==========================================
Enforce access control for Fuel UI
==========================================
https://blueprints.launchpad.net/fuel/+spec/access-control-master-node
Currently, there is no enforced access control to the Fuel UI.
Problem description
===================
Anyone with access to the network can create, change, and delete environments.
At a minimum, this requirement could be fulfilled by a login/password when
connecting to the Fuel UI. If implemented in this manner,
additional requirements will be needed:
* Ability for a user to set / change their own password
* Default admin/admin
* Should be configurable in fuelmenu
More advanced options:
* Secure/Encrypted storage of passwords (potentially on the Fuel Master Node)
* A more advanced feature would be integration with an external
authentication source (e.g. Active Directory, LDAP)
* A "super user" account that can create additional accounts - but these
additional accounts cannot create more users
* A better implementation would be to have Role Based Access Control and
have "super user" as a role that can be assigned to one or more users
* This may lead, in the future, to a more granular RBAC - e.g. ability
to view but not take actions, restriction to specific environments, etc.
* Ability for a "super user" to change a user's password and/or disable/remove
an account
* Read only mode
Proposed change
===============
Use Keystone as authorization tool.
Advantages:
* it can be used with LDAP or AD
* supports authorization via tokens
* support scopes, and groups of users
* has good written api with many functions like getting accessible
endpoints for user
* has api easy for consumption
* has implemented events system that we can use in future
for additional monitoring
* has implemented multifactor authentication
(can be used with external systems)
* all apis that we need for future managing groups, roles,
users and project are created
* for UI we can base our solution on horizon solution
* keystone will be also used by Ironic project
Disadvantages:
* need to run in separate container/process
* next external dependency
* may be overkill
We tested keystone with postgresql; it is working.
We tested it via the console: create (user, role, tenant, endpoint), add role,
get token, list (role, user, tenant).
We also used the api to get user info, get a token, and perform operations
via the v3 api.
Blueprint will be implemented in several stages:
------------------------------------------------
Stage I
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Keystone installation
* install in nailgun container
* run script for creating separate db in postgresql container
* make available ports for keystone
* create base configuration
* in nginx allow connection to keystone
2. GUI
* login page
* logout button
3. After installation set password via fuelmenu
(it will be stored in astute.yaml)
Stage II(API protection)
^^^^^^^^^^^^^^^^^^^^^^^^^
1. Keystone
* create new container for keystone
* create service user for OSTF
2. Nailgun
* Try to use `keystone middleware <https://github.com/openstack/python-keystoneclient/tree/master/keystoneclient/middleware>`_,
for api and api v1
* for node agent we should run separate webpy app without middleware
3. GUI
* Add authorization credentials to all requests
* use keystone token in the auth header (see the example after this list)
* add handling of 401
4. Change OSTF and Fuel-client and Fuel CLI to use authorization
5. Make authorization optional (flag to enable/disable authorization)
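For illustration, an authenticated API request would then carry the Keystone
token in the standard header, for example (the host and endpoint are
assumptions)::

    curl -H "X-Auth-Token: $TOKEN" http://10.20.0.2:8000/api/clusters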
Stage III(all public services protection)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Node agent authorization
2. GUI
* change password page
3. Keystone
* create backup script for db
Stage IV(in unknown future)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Many users, groups/roles and api access based on groups/roles
(i.e. read-only, network-admin)
2. External authentication (LDAP, AD)
Alternatives
------------
**Write everything by yourself or use some existing components:**
we would need to write a user model and apis for creating and managing users,
groups, etc.
For oauth, we could reuse some existing libs like oauth2 for creating
and consuming tokens. Oauth would be easy to use with clients and for node
authorization.
Maybe we could also use sessions for the UI to persist the user token.
Advantages:
* full control
* possibilities to write good oauth2 authorization easy to use
also with nodes
Disadvantages:
* a lot of work on stuff that is already implemented in keystone
**Use basic auth in nginx**
Advantages:
* really simple to implement, requires only changes in nginx configuration
Disadvantages:
* It shows the browser's built-in login page, which looks a little different
  in every browser.
* We cannot create a custom login page.
* It is still required to implement handlers and a tab for password change.
* It is not extensible: if we want to implement the non-minimal requirements,
  we need to start from the beginning.
Data model impact
-----------------
A new database for keystone is required.
REST API impact
---------------
Keystone API will be used
Security impact
---------------
Fuel will be safer now. It will protect users against unauthorized access.
All actions will require authorization.
Notifications impact
--------------------
Keystone can log all requests to log file.
Other end user impact
---------------------
* Before performing any actions, user has to log in.
* python-fuelclient should be adjusted to use authorization
* fuel cli should be adjusted to use authorization
A password file for fuel-cli? (like .openrc, but .fuelrc)
Performance Impact
------------------
None
Other deployer impact
---------------------
A password for postgresql should be generated, and access from remote
locations should be blocked.
External connections to cobbler and rabbitmq should be allowed, but their
passwords should be changed to the same as for the API even in the first
version, if possible. In future versions we'll be able to transfer options
for the bootstrap node, so we should generate a bootstrap ssh key during
master node installation and use the password-protected API for nailgun
agents.
Developer impact
----------------
None
Upgrade impact
--------------
There will be a new container with keystone installed.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
loles@mirantis.com ksambor@mirantis.com
Work Items
----------
Dependencies
============
None
Testing
=======
Unit tests and functional tests are required.
Acceptance Criteria
-------------------
1. Stage I
* After installation, user should be able to set the password in fuelmenu,
  and it will be stored in astute.yaml
* User should be able to log in/log out of the Fuel UI with the credentials
  he set in fuelmenu
* If the user token times out, he should be logged out.
2. Stage II
* User should see keystone running in a separate container,
  using the command: docker ps
* All requests, except node agent requests, should be authenticated.
* User should be able to run nailgun with disabled authorization.
It should be done via settings or command line.
* All requests to ostf should be authenticated
* all tests should run without any problems
* fuelclient should use authentication
3. Stage III
* User should be able to change the password via a UI page.
* The node agent should use authentication to register in Fuel
4. Stage IV is just group of ideas. No need for acceptance criteria yet.
Documentation Impact
====================
It should be described how to change the password and where it is required.
References
==========
None


@ -1,194 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================
Backup Master Node
==================
https://blueprints.launchpad.net/fuel/+spec/backup+master+node
Because Fuel Master HA is viewed as a waste of a user's resources, we need
to provide value by allowing for backup/recovery for disaster recovery
scenarios. Now that Fuel Master is running on Docker containers, backup and
recovery are quite painless and simple.
Problem description
===================
A detailed description of the problem:
* Fuel Master currently cannot be backed up or restored
* Reconfiguration of the Fuel Master requires significant manual input
Proposed change
===============
Fuel Master backup and recovery can be simplified by use of scripts and a
simple mechanism to compress and save the archive wherever the user requests.
Backup will take place with powered down containers to ensure consistent state.
Recovery in its first stage of implementation will be simplified. It will not
include astute.yaml settings (IP addresses, DHCP settings, DNS, NTP, etc). It
will simply restart the Docker containers to the backed up state, plus restore
all configurations, Puppet manifests, and package repositories.
Alternatives
------------
Backup and restore can be done with docker-0.10 without freezing running
containers, but it may result in inconsistent data.
Using docker-0.12 will allow freezing containers to save running state without
any destructive risks.
Data model impact
-----------------
None
REST API impact
---------------
None
Upgrade impact
--------------
This will impact upgrades. The scope of this feature is not so extensive as to
cover restoring an old version of Fuel onto a newer installed Fuel Master. In
most cases, this would downgrade every component on the system except Fuel
Master base host packages. A workaround could be devised in a future blueprint,
but certainly not in the time frame of 5.1.
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
The user will interact with backup and restore via the dockerctl command
line utility. All containers will be shut down during the backup process.
The backup will fail to start if there are any incomplete tasks present
when running *fuel task --list*.
Performance Impact
------------------
Minimal. There will be a performance hit during the backup process,
resulting in downtime.
Other deployer impact
---------------------
Several new configuration defaults are introduced; they should work well in
real deployments:

* The default backup path is /var/backup/fuel.
* The backup ID will be generated from a timestamp of YYYY-MM-DD-hh_ss.
* The backup will be tar'ed and then compressed with lrzip, due to its
  efficiency in handling deduplication across large archives.

This change takes immediate effect once it is merged, but no backups are
automatic.
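To make the naming and compression scheme concrete, here is a minimal
sketch (illustrative only; the real implementation lives in dockerctl, and
the helper names here are hypothetical)::

    import subprocess
    import time

    BACKUP_DIR = '/var/backup/fuel'  # default backup path from this spec

    def make_backup_id():
        # Timestamp-based backup ID, per the YYYY-MM-DD-hh_ss scheme above.
        return time.strftime('%Y-%m-%d-%H_%S')

    def create_backup_archive(source_dir):
        tarball = '%s/fuel-backup-%s.tar' % (BACKUP_DIR, make_backup_id())
        # Archive containers, repositories, logs, and puppet manifests...
        subprocess.check_call(['tar', 'cf', tarball, source_dir])
        # ...then compress with lrzip, which deduplicates well across
        # large archives; this produces <tarball>.lrz.
        subprocess.check_call(['lrzip', tarball])
        return tarball + '.lrz'
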
Developer impact
----------------
There will be an impact on the dockerctl config dependency on container
names.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
raytrac3r
Feature Lead: raytrac3r
Mandatory Design Reviewers: vkuklin
Developers: raytrac3r
QA: ykotko
Other contributors:
None
Work Items
----------
* Add backup feature to create archive of all containers, repositories, logs,
and puppet manifests.
* Add restore feature to overwrite all containers, repositories, logs,
and puppet manifests.
* User documentation on how to backup and restore.
* (Nice to have) backup via rsync, ftp, or http.
Dependencies
============
None.
Testing
=======
Automated tests for backup and restore need to be added to the current Fuel
system tests.

Acceptance criteria:
* User can deploy multinode OpenStack and run a backup.
* User can deploy HA OpenStack and run a backup.
* User can install Fuel Master on a new host with the same network
configuration and then restore the backup.
* User can manage all existing environments (delete node, add node).
* User can deploy new OpenStack environments.
Documentation Impact
====================
User-facing docs are required to show users the different ways to perform
the backup and restore.
References
==========
None

==============
Feature Groups
==============
https://blueprints.launchpad.net/fuel/+spec/feature-groups
We need a mechanism to build Fuel ISOs with different "flavors". Currently,
it is only possible to specify the MIRANTIS=yes flag to create an ISO with
the Mirantis logo, but we need to configure the ISO build in a more flexible
way.
Problem description
===================
For now, we need two options for the ISO build:
* Whether or not to put Mirantis logo to the footer
* Whether or not to allow usage of experimental features
The resulting ISO may have both or neither of them. It would also be good if
any of these options could be changed on a working master node.
Proposed change
===============
A key "feature_groups" needs to be added to "VERSION" section of settings.yaml.
It should contain a list of strings, which presence in this list should be
checked in a few places such as footer, settings tab, role list, wizard, etc.
These checks also can be written as restrictions in configs::
    values:
      - data: "kernel_lt"
        label: "EXPERIMENTAL: Use Fedora longterm kernel"
        description: "Install the Fedora 3.10 longterm kernel"
        restrictions:
          - "'experimental' in version:feature_groups"
ISO build scripts should be modified to use a FEATURE_GROUPS environment
variable. Its value should contain a comma-separated list of feature groups
to put into settings.yaml. Example::

    make FEATURE_GROUPS=mirantis,experimental iso

If FEATURE_GROUPS is undefined, only the "experimental" feature group should
be enabled. Handling of the MIRANTIS environment variable should be removed.
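For illustration, the check behind such a restriction could boil down to
something like this (a minimal sketch; the function name and data layout
are assumptions, not actual nailgun code)::

    def restriction_matches(restriction, version):
        # Evaluates a "'<group>' in version:feature_groups" style
        # restriction; the expression is parsed trivially here, for
        # illustration only.
        group = restriction.split("'")[1]
        return group in version.get('feature_groups', [])

    version = {'feature_groups': ['mirantis', 'experimental']}
    print(restriction_matches(
        "'experimental' in version:feature_groups", version))
    # -> True, so the kernel_lt control above would be shown
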
Alternatives
------------
This can also be achieved by implementing these features as plugins, so this
approach should probably be considered a temporary solution until the plugin
system is implemented properly.
Data model impact
-----------------
None
REST API impact
---------------
A new field "feature_groups" should be added to /api/version response. Field
"mirantis" should be removed.
Upgrade impact
--------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
Minimal. The UI should perform checks on whether or not to show settings,
roles, and other controls dependent on feature groups.
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
vkramskikh
Other contributors:
dpyzhov
Work Items
----------
Dependencies
============
None
Testing
=======
Should be tested manually. Acceptance criteria:
* ISO built with "mirantis" group should have the logo in the footer
* ISO built with "experimental" group should have Zabbix role
Documentation Impact
====================
The processes of specifying feature groups for the ISO build and of
modifying them on a deployed master node should be documented.
References
==========
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=================================
Improve Galera Cluster Management
=================================
https://blueprints.launchpad.net/fuel/+spec/galera-improvements [1]_
Problem description
===================
Galera Cluster implementation has some issues when a new controller is added to
the cluster. This case usually happens during cluster deployment or new member
addition.
Here are the issues in current implementation of Galera Cluster Management:
- The current implementation uses mysqldump as the State Snapshot Transfer
  (SST) method, which locks the donor during the `process
  <http://galeracluster.com/documentation-webpages/nodeprovisioning.html
  #comparison-of-state-snapshot-transfer-methods>`_. Because of this it is
  not possible to deploy Fuel Controllers in parallel: the Primary
  Controller can perform SST with only one controller at a time, so all
  other controllers cannot synchronize their state with the Primary
  Controller.
- HAProxy doesn't detect whether a controller is out of sync during
  SST/IST. This is not a problem during deployment, but it may be a
  significant problem when a new controller is added.
Proposed change
===============
- Add MySQL 5.6.16 with galera 0.25 module to Fuel
- Use Percona's HAProxy `clustercheck script
<https://github.com/olafz/percona-clustercheck/blob/master/clustercheck>`_
to verify Galera status
- Refactor MySQL settings (wsrep.conf), include new Galera settings, remove
default settings from config
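The idea behind clustercheck is simply to expose the node's Galera sync
state over HTTP so that HAProxy can poll it. A simplified Python sketch of
that logic follows (the real script referenced above is a shell script;
local MySQL access without a password is assumed here)::

    import socket
    import subprocess

    def galera_synced():
        # wsrep_local_state == 4 means "Synced"; a donor/joiner node must
        # not receive traffic.
        out = subprocess.check_output(
            ['mysql', '-sN', '-e', "SHOW STATUS LIKE 'wsrep_local_state'"])
        return out.split()[-1] == b'4'

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', 49000))  # the port HAProxy polls, per this spec
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        ok = galera_synced()
        status = b'200 OK' if ok else b'503 Service Unavailable'
        conn.sendall(b'HTTP/1.1 ' + status +
                     b'\r\nContent-Length: 0\r\n\r\n')
        conn.close()
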
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Upgrade impact
--------------
Security impact
---------------
Port 49000 will be opened. Anyone will be able to obtain the status of Galera
Cluster.
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
During normal operations the performance will be the same. When a new
controller is added, performance will improve, as the "xtrabackup" SST
method won't lock the donor and is faster than mysqldump.
The innodb_doublewrite, innodb_thread_concurrency, and
innodb_write_io_threads settings were added to improve the performance of
the InnoDB engine.
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
None
Assignee(s)
-----------
Primary assignee:
sgolovatiuk@mirantis.com
Work Items
----------
None
Dependencies
============
None
Testing
=======
Destructive tests are required.
Manual testing and log verification are required.
Documentation Impact
====================
- Describe the clustercheck script functionality
- Describe the HAProxy statistics for the MySQL cluster
References
==========
.. [1] https://blueprints.launchpad.net/fuel/+spec/galera-improvements

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================================
Substitution native OS installation process with image based one
================================================================
https://blueprints.launchpad.net/fuel/+spec/image-based-provisioning [1]_
Problem description
===================
First, we use plenty of customizations of the OS installation process. It is
not always possible to customize native OS installers such as
debian-installer and anaconda to the level we need. Besides, supporting
customizations requires engineering resources: supporting one set of
customization scripts instead of two completely different sets for anaconda
and debian-installer takes half the engineering effort.

Second, assembling a root file system from scratch on every node during OS
provisioning takes a lot of time. It is much more effective to build the OS
root filesystem just once and then copy it to the nodes. This is going to be
up to 10 times faster than the installation process using so-called native
installers.
Proposed change
===============
The first aspect of the issue is going to be addressed by implementing a
fully customizable and extremely simple python script which is supposed to
be both a discovery agent (currently written in Ruby) and an installation
agent, which will do nothing more than make disk partitions, retrieve the OS
root filesystem image, and copy it to a hard drive.

Since MOS has plenty of customizations even for core CentOS and Ubuntu
packages, we need to implement building bare OS images from scratch using
anaconda and debootstrap. Those images are supposed to have cloud-init
installed, and we suggest using cloud-init's built-in capabilities to
install and configure puppet and mcollective after the first reboot.

In the future we'll use diskimage-builder to build custom OS images based on
those which are distributed by Canonical [2]_ and RedHat [3]_. OpenStack
diskimage-builder is a community tool which is actively developed, and using
this tool is potentially a great advantage.
OpenStack Ironic nowadays seems to be mature enough to be used as a
provisioning tool instead of Cobbler. Ironic's scope, however, is strictly
limited to cloud environments. It is not going to support hardware without
IPMI, nor disk partitioning and other important features. Besides, the
Ironic python agent and the Ironic agent driver are not production ready
yet. As a result, we suggest implementing the disk-image-based provisioning
process mostly on the agent side, fully independently of Ironic. This means
we are going to implement our own agent, partly based on the Ironic python
agent. We also suggest using Cobbler as a tool for managing the tftp and
dhcp services, but not for templating kickstarts.
Since we are going to use our own OS installation agent, and this agent is
supposed to be extremely simple, we don't need to reboot a node before
provisioning, nor do we need Cobbler's capability to boot nodes with
different profiles. The discovery/installation flow between Nailgun, Astute,
the agent, and Cobbler is:

#. The node PXE boots from Cobbler into the bootstrap ramdisk.
#. The agent sends discovery data to Nailgun.
#. Nailgun sends a provision task to Astute.
#. Astute sends the provisioning config to the agent and launches
   provisioning.
#. The agent downloads the image, makes the partitions, and prepares the
   configdrive.
#. The agent reports to Astute that provisioning is finished.
#. Astute disables PXE boot for the node in Cobbler and reboots the node.
#. Astute reports to Nailgun that the provisioning task is finished.
Our suggestion is to put all agent-related code into the
fuel-web/fuel_agent python package and implement the discovery and
provisioning parts independently, as two executable python scripts:
- /opt/nailgun/bin/agent_new (discovery part)
- /opt/nailgun/bin/provision
The discovery part is supposed to be based on the code which is used as a
discovery extension in the Ironic python agent. It uses dmidecode and other
OS-level tools, with no third-party python libraries. The discovery agent
(agent_new) is not supposed to be run periodically by the crond daemon.
Although the format of the discovery data is not supposed to change, we
suggest continuing to use the Ruby + Ohai based agent without any changes
until we are sure that the new python-based discovery agent (agent_new) is
stable and works well. The reason is that the discovery agent is an
extremely crucial part of Fuel functionality. To figure out whether
agent_new works exactly the same way as the old one, we can launch both
agents and compare their output; if it differs in some way, we can report
this fact on the Fuel master node.
The provision script will be run using two mcollective agents: uploadfile
and execute_shell_command. Uploadfile will prepare a config file containing
all the data that is necessary for provisioning and comes from the
provisioning serializer. The provision script will make partitions according
to the configuration, download the OS image, copy it to a hard drive, then
prepare the configdrive and copy it to a hard drive as well.
A configdrive is a set of configuration files for cloud-init. We assume
puppet and mcollective will be configured by cloud-init right after the
first reboot. So, the agent needs to be able to take the parameters given in
the serialized provisioning data set and put them into a configdrive in a
format that cloud-init is able to read. The configdrive is supposed to be
put on a separate partition at the end of one of the hard drives on a node
during the provisioning stage. The configdrive is just a file system which
has at least the following structure:
- openstack/latest/meta_data.json
- openstack/latest/user_data
where user_data is supposed to be a multipart mime file [4]_. This file
will contain the puppet and mcollective configurations, as well as the
executable script implementing all the logic which currently exists as a
set of cobbler snippets [6]_. Cloud-init should be configured so as to have
the so-called NoCloud data source as its only data source (the configdrive).
A cloud-init configuration file example is here [5]_.
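A rough sketch of assembling such a NoCloud configdrive tree (illustrative
only; the actual fuel_agent code and the exact user_data parts are not
defined by this spec)::

    import json
    import os
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    def build_configdrive(root, hostname, parts):
        # parts: (filename, text) pairs, e.g. puppet/mcollective configs
        # and the post-boot script replacing the cobbler snippets.
        latest = os.path.join(root, 'openstack', 'latest')
        os.makedirs(latest)

        # Minimal instance metadata for cloud-init's NoCloud data source.
        with open(os.path.join(latest, 'meta_data.json'), 'w') as f:
            json.dump({'uuid': hostname, 'hostname': hostname}, f)

        # user_data is a multipart MIME file carrying the configurations.
        msg = MIMEMultipart()
        for filename, text in parts:
            part = MIMEText(text, 'plain')
            part.add_header('Content-Disposition', 'attachment',
                            filename=filename)
            msg.attach(part)
        with open(os.path.join(latest, 'user_data'), 'w') as f:
            f.write(msg.as_string())
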
The astute provision method will add node records into cobbler, but only to
prevent them from booting in bootstrap mode. The provision method should be
rewritten to run the provision script on nodes and provide this script with
the serialized provisioning data generated by nailgun.
Alternatives
------------
Another possible way is to integrate Ironic into Fuel. Why not? Because
Ironic has a very specific scope which is more about cloud environments,
where a node is provisioned and leased by a tenant for a while and then
returned, to repeat that cycle again. This very specific use case makes
Ironic tightly limited in its capabilities. For example, Ironic assumes all
partitioning-related work will be encapsulated either in the image itself or
in the configuration stage (not the provisioning stage). Ironic also is not
going to support OS-agent-based power management (only IPMI, iLO, DRAC,
etc.). That is why it is better to address those provisioning customization
issues Fuel currently has independently of Ironic.
Placing a partition table into an OS image is going to be part of DIB's
capabilities. Currently a cloud OS image is just an image of a root file
system. But what if the OS image were an image of a block device with a
partition table inside it? This is possible if you use logical volumes,
which, unlike plain primary partitions, are extendable. During image
building you create a logical volume which exactly fits the size of the
unextended root file system; then, after reboot, cloud-init creates the
other primary partitions, places physical volumes there, attaches those
physical volumes to the root volume group, and then extends the root logical
volume and the root file system.
Data model impact
-----------------
* Discovery data format won't be changed.
* Serialized provisioning data format won't be changed.
REST API impact
---------------
None
Upgrade impact
--------------
This change assumes that a bootstrap-2 distro and a bootstrap-2 profile
will be created in Cobbler. The bootstrap-2 distro will be bound to an
initramfs containing fuel_agent, and the bootstrap-2 profile will be used
for the default Cobbler system. The upgrade script is also supposed to put
two bare OS images into /var/www/nailgun so as to make the provision agent
able to download them. It will be possible to use both native provisioning
and image-based provisioning, choosing between the two by pointing out which
astute provisioning driver (cobbler or image) to use.
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
The provisioning progress bar should probably be removed altogether, as
provisioning is going to take about as much time as the reboot stage usually
takes.
Performance Impact
------------------
The provisioning process is going to take much less time than it currently
does.
Other deployer impact
---------------------
Since we are going to include bare Ubuntu and CentOS images in the ISO, it
is going to become around 700M bigger.
Developer impact
----------------
UI team cooperation will probably be necessary to remove the provisioning
progress bar, if that is deemed appropriate.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<vkozhukalov@mirantis.com>
<agordeev@mirantis.com>
Work Items
----------
- Create make scripts for building bare OS images (Centos and Ubuntu)
from scratch and for putting those images into ISO. (Iteration 1)
- Re-implement in terms of cloud-init all that stuff which is currently
implemented in terms of Cobbler snippets. (Iteration 1)
- Create provisioning agent script. (Iteration 1)
* partitioning
* downloading and copying OS image
* preparing and copying configdrive
- Testing and debugging. (Iteration 2)
Dependencies
============
None
Testing
=======
Testing approach
- Create VM or allocate hardware node.
- Deploy tftp + pxelinux and configure pxelinux with bootstrap ramdisk
as a default item. Bootstrap ramdisk should contain provisioning script.
- Prepare a set of test provisioning configurations similar to the ones
  generated by the provisioning serializer in nailgun.
- Run provision script with a set of different configurations one by one,
comparing obtained state with required one.
Testing is supposed to be implemented according to this document [7]_
Acceptance criteria
- Two bare OS images built from scratch using MOS repositories must be
available via http on Fuel master node
- After master node upgrade Cobbler must have one additional distro
bootstrap-2 and one additional profile bootstrap-2 which are supposed to
provide ramdisk with built-in fuel agent.
- It must be possible to choose one of the two provisioning options,
  "native" and "image based". This is supposed to be configured using the
  nailgun configuration file settings.yaml. By default the "native" driver
  will be used.
- During image based provisioning fuel agent must make an appropriate
partitioning scheme on a node according to the partitioning data, which is
supposed to have the same format as it currently has.
- Once provisioning process is done, cloud-init must perform initial node
configuration including at least but not limited to network, ssh,
puppet and mcollective.
Documentation Impact
====================
It will be necessary to re-write those parts of Fuel documentation
which mention cobbler and provisioning.
References
==========
.. [1] https://blueprints.launchpad.net/fuel/+spec/image-based-provisioning
.. [2] http://cloud-images.ubuntu.com/
.. [3] http://openstack.redhat.com/Image_resources
.. [4] https://help.ubuntu.com/community/CloudInit
.. [5] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/config/cloud.cfg
.. [6] https://etherpad.openstack.org/p/BOwAMY9pqy
.. [7] http://docs.mirantis.com/fuel-dev/devops.html

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================================================
Install openstack from upstream source repositories
===================================================
https://blueprints.launchpad.net/fuel/+spec/openstack-from-master
Be able to deploy the very latest distribution of OpenStack from upstream
master. This is to provide community developers a way to deploy their own
additional changes through an easy to use deployment technology (i.e. Fuel).
Problem description
===================
* The idea behind that feature is to allow customers to compile OpenStack
packages during a Fuel ISO build on the fly, both RPM and DEB versions.
* Customers may use spec files either from our public Gerrit, or from their
own local/remote git repos.
Proposed change
===============
Changes will include:
* New configuration entries to fuel-main/config.mk
* New subroutines for our make system that will build RPM and DEB packages
by using configuration entries from fuel-main/config.mk
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Upgrade impact
--------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
* Additional options to the "make iso" command allow the user to customize
  the external sources from which OpenStack components are built.
Performance Impact
------------------
By using this feature to build multiple custom OpenStack components, the
total ISO build time could be significantly higher than that of a "vanilla"
Fuel ISO.
Other deployer impact
---------------------
The fuel-main/config.mk will contain the following new parameters:
* BUILD_OPENSTACK_PACKAGES - contains a comma-separated list of OpenStack
  components to build, or "0" otherwise

For each OpenStack component, the following list of parameters is defined
(using Neutron as an example):

* NEUTRON_REPO
* NEUTRON_COMMIT
* NEUTRON_SPEC_REPO
* NEUTRON_SPEC_COMMIT
* NEUTRON_GERRIT_URL
* NEUTRON_SPEC_GERRIT_URL

These values take effect only if the BUILD_OPENSTACK_PACKAGES parameter
contains the name of the respective OpenStack component, e.g.::

    BUILD_OPENSTACK_PACKAGES:=neutron

It is also possible to build only specific OpenStack components, by invoking
make with the target component, e.g.::

    make neutron
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Vitaly Parakhin
Work Items
----------
Initial phase:
* Implement building RPM packages from master
* Produce the specs for building RPM from master
Second phase:
* Implement building DEB packages from master
* Produce the specs for building DEB from master
Dependencies
============
* https://blueprints.launchpad.net/fuel/+spec/build-packages-for-openstack-master-rpm
* https://blueprints.launchpad.net/fuel/+spec/osci-to-dmz
Testing
=======
The following tests should be performed:
* Building all OpenStack components from master using our specs
* Deployment tests for an ISO with customized OpenStack components
The existing deployment tests are adequate for testing a customized ISO.

Acceptance criteria:

* Each of the OpenStack components can be built from master using our specs
* Deployment of a simple multinode OpenStack succeeds
* Diagnostic snapshot works
* Health Check works
Documentation Impact
====================
A note should be added to the Fuel User Guide to describe the possibility of
building custom OpenStack components from upstream source repositories
during the ISO build.
References
==========
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============================================
Reliable Pacemaker Galera Resource Agent Script
===============================================
https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script [1]_
This document is intended to capture the problems and requirements for a
Pacemaker OCF "Resource Agent" (hereafter RA) to improve Galera Cluster
management under the Pacemaker Resource Manager.
Problem description
===================
* Reboot of the whole cluster (power outage scenario)

  - The RA script doesn't determine the latest Galera GTID version. It
    always relies on the "primary controller" as a donor. Under some
    circumstances Pacemaker cannot assemble the Galera cluster.

* Reboot of any node in the cluster
* Addition of a new node to an active cluster
* Advanced features

  - Currently the puppet manifests use *cs_shadow* as a method for cluster
    management. It's not possible to use *crm_attribute* to store
    attributes in the configuration, as *cs_shadow* will revert the values
    back.
Proposed change
===============
* Write a new RA script for Galera with the following requirements:

  - The RA script allows bootstrapping the cluster even when
    wsrep_cluster_address has all nodes specified.
  - The RA script introduces a timeout during which Pacemaker waits 60-120
    seconds until all nodes specified in the CIB come online after a reboot
    or outage.
  - After 60-120 seconds the RA script must start the Primary Component
    election process, which selects the node with the latest GTID. This
    timeout is specified as a node attribute and can be changed by the
    administrator. If all nodes specified in the CIB are UP, the election
    process starts immediately.
  - The RA script determines the Galera GTID state and sets it as a node
    attribute. The RA gets the GTID from **mysqld --wsrep-recover** or the
    SQL query **SHOW STATUS LIKE 'wsrep_local_state_uuid'**.
  - The node with the latest GTID becomes the Galera Primary Controller. It
    is started with an empty gcomm:// string. All other nodes join the
    Galera Primary Controller to synchronize their state.
  - If a node bootstraps after the timeout, it will discard its
    configuration. This usually happens when the node is stuck performing
    *fsck*.
  - When a new node is added to the cluster, it joins the cluster normally.

* Remove cs_shadow

  - Remove cs_shadow from the manifests to allow storing node attributes.
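To illustrate the election step, here is a toy sketch of how the recovered
GTID sequence numbers could be compared (the real RA is a shell OCF script,
and the crm_attribute plumbing is omitted here)::

    import re
    import subprocess

    def recovered_seqno():
        # "mysqld --wsrep-recover" logs a line such as
        #   WSREP: Recovered position: <uuid>:<seqno>
        out = subprocess.run(['mysqld', '--wsrep-recover'],
                             stderr=subprocess.PIPE).stderr.decode()
        match = re.search(r'Recovered position:\s*\S+:(-?\d+)', out)
        return int(match.group(1)) if match else -1

    def elect_primary(node_seqnos):
        # node_seqnos: {node_name: seqno}, collected as node attributes.
        # The node with the highest seqno (latest GTID) bootstraps the
        # cluster with an empty gcomm:// string; all other nodes join it.
        return max(node_seqnos, key=node_seqnos.get)

    print(elect_primary({'node-1': 1042, 'node-2': 1040, 'node-3': 1041}))
    # -> 'node-1'
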
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Upgrade impact
--------------
This change doesn't affect the master node upgrade. OpenStack upgrade should
be disabled, as this change impacts the HA logic.
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Sergii Golovatiuk (sgolovatiuk@mirantis.com)
Work Items
----------
- Write Galera OCF script
- Perform all set of destructive tests
Dependencies
============
Testing
=======
A full set of destructive tests: reboot a single node, reboot the whole
cluster, add a new node from the Fuel UI.
Documentation Impact
====================
The documentation should indicate how to increase/decrease the bootstrap
timeout.
References
==========
.. [1] https://blueprints.launchpad.net/fuel/+spec/reliable-galera-ocf-script

View File

@ -1,189 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Secure Fuel Master Services
===========================
https://blueprints.launchpad.net/fuel/+spec/secure-fuel-master-services
When Fuel was a new project, predefined usernames and passwords were provided
for deployment. Securing the services provided by Fuel Master is necessary
to meet user expectations. This includes services, such as Cobbler, RabbitMQ,
PostgreSQL, rsyslog, and several more. Note that this change does not
intersect with the access-control-master-node spec and will not address the
Fuel API authentication implementation.
Problem description
===================
Hardcoded usernames and passwords for every service, with no available
method to provide strong passwords, limit viability in production
environments.
Proposed change
===============
Main changes include:
* Randomized passwords generated by fuelmenu for supporting services.
* Limiting service availability to the Admin network.
* Adapting astute.yaml to contain service usernames and passwords.
The change is limited to adding a new module to fuelmenu which is not
user-facing, plus modifications to Puppet manifests to distribute these
passwords, rather than hardcoded passwords.
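For illustration, the randomized credential generation inside fuelmenu
could look roughly like this (a minimal sketch; the actual module name and
character set are not specified by this spec)::

    import random
    import string

    def generate_password(length=24):
        # SystemRandom draws from the OS entropy pool; letters and digits
        # avoid quoting problems when the value lands in astute.yaml.
        rng = random.SystemRandom()
        alphabet = string.ascii_letters + string.digits
        return ''.join(rng.choice(alphabet) for _ in range(length))

    # One generated credential per supporting service, e.g.:
    credentials = {name: generate_password()
                   for name in ('ostf', 'cobbler', 'astute', 'mcollective')}
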
Alternatives
------------
Passwords could be generated with any tool available, but it would mean that
Fuel Master relies on more than one source of information for deployment. The
astute.yaml file for deployment provides all information for each service to
configure itself with Puppet.
Data model impact
-----------------
None. The way Nailgun stores data will not be affected in any way.
REST API impact
---------------
None. No API will be modified and its configuration will be semantically
identical.
Security impact
---------------
* Visibility of passwords in astute.yaml may pose a risk, but gaining root
access compromises many other services that would be a concern as well.
* This change will involve a password generation schema that should be
reasonably strong and not predictable.
* Not permitting user-provided data ensures stronger passwords for supporting
services.
Notifications impact
--------------------
None.
Other end user impact
---------------------
The only user impact will be the lack of direct access from outside the Fuel
Admin network to the Fuel Master services (excluding the Fuel API).
Performance Impact
------------------
None.
Other deployer impact
---------------------
Regarding new configuration, astute.yaml will contain the following new
parameters:
* ostf/user
* ostf/password
* postgres/nailgun_dbname
* postgres/nailgun_user
* postgres/nailgun_password
* postgres/ostf_dbname
* postgres/ostf_user
* postgres/ostf_password
* mcollective/user
* mcollective/password
* astute/user
* astute/password
* cobbler/user
* cobbler/password
This will take immediate effect after deployment, but requires no manual input.
For continuous integration tests that log directly into any services using
predefined usernames and passwords, these credentials need to be parsed from
astute.yaml first.
Developer impact
----------------
None.
Upgrade impact
--------------
An extra script will be required for upgrading from 5.0 to 5.1 to enable the
new manifests to recycle old default passwords. No passwords will be changed
(or secured) for deployments that are being upgraded from 5.0. The script will
simply populate astute.yaml with the legacy hardcoded passwords.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
raytrac3r
Feature Lead: raytrac3r
Mandatory Design Reviewers: vkuklin
Developers: raytrac3r
QA: asledzinskiy
Work Items
----------
Initial phase:
* Implement password generation inside fuelmenu.
* Add service password module to fuelmenu to generate randomized credentials.
* Refactor site.pp for each service to use these passwords.
Second phase:
* Implement patch for astute.yaml when performing upgrades from Fuel 5.0.
* Add iptables rules to limit which interfaces expose external access.
* (Nice to have) method to update any of these passwords and propagate
changes to every service after initial deployment.
Dependencies
============
None. This blueprint coincides with access-control-master-node, but does not
actually depend on it.
Testing
=======
The existing deployment tests are adequate.
Acceptance criteria:
* Deployment of simple multinode OpenStack succeeds
* Diagnostic snapshot works
* Health Check works
Documentation Impact
====================
A note should be added to the Fuel User Guide pointing users to astute.yaml
if they require credentials for the Fuel Master internal services.
References
==========
None.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Optionally pack upgrade tarball with lrzip
==========================================
https://blueprints.launchpad.net/fuel/+spec/upgrade-lrzip
Problem description
===================
Our upgrade tarball is almost 5 GB in size. We can compress it with lrzip,
which will save about 2 GB of space and network traffic for users.
Proposed change
===============
Create a separate build target with uncompressed Fuel images inside of
a compressed archive file called fuel-upgrade.tar.lrz. Start using LRZ
archives instead of TAR for upgrade tarballs everywhere.
Alternatives
------------
* Do nothing. This would save the time spent building and unpacking.
* Decrease the amount of data in the tarball. This is out of scope for this
  small blueprint and will be done in the next release.
Data model impact
-----------------
None
REST API impact
---------------
None
Upgrade impact
--------------
The command line for the upgrade will change. The size of the tarball will
decrease, while the unpack time will increase.
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
The new command line for unpacking the upgrade tarball:
``lrzuntar fuel-upgrade.tar.lrz``.
Performance Impact
------------------
It takes about 15 minutes in a VirtualBox environment to unpack the LRZ
archive. It does not have a noticeable impact on build time.
The upgrade process will be faster because there is no need to unpack the
fuel-images.tar.lrz file.
Other deployer impact
---------------------
None
Developer impact
----------------
There will be no changes in existing build scenarios. A developer can build
a compressed tarball with the 'make upgrade-lrzip' command. In order to
build the tarball, iso, and img, use the 'make iso img upgrade-lrzip'
command. The 'make all' command includes the uncompressed upgrade tarball,
so 'make all upgrade-lrzip' will create both compressed and uncompressed
tarballs, which is not needed in the common case.

Our system tests need an update in order to work with compressed tarballs.
Existing test scenarios are not affected.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
lux-place
Other contributors:
None
Work Items
----------
* Update upgrade.sh so that it works both with the compressed
  fuel-images.tar.lrz and the uncompressed fuel-images.tar.
* Add a new 'make upgrade-lrzip' command.
* Extend the 'untar' method in fuelweb_test/helpers/checkers.py with support
  for LRZ archives (see the sketch after this list).
* Use the compressed tarball in the community build.
* Turn on the compressed tarball for all builds on the product jenkins.
* Update the upgrade instructions with the new command line.
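A rough idea of what that 'untar' extension might look like (a hedged
sketch only; the real helper in fuelweb_test runs commands on a remote node
over SSH, which is omitted here)::

    import subprocess

    def untar(tar_path, dest_dir):
        # LRZ archives need a decompression step before extraction.
        if tar_path.endswith('.lrz'):
            # lrunzip writes the plain .tar next to the archive by default.
            subprocess.check_call(['lrunzip', tar_path])
            tar_path = tar_path[:-len('.lrz')]
        subprocess.check_call(['tar', 'xf', tar_path, '-C', dest_dir])
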
Dependencies
============
None
Testing
=======
An automated test for upgrade with a compressed tarball is needed.
Acceptance criteria:
* User can upgrade Fuel Master using compressed tarball.
Documentation Impact
====================
The upgrade guide must be updated with the new command line for unpacking
the tarball.
References
==========
Discussion in openstack-dev:
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg32837.html

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============================
Integration of NSX with vCenter
===============================
https://blueprints.launchpad.net/fuel/+spec/vcenter-nsx-support
Fuel will be able to deploy OpenStack which will use VMWare vCenter as
a hypervisor and VMWare NSX as a network virtualisation backend.
Problem description
===================
Fuel 5.0 has limited support for vCenter as a hypervisor and no NSX support
at all, but OpenStack can be integrated with both of these components. There
are already two other blueprints about vCenter support improvements [0] and
introducing basic NSX support [1], but the two features currently cannot be
used simultaneously without some additional work. If a user has already
paired a vCenter cluster and the NSX platform (or multiple vCenter + NSX
pairs), they should be able to manage them with OpenStack.
[0] https://blueprints.launchpad.net/fuel/+spec/vcenter-hv-full-scale-support
[1] https://blueprints.launchpad.net/fuel/+spec/neutron-nsx-plugin-integration
Proposed change
===============
After the blueprints [0] and [1] mentioned above are implemented, it will be
possible to enable both features simultaneously. It's mostly administrative
work, because all the manifests will be ready; we just need to allow
simultaneous use of the features somewhere in the Release description.
Alternatives
------------
We can do nothing, but then a user will not be able to use their already
paired vCenter + NSX environment as a hypervisor for OpenStack.
Data model impact
-----------------
No data models modifications needed.
REST API impact
---------------
No REST API modifications needed.
Upgrade impact
--------------
I see no obstacles regarding upgrades. The NSX Neutron plugin is part of the
official set of plugins, and the VMwareVCDriver compute driver is also an
official driver, so any upgrades should be done in the common way.
Security impact
---------------
No additional security modifications needed.
Notifications impact
--------------------
Small modifications to the Cluster Creation Wizard are needed.
Other end user impact
---------------------
None.
Performance Impact
------------------
None.
Other deployer impact
---------------------
There should be no significant changes in Puppet modules. Most of the work
should be done on the Nailgun/UI side.
Release upgrades should be covered by blueprints [0], [1] mentioned above.
Developer impact
----------------
No extra developer impact needed.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Andrey Danin (gcon-monolake)
Other contributors:
Igor Zinovik (izinovik)
Work Items
----------
* Set up the dev environment with one vCenter and one NSX clusters.
* Modify openstack.yaml and test it.
* Create a pull request to Gerrit.
* Describe a test environment and additional System tests and discuss it in ML.
* Set up a test environment and provide System tests.
* Set up additional Jenkins jobs for System tests.
Dependencies
============
https://blueprints.launchpad.net/fuel/+spec/vcenter-hv-full-scale-support
https://blueprints.launchpad.net/fuel/+spec/neutron-nsx-plugin-integration
https://blueprints.launchpad.net/fuel/+spec/devops-bare-metal-driver
Testing
=======
Acceptance Criteria:

* The user should be able to deploy an environment with these parameters:

  - host OS: CentOS / Ubuntu;
  - deployment mode: simple, HA;
  - roles: different roles are supported, with vCenter as the hypervisor
    and the NSX plugin enabled simultaneously and the required settings
    made through the Fuel UI.

* All operations with the environment which are provided through the Fuel
  UI must be available to the user.
* NSX and vCenter must be stable in all destructive tests that we already
  have for these features.
* OSTF tests related to these features must pass, especially the 'smoke',
  'sanity', and 'ha' groups.
* The network connectivity test must pass.
* Manual testing using checklists, according to the acceptance criteria
  above, is now a high-priority part of acceptance testing.
* A set of automatic tests will be implemented for this feature, with 50%
  coverage of system tests.
Documentation Impact
====================
The documentation should describe how to set up vCenter and NSX for a simple
test environment.
A reference architecture of the feature should also be described.
References
==========
http://docs.openstack.org/trunk/config-reference/content/vmware.html
https://www.edge-cloud.net/2013/12/openstack-vsphere-nsx/

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================================
1-1 mapping between nova-compute and vSphere cluster
====================================================
https://blueprints.launchpad.net/fuel/+spec/1-1-nova-compute-vsphere-cluster-mapping
Problem description
===================
Currently a single nova-compute service instance utilizes all vSphere
clusters (clusters formed by ESXi hosts) managed by a single vCenter server
specified by the user. This behaviour prevents the user from specifying the
vSphere cluster on which a VM instance should be launched; currently this
decision happens automatically and is controlled by the nova-scheduler logic
and the vCenter DRS logic.
::
    +----------------------+     +----------------+     +-------------------+
    | OpenStack Controller |     | vCenter server +-----+ vSphere cluster 1 |
    |                      |     |                |     +-------------------+
    |  +--------------+    |     |                |     +-------------------+
    |  | nova-compute +----------+                +-----+ vSphere cluster 2 |
    |  +--------------+    |     |                |     +-------------------+
    |                      |     |                |     +-------------------+
    +----------------------+     |                +-----+ vSphere cluster N |
                                 +----------------+     +-------------------+
A single nova-compute service instance also acts as a single point of
failure, even if we defend it with Pacemaker. If the service fails for some
reason, the whole cloud loses access to compute resources. Also, VMware
itself recommends avoiding a 1-M mapping between a nova-compute service and
vSphere clusters.
Proposed change
===============
Launch multiple instances of the nova-compute service and configure each
instance to use a single vSphere cluster. Nova-compute services will run on
OpenStack controller nodes, as they do now. We are not proposing the
creation of a separate compute node for each nova-compute, because it would
require us to configure an additional pacemaker group to back up the
nova-compute services on those compute nodes. It would also require a
customer to procure additional hardware to run the additional nova-compute
processes, which might be unacceptable.
::
    +------------------------+     +----------------+
    |  OpenStack Controller  |     |                |
    |                        |     |                |     +-------------------+
    |  +------------------+  |     |                |     |                   |
    |  | nova-compute-1   +--------+ vCenter server +-----+ vSphere cluster 1 |
    |  | (login1/pass1)   |  |     |                |     |                   |
    |  +------------------+  |     |                |     +-------------------+
    |                        |     |                |
    |  +------------------+  |     |                |     +-------------------+
    |  | nova-compute-2   +--------+                +-----+ vSphere cluster 2 |
    |  | (login2/pass2)   |  |     |                |     +-------------------+
    |  +------------------+  |     |                |
    |                        |     |                |
    |  +------------------+  |     |                |     +-------------------+
    |  | nova-compute-N   +--------+                +-----+ vSphere cluster N |
    |  | (loginN/passN)   |  |     |                |     +-------------------+
    |  +------------------+  |     |                |
    +------------------------+     +----------------+
Currently we will use the same credentials for all nova-computes, but in the
future we must give the user an opportunity to specify different credentials
for different vSphere clusters in the web UI. Nevertheless, the puppet
manifests must be ready to accept different login/password pairs.
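For illustration, generating one nova-compute configuration per vSphere
cluster could look roughly like this (a sketch only; in practice this is
done by the puppet manifests, and the option names follow the nova VMware
driver of that era)::

    NOVA_COMPUTE_CONF = """[DEFAULT]
    compute_driver = vmwareapi.VMwareVCDriver

    [vmware]
    host_ip = {vcenter_ip}
    host_username = {login}
    host_password = {password}
    cluster_name = {cluster}
    """

    def write_compute_configs(vcenter_ip, clusters):
        # clusters: (cluster_name, login, password) tuples -- one
        # nova-compute process and one config file per vSphere cluster.
        for i, (cluster, login, password) in enumerate(clusters, start=1):
            path = '/etc/nova/nova-compute-%d.conf' % i
            with open(path, 'w') as f:
                f.write(NOVA_COMPUTE_CONF.format(
                    vcenter_ip=vcenter_ip, login=login,
                    password=password, cluster=cluster))
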
Alternatives
------------
We can leave things as they work right now: single nova-compute instance
utilizes multiple vSphere clusters that are specified in
*/etc/nova/nova-compute.conf*.
Data model impact
-----------------
None.
REST API impact
---------------
None.
Upgrade impact
--------------
None.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
None.
Performance Impact
------------------
The controller node will run a number of nova-compute processes equal to
the number of specified vSphere clusters (simple deployment mode is
considered the worst case). The maximum number of ESXi hosts supported by
vCenter is 1000, and each host can form a cluster by itself, so in the worst
case the number of nova-compute instances might rise to 1000
(http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf).
So the controller must be able to run an additional 1000 processes.
There is a limit on the number of concurrent vSphere connections to vCenter
(100, or 180 for the vSphere Web Client), so some nova-compute connections
must be scheduled over time.
Other deployer impact
---------------------
None.
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Drafter:
Igor Zinovik (izinovik)
Primary assignee:
Andrey Danin (gcon-monolake)
Igor Zinovik (izinovik)
Reviewer:
Andrey Danin (gcon-monolake)
Evgeniya Shumakher (eshumakher)
QA:
Tatiana Dubyk (tdubyk)
Work Items
----------
#. Modify the puppet manifests to create multiple nova-compute instances in
   simple deployment mode, with an appropriate configuration file for each
   nova-compute instance.
#. Modify the puppet manifests to create multiple pacemaker nova-compute
   resources in HA deployment mode: one nova-compute resource and
   corresponding configuration file per vSphere cluster.
#. The reference architecture in our documentation must be updated to
   reflect the implementation of this specification.
Dependencies
============
None.
Testing
=======
Manual testing using checklists according to the acceptance criteria below.

Acceptance Criteria:

Stage I:

- Verify that an OpenStack environment running with vCenter (managing
  multiple vSphere clusters) as the hypervisor option runs the nova-compute
  services on the controllers, and that each nova-compute has a single
  vSphere cluster in its configuration file.
Documentation Impact
====================
The proposed change modifies Reference Architecture. All vCenter related
sections must be reviewed and updated.
References
==========

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
100 nodes support (fuel only)
==========================================
https://blueprints.launchpad.net/fuel/+spec/100-nodes-support
Fuel is an enterprise tool for deploying OpenStack, so it should be able to
deploy large clusters. Fuel should also be fast and responsive; it does not
run any processor-consuming tasks, so there is no reason for it to be slow.
Problem description
===================
* For a large number of nodes, Fuel (nailgun, astute) gets slow.
* The probability of provisioning failures also increases.
* The MySQL DB works only in active/standby mode, which has very poor
  performance.
Proposed change
===============
For nailgun
-----------
As a first step, it is necessary to write tests which will show the places
in the code which are not optimal. Some of the slow parts are already known.
Such tests should include (all in fake mode):
* list 100 nodes
* get cluster with 100 nodes
* add 100 nodes to environment
* remove 100 nodes from environment
* run network verification for environment with 100 nodes
* change settings in environment with 100 nodes
* change network configuration in environment with 100 nodes
* run deploy in environment with 100 nodes
* run provision in environment with 100 nodes
* ...
In order to detect specific slow code, it's necessary to run all the
above-mentioned tests, measure the execution time, and compare it to the
specification in order to see which operations are actually slow. Then run
the operations under a profiler and analyse and fix all bottlenecks,
non-optimal code, etc.
To measure and profile the code, the following tools may be used:

* cProfile - python module
* osprofiler - python module
* rally - testing framework
For fuelclient
--------------
There should not be any performance bottlenecks in the fuelclient, as it
only parses JSON data. There should be tests for fuelclient which at least
include:
* list nodes
* add nodes to environment
* list environment with pending changes for 100 nodes
* upload nodes from disk
For astute
-----------
Testing astute is harder because it includes interaction with hardware
and other services like cobbler, tftp, dhcp. There is one known problem
which can be addressed now. The rest of the problems can be identified after
testing on real hardware.
One known problem is connected with the network/storage capabilities of the
Fuel Master node: during provisioning, 100 nodes simultaneously try to fetch
images and packages, and the master node cannot handle that high a load.
Astute should detect such a situation and handle it.

The user should also be able to manually tweak how astute works, for example
to configure it to provision 10 nodes at a time. This will increase the
provisioning time but will make it more resilient. There should be a
configuration option to set the number of nodes to deploy in one run.
Currently, if provisioning fails on one of the nodes, astute stops the
whole process. This is not an optimal solution for larger deployments: some
nodes may fail because of random failures, and provisioning should still
continue in this case. Provisioning will not be restarted for failed nodes;
these nodes will be removed from the cluster, and the user can re-add them
after a successful deployment. There should be a configuration option to set
the percentage of nodes which may fail during provisioning. If, for example,
all controllers fail to provision, provisioning should be stopped. The user
should be notified about each failure.
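A toy sketch of the batching and failure-threshold policy described above
(illustrative only; astute itself is written in Ruby, and ``provision_one``
is a stand-in for the real per-node provisioning call)::

    def provision_all(nodes, provision_one,
                      batch_size=10, max_fail_percent=10):
        # Provision nodes in fixed-size batches, tolerating partial failure.
        failed = []
        for i in range(0, len(nodes), batch_size):
            for node in nodes[i:i + batch_size]:
                if not provision_one(node):
                    failed.append(node)  # notify the user of each failure
            # Stop the whole run once too many nodes have failed (e.g. all
            # controllers failing should abort provisioning).
            if len(failed) * 100 > max_fail_percent * len(nodes):
                raise RuntimeError('too many failed nodes: %s' % failed)
        # Failed nodes are removed from the cluster; the user can re-add
        # them after a successful deployment.
        return failed
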
For UI
------
Our tests show that for 100 nodes the UI speed is acceptable. In the future,
for 1000 nodes, it will require some speed improvements.
For puppet manifests library
----------------------------
Configure the HAProxy MySQL backends as active/active.
There is a patch (https://review.openstack.org/#/c/124549/) addressing this
change, but it requires additional research and load testing.
Alternatives
------------
None
Data model impact
-----------------
Depends on the bottlenecks found, but changes are unlikely.
REST API impact
---------------
No API changes. All optimizations have to be backward compatible.
Upgrade impact
--------------
Only if the database is changed, which is unlikely.
Security impact
---------------
None
Notifications impact
--------------------
If there are failed nodes, the user should be informed about this.
Other end user impact
---------------------
None
Performance Impact
------------------
After the blueprint is implemented, Fuel should be able to deploy 100 nodes.
Active/active load balancing for MySQL connections should improve DB
operations.
Other deployer impact
---------------------
The rules will change: some nodes are now allowed to fail.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
loles@mirantis.com
ksambor@mirantis.com
Work Items
----------
The blueprint will be implemented in several stages:

* In the first stage, all tests will be written.
* In the next stage, all known and discovered bottlenecks will be fixed.
* After this, the tests will be run in a virtual environment which can
  create 100 nodes.
* At the end, the tests will be run in a lab with 100 physical nodes. This
  test should show us all astute bottlenecks.
* To prevent reintroducing bottlenecks in future releases, all tests will be
  integrated with our CI infrastructure.
* Additional integration with OSProfiler: it can help find bottlenecks in
  production systems.
* Additional integration with Rally: it will help to test Fuel in real live
  environments.
* Additional Neutron load testing with Rally in HA for active/active MySQL.
  Even if active/active fails the testing, at least we can play with tuning
  the related params and provide some output to the community.
Dependencies
============
None
Testing
=======
When all bottlenecks are fixed, load tests will be added to the CI
infrastructure, so non-optimal code can be noticed immediately.
Documentation Impact
====================
The deployment rules will change, and this should be documented. New
notifications should be described, and active/active mode for MySQL should
be documented.
References
==========
* https://github.com/stackforge/osprofiler
* https://github.com/stackforge/rally
* https://docs.google.com/a/mirantis.com/document/d/1GJHr4AHw2qA2wYgngoeN2C-6Dhb7wd1Nm1Q9lkhGCag
* https://docs.google.com/a/mirantis.com/document/d/1O2G-fTXlEWh0dAbRCtbrFtPVefc5GvEEOhgBIsU_eP0
* http://lists.openstack.org/pipermail/openstack-operators/2014-September/005162.html

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Fuel Master access control improvements
==========================================
https://blueprints.launchpad.net/fuel/+spec/access-control-master-node-improvments
In the 5.1 release cycle, Fuel Master node access control was introduced.
In the next release, some configuration tuning is required to make it easier
to use and upgrade.
Problem description
===================
With the current implementation we have the following problems:

* Each request is validated by the middleware using the keystone admin
  token. This method is deprecated.
* If the user changes their password, it is not possible to run an upgrade.
* Outdated tokens are not cleaned up, which in the long term may lead to
  running out of space.
* There is no cookie support, so some GET requests from the UI cannot be
  validated.
* After login, the password is stored in the browser cache.
Proposed change
===============
* Create users *nailgun* and *ostf* with admin roles, which will be used to
  authenticate requests in the middleware. Both will be added to a new
  *services* tenant.
* Passwords will be generated by fuelmenu for a fresh install, and by the
  upgrade script in the case of an upgrade.
* During the upgrade there will be a puppet run for keystone that will add
  the new project and users.
* Ask the user for a password before the upgrade.
* Create a cron script which runs in the keystone container and deletes
  outdated tokens using the `keystone-manage token_flush` command. The
  script will run once per day.
* Add support for cookies, which will also allow testing the API from a
  browser.
* Increase the token expiration time to 24h, so it will not be necessary to
  store the password in the browser cache.
* Add two services to keystone, *nailgun* and *ostf*, together with
  endpoints pointing to their URLs, to enable service discovery instead of
  hardcoded URLs.
* Use `keystonemiddleware` instead of the deprecated
  `keystoneclient.middleware`.
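For illustration, wiring the middleware in front of a WSGI service could
look roughly like this (a sketch assuming the keystonemiddleware auth_token
filter of that era; all credential values are placeholders)::

    from keystonemiddleware import auth_token

    def protect(app):
        # Validate tokens with the dedicated service user instead of the
        # deprecated admin_token; the values below are placeholders.
        conf = {
            'identity_uri': 'http://127.0.0.1:35357/',
            'admin_user': 'nailgun',
            'admin_password': 'secret',
            'admin_tenant_name': 'services',
        }
        return auth_token.AuthProtocol(app, conf)
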
Alternatives
------------
None
Data model impact
-----------------
There will be two new users, two new services, and two new endpoints in the
keystone database.
REST API impact
---------------
None
Upgrade impact
--------------
During the upgrade, the user will be asked to provide the admin user
password.
Security impact
---------------
Using a service user instead of the admin_token for verification is safer.
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
When cookie support is added, developers will be able to test the API from a
browser.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
skalinowski@mirantis.com
loles@mirantis.com
Work Items
----------
Must-have items:

* Create the new users during a fresh install and during an upgrade.
* Ask for the admin user password before the upgrade.
* Remove the usage of admin_token in Fuel.

The rest of the items can be done later, and can be done separately.
Dependencies
============
None
Testing
=======
All tests from the previous blueprint should still apply here.
The system tests may require changes to pass the password to the upgrade
script.
Documentation Impact
====================
Documentation describing the internal architecture should be created.
It should contain examples of how to use curl with the API.
References
==========
None
