Retire openstack-specs repo

As discussed in the TC meeting [1], the TC is retiring the
openstack-specs repo.

[1] https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-17-15.00.log.html#l-98

Change-Id: Ieb37227e6b80a64ead680ece315973e2f040da6e
Ghanshyam Mann 2021-06-17 19:04:04 -05:00, committed by Ghanshyam Mann
parent 70a7c6d7dd
commit 75bed26c57
33 changed files with 9 additions and 3609 deletions

.gitignore
@@ -1,51 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,3 +0,0 @@
- project:
    templates:
      - openstack-specs-jobs

@@ -1,20 +0,0 @@
==================================
Contributing to: openstack-specs
==================================
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/openstack

@@ -1,3 +0,0 @@
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@@ -1,23 +1,10 @@
-=====================================================
-  OpenStack Cross-Project Specifications and Policies
-=====================================================
+This project is no longer maintained.
+
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
-This repository contains specifications and policies that apply to
-OpenStack as a whole.
-
-.. note:: The OpenStack Cross-Project specification process has been
-   deprecated in favor of `OpenStack-wide Goals
-   <https://governance.openstack.org/tc/goals/index.html>`__ and
-   `OpenStack SIGs <https://wiki.openstack.org/wiki/OpenStack_SIGs>`__.
-   The documents found here are still useful as historical artifacts,
-   but at this time the specifications are not actionable.
-
-This work is licensed under a `Creative Commons Attribution 3.0
-Unported License
-<http://creativecommons.org/licenses/by/3.0/legalcode>`__.
-
-The source files are available via the openstack/openstack-specs git
-repository at http://git.openstack.org/cgit/openstack/openstack-specs.
-
-Published versions of approved specifications and policies can be
-found at http://specs.openstack.org/openstack/openstack-specs.
+
+For any further questions, please email
+openstack-discuss@lists.openstack.org or join #openstack-dev on
+OFTC.

@@ -1,97 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'openstackdocstheme',
    'yasfb',
]
# Feed configuration for yasfb
feed_base_url = 'https://specs.openstack.org/openstack/openstack-specs'
feed_author = 'OpenStack Development Team'
exclude_patterns = [
    'template.rst',
]
# Optionally allow the use of sphinxcontrib.spelling to verify the
# spelling of the documents.
try:
    import sphinxcontrib.spelling
    extensions.append('sphinxcontrib.spelling')
except ImportError:
    pass
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'openstack-specs'
copyright = u'%s, OpenStack Foundation' % datetime.date.today().year
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- openstackdocstheme configuration -----------------------------------------
repository_name = 'openstack/openstack-specs'
html_theme = 'openstackdocs'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1 +0,0 @@
.. include:: ../../CONTRIBUTING.rst

@@ -1,45 +0,0 @@
=====================================================
OpenStack Cross-Project Specifications and Policies
=====================================================
This repository contains specifications and policies that apply to
OpenStack as a whole.
This work is licensed under a `Creative Commons Attribution 3.0
Unported License
<http://creativecommons.org/licenses/by/3.0/legalcode>`__.
.. note:: The OpenStack Cross-Project specification process has been
   deprecated in favor of `OpenStack-wide Goals
   <https://governance.openstack.org/tc/goals/index.html>`__ and
   `OpenStack SIGs <https://wiki.openstack.org/wiki/OpenStack_SIGs>`__.
   The documents found here are still useful as historical artifacts,
   but at this time the specifications are not actionable.
Specifications
==============
.. toctree::
   :glob:
   :maxdepth: 1

   specs/*
Repository Information
======================
.. toctree::
   :maxdepth: 1

   readme
   contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -1 +0,0 @@
.. include:: ../../README.rst

@@ -1 +0,0 @@
../../specs

@@ -1,3 +0,0 @@
pbr!=2.1.0,>=2.0.0 # Apache-2.0
openstackdocstheme>=2.0
yasfb>=0.5.1

@@ -1,13 +0,0 @@
[metadata]
name = openstack-specs
summary = OpenStack Cross-Project Specifications and Policies
description-file =
    README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Developers
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux

@@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)

@@ -1,403 +0,0 @@
==========================================
Chronicles of a distributed lock manager
==========================================
No blueprint, this is intended as a reference/consensus document.
The various OpenStack projects have an ongoing requirement to perform
some set of actions in an atomic manner performed by some distributed set of
applications on some set of distributed resources **without** having those
resources end up in some corrupted state due to those actions being performed on
them without the traditional concept of `locking`_.
A `DLM`_ is one such concept/solution that can help (but not entirely
solve) these types of common resource manipulation patterns in distributed
systems. This specification will be an attempt at defining the problem
space, understanding what each project *currently* has done in regards of
creating its own `DLM`_-like entity and how we can make the situation better
by coming to consensus on a common solution that we can benefit from to
make everyone's lives (developers, operators and users of OpenStack
projects) that much better. Such a consensus being built will also
influence the future functionality and capabilities of OpenStack at large
so we need to be **especially** careful, thoughtful, and explicit here.
.. _DLM: https://en.wikipedia.org/wiki/Distributed_lock_manager
.. _locking: https://en.wikipedia.org/wiki/Lock_%28computer_science%29
Problem description
===================
Building distributed systems is **hard**. It is especially hard when the
distributed system (and the applications ``[X, Y, Z...]`` that compose the
parts of that system) manipulate mutable resources without the ability to do
so in a conflict-free, highly available, and
scalable manner (for example, application ``X`` on machine ``1`` resizes
volume ``A``, while application ``Y`` on machine ``2`` is writing files to
volume ``A``). Typically in local applications (running on a single
machine) these types of conflicts are avoided by using primitives provided
by the operating system (`pthreads`_ for example, or filesystem locks, or
other similar `CAS`_ like operations provided by the `processor instruction`_
set). In distributed systems these types of solutions do **not** work, so
alternatives have to either be invented or provided by some
other service (for example one of the many academia has created, such
as `raft`_ and/or other `paxos`_ variants, or services created
from these papers/concepts such as `zookeeper`_ or `chubby`_ or one of the
many `raft implementations`_ or the redis `redlock`_ algorithm). Sadly in
OpenStack this has meant that there are now multiple implementations/inventions
of such concepts (most using some variation of database locking), using
different techniques to achieve the defined goal (conflict-free, highly
available, and scalable manipulation of resources). To make things worse
some projects still desire to have this concept and have not reached the
point where it is needed (or they have reached this point but have been
unable to achieve consensus around an implementation and/or
direction). Overall this diversity, while nice for inventors and people
that like to explore these concepts does **not** appear to be the best
solution we can provide to operators, developers inside the
community, deployers and other users of the now (and ever expanding) diverse
set of `OpenStack projects`_.
.. _redlock: http://redis.io/topics/distlock
.. _pthreads: http://man7.org/linux/man-pages/man7/pthreads.7.html
.. _CAS: https://en.wikipedia.org/wiki/Compare-and-swap
.. _processor instruction: http://www.felixcloutier.com/x86/CMPXCHG.html
.. _paxos: https://en.wikipedia.org/wiki/Paxos_%28computer_science%29
.. _raft: http://raftconsensus.github.io/
.. _zookeeper: https://en.wikipedia.org/wiki/Apache_ZooKeeper
.. _chubby: http://research.google.com/archive/chubby.html
.. _raft implementations: http://raftconsensus.github.io/#implementations
.. _OpenStack projects: http://git.openstack.org/cgit/openstack/\
   governance/tree/reference/projects.yaml
What has been created
---------------------
To show the current diversity let's dive slightly into what *some* of the
projects have created and/or used to resolve the problems mentioned above.
Cinder
******
**Problem:**
Avoid multiple entities from manipulating the same volume resource(s)
at the same time while still being scalable and highly available.
**Solution:**
Currently limited to file locks and basic volume state transitions. This
limits the scalability and reliability of cinder under failure/load; it has
been worked on for a while to attempt to create a solution that will fix some
of these fundamental issues.
**Notes:**
- For further reading/details these links can/may offer more insight.
- https://review.openstack.org/#/c/149894/
- https://review.openstack.org/#/c/202615/
- https://etherpad.openstack.org/p/mitaka-cinder-volmgr-locks
- https://etherpad.openstack.org/p/mitaka-cinder-cvol-aa
- (and more)
Ironic
******
**Problem:**
Avoid multiple conductors from manipulating the same bare-metal
instances and/or nodes at the same time while still being scalable and
highly available.
Other required/implemented functionality:
* Track what services are running, supporting what drivers, and rebalance
work when service state changes (service discovery and rebalancing).
* Sync state of temporary agents instead of polling or heartbeats.
**Solution:**
Partition resources onto a hash-ring to allow for ownership to be scaled
out among many conductors as needed. To avoid entities in that hash-ring
from manipulating the same resource/node that they both may co-own a database
lock is used to ensure single ownership. Actions taken on nodes are performed
after the lock (shared or exclusive) has been obtained (a `state machine`_
built using `automaton`_ also helps ensure only valid transitions
are performed).
**Notes:**
- Has logic for shared and exclusive locks and provisions for upgrading
a shared lock to an exclusive lock as needed (only one exclusive lock
on a given row/key may exist at the same time).
- Reclaim/take over lock mechanism via periodic heartbeats into the
  database (reclaiming is apparently a manual and clunky process).
**Code/doc references:**
- Some of the current issues listed at `pluggable-locking`_.
- `Etcd`_ proposed @ `179965`_. I believe this further validates the view
  that we need a consensus on a uniform solution around DLM (vs. continually
  having projects implement whatever suits their fancy/flavor of the week).
- https://github.com/openstack/ironic/blob/master/ironic/conductor/task_manager.py#L20
- https://github.com/openstack/ironic/blob/master/ironic/conductor/task_manager.py#L222
.. _state machine: http://docs.openstack.org/developer/ironic/dev/states.html
.. _automaton: http://docs.openstack.org/developer/automaton/
.. _179965: https://review.openstack.org/#/c/179965
.. _Etcd: https://github.com/coreos/etcd
.. _pluggable-locking: https://blueprints.launchpad.net/ironic/+spec/pluggable-locking
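
As a purely illustrative sketch (not Ironic's actual code), the hash-ring
partitioning described above boils down to something like the following,
where each conductor is hashed onto a ring several times and a node is
owned by the first conductor found clockwise from the node's own hash::

    import bisect
    import hashlib

    def _hash(key):
        return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

    class HashRing(object):
        def __init__(self, conductors, replicas=16):
            # Multiple replicas per conductor smooth out the distribution.
            self._ring = sorted(
                (_hash('%s-%d' % (c, i)), c)
                for c in conductors for i in range(replicas))
            self._hashes = [h for h, _ in self._ring]

        def owner_of(self, node_uuid):
            idx = bisect.bisect(self._hashes, _hash(node_uuid)) % len(self._ring)
            return self._ring[idx][1]

    ring = HashRing(['conductor-1', 'conductor-2', 'conductor-3'])
    ring.owner_of('node-1234')  # -> the conductor responsible for this node

The database lock described above is still needed on top of such a ring,
since two conductors may transiently disagree about ring membership.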
Heat
****
**Problem:**
Multiple engines working on the same stack (or nested stacks thereof). The
ongoing convergence rework may change this state of the world (so in the
future the problem space might be slightly different, but the concept
of requiring locks on resources will still exist).
**Solution:**
Lock a stack using a database lock and disallow other engines
from working on that same stack (or a stack inside of it if nested),
using expiry/staleness to allow other engines to claim a potentially
lost lock after a period of time.
**Notes:**
- Liveness of stack lock not easy to determine? For example is an engine
just taking a long time working on a stack, has the engine had a network
partition from the database but is still operational, or has the engine
really died?
- To resolve this, a combination of an ``oslo.messaging`` ping is used to
  determine when a lock may be dead (or the owner of it is dead); if an
  engine is non-responsive to pings/pongs after a period of time (and its
  associated database entry has expired) then stealing is allowed to occur.
- Lacks *simple* introspection capabilities? For example it is necessary
to examine the database or log files to determine who is trying to acquire
the lock, how long they have waited and so on.
- Lock releasing may fail (which is highly undesirable, *IMHO* it should
**never** be possible to fail releasing a lock); implementation does not
automatically release locks on application crash/disconnect/other but relies
on ping/pongs and database updating (each operation in this
complex 'stealing dance' may fail or be problematic, and therefore is not
especially simple).
**Code/doc references:**
- http://docs.openstack.org/developer/heat/_modules/heat/engine/stack_lock.html
- https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L1307
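
To make the mechanics above concrete, here is a purely illustrative sketch
of the expiry-plus-ping stealing pattern; the ``db`` and ``ping`` helpers
are hypothetical stand-ins, not Heat's actual interfaces::

    import time

    LOCK_EXPIRY = 60  # hypothetical staleness threshold, in seconds

    def try_acquire(db, stack_id, engine_id, ping):
        row = db.get_lock(stack_id)
        if row is None:
            return db.create_lock(stack_id, engine_id)  # normal acquisition
        if time.time() - row.updated_at > LOCK_EXPIRY and not ping(row.engine_id):
            # The holder's entry has expired and it does not answer pings,
            # so assume it is dead and steal the lock. Every step in this
            # dance can itself fail, which is the fragility the notes
            # above complain about.
            return db.steal_lock(stack_id, row.engine_id, engine_id)
        return False  # the lock is held by an apparently live engine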
Ceilometer and Sahara
*********************
**Problem:**
Distributing tasks across central agents.
**Solution:**
Token ring based on `tooz`_.
**Notes:**
Your project here
*****************
Solution analysis
=================
The proposed change would be to choose one of the following:
- Select a distributed lock manager (one that is opensource) and integrate
  it *deeply* into openstack, work with the community that owns it to address
  any issues (or fix any found bugs) and use it for lock management
  functionality and service discovery...

- Select an API (likely `tooz`_) that will be backed by capable
  distributed lock manager(s) and integrate it *deeply* into openstack and
  use it for lock management functionality and service discovery...

  * `zookeeper`_ (`community respected
    analysis <https://aphyr.com/posts/291-call-me-maybe-zookeeper>`__)
  * `consul`_ (`community respected
    analysis <https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul>`__)
  * `etcd`_ (`community respected
    analysis <https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul>`__)
Zookeeper
---------
Summary:
Age: around 8 years
* Changelog was created in svn repository on aug 27, 2007.
License: Apache License 2.0
Approximate community size:
Features (overview):
- `Zab`_ based (paxos variant)
- Reliable filesystem like-storage (see `zk data model`_)
- Mature (and widely used) python client (via `kazoo`_)
- Mature shell/REPL interface (via `zkshell`_)
- Ephemeral nodes (filesystem entries that are tied to presence
of their creator)
- Self-cleaning trees (implemented in 3.5.0 via
https://issues.apache.org/jira/browse/ZOOKEEPER-2163)
- Dynamic reconfiguration (making upgrades/membership changes that
much easier to get right)
- https://zookeeper.apache.org/doc/trunk/zookeeperReconfig.html
Operability:
- Rolling restarts < 3.5.0 (to allow for upgrades to happen)
- Starting >= 3.5.0, 'rolling restarts' are no longer needed (see
mention of dynamic reconfiguration above)
- Java stack experience required
Language written in: java
.. _kazoo: http://kazoo.readthedocs.org/
.. _zkshell: https://pypi.python.org/pypi/zk_shell/
.. _zk data model: http://zookeeper.apache.org/doc/\
   trunk/zookeeperProgrammers.html#ch_zkDataModel
.. _Zab: https://web.stanford.edu/class/cs347/reading/zab.pdf
Packaged: yes (at least on ubuntu and fedora)
* http://packages.ubuntu.com/trusty/java/zookeeperd
* https://apps.fedoraproject.org/packages/zookeeper
Consul
------
Summary:
Age: around 1.5 years
* Repository changelog denotes added in april 2014.
License: Mozilla Public License, version 2.0
Approximate community size:
Features (overview):
- Raft based
- DNS interface
- HTTP interface
- Reliable K/V storage
- Suited for multi-datacenter usage
- Python client (via `python-consul`_)
.. _python-consul: https://pypi.python.org/pypi/python-consul
.. _consul: https://www.consul.io/
Operability:
* Go stack experience required
Language written in: go
Packaged: somewhat (at least on ubuntu and fedora)
* Ppa at https://launchpad.net/~bcandrea/+archive/ubuntu/consul
* https://admin.fedoraproject.org/pkgdb/package/consul/ (?)
etcd
-----
Summary:
Age: Around 1.09 years old
License: Apache License 2.0
Approximate community size:
Features (overview):
Language written in: go
Operability:
* Go stack experience required
Packaged: ?
Proposed change
===============
Place all functionality behind `tooz`_ (as much as possible) and let the
operator choose which implementation to use. Do note that functionality that
is not possible in all backends (for example consul provides a `DNS`_ interface
that complements its HTTP REST interface) will not be able to be exposed
through a `tooz`_ API, so this may limit the developer using `tooz`_ when
implementing some features.
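
To make "behind `tooz`_" concrete, here is a minimal sketch of the lock
portion of the API (``zake://`` is tooz's in-memory test backend, and the
critical section is hypothetical)::

    from tooz import coordination

    # The operator-chosen backend is just a URL; pointing this at e.g. a
    # zookeeper driver instead requires no application code changes.
    coordinator = coordination.get_coordinator('zake://', b'member-1')
    coordinator.start()

    lock = coordinator.get_lock(b'volume-A')
    with lock:  # blocks until the distributed lock is acquired
        resize_volume()  # hypothetical critical section

    coordinator.stop()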
Compliance: further details about what each `tooz`_ driver must
conform to (as in regard to how it operates, what functionality it must support
and under what consistency, availability, and partition tolerance scheme
it must operate under) will be detailed at: `240645`_
It is expected as the result of `240645`_ that
certain existing `tooz`_ drivers will be deprecated and eventually removed
after a given number of cycles (due to their inherent inability to meet the
policy constraints created by that specification) so that the quality
and consistency of their operating policy can be guaranteed (this guarantee
reduces the divergence in implementations that makes plugins that much
harder to diagnose, debug, and validate).
.. note::

   Do note that the `tooz`_ alternative which needs to be understood
   is that `tooz`_ is a tiny layer around solutions mentioned above, which
   is an admirable goal (I guess I can say this since I helped make that
   library) but it does favor pluggability over picking one solution and
   making it better. This is obviously a trade-off that must IMHO **not** be
   ignored (since ``X`` solutions mean that it becomes that much harder to
   diagnose and fix upstream issues because ``X - Y`` solutions may not have
   the issue in the first place); TLDR: pluggability comes at a cost.
.. _DNS: http://www.consul.io/docs/agent/dns.html
.. _tooz: http://docs.openstack.org/developer/tooz/
.. _240645: https://review.openstack.org/#/c/240645/
Implementation
==============
Assignee(s)
-----------
- All the reviewers, code creators, PTL(s) of OpenStack?
Work Items
----------
Dependencies
============
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Mitaka
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported License.
   http://creativecommons.org/licenses/by/3.0/legalcode

@@ -1,123 +0,0 @@
==========================================
CLI Sorting Argument Guidelines
==========================================
To varying degrees, the REST APIs for various projects support sort keys
and sort directions; these sorting options are exposed as python client
arguments. This specification defines the syntax for these arguments so
that there is consistency across clients.
Problem description
===================
Different projects have implemented the CLI sorting options in different
ways. For example:
- Nova: --sort key1[:dir1],key2[:dir2]
- Cinder: --sort_key <key> --sort_dir <dir>
- Ironic: --sort-key <key> --sort-dir <dir>
- Neutron: --sort-key <key1> --sort-dir <dir1>
  --sort-key <key2> --sort-dir <dir2>
- Glance (under review): --sort-key <key1> --sort-key <key2> --sort-dir <dir>
Proposed change
===============
Based on mailing list feedback (see the References section), the consensus is
to follow the syntax that nova currently implements: --sort <key>[:<direction>],
where the --sort parameter is comma-separated and used to specify one or more
sort keys and directions. A sort direction is optionally appended to each key
and is either 'asc' for ascending or 'desc' for descending.
For example:
* nova list --sort display_name
* nova list --sort display_name,vm_state
* nova list --sort display_name:asc,vm_state:desc
* nova list --sort display_name,vm_state:asc
Unfortunately, the REST APIs for each project support sorting to different
degrees:
- Nova and Neutron: Multiple sort keys and multiple sort directions
- Cinder and Ironic: Single sort key and single sort direction (Note: an
  approved Kilo spec in Cinder adds support for multiple keys and
  directions)
- Glance: Multiple sort keys and single sort direction
In the event that the corresponding REST APIs do not support multiple sort
keys and multiple sort directions, the client may:
- Support a single key and direction
- Support multiple keys and directions and implement any remaining sorting
in the client
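
For illustration, a hypothetical helper that parses the proposed syntax into
(key, direction) pairs might look like this (the 'asc' default is an
assumption for the sketch, not something this spec mandates)::

    def parse_sort_param(sort):
        """Parse '--sort key1[:dir1],key2[:dir2]' into (key, dir) pairs."""
        result = []
        for part in sort.split(','):
            key, _, direction = part.partition(':')
            result.append((key, direction or 'asc'))
        return result

    parse_sort_param('display_name:asc,vm_state:desc')
    # -> [('display_name', 'asc'), ('vm_state', 'desc')]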
Alternatives
------------
Each sort key and associated direction could be supplied independently, for
example:
--sort-key key1 --sort-dir dir1 --sort-key key2 --sort-dir dir2
Implementation
==============
Assignee(s)
-----------
Primary assignee:
* Cinder: Steven Kaufer (kaufer)
* Glance: Mike Fedosin (mfedosin)
Work Items
----------
Cinder:
* Deprecate --sort_key and --sort_dir and add support for --sort
* Note that the Cinder REST API currently supports only a single sort key
and direction so the CLI will have the same restriction, this restriction
can be lifted once the following is implemented:
https://blueprints.launchpad.net/cinder/+spec/cinder-pagination
Ironic/Neutron:
* Deprecate --sort-key and --sort-dir and add support for --sort
Glance:
* Modify the existing patch set to adopt the --sort parameter:
https://review.openstack.org/#/c/120777/
* Note that Glance supports multiple sort keys but only a single sort
direction.
Dependencies
============
- Cinder BP for multiple sort keys and directions:
https://blueprints.launchpad.net/cinder/+spec/cinder-pagination
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Kilo
     - Introduced
References
==========
- Nova review that implemented the --sort argument:
https://review.openstack.org/#/c/117591/
- Glance client review: https://review.openstack.org/#/c/120777/
- http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg42854.html
- http://www.mail-archive.com/openstack-dev%40lists.openstack.org/msg42954.html
.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported License.
   http://creativecommons.org/licenses/by/3.0/legalcode

@@ -1,146 +0,0 @@
================================
clouds.yaml support in clients
================================
`clouds.yaml` is a config file for the facilitation of consuming multiple
OpenStack clouds in a reasonable way. It is processed by the `os-client-config`
library, and is currently supported by `python-openstackclient`, `shade`,
`nodepool` and Ansible 2.0.
It should be supported across the board in our client utilities.
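
For reference, a minimal `clouds.yaml` defining one cloud looks something
like the following (all values here are hypothetical)::

    clouds:
      devstack:
        auth:
          auth_url: https://example.com:5000/v3
          username: demo
          password: secret
          project_name: demo
        region_name: RegionOne

A cloud defined this way can then be selected by name, for example with
`openstack --os-cloud devstack server list`.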
Problem description
===================
One of the goals of our efforts in OpenStack is interoperability between
clouds. Although there are several reasons that this is important, one of
them is to allow consumers to spread their workloads across multiple clouds.
Once a user has more than one cloud, dealing with credentials for the tasks of
selecting a specific cloud to operate on, or of performing actions across all
available clouds, becomes important.
Because the only auth information mechanism the OpenStack project has provided
so far, `openrc`, is targeted towards a single cloud, projects have
attempted to deal with the problem in a myriad of different ways that do not
carry over to each other.
Although `python-openstackclient` supports `clouds.yaml` cloud definitions,
there are still some functions not yet exposed in `python-openstackclient` and
cloud users sometimes have to fall back to the legacy client utilities. That
means that even though `python-openstackclient` allows the user to manage
their clouds simply, the problem of dealing with piles of `openrc` files
remains, making it a net loss complexity-wise.
Proposed change
===============
Each of the python client utilities that exist should use `os-client-config` to
process their input parameters. New projects that do not yet have a CLI
utility should use `python-openstackclient` instead, and should not write new
CLI utilities.
An example of migrating an existing utility to `os-client-config` can be seen
in https://review.openstack.org/#/c/236325/ which adds the support to
`python-neutronclient`. Since all of those need to migrate to `keystoneauth1`
anyway, and since `os-client-config` is well integrated with `keystoneauth1`
it makes sense to do it as a single change.
This change will also add `OS_CLOUD` and `--os-cloud` as options supported
everywhere for selecting a named cloud from a collection of configured
cloud configurations.
Horizon should add a 'Download clouds.yaml' link where the 'Download openrc'
link is.
Reach out to the ecosystem of client utilities and libraries to suggest adding
support for consuming `clouds.yaml` files.
`gophercloud` https://github.com/rackspace/gophercloud/issues/487 has been
contacted already, but `libcloud`, `fog`, `jclouds` - or any other
framework that is in the Getting Started guide - should at least be contacted
about adding support.
It should be pointed out that `os-client-config` does not require the use of
or existence of `clouds.yaml` and the traditional `openrc` environment
variables will continue to work as always.
http://inaugust.com/posts/multi-cloud-with-python-openstackclient.html is
a walkthrough on what life looks like in a world of `os-client-config` and
`python-openstackclient`.
Alternatives
------------
Using `envdir` has been suggested and is a good fit for direct user
consumption. However, some calling environments like `tox` and `ansible` make
communicating information from the calling context to the execution context
via environment variables clunkier than one would like. `python-neutronclient`
for instance has a `functional-creds.conf` that it writes out to avoid the
problems with environment variables and `tox`.
Just focus on `python-openstackclient`. While this is a wonderful future, it's
still the future. Adding `clouds.yaml` support to the existing clients gets us
a stronger bridge to the future state of everyone using
`python-openstackclient` for everything.
Use `oslo.config` as the basis of credentials configuration instead of yaml.
This was originally considered when `os-client-config` was being written, but
due to the advent of keystone auth plugins, it becomes important for some
use cases to have nested data structures, which is not particularly clean
to express in ini format.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
mordred
mordred is happy to do all of the work - but is also not territorial and if
elements of the work magically get done by happy friends, the world would be
a lovely place.
Work Items
----------
Not exhaustive, but should be close. Many projects provide openstackclient
extensions rather than their own client, so are covered already.
* Add support to python-barbicanclient
* Add support to python-ceilometerclient
* Add support to python-cinderclient
* Add support to python-designateclient
* Add support to python-glanceclient
* Add support to python-heatclient
* Add support to python-ironicclient
* Add support to python-keystoneclient
* Add support to python-magnumclient
* Add support to python-manilaclient
* Add support to python-neutronclient
* Add support to python-novaclient
* Add support to python-saharaclient
* Add support to python-swiftclient
* Add download link to horizon
Dependencies
============
None
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Mitaka
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported License.
   http://creativecommons.org/licenses/by/3.0/legalcode

@@ -1,130 +0,0 @@
============
CORS Support
============
The W3C has released a Technical Recommendation (TR) by which an API may
permit a user agent - usually a web browser - to selectively break the
`same-origin policy`_. This permits javascript running in the user agent to
access the API from domains, protocols, and ports that do not match the API
itself. This TR is called Cross Origin Resource Sharing (CORS_).
This specification details how CORS_ is implemented and supported across
OpenStack's services.
Problem description
===================
User Agents (browsers), in order to limit Cross-Site Scripting exploits, do
not permit access to an API that does not match the hostname, protocol, and
port from which the javascript itself is hosted. For example, if a user
agent's javascript is hosted at `https://example.com:443/`, and tries to access
openstack ironic at `https://example.com:6354/`, it would not be permitted to
do so because the ports do not match. This is called the `same-origin policy`_.
The `default ports`_ for most openstack services (excluding horizon) are not
the ports commonly used by user agents to access websites (80, 443). As such,
even if the services were hosted on the same domain and protocol, it would be
impossible for any user agent's application to access these services
directly, as any request would violate the above policy.
The current method of addressing this is to provide an API proxy, currently
part of the horizon project, which is accessible from the same location as
any javascript that might wish to access it. This additional code requires
additional maintenance for both upstream and downstream teams, and is largely
unnecessary.
This specification does *not* presume to require an additional configuration
step for operators for a 'default' install of OpenStack and its user
interface. Horizon currently maintains, and shall continue to maintain, its
own installation requirements.
This specification does *not* presume to set front-end application design
standards - rather, it exists to expand the options that front-end teams have,
and allow them to make whatever choice makes the most sense for them.
This specification *does* provide a method by which teams, whether upstream or
downstream, can choose to implement additional user interfaces of their own. An
example use case may be Ironic, which may wish to ship an interface that can
live independently of horizon, for such users who do not want to install
additional components.
Proposed change
===============
All OpenStack APIs should implement a common middleware that implements CORS
in a reusable, optional fashion. This middleware must be well documented,
with security concerns highlighted, in order to properly educate the operator
community on their choices. The middleware must default to inactive, unless
it is activated either explicitly or implicitly via a provided configuration.
`CORS Middleware`_ is available in oslo_middleware version 0.3.0. This
particular implementation defaults to inactive, unless appropriate configuration
options are detected in oslo_config, and its documentation already covers key
security concerns. Additional work would be required to add this middleware
to the appropriate services, and to add the necessary documentation to the
docs repository.
Note that improperly implemented CORS_ support is a security concern, and
this should be highlighted in the documentation.
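
As a sketch of what activation might look like for a service that wires the
middleware in via paste (the keystone project name, origin, and header values
here are illustrative, not recommendations)::

    # api-paste.ini
    [filter:cors]
    paste.filter_factory = oslo_middleware.cors:filter_factory
    oslo_config_project = keystone

    # service configuration file
    [cors]
    allowed_origin = https://ui.example.com
    allow_credentials = true
    expose_headers = X-Subject-Token
    allow_methods = GET,PUT,POST,DELETE,PATCH

With no ``[cors]`` options set, the middleware stays inactive, matching the
default-to-inactive behavior described above.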
Alternatives
------------
One alternative is to provide a proxy, much like horizon's implementation,
or a well-configured Apache mod_proxy. It would require additional documentation
that teaches UI development teams how to implement and build on it. These
options are already available and well documented, however they do not enable
experimentation or deployment of alternative UIs in the same way that CORS can,
since they require the UI to be hosted in the same endpoint. This requires
either close deployment cooperation, or deployment of a proxy-per-UI. CORS can
permit UIs to be deployed using static files, allowing much lower cost-of-entry
overheads.
Implementation
==============
Assignee
--------
Primary assignee:
Michael Krotscheck (krotscheck)
Work Items
----------
- Update Global Requirements to use oslo_middleware version 1.2.0 (complete)
- Propose `CORS Middleware`_ to OpenStack APIs that do not already support it.
  This includes, but is not restricted to: Nova, Glance, Neutron, Cinder,
  Keystone, Ceilometer, Heat, Trove, Sahara, and Ironic.
- Propose a refactor to use `CORS Middleware`_ in OpenStack APIs that already
  support it via other means. This includes, but is not restricted to: Swift.
- Write documentation for CORS configuration.

  - The authoritative content will live in the Cloud Admin Guide.
  - The Security Guide will contain a comment and link to the Cloud Admin Guide.
Dependencies
============
- Depends on oslo_middleware version 1.2.0 (already in Global Requirements)
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Liberty
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported License.
   http://creativecommons.org/licenses/by/3.0/legalcode
.. _CORS: http://www.w3.org/TR/cors/
.. _`default ports`: http://docs.openstack.org/juno/config-reference/content/firewalls-default-ports.html
.. _`Same-origin Policy`: http://en.wikipedia.org/wiki/Same-origin_policy
.. _`CORS Middleware`: http://docs.openstack.org/developer/oslo.middleware/cors.html

@@ -1,192 +0,0 @@
=========================
Deprecate Individual CLIs
=========================
https://blueprints.launchpad.net/+spec/deprecate-clis
Historically, each service has offered a CLI application that is included with
the python-\*client that provides administrative and user control over the
service. With the popularity of OpenStack Client and the majority of functions
having been implemented there we should look to officially deprecate the
individual CLI applications.
This does not imply that the entire python-\*client will be deprecated, just
the CLI portion of the library. The python bindings are expected to continue to
work.
Problem description
===================
There is currently no standard workflow for interacting with OpenStack services
on the command line. In the beginning it made sense that there was a nova CLI
for working with nova. As keystone, glance and cinder split out they cloned
novaclient and adapted it to their needs. By the time neutron and the deluge of
big tent services came along there was a clear pattern that each service would
provide a CLI along with their library.
Given the common base and some strong persuasion there is at least a common
subset of parameters and environment variables that are accepted by all CLI
applications. However, as new features come up, such as YAML based
configuration, keystone's v3 authentication, or SSL handling issues, these
must be addressed in each project individually, and the way these parameters
are handled has drifted or is supported to varying levels.
This also creates a horrible user experience for those trying to interact with
the CLI, as you have to continually switch between different formatting,
command structures, and capabilities, and it requires deep knowledge of which
service is responsible for different tasks - a pattern we have been trying to
break in favour of a unified OpenStack interface.
To deal with this the OpenStack client project has now been around for nearly 2
years. It provides a pluggable way to register CLI tasks and a common place to
fix security and usability issues.
Proposed change
===============
Whilst there has been general support for the OpenStack Client project and
support from individual services (it is the primary/supported CLI for keystone
and several newer services) there is no clear direction on whether our users
should use it or continue using the project specific CLIs. Similarly there is
no clear direction on whether developers should contribute new features in
services to the service specific CLI or to OpenStack client.
This blueprint proposes that as a community we ratify that OpenStack Client is
to be the default supported CLI application going forward. This will give
services the direction to deprecate the project CLIs and start pushing their
new features to OpenStack Client. It will give our documentation teams the
direction to start using OpenStack Client as the command for setting up
functionality.
Given that various projects currently have different needs from their CLI I do
not expect that we will be immediately able to deprecate all CLIs. There may be
certain tasks for which there will always need to be a project specific CLI.
The intent of this blueprint initially is not to provide a timeline or force
projects away from their own CLIs. Instead, it is to provide direction to start
deprecating the CLIs for which OpenStack Client already has functional
parity, and to properly start the process.
Alternatives
------------
We could look at an oslo project that handles the common components of CLI
generation such that we could standardize parameters and handle client creation
in a cross service way. There may be an advantage to doing this anyway as there
will likely always be tools that want to provide a CLI interface to an
OpenStack API that do not belong in OpenStack Client and these should remain
consistent.
Doing nothing is always an option. OpenStack client is steadily gaining
adoption naturally because it can quickly provide new features across a range
of services and so CLI deprecation may happen naturally over time. However
until then we must duplicate the effort of supporting features in multiple
places.
Implementation
==============
As with all OpenStack applications there will have to be a 2 cycle deprecation
period for all these tools.
There are multiple components to this spec and much of the work required will
have to be performed individually in each of the services and documentation
projects. The intention of this spec is to indicate to projects that this is
the direction of the community so we can figure out the project specific
requirements in those groups.
Assignee(s)
-----------
Primary assignee:
jamielennox
dtroyer
Work Items
----------
- Add a deprecation warning to clients that have OpenStack Client equivalent
functionality.
- Update documentation to use OpenStack Client commands instead of project
specific CLIs (see Documentation Impact).
- Remove CLI components from CLIs after deprecation period complete.
Service Impact
--------------
For most CLI applications we must first start emitting deprecation warnings for
the CLI tools that ship with the deployment libraries.
For core services most functionality is already present and maintained in the
OpenStack Client repository so they would need to ensure feature parity however
they would typically not require any additional code.
As part of core functionality OSC currently supports:
- Nova
- Glance
- Cinder
- Swift
- Neutron
- Keystone
A number of additional projects have already implemented their CLI as an
OpenStack Client plugin. These projects will not be affected. Projects that
have not created plugins would need to implement a plugin that handles the
features they wish to expose via CLI.
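
For projects writing a new plugin, the rough shape is a cliff-based command
class plus an entry point registration; everything named ``widget`` below is
hypothetical::

    from cliff import lister

    class ListWidget(lister.Lister):
        """List widgets (hypothetical resource)."""

        def take_action(self, parsed_args):
            # OpenStack Client hands plugins an authenticated client via
            # the client manager; 'widget' is a hypothetical attribute.
            client = self.app.client_manager.widget
            return (('ID', 'Name'),
                    ((w.id, w.name) for w in client.list()))

The class would then be registered in setup.cfg under an entry point group
such as ``openstack.widget.v1`` so OpenStack Client can discover it.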
Services that currently include an OpenStack Client plugin in their repository
include (but not limited to):
- Zaqar
- Sahara
- Designate
Documentation impact
--------------------
This will be a fairly fundamental change in the way we have communicated with
users to consume openstack and so will require significant documentation
changes.
This will include (but not limited to):
- Install Guides
- Admin Guide
- Ops Guide
- CLI Reference can be deprecated or redirected to OpenStack Client
documentation.
The OpenStack Client is already in use and deployed as a part of most
installations (it is required for keystone). Therefore changes to documentation
would not be dependent on any work happening in the services. The spec attempts
to ratify that this is the correct approach.
Dependencies
============
There have been many required steps to this goal such as os-client-config,
keystoneauth, cliff, stevedore and the work that has already gone into
OpenStack client. We are now at the point where we can move forward with the
change.
The OpenStack SDK is not listed as a dependency here because it is not
currently a dependency of OpenStack Client. It is intended that when OpenStack
SDK is released it will be consumed by OpenStack Client however that can be
considered an implementation detail.
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Mitaka
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported License.
   http://creativecommons.org/licenses/by/3.0/legalcode

@@ -1,242 +0,0 @@
========================================================
Enabling Python 3 for Integration and Functional Tests
========================================================
The 2.x series of the C Python interpreter on which OpenStack releases
through Kilo are built is reaching the end of its extended support
period, defined by the upstream developers. This spec describes
motivation for porting fully to Python 3 and some of the work we will
need to enable testing applications as they move to Python 3.
Problem description
===================
There are a lot of small motivations for moving to Python 3, including
better unicode support and new features in the language and standard
library. The primary motivation, however, is that Python 2 is reaching
its end-of-life for support from its developers.
Just as we expect our users to update to new versions of OpenStack in
order to continue to receive support, the python-dev team expects
users of the language to update to reasonably modern and supported
versions of the interpreter in order to receive bug and security
fixes. When Python 3 was introduced, the support period for Python 2
was extended beyond the normal length of time to allow projects plenty
of time to migrate, and to allow the python-dev team to receive
feedback to make changes to the language so that migration is
easier. That period is coming to an end, and we need to consider
migration seriously.
"Python 3.0 was released in 2008. The final 2.x version 2.7 release
came out in mid-2010, with a statement of extended support for this
end-of-life release. The 2.x branch will see no new major releases
after that. 3.x is under active development and has already seen
over five years of stable releases, including version 3.3 in 2012
and 3.4 in 2014. This means that all recent standard library
improvements, for example, are only available by default in Python
3.x." -- Python2orPython3_
That said, we cannot expect all of OpenStack to be ported at one
time. It's likely that we could not port everything in a single
release cycle, given the other work going on. So we need a way to
stage the porting work so that projects can port when they are ready,
without having to wait for any other projects to finish their ports.
Proposed change
===============
Our services communicate through REST APIs and the message bus. This
means they are decoupled enough that we can port them one at a time,
if our tools support running some services on Python 2 and some on
Python 3. Our unit test tool, tox, supports multiple Python versions
already, and in fact most of our library projects are testing under
Python 2.6, 2.7, and 3.4 today. Our integration tests, however, do not
yet support multiple Python versions, so that's the next step to take.
General Strategy
----------------
#. Update devstack to install apps with the "right" version of the
   interpreter.

   * Use the version declared to be supported by the project through
     its trove classifiers.

   * Allowing apps to be installed with the right version of the
     interpreter independently of other apps means we can port one
     app at a time.

#. Port each application to 3.4, but support both 2.7 and 3.4.

   * Set up an appropriate devstack-gate job using Python 3 as
     non-voting for projects when they start to port.

   * Make incremental changes to the applications until the non-voting
     job passes reliably, then update it to make it a voting job.

   * Technically there is no need to run integration tests for an
     application under both versions, since they only need to be
     deployed under one version at a time. However, different
     packagers and deployers may want to choose to wait to move to
     Python 3 and so we can continue to run the tests under both
     versions.
.. note::

   Even after all applications are on 3.x, we need to maintain some
   python 2.7 support for client libraries and the Oslo libraries they
   use. We should consider the deprecation policy of Python 2 for the
   client libraries independently of porting the applications to 3.
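
The "right" version referred to in the first step above comes from the
project's trove classifiers; for example, a project that has been ported
might declare (hypothetical setup.cfg fragment)::

    [metadata]
    classifier =
        Programming Language :: Python :: 2.7
        Programming Language :: Python :: 3.4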
Which version of Python to use?
-------------------------------
We have discussed this before, and it continues to be a moving
target. Version 3.4 seems to be our best goal for now.
- 3.0 - 3.2 are no longer actively supported
- 3.3 is not available on all distros
- **3.4 is (or soon will be) available on all distros**
- 3.5 is in beta and so is not ready for us to use, yet
Functional Tests for Libraries
------------------------------
Besides the functional and integration tests for applications, we also
have functional tests for libraries. I propose that we configure the
test jobs to run those only under Python 3, to avoid duplication and
expose porting issues that would have an impact on applications as
early as possible.
Alternatives
------------
Stay with C Python 2
~~~~~~~~~~~~~~~~~~~~
Commercial support is likely to be available from distros for longer
than it is available upstream, but even that will stop at some point.
Use PyPy or Another Implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some applications may benefit from PyPy's JIT compiler. It currently
supports 2.7.8 and 3.2.5, which means our Python 2 code would probably
run but code designed for Python 3.4 will not. I'm not aware of any
large deployments using PyPy to run services, so I'm not sure this is
really a problem. Given the expected long time frame for porting to
Python 3, it is likely that PyPy will be able to catch up to the
language level needed to run OpenStack by the time we are fully moved
to Python 3.
Wait for Python 3.5
~~~~~~~~~~~~~~~~~~~
Moving from 3.4 to 3.5 should require much less work than moving from
2.7 to 3.4. We can therefore start now, and monitor adoption of 3.5 by
distributions to decide whether to ultimately use 3.4 or a later
version.
Use Separate Virtualenvs in devstack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We have discussed installing applications into virtualenvs a couple of
times. Doing that is orthogonal to these proposed changes, since we
would still need to use the correct version of Python within the
virtualenv.
Functional tests for libraries on 2 and 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We could run parallel test jobs configured to run the functional tests
for libraries under both Python 2 and 3. This would largely duplicate
effort, though it might uncover some inconsistent handling of bytes
vs. strings. We shouldn't start out trying to do this, but if we do
uncover problems we can add more test jobs.
Implementation
==============
Assignee(s)
-----------
Primary assignee: Doug Hellmann
Work Items
----------
1. Update devstack to install pip for both Python 2 and Python 3.

2. Update devstack to look at the supported Python versions for a
   project, and choose the correct copy of pip to install it and its
   dependencies.

   This may be as simple as::

     python setup.py --classifiers | grep 'Language' | cut -f5 -d: | grep '\.'

3. When installing libraries from source using the ``LIBS_FROM_GIT``
   feature of devstack, ensure that the libraries are installed for
   both Python 2 and Python 3.

4. Begin porting applications to Python 3.

   * Unit tests can be run under Python 3 for applications just as
     they are for libraries, by enabling the appropriate job. Having
     the unit tests working with Python 3 is a good first step, before
     enabling the integration tests.

   * Integration tests can be run by submitting a patch updating the
     trove classifier.

   * Some projects will have dependencies blocking them from moving to
     Python 3 at first, and those should be tracked separately from
     this proposal.
Some functions in Oslo libraries have been identified as having
incompatibilities with Python 3. As these cases are reported, we will
need to decide, on a case-by-case basis whether it is feasible to
create versions of those functions that work for both Python 2 and 3,
or if we will need to create some new APIs for use under Python 3 (see
``oslo_utils.encodeutils.safe_decode``,
``oslo_utils.strutils.mask_password``, and
``oslo_concurrency.processutils.execute`` as examples).
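
For instance, helpers like these already paper over the bytes-versus-text
seam that differs between the two interpreters (illustrative values)::

    from oslo_utils import encodeutils
    from oslo_utils import strutils

    # Returns text whether given bytes or text input.
    encodeutils.safe_decode(b'caf\xc3\xa9')        # -> u'café'

    # Scrubs credentials from log-bound strings.
    strutils.mask_password("password = 'secret'")  # -> "password = '***'"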
References
==========
- A proof-of-concept patch to devstack: https://review.openstack.org/181165
- Our notes about the state of Python 3 support:
https://wiki.openstack.org/wiki/Python3
- Advice from the python-dev community about choosing a Python
version: Python2orPython3_
- Summit discussions
- `Havana <https://etherpad.openstack.org/p/havana-python3>`__
- `Icehouse <https://etherpad.openstack.org/p/IcehousePypyPy3>`__
- `Juno <https://etherpad.openstack.org/p/juno-cross-project-future-of-python>`__
- Project-specific specs related to Python 3
- `Heat <http://specs.openstack.org/openstack/heat-specs/specs/liberty/heat-python34-support.html>`__
- `Keystone <https://review.openstack.org/#/c/177380/>`__
- `Neutron <https://review.openstack.org/#/c/172962/>`__
- `Nova <https://review.openstack.org/#/c/176868>`__
.. _Python2orPython3: https://wiki.python.org/moin/Python2orPython3
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Liberty
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported License.
   http://creativecommons.org/licenses/by/3.0/legalcode

@@ -1,196 +0,0 @@
=========================
Eventlet Best Practices
=========================
No blueprint, this is intended as a reference document.
Eventlet is used in many of the OpenStack projects as the default concurrency
model, and there are some things we've learned about it over the years that
currently exist only as tribal knowledge. This is an attempt to codify those
in a central location for everyone's benefit.
It is worth noting that while there has been a push from some members of the
community to move away from eventlet entirely, there is currently no approved
plan to do so. Even if there were, it will likely take a long time to
implement, so eventlet will be something we have to care about for at least
the short and medium term.
Problem description
===================
In some ways eventlet behaves much differently from other concurrency models
and can even change the behavior of the Python standard library. This means
that scenarios exist where a bad interaction between eventlet and some other
code, often code that is not eventlet-aware, can cause problems. We need some
best practices that will minimize the potential for these issues to occur.
Proposed change
===============
Guidelines for using eventlet:
Monkey Patching
---------------
* When using eventlet.monkey_patch, do it first or not at all. In practice,
this means monkey patching in a top-level __init__.py which is guaranteed
to be run before any other project code. As an example, Nova monkey patches
in nova/cmd/__init__.py and nova/tests/unit/__init__.py so that in both the
runtime and test scenarios the monkey patching happens before any Nova code
executes.
The reasoning behind this is that unpatched stdlib modules may not play
nicely with eventlet monkey patched ones. For example, if thread A is
started, the application monkey patches, and then starts thread B, you have
now mixed native threads and green threads, and the results are undefined but
most likely bad.
It is not practical to expect developers to recognize all such
possible race conditions during development or review, and in fact it is
impossible because the race condition could be introduced by code we
consume from another library. Because of this, it is safest to
simply eliminate the races by monkey patching before any other code is run.
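As a minimal sketch (``myproject`` is a hypothetical package), the patching
would sit at the very top of ``myproject/cmd/__init__.py``::

    import eventlet

    # Patch before any other myproject code can be imported or run.
    eventlet.monkey_patch()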
* Monkey patching should also be done in a way that allows services to run
without it, such as when an API service runs under Apache. This is the
reason for Nova not simply monkey patching in nova/__init__.py.
Another example is Keystone, which recommends running under Apache but also
supports eventlet. They have a separate eventlet binary 'keystone-all' which
handles monkey patching before running any other code. Note that
`eventlet is deprecated`_ in Keystone as of the Kilo cycle.
.. _`eventlet is deprecated`: http://lists.openstack.org/pipermail/openstack-dev/2015-February/057359.html
* Monkey patching with thread=False is likely to cause problems. This is done
conditionally in many services due to `problems running under a debugger`_
with the threading module monkey patched. Unfortunately, even simple
concurrency scenarios can result in deadlocks with this sort of setup. For
example, the following code provided by Josh Harlow will cause hangs::
    import eventlet
    eventlet.monkey_patch(os=False, thread=False)

    import threading
    import time

    thingy_lock = threading.Lock()

    def do_it():
        with thingy_lock:
            time.sleep(1)

    threads = []
    for i in range(0, 5):
        threads.append(eventlet.spawn(do_it))
    while threads:
        t = threads.pop()
        t.wait()
It is unclear at this time whether there is a way to enable debuggers and
also have a sane monkey patched environment. The `eventlet backdoor`_ was
mentioned as a possible alternative.
.. _`problems running under a debugger`: http://lists.openstack.org/pipermail/openstack-dev/2012-August/000693.html
.. _`eventlet backdoor`: http://lists.openstack.org/pipermail/openstack-dev/2012-August/000873.html
* Monkey patching can cause problems running flake8 with multiple workers.
If it does, the monkey patching can be made conditional based on an
environment variable that can be set during flake8 test runs. This should
not be a problem as monkey patching is not needed for flake8.
For example::
    import os

    if not os.environ.get('DISABLE_EVENTLET_PATCHING'):
        import eventlet
        eventlet.monkey_patch()
Even though os is being imported before monkey patching, this should be safe
as long as no other code is run before monkey patching occurs.
Greenthread-aware Modules
-------------------------
* There is a greenthread-aware subprocess module in eventlet, but it does
*not* get patched in by eventlet.monkey_patch. Code that has interactions
between green threads and the subprocess module must be sure to use the
green subprocess module explicitly. A simpler alternative is to use
processutils from oslo.concurrency, which selects the appropriate module
depending on the status of eventlet's monkey patching.
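As a sketch of the oslo.concurrency route (``processutils.execute`` is a
real oslo.concurrency function; the command here is arbitrary)::

    from oslo_concurrency import processutils

    # execute() picks eventlet's green subprocess module automatically
    # when monkey patching is active, and the stdlib module otherwise.
    out, err = processutils.execute('uptime')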
Database Drivers
----------------
* Eventlet can cause deadlocks_ in some Python database drivers. The current
plan is to move our recommended and default driver_ to something that is more
eventlet-friendly.
.. _deadlocks: https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
.. _driver: https://wiki.openstack.org/wiki/PyMySQL_evaluation#MySQL_DB_Drivers_Comparison
Tools for Ensuring Monkey Patch Sanity
--------------------------------------
* The oslo.utils project has an eventletutils_ module that can help ensure
proper monkey patching for code that knows what it needs patched. This
could, for example, be used to raise a warning when a service is run under
a debugger without threading patched. At least that way the user will have
a clue what is wrong if deadlocks occur.
.. _eventletutils: http://docs.openstack.org/developer/oslo.utils/api/eventletutils.html
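As a sketch of that warning use case (``warn_eventlet_not_patched`` is a
real oslo.utils helper; calling it at service startup is a suggestion, not a
documented requirement)::

    from oslo_utils import eventletutils

    # Emit a RuntimeWarning at startup if 'thread' was not monkey
    # patched, so deadlocks under a debugger are easier to diagnose.
    eventletutils.warn_eventlet_not_patched(['thread'])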
Alternatives
------------
* Continue to have each project implement eventlet in its own way. This is
undesirable because it will result in projects hitting bugs that may have
been solved in another project.
Implementation
==============
Assignee(s)
-----------
Primary assignee: bnemec
Additional contributors: harlowja, ihrachyshka
Work Items
----------
* Audit the use of eventlet in OpenStack projects and make any changes
necessary to abide by these guidelines.
* Follow up with the eventlet team on whether the green subprocess module
not being included in monkey patching is intentional.
Dependencies
============
None
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Kilo
- Introduced
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

View File

@ -1,190 +0,0 @@
========================================
Managing Stable Branches for Libraries
========================================
Problem description
===================
We want to restrict stable branches to a narrow range of allowed
versions of all dependencies, to increase our chances of avoiding
breaking changes in the stable branches.
This means rather than always rolling all library releases forward to
the most current release, we want to start using stable releases for
libraries more consistently.
We also want to express the dependency range in a way that does not
require that we explicitly modify the allowed version when we produce
patch releases. So we want to place a cap on the upper bound of the
version range, rather than pinning to a specific version.
The Oslo team has been considering this problem for a while, but we
now have more teams producing libraries, either server-support
libraries or clients, so we need to bring the plan to a wider audience
and apply a consistent set of procedures to all OpenStack libraries,
including Oslo, project-specific clients, the SDK, middleware, and any
other projects that are installed as subcomponents of the rest of
OpenStack. (The applications follow a stable branch process already,
so this process describes how to handle stable branches for projects
that are not applications.)
All libraries should already be using the `pbr version of SemVer`_ for
version numbering. This lets us express a dependency within the range
of major.minor, and allow any patch release to meet the
requirement. This spec discusses the process for ensuring that we have
valid stable branches to work from.
.. _pbr version of SemVer: http://docs.openstack.org/developer/pbr/semver.html
Proposed change
===============
When we reach the feature freeze at the end of the cycle, libraries
should freeze development a little before the application freeze date
(Oslo freezes 1 week prior, which should be enough time for all
projects to follow these procedures). At that point, the release
manager for each library should follow the steps below, substituting
the release series name (for example, "kilo") for ``$SERIES``.
#. Update the global requirements list to make the current version of
the library the minimum supported version, and express the
requirement using the "compatible version" operator (``~=``) to
allow for future patch releases (see the `Compatible Releases`_
section of :pep:`440`; an example entry follows this list).
To avoid churn in the applications, we probably want to do this in
one, or at least just a very few, patches, so we will need to
coordinate between the release managers to get that set up.
#. Create a ``stable/$SERIES`` branch in each library from the same
commit that was tagged, to provide a place to back-port changes for
the stable branch.
The release team for each project will be responsible for this
step, so we will want to automate it as much as possible.
The rest of the requirements management for the release is largely
unchanged, with one final step added:
#. Freeze the master branch of the global requirements repository at
the Feature Freeze date.
#. All projects merge the latest global requirements before issuing
their RC1.
#. When projects tag their RC1 they create ``proposed/$SERIES``
branches.
#. When all integrated projects have done their RC1, we create a
requirements ``proposed/$SERIES`` branch and unfreeze master.
#. After all applications have released their RC1 and have created
their ``proposed/$SERIES`` branches, the caps on the global
requirements list can be removed on the master branch.
.. _Compatible Releases: https://www.python.org/dev/peps/pep-0440/#compatible-release
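For example, a hypothetical global-requirements entry (library name and
versions invented for illustration)::

    oslo.config~=1.9.0

would be satisfied by 1.9.1 or 1.9.2 without further requirements churn,
while excluding 1.10.0 and later.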
New stable releases of the library can then proceed as before, tagging
new patch releases in the stable branch instead of master. Stable
releases of libraries are expected to be exceptions, to support
security or serious bug fixes. Trivial bug fixes will not necessarily
be back-ported. As with applications, stable releases of libraries
should not include new features and should have a high level of
backwards-compatibility.
The global requirements updates for our own libraries should be merged
into the applications requirements list before their RC1 is produced
to ensure that we don't have any releases with conflicting
requirements.
The next release on master for each library should use a new minimum
version number to move it out of the stable release series. We will
have cut the stable branch at that point, so bug fixes will have to be
back-ported anyway. Historically, feature patches that didn't make it
before the freeze have merged early in the next cycle. Taking both of
those factors together means it will just be simpler to always cut a
release with a new minor version to avoid any issues with later
back-ports or with accidentally including features in the release
going to the new stable branch.
Management of the stable branches is left up to the projects to
decide, but it should not be assumed that the stable maintenance team
will directly handle all back-ports.
Alternatives
------------
Use a "proposed" Branch before the Stable Branch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We could follow the two-step process the applications use and create a
``proposed/$SERIES`` branch before the final ``stable``
branch. However, the library code bases are smaller and tend to have
fewer changes in flight at any one time than the applications, so this
would be extra overhead in the process. We haven't found many cases in
the past where we need to back-port changes from master to the stable
branches, so it shouldn't be a large amount of work to do that as
needed.
The branches for libraries are also created *after* a release, and so
they are not a "proposed" release.
Create Stable Branches as Needed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned above, we waited to create stable branches for some of
the Oslo libraries until they were needed. This introduced extra time
into the process because a back-port patch couldn't be submitted for
review until the branch existed.
Create Numbered Branches
~~~~~~~~~~~~~~~~~~~~~~~~
We could also create branches like ``stable/1.2`` or using some other
prefix. However, this makes it more difficult to set up the test jobs
using regexes against the branch names, and especially the job that
tests proposed changes to stable branches of libraries will be more
difficult to configure properly. Using the release name as the branch
name lets all of this work "automatically" using our existing tools.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Doug Hellmann
Work Items
----------
1. Review and update the scripts in ``openstack-infra/release-tools``
to find any that need to be updated to support libraries.
.. I'll need to talk to Thierry about which scripts might need to be
updated and if there are any other written instructions that we
need to update, but I wanted to get the first draft of this spec
out for review.
Dependencies
============
None
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Kilo
- Introduced
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

View File

@ -1,408 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Logging Guidelines
==========================================
https://blueprints.launchpad.net/nova/+spec/log-guidelines
Problem description
===================
The current state of logging both within and between OpenStack
components is inconsistent to the point of being somewhat harmful by
obscuring the current state, function, and real cause of errors in an
OpenStack cloud. A consistent, unified logging format will better
enable cloud administrators to monitor and maintain their
environments.
Before we can address this in OpenStack, we first need to come up with
a set of guidelines that we can get broad agreement on. This is
expected to happen in waves, and this is the first iteration to gather
agreement on.
Proposed change
===============
Definition of Log Levels
------------------------
http://stackoverflow.com/a/2031209
This is a nice writeup about when to use each log level. Here is a
brief description:
- Debug: Shows everything and is likely not suitable for normal
production operation due to the sheer size of logs generated
- Info: Usually indicates successful service start/stop, versions and
such non-error related data. This should include largely positive
units of work that are accomplished (such as starting a compute,
creating a user, deleting a volume, etc.)
- Audit: REMOVE - (all previous Audit messages should be put as INFO)
- Warning: Indicates that there might be a systemic issue; potential
predictive failure notice
- Error: An error has occurred and an administrator should research
the event
- Critical: An error has occurred and the system might be unstable;
immediately get administrator assistance
We can think of this from an operator perspective in the following ways
(Note: we are not specifying operator policy here, just trying to set
tone for developers that aren't familiar with how these messages will
be interpreted):
- Critical : ZOMG! Cluster on FIRE! Call all pagers, wake up
everyone. This is an unrecoverable error with a service that has led or
probably will lead to service death or massive degradation.
- Error: Serious issue with cloud, administrator should be notified
immediately via email/pager. On call people expected to respond.
- Warning: Something is not right, should get looked into during the
next work week. Administrators should be working through eliminating
warnings as part of normal work.
- Info: normal status messages showing measurable units of positive
work passing through under normal functioning of the system. Should
not be so verbose as to overwhelm real signal with noise. Should not
be continuous "I'm alive!" messages.
- Debug: developer logging level, only enable if you are interested in
reading through a ton of additional information about what is going
on.
- Trace: In functions which support this level, details every
parameter and operation to help diagnose subtle bugs. This should
only be enabled for specific areas of interest or the log volume
will be overwhelming. Some system performance degradation should be
expected.
Proposed Changes From Status Quo
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Deprecate and remove AUDIT level
Rationale: AUDIT is confusing, and people use it for entirely the
wrong purposes. The origin of AUDIT was a NASA-specific requirement
which is no longer really relevant to the current code.
Information that was previously being emitted at AUDIT should instead
be sent as notifications to a notification queue. *Note: Notification formats
and frequency are beyond the scope of this spec.*
- Define TRACE logging level
TRACE is a logging keyword that is understood by most logging
tools. OpenStack has repurposed this in the past to not be TRACE
logging but instead be used whenever a stack trace was dumped.
Stack traces should be logged at ERROR level (they currently
aren't).
TRACE should be defined as log level 5 in python (which is lower than
DEBUG), and LOG.trace support should be added to oslo
logger. LOG.trace can then be used for deep tracing of code.
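A minimal sketch of how such a level could be wired into the stdlib
``logging`` module (oslo's eventual implementation may differ)::

    import logging

    TRACE = 5  # below DEBUG (10), as proposed above
    logging.addLevelName(TRACE, 'TRACE')

    def trace(self, msg, *args, **kwargs):
        # Mirror the stdlib convenience methods (debug, info, ...).
        if self.isEnabledFor(TRACE):
            self._log(TRACE, msg, args, **kwargs)

    logging.Logger.trace = trace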
Overall Logging Rules
---------------------
The following principles should apply to all messages
Log messages at Info and above should be a "unit of work"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Info log level is defined as: "normal status messages showing
measurable units of positive work passing through under normal
functioning of the system."
A measurable unit of work should be describable by a short sentence
fragment, in the past tense with a noun and a verb of something
significant.
Examples::
Instance spawned
Instance destroyed
Volume attached
Image failed to copy
Words like "started", "finished", or any verb ending in "ing" are
flags for non unit of work messages.
Debugging start / end messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
At the Debug log level it is often extremely important to flag the
beginning and ending of actions to track the progression of flows
(which might error out before the unit of work is completed).
This should be made clear by there being a "starting" message with
some indication of completion for that starting point.
In a real OpenStack environment lots of things are happening in
parallel. There are multiple workers per service, and multiple instances
of services in the cloud.
Examples of Good and Bad uses of Info
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Below are some examples of good and bad uses of Info. In the Good
examples we can see the 'noun / verb' fragment for a unit of work
(successfully is probably superfluous and could be removed).
In the bad examples we see trace level thinking put into INFO and
above messages.
**Good**
::
2014-01-26 15:36:10.597 28297 INFO nova.virt.libvirt.driver [-]
[instance: b1b8e5c7-12f0-4092-84f6-297fe7642070] Instance spawned
successfully.
2014-01-26 15:36:14.307 28297 INFO nova.virt.libvirt.driver [-]
[instance: b1b8e5c7-12f0-4092-84f6-297fe7642070] Instance destroyed
successfully.
**Bad**
::
2014-01-26 15:36:11.198 INFO nova.virt.libvirt.driver
[req-ded67509-1e5d-4fb2-a0e2-92932bba9271
FixedIPsNegativeTestXml-1426989627 FixedIPsNegativeTestXml-38506689]
[instance: fd027464-6e15-4f5d-8b1f-c389bdb8772a] Creating image
2014-01-26 15:36:11.525 INFO nova.virt.libvirt.driver
[req-ded67509-1e5d-4fb2-a0e2-92932bba9271
FixedIPsNegativeTestXml-1426989627 FixedIPsNegativeTestXml-38506689]
[instance: fd027464-6e15-4f5d-8b1f-c389bdb8772a] Using config drive
2014-01-26 15:36:12.326 AUDIT nova.compute.manager
[req-714315e2-6318-4005-8f8f-05d7796ff45d FixedIPsTestXml-911165017
FixedIPsTestXml-1315774890] [instance:
b1b8e5c7-12f0-4092-84f6-297fe7642070] Terminating instance
2014-01-26 15:36:12.570 INFO nova.virt.libvirt.driver
[req-ded67509-1e5d-4fb2-a0e2-92932bba9271
FixedIPsNegativeTestXml-1426989627 FixedIPsNegativeTestXml-38506689]
[instance: fd027464-6e15-4f5d-8b1f-c389bdb8772a] Creating config
drive at
/opt/stack/data/nova/instances/fd027464-6e15-4f5d-8b1f
-c389bdb8772a/disk.config
This is mostly an overshare issue. At Info these are stages that don't
really need to be fully communicated.
Messages shouldn't need a secret decoder ring
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Bad**
::
2014-01-26 15:36:14.256 28297 INFO nova.compute.manager [-]
Lifecycle event 1 on VM b1b8e5c7-12f0-4092-84f6-297fe7642070
As a general rule, when using constants or enums, ensure they are translated
back to user-facing strings before being sent to the user.
Specific Event Types
--------------------
In addition to the above guidelines very specific additional
requirements exist.
WSGI requests
~~~~~~~~~~~~~
Should be:
- Logged at **INFO** level
- Logged exactly once per request
- Include enough information to know what the request was
The last point is notable, because some POST API requests don't
include enough information in the URL alone to determine what the
API did. For instance, Nova Server Actions (where POST includes a
method name).
Rationale: Operators should be able to easily see what API requests
their users are making in their cloud to understand the usage patterns
of their users with their cloud.
Operator Deprecation Warnings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Should be:
- Logged at **WARN** level
- Logged exactly once per service start (not on every request through
code)
- Include directions on what to do to migrate from the deprecated
state
Rationale: Operators need to know that some aspect of their cloud
configuration is now deprecated, and will require changes in the
future. And they need enough of a bread crumb trail to figure out how
to do that.
REST API Deprecation Warnings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Should be:
- **Not** logged any higher than DEBUG (these are not operator facing
messages)
- Logged no more than once per REST API usage / tenant. Definitely
not on *every* REST API call.
Rationale: The users of the REST API don't have access to the system
logs. Therefore logging at a WARNING level is telling the wrong people
about the fact that they are using a deprecated API.
Deprecation of a user-facing API should be communicated via user-facing
mechanisms, such as API change notes associated with new API versions.
Stacktraces in Logs
~~~~~~~~~~~~~~~~~~~
Should be:
- **exceptional** events, for unforeseeable circumstances that are not
yet recoverable by the system.
- Logged at ERROR level
- Considered high priority bugs to be addressed by the development
team.
Rationale: The current behavior of OpenStack is extremely stack trace
happy. Many existing stack traces in the logs are considered
*normal*. This dramatically increases the time to find the root cause
of real issues in OpenStack.
Logging by non-OpenStack Components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack uses a large number of libraries, each with its own idea of
what is worth logging. Those wildly different conventions put a lot of
extraneous information in normal logs.
As such, all 3rd party libraries should have their logging levels
adjusted so only real errors are logged.
Currently proposed settings for 3rd party libraries (a sketch of
applying them follows the list):
- amqp=WARN
- boto=WARN
- qpid=WARN
- sqlalchemy=WARN
- suds=INFO
- iso8601=WARN
- requests.packages.urllib3.connectionpool=WARN
- urllib3.connectionpool=WARN
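As a sketch, these defaults could be applied with the standard library
directly (oslo.log exposes an equivalent ``default_log_levels`` option)::

    import logging

    DEFAULT_THIRD_PARTY_LEVELS = {
        'amqp': logging.WARN,
        'boto': logging.WARN,
        'qpid': logging.WARN,
        'sqlalchemy': logging.WARN,
        'suds': logging.INFO,
        'iso8601': logging.WARN,
        'requests.packages.urllib3.connectionpool': logging.WARN,
        'urllib3.connectionpool': logging.WARN,
    }

    # Quiet noisy third-party loggers per the proposed defaults.
    for name, level in DEFAULT_THIRD_PARTY_LEVELS.items():
        logging.getLogger(name).setLevel(level)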
Alternatives
------------
Continue to have terribly confusing logs
Data model impact
-----------------
NA
REST API impact
---------------
NA
Security impact
---------------
NA
Notifications impact
--------------------
NA
Other end user impact
---------------------
NA
Performance Impact
------------------
NA
Other deployer impact
---------------------
Should provide a much more standard way to determine what's going on
in the system.
Developer impact
----------------
Developers will need to be cognizant of these guidelines in creating
new code or reviewing code.
Implementation
==============
Assignee(s)
-----------
Assignee is for moving these guidelines through the review process to
something that we all agree on. The expectation is that these become
review criteria that we can reference and are implemented by a large
number of people. Once approved, the assignee will also drive collecting
volunteers to help fix logging in multiple projects.
Primary assignee:
Sean Dague <sean@dague.net>
Work Items
----------
Using this section to highlight things we need to decide that aren't
yet settled.
Proposed changes with general consensus
- Drop AUDIT log level, move all AUDIT message to either an INFO log
message or a ``notification``.
- Begin adjusting log levels within projects to match the severity
guidelines.
Dependencies
============
NA
Testing
=======
See tests provided by
https://blueprints.launchpad.net/nova/+spec/clean-logs
Documentation Impact
====================
Once agreed upon this should form a more permanent document on logging
specifications.
References
==========
- Security Log Guidelines -
https://wiki.openstack.org/wiki/Security/Guidelines/logging_guidelines
- Wiki page for basic logging standards proposal developed early in
Icehouse - https://wiki.openstack.org/wiki/LoggingStandards
- Apache Log4j levels (which many tools work with) -
https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/Level.html

View File

@ -1,92 +0,0 @@
========================================
No More Downward SQL Schema Migrations
========================================
SQL Migrations are the accepted method for OpenStack projects to ensure
that a given schema is consistent across deployments. Often these
migrations include updating of the underlying data as well as
modifications to the schema. It is inadvisable to perform a downwards
migration in any environment.
Problem description
===================
Best practice states that when reverting after a schema migration, the
correct course is to restore the database to a known-good state.
Many migrations in OpenStack include data-manipulation to ensure the data
conforms to the new schema; often these data-migrations are difficult or
impossible to reverse without significant overhead. Performing a downgrade
of the schema with such data manipulation can lead to inconsistent or
broken state. The possibility of bad-states, relatively minimal testing,
and no demand for support render a downgrade of the schema an unsafe
action.
Proposed change
===============
The proposed change is as follows:
* Eliminate the downgrade option(s) from the CLI tools utilized for
schema migrations. This is done by rejecting any attempt to migrate
that would result in a lower schema version number than the current one
(a sketch of such a guard follows this list).
* Document best practices on restoring to a previous schema version
including the steps to take prior to migrating the schema
* Stop support of migrations with downgrade functions across all
projects
* Do not add future migrations with downgrade functions
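A sketch of the kind of guard the CLI tools could grow (``db_sync`` and its
arguments are hypothetical; real tools differ)::

    def db_sync(current_version, target_version):
        # Refuse any migration that would lower the schema version; the
        # supported recovery path is restoring from a backup instead.
        if target_version < current_version:
            raise SystemExit(
                'Downgrading the schema from %s to %s is not supported; '
                'restore the database from a backup taken before the '
                'upgrade instead.' % (current_version, target_version))
        # ... run the forward migrations as usual ...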
Alternatives
------------
Downgrade paths can continue to be supported.
Implementation
==============
Assignee(s)
-----------
Morgan Fainberg (irc: morganfainberg)
Matt Fischer (irc: mfisch)
Work Items
----------
* Document best practices for restoring to a previous schema version
* Update oslo.db to return appropriate errors when trying to perform
a schema downgrade
* (all core teams) Do not accept new migrations that include downgrade
functions
* (all core teams) Projects may drop downgrade functions in all
current migration scripts
Dependencies
============
No external dependencies.
References
==========
1. `openstack-dev mailing list topic <http://lists.openstack.org/pipermail/
openstack-dev/2015-January/055586.html>`_
2. `openstack-operators mailing list topic <http://lists.openstack.org/
pipermail/openstack-operators/2015-January/006082.html>`_
3. `Current migration/rollback documentation <http://docs.openstack.org/
openstack-ops/content/ops_upgrades-roll-back.html>`_
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

View File

@ -1,189 +0,0 @@
=========================
Requirements management
=========================
Our current requirements management policy is causing significant gate
downtime, developer issues and packaging headaches. We can do better.
Problem description
===================
There are a number of interacting things we do today which are causing issues.
We run our tests with unpinned dependencies. This means that any patch being
tested actually tests two things: the change in the patch, and all new
releases of dependencies (within the ranges relevant to the branch in
question). With nearly 300 (transitive) libraries in use, an incompatibility
rate of only one per library per year can break us daily. We are suffering
regular firedrills, and we've burnt out many gate fixers so far.
We require that projects have their install_requires exactly match
our top level package specifiers that we use in gate jobs. This leads to
releases of stable branches that have much narrower dependency specifiers than
may work in practice. This becomes a problem when one package of a set needs
to upgrade a given dependency post-release to fix a bug: the new package can
be outside the set of versions dictated by the union of specifiers across all
our packages that use it, which causes a cascade where new releases are
required across the entire set to permit it to be used.
We override project local install_requires during testing, which means that
our co-installability check is often returning a false positive: our actual
install_requires may be incompatible but the gate won't report on this.
Additionally we have some constraints on solutions:
1. Provide “some” expression to downstream packagers/users
2. Configure test jobs (unit, integration, & functional) (devstack &
non-devstack)
3. Encourage convergence/maintain co-installability
4. Not be fiction
5. pip install python-novaclient works
6. Stop breaking all the time (esp. stable)
7. Don't make library release management painful
Finally there are plenty of things that could be done that aren't addressed in
this specification: it's the minimal self consistent set of improvements to
address the ongoing firedrills we currently suffer.
Proposed change
===============
tl;dr: Use exact pins for testing. Use open ended specifiers in project
install_requires.
Globally we need to maintain global-requirements as we do today. This remains
our policy control for which libraries are acceptable in OpenStack projects.
As we start using extras, we'll need to track extras[1] in global-requirements
as well. We want to preserve an axiom: that projects have wider-or-equal
requirements than the coordinated release which has wider-or-equal
requirements to the test pinned list.
We'll add a new pip freeze file to openstack/requirements, called
`upper-constraints.txt`. This will contain a pinned list of the entire set of
transitive requirements for that branch of OpenStack (defined by the projects
within the projects.txt file in openstack/requirements). All CI jobs will use
this to constrain the versions of Python projects that can be tested with.
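For example, with a recent pip the constraints file can be honoured directly
(assuming a local copy of the file)::

    pip install -c upper-constraints.txt -r requirements.txt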
Changes to that file will be tested by running a subset of the same jobs that
would consume it, as well as a policy checker that checks it is compatible
with `global-requirements.txt`. Changes to either `global-requirements.txt` or
`upper-constraints.txt` will have to be compatible with each other.
We'll tighten up the policy checks on projects to require that there be no
dependencies outside of those listed in global-requirements. This is needed to
allow centralised calculations about potential upgrades and co-installability
calculations. We'll change the check from 'identical line to global-requires'
to be 'compatible with both global-requires and upper-constraints'. The
existing requirements sync job will continue to propose converged requirements
for projects that have opted into converged dependency management.
We'll create a periodic job that takes global-requirements, expands it out
transitively, and then proposes any new releases (and any new or removed
transitive-only dependencies that that implies) as a patch to
upper-constraints.
Releases are a multi-week process in OpenStack as different servers fork and
set up their branches one at a time. During that period we'll gate any
requirements changes on both master and any branched projects, branching
openstack/requirements last when we're finally ready to decouple the release
from master. This is aimed at the changes involved in using new releases of
oslo etc.
Lastly, the pip resolver work will increase the accuracy of our constraints
pinning, but this spec is not dependent on that.
Alternatives
------------
The null option will continue to burn people out.
No other options have been proposed.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
lifeless
Work Items
----------
- Open up install-requires in master and stable/kilo. [lifeless]
- Generate pip glue to honour upper-constraints.txt. There are a few different
ways this might be cut, lifeless will figure a working low hanging fruit
implementation with upstream and/or infra. [lifeless]
- Create an initial upper-constraints.txt for master and stable/kilo.
[lifeless]
- Change g-r self test to ensure consistency between g-r and
upper-constraints.txt. [lifeless]
- Create jobs to prevent project local install_requires being incompatible
with global-requirements. [fungi/lifeless]
- Teach devstack how to honour a constraints file. [lifeless]
- Teach unittests / tox how to honour a constraints file. [lifeless]
- Generate zuul glue to mangle constraints files when a project from within
the constraints is part of the queue being assessed. [fungi/lifeless]
- Create jobs that use constraints files for everything. experimental for
local project contexts, non-voting when triggered from g-r. [fungi]
- Create script to determine transitive dependencies of global-requirements
and propose upgrades to upper-constraints.txt. [lifeless]
- Turn that script into a periodic job. [fungi]
- Debug and fix as needed until we get reasonably reliable passes on
the non-voting g-r jobs. [lifeless]
- Flip the constraints using jobs to voting on g-r, and non-voting
everywhere else. [fungi]
- Switch over to the constraints using jobs everywhere else in a more
controlled fashion. [fungi]
- Update release documentation to branch openstack/requirements last, and
setup the job that will validate requirements changes against the projects
that have branched already, as well as master itself. [fungi]
Dependencies
============
- None.
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Liberty
- Introduced
References
==========
1. https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

View File

@ -1,483 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Return request ID to caller
===========================
Currently there is no way to return `X-Openstack-Request-Id` to the user from
individual python-clients, python-openstackclient and python-openstacksdk.
Problem description
===================
Most OpenStack RESTful APIs return `X-Openstack-Request-Id` in the API
response header, but this request id is not available to the caller from the
python client. When you run a command on the command line using a client with
the debug option, it displays `X-Openstack-Request-Id` on the console, but
if a python client is used in a third-party application and an api call
fails for some unknown reason, there is no way to get
`X-Openstack-Request-Id` from the client. This request id is very useful
for getting quick support from the infrastructure support team.
Use Cases
---------
1. Our users are asking for `X-Openstack-Request-Id` to be returned from
python-clients, which would help them to get support from the service provider
quickly.
2. Log request id of the caller and callee on the same log message in case of
an api request call crossing service boundaries. This particular use case is
future work once the above use case is implemented.
Proposed change
===============
Add a wrapper class around the response to add a request id to it, which can
be returned back to the caller. We have analyzed 5 different python client
libraries to understand what different types of return values are returned
back to the caller.
1. python-novaclient - Version 2.26.0
2. python-glanceclient - Version 0.19.0
3. python-cinderclient - Version 1.2.3
4. python-keystoneclient - Version 1.6.0
5. python-neutronclient - Version 2.6.0
We have documented the details of return types in the below google spreadsheet.
https://docs.google.com/spreadsheets/d/1al6_XBHgKT8-N7HS7j_L2H5c4CFD0fB8xT93z6REkSk/edit?usp=sharing
There are 9 different types of return values:
**1 List**
Presently, there is no way to return `X-Openstack-Request-Id` for list type.
Add a new wrapper class inherited from list to return request-id back to
the caller.
.. code:: python
    class ListWithMeta(list):
        def __init__(self, values, req_id):
            super(ListWithMeta, self).__init__(values)
            self.request_ids = []
            if isinstance(req_id, list):
                self.request_ids.extend(req_id)
            else:
                self.request_ids.append(req_id)
**2 Dict**
Similar to list type above, there is no way to return `X-Openstack-Request-Id`
for dict type. Add a new wrapper class inherited from dict to return request-id
back to the caller.
.. code:: python
    class DictWithMeta(dict):
        def __init__(self, values, req_id):
            super(DictWithMeta, self).__init__(values)
            self.request_ids = [req_id]
**3 Resource object**
There are various methods that return different resource objects from the
clients (volume/snapshot etc. from python-cinderclient). These resource
classes don't have a request_ids attribute. Add a request_ids attribute to
the resource class and populate it from the HTTP response object when
instantiating resource objects.
.. code:: python
    # code snippet to add request_id to Resource class
    class Resource(object):
        def __init__(self, manager, info, loaded=False, req_id=None):
            self.manager = manager
            self._info = info
            self._add_details(info)
            self._loaded = loaded
            self.request_ids = [req_id]
**4 Tuple**
Most of the actions return a tuple containing a 'Response' object and the
'response body'. Add a new wrapper class inherited from tuple to return
request-id back to the caller. A few actions return None at present.
Make changes at all such places to return the new tuple wrapper class.
.. code:: python
    class TupleWithMeta(tuple):
        def __new__(cls, values, req_id):
            obj = super(TupleWithMeta, cls).__new__(cls, values)
            obj.request_ids = [req_id]
            return obj
**5 None**
Most delete/update methods don't return any value back to the caller.
In most of the clients a response and body tuple is returned by the api
call, but it is not returned back to the caller. Make changes at all such
places to return a TupleWithMeta object, which will have request_ids as an
attribute, for the delete and update cases. There are some corner cases,
like deleting metadata, where it's not possible to return a single
request-id back to the caller, because internally the client iterates
through the list and deletes metadata keys one by one. For such cases, a
list of request-ids will be returned back to the caller.
**6 Exception**
In python-cinderclient, python-keystoneclient and python-novaclient,
provision is made to pass the request-id when an exception is raised, as the
base exception class has a request-id attribute. Make similar changes in
python-glanceclient and python-neutronclient to add request-id to the base
exception class so that request-id is available in case of failure.
**7 Boolean (True/False)**
A couple of python-keystoneclient V3 methods, like check_in_group (which
checks whether a user is in a group), return bool (True/False). Add a new
wrapper class inherited from int to return request-id back to the caller.
.. code:: python
    class BoolWithMeta(int):
        def __new__(cls, value, req_id):
            obj = super(BoolWithMeta, cls).__new__(cls, bool(value))
            obj.request_ids = [req_id]
            return obj

        def __repr__(self):
            return ['False', 'True'][self]
**8 Generator**
All list APIs in python-glanceclient return a generator. In order to return
the list of request ids from the generator, add a new wrapper class that
wraps the existing generator and implements the iterator protocol. The new
wrapper class will have a 'request_ids' attribute of list type. In the next
method of the iterator (wrapper class), the request_id will be added to the
list based on page size and limit.
.. code:: python
    # code snippet to add request_id to GeneratorWrapper class
    class GeneratorWrapper(object):
        def __init__(self, paginate_func, url, page_size, limit):
            self.paginate_func = paginate_func
            self.url = url
            self.limit = limit
            self.page_size = page_size
            self.generator = None
            self.request_ids = []

        def _paginate(self):
            for obj, req_id in self.paginate_func(
                    self.url, self.page_size, self.limit):
                yield obj, req_id

        def __iter__(self):
            return self

        # Python 3 compatibility
        def __next__(self):
            return self.next()

        def next(self):
            if not self.generator:
                self.generator = self._paginate()
            # next() works on both Python 2 and 3, unlike the
            # generator's Python 2-only .next() method.
            obj, req_id = next(self.generator)
            if req_id and (req_id not in self.request_ids):
                self.request_ids.append(req_id)
            return obj
**9 String**
Couple of nova api's are returning String as a response to the user.
Add a new wrapper class inherited from str to return request-id back to
the caller.
.. code:: python
    class StrWithMeta(str):
        def __new__(cls, value, req_id):
            obj = super(StrWithMeta, cls).__new__(cls, value)
            obj.request_ids = [req_id]
            return obj
**Note:**
To start with, we are proposing to implement this solution in two steps.
*Step 1: Add request-id attribute to base exception class.*
request-id is most needed when an api returns an error code >= 400.
As of now python-cinderclient, python-keystoneclient and python-novaclient
already have a mechanism to return request-id in an exception. Make similar
changes in the remaining clients to return request-id in exceptions.
*Step 2: Add request-id for remaining return types*
Add new wrapper class in common package of oslo-incubator (apiclient/base.py)
and sync oslo-incubator in python-clients to return request-id for remaining
return types.
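Once both steps are in place, a caller can read the ids off any return value
(a sketch; the ``request_ids`` attribute comes from the wrapper classes
above, and the example id is made up):

.. code:: python

    volumes = cinder.volumes.list()
    # ListWithMeta carries the ids collected from the response headers.
    print(volumes.request_ids)  # e.g. ['req-a9b74258-...']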
Alternatives
------------
**Alternative Solution #1**
Step 1:
We are proposing to add 'get_previous_request_id()' method in python-clients,
python-openstackclient and python-openstacksdk to return request id to the
user.
Design
When a caller makes a call and gets a response from the OpenStack service, it
will extract `X-Openstack-Request-Id` from the response header and store it
in the thread local storage (TLS). Add a new method 'get_previous_request_id()'
in the client to return `X-Openstack-Request-Id` stored in the thread local
storage to the caller. We need to store request id in the TLS because same
client object could be used in multi-threaded application to interact with
the OpenStack services.
.. code:: python
    from cinderclient import client

    cinder = client.Client('2', 'demo', 'admin', 'demo',
                           'http://21.12.4.342:5000/v2.0')
    cinder.volumes.list()
    # [<Volume: 88c77848-ef8e-4d0a-9bbe-61ac41de0f0e>,
    #  <Volume: 4b731517-2f3d-4c93-a580-77665585f8ca>]
    cinder.get_previous_request_id()
    # 'req-a9b74258-0b21-49c2-8ce8-673b420e20cc'
Notes:
1. Whether authentication fails or succeeds, request_id is set to None in
thread local storage, because the authenticate method calls the keystone
service, and the response header returned will contain the request_id of
the keystone service.
2. There is a possibility that a request fails with an exception (timeout,
service down, etc.) before it gets a response with the request_id. In this
case get_previous_request_id() will return the request_id of the previous
request and not of the current request. To avoid this kind of issue, the
request_id needs to be set to None in thread local storage before a new
request is made.
Pros:
* Doesn't break compatibility.
* Minimal changes are required in the client.
Cons:
* Liable to bugs where folk make two calls and then look at the wrong id,
or deletes where N calls are involved - that implies buffering lots of ids
on the client, which implies an API for resetting it.
Step 2:
Logging request-id of the caller and callee on the same log message.
Once step 1 is implemented and `X-Openstack-Request-Id` is made available in
the python-client, it will be an easy change to log request id of the caller
and callee on the same log message in OpenStack core services where API request
call is crossing service boundaries. This is future work for which we will
create another spec if required, but it's worth mentioning it here to explain
the usefulness of returning `X-Openstack-Request-Id` from python-clients.
**Alternative Solution #2**
An alternative is to register a callback method with the client which will
be invoked after it gets a response from the OpenStack service. This callback
method will contain the response object which contains `X-Openstack-Request-Id`
and URL.
.. code:: python
    def callback_method(response):
        # Get `X-Openstack-Request-Id` and the URL from the response and
        # log them for troubleshooting.
        ...

    c = cinder.Client(...)
    c.register_request_id_callback(callback_method)
    volumes = c.list_volumes()
Pros:
* Doesn't break compatibility (meaning OpenStack services consuming python
client libraries require no changes in the code if a newer version of the
client library is used).
* Minimal changes are required in the client.
* With this approach, we can log caller and callee request-id in the same log message.
Cons:
* Forces consumers to try to match the call they made to the event,
which is complex.
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
abhijeet-malawade(abhijeet.malawade@nttdata.com)
Other contributors:
ankitagrawal(ankit11.agrawal@nttdata.com)
Work Items
----------
* Add request_id attribute in base exception for following projects:
1) python-glanceclient
2) python-neutronclient
* Add new wrapper classes in oslo-incubator openstack/common/apiclient to add
request-id to the caller.
**Note:**
All of the new wrapper classes will be added in the common package of
oslo-incubator (openstack/common/apiclient/) and later synced with individual
python clients. It was decided in the cross-project meeting [*] to mark the
openstack package as private, mainly in the python clients that sync the
apiclient python package from the oslo-incubator project. For example, oslo-incubator/openstack
should be synced with python-glanceclient as
glanceclient/_openstack/common/apiclient. For syncing, we will add a new
config parameter '--private-pkg' in update.py of oslo-incubator. Marking
openstack python package as private will have an impact on all import
statements, which will be refactored in the individual python clients.
* Sync openstack.common.apiclient.common module of oslo-incubator with
following projects:
1) python-cinderclient
2) python-glanceclient
3) python-novaclient
4) python-neutronclient
5) python-keystoneclient
Dependencies
============
None
Testing
=======
* Unittests for coverage
Documentation Impact
====================
None
References
==========
[*] http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-08-04-21.01.log.html
Etherpad
https://etherpad.openstack.org/p/request-id
Blueprints/Bugs
[1] Return request ID to caller(Cinder)
https://blueprints.launchpad.net/python-cinderclient/+spec/return-req-id
[2] Return request ID to caller(Glance)
https://blueprints.launchpad.net/python-glanceclient/+spec/expose-get-x-openstack-request-id
[3] Return request ID to caller(Neutron)
https://blueprints.launchpad.net/python-neutronclient/+spec/expose-get-x-openstack-request-id
[4] Return request ID to caller(Nova)
https://blueprints.launchpad.net/python-novaclient/+spec/expose-get-x-openstack-request-id
[5] Return request ID to caller(Keystone)
https://blueprints.launchpad.net/python-keystoneclient/+spec/expose-get-x-openstack-request-id
[6] python-openstackclient and python-openstacksdk bug
https://bugs.launchpad.net/python-openstacksdk/+bug/1465817
Discussions on cross-project weekly meeting
[1] http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-07-28-21.03.log.html
#topic Return request-id to caller (use thread local to store request-id)

View File

@ -1,256 +0,0 @@
===============================
Service Catalog Standardization
===============================
https://blueprints.launchpad.net/keystone/+spec/service-catalog-standards
The service catalog offers cloud consumers an initial view to all the available
services in an OpenStack cloud along with additional information about
regions, API versions, and projects available. The catalog is meant to make
it easy to efficiently find information about the services, such as how to
configure communication between services. But the catalog is also terribly
underused, under-documented, and inconsistently configured among most services.
By standardizing the service catalog we can provide a better user experience
with OpenStack.
Problem description
===================
The service catalog might be the first interaction a user has with an OpenStack
cloud to understand what the cloud offers in services and resources. That
interaction can be confusing, inconsistent between cloud providers, and contain
names and numbers that are mysterious and need decoding.
Providers making a service catalog might not think about consumers who see
multiple service catalogs in a single week.
The API Working Group did some initial fact finding about the varieties of
service catalogs available, and discovered just how varied the catalog can be.
See
https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog.
As an example of the inconsistency, cloud providers have filled in the
"name" object as all three: "nova", "cloudServersOpenStack" and "Compute".
Such a diverse service catalog means that services don't depend on it
being consistent, SDK devs don't completely understand it, and it
requires applications to encode cloud-specific behavior.
Here are some concrete examples of information that must be encoded because it
cannot be determined from the service catalog.
For example, a `nova.conf` file has to indicate exact URLs for many API
endpoints::
    [glance]
    api_servers = http://127.0.0.1:9292

    [neutron]
    url = http://127.0.0.1:9696
Ideally, rather than hardcoding these URL/port values in configuration
files, the service catalog could provide discoverability for those.
As another example, ``catalog_info = volume:cinder:publicURL`` in
`nova.conf` is a configuration setting giving the info to match when
looking for cinder in the service catalog. The format is colon-separated
values of the form::

    <service_type>:<service_name>:<endpoint_type>
There's also an ``endpoint_template`` `nova.conf` variable that
overrides the service catalog lookup with template for cinder
endpoint, such as ``http://localhost:8776/v1/%(project_id)s``.
While we are working through many of these issues, we are ensuring that
projects understand that user experience includes consistency, discoverability,
and simplicity as design tenets for service catalog incremental improvements.
A Vision of the Ideal Service Catalog
=====================================
The following is a vision of where we want to get to with the service
catalog in OpenStack.
1. Querying type='volume' in any service catalog on any cloud returns
an unversioned URL for that service. This is a contract we can
depend on (see the example entry after this list).
2. All OpenStack server components find the operational urls for other
OpenStack services with the catalog.
3. The service types used in the catalog, such as ``volume`` or ``compute``,
are defined in server components. This definition ensures that standardization
propagates, as clouds will not work if their service catalog is not
defined correctly.
4. There are Tempest tests for standard service catalog, and it's a
DefCore requirement that standard service catalog entries are
defined.
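A hypothetical catalog entry matching that vision (every value is invented
for illustration; note the unversioned URL and the absence of a project ID)::

    {
        "type": "volume",
        "name": "cinder",
        "endpoints": [
            {
                "interface": "public",
                "region": "RegionOne",
                "url": "https://volume.cloud.example.com/"
            }
        ]
    }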
Proposed change
===============
What we want to solve for:
- Standard required naming for endpoints (versioned vs. unversioned,
contains project ID vs. no project ID).
* We want unversioned endpoints so that the user can get
information about multiple available versions in a given cloud.
* We do not want project ID, account ID, or tenant ID as part of
the resource URI for an OpenStack API endpoint.
* Standard naming means all consumers, including other OpenStack
services, can trust what the value of type='volume' will be.
- List of changes needed in existing clouds/products to comply with this.
* We want DevStack to follow these standards as the best practice example.
* We want to use JSON Schema to define the API for the service catalog
to ensure understanding and compliance.
* JSON Schema must allow for "extra" data so that we can continue with
name, and vendor-specific "Extra" things during the transition(s).
* Known types such as `service_type` can be documented in `projects.yaml`
in the `openstack/governance` git repository.
- List of changes in OpenStack projects that would rely on this standard, thus
making sure we've got it right.
- Published guidelines we recommend that DefCore requires of cloud provider's
service catalogs going forward. These guidelines can be created in the API
Working Group set of guidelines.
- Documentation for all new projects to comply with the service catalog
standards defined by the guidelines.
Top difficulties with the service catalog for SDK devs are currently:
- Name and type are meaningless, misunderstood, and poorly documented.
- Regions are not consistently named or used. The way regions are structured
is a pain for SDKs, because it requires a lot of traversing. Encourage
clear provider documentation and guidance for this naming.
- The versions in URLs are inconsistent (see what we want to solve for above).
- The tying between auth and getting a service catalog seems unnecessary. See
roles example above. A user should be able to get a list of all the services
and endpoints in a single, preferably unauthenticated, call.
Documentation can improve some of the difficulties. Standards and guidelines
should be published from within the Cloud Admin Guide, the Installation Guides,
and the Identity service (keystone) developer documentation.
The list of changes is gathered here:
- Ensure each service's API has a version request (current standard is a GET
call to /). However, keystoneauth's session can't use that to discover
versions because the URL returned by the Identity service for another
configured service is the versioned endpoint. The version is embedded in the
URL. We should have the Identity service discover the version number from
each service's API itself.
- Remove ``project_id`` template from endpoints, acknowledging that future clients
will have to account for this change.
- Ensure DevStack examples are consistent and can be used as an exemplary
best practice.
- Ensure Tempest works with new catalog.
- Write a tempest test that uses JSON Schema for the service catalog.
- Provide the standard project and service names in the governance repository
through the `projects.yaml` file. However, enable flexibility in the "name"
for providers to offer multiple services.
- Cause projects' interactions with the service catalog to be standard so that,
for example, the nova project does not need three configuration variables to
specify how nova can interact with the cinder service catalog entries.
- Ensure that the publicURL, adminURL, and internalURL have known use cases.
Work with the operator community to understand whether those can be
consolidated when presenting the catalog to an end user.
Alternatives
------------
What happens currently is DevStack's configuration becomes a de facto standard
for endpoint URL naming, which then indicates both the name and type standard.
Implementation
==============
Assignee(s)
-----------
Anne Gentle annegentle
Augustina Ragwitz
Sean Dague sdague
Dolph Matthews dolphm
Work Items
----------
Create a guideline in the API Working Group repository for service
types, names, endpoint URLs, and configuration for cloud providers
creating service catalog entries.
Create a JSON Schema for the service catalog, to be stored as a
tempest test, so that the refstack repo can make use of it. Tempest
tests can check for valid entries. So the Identity project won't
enforce the list; rather, a test in Tempest can enforce it for
interoperability. The test will check each entry based on JSON Schema,
such as:
- existence of service_type: required
- value type of service_type: string (reference for value from
governance/projects.yaml file)
- extra data: acceptable because of the need for transition for providers
DevStack should be the reference implementation for best practices in
service catalog entries.
Create a conceptual topic about the service catalog using
http://dolphm.com/openstack-keystone-service-catalog/ as a starting
point.
Dependencies
============
.. Summit session:
https://libertydesignsummit.sched.org/event/194b2589eca19956cb88ada45e985e29
Additional Reading
==================
- http://docs.openstack.org/developer/keystone/configuration.html?highlight=catalog#service-catalog
- http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html
- http://dolphm.com/openstack-keystone-service-catalog/
- https://etherpad.openstack.org/p/service-catalog-cross-project-vancouver
- https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Entry_Points
- https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Liberty
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported
   License.
   http://creativecommons.org/licenses/by/3.0/legalcode
@ -1,130 +0,0 @@
..
=============================
Supported Messaging Drivers
=============================
We need to define a policy on messaging in general.
Problem description
===================
OpenStack has gravitated toward using RabbitMQ for message passing. There
are numerous excellent alternatives available as backends, including
QPID and 0mq, but only RabbitMQ has received attention directly in the
community at large. As a result, more and more users of these backends
are switching to RabbitMQ, and this leaves QPID code lying around that
is not well tested and not well supported by the community. Said code
will continue to be a burden, and should be removed if it is doing more
harm than good.
There is also anecdotal evidence that users achieve high scale with the
zmq driver by carrying local fixes, but the zmq driver is not well
tested either. That may actually be worse than not having it at all:
the anecdotes are passed from user to user, while results will be very
different for users who stick to upstream code.
Proposed change
===============
RabbitMQ may not be sufficient for the entire community as the community
grows. Pluggability is still something we should maintain, but we should
have a very high standard for drivers that are shipped and documented
as being supported.
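For context, oslo.messaging selects the backend driver from the scheme of
the configured transport URL. A minimal sketch of that pluggability,
assuming the oslo.messaging and oslo.config APIs of this era (the broker
URL and topic are placeholders)::

  from oslo_config import cfg
  import oslo_messaging

  # The URL scheme ('rabbit', 'zmq', ...) picks the backend driver;
  # host and credentials here are placeholders.
  transport = oslo_messaging.get_transport(
      cfg.CONF, 'rabbit://guest:guest@localhost:5672/')

  # An RPC client bound to a topic; swapping the driver requires only
  # a different transport URL, not application changes.
  target = oslo_messaging.Target(topic='demo-topic')
  client = oslo_messaging.RPCClient(transport, target)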
We will define a very clear policy on the requirements for drivers to be
carried in oslo.messaging and thus supported by the OpenStack community
as a whole. We will deprecate any drivers that do not meet the
requirements, and announce those deprecations in the appropriate channels
to give users time to signal their needs. Deprecation will last for two
release cycles before the code is removed. We will also review and update
documentation to annotate which drivers are supported and which are
deprecated under these policies.
Policy
------
Testing
~~~~~~~
* Must have unit and/or functional test coverage of at least 60% as
  reported by coverage report. Unit tests must be run for all versions
  of Python that oslo.messaging currently gates on.
* Must have integration testing including at least three popular
  oslo.messaging dependents, preferably at a minimum a devstack-gate
  job with Nova, Cinder, and Neutron.
* All testing above must be voting in the gate of oslo.messaging.
Documentation
~~~~~~~~~~~~~
* Must have a reasonable amount of documentation, including coverage in
  the official OpenStack deployment guide.
Support
~~~~~~~
* Must have at least two individuals from the community committed to
  triaging and fixing bugs, and responding to test failures, in a timely
  manner.
Prospective Drivers
~~~~~~~~~~~~~~~~~~~
* Drivers that intend to meet the requirements above, but do not yet meet
  them, will be given one full release cycle, or six months, whichever is
  longer, to comply before being marked for deprecation. Their use,
  however, will not be supported by the community in the interim. This
  prevents a chicken-and-egg problem for new drivers.
Alternatives
------------
We could remove pluggability from oslo.messaging entirely, and just
ship RabbitMQ drivers. This option would alienate users who have private
drivers, and would also force users of the zmq driver who are trying to
fix it to abandon those efforts and try to scale with RabbitMQ.
We could also go even further, remove the pluggability, and improve the
kombu library enough to support all use cases of oslo.messaging. That is
beyond the scope of this document though.
Implementation
==============
Assignee(s)
-----------
Clint "SpamapS" Byrum
Work Items
----------
- Record policy in oslo developer documentation
- Announce policy to mailing lists
- Mark non-compliant drivers as deprecated
- Update configuration guides to note non-supported drivers
- After deprecation period, remove non-compliant drivers from code and docs
Dependencies
============
N/A
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Liberty
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported
   License.
   http://creativecommons.org/licenses/by/3.0/legalcode
@ -1,111 +0,0 @@
..
This template should be in ReSTructured text. For help with syntax,
see http://sphinx-doc.org/rest.html
To test out your formatting, build the docs using tox, or see:
http://rst.ninjs.org
The filename in the git repository should match the launchpad URL; for
example, the blueprint at
https://blueprints.launchpad.net/openstack/+spec/awesome-thing should be
named specs/awesome-thing.rst.
Wrap text at 79 columns.
Do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
If you would like to provide a diagram with your spec, ascii
diagrams are required. http://asciiflow.com/ is a very nice tool to
assist with making ascii diagrams. The reason for this is that the
tool used to review specs is based purely on plain text. Plain text
allows review to proceed without having to look at additional files
that cannot be viewed in Gerrit. It also allows inline feedback on the
diagram itself.
==================================
The title of your spec or policy
==================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net//+spec/example
Introduction paragraph -- why are we doing anything?
Problem description
===================
A detailed description of the problem.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How
do you propose to solve this problem?
If this is one part of a larger effort make it clear where this piece
ends. In other words, what's the scope of this effort?
Include where in the openstack tree hierarchy this will reside.
Alternatives
------------
This is an optional section; where it applies, we'd just like a
demonstration that some thought has been put into why the proposed
approach is the best one.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a specification
where you're throwing it out there to see who picks it up?
If more than one person is working on the implementation, please
designate the primary author and contact.
Primary assignee:
name, IRC nick, etc.
Optionally, list additional IDs for contributors who intend to do
substantial implementation work.
Work Items
----------
Work items or tasks -- break the feature up into the things that need
to be done to implement it. Those parts might end up being done by
different people, but we're mostly trying to understand the time-line
for implementation.
Dependencies
============
- Include specific references to specs and/or blueprints in OpenStack,
or in other projects, that this one either depends on or is related
to.
- Does this feature require any new library dependencies or code
  otherwise not included in OpenStack? Or does it depend on a specific
  version of a library?
History
=======
.. list-table:: Revisions
   :header-rows: 1

   * - Release Name
     - Description
   * - Kilo
     - Introduced

.. note::

   This work is licensed under a Creative Commons Attribution 3.0 Unported
   License.
   http://creativecommons.org/licenses/by/3.0/legalcode
27
tox.ini
@ -1,27 +0,0 @@
[tox]
# Minimum tox version needed to parse this file.
minversion = 1.6
envlist = docs
# Do not build an sdist; environments use the source tree directly.
skipsdist = True

[testenv]
basepython = python3
usedevelop = True
setenv =
   VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt

[testenv:venv]
# Run an arbitrary command inside the tox-managed virtualenv.
commands = {posargs}

[testenv:docs]
# Build the specs as HTML; -W turns Sphinx warnings into errors.
commands =
  sphinx-build -W -b html -d doc/build/doctrees doc/source doc/build/html

[testenv:spelling]
deps =
   -r{toxinidir}/requirements.txt
   sphinxcontrib-spelling
   PyEnchant
# Spell-check the docs with the sphinxcontrib-spelling builder.
commands = sphinx-build -b spelling doc/source doc/build/spelling