moving specs directory from openstack-attic/akanda

Co-Authored-By: Sean Roberts <seanroberts66@gmail.com>
Co-Authored-By: Adam Gandelman <adamg@ubuntu.com>

Change-Id: I7fcf0af9753efd747a8bc96f00a90f2487d1a948
David Lenwell 2015-11-10 10:52:20 -08:00
parent f2a02ad887
commit d162f3789f
8 changed files with 1195 additions and 0 deletions

specs/README.rst Normal file

@@ -0,0 +1,35 @@
OpenStack Akanda Specifications
===============================
This directory structure is used to hold approved design specifications for additions
to the Akanda project. Reviews of the specs are done in gerrit, using a
similar workflow to how we review and merge changes to the code itself.
The layout of this repository is::
specs/<release>/
You can find an example spec in `specs/template.rst`. A
skeleton that contains all the sections required for a spec
file is located in `specs/skeleton.rst` and can
be copied, then filled in with the details of a new blueprint for
convenience.
Specifications are proposed for a given release by adding them to the
`specs/<release>` directory and posting them for review. The implementation
status of a blueprint for a given release can be found by looking at the
blueprint in launchpad. Not all approved blueprints will get fully implemented.
Specifications have to be re-proposed for every release. The review may be
quick, but even if something was previously approved, it should be re-reviewed
to make sure it still makes sense as written.
Please note, Launchpad blueprints are still used for tracking the
current status of blueprints. For more information, see::
https://wiki.openstack.org/wiki/Blueprints
http://blueprints.launchpad.net/akanda
For more information about working with gerrit, see::
http://docs.openstack.org/infra/manual/developers.html#development-workflow

specs/kilo/ci-updates.rst Normal file

@@ -0,0 +1,191 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================
Akanda CI updates for Kilo
==========================
Problem Description
===================
We build lots of interconnected things but don't test any of them. We
should be employing pre-commit testing, similar to other projects, to ensure
users get something that's not broken when deploying from the master git
repositories or from generated tarballs and images.
Proposed Change
===============
All changes to Akanda projects should go through regular check and gate
phases that test a deployment containing proposed code changes. This
includes changes to Akanda code as well as supporting things like its devstack
code and ``akanda-appliance-builder``. We can leverage devstack, tempest
and diskimage-builder to do this and create a generic Akanda integration
testing job that can be added to the pipelines of relevant projects. We should
also be running standard unit test coverage and pep8 checks here.
For code that runs in the Akanda appliance VM or code that is used to build
said image, we should ensure that tests run against proposed changes and not
static, pre-built appliance images. That is, runs that are testing changes
to ``akanda-appliance`` should build an entirely new appliance VM image and
use that for their integration tests instead of pulling a pre-built image that
does not contain the code under review.
Additionally, we should be archiving the results of changes to these
appliance-related repositories as a 'latest' image. That is, if someone
lands a change to ``akanda-appliance``, we should build and archive a
VM image in a known location on the internet. This will speed up other
tests that do not need to build a new image but should run against the
latest version, and also avoid forcing users to needlessly build images.
For changes that do not modify the appliance code or tooling used to build
the image, tests should run with a pre-built image. This can be either a
'latest' image or a released, versioned image.
One question at this point is where we run the Tempest jobs. These usually
take between 30 minutes and 1 hour to complete, and the nodes that run them in
the main OpenStack gate are a limited resource. We may need to maintain our
own third-party CI infrastructure to do this. TBD.
Data Model Impact
-----------------
None
REST API Impact
---------------
None
Security Impact
---------------
None
Notifications Impact
--------------------
None
Other End User Impact
---------------------
None
Performance Impact
------------------
None
Other Deployer Impact
---------------------
None
Developer Impact
----------------
Developers hoping to land code in any of the Akanda repositories will need to
ensure their code passes all gate tests before it can land.
Community Impact
----------------
This may make landing changes a bit slower but should improve the overall
quality and health of Akanda repositories.
Alternatives
------------
Implementation
==============
Assignee(s)
-----------
Work Items
----------
* Enable pep8 and unit test jobs against relevant Akanda repositories.
* Move existing devstack code out of ``http://github.com/dreamhost/akanda-devstack.git``
  and into a proper gerrit-managed Akanda repository in the stackforge namespace.
* Complete the diskimage-builder support that currently exists in
  ``http://github.com/stackforge/akanda-appliance-builder.git``.
* Update devstack code to either pull a pre-built Akanda appliance image from a
  known URL or build one from source for use in the test run.
* Create a generic ``(check|gate)-dsvm-tempest-akanda`` job that spins up the
Akanda devstack deployment and runs a subset of Tempest tests against it.
* Identify the subset of Tempest tests we care to run.
* Sync with openstack-infra and determine how and where these integration test
jobs will run.
* Run the devstack job against changes to ``akanda-appliance`` or
  ``akanda-appliance-builder`` with a configuration such that the appliance
  image will be built from source, including the patch under review.
* Set up infrastructure to publish a new appliance image
  (i.e., akanda-appliance-latest.qcow2) to a known location on the internet
  after code lands in ``akanda-appliance`` or ``akanda-appliance-builder``.
* Run the devstack job against all other relevant Akanda repositories with a
  configuration such that a pre-built appliance image from a known location on
  the internet is used. Ideally, this will be the image produced from changes
  to the appliance repositories (i.e., akanda-appliance-latest.qcow2).
Dependencies
============
None
Testing
=======
Tempest Tests
-------------
n/a
Functional Tests
----------------
n/a
API Tests
---------
n/a
Documentation Impact
====================
User Documentation
------------------
Should be updated to reflect the new home of devstack code and proper ways to
deploy it.
Developer Documentation
-----------------------
Should be updated to reflect the new home of devstack code and proper ways to
deploy it.
References
==========
None

specs/kilo/skeleton.rst Symbolic link

@@ -0,0 +1 @@
../skeleton.rst


@@ -0,0 +1,192 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=====================================
Liberty release documentation updates
=====================================
Problem Description
===================
The documentation needs to be easy for new users and contributors to follow
while adhering to the structure and conventions of similar OpenStack docs.
Proposed Change
===============
Organize the documentation around four sections: What Is Akanda, Installing
Akanda, Operating Akanda, and Akanda Developer Guide.
This change will make the Akanda documentation [1] read similarly to the
existing OpenStack documentation [2]. It will also prepare the Akanda
documentation for merging with the OpenStack documentation.
The What Is Akanda section will hold the existing High Level Architecture,
Service VM Orchestration and Management, and The Service VM sections. In these
pages, "VM" will be renamed to "Instance". We will add user documentation
demonstrating Akanda, explaining how it orchestrates network services, and
comparing (and contrasting) Akanda with other SDN options. We will add some
detail on east-west and north-south frame/packet flow between compute nodes,
call out IPv6 support very clearly, and explain the driver concept and how it
will make supporting new Neutron advanced services easier. We will
additionally explain how Akanda integrates with Neutron. All of this should be
said without duplicating any of the existing OpenStack documentation.
The Installing Akanda section will hold the existing Akanda Developer
Quickstart. We will add instructions for installing from tarballs, from
source, and eventually from distribution packages. Known good configurations
will also be part of this section.
Operating Akanda will hold the existing Operation and Deployment and
Configuration Options sections. We will add the training material here. We
will need to add details on dynamic routing support and on how configuration
drift support works and is managed, plus links to supporting ML2 drivers such
as linuxbridge and OVS. We will make it clear how Akanda supports common
Neutron configurations and configuration changes, and add details on
supporting VXLAN overlays and Lightweight Network Virtualization (LNV, also
known as Hierarchical Port Binding) with Akanda.
The Akanda Developer Guide will hold the details on setting up the developer
environment and testing code locally, an explanation of the CI tests, and some
references to Neutron dependencies. This entire section will move to the
Akanda developer reference section here [3], once the Akanda project is
accepted into the OpenStack org repo.
This spec also includes the use of docstrings in the code. We will start by
adding docstrings to the rug code, as it is the most critical.
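For example, a rug method docstring might follow the Sphinx field-list style
(an illustrative sketch, not existing rug code)::

    def update_router(tenant_id, router_id):
        """Schedule a configuration update for a tenant's router appliance.

        :param tenant_id: UUID of the tenant owning the router.
        :param router_id: UUID of the Neutron router to update.
        :returns: True if the update was queued, False otherwise.
        """
        raise NotImplementedError  # illustrative signature only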
Data Model Impact
-----------------
n/a
REST API Impact
---------------
n/a
Security Impact
---------------
n/a
Notifications Impact
--------------------
n/a
Other End User Impact
---------------------
n/a
Performance Impact
------------------
n/a
Other Deployer Impact
---------------------
n/a
Developer Impact
----------------
Updating the documentation structure will make it easier for new contributors
to join the Akanda project. As Akanda joins the OpenStack org repo structure,
it will make setting up the devref material very easy.
Community Impact
----------------
The OpenStack community will better understand what the Akanda project is
about and why it is important with clear documentation.
Alternatives
------------
* Leave documentation as is
* Wait until the Akanda project is moved into the OpenStack org repo before
updating the documentation structure.
Implementation
==============
Assignee(s)
-----------
Sean Roberts (sarob)
Work Items
----------
* Create a patch to restructure the Akanda documentation
* Add new content from slides and other sources
* After Akanda gets moved into OpenStack org repos, move the Akanda developer
reference to docs.openstack.org/developer/akanda/devref/
Dependencies
============
Testing
=======
Tempest Tests
-------------
n/a
Functional Tests
----------------
n/a
API Tests
---------
n/a
Documentation Impact
====================
User Documentation
------------------
See the proposed change section
Developer Documentation
-----------------------
See the proposed change section
References
==========
[1] http://docs.akanda.io/
[2] http://docs.openstack.org/
[3] http://docs.openstack.org/developer/openstack-projects.html

specs/liberty/rug_ha.rst Normal file

@@ -0,0 +1,201 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================
RUG HA and scaleout
===================
Problem Description
===================
The RUG is a multi-process, multi-worker service, but it cannot be scaled
out to multiple nodes for purposes of high availability and distributed
handling of load. The only current option for a highly-available deployment
is an active/passive cluster using Pacemaker or similar, which is less than
ideal and does not address scale-out concerns.
Proposed Change
===============
This proposes allowing multiple RUG processes to be spawned across
many nodes. Each RUG process is responsible for a fraction of the
total running appliances. RUG_process->appliance(s) mapping will be
managed by a consistent hash ring. An external coordination service
(e.g., ZooKeeper) will be leveraged to provide cluster membership
capabilities, and python-tooz will be used to manage cluster events.
When new members join or depart, the hash ring will be rebalanced and
appliances re-distributed across the RUG.
This allows operators to scale out to many RUG instances, eliminating
the single-point-of-failure and allowing appliances to be evenly
distributed across multiple worker processes.
Data Model Impact
-----------------
n/a
REST API Impact
---------------
n/a
Security Impact
---------------
None
Notifications Impact
--------------------
Other End User Impact
---------------------
n/a
Performance Impact
------------------
There will be some new overhead introduced at the messaging layer, as Neutron
notifications and RPCs will need to be distributed to per-RUG message queues.
Other Deployer Impact
---------------------
Deployers will need to evaluate and choose an appropriate backend to be used
by tooz for leader election. memcached is a simple yet non-robust solution,
while ZooKeeper is a less light-weight but proven one. More info at [2].
Developer Impact
----------------
n/a
Community Impact
----------------
n/a
Alternatives
------------
One alternative to having each RUG instance declare its own messaging queue and
inspect all incoming messages would be to have the DHT master also serve as a
notification master. That is, the leader would be the only instance of the RUG
listening to and processing incoming Neutron notifications, and then
re-distributing them to specific RUG workers based on the state of the DHT.
Another option would be to do away with the use of Neutron notifications
entirely and hard-wire the akanda-neutron plugin to the RUG via a dedicated
message queue.
Implementation
==============
This proposes enabling operators to run multiple instances of the RUG.
Each instance of the RUG will be responsible for a subset of the managed
appliances. A distributed, consistent hash ring will be used to map appliances
to their respective RUG instance. The Ironic project is already doing
something similar and has a hash ring implementation we can likely leverage
to get started [1].
The RUG cluster is essentially leaderless. The hash ring is constructed
using the active node list, and each individual RUG instance is capable of
constructing a ring given a list of members. This ring is consistent across
nodes provided the coordination service is properly reporting membership
events and they are processed correctly. Using metadata attached to incoming
events (e.g., the tenant_id), a consumer is able to check the hash ring to
determine which node in the ring the event is mapped to.
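A minimal sketch of such a ring follows (hypothetical names; a real
implementation would likely adapt Ironic's hash ring module referenced
below)::

    import bisect
    import hashlib


    class HashRing(object):
        """Consistent hash ring mapping tenant IDs to RUG hosts (sketch).

        Each host is hashed onto the ring at several points (replicas) so
        that load spreads evenly and only ~1/N of the mappings move when
        membership changes.
        """

        def __init__(self, hosts, replicas=32):
            self._ring = {}
            self._keys = []
            for host in hosts:
                for i in range(replicas):
                    key = self._hash('%s-%d' % (host, i))
                    self._ring[key] = host
                    bisect.insort(self._keys, key)

        @staticmethod
        def _hash(data):
            return int(hashlib.md5(data.encode('utf-8')).hexdigest(), 16)

        def get_host(self, tenant_id):
            """Return the host responsible for the given tenant."""
            key = self._hash(tenant_id)
            # Walk clockwise to the first host point at or after this key,
            # wrapping around the end of the ring.
            idx = bisect.bisect(self._keys, key) % len(self._keys)
            return self._ring[self._keys[idx]]

Because every node builds the ring from the same member list, each arrives at
the same mapping independently; rebalancing is simply rebuilding the ring from
the updated list.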
The RUG will spawn a new subprocess called the coordinator. Its only purpose
is to listen for cluster membership events using python-tooz. When a member
joins or departs, the coordinator will create a new Event of type REBALANCE
and put it onto the notifications queue. This event's body will contain an
updated list of current cluster nodes.
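A rough sketch of that coordinator loop using python-tooz follows; the
backend URL, group name, and queue plumbing are assumptions for illustration,
not the final API::

    import time

    from tooz import coordination

    GROUP = b'akanda.rug'


    def run_coordinator(member_id, notifications_queue,
                        backend_url='zookeeper://127.0.0.1:2181'):
        # member_id must be bytes, e.g. b'rug1'.
        coord = coordination.get_coordinator(backend_url, member_id)
        coord.start()
        try:
            coord.create_group(GROUP).get()
        except coordination.GroupAlreadyExist:
            pass
        coord.join_group(GROUP).get()

        def membership_changed(event):
            members = coord.get_members(GROUP).get()
            # Publish a REBALANCE event carrying the current member list;
            # workers rebuild their hash rings from it.
            notifications_queue.put(('REBALANCE', sorted(members)))

        coord.watch_join_group(GROUP, membership_changed)
        coord.watch_leave_group(GROUP, membership_changed)

        while True:
            coord.heartbeat()     # keep this member's registration alive
            coord.run_watchers()  # fire join/leave callbacks
            time.sleep(1)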
Each RUG worker process will maintain a copy of the hash ring, which is
shared by its worker threads. When it receives a REBALANCE event, it will
rebalance the hash ring given the new membership list. When it receives
normal CRUD events for resources, it will first check the hash ring to see
if it is mapped to its host based on target tenant_id for the event. If it is,
the event will be processed. If it is not, the event will be ignored and
serviced by another worker.
Ideally, REBALANCE events should be serviced before CRUD events.
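Put together, the per-worker dispatch check could be as small as the
following sketch (``HashRing`` is the sketch above; the event attributes are
illustrative)::

    class EventFilter(object):
        """Decide whether this RUG host should service an event (sketch)."""

        def __init__(self, host, members):
            self.host = host
            self.ring = HashRing(members)

        def should_process(self, event):
            if event.type == 'REBALANCE':
                # Rebuild the ring first so that later CRUD events are
                # checked against current cluster membership.
                self.ring = HashRing(event.body['members'])
                return True
            # CRUD events: service only those hashed to this host.
            return self.ring.get_host(event.tenant_id) == self.host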
Assignee(s)
-----------
Work Items
----------
* Implement a distributed hash ring for managing worker:appliance
assignment
* Add new coordination sub-process to the RUG that publishes REBALANCE
events to the notifications queue when membership changes
* Set up per-RUG message queues such that notifications are distributed to all
RUG processes equally.
* Update worker to manage its own copy of the hash ring
* Update worker with the ability to respond to new REBALANCE events by rebalancing
the ring with an updated membership list
* Update worker to drop events for resources that are not mapped to its host in
the hash ring.
Dependencies
============
Testing
=======
Tempest Tests
-------------
Functional Tests
----------------
If we cannot sufficiently test this using unit tests, we could potentially
spin up our devstack job with multiple copies of the akanda-rug-service
running on a single host and with multiple router appliances. This would
allow us to test ring rebalancing by killing off one of the
akanda-rug-service processes.
API Tests
---------
Documentation Impact
====================
User Documentation
------------------
Deployment docs need to be updated to mention this feature is dependent
on an external coordination service.
Developer Documentation
-----------------------
References
==========
[1] https://git.openstack.org/cgit/openstack/ironic/tree/ironic/common/hash_ring.py
[2] http://docs.openstack.org/developer/tooz/drivers.html

specs/liberty/skeleton.rst Symbolic link

@@ -0,0 +1 @@
../skeleton.rst

specs/skeleton.rst Normal file

@@ -0,0 +1,103 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Title of your blueprint
=======================
Problem Description
===================
Proposed Change
===============
Data Model Impact
-----------------
REST API Impact
---------------
Security Impact
---------------
Notifications Impact
--------------------
Other End User Impact
---------------------
Performance Impact
------------------
Other Deployer Impact
---------------------
Developer Impact
----------------
Community Impact
----------------
Alternatives
------------
Implementation
==============
Assignee(s)
-----------
Work Items
----------
Dependencies
============
Testing
=======
Tempest Tests
-------------
Functional Tests
----------------
API Tests
---------
Documentation Impact
====================
User Documentation
------------------
Developer Documentation
-----------------------
References
==========

specs/template.rst Normal file

@@ -0,0 +1,471 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/akanda/+spec/example
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that **operators, deployers, and developers** can understand.
If your specification proposes any changes to the Akanda REST API such
as changing parameters which can be returned or accepted, or even
the semantics of what happens when a client calls into the API, then
you should add the APIImpact flag to the commit message. Specifications with
the APIImpact flag can be found with the following query:
https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs+message:apiimpact,n,z
Problem Description
===================
A detailed description of the problem:
* For a new feature this should be use cases. Ensure you are clear about the
actors in each use case: End User vs Deployer
* For a major reworking of something existing it would describe the
problems in that feature that are being addressed.
Proposed Change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
Data Model Impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes will this require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API Impact
---------------
For each API resource to be implemented, describe the resource
collection and specify the name, type, and other essential details of
each new or modified attribute. A table similar to the following may
be used:
+----------+-------+---------+---------+------------+--------------+
|Attribute |Type |Access |Default |Validation/ |Description |
|Name | | |Value |Conversion | |
+==========+=======+=========+=========+============+==============+
|id |string |RO, all |generated|N/A |identity |
| |(UUID) | | | | |
+----------+-------+---------+---------+------------+--------------+
|name |string |RW, all |'' |string |human-readable|
| | | | | |name |
+----------+-------+---------+---------+------------+--------------+
|color |string |RW, admin|'red' |'red', |color |
| | | | |'yellow', or|indicating |
| | | | |'green' |state |
+----------+-------+---------+---------+------------+--------------+
Here is another example of the same table, using csv-table:
.. csv-table:: CSVTable
:header: Attribute Name,Type,Access,Default Value,Validation Conversion,Description
id,string (UUID),"RO, all",generated,N/A,identity
name,string,"RW, all","''",string,human-readable name
color,string,"RW, admin",red,"'red', 'yellow' or 'green'",color indicating state
Each API method which is either added or changed should have the following:
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* Parameters which can be passed via the url
* JSON schema definition for the body data if allowed
* JSON schema definition for the response data if any
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any API policy changes, and discuss what things a deployer needs to
think about when defining their API policy. This is in reference to the
policy.json file.
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted (e.g.,
additionalProperties should be False).
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
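For instance, a restrictive schema for the example resource above might look
like the following sketch (attribute names from the tables above; validated
here with the ``jsonschema`` library)::

    from jsonschema import validate

    EXAMPLE_RESOURCE_SCHEMA = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string'},
            'color': {'enum': ['red', 'yellow', 'green']},
        },
        'required': ['name'],
        # Reject undeclared parameters instead of silently accepting them.
        'additionalProperties': False,
    }

    # Raises jsonschema.ValidationError on any undeclared or invalid field.
    validate({'name': 'router1', 'color': 'red'}, EXAMPLE_RESOURCE_SCHEMA)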
Security Impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines
[#security_guidelines]_ as a reference. These guidelines are a work in
progress and are designed to help you identify security best practices.
For further information, feel free to reach out to the OpenStack Security
Group at openstack-security@lists.openstack.org.
.. [#security_guidelines] OpenStack Security Guidelines
https://wiki.openstack.org/wiki/Security/Guidelines
Notifications Impact
--------------------
Please specify any changes to notifications. Be that an extra notification,
changes to an existing notification, or removing a notification.
Other End User Impact
---------------------
Aside from the API, are there other ways a user will interact with this feature?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* A small change in a utility function or a commonly used decorator can have a
  large impact on performance.
* Calls which result in database queries (whether direct or via conductor) can
  have a profound impact on performance when called in critical sections of
  the code.
* Will the change include any locking, and if so what considerations are there on
holding the lock?
Other Deployer Impact
---------------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
* Does this require downtime or manual intervention to apply when upgrading?
Developer Impact
----------------
Discuss things that will affect other developers working on OpenStack,
such as:
* If the blueprint proposes a change to the API, discussion of how other
plugins would implement the feature is required.
Community Impact
----------------
Describe how this change fits in with the direction the Akanda community is
going.
* Has the change been discussed on mailing lists, at the weekly Akanda
meeting, or at a Design Summit?
* Does the change fit with the direction of the Akanda community?
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in Akanda, or in other
projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Akanda (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Tempest Tests
-------------
List new, changed, or deleted Tempest tests in this section. If a blueprint
has been filed in the Tempest specs repository, please cross reference that
blueprint here.
Functional Tests
----------------
Please document any functional tests which this change will require. New
features will require functional tests before being allowed to be merged.
Code refactors may require functional tests.
API Tests
---------
Add changes to API tests in this section. This is required if the change is
adding, removing, or changing any API related code in Akanda.
Documentation Impact
====================
What is the impact on the docs team of this change? Some changes might require
donating resources to the docs team to have the documentation updated. Don't
repeat details discussed above, but please reference them here.
User Documentation
------------------
Specify any User Documentation which needs to be changed. Reference the guides
which need updating due to this change.
Developer Documentation
-----------------------
If API changes are being made, specify the developer API documentation which
will be updated to reflect the new changes here.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. link any vendor documentation)
* Anything else you feel it is worthwhile to refer to
NOTE: Please remove everything from here and down. This section is meant to
show examples of how to format the spec.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 80 columns.
* The filename in the git repository should match the launchpad URL, for
example a URL of: https://blueprints.launchpad.net/akanda/+spec/awesome-thing
should be named awesome-thing.rst
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox, or see:
http://rst.ninjs.org
* If you would like to provide a diagram with your spec, text representations
are preferred. http://asciiflow.com/ is a very nice tool to assist with
making ascii diagrams. blockdiag is another tool. These are described below.
If you require an image (screenshot) for your BP, attaching that to the BP
and checking it in is also accepted. However, text representations are preferred.
* Diagram examples
asciiflow::
+----------+ +-----------+ +----------+
| A | | B | | C |
| +-----+ +--------+ |
+----------+ +-----------+ +----------+
blockdiag
.. blockdiag::
blockdiag sample {
a -> b -> c;
}
actdiag
.. actdiag::
actdiag {
write -> convert -> image
lane user {
label = "User"
write [label = "Writing reST"];
image [label = "Get diagram IMAGE"];
}
lane actdiag {
convert [label = "Convert reST to Image"];
}
}
nwdiag
.. nwdiag::
nwdiag {
network dmz {
address = "210.x.x.x/24"
web01 [address = "210.x.x.1"];
web02 [address = "210.x.x.2"];
}
network internal {
address = "172.x.x.x/24";
web01 [address = "172.x.x.1"];
web02 [address = "172.x.x.2"];
db01;
db02;
}
}
seqdiag
.. seqdiag::
seqdiag {
browser -> webserver [label = "GET /index.html"];
browser <-- webserver;
browser -> webserver [label = "POST /blog/comment"];
webserver -> database [label = "INSERT comment"];
webserver <-- database;
browser <-- webserver;
}