Copying in docs folder from the openstack-attic/akanda repo

Co-authored by: Sean Roberts <seanroberts66@gmail.com>
Co-authored by: Adam Gandelman <adamg@ubuntu.com>
Co-authored by: Ryan Petrello <lists@ryanpetrello.com>

Implements: blueprint astara-doc-updates-mitaka

Change-Id: I88997da80a3b07189ac3e0ea5f1a1cb0d5807dc0
David Lenwell 2015-11-10 10:08:13 -08:00
parent f2a02ad887
commit 342372cf82
20 changed files with 1225 additions and 49 deletions

192
docs/Makefile Normal file

@ -0,0 +1,192 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/akanda.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/akanda.qhc"
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/akanda"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/akanda"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

Seven binary image files are added with this commit (content not shown in this view);
the only one named here is docs/source/_static/rug.png (3.8 MiB).

338
docs/source/appliance.rst Normal file

@ -0,0 +1,338 @@
.. _appliance:
The Service VM (the Akanda Appliance)
=====================================
Akanda uses Linux-based images (stored in OpenStack Glance) to provide layer
3 routing and advanced networking services. Akanda, Inc provides stable image
releases for download at `akanda.io <http://akanda.io>`_, but it's also
possible to build your own custom Service VM image (running additional
services of your own on top of the routing and other default services provided
by Akanda).
.. _appliance_build:
Building a Service VM image from source
---------------------------------------
The router code that runs within the appliance is hosted in the ``akanda-appliance``
repository at ``https://github.com/stackforge/akanda-appliance``. Additional tooling
for actually building a VM image to run the appliance is located in that repository's
``disk-image-builder`` sub-directory, in the form of elements to be used with
``diskimage-builder``. The following instructions walk through building the
Debian-based appliance locally, publishing it to Glance, and configuring the RUG to
use that image. These instructions assume the image is built on an Ubuntu 14.04+ system.
Install Prerequisites
+++++++++++++++++++++
First, install ``diskimage-builder`` and required packages:
::
sudo apt-get -y install debootstrap qemu-utils
sudo pip install "diskimage-builder<0.1.43"
Next, clone the ``akanda-appliance`` repository:
::
git clone https://github.com/stackforge/akanda-appliance
Build the image
+++++++++++++++
Kick off an image build using diskimage-builder:
::
cd akanda-appliance
ELEMENTS_PATH=diskimage-builder/elements DIB_RELEASE=wheezy DIB_EXTLINUX=1 \
disk-image-create debian vm akanda -o akanda
Publish the image
+++++++++++++++++
The previous step should produce a qcow2 image called ``akanda.qcow2`` that can be
published into Glance for use by the system:
::
# We assume you have the required OpenStack credentials set as environment
# variables
glance image-create --name akanda --disk-format qcow2 --container-format bare \
--file akanda.qcow2
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | cfc24b67e262719199c2c4dfccb6c808 |
| container_format | bare |
| created_at | 2015-05-13T21:27:02.000000 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | e2caf7fa-9b51-4f42-9fb9-8cfce96aad5a |
| is_public | False |
| min_disk | 0 |
| min_ram | 0 |
| name | akanda |
| owner | df8eaa19c1d44365911902e738c2b10a |
| protected | False |
| size | 450573824 |
| status | active |
| updated_at | 2015-05-13T21:27:03.000000 |
| virtual_size | None |
+------------------+--------------------------------------+
Configure the RUG
+++++++++++++++++
Take the above image id and set the corresponding value in the RUG's config file, to instruct
the service to use that image for software router instances it manages:
::
vi /etc/akanda/rug.ini
...
router_image_uuid=e2caf7fa-9b51-4f42-9fb9-8cfce96aad5a
Making local changes to the appliance service
+++++++++++++++++++++++++++++++++++++++++++++
By default, building an image in this way pulls the ``akanda-appliance`` code directly
from the upstream tip of trunk. If you'd like to make modifications to this code locally
and build an image containing those changes, set ``DIB_REPOLOCATION_akanda`` and ``DIB_REPOREF_akanda``
in your environment accordingly during the image build, e.g.:
::
export DIB_REPOLOCATION_akanda=~/src/akanda-appliance # Location of the local repository checkout
export DIB_REPOREF_akanda=my-new-feature # The branch name or SHA-1 hash of the git ref to build from.
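With those variables exported, re-run the same build command shown earlier and the
resulting image will contain your local changes::
cd akanda-appliance
ELEMENTS_PATH=diskimage-builder/elements DIB_RELEASE=wheezy DIB_EXTLINUX=1 \
disk-image-create debian vm akanda -o akanda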
.. _appliance_rest:
REST API
--------
The Akanda Appliance REST API is used by the :ref:`rug` service to manage
health and configuration of services on the router.
Router Health
+++++++++++++
``HTTP GET /v1/status/``
~~~~~~~~~~~~~~~~~~~~~~~~
Used to confirm that a router is responsive and has external network connectivity.
::
Example HTTP 200 Response
Content-Type: application/json
{
'v4': true,
'v6': false,
}
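For example, an operator who can reach the appliance over the management network
could exercise the same check by hand (the address and port below are placeholders
for the appliance's management address and API port, not values documented here)::
$ curl http://[<appliance-management-address>]:<api-port>/v1/status/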
Router Configuration
++++++++++++++++++++
``HTTP GET /v1/firewall/rules/``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used to retrieve an overview of configured firewall rules for the router (from
``iptables -L`` and ``ip6tables -L``).
::
Example HTTP 200 Response
Content-Type: text/plain
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 8
...
``HTTP GET /v1/system/interface/<ifname>/``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used to retrieve JSON data about a specific interface on the router.
::
Example HTTP 200 Response
Content-Type: application/json
{
"interface": {
"addresses": [
"8.8.8.8",
"2001:4860:4860::8888",
],
"description": "",
"groups": [],
"ifname": "ge0",
"lladdr": "fa:16:3f:de:21:e9",
"media": null,
"mtu": 1500,
"state": "up"
}
}
``HTTP GET /v1/system/interfaces``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used to retrieve JSON data about `every` interface on the router.
::
Example HTTP 200 Response
Content-Type: application/json
{
"interfaces": [{
"addresses": [
"8.8.8.8",
"2001:4860:4860::8888",
],
"description": "",
"groups": [],
"ifname": "ge0",
"lladdr": "fa:16:3f:de:21:e9",
"media": null,
"mtu": 1500,
"state": "up"
}, {
...
}]
}
``HTTP PUT /v1/system/config/``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Used (generally, by :program:`akanda-rug-service`) to push a new configuration
to the router and restart services as necessary:
::
Example HTTP PUT Body
Content-Type: application/json
{
"configuration": {
"networks": [
{
"address_allocations": [],
"interface": {
"addresses": [
"8.8.8.8",
"2001:4860:4860::8888"
],
"description": "",
"groups": [],
"ifname": "ge1",
"lladdr": null,
"media": null,
"mtu": 1500,
"state": "up"
},
"name": "",
"network_id": "f0f8c937-9fb7-4a58-b83f-57e9515e36cb",
"network_type": "external",
"v4_conf_service": "static",
"v6_conf_service": "static"
},
{
"address_allocations": [],
"interface": {
"addresses": [
"..."
],
"description": "",
"groups": [],
"ifname": "ge0",
"lladdr": "fa:16:f8:90:32:e3",
"media": null,
"mtu": 1500,
"state": "up"
},
"name": "",
"network_id": "15016de1-494b-4c65-97fb-475b40acf7e1",
"network_type": "management",
"v4_conf_service": "static",
"v6_conf_service": "static"
},
{
"address_allocations": [
{
"device_id": "7c400585-1743-42ca-a2a3-6b30dd34f83b",
"hostname": "10-10-10-1.local",
"ip_addresses": {
"10.10.10.1": true,
"2607:f298:6050:f0ff::1": false
},
"mac_address": "fa:16:4d:c3:95:81"
}
],
"interface": {
"addresses": [
"10.10.10.1/24",
"2607:f298:6050:f0ff::1/64"
],
"description": "",
"groups": [],
"ifname": "ge2",
"lladdr": null,
"media": null,
"mtu": 1500,
"state": "up"
},
"name": "",
"network_id": "31a242a0-95aa-49cd-b2db-cc00f33dfe88",
"network_type": "internal",
"v4_conf_service": "static",
"v6_conf_service": "static"
}
],
"static_routes": []
}
}
Survey of Software and Services
-------------------------------
The Akanda Appliance uses a variety of software and services to manage routing
and advanced services, such as:
* ``iproute2`` tools (e.g., ``ip neigh``, ``ip addr``, ``ip route``, etc...)
* ``dnsmasq``
* ``bird6``
* ``iptables`` and ``ip6tables``
In addition, the Akanda Appliance includes two Python-based services:
* The REST API (which :program:`akanda-rug-service` communicates with to
orchestrate router updates), deployed behind `gunicorn
<http://gunicorn.org>`_.
* A Python-based metadata proxy.
Proxying Instance Metadata
--------------------------
When OpenStack VMs boot with ``cloud-init``, they look for metadata on a
well-known address, ``169.254.169.254``. To facilitate this process, Akanda
sets up a special NAT rule (one for each local network)::
-A PREROUTING -i eth2 -d 169.254.169.254 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.10.10.1:9602
...and a firewall rule that ensures the router's management address is only reachable
via the management network (where OpenStack Nova is running and will answer the
proxied requests)::
-A INPUT -i !eth0 -d <management-v6-address-of-router> -j DROP
A Python-based metadata proxy runs locally on the router (in this example,
listening on ``http://10.10.10.1:9602``) and proxies these metadata requests
over the management network so that instances on local tenant networks will
have access to server metadata.
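As a quick illustrative check, an instance attached to one of the router's tenant
networks should be able to fetch its metadata through this proxy using the standard
metadata endpoint (the specific key requested below is just an example)::
$ curl http://169.254.169.254/latest/meta-data/instance-id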

docs/source/conf.py

@ -1,26 +1,10 @@
# Copyright 2014 DreamHost, LLC
#
# Author: DreamHost, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# -*- coding: utf-8 -*-
#
# Akanda RUG documentation build configuration file, created by
# sphinx-quickstart on Mon Mar 17 14:59:20 2014.
# akanda documentation build configuration file, created by
# sphinx-quickstart on Thu Apr 2 14:55:06 2015.
#
# This file is execfile()d with the current directory set to its containing dir.
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
@ -28,26 +12,34 @@
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
import sys
import os
import shlex
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.graphviz']
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.intersphinx',
'sphinx.ext.graphviz'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
@ -57,21 +49,25 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'Akanda RUG'
copyright = u'2014, DreamHost'
project = u'akanda'
copyright = u'2015, Akanda, Inc'
author = u'Akanda, Inc'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.0'
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '0.0'
release = '1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
@ -83,7 +79,8 @@ release = '0.0'
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
@ -103,12 +100,17 @@ pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ---------------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
@ -139,6 +141,11 @@ html_theme = 'default'
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
@ -180,11 +187,24 @@ html_static_path = ['_static']
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'AkandaRUGdoc'
htmlhelp_basename = 'akandadoc'
# -- Options for LaTeX output --------------------------------------------------
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
@ -195,13 +215,17 @@ latex_elements = {
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'AkandaRUG.tex', u'Akanda RUG Documentation',
u'DreamHost', 'manual'),
(master_doc, 'akanda.tex', u'akanda Documentation',
u'Akanda, Inc', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
@ -225,27 +249,27 @@ latex_documents = [
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'akandarug', u'Akanda RUG Documentation',
[u'DreamHost'], 1)
(master_doc, 'akanda', u'akanda Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'AkandaRUG', u'Akanda RUG Documentation',
u'DreamHost', 'AkandaRUG', 'One line description of project.',
(master_doc, 'akanda', u'akanda Documentation',
author, 'akanda', 'One line description of project.',
'Miscellaneous'),
]
@ -257,3 +281,10 @@ texinfo_documents = [
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}

docs/source/contribute.rst Normal file

@ -0,0 +1,20 @@
Contributing
============
Installing Akanda Locally for Development
-----------------------------------------
Akanda's own `continuous integration <http://ci.akanda.io>`_ is open source
`(github.com/akanda/akanda-ci) <https://github.com/akanda/akanda-ci>`_, and
includes `Ansible <http://ansibleworks.com>`_ playbooks which can be used to
spin up the Akanda platform with a `devstack
<http://docs.openstack.org/developer/devstack/>`_ installation::
$ pip install ansible
$ ansible-playbook -vv playbooks/ansible-devstack.yml -e "branch=stable/juno"
Submitting Code Upstream
------------------------
All of Akanda's code is 100% open-source and is hosted `on GitHub
<http://github.com/akanda/>`_. Pull requests are welcome!

docs/source/developer_quickstart.rst Normal file

@ -0,0 +1,92 @@
.. _developer_quickstart:
Akanda Developer Quickstart
=====================================
This guide helps new developers get up and running with an Akanda development
environment. The Akanda components may be easily deployed alongside OpenStack
using DevStack. For more information about DevStack, visit
``http://docs.openstack.org/developer/devstack/``.
.. _developer_quickstart_rest:
Deploying Akanda using DevStack
-------------------------------
Preparation and prerequisites
+++++++++++++++++++++++++++++
Deploying DevStack on your local workstation is not recommended. Instead,
developers should use a dedicated virtual machine. Currently, Ubuntu
Trusty 14.04 is the tested and supported base operating system. Additionally,
you'll need at least 4GB of RAM and to have ``git`` installed::
sudo apt-get -y install git
First clone the DevStack repository::
sudo mkdir -p /opt/stack/
sudo chown `whoami` /opt/stack
git clone https://git.openstack.org/openstack-dev/devstack /opt/stack/devstack
Configuring DevStack
++++++++++++++++++++
Next, you will need to enable the Akanda plugin in the DevStack configuration
and enable the relevant services::
cat >/opt/stack/devstack/local.conf <<END
[[local|localrc]]
enable_plugin akanda-rug https://github.com/stackforge/akanda-rug
enable_service q-svc q-agt ak-rug
disable_service n-net
HOST_IP=127.0.0.1
LOGFILE=/opt/stack/devstack/devstack.log
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_TOKEN=secret
SERVICE_PASSWORD=secret
ADMIN_PASSWORD=secret
END
You may wish to SSH into the appliance VMs for debugging purposes. The RUG will
enable access for the ``akanda`` user using a specified public key. This may be
configured by setting the ``AKANDA_APPLIANCE_SSH_PUBLIC_KEY`` variable in your
devstack config to point to an existing public key. The default is
``$HOME/.ssh/id_rsa.pub``.
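For example, to use a dedicated key instead of the default (the key path below is
purely illustrative)::
cat >>/opt/stack/devstack/local.conf <<END
AKANDA_APPLIANCE_SSH_PUBLIC_KEY=/home/stack/.ssh/akanda_dev.pub
END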
Building a Custom Service VM
++++++++++++++++++++++++++++
By default, the Akanda plugin downloads a pre-built official Akanda image. To
build your own from source, enable ``BUILD_AKANDA_APPLIANCE_IMAGE`` and specify
a repository and branch to build from::
cat >>/opt/stack/devstack/local.conf <<END
BUILD_AKANDA_APPLIANCE_IMAGE=True
AKANDA_APPLIANCE_REPO=http://github.com/stackforge/akanda-appliance.git
AKANDA_APPLIANCE_BRANCH=master
END
To build the appliance using locally modified ``akanda-appliance`` code, you
may point devstack at the local git checkout by setting the
``AKANDA_APPLIANCE_DIR`` variable. Ensure that any changes you want included in
the image build have been committed to the repository and that it is checked out
at the proper commit.
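For example (the checkout location below is illustrative)::
cat >>/opt/stack/devstack/local.conf <<END
AKANDA_APPLIANCE_DIR=/opt/stack/akanda-appliance
END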
Deploying
+++++++++
Simply run DevStack and allow time for the deployment to complete::
cd /opt/stack/devstack
./stack.sh
After it has completed, you should have an ``akanda-rug`` process running
alongside the other services and an Akanda router appliance booted as a Nova
instance.
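A quick way to sanity-check the deployment (a sketch assuming a default DevStack
setup; the appliance's instance name will vary)::
# attach to the DevStack screen session and look for the ak-rug service window
screen -x stack
# list all instances as the admin user; the Akanda appliance should appear here
source /opt/stack/devstack/openrc admin admin
nova list --all-tenants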

docs/source/index.rst

@ -1,4 +1,35 @@
Akanda Docs
===========
.. akanda documentation master file, created by
sphinx-quickstart on Thu Apr 2 14:55:06 2015.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Find the docs in the `Akanda repository <http://docs.akanda.io/>`_
Akanda
======
Akanda is the only 100% open source network virtualization platform built by
OpenStack operators for real OpenStack clouds. Originally developed by
`DreamHost <https://dreamhost.com>`_ for their OpenStack-based public cloud,
`DreamCompute <https://dreamhost.com/compute/cloud>`_, Akanda eliminates the
need for complex SDN controllers, overlays, and multiple plugins by providing
a simple integrated networking stack (routing, firewall, and load balancing via
a :ref:`virtual Service VM <appliance>`) for connecting and securing
multi-tenant OpenStack environments.
Narrative Documentation
-----------------------
.. toctree::
:maxdepth: 2
what_is_akanda.rst
rug.rst
appliance.rst
contribute.rst
operation.rst
developer_quickstart.rst
reference.rst
Licensing
---------
Akanda is licensed under the Apache-2.0 license and is copyright `Akanda, Inc
<http://akanda.io>`_.

99
docs/source/operation.rst Normal file

@ -0,0 +1,99 @@
.. _operator_tools:
Operation and Deployment
========================
Installation
------------
You can install from GitHub directly with ``pip``::
$ pip install -e git://github.com/stackforge/akanda-rug.git@stable/kilo#egg=akanda-rug
After installing :py:mod:`akanda.rug`, it can be invoked as::
$ akanda-rug-service --config-file /etc/akanda-rug/rug.ini
The :py:mod:`akanda.rug` service is intended to run on a management network (a
separate network for use by your cloud operators). This segregation prevents
system administration and the monitoring of system access from being disrupted
by traffic generated by guests.
Operator Tools
--------------
rug-ctl
+++++++
:program:`rug-ctl` is a tool which can be used to send manual instructions to
a running :py:mod:`akanda.rug` via AMQP::
$ rug-ctl browse
A curses console interface for browsing the state
of every Neutron router and issuing `rebuild` commands
$ rug-ctl poll
Sends a POLL instruction to every router to check health
$ rug-ctl router rebuild <router-id>
Sends a REBUILD instruction to a specific router
$ rug-ctl router update <router-id>
Sends an UPDATE instruction to a specific router
$ rug-ctl router debug <router-id>
Places a specific router in `debug mode`.
This causes the rug to ignore messages for the specified
router (so that, for example, operators can investigate
troublesome routers).
$ rug-ctl router manage <router-id>
Removes a specific router from `debug mode` and places
it back under akanda-rug management.
$ rug-ctl tenant debug <tenant-id>
Places a specific tenant in `debug mode`.
This causes the rug to ignore messages for the specified
tenant (so that, for example, operators can investigate
troublesome routers).
$ rug-ctl tenant manage <tenant-id>
Removes every router for a specific tenant from `debug mode`
and places the tenant back under akanda-rug management.
$ rug-ctl ssh <router-id>
Establishes an ssh connection with a specified Service VM.
$ rug-ctl workers debug
Causes the rug to print debugging diagnostics about the
current state of its worker processes and the state machines
under their management.
:program:`akanda-rug` also exposes an RPC API on the management network,
which allows non-interactive `rug-ctl` commands to be issued via HTTP, e.g.,
::
$ curl -X PUT -g6 "http://[fdca:3ba5:a17a:acda::1]:44250/poll/"
$ curl -X PUT -g6 "http://[fdca:3ba5:a17a:acda::1]:44250/workers/debug/"
$ curl -X PUT -g6 "http://[fdca:3ba5:a17a:acda::1]:44250/router/rebuild/<ID>"
akanda-debug-router
+++++++++++++++++++
:program:`akanda-debug-router` is a diagnostic tool which can be used to
analyze the state machine flow of any router and step through its operation
using Python's debugger. This is particularly useful for development purposes
and understanding the nature of the :py:mod:`akanda.rug` state machine, but it's
also useful for debugging problematic routers as an operator; a common pattern
for determining why a Service VM won't boot is to place the router in `debug
mode`::
$ rug-ctl router debug <router-id>
...and then step through the handling of a manual ``UPDATE`` event to see where
it fails::
$ akanda-debug-router --router-id <router-id>

docs/source/reference.rst Normal file

@ -0,0 +1,5 @@
Configuration Options
=====================
``akanda-rug`` uses ``oslo.config`` for configuration, so its
configuration file format should be very familiar to OpenStack deployers.
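A minimal illustrative fragment is shown below. The only option taken from these
docs is ``router_image_uuid`` (see :ref:`appliance_build`); placing it in the
``[DEFAULT]`` section is an assumption made for the sake of the sketch, not a
statement of the actual option layout::
# /etc/akanda/rug.ini -- illustrative sketch only
[DEFAULT]
# Glance image to boot for software router instances
router_image_uuid = e2caf7fa-9b51-4f42-9fb9-8cfce96aad5a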

148
docs/source/rug.rst Normal file

@ -0,0 +1,148 @@
.. _rug:
Service VM Orchestration and Management
=======================================
RUG - Router Update Generator
-----------------------------
:program:`akanda-rug-service` is a multiprocessed, multithreaded Python process
composed of three primary subsystems, each of which is spawned as a subprocess
of the main :py:mod:`akanda.rug` process:
L3 and DHCP Event Consumption
-----------------------------
:py:mod:`akanda.rug.notifications` uses `kombu <https://pypi.python.org/pypi/kombu>`_
and a Python :py:mod:`multiprocessing.Queue` to listen for specific Neutron service
events (e.g., ``router.interface.create``, ``subnet.create.end``,
``port.create.end``, ``port.delete.end``) and normalize them into one of
several event types:
* ``CREATE`` - a router creation was requested
* ``UPDATE`` - services on a router need to be reconfigured
* ``DELETE`` - a router was deleted
* ``POLL`` - used by the :ref:`health monitor<health>` for checking aliveness
of a Service VM
* ``REBUILD`` - a Service VM should be destroyed and recreated
As events are normalized and shuttled onto the :py:mod:`multiprocessing.Queue`,
:py:mod:`akanda.rug.scheduler` shards them (by tenant ID, by default) and
distributes them amongst a pool of worker processes it manages.
This system also consumes and distributes special :py:mod:`akanda.rug.command` events
which are published by the :program:`rug-ctl` :ref:`operator tools<operator_tools>`.
State Machine Workers and Router Lifecycle
------------------------------------------
Each multithreaded worker process manages a pool of state machines (one
per virtual router), each of which represents the lifecycle of an individual
router. As the scheduler distributes events for a specific router, logic in
the worker (dependent on the router's current state) determines which action to
take next:
.. graphviz:: worker_diagram.dot
For example, let's say a user created a new Neutron network, subnet, and router.
In this scenario, a ``router-interface-create`` event would be handled by the
appropriate worker (based on tenant ID), and a transition through the state
machine might look something like this:
.. graphviz:: sample_boot.dot
State Machine Flow
++++++++++++++++++
The supported states in the state machine are:
:CalcAction: The entry point of the state machine. Depending on the
current status of the Service VM (e.g., ``ACTIVE``, ``BUILD``, ``SHUTDOWN``)
and the current event, determine the first step in the state machine to
transition to.
:Alive: Check aliveness of the Service VM by attempting to communicate with
it via its REST HTTP API.
:CreateVM: Call ``nova boot`` to boot a new Service VM. This will attempt
to boot a Service VM up to a (configurable) number of times before
placing the router into ``ERROR`` state.
:CheckBoot: Check aliveness (up to a configurable number of seconds) of the
router until the VM is responsive and ready for initial configuration.
:ConfigureVM: Configure the Service VM and its services. This is generally
the final step in the process of booting and configuring a router. This
step communicates with the Neutron API to generate a comprehensive network
configuration for the router (which is pushed to the router via its REST
API). On success, the state machine yields control back to the worker
thread and that thread handles the next event in its queue (likely for
a different Service VM and its state machine).
:ReplugVM: Attempt to hot-plug/unplug a network from the router via ``nova
interface-attach`` or ``nova interface-detach``.
:StopVM: Terminate a running Service VM. This is generally performed when
a Neutron router is deleted or via explicit operator tools.
:ClearError: After a (configurable) number of ``nova boot`` failures, Neutron
routers are automatically transitioned into a cooldown ``ERROR`` state
(so that :py:mod:`akanda.rug` will not continue to boot them forever; this is
to avoid placing further strain on failing hypervisors). This state
transition is used to add routers back into management after issues
are resolved, signaling to :py:mod:`akanda.rug` that it should attempt
to manage them again.
:STATS: Reads traffic data from the router.
:CONFIG: Configures the VM and its services.
:EXIT: Processing stops.
ACT(ion) Variables are:
:Create: Create router was requested.
:Read: Read router traffic stats.
:Update: Update router configuration.
:Delete: Delete router.
:Poll: Poll router alive status.
:rEbuild: Recreate a router from scratch.
VM Variables are:
:Down: VM is known to be down.
:Booting: VM is booting.
:Up: VM is known to be up (pingable).
:Configured: VM is known to be configured.
:Restart Needed: VM needs to be rebooted.
:Hotplug Needed: VM needs to be replugged.
:Gone: The router definition has been removed from neutron.
:Error: The router has been rebooted too many times, or has had some
other error.
.. graphviz:: state_machine.dot
.. _health:
Health Monitoring
-----------------
``akanda.rug.health`` is a subprocess which (at a configurable interval)
periodically delivers ``POLL`` events to every known virtual router. This
event transitions the state machine into the ``Alive`` state, which (depending
on the availability of the router), may simply exit the state machine (because
the router's status API replies with an ``HTTP 200``) or transition to the
``CreateVM`` state (because the router is unresponsive and must be recreated).
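The same aliveness check can also be triggered on demand using the
:ref:`operator tools<operator_tools>`::
$ rug-ctl poll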

docs/source/sample_boot.dot Normal file

@ -0,0 +1,14 @@
digraph sample_boot {
rankdir=LR;
node [shape = doublecircle];
CalcAction;
node [shape = circle];
CalcAction -> Alive;
Alive -> CreateVM;
CreateVM -> CheckBoot;
CheckBoot -> CheckBoot;
CheckBoot -> ConfigureVM;
}

docs/source/state_machine.dot Normal file

@ -0,0 +1,57 @@
digraph rug {
// rankdir=LR;
node [shape = rectangle];
START;
// These nodes enter and exit the state machine.
node [shape = doublecircle];
EXIT;
CALC_ACTION;
node [shape = circle];
START -> CALC_ACTION;
CALC_ACTION -> ALIVE [ label = "ACT>[CRUP],vm:[UC]" ];
CALC_ACTION -> CREATE_VM [ label = "ACT>[CRUP],vm:D" ];
CALC_ACTION -> CHECK_BOOT [ label = "ACT>[CRUP],vm:B" ];
CALC_ACTION -> REBUILD_VM [ label = "ACT:E" ];
CALC_ACTION -> STOP_VM [ label = "ACT>D or vm:G" ];
CALC_ACTION -> CLEAR_ERROR [ label = "vm:E" ];
ALIVE -> CREATE_VM [ label = "vm>D" ];
ALIVE -> CONFIG [ label = "ACT:[CU],vm:[UC]" ];
ALIVE -> STATS [ label = "ACT:R,vm:C" ];
ALIVE -> CALC_ACTION [ label = "ACT:P,vm>[UC]" ];
ALIVE -> STOP_VM [ label = "vm:G" ];
CREATE_VM -> CHECK_BOOT [ label = "ACT:[CRUDP],vm:[DBUCR]" ];
CREATE_VM -> STOP_VM [ label = "vm:G" ];
CREATE_VM -> CALC_ACTION [ label = "vm:E" ];
CREATE_VM -> CREATE_VM [ label = "vm:D" ];
CHECK_BOOT -> CONFIG [ label = "vm>U" ];
CHECK_BOOT -> CALC_ACTION [ label = "vm:[BCR]" ];
CHECK_BOOT -> STOP_VM [ label = "vm:[DG]" ];
CONFIG -> STATS [ label = "ACT:R,vm>C" ];
CONFIG -> CALC_ACTION [ label = "ACT>P,vm>C" ];
CONFIG -> REPLUG_VM [ label = "vm>[H]" ];
CONFIG -> STOP_VM [ label = "vm>[RDG]" ];
REPLUG_VM -> CONFIG [ label = "vm>[H]" ];
REPLUG_VM -> STOP_VM [ label = "vm>[R]" ];
STATS -> CALC_ACTION [ label = "ACT>P" ];
CLEAR_ERROR -> CALC_ACTION [ label = "no pause before next action" ];
REBUILD_VM -> REBUILD_VM [ label = "vm!=[DG]" ];
REBUILD_VM -> CREATE_VM [ label = "ACT:E,vm:D" ];
STOP_VM -> CREATE_VM [ label = "ACT:E or vm>D" ];
STOP_VM -> EXIT [ label = "ACT:D,vm>D or vm:G" ];
}

docs/source/what_is_akanda.rst Normal file

@ -0,0 +1,122 @@
What Is Akanda
==============
Akanda is the only open source network virtualization solution built by OpenStack
operators for real OpenStack clouds.
Akanda follows core principles of simple, compatible, and open development.
The Akanda architecture is best understood through its building blocks. The
most important of these, the Akanda Rug, is a multi-process, multi-threaded
orchestration service which manages the lifecycle of Neutron Advanced Services.
Akanda currently provides a Router Service Instance, and will support additional
Neutron Advanced Services such as Load Balancing, VPN, and Firewall through its
driver model.
High-Level Architecture
-----------------------
Akanda is a network orchestration platform that delivers network services
(L3-L7) via Instances that provide routing, load balancing, firewall and more.
Akanda also interacts with any L2 overlay - including open source solutions
based on OVS and Linux bridge (VLAN, VXLAN, GRE) and most proprietary solutions
- to deliver a centralized management layer for all OpenStack networking decisions.
In a canonical OpenStack deployment, Neutron server emits L3 and DHCP
messages which are handled by a variety of Neutron agents (the L3 agent, DHCP
agent, agents for advanced services such as load balancing, firewall, and VPN
as a service):
.. image:: _static/neutron-canonical-v2.png
When we add Akanda into the mix, we're able to replace these agents with
a virtualized Service Instance that manages layer 3 routing and other advanced
networking services, significantly lowering the barrier of entry for operators
(in terms of deployment, monitoring and management):
.. image:: _static/neutron-akanda-v2.png
Akanda takes the place of many of the agents that OpenStack Neutron
communicates with (L3, DHCP, LBaaS, FWaaS) and acts as a single control point
for all networking services. By removing the complexity of extra agents, Akanda
can centrally manage DHCP and L3, orchestrate load balancing and VPN Services,
and overall reduce the number of components required to build, manage and
monitor complete virtual networks within your cloud.
Akanda Building Blocks
++++++++++++++++++++++
From an architectural perspective, Akanda is composed of a few sub-projects:
* | `akanda-rug <http://github.com/stackforge/akanda-rug>`_
A service for managing the creation, configuration, and health of Akanda
Service Instances in an OpenStack cloud. The :py:mod:`akanda-rug` acts in
part as a replacement for Neutron's various L3-L7 agents by listening for
Neutron AMQP events and coalescing them into software
router API calls (which configure and manage embedded services on the
Service Instance). Additionally, :py:mod:`akanda-rug` contains a health
monitoring component which monitors health and guarantees uptime for
existing Service Instances.
The rug really ties the room together
.. image:: _static/rug.png
* | `akanda-appliance <http://github.com/stackforge/akanda-appliance>`_
The software and services (including tools for building custom router
images themselves) that run on the virtualized Linux router. Includes
drivers for L3-L7 services and a RESTful API that :py:mod:`akanda-rug`
uses to orchestrate changes to router configuration.
* | `akanda-neutron <http://github.com/stackforge/akanda-neutron>`_
Addon API extensions and plugins for OpenStack Neutron which enable
functionality and integration with the Akanda project, notably Akanda
router appliance interaction.
* | `akanda-horizon <http://github.com/stackforge/akanda-horizon>`_
OpenStack Horizon panels providing management of the Akanda appliance.
Software Instance Lifecycle
+++++++++++++++++++++++++++
As Neutron emits events in reaction to network operations (e.g., a user creates
a new network/subnet, a user attaches a virtual machine to a network,
a floating IP address is associated, etc...), :py:mod:`akanda-rug` receives these
events, parses, and dispatches them to a pool of workers which manage the
lifecycle of every virtualized router.
This management of individual routers is handled via a state machine per
router; as events come in, the state machine for the appropriate router
transitions, modifying its virtualized router in a variety of ways, such as:
* Booting a virtual machine for the router via the Nova API (if one doesn't
exist).
* Checking for aliveness of the router via the :ref:`REST API
<appliance_rest>` on the Service Instance.
* Pushing configuration updates via the :ref:`REST API
<appliance_rest>` to configure routing
and manage services (such as ``iptables``, ``dnsmasq``, ``bird6``,
etc...).
* Deleting virtual machines via the Nova API (e.g., when a router is
deleted from Neutron).
The Router Service Instance (the Akanda Appliance)
--------------------------------------------------
Akanda uses Linux-based images (stored in OpenStack Glance) to provide layer 3
routing and advanced networking services. There is a stable image
available by default, but it's also possible to build your own
custom Service Instance image (running additional services of your own on top of
the routing and other default services provided by the project).
Traffic Flow Using Akanda Router Service Instances
--------------------------------------------------
.. image:: _static/akanda-ew-traffic.png
.. image:: _static/akanda-ns-traffic.png

docs/source/worker_diagram.dot Normal file

@ -0,0 +1,27 @@
digraph sample_boot {
node [shape = square];
AMQP;
"Event Processing + Scheduler";
Nova;
Neutron;
node [shape = circle];
AMQP -> "Event Processing + Scheduler";
subgraph clusterrug {
"Event Processing + Scheduler" -> "Worker 1";
"Event Processing + Scheduler" -> "Worker ...";
"Event Processing + Scheduler" -> "Worker N";
"Worker 1" -> "Thread 1"
"Worker 1" -> "Thread ..."
"Worker 1" -> "Thread N"
}
"Thread 1" -> "Service VM 1";
"Thread 1" -> "Service VM ..." [ label = "Appliance REST API" ];
"Thread 1" -> "Service VM N";
"Thread 1" -> "Nova" [ label = "Nova API" ];
"Thread 1" -> "Neutron" [ label = "Neutron API" ];
}