Expand documentation and change to Sphinx from Markdown

Mark Goddard 2017-03-29 14:02:51 +01:00
parent 08b83abc22
commit 61f7f804cb
13 changed files with 539 additions and 246 deletions

.gitignore (vendored, 62 lines changed)

@@ -1,19 +1,53 @@
# vim and emacs temp files
*~
[._]*.s[a-w][a-z]
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
cover/
.coverage*
!.coveragerc
.tox
.venv
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# Editors
*~
.*.swp
.*sw?
# Files generated by Ansible
ansible/*.retry
# Others
.DS_Store
.vimrc
# Ansible Galaxy roles
ansible/roles/ahuffman.resolv/
ansible/roles/jriguera.configdrive/
@@ -26,9 +60,3 @@ ansible/roles/yatesr.timezone/
# Virtualenv
ansible/kolla-venv/
# Tox
.tox/
# Python build artifacts
kayobe.egg-info/

CONTRIBUTING.rst (new file, 6 lines changed)

@@ -0,0 +1,6 @@
Kayobe does not currently follow the upstream OpenStack development process,
but we will still be incredibly grateful for any contributions.
Please raise issues and submit pull requests via GitHub.
Thanks in advance!

README.md (deleted, 203 lines changed)

@@ -1,203 +0,0 @@
# Kayobe
## Overview
Kayobe is a tool for automating deployment of Scientific OpenStack onto bare
metal. Kayobe is composed of Ansible playbooks, a python module, and makes
heavy use of the OpenStack Kolla project.
## Prerequisites
Currently Kayobe supports the following Operating Systems:
- CentOS 7.3
To avoid conflicts with python packages installed by the system package manager
it is recommended to install Kayobe in a virtualenv. Ensure that the
`virtualenv` python module is available on the control host. For example, on
CentOS:
$ yum install -y python-virtualenv
## Installation
This guide will describe how to install Kayobe from source in a virtualenv.
First, obtain the Kayobe source code. For example:
$ git clone https://github.com/stackhpc/kayobe
To create a virtualenv for Kayobe:
$ cd kayobe
$ virtualenv kayobe-venv
Activate the virtualenv and update pip:
$ source kayobe-venv/bin/activate
(kayobe-venv) $ pip install -U pip
Install Kayobe and its dependencies using the source code checkout:
(kayobe-venv) $ pip install .
At this point the `kayobe` Command Line Interface (CLI) should be available. To
see information on how to use the CLI:
(kayobe-venv) $ kayobe help
Finally, deactivate the virtualenv:
(kayobe-venv) $ deactivate
## Configuration
Kayobe configuration is by default located in `/etc/kayobe` on the Ansible
control host. This can be overridden to a different location to avoid touching
the system configuration directory by setting the environment variable
`KAYOBE_CONFIG_PATH`. Similarly, Kolla configuration on the Ansible control
host will by default be located in `/etc/kolla` and can be overridden via
`KOLLA_CONFIG_PATH`.
The baseline Kayobe configuration should be copied to the Kayobe configuration
path:
$ cp -r etc/ ${KAYOBE_CONFIG_PATH:-/etc/kayobe}
Once in place, each of the YAML files should be inspected and configured as
required.
## Usage
This section describes usage of Kayobe to install an OpenStack cloud onto bare
metal. We assume access is available to a node which will act as the hypervisor
hosting the seed node in a VM. We also assume that this seed hypervisor has
access to the bare metal nodes that will form the OpenStack control plane.
Finally, we assume that the control plane nodes have access to the bare metal
nodes that will form the workload node pool.
NOTE: Where a prompt starts with `(kayobe-venv)` it is implied that the user
has activated the Kayobe virtualenv. This can be done as follows:
$ source kayobe-venv/bin/activate
To deactivate the virtualenv:
(kayobe-venv) $ deactivate
### Ansible Control Host
Before starting deployment we must bootstrap the Ansible control host. Tasks
here include:
- Install Ansible and role dependencies from Ansible Galaxy
- Generate an SSH key if necessary and add it to authorized\_keys
- Configure Kolla Ansible
To bootstrap the Ansible control host:
(kayobe-venv) $ kayobe control host bootstrap
### Seed
The seed hypervisor should have CentOS and `libvirt` installed. It should have
`libvirt` networks configured for all networks that the seed VM needs access
to. To provision the seed VM:
(kayobe-venv) $ kayobe seed vm provision
When this command has completed the seed VM should be active and accessible via
SSH. Kayobe will update the Ansible inventory with the dynamically assigned IP
address of the VM.
At this point the seed services need to be deployed on the seed VM. These
services include Docker and the Kolla `bifrost-deploy` container. This command
will also build the image to be used to deploy the overcloud nodes using Disk
Image Builder (DIB). To configure the seed host OS:
(kayobe-venv) $ kayobe seed host configure
If the seed host uses disks that have been in use in a previous installation,
it may be necessary to wipe partition and LVM data from those disks. To wipe
all disks that are not mounted during host configuration:
(kayobe-venv) $ kayobe seed host configure --wipe-disks
It is possible to use prebuilt container images from an image registry such as
Dockerhub. In some cases it may be necessary to build images locally either to
apply local image customisation or to use a downstream version of Kolla. To
build images locally:
(kayobe-venv) $ kayobe seed container image build
To deploy the seed services in containers:
(kayobe-venv) $ kayobe seed service deploy
After this command has completed the seed services will be active. For SSH
access to the seed VM, first determine the seed VM's IP address:
$ sudo virsh domifaddr <seed VM name>
The `kayobe_user` variable determines which user account will be used by Kayobe
when accessing the machine via SSH. By default this is `stack`. Use this user
to access the seed:
$ ssh stack@<seed VM IP>
To see the active Docker containers:
$ docker ps
Leave the seed VM and return to the shell on the control host:
$ exit
### Overcloud
Provisioning of the overcloud is performed by Bifrost running in a container on
the seed. An inventory of servers should be configured using the
`kolla_bifrost_servers` variable. To provision the overcloud nodes:
(kayobe-venv) $ kayobe overcloud provision
After this command has completed the overcloud nodes should have been
provisioned with an OS image. To configure the overcloud hosts' OS:
(kayobe-venv) $ kayobe overcloud host configure
If the controller hosts use disks that have been in use in a previous
installation, it may be necessary to wipe partition and LVM data from those
disks. To wipe all disks that are not mounted during host configuration:
(kayobe-venv) $ kayobe overcloud host configure --wipe-disks
It is possible to use prebuilt container images from an image registry such as
Dockerhub. In some cases it may be necessary to build images locally either to
apply local image customisation or to use a downstream version of Kolla. To
build images locally:
(kayobe-venv) $ kayobe overcloud container image build
To deploy the overcloud services in containers:
(kayobe-venv) $ kayobe overcloud service deploy
Once this command has completed the overcloud nodes should have OpenStack
services running in Docker containers. Kolla writes out an environment file
that can be used to access the OpenStack services:
$ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/admin-openrc.sh
### Other Useful Commands
To run an arbitrary Kayobe playbook:
(kayobe-venv) $ kayobe playbook run <playbook> [<playbook>]
To execute a Kolla Ansible command:
(kayobe-venv) $ kayobe kolla ansible run <command>
To dump Kayobe configuration for one or more hosts:
(kayobe-venv) $ kayobe configuration dump

README.rst (new file, 33 lines changed)

@@ -0,0 +1,33 @@
======
Kayobe
======
Deployment of Scientific OpenStack using OpenStack kolla.
Kayobe is a tool for automating deployment of Scientific OpenStack onto a set
of bare metal servers. Kayobe is composed of Ansible playbooks and a python
module, and makes heavy use of the OpenStack kolla project. Kayobe aims to
complement the kolla-ansible project, providing an opinionated yet highly
configurable OpenStack deployment and automation of many operational
procedures.
* Documentation: https://github.com/stackhpc/kayobe/tree/master/docs
* Source: https://github.com/stackhpc/kayobe
* Bugs: https://github.com/stackhpc/kayobe/issues
Features
--------
* Heavily automated using Ansible
* *kayobe* Command Line Interface (CLI) for cloud operators
* Deployment of a *seed* VM used to manage the OpenStack control plane
* Configuration of physical network infrastructure
* Discovery, introspection and provisioning of control plane hardware using
`OpenStack bifrost <https://docs.openstack.org/developer/bifrost/>`_
* Deployment of an OpenStack control plane using `OpenStack kolla-ansible
<https://docs.openstack.org/developer/kolla-ansible/>`_
* Discovery, introspection and provisioning of bare metal compute hosts
using `OpenStack ironic <https://docs.openstack.org/developer/ironic/>`_ and
`ironic inspector <https://docs.openstack.org/developer/ironic-inspector/>`_
Plus more to follow...

doc/source/architecture.rst (new file)

@@ -0,0 +1,48 @@
============
Architecture
============
Hosts in the System
===================
In a system deployed by Kayobe we define a number of classes of hosts.
Control host
The control host is the host on which kayobe, kolla and kolla-ansible will
be installed, and is typically where the cloud will be managed from.
Seed host
The seed host runs the bifrost deploy container and is used to provision
the cloud hosts. Typically the seed host is deployed as a VM but this is
not mandatory.
Cloud hosts
The cloud hosts run the OpenStack control plane, storage, and virtualised
compute services. Typically the cloud hosts run on bare metal but this is
not mandatory.
Bare metal compute hosts
In a cloud providing bare metal compute services to tenants via ironic,
these hosts will run the bare metal tenant workloads. In a cloud with only
virtualised compute this category of hosts does not exist.
.. note::
In many cases the control and seed host will be the same, although this is
not mandatory.
Networks
========
Kayobe's network configuration is very flexible but does define a few default
classes of networks. These are logical networks and may map to one or more
physical networks in the system.
Overcloud provisioning network
The overcloud provisioning network is used by the seed host to provision
the cloud hosts.
Workload provisioning network
The workload provisioning network is used by the cloud hosts to provision
the bare metal compute hosts.
Internal network
The internal network hosts the internal and admin OpenStack API endpoints.
External network
The external network hosts the public OpenStack API endpoints and provides
external network access for the hosts in the system.

doc/source/conf.py (new executable file, 77 lines changed)

@@ -0,0 +1,77 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    #'sphinx.ext.intersphinx',
    # Uncomment this to enable the OpenStack documentation style, adding
    # oslosphinx to test-requirements.txt.
    #'oslosphinx',
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'kayobe'
copyright = u'2017, StackHPC Ltd.'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

doc/source/contributing.rst (new file)

@@ -0,0 +1,4 @@
=================
How to Contribute
=================
.. include:: ../../CONTRIBUTING.rst

doc/source/index.rst (new file, 33 lines changed)

@@ -0,0 +1,33 @@
.. kayobe documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to Kayobe's documentation!
==================================
.. include:: ../../README.rst
Documentation
-------------
.. note::
Kayobe and its documentation are currently under heavy development, and
therefore may be incomplete or out of date. If in doubt, contact the
project's maintainers.
.. toctree::
:maxdepth: 2
architecture
installation
usage
Developer Documentation
-----------------------
.. toctree::
:maxdepth: 2
contributing

doc/source/installation.rst (new file)

@@ -0,0 +1,43 @@
============
Installation
============
Prerequisites
=============
Currently Kayobe supports the following Operating Systems:
- CentOS 7.3
To avoid conflicts with python packages installed by the system package
manager, it is recommended to install Kayobe in a virtualenv. Ensure that the
``virtualenv`` python module is available on the control host. For example, on
CentOS::
$ yum install -y python-virtualenv
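To verify that the module is now available (an optional check; ``virtualenv``
is the standard tool rather than part of Kayobe)::
$ virtualenv --version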
Installation
============
This guide will describe how to install Kayobe from source in a virtualenv.
First, obtain the Kayobe source code. For example::
$ git clone https://github.com/stackhpc/kayobe
Create a virtualenv for Kayobe::
$ cd kayobe
$ virtualenv kayobe-venv
Activate the virtualenv and update pip::
$ source kayobe-venv/bin/activate
(kayobe-venv) $ pip install -U pip
Install Kayobe and its dependencies using the source code checkout::
(kayobe-venv) $ pip install .
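At this point the ``kayobe`` Command Line Interface (CLI) should be available.
To see information on how to use the CLI::
(kayobe-venv) $ kayobe help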
Finally, deactivate the virtualenv::
(kayobe-venv) $ deactivate

doc/source/usage.rst (new file, 193 lines changed)

@@ -0,0 +1,193 @@
=====
Usage
=====
This section describes usage of Kayobe to install an OpenStack cloud onto a set
of bare metal servers. We assume access is available to a node which will act
as the hypervisor hosting the seed node in a VM. We also assume that this seed
hypervisor has access to the bare metal nodes that will form the OpenStack
control plane. Finally, we assume that the control plane nodes have access to
the bare metal nodes that will form the workload node pool.
Configuration
=============
Kayobe configuration is by default located in ``/etc/kayobe`` on the Ansible
control host. This can be overridden to a different location to avoid touching
the system configuration directory by setting the environment variable
``KAYOBE_CONFIG_PATH``. Similarly, kolla configuration on the Ansible control
host will by default be located in ``/etc/kolla`` and can be overridden via
``KOLLA_CONFIG_PATH``.
From a checkout of the Kayobe repository, the baseline Kayobe configuration
should be copied to the Kayobe configuration path::
$ cp -r etc/ ${KAYOBE_CONFIG_PATH:-/etc/kayobe}
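For example, to keep the configuration outside of the system configuration
directories, the environment variables can be exported before running the copy
and any subsequent ``kayobe`` commands (the paths used here are purely
illustrative)::
$ export KAYOBE_CONFIG_PATH=$HOME/kayobe-config
$ export KOLLA_CONFIG_PATH=$HOME/kolla-config
$ cp -r etc/ $KAYOBE_CONFIG_PATH
Note that these variables must be set in any shell from which ``kayobe``
commands are later run.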
Once in place, each of the YAML and inventory files should be manually
inspected and configured as required.
Command Line Interface
======================
.. note::
Where a prompt starts with ``(kayobe-venv)`` it is implied that the user has
activated the Kayobe virtualenv. This can be done as follows::
$ source kayobe-venv/bin/activate
To deactivate the virtualenv::
(kayobe-venv) $ deactivate
To see information on how to use the ``kayobe`` CLI and the commands it
provides::
(kayobe-venv) $ kayobe help
As the ``kayobe`` CLI is based on the ``cliff`` package (as used by the
``openstack`` client), it supports tab auto-completion of subcommands. This
can be activated by generating and then sourcing the bash completion script::
(kayobe-venv) $ kayobe complete > kayobe-complete
(kayobe-venv) $ source kayobe-complete
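Optionally, the completion script can be sourced from the shell's startup file
so that it persists across sessions (an illustrative convenience rather than a
Kayobe requirement)::
(kayobe-venv) $ echo "source $PWD/kayobe-complete" >> ~/.bashrc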
Ansible Control Host
====================
Before starting deployment we must bootstrap the Ansible control host. Tasks
performed here include:
- Install Ansible and role dependencies from Ansible Galaxy.
- Generate an SSH key if necessary and add it to the current user's authorised
keys.
- Configure kolla and kolla-ansible.
To bootstrap the Ansible control host::
(kayobe-venv) $ kayobe control host bootstrap
Seed
====
The seed hypervisor should have CentOS and ``libvirt`` installed. It should
have ``libvirt`` networks configured for all networks that the seed VM needs
access to and a ``libvirt`` storage pool available for the seed VM's volumes.
To provision the seed VM::
(kayobe-venv) $ kayobe seed vm provision
When this command has completed the seed VM should be active and accessible via
SSH. Kayobe will update the Ansible inventory with the IP address of the VM.
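If provisioning fails because an expected ``libvirt`` network or storage pool
is missing, the hypervisor's configuration can be inspected with the standard
``virsh`` client (these are plain libvirt commands, not ``kayobe`` commands)::
$ sudo virsh net-list --all
$ sudo virsh pool-list --all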
At this point the seed services need to be deployed on the seed VM. These
services include Docker and the kolla ``bifrost-deploy`` container. This
command will also build the Operating System image that will be used to deploy
the overcloud nodes using Disk Image Builder (DIB).
To configure the seed host OS::
(kayobe-venv) $ kayobe seed host configure
.. note::
If the seed host uses disks that have been in use in a previous
installation, it may be necessary to wipe partition and LVM data from those
disks. To wipe all disks that are not mounted during host configuration::
(kayobe-venv) $ kayobe seed host configure --wipe-disks
It is possible to use prebuilt container images from an image registry such as
Dockerhub. In some cases it may be necessary to build images locally either to
apply local image customisation or to use a downstream version of kolla. To
build images locally::
(kayobe-venv) $ kayobe seed container image build
To deploy the seed services in containers::
(kayobe-venv) $ kayobe seed service deploy
After this command has completed the seed services will be active.
Accessing the Seed via SSH
--------------------------
For SSH access to the seed VM, first determine the seed VM's IP address. We can
use the ``kayobe configuration dump`` command to inspect the seed's IP
address::
(kayobe-venv) $ kayobe configuration dump --host seed --var-name ansible_host
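Alternatively, if you have shell access to the seed hypervisor, the address can
be queried directly from ``libvirt``::
$ sudo virsh domifaddr <seed VM name>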
The ``kayobe_ansible_user`` variable determines which user account will be used
by Kayobe when accessing the machine via SSH. By default this is ``stack``.
Use this user to access the seed::
$ ssh <kayobe ansible user>@<seed VM IP>
To see the active Docker containers::
$ docker ps
Leave the seed VM and return to the shell on the control host::
$ exit
Overcloud
=========
.. note::
Automated discovery of the overcloud nodes is not currently documented.
Provisioning of the overcloud is performed by bifrost running in a container on
the seed. A static inventory of servers may be configured using the
``kolla_bifrost_servers`` variable. To provision the overcloud nodes::
(kayobe-venv) $ kayobe overcloud provision
After this command has completed the overcloud nodes should have been
provisioned with an OS image. To configure the overcloud hosts' OS::
(kayobe-venv) $ kayobe overcloud host configure
.. note::
If the controller hosts use disks that have been in use in a previous
installation, it may be necessary to wipe partition and LVM data from those
disks. To wipe all disks that are not mounted during host configuration::
(kayobe-venv) $ kayobe overcloud host configure --wipe-disks
It is possible to use prebuilt container images from an image registry such as
Dockerhub. In some cases it may be necessary to build images locally either to
apply local image customisation or to use a downstream version of kolla. To
build images locally::
(kayobe-venv) $ kayobe overcloud container image build
To deploy the overcloud services in containers::
(kayobe-venv) $ kayobe overcloud service deploy
Once this command has completed the overcloud nodes should have OpenStack
services running in Docker containers. Kolla-ansible writes out an environment
file that can be used to access the OpenStack services::
$ source ${KOLLA_CONFIG_PATH:-/etc/kolla}/admin-openrc.sh
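The credentials in this file can then be used with the standard OpenStack
client. As an illustrative check (this assumes ``python-openstackclient`` has
been installed separately; it is not documented here as a Kayobe dependency)::
(kayobe-venv) $ pip install python-openstackclient
(kayobe-venv) $ openstack service list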
Other Useful Commands
=====================
To run an arbitrary Kayobe playbook::
(kayobe-venv) $ kayobe playbook run <playbook> [<playbook>]
To execute a kolla-ansible command::
(kayobe-venv) $ kayobe kolla ansible run <command>
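For example, to run kolla-ansible's ``deploy`` command (the command name here
comes from kolla-ansible and is shown purely as an illustration)::
(kayobe-venv) $ kayobe kolla ansible run deploy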
To dump Kayobe configuration for one or more hosts::
(kayobe-venv) $ kayobe configuration dump
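For example, to dump a single variable for the seed host, using the same flags
shown earlier for discovering the seed's IP address::
(kayobe-venv) $ kayobe configuration dump --host seed --var-name kayobe_ansible_user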

setup.cfg (new file, 28 lines changed)

@@ -0,0 +1,28 @@
[metadata]
name = kayobe
summary = Deployment of Scientific OpenStack using OpenStack Kolla
description-file =
README.rst
author = Mark Goddard
author-email = mark@stackhpc.com
home-page = https://stackhpc.com
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
[files]
packages =
kayobe
[build_sphinx]
all-files = 1
source-dir = doc/source
build-dir = doc/build
[upload_sphinx]
upload-dir = doc/build/html

test-requirements.txt

@@ -1,5 +1,9 @@
hacking
coverage
flake8-import-order
mock
unittest2
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=0.12.0,<0.13 # Apache-2.0
coverage>=4.0 # Apache-2.0
sphinx>=1.5.1 # BSD
oslotest>=1.10.0 # Apache-2.0

tox.ini (41 lines changed)

@@ -1,35 +1,34 @@
[tox]
minversion = 1.8
minversion = 2.0
envlist = py34,py27,pypy,pep8
skipsdist = True
envlist = py27,pep8
[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
PYTHONDONTWRITEBYTECODE = 1
LANGUAGE=en_US
LC_ALL=en_US.UTF-8
PYTHONWARNINGS=default::DeprecationWarning
TESTS_DIR=./kayobe/tests/unit/
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/ocata} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
PYTHONWARNINGS=default::DeprecationWarning
TESTS_DIR=./kayobe/tests/unit/
deps = -r{toxinidir}/test-requirements.txt
commands = unit2 discover {posargs}
passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
[testenv:pep8]
whitelist_externals = bash
commands =
flake8 {posargs}
commands = flake8 {posargs}
[testenv:venv]
setenv = PYTHONHASHSEED=0
commands = {posargs}
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:debug]
commands = oslo_debug_helper {posargs}
[flake8]
exclude = .venv,.git,.tox,dist,doc,*lib/python*,*egg,build
import-order-style = pep8
max-complexity=17
# [H106] Don't put vim configuration in source files.
# [H203] Use assertIs(Not)None to check for None.
# [H904] Delay string interpolations at logging calls.
enable-extensions=H106,H203,H904
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build