Begin migrating docs to sphinx

There are getting to be enough docs for OVB that a flat README will
become unwieldy.  Migrating to Sphinx will also be more consistent
with how other OpenStack projects work.

The flat README is left for now, as I don't have a hosting location
for the Sphinx docs yet.
Ben Nemec 2017-01-11 17:18:40 -06:00
parent b69f56330c
commit f2215e6c62
12 changed files with 503 additions and 0 deletions

doc/source/conf.py Normal file

@@ -0,0 +1,123 @@
# openstack-virtual-baremetal documentation build configuration file, created by
# sphinx-quickstart on Wed Feb 25 10:56:57 2015.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
import sphinx_rtd_theme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'oslosphinx'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = []
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'OpenStack Virtual Baremetal'
copyright = u'2015, Red Hat Inc.'
bug_tracker = u'Github'
bug_tracker_url = u'https://github.com/cybertron/openstack-virtual-baremetal'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.0.1'
# The full version, including alpha/beta/rc tags.
release = '0.0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
html_static_path = []
# html_style = 'custom.css'
templates_path = []
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
rst_prolog = """
.. |project| replace:: %s
.. |bug_tracker| replace:: %s
.. |bug_tracker_url| replace:: %s
""" % (project, bug_tracker, bug_tracker_url)

doc/source/deploy/baremetal.rst Normal file

@@ -0,0 +1,82 @@
Deploying a Standalone Baremetal Stack
======================================
#. Create provisioning network.
.. note:: The CIDR used for the subnet does not matter.
Standard tenant and external networks are also needed to
provide floating IP access to the undercloud and BMC instances.
.. warning:: Do not enable DHCP on this network. Addresses will be
assigned by the undercloud Neutron.
::
neutron net-create provision
neutron subnet-create --name provision --no-gateway --disable-dhcp provision 192.0.2.0/24
#. Create "public" network.
.. note:: The CIDR used for the subnet does not matter.
This can be used as the network for the public API endpoints
on the overcloud, but it does not have to be accessible
externally. Only the undercloud VM will need to have access
to this network.
.. warning:: Do not enable DHCP on this network. Doing so may cause
conflicts between the host cloud metadata service and the
undercloud metadata service. Overcloud nodes will be
assigned addresses on this network by the undercloud Neutron.
::
neutron net-create public
neutron subnet-create --name public --no-gateway --disable-dhcp public 10.0.0.0/24
#. Copy the example env file and edit it to reflect the host environment::
cp templates/env.yaml.example env.yaml
vi env.yaml
#. Deploy the stack::
bin/deploy.py
#. Wait for the Heat stack to complete:
.. note:: The BMC instance does post-deployment configuration that can
take a while to complete, so the Heat stack completing does
not necessarily mean the environment is entirely ready for
use. To determine whether the BMC is finished starting up,
run ``nova console-log bmc``. The BMC service outputs a
message like "Managing instance [uuid]" when it is fully
configured. There should be one of these messages for each
baremetal instance.
::
heat stack-show baremetal
#. Boot a VM to serve as the undercloud::
nova boot undercloud --flavor m1.large --image centos7 --nic net-id=[tenant net uuid] --nic net-id=[provisioning net uuid]
neutron floatingip-create [external net uuid]
neutron port-list
neutron floatingip-associate [floatingip uuid] [undercloud instance port id]
#. Build a nodes.json file that can be imported into Ironic::
bin/build-nodes-json
scp nodes.json centos@[undercloud floating ip]:~/instackenv.json
.. note:: ``build-nodes-json`` also outputs a file named ``bmc_bm_pairs``
that lists which BMC address corresponds to a given baremetal
instance. An illustrative sketch of the nodes.json structure appears
after this list.
#. The undercloud VM can now be used with something like TripleO
to do a baremetal-style deployment to the virtual baremetal instances
deployed previously.
#. If using the full network isolation provided by OS::OVB::BaremetalNetworks,
then the overcloud can be created with the network templates in
the ``network-templates`` directory.
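For orientation, the generated nodes.json follows the usual instackenv.json
structure. The excerpt below is only an illustrative sketch; the actual MAC
addresses, BMC addresses, credentials, and hardware values are generated from
the deployed environment and will differ::

    {
      "nodes": [
        {
          "pm_type": "pxe_ipmitool",
          "pm_addr": "192.0.2.100",
          "pm_user": "admin",
          "pm_password": "password",
          "mac": ["fa:16:3e:00:11:22"],
          "cpu": "2",
          "memory": "6144",
          "disk": "50",
          "arch": "x86_64"
        }
      ]
    }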

doc/source/deploy/deploy.rst Normal file

@@ -0,0 +1,9 @@
Deploying the Heat stack
========================
There are two options for deploying the Heat stack.
.. toctree::
quintupleo
baremetal

doc/source/deploy/quintupleo.rst Normal file

@@ -0,0 +1,37 @@
Deploying with QuintupleO
=========================
QuintupleO is short for OpenStack on OpenStack on OpenStack. TripleO
(OpenStack on OpenStack) deploys OpenStack using OpenStack tools; in a
QuintupleO environment, that entire TripleO deployment runs on instances
provided by a host OpenStack cloud, adding a third layer.
#. Copy the example env file and edit it to reflect the host environment (a hypothetical excerpt is shown after this list)::
cp templates/env.yaml.example env.yaml
vi env.yaml
#. Deploy a QuintupleO stack::
bin/deploy.py --quintupleo
#. Wait for the Heat stack to complete:
.. note:: The BMC instance does post-deployment configuration that can
take a while to complete, so the Heat stack completing does
not necessarily mean the environment is entirely ready for
use. To determine whether the BMC is finished starting up,
run ``nova console-log bmc``. The BMC service outputs a
message like "Managing instance [uuid]" when it is fully
configured. There should be one of these messages for each
baremetal instance.
::
heat stack-show quintupleo
#. Build a nodes.json file that can be imported into Ironic::
bin/build-nodes-json
scp nodes.json centos@[undercloud floating ip]:~/instackenv.json
.. note:: ``build-nodes-json`` also outputs a file named ``bmc_bm_pairs``
that lists which BMC address corresponds to a given baremetal
instance.
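As a rough orientation for the first step, env.yaml is a Heat environment file
whose parameters describe the host cloud resources created earlier. The excerpt
below is hypothetical; the authoritative parameter names and defaults are in
``templates/env.yaml.example``::

    parameters:
      # Names below are illustrative; consult env.yaml.example for the real keys
      key_name: default
      node_count: 2
      baremetal_flavor: baremetal
      bmc_flavor: bmc
      bmc_image: CentOS-7-x86_64-GenericCloud
      external_net: external
      private_net: private
      provision_net: provision
      public_net: public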

doc/source/host-cloud/configuration.rst Normal file

@@ -0,0 +1,49 @@
Configuring the Host Cloud
==========================
Some of the configuration recommended below is optional, but applying
all of it will provide the optimal experience.
The changes described in this document apply to compute nodes in the
host cloud.
#. Neutron must be configured to use the NoopFirewallDriver. Edit
``/etc/neutron/plugins/ml2/ml2_conf.ini`` and set the option
``firewall_driver`` in the ``[securitygroup]`` section as follows::
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
#. In Liberty and later versions, ARP spoofing must be disabled. Edit
``/etc/neutron/plugins/ml2/ml2_conf.ini`` and set the option
``prevent_arp_spoofing`` in the ``[agent]`` section as follows::
prevent_arp_spoofing = False
#. The Nova option ``force_config_drive`` must *not* be set.
#. Ideally, jumbo frames should be enabled on the host cloud. This
avoids MTU problems when deploying to instances over tunneled
Neutron networks with VXLAN or GRE.
For TripleO-based host clouds, this can be done by setting ``mtu``
on all interfaces and VLANs in the network isolation nic-configs
(a sketch of such a fragment appears after this list). A value of
at least 1550 should be sufficient to avoid problems.
If this cannot be done (perhaps because you don't have access to make
such a change on the host cloud), it will likely be necessary to
configure a smaller MTU on the deployed virtual instances. For a
TripleO undercloud, Neutron should be configured to advertise a
smaller MTU to instances. Run the following as root::
# Replace 'eth1' with the actual device to be used for the
# provisioning network
ip link set eth1 mtu 1350
echo -e "\ndhcp-option-force=26,1350" >> /etc/dnsmasq-ironic.conf
systemctl restart 'neutron-*'
If network isolation is in use, the templates must also configure
the MTU as discussed above, except that it should be set to 1350
instead of 1550.
#. Restart ``nova-compute`` and ``neutron-openvswitch-agent`` to apply the
changes above.
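For the jumbo frames step above, the fragment below is a hedged sketch of what
setting ``mtu`` in a network isolation nic-config can look like. Interface
names and the surrounding template structure vary by environment; only the
``mtu`` lines are the point here::

    network_config:
      - type: interface
        name: nic2        # illustrative interface name
        mtu: 1550
      - type: ovs_bridge
        name: br-ex
        mtu: 1550
        members:
          - type: interface
            name: nic3    # illustrative interface name
            mtu: 1550
            primary: true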

doc/source/host-cloud/patches.rst Normal file

@@ -0,0 +1,29 @@
Patching the Host Cloud
=======================
The changes described in this section apply to compute nodes in the
host cloud.
Apply the Nova PXE boot patch file in the ``patches`` directory to the host
cloud Nova. Examples:
TripleO/RDO::
sudo patch -p1 -d /usr/lib/python2.7/site-packages < patches/nova/nova-pxe-boot.patch
Devstack:
.. note:: You probably don't want to try to run this with Devstack anymore.
Devstack no longer supports rejoining an existing stack, so if you
have to reboot your host cloud, you will have to rebuild from
scratch.
.. note:: The patch may not apply cleanly against master Nova
code. If/when that happens, the patch will need to
be applied manually.
::
cp patches/nova/nova-pxe-boot.patch /opt/stack/nova
cd /opt/stack/nova
patch -p1 < nova-pxe-boot.patch

doc/source/host-cloud/prepare.rst Normal file

@@ -0,0 +1,63 @@
Preparing the Host Cloud Environment
====================================
#. Source an rc file that will provide admin credentials for the host cloud.
#. Upload an ipxe-boot image for the baremetal instances::
glance image-create --name ipxe-boot --disk-format qcow2 --property os_shutdown_timeout=5 --container-format bare < ipxe/ipxe-boot.qcow2
.. note:: The path provided to ipxe-boot.qcow2 is relative to the root of
the OVB repo. If the command is run from a different working
directory, the path will need to be adjusted accordingly.
.. note:: os_shutdown_timeout=5 is to avoid server shutdown delays, since
these servers won't respond to graceful shutdown requests.
.. note:: On a UEFI-enabled OpenStack cloud, to boot the baremetal instances
with UEFI (instead of the default BIOS firmware), the image should be
created with the additional parameter ``--property hw_firmware_type=uefi``
(an example command appears after this list).
#. Upload a CentOS 7 image for use as the base image::
wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
glance image-create --name CentOS-7-x86_64-GenericCloud --disk-format qcow2 --container-format bare < CentOS-7-x86_64-GenericCloud.qcow2
#. (Optional) Create a pre-populated base BMC image. This is a CentOS 7 image
with the required packages for the BMC pre-installed. This eliminates one
potential point of failure during the deployment of an OVB environment
because the BMC will not require any external network resources::
wget https://repos.fedorapeople.org/repos/openstack-m/ovb/bmc-base.qcow2
glance image-create --name bmc-base --disk-format qcow2 --container-format bare < bmc-base.qcow2
To use this image, configure ``bmc_image`` in env.yaml to be ``bmc-base`` instead
of the generic CentOS 7 image.
#. Create recommended flavors::
nova flavor-create baremetal auto 6144 50 2
nova flavor-create bmc auto 512 20 1
These flavors can be customized if desired. For large environments
with many baremetal instances, it may be wise to give the bmc flavor
more memory. A 512 MB BMC will run out of memory at around 20 baremetal
instances.
#. Source an rc file that will provide user credentials for the host cloud.
#. Add a Nova keypair to be injected into instances::
nova keypair-add --pub-key ~/.ssh/id_rsa.pub default
#. (Optional) Configure quotas. When running in a dedicated OVB cloud, it may
be helpful to set some quotas to very large/unlimited values to avoid
running out of quota when deploying multiple or large environments::
neutron quota-update --security_group 1000
neutron quota-update --port -1
neutron quota-update --network -1
neutron quota-update --subnet -1
nova quota-update --instances -1 --cores -1 --ram -1 [tenant uuid]
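For the UEFI note in the ipxe-boot step above, the upload command would look
something like the following sketch; it simply adds the extra property to the
earlier command::

    glance image-create --name ipxe-boot --disk-format qcow2 \
        --property os_shutdown_timeout=5 --property hw_firmware_type=uefi \
        --container-format bare < ipxe/ipxe-boot.qcow2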

doc/source/host-cloud/setup.rst Normal file

@@ -0,0 +1,16 @@
Host Cloud Setup
================
Instructions for setting up the host cloud [1]_.
.. [1] The host cloud is any OpenStack cloud providing the necessary functionality
to run OVB. In a TripleO deployment, this would be the overcloud.
.. warning:: This process requires patches and configuration settings that
may not be appropriate for production clouds.
.. toctree::
patches
configuration
prepare

doc/source/index.rst Normal file

@@ -0,0 +1,22 @@
OpenStack Virtual Baremetal
===========================
OpenStack Virtual Baremetal is a tool for using OpenStack instances to test
baremetal-style deployments.
Table of Contents
=================
.. toctree::
:maxdepth: 2
introduction
host-cloud/setup
deploy/deploy
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

doc/source/introduction.rst Normal file

@@ -0,0 +1,55 @@
Introduction
============
OpenStack Virtual Baremetal is a way to use OpenStack instances to do
simulated baremetal deployments. This project is a collection of tools
and documentation that make it much easier to do so. It primarily consists
of the following pieces:
- Patches and documentation for setting up a host cloud.
- A deployment CLI that leverages the OpenStack Heat project to deploy the
VMs, networks, and other resources needed.
- An OpenStack BMC that can be used to control OpenStack instances via IPMI
commands.
- A tool to collect details from the "baremetal" VMs so they can be added as
nodes in the OpenStack Ironic baremetal deployment project.
A basic OVB environment is just a BMC VM configured to control a number
of "baremetal" VMs. This allows them to be treated largely the same
way a real baremetal system with a BMC would be. A number of additional
features can also be enabled to add more to the environment.
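For example, once an environment is up, each "baremetal" VM can be power-cycled
through its BMC address with standard IPMI tooling. The address and credentials
below are placeholders; the real values come from the deployed environment::

    # Illustrative only: query and change the power state of one baremetal VM
    ipmitool -I lanplus -H 192.0.2.100 -U admin -P password power status
    ipmitool -I lanplus -H 192.0.2.100 -U admin -P password power on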
OVB was initially conceived as an improved method to deploy environments for
OpenStack TripleO development and testing. As such, much of the terminology
is specific to TripleO. However, it should be possible to use it for any
non-TripleO scenarios where a baremetal-style deployment is desired.
Benefits and Drawbacks
----------------------
As noted above, OVB started as part of the OpenStack TripleO project.
Previous methods for deploying virtual environments for TripleO focused on
setting up all the VMs for a given environment on a single box. This had a
number of drawbacks:
- Each developer needed to have their own system. Sharing was possible, but
more complex and generally not done. Multi-tenancy is a basic design
tenet of OpenStack, so this is not a problem when using it to provision the
VMs. A large number of developers can make use of a much smaller number of
physical systems.
- If a deployment called for more VMs than could fit on a single system, it
was a complex manual process to scale out to multiple systems. An OVB
environment is only limited by the number of instances the host cloud can
support.
- Pre-OVB test environments were generally static because there was not an API
for dynamic provisioning. By using the OpenStack API to create all of the
resources, test environments can be easily tailored to their intended use
case.
One drawback to OVB at this time is that it is generally not compatible with
current public clouds. While it is possible to do an OVB deployment on a
completely stock OpenStack cloud, most public clouds have restrictions (older
OpenStack releases, inability to upload new images, no Heat, etc.) that make
it problematic. For now, OVB is primarily used with semi-private clouds
configured for ideal compatibility. This situation should improve as more
public clouds move to newer OpenStack releases, however.

doc/source/usage.rst Normal file

@@ -0,0 +1,13 @@
Using a Deployed OVB Environment
================================
After an OVB environment has been deployed, there are a few things to know.
#. The undercloud VM can be used with something like TripleO
to do a baremetal-style deployment to the virtual baremetal instances
deployed previously.
#. If using the full network isolation provided by OS::OVB::BaremetalNetworks,
then a TripleO overcloud can be deployed in the OVB environment by using
the network templates in the ``network-templates`` (for IPv4) or
``ipv6-network-templates`` (for IPv6) directories.

test-requirements.txt

@@ -5,3 +5,8 @@ python-subunit>=0.0.18
testrepository>=0.0.18
testtools>=0.9.36,!=1.2.0
mock>=1.0
# docs
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
oslosphinx>=2.2.0 # Apache-2.0
sphinx_rtd_theme==0.1.7
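With these requirements installed, the new docs can be built locally using the
layout added in this commit (a standard Sphinx invocation, not an official
project target)::

    pip install -r test-requirements.txt
    sphinx-build -b html doc/source doc/build/html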