Move all the docs except api-docs to fuel-docs

Leave only auto-generated documentation in docs. All the other
docs are moved to fuel-docs repo.

Implements: blueprints.launchpad.net/fuel/+spec/fuel-docs-migration
Change-Id: I9f321acc802ca477f42b46e5aa1375c4084ba85a
Dzmitry Moisa 2016-03-14 18:08:05 +03:00
parent 8d7f83ebd4
commit eb673991cc
52 changed files with 5 additions and 7039 deletions

docs/README Normal file

@ -0,0 +1,2 @@
The development guide was moved from this directory to the openstack/fuel-docs repository.
Only auto-generated API documentation is left here.


@ -1,267 +0,0 @@
.. _buildsystem:
Fuel ISO build system
=====================
Use the `fuel-main repository <https://github.com/openstack/fuel-main.git>`_
to build Fuel components such as an ISO or an upgrade tarball.
This repository contains a set of GNU Make build scripts.
Quick start
-----------
1. You must use the Ubuntu 14.04 distribution to build Fuel components, or the build process may fail. Note that the build works only on x64 platforms.
2. Check whether git is installed on
your system. To do that, use the following command:
::
which git
If git is not found, install it with the following command:
::
apt-get install git
3. Clone the **fuel-main** git repository to the location where
you will work. The root of your repo will be named `fuel-main`.
In this example, it will be located under the *~/fuel* directory:
::
mkdir ~/fuel
cd ~/fuel
git clone https://github.com/openstack/fuel-main.git
cd fuel-main
.. note:: The Fuel build system consists of the following components:
* a shell script (**./prepare-build-env.sh**) - prepares the build environment by checking
that all necessary packages are installed and installing any that are not.
* **fuel-main** directory - the only repository required to build the Fuel ISO.
The make script then downloads the additional components
(Fuel Library, Nailgun, Astute and OSTF).
Unless otherwise specified in the makefile,
the master branch of each respective repo is used to build the ISO.
4. Run the shell script:
::
./prepare-build-env.sh
and wait until **prepare-build-env.sh**
installs the Fuel build environment on your computer.
5. After the script runs successfully, issue the following command to build a
Fuel ISO:
::
make iso
6. Use the following command to list the available make targets:
::
make help
For the full list of available targets with description, see :ref:`Build targets <build-targets>` section below.
Build system structure
----------------------
Fuel consists of several components such as the web interface,
Puppet modules, orchestration components, and testing components.
The source code of these components is split across multiple git
repositories, for example:
- https://github.com/openstack/fuel-web
- https://github.com/openstack/fuel-ui
- https://github.com/openstack/fuel-astute
- https://github.com/openstack/fuel-ostf
- https://github.com/openstack/fuel-library
- https://github.com/openstack/fuel-docs
The main component of the Fuel build system is the
*fuel-main* directory.
The Fuel build process is quite complicated,
so to keep the **fuel-main** code easily
maintainable, it is
split into a number of files and directories.
These files
and directories contain independent
(or at least almost independent)
pieces of the Fuel build system
(a directory listing is sketched after this list):
* **Makefile** - the main Makefile which includes all other make modules.
* **config.mk** - contains parameters used to customize the build process,
specifying items such as build paths,
upstream mirrors, source code repositories
and branches, built-in default Fuel settings and ISO name.
* **rules.mk** - defines frequently used macros.
* **repos.mk** - contains the make scripts that download the
other repositories holding the Fuel components
that are developed in separate repos.
* **sandbox.mk** - shell script definitions that create
and destroy the special chroot environment required to
build some components.
For example, a CentOS chroot environment is used to build
RPM packages and CentOS images.
* **mirror** - contains the code that downloads
all necessary packages from upstream mirrors and builds new
ones; these packages are copied onto the Fuel ISO so that
deployment works even without an Internet connection.
* **packages** - contains DEB and RPM
specs as well as the make code for building the packages
included in the Fuel DEB and RPM mirrors.
* **bootstrap** - contains a make script that builds
the CentOS-based miniroot image (a.k.a. initrd or initramfs).
* **docker** - contains the make scripts that build
the Docker containers deployed on the Fuel Master node.
* **iso** - contains the **make** scripts for building the Fuel ISO file.
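For orientation, here is a sketch of what a fresh **fuel-main** checkout looks like; the listing is illustrative and trimmed to the entries described above, so your checkout may contain additional files:
::
cd ~/fuel/fuel-main
ls
# typical top-level entries (illustrative; actual contents may differ):
# Makefile  config.mk  rules.mk  repos.mk  sandbox.mk  prepare-build-env.sh
# mirror/  packages/  bootstrap/  docker/  iso/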
.. _build-targets:
Build targets
-------------
* **all** - used for building all Fuel artifacts.
Currently, it is an alias for **iso** target.
* **bootstrap** - used for building the in-memory bootstrap
image that is used for node auto-discovery.
* **mirror** - used for building local mirrors (the copies of CentOS and
Ubuntu mirrors which are then placed into Fuel ISO).
They contain all necessary packages, including those listed in the
*requirements-\*.txt* files with their dependencies, as well as the Fuel packages.
Packages listed in the *requirements-\*.txt* files are downloaded
from upstream mirrors, while Fuel packages are built from source code.
* **iso** - used for building the Fuel ISO. If the build succeeds,
the ISO is put into the build/artifacts folder.
* **clean** - removes the build directory.
* **deep_clean** - removes the build directory and the local mirror.
Note that if you remove the local mirror, the next ISO build will
download all necessary packages again, so the process is faster
if you keep local mirrors. On the other hand, it is safer to run
*make deep_clean* every time you build an ISO to make sure the local
mirror is consistent (a rebuild sketch follows this list).
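As a sketch of the rebuild cycle described above, run from the root of the fuel-main checkout:
::
make deep_clean   # remove build/ and local_mirror/ for a fully consistent rebuild
make iso          # rebuild the ISO; the result lands in build/artifacts by default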
Customizing build process
-------------------------
There are plenty of variables in the make files.
Some of them act as build parameters and are defined
in the **config.mk** file (an override sketch follows this list):
* **TOP_DIR** - the current directory by default.
All other build directories are relative to this path.
* **BUILD_DIR** - contains all files, used during build process.
By default, it is **$(TOP_DIR)/build**.
* **ARTS_DIR** - contains build artifacts such as ISO and IMG files.
By default, it is **$(BUILD_DIR)/artifacts**.
* **LOCAL_MIRROR** - contains the local CentOS and Ubuntu mirrors.
By default, it is **$(TOP_DIR)/local_mirror**.
* **DEPS_DIR** - the directory where artifacts of previous build jobs
are placed before the build starts, for build targets that depend
on them. By default, it is **$(TOP_DIR)/deps**.
* **ISO_NAME** - the name of the Fuel ISO without the file extension:
if **ISO_NAME** = **MY_CUSTOM_NAME**, the Fuel ISO file will
be named **$(MY_CUSTOM_NAME).iso**.
* **ISO_PATH** - used to specify the full path of the Fuel ISO
instead of just its name.
By default, it is **$(ARTS_DIR)/$(ISO_NAME).iso**.
* The Fuel ISO contains default settings for the
Fuel Master node. These settings can also be changed
during Fuel Master node installation.
You can customize them at build time
using the following variables:
- **MASTER_IP** - the Fuel Master node IP address.
By default, it is 10.20.0.2.
- **MASTER_NETMASK** - Fuel Master node IP netmask.
By default, it is 255.255.255.0.
- **MASTER_GW** - the Fuel Master node default gateway.
By default, it is 10.20.0.1.
- **MASTER_DNS** - the upstream DNS server for the Fuel Master node.
The Fuel Master node DNS forwards all requests that it cannot resolve itself to this server.
By default, it is 10.20.0.1.
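Assuming the defaults in **config.mk** are plain make variables (the ``?=`` assignments shown later in this document suggest so), they can be overridden on the **make** command line without editing the file; a minimal sketch with example values:
::
make iso ISO_NAME=my-custom-fuel \
    MASTER_IP=10.20.0.2 MASTER_NETMASK=255.255.255.0 \
    MASTER_GW=10.20.0.1 MASTER_DNS=10.20.0.1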
Other options
-------------
* **[repo]_REPO** - the remote source code repository.
A URL of a git repository can be specified for each of the Fuel components
(FUELLIB, NAILGUN, ASTUTE, OSTF).
* **[repo]_COMMIT** - source branch for each of the Fuel components to build.
* **[repo]_GERRIT_URL** - gerrit repo.
* **[repo]_GERRIT_COMMIT** - list of extra commits from gerrit.
* **[repo]_SPEC_REPO** - repo for RPM/DEB specs of OpenStack packages.
* **[repo]_SPEC_COMMIT** - branch for checkout.
* **[repo]_SPEC_GERRIT_URL** - gerrit repo for OpenStack specs.
* **[repo]_SPEC_GERRIT_COMMIT** - list of extra commits from gerrit for specs.
* **USE_MIRROR** - use pre-built mirrors from the Fuel infrastructure.
The following values can be used:
* ext (external mirror, available from outside the Mirantis network)
* none (reserved for building local mirrors: in this case
CentOS and Ubuntu packages are fetched from upstream mirrors,
which makes the build process much slower).
* **MIRROR_CENTOS** - download CentOS packages from a specific remote repo.
* **MIRROR_UBUNTU** - download Ubuntu packages from a specific remote repo.
* **MIRROR_DOCKER** - download docker images from a specific remote url.
* **EXTRA_RPM_REPOS** - extra repos with RPM packages.
Each repo must be a comma-separated
tuple of repo name and repo path:
<first_repo_name>,<repo_path> <second_repo_name>,<second_repo_path>
For example,
*qemu2,http://osci-obs.vm.mirantis.net:82/centos-fuel-5.1-stable-15943/centos/ libvirt,http://osci-obs.vm.mirantis.net:82/centos-fuel-5.1-stable-17019/centos/*.
Note that if you want to add more packages to the Fuel Master node, you should also update the **requirements-rpm.txt** file (see the invocation sketch after this list).
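For instance, a hedged sketch combining the options above on the **make** command line (the repository URL is taken from the example above; adjust it to your own mirrors):
::
make iso USE_MIRROR=ext \
    EXTRA_RPM_REPOS='qemu2,http://osci-obs.vm.mirantis.net:82/centos-fuel-5.1-stable-15943/centos/'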


@ -1,25 +0,0 @@
.. _develop:
Development Documentation
=========================
.. toctree::
:maxdepth: 3
:numbered:
develop/architecture
develop/sequence
develop/quick_start
develop/addition_examples
develop/env
develop/system_tests/tree
develop/live_masternode
develop/nailgun
develop/module_structure
develop/fuel_settings
develop/puppet_tips
develop/pxe_deployment
develop/ostf_contributors_guide
develop/custom-bootstrap-node
develop/modular-architecture
develop/separateMOS

(Seven binary image files were removed in this commit; their contents are not shown.)


@ -1,198 +0,0 @@
Fuel Development Examples
=========================
This section provides examples of the Fuel development
process. It builds on the information in the `How to
contribute
<https://wiki.openstack.org/wiki/Fuel/How_to_contribute>`_
document, and the :doc:`Fuel Development Quick-Start Guide
</develop/quick_start>` which illustrate the development
process for a single Fuel component. These examples show how
to manage development and integration of a more complicated
example.
Any new feature effort should start with the creation of a
blueprint where implementation decisions and related commits
are tracked. More information on launchpad blueprints can
be found here: `https://wiki.openstack.org/wiki/Blueprints
<https://wiki.openstack.org/wiki/Blueprints>`_.
Understanding the Fuel architecture helps you understand
which components any particular addition will impact. The
following documents provide valuable information about the
Fuel architecture, and the provisioning and deployment
process:
* `Fuel architecture on the OpenStack wiki <https://wiki.openstack.org/wiki/Fuel#Fuel_architecture>`_
* :doc:`Architecture section of Fuel documentation </develop/architecture>`
* :doc:`Visual of provisioning tasks </develop/sequence>`
Adding Zabbix Role
------------------
This section outlines the steps followed to add a new role
to Fuel. In this case, monitoring service functionality was
added by enabling the deployment of a Zabbix server
configured to monitor an OpenStack environment deployed by
Fuel.
The monitoring server role was initially planned in `this
blueprint
<https://blueprints.launchpad.net/fuel/+spec/monitoring-system>`_.
Core Fuel developers provided feedback on small
commits via Gerrit and IRC while the work was coming
together. Ultimately the work was rolled up into two
commits comprising over 23k lines of code, and these two
commits were merged into `fuel-web <https://github.com/openstack/fuel-web>`_
and `fuel-library
<https://github.com/openstack/fuel-library>`_.
Additions to Fuel-Web for Zabbix role
-------------------------------------
In fuel-web, the `Support for Zabbix
<https://review.openstack.org/#/c/84408/>`_ commit added the
additional role to :doc:`Nailgun </develop/nailgun>`. The
reader is urged to review this commit closely as a good
example of where specific additions fit. In order to
include this as an option in the Fuel deployment process,
the following files were included in the commit for
fuel-web:
UI components::
nailgun/static/translations/core.json
nailgun/static/js/views/cluster_page_tabs/nodes_tab_screens/node_list_screen.jsx
Testing additions::
nailgun/nailgun/test/integration/test_cluster_changes_handler.py
nailgun/nailgun/test/integration/test_orchestrator_serializer.py
General Nailgun additions::
nailgun/nailgun/errors/__init__.py
nailgun/nailgun/fixtures/openstack.yaml
nailgun/nailgun/network/manager.py
nailgun/nailgun/orchestrator/deployment_serializers.py
nailgun/nailgun/rpc/receiver.py
nailgun/nailgun/settings.yaml
nailgun/nailgun/task/task.py
nailgun/nailgun/utils/zabbix.py
Additions to Fuel-Library for Zabbix role
-----------------------------------------
In addition to the Nailgun additions, the related Puppet
modules were added to the `fuel-library repository
<https://github.com/openstack/fuel-library>`_. This
`Zabbix fuel-library integration
<https://review.openstack.org/#/c/101844/>`_ commit included
all the puppet files, many of which are brand new modules
specifically for Zabbix, in addition to adjustments to the
following files::
deployment/puppet/openstack/manifests/logging.pp
deployment/puppet/osnailyfacter/manifests/cluster_ha.pp
deployment/puppet/osnailyfacter/manifests/cluster_simple.pp
Once all these commits passed CI and had been reviewed by
both community members and the Fuel PTLs, they were merged
into master.
Adding Hardware Support
-----------------------
This section outlines the steps followed to add support for
a Mellanox network card, which requires a kernel driver that
is available in most Linux distributions but was not loaded
by default. Adding support for other hardware would touch
similar Fuel components, so this outline should provide a
reasonable guide for contributors wishing to add support for
new hardware to Fuel.
It is important to keep in mind that the Fuel node discovery
process works by providing a bootstrap image via PXE. Once
the node boots with this image, a basic inventory of
hardware information is gathered and sent back to the Fuel
controller. If a node contains hardware requiring a unique
kernel module, the bootstrap image must contain that module
in order to detect the hardware during discovery.
In this example, loading the module in the bootstrap image
was enabled by adjusting the ISO makefile and specifying the
appropriate requirements.
Adding a hardware driver to bootstrap
-------------------------------------
The `Added bootstrap support to Mellanox
<https://review.openstack.org/#/c/101126>`_ commit shows how
this is achieved by adding the modprobe call to load the
driver specified in the requirements-rpm.txt file, requiring
modification of only two files in the fuel-main repository::
bootstrap/module.mk
requirements-rpm.txt
.. note:: Any package specified in the bootstrap building procedure
must be listed in the requirements-rpm.txt file explicitly.
The Fuel mirrors must be rebuilt by the OSCI team prior to
merging requests like this one.
.. note:: Changes made to the bootstrap image do not affect the package sets
of target systems, so if you are adding support for a NIC,
for example, you also have to add installation of all related
packages to the kickstart/preseed files.
The `Adding OFED drivers installation
<https://review.openstack.org/#/c/103427>`_ commit shows the
changes made to the preseed (for Ubuntu) and kickstart (for
CentOS) files in the fuel-library repository::
deployment/puppet/cobbler/manifests/snippets.pp
deployment/puppet/cobbler/templates/kickstart/centos.ks.erb
deployment/puppet/cobbler/templates/preseed/ubuntu-1404.preseed.erb
deployment/puppet/cobbler/templates/snippets/centos_ofed_prereq_pkgs_if_enabled.erb
deployment/puppet/cobbler/templates/snippets/ofed_install_with_sriov.erb
deployment/puppet/cobbler/templates/snippets/ubuntu_packages.erb
Though this example did not require it, if the hardware
driver is required during the operating system installation,
the installer images (debian-installer and anaconda) would
also need to be repacked. For most installations though,
ensuring the driver package is available during installation
should be sufficient.
Adding to Fuel package repositories
-----------------------------------
If the addition will be committed back to the public Fuel
codebase to benefit others, you will need to submit a bug in
the Fuel project to request the package be added to the
repositories.
Let's walk through this process step by step using the
`Add neutron-lbaas-agent package
<https://bugs.launchpad.net/bugs/1330610>`_ bug as an example:
* you create a bug in the Fuel project providing a full description of
the packages to be added, and assign it to the Fuel OSCI team
* you create a request to add these packages to Fuel requirements-\*.txt
files `Add all neutron packages to requirements
<https://review.openstack.org/#/c/104633/>`_
You receive a +1 vote from Fuel CI if these packages already exist on
either the Fuel internal mirrors or the upstream mirrors for the respective OS
type (rpm/deb), or a -1 vote in any other case.
* if the requested packages do not exist in the upstream OS distribution,
the OSCI team builds them and then places them on the internal Fuel mirrors
* the OSCI team rebuilds the public Fuel mirrors with the `Add all neutron packages to
requirements <https://review.openstack.org/#/c/104633/>`_ request
* `Add all neutron packages to requirements
<https://review.openstack.org/#/c/104633/>`_ request is merged
.. note:: The package must include a license that complies
with the Fedora project license requirements for binary
firmware. See the `Fedora Project licensing page
<https://fedoraproject.org/wiki/Licensing:Main#Binary_Firmware>`_
for more information.


@ -1,182 +0,0 @@
Fuel Architecture
=================
A good overview of the Fuel architecture is available on the
`OpenStack wiki <https://wiki.openstack.org/wiki/Fuel#Fuel_architecture>`_.
You can find a detailed breakdown of how this works in the
:doc:`Sequence Diagrams </develop/sequence>`.
The Master node is the main part of the Fuel project. It contains all the
services needed for network provisioning of other managed nodes,
installing an operating system, and then deploying OpenStack services to
create a cloud environment. *Nailgun* is the most important service.
It is a RESTful application written in Python that contains all the
business logic of the system. Users can interact with it either through
the *Fuel Web* interface or by means of the *CLI utility*. They can create
a new environment, edit its settings, assign roles to the discovered
nodes, and start the deployment process of a new OpenStack cluster.
Nailgun stores all of its data in a *PostgreSQL* database. It contains
the hardware configuration of all discovered managed nodes, the roles,
environment settings, current deployment status and progress of
running deployments.
.. uml::
package "Discovered Node" {
component "Nailgun Agent"
}
package "Master Node" {
component "Nailgun"
component "Database"
interface "REST API"
component "Fuel Web"
}
actor "User"
component "CLI tool"
[User] -> [CLI tool]
[User] -> [Fuel Web]
[Nailgun Agent] -> [REST API] : Upload hardware profile
[CLI tool] -> [REST API]
[Fuel Web] -> [REST API]
[REST API] -> [Nailgun]
[Nailgun] -> [Database]
Managed nodes are discovered over PXE using a special bootstrap image
and the PXE boot server located on the master node. The bootstrap image
runs a special script called Nailgun agent. The agent **nailgun-agent.rb**
collects the server's hardware information and submits it to Nailgun
through the REST API.
The deployment process is started by the user after they have configured
a new environment. The Nailgun service creates a JSON data structure
with the environment settings, its nodes and their roles, and puts this
message into the *RabbitMQ* queue. This message should be received by one
of the worker processes that will actually deploy the environment. These
worker processes are called *Astute*.
.. uml::
package "Master Node" {
component "Nailgun"
interface "RabbitMQ"
package "Astute Worker" {
component Astute
}
component "Cobbler"
component "DHCP and TFTP"
}
[Nailgun] -> [RabbitMQ] : Put task into Nailgun queue
[Astute] <- [RabbitMQ] : Take task from Nailgun queue
[Astute] -> [Cobbler] : Set node's settings through XML-RPC
[Cobbler] -> [DHCP and TFTP]
The Astute workers listen to the RabbitMQ queue and receive
messages. They use the *Astute* library, which implements all deployment
actions. First, Astute starts the provisioning of the environment's nodes.
Astute uses XML-RPC to set these nodes' configuration in Cobbler and
then reboots the nodes using *MCollective agent* to let Cobbler install
the base operating system. *Cobbler* is a deployment system that can
control DHCP and TFTP services and use them to network boot the managed
node and start the OS installer with the user-configured settings.
Astute puts a special message into the RabbitMQ queue that contains
the action that should be executed on the managed node. MCollective
servers are started on all bootstrapped nodes and they constantly listen
for these messages. When they receive a message, they run the required
agent action with the given parameters. MCollective agents are just Ruby
scripts with a set of procedures. These procedures are actions that the
MCollective server can run when asked to.
When the managed node's OS is installed, Astute can start the deployment
of OpenStack services. First, it uploads the node's configuration
to the **/etc/astute.yaml** file on the node using the **uploadfile** agent.
This file contains all the variables and settings that will be needed
for the deployment.
Next, Astute uses the **puppetsync** agent to synchronize Puppet
modules and manifests. This agent runs an rsync process that connects
to the rsyncd server on the Master node and downloads the latest version
of Puppet modules and manifests.
.. uml::
package "Master Node" {
interface "RabbitMQ"
component "Rsyncd"
component "Astute"
}
package "Managed Node" {
interface "MCollective"
package "MCollective Agents" {
component "uploadfile"
component "puppetsync"
component "puppetd"
component "shell"
}
component "Puppet"
component "Rsync"
interface "astute.yaml"
component "Puppet Modules"
}
[Astute] <-> [RabbitMQ]
[RabbitMQ] <-> [MCollective]
[MCollective] -> [uploadfile]
[MCollective] -> [puppetsync]
[MCollective] -> [puppetd]
[MCollective] -> [shell]
[uploadfile] ..> [astute.yaml]
[puppetsync] -> [Rsync]
[puppetd] -> [Puppet]
[Rsync] <..> [Rsyncd]
[Rsync] ..> [Puppet Modules]
[astute.yaml] ..> [Puppet]
[Puppet Modules] ..> [Puppet]
When the modules are synchronized, Astute can run the actual deployment
by applying the main Puppet manifest **site.pp**. The MCollective agent runs
the Puppet process in the background using the **daemonize** tool.
The command looks like this:
::
daemonize puppet apply /etc/puppet/manifests/site.pp
Astute periodically polls the agent to check if the deployment has
finished and reports the progress to Nailgun through its RabbitMQ queue.
When started, Puppet reads the **astute.yaml** file content as a fact
and then parses it into the **$fuel_settings** structure used to get all
deployment settings.
When the Puppet process exits either successfully or with an error,
Astute gets the summary file from the node and reports the results to
Nailgun. The user can always monitor both the progress and the
results using Fuel Web interface or the CLI tool.
Fuel installs the **puppet-pull** script. Developers can use it if
they need to manually synchronize manifests from the Master node and
run the Puppet process on the node again.
Astute also performs some additional actions, depending on the environment
configuration, either before the deployment or after it completes successfully:
* Generates and uploads SSH keys that will be needed during deployment.
* Runs the **net_verify.py** script during the network verification phase.
* Uploads the CirrOS guest image into Glance after the deployment.
* Updates **/etc/hosts** file on all nodes when new nodes are deployed.
* Updates RadosGW map when Ceph nodes are deployed.
Astute also uses MCollective agents when a node or the entire
environment is being removed. It erases all boot sectors on the node
and reboots it. The node will be network booted with the bootstrap
image again, and will be ready to be used in a new environment.


@ -1,273 +0,0 @@
.. _custom-bootstrap-node:
Bootstrap node
==============
When you want to make changes to
the bootstrap image, you can take one of the
following approaches:
* create an additional
"piece" of bootstrap (initrd_update)
that will be injected into the
original initramfs image on the bootstrap.
This way, you avoid modifying the original
bootstrap initramfs image
* modify the original initramfs image manually
* create a custom initramfs image for
the bootstrap to replace the default one.
Let's take a look at each approach in more detail.
Creating and injecting the initrd_update into bootstrap
-------------------------------------------------------
A typical use case for creating an initrd_update image is the following:
a great number of proprietary hardware drivers cannot be
shipped with the GA Fuel ISO due to legal issues
and must be installed by users themselves.
In that case, you can add (or inject) the required files (drivers,
scripts, etc.) during the Fuel ISO
installation procedure.
The injection workflow consists of several stages:
#. Prepare the injected initramfs image with the required kernel modules (for CentOS).
#. Modify the bootstrap (CentOS).
Prepare injected initramfs image for CentOS
+++++++++++++++++++++++++++++++++++++++++++
The injected initramfs image should contain
the files that are going to be put on top of (or, let's say, injected into)
the original bootstrap initramfs in addition to
the deployed (original) RAM file system.
The injected initramfs image should have the following structure:
::
/
/lib/modules/<kernel-version>/kernel/<path-to-the-driver>/<module.ko>
/etc/modprobe.d/<module>.conf
Let's put all required files into a folder called *dd-src* and create the image.
For example, suppose we need the 2.6.32-504 (CentOS 6.6) kernel:
#. Create the working folder dd-src:
::
mkdir dd-src
#. Put the kernel modules into:
::
mkdir -p ./dd-src/lib/modules/2.6.32-504.1.3.el6.x86_64/kernel/drivers/scsi
cp hpvsa.ko ./dd-src/lib/modules/2.6.32-504.1.3.el6.x86_64/kernel/drivers/scsi
#. Put the *<module-name>.conf* file with the modprobe command into
the *etc/modprobe.d/* folder:
::
mkdir -p ./dd-src/etc/modprobe.d/
echo modprobe hpvsa > ./dd-src/etc/modprobe.d/hpvsa.conf
chmod +x ./dd-src/etc/modprobe.d/hpvsa.conf
There is also a second (deprecated) way:
create an executable */etc/rc.modules* file and list the modprobe command with the module name.
Do not use the */etc/rc.local* file for this purpose,
because it runs too late to initialize hardware:
::
mkdir ./dd-src/etc
echo modprobe hpvsa > ./dd-src/etc/rc.modules
chmod +x ./dd-src/etc/rc.modules
#. Create the dd-src.tar.gz file for copying to the Fuel Master node:
::
tar -czvf dd-src.tar.gz ./dd-src
The *dd-src.tar.gz* file can now be copied to the Fuel Master node.
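A minimal sketch of the copy step, assuming plain scp access to the Fuel Master node:
::
scp dd-src.tar.gz root@<your-Fuel-Master-node-IP>:~/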
Adding initrd_update image to the bootstrap
+++++++++++++++++++++++++++++++++++++++++++
.. note:: Currently, the bootstrap is based on CentOS (kernel and modules).
Let's assume that the Fuel Master node has been deployed:
#. Connect to the Fuel Master node:
::
ssh root@<your-Fuel-Master-node-IP>
#. Prepare initramfs update image:
::
tar -xzvf dd-src.tar.gz
find dd-src/ | cpio --quiet -o -H newc | gzip -9 > /tmp/initrd_update.img
#. Copy into the TFTP (PXE) bootstrap folder:
::
cp /tmp/initrd_update.img /var/www/nailgun/bootstrap/
chmod 755 /var/www/nailgun/bootstrap/initrd_update.img
#. Copy inside the cobbler container to the folder:
::
dockerctl copy initrd_update.img cobbler:/var/lib/tftpboot/initrd_update.img
#. Modify the bootstrap menu initrd parameter.
* Log into the cobbler container:
::
dockerctl shell cobbler
* Get the kopts variable value:
::
cobbler profile dumpvars --name=bootstrap | grep kernel_options
kernel_options : ksdevice=bootif locale=en_US text mco_user=mcollective initrd=initrd_update.img biosdevname=0 lang url=http://10.20.0.2:8000/api priority=critical mco_pass=HfQqE2Td kssendmac
* Add *initrd=initrd_update.img* at the beginning of the string
and re-sync the container. This results in the
'initrd=initramfs.img,initrd_update.img' parameter
being passed to the kernel on boot:
::
cobbler profile edit --name bootstrap --kopts='initrd=initrd_update.img ksdevice=bootif lang= locale=en_US text mco_user=mcollective priority=critical url=http://10.20.0.2:8000/api biosdevname=0 mco_pass=HfQqE2Td kssendmac'
cobbler sync
Modifying initramfs image manually for bootstrap node
-----------------------------------------------------
To edit the initramfs (initrd) image,
you should unpack it, modify it, and pack it back.
The initramfs image is a gzipped cpio archive.
To change the initramfs image, follow these steps:
#. Create a folder for modifying initramfs image and copy the initramfs image into it:
::
mkdir /tmp/initrd-orig
dockerctl copy cobbler:/var/lib/tftpboot/images/bootstrap/initramfs.img /tmp/initrd-orig/
#. Unpack initramfs image. First of all, unzip it:
::
cd /tmp/initrd-orig/
mv initramfs.img initramfs.img.gz
gunzip initramfs.img.gz
#. Unpack the cpio archive to the initramfs folder:
::
mkdir initramfs
cd initramfs
cpio -i < ../initramfs.img
#. Now you have the same file system that the bootstrap node has in RAM:
::
ls -l /tmp/initrd-orig/initramfs
#. Modify it as you need. For example, copy files or modify the scripts:
::
cp hpvsa.ko lib/modules/2.6.32-504.1.3.el6.x86_64/kernel/drivers/scsi/
echo "modprobe hpvsa" > etc/modprobe.d/hpvsa.conf
To get more information on how to pass options to
the module, load dependent modules, or blacklist modules,
see the *modprobe.d* man page.
::
vi etc/modprobe.d/blacklist.conf
#. Pack the initramfs back into the **initramfs.img.new** image:
::
find /tmp/initrd-orig/initramfs | cpio --quiet -o -H newc | gzip -9 > /tmp/initramfs.img.new
#. Clean up. Remove */tmp/initrd-orig* temporary folder:
::
rm -Rf /tmp/initrd-orig/
Creating a custom bootstrap node
--------------------------------
This option requires further investigation
and will be introduced in the near future.
Replacing default bootstrap node with the custom one
++++++++++++++++++++++++++++++++++++++++++++++++++++
Let's suppose that you have created or modified
the initramfs image and it is placed in the */tmp* folder under the name **initramfs.img.new**.
To replace the default bootstrap image with the custom one,
follow these steps:
#. Save the previous initramfs image:
::
mv /var/www/nailgun/bootstrap/initramfs.img /var/www/nailgun/bootstrap/initramfs.img.old
#. Copy the new initramfs image into the bootstrap folder:
::
cd /tmp
cp initramfs.img.new /var/www/nailgun/bootstrap/initramfs.img
dockerctl copy /var/www/nailgun/bootstrap/initramfs.img cobbler:/var/lib/tftpboot/images/bootstrap/initramfs.img
#. Make Cobbler update the files:
::
cobbler sync


@ -1,370 +0,0 @@
Fuel Development Environment
============================
.. warning:: The Fuel ISO build works only on 64-bit operating systems.
If you are modifying or augmenting the Fuel source code or if you
need to build a Fuel ISO from the latest branch, you will need
an environment with the necessary packages installed. This page
lays out the steps you will need to follow in order to prepare
the development environment, test the individual components of
Fuel, and build the ISO which will be used to deploy your
Fuel master node.
The basic operating system for Fuel development is Ubuntu Linux.
The setup instructions below assume Ubuntu 14.04 (64 bit) though most of
them should be applicable to other Ubuntu and Debian versions, too.
Each subsequent section below assumes that you have followed the steps
described in all preceding sections. By the end of this document, you
should be able to run and test all key components of Fuel, build the
Fuel master node installation ISO, and generate documentation.
.. _getting-source:
Getting the Source Code
-----------------------
The source code of OpenStack Fuel can be found on git.openstack.org or
GitHub.
Follow these steps to clone the repositories for each of
the Fuel components:
::
apt-get install git
git clone https://github.com/openstack/fuel-main
git clone https://github.com/openstack/fuel-web
git clone https://github.com/openstack/fuel-ui
git clone https://github.com/openstack/fuel-agent
git clone https://github.com/openstack/fuel-astute
git clone https://github.com/openstack/fuel-ostf
git clone https://github.com/openstack/fuel-library
git clone https://github.com/openstack/fuel-docs
.. _building-fuel-iso:
Building the Fuel ISO
---------------------
The "fuel-main" repository is the only one required in order
to build the Fuel ISO. The make script then downloads the
additional components (Fuel Library, Nailgun, Astute and OSTF).
Unless otherwise specified in the makefile, the master branch of
each respective repo is used to build the ISO.
The basic steps to build the Fuel ISO from trunk in an
Ubuntu 14.04 environment are:
::
apt-get install git
git clone https://github.com/openstack/fuel-main
cd fuel-main
./prepare-build-env.sh
make iso
If you want to build an ISO using a specific commit or repository,
you will need to modify the "Repos and versions" section in the
config.mk file found in the fuel-main repo before executing "make
iso". For example, this would build a Fuel ISO against the v5.0
tag of Fuel:
::
# Repos and versions
FUELLIB_COMMIT?=tags/5.0
NAILGUN_COMMIT?=tags/5.0
FUEL_UI_COMMIT?=tags/5.0
ASTUTE_COMMIT?=tags/5.0
OSTF_COMMIT?=tags/5.0
FUELLIB_REPO?=https://github.com/openstack/fuel-library.git
NAILGUN_REPO?=https://github.com/openstack/fuel-web.git
FUEL_UI_REPO?=https://github.com/openstack/fuel-ui.git
ASTUTE_REPO?=https://github.com/openstack/fuel-astute.git
OSTF_REPO?=https://github.com/openstack/fuel-ostf.git
To build an ISO image from custom gerrit patches on review, edit the
"Gerrit URLs and commits" section of config.mk, e.g. for
https://review.openstack.org/#/c/63732/8 (id:63732, patch:8) set:
::
FUELLIB_GERRIT_COMMIT?=refs/changes/32/63732/8
If you are building Fuel from an older branch that does not contain the
"prepare-build-env.sh" script, you can follow these steps to prepare
your Fuel ISO build environment on Ubuntu 14.04:
#. The ISO build process requires sudo permissions; allow yourself to run
commands as the root user without being asked for a password::
echo "`whoami` ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers
#. Install software::
sudo apt-get update
sudo apt-get install apt-transport-https
echo deb http://mirror.yandex.ru/mirrors/docker/ docker main | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo apt-get update
sudo apt-get install lxc-docker
sudo apt-get update
sudo apt-get remove nodejs nodejs-legacy npm
sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository -y ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install build-essential make git ruby ruby-dev rubygems debootstrap createrepo \
python-setuptools yum yum-utils libmysqlclient-dev isomd5sum \
python-nose libvirt-bin python-ipaddr python-paramiko python-yaml \
python-pip kpartx extlinux unzip genisoimage nodejs multistrap \
lrzip python-daemon
sudo gem install bundler -v 1.2.1
sudo gem install builder
sudo pip install xmlbuilder jinja2
#. If you haven't already done so, get the source code::
git clone https://github.com/openstack/fuel-main
#. Now you can build the Fuel ISO image::
cd fuel-main
make iso
#. If you encounter issues and need to rebase or start over::
make clean #remove build/ directory
make deep_clean #remove build/ and local_mirror/
.. note::
In case you use a virtual machine for building the image, verify the
following:
- Both ``BUILD_DIR`` and ``LOCAL_MIRROR`` build directories are
out of the shared folder path in the `config.mk
<https://github.com/openstack/fuel-main/blob/master/config.mk>`_
file. For more information, see:
- `Shared folders of VirtualBox
<https://www.virtualbox.org/manual/ch04.html#sharedfolders>`_
documentation
- `Synced folders of Vagrant
<https://docs.vagrantup.com/v2/synced-folders/>`_
documentation
- To prevent random unexpected Docker terminations, the virtual
machine must have a kernel that supports the ``aufs`` file system.
To install such a kernel, run:
.. code-block:: console
sudo apt-get install --yes linux-image-extra-virtual
Reboot when the installation is complete. Check that
Docker is using ``aufs`` by running:
.. code-block:: console
sudo docker info 2>&1 | grep -q 'Storage Driver: aufs' \
&& echo OK || echo KO
For more information, see `Select a storage driver for docker
<https://docs.docker.com/engine/userguide/storagedriver/selectadriver/>`_.
You can also use the following tools to make your work and development process
with Fuel easier:
* CGenie fuel-utils - a set of tools for interacting with the code on a Fuel Master node created
from the ISO. It provides the *fuel* command,
which gives a simple way to upload Python or UI code (with static files compression)
to the Docker containers, SSH into the machine and into a container,
display the logs, etc.
* Vagrant SaltStack-based - a Vagrant box definition that provides a quick and basic Fuel
environment with fake tasks.
This is useful for UI or Nailgun development.
You can download both tools from the
`fuel-dev-tools <https://github.com/openstack/fuel-dev-tools>`_ repository.
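A minimal sketch of fetching the tools, assuming a plain git clone of the repository linked above is all that is needed (see its README for setup details):
::
git clone https://github.com/openstack/fuel-dev-tools
cd fuel-dev-tools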
Nailgun (Fuel-Web)
------------------
Nailgun is the heart of the Fuel project. It implements a REST API as well
as deployment data management. It manages disk volume configuration data,
network configuration data and any other environment specific data
necessary for a successful deployment of OpenStack. It provides the
required orchestration logic for provisioning and
deployment of the OpenStack components and nodes in the right order.
Nailgun uses a SQL database to store its data and an AMQP service to
interact with workers.
Requirements for preparing the nailgun development environment, along
with information on how to modify and test nailgun can be found in
the Nailgun Development Instructions document: :ref:`nailgun-development`
Astute
------
Astute is the Fuel component that represents Nailgun's workers, and
its function is to run actions according to the instructions provided
from Nailgun. Astute provides a layer which encapsulates all the details
about interaction with a variety of services such as Cobbler, Puppet,
shell scripts, etc. and provides a universal asynchronous interface to
those services.
#. Astute can be found in fuel-astute repository
#. Install Ruby dependencies::
sudo apt-get install git curl
curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm install 2.1
rvm use 2.1
git clone https://github.com/nulayer/raemon.git
cd raemon
git checkout b78eaae57c8e836b8018386dd96527b8d9971acc
gem build raemon.gemspec
gem install raemon-0.3.0.gem
cd ..
rm -Rf raemon
#. Install or update dependencies and run unit tests::
cd fuel-astute
./run_tests.sh
#. (optional) Run Astute MCollective integration test (you'll need to
have MCollective server running for this to work)::
cd fuel-astute
bundle exec rspec spec/integration/mcollective_spec.rb
Running Fuel Puppet Modules Unit Tests
--------------------------------------
If you are modifying any puppet modules used by Fuel, or including
additional modules, you can use the PuppetLabs RSpec Helper
to run the unit tests for any individual puppet module. Follow
these steps to install the RSpec Helper:
#. Install PuppetLabs RSpec Helper::
cd ~
gem2deb puppetlabs_spec_helper
sudo dpkg -i ruby-puppetlabs-spec-helper_0.4.1-1_all.deb
gem2deb rspec-puppet
sudo dpkg -i ruby-rspec-puppet_0.1.6-1_all.deb
#. Run unit tests for a Puppet module::
cd fuel/deployment/puppet/module
rake spec
Installing Cobbler
------------------
Install Cobbler from GitHub (it can't be installed from PyPi, and deb
package in Ubuntu is outdated)::
cd ~
git clone git://github.com/cobbler/cobbler.git
cd cobbler
git checkout release24
sudo make install
Building Documentation
----------------------
You should prepare your build environment before you can build
this documentation. First you must install Java, using the
appropriate procedure for your operating system.
Java is needed to use PlantUML to automatically generate UML diagrams
from the source. You can also use `PlantUML Server
<http://www.plantuml.com/plantuml/>`_ for a quick preview of your
diagrams and language documentation.
Then you need to install all the packages required for creating
the Python virtual environment and installing its dependencies.
::
sudo apt-get install make postgresql postgresql-server-dev-9.1
sudo apt-get install python-dev python-pip python-virtualenv
Now you can create the virtual environment and activate it.
::
virtualenv fuel-web-venv
. fuel-web-venv/bin/activate
And then install the dependencies.
::
pip install -r nailgun/test-requirements.txt
Now you can look at the list of available formats and generate
the one you need:
::
cd docs
make help
make html
There is a helper script, **build-docs.sh**, that can perform
all the required steps automatically and build the documentation
in the required format.
::
Documentation build helper
-o - Open generated documentation after build
-c - Clear the build directory
-n - Don't install any packages
-f - Documentation format [html,singlehtml,latexpdf,pdf,epub]
For example, if you want to build the HTML documentation, you can
run the script like this:
::
./build-docs.sh -f html -o
It will create a virtualenv, install the required dependencies, and
build the documentation in HTML format. It will also open the
documentation in your default browser.
If you do not want to install all the dependencies and you are not
interested in building the automatic API documentation, there is an
easier way.
First, remove the autodoc modules from the extensions section of the **conf.py**
file in the **docs** directory. The section should then look like this:
::
extensions = [
'rst2pdf.pdfbuilder',
'sphinxcontrib.plantuml',
]
Then remove the **develop/api_doc.rst** file and the reference to it from
the **develop.rst** index.
Now you can build the documentation as usual using the make command.
This method can be useful if you want to make some corrections to
the text and see the results without building the entire environment.
The only Python packages you need are Sphinx packages:
::
Sphinx
sphinxcontrib-actdiag
sphinxcontrib-blockdiag
sphinxcontrib-nwdiag
sphinxcontrib-plantuml
sphinxcontrib-seqdiag
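A hedged one-liner to install just these packages into the virtualenv (package names as listed above; pin versions if your Sphinx setup requires it):
::
pip install Sphinx sphinxcontrib-actdiag sphinxcontrib-blockdiag \
    sphinxcontrib-nwdiag sphinxcontrib-plantuml sphinxcontrib-seqdiag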
Just do not forget to roll back all these changes before you commit your
corrections.


@ -1,130 +0,0 @@
Using Fuel settings
~~~~~~~~~~~~~~~~~~~
Fuel uses a special method to pass settings from Nailgun to Puppet manifests:
- Before the deployment process begins,
Astute uploads all settings
to the */etc/astute.yaml* file located on each node.
- When Puppet is run,
Facter reads the contents of this */etc/astute.yaml* file
and creates a single fact called *$astute_settings_yaml*.
- The **parseyaml** function (at the beginning of the *site.pp* file)
then parses these settings
and creates a rich data structure called *$fuel_settings*.
All of the settings used during node deployment are stored there
and can be used anywhere in Puppet code.
For example, single top level variables are available as
*$::fuel_settings['debug']*.
More complex structures are also available as
values of *$::fuel_settings* hash keys
and can be accessed like normal hashes and arrays.
Many aliases and generated values are provided
to help you retrieve values easily.
You can create a variable from any hash key in *$fuel_settings*
and work with this variable within your local scope
or from other classes, using fully qualified paths::
$debug = $::fuel_settings['debug']
Some variables and structures are generated from the settings hash
by filtering and transformation functions.
For example, the $node structure only contains
the settings of the current node,
filtered from the hash of all nodes.
It can be accessed as::
$node = filter_nodes($nodes_hash, 'name', $::hostname)
If you are going to use your module inside the Fuel Library
and need some settings,
you can get them from this *$::fuel_settings* structure.
Most variables related to network and OpenStack services configuration
are already available there and you can use them as they are.
If your modules require some additional or custom settings,
you must either use **Custom Attributes**
by editing the JSON files before deployment, or,
if you are integrating your project with the Fuel Library,
you can contact the Fuel UI developers
and ask them to add your configuration options to the Fuel setting panel.
After you have defined all classes you need inside your module,
you can add this module's declaration
into the Fuel manifests such as
*cluster_simple.pp* and *cluster_ha.pp* located inside
the *osnailyfacter/manifests* folder
or, if your additions are related to another class,
you can add them to that class.
Example module
~~~~~~~~~~~~~~
To demonstrate how to add a new module to the Fuel Library,
let us add a simple class
that changes the terminal color of Red Hat based systems.
Our module is named *profile* and has only one class.::
profile
profile/manifests
profile/manifests/init.pp
profile/files
profile/files/colorcmd.sh
init.pp could have a class definition such as:::
class profile {
if $::osfamily == 'RedHat' {
file { 'colorcmd.sh' :
ensure => present,
owner => 'root',
group => 'root',
mode => '0644',
path => "/etc/profile.d/colorcmd.sh",
source => 'puppet:///modules/profile/colorcmd.sh',
}
}
}
This class downloads the *colorcmd.sh* file
and places it in the defined location
when the class is run on a Red Hat or CentOS system.
The profile module can be added to Fuel modules
by uploading its folder to */etc/puppet/modules*
on the Fuel Master node.
Now we need to declare this module somewhere inside the Fuel manifests.
Since this module should be run on every server,
we can use our main *site.pp* manifest
found inside the *osnailyfacter/examples* folder.
On the deployed master node,
this file will be copied to */etc/puppet/manifests*
and used to deploy Fuel on all other nodes.
The only thing we need to do here is to add the *include profile*
to the end of the */etc/puppet/manifests/site.pp* file
on the already deployed master node
and to the *osnailyfacter/examples/site.pp* file inside the Fuel repository.
Declaring a class outside of a node block
forces this class to be included everywhere.
If you want to include your module only on some nodes,
you can add its declaration
to the blocks associated with the role that is running on those nodes
inside the *cluster_simple* and *cluster_ha* classes.
You can add some additional logic to allow this module to be disabled,
either from the Fuel UI or by passing Custom Attributes
to the Fuel configuration::
if $::fuel_settings['enable_profile'] {
include 'profile'
}
This block uses the *enable_profile* variable
to enable or disable inclusion of the profile module.
The variable should be passed from Nailgun and saved
to the */etc/astute.yaml* files on managed nodes.
You can do this either by downloading the settings files
and manually editing them before deployment
or by asking the Fuel UI developers to include additional options
in the settings panel.
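As a quick sanity check on a managed node, a minimal sketch (the file path is the one described earlier in this guide):
::
# on the managed node, verify that the flag reached the uploaded settings
grep enable_profile /etc/astute.yaml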


@ -1,9 +0,0 @@
Fuel Development Environment on Live Master Node
================================================
If you need to deploy your own development version of Fuel on a live
Master node, you can use the helper scripts that are available
in the `fuel-dev-tools <https://github.com/openstack/fuel-dev-tools>`_ repository.
Help information about fuel-dev-tools can be obtained by running it
with the '-h' parameter.


@ -1,396 +0,0 @@
Modular Architecture
====================
The idea behind the modular architecture introduced in Fuel 6.1 is the separation of the legacy site.pp manifest into a group of small manifests. Each manifest can be designed to do only a limited part of the deployment process. These manifests can be applied by Puppet the same way as it was done in Fuel 6.0 or older. The deployment process in Fuel 6.1 consists of a sequential application of small manifests in a predefined order.
Using smaller manifests instead of the monolithic ones has the following advantages:
* **Independent development**
As a developer, you can work only with those Fuel components that you are interested in. A separate manifest
can be dedicated wholly to a single task without any interference from other components and developers. This
task may require the system to be in some state before the deployment can be started and the task may require
some input data to be available. But other than that, each task is on its own.
* **Granular testing**
With granular deployment introduced in Fuel 6.1, any finished task can be tested independently. Testing can be
automated with autotests; you can snapshot and revert the environment to a previous state; or you can manually
run tests in your environment. With Fuel 6.0 or older, testing consumes a considerable amount of time as there
is no way to test only a part of the deployment -- each new change requires the whole deployment to be started
from scratch. See also the `granular deployment blueprint <https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks>`_.
* **Encapsulation**
Puppet requires all resources to be unique within the catalog, which constrains the way it works with dependencies and
ordering. Normally, one cannot take a third-party module and expect that it will work as designed within the
Puppet infrastructure. Modular manifests, introduced in Fuel 6.1, solve this by making every single task use
its own catalog without directly interfering with other tasks.
* **Self-Testing**
Granular architecture allows making tests for every task. These tests can be run either after the task to
check if it is successful or before the task to check if the system is in the required state to start the
task. These tests can be used by the developer as acceptance tests, by the Continuous Integration (CI) to
determine if the changes can be merged, or during the real deployment to control the whole process and to
raise alarm if something goes wrong.
* **Using multiple tools**
Sometimes you may use a tool other than Puppet (from shell scripts to Python or Ruby, and even binary
executables). Granular deployment allows using any tools you see fit for the task. Tasks, tests and pre/post
hooks can be implemented using anything the developer knows best. In Fuel 6.1 only pre/post hooks can use
non-Puppet tasks.
Granular deployment process
---------------------------
Granular deployment is implemented using the Nailgun plugin system. Nailgun uses the deployment graph data to determine which tasks should be run on which nodes. This graph is traversed and sent to Astute as an ordered list of tasks to be executed, together with the information on which nodes they should be run.
Astute receives this data structure and starts running the tasks one by one in the following order:
#. Pre-deploy actions
#. Main deployment tasks
#. Post-deploy actions
Each task reports back whether it succeeded. Astute stops the deployment on any failed task.
.. image:: _images/granularDeployment.png
See also `Fuel Reference Architecture: Task Deployment <https://docs.mirantis.com/openstack/fuel/fuel-6.1/reference-architecture.html#task-deployment>`_.
Task Graph
----------
A task graph is built by Nailgun from *tasks.yaml* files during Fuel Master node bootstrap:
::
fuel rel --sync-deployment-tasks --dir /etc/puppet/
*tasks.yaml* files describe a group of tasks (or a single task).
::
- id: netconfig
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy]
requires: [hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
where:
* ``id`` - Each task must have a unique ID.
* ``type`` - Determines how the tasks should be executed. Currently there are Puppet and exec types.
* ``groups`` - Groups are used to determine on which nodes these tasks should be started and are mostly related to the node roles.
* ``required_for`` - The list of tasks that require this task to start. Can be empty.
* ``requires`` - The list of tasks that are required by this task to start. Can be empty.
* Both the ``requires`` and ``required_for`` fields are used to build the dependency graph and to determine the order of task execution.
* ``parameters`` - The actual payload of the task. For the Puppet task these can be paths to modules (puppet_modules) and the manifest (puppet_manifest) to apply; the exec type requires the actual command to run.
* ``timeout`` determines how long the orchestrator should wait (in seconds) for the task to complete before marking it as failed.
Graph example
-------------
::
- id: netconfig
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
::
- id: tools
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/tools.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: hosts
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [netconfig]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/hosts.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: firewall
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [netconfig]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/firewall.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: hiera
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/hiera.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
This graph data will be processed into the following graph when imported into Nailgun. The deploy task is an anchor used to start the graph traversal and is hidden in the image.
.. image:: _images/graph.png
Nailgun will run the hiera task first, then netconfig or tools, and then firewall or hosts. Astute will start each task on those nodes whose roles are present in the groups field of each task.
Modular manifests
-----------------
Starting with Fuel 6.1, granular deployment allows using a number of small manifests instead of a single monolithic one. These small manifests are placed in the ``deployment/puppet/osnailyfacter/modular`` folder and its subfolders. In Fuel 6.0 or older, a single entry-point manifest was used, located at ``deployment/puppet/osnailyfacter/examples/site.pp`` in the `fuel-library <https://github.com/openstack/fuel-library/>`_ repository.
To write a modular manifest, you will need to take all the resources, classes and definitions you are using to deploy your component and place them into a single file. This manifest should be able to do everything that is required for your component.
The system should be in a certain state before you can start your task. For example, the database, Pacemaker, or Keystone should already be present.
You should also satisfy the missing dependencies. Some of the manifests may have internal dependencies on other manifests and their parts. You will have to either remove these dependencies or make dummy classes to satisfy them.
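While developing such a manifest, it can help to apply it in isolation on a node; a minimal sketch, assuming the module and manifest paths used in the tasks.yaml examples in this document:
::
puppet apply --modulepath=/etc/puppet/modules \
    /etc/puppet/modules/osnailyfacter/modular/netconfig.pp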
Modular example
---------------
Here is an example of a modular manifest that installs Apache and creates a basic site.
::
>>> site.pp
$fuel_settings = parseyaml($astute_settings_yaml)
File {
owner => 'root',
group => 'root',
mode => '0644',
}
package { 'apache' :
ensure => installed,
}
service { 'apache' :
ensure => running,
enable => true,
}
file { '/etc/apache.conf' :
ensure => present,
content => template('apache/config.erb'),
}
$www_root = $fuel_settings['www_root']
file { "${www_root}/index.html" :
ensure => present,
content => 'hello world',
}
As the first line of any granular Puppet manifest, add the following:
::
notice("MODULAR: $$$TASK_ID_OR_NAME$$$")
It will help you debug by finding a place in ``puppet.log`` where your task started.
Now let's split the manifest into several tasks:
::
>>> apache_install.pp
package { 'apache' :
ensure => installed,
}
>>> apache_config.pp
File {
owner => 'root',
group => 'root',
mode => '0644',
}
$www_root = hiera('www_root')
file { '/etc/apache.conf' :
ensure => present,
content => template('apache/config.erb'),
}
>>> create_site.pp
File {
owner => 'root',
group => 'root',
mode => '0644',
}
$www_root = hiera('www_root')
file { "${www_root}/index.html" :
ensure => present,
content => 'hello world',
}
>>> apache_start.pp
service { 'apache' :
ensure => running,
enable => true,
}
We have just created several manifests, each performing a single, simple action. First we install the Apache package, then we create a configuration file, then we create a sample site, and, finally, we start the service. Each of these tasks can now be started separately, together with any other task. We have also replaced ``$fuel_settings`` with hiera calls.
Since there are some dependencies, we cannot start the Apache service without installing the package first, but we can start the service just after the package installation without the configuration and sample site creation.
So there are the following tasks:
* install
* config
* site
* start
* hiera (to enable the hiera function)
A visual representation of the dependency graph will be the following:
.. image:: _images/dependGraph.png
**start**, **config**, and **site** require the package to be installed. **site** and **config** require the **hiera** function to work. Apache should be configured and **site** should be created to start.
Now, let's write a data YAML file to describe this structure:
::
- id: hiera
type: puppet
role: [test]
required_for: [deploy]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/hiera.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: install
type: puppet
role: [test]
required_for: [deploy]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/apache_install.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: config
type: puppet
role: [test]
required_for: [deploy]
requires: [hiera, install]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/apache_config.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: site
type: puppet
role: [test]
required_for: [deploy]
requires: [install, hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/create_site.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: start
type: puppet
role: [test]
required_for: [deploy]
requires: [install, config, site]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/apache_start.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
Nailgun can process this data file and tell Astute to deploy all the tasks in the required order. Other nodes or other deployment modes may require more tasks or tasks run in different order.
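To make the ordering logic more concrete, here is a rough, self-contained sketch (not Nailgun's or Astute's actual code) of how a dependency-respecting order can be derived from the ``requires`` fields of the tasks above; ``required_for`` edges are simply the same dependencies seen from the other side:

.. code-block:: python

    # Rough illustration only: a plain topological sort over the "requires"
    # fields of the example tasks above. Nailgun's real scheduling is more
    # involved (groups, anchors, cross-node ordering).
    tasks = {
        'hiera': [],
        'install': [],
        'config': ['hiera', 'install'],
        'site': ['install', 'hiera'],
        'start': ['install', 'config', 'site'],
    }

    def toposort(graph):
        """Return task ids so that every task comes after its requirements."""
        visited, order = set(), []

        def visit(task):
            if task in visited:
                return
            visited.add(task)
            for req in graph.get(task, []):
                visit(req)
            order.append(task)

        for task in sorted(graph):
            visit(task)
        return order

    print(toposort(tasks))
    # Prints: ['hiera', 'install', 'config', 'site', 'start']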
Now, let's say you have a new apache_proxy class, and want to add it to the setup:
::
>>> apache_proxy/init.pp
file { '/etc/apache.conf' :
owner => 'root',
group => 'root',
mode => '0644',
ensure => present,
source => 'puppet:///apache/proxy.conf',
} ->
service { 'apache' :
ensure => running,
enable => true,
}
This task updates the main Apache configuration as well, so it conflicts with the previous configuration task; it would not be possible to combine them in a single catalog. It also attempts to enable the Apache service, which would produce another duplicate declaration error.
Granular deployment solves this: you can still use both tasks together without having to work around the duplicates or dependency problems.
.. image:: _images/dependGraph02.png
We have just inserted a new proxy task between the **config** and **start** tasks. The proxy task will rewrite the configuration file created in the **config** task, making the **config** task pointless. This setup will still work as expected, and we will have a working Apache-based proxy. Apache will be started by the proxy task, but the **start** task will not produce any errors thanks to Puppet's idempotency.
There are also `granular noop tests <https://ci.fuel-infra.org/job/fuellib_noop_tests/>`_ based on rspec-puppet. The CI will vote -1 on any new Puppet task that is not covered by tests.
Testing
-------
Testing these manifests is easier than having a single monolithic manifest.
After writing each file you can manually apply it to check if the task works as expected.
If the task is complex enough, it can benefit from automated acceptance testing. These tests can be implemented using any tool you as a developer see fit.
For example, let's try using `http://serverspec.org <http://serverspec.org>`_. This is an rspec extension that is very convenient for server testing.
The only thing the install task does is the package installation and it has no preconditions. The spec file for it may look like this:
::
require 'spec_helper'
describe package('apache') do
it { should be_installed }
end
Running the spec should produce an output similar to the following:
::
Package "apache"
should be installed
Finished in 0.17428 seconds
1 example, 0 failures
There are many different resource types *serverspec* can work with, and this can easily be extended. Other tasks can be tested with specs like the following:
::
describe service('apache') do
it { should be_enabled }
it { should be_running }
end
describe file('/etc/apache.conf') do
it { should be_file }
its(:content) { should match %r{DocumentRoot /var/www/html} }
end


@ -1,423 +0,0 @@
Contributing to Fuel Library
============================
This chapter explains how to add a new module or project into the Fuel Library,
how to integrate with other components,
and how to avoid different problems and potential mistakes.
The Fuel Library is a very big project
and even experienced Puppet users may have problems
understanding its structure and internal workings.
Adding new modules to fuel-library
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Case A. Pulling in an existing module*
If you are adding a module that is the work of another project
and is already tracked in a separate repo:
1. Create a review request with an unmodified copy
of the upstream module from which you are working
and *no* other related modifications.
* This review should also contain the commit hash from the upstream repo
in the commit message.
* The review should be evaluated to determine its suitability
and either rejected
(for licensing, code quality, outdated version requested)
or accepted without requiring modifications.
* The review should not include code that calls this new module.
2. Any changes necessary to make it work with Fuel
should then be proposed as a dependent change(s).
*Case B. Adding a new module*
If you are adding a new module that is a work purely for Fuel
and is not tracked in a separate repo,
submit incremental reviews that consist of
working implementations of features for your module.
If you have features that are necessary but do not yet work fully,
then prevent them from running during the deployment.
Once your feature is complete,
submit a review to activate the module during deployment.
Contributing to existing fuel-library modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As developers of Puppet modules, we tend to collaborate with the Puppet
OpenStack community. As a result, we contribute to upstream modules all of the
improvements, fixes and customizations we make to improve Fuel as well.
That implies that every contributor must follow Puppet DSL basics,
`puppet-openstack dev docs
<https://wiki.openstack.org/wiki/Puppet-openstack#Developer_documentation>`_
and `Puppet rspec tests
<https://wiki.openstack.org/wiki/Puppet-openstack#Rspec_puppet_tests>`_
requirements.
The most common and general rule is that upstream modules should be modified
only when the bugfixes and improvements could benefit everyone in the community.
An appropriate patch should be proposed to the upstream project before it is
proposed to the Fuel project.
In other cases (like applying some very specific custom logic or settings)
the contributor should submit patches to ``openstack::*`` `classes
<https://github.com/openstack/fuel-library/tree/master/deployment/puppet/
openstack>`_
The Fuel Library includes custom modules as well as modules forked from upstream
sources. The presence of a ``Modulefile`` in a module's directory indicates
whether the module is a forked upstream one or not.
If there is no ``Modulefile`` in the module's directory, the contributor may
submit a patch directly to this module in the Fuel Library.
Otherwise, he or she should submit the patch to the upstream module first and,
once it is merged or has received +2 from a core reviewer, backport the patch to
the Fuel Library as well. Note that the patch submitted for the Fuel Library should
contain in its commit message the upstream commit SHA, a link to the GitHub pull request
(if the module is not on git.openstack.org), or the Change-Id of the Gerrit patch.
The Puppet modules structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Code that is contributed into the Fuel Library
should be organized into a Puppet module.
A module is a self-contained set of Puppet code
that is usually made to perform a specific function.
For example, you could have a module for each service
you are going to configure or for every part of your project.
Usually it is a good idea to make a module independent
but sometimes it may require or be required by other modules.
You can think of a module as a sort of library.
The most important part of every Puppet module is its **manifests** folder.
This folder contains Puppet classes and definitions
which also contain resources managed by this module.
Modules and classes also form namespaces.
Each class or definition should be placed into a single file
inside the manifests folder
and this file should have the same name as the class or definition.
The module should have a top level class
that serves as the module's entry point
and is named the same as the module.
This class should be placed into the *init.pp* file.
This example module shows the standard structure
that every Puppet module should follow::
example
example/manifests/init.pp
example/manifests/params.pp
example/manifests/client.pp
example/manifests/server
example/manifests/server/vhost.pp
example/manifests/server/service.pp
example/templates
example/templates/server.conf.erb
example/files
example/files/client.data
The first file in the *manifests* folder is named *init.pp*
and should contain the entry point class of this module.
This class should have the same name as the module::
class example {
}
The second file is *params.pp*.
This file is not mandatory but is often used
to store different configuration values and parameters
that are used by other classes of the module.
For example, it could contain the service name and package name
of our hypothetical example module.
Conditional statements might be included
if you need to change default values in different environments.
The *params* class should be named as a child
to the module's namespace as are all other classes of the module::
class example::params {
$service = 'example'
$server_package = 'example-server'
$client_package = 'example-client'
$server_port = '80'
}
All other files inside the manifests folder
contain classes as well and can perform any action
you might want to identify as a separate piece of code.
This generally falls into sub-classes that do not require their users
to configure the parameters explicitly,
or optional classes that are not required in all cases.
In the following example,
we create a client class, placed into a file called *client.pp*,
that defines a client package to be installed::
class example::client {
include example::params
package { $example::params::client_package :
ensure => installed,
}
}
As you can see, we have used the package name from params class.
Consolidating all values that might require editing into a single class,
as opposed to hardcoding them,
allows you to reduce the effort required
to maintain and develop the module further in the future.
If you are going to use any values from the params class,
you should include it first to force its code
to execute and create all required variables.
You can add more levels into the namespace structure if you want.
Let's create a server folder inside our manifests folder
and add the *service.pp* file there.
It would be responsible for installing and running
the server part of our imaginary software.
Placing the class inside the subfolder adds one level
to the name of the contained class::
class example::server::service (
$port = $example::params::server_port,
) inherits example::params {
$package = $example::params::server_package
$service = $example::params::service
package { $package :
ensure => installed,
}
service { $service :
ensure => running,
enable => true,
hasstatus => true,
hasrestart => true,
}
file { 'example_config' :
ensure => present,
path => '/etc/example.conf',
owner => 'root',
group => 'root',
mode => '0644',
content => template('example/server.conf.erb'),
}
file { 'example_config_dir' :
ensure => directory,
path => '/etc/example.d',
owner => 'example',
group => 'example',
mode => '0755',
}
Package[$package] -> File['example_config', 'example_config_dir'] ~>
Service[$service]
}
This example is a bit more complex. Let's see what it does.
Class *example::server::service* is **parametrized**
and can accept one parameter:
the port to which the server process should bind.
It also uses a popular "smart defaults" hack.
This class inherits the params class and uses its default values
only if no port parameter is provided.
In this case, you cannot use *include example::params*
to load the default values;
instead, the params class is loaded by the *inherits example::params* clause
of the class definition.
Inside our class, we take several variables from the params class
and declare them as variables of the local scope.
This is a convenient practice to make their names shorter.
Next we declare our resources.
These resources are package, service, config file and config dir.
The package resource installs the package
whose name is taken from the variable
if it is not already installed.
File resources create the config file and config dir;
the service resource starts the daemon process and enables its autostart.
The final part of this class is the *dependency* declaration.
We have used a "chain" syntax to specify the order of evaluation
of these resources.
It is important to install the package first,
then install the configuration files
and only then start the service.
Trying to start the service before installing the package will definitely fail.
So we need to tell Puppet that there are dependencies between our resources.
The arrow operator that has a tilde instead of a minus sign (~>)
means not only dependency relationship
but also *notifies* the object to the right of the arrow to refresh itself.
In our case, any changes in the configuration file
would make the service restart and load a new configuration file.
Service resources react to the notification event
by restarting the managed service.
Other resources may instead perform other supported actions.
The configuration file content is generated by the template function.
Templates are text files that use Ruby's erb language tags
and are used to generate a text file using pre-defined text
and some variables from the manifest.
These template files are located inside the **templates** folder
of the module and usually have the *erb* extension.
When a template function is called
with the template name and module name prefix,
Puppet tries to load this template and compile it
using variables from the local scope of the class
from which the template was called.
For example, the following template, saved in
the templates folder as *server.conf.erb*,
sets the port to which our service binds::
bind_port = <%= @port %>
The template function will replace the 'port' tag
with the value of the port variable from our class
during Puppet's catalog compilation.
If the service needs several virtual hosts,
you need to define **definitions**,
which are similar to classes but, unlike classes,
they have titles like resources do
and can be used many times with different titles
to produce many instances of the managed resources.
Classes cannot be declared several times with different parameters.
Definitions are placed in single files inside the manifests directories
just as classes are
and are named in a similar way, using the namespace hierarchy.
Let's create our vhost definition::
define example::server::vhost (
$path = '/var/data',
) {
include example::params
$config = "/etc/example.d/${title}.conf"
$service = $example::params::service
file { $config :
ensure => present,
owner => 'example',
group => 'example',
mode => '0644',
content => template('example/vhost.conf.erb'),
}
File[$config] ~> Service[$service]
}
This defined type only creates a file resource
with its name populated by the title
that is used when it gets defined.
It sets the notification relationship with the service
to make it restart when the vhost file is changed.
This defined type can be used by other classes
like a simple resource type to create as many vhost files as we need::
example::server::vhost { 'mydata' :
path => '/path/to/my/data',
}
Defined types can form relationships in the same way as resources do
but you need to capitalize all elements of the path to make the reference::
File['/path/to/my/data'] -> Example::Server::Vhost['mydata']
This works for text files, but binary files must be handled differently.
Binary files or text files that will always be same
can be placed into the **files** directory of the module
and then be taken by the file resource.
To illustrate this, let's add a file resource for a file
that contains some binary data that must be distributed
in our client package.
The file resource is added to the *example::client* class::
file { 'example_data' :
path => '/var/lib/example.data',
owner => 'example',
group => 'example',
mode => '0644',
source => 'puppet:///modules/example/client.data',
}
We have specified source as a special puppet URL scheme
with the module's and the file's name.
This file will be placed in the specified location when Puppet runs.
On each run, Puppet will check this file's checksum,
overwriting it if the checksum changes;
note that this method should not be used with mutable data.
Puppet's fileserving works in both client-server and masterless modes.
We now have all classes and resources that are required
to manage our hypothetical example service.
Our example class defined inside *init.pp* is still empty
so we can use it to declare all other classes
to put everything together::
class example {
include example::params
include example::client
class { 'example::server::service' :
port => '100',
}
example::server::vhost { 'site1' :
path => '/data/site1',
}
example::server::vhost { 'site2' :
path => '/data/site2',
}
example::server::vhost { 'test' :
path => '/data/test',
}
}
Now the entire module is packed inside the *example* class, and we can simply
include this class on any node where we want our service running.
The declaration of the parametrized class also overrides the default port number from
the params file, and we have three separate virtual hosts for our service. The client
package is also included in this class.
Adding Python code to fuel-library
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All Python code that is added to fuel-library must pass style checks and have
tests written.
The whole test suite is run by `python_run_tests.sh <docs/develop/module_structure.rst>`_.
It uses a virtualenv in which all Python modules from
`python-test-requirements.txt <https://github.com/openstack/fuel-library/blob/master/utils/jenkins/python-test-requirements.txt>`_
are installed. If the tests need any third-party library, it should be added as a requirement to this file.
Before starting any tests for Python code, the test suite runs style checks on any Python code
found in fuel-library. Those checks are performed by `flake8` (for more information, see the
`flake8 documentation <http://flake8.readthedocs.org/en/2.3.0/>`_)
with additional `hacking` checks installed. Those checks are a set of guidelines for Python code.
More information about those guidelines can be found in the `hacking documentation <http://flake8.readthedocs.org/en/2.3.0/>`_.
If, for some reason, you need to disable style checks in a given file, you can add the following
line at the beginning of the file::
# flake8: noqa
After the style checks, the test suite will execute Python tests using the `py.test <http://pytest.org>`_ test runner.
`py.test` will look for Python files whose names begin with 'test\_' and will search for tests in them.
Documentation on how to write tests can be found in
`the official Python documentation <https://docs.python.org/2/library/unittest.html>`_ and the
`py.test documentation <http://pytest.org/latest/assert.html>`_.
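For illustration only, a file following that naming convention might look like the sketch below; the helper being tested is made up and does not exist in fuel-library:

.. code-block:: python

    # test_example.py -- picked up automatically by py.test because the
    # file name starts with "test_". The helper under test is hypothetical.

    def normalize_role_name(name):
        # Stand-in for some real fuel-library helper.
        return name.strip().lower().replace(' ', '-')

    def test_normalize_role_name():
        assert normalize_role_name('  Primary Controller ') == 'primary-controller'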


@ -1,7 +0,0 @@
.. _nailgun:
Nailgun
=======
.. toctree::
nailgun/tree


@ -1,142 +0,0 @@
Bonding in UI/Nailgun
=====================
Abstract
--------
The NIC bonding allows you to aggregate multiple physical links to one link
to increase speed and provide fault tolerance.
Design docs
-----------
https://etherpad.openstack.org/p/fuel-bonding-design
Fuel Support
------------
The Puppet module L23network supports both OVS and native Linux bonding,
so we can use it for both NovaNetwork and Neutron deployments. Only native
OVS bonding (Neutron only) is currently implemented in Nailgun. VLAN splinters
cannot be used on bonds. Three modes are supported: 'active-backup',
'balance-slb', and 'lacp-balance-tcp' (see nailgun.consts.OVS_BOND_MODES).
Deployment serialization
------------------------
Most detailed docs on deployment serialization for neutron are here:
1. http://docs.mirantis.com/fuel/fuel-4.0/reference-architecture.html#advanced-network-configuration-using-open-vswitch
2. https://etherpad.openstack.org/p/neutron-orchestrator-serialization
Changes related to bonding are in the “transformations” section:
1. "add-bond" section
::
{
"action": "add-bond",
"name": "bond-xxx", # name is generated in UI
"interfaces": [], # list of NICs; ex: ["eth1", "eth2"]
"bridge": "br-xxx",
"properties": [] # info on bond's policy, mode; ex: ["bond_mode=active-backup"]
}
2. Instead of creating separate OVS bridges for every bonded NIC we need to create one bridge for the bond itself
::
{
"action": "add-br",
"name": "br-xxx"
}
REST API
--------
NodeNICsHandler and NodeCollectionNICsHandler are used for bond creation,
update, and removal. Operations with bonds and network assignment are done in
a single-request fashion: creating a bond and reassigning the appropriate
networks is done in one request. Request parameters must contain
sufficient and consistent data to construct the new interface topology and
properly assign all of the node's networks.
Request/response data example::
[
{
"name": "ovs-bond0", # only name is set for bond, not id
"type": "bond",
"mode": "balance-slb", # see nailgun.consts.OVS_BOND_MODES for modes list
"slaves": [
{"name": "eth1"}, # only “name” must be in slaves list
{"name": "eth2"}],
"assigned_networks": [
{
"id": 9,
"name": "public"
}
]
},
{
"name": "eth0",
"state": "up",
"mac": "52:54:00:78:55:68",
"max_speed": null,
"current_speed": null,
"assigned_networks": [
{
"id": 1,
"name": "fuelweb_admin"
},
{
"id": 10,
"name": "management"
},
{
"id": 11,
"name": "storage"
}
],
"type": "ether",
"id": 5
},
{
"name": "eth1",
"state": "up",
"mac": "52:54:00:88:c8:78",
"max_speed": null,
"current_speed": null,
"assigned_networks": [], # empty for bond slave interfaces
"type": "ether",
"id": 2
},
{
"name": "eth2",
"state": "up",
"mac": "52:54:00:03:d1:d2",
"max_speed": null,
"current_speed": null,
"assigned_networks": [], # empty for bond slave interfaces
"type": "ether",
"id": 1
}
]
The following fields are required in the request body for a bond interface:
name, type, mode, slaves.
The following fields are required in the request body for a NIC:
id, type.
Nailgun DB
----------
Now we have separate models for bond interfaces and NICs: NodeBondInterface and
NodeNICInterface. Node's interfaces can be accessed through Node.nic_interfaces
and Node.bond_interfaces separately or through Node.interfaces (property,
read-only) all together.
The relationship between them (bond:NIC ~ 1:M) is expressed in the "slaves" field
of the NodeBondInterface model.
Two more new fields in NodeBondInterface are "flags" and "mode".
A bond's "mode" can accept values from nailgun.consts.OVS_BOND_MODES.
A bond's "flags" are not in use now. The "type" property (read-only) indicates
whether it is a bond or a NIC (see nailgun.consts.NETWORK_INTERFACE_TYPES).
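As a quick illustration of these relationships, the fields described above can be inspected from the Nailgun shell roughly like this (a sketch only; the node id is arbitrary and the attribute access assumes the models behave as described above):

.. code-block:: python

    # Sketch: inspecting bond models from the Nailgun shell.
    from nailgun.db import db
    from nailgun.db.sqlalchemy import models

    node = db().query(models.Node).get(1)  # arbitrary node id
    for bond in node.bond_interfaces:
        # "mode" comes from nailgun.consts.OVS_BOND_MODES,
        # "slaves" is the list of member NICs.
        print("%s %s %s" % (bond.name, bond.mode,
                            [nic.name for nic in bond.slaves]))
    for iface in node.interfaces:  # read-only property: NICs and bonds together
        print("%s %s" % (iface.type, iface.name))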


@ -1,177 +0,0 @@
Creating Partitions on Nodes
============================
Fuel generates Anaconda Kickstart scripts for Red Hat based systems and
preseed files for Ubuntu to partition block devices on new nodes. Most
of the work is done in the pmanager.py_ Cobbler script using the data
from the "ks_spaces" variable generated by the Nailgun VolumeManager_
class based on the volumes metadata defined in the openstack.yaml_
release fixture.
.. _pmanager.py: https://github.com/openstack/fuel-library/blob/master/deployment/puppet/cobbler/templates/scripts/pmanager.py
.. _VolumeManager: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/extensions/volume_manager/manager.py
.. _openstack.yaml: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
Volumes are created following best practices for OpenStack and other
components. The following volume types are supported:
vg
an LVM volume group that can contain one or more volumes with type set
to "lv"
partition
plain non-LVM partition
raid
a Linux software RAID-1 array of LVM volumes
A typical slave node will always have an "os" volume group and one or more
volumes of other types, depending on the roles assigned to that node and
the role-to-volumes mapping defined in the "volumes_roles_mapping"
section of openstack.yaml.
There are a few different ways to add another volume to a slave node:
#. Add a new logical volume definition to one of the existing LVM volume
groups.
#. Create a new volume group containing your new logical volumes.
#. Create a new plain partition.
Adding an LV to an Existing Volume Group
----------------------------------------
If you need to add a new volume to an existing volume group, for example
"os", your volume definition in openstack.yaml might look like this::
- id: "os"
type: "vg"
min_size: {generator: "calc_min_os_size"}
label: "Base System"
volumes:
- mount: "/"
type: "lv"
name: "root"
size: {generator: "calc_total_root_vg"}
file_system: "ext4"
- mount: "swap"
type: "lv"
name: "swap"
size: {generator: "calc_swap_size"}
file_system: "swap"
- mount: "/mnt/some/path"
type: "lv"
name: "LOGICAL_VOLUME_NAME"
size:
generator: "calc_LOGICAL_VOLUME_size"
generator_args: ["arg1", "arg2"]
file_system: "ext4"
Make sure that your logical volume name ("LOGICAL_VOLUME_NAME" in the
example above) is not the same as the volume group name ("os"), and
refer to current version of openstack.yaml_ for up-to-date format.
Adding Generators to Nailgun VolumeManager
------------------------------------------
The "size" field in a volume definition can be defined either directly
as an integer number in megabytes, or indirectly via a so called
generator. A generator is a Python lambda that can be called to calculate
the logical volume size dynamically. In the example above, the size is
generator lambda and "generator_args" is the list of arguments that will
be passed to the generator lambda.
There is a method in the VolumeManager class where generators are
defined. The new volume generator 'NEW_GENERATOR_TO_CALCULATE_SIZE' needs to
be added to the generators dictionary inside this method.
.. code-block:: python
class VolumeManager(object):
...
def call_generator(self, generator, *args):
generators = {
...
'NEW_GENERATOR_TO_CALCULATE_SIZE': lambda: 1000,
...
}
Creating a New Volume Group
---------------------------
Another way to add a new volume to slave nodes is to create a new volume
group and to define one or more logical volumes inside the volume group
definition::
- id: "NEW_VOLUME_GROUP_NAME"
type: "vg"
min_size: {generator: "calc_NEW_VOLUME_NAME_size"}
label: "Label for NEW VOLUME GROUP as it will be shown on UI"
volumes:
- mount: "/path/to/mount/point"
type: "lv"
name: "LOGICAL_VOLUME_NAME"
size:
generator: "another_generator_to_calc_LOGICAL_VOLUME_size"
generator_args: ["arg"]
file_system: "xfs"
Creating a New Plain Partition
------------------------------
Some node roles may be incompatible with LVM and would require plain
partitions. If that's the case, you may have to define a standalone
volume with type "partition" instead of "vg"::
- id: "NEW_PARTITION_NAME"
type: "partition"
min_size: {generator: "calc_NEW_PARTITION_NAME_size"}
label: "Label for NEW PARTITION as it will be shown on UI"
mount: "none"
disk_label: "LABEL"
file_system: "xfs"
Note how you can set the mount point to "none" and define a disk label to
identify the partition instead. It is only possible to set a disk label on
a formatted partition, so you have to set the "file_system" parameter to use
disk labels.
Updating the Node Role to Volumes Mapping
-----------------------------------------
Unlike a new logical volume added to a pre-existing logical volume
group, a new logical volume group or partition will not be allocated on
the node unless it is included in the role-to-volumes mapping
corresponding to one of the node's roles, like this::
volumes_roles_mapping:
controller:
- {allocate_size: "min", id: "os"}
- {allocate_size: "all", id: "image"}
compute:
...
* *controller* - the role for which partitioning information is given
* *id* - the id of the volume group or plain partition
* *allocate_size* - can be "min" or "all"
* *min* - allocate the volume with minimal size
* *all* - allocate all free space for the volume; if several volumes have
this key, the free space is allocated equally among them
Setting Volume Parameters from Nailgun Settings
-----------------------------------------------
In addition to VolumeManager generators, it is also possible to define
sizes, or any other values, in the Nailgun configuration file
(/etc/nailgun/settings.yaml). All fixture files are templated using the
Jinja2 templating engine just before being loaded into the Nailgun database.
For example, we can define mount point for a new volume as follows::
"mount": "{{settings.NEW_LOGICAL_VOLUME_MOUNT_POINT}}"
Of course, *NEW_LOGICAL_VOLUME_MOUNT_POINT* must be defined in the
settings file.


@ -1,66 +0,0 @@
Nailgun is the core of FuelWeb.
To allow enterprise features to be easily connected,
and the open source community to extend it as well, Nailgun must
have a simple, very well defined and documented core,
with great pluggable capabilities.
Reliability
___________
All software contains bugs and may fail, and Nailgun is not an exception to this rule.
In reality, it is not possible to cover all failure scenarios,
or even to come close to 100%.
The question is how we can design the system so that bugs in one module
do not damage the whole system.
An example from Nailgun's past:
the agent collected hardware information, including the current_speed param on the interfaces.
One of the interfaces had current_speed=0. At the registration attempt, Nailgun's validator
checked that current_speed > 0, and the validator raised an InvalidData exception,
which declined node discovery.
current_speed is one of the attributes which we can easily skip; it is not even
used for deployment in any way at the moment and serves only as information shown to the user.
But it prevented node discovery, and it made the server unusable.
Another example: due to the coincidence of a bug and wrong metadata on one of the nodes,
a GET request on that node would return 500 Internal Server Error.
It looks like this should affect only one node, and logically we could remove such a
failing node from the environment to get it discovered again.
However, the UI and API handlers were written in the following way:
* the UI calls /api/nodes to fetch info about all nodes just to show how many nodes are allocated, and how many are not
* NodesCollectionHandler would return 500 if any of the nodes raises an exception
It is easy to guess that the whole UI was completely broken by just one
failed node. It was impossible to perform any action in the UI.
These two examples give us a starting point to rethink how to avoid
crashing Nailgun just because one of the meta attributes is wrong.
First, we must divide the meta attributes discovered by the agent into two categories:
* absolutely required for node discovery (e.g. MAC address)
* non-required for discovery
* required for deployment (e.g. disks)
* non-required for deployment (e.g. current_speed)
Second, we must refactor the UI to fetch only the information required,
not the whole DB just to show two numbers. To be more specific,
we have to make sure that issues in one environment do not
affect other environments. Such a refactoring will require additional
handlers in Nailgun, as well as some additions such as pagination, etc.
From the Nailgun side, it is a bad idea to fail the whole CollectionHandler if one
of the objects fails to calculate some attribute. My (mihgen) idea is to simply set
the attribute to null if the calculation fails, and program the UI to handle it properly.
Unit tests must help in testing this.
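A defensive pattern along those lines might look like the following sketch (the names are made up and this is not existing Nailgun code):

.. code-block:: python

    # Sketch of the "set to None instead of failing" idea; names are made up.
    import logging

    logger = logging.getLogger(__name__)

    def safe_attribute(node, calculate):
        """Return the calculated attribute, or None if the calculation fails."""
        try:
            return calculate(node)
        except Exception:
            logger.exception("Failed to calculate attribute for node %s", node)
            return None  # the UI is expected to handle a null value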
Another idea is to limit the /api/nodes,
/api/networks and other calls
to work only if the cluster_id param is provided, whether set to None or to one of the cluster IDs.
That way we can be sure that one environment will not be able to break the whole UI.


@ -1,121 +0,0 @@
Creating roles
==============
Each release has its own role list which can be customized. A plain list of
roles is stored in the "roles" section of each release in the openstack.yaml_::
roles:
- controller
- compute
- cinder
The order in which the roles are listed here determines the order in which
they are displayed on the UI.
For each role in this list there should also be an entry in the "roles_metadata"
section. It defines the role name, description, and conflicts with other roles::
roles_metadata:
controller:
name: "Controller"
description: "..."
conflicts:
- compute
compute:
name: "Compute"
description: "..."
conflicts:
- controller
cinder:
name: "Storage - Cinder LVM"
description: "..."
"conflicts" section should contain a list of other roles that cannot be placed
on the same node. In this example, "controller" and "compute" roles cannot be
combined.
Roles restrictions
------------------
You should take the following restrictions for the role into consideration:
#. Controller
* There should be at least one controller. The `Not enough deployed
controllers` error occurs when you have 1 Controller installed
and want to redeploy that controller. During redeployment, it will
be taken off, so for some time the cluster will not have any
operating controller, which is wrong.
* If we are using simple multinode mode, then we cannot add more
than one controller.
* In HA mode, we can add as many controllers as we want, though
it is recommended to add at least 3.
* Controller role cannot be combined with compute.
#. Compute
* It is recommended to have at least one compute in a non-vCenter env
(https://bugs.launchpad.net/fuel/+bug/1381613: note that this is a
bug in the UI and not yaml-specific). Beginning
with Fuel 6.1, vCenter-related limitations are no longer present.
* Computes cannot be combined with controllers.
* Computes cannot be added if vCenter is chosen as a hypervisor.
#. Cinder
* It is impossible to add Cinder nodes to an environment with Ceph RBD.
* At least one Cinder node is recommended
* Cinder LVM needs to be enabled on the *Settings* tab of the Fuel web UI.
#. MongoDB
* Cannot be added to already deployed environment.
* Can be added only if Ceilometer is enabled.
* Cannot be combined with Ceph OSD and Compute.
* For a simple mode there has to be 1 Mongo node, for HA at least 3.
* It is not allowed to choose MongoDB role for a node if external
MongoDB setup is used.
#. Zabbix
* Only available in experimental ISO.
* Cannot be combined with any other roles.
* Only one Zabbix node can be assigned in an environment.
#. Ceph
* Cannot be used with Mongo and Zabbix.
* Minimal number of Ceph nodes is equal to
the `Ceph object replication factor` value from the *Settings* tab
of the Fuel web UI.
* Ceph cannot be added if vCenter is chosen as a hypervisor
and `volumes_ceph`, `images_ceph` and `ephemeral_ceph` settings
are all False.
#. Ceilometer
* Either a node with MongoDB role or external MongoDB turned on is
required.
.. _openstack.yaml: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml


@ -1,164 +0,0 @@
Extending OpenStack Settings
============================
Each release has a list of OpenStack settings that can be customized.
The settings configuration is stored in the "attributes_metadata.editable"
release section in the openstack.yaml_ file.
Settings are divided into groups. Each group should have a "metadata" section
with the following attributes::
metadata:
toggleable: true
enabled: false
weight: 40
* *toggleable* defines an ability to enable/disable the whole setting group
on UI (checkbox control is presented near a setting group label)
* *enabled* indicates whether the group is checked on the UI
* *weight* defines the order in which this group is displayed on the tab.
* *restrictions*: see restrictions_.
Other sections of a setting group represent separate settings. A setting
structure includes the following attributes::
syslog_transport:
value: "tcp"
label: "Syslog transport protocol"
description: ""
weight: 30
type: "radio"
values:
- data: "udp"
label: "UDP"
description: ""
restrictions:
- "cluster:net_provider != 'neutron'"
- data: "tcp"
label: "TCP"
description: ""
regex:
source: "^[A-z0-9]+$"
error: "Invalid data"
min: 1
max: 3
* *label* is a setting title that is displayed on UI
* *weight* defines the order in which this setting is displayed in its group.
This attribute is desirable
* *type* defines the type of UI control to use for the setting.
The following types are supported:
* *text* - single line input
* *password* - password input
* *textarea* - multiline input
* *checkbox* - multiple-options selector
* *radio* - single-option selector
* *select* - drop-down list
* *hidden* - invisible input
* *file* - file contents input
* *text_list* - multiple single line text inputs
* *textarea_list* - multiple multiline text inputs
* *regex* section is applicable for settings of "text" type. "regex.source"
is used when validating with a regular expression. "regex.error" contains
a warning displayed near invalid field
* *restrictions*: see restrictions_.
* *description* section should also contain information about setting
restrictions (dependencies, conflicts)
* *values* list is needed for settings of "radio" or "select" type to declare
its possible values. Options from the "values" list also support dependencies
and conflicts declaration.
* *min* applies to settings of "text_list" or "textarea_list" type
to declare a minimum input list length for the setting
* *max* applies to settings of "text_list" or "textarea_list" type
to declare a maximum input list length for the setting
.. _restrictions:
Restrictions
------------
Restrictions define when settings and setting groups should be available.
Each restriction is defined as a *condition* with optional *action*, *message*
and *strict*::
restrictions:
- condition: "settings:common.libvirt_type.value != 'kvm'"
message: "KVM only is supported"
- condition: "not ('experimental' in version:feature_groups)"
action: hide
* *condition* is an expression written in `Expression DSL`_. If the returned value
is true, then *action* is performed and *message* is shown (if specified).
* *action* defines what to do if *condition* is satisfied. Supported values
are "disable", "hide" and "none". "none" can be used just to display
*message*. This field is optional (default value is "disable").
* *message* is a message that is shown if *condition* is satisfied. This field
is optional.
* *strict* is a boolean flag which specifies how to handle non-existent keys
in expressions. If it is set to true (the default value), an exception is thrown in
case of a non-existent key. Otherwise the values of such keys are null.
Setting this flag to false is useful for conditions which rely on settings
provided by plugins::
restrictions:
- condition: "settings:other_plugin == null or settings:other_plugin.metadata.enabled != true"
strict: false
message: "Other plugin must be installed and enabled"
There are also short forms of restrictions::
restrictions:
- "settings:common.libvirt_type.value != 'kvm'": "KVM only is supported"
- "settings:storage.volumes_ceph.value == true"
.. _Expression DSL:
Expression Syntax
-----------------
Expression DSL can describe arbitrarily complex conditions that compare fields
of models and scalar values.
Supported types are:
* Number (123, 5.67)
* String ("qwe", 'zxc')
* Boolean (true, false)
* Null value (null)
* ModelPath (settings:common.libvirt_type.value, cluster:net_provider)
ModelPaths consist of a model name and a field name separated by ":". Nested
fields (like in settings) are supported, separated by ".". Models available for
usage are "cluster", "settings", "networking_parameters" and "version".
Supported operators are:
* "==". Returns true if operands are equal::
settings:common.libvirt_type.value == 'qemu'
* "!=". Returns true if operands are not equal::
cluster:net_provider != 'neutron'
* "in". Returns true if the right operand (Array or String) contains the left
operand::
'ceph-osd' in release:roles
* Boolean operators: "and", "or", "not"::
cluster:mode == "ha_compact" and not (settings:common.libvirt_type.value == 'kvm' or 'experimental' in version:feature_groups)
Parentheses can be used to override the order of precedence.
.. _openstack.yaml: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml


@ -1,56 +0,0 @@
Code testing policy
===================
When writing tests, please note the following rules:
#. Each code change MUST be covered with tests. The test for a specific code
change must fail if that change is reverted, i.e. the test must
really cover the code change and not just the general case. Bug fixes should
have tests for the failing case.
#. The tests MUST be in the same patchset with the code changes.
#. It's permitted not to write tests in extreme cases. The extreme cases are:
* hot-fix / bug-fix with *Critical* status.
* patching during Feature Freeze (FF_) or Hard Code Freeze (HCF_).
In this case, a request for writing tests should be reported as a bug with the
*technical-debt* tag. It has to be related to the bug which was fixed with
the patchset that didn't have the tests included.
.. _FF: https://wiki.openstack.org/wiki/FeatureFreeze
.. _HCF: https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
#. Before writing tests please consider which type(s) of testing is suitable
for the unit/module you're covering.
#. Test coverage should not be decreased.
#. The Nailgun application can be sliced into three layers (Presentation, Object,
Model). Consider using unit testing if the test is performed within one of
the layers or if implementing mock objects is not complicated.
#. The tests have to be isolated. The order and count of executions must not
influence test results.
#. Tests must be repetitive and must always pass regardless of how many times
they are run.
#. Parametrize tests to avoid testing the same behaviour many times with
different data. This gives additional flexibility in how the test methods are
used (see the sketch after this list).
#. Follow the DRY principle in test code. If common code parts are present, please
extract them into a separate method/class.
#. Unit tests are grouped by namespaces corresponding to the unit. For instance,
if the unit is located at ``nailgun/db/dl_detector.py``, the corresponding test
would be placed in ``nailgun/test/unit/nailgun.db/test_dl_detector.py``
#. Integration tests are grouped at the discretion of the developer.
#. Consider implementing performance tests for the following cases:
* a new handler is added which depends on the number of resources in the database.
* new logic is added which parses or operates on elements like nodes.
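For instance, a parametrized py.test case might look like this sketch; the validation helper is hypothetical:

.. code-block:: python

    import re

    import pytest

    def is_valid_mac(mac):
        # Hypothetical helper standing in for real validation code.
        return bool(re.match(r'^([0-9a-f]{2}:){5}[0-9a-f]{2}$', mac.lower()))

    @pytest.mark.parametrize('mac,expected', [
        ('52:54:00:78:55:68', True),
        ('52-54-00-78-55-68', False),
        ('not-a-mac', False),
    ])
    def test_is_valid_mac(mac, expected):
        assert is_valid_mac(mac) == expected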


@ -1,43 +0,0 @@
Nailgun database migrations
===========================
Nailgun uses Alembic (http://alembic.readthedocs.org/en/latest/) for database
migrations, allowing access to all common Alembic commands through "python
manage.py migrate"
This command creates DB tables for Nailgun service::
python manage.py syncdb
This is done by applying, one by one, a number of database migration files,
which are located in nailgun/nailgun/db/migration/alembic_migrations/versions.
Even if you make some changes in the SQLAlchemy models or create new ones,
this command does not create the corresponding DB tables unless you have
created a new migration file or updated an existing one.
A new migration file can be generated by running::
python manage.py migrate revision -m "Revision message" --autogenerate
There are two important points here:
1) This command always creates a "diff" between the current database state
and the one described by your SQLAlchemy models, so you should always
run "python manage.py syncdb" before this command. This prevents running
the migrate command with an empty database, which would cause it to
create all tables from scratch.
2) Some modifications may not be detected by "--autogenerate" and require
manual additions to the migration file. For example, adding a new value
to an ENUM field is not detected; a sketch of such a manual edit is shown below.
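For illustration, a migration that mixes autogenerated and manually added operations might look like the following sketch; the revision ids, table, type, and value names are all hypothetical:

.. code-block:: python

    """Hypothetical migration: add a column and a new ENUM value."""
    import sqlalchemy as sa
    from alembic import op

    # Revision identifiers used by Alembic (placeholder values).
    revision = '123456789abc'
    down_revision = '987654321def'

    def upgrade():
        # Detected by --autogenerate: a new column.
        op.add_column('nodes', sa.Column('custom_flag', sa.Boolean(),
                                         nullable=True))
        # NOT detected by --autogenerate: a new ENUM value,
        # so it has to be written by hand.
        op.execute("ALTER TYPE node_statuses ADD VALUE 'custom_status'")

    def downgrade():
        op.drop_column('nodes', 'custom_flag')
        # Removing a value from a PostgreSQL ENUM is not supported directly;
        # a real downgrade would have to recreate the type.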
After creating a migration file, you can upgrade the database to a new state
by using this command::
python manage.py migrate upgrade +1
To merge your migration with an existing migration file, you can just move
lines of code from the "upgrade()" and "downgrade()" methods to the bottom of the
corresponding methods in the previous migration file. As of this writing,
the migration file is called "current.py".
For all additional features and needs, you may refer to Alembic documentation:
http://alembic.readthedocs.org/en/latest/tutorial.html


@ -1,256 +0,0 @@
Setting up Environment
======================
For information on how to get source code see :ref:`getting-source`.
.. _nailgun_dependencies:
Preparing Development Environment
---------------------------------
.. warning:: Nailgun requires Python 2.7. Please check
installed Python version using ``python --version``.
#. Nailgun can be found in fuel-web/nailgun
#. Install and configure PostgreSQL database. Please note that
Ubuntu 12.04 requires postgresql-server-dev-9.1 while
Ubuntu 14.04 requires postgresql-server-dev-9.3::
sudo apt-get install --yes postgresql postgresql-server-dev-all
sudo sed -ir 's/peer/trust/' /etc/postgresql/9.*/main/pg_hba.conf
sudo service postgresql restart
sudo -u postgres psql -c "CREATE ROLE nailgun WITH LOGIN PASSWORD 'nailgun'"
sudo -u postgres createdb nailgun
If required, you can specify Unix-domain
socket in 'host' settings to connect to PostgreSQL database:
.. code-block:: yaml
DATABASE:
engine: "postgresql"
name: "nailgun"
host: "/var/run/postgresql"
port: ""
user: "nailgun"
passwd: "nailgun"
#. Install pip and development tools::
sudo apt-get install --yes python-dev python-pip
#. Install virtualenv. This step increases flexibility
when dealing with environment settings and package installation::
sudo pip install virtualenv virtualenvwrapper
. /usr/local/bin/virtualenvwrapper.sh # you can save this to .bashrc
mkvirtualenv fuel # you can use any name instead of 'fuel'
workon fuel # command selects the particular environment
#. Install Python dependencies. This section assumes that you use a virtual environment.
Otherwise, you must install all packages globally.
You can install pip and use it to require all the other packages at once::
sudo apt-get install --yes git
git clone https://github.com/openstack/fuel-web.git
cd fuel-web
pip install --allow-all-external -r nailgun/test-requirements.txt
#. Install Nailgun in development mode by running the command below in the
`nailgun` folder. This way, Nailgun extensions will be discovered::
python setup.py develop
Or if you are using pip::
pip install -e .
#. Create required folder for log files::
sudo mkdir /var/log/nailgun
sudo chown -R `whoami`.`whoami` /var/log/nailgun
sudo chmod -R a+w /var/log/nailgun
Setup for Nailgun Unit Tests
----------------------------
#. Nailgun unit tests use `Tox <http://testrun.org/tox/latest/>`_ for generating test
environments. This means that you don't need to install all Python packages required
for the project to run them, because Tox does this by itself.
#. First, create a virtualenv the way it's described in the previous section. Then, install
the Tox package::
workon fuel #activate virtual environment created in the previous section
pip install tox
#. Run the Nailgun backend unit tests and flake8 test::
sudo apt-get install puppet-common #install missing package required by tasklib tests
./run_tests.sh
#. You can also run the same tests by hand, using tox itself::
cd nailgun
tox -epy26 -- -vv nailgun/test
tox -epep8
#. Tox reuses the previously created environment. After making changes to package
dependencies, tox should be run with the **-r** option to recreate existing virtualenvs::
tox -r -epy26 -- -vv nailgun/test
tox -r -epep8
Running Nailgun Performance Tests
+++++++++++++++++++++++++++++++++
Now you can run performance tests using -x option:
::
./run_tests.sh -x
If -x is not specified, run_tests.sh will not run performance tests.
The -n or -N option works exactly as before: it states whether
tests should be launched or not.
For example:
* run_tests.sh -n -x - run both regular and performance Nailgun tests.
* run_tests.sh -x - run nailgun performance tests only, do not run
regular Nailgun tests.
* run_tests.sh -n - run regular Nailgun tests only.
* run_tests.sh -N - run all tests except for Nailgun regular and
performance tests.
.. _running-parallel-tests-py:
Running parallel tests with py.test
-----------------------------------
Now tests can be run over several processes
in a distributed manner; each test is executed
within an isolated database.
Prerequisites
+++++++++++++
- Nailgun user requires createdb permission.
- Postgres database is used for initial connection.
- If createdb cannot be granted for the environment,
then several databases should be created. The number of
databases should be equal to the *TEST_WORKERS* variable.
The database names should have the following format:
*nailgun0*, *nailgun1*.
- If no *TEST_WORKERS* variable is provided, then a default
database name will be used. Often it is nailgun,
but you can overwrite it with the *TEST_NAILGUN_DB*
environment variable.
- To execute parallel tests on your local environment,
run the following command from *fuel-web/nailgun*:
::
py.test -n 4 nailgun/test
You can also run it from *fuel-web*:
::
py.test -n 4 nailgun/nailgun/test
.. _running-nailgun-in-fake-mode:
Running Nailgun in Fake Mode
----------------------------
#. Switch to virtual environment::
workon fuel
#. Populate the database from fixtures::
cd nailgun
./manage.py syncdb
./manage.py loaddefault # It loads all basic fixtures listed in settings.yaml
./manage.py loaddata nailgun/fixtures/sample_environment.json # Loads fake nodes
#. Start application in "fake" mode, when no real calls to orchestrator
are performed::
python manage.py run -p 8000 --fake-tasks | egrep --line-buffered -v '^$|HTTP' >> /var/log/nailgun.log 2>&1 &
#. (optional) You can also use the --fake-tasks-amqp option if you want the
fake environment to use a real RabbitMQ instead of a fake one::
python manage.py run -p 8000 --fake-tasks-amqp | egrep --line-buffered -v '^$|HTTP' >> /var/log/nailgun.log 2>&1 &
Nailgun in fake mode is usually used for Fuel UI development and Fuel UI
functional tests. For more information, please check out the README file in
the fuel-ui repo.
Note: Diagnostic Snapshot is not available in fake mode.
Running the Fuel System Tests
-----------------------------
For fuel-devops configuration info please refer to
:doc:`Devops Guide </devops>` article.
#. Run the integration test::
cd fuel-main
make test-integration
#. To save time, you can execute individual test cases from the
integration test suite like this (nice thing about TestAdminNode
is that it takes you from nothing to a Fuel master with 9 blank nodes
connected to 3 virtual networks)::
cd fuel-main
export PYTHONPATH=$(pwd)
export ENV_NAME=fuelweb
export PUBLIC_FORWARD=nat
export ISO_PATH=`pwd`/build/iso/fuelweb-centos-6.5-x86_64.iso
./fuelweb_tests/run_tests.py --group=test_cobbler_alive
#. The test harness creates a snapshot of all nodes called 'empty'
before starting the tests, and creates a new snapshot if a test
fails. You can revert to a specific snapshot with this command::
dos.py revert --snapshot-name <snapshot_name> <env_name>
#. To fully reset your test environment, tell the Devops toolkit to erase it::
dos.py list
dos.py erase <env_name>
Flushing database before/after running tests
--------------------------------------------
The database should be cleaned after running tests;
before parallel tests were enabled,
you could only run dropdb with *./run_tests.sh* script.
Now you need to run dropdb for each slave node:
the *py.test --cleandb <path to the tests>* command is introduced for this
purpose.


@ -1,35 +0,0 @@
Fuel UI Internationalization Guidelines
=======================================
Fuel UI internationalization is done using `i18next <http://i18next.com/>`_
library. Please read `i18next documentation
<http://i18next.com/pages/doc_features.html>`_ first.
All translations are stored in nailgun/static/translations/core.json
If you want to add new strings to the translations file, follow these rules:
#. Use words describing placement of strings like "button", "title", "summary",
"description", "label" and place them at the end of the key
(like "apply_button", "cluster_description", etc.). One-word strings may
look better without any of these suffixes.
#. Do NOT use shortcuts ("bt" instead of "button", "descr" instead of
"description", etc.)
#. Nest keys if it makes sense, for example, if there are a few values
for statuses, etc.
#. If some keys are used in a few places (for example, in utils), move them to
"common.*" namespace.
#. Use defaultValue ONLY with dynamically generated keys.
Validating translations
=========================================
To search for absent and unnecessary translation keys you can perform the following steps:
#. Open terminal and cd to fuel-web/nailgun directory.
#. Run "gulp i18n:validate" to start the validation.
If there are any mismatches, you'll see the list of mismatching keys.
Gulp task "i18n:validate" has one optional argument - a comma-separated list of
languages to compare with base English en-US translations. Run
"gulp i18n:validate --locales=zh-CN" to perform comparison only between English
and Chinese keys. You can also run "grunt i18n:validate --locales=zh-CN,ru-RU"
to perform comparison between English-Chinese and English-Russian keys.


@ -1,122 +0,0 @@
Interacting with Nailgun using Shell
====================================
.. contents:: :local:
Launching shell
---------------
Development shell for Nailgun can only be accessed inside its virtualenv,
which can be activated by launching the following command::
source /opt/nailgun/bin/activate
After that, the shell is accessible through this command::
python /opt/nailgun/bin/manage.py shell
Its appearance depends on the availability of ipython on the current system. This
package is not available by default on the master node, but you can use the
command above to run a default Python shell inside the Nailgun environment::
Python 2.7.3 (default, Feb 27 2014, 19:58:35)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
.. note:: If you want to quickly access the database,
use *manage.py dbshell* command.
Interaction
-----------
There are two ways a user may interact with Nailgun object instances
through the shell:
* Using the Nailgun objects abstraction
* Using raw SQLAlchemy queries
**IMPORTANT NOTE:** The second way (which amounts to modifying objects in the DB
directly) should only be used if nothing else works.
.. _shell-objects:
Objects approach
****************
Importing objects may look like this::
>>> from nailgun import objects
>>> objects.Release
<class 'nailgun.objects.release.Release'>
>>> objects.Cluster
<class 'nailgun.objects.cluster.Cluster'>
>>> objects.Node
<class 'nailgun.objects.node.Node'>
These are common abstractions around basic items Nailgun is dealing with.
The reference on how to work with them can be found here:
:ref:`objects-reference`.
These objects allow the user to interact with items in the DB at a higher level, which
includes all the necessary business logic that is not executed when values in the DB
are changed by hand. For working examples, continue to :ref:`shell-faq`.
SQLAlchemy approach
*******************
Using raw SQLAlchemy models and queries allows the user to modify objects through
the ORM, in almost the same way as it can be done through the SQL CLI.
First, you need to get a DB session and import models::
>>> from nailgun.db import db
>>> from nailgun.db.sqlalchemy import models
>>> models.Release
<class 'nailgun.db.sqlalchemy.models.release.Release'>
>>> models.Cluster
<class 'nailgun.db.sqlalchemy.models.cluster.Cluster'>
>>> models.Node
<class 'nailgun.db.sqlalchemy.models.node.Node'>
and then get the necessary instances from the DB, modify them, and commit the current
transaction::
>>> node = db().query(models.Node).get(1) # getting object by ID
>>> node
<nailgun.db.sqlalchemy.models.node.Node object at 0x3451790>
>>> node.status = 'error'
>>> db().commit()
You may refer to `SQLAlchemy documentation <http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html>`_
for more information on how to construct queries.
.. _shell-faq:
Frequently Asked Questions
--------------------------
As a first step, in any case, objects should be imported as
described here: :ref:`shell-objects`.
**Q:** How can I change status for particular node?
**A:** Just retrieve node by its ID and update it::
>>> node = objects.Node.get_by_uid(1)
>>> objects.Node.update(node, {"status": "ready"})
>>> objects.Node.save(node)
**Q:** How can I remove a node from a cluster by hand?
**A:** Get node by ID and call its method::
>>> node = objects.Node.get_by_uid(1)
>>> objects.Node.remove_from_cluster(node)
>>> objects.Node.save(node)
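The following is an additional, hedged illustration that only combines the calls
shown above with a standard SQLAlchemy ``filter_by`` query; verify the attribute
names against your Nailgun version before relying on it.

**Q:** How can I list the nodes of a particular cluster and mark one of them as ready?

**A:** Query the nodes with raw SQLAlchemy (read-only), then update the chosen one
through the objects layer so that the business logic is applied::

    >>> from nailgun import objects
    >>> from nailgun.db import db
    >>> from nailgun.db.sqlalchemy import models
    >>> # read-only listing of nodes assigned to cluster 1
    >>> nodes = db().query(models.Node).filter_by(cluster_id=1).all()
    >>> for n in nodes:
    ...     print("%s %s" % (n.id, n.status))
    >>> # update one node through the objects layer so business logic runs
    >>> node = objects.Node.get_by_uid(nodes[0].id)
    >>> objects.Node.update(node, {"status": "ready"})
    >>> objects.Node.save(node)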


@ -1,41 +0,0 @@
Managing UI Dependencies
========================
The dependencies of Fuel UI are managed by NPM_.
The NPM packages used are listed in the *dependencies* and *devDependencies* sections
of the package.json file. To install all required packages, run::
npm install
To use gulp_ you also need to install the gulp package globally::
sudo npm install -g gulp
To add a new package, it is not enough just to add a new entry to the
package.json file, because npm-shrinkwrap_ is used to lock down package
versions. First you need to install the clingwrap package globally::
sudo npm install -g clingwrap
Then install the required package::
npm install --save some-package
Then run::
clingwrap some-package
to update npm-shrinkwrap.json.
Alternatively, you can completely regenerate npm-shrinkwrap.json by running::
rm npm-shrinkwrap.json
rm -rf node_modules
npm install
npm shrinkwrap --dev
clingwrap npmbegone
.. _npm: https://www.npmjs.org/
.. _gulp: http://gulpjs.com/
.. _npm-shrinkwrap: https://www.npmjs.org/doc/cli/npm-shrinkwrap.html


@ -1,27 +0,0 @@
.. _nailgun-development:
Nailgun Development Instructions
================================
.. toctree::
development/env
development/i18n
development/db_migrations
development/shell_doc
development/api_doc
development/objects
development/ui_dependencies
development/code_testing
Nailgun Customization Instructions
==================================
.. _nailgun-customization:
.. toctree::
customization/partitions
customization/reliability
customization/roles
customization/settings
customization/bonding_in_ui


@ -1,533 +0,0 @@
Health Check (OSTF) Contributor's Guide
=======================================
- Health Check or OSTF?
- Main goal of OSTF
- Main rules of code contributions
- How to setup my environment?
- How should my modules look like?
- How to execute my tests?
- Now I'm done, what's next?
- General OSTF architecture
- OSTF package architecture
- OSTF Adapter architecture
- Appendix 1
Health Check or OSTF?
^^^^^^^^^^^^^^^^^^^^^
The Fuel UI has a tab called Health Check. Within the development team, however,
there is an established acronym, OSTF, which stands for OpenStack Testing Framework.
Both names refer to the same thing. For simplicity, this document uses the widely
accepted term OSTF.
Main goal of OSTF
^^^^^^^^^^^^^^^^^
After OpenStack is installed via Fuel, it is very important to understand whether the installation was successful and whether the environment is ready for work.
OSTF provides a set of health checks - sanity, smoke, HA and additional components tests that check the proper operation of all system components in typical conditions.
There are tests for OpenStack scenario validation and other specific tests useful in validating an OpenStack deployment.
Main rules of code contributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are a few rules you need to follow to successfully pass the code review and contribute high-quality code.
How to setup my environment?
----------------------------
The OSTF repository is located at git.openstack.org, with a GitHub mirror at https://github.com/openstack/fuel-ostf. You also have to install and set up Gerrit; otherwise you will not be able to contribute code. To do that, follow the registration and installation instructions in https://wiki.openstack.org/wiki/CLA#Contributors_License_Agreement
After you've completed the instructions, you're all set to begin editing/creating code.
How should my modules look like?
--------------------------------
The rules are quite simple:
- follow Python coding rules
- follow OpenStack contributor's rules
- watch out for mistakes in docstrings
- follow correct test structure
- always execute your tests after writing them and before sending them to review
Speaking of Python coding standards, you can find the style guide here: http://www.python.org/dev/peps/pep-0008/. You should read it carefully once, and after implementing scripts you need to run some checks to ensure that your code corresponds to the standards. Without correcting coding standard issues, your scripts will not be merged to master.
You should always follow the following implementation rules:
- name the test module, test class and test method beginning with the word "test"
- if you have some tests that should be run in a specific order, add a number to the test method name, for example: test_001_create_keypair
- use verify(), verify_response_body_content() and other methods from the mixins (see the fuel_health/common/test_mixins.py section in the OSTF package architecture), passing them the failed step parameter
- always list all steps you are checking with the test_mixins methods in the Scenario section of the docstring, in the correct order
- always use the verify() method when you want to check an operation that can go into an infinite loop
The test docstrings are another important piece and you should always stick to the following docstring structure:
- test title - a test description that will always be shown in the UI (the remaining part of the docstring is only shown when the test fails)
- target component (optional) - component name that is tested (e.g. Nova, Keystone)
- blank line
- test scenario, example::
Scenario:
1. Create a new small-size volume.
2. Wait for volume status to become "available".
3. Check volume has correct name.
4. Create new instance.
5. Wait for "Active" status.
6. Attach volume to an instance.
7. Check volume status is "in use".
8. Get information on the created volume by its id.
9. Detach volume from the instance.
10. Check volume has "available" status.
11. Delete volume.
- test duration - an estimate of how much time the test will take
- deployment tags (optional) - gives information about the kind of environment in which the test will be run; possible values are CentOS, Ubuntu, RHEL, nova_network, Heat, Sahara
Here's a test example which confirms the above explanations:
.. image:: _images/test_docstring_structure.png
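In addition, here is a minimal sketch of a test that follows this docstring
structure. The class name, base class and helper methods are illustrative
assumptions, not an exact excerpt from fuel_health; check the real test modules
and the verify() signature in fuel_health/common/test_mixins.py before reusing it::

    from fuel_health import nmanager   # assumed import; adjust to the real module


    class TestVolumeCreate(nmanager.SmokeChecksTest):   # hypothetical class and base

        def test_001_create_volume(self):
            """Create small-size volume
            Target component: Cinder

            Scenario:
                1. Create a new small-size volume.
                2. Wait for volume status to become "available".
                3. Delete the volume.
            Duration: 100 s.
            Deployment tags: CENTOS, Ubuntu
            """
            # Each step is wrapped in verify(): timeout, callable, failed step
            # number, failure message, action name, then the callable's arguments.
            # _create_volume, _wait_for_volume_status and _delete_volume are
            # hypothetical helpers used only for illustration.
            volume = self.verify(60, self._create_volume, 1,
                                 "Volume creation failed.", "volume creation")
            self.verify(60, self._wait_for_volume_status, 2,
                        "Volume status did not become 'available'.",
                        "volume status check", volume, "available")
            self.verify(30, self._delete_volume, 3,
                        "Volume deletion failed.", "volume deletion", volume)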
Test run ordering and profiles
------------------------------
Each test set (sanity, smoke, HA and platform_tests) contains a special
variable in __init__.py module which is called __profile__.
The profile variable makes it possible to set different rules, such as the test run
order, deployment tags, information gathering on cleanup, and the expected
time estimate for running a test set.
If you are going to develop a new set of tests, you need to create an __init__.py module
and place the __profile__ dict in it. It is important that your profile matches
the following structure::
__profile__ = {
"test_runs_ordering_priority": 4,
"id": "platform_tests",
"driver": "nose",
"test_path": "fuel_health/tests/platform_tests",
"description": ("Platform services functional tests."
" Duration 3 min - 60 min"),
"cleanup_path": "fuel_health.cleanup",
"deployment_tags": ['additional_components'],
"exclusive_testsets": [],
"available_since_release": "2015.2-6.1",
}
Take note of each field in the profile, along with acceptable values.
- test_runs_ordering_priority is the field responsible for setting the priority
in which the test set will be displayed. For example, if you set "6" for
sanity tests and "3" for smoke tests, the smoke test set will be displayed
first on the Health Check tab;
- id is just the unique id of a test set;
- driver is the field used for setting the test runner;
- test_path is the field representing the path where the test set is located,
starting from the fuel_health directory;
- description is the field that contains the value to be shown on the UI
as the test duration;
- cleanup_path is the field that specifies the path to the module responsible for
the cleanup mechanism (if you do not specify this value, cleanup will not be
started after your test set);
- deployment_tags field is used for defining when these tests should be
available depending on cluster settings;
- exclusive_testsets field gives you an opportunity to specify test sets that
will be run successively. For example, you can specify "smoke_sanity" for
smoke and sanity test set profiles, then these tests will be run not
simultaneously, but successively.
- available_since_release field is responsible for the release version
starting from which a particular test set can be run. This means that
the test will run only on the specified or newer version of Fuel.
It is necessary to specify a value for each of the attributes. The only optional
attribute is "deployment_tags", meaning you may omit it from your profile
entirely. You can leave "exclusive_testsets" empty ([]) to
run your test set simultaneously with the other ones.
How to execute my tests?
------------------------
The simplest way is to install Fuel; OSTF will be installed as part of it.
- install VirtualBox
- build the Fuel ISO: :ref:`building-fuel-iso`
- use the `VirtualBox scripts to run the ISO <https://github.com/openstack/fuel-virtualbox/tree/master/>`_
- once the installation is finished, go to the Fuel UI (usually it's 10.20.0.2:8000) and create a new cluster with the necessary configuration
- execute::
rsync -avz <path to fuel_health>/ root@10.20.0.2:/opt/fuel_plugins/ostf/lib/python2.6/site-packages/fuel_health/
- execute::
ssh root@10.20.0.2
ps uax | grep supervisor
kill <supervisord process number>
service supervisord start
- go to Fuel UI and run your new tests
Now I'm done, what's next?
--------------------------
- don't forget to run pep8 on the modified part of the code
- commit your changes
- execute git review
- ask for a review in IRC
From this point on, you only need to fix and commit review comments (if there are any) by following the same steps. If there are no review comments left, the reviewers will accept your code and it will be automatically merged to master.
General OSTF architecture
^^^^^^^^^^^^^^^^^^^^^^^^^
Tests are included in Fuel, so they are accessible as soon as you install Fuel in your lab. The OSTF architecture is quite simple; it consists of two main packages:
- fuel_health, which contains the test set itself and related modules
- fuel_plugin, which contains the OSTF adapter that forms the necessary test list based on the cluster deployment options and transfers it to the UI using the REST API
There is also some information necessary for test execution itself. Several modules gather this information and parse it into objects that are used in the tests themselves. All of this information is gathered from the Nailgun component.
OSTF REST api interface
-----------------------
The Fuel OSTF module provides not only testing, but also a RESTful
interface, a means for interacting with its components.
In terms of REST, all types of OSTF entities are managed by three HTTP verbs:
GET, POST and PUT.
The following basic URL is used to make requests to OSTF::
{ostf_host}:{ostf_port}/v1/{requested_entity}/{cluster_id}
Currently, you can get information about testsets, tests and testruns
via GET requests on the corresponding URLs for ostf_plugin.
To get information about testsets, make the following GET request on::
{ostf_host}:{ostf_port}/v1/testsets/{cluster_id}
To get information about tests, make GET request on::
{ostf_host}:{ostf_port}/v1/tests/{cluster_id}
To get information about executed tests, make the following GET
requests:
- for the whole set of testruns::
{ostf_host}:{ostf_port}/v1/testruns/
- for the particular testrun::
{ostf_host}:{ostf_port}/v1/testruns/{testrun_id}
- for the list of testruns executed on the particular cluster::
{ostf_host}:{ostf_port}/v1/testruns/last/{cluster_id}
To start test execution, make the following POST request on this URL::
{ostf_host}:{ostf_port}/v1/testruns/
The body must consist of a JSON data structure with the test sets and the lists
of tests belonging to them that must be executed. It should also have
metadata with information about the cluster
(the key named "cluster_id" is used to store the parameter's value)::
[
{
"testset": "test_set_name",
"tests": ["module.path.to.test.1", ..., "module.path.to.test.n"],
"metadata": {"cluster_id": id}
},
...,
{...}, # info for another testrun
{...},
...,
{...}
]
If the request succeeds, the OSTF adapter returns the attributes of the created testrun entities
in JSON format. If you want to launch only one test, put its id
into the list. To launch all tests, leave the list empty (the default).
Example of the response::
[
{
"status": "running",
"testset": "sanity",
"meta": null,
"ended_at": "2014-12-12 15:31:54.528773",
"started_at": "2014-12-12 15:31:41.481071",
"cluster_id": 1,
"id": 1,
"tests": [.....info on tests.....]
},
....
]
You can also stop and restart testruns. To do that, make a PUT request on
testruns. The request body must contain the list of the testruns and
tests to be stopped or restarted. Example::
[
{
"id": test_run_id,
"status": ("stopped" | "restarted"),
"tests": ["module.path.to.test.1", ..., "module.path.to.test.n"]
},
...,
{...}, # info for another testrun
{...},
...,
{...}
]
If the request succeeds, the OSTF adapter returns the attributes of the processed testruns
in JSON format. The structure is the same as for the POST request described
above.
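As a hedged illustration, the snippet below drives the endpoints described above
from Python using the third-party requests library. The OSTF host, port and
cluster id are placeholders (the adapter usually runs on the Fuel master node);
adjust them to your environment::

    import requests

    OSTF_URL = "http://10.20.0.2:8777/v1"   # placeholder host:port of the OSTF adapter
    CLUSTER_ID = 1                          # placeholder cluster id

    # list the test sets available for the cluster
    testsets = requests.get("%s/testsets/%s" % (OSTF_URL, CLUSTER_ID)).json()
    print(testsets)

    # start a run of the sanity set; an empty "tests" list means "run all tests"
    payload = [{
        "testset": "sanity",
        "tests": [],
        "metadata": {"cluster_id": CLUSTER_ID},
    }]
    response = requests.post("%s/testruns/" % OSTF_URL, json=payload)
    print(response.json()[0]["status"])     # e.g. "running", as in the example above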
OSTF package architecture
^^^^^^^^^^^^^^^^^^^^^^^^^
The main modules used in fuel_health package are:
The **config** module is responsible for getting the data necessary for the tests. All data is gathered from the Nailgun component or from a text config.
Nailgun provides us with the following data:
- OpenStack admin user name
- OpenStack admin user password
- OpenStack admin user tenant
- IP addresses of controller nodes
- IP addresses of compute nodes - easily obtained from Nailgun by parsing the role key in the response JSON
- deployment mode (HA /non-HA)
- deployment os (RHEL/CENTOS)
- keystone / horizon urls
- tiny proxy address
All other information we need is stored in config.py itself and remains at the defaults in this case. If you are using data from Nailgun (an OpenStack installation using Fuel), you should do the following:
initialize the NailgunConfig() class.
Nailgun is running on the Fuel master node, so you can easily get data for each cluster by invoking curl http://localhost:8000/api/<uri_here>. The cluster id can be obtained from the OS environment (provided by Fuel).
If you want to run OSTF for a non-Fuel installation, change the initialization of NailgunConfig() to FileConfig() and set the parameters annotated with "(If you are using FileConfig set appropriate value here)" - see Appendix 1 (the default config file path is fuel_health/etc/test.conf).
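A minimal sketch of that choice is shown below; the import path is an assumption
and may differ in your fuel_health checkout, so treat it as illustrative only::

    # Illustrative only: pick the configuration backend for fuel_health.
    from fuel_health import config

    def load_config(running_under_fuel=True):
        if running_under_fuel:
            # NailgunConfig gathers cluster data from the Nailgun API on the master node
            return config.NailgunConfig()
        # FileConfig reads the static file (default: fuel_health/etc/test.conf)
        return config.FileConfig()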
**cleanup.py** - invoked by the OSTF adapter in case the user stops test execution in the Web UI. This module is responsible for deleting all test resources created during the test suite run. It simply finds all resources whose name starts with ost1_test- and destroys each of them using the _delete_it method.
*Important: if you decide to add additional cleanup for a resource, you have to keep in mind:
All resources depend on each other, which is why deleting a resource that is still in use will give you an exception;
Don't forget that deleting several resources requires an ID for each resource, not its name. You'll need to set the optional delete_type argument of the _delete_it method to id.*
**nmanager.py** contains base classes for tests. Each base class contains setup, teardown and methods that act as an interlayer between tests and OpenStack python clients (see nmanager architecture diagram).
.. image:: _images/nmanager.png
**fuel_health/common/test_mixins.py** - provides mixins to pack response verification into a human-readable message. For assertion failure cases, the methods require the step on which we failed and a descriptive
message to be provided. The verify() method also requires a timeout value to be set. This method should be used when checking OpenStack operations (such as instance creation). A cluster
operation taking too long may be a sign of a problem, so this protects the tests from such a situation or even from going into an infinite loop.
**fuel_health/common/ssh.py** - provides an easy way to ssh to nodes or instances. This module uses the paramiko library and contains some useful wrappers that handle routine tasks for you
(such as ssh key authentication, starting transport threads, etc). It also contains a rather useful method, exec_command_on_vm(), which opens an ssh connection to an instance through a controller and then executes
the necessary command on it.
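To give an idea of what such a wrapper does, here is a hedged sketch built
directly on paramiko; it is not the fuel_health.common.ssh API itself, and the
function name and parameters are illustrative only::

    import paramiko

    def exec_on_host(host, username, command, key_filename=None, password=None):
        # Open an SSH connection with key or password authentication
        # and run a single command, returning its stdout and stderr.
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=username,
                       key_filename=key_filename, password=password)
        try:
            _, stdout, stderr = client.exec_command(command)
            return stdout.read(), stderr.read()
        finally:
            client.close()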
OSTF Adapter architecture
^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: _images/plugin_structure.png
The important thing to remember about the OSTF Adapter is that, just as when writing tests, all code should follow the PEP 8 standard.
Appendix 1
----------
::
IdentityGroup = [
cfg.StrOpt('catalog_type',
default='identity', (may be changed to 'keystone')
help="Catalog type of the Identity service."),
cfg.BoolOpt('disable_ssl_certificate_validation',
default=False,
help="Set to True if using self-signed SSL certificates."),
cfg.StrOpt('uri',
default='http://localhost/', (If you are using FileConfig set the appropriate address here)
help="Full URI of the OpenStack Identity API (Keystone), v2"),
cfg.StrOpt('url',
default='http://localhost:5000/v2.0/', (If you are using FileConfig set the appropriate Horizon address here)
help="Dashboard Openstack url, v2"),
cfg.StrOpt('uri_v3',
help='Full URI of the OpenStack Identity API (Keystone), v3'),
cfg.StrOpt('strategy',
default='keystone',
help="Which auth method does the environment use? "
"(basic|keystone)"),
cfg.StrOpt('region',
default='RegionOne',
help="The identity region name to use."),
cfg.StrOpt('admin_username',
default='nova' , (If you are using FileConfig set appropriate value here)
help="Administrative Username to use for"
"Keystone API requests."),
cfg.StrOpt('admin_tenant_name', (If you are using FileConfig set appropriate value here)
default='service',
help="Administrative Tenant name to use for Keystone API "
"requests."),
cfg.StrOpt('admin_password', (If you are using FileConfig set appropriate value here)
default='nova',
help="API key to use when authenticating as admin.",
secret=True),
]
ComputeGroup = [
cfg.BoolOpt('allow_tenant_isolation',
default=False,
help="Allows test cases to create/destroy tenants and "
"users. This option enables isolated test cases and "
"better parallel execution, but also requires that "
"OpenStack Identity API admin credentials are known."),
cfg.BoolOpt('allow_tenant_reuse',
default=True,
help="If allow_tenant_isolation is True and a tenant that "
"would be created for a given test already exists (such "
"as from a previously-failed run), re-use that tenant "
"instead of failing because of the conflict. Note that "
"this would result in the tenant being deleted at the "
"end of a subsequent successful run."),
cfg.StrOpt('image_ssh_user',
default="root", (If you are using FileConfig set appropriate value here)
help="User name used to authenticate to an instance."),
cfg.StrOpt('image_alt_ssh_user',
default="root", (If you are using FileConfig set appropriate value here)
help="User name used to authenticate to an instance using "
"the alternate image."),
cfg.BoolOpt('create_image_enabled',
default=True,
help="Does the test environment support snapshots?"),
cfg.IntOpt('build_interval',
default=10,
help="Time in seconds between build status checks."),
cfg.IntOpt('build_timeout',
default=160,
help="Timeout in seconds to wait for an instance to build."),
cfg.BoolOpt('run_ssh',
default=False,
help="Does the test environment support snapshots?"),
cfg.StrOpt('ssh_user',
default='root', (If you are using FileConfig set appropriate value here)
help="User name used to authenticate to an instance."),
cfg.IntOpt('ssh_timeout',
default=50,
help="Timeout in seconds to wait for authentication to "
"succeed."),
cfg.IntOpt('ssh_channel_timeout',
default=20,
help="Timeout in seconds to wait for output from ssh "
"channel."),
cfg.IntOpt('ip_version_for_ssh',
default=4,
help="IP version used for SSH connections."),
cfg.StrOpt('catalog_type',
default='compute',
help="Catalog type of the Compute service."),
cfg.StrOpt('path_to_private_key',
default='/root/.ssh/id_rsa', (If you are using FileConfig set appropriate value here)
help="Path to a private key file for SSH access to remote "
"hosts"),
cfg.ListOpt('controller_nodes',
default=[], (If you are using FileConfig set appropriate value here)
help="IP addresses of controller nodes"),
cfg.ListOpt('compute_nodes',
default=[], (If you are using FileConfig set appropriate value here)
help="IP addresses of compute nodes"),
cfg.StrOpt('controller_node_ssh_user',
default='root', (If you are using FileConfig set appropriate value here)
help="ssh user of one of the controller nodes"),
cfg.StrOpt('controller_node_ssh_password',
default='r00tme', (If you are using FileConfig set appropriate value here)
help="ssh user pass of one of the controller nodes"),
cfg.StrOpt('image_name',
default="TestVM", (If you are using FileConfig set appropriate value here)
help="Valid secondary image reference to be used in tests."),
cfg.StrOpt('deployment_mode',
default="ha", (If you are using FileConfig set appropriate value here)
help="Deployments mode"),
cfg.StrOpt('deployment_os',
default="RHEL", (If you are using FileConfig set appropriate value here)
help="Deployments os"),
cfg.IntOpt('flavor_ref',
default=42,
help="Valid primary flavor to use in tests."),
]
ImageGroup = [
cfg.StrOpt('api_version',
default='1',
help="Version of the API"),
cfg.StrOpt('catalog_type',
default='image',
help='Catalog type of the Image service.'),
cfg.StrOpt('http_image',
default='http://download.cirros-cloud.net/0.3.1/'
'cirros-0.3.1-x86_64-uec.tar.gz',
help='http accessible image')
]
NetworkGroup = [
cfg.StrOpt('catalog_type',
default='network',
help='Catalog type of the Network service.'),
cfg.StrOpt('tenant_network_cidr',
default="10.100.0.0/16",
help="The cidr block to allocate tenant networks from"),
cfg.IntOpt('tenant_network_mask_bits',
default=29,
help="The mask bits for tenant networks"),
cfg.BoolOpt('tenant_networks_reachable',
default=True,
help="Whether tenant network connectivity should be "
"evaluated directly"),
cfg.BoolOpt('neutron_available',
default=False,
help="Whether or not neutron is expected to be available"),
]
VolumeGroup = [
cfg.IntOpt('build_interval',
default=10,
help='Time in seconds between volume availability checks.'),
cfg.IntOpt('build_timeout',
default=180,
help='Timeout in seconds to wait for a volume to become'
'available.'),
cfg.StrOpt('catalog_type',
default='volume',
help="Catalog type of the Volume Service"),
cfg.BoolOpt('cinder_node_exist',
default=True,
help="Allow to run tests if cinder exist"),
cfg.BoolOpt('multi_backend_enabled',
default=False,
help="Runs Cinder multi-backend test (requires 2 backends)"),
cfg.StrOpt('backend1_name',
default='BACKEND_1',
help="Name of the backend1 (must be declared in cinder.conf)"),
cfg.StrOpt('backend2_name',
default='BACKEND_2',
help="Name of the backend2 (must be declared in cinder.conf)"),
]
ObjectStoreConfig = [
cfg.StrOpt('catalog_type',
default='object-store',
help="Catalog type of the Object-Storage service."),
cfg.StrOpt('container_sync_timeout',
default=120,
help="Number of seconds to time on waiting for a container"
"to container synchronization complete."),
cfg.StrOpt('container_sync_interval',
default=5,
help="Number of seconds to wait while looping to check the"
"status of a container to container synchronization"),
]


@ -1,315 +0,0 @@
Resource duplication and file conflicts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are developing a module that uses services which are
already in use by other components of OpenStack, most likely you will try to
declare some of the same resources that have already been declared.
Puppet's architecture doesn't allow declaring resources that have the same
type and title, even if they have the same attributes.
For example, your module could be using Apache and have Service['apache']
declared. When you run your module outside Fuel, nothing else tries to
control this service and everything works fine. But when you try to add
this module to Fuel, you will get a resource duplication error because Apache is
already managed by the Horizon module.
There is pretty much nothing you can do about this problem, because the uniqueness
of Puppet resources is one of its core principles. But you can try to solve
the problem in one of the following ways.
The best thing you can do is to try to use the already declared resource by
setting dependencies on the other class that uses it. This will not work
in many cases, and you may have to modify both modules or move the conflicting
resource elsewhere to avoid conflicts.
Puppet does provide a good solution to this problem - **virtual resources**.
The idea is that you move the resource declarations to a separate class and
make them virtual. Virtual resources are not evaluated until you realize
them, and you can do that in every module that requires these resources.
The trouble starts when these resources have different attributes and complex
dependencies. Most current Puppet modules don't use virtual resources and
would require major refactoring to add them.
Puppet style guidelines advise moving all classes related to the same
service into a single module, instead of using many modules to work with the
same service, to minimize conflicts, but in many cases this approach
doesn't work.
There are also some hacks, such as defining the resource inside an *if !
defined(Service['apache']) { ... }* block or using the **ensure_resource**
function from Puppet's stdlib.
Similar problems often arise when working with configuration files.
Even using templates doesn't allow several modules to directly edit the same
file. There are a number of solutions to this, ranging from using
configuration directories and snippets, if the service supports them, to
representing lines or configuration options as resources and managing
them instead of entire files.
Many services do support configuration directories where you can place
configuration file snippets. The daemon will read them all, concatenate them, and
use them as if they were a single file. Such services are the most convenient to
manage with Puppet. You can just split your configuration and manage
its pieces as templates. If your service doesn't know how to work with
snippets, you can still use them. You only need to create the parts of your
configuration file in some directory and then combine them all
using a simple exec with the *cat* command. There is also a special *concat*
resource type to make this approach easier.
Some configuration files have a standard structure and can be managed
by custom resource types. For example, there is the *ini_file* resource
type to manage values in compatible configurations as single resources.
There is also the *augeas* resource type, which can manage many popular
configuration file formats.
Each approach has its own limitations, and editing a single file from
many modules is still a non-trivial task in most cases.
Neither the resource duplication nor the file editing problem has a good
solution for every possible case, and both significantly limit the possibility
of code reuse.
The last approach you can try is to modify files
with scripts and sed patches run by exec resources. This can have unexpected
results, because you can't be sure what other operations are performed
on this configuration file, what text patterns exist there, or whether your
script breaks another exec.
Puppet module containment
~~~~~~~~~~~~~~~~~~~~~~~~~
The Fuel Library consists of many modules with a complex structure and
several dependencies defined between the provided modules.
There is a known Puppet problem related to dependencies between
resources contained inside classes declared from other classes.
If you declare resources inside a class or definition, they will be
contained inside it, and the entire container will not be finished until all
of its contents have been evaluated.
For example, we have two classes with one notify resource each::
class a {
notify { 'a' :}
}
class b {
notify { 'b' :}
}
Class['a'] -> Class['b']
include a
include b
Dependencies between classes force the contained resources to be executed in
the declared order.
But if we add another layer of containers, dependencies between them will not
affect the resources declared in the first two classes::
class a {
notify { 'a' :}
}
class b {
notify { 'b' :}
}
class l1 {
include a
}
class l2 {
include b
}
Class['l1'] -> Class['l2']
include 'l1'
include 'l2'
This problem can lead to unexpected and in most cases unwanted behaviour,
when some resources 'fall out' of their classes and can break the logic
of the deployment process.
The most common solution to this issue is the **Anchor Pattern**. Anchors are
special 'do-nothing' resources found in Puppetlabs' stdlib module.
Anchors can be declared inside a top-level class and contained
inside it like any normal resource. If two anchors are declared, they can be
named the *start* and *end* anchors. All classes that should be contained
inside the top-level class can have dependencies on both anchors.
If a class must go after the start anchor and before the end anchor,
it will be locked between them and will be correctly contained inside
the parent class::
class a {
notify { 'a' :}
}
class b {
notify { 'b' :}
}
class l1 {
anchor { 'l1-start' :}
include a
anchor { 'l1-end' :}
Anchor['l1-start'] -> Class['a'] -> Anchor['l1-end']
}
class l2 {
anchor { 'l2-start' :}
include b
anchor { 'l2-end' :}
Anchor['l2-start'] -> Class['b'] -> Anchor['l2-end']
}
Class['l1'] -> Class['l2']
include 'l1'
include 'l2'
This hack does help to prevent resources from randomly floating out of their
places, but it looks very ugly and is hard to understand. We have to use this
technique in many Fuel modules, which are rather complex and require such
containment.
If your module is going to work with a dependency scheme like this, you may
find anchors useful too.
There is also another solution, found in the most recent versions of Puppet.
The *contain* function can force a declared class to be locked within its
container::
class l1 {
contain 'a'
}
class l2 {
contain 'b'
}
Puppet scope and variables
~~~~~~~~~~~~~~~~~~~~~~~~~~
The way Puppet looks up the values of variables from inside classes can be
confusing too. There are several levels of scope in Puppet.
**Top scope** contains all facts and built-in variables and covers the
start of the *site.pp* file before any class or node declaration. There is also a
**node scope**. It can be different for every node block. Each class and
definition starts its own **local scope**, and its variables and resource
defaults are available there. **They can also have parent scopes**.
A reference to a variable can consist of two parts:
**$(class_name)::(variable_name)**, for example *$apache::docroot*. The class name
can also be empty; such a reference explicitly points to the top-level scope,
for example *$::ipaddress*.
If you are going to use the value of a fact or top-scope variable, it's usually a
good idea to add two colons to the start of its name to ensure that you
will get the value you are looking for.
If you want to reference a variable found in another class, use a fully
qualified name like *$apache::docroot*. But remember that the
referenced class must already be declared. Just having it inside your
modules folder is not enough. Using *include apache* before referencing
*$apache::docroot* will help. This technique is commonly used to create
**params** classes inside every module, which are included in every other class
that uses their values.
And finally, if you reference a local variable, you can write just *$myvar*.
Puppet will first look inside the local scope of the current class or defined type,
then inside the parent scope, then the node scope, and finally the top scope. If the variable
is found in any of these scopes, you get the first matching value.
The definition of what the parent scope is varies between Puppet 2.* and Puppet
3.*. Puppet 2.* treats the parent scope as the class from which the current class
was declared, and all of its parents too. If the current class was inherited
from another class, the base class is also a parent scope, allowing the popular
*Smart Defaults* trick::
class a {
$var = a
}
class b(
$a = $a::var,
) inherits a {
}
Puppet 3.* treats the parent scope only as the class from which the current class
was inherited, if any, and doesn't take declaration into account.
For example::
$msg = 'top'
class a {
$msg = "a"
}
class a_child inherits a {
notify { $msg :}
}
This will say 'a' in both Puppet 2.* and 3.*. But::
$msg = 'top'
class n1 {
$msg = 'n1'
include 'n2'
}
class n2 {
notify { $msg :}
}
include 'n1'
This will say 'n1' in Puppet 2.6, will say 'n1' and issue a *deprecation warning* in
2.7, and will say 'top' in Puppet 3.*.
Finding such variable references and replacing them with fully qualified names is
a very important part of Fuel's migration to Puppet 3.*.
Where to find more information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The best place to start learning Puppet is Puppetlabs' official learning
course (http://docs.puppetlabs.com/learning/). There is also a special virtual
machine image you can use to safely play with Puppet manifests.
Then you can continue by reading the Puppet reference and other pages of the Puppetlabs
documentation.
You can also find a number of printed books about Puppet and how to use it to
manage your IT infrastructure.
Pro Puppet
http://www.apress.com/9781430230571
Pro Puppet. 2nd Edition
http://www.apress.com/9781430260400
Puppet 2.7 Cookbook
https://www.packtpub.com/networking-and-servers/puppet-27-cookbook
Puppet 3 Cookbook
https://www.packtpub.com/networking-and-servers/puppet-3-cookbook
Puppet 3: Beginners Guide
https://www.packtpub.com/networking-and-servers/puppet-3-beginner%E2%80%99s-guide
Instant Puppet 3 Starter
https://www.packtpub.com/networking-and-servers/instant-puppet-3-starter-instant
Pulling Strings with Puppet Configuration Management Made Easy
http://www.apress.com/9781590599785
Puppet Types and Providers Extending Puppet with Ruby
http://shop.oreilly.com/product/0636920026860.do
Managing Infrastructure with Puppet. Configuration Management at Scale
http://shop.oreilly.com/product/0636920020875.do


@ -1,304 +0,0 @@
Fuel Master Node Deployment over PXE
====================================
Tech Explanation of the process
-------------------------------
In some cases (such as no CD-ROM drive or no physical access to the
servers), we need to install the Fuel Master node in some way other than from a CD or USB
flash drive. Starting from Fuel 4.0, it is possible to deploy the Master node with PXE.
The process of deploying the Fuel master node over the network consists of booting
a Linux kernel via DHCP and PXE. The Anaconda installer then downloads a
configuration file and all packages needed to complete the installation.
* The PXE firmware of the network card makes a DHCP query and gets an IP address and
the boot image name.
* The firmware downloads the boot image file using the TFTP protocol and starts it.
* This bootloader downloads the configuration file with the kernel boot options,
the kernel and the initramfs, and starts the installer.
* The installer downloads the kickstart configuration file by mounting the contents of
the Fuel ISO file over NFS.
* The installer partitions the hard drive, installs the system by downloading packages
over NFS, copies all additional files, installs the bootloader and reboots
into the new system.
So we need:
* Working system to serve as network installer.
* DHCP server
* TFTP server
* NFS server
* PXE bootloader and its configuration file
* Extracted or mounted Fuel ISO file
In our test we will use the 10.20.0.0/24 network.
10.20.0.1/24 will be the IP address of our host system.
Installing packages
-------------------
We will be using an Ubuntu or Debian system as the installation server. Other Linux
or even BSD-based systems could be used too, but the paths to configuration files
and init scripts may differ.
First we need to install the software::
# TFTP server and client
apt-get install tftp-hpa tftpd-hpa
# DHCP server
apt-get install isc-dhcp-server
# network bootloader
apt-get install syslinux syslinux-common
# nfs server
apt-get install nfs-server
Setting up DHCP server
--------------------------
Standalone ISC DHCPD
~~~~~~~~~~~~~~~~~~~~
First we are going to create a configuration file located at /etc/dhcp/dhcpd.conf::
ddns-update-style none;
default-lease-time 600;
max-lease-time 7200;
authoritative;
log-facility local7;
subnet 10.20.0.0 netmask 255.255.255.0 {
range 10.20.0.2 10.20.0.2;
option routers 10.20.0.1;
option domain-name-servers 10.20.0.1;
}
host fuel {
hardware ethernet 52:54:00:31:38:5a;
fixed-address 10.20.0.2;
filename "pxelinux.0";
}
We have declared a subnet with only one IP address available, which we are going
to give to our master node. We are not going to serve the entire range of IP
addresses because that would disrupt Fuel's own DHCP service. There is also a host
definition with a custom configuration that matches a specific MAC address. This
address should be set to the MAC address of the system that you are going to
make the Fuel master node. Other systems on this subnet will not receive any IP
addresses and will load the bootstrap from the master node once it starts serving DHCP
requests.
We also give a filename that will be used to boot the Fuel master node.
Using the 10.20.0.0/24 subnet requires you to set 10.20.0.1 on the network
interface connected to this network. You may also need to set the interface
manually using the INTERFACES variable in the /etc/default/isc-dhcp-server file.
Start DHCP server::
/etc/init.d/isc-dhcp-server restart
A simpler alternative with dnsmasq::
sudo dnsmasq -d --enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-range=10.20.0.2,10.20.0.2 \
--port=0 -z -i eth2 \
--dhcp-boot='pxelinux.0'
Libvirt with dnsmasq
~~~~~~~~~~~~~~~~~~~~
If you are using a libvirt virtual network to install your master node, then you
can use its own DHCP service. Use virsh net-edit default to modify the network
configuration::
<network>
<name>default</name>
<bridge name="virbr0" />
<forward />
<ip address="10.20.0.1" netmask="255.255.255.0">
<tftp root="/var/lib/tftpboot"/>
<dhcp>
<range start="10.20.0.2" end="10.20.0.2" />
<host mac="52:54:00:31:38:5a" ip="10.20.0.2" />
<bootp file="pxelinux.0"/>
</dhcp>
</ip>
</network>
This configuration includes a TFTP server and a DHCP server with only one IP
address, bound to your master node's MAC address. You do not need to install
an external DHCP server or TFTP server.
Don't forget to restart the network after making edits::
virsh net-destroy default
virsh net-start default
Dnsmasq without libvirt
~~~~~~~~~~~~~~~~~~~~~~~
You can also use dnsmasq as a DHCP and TFTP server without libvirt::
strict-order
domain-needed
user=libvirt-dnsmasq
local=//
pid-file=/var/run/dnsmasq.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-range=10.20.0.2,10.20.0.2
dhcp-no-override
enable-tftp
tftp-root=/var/lib/tftpboot
dhcp-boot=pxelinux.0
dhcp-leasefile=/var/lib/dnsmasq/leases
dhcp-lease-max=1
dhcp-hostsfile=/etc/dnsmasq/hostsfile
In /etc/dnsmasq/hostsfile you can specify hosts and their mac addresses::
52:54:00:31:38:5a,10.20.0.2
Dnsmasq provides both DHCP and TFTP, and also acts as a caching DNS server, so
you don't need to install additional external services.
Setting our TFTP server
-----------------------
If you are not using a libvirt virtual network, then you need to install a TFTP
server. On a Debian or Ubuntu system, its configuration file is located at
/etc/default/tftpd-hpa.
Check that it contains the following settings::
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS="10.20.0.1:69"
TFTP_OPTIONS="--secure --blocksize 512"
Don't forget to set the block size here. Some hardware switches have problems with
larger block sizes.
And start it::
/etc/init.d/tftpd-hpa restart
Setting up NFS server
---------------------
You will also need to set up an NFS server on your installation system. Edit the NFS
exports file::
vim /etc/exports
Add the following line::
/var/lib/tftpboot 10.20.0.2(ro,async,no_subtree_check,no_root_squash,crossmnt)
And start it::
/etc/init.d/nfs-kernel-server restart
Set up tftp root
----------------
Our TFTP root will be located here: /var/lib/tftpboot
Let's create a folder called "fuel" to store the ISO image contents and a syslinux
folder for the bootloader files. If you have installed the syslinux package, you can find
them in the /usr/lib/syslinux folder.
Copy these files from /usr/lib/syslinux to /var/lib/tftpboot::
memdisk menu.c32 poweroff.com pxelinux.0 reboot.c32
Now we need to write the pxelinux configuration file. It will be located at
/var/lib/tftpboot/pxelinux.cfg/default::
DEFAULT menu.c32
prompt 0
MENU TITLE My Distro Installer
TIMEOUT 600
LABEL localboot
MENU LABEL ^Local Boot
MENU DEFAULT
LOCALBOOT 0
LABEL fuel
MENU LABEL Install ^FUEL
KERNEL /fuel/isolinux/vmlinuz
INITRD /fuel/isolinux/initrd.img
APPEND biosdevname=0 ks=nfs:10.20.0.1:/var/lib/tftpboot/fuel/ks.cfg repo=nfs:10.20.0.1:/var/lib/tftpboot/fuel ip=10.20.0.2 netmask=255.255.255.0 gw=10.20.0.1 dns1=10.20.0.1 hostname=fuel.mirantis.com showmenu=no
LABEL reboot
MENU LABEL ^Reboot
KERNEL reboot.c32
LABEL poweroff
MENU LABEL ^Poweroff
KERNEL poweroff.com
You can ensure silent installation without any Anaconda prompts by adding the following APPEND directives:
* ksdevice=INTERFACE
* installdrive=DEVICENAME
* forceformat=yes
For example:
installdrive=sda ksdevice=eth0 forceformat=yes
Now we need to unpack the Fuel ISO file we have downloaded::
mkdir -p /var/lib/tftpboot/fuel /mnt/fueliso
mount -o loop /path/to/your/fuel.iso /mnt/fueliso
rsync -a /mnt/fueliso/ /var/lib/tftpboot/fuel/
umount /mnt/fueliso && rmdir /mnt/fueliso
So that's it! We can boot over the network from this PXE server.
Troubleshooting
---------------
After implementing one of the described configurations, you should see something
like this in your /var/log/syslog file::
dnsmasq-dhcp[16886]: DHCP, IP range 10.20.0.2 -- 10.20.0.2, lease time 1h
dnsmasq-tftp[16886]: TFTP root is /var/lib/tftpboot
To make sure all the daemons are listening on the sockets they should::
# netstat -upln | egrep ':(67|69|2049) '
udp 0 0 0.0.0.0:67 0.0.0.0:* 30791/dnsmasq
udp 0 0 10.20.0.1:69 0.0.0.0:* 30791/dnsmasq
udp 0 0 0.0.0.0:2049 0.0.0.0:* -
* NFS - udp/2049
* DHCP - udp/67
* TFTP - udp/69
All the daemons are listening as they should.
To verify that the DHCP server provides an IP address, you can do something like this
on a node in the defined PXE network. Please note, it should have a Linux
system (or any other OS) installed to test the configuration properly::
# dhclient -v eth0
Internet Systems Consortium DHCP Client 4.1.1-P1
Copyright 2004-2010 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/eth0/00:25:90:c4:7a:64
Sending on LPF/eth0/00:25:90:c4:7a:64
Sending on Socket/fallback
DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x7b6e25dc)
DHCPACK from 10.20.0.1 (xid=0x7b6e25dc)
bound to 10.20.0.2 -- renewal in 1659 seconds.
After running dhclient you should see it query the DHCP server one or more times
with DHCPDISCOVER and then get 10.20.0.2. If you have more than one NIC, you
should run dhclient on each one to determine which one our network is connected
to.
The TFTP server can be tested with the tftp console client::
# tftp
(to) 10.20.0.1
tftp> get /pxelinux.0
NFS can be tested by mounting it::
mkdir /mnt/nfsroot
mount -t nfs 10.20.0.1:/var/lib/tftpboot /mnt/nfsroot


@ -1,110 +0,0 @@
Fuel Development Quick-Start
============================
If you are interested in contributing to Fuel or modifying Fuel
for your own purposes, this short guide should get you pointed
to all the information you need to get started.
If you are new to contributing to OpenStack, read
through the “How To Contribute” page on the OpenStack wiki.
See: `How to contribute
<http://docs.openstack.org/infra/manual/developers.html>`_.
For this walk-through, let's use the example of modifying an
option in the "new environment wizard" in Fuel (example here:
`https://review.openstack.org/#/c/90687/1
<https://review.openstack.org/#/c/90687/1>`_). This
enhancement required modification to three files in the fuel-web
repository::
fuel-web/nailgun/static/translations/core.json
fuel-web/nailgun/static/js/views/dialogs.jsx
fuel-web/nailgun/static/templates/dialogs/create_cluster_wizard/storage.html
In order to add, test and commit the code necessary to
implement this feature, these steps were followed:
#. Create a Fuel development environment by following the
instructions found here:
:doc:`Fuel Development Environment </develop/env>`.
#. In your development environment, prepare your environment
for Nailgun unit tests and Web UI tests by following
the instructions found here:
:doc:`Nailgun Dev Environment </develop/nailgun/development/env>`.
Be sure to run the tests noted in each section to ensure
your environment conforms to a known good baseline.
#. Branch your fuel-web checkout (see `Gerrit Workflow
<http://docs.openstack.org/infra/manual/developers.html#development-workflow>`_ for
more information on the gerrit workflow)::
cd fuel-web
git fetch --all;git checkout -b vcenter-wizard-fix origin/master
#. Modify the necessary files (refer to :doc:`Fuel Architecture
</develop/architecture>` to understand how the components
of Fuel work together).
#. Test your Nailgun changes::
cd fuel-web
./run_tests.sh --no-webui
./run_tests.sh --flake8
./run_tests.sh --webui
#. You should also test Nailgun in fake UI mode by following
the steps found here: :ref:`running-nailgun-in-fake-mode`
#. When all tests pass you should commit your code, which
will subject it to further testing via Jenkins and Fuel CI.
Be sure to include a good commit message, guidelines can be
found here: `Git Commit Messages <https://wiki.openstack.org/wiki/GitCommitMessages>`_.::
git commit -a
git review
#. Frequently, the review process will suggest changes be
made before your code can be merged. In that case, make
your changes locally, test the changes, and then re-submit
for review by following these steps::
git commit -a --amend
git review
#. Now that your code has been committed, you should change
your Fuel ISO makefile to point to your specific commit.
As noted in the :doc:`Fuel Development documentation </develop/env>`,
when you build a Fuel ISO it pulls down the additional
repositories rather than using your local repos. Even
though you have a local clone of fuel-web holding the branch
you just worked on, the build script will be pulling code
from git for the sub-components (Nailgun, Astute, OSTF)
based on the repository and commit specified in environment
variables when calling “make iso”, or as found in config.mk.
You will need to know the gerrit commit ID and patch number.
For this example we are looking at
https://review.openstack.org/#/c/90687/1
with the gerrit ID 90687, patch 1. In this instance, you
would build the ISO with::
cd fuel-main
NAILGUN_GERRIT_COMMIT=refs/changes/32/90687/1 make iso
#. Once your ISO build is complete, you can test it. If
you have access to hardware that can run the KVM
hypervisor, you can follow the instructions found in the
:doc:`Devops Guide </devops>` to create a robust testing
environment. Otherwise you can test the ISO with
Virtualbox (the download link can be found at
`https://software.mirantis.com/ <https://software.mirantis.com/>`_)
#. Once your code has been merged, you can return your local
repo to the master branch so you can start fresh on your
next commit by following these steps::
cd fuel-web
git remote update
git checkout master
git pull


@ -1,544 +0,0 @@
Separate Mirantis OpenStack and Linux Repositories
==================================================
Starting with Fuel 6.1, the repositories for
Mirantis OpenStack and Linux packages are separate.
Having separate repositories gives you the following
advantages:
* You can run base distributive updates and
Mirantis OpenStack updates during the product life cycle.
* You can see what packages are provided by
Mirantis OpenStack and what packages are provided by the base distributive.
* You can download security updates
independently for the base distributive and for Mirantis OpenStack.
* You can see sources of the base distributive and
Mirantis OpenStack packages.
* You can see the debug symbols of the base distributive and
Mirantis OpenStack packages.
* As a Fuel Developer, you can have the same approach for making
changes to Fuel components and their dependencies.
Types of software packages
--------------------------
All software packages deployed starting with Fuel 6.1
are divided into the following categories:
#. *Upstream package*: A package from the supported release of the base distributive
is reused from the distributive repositories directly, without modifications specific
to Mirantis OpenStack.
#. *Mirantis OpenStack specific package*: A package specific to Mirantis OpenStack that does not
exist in any release of the base distributive, or any of the base distributive's
upstream distributions.
#. *Divergent package*: A package that is based on a version of the same
package from any release of the base distributive or its upstream distributions,
and includes a different software version than the corresponding *upstream
package*, or additional modifications that are not present in the *upstream
package*, or both.
#. *Upgraded package*: A variant of *divergent package* that includes a
newer software version than the corresponding *upstream package*.
#. *Holdback package*: a variant of *divergent package* constituting a
temporary replacement for an *upstream package* to fix a regression
introduced in the supported release of the base distributive. It is published
in a Mirantis OpenStack repository after a regression is detected, and is
removed from that repository as soon as the regression is either resolved
in the upstream package, or addressed elsewhere in Mirantis OpenStack.
.. note:: Downgraded packages with the upstream version lower than the version
available in the base distributive are not allowed.
Different releases of Mirantis OpenStack can put the same software package in different
categories.
All kinds of divergence from the base distributive should be kept to a minimum. As
many of Mirantis OpenStack dependencies as possible should be satisfied by upstream packages
from the supported release of the base distro. When possible, Mirantis OpenStack patches and
Mirantis OpenStack specific packages should be contributed back to the base distributives.
Distributing Mirantis OpenStack packages
----------------------------------------
Released Mirantis OpenStack packages are distributed as part of the Fuel ISO image. Upstream
packages and any other IP protected by the respective operating system vendors are not
included in Fuel ISO images. Regular updates to the Mirantis OpenStack
distribution are delivered through online Mirantis OpenStack mirrors.
Mirantis OpenStack mirrors structure
------------------------------------
Mirantis OpenStack mirrors are organized in the same way as base distro mirrors.
Every supported operating system has its own set of repositories containing Mirantis OpenStack packages
per release (mos6.1, mos7.0 etc). These repositories contain only packages
with Mirantis OpenStack specific modifications, and no upstream packages from the
corresponding base distro.
As a user you can create and maintain local copies of the base distro and Mirantis OpenStack mirrors.
This allows you to use the repositories in completely isolated environments or
create your own mirrors to pass the extended validation before a package update
roll-out across your production environment.
Top level Mirantis OpenStack mirror structure
---------------------------------------------
::
/
+--+centos
| |
| +--+6
| | |
| | +--+mos6.0
| | |
| | +--+mos6.1
| |
| +--+7
| |
| +--+mos7.0
| |
| +--+mos7.1
|
+--+ubuntu
|
+--+dists
|
+--+pool
|
+--+...
Debian based mirror structure
-----------------------------
Mirantis OpenStack mirrors include several repositories (updates, security, proposed)
built in the same way as the base operating system mirror (Debian or Ubuntu). Repository sections
are organized in the same way (main, restricted) in accordance with the package licenses
(non-free).
::
$(OS_MIRROR) $(MOS_MIRROR)
+ +
| |
+--+ubuntu +--+ubuntu
| |
+--+dists +--+dists
| | | |
| +--+precise-backport | +--+mos6.1-proposed
| | | |
| +--+precise-proposed | +--+mos6.1-security
| | | |
| +--+precise-security | +--+mos6.1-updates
| | | |
| +--+precise-updates | +--+mos6.1
| | | |
| +--+precise | +--+mos7.0-proposed
| | | |
| +--+trusty-backport | +--+mos7.0-security
| | | |
| +--+trusty-proposed | +--+mos7.0-updates
| | | |
| +--+trusty-security | +--+mos7.0
| | |
| +--+trusty-updates +--+indices
| | | |
| +--+trusty | +--+...
| |
+--+indices +--+pool
| | | |
| +--+... | +--+main
| | | |
+--+pool | | +--+a
| | | | |
| +--+main | | +--+...
| | | | |
| +--+multiverse | | +--+z
| | | |
| |--+restricted | |--+restricted
| | | |
+ |--+universe | +--+a
| | |
|--+... | +--+...
| |
| +--+z
|
+--+project
|
+--+mos-archive-keyring.gpg
|
+--+mos-archive-keyring.sig
Red Hat based mirror structure
------------------------------
Mirantis OpenStack mirrors include several repositories (operating system, updates, Fasttrack) built
in the same way as base distro mirror (Red Hat or CentOS).
::
$(OS_MIRROR) $(MOS_MIRROR)
+ +
| |
+--+centos-6 +--+centos-6
| | | |
| +--+... | +--+mos6.1
| | |
+--+centos-7 | +--+mos7.0
| | |
+--+7 | +--+os
| | | |
+--+os | | +--+x86_64
| | | | |
| +--+x86_64 | | +--+Packages
| | | | | |
| +--+Packages | | | +--+*.rpm
| | | | | |
| | +--+*.rpm | | +--+RPM-GPG-KEY-MOS7.0
| | | | |
| +--+RPM-GPG-KEY-CentOS-7 | | +--+repodata
| | | | |
| +--+repodata | | +--+*.xml,*.gz
| | | |
| +--+*.xml,*.gz | +--+updates
| | |
+--+updates | +--+x86_64
| | |
+--+x86_64 | +--+Packages
| | | |
+--+Packages | | +--+*.rpm
| | | |
| +--+*.rpm | +--+repodata
| | |
+--+repodata | +--+*.xml,*.gz
| |
+--+*.xml,*.gz +--+centos-7
|
+--+mos7.1
|
+--+mos8.0
Repositories priorities
-----------------------
Handling of multiple package repositories in Nailgun has been expanded to
allow the user to set priorities during deployment.
Default repository priorities are arranged so that packages from Mirantis OpenStack
repositories are preferred over packages from the base distro. On Debian based
systems, force-downgrade APT pinning priorities (values above 1000) are used for Mirantis OpenStack
repositories to make sure that, when a package is available in a Mirantis OpenStack
repository, it is always preferred over the package from the base distro, even if
the version in the Mirantis OpenStack repository is lower.
Fuel developer repositories
---------------------------
The build system allows developers to build custom packages. These packages
are placed into a special repository which can be specified in Nailgun
to deliver them to an environment. The APT pinning priority for these
repositories is higher than that of the base distro and Mirantis OpenStack repositories.
Accordingly, the Yum repository priority value is lower (that is, more preferred)
than that of the base distro and Mirantis OpenStack repositories.
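As an illustration only (the repository name, URL, and exact priority are placeholders,
not the values produced by Fuel), such a developer repository on an RPM based system
could look like this, with the yum priorities plugin treating the lowest value as the
most preferred::
[mos-fuel-dev]
name=Fuel developer packages (example)
baseurl=http://mirror.example.com/mos-repos/devel/x86_64/
gpgcheck=0
priority=1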
Holdback repository
-------------------
The purpose of the holdback repository is to ensure the highest quality of the Mirantis OpenStack
product. If an *upstream* package breaks the product, and this
cannot be fixed in a timely manner, the Mirantis OpenStack team publishes the version of the
package proven stable to the "mosXX-holdback" repository. This repository is
automatically configured on all installations with a priority higher than that of the base
distro repositories.
The case when the base distro vendor releases a fixed version of a problem package
is covered by Mirantis OpenStack system tests.
Package versioning requirements
-------------------------------
A package version string for a *Mirantis OpenStack specific* or a *divergent* package must not
include registered trademarks of base distro vendors, and should include the "mos"
keyword.
Every new revision of a *Mirantis OpenStack specific* or a *divergent* package targeted to a Mirantis OpenStack
release (including the corresponding update repository) must have a package version
greater than or equal to the versions of the same package in all previous
releases of Mirantis OpenStack (base, update, security repositories), as well as versions of
the same package previously published in any repos for this Mirantis OpenStack release.
For example, there must be no package version downgrades in the following Mirantis OpenStack
release progression (where 6.1.1 matches the state of update repository at the
time of 6.1.1 maintenance release):
6.0 <= 6.0.1 <= 6.1 <= 6.1.1 <= 6.1.2 <= 7.0
Package version of a *divergent* package (including *upgraded* and *holdback*
packages) must be constructed in a way that allows the *upstream* package
with the same software version to be automatically considered for an upgrade by
the package management system as soon as the divergent package is removed from the
Mirantis OpenStack repositories. This simplifies the phasing out of divergent packages in favor of the
upstream packages between major Mirantis OpenStack releases, but due to the repository priorities
defined above, does not lead to new upstream packages superseding the *upgraded*
packages available from Mirantis OpenStack repositories when applying updates.
Every new revision of a *divergent* package must have a package version greater
than previous revisions of the same package that is published to the same
repository for that Mirantis OpenStack release. Its version should be lower than the version of
the corresponding *upstream* package.
When the same package version is ported from one Mirantis OpenStack release to another without
modifications (i.e. same upstream version and same set of patches), a new package
version should include the full package version from the original Mirantis OpenStack release.
Debian package versioning
-------------------------
Versioning requirements defined in this section apply to all software packages
in all Mirantis OpenStack repositories for Debian based distros. The standard terms defined in
Debian Policy are used to describe package version components: epoch,
upstream version, Debian revision.
Upstream version of a package should exactly match the software version,
without suffixes. Introducing epoch or increasing epoch relative to a base distro
should be avoided.
Debian revision of a Mirantis OpenStack package should use the following format::
<revision>~<base-distro-release>+mos<subrevision>
In Mirantis OpenStack specific packages, revision must always be "0"::
fuel-nailgun_6.1-0~u14.04+mos1
In *divergent* packages, revision should include as much of the Debian revision
of the corresponding *upstream* package as possible while excluding the base
distro vendor's trademarks, and including the target distribution version::
qemu_2.1.0-1 -> qemu_2.1.0-1~u14.04+mos1
ohai_6.14.0-2.3ubuntu4 -> ohai_6.14.0-2.3~u14.04+mos1
Subrevision numbering starts from 1. Subsequent revisions of a package using
the same upstream version and based on the upstream package with the same
Debian revision should increment the subrevision::
ohai_6.14.0-2.3~u14.04+mos2
ohai_6.14.0-2.3~u14.04+mos3
Subsequent revision of a package that introduces a new upstream version or new
base distro package revision should reset the subrevision back to 1::
ohai_6.14.0-3ubuntu1 -> ohai_6.14.0-3~u14.04+mos1
Versioning of packages in post-release updates
++++++++++++++++++++++++++++++++++++++++++++++
Once a Mirantis OpenStack release reaches GA, the primary package repository for the release
is frozen, and subsequent updates are published to the updates repositories.
Most of the time, only a small subset of modifications (including patches,
metadata changes, etc.) is backported to updates for old Mirantis OpenStack releases.
When an updated package includes only a subset of modifications, its version
should include the whole package version from the primary repository, followed
by a reference to the targeted Mirantis OpenStack release, and an update subrevision, both
separated by "+"::
mos6.1: qemu_2.1.0-1~u14.04+mos1
mos7.0: qemu_2.1.0-1~u14.04+mos1
mos7.1: qemu_2.1.0-1~u14.04+mos2
mos6.1-updates: qemu_2.1.0-1~u14.04+mos1+mos6.1+1
mos7.0-updates: qemu_2.1.0-1~u14.04+mos1+mos7.0+1
If the whole package along with all the included modifications is backported from
the current release to updates for an old Mirantis OpenStack release, its version should include
the whole package version from the current release, followed by a reference to
the targeted Mirantis OpenStack release separated by "~", and an update subrevision, separated
by "+"::
mos6.1: qemu_2.1.0-1~u14.04+mos1
mos7.0: qemu_2.1.0-1~u14.04+mos1
mos7.1: qemu_2.1.0-1~u14.04+mos2
mos6.1-updates: qemu_2.1.0-1~u14.04+mos2~mos6.1+1
mos7.0-updates: qemu_2.1.0-1~u14.04+mos2~mos7.0+1
The same rule applies if modifications include an upgrade to a newer software
version::
mos6.1: qemu_2.1.0-1~u14.04+mos1
mos7.0: qemu_2.1.0-1~u14.04+mos1
mos7.1: qemu_2.2+dfsg-5exp~u14.04+mos3
mos6.1-updates: qemu_2.2+dfsg-5exp~u14.04+mos3~mos6.1+1
mos7.0-updates: qemu_2.2+dfsg-5exp~u14.04+mos3~mos7.0+1
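You can verify that such versions sort as intended with ``dpkg --compare-versions``
(a quick sanity check, not part of the official tooling)::
# the -updates revision sorts above the original release revision ...
dpkg --compare-versions '2.1.0-1~u14.04+mos1' lt '2.1.0-1~u14.04+mos1+mos6.1+1' && echo ok
# ... and below the next regular revision of the same package
dpkg --compare-versions '2.1.0-1~u14.04+mos1+mos6.1+1' lt '2.1.0-2~u14.04+mos1' && echo ok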
Debian package metadata
-----------------------
All *Mirantis OpenStack specific* and *divergent* packages must have the following metadata:
#. The latest entry in the debian/changelog must contain:
- reference to the targeted Mirantis OpenStack release series (e.g. mos6.1)
- reference to the organization that produced the package (Mirantis)
- commits (full git commit sha1) in all source code repositories that the
package was built from: build repository commit if both source code and
build scripts are tracked in the same repository (git-buildpackage style),
or both source and build repository commits if the source code is tracked in a
separate repository from build scripts
#. Maintainer in debian/control must be Mirantis OpenStack Team
Example of a valid debian/changelog entry::
keystone (2014.2.3-1~u14.04+mos1) mos6.1; urgency=low
* Source commit: 17f8fb6d8d3b9d48f5a4206079c18e84b73bf36b
* Build commit: 8bf699819c9d30e2d34e14e76917f94daea4c67f
-- Mirantis OpenStack Team <mos@mirantis.com> Sat, 21 Mar 2015 15:08:01 -0700
If the package is a backport from a different release of a base distro (e.g. a
backport of a newer software version from Ubuntu 14.10 to Ubuntu 14.04), the
exact package version which the backport was based on must also be specified in
the debian/changelog entry, along with the URL where the source package for
that package version can be obtained from.
The following types of URLs may be used, in the order of preference:
#. git-buildpackage or similar source code repository,
#. deb package pool directory,
#. direct dpkg source (orig and debian) download links.
Package life cycle management
-----------------------------
To maintain the high quality of the product, Mirantis OpenStack teams produce package updates
during the product life cycle as required.
The packaging life cycle follows the Mirantis OpenStack product life cycle (Feature Freeze,
Soft Code Freeze, Hard Code Freeze, Release, Updates).
The Mirantis OpenStack mirror is modified on the Hard Code Freeze announcement. A new Mirantis OpenStack
version is created to allow the developers to continue working on the next release.
After a GA release, all packages are placed in the updates or security
repository
::
V^ +---------------------+
| |7.1-updates
| |
| |
| +-----------------------------------+
| |8.0-dev |
| | |
| | |
| +-------------------------------------------------+
| |6.1-updates | |
| | | |
| | | |
| +-------------------------+-------------+---------------------+
| |7.1-dev | 7.1-HCF 7.1 GA
| | |
| | |
+------------+-----------+------------------------------------------------->
6.1 dev 6.1 HCF 6.1 GA t
Patches for security vulnerabilities are placed in the *security* repository.
They are designed to change the behavior of the package as little as possible.
Continuous integration testing against base distro updates
-----------------------------------------------------------
As part of the product lifecycle, there are system tests that
verify functionality of Mirantis OpenStack against:
- the current state of the base distro mirror (base system plus released updates),
to check stability of the current release
- the current state of the Stable Release Updates or Fasttrack repository,
to check if package candidates introduce any regressions
Handling of system test results
-------------------------------
If the system test against the proposed or Fasttrack repositories reveals
one or several packages that break Mirantis OpenStack functionality, the Mirantis OpenStack teams provide
one of the following solutions:
- solve the issue on the product side by releasing fixed Mirantis OpenStack packages through
the "updates" repository
- raise the issue with the base distro SRU review team regarding the problem packages
- (if none of the above helps) put a working version of the problem package into
the holdback repository
Also, any package that fails the system test is reflected on the
release status page.
Release status page
-------------------
To ensure that Mirantis OpenStack customers have full information on the release stability, all
packages that produce system test failures are also reported in several
different ways:
- Web: via the `status page <https://fuel-infra.org/>`_.
- On deployed nodes: via a hook that updates MOTD using the `Fuel Infra website <https://fuel-infra.org/>`_.
- On deployed nodes: via an APT pre-hook that checks the status via the above
website and gives a warning if an ``apt-get update`` command is issued (see the sketch below).
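A minimal sketch of such a pre-hook, assuming a hypothetical check script delivered
to the node (the drop-in file name and the script path are illustrative, not the ones
shipped by Fuel)::
# /etc/apt/apt.conf.d/01mos-release-status
APT::Update::Pre-Invoke { "/usr/local/bin/mos-release-status-check || true"; };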
Packages building module
------------------------
Routine builds of Fuel DEB packages are disabled by default and kept for
Fuel CI purposes only (nightly and test product builds). The release product
ISO contains Fuel DEB packages from the Mirantis OpenStack repository. Updates to the Fuel
DEB packages are consumed from the Mirantis OpenStack mirror directly on the Fuel Master
node.
The explicit list of Fuel DEB packages is the following:
* fencing-agent
* nailgun-mcagents
* nailgun-net-check
* nailgun-agent
* python-tasklib
Docker containers building module
---------------------------------
All Dockerfile configs are adjusted to include both upstream and Mirantis OpenStack
repositories.
ISO assembly module
-------------------
ISO assembly module excludes all the parts mentioned above.
Offline installations
---------------------
To support offline installation cases, there is a Linux console
script that mirrors the public base distro and Mirantis OpenStack mirrors to a given location.
The resulting local mirrors can be used as input for the appropriate menu entry of the
Fuel "Settings" tab in the UI, or specified directly via the Fuel CLI. In case of a
deb-based base distro, Mirantis OpenStack requires packages from multiple sections of a given
distribution (main, universe, multiverse, restricted), so the helper script
mirrors all packages from the components specified above. Requirements:
* input base distro mirror URL
* input MOS mirror URL
* ability to run as a cron job to update the base distro and Mirantis OpenStack mirrors (a sketch of such a cron entry is shown below)
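A sketch of such a cron job, assuming a hypothetical helper script name and example
mirror URLs (both are placeholders)::
# refresh the local mirrors nightly at 02:00
0 2 * * * /usr/local/bin/update-local-mirrors.sh \
    --os-mirror http://archive.ubuntu.com/ubuntu \
    --mos-mirror http://mirror.fuel-infra.org/mos-repos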


@ -1,114 +0,0 @@
Sequence Diagrams
=================
OS Provisioning
---------------
.. uml::
title Nodes Provisioning
actor WebUser
box "Physical Server"
participant NodePXE
participant NodeAgent
end box
NodePXE -> Cobbler: PXE discovery
Cobbler --> NodePXE: bootstrap OS image
NodePXE -> Cobbler: network settings request
Cobbler --> NodePXE: IP, DNS response
NodePXE -> NodePXE: OS installation
NodePXE -> NodeAgent: starts agent
NodePXE -> MC: starts MCollective
NodeAgent -> Ohai: get info
Ohai --> NodeAgent: info
NodeAgent -> NodePXE: get admin node IP
NodePXE --> NodeAgent: admin node IP
NodeAgent -> Nailgun: Registration
|||
WebUser -> Nailgun: create cluster
WebUser -> Nailgun: add nodes to cluster
WebUser -> Nailgun: deploy cluster
|||
Nailgun -> Astute: Provision CentOS
Astute -> Cobbler: Provision CentOS
Cobbler -> NodePXE: ssh to reboot
Cobbler --> NodePXE: CentOS image
NodePXE -> NodeAgent: starts agent
NodePXE -> MC: starts MC agent
NodeAgent -> Nailgun: Node metadata
Networks Verification
---------------------
.. uml::
title Network Verification
actor WebUser
WebUser -> Nailgun: verify networks (cluster #1)
Nailgun -> Astute: verify nets (100-120 vlans)
Astute -> MC: start listeners
MC -> net_probe.py: forks to listen
MC --> Astute: listening
Astute -> MC: send frames
MC -> net_probe.py: send frames
net_probe.py --> MC: sent
MC --> Astute: sent
Astute -> MC: get result
MC -> net_probe.py: stop listeners
net_probe.py --> MC: result
MC --> Astute: result graph
Astute --> Nailgun: response vlans Ok
Nailgun --> WebUser: response
Details on Cluster Provisioning & Deployment (via Facter extension)
-------------------------------------------------------------------
.. uml::
title Cluster Deployment
actor WebUser
Nailgun -> Astute: Provision,Deploy
Astute -> MC: Type of nodes?
MC -> Astute: bootstrap
Astute -> Cobbler: create system,reboot
Astute -> MC: Type of nodes?
MC --> Astute: booted in target OS
Astute --> Nailgun: provisioned
Nailgun --> WebUser: status on UI
Astute -> MC: Create /etc/astute.yaml
Astute -> MC: run puppet
MC -> Puppet: runonce
Puppet -> Facter: get facts
Facter --> Puppet: set facts and parse astute.yaml
Puppet -> Puppet: applies $role
Puppet --> MC: done
MC --> Astute: deploy is done
Astute --> Nailgun: deploy is done
Nailgun --> WebUser: deploy is done
Once the deploy and provisioning messages are accepted by Astute, the provisioning
method is called. The provisioning part creates a system in Cobbler and
calls reboot over Cobbler. Then Astute uses `MCollective direct addressing
mode <http://www.devco.net/archives/2012/06/19/mcollective-direct-addressing-mode.php>`_
to check if all required nodes are available, including the puppet agent on them. If
some nodes are not yet ready, Astute waits for a few seconds and tries to
request again. When the nodes are booted in the target OS, Astute uses the upload_file
MCollective plugin to push data to a special file */etc/astute.yaml* on the
target system.
The data include the node's role and all other variables needed for deployment. Then Astute
calls the puppetd MCollective plugin to start deployment, and Puppet is started on the
nodes.
Accordingly, the puppet agent starts its run. The modules contain a facter extension,
which runs before deployment. The extension reads data from */etc/astute.yaml*
placed by MCollective, and extends the Facter data with it as a single fact, which
is then parsed by the *parseyaml* function to create the *$::fuel_settings* data
structure. This structure contains all variables as a single hash and
supports embedding of other rich structures such as a nodes hash or arrays.
A case structure in the running class chooses the appropriate class to import,
based on the *role* and *deployment_mode* variables found in */etc/astute.yaml*.


@ -1,9 +0,0 @@
.. _system-tests:
System Tests
============
To include documentation on system tests, **SYSTEM_TESTS_PATH**
environment variable should be set before running *sphinx-build* or
*make*.


@ -1,522 +0,0 @@
Devops Guide
============
Introduction
------------
Fuel-Devops is a sublayer between application and target environment (currently
only supported under libvirt).
This application is used for testing purposes: grouping virtual machines into
environments, booting KVM VMs locally from the ISO image and over the network
via PXE, creating, snapshotting, and resuming the whole environment in a
single action, creating virtual machines with multiple NICs and multiple hard drives,
and many other customizations with a few lines of code in system tests.
After the 6.0 release, fuel-devops was divided into 2.5.x and 2.9.x versions. Two
separate versions of fuel-devops provide backward compatibility for system
tests which have been refactored since the last major release. See
`how to migrate`_ from an older devops.
For sources please refer to
`fuel-devops repository on github <https://github.com/openstack/fuel-devops>`_.
.. _install system dependencies:
Installation
-------------
You can install fuel-devops from PyPI into a Python virtual
environment (assuming you are using *Ubuntu 12.04* or *Ubuntu 14.04*).
Before using it, please install the following required dependencies:
.. code-block:: bash
sudo apt-get install --yes \
git \
libyaml-dev \
libffi-dev \
python-dev \
python-pip \
qemu \
libvirt-bin \
libvirt-dev \
vlan \
bridge-utils \
genisoimage
sudo apt-get update && sudo apt-get upgrade -y
.. _DevOpsPyPIvenv:
Devops installation in `virtualenv <http://virtualenv.readthedocs.org/en/latest/virtualenv.html>`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Install packages needed for building python eggs
.. code-block:: bash
sudo apt-get install --yes python-virtualenv libpq-dev libgmp-dev pkg-config
2. If you are using *Ubuntu 12.04*, update pip and virtualenv;
otherwise you can skip this step
.. code-block:: bash
sudo pip install pip virtualenv --upgrade
hash -r
3. In order to store the path where your Python virtualenv will be located,
create your working directory and set the following environment variable. If
it is not specified, the current working directory is used:
.. code-block:: bash
export WORKING_DIR=$HOME/working_dir
mkdir $HOME/working_dir
4. Create virtualenv for the *devops* project (e.g. ``fuel-devops-venv``).
Note: the related directory will be used for the ``VENV_PATH`` variable:
.. code-block:: bash
cd $WORKING_DIR
sudo apt-get install --yes python-virtualenv
virtualenv --no-site-packages fuel-devops-venv
.. note:: If you want to use different devops versions in the same time, you
can create several different folders for each version, and then activate the
required virtual environment for each case.
For example::
virtualenv --no-site-packages fuel-devops-venv # For fuel-devops 2.5.x
virtualenv --no-site-packages fuel-devops-venv-2.9 # For fuel-devops 2.9.x
5. Activate the virtualenv and install the *devops* package using PyPI.
In order to identify the latest available version you would like to install,
visit the `fuel-devops <https://github.com/openstack/fuel-devops/tags>`_ repo. For
Fuel 6.0 and earlier, take the latest fuel-devops 2.5.x (e.g.
fuel-devops.git@2.5.6). For Fuel 6.1 and later, use 2.9.x or newer (e.g.
fuel-devops.git@2.9.11):
.. code-block:: bash
. fuel-devops-venv/bin/activate
pip install git+https://github.com/openstack/fuel-devops.git@2.9.11 --upgrade
setup.py in fuel-devops repository does everything required.
.. hint:: You can also use
`virtualenvwrapper <http://virtualenvwrapper.readthedocs.org/>`_
which can help you manage virtual environments
6. Next, follow :ref:`DevOpsConf` section
.. _DevOpsConf:
Configuration
--------------
Basically *devops* requires that the following system-wide settings are
configured:
* Default libvirt storage pool is active (called 'default')
* Current user must have permission to run KVM VMs with libvirt
* PostgreSQL server running with appropriate grants and schema for *devops*
* [Optional] Nested Paging is enabled
Configuring libvirt pool
~~~~~~~~~~~~~~~~~~~~~~~~~
Create libvirt's pool
.. code-block:: bash
sudo virsh pool-define-as --type=dir --name=default --target=/var/lib/libvirt/images
sudo virsh pool-autostart default
sudo virsh pool-start default
Permissions to run KVM VMs with libvirt with current user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Give current user permissions to use libvirt: do not forget to log out and log
back in.
.. code-block:: bash
sudo usermod $(whoami) -a -G libvirtd,sudo
Configuring database
~~~~~~~~~~~~~~~~~~~~~
You can configure PostgreSQL database or as an alternative SQLite.
Configuring PostgreSQL
+++++++++++++++++++++++
Install postgresql package:
.. code-block:: bash
sudo apt-get install --yes postgresql
Set local peers to be trusted by default, create user and db and load fixtures.
.. code-block:: bash
pg_version=$(dpkg-query --show --showformat='${version;3}' postgresql)
pg_createcluster $pg_version main --start
sudo sed -ir 's/peer/trust/' /etc/postgresql/9.*/main/pg_hba.conf
sudo service postgresql restart
* in **2.9.x version**, default <user> and <db> are **fuel_devops**
.. code-block:: bash
sudo -u postgres createuser -P fuel_devops
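# or, as an alternative to the interactive createuser above: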
sudo -u postgres psql -c "CREATE ROLE fuel_devops WITH LOGIN PASSWORD 'fuel_devops'"
sudo -u postgres createdb fuel_devops -O fuel_devops
* in **2.5.x version**, default <user> and <db> are **devops**
.. code-block:: bash
sudo -u postgres createuser -P devops
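# or, as an alternative to the interactive createuser above: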
sudo -u postgres psql -c "CREATE ROLE devops WITH LOGIN PASSWORD 'devops'"
sudo -u postgres createdb devops -O devops
Configuring SQLite3 database
+++++++++++++++++++++++++++++
Install SQLite3 library:
.. code-block:: bash
sudo apt-get install --yes libsqlite3-0
Export the path to the SQLite3 database as the database name:
.. code-block:: bash
export DEVOPS_DB_NAME=$WORKING_DIR/fuel-devops
export DEVOPS_DB_ENGINE="django.db.backends.sqlite3"
Configuring Django
~~~~~~~~~~~~~~~~~~~
After the database setup, we can install the django tables and data:
.. code-block:: bash
django-admin.py syncdb --settings=devops.settings
django-admin.py migrate devops --settings=devops.settings
.. note:: Depending on your Linux distribution,
`django-admin <http://django-admin-tools.readthedocs.org>`_ may refer
to system-wide django installed from package. If this happens you could get
an exception that says that devops.settings module is not resolvable.
To fix this, run django-admin.py (or django-admin) with a relative path ::
./bin/django-admin syncdb --settings=devops.settings
./bin/django-admin migrate devops --settings=devops.settings
[Optional] Enabling `Nested Paging <http://en.wikipedia.org/wiki/Second_Level_Address_Translation>`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following section covers only Intel platform. This option is enabled by
default in the KVM kernel module. If the file ``qemu-system-x86.conf`` does not
exist, you have to create it.
.. code-block:: bash
cat /etc/modprobe.d/qemu-system-x86.conf
options kvm_intel nested=1
In order to be sure that this feature is enabled on your system,
please run:
.. code-block:: bash
sudo apt-get install --yes cpu-checker
sudo modprobe kvm_intel
sudo kvm-ok && cat /sys/module/kvm_intel/parameters/nested
The result should be:
.. code-block:: bash
INFO: /dev/kvm exists
KVM acceleration can be used
Y
Environment creation via Devops + Fuel_QA or Fuel_main
-------------------------------------------------------
Depending on the Fuel release, you may need a different repository.
1. Clone GIT repository
For 6.1 and later, the *fuel-qa* is required:
.. code-block:: bash
git clone https://github.com/openstack/fuel-qa
cd fuel-qa/
.. note:: It is recommended to use the stable branch related to the ISO version.
For instance, with FUEL v7.0 ISO:
.. code-block:: bash
git clone https://github.com/openstack/fuel-qa -b stable/7.0
In case of 6.0 or earlier, please use *fuel-main* repository:
.. code-block:: bash
git clone https://github.com/openstack/fuel-main -b stable/6.0
cd fuel-main/
2. Install requirements (follow :ref:`DevOpsPyPIvenv` section for the
WORKING_DIR variable)
.. code-block:: bash
. $WORKING_DIR/fuel-devops-venv/bin/activate
pip install -r ./fuelweb_test/requirements.txt --upgrade
.. note:: A certain version of fuel-devops is specified in
./fuelweb_test/requirements.txt, so it will overwrite the already installed
fuel-devops. For example, for the fuel-main branch stable/6.0, there is:
.. code-block:: bash
git+git://github.com/stackforge/fuel-devops.git@2.5.6
It is recommended to install the django tables and data after installing
the fuel-qa requirements:
.. code-block:: bash
django-admin.py syncdb --settings=devops.settings
django-admin.py migrate devops --settings=devops.settings
3. Check :ref:`DevOpsConf` section
4. Prepare environment
Download Fuel ISO from
`Nightly builds <https://ci.fuel-infra.org/view/ISO/>`_
or build it yourself (please, refer to :ref:`building-fuel-iso`)
Next, you need to define several variables for the future environment:
* the path where your ISO is located (e.g. $WORKING_DIR/fuel-community-7.0.iso)
* the number of nodes instantiated for the environment (e.g. 5)
.. code-block:: bash
export ISO_PATH=$WORKING_DIR/fuel-community-7.0.iso
export NODES_COUNT=5
Optionally, you can specify the name of your test environment (it will
be used as a prefix for the domain and network names created by
libvirt; the default is ``fuel_system_test``).
.. code-block:: bash
export ENV_NAME=fuel_system_test
export VENV_PATH=$WORKING_DIR/fuel-devops-venv
If you want to use separate files for snapshots, you need to set an environment
variable and use the following required versions:
* fuel-devops >= 2.9.17
* libvirtd >= 1.2.12
This change will switch snapshots created by libvirt from internal to external
mode.
.. code-block:: bash
export SNAPSHOTS_EXTERNAL=true
.. note:: External snapshots use the ~/.devops/snap directory by default to store
memory dumps. If you want to use another directory, set the
SNAPSHOTS_EXTERNAL_DIR variable.
.. code-block:: bash
export SNAPSHOTS_EXTERNAL_DIR=~/.devops/snap
Alternatively, you can edit this file to set them as default values
.. code-block:: bash
fuelweb_test/settings.py
Start tests by running this command
.. code-block:: bash
./utils/jenkins/system_tests.sh -t test -w $(pwd) -j fuelweb_test -i $ISO_PATH -o --group=setup
For more information about how tests work, read the usage information
.. code-block:: bash
./utils/jenkins/system_tests.sh -h
Important notes for Sahara tests
--------------------------------
* It is not recommended to start tests without KVM.
* For the best performance, put the Sahara image
`savanna-0.3-vanilla-1.2.1-ubuntu-13.04.qcow2 <http://sahara-files.mirantis.com/savanna-0.3-vanilla-1.2.1-ubuntu-13.04.qcow2>`_
(md5: 9ab37ec9a13bb005639331c4275a308d) in /tmp/ before starting; otherwise
(if Internet access is available) the image will be downloaded automatically.
Important notes for Murano tests
--------------------------------
* Murano is deprecated in Fuel 9.0.
* Put Murano image `ubuntu-murano-agent.qcow2 <http://sahara-files.mirantis.com/ubuntu-murano-agent.qcow2>`_
(md5: b0a0fdc0b4a8833f79701eb25e6807a3) in /tmp before start.
* Running Murano tests on instances without an Internet connection will fail.
* For Murano tests, execute 'export SLAVE_NODE_MEMORY=5120' before starting.
* If you need an image for Heat autoscale tests, check
`prebuilt-jeos-images <https://fedorapeople.org/groups/heat/prebuilt-jeos-images/>`_.
Run single OSTF tests several times
-----------------------------------
* Export environment variable OSTF_TEST_NAME. Example: export OSTF_TEST_NAME='Request list of networks'
* Export environment variable OSTF_TEST_RETRIES_COUNT. Example: export OSTF_TEST_RETRIES_COUNT=120
* Execute test_ostf_repetable_tests from tests_strength package
Run tests ::
sh "utils/jenkins/system_tests.sh" -t test \
-w $(pwd) \
-j "fuelweb_test" \
-i "$ISO_PATH" \
-V $(pwd)/venv/fuelweb_test \
-o \
--group=create_delete_ip_n_times_nova_flat
.. _How to migrate:
Upgrade from system-wide devops to devops in Python virtual environment
------------------------------------------------------------------------
To migrate from older devops, follow these steps:
1. Remove system-wide fuel-devops (e.g. python-devops)
You must remove system-wide fuel-devops and switch to separate venvs with
different versions of fuel-devops, for Fuel 6.0.x (and older) and 6.1 release.
Repositories 'fuel-main' and 'fuel-qa', that contain system tests, must use
different Python virtual environments, for example:
* ~/venv-nailgun-tests - used for 6.0.x and older releases. Contains version 2.5.x of fuel-devops
* ~/venv-nailgun-tests-2.9 - used for 6.1 and above. Contains version 2.9.x of fuel-devops
If you have scripts which use system fuel-devops, fix them, and activate Python
venv before you start working in your devops environment.
By default, the network pool is configured as follows:
* 10.108.0.0/16 for devops 2.5.x
* 10.109.0.0/16 for 2.9.x
Please check other settings in *devops.settings*, especially the connection
settings to the database.
Before using devops in Python venv, you need to `install system dependencies`_
2. Update fuel-devops and Python venv on CI servers
To update fuel-devops, you can use the following examples:
.. code-block:: bash
# DevOps 2.5.x for system tests from 'fuel-main' repository
if [ -f ~/venv-nailgun-tests/bin/activate ]; then
echo "Python virtual env exist"
else
rm -rf ~/venv-nailgun-tests
virtualenv --no-site-packages ~/venv-nailgun-tests
fi
source ~/venv-nailgun-tests/bin/activate
pip install -r https://raw.githubusercontent.com/openstack/fuel-main/master/fuelweb_test/requirements.txt --upgrade
django-admin.py syncdb --settings=devops.settings --noinput
django-admin.py migrate devops --settings=devops.settings --noinput
deactivate
# DevOps 2.9.x for system tests from 'fuel-qa' repository
if [ -f ~/venv-nailgun-tests-2.9/bin/activate ]; then
echo "Python virtual env exist"
else
rm -rf ~/venv-nailgun-tests-2.9
virtualenv --no-site-packages ~/venv-nailgun-tests-2.9
fi
source ~/venv-nailgun-tests-2.9/bin/activate
pip install -r https://raw.githubusercontent.com/openstack/fuel-qa/master/fuelweb_test/requirements.txt --upgrade
django-admin.py syncdb --settings=devops.settings --noinput
django-admin.py migrate devops --settings=devops.settings --noinput
deactivate
3. Setup new repository of system tests for 6.1 release
All system tests for 6.1 and higher were moved to
`fuel-qa <https://github.com/openstack/fuel-qa>`_ repo.
To upgrade 6.1 jobs, follow these steps:
* make a separate Python venv, for example in ~/venv-nailgun-tests-2.9
* install `requirements <https://github.com/openstack/fuel-qa/blob/master/fuelweb_test/requirements.txt>`_ of system tests
* if you are using system tests on CI, please configure your CI to use new
Python venv, or export path to the new Python venv in the variable
``VENV_PATH`` (follow :ref:`DevOpsPyPIvenv` section for the WORKING_DIR
variable):
.. code-block:: bash
export VENV_PATH=$WORKING_DIR/fuel-devops-venv-2.9
Known issues
------------
* Some versions of libvirt contain a bug that breaks the QEMU virtual machine
XML. You can see this when tests crash with a *libvirt: QEMU Driver error:
unsupported configuration: host doesn't support invariant TSC*. See:
`Bug 1133155 <https://bugzilla.redhat.com/show_bug.cgi?id=1133155>`_.
Workaround: upgrade libvirt to the latest version.
* If the same version of fuel-devops is used with several different databases
(for example, with multiple sqlite3 databases, or with a separate database for
each devops in different python virtual environments), there will be a
collision between libvirt bridge names and interfaces.
Workaround: use the same database for the same version of fuel-devops.
- for **2.9.x**, export the following env variables:
.. code-block:: bash
export DEVOPS_DB_NAME=fuel_devops
export DEVOPS_DB_USER=fuel_devops
export DEVOPS_DB_PASSWORD=fuel_devops
- for **2.5.x**, edit the dict for variable ``DATABASES``:
.. code-block:: bash
vim $WORKING_DIR/fuel-devops-venv/lib/python2.7/site-packages/devops/settings.py


@ -4,17 +4,7 @@ Table of contents
=================
.. toctree::
:maxdepth: 4
:numbered:
development/api_doc
development/objects
develop
user
devops
buildsystem
infra/jenkins_master_deployment
infra/jenkins_slave_deployment
infra/overview
infra/puppet_master_deployment
infra/seed_server_deployment
infra/zabbix_server_deployment
packaging


@ -1,83 +0,0 @@
Jenkins Master
==============
The Jenkins master, as the head of the CI system, handles all tasks related to the build
system; it is responsible for slave connectivity, user authentication, and
job launches.
Deployment
----------
Before you deploy the Jenkins master, perform the following tasks:
#. Verify that the Puppet Master is deployed and DNS solution is working.
#. Consider using the Hiera role. As an example, see the ``jenkins_master`` role
included in the ``fuel-infra/puppet-manifests`` repository.
#. Set the following parameters for the Hiera role on the Jenkins Master:
.. code-block:: ini
---
classes:
- '::fuel_project::jenkins::master'
jenkins::master::jenkins_ssh_private_key_contents: |
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAzVoDH+iIrEmNBytJqR5IFYUcR7A6JvNTyelt4wIHEgVmNSs/
9ry/fEivdaaYGJpw2tri23IWNl5PXInnzKZu0KuRDuqEjyiSYQA8gmAF/+2KJmSM
OCj+QIRutLnHbUyg9MvExSveWrXqZYHKvSS0SJ4a3YP75yS2yp1e5T9YOXX2Na5u
...
LJnYPGIQsEziRtqpClCz9O6qyzPagom13y+s/uYrk9IKzSzjNvHKqzAFIF57paGo
3TWXEjB/RazdPB0PWfc3kjruz8IhDsLKQYPX+h8JuLO8ZL20Mxo7o3bs/GQnDrw1
g/PCKBJscu0RQxsa16tt5aX/IM82cJR6At3tTUyUpiwqNsVClJs=
-----END RSA PRIVATE KEY-----
jenkins::master::jenkins_ssh_public_key_contents: |
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNWgMf6IisSY0HK0mpHkgVhRxHs
Dom81PJ6W3jAgcSBWY1Kz/2vL98SK91ppgYmnDa2uLbchY2Xk9ciefMpm7Qq5EO6oS \
...
KnV7lP1g5dfY1rm6bum7P+Jwf2tdTOa0b5ucK/+iWVbyPO4Z2afPpblh4Vynfe2wMz \
zpGAp3n5MwtH2EZmSXm/B6/CkgOFROsmWH8MzQEvNBGHhw+ONR9'
#. Set proper service FQDN of the Jenkins Master instance:
.. code-block:: ini
fuel_project::jenkins::master::service_fqdn: 'jenkins-master.test.local'
#. Adjust the security model of the Jenkins after the deployment:
.. code-block:: ini
jenkins::master::security_model: 'unsecured' || 'password' || 'ldap'
.. note::
The ``password`` is the most basic one, when Jenkins has no access to
the LDAP and you still require some authorization to be enabled in the
Jenkins.
To deploy the Jenkins Master, complete the following steps:
#. Install base Ubuntu 14.04 with SSH service and set appropriate FQDN.
#. Install puppet agent package:
.. code-block:: console
apt-get update; apt-get install -y puppet
#. Enable puppet agent:
.. code-block:: console
puppet agent --enable
#. Run the Jenkins Master deployment:
.. code-block:: console
FACTER_ROLE=jenkins_master FACTER_LOCATION=us1 puppet agent -tvd \
--server puppet-master.test.local --waitforcert 60
The last action requests the client's certificate. To continue the puppet run,
the certificate must be signed on the Puppet Master.
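For reference, signing the pending certificate on the Puppet Master looks roughly like
this (the FQDN is whatever you set on the agent node; ``jenkins-master.test.local`` is
just an example):
.. code-block:: console
puppet cert list
puppet cert sign jenkins-master.test.local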


@ -1,101 +0,0 @@
Jenkins Slave
=============
The Jenkins slave is a machine that is set up to run build projects scheduled
from the master. The slave runs a dedicated program called a ``slave agent``,
spawned from the master, so there is no need to install Jenkins itself
on a slave.
Deployment
----------
There are a few ways to set up the Jenkins master-slave connection; however, in our
infra we currently use only one of them.
In general, the Jenkins master SSH key is placed in the authorized_keys file
for the jenkins user on a slave machine. Then, via the Jenkins master's web UI,
a node is created manually by specifying the slave's FQDN. After that, the Jenkins
master connects to the slave via SSH as the jenkins user, uploads the
``slave.jar`` file, and spawns it on the slave using the jenkins user.
In order to deploy a Jenkins slave, please look at the already existing hiera
role for an example of a jenkins slave instance. Check that SSH public key
authentication is properly configured.
#. Ensure that in jenkins master hiera role the following two parameters are
set:
.. code-block:: ini
---
classes:
- '::fuel_project::jenkins::master'
jenkins::master::jenkins_ssh_private_key_contents: |
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAzVoDH+iIrEmNBytJqR5IFYUcR7A6JvNTyelt4wIHEgVmNSs/
9ry/fEivdaaYGJpw2tri23IWNl5PXInnzKZu0KuRDuqEjyiSYQA8gmAF/+2KJmSM
...
3TWXEjB/RazdPB0PWfc3kjruz8IhDsLKQYPX+h8JuLO8ZL20Mxo7o3bs/GQnDrw1
g/PCKBJscu0RQxsa16tt5aX/IM82cJR6At3tTUyUpiwqNsVClJs=
-----END RSA PRIVATE KEY-----
jenkins::master::jenkins_ssh_public_key_contents: |
'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNWgMf6IisSY0HK0mpHkgVhRxHs \
Dom81PJ6W3jAgcSBWY1Kz/2vL98SK91ppgYmnDa2uLbchY2Xk9ciefMpm7Qq5EO6oS \
...
KnV7lP1g5dfY1rm6bum7P+Jwf2tdTOa0b5ucK/+iWVbyPO4Z2afPpblh4Vynfe2wMz \
zpGAp3n5MwtH2EZmSXm/B6/CkgOFROsmWH8MzQEvNBGHhw+ONR9'
#. In the jenkins slave role, ensure that the proper 'authorized_keys' parameter is set
.. code-block:: ini
---
classes:
- fuel_project::jenkins::slave
jenkins::slave::authorized_keys:
'jenkins@jenkins-master.test.local':
type: ssh-rsa
key: 'AAAAB3NzaC1yc2EAAAADAQABAAABAQDNWgMf6IisSY...BGHhw+ONR9'
# optional - if you wish to use also password authentication, set to true:
ssh::sshd::password_authentication: true
The above configuration is mandatory for a proper master-to-slave connection.
In addition, if the slaves running a particular hiera role are supposed to be
able to build the ISO, it is required to enable the 'build_fuel_iso' parameter in
the 'slave' class.
.. code-block:: ini
fuel_project::jenkins::slave::build_fuel_iso: true
To deploy Jenkins slave, complete the following steps:
#. Install base Ubuntu 14.04 with SSH service and set appropriate FQDN.
#. Install puppet agent package:
.. code-block:: console
apt-get update; apt-get install -y puppet
#. Enable puppet agent:
.. code-block:: console
puppet agent --enable
#. Run the Jenkins Slave deployment.
.. code-block:: console
FACTER_ROLE=jenkins_slave FACTER_LOCATION=us1 puppet agent -tvd \
--server puppet-master.test.local --waitforcert 60
The last action requests the client's certificate. To continue the puppet run,
the certificate must be signed on the Puppet Master.


@ -1,311 +0,0 @@
Fuel Infrastructure
===================
Overview
--------
Fuel Infrastructure is the set of systems (servers and services) which provide
the following functionality:
* Automatic tests for every patchset committed to Fuel Gerrit repositories
* Fuel nightly builds
* Regular integration tests
* Custom builds and custom tests
* Release management and publishing
* Centralized log storage for gathering logs from infra's servers
* Internal and external mirrors, used by our infra and partners
* DNS service
* Server's monitoring service
* Docker's registry for managing custom docker images
* Small helper subsystems like status pages and so on
Fuel Infrastructure servers are managed by Puppet from one Puppet Master node.
To add a new server to the infrastructure, you can either take any server with base
Ubuntu 14.04 installed and connect it to the Puppet Master via puppet agent, or
you can first set up the PXE server with PXETool and then run
server provisioning in an automated way.
Your infrastructure must have a DNS service running in order to resolve the
mandatory hosts like puppet-master.test.local or pxetool.test.local. There are
at least two possible scenarios for using DNS in the infra.
Using a DHCP service in your infra is optional, but it can be more flexible and
convenient than a static IP configuration.
#. Create own DNS service provided by dnsmasq in your infra.
#. Install base Ubuntu 14.04 with SSH service and set an appropriate FQDN
such as ``dns01.test.local`` and configure the Dnsmasq service:
.. code-block:: console
apt-get update; apt-get install -y dnsmasq
echo "addn-hosts=/etc/dnsmasq.d/hosts" >> /etc/dnsmasq.conf
echo "192.168.50.2 puppet-master.test.local puppet-master" > /etc/dnsmasq.d/hosts
echo "192.168.50.3 pxetool.test.local puppet-master" > /etc/dnsmasq.d/hosts
service dnsmasq restart
#. If you use a static IP, verify that the ``/etc/resolv.conf`` file points
to your DNS.
#. If you use a dynamic IP, verify that the DHCP service is updated
correspondingly.
#. Add a new zone to your current DNS setup or use an external, online DNS service.
#. Add a zone named ``test.local``.
#. Add an appropriate A record and its corresponding PTR record for the
``puppet-master`` name (mandatory for deployment) at least.
#. If you use a static IP, verify that the ``/etc/resolv.conf`` file points
to your DNS,
#. If you use a dynamic IP, verify that the DHCP service is updated
correspondingly.
Jenkins Jobs
------------
Our CI requires many jobs and a lot of configuration; it is not convenient to configure
everything with the Jenkins GUI. We use a dedicated
`repository <https://github.com/fuel-infra/jenkins-jobs>`_ and
`JJB <http://docs.openstack.org/infra/jenkins-job-builder/>`_
to store and manage our jobs.
Install Jenkins Job Builder
~~~~~~~~~~~~~~~~~~~~~~~~~~~
To begin working with Jenkins Job Builder, we need to install and configure it.
#. Install packages required to work with JJB
.. code-block:: console
apt-get install -y git python-tox
# or
yum install git python-tox
#. Download git repository and install JJB
.. code-block:: console
git clone https://github.com/fuel-infra/jenkins-jobs.git
cd jenkins-jobs
tox
#. Enable the python environment. Please replace <server> with the server name, for
example fuel-ci:
.. code-block:: console
source .tox/<server>/bin/activate
#. Create a file jenkins_jobs.ini with the JJB configuration. It can be created
anywhere; for this documentation we assume that it is placed in the
conf/ directory inside the local copy of the jenkins-jobs repository.
.. code-block:: console
[jenkins]
user=<JENKINS USER>
password=<JENKINS PASSWORD OR API-TOKEN>
url=https://<JENKINS URL>/
[job_builder]
ignore_cache=True
keep_descriptions=False
recursive=True
include_path=.:scripts
.. note:: <JENKINS_USER> is the user already defined in Jenkins with an
appropriate permissions set:
* Read - under the Global group of permissions
* Create, Delete, Configure and Read - under the Job group of permissions
Upload jobs to Jenkins
~~~~~~~~~~~~~~~~~~~~~~
When JJB is installed and configured, you can upload jobs to the Jenkins master.
.. note:: We assume that you are in the main directory of the jenkins-jobs repository
and you have enabled the python environment.
Upload all jobs configured for one specified server; for example, the upload for
fuel-ci can be done in this way:
.. code-block:: console
jenkins-jobs --conf conf/jenkins_jobs.ini update servers/fuel-ci:common
Upload only one job
.. code-block:: console
jenkins-jobs --conf conf/jenkins_jobs.ini update servers/fuel-ci:common 8.0-community.all
Building ISO with Jenkins
-------------------------
Requirements
~~~~~~~~~~~~
For a minimal environment, we need 3 systems:
* Jenkins master
* Jenkins slave with the slave function enabled for ISO building and deployment
testing. This can be done in different ways. For instance, you can create a
hiera role for such a server with the values provided below. Please keep in
mind that you have to explicitly set the run_test and build_fuel_iso variables
to true, as they are not enabled by default.
.. code-block:: ini
---
classes:
- '::fuel_project::jenkins::slave'
fuel_project::jenkins::slave::run_test: true
fuel_project::jenkins::slave::build_fuel_iso: true
.. note:: Every slave which will be used for ISO deployment testing, like
BVT, requires additional preparation.
Once puppet is applied and the slave is configured in the Jenkins master, you need
to run the prepare_env job on it. The job sets up the python virtual
environment with fuel-devops installed (:doc:`../devops`).
If you build an ISO newer than 6.1, there is no need to change the default job
parameters. For older versions, you need to run the build with the
update_devops_2_5_x option checked.
* Seed server - the server where you plan to store the built ISO
Create Jenkins jobs
~~~~~~~~~~~~~~~~~~~
To build your own ISO, you need to create job configurations for it. This requires
a few steps:
#. Create your own jobs repository; for a start, we will use the fuel-ci jobs
.. code-block:: console
cd jenkins-jobs/servers
cp -pr fuel-ci test-ci
#. To build and test ISO we will use files:
* servers/test-ci/8.0/community.all.yaml
* servers/test-ci/8.0/fuel_community_publish_iso.yaml
* servers/test-ci/8.0/fuel_community.centos.bvt_2.yaml
* servers/test-ci/8.0/fuel_community.ubuntu.bvt_2.yaml
#. In all files you need to make the following changes:
* Change the email devops+alert@mirantis.com to your own
* If you don't need reporting jobs, delete the triggering of
fuel_community_build_reports in all jobs or disable the reporting job
.. code-block:: ini
- job:
...
publishers:
...
- trigger-parameterized-builds:
...
- project: fuel_community_build_reports
* Update the seed server name in the file
servers/test-ci/8.0/fuel_community_publish_iso.yaml
.. code-block:: ini
- job:
...
publishers:
...
- trigger-parameterized-builds:
...
- project: 8.0.fuel_community.centos.bvt_2, 8.0.fuel_community.ubuntu.bvt_2
...
predefined-parameters: |
ISO_TORRENT=http://seed.fuel-infra.org/fuelweb-iso/fuel-community-$ISO_ID.iso.torrent
* Update the seed server name in the file
servers/test-ci/8.0/builders/publish_fuel_community_iso.sh
.. code-block:: console
sed -i 's/seed-us1.fuel-infra.org/seed.test.local/g' servers/test-ci/8.0/builders/publish_fuel_community_iso.sh
sed -i 's/seed-cz1.fuel-infra.org/seed.test.local/g' servers/test-ci/8.0/builders/publish_fuel_community_iso.sh
#. Create jobs on jenkins master
.. note:: Please remember to:
* change current directory to the root directory of cloned jenkins-jobs repository
* enable python environment
* use correct jenkins_jobs.ini file (with correct jenkins master server)
.. code-block:: console
jenkins-jobs --conf conf/jenkins_jobs.ini update servers/test-ci:common 8.0-community.all
jenkins-jobs --conf conf/jenkins_jobs.ini update servers/test-ci:common 8.0.publish_fuel_community_iso
jenkins-jobs --conf conf/jenkins_jobs.ini update servers/test-ci:common 8.0.fuel_community.centos.bvt_2
jenkins-jobs --conf conf/jenkins_jobs.ini update servers/test-ci:common 8.0.fuel_community.ubuntu.bvt_2
Start ISO building
~~~~~~~~~~~~~~~~~~
When you finish setting up the jobs on the Jenkins master, you will see a project
named 8.0-community.all there. To start the ISO build and test procedure,
run this project.
The build and test procedure has 3 steps:
* ISO building (8.0-community.all)
* when ISO is successfully created it will be uploaded to the seed server
(by triggering 8.0.publish_fuel_community_iso)
* successful upload will start BVT test (8.0.fuel_community.centos.bvt_2 and
8.0.fuel_community.ubuntu.bvt_2)
Gerrit
------
Although the fuel-* repositories are hosted by the `OpenStack Gerrit <http://review.openstack.org>`_,
we use an additional Gerrit instance to host OpenStack packages, internal projects, and all the code
related to the Infrastructure itself.
Our Gerrit instance is installed and configured by Puppet, including specifying
the exact Java WAR file that is used (link). To manage the Gerrit instance we use
`Jeepyb <http://docs.openstack.org/infra/system-config/jeepyb.html>`_ - a tool written by the OpenStack Infra
team, which allows storing the projects configuration in YAML format.
To use Jeepyb with Gerrit, you need to create a "projects.yaml" configuration file,
where for each project you add the following information:
* project name
* project description
* project ACL
* project upstream
If the "upstream" option is specified, Jeepyb will automatically import the upstream
repository into this new project. To apply the configuration, use the "manage-projects" command.
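A minimal ``projects.yaml`` entry might look like the following (the project name,
upstream URL, and ACL path are placeholders, not entries from our configuration):
.. code-block:: ini
- project: fuel-infra/example-project
  description: Example project managed by Jeepyb
  upstream: git://github.com/example/example-project.git
  acl-config: /home/gerrit2/acls/fuel-infra/example-project.config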
Every project has an ACL file. One ACL file can be reused in several projects. In
the ACL file, access rights are defined based on the Gerrit user groups.
For example, in this file you can allow a certain group to use the Code-Review
+/-2 marks.
In our gerrit, we have some global projects - <projects>/. The Core Reviewers
for these projects are <one-core-group>.
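For illustration, a fragment of such an ACL file might look like this (the group
names are placeholders):
.. code-block:: ini
[access "refs/heads/*"]
  label-Code-Review = -1..+1 group Registered Users
  label-Code-Review = -2..+2 group example-project-core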
Contributing
~~~~~~~~~~~~
Feedback
~~~~~~~~


@ -1,128 +0,0 @@
Puppet Master
=============
Puppet is a tool which provides the ability to manage the configuration of systems in an
automatic way using a declarative language. The so-called 'manifests' are
used for describing a particular system configuration.
Deployment
----------
In order to install the Puppet Master on a brand new server running Ubuntu, please
proceed with the following steps:
#. Install Ubuntu 14.04 with SSH service and set FQDN to puppet-master.test.local
#. Install git and clone Puppet Manifests repository into /etc/puppet directory:
.. code-block:: console
apt-get install -y git
git clone https://github.com/fuel-infra/puppet-manifests.git /etc/puppet
#. Execute the Puppet Master's install script:
.. code-block:: console
/etc/puppet/bin/install_puppet_master.sh
The script does the following:
* upgrades all packages on the system
* installs required puppet modules
* installs Puppet Master packages
* runs puppet apply to setup Puppet Master
* runs puppet agent to do a second pass and verify if installation is usable
When script finishes successfully, the Puppet Master installation is completed.
-----------
Using Hiera
-----------
Puppet can use Hiera to look up data. Hiera allows overriding manifest
parameter values during the deployment, thus it is possible to create
a specific data configuration for easier code re-use and easier management of
data that needs to differ across nodes.
All related Hiera structure is placed under the ``/var/lib/hiera`` directory.
The Hiera hierarchy
-------------------
#. common.yaml - the most general,
#. locations/%{::location}.yaml - can override common's data,
#. roles/%{::role}.yaml - can override location's and common's data
#. nodes/%{::clientcert}.yaml - can override data specified in common,
location and role.
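Expressed as a Hiera configuration, this hierarchy looks roughly like the following
(a sketch only; the actual ``hiera.yaml`` shipped in puppet-manifests may differ in
details):
.. code-block:: ini
---
:backends:
  - yaml
:yaml:
  :datadir: /var/lib/hiera
:hierarchy:
  - 'nodes/%{::clientcert}'
  - 'roles/%{::role}'
  - 'locations/%{::location}'
  - common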
The ``common`` and ``nodes`` levels are used within every deployment when they exist. In
contrast, the ``location`` and ``role`` levels need to be passed explicitly as
variables to the ``puppet agent`` run in order to be used. An example:
.. code-block:: console
FACTER_ROLE=websrv FACTER_LOCATION=us1 puppet agent -tvd
To include a puppet class in a role, it is required to use the ``classes``
keyword at the beginning of the role. An example:
.. code-block:: ini
---
classes:
- '::class1::class2'
.. note::
avoid including classes in more than one place, since this will lead to a
duplicate class declaration error.
Another example: create a role stub for the 'docker_registry' module and make
sure that each of the nodes running that role has its own custom service
FQDN set in the Nginx vhost.
#. File 'roles/docker_registry.yaml'
.. code-block:: ini
---
classes:
- '::docker_registry'
- '::fuel_project::nginx'
- '::fuel_project::apps::firewall'
- '::fuel_project::common'
docker_registry::service_fqdn: '%{::fqdn}'
#. File 'nodes/srv01-us.infra.test.local.yaml'
.. code-block:: ini
---
docker_registry::service_fqdn: 'registry-us1.infra.test.local'
#. File 'nodes/srv01-cz.infra.test.local.yaml'
.. code-block:: ini
---
docker_registry::service_fqdn: 'registry-cz1.infra.test.local'
On the ``srv01-us.infra.test.local`` node, during the deployment of the
``docker_registry`` role, the default value of the ``service_fqdn`` class parameter
has been overridden.
After a deployment using FACTER variables, a facter file is created containing
the FACTER variables that were used. For instance:
.. code-block:: console
cat /etc/facter/facts.d/facts.sh
#!/bin/bash
echo "location=us1"
echo "role=docker_registry"
With this file in place, subsequent puppet agent runs will not require the ``FACTER`` variables
to be passed (if no role or location is to be changed).


@ -1,91 +0,0 @@
Seed server
===========
The Seed server serves files, such as ISO and disk images, that were uploaded
from other servers or clients. Seed can host the content over the rsync, http,
or torrent protocol, depending on the Hiera role configuration.
Deployment
----------
Before you deploy Seed server, verify that you have completed the following tasks:
#. Deploy the Puppet Master
#. Verify that the DNS solution works.
#. On the Puppet Master, create ``seed``, a dedicated Hiera role from which all
necessary services, such as opentracker, will be installed.
.. note::
For torrent download, do not include the ``fuel_project::apps::mirror`` class.
#. On the Puppet Master, verify that the ``seed`` Hiera role exists:
.. code-block:: ini
---
classes:
- 'fuel_project::common'
- 'fuel_project::apps::seed'
- 'fuel_project::apps::firewall'
- 'opentracker'
fuel_project::apps::seed::vhost_acl_allow:
- 10.0.0.2/32 # example IP of the slave on which the ISO is built
fuel_project::apps::seed::service_fqdn: 'seed.test.local'
fuel_project::apps::seed::seed_cleanup_dirs:
- dir: '/var/www/seed/fuelweb-iso'
ttl: 11
pattern: 'fuel-*'
fuel_project::apps::firewall::rules:
'1000 - allow ssh connections from 0.0.0.0/0':
source: 0.0.0.0/0
dport: 22
proto: tcp
action: accept
'1000 - allow data upload connections from temp build1.test.local':
source: 10.0.0.2/32
dport: 17333
proto: tcp
action: accept
'1000 - allow zabbix-agent connections from 10.0.0.200/32':
source: 10.0.0.200/32
dport: 10050
proto: tcp
action: accept
'1000 - allow torrent traffic within 10.0.0.0/8 network':
source: 10.0.0.0/8
dport: 8080
proto: tcp
action: accept
To deploy Seed server, complete the following steps:
#. Install base Ubuntu 14.04 with SSH service and set appropriate FQDN.
#. Install puppet agent package:
.. code-block:: console
apt-get update; apt-get install -y puppet
#. Enable puppet agent:
.. code-block:: console
puppet agent --enable
#. Run the deployment of the ``seed`` role:
.. code-block:: console
FACTER_ROLE=seed FACTER_LOCATION=us1 puppet agent -tvd \
--server puppet-master.test.local --waitforcert 60
The last action requests the client's certificate. To continue the puppet run,
the certificate must be signed on the Puppet Master.


@ -1,58 +0,0 @@
Zabbix server
=============
Zabbix is an open source tool designed for servers, services, and network
monitoring.
Deployment
----------
Zabbix can be deployed on a virtual machine with at least 1 GB RAM and 1 CPU.
The HW requirements vary and depend on the potential Zabbix server load and
on where the database engine, for example MySQL, is located: on the same
host or on a dedicated one.
The puppet-manifests repository contains an example ``zbxserver`` role
that can be used as a starting point for Zabbix server deployment.
To deploy Zabbix, perform the following tasks:
#. Install base Ubuntu 14.04 with SSH service and set appropriate FQDN.
#. Install puppet agent package:
.. code-block:: console
apt-get update; apt-get install -y puppet
#. Enable puppet agent:
.. code-block:: console
puppet agent --enable
#. Run the Zabbix server deployment:
.. code-block:: console
FACTER_ROLE=zbxserver FACTER_LOCATION=us1 puppet agent -tvd \
--server puppet-master.test.local --waitforcert 60
The last action requests the client's certificate. To continue the puppet run,
the certificate must be signed on the Puppet Master.
Import templates
----------------
You can import templates and related items, such as triggers, that come from
Fuel's Zabbix production server. To do this, complete the following tasks:
#. Clone the ``tools/zabbix-maintenance`` repository:
.. code-block:: console
git clone https://review.fuel-infra.org/tools/zabbix-maintenance
#. In the Zabbix server web UI, navigate to :guilabel:`Configuration ->
Templates`. Import the required template file using the :guilabel:`Import`
button.


@ -1,11 +0,0 @@
.. _packaging:
Packaging
=========
.. toctree::
:maxdepth: 3
:numbered:
packaging/package_versions
packaging/perestroika


@ -1,90 +0,0 @@
Package version problems
========================
The introduction of OpenStack updates brought up one non-obvious problem with
package versions. In order to update a package, the new version should be
considered higher than the previous one. Package versions are compared between
the old and the new packages, and a package manager then decides if the new
package installation is an upgrade or a downgrade. The algorithm used to
compare version numbers is different for rpm and deb packages, and Puppet uses
its own implementation.
Previously, we had never tried to update our OSCI packages from the previous
version, did not think about version number comparison, and many packages
were named badly.
For example:
2014.1.fuel5.0-mira4 2014.1.1.fuel5.1-mira0
In this case, most version comparison algorithms would find the 5.0 version
to be larger than the 5.1 version.
::
* Native Puppet
2014.1.fuel5.0-mira4 > 2014.1.1.fuel5.1-mira0
* Puppet RPMVERCMP
2014.1.fuel5.0-mira4 < 2014.1.1.fuel5.1-mira0
* Native RPM
0:2014.1.fuel5.0-mira4 < 0:2014.1.1.fuel5.1-mira0
* Native DEB
2014.1.fuel5.0-mira4 > 2014.1.1.fuel5.1-mira0
This is because we have not been using a correct separation between the
package version and revision; these packages should have been named like this:
::
2014.1-fuel5.0.mira4 2014.1.1-fuel5.1.mira0
* Native Puppet
2014.1-fuel5.0.mira4 < 2014.1.1-fuel5.1.mira0
* Puppet RPMVERCMP
2014.1-fuel5.0.mira4 < 2014.1.1-fuel5.1.mira0
* Native RPM
0:2014.1-fuel5.0.mira4 < 0:2014.1.1-fuel5.1.mira0
* Native DEB
2014.1-fuel5.0.mira4 < 2014.1.1-fuel5.1.mira0
This also affects other packages, not only the OpenStack ones.
::
1.0.8.fuel5.0-mira0 1.0.8-fuel5.0.2.mira1
* Native Puppet
1.0.8.fuel5.0-mira0 > 1.0.8-fuel5.0.2.mira1
* Puppet RPMVERCMP
1.0.8.fuel5.0-mira0 < 1.0.8-fuel5.0.2.mira1
* Native RPM
0:1.0.8.fuel5.0-mira0 > 0:1.0.8-fuel5.0.2.mira1
* Native DEB
1.0.8.fuel5.0-mira0 > 1.0.8-fuel5.0.2.mira1
The most complex part of this problem is that even if we change our current
package naming to the correct one, we cannot change the packages that are
already installed and used in production. Therefore, we have raised the epoch
for many 5.0.2 packages to make it possible to upgrade them from the
incorrectly named 5.0 packages and to move to the correct naming scheme. In
Fuel 5.1 and higher, we use only the correct naming, and the epoch can be removed.
::
0:1.0.8.fuel5.0-mira0 1:1.0.8-fuel5.0.2.mira1
* Native Puppet
0:1.0.8.fuel5.0-mira0 < 1:1.0.8-fuel5.0.2.mira1
* Puppet RPMVERCMP
0:1.0.8.fuel5.0-mira0 < 1:1.0.8-fuel5.0.2.mira1
* Native RPM
0:1.0.8.fuel5.0-mira0 < 1:1.0.8-fuel5.0.2.mira1
* Native DEB
0:1.0.8.fuel5.0-mira0 < 1:1.0.8-fuel5.0.2.mira1
As you can see, raising the epoch allows us to define which version is higher
regardless of the actual version values.
Removing the epoch from 5.1 packages makes it impossible to seamlessly upgrade
5.0 or 5.0.2 packages to 5.1 ones. This is not really a problem because we do
not support such upgrades anyway, but if you do try to perform such an upgrade
manually, remove the old packages before installing the new ones.
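Before publishing a package, you can check how a given pair of version strings
will be ordered by asking the native tools directly. The commands below are a
minimal sketch; they assume that ``dpkg`` and the ``rpmdevtools`` package are
available on the host:

.. code-block:: console

    # Debian ordering: exit status 0 means the stated relation holds
    dpkg --compare-versions 2014.1.fuel5.0-mira4 gt 2014.1.1.fuel5.1-mira0 && \
        echo "the old naming sorts higher than the new one"

    # RPM ordering: compare epoch, version, and release explicitly
    rpmdev-vercmp 0 2014.1.fuel5.0 mira4 0 2014.1.1.fuel5.1 mira0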
* My version checkers for all algorithms: https://github.com/dmitryilyin/vercmp
* RPM naming guidelines: http://fedoraproject.org/wiki/Packaging:NamingGuidelines
* RPM comparison: http://fedoraproject.org/wiki/Archive:Tools/RPM/VersionComparison
* DEB naming guidelines and version comparison: https://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version

@ -1,165 +0,0 @@
Perestroika build system
========================
Introduction
------------
Fuel 7.0 introduces a new build system called Perestroika. It uses
standard upstream Linux distribution tools in order to:
* build packages (SBuild/Mock)
* publish packages to repositories
* manage package repositories (using reprepro/createrepo tools).
Every package is built in a clean and up-to-date buildroot. Packages,
their dependencies, and build dependencies are fully self-contained
for each Mirantis OpenStack release. Any package included in any
release can be rebuilt at any point in time using only the packages
from that release.
The package build CI is reproducible and can be recreated from scratch
in a repeatable way.
Perestroika is based on Docker, which makes it easy to distribute. Each
supported Linux distribution has its own Docker image with the necessary
tools and scripts.
For the advantages of Perestroika over the OBS build system, see
`Replace OBS with another build system <http://specs.fuel-infra.org/fuel-specs-master/specs/7.0/replace-obs.html>`_.
Perestroika structure
---------------------
Code storage
~~~~~~~~~~~~
The Gerrit code review system is used as the code storage.
The Gerrit project structure is as follows:
* Mirantis OpenStack/Fuel Master node packages:
- code projects: *[customer-name]/openstack/{package name}*
- spec projects: *[customer-name]/openstack-build/{package name}-build*
* Mirantis OpenStack Linux packages:
- code/spec projects: *[customer-name]/packages/{distribution}/{package name}*
* Fuel Master node Linux packages (separated from Mirantis OpenStack
Linux in 7.0):
- code/spec projects: *[customer-name]/packages/fuel/{distribution}/{package name}*
* Versioning scheme supported by project branches:
- OpenStack: *openstack-ci/fuel-{fuel version}/{openstack version}*
- Mirantis OpenStack Linux/Fuel Master node: *{fuel version}*
**where**
* *customer-name* should be empty for Mirantis OpenStack projects for backward
compatibility with releases older than 7.0.
* the supported values of the ``{distribution}`` parameter are ``centos6``,
``centos7``, and ``trusty``.
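For example, combining the project and branch naming rules above, checking out the sources of an OpenStack package for Fuel 7.0 could look as follows (the package name and the OpenStack version are illustrative):

.. code-block:: console

    git clone https://review.fuel-infra.org/openstack/nova
    cd nova
    git checkout openstack-ci/fuel-7.0/2015.1.0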
Scheduler
~~~~~~~~~
This part is based on the Jenkins CI tool. All jobs are configured using
``jenkins-job-builder`` (see the sketch after the list below). Jenkins has a
separate set of jobs for each *[customer name]+[fuel version]* case. The
Gerrit trigger is configured to track events from the *{version}* branch of
all the *[customer-name]* Gerrit projects.
Each set of jobs contains:
* jobs for OpenStack packages for a cluster (``.rpm`` and ``.deb``)
* jobs for Mirantis OpenStack Linux packages for a cluster (``.rpm``
and ``.deb``)
* jobs for OpenStack packages for the Fuel Master node (``.rpm``); these are
optional when the cluster packages are used.
* jobs for non-OpenStack Fuel Master node packages (``.rpm``)
* jobs for Fuel packages (``.rpm`` and ``.deb``)
* a job for package publishing
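Job definitions are applied to Jenkins with the ``jenkins-jobs`` tool from ``jenkins-job-builder``; a minimal, hypothetical invocation (the configuration and job definition paths are assumptions):

.. code-block:: console

    jenkins-jobs --conf /etc/jenkins_jobs/jenkins_jobs.ini update jjb/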
Build workers
~~~~~~~~~~~~~
These are hardware nodes with preconfigured build tools for all the
supported distributions. They are configured as Jenkins slaves.
Each worker contains:
* preconfigured Docker images with native build tools for each
distribution type:
- ``mockbuild`` builds packages using Mock (CentOS 6 and 7 target
distributions are supported).
- ``sbuild`` builds packages using the SBuild tool (only the Ubuntu Trusty
Tahr target distribution is supported).
* prepared minimal build chroots for all the supported distributions:
- These chroots are updated on a daily basis to stay up to date with the
upstream state.
- Chroot updating is performed by a separate Jenkins job.
- No build jobs can run on a build host while the updating Jenkins job
is running on it.
**Building stage flow**:
#. Checking out sources from Gerrit.
#. Preparing sources to build (creating tarball files, updating
changelogs).
#. Building sources (performed in an isolated environment inside a
Docker container).
#. Getting the build stage exit status, build logs, and built
packages.
#. Parsing and archiving build logs for Jenkins artifacts.
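Inside the container, the build step essentially invokes the distribution's native tool against the prepared chroot. The following is an illustrative sketch only; the package names and the Mock configuration name are assumptions, and the actual Perestroika wrappers add more options:

.. code-block:: console

    # CentOS targets: rebuild a source RPM in a clean Mock buildroot
    mock -r epel-7-x86_64 --rebuild python-example-1.0-1.src.rpm

    # Ubuntu Trusty target: build a Debian source package with sbuild
    sbuild -d trusty python-example_1.0-1.dsc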
The packaging CI uses short-lived Docker containers to perform package
building. The Docker images contain preconfigured build tools only; there are
no chroots inside the images. Build chroots are mounted into a Docker
container at start in read-only mode. Additionally, a *tmpfs* partition with
AUFS overlays is mounted over the read-only chroot folder inside the Docker
container. The container is destroyed once the build stage is completed.
**Goals of this scheme**:
* A number of containers that share a single chroot can run simultaneously on
the same build host.
* There is no need to perform clean-up operations after the build: all changes
exist only inside the container and are purged when the container is
destroyed.
* *tmpfs* works much faster than disk FS/LVM snapshots.
Publisher node
~~~~~~~~~~~~~~
If the build stage finishes successfully, Jenkins runs a publishing
job. The Publisher node contains all repositories for all customer
projects. It is configured as a Jenkins slave. The repositories are
maintained by native tools of their respective distribution
(reprepro or createrepo).
The Publisher slave is fully private and accessible from the Jenkins Master
node only, because it holds the GPG keys. All the packages and repositories
are signed, in the manner native to their respective distribution, with GPG
keys that are stored on the Publisher node.
**Publishing stage**:
#. Getting built packages from the build host (over ``scp``).
#. Checking if packages can be added to a repository (version checking
against existing packages in order to prevent downgrading).
#. Signing new packages (all ``.rpm`` and source ``.deb``) with GPG keys.
#. Removing existing and adding new packages to a repository.
#. Resigning the repository metadata.
#. Syncing new repository state to a Mirror host (over ``rsync``).
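These steps map onto the distributions' native tooling roughly as follows; the sketch below uses hypothetical repository paths, package names, and host names:

.. code-block:: console

    # Debian repository: add a package with reprepro
    reprepro -b /srv/repos/ubuntu includedeb trusty python-example_1.0-1_all.deb

    # RPM repository: regenerate and re-sign the metadata
    createrepo --update /srv/repos/centos/7/x86_64
    gpg --detach-sign --armor /srv/repos/centos/7/x86_64/repodata/repomd.xml

    # Sync the new repository state to the Mirror host
    rsync -av --delete /srv/repos/ mirror.test.local:/srv/repos/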
Mirror node
~~~~~~~~~~~
All repositories are available over the HTTP and rsync protocols and are
synced by the Publisher to the Mirror host.

@ -1,7 +0,0 @@
.. _user:
User Guide
==========
The User Guide has been moved to `docs.mirantis.com <http://docs.mirantis.com/>`_.
If you want to contribute, check out the sources from `GitHub <https://github.com/openstack/fuel-docs.git>`_.