Refactor project infrastructure docs.

The goal is to re-orient the documentation as an introduction for new
contributors and a reference for all contributors.

Change-Id: I8702a5ace908c7618a6451bbfef7fc79b07429ff
Reviewed-on: https://review.openstack.org/30515
Reviewed-by: Elizabeth Krumbach Joseph <lyz@princessleia.com>
Reviewed-by: Clark Boylan <clark.boylan@gmail.com>
Approved: Monty Taylor <mordred@inaugust.com>
Reviewed-by: Monty Taylor <mordred@inaugust.com>
Tested-by: Jenkins
James E. Blair 2013-05-24 16:56:35 -07:00 committed by Jenkins
parent 971b210cf7
commit 9ed2be3098
22 changed files with 1440 additions and 1460 deletions


@@ -13,11 +13,13 @@
# serve to show the default.
import datetime
import os
import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ----------------------------------------------------
@@ -26,7 +28,7 @@ import datetime
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
extensions = ['custom_roles']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -125,7 +127,7 @@ html_theme = 'default'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
#html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.


@@ -0,0 +1,52 @@
# Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Most of this code originated in sphinx.domains.python and
# sphinx.ext.autodoc and has been only slightly adapted for use in
# subclasses here.
# Thanks to Doug Hellmann for:
# http://doughellmann.com/2010/05/defining-custom-roles-in-sphinx.html
from docutils import nodes
def file_role(name, rawtext, text, lineno, inliner,
options={}, content=[]):
"""Link a local path to a Github file view.
Returns a 2-part tuple containing a list of nodes to insert into
the document and a list of system messages. Both are allowed to be
empty.
:param name: The role name used in the document.
:param rawtext: The entire markup snippet, with role.
:param text: The text marked with the role.
:param lineno: The line number where rawtext appears in the input.
:param inliner: The inliner instance that called us.
:param options: Directive options for customization.
:param content: The directive content for customization.
"""
ref = 'https://github.com/openstack-infra/config/blob/master/%s' % text
node = nodes.reference(rawtext, text, refuri=ref, **options)
return [node], []
def setup(app):
"""Install the plugin.
:param app: Sphinx application context.
"""
app.add_role('file', file_role)
return
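For illustration, the link target the ``file`` role produces can be sketched in plain Python; the repository path below is an arbitrary example, not one taken from the role's callers:

```python
# Sketch of the URL construction performed by file_role above.
GITHUB_TEMPLATE = 'https://github.com/openstack-infra/config/blob/master/%s'


def github_file_url(path):
    """Return the GitHub file-view URL for a path in the config repo."""
    return GITHUB_TEMPLATE % path


# Hypothetical path, for illustration only:
url = github_file_url('modules/openstack_project/manifests/review.pp')
```

In the rendered documentation, ``:file:`modules/...``` thus becomes a hyperlink to the matching file in the openstack-infra/config repository on GitHub.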


@@ -0,0 +1,206 @@
:title: Devstack Gate
.. _devstack-gate:
Devstack Gate
#############
Devstack-gate is a collection of scripts used by the OpenStack CI team
to test every change to core OpenStack projects by deploying OpenStack
via devstack on a cloud server.
At a Glance
===========
:Hosts:
* http://jenkins.openstack.org/
* http://devstack-launch.slave.openstack.org/
:Puppet:
* :file:`modules/openstack_project/manifests/template.pp`
* :file:`modules/openstack_project/manifests/devstack_launch_slave.pp`
:Projects:
* http://github.com/openstack-infra/devstack-gate
:Bugs:
* http://bugs.launchpad.net/openstack-ci
:Resources:
* `Devstack-gate README <https://github.com/openstack-infra/devstack-gate/blob/master/README.md>`_
Overview
========
All changes to core OpenStack projects are "gated" on a set of tests
and will not be merged into the main repository unless they pass all
of the configured tests. Most projects require unit tests
in python2.6 and python2.7, and pep8. Those tests are all run only on
the project in question. The devstack gate test, however, is an
integration test and ensures that a proposed change still enables
several of the projects to work together. Any proposed change to the
configured set of projects must pass the devstack gate test:
Obviously we test nova, glance, keystone, horizon, quantum and their
clients because they all work closely together to form an OpenStack
system. Changes to devstack itself are also required to pass this test
so that we can be assured that devstack is always able to produce a
system capable of testing the next change to nova. The devstack gate
scripts themselves are included for the same reason.
How It Works
============
The devstack test starts with an essentially bare virtual machine,
installs devstack on it, and runs some simple tests of the resulting
OpenStack installation. In order to ensure that each test run is
independent, the virtual machine is discarded at the end of the run,
and a new machine is used for the next run. In order to keep the
actual test run as short and reliable as possible, the virtual
machines are prepared ahead of time and kept in a pool ready for
immediate use. The process of preparing the machines ahead of time
reduces network traffic and external dependencies during the run.
The mandate of the devstack-gate project is to prepare those virtual
machines, ensure that enough of them are always ready to run,
bootstrap the test process itself, and clean up when it's done. The
devstack gate scripts can be configured to provision
machines based on several images (eg, natty, oneiric, precise), and
each of those from several providers. Using multiple providers makes
the entire system more resilient, since only one provider
needs to function in order for us to run tests. Supporting multiple
images will help with the transition of testing from oneiric to
precise, and will allow us to continue running tests for stable
branches on older operating systems.
To accomplish all of that, the devstack-gate repository holds several
scripts that are run by Jenkins.
Once per day, for every image type (and provider) configured, the
devstack-vm-update-image.sh script checks out the latest copy of
devstack, and then runs the devstack-vm-update-image.py script. It
boots a new VM from the provider's base image, installs some basic
packages (build-essential, python-dev, etc) including java so that the
machine can run the Jenkins slave agent, runs puppet to set up the
basic system configuration for Jenkins slaves in the openstack-infra
project, and then caches all of the debian and pip packages and test
images specified in the devstack repository, and clones the OpenStack
project repositories. It then takes a snapshot image of that machine
to use when booting the actual test machines. When they boot, they
will already be configured and have all, or nearly all, of the network
accessible data they need. Then the template machine is deleted. The
Jenkins job that does this is devstack-update-vm-image. It is a matrix
job that runs for all configured providers, and if any of them fail,
it's not a problem since the previously generated image will still be
available.
Even though launching a machine from a saved image is usually fast,
depending on the provider's load it can sometimes take a while, and
it's possible that the resulting machine may end up in an error state,
or have some malfunction (such as a misconfigured network). Due to
these uncertainties, we provision the test machines ahead of time and
keep them in a pool. Every ten minutes, a job runs to spin up new VMs
for testing and add them to the pool, using the devstack-vm-launch.py
script. Each image type has a parameter specifying how many machines of
that type should be kept ready, and each provider has a parameter
specifying the maximum number of machines allowed to be running on
that provider. Within those bounds, the job attempts to keep the
requested number of machines up and ready to go at all times. When a
machine is spun up and found to be accessible, it is added to Jenkins
as a slave machine with one executor and a tag like "devstack-foo"
(eg, "devstack-oneiric" for oneiric image types). The Jenkins job that
does this is devstack-launch-vms. It is also a matrix job that runs
for all configured providers.
When a proposed change is approved by the core reviewers, Jenkins
triggers the devstack gate test itself. This job runs on one of the
previously configured "devstack-foo" nodes and invokes the
devstack-vm-gate-wrap.sh script which checks out code from all of the
involved repositories, and merges the proposed change. That script
then calls devstack-vm-gate.sh which installs a devstack configuration
file, and invokes devstack. Once devstack is finished, it runs
exercise.sh which performs some basic integration testing. After
everything is done, the script copies all of the log files back to the
Jenkins workspace and archives them along with the console output of
the run. The Jenkins job that does this is the somewhat awkwardly
named gate-integration-tests-devstack-vm.
To prevent a node from being used for a second run, there is a job
named devstack-update-inprogress which is triggered as a parameterized
build step from gate-integration-tests-devstack-vm. It is passed the
name of the node on which the gate job is running, and it disables
that node in Jenkins by invoking devstack-vm-inprogress.py. The
currently running job will continue, but no new jobs will be scheduled
for that node.
Similarly, when the node is finished, a parameterized job named
devstack-update-complete (which runs devstack-vm-delete.py) is
triggered as a post-build action. It removes the node from Jenkins
and marks the VM for later deletion.
In the future, we hope to be able to install developer SSH keys on VMs
from failed test runs, but for the moment the policies of the
providers who are donating test resources do not permit that. However,
most problems can be diagnosed from the log data that are copied back
to Jenkins. There is a script that cleans up old images and VMs that
runs frequently. It's devstack-vm-reap.py and is invoked by the
Jenkins job devstack-reap-vms.
How to Debug a Devstack Gate Failure
====================================
When Jenkins runs gate tests for a change, it leaves comments on the
change in Gerrit with links to the test run. If a change fails the
devstack gate test, you can follow it to the test run in Jenkins to
find out what went wrong. The first thing you should do is look at the
console output (click on the link labeled "[raw]" to the right of
"Console Output" on the left side of the screen). You'll want to look
at the raw output because Jenkins will truncate the large amount of
output that devstack produces. Skip to the end to find out why the
test failed (keep in mind that the last few commands it runs deal with
copying log files and deleting the test VM -- errors that show up
there won't affect the test results). You'll see a summary of the
devstack exercise.sh tests near the bottom. Scroll up to look for
errors related to failed tests.
You might need some information about the specific run of the test. At
the top of the console output, you can see all the git commands used
to set up the repositories, and they will output the (short) sha1 and
commit subjects of the head of each repository.
It's possible that a failure could be a false negative related to a
specific provider, especially if there is a pattern of failures from
tests that run on nodes from that provider. To find out which
provider supplied the node the test ran on, look at the name of the
Jenkins slave near the top of the console output; the name of the
provider is included in it.
Below that, you'll find the output from devstack as it installs all of
the debian and pip packages required for the test, and then configures
and runs the services. Most of what it needs should already be cached
on the test host, but if the change to be tested includes a dependency
change, or there has been such a change since the snapshot image was
created, the updated dependency will be downloaded from the Internet,
which could cause a false negative if that fails.
Assuming that there are no visible failures in the console log, you
may need to examine the log output from the OpenStack services. Back
on the Jenkins page for the build, you should see a list of "Build
Artifacts" in the center of the screen. All of the OpenStack services
are configured to syslog, so you may find helpful log messages by
clicking on "syslog.txt". Some error messages are so basic they don't
make it to syslog, such as if a service fails to start. Devstack
starts all of the services in screen, and you can see the output
captured by screen in files named "screen-\*.txt". You may find a
traceback there that isn't in syslog.
After examining the output from the test, if you believe the result
was a false negative, you can retrigger the test by re-approving the
change in Gerrit. If a test failure is a result of a race condition in
the OpenStack code, please take the opportunity to try to identify it,
and file a bug report or fix the problem. If it seems to be related to
a specific devstack gate node provider, we'd love it if you could help
identify what the variable might be (whether in the devstack-gate
scripts, devstack itself, OpenStack, or even the provider's service).
Developer Setup
===============
If you'd like to work on the devstack-gate scripts and test process,
see the README in the devstack-gate repo for specific instructions.

doc/source/etherpad.rst

@@ -0,0 +1,33 @@
:title: Etherpad
.. _etherpad:
Etherpad
########
Etherpad (previously known as "etherpad-lite") is installed on
etherpad.openstack.org to facilitate real-time collaboration on
documents. It is used extensively during OpenStack Developer
Summits.
At a Glance
===========
:Hosts:
* http://etherpad.openstack.org
:Puppet:
* :file:`modules/etherpad_lite`
* :file:`modules/openstack_project/manifests/etherpad.pp`
* :file:`modules/openstack_project/manifests/etherpad_dev.pp`
:Projects:
* http://etherpad.org/
* https://github.com/ether/etherpad-lite
:Bugs:
* http://bugs.launchpad.net/openstack-ci
* https://github.com/ether/etherpad-lite/issues
Overview
========
Apache is configured as a reverse proxy and there is a MySQL database
backend.


@@ -1,175 +1,52 @@
:title: Gerrit Installation
:title: Gerrit
.. _gerrit:
Gerrit
######
Objective
*********
Gerrit is the code review system used by the OpenStack project. For a
full description of how the system fits into the OpenStack workflow,
see `the GerritJenkinsGithub wiki article
<https://wiki.openstack.org/wiki/GerritJenkinsGithub>`_.
In this workflow, developers submit changes to Gerrit, where they are
peer-reviewed and automatically tested by Jenkins before being
committed to the main repo. The public repo is on GitHub.
This section describes how Gerrit is configured for use in the
OpenStack project and the tools used to manage that configuration.
References
**********
At a Glance
===========
* http://gerrit.googlecode.com/svn/documentation/2.2.1/install.html
* http://feeding.cloud.geek.nz/2011/04/code-reviews-with-gerrit-and-gitorious.html
* http://feeding.cloud.geek.nz/2011/05/integrating-launchpad-and-gerrit-code.html
* http://www.infoq.com/articles/Gerrit-jenkins-hudson
* https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger
* https://wiki.mahara.org/index.php/Developer_Area/Developer_Tools
Known Issues
************
* Don't use innodb until at least gerrit 2.2.2 because of:
http://code.google.com/p/gerrit/issues/detail?id=518
:Hosts:
* http://review.openstack.org
* http://review-dev.openstack.org
:Puppet:
* :file:`modules/gerrit`
* :file:`modules/openstack_project/manifests/review.pp`
* :file:`modules/openstack_project/manifests/review_dev.pp`
:Configuration:
* :file:`modules/openstack_project/templates/review.projects.yaml.erb`
:Projects:
* http://code.google.com/p/gerrit/
:Bugs:
* http://bugs.launchpad.net/openstack-ci
* http://code.google.com/p/gerrit/issues/list
:Resources:
* `Gerrit Documentation <https://review.openstack.org/Documentation/index.html>`_
Installation
************
============
Host Installation
=================
Gerrit is installed and configured by Puppet, including specifying the
exact Java WAR file that is used. See :ref:`sysadmin` for how Puppet
is used to manage OpenStack infrastructure systems.
Prepare Host
------------
This sets the host up with the standard OpenStack system
administration configuration. Skip this if you're not setting up a
host for use by the OpenStack project.
Gerrit Configuration
--------------------
.. code-block:: bash
sudo apt-get install puppet git openjdk-6-jre-headless mysql-server
git clone git://github.com/openstack-infra/config.git
cd config/
sudo bash run_puppet.sh
Install MySQL
-------------
Basic configuration of MySQL is handled via puppet.
Install Gerrit
--------------
Note that OpenStack's Gerrit installation currently uses a custom .war of Gerrit
2.2.2. The following instructions are for the generic Gerrit binaries:
.. code-block:: bash
wget http://gerrit.googlecode.com/files/gerrit-2.2.1.war
mv gerrit-2.2.1.war gerrit.war
java -jar gerrit.war init -d review_site
The .war file will bring up an interactive tool to change the
settings; these should be set as follows. Note that the password
configured earlier for MySQL should be provided when prompted::
*** Gerrit Code Review 2.2.1
***
Create '/home/gerrit2/review_site' [Y/n]?
*** Git Repositories
***
Location of Git repositories [git]:
*** SQL Database
***
Database server type [H2/?]: ?
Supported options are:
h2
postgresql
mysql
jdbc
Database server type [H2/?]: mysql
Gerrit Code Review is not shipped with MySQL Connector/J 5.1.10
** This library is required for your configuration. **
Download and install it now [Y/n]?
Downloading http://repo2.maven.org/maven2/mysql/mysql-connector-java/5.1.10/mysql-connector-java-5.1.10.jar ... OK
Checksum mysql-connector-java-5.1.10.jar OK
Server hostname [localhost]:
Server port [(MYSQL default)]:
Database name [reviewdb]:
Database username [gerrit2]:
gerrit2's password :
confirm password :
*** User Authentication
***
Authentication method [OPENID/?]:
*** Email Delivery
***
SMTP server hostname [localhost]:
SMTP server port [(default)]:
SMTP encryption [NONE/?]:
SMTP username :
*** Container Process
***
Run as [gerrit2]:
Java runtime [/usr/lib/jvm/java-6-openjdk/jre]:
Copy gerrit.war to /home/gerrit2/review_site/bin/gerrit.war [Y/n]?
Copying gerrit.war to /home/gerrit2/review_site/bin/gerrit.war
*** SSH Daemon
***
Listen on address [*]:
Listen on port [29418]:
Gerrit Code Review is not shipped with Bouncy Castle Crypto v144
If available, Gerrit can take advantage of features
in the library, but will also function without it.
Download and install it now [Y/n]?
Downloading http://www.bouncycastle.org/download/bcprov-jdk16-144.jar ... OK
Checksum bcprov-jdk16-144.jar OK
Generating SSH host key ... rsa... dsa... done
*** HTTP Daemon
***
Behind reverse proxy [y/N]? y
Proxy uses SSL (https://) [y/N]? y
Subdirectory on proxy server [/]:
Listen on address [*]:
Listen on port [8081]:
Canonical URL [https://review.openstack.org/]:
Initialized /home/gerrit2/review_site
Executing /home/gerrit2/review_site/bin/gerrit.sh start
Starting Gerrit Code Review: OK
Waiting for server to start ... OK
Opening browser ...
Please open a browser and go to https://review.openstack.org/#admin,projects
Configure Gerrit
----------------
The file /home/gerrit2/review_site/etc/gerrit.config will be set up
automatically by puppet.
Set Gerrit to start on boot:
.. code-block:: bash
ln -snf /home/gerrit2/review_site/bin/gerrit.sh /etc/init.d/gerrit
update-rc.d gerrit defaults 90 10
Then create the file ``/etc/default/gerritcodereview`` with the following
contents:
.. code-block:: ini
GERRIT_SITE=/home/gerrit2/review_site
Most of Gerrit's configuration is in configuration files or Git
repositories (and in our case, managed by Puppet), but a few items
must be configured in the database. The following is a record of
these changes:
Add "Approved" review type to gerrit:
@@ -209,10 +86,7 @@ we're not happy with people for submitting the patch in the first place:
set name="I would prefer that you didn't merge this"
where category_id='CRVW' and value=-1;
OpenStack currently uses Gerrit's built in CLA system. This
configuration is not recommended for new projects and is merely an
artifact of legal requirements placed on the OpenStack project. Here are
the SQL commands to set it up:
Add information about the CLA:
.. code-block:: mysql
@@ -221,189 +95,32 @@ the SQL commands to set it up:
'OpenStack Individual Contributor License Agreement',
'static/cla.html', 2);
Groups
------
Install Apache
--------------
::
A number of system-wide groups are configured in Gerrit. These
include `Project Bootstrappers` which grants all the permissions
needed to set up a new project. Normally the OpenStack Project
Creator account is the only member of this group, but members of the
`Administrators` group may temporarily add themselves in order to
correct problems with automatic project creation.
apt-get install apache2
Create: /etc/apache2/sites-available/gerrit:
.. code-block:: apacheconf
<VirtualHost *:80>
ServerAdmin webmaster@localhost
ErrorLog ${APACHE_LOG_DIR}/gerrit-error.log
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/gerrit-access.log combined
Redirect / https://review-dev.openstack.org/
</VirtualHost>
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerAdmin webmaster@localhost
ErrorLog ${APACHE_LOG_DIR}/gerrit-ssl-error.log
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/gerrit-ssl-access.log combined
SSLEngine on
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
#SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory /usr/lib/cgi-bin>
SSLOptions +StdEnvVars
</Directory>
BrowserMatch "MSIE [2-6]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive
BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
RewriteEngine on
RewriteCond %{HTTP_HOST} !review-dev.openstack.org
RewriteRule ^.*$ https://review-dev.openstack.org/
ProxyPassReverse / http://localhost:8081/
<Location />
Order allow,deny
Allow from all
ProxyPass http://localhost:8081/ retry=0
</Location>
The `External Testing Tools` group is used to grant +/-1 Verified
access to external testing tools.
</VirtualHost>
</IfModule>
GitHub Integration
==================
Run the following commands:
Gerrit replicate to GitHub by pushing to a standard Git remote. The
GitHub projects are configured to allow only the Gerrit user to push.
.. code-block:: bash
Pull requests cannot be disabled for a project in GitHub, so instead
we have a script that runs from cron to close any open pull requests
with instructions to use Gerrit.
a2enmod ssl proxy proxy_http rewrite
a2ensite gerrit
a2dissite default
These are both handled automatically by :ref:`jeepyb`.
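A minimal sketch of what the pull-request-closing script might do, assuming the standard GitHub REST endpoints for issue comments and pull requests; the function name and message text here are invented for illustration, and no network call is made:

```python
# Hypothetical sketch: comment on and close one open pull request,
# pointing the submitter at Gerrit. Only the request construction is
# shown; actually sending these calls is left out.

CLOSE_MESSAGE = ("Thank you for your contribution. This project uses "
                 "Gerrit for code review; please resubmit your change "
                 "via https://review.openstack.org/ instead.")


def close_pull_request_calls(owner, repo, number):
    """Return the (method, path, payload) API calls needed to comment
    on and close one pull request."""
    base = '/repos/%s/%s' % (owner, repo)
    return [
        # Leave an explanatory comment on the pull request's issue.
        ('POST', '%s/issues/%d/comments' % (base, number),
         {'body': CLOSE_MESSAGE}),
        # Then close the pull request itself.
        ('PATCH', '%s/pulls/%d' % (base, number),
         {'state': 'closed'}),
    ]
```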
Install Exim
------------
::
apt-get install exim4
dpkg-reconfigure exim4-config
Choose "internet site", otherwise select defaults
edit: /etc/default/exim4 ::
QUEUEINTERVAL='5m'
GitHub Setup
============
Generate an SSH key for Gerrit for use on GitHub
------------------------------------------------
::
sudo su - gerrit2
gerrit2@gerrit:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/gerrit2/.ssh/id_rsa):
Created directory '/home/gerrit2/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
GitHub Configuration
--------------------
#. create openstack-gerrit user on github
#. add gerrit2 ssh public key to openstack-gerrit user
#. create gerrit team in openstack org on github with push/pull access
#. add openstack-gerrit to gerrit team in openstack org
#. add public master repo to gerrit team in openstack org
#. save github host key in known_hosts
::
gerrit2@gerrit:~$ ssh git@github.com
The authenticity of host 'github.com (207.97.227.239)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
PTY allocation request failed on channel 0
You will also need to create the file ``github-projects.secure.config`` in the ``/etc/github/`` directory. The contents of this are as follows:
.. code-block:: ini
[github]
username = github-user
password = string
The username should be the GitHub username for Gerrit to use when
communicating with GitHub. The password (an API token) can be found in
the GitHub account settings for that account.
Gerrit Replication to GitHub
----------------------------
The file ``review_site/etc/replication.config`` is needed with the following
contents:
.. code-block:: ini
[remote "github"]
url = git@github.com:${name}.git
Jenkins / Gerrit Integration
============================
Create a Jenkins User in Gerrit
-------------------------------
With the jenkins public key, as a gerrit admin user::
cat jenkins.pub | ssh -p29418 review.openstack.org gerrit create-account --ssh-key - --full-name Jenkins --email jenkins@openstack.org jenkins
Create a "CI Systems" group in Gerrit and make jenkins a member of it.
Create a Gerrit Git Prep Job in Jenkins
---------------------------------------
When gating trunk with Jenkins, we want to test changes as they will
appear once merged by Gerrit, but the gerrit trigger plugin will, by
default, test them as submitted. If HEAD moves on while the change is
under review, it may end up getting merged with HEAD, and we want to
test the result.
To do that, make sure the "Hudson Template Project plugin" is
installed, then set up a new job called "Gerrit Git Prep", and add a
shell command build step (no other configuration)::
#!/bin/sh -x
git checkout $GERRIT_BRANCH
git reset --hard remotes/origin/$GERRIT_BRANCH
git merge FETCH_HEAD
CODE=$?
if [ ${CODE} -ne 0 ]; then
git reset --hard remotes/origin/$GERRIT_BRANCH
exit ${CODE}
fi
Later, we will configure Jenkins jobs that we want to behave this way
to use this build step.
Auto Review Expiry
==================
@@ -418,505 +135,39 @@ onto the gerrit servers. This script follows two rules:
If your review gets touched by either of these rules, it is possible
to unabandon it in the Gerrit web interface.
Launchpad Integration
=====================
Keys
----
The key for the Launchpad account is in ~/.ssh/launchpad_rsa.
Connecting to Launchpad requires OAuth authentication, so open the URL
in a browser and log in to Launchpad as the hudson-openstack user.
Subsequent runs will use the cached credentials.
This process is managed by the :ref:`jeepyb` openstack-infra project.
Gerrit IRC Bot
==============
Installation
------------
Gerritbot consumes the Gerrit event stream and announces relevant
events on IRC. :ref:`gerritbot` is an openstack-infra project and is
also available on Pypi.
Ensure there is an up-to-date checkout of openstack-infra/config in ~gerrit2.
::
apt-get install python-irclib python-daemon python-yaml
cp ~gerrit2/openstack-infra/config/gerritbot.init /etc/init.d
chmod a+x /etc/init.d/gerritbot
update-rc.d gerritbot defaults
su - gerrit2
ssh-keygen -f /home/gerrit2/.ssh/gerritbot_rsa
As a Gerrit admin, create a user for gerritbot::
cat ~gerrit2/.ssh/gerritbot_rsa | ssh -p29418 review.openstack.org gerrit create-account --ssh-key - --full-name GerritBot gerritbot
Configure gerritbot, including which events should be announced in the
gerritbot.config file:
.. code-block:: ini
[ircbot]
nick=NICKNAME
pass=PASSWORD
server=chat.freenode.net
channel=openstack-dev
port=6667
[gerrit]
user=gerritbot
key=/home/gerrit2/.ssh/gerritbot_rsa
host=review.openstack.org
port=29418
events=patchset-created, change-merged, x-vrif-minus-1, x-crvw-minus-2
Register an account with NickServ on FreeNode, and put the account and
password in the config file.
::
sudo /etc/init.d/gerritbot start
Launchpad Bug Integration
=========================
In addition to the hyperlinks provided by the regex in gerrit.config,
we use a Gerrit hook to update Launchpad bugs when changes referencing
them are applied. This is managed by the :ref:`jeepyb`
openstack-infra project.
Installation
------------
Ensure an up-to-date checkout of openstack-infra/config is in ~gerrit2.
::
apt-get install python-pyme
cp ~gerrit2/gerrit-hooks/change-merged ~gerrit2/review_site/hooks/
Create a GPG key and register it with Launchpad::
gerrit2@gerrit:~$ gpg --gen-key
gpg (GnuPG) 1.4.11; Copyright (C) 2010 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection?
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y
You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
"Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"
Real name: Openstack Gerrit
Email address: review@openstack.org
Comment:
You selected this USER-ID:
"Openstack Gerrit <review@openstack.org>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
You need a Passphrase to protect your secret key.
gpg: gpg-agent is not available in this session
You don't want a passphrase - this is probably a *bad* idea!
I will do it anyway. You can change your passphrase at any time,
using this program with the option "--edit-key".
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /home/gerrit2/.gnupg/trustdb.gpg: trustdb created
gpg: key 382ACA7F marked as ultimately trusted
public and secret key created and signed.
gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
pub 2048R/382ACA7F 2011-07-26
Key fingerprint = 21EF 7F30 C281 F61F 44CD EC48 7424 9762 382A CA7F
uid Openstack Gerrit <review@openstack.org>
sub 2048R/95F6FA4A 2011-07-26
gerrit2@gerrit:~$ gpg --send-keys --keyserver keyserver.ubuntu.com 382ACA7F
gpg: sending key 382ACA7F to hkp server keyserver.ubuntu.com
Log into the Launchpad account and add the GPG key to the account.
Adding New Projects
*******************
Generate an SSH key for Gerrit
------------------------------------------------
::
sudo su - gerrit2
gerrit2@gerrit:~$ ssh-keygen -f ~/.ssh/example_project_id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
*******************
New Project Creation
====================
Gerrit project creation is now managed through changes to the
openstack-infra/config repository. :ref:`jeepyb` handles
automatically creating any new projects defined in the configuration
files. The old manual processes are documented below; they are still
valid and may be useful when dealing with corner cases, but you should
use the automated method whenever possible.
Puppet and its related scripts are able to create the new project in
Gerrit, create the new project on Github, create a local git replica on
the Gerrit host, configure the project Access Controls, and create new
groups in Gerrit that are mentioned in the Access Controls. You might
also want to configure Zuul and Jenkins to run tests on the new project.
The details for that process are in the next section.
Gerrit projects are configured in the
``openstack-infra/config:modules/openstack_project/templates/review.projects.yaml.erb``
file. This file contains two sections: the first is a set of default
config values that each project can override, and the second is a list
of projects (each may contain their own overrides).
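The way these two sections combine can be sketched in Python. This is an
illustration of the merge behaviour only, not jeepyb's actual code; the keys
and project names are taken from the examples below.

```python
# Illustrative sketch of defaults-plus-overrides merging (not the real
# manage-projects implementation).
defaults = {
    "has-wiki": False,
    "has-issues": False,
    "has-pull-requests": False,
}
projects = [
    {"project": "example/project1", "has-wiki": True},
    {"project": "example/gerrit"},
]

def effective_config(project, defaults):
    # Keys set on the project itself win over the global defaults.
    merged = dict(defaults)
    merged.update(project)
    return merged

print(effective_config(projects[0], defaults)["has-wiki"])   # True
print(effective_config(projects[1], defaults)["has-wiki"])   # False
```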
As a Gerrit admin, create a user for example-project-creator::
cat ~gerrit2/.ssh/example_project_id_rsa | ssh -p29418 review.openstack.org gerrit create-account --ssh-key - --full-name "Example Project Creator" --email example-project-creator@example.org example-project-creator
#. Config default values::
- homepage: http://example.org
  local-git-dir: /var/lib/git
  gerrit-host: review.example.org
  gerrit-user: example-project-creator
  gerrit-key: /home/gerrit2/.ssh/example_project_id_rsa
  github-config: /etc/github/github-projects.secure.config
  has-wiki: False
  has-issues: False
  has-pull-requests: False
  has-downloads: False
Note: the gerrit-user 'example-project-creator' should be added to the
"Project Bootstrappers" group in :ref:`acl`.
#. Project definition::
- project: example/gerrit
  description: Fork of Gerrit used by Example
  remote: https://gerrit.googlesource.com/gerrit
- project: example/project1
  description: Best project ever.
  has-wiki: True
  acl-config: /path/to/acl/file
The above config gives puppet and its related scripts enough information
to create new projects, but not enough to add access controls to each
project. To add access control you need to have an ``acl-config``
option for the project in the ``review.projects.yaml.erb`` file. That
option should have a value that is a path to the ``project.config`` for
that project.
That is the high level view of how we can configure projects using the
puppet repository. To create an actual change that does all of this for
a single project you will want to do the following:
#. Add a ``modules/openstack_project/files/gerrit/acls/project-name.config``
file to the repo. You can refer to the :ref:`project-config` section
below if you need more details on writing the project.config file,
but contents will probably end up looking like the below block (note
that the sections are in alphabetical order and each indentation is
8 spaces)::
[access "refs/heads/*"]
        label-Code-Review = -2..+2 group project-name-core
        label-Approved = +0..+1 group project-name-core
        workInProgress = group project-name-core
[access "refs/heads/milestone-proposed"]
        label-Code-Review = -2..+2 group project-name-milestone
        label-Approved = +0..+1 group project-name-milestone
[project]
        state = active
[receive]
        requireChangeId = true
        requireContributorAgreement = true
[submit]
        mergeContent = true
#. Add a project entry for the project in
``openstack-infra/config:modules/openstack_project/templates/review.projects.yaml.erb``.::
- project: openstack/project-name
  acl-config: /home/gerrit2/acls/project-name.config
#. If there is an existing repo that is being replaced by this new
project you can set the upstream value for the project. When an
upstream is set, that upstream will be cloned and pushed into Gerrit
instead of an empty repository. eg::
- project: openstack/project-name
  acl-config: /home/gerrit2/acls/project-name.config
  upstream: git://github.com/awesumsauce/project-name.git
That is all you need to do. Push the change to gerrit and if necessary
modify group membership for the groups you configured in the
``project.config`` through Launchpad.
Have Zuul Monitor a Gerrit Project
=====================================
Define the required jenkins jobs for this project using the Jenkins Job
Builder. Edit openstack-infra/config:modules/openstack_project/files/jenkins_job_builder/config/projects.yaml
and add the desired jobs. Most projects will use the python jobs template.
A minimum config::
- project:
    name: PROJECT
    github-org: openstack
    node: precise
    tarball-site: tarballs.openstack.org
    doc-publisher-site: docs.openstack.org
    jobs:
      - python-jobs
Full example config for nova::
- project:
    name: nova
    github-org: openstack
    node: precise
    tarball-site: tarballs.openstack.org
    doc-publisher-site: docs.openstack.org
    jobs:
      - python-jobs
      - python-diablo-bitrot-jobs
      - python-essex-bitrot-jobs
      - openstack-publish-jobs
      - gate-{name}-pylint
Edit openstack-infra/config:modules/openstack_project/files/zuul/layout.yaml
and add the required jenkins jobs to this project. At a minimum you will
probably need the gate-PROJECT-merge test in the check and gate queues.
A minimum config::
- name: openstack/PROJECT
  check:
    - gate-PROJECT-merge:
  gate:
    - gate-PROJECT-merge:
Full example config for nova::
- name: openstack/nova
  check:
    - gate-nova-merge:
        - gate-nova-docs
        - gate-nova-pep8
        - gate-nova-python26
        - gate-nova-python27
        - gate-tempest-devstack-vm
        - gate-tempest-devstack-vm-cinder
        - gate-nova-pylint
  gate:
    - gate-nova-merge:
        - gate-nova-docs
        - gate-nova-pep8
        - gate-nova-python26
        - gate-nova-python27
        - gate-tempest-devstack-vm
        - gate-tempest-devstack-vm-cinder
  post:
    - nova-branch-tarball
    - nova-coverage
    - nova-docs
  pre-release:
    - nova-tarball
  publish:
    - nova-tarball
    - nova-docs
Creating a Project in Gerrit
============================
Using the ssh key of a Gerrit admin (you)::
ssh -p 29418 review.openstack.org gerrit create-project --name openstack/PROJECT
If the project is an API project (eg, image-api), we want it to share
some extra permissions that are common to all API projects (eg, the
OpenStack documentation coordinators can approve changes, see
:ref:`acl`). Run the following command to reparent the project if it
is an API project::
ssh -p 29418 review.openstack.org gerrit set-project-parent --parent API-Projects openstack/PROJECT
Add yourself to the "Project Bootstrappers" group in Gerrit which will
give you permissions to push to the repo bypassing code review.
Do the initial push of the project with::
git push ssh://USERNAME@review.openstack.org:29418/openstack/PROJECT.git HEAD:refs/heads/master
git push --tags ssh://USERNAME@review.openstack.org:29418/openstack/PROJECT.git
Remove yourself from the "Project Bootstrappers" group, and then set
the access controls as specified in :ref:`acl`.
Create a Project in GitHub
==========================
As a github openstack admin:
* Visit https://github.com/organizations/openstack
* Click New Repository
* Visit the gerrit team admin page
* Add the new repository to the gerrit team
Pull requests cannot be disabled for a project in Github, so instead
we have a script that runs from cron to close any open pull requests
with instructions to use Gerrit.
* Edit openstack-infra/config:modules/openstack_project/templates/review.projects.yaml.erb
and add the project to the list of projects in the yaml file
For example::
- project: openstack/PROJECT
Local Git Replica
=================
Gerrit replicates all repos to a local directory so that Apache can
serve the anonymous http requests out directly.
On the gerrit host::
sudo git --bare init /var/lib/git/openstack/PROJECT.git
sudo chown -R gerrit2:gerrit2 /var/lib/git/openstack/PROJECT.git
Adding A New Project On The Command Line
****************************************
All of the steps involved in adding a new project to Gerrit can be
accomplished via the command line, with the exception of creating a new
repo on github.
First of all, add the .gitreview file to the repo that will be added. Then,
assuming an ssh config alias of `review` for the gerrit instance, as a person
in the Project Bootstrappers group::
ssh review gerrit create-project --name openstack/$PROJECT
git review -s
git push gerrit HEAD:refs/heads/master
git push --tags gerrit
At this point, the branch contents will be in gerrit, and the project config
settings and ACLs need to be set. These are maintained in a special branch
inside of git in gerrit. Check out the branch from git::
git fetch gerrit +refs/meta/*:refs/remotes/gerrit-meta/*
git checkout -b config remotes/gerrit-meta/config
There will be two interesting files, `groups` and `project.config`. `groups`
contains UUIDs and names of groups that will be referenced in
`project.config`. UUIDs can be found on the group page in gerrit.
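The ``groups`` file is a simple UUID-to-name table. A rough sketch of its
shape and how one might read it (the UUIDs below are invented for
illustration):

```python
# Parse a refs/meta/config "groups" file: one "<UUID>\t<name>" pair per
# line, with "#" comment lines.  The UUIDs here are made up.
sample = (
    "# UUID\tGroup Name\n"
    "2a512a7c1e8a46a23b24798ee47186e4d9b8c6e1\tProject Bootstrappers\n"
    "5f821aa27c1e8a46a23b24798ee47186e4d9b8c6\tnova-core\n"
)

def parse_groups(text):
    groups = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        uuid, name = line.split("\t", 1)
        groups[name] = uuid
    return groups

print(parse_groups(sample)["nova-core"])
```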
Next, edit `project.config` to look like::
[access "refs/*"]
        owner = group Administrators
[receive]
        requireChangeId = true
        requireContributorAgreement = true
[submit]
        mergeContent = true
[access "refs/heads/*"]
        label-Code-Review = -2..+2 group $PROJECT-core
        label-Approved = +0..+1 group $PROJECT-core
[access "refs/heads/milestone-proposed"]
        label-Code-Review = -2..+2 group $PROJECT-milestone
        label-Approved = +0..+1 group $PROJECT-milestone
If the project is for a client library, the `refs/*` section of
`project.config` should look like::
[access "refs/*"]
        owner = group Administrators
        create = group $PROJECT-milestone
        pushTag = group $PROJECT-milestone
Replace $PROJECT with the name of the project.
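The ``$PROJECT`` placeholders can be filled in mechanically, for example
with Python's ``string.Template`` (the project name ``glance`` is just an
example):

```python
# Fill in the $PROJECT placeholder used in the ACL snippets above.
from string import Template

acl = Template(
    '[access "refs/heads/*"]\n'
    "        label-Code-Review = -2..+2 group $PROJECT-core\n"
    "        label-Approved = +0..+1 group $PROJECT-core\n"
)
print(acl.substitute(PROJECT="glance"))
```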
Finally, commit the changes and push the config back up to Gerrit::
git commit -m "Initial project config"
git push gerrit HEAD:refs/meta/config
At this point you can follow the steps above for creating the project's github
replica, the local git replica, and zuul monitoring/jenkins jobs.
Migrating a Project from bzr
============================
Add the bzr PPA and install bzr-fastimport::

    add-apt-repository ppa:bzr/ppa
    apt-get update
    apt-get install bzr-fastimport
Doing this from the bzr PPA is important to ensure at least version 0.10 of
bzr-fastimport.
Clone the git-bzr-ng from termie::

    git clone https://github.com/termie/git-bzr-ng.git
In git-bzr-ng, you'll find a script, git-bzr. Put it somewhere in your path.
Then, to get a git repo which contains the migrated bzr branch, run::

    git bzr clone lp:${BRANCHNAME} ${LOCATION}

So, for instance, to do glance, you would do::

    git bzr clone lp:glance glance
And you will then have a git repo of glance in the glance dir. This git repo
is now suitable for uploading into gerrit to become the new master repo.
.. _project-config:
Project Config
**************
There are a few options which need to be enabled on the project in the Admin
interface.
* Merge Strategy should be set to "Merge If Necessary"
* "Automatically resolve conflicts" should be enabled
* "Require Change-Id in commit message" should be enabled
* "Require a valid contributor agreement to upload" should be enabled
Optionally, if the PTL agrees to it:
* "Require the first line of the commit to be 50 characters or less" should
be enabled.
.. _acl:
These permissions try to achieve the high level goals::
label code review -2/+2: foo-milestone
label approved 0/+1: foo-milestone
Manual Administrative Tasks
===========================
The following sections describe tasks that individuals with root
access may need to perform on rare occasions.
Renaming a Project
------------------
Renaming a project is not automated and is disruptive to developers,
so it should be avoided. Allow for an hour of downtime for the
or manually update their remotes with something like::
git remote set-url origin https://github.com/$ORG/$PROJECT.git
Deleting a User from Gerrit
---------------------------
This isn't normally necessary, but if you find that you need to
completely delete an account from Gerrit, here's how:


OpenStack Project Infrastructure
================================
This documentation covers the installation and maintenance of the
project infrastructure used by OpenStack. It may be of interest to
people who want to help develop this infrastructure or integrate
their tools into it. Some instructions may be useful to other
projects that want to set up similar infrastructure systems for their
developers.
OpenStack developers or users do not need to read this documentation.
Instead, see http://wiki.openstack.org/ to learn how to contribute to
or use OpenStack.
.. sidebar:: HOWTOs

   .. toctree::
      :maxdepth: 1

      third_party
      stackforge

Contents:

.. toctree::
   :maxdepth: 2

   project
   sysadmin
   systems
   jenkins
   gerrit
   puppet
   puppet_modules
   jenkins_jobs
   meetbot
Indices and tables
==================


:title: IRC Services

.. _irc:

IRC Services
############
The infrastructure team runs a number of IRC bots that are active on
OpenStack related channels.
At a Glance
===========
:Hosts:
* http://eavesdrop.openstack.org/
* http://review.openstack.org/
* https://wiki.openstack.org/wiki/Infrastructure_Status
:Puppet:
* :file:`modules/meetbot`
* :file:`modules/statusbot`
* :file:`modules/gerritbot`
* :file:`modules/openstack_project/manifests/eavesdrop.pp`
* :file:`modules/openstack_project/manifests/review.pp`
:Configuration:
* :file:`modules/gerritbot/files/gerritbot_channel_config.yaml`
:Projects:
* http://wiki.debian.org/MeetBot
* http://sourceforge.net/projects/supybot/
* https://github.com/openstack-infra/gerritbot
* https://github.com/openstack-infra/statusbot
:Bugs:
* http://bugs.launchpad.net/openstack-ci
Meetbot
=======
The OpenStack Infrastructure team runs a slightly modified
`Meetbot <http://wiki.debian.org/MeetBot>`_ to log IRC channel activity and
get you going, but there are other goodies in ``doc/``.
Once you have Supybot installed you will need to configure a bot. The
``supybot-wizard`` command can get you started with a basic config, or you can
have the OpenStack Meetbot puppet module do the heavy lifting.
One important config setting is ``supybot.reply.whenAddressedBy.chars``, which
sets the prefix character for this bot. This should be set to something other
The OpenStack Infrastructure Meetbot fork can be found at
https://github.com/openstack-infra/meetbot. Manual installation of the Meetbot
plugin is straightforward and documented in that repository's README.
OpenStack Infrastructure installs and configures Meetbot through Puppet.
Voting
^^^^^^
A somewhat contrived voting example:
meetbot | Voted on "Should we vote now?" Results are
meetbot | Yes (1): bar
meetbot | No (1): foo
.. _statusbot:
Statusbot
=========
Statusbot is used to distribute urgent information from the
Infrastructure team to OpenStack channels. It updates the
`Infrastructure Status wiki page
<https://wiki.openstack.org/wiki/Infrastructure_Status>`_. It
supports the following commands when issued by authenticated and
whitelisted users:
#status log MESSAGE
    Log a message to the wiki page.

#status notice MESSAGE
    Broadcast a message to all OpenStack channels, and log to the wiki
    page.

#status alert MESSAGE
    Broadcast a message to all OpenStack channels and change their
    topics, log to the wiki page, and set an alert box on the wiki
    page (eventually include this alert box on status.openstack.org
    pages).

#status ok [MESSAGE]
    Remove alert box and restore channel topics, optionally announcing
    and logging an "okay" message.
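A command line like ``#status alert MESSAGE`` splits into a command word and
a free-form message; a sketch of that parsing (illustrative, not statusbot's
actual code):

```python
# Split "#status <command> [message]" into its parts.
def parse_status(line):
    parts = line.split(None, 2)
    if len(parts) < 2 or parts[0] != "#status":
        return None
    command = parts[1]
    message = parts[2] if len(parts) > 2 else ""
    return command, message

print(parse_status("#status alert Gerrit is down for maintenance"))
# ('alert', 'Gerrit is down for maintenance')
```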
.. _gerritbot:
Gerritbot
=========
Gerritbot watches the Gerrit event stream (using the "stream-events"
Gerrit command) and announces events (such as patchset-created, or
change-merged) to relevant IRC channels.
Gerritbot's configuration is in
:file:`modules/gerritbot/files/gerritbot_channel_config.yaml`.
The configuration is organized by channel, with each project that a
channel is interested in listed under the channel.
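Because the configuration is keyed by channel, announcing an event means
finding every channel whose project list contains the event's project. A
sketch of that lookup (the channel and project names are examples):

```python
# Find the channels interested in a given project's events.
channel_config = {
    "#openstack-dev": ["openstack/nova", "openstack/glance"],
    "#openstack-infra": ["openstack-infra/config"],
}

def channels_for(project):
    return [channel for channel, projects in channel_config.items()
            if project in projects]

print(channels_for("openstack/nova"))  # ['#openstack-dev']
```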

doc/source/jeepyb.rst (new file)
:title: Jeepyb
.. _jeepyb:
Jeepyb
######
Jeepyb is a collection of tools which make managing Gerrit easier.
Specifically, it handles management of Gerrit projects and their
associated upstream integration with things like Github and Launchpad.
At a Glance
===========
:Hosts:
* http://review.openstack.org
* http://review-dev.openstack.org
:Puppet:
* :file:`modules/jeepyb`
* :file:`modules/openstack_project/manifests/review.pp`
* :file:`modules/openstack_project/manifests/review_dev.pp`
:Configuration:
* :file:`modules/openstack_project/templates/review.projects.yaml.erb`
* :file:`modules/openstack_project/files/pypi-mirror.yaml`
:Projects:
* http://github.com/openstack-infra/jeepyb
:Bugs:
* http://bugs.launchpad.net/openstack-ci
Gerrit Project Configuration
============================
The ``manage-projects`` command in Jeepyb is able to create a new
project in Gerrit, create the new project on Github, create a local
git replica on the Gerrit host, configure the project Access Controls,
and create new groups in Gerrit.
OpenStack Gerrit projects are configured in the
:file:`modules/openstack_project/templates/review.projects.yaml.erb`
file. When this file is updated, ``manage-projects`` is run
automatically. This file contains two sections, the first is a set of
default config values that each project can override, and the second
is a list of projects (each may contain their own overrides).
#. Config default values::
- homepage: http://example.org
  local-git-dir: /var/lib/git
  gerrit-host: review.example.org
  gerrit-user: example-project-creator
  gerrit-key: /home/gerrit2/.ssh/example_project_id_rsa
  github-config: /etc/github/github-projects.secure.config
  has-wiki: False
  has-issues: False
  has-pull-requests: False
  has-downloads: False
#. Project definition::
- project: example/gerrit
  description: Fork of Gerrit used by Example
  remote: https://gerrit.googlesource.com/gerrit
- project: example/project1
  description: Best project ever.
  has-wiki: True
  acl-config: /path/to/acl/file
The above config gives puppet and its related scripts enough information
to create new projects, but not enough to add access controls to each
project. To add access control you need to have an ``acl-config``
option for the project in the ``review.projects.yaml.erb`` file. That
option should have a value that is a path to the ``project.config`` for
that project.
That is the high level view of how we can configure projects using the
puppet repository. To create an actual change that does all of this for
a single project you will want to do the following:
#. Add a
``modules/openstack_project/files/gerrit/acls/project-name.config``
file to the repo. The contents will probably end up looking like
the block below (note that the sections are in alphabetical order
and each indentation is 8 spaces)::
[access "refs/heads/*"]
        label-Code-Review = -2..+2 group project-name-core
        label-Approved = +0..+1 group project-name-core
        workInProgress = group project-name-core
[access "refs/heads/milestone-proposed"]
        label-Code-Review = -2..+2 group project-name-milestone
        label-Approved = +0..+1 group project-name-milestone
[project]
        state = active
[receive]
        requireChangeId = true
        requireContributorAgreement = true
[submit]
        mergeContent = true
#. Add a project entry for the project in
``modules/openstack_project/templates/review.projects.yaml.erb``.::
- project: openstack/project-name
  acl-config: /home/gerrit2/acls/project-name.config
#. If there is an existing repo that is being replaced by this new
project you can set the upstream value for the project. When an
upstream is set, that upstream will be cloned and pushed into Gerrit
instead of an empty repository. eg::
- project: openstack/project-name
  acl-config: /home/gerrit2/acls/project-name.config
  upstream: git://github.com/awesumsauce/project-name.git
That is all you need to do. Push the change to gerrit and if necessary
modify group membership for the groups you configured in the
``project.config`` through Launchpad.
Commit Hooks
============
Launchpad Bug Integration
-------------------------
The ``update-bug`` Jeepyb command is installed as a Gerrit commit hook
so that it runs each time a patchset is created. It updates Launchpad
bugs based on information that it finds in the commit message. It
also contains a manual mapping of Gerrit to Launchpad project names
for projects that use a different Launchpad project for their bugs.
Launchpad Blueprint Integration
-------------------------------
The ``update-blueprint`` Jeepyb command is installed as a Gerrit
commit hook so that it runs each time a patchset is created. It
updates Launchpad blueprints based on information that it finds in the
commit message.
Impact Notification
-------------------
The ``notify-impact`` commit hook runs when new patchsets are created
and sends email notifications when certain regular expressions are
matched, such as:
* DocImpact
* SecurityImpact
Trivial Rebase Hook
-------------------
The ``trivial-rebase`` commit hook runs when new patchsets are
uploaded and detects whether the new patchset is merely a rebase onto
a new parent, or is a substantial change. If it is a rebase, it
restores previous review votes and leaves a comment in Gerrit. It
uses Gerrit's own SSH host key as the private key for access in order
to gain the "superuser" permissions needed to impersonate other users
in reviews.
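The core idea behind such detection can be sketched as: if two patchsets'
diffs are identical once the hunk position headers are ignored, the new
patchset is just a rebase. This is a simplification of what the real hook
does:

```python
# Compare two diffs while ignoring hunk offsets: a pure rebase shifts
# line numbers but leaves the changed content identical.
import re

def normalize(diff):
    return re.sub(r"@@ [^@]* @@", "@@", diff)

old_patch = "@@ -1,3 +1,4 @@\n+new line\n context\n"
rebased_patch = "@@ -10,3 +11,4 @@\n+new line\n context\n"
print(normalize(old_patch) == normalize(rebased_patch))  # True
```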
Periodic Tasks
==============
Closing Github Pull Requests
----------------------------
The ``close-pull-requests`` Jeepyb command is installed as a cron job
and periodically closes all pull requests for projects so configured
in projects.yaml.
Expiring Old Reviews
--------------------
The ``expire-old-reviews`` Jeepyb command is installed as a cron job
that periodically marks reviews that have seen little activity as
`Abandoned`. Their owners may use the Gerrit interface to restore
them when they are ready for further review.
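The cutoff test behind such a job is a simple age comparison; a sketch (the
two-week threshold here is an assumption for illustration, not the job's
actual setting):

```python
# Decide whether a review has been inactive long enough to expire.
import datetime

def is_expired(last_updated, now, max_age_days=14):
    # max_age_days is an assumed value, not the real configuration.
    return (now - last_updated) > datetime.timedelta(days=max_age_days)

now = datetime.datetime(2013, 5, 24)
print(is_expired(datetime.datetime(2013, 4, 1), now))   # True
print(is_expired(datetime.datetime(2013, 5, 20), now))  # False
```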
Fetching Remotes
----------------
Some projects may have remotes defined in Jeepyb; the
``fetch-remotes`` cron job will update these remotes so that their
commits are available in Gerrit.
RSS feeds
---------
Jeepyb's ``openstackwatch`` command publishes RSS feeds of Gerrit
projects.
Pypi Mirror
-----------
The ``run-mirror`` command builds a full Pypi mirror for a project or
set of projects by reading a requirements.txt file, installing all
listed dependencies into a virtualenv, inspecting the resulting
installed package set, and then downloading all of the second-level
(and further) dependencies. Essentially, the mirror is built by
introspection and contains the full set of dependencies needed whether
they are explicitly listed or not.
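In other words, the mirror holds the transitive closure of the
requirements. The traversal can be sketched like this (the package names and
dependency table are invented for illustration):

```python
# Walk a dependency table to collect every package the mirror needs.
deps = {"nova": ["oslo.config"], "oslo.config": ["six"], "six": []}

def closure(roots):
    seen = []
    stack = list(roots)
    while stack:
        package = stack.pop()
        if package not in seen:
            seen.append(package)
            stack.extend(deps[package])
    return seen

print(sorted(closure(["nova"])))  # ['nova', 'oslo.config', 'six']
```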


:title: Jenkins
.. _jenkins:
Jenkins
#######
Jenkins is a Continuous Integration system that runs tests and
automates some parts of project operations. It is controlled for the
most part by :ref:`zuul` which determines what jobs are run when.
At a Glance
===========
:Hosts:
* http://jenkins.openstack.org
* http://jenkins-dev.openstack.org
:Puppet:
* :file:`modules/jenkins`
* :file:`modules/openstack_project/manifests/jenkins.pp`
* :file:`modules/openstack_project/manifests/jenkins_dev.pp`
:Configuration:
* :file:`modules/openstack_project/files/jenkins_job_builder/config/`
:Projects:
* http://jenkins-ci.org/
* :ref:`zuul`
* :ref:`jjb`
:Bugs:
* http://bugs.launchpad.net/openstack-ci
* https://wiki.jenkins-ci.org/display/JENKINS/Issue+Tracking
:Resources:
* :ref:`zuul`
* :ref:`jjb`
Overview
========
Jenkins is a Continuous Integration system and the central control
system for the orchestration of both pre-merge testing and post-merge
actions such as packaging and publishing of documentation.
A large number and variety of jobs are defined in Jenkins. The
configuration of all of those jobs is stored in git in the
openstack-infra/config repository. They are defined in YAML files
that are read by :ref:`jjb` which configures the actual jobs in
Jenkins.
The overall design that Jenkins is a key part of implementing is that
all code should be reviewed and tested before being merged in to trunk,
and that as many tasks around review, testing, merging and release that
can be automated should be.

Anyone may submit a change to the openstack-infra/config repository
that defines a new job or alters an existing job by editing the
appropriate YAML files.  See :ref:`jjb` for more information.

Jenkins is essentially a job queuing system, and everything that is done
through Jenkins can be thought of as having a few discrete components:

* Triggers - What causes a job to be run
* Location - Where do we run a job
* Steps - What actions are taken when the job runs
* Results - What is the outcome of the job

The OpenStack Jenkins can be found at http://jenkins.openstack.org.
OpenStack uses :doc:`gerrit` to manage code reviews, which in turn calls
Jenkins to test those reviews.

Because of the large number of builds that Jenkins executes, the
OpenStack project favors the following approach in configuring Jenkins
jobs:

* Minimal use of plugins: the more post-processing work that Jenkins
  needs to perform on a job, the more likely we are to run into
  compatibility problems among plugins, and contention for shared
  resources on the Jenkins master.  A number of popular plugins
  will cause all builds of a job to be serialized even if the jobs
  otherwise run in parallel.

* Minimal build history: Jenkins stores build history in individual
  XML files on disk, and accessing a large build history can cause
  the Jenkins master to be unresponsive for a significant time while
  loading them.  It also increases memory usage.  Instead, we
  generally keep no more than a day's worth of builds.

* Move artifacts off of Jenkins: Jenkins is not efficient at serving
  static information such as build artifacts (e.g., tarballs) or
  logs.  Instead, we copy them to a static webserver which is far
  more efficient.
Authorization
=============
Jenkins is set up to use OpenID in a Single Sign On mode with Launchpad.
This means that all of the user and group information is managed via
user will need to re-log in upon changing either team membership on
Launchpad, or changing that team's authorization in Jenkins for the new
privileges to take effect.
Devstack Gate
=============
OpenStack integration testing is performed by the devstack gate test
framework. This framework runs the devstack exercises and Tempest
smoketests against a devstack install on single use cloud servers. The
devstack gate source can be found on `Github
<https://github.com/openstack-infra/devstack-gate>`_ and the `Readme
<https://github.com/openstack-infra/devstack-gate/blob/master/README.md>`_
describes the process of using devstack gate to run your own devstack
based tests.
The :ref:`devstack-gate` project is used to maintain a pool of Jenkins
slaves that are used to run these tests. Devstack-gate jobs create
and delete Jenkins slaves as needed in order to maintain the pool.


:title: Jenkins Job Builder
.. _jjb:
Jenkins Job Builder
###################
Jenkins Job Builder is a system for configuring Jenkins jobs using
simple YAML files stored in Git.
At a Glance
===========
:Hosts:
* http://jenkins.openstack.org
* http://jenkins-dev.openstack.org
:Puppet:
* :file:`modules/jenkins/manifests/job_builder.pp`
:Configuration:
* :file:`modules/openstack_project/files/jenkins_job_builder/config/`
:Projects:
* http://github.com/openstack-infra/jenkins-job-builder
:Bugs:
* http://bugs.launchpad.net/openstack-ci
:Resources:
* `Reference Manual <http://ci.openstack.org/jenkins-job-builder>`_
Overview
========
In order to make the process of managing hundreds of Jenkins jobs
easier, Jenkins Job Builder was designed to take YAML based
configurations and convert those into jobs that are injected into
Jenkins.
The documentation below describes how the OpenStack Infrastructure
team uses the Jenkins Job Builder in our environment.
Configuring Projects
====================
The YAML scripts to make this work are stored in the
:file:`modules/openstack_project/files/jenkins_job_builder/config/`
directory. In this directory you can have four different types of
yaml config files:
* Jenkins Jobs Defaults in ``defaults.yaml``.
* Jenkins Jobs Macros to give larger config sections meaningful names in
the templates should be filled out and templates go in ``template_name.yaml``.
YAML Format
===========
Defaults
--------
Example defaults config:
.. code-block:: yaml
:linenos:
- defaults:
    name: global
indicating Puppet manages these jobs, jobs are allowed to run concurrently,
and a thirty minute job timeout.
Macros
------
Macros exist to give meaningful names to blocks of configuration that can be
used in job configs in place of the blocks they name. For example:
.. code-block:: yaml
:linenos:
- builder:
    name: git-prep
having the yaml below the name in place of the name in the job config. The next
section shows how you can use these macros.
Job Config
----------
Example job config:
.. code-block:: yaml
:linenos:
- job:
    name: example-docs
jenkins publishing steps and so on. The macros defined earlier make this easy
and simple.
Job Templates
-------------
Job templates allow you to specify a job config once with arguments that are
replaced with the values specified in ``projects.yaml``. This allows you to
reuse job configs across many projects. First you need a templated job config:
.. code-block:: yaml
- job-template:
name: '{name}-docs'
@ -197,7 +215,6 @@ The ``projects.yaml`` pulls all of the magic together. It specifies the
arguments to and instantiates the job templates as real jobs. For example:
.. code-block:: yaml
- project:
name: example1
@ -235,14 +252,16 @@ file should be modified or removed.
Sending a Job to Jenkins
------------------------
The Jenkins Jobs builder talks to Jenkins using the Jenkins API. This
means that it can create and modify jobs directly without the need to
restart or reload the Jenkins server. It also means that Jenkins will
verify the XML and cause the Jenkins Jobs builder to fail if there is
a problem.
For this to work a configuration file is needed. There is an erb
template for this configuration file at
:file:`modules/jenkins/templates/jenkins_jobs.ini.erb`. The contents
of this template are:
.. code-block:: ini
@ -252,10 +271,11 @@ The contents of this erb are:
url=<%= url %>
The values for user and url are hardcoded in the Puppet repo in
:file:`modules/openstack_project/manifests/jenkins.pp`, but the
password is stored in hiera. Make sure you have it defined as
``jenkins_jobs_password`` in the hiera DB.
The password can be obtained by logging into Jenkins as that user,
clicking on your username in the top-right, clicking on `Configure`
and then `Show API Token`. This API Token is your password for the
API.
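Filled in, the resulting ``jenkins_jobs.ini`` looks something like the following (the user, token, and URL shown are illustrative placeholders, not real credentials):

```ini
[jenkins]
user=openstack-jenkins
password=0123456789abcdef0123456789abcdef
url=https://jenkins.openstack.org/
```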
doc/source/lists.rst
@ -0,0 +1,45 @@
:title: Mailing Lists
.. _lists:
Mailing Lists
#############
`Mailman <http://www.gnu.org/software/mailman/>`_ is installed on
lists.openstack.org to run OpenStack related mailing lists, as well as
host list archives.
At a Glance
===========
:Hosts:
* http://lists.openstack.org
:Puppet:
* :file:`modules/mailman`
* :file:`modules/openstack_project/manifests/lists.pp`
:Projects:
* http://www.gnu.org/software/mailman/
:Bugs:
* http://bugs.launchpad.net/openstack-ci
* https://bugs.launchpad.net/mailman
:Resources:
* `Mailman Documentation <http://www.gnu.org/software/mailman/docs.html>`_
Adding a List
=============
A list may be added by adding it to the ``openstack-infra/config``
repository in ``modules/openstack_project/manifests/lists.pp``. For
example:
.. code-block:: ruby
maillist { 'openstack-foo':
ensure => present,
admin => 'admin@example.com',
password => $listpassword,
description => 'Discussion of OpenStack Foo',
webserver => $listdomain,
mailserver => $listdomain,
}
doc/source/logstash.rst
@ -0,0 +1,44 @@
:title: Logstash
.. _logstash:
Logstash
########
Logstash is a high-performance indexing and search engine for logs.
At a Glance
===========
:Hosts:
* http://logstash.openstack.org
* logstash-worker-\*.openstack.org
* elasticsearch.openstack.org
:Puppet:
* :file:`modules/logstash`
* :file:`modules/openstack_project/manifests/logstash.pp`
* :file:`modules/openstack_project/manifests/logstash_worker.pp`
* :file:`modules/openstack_project/manifests/elasticsearch.pp`
:Configuration:
* :file:`modules/openstack_project/files/logstash`
:Projects:
* http://logstash.net/
* http://kibana.org/
:Bugs:
* http://bugs.launchpad.net/openstack-ci
* https://logstash.jira.com/secure/Dashboard.jspa
* https://github.com/rashidkpc/Kibana/issues
Overview
========
Logs from Jenkins test runs are sent to logstash where they are
indexed and stored. Logstash facilitates reviewing logs from multiple
sources in a single test run, searching for errors or particular
events within a test run, as well as searching for log event trends
across test runs.
TODO(clarkb): more details about system architecture
TODO(clarkb): useful queries
doc/source/paste.rst
@ -0,0 +1,49 @@
:title: Paste
.. _paste:
Paste
#####
Paste servers are an easy way to share long-form content such as
configuration files or log data with others over short-form
communication protocols such as IRC. OpenStack runs the "lodgeit"
paste software.
At a Glance
===========
:Hosts:
* http://paste.openstack.org
:Puppet:
* :file:`modules/lodgeit`
* :file:`modules/openstack_project/manifests/paste.pp`
:Projects:
* http://github.com/openstack-infra/lodgeit
* https://bitbucket.org/dcolish/lodgeit-main
* http://www.pocoo.org/projects/lodgeit/
:Bugs:
* http://bugs.launchpad.net/openstack-ci
Overview
========
For OpenStack we use `a fork
<https://github.com/openstack-infra/lodgeit>`_ of lodgeit which is
based on one with bugfixes maintained by `dcolish
<https://bitbucket.org/dcolish/lodgeit-main>`_ but adds back missing
anti-spam features required by OpenStack.
Puppet configures lodgeit to use drizzle as a database backend and
Apache as a front-end proxy.
The lodgeit module will automatically create a git repository in
``/var/backups/lodgeit_db``. Inside this every site will have its own
SQL file, for example "openstack" will have a file called
``openstack.sql``. Every day a cron job will update the SQL file (one
job per file) and commit it to the git repository.
.. note::
Ideally the SQL files would have a row on every line to keep the
diffs stored in git small, but ``drizzledump`` does not yet support
this.
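The effect of those cron jobs can be sketched as the following commands (an illustration only; the real jobs are generated by the lodgeit puppet module and the exact ``drizzledump`` invocation may differ):

```shell
# One backup job per site; "openstack" shown as the example site.
cd /var/backups/lodgeit_db
drizzledump openstack > openstack.sql
git add openstack.sql
git commit -m "Daily backup of the openstack paste database"
```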
doc/source/planet.rst
@ -0,0 +1,37 @@
:title: Planet
.. _planet:
Planet
######
The `Planet Venus
<http://intertwingly.net/code/venus/docs/index.html>`_ blog aggregator
is installed on planet.openstack.org.
At a Glance
===========
:Hosts:
* http://planet.openstack.org
:Puppet:
* :file:`modules/planet`
* :file:`modules/openstack_project/manifests/planet.pp`
:Configuration:
* https://github.com/openstack/openstack-planet/blob/master/planet.ini
:Projects:
* https://github.com/openstack/openstack-planet
* http://www.intertwingly.net/code/venus/
:Bugs:
* http://bugs.launchpad.net/openstack-ci
:Resources:
* `Planet Venus Documentation <http://intertwingly.net/code/venus/docs/index.html>`_
Overview
========
Planet Venus works by having a cron job which creates static files.
In our configuration, the static files are served using Apache.
The puppet module is configured to use the openstack/planet git
repository to provide the ``planet.ini`` configuration file.
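For illustration, a minimal Planet Venus ``planet.ini`` has a global section plus one section per subscribed feed (the values here are hypothetical; see the configuration link above for the canonical file):

```ini
[Planet]
name = Planet OpenStack
link = http://planet.openstack.org/
owner_name = OpenStack Infrastructure Team

# One section per feed, keyed by the feed URL.
[http://example.org/openstack/feed/]
name = Example Contributor
```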
doc/source/project.rst
@ -0,0 +1,109 @@
:title: Infrastructure Project
.. _infra-project:
Infrastructure Project
######################
The infrastructure for the OpenStack project itself is run with the
same processes, tools and philosophy as any other OpenStack project.
The infrastructure team is an open meritocracy that welcomes new
members. You can read about the OpenStack way on the wiki:
* https://wiki.openstack.org/wiki/How_To_Contribute
* https://wiki.openstack.org/wiki/Open
* https://wiki.openstack.org/wiki/Governance
* https://wiki.openstack.org/wiki/Teams
Scope
=====
The project infrastructure encompasses all of the systems that are
used in the day to day operation of the OpenStack project as a whole.
This includes development, testing, and collaboration tools. All of
the software that we run is open source, and its configuration is
public. The project still uses a number of systems that do not yet
fall under this umbrella (notably, the main website), but we're
working to incorporate them so that people may just as easily
contribute to those areas. All new services used by the project
should begin as part of the infrastructure project to ensure easy
collaboration from the start.
Contributing
============
We welcome contributions from new contributors. Reading this
documentation is the first step. You should also join our `mailing list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra>`_.
We are most active on IRC, so please join the **#openstack-infra**
channel on Freenode.
Feel free to attend our `weekly IRC meeting
<https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting>`_
on Tuesdays at 19:00 UTC in #openstack-meeting.
Check out our open bugs, particularly the `low-hanging-fruit
<https://bugs.launchpad.net/openstack-ci/+bugs?field.tag=low-hanging-fruit>`_,
which are smaller (but still important!) tasks that may not require a
great deal of in-depth knowledge.
We hold regular `bug days
<https://wiki.openstack.org/wiki/InfraTeam#Bugs>`_ where we review and
triage bugs.
To read about how our systems are managed and how to view or edit
those configurations, see :ref:`sysadmin`.
And if you have any questions, please ask.
Team
====
The infrastructure team is open, meaning anyone may join and begin
contributing with no formal process. As an individual's contributions
and involvement grow, there are more formal roles in the team:
Core Members
Core team members are able to approve or reject proposed changes to
any of the infrastructure projects. If an individual shows
commitment and aptitude in code reviews, the current core team
membership will take notice and propose that person for inclusion in
the core team, and hold a vote to make the final determination.
In addition to the project-wide infrastructure group, individual
infrastructure projects (such as Jenkins Job Builder or Reviewday)
may also have their own core teams as necessary.
Root Members
While core membership is directly analogous to the same system in
other OpenStack projects, because the infrastructure team operates
production servers, there is another sub-group of the infrastructure
team that has root access to all servers. Root membership is
handled in the same way as core membership. Root members must also
be core members, but core members may not necessarily be root
members.
Root access is generally only necessary to launch new servers,
perform low-level maintenance, manage DNS, or fix problems. In
general it is not needed for day-to-day system administration and
configuration which is done in puppet (where anyone may propose
changes). Therefore it is generally reserved for people who are
well versed in infrastructure operations and can commit to spending
a significant amount of time troubleshooting on servers.
Some individuals may need root access to individual servers; in
these cases the core group may grant root access on a limited basis.
Bugs
====
The infrastructure project maintains a bug list at:
https://bugs.launchpad.net/openstack-ci
Both defects and new features are tracked in the bug system. A number
of tags are used to indicate relevance to a particular subsystem.
There is also a low-hanging-fruit tag associated with bugs that should
provide a gentle introduction to working on the infrastructure project
without needing too much in-depth knowledge or access.
@ -1,14 +1,31 @@
:title: Puppet Master
.. _puppet-master:
Puppet Master
#############
Puppet agent is a mechanism used to pull puppet manifests and configuration
from a centralized master. This means there is only one place that needs to
hold secure information such as passwords, and only one location for the git
repo holding the modules.
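On a managed node, an agent run against the master can be triggered by hand with the standard puppet CLI (``--test`` runs once in the foreground with verbose output; the server name shown is this site's master):

```shell
sudo puppet agent --test --server ci-puppetmaster.openstack.org
```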
At a Glance
===========
:Hosts:
* ci-puppetmaster.openstack.org
* http://puppet-dashboard.openstack.org:3000/
:Puppet:
* :file:`modules/openstack_project/manifests/puppetmaster.pp`
:Projects:
* https://puppetlabs.com/
:Bugs:
* http://bugs.launchpad.net/openstack-ci
* http://projects.puppetlabs.com/
:Resources:
* `Puppet Language Reference <http://docs.puppetlabs.com/references/2.7.latest/type.html>`_
Puppet Master
-------------
@ -1,370 +0,0 @@
Puppet Modules
==============
Overview
--------
Much of the OpenStack project infrastructure is deployed and managed using
puppet.
The OpenStack Infrastructure team manages a number of custom puppet modules
outlined in this document.
Lodgeit
-------
The lodgeit module installs and configures lodgeit [1]_ on required servers to
be used as paste installations. For OpenStack we use
`a fork <https://github.com/openstack-infra/lodgeit>`_ of this which is based on
one with bugfixes maintained by
`dcolish <https://bitbucket.org/dcolish/lodgeit-main>`_ but adds back missing
anti-spam features required by Openstack.
Puppet will configure lodgeit to use drizzle [2]_ as a database backend,
apache as a front-end proxy and upstart scripts to run the lodgeit
instances. It will store and maintain a local branch of the mercurial
repository for lodgeit in ``/tmp/lodgeit-main``.
To use this module you need to add something similar to the following in the
main ``site.pp`` manifest:
.. code-block:: ruby
:linenos:
node "paste.openstack.org" {
include openstack_server
include lodgeit
lodgeit::site { "openstack":
port => "5000",
image => "header-bg2.png"
}
lodgeit::site { "drizzle":
port => "5001"
}
}
In this example we include the lodgeit module which will install all the
pre-requisites for Lodgeit as well as creating a checkout ready.
The ``lodgeit::site`` calls create the individual paste sites.
The name in the ``lodgeit::site`` call will be used to determine the URL, path
and name of the site. So "openstack" will create ``paste.openstack.org``,
place it in ``/srv/lodgeit/openstack`` and give it an upstart script called
``openstack-paste``. It will also change the h1 tag to say "Openstack".
The port number given needs to be a unique port which the lodgeit service will
run on. The puppet script will then configure nginx to proxy to that port.
Finally if an image is given that will be used instead of text inside the h1
tag of the site. The images need to be stored in the ``modules/lodgeit/files``
directory.
Lodgeit Backups
^^^^^^^^^^^^^^^
The lodgeit module will automatically create a git repository in ``/var/backups/lodgeit_db``. Inside this every site will have its own SQL file, for example "openstack" will have a file called ``openstack.sql``. Every day a cron job will update the SQL file (one job per file) and commit it to the git repository.
.. note::
Ideally the SQL files would have a row on every line to keep the diffs stored
in git small, but ``drizzledump`` does not yet support this.
Planet
------
The planet module installs Planet Venus [4]_ along with required dependencies
on a server. It also configures specified planets based on options given.
Planet Venus works by having a cron job which creates static files. In this
module the static files are served using apache.
To use this module you need to add something similar to the following into the
main ``site.pp`` manifest:
.. code-block:: ruby
:linenos:
node "planet.openstack.org" {
include planet
planet::site { "openstack":
git_url => "https://github.com/openstack/openstack-planet.git"
}
}
In this example the name "openstack" is used to create the site
``planet.openstack.org``. The site will be served from
``/srv/planet/openstack/`` and the checkout of the ``git_url`` supplied will
be maintained in ``/var/lib/planet/openstack/``.
This module will also create a cron job to pull new feed data at 3 minutes past each hour.
The ``git_url`` parameter needs to point to a git repository which stores the
planet.ini configuration for the planet (which stores a list of feeds) and any required theme data. This will be pulled every time puppet is run.
.. _Meetbot_Puppet_Module:
Meetbot
-------
The meetbot module installs and configures meetbot [5]_ on a server. The
meetbot version installed by this module is pulled from the
`OpenStack Infrastructure fork <https://github.com/openstack-infra/meetbot/>`_
of the project.
It also configures apache to be used for accessing the public IRC logs of
the meetings.
To use this module simply add a section to the site manifest as follows:
.. code-block:: ruby
:linenos:
node "eavesdrop.openstack.org" {
include openstack_cron
class { 'openstack_server':
iptables_public_tcp_ports => [80]
}
include meetbot
meetbot::site { "openstack":
nick => "openstack",
network => "FreeNode",
server => "chat.freenode.net:7000",
url => "eavesdrop.openstack.org",
channels => "#openstack #openstack-dev #openstack-meeting",
use_ssl => "True"
}
}
You will also need a file ``/root/secret-files/name-nickserv.pass`` where `name`
is the name specified in the call to the module (`openstack` in this case).
Each call to ``meetbot::site`` will set up a meetbot in ``/var/lib/meetbot``
under a subdirectory of the name of the call to the module. It will also
configure nginx to go to that site when the ``/meetings`` directory is
specified on the URL.
The puppet module also creates startup scripts for meetbot and will ensure that
it is running on each puppet run.
Gerrit
------
The Gerrit puppet module configures the basic needs of a Gerrit server. It does
not (yet) install Gerrit itself and mostly deals with the configuration files
and skinning of Gerrit.
Using Gerrit
^^^^^^^^^^^^
Gerrit is set up when the following class call is added to a node in the site
manifest:
.. code-block:: ruby
class { 'gerrit':
canonicalweburl => "https://review.openstack.org/",
email => "review@openstack.org",
github_projects => [
'openstack/nova',
'stackforge/MRaaS',
],
logo => 'openstack.png'
}
Most of these options are self-explanatory. The ``github_projects`` is a list of
all projects in GitHub which are managed by the gerrit server.
Skinning
^^^^^^^^
Gerrit is skinned using files supplied by the puppet module. The skin is
automatically applied as soon as the module is executed. In the site manifest
setting the logo is important:
.. code-block:: ruby
class { 'gerrit':
...
logo => 'openstack.png'
}
This specifies a PNG file which must be stored in the ``modules/gerrit/files/``
directory.
Jenkins Master
--------------
The Jenkins Master puppet module installs and supplies a basic Jenkins
configuration. It also supplies a skin to Jenkins to make it look more like an
OpenStack site. It does not (yet) install the additional Jenkins plugins used
by the OpenStack project.
Using Jenkins Master
^^^^^^^^^^^^^^^^^^^^
In the site manifest a node can be configured to be a Jenkins master simply by
adding the class call below:
.. code-block:: ruby
class { 'jenkins::master':
site => 'jenkins.openstack.org',
serveradmin => 'webmaster@openstack.org',
logo => 'openstack.png'
}
The ``site`` and ``serveradmin`` parameters are used to configure Apache. In
this instance you will also need the following files for Apache to start::
/etc/ssl/certs/jenkins.openstack.org.pem
/etc/ssl/private/jenkins.openstack.org.key
/etc/ssl/certs/intermediate.pem
The ``jenkins.openstack.org`` is replaced by the setting in the ``site``
parameter.
Skinning
^^^^^^^^
The Jenkins skin uses the `Simple Theme Plugin
<http://wiki.jenkins-ci.org/display/JENKINS/Simple+Theme+Plugin>`_ for Jenkins.
The puppet module will install and configure most aspects of the skin
automatically, with a few adjustments needed.
In the site.pp file the ``logo`` parameter is important:
.. code-block:: ruby
class { 'jenkins::master':
...
logo => 'openstack.png'
}
This relates to a PNG file that must be in the ``modules/jenkins/files/``
directory.
Once puppet installs this and the plugin is installed you need to go into
``Manage Jenkins -> Configure System`` and look for the ``Theme`` heading.
Assuming we are skinning the main OpenStack Jenkins site, in the ``CSS`` box
enter
``https://jenkins.openstack.org/plugin/simple-theme-plugin/openstack.css`` and
in the ``JS`` box enter
``https://jenkins.openstack.org/plugin/simple-theme-plugin/openstack.js``.
Etherpad Lite
-------------
This Puppet module installs Etherpad Lite [3]_ and its dependencies (including
node.js). This Puppet module also configures Etherpad Lite to be started at
boot with Nginx running in front of it as a reverse proxy and MySQL running as
the database backend.
Using this module is straightforward; you simply need to include a few classes.
However, there are some limitations to be aware of which are described below.
The includes you need are:
::
include etherpad_lite # Acts like a package manager and installs things
include etherpad_lite::nginx # Sets up Nginx to reverse proxy Etherpad Lite
include etherpad_lite::site # Configures Etherpad Lite
include etherpad_lite::mysql # Configures MySQL DB backend for Etherpad Lite
These classes are parameterized and provide some configurability, but should
all work together when instantiated with their defaults.
Config File
^^^^^^^^^^^
Because the Etherpad Lite configuration file contains a database password it is
not directly managed by Puppet. Instead Puppet expects the configuration file
to be at ``/root/secret-files/etherpad-lite_settings.json`` on the Puppet
master (if running in master/agent setup) or on the server itself if running
``puppet apply``.
MySQL will be configured by Puppet to listen on TCP 3306 of localhost and a
database called ``etherpad-lite`` will be created for user ``eplite``. Also,
this module does install the Abiword package. Knowing this, a good template for
your config is:
::
/*
This file must be valid JSON. But comments are allowed
Please edit settings.json, not settings.json.template
*/
{
//Ip and port which etherpad should bind at
"ip": "127.0.0.1",
"port" : 9001,
//The Type of the database. You can choose between dirty, sqlite and mysql
//You should use mysql or sqlite for anything else than testing or development
"dbType" : "mysql",
//the database specific settings
"dbSettings" : {
"user" : "eplite",
"host" : "localhost",
"password": "changeme",
"database": "etherpad-lite"
},
//the default text of a pad
"defaultPadText" : "Welcome to Etherpad Lite!\n\nThis pad text is synchronized as you type, so that everyone viewing this page sees the same text. This allows you to collaborate seamlessly on documents!\n\nEtherpad Lite on Github: http:\/\/j.mp/ep-lite\n",
/* Users must have a session to access pads. This effectively allows only group pads to be accessed. */
"requireSession" : false,
/* Users may edit pads but not create new ones. Pad creation is only via the API. This applies both to group pads and regular pads. */
"editOnly" : false,
/* if true, all css & js will be minified before sending to the client. This will improve the loading performance massivly,
but makes it impossible to debug the javascript/css */
"minify" : true,
/* How long may clients use served javascript code? Without versioning this
is may cause problems during deployment. */
"maxAge" : 21600000, // 6 hours
/* This is the path to the Abiword executable. Setting it to null, disables abiword.
Abiword is needed to enable the import/export of pads*/
"abiword" : "/usr/bin/abiword",
/* This setting is used if you need http basic auth */
// "httpAuth" : "user:pass",
/* The log level we are using, can be: DEBUG, INFO, WARN, ERROR */
"loglevel": "INFO"
}
Don't forget to change the password if you copy this configuration. Puppet will
grep that password out of the config and use it to set the password for the
MySQL eplite user.
Nginx
^^^^^
The reverse proxy is configured to talk to Etherpad Lite over localhost:9001.
Nginx listens on TCP 443 for HTTPS connections. Because HTTPS is used you will
need SSL certificates. These files are not directly managed by Puppet (again
because of the sensitive nature of these files), but Puppet will look for
``/root/secret-files/eplite.crt`` and ``/root/secret-files/eplite.key`` and
copy them to ``/etc/nginx/ssl/eplite.crt`` and ``/etc/nginx/ssl/eplite.key``,
which is where Nginx expects them to be.
MySQL
^^^^^
MySQL is configured by the Puppet module to allow user ``eplite`` to use
database ``etherpad-lite``. If you want backups for the ``etherpad-lite``
database you can include ``etherpad_lite::backup``. By default this will backup
the ``etherpad-lite`` DB daily and keep a rotation of 30 days of backups.
.. rubric:: Footnotes
.. [1] `Lodgeit homepage <http://www.pocoo.org/projects/lodgeit/>`_
.. [2] `Drizzle homepage <http://www.drizzle.org/>`_
.. [3] `Etherpad Lite homepage <https://github.com/Pita/etherpad-lite>`_
.. [4] `Planet Venus homepage <http://intertwingly.net/code/venus/docs/index.html>`_
.. [5] `Meetbot homepage <http://wiki.debian.org/MeetBot>`_
doc/source/sysadmin.rst
@ -0,0 +1,207 @@
:title: System Administration
.. _sysadmin:
System Administration
#####################
Our infrastructure is code and contributions to it are handled just
like the rest of OpenStack. This means that anyone can contribute to
the installation and long-running maintenance of systems without shell
access, and anyone who is interested can provide feedback and
collaborate on code reviews.
The configuration of every system operated by the infrastructure team
is managed by Puppet in a single Git repository:
https://github.com/openstack-infra/config
All system configuration should be encoded in that repository so that
anyone may propose a change in the running configuration to Gerrit.
Making a Change in Puppet
=========================
Many changes to the Puppet configuration can safely be made while only
performing syntax checks. Some more complicated changes merit local
testing and an interactive development cycle. The config repo is
structured to facilitate local testing before proposing a change for
review. This is accomplished by separating the puppet configuration
into several layers with increasing specificity about site
configuration higher in the stack.
The `modules/` directory holds puppet modules that abstractly describe
the configuration of a service. Ideally, these should have no
OpenStack-specific information in them, and eventually they should all
become modules that are directly consumed from PuppetForge, only
existing in the config repo during an initial incubation period. This
is not yet the case, so you may find OpenStack-specific configuration
in these modules, though we are working to reduce it.
The `modules/openstack_project/manifests/` directory holds
configuration for each of the servers that the OpenStack project runs.
Think of these manifests as describing how OpenStack runs a particular
service. However, no site-specific configuration such as hostnames or
credentials should be included in these files. This is what lets you
easily test an OpenStack project manifest on your own server.
Finally, the `manifests/site.pp` file contains the information that is
specific to the actual servers that OpenStack runs. These should be
very simple node definitions that exist largely to provide
private data from hiera to the more robust manifests in the
`openstack_project` modules.
This means that you can run the same configuration on your own server
simply by providing a different manifest file instead of site.pp.
As an example, to run the etherpad configuration on your own server,
start by cloning the config Git repo::
git clone https://github.com/openstack-infra/config
Then copy the etherpad node definition from manifests/site.pp to a new
file (be sure to specify the FQDN of the host you are working with in
the node specifier). It might look something like this::
# local.pp
node 'etherpad.example.org' {
class { 'openstack_project::etherpad':
database_password => 'badpassword',
sysadmins => 'user@example.org',
}
}
Then to apply that configuration, run the following::
cd config
bash install_puppet.sh
bash install_modules.sh
puppet apply -l manifest.log --modulepath=modules:/etc/puppet/modules local.pp
That should configure the system you are logged into as an etherpad
server with the same configuration as that used by the OpenStack
project. You can edit the contents of the config repo and iterate as
needed. When you're ready to propose the change for review, you can
propose the change with git-review. See the `Gerrit Workflow wiki
article <https://wiki.openstack.org/wiki/GerritWorkflow>`_ for more
information.
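Proposing the change then follows the usual workflow (standard git and git-review commands; the branch name is an example):

```shell
# From inside the config checkout:
git checkout -b my-config-change
git commit -a -m "Describe the configuration change"
git review
```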
Adding a New Server
===================
To create a new server, do the following:
* Add a file in :file:`modules/openstack_project/manifests/` that defines a
class which specifies the configuration of the server.
* Add a node entry in :file:`manifests/site.pp` for the server that uses that
class.
* If your server needs private information such as passwords, use
hiera calls in the site manifest, and ask an infra-core team member
to manually add the private information to hiera.
* You should be able to install and configure most software only with
puppet. Nonetheless, if you need SSH access to the host, add your
public key to :file:`modules/openstack_project/manifests/users.pp` and
include a stanza like this in your server class::
realize (
User::Virtual::Localuser['USERNAME'],
)
* Add an RST file with documentation about the server in :file:`doc/source`
and add it to the index in that directory.
SSH Access
==========
For any of the systems managed by the OpenStack Infrastructure team, the
following practices must be observed for SSH access:
* SSH access is only permitted with SSH public/private key
authentication.
* Users must use a strong passphrase to protect their private key. A
passphrase of several words, at least one of which is not in a
dictionary is advised, or a random string of at least 16
characters.
* To mitigate the inconvenience of using a long passphrase, users may
want to use an SSH agent so that the passphrase is only requested
once per desktop session.
* Users private keys must never be stored anywhere except their own
workstation(s). In particular, they must never be stored on any
remote server.
* If users need to 'hop' from a server or bastion host to another
machine, they must not copy a private key to the intermediate
machine (see above). Instead SSH agent forwarding may be used.
However, due to the potential for a compromised intermediate machine
to ask the agent to sign requests without the user's knowledge, in
this case only an SSH agent that interactively prompts the user
each time a signing request is received (i.e. ssh-agent, but not
gnome-keyring) should be used, and the SSH keys should be added with
the confirmation constraint ('ssh-add -c').
* The number of SSH keys that are configured to permit access to
OpenStack machines should be kept to a minimum.
* OpenStack Infrastructure machines must use puppet to centrally manage and
configure user accounts, and the SSH authorized_keys files from the
openstack-infra/config repository.
* SSH keys should be periodically rotated (at least once per year).
During rotation, a new key can be added to puppet for a time, and
then the old one removed. Be sure to run puppet on the backup
servers to make sure they are updated.
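The agent-forwarding guidance above can be sketched as the following commands (the key path and hostname are examples):

```shell
# Load the key with the confirmation constraint: every signing
# request triggers an interactive prompt.
ssh-add -c ~/.ssh/id_rsa

# Forward the agent only for the hop that actually needs it.
ssh -A bastion.openstack.org
```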
Backups
=======
Off-site backups are made to two servers:
* ci-backup-rs-ord.openstack.org
* ci-backup-hp-az1.openstack.org
Puppet is used to perform the initial configuration of those machines,
but to protect them from unauthorized access in case access to the
puppet git repo is compromised, it is not run in agent or in cron mode
on them. Instead, it should be manually run when changes are made
that should be applied to the backup servers.
To start backing up a server, some commands need to be run manually on
both the backup server, and the server to be backed up. On the server
to be backed up::
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""
And then ``cat /root/.ssh/id_rsa.pub`` for use later.
On the backup servers::
sudo su -
BUPUSER=bup-<short-servername> # eg, bup-jenkins-dev
useradd -r $BUPUSER -s /bin/bash -m
cd /home/$BUPUSER
mkdir .ssh
cat >.ssh/authorized_keys
Then add this line to the ``authorized_keys`` file::
command="BUP_DEBUG=0 BUP_FORCE_TTY=3 bup server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty <ssh key from earlier>
Switching back to the server to be backed up, run::
ssh $BUPUSER@ci-backup-rs-ord.openstack.org
ssh $BUPUSER@ci-backup-hp-az1.openstack.org
And verify the host key. Add the "backup" class in puppet to the server
to be backed up.
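The forced ``command=`` option in the ``authorized_keys`` entry above is what confines the backup user to running ``bup server``. As a purely illustrative sanity check (the file path and key material are made up), you can verify that an entry carries the expected restrictions:

```shell
# Write a sample restricted entry (key material elided) and check that
# the forced command and the no-* options are all present.
AK=/tmp/demo_authorized_keys
cat > "$AK" <<'EOF'
command="BUP_DEBUG=0 BUP_FORCE_TTY=3 bup server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... bup-demo
EOF
grep -q 'command=.*bup server' "$AK" \
  && grep -q 'no-agent-forwarding' "$AK" \
  && grep -q 'no-pty' "$AK" \
  && echo "entry is restricted"
```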
GitHub Access
=============
To ensure that code review and testing are not bypassed in the public
Git repositories, only Gerrit will be permitted to commit code to
OpenStack repositories. Because GitHub always allows project
administrators to commit code, accounts that have access to manage the
GitHub projects necessarily will have commit access to the
repositories. Therefore, to avoid inadvertent commits to the public
repositories, unique administrative-only accounts must be used to
manage the OpenStack GitHub organization and projects. These accounts
will not be used to check out or commit code for any project.

:title: Major Systems

Major Systems
#############
The OpenStack Infrastructure team maintains a number of systems that are
critical to the operation of the OpenStack project, such as gerrit, jenkins,
mailman, meetbot, etherpad, paste, and others.
.. toctree::
:maxdepth: 2
Additionally, the team maintains the project sites on Launchpad and
GitHub. The following policies have been adopted to ensure the
continued and secure operation of the project.
SSH Access
**********
For any of the systems managed by the OpenStack Infrastructure team, the
following practices must be observed for SSH access:
* SSH access is only permitted with SSH public/private key
authentication.
* Users must use a strong passphrase to protect their private key.
  A passphrase of several words, at least one of which is not in a
  dictionary, or a random string of at least 16 characters, is
  advised.
* To mitigate the inconvenience of using a long passphrase, users may
want to use an SSH agent so that the passphrase is only requested
once per desktop session.
Servers
*******
Because the configuration of servers is managed in puppet, anyone may
propose changes to existing servers, or propose that new servers be
created by editing the puppet configuration and uploading a change for
review in Gerrit. The installation and maintenance of software on
project infrastructure servers should be carried out entirely through
puppet so that anyone can contribute.
The Git repository with the puppet configuration may be cloned from
https://github.com/openstack-infra/config and changes submitted
with `git-review`.
In order to ensure that it is easy for both the OpenStack project as
well as others to re-use the configuration in that repository, server
definitions are split into two levels of abstraction: first, a class
is created that defines the configuration of the server, but without
specifics such as hostnames and passwords. Then a node definition is
created that uses that class, passing in any specific information
needed for that node.
For instance, `modules/openstack_project/manifests/gerrit.pp` defines a
class which specifies how the OpenStack project configures a gerrit
server, and then `manifests/site.pp` defines a node that uses that
class, passing in passwords and other information specific to that
node obtained from puppet's hiera.
To create a new server, do the following:
* Add a file in `modules/openstack_project/manifests/` that defines a
class which specifies the configuration of the server.
* Add a node entry in `manifests/site.pp` for the server that uses that
class.
* If your server needs private information such as passwords, use
hiera calls in the site manifest, and ask an infra-core team member
to manually add the private information to hiera.
* You should be able to install and configure most software only with
puppet. Nonetheless, if you need SSH access to the host, add your
public key to `modules/openstack_project/manifests/users.pp` and
include a stanza like this in your server class::
realize (
User::Virtual::Localuser['USERNAME'],
)
* Add an RST file with documentation about the server in `doc/source`
and add it to the index in that directory.
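Pulling the steps above together, a new server definition might be sketched as follows; the class name, port list, parameter, and hiera key are illustrative, not copied from the real tree:

```puppet
# modules/openstack_project/manifests/example.pp (hypothetical)
class openstack_project::example (
  $sysadmins = []
) {
  class { 'openstack_project::server':
    iptables_public_tcp_ports => [80, 443],
    sysadmins                 => $sysadmins,
  }

  # Grant SSH access to a user defined in
  # modules/openstack_project/manifests/users.pp
  realize (
    User::Virtual::Localuser['USERNAME'],
  )
}

# manifests/site.pp (hypothetical node entry) -- node-specific data
# comes from hiera so no secrets live in the manifest itself
node 'example.openstack.org' {
  class { 'openstack_project::example':
    sysadmins => hiera('sysadmins', []),
  }
}
```

Keeping the class free of hostnames and secrets, and confining those to the node entry, is what makes the configuration reusable outside the OpenStack project.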
Launchpad Teams
***************
Each OpenStack project should have the following teams on Launchpad:
* foo-bugs -- people interested in receiving bug reports
* foo-drivers -- people who may approve and target blueprints
The openstack-admins team should be a member of each of those teams.
gerrit
jenkins
zuul
jjb
logstash
devstack-gate
jeepyb
irc
etherpad
paste
planet
puppet
lists
wiki

Third Party Testing
===================
Overview
--------

doc/source/wiki.rst (new file)
:title: Wiki
.. _wiki:
Wiki
####
`Mediawiki <http://www.mediawiki.org/wiki/MediaWiki>`_ is installed on
wiki.openstack.org.
At a Glance
===========
:Hosts:
* https://wiki.openstack.org
:Puppet:
* :file:`modules/mediawiki`
* :file:`modules/openstack_project/manifests/wiki.pp`
:Projects:
* http://www.mediawiki.org/wiki/MediaWiki
:Bugs:
* http://bugs.launchpad.net/openstack-ci
Overview
========
Much (but not all) of the configuration is in puppet in the
``openstack-infra/config`` repository. Mediawiki upgrades are
currently performed manually.

doc/source/zuul.rst (new file)
:title: Zuul
.. _zuul:
Zuul
####
Zuul is a pipeline-oriented project gating system. It facilitates
running tests and automated tasks in response to Gerrit events.
At a Glance
===========
:Hosts:
* http://status.openstack.org/zuul
* http://zuul.openstack.org
* http://zuul-dev.openstack.org
:Puppet:
* :file:`modules/zuul`
* :file:`modules/openstack_project/manifests/zuul_prod.pp`
* :file:`modules/openstack_project/manifests/zuul_dev.pp`
:Configuration:
* :file:`modules/openstack_project/files/zuul/layout.yaml`
:Projects:
* http://launchpad.net/zuul
* http://github.com/openstack-infra/zuul
:Bugs:
* http://bugs.launchpad.net/zuul
:Resources:
* `Zuul Reference Manual <http://ci.openstack.org/zuul>`_
Overview
========
The OpenStack project uses a number of pipelines in Zuul:
**check**
Newly uploaded patchsets enter this pipeline to receive an initial
+/-1 Verified vote from Jenkins.
**gate**
Changes that have been approved by core developers are enqueued in
order in this pipeline, and if they pass tests in Jenkins, will be
merged.
**post**
This pipeline runs jobs that operate after each change is merged.
**pre-release**
This pipeline runs jobs on projects in response to pre-release tags.
**release**
When a commit is tagged as a release, this pipeline runs jobs that
publish archives and documentation.
**silent**
This pipeline is used for silently testing new jobs.
Zuul watches events in Gerrit (using the Gerrit "stream-events"
command) and matches those events to the pipelines above. If a match
is found, it adds the change to the pipeline and starts running
related jobs.
The **gate** pipeline uses speculative execution to improve
throughput. Changes are tested in parallel under the assumption that
changes ahead in the queue will merge. If they do not, Zuul will
abort and restart tests without the affected changes. This means that
many changes may be tested in parallel while still ensuring that
each commit is correctly tested.
Zuul's current status may be viewed at
`<http://status.openstack.org/zuul/>`_.
Zuul's configuration is stored in
:file:`modules/openstack_project/files/zuul/layout.yaml`. Anyone may
propose a change to the configuration by editing that file and
submitting the change to Gerrit for review.
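As a rough illustration of the file's shape (the project and job names below are invented), a ``layout.yaml`` entry attaches jobs to the pipelines described above:

```yaml
# Hypothetical excerpt: project and job names are illustrative.
projects:
  - name: openstack/example
    check:
      - gate-example-pep8
      - gate-example-python27
    gate:
      - gate-example-pep8
      - gate-example-python27
    post:
      - example-branch-tarball
```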
For the full syntax of Zuul's configuration file format, see the `Zuul
reference manual`_.