Add a workflow overview for adding a cloud

Add some details about how we integrate a new cloud into the
ecosystem.  I feel like this is an appropriate level of detail given
we're dealing with clueful admins who just need a rough guide on what
to do and can fill in the gaps.

Fix up the formatting a bit while we're here.

Change-Id: Iba3440e67ab798d5018b9dffb835601bb5c0c6c7
This commit is contained in:
Ian Wienand 2018-10-17 11:20:20 +11:00 committed by Andreas Jaeger
parent e637ee5e2e
commit 8a95c976e9
2 changed files with 127 additions and 11 deletions

@ -24,9 +24,8 @@ The compute instances used by this system are generously donated by
organizations that are contributing to OpenStack, and the project is
very appreciative of this.
By visiting https://zuul.openstack.org/ you can see the system in
action at any time.
You'll see every job that's running currently, as well as some graphs
that show activity over time. Each of those jobs is running on its
@ -43,16 +42,16 @@ If you have some capacity on an OpenStack cloud that you are able to
contribute to the project, it would be a big help. This is what we
need:
* Nova and Glance APIs (with the ability to upload images)
* A single instance with 500GB of disk (via Cinder is preferred, local
  is okay) per cloud region for our region-local mirror
Each test instance requires:
* 8GB RAM
* 8vCPU at 2.4GHz (or more or fewer vCPUs, depending on speed)
* A public IP address (IPv4 and/or IPv6)
* 80GB of storage
In a setting where our instances will be segregated, our usage
patterns will cause us to be our own noisy neighbors at the worst
@ -65,7 +64,7 @@ Since there's a bit of setup and maintenance involved in adding a new
provider, a minimum of 100 instances would be helpful.
Benefits to Contributors
========================
Since we continuously use the OpenStack APIs and are familiar with how
they should operate, we occasionally discover potential problems with
@ -100,3 +99,117 @@ Infrastructure team on our public mailing list at
`#openstack-infra` on Freenode. You are also welcome to privately
contact the `Infrastructure Project Team lead
<https://governance.openstack.org/tc/reference/projects/infrastructure.html>`_.
Contribution Workflow
=====================
After discussing your contribution with the infrastructure team, it
will be time to build and configure the cloud.
Initial setup
-------------
We require two projects to be provisioned:
* A ``zuul`` project for infrastructure testing nodes
* A ``ci`` project for control-plane services
The ``zuul`` project will be used by nodepool for running the testing
nodes.  Note there may be references in configuration to projects with
``jenkins``; although this is not used any more, some original clouds
named their projects for the CI system in use at the time.
At a minimum, the ``ci`` project has the region-local mirror host(s)
for the cloud's region(s). This will be named
``mirror.<region>.<cloud>.openstack.org`` and all jobs running in the
``zuul`` project will be configured to use it as much as possible
(this might influence choices you make in network setup, etc.).
Depending on the resources available and with prior co-ordination with
the provider, the infrastructure team may also run other services in
this project such as webservers, file servers or nodepool builders.
The exact project and user names are not particularly important;
usually something like ``openstack[ci|zuul]`` is chosen.  Per below,
these will exist as ``openstackci-<provider>`` and
``openstackzuul-<provider>`` in various ``clouds.yaml`` configuration
files.  To minimise the potential for problems it is probably best
that the provided users do not have "admin" credentials; although in
some clouds that are private to OpenStack infra, admin permissions may
be granted, or an alternative user with such permissions made
available, to help with various self-service troubleshooting.  For
example, the infrastructure team does not require any particular
access to subnet or router configuration in the cloud, although where
requested we are happy to help with this level of configuration.
Add cloud configuration
-----------------------
After creating the two projects and users, configuration and
authentication details need to be added into configuration management.
The public portions can be proposed via the standard review process at
any time by anyone.  Exact details of cloud configuration change from
time to time; the best way to begin the addition is to clone the
``system-config`` repository (i.e. this repo) with ``git clone
https://git.openstack.org/openstack-infra/system-config``, ``grep``
for an existing cloud (or go through ``git log`` and find the last
cloud added) and follow the pattern.  After posting the review, CI
tests and reviewers will help with any issues.
These details largely consist of the public portions of the
``openstackclient`` configuration format, such as the endpoint and
version details.  Note that we require ``https`` communication to
Keystone; self-signed certificates can be used if required.  Some
non-commercial clouds use `letsencrypt <https://letsencrypt.org>`__
while others use their CA of preference.
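As a rough sketch, the public portion for a hypothetical provider
might look like the following ``clouds.yaml`` fragment (the
``examplecloud`` name, endpoint and region here are illustrative
assumptions, not a real deployment; passwords live only in the secret
store)::

    # Illustrative only: public clouds.yaml portion for a hypothetical
    # provider "examplecloud".
    clouds:
      openstackci-examplecloud:
        auth:
          auth_url: https://keystone.examplecloud.example.com:5000
          username: openstackci
          project_name: ci
        regions:
          - RegionOne
        identity_api_version: 3
      openstackzuul-examplecloud:
        auth:
          auth_url: https://keystone.examplecloud.example.com:5000
          username: openstackzuul
          project_name: zuul
        regions:
          - RegionOne
        identity_api_version: 3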
Once the public review is ready, the secret values used in the review
need to be manually entered by an ``infra-root`` member into the
secret storage on ``bridge.openstack.org``.  You can communicate these
via GPG-encrypted mail to an ``infra-root`` member (ping
``infra-root`` in ``#openstack-infra`` and someone will appear).  If
not told explicitly, most members have signed the OpenStack signing
key, so you can find their preferred key via that; if the passwords
can be changed later, plain text is also fine.  With those in place,
the public review will be committed and the cloud will become active.
Once active, ``bridge.openstack.org`` will begin regularly running
`ansible-role-cloud-launcher
<http://git.openstack.org/cgit/openstack/ansible-role-cloud-launcher/>`__
against the new cloud to configure keys, upload base images, set up
security groups and so on.
Activate in nodepool
--------------------
After the cloud is configured, it can be added as a resource for
nodepool to use for testing nodes.
Firstly, an ``infra-root`` member will need to create the region-local
mirror server, configure any required storage for it and set up DNS
(see :ref:`adding_new_server`).  With this active, the cloud is ready
to start running testing nodes.
At this point, the cloud needs to be added to nodepool configuration
in `project-config
<https://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool>`__.
Again, existing entries provide useful templates for the initial
review proposal, which can be done by anyone.  Some clouds provision
particular flavors for CI nodes; these need to be present at this
point and will be conveyed via the nodepool configuration.  Again, CI
checks and reviewers will help with any fine details.
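As an illustrative sketch only (the provider name, image, flavor and
limits here are assumptions; copy a current entry from
``project-config`` rather than this fragment)::

    # Hypothetical nodepool provider entry for "examplecloud".
    providers:
      - name: examplecloud-regionone
        cloud: openstackzuul-examplecloud
        region-name: RegionOne
        boot-timeout: 120
        diskimages:
          - name: ubuntu-xenial
            config-drive: true
        pools:
          - name: main
            max-servers: 100
            labels:
              - name: ubuntu-xenial
                diskimage: ubuntu-xenial
                min-ram: 8000
                flavor-name: ci-flavor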
Once this is committed, nodepool will upload images into the new
region and start running nodes automatically. Don't forget to add the
region to the `grafana
<https://git.openstack.org/cgit/openstack-infra/project-config/tree/grafana>`__
configuration to ensure we have a dashboard for the region's health.
Ongoing operation
-----------------
If at any point the cloud needs to be disabled for maintenance, a
review can be proposed setting ``max-servers`` to zero in the nodepool
configuration.  We usually propose a revert of this at the same time,
with a negative workflow vote, as a reminder to turn it back on when
appropriate.  In an emergency, an ``infra-root`` member can bypass the
normal review process and apply such a change by hand.
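The maintenance change itself is small; the relevant pool's
``max-servers`` simply goes to zero, along the lines of::

    # Hypothetical fragment: stop launching nodes in this provider's
    # pool while leaving the rest of its configuration untouched.
    pools:
      - name: main
        max-servers: 0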

@ -112,6 +112,9 @@ bridge, you can access any of the clouds with::
   export OS_CLIENT_CONFIG_FILE=/etc/openstack/all-clouds.yaml
   openstack --os-cloud <cloud name> --os-cloud-region <region name>

.. _adding_new_server:

Adding a New Server
===================