Retire devstack gate

This "removes" devstack-gate content in order to retire it. Devstack
grew the ability to bootstrap its own CI environments using ansible and
no longer needs devstack-gate.

Part of the motivation for this change is that it will help us in our
quest to remove old Ubuntu Xenial test nodes from the CI system as
devstack-gate still relies on them.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/919625
Change-Id: Ife60f1dd6fae7577cee78054b69d8ab83df9a8ce
This commit is contained in:
Clark Boylan 2024-05-14 15:02:46 -07:00
parent 9cfd5cca0a
commit 842ac82d65
52 changed files with 11 additions and 5634 deletions

.gitignore
View File

@ -1,18 +0,0 @@
*.pyc
*.swp
vendor
.ksl-venv
.venv
.tox
devstack_gate.egg-info/
*.log
.coverage
covhtml
AUTHORS
ChangeLog
pep8.txt
*.db
.DS_Store
build/
dist/
.testrepository

View File

@ -1,4 +0,0 @@
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover -t . ./tests $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

View File

@ -1,49 +0,0 @@
- nodeset:
name: devstack-single-node
nodes:
- name: primary
label: ubuntu-xenial
- nodeset:
name: legacy-ubuntu-focal
nodes:
- name: primary
label: ubuntu-focal
- job:
name: devstack-gate-hooks
parent: legacy-dsvm-base
run: playbooks/devstack-gate-hooks/run.yaml
post-run: playbooks/devstack-gate-hooks/post.yaml
timeout: 3900
- job:
name: legacy-tempest-neutron-full-stable
parent: legacy-dsvm-base
run: playbooks/legacy/tempest-neutron-full/run.yaml
post-run: playbooks/legacy/tempest-neutron-full/post.yaml
timeout: 10800
nodeset: legacy-ubuntu-focal
required-projects:
- openstack/devstack-gate
- openstack/neutron
- openstack/tempest
- project:
templates:
- official-openstack-repo-jobs
- openstack-python35-jobs
queue: integrated
check:
jobs:
- openstack-tox-bashate:
nodeset: ubuntu-bionic
- openstack-tox-py27
- devstack-gate-hooks
gate:
jobs:
- openstack-tox-py27
- devstack-gate-hooks
experimental:
jobs:
- legacy-tempest-dsvm-neutron-dvr-multinode-full

View File

@ -1,274 +1,14 @@
.. warning::
This project is no longer maintained.
For a long time, we have recommended switching your CI jobs to `Zuulv3
native jobs`__. In the Xena cycle, we are officially deprecating
it. Devstack Gate will support only stable branches until stable/wallaby.
From the Xena release onwards, we no longer guarantee that it works. If it
fails, we strongly recommend switching your CI jobs to Zuulv3 native jobs or
forking this repo to fix it for your CI.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
We will retire it completely once stable/wallaby is in the 'Extended Maintenance'
state, which is 2022-10-14.
Devstack is now capable of bootstrapping itself in CI environments
using ansible playbooks and roles. You should not need devstack-gate
to act as a driver.
.. __: https://governance.openstack.org/tc/goals/selected/victoria/native-zuulv3-jobs.html
Devstack Gate
=============
Devstack-gate is a collection of scripts used by the OpenStack CI team
to test every change to core OpenStack projects by deploying OpenStack
via devstack on a cloud server.
What It Is
==========
All changes to core OpenStack projects are "gated" on a set of tests
so that a change will not be merged into the main repository unless it
passes all of the configured tests. Most projects require pep8 checks and
unit tests run under several versions of Python. Those tests are all run only
on the project in question. The devstack gate test, however, is an
integration test and ensures that a proposed change still enables
several of the projects to work together.
Obviously we test integrated OpenStack components and their clients
because they all work closely together to form an OpenStack
system. Changes to devstack itself are also required to pass this test
so that we can be assured that devstack is always able to produce a
system capable of testing the next change to nova. The devstack gate
scripts themselves are included for the same reason.
How It Works
============
The devstack test starts with an essentially bare virtual machine,
installs devstack on it, and runs tests of the resulting OpenStack
installation. In order to ensure that each test run is independent,
the virtual machine is discarded at the end of the run, and a new
machine is used for the next run. In order to keep the actual test run
as short and reliable as possible, the virtual machines are prepared
ahead of time and kept in a pool ready for immediate use. The process
of preparing the machines ahead of time reduces network traffic and
external dependencies during the run.
The `Nodepool`_ project is used to maintain this pool of machines.
.. _Nodepool: https://opendev.org/zuul/nodepool
How to Debug a Devstack Gate Failure
====================================
When Jenkins runs gate tests for a change, it leaves comments on the
change in Gerrit with a link to the resulting logs, including the
console log. If a change fails in a devstack-gate test, you can follow
these links to find out what went wrong. Start at the bottom of the log
file with the failure and scroll up, looking for errors related to failed
tests.
You might need some information about the specific run of the test. In
the devstack-gate-setup-workspace log, you can see all the git commands
used to set up the repositories, and they will output the (short) sha1
and commit subjects of the head of each repository.
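The exact repositories and refs vary per job, but the commands recorded in
that log are roughly of the following form (an illustrative sketch only; the
project, paths and ref below are made-up examples)::
git clone https://opendev.org/openstack/nova /opt/stack/new/nova
cd /opt/stack/new/nova
git fetch http://zuul.openstack.org/p/openstack/nova refs/zuul/master/Zexample
git checkout FETCH_HEAD
git log -1 --format='%h %s'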
It's possible that a failure could be a false negative related to a
specific provider, especially if there is a pattern of failures from
tests that run on nodes from that provider. In order to find out which
provider supplied the node the test ran on, look at the name of the
jenkins slave in the devstack-gate-setup-host log; the name of the
provider is included.
Below that, you'll find the output from devstack as it installs all of
the debian and pip packages required for the test, and then configures
and runs the services. Most of what it needs should already be cached
on the test host, but if the change to be tested includes a dependency
change, or there has been such a change since the snapshot image was
created, the updated dependency will be downloaded from the Internet,
which could cause a false negative if that fails.
Assuming that there are no visible failures in the console log, you
may need to examine the log output from the OpenStack services, located
in the logs/ directory. All of the OpenStack services are configured to
log to syslog, so you may find helpful log messages by clicking on the
"syslog.txt[.gz]" file. Some error messages are so basic they don't
make it to syslog, such as if a service fails to start. Devstack
starts all of the services in screen, and you can see the output
captured by screen in files named "screen-\*.txt". You may find a
traceback there that isn't in syslog.
After examining the output from the test, if you believe the result
was a false negative, you can retrigger the test by running a recheck;
this is done by leaving a review comment with simply the text: recheck
If a test failure is a result of a race condition in the OpenStack code,
you also have the opportunity to try to identify it, and file a bug report,
help fix the problem or leverage `elastic-recheck
<http://docs.openstack.org/infra/elastic-recheck/readme.html>`_ to help
track the problem. If it seems to be related to a specific devstack gate
node provider, we'd love it if you could help identify what the variable
might be (whether in the devstack-gate scripts, devstack itself, Nodepool,
OpenStack, or even the provider's service).
Simulating Devstack Gate Tests
==============================
Developers often have a need to recreate gating integration tests
manually, and this provides a walkthrough of making a DG-slave-like
throwaway server without the overhead of building other CI
infrastructure to manage a pool of them. This can be useful to reproduce
and troubleshoot failures or tease out nondeterministic bugs.
First, you can build an image identical to the images running in the gate using
`diskimage-builder <https://docs.openstack.org/developer/diskimage-builder>`_.
The specific operating systems built and DIB elements for each image type are
defined in `nodepool.yaml <https://opendev.org/openstack/project-config/
src/branch/master/nodepool/nodepool.yaml>`_. There is a handy script
available in the project-config repo to build this for you::
git clone https://opendev.org/openstack/project-config
cd project-config
./tools/build-image.sh
Take a look at the documentation within the `build-image.sh` script for specific
build options.
These days Tempest testing requires in excess of 2GiB of RAM (4GiB should
be enough, but we typically use 8GiB) and completes within an hour on a
4-CPU virtual machine.
If you're using an OpenStack provider, it's usually helpful to set up a
`clouds.yaml` file. More information on `clouds.yaml` files can be found in the
`os-client-config documentation <https://docs.openstack.org/developer/os-client-config/#config-files>`_.
A `clouds.yaml` file for Rackspace would look something like::
clouds:
rackspace:
auth:
profile: rackspace
username: '<provider_username>'
password: '<provider_password>'
project_name: '<provider_project_name>'
Where provider_username and provider_password are the user / password
for a valid user in your account, and provider_project_name is the project_name
you want to use (sometimes called 'tenant name' on older clouds).
You can then use the `openstack` command line client (found in the python
package
`python-openstackclient <http://pypi.python.org/pypi/python-openstackclient>`_)
to create a VM on the cloud.
You can tell `openstack` to use the `DFW` region
of the `rackspace` cloud you defined either by setting environment variables::
export OS_CLOUD=rackspace
export OS_REGION_NAME=DFW
openstack servers list
or command line options::
openstack --os-cloud=rackspace --os-region-name=DFW servers list
It will be assumed in the remaining examples that environment variables have
been set.
If you haven't already, create an SSH keypair "my-keypair" (name it whatever
you like)::
openstack keypair create --public-key=$HOME/.ssh/id_rsa.pub my-keypair
Upload your image, boot a server named "testserver" (chosen arbitrarily for
this example) with your SSH key allowed, and log into it::
FLAVOR='8GB Standard Instance'
openstack image create --file devstack-gate.qcow2 devstack-gate
openstack server create --wait --flavor "$FLAVOR" --image "devstack-gate" \
--key-name=my-keypair testserver
openstack server ssh testserver
If you get a cryptic error like ``ERROR: 'public'`` then you may need to
manually look up the IP address with ``openstack server show testserver`` and
connect by running ``ssh root@<ip_address>`` instead. Once logged in, switch to
the jenkins user and set up parts of the environment expected by devstack-gate
testing::
su - jenkins
export REPO_URL=https://git.openstack.org
export ZUUL_URL=/home/jenkins/workspace-cache
export ZUUL_REF=HEAD
export WORKSPACE=/home/jenkins/workspace/testing
mkdir -p $WORKSPACE
Specify the project and branch you want to test for integration::
export ZUUL_PROJECT=openstack/nova
export ZUUL_BRANCH=master
Get a copy of the tested project. After these steps, apply relevant
patches on the target branch (via cherry-pick, rebase, et cetera) and
make sure ``HEAD`` is at the ref you want tested::
git clone $REPO_URL/$ZUUL_PROJECT $ZUUL_URL/$ZUUL_PROJECT \
&& cd $ZUUL_URL/$ZUUL_PROJECT \
&& git checkout remotes/origin/$ZUUL_BRANCH
Switch to the workspace and get a copy of devstack-gate::
cd $WORKSPACE \
&& git clone --depth 1 $REPO_URL/openstack/devstack-gate
At this point you're ready to set the same environment variables and run
the same commands/scripts as used in the desired job. The definitions
for these are found in the openstack/project-config project under
the jenkins/jobs directory in a file named devstack-gate.yaml. It will
probably look something like::
export PYTHONUNBUFFERED=true
export DEVSTACK_GATE_TEMPEST=1
export DEVSTACK_GATE_TEMPEST_FULL=1
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
If you're trying to figure out which devstack gate jobs run for a given
project+branch combination, this is encoded in the
openstack/project-config project under the zuul/ directory in a file
named layout.yaml. You'll want to look in the "projects" section for a list
of jobs run on a given project in the "gate" pipeline, and then consult the
"jobs" section of the file to see if there are any overrides indicating
which branches qualify for the job and whether or not its voting is
disabled.
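As a quick sketch, you can skim that file from a project-config checkout
like so (the grep patterns and job name here are illustrative only, not
taken from the real layout)::
cd project-config
# the project entry and the jobs in its pipelines
grep -A 30 'name: openstack/nova' zuul/layout.yaml
# any branch or voting overrides for a particular job
grep -B 1 -A 5 'gate-tempest-dsvm-full' zuul/layout.yaml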
After the script completes, investigate any failures. Then log out and
``openstack server delete testserver`` or similar to get rid of it once no
longer needed. It's possible to re-run certain jobs or specific tests on a used
VM (sometimes with a bit of manual clean-up in between runs), but for
proper testing you'll want to validate your fixes on a completely fresh
one.
Refer to the `Jenkins Job Builder`_ and Zuul_ documentation for more
information on their configuration file formats.
.. _`Jenkins Job Builder`: http://docs.openstack.org/infra/system-config/jjb.html
.. _Zuul: http://docs.openstack.org/infra/system-config/zuul.html
Contributions Welcome
=====================
All of the OpenStack developer infrastructure is freely available and
managed in source code repositories just like the code of OpenStack
itself. If you'd like to contribute, just clone and propose a patch to
the relevant repository::
https://opendev.org/openstack/devstack-gate
https://opendev.org/zuul/nodepool
https://opendev.org/opendev/system-config
https://opendev.org/openstack/project-config
You can file bugs on the storyboard devstack-gate project::
https://storyboard.openstack.org/#!/project/712
And you can chat with us on OFTC in #openstack-qa or #openstack-infra.
It's worth noting that, while devstack-gate is generally licensed under the
Apache license, `playbooks/plugins/callback/devstack.py` is GPLv3 because it
is derived from the Ansible source code.
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

View File

@ -1,34 +0,0 @@
#!/bin/bash -x
# Simulate what Jenkins does with the devstack-gate script.
NODE_IP_ADDR=$1
cat >$WORKSPACE/test-env.sh <<EOF
export WORKSPACE=/home/jenkins/workspace
export DEVSTACK_GATE_PREFIX=wip-
export DEVSTACK_GATE_TEMPEST=1
export ZUUL_BRANCH=master
export ZUUL_PROJECT=testing
export ZUUL_REF=refs/zuul/Ztest
export JOB_NAME=test
export BUILD_NUMBER=42
export GERRIT_CHANGE_NUMBER=1234
export GERRIT_PATCHSET_NUMBER=1
export DEVSTACK_GATE_TEMPEST=${DEVSTACK_GATE_TEMPEST:-0}
export DEVSTACK_GATE_NEUTRON=${DEVSTACK_GATE_NEUTRON:-0}
export DEVSTACK_GATE_HEAT=${DEVSTACK_GATE_HEAT:-0}
export DEVSTACK_GATE_GRENADE=${DEVSTACK_GATE_GRENADE:-""}
EOF
rsync -az $WORKSPACE/ jenkins@$NODE_IP_ADDR:workspace-cache/
rsync -az $WORKSPACE/ jenkins@$NODE_IP_ADDR:workspace/
RETVAL=$?
if [ $RETVAL != 0 ]; then
exit $RETVAL
fi
rm $WORKSPACE/test-env.sh
ssh -t jenkins@$NODE_IP_ADDR '. workspace/test-env.sh && cd workspace && ./devstack-gate/devstack-vm-gate-wrap.sh'
echo "done"
#RETVAL=$?

View File

@ -1,751 +0,0 @@
#!/bin/bash
# Gate commits to several projects on a VM running those projects
# configured by devstack.
# Copyright (C) 2011-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
# Most of the work of this script is done in functions so that we may
# easily redirect their stdout / stderr to log files.
GIT_BASE=${GIT_BASE:-https://opendev.org}
GIT_BRANCH=${GIT_BRANCH:-master}
# We're using enough ansible specific features that it's extremely
# possible that new ansible releases can break us. As such we should
# be very deliberate about which ansible we use.
# NOTE(ykarel): Ansible 2.9.6 is current as of Ubuntu Focal 20.04.
# ARA is pinned to <1.0.0 below which affects the required version of Ansible.
ANSIBLE_VERSION=${ANSIBLE_VERSION:-2.9.6}
export DSTOOLS_VERSION=${DSTOOLS_VERSION:-0.4.0}
# Set to 0 to skip stackviz
export PROCESS_STACKVIZ=${PROCESS_STACKVIZ:-1}
# sshd may have been compiled with a default path excluding */sbin
export PATH=$PATH:/usr/local/sbin:/usr/sbin
# When doing xtrace (set -x / set -o xtrace), provide more debug output
export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}: '
#check to see if WORKSPACE var is defined
if [ -z ${WORKSPACE} ]; then
echo "The 'WORKSPACE' variable is undefined. It must be defined for this script to work"
exit 1
fi
source $WORKSPACE/devstack-gate/functions.sh
start_timer
# Note that service/project enablement vars are here so that they can be
# used to select the PROJECTS list below reliably.
# Set to 1 to run sahara
export DEVSTACK_GATE_SAHARA=${DEVSTACK_GATE_SAHARA:-0}
# Set to 1 to run trove
export DEVSTACK_GATE_TROVE=${DEVSTACK_GATE_TROVE:-0}
# are we pulling any libraries from git
export DEVSTACK_PROJECT_FROM_GIT=${DEVSTACK_PROJECT_FROM_GIT:-}
# Save the PROJECTS variable as it was passed in. This is needed for reproduce.sh
# in case the job definition contains items that are not in the "global" list
# below.
# See: https://bugs.launchpad.net/openstack-gate/+bug/1544827
JOB_PROJECTS="$PROJECTS"
PROJECTS="openstack/devstack-gate $PROJECTS"
PROJECTS="openstack/devstack $PROJECTS"
PROJECTS="openstack/ceilometer $PROJECTS"
PROJECTS="openstack/ceilometermiddleware $PROJECTS"
PROJECTS="openstack/cinder $PROJECTS"
PROJECTS="openstack/glance $PROJECTS"
PROJECTS="openstack/heat $PROJECTS"
PROJECTS="openstack/heat-cfntools $PROJECTS"
PROJECTS="openstack/heat-templates $PROJECTS"
if [[ "$DEVSTACK_GATE_HORIZON" -eq "1" || "$DEVSTACK_PROJECT_FROM_GIT" = "manila-ui" ]] ; then
PROJECTS="openstack/horizon $PROJECTS"
PROJECTS="openstack/manila-ui $PROJECTS"
fi
PROJECTS="openstack/keystone $PROJECTS"
PROJECTS="openstack/neutron $PROJECTS"
PROJECTS="openstack/nova $PROJECTS"
PROJECTS="openstack/requirements $PROJECTS"
PROJECTS="openstack/swift $PROJECTS"
PROJECTS="openstack/tempest $PROJECTS"
# Everything below this line in the PROJECTS list is for non
# default devstack runs. Over time we should remove items from
# below and add them explicitly to the jobs that need them. The
# reason for this is to reduce job runtimes: every git repo
# has to be cloned, updated, and checked out to the proper ref,
# which is not free.
PROJECTS="openstack/tripleo-ci $PROJECTS"
# The devstack heat plugin uses these repos
if [[ "$DEVSTACK_GATE_HEAT" -eq "1" ]] ; then
PROJECTS="openstack/dib-utils $PROJECTS"
PROJECTS="openstack/diskimage-builder $PROJECTS"
fi
PROJECTS="openstack/glance_store $PROJECTS"
PROJECTS="openstack/keystoneauth $PROJECTS"
PROJECTS="openstack/keystonemiddleware $PROJECTS"
PROJECTS="openstack/manila $PROJECTS"
PROJECTS="openstack/zaqar $PROJECTS"
PROJECTS="openstack/neutron-fwaas $PROJECTS"
PROJECTS="openstack/octavia $PROJECTS"
PROJECTS="openstack/neutron-vpnaas $PROJECTS"
PROJECTS="openstack/os-apply-config $PROJECTS"
PROJECTS="openstack/os-brick $PROJECTS"
PROJECTS="openstack/os-client-config $PROJECTS"
PROJECTS="openstack/os-collect-config $PROJECTS"
PROJECTS="openstack/os-net-config $PROJECTS"
PROJECTS="openstack/os-refresh-config $PROJECTS"
PROJECTS="openstack/osc-lib $PROJECTS"
if [[ "$DEVSTACK_GATE_SAHARA" -eq "1" ]] ; then
PROJECTS="openstack/sahara $PROJECTS"
PROJECTS="openstack/sahara-dashboard $PROJECTS"
fi
PROJECTS="openstack/tripleo-heat-templates $PROJECTS"
PROJECTS="openstack/tripleo-image-elements $PROJECTS"
if [[ "$DEVSTACK_GATE_TROVE" -eq "1" ]] ; then
PROJECTS="openstack/trove $PROJECTS"
fi
if [[ -n "$DEVSTACK_PROJECT_FROM_GIT" ]] ; then
# We populate the PROJECTS list with any libs that should be installed
# from source rather than from PyPI, assuming they live under openstack/
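# For example, a hypothetical DEVSTACK_PROJECT_FROM_GIT="oslo.db,glance_store,"
# is turned into "openstack/oslo.db openstack/glance_store" by the two sed
# passes below.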
TRAILING_COMMA_REMOVED=$(echo "$DEVSTACK_PROJECT_FROM_GIT" | sed -e 's/,$//')
PROCESSED_FROM_GIT=$(echo "openstack/$TRAILING_COMMA_REMOVED" | sed -e 's/,/ openstack\//g')
PROJECTS="$PROCESSED_FROM_GIT $PROJECTS"
fi
# Include openstack/placement starting in Stein.
stable_compare="stable/[a-r]"
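# The unanchored regex above matches stable/austin through stable/rocky,
# so placement is only added for stable/stein and later stable branches
# and for master.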
if [[ ! "$OVERRIDE_ZUUL_BRANCH" =~ $stable_compare ]] ; then
PROJECTS="openstack/placement $PROJECTS"
fi
# Remove duplicates as they result in errors when managing
# git state.
PROJECTS=$(echo $PROJECTS | tr '[:space:]' '\n' | sort -u)
echo "The PROJECTS list is:"
echo $PROJECTS | fold -w 80 -s
echo "---"
export BASE=/opt/stack
# The URL from which to fetch ZUUL references
export ZUUL_URL=${ZUUL_URL:-http://zuul.openstack.org/p}
# The feature matrix to select devstack-gate components
export DEVSTACK_GATE_FEATURE_MATRIX=${DEVSTACK_GATE_FEATURE_MATRIX:-roles/test-matrix/files/features.yaml}
# Set to 1 to install, configure and enable the Tempest test suite; more flags may be
# required to be set to customize the test run, e.g. DEVSTACK_GATE_TEMPEST_STRESS=1
export DEVSTACK_GATE_TEMPEST=${DEVSTACK_GATE_TEMPEST:-0}
# Set to 1, in conjunction with DEVSTACK_GATE_TEMPEST, to allow Tempest to be
# installed and configured but skip running the tests
export DEVSTACK_GATE_TEMPEST_NOTESTS=${DEVSTACK_GATE_TEMPEST_NOTESTS:-0}
# Set to 1 to run postgresql instead of mysql
export DEVSTACK_GATE_POSTGRES=${DEVSTACK_GATE_POSTGRES:-0}
# Set to 1 to use zeromq instead of rabbitmq (or qpid)
export DEVSTACK_GATE_ZEROMQ=${DEVSTACK_GATE_ZEROMQ:-0}
# Set to qpid to use qpid, or zeromq to use zeromq.
# Default set to rabbitmq
export DEVSTACK_GATE_MQ_DRIVER=${DEVSTACK_GATE_MQ_DRIVER:-"rabbitmq"}
# This value must be provided when DEVSTACK_GATE_TEMPEST_STRESS is set.
export DEVSTACK_GATE_TEMPEST_STRESS_ARGS=${DEVSTACK_GATE_TEMPEST_STRESS_ARGS:-""}
# Set to 1 to run tempest heat slow tests
export DEVSTACK_GATE_TEMPEST_HEAT_SLOW=${DEVSTACK_GATE_TEMPEST_HEAT_SLOW:-0}
# Set to 1 to run tempest large ops test
export DEVSTACK_GATE_TEMPEST_LARGE_OPS=${DEVSTACK_GATE_TEMPEST_LARGE_OPS:-0}
# Set to 1 to run tempest smoke tests serially
export DEVSTACK_GATE_SMOKE_SERIAL=${DEVSTACK_GATE_SMOKE_SERIAL:-0}
# Set to 1 to explicitly disable tempest tenant isolation. Otherwise the tenant isolation setting
# for tempest will be the one chosen by devstack.
export DEVSTACK_GATE_TEMPEST_DISABLE_TENANT_ISOLATION=${DEVSTACK_GATE_TEMPEST_DISABLE_TENANT_ISOLATION:-0}
# Should cinder perform secure deletion of volumes?
# Defaults to none to avoid bug 1023755. Can also be set to zero or shred.
# Only applicable to stable/liberty+ devstack.
export DEVSTACK_CINDER_VOLUME_CLEAR=${DEVSTACK_CINDER_VOLUME_CLEAR:-none}
# Set this to override the branch selected for testing (in
# single-branch checkouts; not used for grenade)
export OVERRIDE_ZUUL_BRANCH=${OVERRIDE_ZUUL_BRANCH:-$ZUUL_BRANCH}
stable_compare="stable/[a-n]"
# Set to 1 to run neutron instead of nova network
# This is a bit complicated to handle the deprecation of nova net across
# repos with branches from this branchless job runner.
if [ -n "$DEVSTACK_GATE_NEUTRON" ] ; then
# If someone has made a choice externally honor it
export DEVSTACK_GATE_NEUTRON=$DEVSTACK_GATE_NEUTRON
elif [[ "$OVERRIDE_ZUUL_BRANCH" =~ $stable_compare ]] ; then
# Default to no neutron on older stable branches because nova net
# was the default all that time.
export DEVSTACK_GATE_NEUTRON=0
else
# For everything else there is neutron
export DEVSTACK_GATE_NEUTRON=1
fi
# Set to 1 to run neutron distributed virtual routing
export DEVSTACK_GATE_NEUTRON_DVR=${DEVSTACK_GATE_NEUTRON_DVR:-0}
# This variable tells devstack-gate to set up an overlay network between the nodes.
export DEVSTACK_GATE_NET_OVERLAY=${DEVSTACK_GATE_NET_OVERLAY:-$DEVSTACK_GATE_NEUTRON_DVR}
# Set to 1 to run nova in cells mode instead of the default mode
export DEVSTACK_GATE_CELLS=${DEVSTACK_GATE_CELLS:-0}
# Set to 1 to run nova with the nova metadata server as a separate binary
export DEVSTACK_GATE_NOVA_API_METADATA_SPLIT=${DEVSTACK_GATE_NOVA_API_METADATA_SPLIT:-0}
# Set to 1 to run ironic baremetal provisioning service.
export DEVSTACK_GATE_IRONIC=${DEVSTACK_GATE_IRONIC:-0}
# Set to "agent_ipmitool" to run ironic with the ironic-python-agent driver
export DEVSTACK_GATE_IRONIC_DRIVER=${DEVSTACK_GATE_IRONIC_DRIVER:-pxe_ipmitool}
# Set to 0 to avoid building Ironic deploy ramdisks
export DEVSTACK_GATE_IRONIC_BUILD_RAMDISK=${DEVSTACK_GATE_IRONIC_BUILD_RAMDISK:-1}
# Set to 0 to disable config_drive and use the metadata server instead
export DEVSTACK_GATE_CONFIGDRIVE=${DEVSTACK_GATE_CONFIGDRIVE:-0}
# Set to 1 to enable installing test requirements
export DEVSTACK_GATE_INSTALL_TESTONLY=${DEVSTACK_GATE_INSTALL_TESTONLY:-0}
# Set the number of threads to run tempest with
DEFAULT_CONCURRENCY=$(nproc)
if [ ${DEFAULT_CONCURRENCY} -gt 3 ] ; then
DEFAULT_CONCURRENCY=$((${DEFAULT_CONCURRENCY} / 2))
fi
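# For example, a 2-vCPU node keeps a concurrency of 2, while an 8-vCPU
# node is halved to a concurrency of 4.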
export TEMPEST_CONCURRENCY=${TEMPEST_CONCURRENCY:-${DEFAULT_CONCURRENCY}}
# The following variable is set for different directions of Grenade updating
# for a stable branch we want to both try to upgrade forward n => n+1 as
# well as upgrade from last n-1 => n.
#
# i.e. stable/ocata:
# pullup means stable/newton => stable/ocata
# forward means stable/ocata => master (or stable/pike if that's out)
export DEVSTACK_GATE_GRENADE=${DEVSTACK_GATE_GRENADE:-}
# the branch name for selecting grenade branches
GRENADE_BASE_BRANCH=${OVERRIDE_ZUUL_BRANCH:-${ZUUL_BRANCH}}
if [[ -n "$DEVSTACK_GATE_GRENADE" ]]; then
# All grenade upgrades get tempest
export DEVSTACK_GATE_TEMPEST=1
# NOTE(sdague): Adjusting grenade branches for a release.
#
# When we get to the point of the release where we should adjust
# the grenade branches, the order of doing so is important.
#
# 1. stable/foo on all projects in devstack
# 2. stable/foo on devstack
# 3. stable/foo on grenade
# 4. adjust branches in devstack-gate
#
# The devstack-gate branch logic going last means that it will be
# tested before being thrust upon the jobs. For both the stable/kilo and
# stable/liberty releases real release issues were found in this
# process. So this should be done as early as possible.
case $DEVSTACK_GATE_GRENADE in
# sideways upgrades try to move between configurations in the
# same release, typically used for migrating between services
# or configurations.
sideways-*)
export GRENADE_OLD_BRANCH="$GRENADE_BASE_BRANCH"
export GRENADE_NEW_BRANCH="$GRENADE_BASE_BRANCH"
;;
# forward upgrades are an attempt to migrate up from an
# existing stable branch to the next release.
forward)
if [[ "$GRENADE_BASE_BRANCH" == "stable/kilo" ]]; then
export GRENADE_OLD_BRANCH="stable/kilo"
export GRENADE_NEW_BRANCH="stable/liberty"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/liberty" ]]; then
export GRENADE_OLD_BRANCH="stable/liberty"
export GRENADE_NEW_BRANCH="stable/mitaka"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/mitaka" ]]; then
export GRENADE_OLD_BRANCH="stable/mitaka"
export GRENADE_NEW_BRANCH="stable/newton"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/newton" ]]; then
export GRENADE_OLD_BRANCH="stable/newton"
export GRENADE_NEW_BRANCH="$GIT_BRANCH"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/ocata" ]]; then
export GRENADE_OLD_BRANCH="stable/ocata"
export GRENADE_NEW_BRANCH="stable/pike"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/pike" ]]; then
export GRENADE_OLD_BRANCH="stable/pike"
export GRENADE_NEW_BRANCH="stable/queens"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/queens" ]]; then
export GRENADE_OLD_BRANCH="stable/queens"
export GRENADE_NEW_BRANCH="stable/rocky"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/rocky" ]]; then
export GRENADE_OLD_BRANCH="stable/rocky"
export GRENADE_NEW_BRANCH="stable/stein"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/stein" ]]; then
export GRENADE_OLD_BRANCH="stable/stein"
export GRENADE_NEW_BRANCH="stable/train"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/train" ]]; then
export GRENADE_OLD_BRANCH="stable/train"
export GRENADE_NEW_BRANCH="stable/ussuri"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/ussuri" ]]; then
export GRENADE_OLD_BRANCH="stable/ussuri"
export GRENADE_NEW_BRANCH="stable/victoria"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/victoria" ]]; then
export GRENADE_OLD_BRANCH="stable/victoria"
export GRENADE_NEW_BRANCH="stable/wallaby"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/wallaby" ]]; then
export GRENADE_OLD_BRANCH="stable/wallaby"
export GRENADE_NEW_BRANCH="$GIT_BRANCH"
fi
;;
# pullup upgrades are our normal upgrade test: can you upgrade
# to the current patch from the last stable release?
pullup)
if [[ "$GRENADE_BASE_BRANCH" == "stable/liberty" ]]; then
export GRENADE_OLD_BRANCH="stable/kilo"
export GRENADE_NEW_BRANCH="stable/liberty"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/mitaka" ]]; then
export GRENADE_OLD_BRANCH="stable/liberty"
export GRENADE_NEW_BRANCH="stable/mitaka"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/newton" ]]; then
export GRENADE_OLD_BRANCH="stable/mitaka"
export GRENADE_NEW_BRANCH="stable/newton"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/ocata" ]]; then
export GRENADE_OLD_BRANCH="stable/newton"
export GRENADE_NEW_BRANCH="stable/ocata"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/pike" ]]; then
export GRENADE_OLD_BRANCH="stable/ocata"
export GRENADE_NEW_BRANCH="stable/pike"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/queens" ]]; then
export GRENADE_OLD_BRANCH="stable/pike"
export GRENADE_NEW_BRANCH="stable/queens"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/rocky" ]]; then
export GRENADE_OLD_BRANCH="stable/queens"
export GRENADE_NEW_BRANCH="stable/rocky"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/stein" ]]; then
export GRENADE_OLD_BRANCH="stable/rocky"
export GRENADE_NEW_BRANCH="stable/stein"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/train" ]]; then
export GRENADE_OLD_BRANCH="stable/stein"
export GRENADE_NEW_BRANCH="stable/train"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/ussuri" ]]; then
export GRENADE_OLD_BRANCH="stable/train"
export GRENADE_NEW_BRANCH="stable/ussuri"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/victoria" ]]; then
export GRENADE_OLD_BRANCH="stable/ussuri"
export GRENADE_NEW_BRANCH="stable/victoria"
elif [[ "$GRENADE_BASE_BRANCH" == "stable/wallaby" ]]; then
export GRENADE_OLD_BRANCH="stable/victoria"
export GRENADE_NEW_BRANCH="stable/wallaby"
else # master
export GRENADE_OLD_BRANCH="stable/wallaby"
export GRENADE_NEW_BRANCH="$GIT_BRANCH"
fi
;;
# If we got here, someone typoed a thing, and we should fail
# explicitly so they don't accidentally pass in something that
# is unexpected.
*)
echo "Unsupported upgrade mode: $DEVSTACK_GATE_GRENADE"
exit 1
;;
esac
fi
# Set the virtualization driver to: libvirt, openvz, xenapi
export DEVSTACK_GATE_VIRT_DRIVER=${DEVSTACK_GATE_VIRT_DRIVER:-libvirt}
# Use qemu by default for consistency since some providers enable
# nested virt
export DEVSTACK_GATE_LIBVIRT_TYPE=${DEVSTACK_GATE_LIBVIRT_TYPE:-qemu}
# See switch below for this -- it gets set to 1 when tempest
# is the project being gated.
export DEVSTACK_GATE_TEMPEST_FULL=${DEVSTACK_GATE_TEMPEST_FULL:-0}
# Set to 1 to run all tempest tests
export DEVSTACK_GATE_TEMPEST_ALL=${DEVSTACK_GATE_TEMPEST_ALL:-0}
# Set to 1 to run all tempest scenario tests
export DEVSTACK_GATE_TEMPEST_SCENARIOS=${DEVSTACK_GATE_TEMPEST_SCENARIOS:-0}
# Set to a regex to run tempest with a custom regex filter
export DEVSTACK_GATE_TEMPEST_REGEX=${DEVSTACK_GATE_TEMPEST_REGEX:-""}
# Set to 1 to run all-plugin tempest tests
export DEVSTACK_GATE_TEMPEST_ALL_PLUGINS=${DEVSTACK_GATE_TEMPEST_ALL_PLUGINS:-0}
# Set to 1 if running the openstack/requirements integration test
export DEVSTACK_GATE_REQS_INTEGRATION=${DEVSTACK_GATE_REQS_INTEGRATION:-0}
# Set to 0 to disable clean logs enforcement (3rd party CI might want to do this
# until they get their driver cleaned up)
export DEVSTACK_GATE_CLEAN_LOGS=${DEVSTACK_GATE_CLEAN_LOGS:-1}
# Set this to the time in milliseconds that the entire job should be
# allowed to run before being aborted (default 120 minutes=7200000ms).
# This may be supplied by Jenkins based on the configured job timeout
# which is why it's in this convenient unit.
export BUILD_TIMEOUT=$(expr ${BUILD_TIMEOUT:-7200000} / 60000)
# Set this to the time in minutes that should be reserved for
# uploading artifacts at the end after a timeout. Defaults to 10
# minutes.
export DEVSTACK_GATE_TIMEOUT_BUFFER=${DEVSTACK_GATE_TIMEOUT_BUFFER:-10}
# Not user serviceable.
export DEVSTACK_GATE_TIMEOUT=$(expr $BUILD_TIMEOUT - $DEVSTACK_GATE_TIMEOUT_BUFFER)
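# For example, a Jenkins-supplied BUILD_TIMEOUT of 7200000 ms becomes 120
# minutes above; with the default 10 minute buffer the gate itself then
# gets 110 minutes.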
# Set to 1 to remove the stack user's blanket sudo permissions, forcing
# openstack services running as the stack user to rely on rootwrap rulesets
# instead of raw sudo. Do this to ensure rootwrap works. This is the default.
export DEVSTACK_GATE_REMOVE_STACK_SUDO=${DEVSTACK_GATE_REMOVE_STACK_SUDO:-1}
# Set to 1 to unstack immediately after devstack installation. This
# is intended to be a stop-gap until devstack can support
# dependency-only installation.
export DEVSTACK_GATE_UNSTACK=${DEVSTACK_GATE_UNSTACK:-0}
# The topology of the system determines the service distribution
# among the nodes.
# aio: `all in one` - only one node is used
# aiopcpu: `all in one plus compute` - one node is installed as aio and
# the extra nodes get only a limited set of services
# ctrlpcpu: `controller plus compute` - one node gets the controller type
# services without the compute type services, the others get
# the compute style services; several services can be common, and
# the networking services are also present on the controller [WIP]
export DEVSTACK_GATE_TOPOLOGY=${DEVSTACK_GATE_TOPOLOGY:-aio}
# Set to a space-separated list of projects to prepare in the
# workspace, e.g. 'openstack/devstack openstack/neutron'.
# Minimizing the number of targeted projects can reduce the setup cost
# for jobs that know exactly which repos they need.
export DEVSTACK_GATE_PROJECTS_OVERRIDE=${DEVSTACK_GATE_PROJECTS_OVERRIDE:-""}
# Set this to "True" to force devstack to pick python 3.x. "False" will cause
# devstack to pick python 2.x. We should leave this empty for devstack to
# pick the default.
export DEVSTACK_GATE_USE_PYTHON3=${DEVSTACK_GATE_USE_PYTHON3:-""}
# Set this to enable remote logging of the console via UDP packets to
# a specified ipv4 ip:port (note: not hostname -- ip address only).
# This can be extremely useful if a host is oopsing or dropping off
# the network and you are not getting any useful logs from jenkins.
#
# To capture these logs, enable a netcat/socat type listener to
# capture UDP packets at the specified remote ip. For example:
#
# $ nc -v -u -l -p 6666 | tee save-output.log
# or
# $ socat udp-recv:6666 - | tee save-output.log
#
# One further trick is to send interesting data to /dev/kmsg; this
# data will get out over the netconsole even if the main interfaces
# have been disabled, etc. e.g.
#
# $ ip addr | sudo tee /dev/kmsg
#
export DEVSTACK_GATE_NETCONSOLE=${DEVSTACK_GATE_NETCONSOLE:-""}
enable_netconsole
if [ -n "$DEVSTACK_GATE_PROJECTS_OVERRIDE" ]; then
PROJECTS=$DEVSTACK_GATE_PROJECTS_OVERRIDE
fi
if ! function_exists "gate_hook"; then
# the command we use to run the gate
function gate_hook {
$BASE/new/devstack-gate/devstack-vm-gate.sh
}
export -f gate_hook
fi
echo "Triggered by: https://review.openstack.org/$ZUUL_CHANGE patchset $ZUUL_PATCHSET"
echo "Pipeline: $ZUUL_PIPELINE"
echo "Timeout set to $DEVSTACK_GATE_TIMEOUT minutes \
with $DEVSTACK_GATE_TIMEOUT_BUFFER minutes reserved for cleanup."
echo "Available disk space on this host:"
indent df -h
if command -v python3 &>/dev/null; then
PIP=pip3
PYTHON_VER=$(python3 -c 'import sys; print("%s.%s" % sys.version_info[0:2])')
else
PIP=pip
PYTHON_VER=2.7
fi
# Install ansible
# TODO(gmann): virtualenv 20.0.1 is broken, one known issue:
# https://github.com/pypa/virtualenv/issues/1551
# Once virtualenv is fixed we can use the latest one.
sudo -H $PIP install "virtualenv<20.0.0"
virtualenv -p python${PYTHON_VER} /tmp/ansible
# Explicitly install pbr first as this will use pip rather than
# easy_install. The hope is that this is generally more reliable.
/tmp/ansible/bin/pip install pbr
/tmp/ansible/bin/pip install ansible==$ANSIBLE_VERSION \
devstack-tools==$DSTOOLS_VERSION 'ara<1.0.0' 'cmd2<0.9.0' \
'flask<2.0.0' 'alembic<1.5.0' 'importlib-resources<5.1.3' \
'MarkupSafe<2.1.0'
export ANSIBLE=/tmp/ansible/bin/ansible
export ANSIBLE_PLAYBOOK=/tmp/ansible/bin/ansible-playbook
export ANSIBLE_CONFIG="$WORKSPACE/ansible.cfg"
export DSCONF=/tmp/ansible/bin/dsconf
# Write inventory file with groupings
COUNTER=1
PRIMARY_NODE=$(cat /etc/nodepool/primary_node_private)
echo "[primary]" > "$WORKSPACE/inventory"
echo "localhost ansible_connection=local host_counter=$COUNTER nodepool='{\"private_ipv4\": \"$PRIMARY_NODE\"}'" >> "$WORKSPACE/inventory"
echo "[subnodes]" >> "$WORKSPACE/inventory"
export SUBNODES=$(cat /etc/nodepool/sub_nodes_private)
for SUBNODE in $SUBNODES ; do
let COUNTER=COUNTER+1
echo "$SUBNODE host_counter=$COUNTER nodepool='{\"private_ipv4\": \"$SUBNODE\"}'" >> "$WORKSPACE/inventory"
done
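# With a single subnode the generated inventory ends up looking roughly
# like this (addresses are illustrative):
# [primary]
# localhost ansible_connection=local host_counter=1 nodepool='{"private_ipv4": "192.0.2.10"}'
# [subnodes]
# 192.0.2.11 host_counter=2 nodepool='{"private_ipv4": "192.0.2.11"}'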
# Write ansible config file
cat > $ANSIBLE_CONFIG <<EOF
[defaults]
callback_plugins = $WORKSPACE/devstack-gate/playbooks/plugins/callback:/tmp/ansible/lib/python${PYTHON_VER}/site-packages/ara/plugins/callbacks
stdout_callback = devstack
# Disable SSH host key checking
host_key_checking = False
EOF
# NOTE(clarkb): for simplicity we evaluate all bash vars in ansible commands
# on the node running these scripts; we do not pass unexpanded vars
# through to ansible shell commands. This may need to change in the future but
# for now the current setup is simple, consistent and easy to understand.
# This runs in a subshell so the parent shell does not export the huge PROJECTS variable to every child process
(export PROJECTS; export > "$WORKSPACE/test_env.sh")
# Copy bootstrap to remote hosts
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" -m copy \
-a "src='$WORKSPACE/devstack-gate' dest='$WORKSPACE'"
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" -m copy \
-a "src='$WORKSPACE/test_env.sh' dest='$WORKSPACE/test_env.sh'"
# Make a directory to store logs
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m file \
-a "path='$WORKSPACE/logs' state=absent"
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m file \
-a "path='$WORKSPACE/logs' state=directory"
# Record a file to reproduce this build
reproduce "$JOB_PROJECTS"
# Run ansible to do setup_host on all nodes.
echo "Setting up the hosts"
# This function handles any common exit paths from here on in
function exit_handler {
local status=$1
# Generate ARA report
/tmp/ansible/bin/ara generate html $WORKSPACE/logs/ara
gzip --recursive --best $WORKSPACE/logs/ara
if [[ $status -ne 0 ]]; then
echo "*** FAILED with status: $status"
else
echo "SUCCESSFULLY FINISHED"
fi
exit $status
}
# little helper that runs anything passed in under tsfilter
function run_command {
local fn="$@"
local cmd=""
# note that we want to keep the tsfilter separate; it's a trap for
# new-players that errexit isn't applied if we do "&& tsfilter
# ..." and thus we won't pick up any failures in the commands the
# function runs.
#
# Note we also send stderr to stdout, otherwise ansible consumes
# each separately and outputs them separately. That doesn't work
# well for log files; especially running "xtrace" in bash which
# puts tracing on stderr.
read -r -d '' cmd <<EOF
source '$WORKSPACE/test_env.sh'
source '$WORKSPACE/devstack-gate/functions.sh'
set -o errexit
tsfilter $fn 2>&1
executable=/bin/bash
EOF
echo "$cmd"
}
rc=0
echo "... this takes a few seconds (logs at logs/devstack-gate-setup-host.txt.gz)"
$ANSIBLE_PLAYBOOK -f 5 -i "$WORKSPACE/inventory" "$WORKSPACE/devstack-gate/playbooks/setup_host.yaml" \
&> "$WORKSPACE/logs/devstack-gate-setup-host.txt" || rc=$?
if [[ $rc -ne 0 ]]; then
exit_handler $rc;
fi
if [ -n "$DEVSTACK_GATE_GRENADE" ]; then
start=$(date +%s)
echo "Setting up the new (migrate to) workspace"
echo "... this takes 3 - 5 minutes (logs at logs/devstack-gate-setup-workspace-new.txt.gz)"
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "$(run_command setup_workspace '$GRENADE_NEW_BRANCH' '$BASE/new')" \
&> "$WORKSPACE/logs/devstack-gate-setup-workspace-new.txt" || rc=$?
if [[ $rc -ne 0 ]]; then
exit_handler $rc;
fi
echo "Setting up the old (migrate from) workspace ..."
echo "... this takes 3 - 5 minutes (logs at logs/devstack-gate-setup-workspace-old.txt.gz)"
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "$(run_command setup_workspace '$GRENADE_OLD_BRANCH' '$BASE/old')" \
&> "$WORKSPACE/logs/devstack-gate-setup-workspace-old.txt" || rc=$?
end=$(date +%s)
took=$((($end - $start) / 60))
if [[ "$took" -gt 20 ]]; then
echo "WARNING: setup of 2 workspaces took > 20 minutes, this is a very slow node."
fi
if [[ $rc -ne 0 ]]; then
exit_handler $rc;
fi
else
echo "Setting up the workspace"
echo "... this takes 3 - 5 minutes (logs at logs/devstack-gate-setup-workspace-new.txt.gz)"
start=$(date +%s)
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "$(run_command setup_workspace '$OVERRIDE_ZUUL_BRANCH' '$BASE/new')" \
&> "$WORKSPACE/logs/devstack-gate-setup-workspace-new.txt" || rc=$?
end=$(date +%s)
took=$((($end - $start) / 60))
if [[ "$took" -gt 10 ]]; then
echo "WARNING: setup workspace took > 10 minutes, this is a very slow node."
fi
if [[ $rc -ne 0 ]]; then
exit_handler $rc;
fi
fi
# relocate and symlink logs into $BASE to save space on the root filesystem
# TODO: make this more ansibley
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m shell -a "
if [ -d '$WORKSPACE/logs' -a \! -e '$BASE/logs' ]; then
sudo mv '$WORKSPACE/logs' '$BASE/'
ln -s '$BASE/logs' '$WORKSPACE/'
fi executable=/bin/bash"
# The DEVSTACK_GATE_SETTINGS variable may contain a path to a script that
# should be sourced after the environment has been set up. This is useful for
# allowing projects to provide a script in their repo that sets some custom
# environment variables.
check_for_devstack_gate_settings() {
if [ -f $1 ] ; then
return 0
else
return 1
fi
}
if [ -n "${DEVSTACK_GATE_SETTINGS}" ] ; then
if check_for_devstack_gate_settings ${DEVSTACK_GATE_SETTINGS} ; then
source ${DEVSTACK_GATE_SETTINGS}
else
echo "WARNING: DEVSTACK_GATE_SETTINGS file does not exist: '${DEVSTACK_GATE_SETTINGS}'"
fi
fi
# Note that hooks should be multihost aware if necessary.
# devstack-vm-gate-wrap.sh will not automagically run the hooks on each node.
# Run pre test hook if we have one
with_timeout call_hook_if_defined "pre_test_hook"
GATE_RETVAL=$?
if [ $GATE_RETVAL -ne 0 ]; then
echo "ERROR: the pre-test setup script run by this job failed - exit code: $GATE_RETVAL"
fi
# Run the gate function
if [ $GATE_RETVAL -eq 0 ]; then
echo "Running gate_hook"
with_timeout "gate_hook"
GATE_RETVAL=$?
if [ $GATE_RETVAL -ne 0 ]; then
echo "ERROR: the main setup script run by this job failed - exit code: $GATE_RETVAL"
fi
fi
RETVAL=$GATE_RETVAL
if [ $GATE_RETVAL -ne 0 ]; then
echo " please look at the relevant log files to determine the root cause"
echo "Running devstack worlddump.py"
sudo $BASE/new/devstack/tools/worlddump.py -d $BASE/logs
fi
# Run post test hook if we have one
if [ $GATE_RETVAL -eq 0 ]; then
# Run post_test_hook if we have one
with_timeout call_hook_if_defined "post_test_hook"
RETVAL=$?
fi
if [ $GATE_RETVAL -eq 137 ] && [ -f $WORKSPACE/gate.pid ] ; then
echo "Job timed out"
GATEPID=`cat $WORKSPACE/gate.pid`
echo "Killing process group ${GATEPID}"
sudo kill -s 9 -${GATEPID}
fi
echo "Cleaning up host"
echo "... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)"
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "$(run_command cleanup_host)" &> "$WORKSPACE/devstack-gate-cleanup-host.txt"
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" -m synchronize \
-a "mode=pull src='$BASE/logs/' dest='$BASE/logs/subnode-{{ host_counter }}' copy_links=yes"
sudo mv $WORKSPACE/devstack-gate-cleanup-host.txt $BASE/logs/
exit_handler $RETVAL

View File

@ -1,919 +0,0 @@
#!/bin/bash
# Script that is run on the devstack vm; configures and
# invokes devstack.
# Copyright (C) 2011-2012 OpenStack LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o xtrace
# Keep track of the devstack directory
TOP_DIR=$(cd $(dirname "$0") && pwd)
# Prepare the environment
# -----------------------
# Import common functions
source $TOP_DIR/functions.sh
# Get access to iniset and friends
# NOTE(sdague): as soon as we put
# iniget into dsconf, we can remove this.
source $BASE/new/devstack/inc/ini-config
# redefine localrc_set to use dsconf
function localrc_set {
local lcfile=$1
local key=$2
local value=$3
$DSCONF setlc "$lcfile" "$key" "$value"
}
echo $PPID > $WORKSPACE/gate.pid
source `dirname "$(readlink -f "$0")"`/functions.sh
# Need to set FIXED_RANGE for pre-ocata devstack
FIXED_RANGE=${DEVSTACK_GATE_FIXED_RANGE:-10.1.0.0/20}
IPV4_ADDRS_SAFE_TO_USE=${DEVSTACK_GATE_IPV4_ADDRS_SAFE_TO_USE:-${DEVSTACK_GATE_FIXED_RANGE:-10.1.0.0/20}}
FLOATING_RANGE=${DEVSTACK_GATE_FLOATING_RANGE:-172.24.5.0/24}
PUBLIC_NETWORK_GATEWAY=${DEVSTACK_GATE_PUBLIC_NETWORK_GATEWAY:-172.24.5.1}
# The next two values are used in multinode testing and are related
# to the floating range. For multinode test envs to know how to route
# packets to floating IPs on other hosts we put addresses on the compute
# node interfaces on a network that overlaps the FLOATING_RANGE. This
# automagically sets up routing in a sane way. By default we put floating
# IPs on 172.24.5.0/24 and compute nodes get addresses in the 172.24.4/23
# space. Note that while the FLOATING_RANGE should overlap the
# FLOATING_HOST_* space you should have enough sequential room starting at
# the beginning of your FLOATING_HOST range to give one IP address to each
# compute host without letting compute host IPs run into the FLOATING_RANGE.
# By default this lets us have 255 compute hosts (172.24.4.1 - 172.24.4.255).
FLOATING_HOST_PREFIX=${DEVSTACK_GATE_FLOATING_HOST_PREFIX:-172.24.4}
FLOATING_HOST_MASK=${DEVSTACK_GATE_FLOATING_HOST_MASK:-23}
# Get the smallest local MTU
LOCAL_MTU=$(ip link show | sed -ne 's/.*mtu \([0-9]\+\).*/\1/p' | sort -n | head -1)
# 50 bytes is overhead for vxlan (which is greater than GRE),
# allowing us to use either overlay option with this MTU.
EXTERNAL_BRIDGE_MTU=$((LOCAL_MTU - 50))
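# For example, a standard 1500 byte local MTU yields an external bridge
# MTU of 1450.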
function setup_ssh {
# Copy the SSH key from /etc/nodepool/id_rsa{.pub} to the specified
# directory on 'all' the nodes. 'all' the nodes consists of the primary
# node and all of the subnodes.
local path=$1
local dest_file=${2:-id_rsa}
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m file \
-a "path='$path' mode=0700 state=directory"
# Note that we append to the authorized keys file just in case something
# is already authorized to ssh with content in that file.
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m lineinfile \
-a "line={{ lookup('file', '/etc/nodepool/id_rsa.pub') }} dest='$path/authorized_keys' insertafter=EOF create=yes mode=0600"
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m copy \
-a "src=/etc/nodepool/id_rsa.pub dest='$path/${dest_file}.pub' mode=0600"
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m copy \
-a "src=/etc/nodepool/id_rsa dest='$path/${dest_file}' mode=0400"
}
function setup_nova_net_networking {
local localrc=$1
local primary_node=$2
shift 2
local sub_nodes=$@
# We always setup multinode connectivity to work around an
# issue with nova net configuring br100 to take over eth0
# by default.
$ANSIBLE_PLAYBOOK -f 5 -i "$WORKSPACE/inventory" "$WORKSPACE/devstack-gate/playbooks/ovs_vxlan_bridge.yaml" \
-e "bridge_name=br_pub" \
-e "host_ip=$primary_node" \
-e "set_ips=True" \
-e "ovs_starting_offset=1" \
-e "pub_addr_prefix=$FLOATING_HOST_PREFIX" \
-e "pub_addr_mask=$FLOATING_HOST_MASK" \
-e "peer_ips=$sub_nodes"
$ANSIBLE_PLAYBOOK -f 5 -i "$WORKSPACE/inventory" "$WORKSPACE/devstack-gate/playbooks/ovs_vxlan_bridge.yaml" \
-e "bridge_name=br_flat" \
-e "host_ip=$primary_node" \
-e "set_ips=False" \
-e "ovs_starting_offset=128" \
-e "peer_ips=$sub_nodes"
localrc_set $localrc "FLAT_INTERFACE" "br_flat"
localrc_set $localrc "PUBLIC_INTERFACE" "br_pub"
}
function setup_multinode_connectivity {
local mode=${1:-"devstack"}
# Multinode setup variables:
#
# ``localrc`` - location to write localrc content on the primary
# node. In grenade mode we write to the grenade template that is
# copied into old and new.
#
# ``old_or_new`` - should the subnodes be computed on the old side
# or new side. For grenade where we don't upgrade them, calculate
# on the old side.
local old_or_new="new"
local localconf
local devstack_dir
if [[ "$mode" == "grenade" ]]; then
localconf=$BASE/new/grenade/devstack.localrc
old_or_new="old"
devstack_dir=$BASE/$old_or_new/devstack
else
devstack_dir=$BASE/$old_or_new/devstack
localconf=$devstack_dir/local.conf
fi
# set explicit paths on all conf files we're writing so that
# current working directory doesn't introduce subtle bugs.
local sub_localconf=$devstack_dir/sub_local.conf
set -x # for now enabling debug and do not turn it off
setup_localrc $old_or_new "$sub_localconf" "sub"
local primary_node
primary_node=$(cat /etc/nodepool/primary_node_private)
local sub_nodes
sub_nodes=$(cat /etc/nodepool/sub_nodes_private)
if [[ "$DEVSTACK_GATE_NEUTRON" -ne '1' ]]; then
setup_nova_net_networking $localconf $primary_node $sub_nodes
localrc_set $sub_localconf "FLAT_INTERFACE" "br_flat"
localrc_set $sub_localconf "PUBLIC_INTERFACE" "br_pub"
localrc_set $sub_localconf "MULTI_HOST" "True"
# and on the master
localrc_set $localconf "MULTI_HOST" "True"
elif [[ "$DEVSTACK_GATE_NET_OVERLAY" -eq '1' ]]; then
$ANSIBLE_PLAYBOOK -f 5 -i "$WORKSPACE/inventory" "$WORKSPACE/devstack-gate/playbooks/ovs_vxlan_bridge.yaml" \
-e "bridge_name=br-ex" \
-e "host_ip=$primary_node" \
-e "set_ips=True" \
-e "ovs_starting_offset=1" \
-e "pub_addr_prefix=$FLOATING_HOST_PREFIX" \
-e "pub_addr_mask=$FLOATING_HOST_MASK" \
-e "peer_ips=$sub_nodes"
fi
if [[ "$DEVSTACK_GATE_IRONIC" -eq '1' ]]; then
# NOTE(vsaienko) Ironic VMs will be connected to this bridge
# in order to have access to VMs on other nodes.
$ANSIBLE_PLAYBOOK -f 5 -i "$WORKSPACE/inventory" "$WORKSPACE/devstack-gate/playbooks/ovs_vxlan_bridge.yaml" \
-e "bridge_name=br_ironic_vxlan" \
-e "host_ip=$primary_node" \
-e "set_ips=False" \
-e "ovs_starting_offset=128" \
-e "peer_ips=$sub_nodes"
localrc_set "$sub_localconf" "HOST_TOPOLOGY" "multinode"
localrc_set "$sub_localconf" "HOST_TOPOLOGY_ROLE" "subnode"
# NOTE(vsaienko) we assume for now that we are using only 1 subnode;
# each subnode should have a different switch name (bridge) as it is used
# by networking-generic-switch to uniquely identify the switch.
localrc_set "$sub_localconf" "IRONIC_VM_NETWORK_BRIDGE" "sub1brbm"
localrc_set "$sub_localconf" "OVS_PHYSICAL_BRIDGE" "sub1brbm"
localrc_set "$sub_localconf" "ENABLE_TENANT_TUNNELS" "False"
localrc_set "$localconf" "HOST_TOPOLOGY" "multinode"
localrc_set "$localconf" "HOST_TOPOLOGY_ROLE" "primary"
localrc_set "$localconf" "HOST_TOPOLOGY_SUBNODES" "$sub_nodes"
localrc_set "$localconf" "GENERIC_SWITCH_KEY_FILE" "$BASE/new/.ssh/id_rsa"
localrc_set "$localconf" "ENABLE_TENANT_TUNNELS" "False"
fi
echo "Preparing cross node connectivity"
setup_ssh $BASE/new/.ssh
setup_ssh ~root/.ssh
# TODO (clarkb) ansiblify the /etc/hosts and known_hosts changes
# set up ssh_known_hosts by IP and /etc/hosts
for NODE in $sub_nodes; do
ssh-keyscan $NODE >> /tmp/tmp_ssh_known_hosts
echo $NODE `remote_command $NODE hostname | tr -d '\r'` >> /tmp/tmp_hosts
done
ssh-keyscan `cat /etc/nodepool/primary_node_private` >> /tmp/tmp_ssh_known_hosts
echo `cat /etc/nodepool/primary_node_private` `hostname` >> /tmp/tmp_hosts
cat /tmp/tmp_hosts | sudo tee --append /etc/hosts
# set up ssh_known_host files based on hostname
for HOSTNAME in `cat /tmp/tmp_hosts | cut -d' ' -f2`; do
ssh-keyscan $HOSTNAME >> /tmp/tmp_ssh_known_hosts
done
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m copy \
-a "src=/tmp/tmp_ssh_known_hosts dest=/etc/ssh/ssh_known_hosts mode=0444"
for NODE in $sub_nodes; do
remote_copy_file /tmp/tmp_hosts $NODE:/tmp/tmp_hosts
remote_command $NODE "cat /tmp/tmp_hosts | sudo tee --append /etc/hosts > /dev/null"
rm -f /tmp/tmp_sub_localconf
# Build a custom local.conf for the subnode that has HOST_IP
# encoded. We do the HOST_IP add early so that it's a variable
# that can be used by other stanzas later.
$DSCONF setlc /tmp/tmp_sub_localconf "HOST_IP" "$NODE"
$DSCONF merge_lc /tmp/tmp_sub_localconf "$sub_localconf"
remote_copy_file /tmp/tmp_sub_localconf $NODE:$devstack_dir/local.conf
done
}
function setup_networking {
local mode=${1:-"devstack"}
# Neutron in single node setups does not need any special
# sauce to function.
if [[ "$DEVSTACK_GATE_TOPOLOGY" != "multinode" ]] && \
[[ "$DEVSTACK_GATE_NEUTRON" -ne '1' ]]; then
if [[ "$mode" == "grenade" ]]; then
setup_nova_net_networking "$BASE/new/grenade/devstack.local.conf.base" "127.0.0.1"
setup_nova_net_networking "$BASE/new/grenade/devstack.local.conf.target" "127.0.0.1"
else
setup_nova_net_networking "$BASE/new/devstack/local.conf" "127.0.0.1"
fi
elif [[ "$DEVSTACK_GATE_TOPOLOGY" == "multinode" ]]; then
setup_multinode_connectivity $mode
fi
}
# Discovers compute nodes (subnodes) and maps them to cells.
# NOTE(mriedem): We want to remove this if/when nova supports auto-registration
# of computes with cells, but that's not happening in Ocata.
function discover_hosts {
# We have to run this on the primary node AFTER the subnodes have been
# setup. Since discover_hosts is really only needed for Ocata, this checks
# to see if the script exists in the devstack installation first.
# NOTE(danms): This is ||'d with an assertion that the script does not exist,
# so that if we actually failed the script, we'll exit nonzero here instead
# of ignoring failures along with the case where there is no script.
# TODO(mriedem): Would be nice to do this with wrapped lines.
$ANSIBLE primary -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "cd $BASE/new/devstack/ && (test -f tools/discover_hosts.sh && sudo -H -u stack DSTOOLS_VERSION=$DSTOOLS_VERSION stdbuf -oL -eL ./tools/discover_hosts.sh) || (! test -f tools/discover_hosts.sh)" \
&> "$WORKSPACE/logs/devstack-gate-discover-hosts.txt"
}
function setup_localrc {
local localrc_oldnew=$1;
local localrc_file=$2
local role=$3
# The branch we use to compute the feature matrix is pretty
# straightforward. If it's a GRENADE job, we use the
# GRENADE_OLD_BRANCH, otherwise the branch ZUUL has told us it's
# running on.
local branch_for_matrix=${GRENADE_OLD_BRANCH:-$OVERRIDE_ZUUL_BRANCH}
# Allow calling context to pre-populate the localrc file
# with additional values
if [[ -z $KEEP_LOCALRC ]] ; then
rm -f $localrc_file
fi
# are we being explicit or additive?
if [[ ! -z $OVERRIDE_ENABLED_SERVICES ]]; then
MY_ENABLED_SERVICES=${OVERRIDE_ENABLED_SERVICES}
else
# Install PyYaml for test-matrix.py
PYTHON_PATH=$(which python3 || which python)
PYTHON_NAME=$(basename $PYTHON_PATH)
if uses_debs; then
if ! dpkg -s "${PYTHON_NAME}-yaml" > /dev/null; then
apt_get_install "${PYTHON_NAME}-yaml"
fi
elif is_suse; then
if [ "$PYTHON_NAME" = "python" ] ; then
sudo zypper -n install python-PyYAML
elif [ "$PYTHON_NAME" = "python3" ] ; then
sudo zypper -n install python3-PyYAML
fi
elif is_fedora; then
if [ "$PYTHON_NAME" = "python" ] ; then
if ! rpm --quiet -q "PyYAML"; then
sudo yum install -y PyYAML
fi
elif [ "$PYTHON_NAME" = "python3" ] ; then
if ! rpm --quiet -q "python3-PyYAML"; then
sudo yum install -y python3-PyYAML
fi
fi
fi
local test_matrix_role='primary'
if [[ $role = sub ]]; then
test_matrix_role='subnode'
fi
TEST_MATRIX='roles/test-matrix/library/test_matrix.py -n'
MY_ENABLED_SERVICES=$(cd $BASE/new/devstack-gate && $PYTHON_PATH $TEST_MATRIX -b $branch_for_matrix -f $DEVSTACK_GATE_FEATURE_MATRIX -r $test_matrix_role)
local original_enabled_services
original_enabled_services=$(cd $BASE/new/devstack-gate && $PYTHON_PATH $TEST_MATRIX -b $branch_for_matrix -f $DEVSTACK_GATE_FEATURE_MATRIX -r primary)
echo "MY_ENABLED_SERVICES: ${MY_ENABLED_SERVICES}"
echo "original_enabled_services: ${original_enabled_services}"
# Allow optional injection of ENABLED_SERVICES from the calling context
if [[ ! -z $ENABLED_SERVICES ]] ; then
MY_ENABLED_SERVICES+=,$ENABLED_SERVICES
fi
fi
if [[ ! -z $DEVSTACK_GATE_USE_PYTHON3 ]] ; then
localrc_set $localrc_file "USE_PYTHON3" "$DEVSTACK_GATE_USE_PYTHON3"
fi
if [[ "$DEVSTACK_GATE_CEPH" == "1" ]]; then
localrc_set $localrc_file "CINDER_ENABLED_BACKENDS" "ceph:ceph"
localrc_set $localrc_file "TEMPEST_STORAGE_PROTOCOL" "ceph"
fi
# the exercises we *don't* want to test on for devstack
SKIP_EXERCISES=boot_from_volume,bundle,client-env,euca
if [[ "$DEVSTACK_GATE_NEUTRON" -eq "1" ]]; then
localrc_set $localrc_file "Q_USE_DEBUG_COMMAND" "True"
localrc_set $localrc_file "NETWORK_GATEWAY" "10.1.0.1"
fi
if [[ "$DEVSTACK_GATE_NEUTRON_DVR" -eq "1" ]]; then
# The L3 agent on the first node runs in 'dvr' mode and the
# agents on the other nodes run in 'dvr_snat' mode
if [[ "$DEVSTACK_GATE_TOPOLOGY" == "aio" ]] || [[ $role = sub ]]; then
localrc_set $localrc_file "Q_DVR_MODE" "dvr_snat"
else
localrc_set $localrc_file "Q_DVR_MODE" "dvr"
fi
fi
localrc_set "$localrc_file" "USE_SCREEN" "False"
localrc_set "$localrc_file" "DEST" "$BASE/$localrc_oldnew"
# move DATA_DIR outside of DEST to keep DEST a bit cleaner
localrc_set "$localrc_file" "DATA_DIR" "$BASE/data"
localrc_set "$localrc_file" "ACTIVE_TIMEOUT" "90"
localrc_set "$localrc_file" "BOOT_TIMEOUT" "90"
localrc_set "$localrc_file" "ASSOCIATE_TIMEOUT" "60"
localrc_set "$localrc_file" "TERMINATE_TIMEOUT" "60"
localrc_set "$localrc_file" "MYSQL_PASSWORD" "secretmysql"
localrc_set "$localrc_file" "DATABASE_PASSWORD" "secretdatabase"
localrc_set "$localrc_file" "RABBIT_PASSWORD" "secretrabbit"
localrc_set "$localrc_file" "ADMIN_PASSWORD" "secretadmin"
localrc_set "$localrc_file" "SERVICE_PASSWORD" "secretservice"
localrc_set "$localrc_file" "SERVICE_TOKEN" "111222333444"
localrc_set "$localrc_file" "SWIFT_HASH" "1234123412341234"
localrc_set "$localrc_file" "ROOTSLEEP" "0"
# ERROR_ON_CLONE should never be set to FALSE in gate jobs.
# Setting up git trees must be done by zuul
# because it needs specific git references directly from gerrit
# to correctly do testing. Otherwise you are not testing
# the code you have posted for review.
localrc_set "$localrc_file" "ERROR_ON_CLONE" "True"
# When the tempest service is enabled, devstack creates a virtualenv
# for tempest, and that virtualenv is what we run the tests out of.
# Devstack also installs tempest globally by default. We don't need
# that, and the global install adds to the devstack-gate runtime
# (extra steps and extra packages that affect OSC), so just don't
# install it globally.
localrc_set "$localrc_file" "INSTALL_TEMPEST" "False"
# Since git clone can't be used for novnc in gates, force it to install the packages
localrc_set "$localrc_file" "NOVNC_FROM_PACKAGE" "True"
localrc_set "$localrc_file" "ENABLED_SERVICES" "$MY_ENABLED_SERVICES"
localrc_set "$localrc_file" "SKIP_EXERCISES" "$SKIP_EXERCISES"
# Screen console logs will capture service logs.
localrc_set "$localrc_file" "SYSLOG" "False"
localrc_set "$localrc_file" "SCREEN_LOGDIR" "$BASE/$localrc_oldnew/screen-logs"
localrc_set "$localrc_file" "LOGFILE" "$BASE/$localrc_oldnew/devstacklog.txt"
localrc_set "$localrc_file" "VERBOSE" "True"
localrc_set "$localrc_file" "FIXED_RANGE" "$FIXED_RANGE"
localrc_set "$localrc_file" "IPV4_ADDRS_SAFE_TO_USE" "$IPV4_ADDRS_SAFE_TO_USE"
localrc_set "$localrc_file" "FLOATING_RANGE" "$FLOATING_RANGE"
localrc_set "$localrc_file" "PUBLIC_NETWORK_GATEWAY" "$PUBLIC_NETWORK_GATEWAY"
localrc_set "$localrc_file" "FIXED_NETWORK_SIZE" "4096"
localrc_set "$localrc_file" "VIRT_DRIVER" "$DEVSTACK_GATE_VIRT_DRIVER"
localrc_set "$localrc_file" "SWIFT_REPLICAS" "1"
localrc_set "$localrc_file" "SWIFT_START_ALL_SERVICES" "False"
localrc_set "$localrc_file" "LOG_COLOR" "False"
# Don't reset the requirements.txt files after g-r updates
localrc_set "$localrc_file" "UNDO_REQUIREMENTS" "False"
# NOTE(rosmaita): change I1ef1fe564123216b19582262726cdb1078b7650e makes
# the following line a no-op when used with Train or later devstacks
localrc_set "$localrc_file" "CINDER_PERIODIC_INTERVAL" "10"
# TODO(mriedem): Remove OS_NO_CACHE after newton-eol for devstack.
localrc_set "$localrc_file" "export OS_NO_CACHE" "True"
localrc_set "$localrc_file" "LIBS_FROM_GIT" "$DEVSTACK_PROJECT_FROM_GIT"
# set this until all testing platforms have libvirt >= 1.2.11
# see bug #1501558
localrc_set "$localrc_file" "EBTABLES_RACE_FIX" "True"
# This will put libvirt coredumps into /var/core
# https://bugs.launchpad.net/nova/+bug/1643911
localrc_set "$localrc_file" DEBUG_LIBVIRT_COREDUMPS "True"
if [[ "$DEVSTACK_GATE_TOPOLOGY" == "multinode" ]] && [[ $DEVSTACK_GATE_NEUTRON -eq "1" ]]; then
# Reduce the MTU on br-ex to match the MTU of underlying tunnels
localrc_set "$localrc_file" "PUBLIC_BRIDGE_MTU" "$EXTERNAL_BRIDGE_MTU"
fi
localrc_set "$localrc_file" "CINDER_VOLUME_CLEAR" "${DEVSTACK_CINDER_VOLUME_CLEAR}"
if [[ "$DEVSTACK_GATE_TEMPEST_HEAT_SLOW" -eq "1" ]]; then
localrc_set "$localrc_file" "HEAT_CREATE_TEST_IMAGE" "False"
# Use Fedora 20 for heat test image, it has heat-cfntools pre-installed
localrc_set "$localrc_file" "HEAT_FETCHED_TEST_IMAGE" "Fedora-i386-20-20131211.1-sda"
fi
if [[ "$DEVSTACK_GATE_VIRT_DRIVER" == "libvirt" ]]; then
if [[ -n "$DEVSTACK_GATE_LIBVIRT_TYPE" ]]; then
localrc_set "$localrc_file" "LIBVIRT_TYPE" "${DEVSTACK_GATE_LIBVIRT_TYPE}"
fi
fi
if [[ "$DEVSTACK_GATE_VIRT_DRIVER" == "ironic" ]]; then
export TEMPEST_OS_TEST_TIMEOUT=${DEVSTACK_GATE_OS_TEST_TIMEOUT:-1200}
localrc_set "$localrc_file" "IRONIC_DEPLOY_DRIVER" "$DEVSTACK_GATE_IRONIC_DRIVER"
localrc_set "$localrc_file" "IRONIC_BAREMETAL_BASIC_OPS" "True"
localrc_set "$localrc_file" "IRONIC_VM_LOG_DIR" "$BASE/$localrc_oldnew/ironic-bm-logs"
localrc_set "$localrc_file" "DEFAULT_INSTANCE_TYPE" "baremetal"
localrc_set "$localrc_file" "BUILD_TIMEOUT" "${DEVSTACK_GATE_TEMPEST_BAREMETAL_BUILD_TIMEOUT:-600}"
localrc_set "$localrc_file" "IRONIC_CALLBACK_TIMEOUT" "600"
localrc_set "$localrc_file" "Q_AGENT" "openvswitch"
localrc_set "$localrc_file" "Q_ML2_TENANT_NETWORK_TYPE" "vxlan"
if [[ "$DEVSTACK_GATE_IRONIC_BUILD_RAMDISK" -eq 0 ]]; then
localrc_set "$localrc_file" "IRONIC_BUILD_DEPLOY_RAMDISK" "False"
else
localrc_set "$localrc_file" "IRONIC_BUILD_DEPLOY_RAMDISK" "True"
fi
if [[ -z "${DEVSTACK_GATE_IRONIC_DRIVER%%agent*}" ]]; then
localrc_set "$localrc_file" "SWIFT_ENABLE_TEMPURLS" "True"
localrc_set "$localrc_file" "SWIFT_TEMPURL_KEY" "secretkey"
localrc_set "$localrc_file" "IRONIC_ENABLED_DRIVERS" "fake,agent_ipmitool"
# agent driver doesn't support ephemeral volumes yet
localrc_set "$localrc_file" "IRONIC_VM_EPHEMERAL_DISK" "0"
# agent CoreOS ramdisk is a little heavy
localrc_set "$localrc_file" "IRONIC_VM_SPECS_RAM" "1024"
else
localrc_set "$localrc_file" "IRONIC_ENABLED_DRIVERS" "fake,pxe_ipmitool"
localrc_set "$localrc_file" "IRONIC_VM_EPHEMERAL_DISK" "1"
fi
fi
if [[ "$DEVSTACK_GATE_VIRT_DRIVER" == "xenapi" ]]; then
if [ ! $DEVSTACK_GATE_XENAPI_DOM0_IP -o ! $DEVSTACK_GATE_XENAPI_DOMU_IP -o ! $DEVSTACK_GATE_XENAPI_PASSWORD ]; then
echo "XenAPI must have DEVSTACK_GATE_XENAPI_DOM0_IP, DEVSTACK_GATE_XENAPI_DOMU_IP and DEVSTACK_GATE_XENAPI_PASSWORD all set"
exit 1
fi
localrc_set "$localrc_file" "SKIP_EXERCISES" "${SKIP_EXERCISES},volumes"
localrc_set "$localrc_file" "XENAPI_PASSWORD" "${DEVSTACK_GATE_XENAPI_PASSWORD}"
localrc_set "$localrc_file" "XENAPI_CONNECTION_URL" "http://${DEVSTACK_GATE_XENAPI_DOM0_IP}"
localrc_set "$localrc_file" "VNCSERVER_PROXYCLIENT_ADDRESS" "${DEVSTACK_GATE_XENAPI_DOM0_IP}"
localrc_set "$localrc_file" "VIRT_DRIVER" "xenserver"
# A separate xapi network is created with this name-label
localrc_set "$localrc_file" "FLAT_NETWORK_BRIDGE" "vmnet"
# A separate xapi network on eth4 serves the purpose of the public network.
# This interface is added in Citrix's XenServer environment as an internal
# interface
localrc_set "$localrc_file" "PUBLIC_INTERFACE" "eth4"
# The xapi network "vmnet" is connected to eth3 in domU
# We need to explicitly specify these, as the devstack/xenserver driver
# sets GUEST_INTERFACE_DEFAULT
localrc_set "$localrc_file" "VLAN_INTERFACE" "eth3"
localrc_set "$localrc_file" "FLAT_INTERFACE" "eth3"
# Explicitly set HOST_IP, so that it will be passed down to xapi,
# thus it will be able to reach glance
localrc_set "$localrc_file" "HOST_IP" "${DEVSTACK_GATE_XENAPI_DOMU_IP}"
localrc_set "$localrc_file" "SERVICE_HOST" "${DEVSTACK_GATE_XENAPI_DOMU_IP}"
# Disable firewall
localrc_set "$localrc_file" "XEN_FIREWALL_DRIVER" "nova.virt.firewall.NoopFirewallDriver"
# Disable agent
localrc_set "$localrc_file" "EXTRA_OPTS" "(\"xenapi_disable_agent=True\")"
# Add a separate device for volumes
localrc_set "$localrc_file" "VOLUME_BACKING_DEVICE" "/dev/xvdb"
# Set multi-host config
localrc_set "$localrc_file" "MULTI_HOST" "1"
fi
if [[ "$DEVSTACK_GATE_TEMPEST" -eq "1" ]]; then
# Volume tests in Tempest require a number of volumes
# to be created, each of 1G size. Devstack's default
# volume backing file size is 10G.
#
# The 24G setting is expected to be enough even
# for parallel runs.
localrc_set "$localrc_file" "VOLUME_BACKING_FILE_SIZE" "24G"
# In order to ensure the glance http tests don't time out, point
# TEMPEST_HTTP_IMAGE at an address hosted in infra on a service
# that needs to be up for anything else to work anyway.
localrc_set "$localrc_file" "TEMPEST_HTTP_IMAGE" "http://git.openstack.org/static/openstack.png"
fi
if [[ "$DEVSTACK_GATE_TEMPEST_DISABLE_TENANT_ISOLATION" -eq "1" ]]; then
localrc_set "$localrc_file" "TEMPEST_ALLOW_TENANT_ISOLATION" "False"
fi
if [[ -n "$DEVSTACK_GATE_GRENADE" ]]; then
if [[ "$localrc_oldnew" == "old" ]]; then
localrc_set "$localrc_file" "GRENADE_PHASE" "base"
else
localrc_set "$localrc_file" "GRENADE_PHASE" "target"
fi
localrc_set "$localrc_file" "CEILOMETER_USE_MOD_WSGI" "False"
localrc_set "$localrc_file" "GLANCE_STANDALONE" "False"
fi
if [[ "$DEVSTACK_GATE_TEMPEST_LARGE_OPS" -eq "1" ]]; then
# NOTE(danms): Temporary transition to =NUM_RESOURCES
localrc_set "$localrc_file" "VIRT_DRIVER" "fake"
localrc_set "$localrc_file" "TEMPEST_LARGE_OPS_NUMBER" "50"
elif [[ "$DEVSTACK_GATE_TEMPEST_LARGE_OPS" -gt "1" ]]; then
# use fake virt driver and 10 copies of nova-compute
localrc_set "$localrc_file" "VIRT_DRIVER" "fake"
# To make debugging easier, disabled until bug 1218575 is fixed.
# echo "NUMBER_FAKE_NOVA_COMPUTE=10" >>"$localrc_file"
localrc_set "$localrc_file" "TEMPEST_LARGE_OPS_NUMBER" "$DEVSTACK_GATE_TEMPEST_LARGE_OPS"
fi
if [[ "$DEVSTACK_GATE_CONFIGDRIVE" -eq "1" ]]; then
localrc_set "$localrc_file" "FORCE_CONFIG_DRIVE" "True"
else
localrc_set "$localrc_file" "FORCE_CONFIG_DRIVE" "False"
fi
if [[ "$CEILOMETER_NOTIFICATION_TOPICS" ]]; then
# Add specified ceilometer notification topics to localrc
# Set to notifications,profiler to enable profiling
localrc_set "$localrc_file" "CEILOMETER_NOTIFICATION_TOPICS" "$CEILOMETER_NOTIFICATION_TOPICS"
fi
if [[ "$DEVSTACK_GATE_INSTALL_TESTONLY" -eq "1" ]]; then
# Sometimes we do want the test packages
localrc_set "$localrc_file" "INSTALL_TESTONLY_PACKAGES" "True"
fi
if [[ "$DEVSTACK_GATE_TOPOLOGY" != "aio" ]]; then
localrc_set "$localrc_file" "NOVA_ALLOW_MOVE_TO_SAME_HOST" "False"
localrc_set "$localrc_file" "LIVE_MIGRATION_AVAILABLE" "True"
localrc_set "$localrc_file" "USE_BLOCK_MIGRATION_FOR_LIVE_MIGRATION" "True"
local primary_node
primary_node=`cat /etc/nodepool/primary_node_private`
localrc_set "$localrc_file" "SERVICE_HOST" "$primary_node"
if [[ "$role" = sub ]]; then
if [[ $original_enabled_services =~ "qpid" ]]; then
localrc_set "$localrc_file" "QPID_HOST" "$primary_node"
fi
if [[ $original_enabled_services =~ "rabbit" ]]; then
localrc_set "$localrc_file" "RABBIT_HOST" "$primary_node"
fi
localrc_set "$localrc_file" "DATABASE_HOST" "$primary_node"
if [[ $original_enabled_services =~ "mysql" ]]; then
localrc_set "$localrc_file" "DATABASE_TYPE" "mysql"
else
localrc_set "$localrc_file" "DATABASE_TYPE" "postgresql"
fi
localrc_set "$localrc_file" "GLANCE_HOSTPORT" "$primary_node:9292"
localrc_set "$localrc_file" "Q_HOST" "$primary_node"
# Set HOST_IP in subnodes before copying localrc to each node
else
localrc_set "$localrc_file" "HOST_IP" "$primary_node"
fi
fi
# If you specify a section of a project-config job with
#
# local_conf:
# conf: |
# [[local|localrc]]
# foo=a
# [[post-config|$NEUTRON_CONF]]
# [DEFAULT]
# global_physnet_mtu = 1400
#
# Then that whole local.conf fragment will get carried through to
# this special file, and we'll merge those values into *all*
# local.conf files in the job. That includes subnodes, and new &
# old in grenade.
#
# NOTE(sdague): the name of this file should be considered
# internal only, and jobs should not write to it directly, they
# should only use the project-config stanza.
if [[ -e "/tmp/dg-local.conf" ]]; then
$DSCONF merge_lc "$localrc_file" "/tmp/dg-local.conf"
fi
# a way to pass through arbitrary devstack config options so that
# we don't need to add new devstack-gate options every time we
# want to create a new config.
#
# NOTE(sdague): this assumes these are old school "localrc"
# sections; we should probably figure out a way to warn about
# using these.
if [[ "$role" = sub ]]; then
# If we are in a multinode environment, we may want to specify 2
# different sets of plugins
if [[ -n "$DEVSTACK_SUBNODE_CONFIG" ]]; then
$DSCONF setlc_raw "$localrc_file" "$DEVSTACK_SUBNODE_CONFIG"
else
if [[ -n "$DEVSTACK_LOCAL_CONFIG" ]]; then
$DSCONF setlc_raw "$localrc_file" "$DEVSTACK_LOCAL_CONFIG"
fi
fi
else
if [[ -n "$DEVSTACK_LOCAL_CONFIG" ]]; then
$DSCONF setlc_raw "$localrc_file" "$DEVSTACK_LOCAL_CONFIG"
fi
fi
# NOTE(sdague): new style local.conf declarations which need to
# merge late. Projects like neutron build up a lot of scenarios
# based on this, but they have to apply them late.
#
# TODO(sdague): subnode support.
if [[ -n "$DEVSTACK_LOCALCONF" ]]; then
local ds_conf_late="/tmp/ds-conf-late.conf"
echo "$DEVSTACK_LOCALCONF" > "$ds_conf_late"
$DSCONF merge_lc "$localrc_file" "$ds_conf_late"
fi
}
# This makes the stack user own the $BASE files and also changes the
# permissions on the logs directory so we can write to the logs when running
# devstack or grenade. This must be called AFTER setup_localrc.
function setup_access_for_stack_user {
# Make the workspace owned by the stack user
# It is not clear if the ansible file module can do this for us
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "chown -R stack:stack '$BASE'"
# allow us to add logs
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "chmod 777 '$WORKSPACE/logs'"
}
if [[ -n "$DEVSTACK_GATE_GRENADE" ]]; then
cd $BASE/new/grenade
setup_localrc "old" "devstack.local.conf.base" "primary"
setup_localrc "new" "devstack.local.conf.target" "primary"
cat <<EOF >$BASE/new/grenade/localrc
BASE_RELEASE=old
BASE_RELEASE_DIR=$BASE/\$BASE_RELEASE
BASE_DEVSTACK_DIR=\$BASE_RELEASE_DIR/devstack
BASE_DEVSTACK_BRANCH=$GRENADE_OLD_BRANCH
TARGET_RELEASE=new
TARGET_RELEASE_DIR=$BASE/\$TARGET_RELEASE
TARGET_DEVSTACK_DIR=\$TARGET_RELEASE_DIR/devstack
TARGET_DEVSTACK_BRANCH=$GRENADE_NEW_BRANCH
TARGET_RUN_SMOKE=False
SAVE_DIR=\$BASE_RELEASE_DIR/save
TEMPEST_CONCURRENCY=$TEMPEST_CONCURRENCY
export OS_TEST_TIMEOUT=$DEVSTACK_GATE_OS_TEST_TIMEOUT
VERBOSE=False
PLUGIN_DIR=\$TARGET_RELEASE_DIR
EOF
# Create a pass through variable that can add content to the
# grenade pluginrc. Needed for grenade external plugins in gate
# jobs.
if [[ -n "$GRENADE_PLUGINRC" ]]; then
echo "$GRENADE_PLUGINRC" >>$BASE/new/grenade/pluginrc
fi
if [[ "$DEVSTACK_GATE_TOPOLOGY" == "multinode" ]]; then
# ensure local.conf exists to remove conditional logic
if [[ $DEVSTACK_GATE_NEUTRON -eq "1" ]]; then
$DSCONF setlc_conf "devstack.local.conf.base" "post-config" "\$NEUTRON_CONF" \
"DEFAULT" "global_physnet_mtu" "$EXTERNAL_BRIDGE_MTU"
$DSCONF setlc_conf "devstack.local.conf.target" "post-config" "\$NEUTRON_CONF" \
"DEFAULT" "global_physnet_mtu" "$EXTERNAL_BRIDGE_MTU"
fi
# Build the post-stack.sh config; this will be run as the stack user so no sudo is required
cat > $BASE/new/grenade/post-stack.sh <<EOF
#!/bin/bash
set -x
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "cd '$BASE/old/devstack' && stdbuf -oL -eL ./stack.sh"
if [[ -e "$BASE/old/devstack/tools/discover_hosts.sh" ]]; then
$BASE/old/devstack/tools/discover_hosts.sh
fi
EOF
sudo chmod a+x $BASE/new/grenade/post-stack.sh
fi
setup_networking "grenade"
setup_access_for_stack_user
echo "Running grenade ..."
echo "This takes a good 30 minutes or more"
cd $BASE/new/grenade
sudo -H -u stack DSTOOLS_VERSION=$DSTOOLS_VERSION stdbuf -oL -eL ./grenade.sh
cd $BASE/new/devstack
else
cd $BASE/new/devstack
setup_localrc "new" "local.conf" "primary"
if [[ "$DEVSTACK_GATE_TOPOLOGY" == "multinode" ]]; then
if [[ $DEVSTACK_GATE_NEUTRON -eq "1" ]]; then
localconf_set "local.conf" "post-config" "\$NEUTRON_CONF" \
"DEFAULT" "global_physnet_mtu" "$EXTERNAL_BRIDGE_MTU"
fi
fi
setup_networking
setup_access_for_stack_user
echo "Running devstack"
echo "... this takes 10 - 15 minutes (logs in logs/devstacklog.txt.gz)"
start=$(date +%s)
# Note stack.sh eventually redirects its output to
# logs/devstacklog.txt.gz as it says above; this is usually what's
# interesting to a developer. But before it gets to that point,
# there is a non-trivial amount of early setup work that happens
# that sometimes we need to debug. This is why we redirect to
# "devstack-early.txt" here.
$ANSIBLE primary -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "cd '$BASE/new/devstack' && sudo -H -u stack DSTOOLS_VERSION=$DSTOOLS_VERSION stdbuf -oL -eL ./stack.sh 2>&1 executable=/bin/bash" \
&> "$WORKSPACE/logs/devstack-early.txt"
if [ -d "$BASE/data/CA" ] && [ -f "$BASE/data/ca-bundle.pem" ] ; then
# Sync any data files which include certificates to be used if
# TLS is enabled
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" --become -m file \
-a "path='$BASE/data' state=directory owner=stack group=stack mode=0755"
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" --become -m file \
-a "path='$BASE/data/CA' state=directory owner=stack group=stack mode=0755"
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" \
--become -m synchronize \
-a "mode=push src='$BASE/data/ca-bundle.pem' dest='$BASE/data/ca-bundle.pem'"
sudo $ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" \
--become -u $USER -m synchronize \
-a "mode=push src='$BASE/data/CA' dest='$BASE/data'"
fi
# Run non controller setup after controller is up. This is necessary
# because services like nova apparently expect to have the controller in
# place before anything else.
$ANSIBLE subnodes -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "cd '$BASE/new/devstack' && sudo -H -u stack DSTOOLS_VERSION=$DSTOOLS_VERSION stdbuf -oL -eL ./stack.sh 2>&1 executable=/bin/bash" \
&> "$WORKSPACE/logs/devstack-subnodes-early.txt"
end=$(date +%s)
took=$((($end - $start) / 60))
if [[ "$took" -gt 20 ]]; then
echo "WARNING: devstack run took > 20 minutes, this is a very slow node."
fi
# Discover the hosts on a cells v2 deployment.
discover_hosts
# Provide a check that the right db was running.
# The log paths differ between Red Hat based and Debian based systems.
if [[ -f /usr/bin/yum ]]; then
POSTGRES_LOG_PATH="-d /var/lib/pgsql"
MYSQL_LOG_PATH="-f /var/log/mysqld.log"
else
POSTGRES_LOG_PATH="-d /var/log/postgresql"
MYSQL_LOG_PATH="-d /var/log/mysql"
fi
if [[ "$DEVSTACK_GATE_POSTGRES" -eq "1" ]]; then
if [[ ! $POSTGRES_LOG_PATH ]]; then
echo "Postgresql should have been used, but there are no logs"
exit 1
fi
else
if [[ ! $MYSQL_LOG_PATH ]]; then
echo "Mysql should have been used, but there are no logs"
exit 1
fi
fi
fi
if [[ "$DEVSTACK_GATE_UNSTACK" -eq "1" ]]; then
$ANSIBLE all -f 5 -i "$WORKSPACE/inventory" -m shell \
-a "cd '$BASE/new/devstack' && sudo -H -u stack ./unstack.sh"
fi
if [[ "$DEVSTACK_GATE_REMOVE_STACK_SUDO" -eq 1 ]]; then
echo "Removing sudo privileges for devstack user"
$ANSIBLE all --become -f 5 -i "$WORKSPACE/inventory" -m file \
-a "path=/etc/sudoers.d/50_stack_sh state=absent"
fi
if [[ "$DEVSTACK_GATE_TEMPEST" -eq "1" ]]; then
# Under tempest isolation, tempest will need to write the .tox dir and log files.
if [[ -d "$BASE/new/tempest" ]]; then
sudo chown -R tempest:stack $BASE/new/tempest
fi
# Make sure tempest user can write to its directory for
# lock-files.
if [[ -d $BASE/data/tempest ]]; then
sudo chown -R tempest:stack $BASE/data/tempest
fi
# ensure the cirros image files are accessible
if [[ -d $BASE/new/devstack/files ]]; then
sudo chmod -R o+rx $BASE/new/devstack/files
fi
# This ensures that multinode jobs expect at least 2 compute nodes;
# in the future we might want to increase that number.
# As part of tempest configuration it must run before the
# DEVSTACK_GATE_TEMPEST_NOTESTS check, because DEVSTACK_GATE_TEMPEST
# guarantees that tempest is configured regardless of whether any
# tests are actually executed.
if [[ "$DEVSTACK_GATE_TOPOLOGY" == "multinode" ]]; then
sudo $DSCONF iniset $BASE/new/tempest/etc/tempest.conf compute min_compute_nodes 2
fi
# if set, we don't need to run Tempest at all
if [[ "$DEVSTACK_GATE_TEMPEST_NOTESTS" -eq "1" ]]; then
exit 0
fi
# There are some parts of devstack that call the neutron api to verify the
# available extensions. We should never trust this for gate testing. This
# check ensures that on master we are always using the default value. (On
# stable branches we hard code a list of available extensions, so we can't
# use this check there.)
neutron_extensions=$(iniget "$BASE/new/tempest/etc/tempest.conf" "neutron-feature-enabled" "api_extensions")
if [[ $GIT_BRANCH == 'master' && ($neutron_extensions == 'all' || $neutron_extensions == '') ]] ; then
echo "Devstack misconfugred tempest and changed the value of api_extensions"
exit 1
fi
# From here until the end we rely on the fact that all the code fails if
# something is wrong, to enforce exit on bad test results.
set -o errexit
# NOTE(gmann): Use branch constraint because Tempest is pinned to the branch release
# instead of using master. We need to export it via env var TOX_CONSTRAINTS_FILE
# so that initial creation of tempest tox use stable branch constraint
# instead of master constraint as defined in tempest/tox.ini
# Hopefully we do not need to update these settings anymore, as from victoria
# onwards all jobs are supposed to be migrated to zuulv3 native jobs and no
# longer require devstack-gate.
stable_for_u_c="stable/[o-v]"
if [[ "$ZUUL_BRANCH" =~ $stable_for_u_c ]] ; then
export TOX_CONSTRAINTS_FILE=$BASE/new/requirements/upper-constraints.txt
else
export TOX_CONSTRAINTS_FILE=https://releases.openstack.org/constraints/upper/master
fi
# Older name, keep this for transition
export UPPER_CONSTRAINTS_FILE=$TOX_CONSTRAINTS_FILE
if [[ "${TEMPEST_OS_TEST_TIMEOUT:-}" != "" ]] ; then
TEMPEST_COMMAND="sudo -H -u tempest UPPER_CONSTRAINTS_FILE=$UPPER_CONSTRAINTS_FILE TOX_CONSTRAINTS_FILE=$TOX_CONSTRAINTS_FILE OS_TEST_TIMEOUT=$TEMPEST_OS_TEST_TIMEOUT tox"
else
TEMPEST_COMMAND="sudo -H -u tempest UPPER_CONSTRAINTS_FILE=$UPPER_CONSTRAINTS_FILE TOX_CONSTRAINTS_FILE=$TOX_CONSTRAINTS_FILE tox"
fi
cd $BASE/new/tempest
if [[ "$DEVSTACK_GATE_TEMPEST_REGEX" != "" ]] ; then
if [[ "$DEVSTACK_GATE_TEMPEST_ALL_PLUGINS" -eq "1" ]]; then
echo "Running tempest with plugins and a custom regex filter"
$TEMPEST_COMMAND -eall-plugin -- $DEVSTACK_GATE_TEMPEST_REGEX --concurrency=$TEMPEST_CONCURRENCY
sudo -H -u tempest .tox/all-plugin/bin/tempest list-plugins
else
echo "Running tempest with a custom regex filter"
$TEMPEST_COMMAND -eall -- $DEVSTACK_GATE_TEMPEST_REGEX --concurrency=$TEMPEST_CONCURRENCY
fi
elif [[ "$DEVSTACK_GATE_TEMPEST_ALL_PLUGINS" -eq "1" ]]; then
echo "Running tempest all-plugins test suite"
$TEMPEST_COMMAND -eall-plugin -- --concurrency=$TEMPEST_CONCURRENCY
sudo -H -u tempest .tox/all-plugin/bin/tempest list-plugins
elif [[ "$DEVSTACK_GATE_TEMPEST_ALL" -eq "1" ]]; then
echo "Running tempest all test suite"
$TEMPEST_COMMAND -eall -- 'tempest' --concurrency=$TEMPEST_CONCURRENCY
elif [[ "$DEVSTACK_GATE_TEMPEST_DISABLE_TENANT_ISOLATION" -eq "1" ]]; then
echo "Running tempest full test suite serially"
$TEMPEST_COMMAND -efull-serial
elif [[ "$DEVSTACK_GATE_TEMPEST_FULL" -eq "1" ]]; then
echo "Running tempest full test suite"
$TEMPEST_COMMAND -efull -- --concurrency=$TEMPEST_CONCURRENCY
elif [[ "$DEVSTACK_GATE_SMOKE_SERIAL" -eq "1" ]] ; then
echo "Running tempest smoke tests"
$TEMPEST_COMMAND -esmoke-serial
elif [[ "$DEVSTACK_GATE_TEMPEST_SCENARIOS" -eq "1" ]] ; then
echo "Running tempest scenario tests"
$TEMPEST_COMMAND -escenario -- $DEVSTACK_GATE_TEMPEST_REGEX
else
echo "Running tempest smoke tests"
$TEMPEST_COMMAND -esmoke -- --concurrency=$TEMPEST_CONCURRENCY
fi
fi

File diff suppressed because it is too large

View File

@ -1,2 +0,0 @@
This directory contains help files on how to read the logs generated
by devstack-gate. These are usually uploaded alongside the logs.

View File

@ -1,186 +0,0 @@
<h1>Guide to Devstack Gate Logs</h1>
<p>
Above is a collection of log files from
the <a href="../console.html">current tempest run</a>. Within them
should be everything you need to get to the bottom of a test
failure. The screen-* logs will be your most valuable tools in this
process; use the timestamp of the failed test from
the <a href="../console.html">current tempest run</a> to locate the
corresponding entries in them.
</p>
<h2>Types of logs </h2>
<p>
<ul>
<li> <b>cinder</b>
<ul>
<li><a href="screen-c-api.txt.gz">screen-c-api.txt.gz</a>: cinder-api
<li><a href="screen-c-bak.txt.gz">screen-c-bak.txt.gz</a>: cinder-backup
<li><a href="screen-c-sch.txt.gz">screen-c-sch.txt.gz</a>: cinder-scheduler
<li><a href="screen-c-vol.txt.gz">screen-c-vol.txt.gz</a>: cinder-volume
</ul>
<li> <b>ceilometer</b>
<ul>
<li><a href="screen-ceilometer-acentral.txt.gz">screen-ceilometer-acentral.txt.gz</a>: ceilometer-agent-central
<li><a href="screen-ceilometer-acompute.txt.gz">screen-ceilometer-acompute.txt.gz</a>: ceilometer-agent-compute
<li><a href="screen-ceilometer-alarm-evaluator.txt.gz">screen-ceilometer-alarm-evaluator.txt.gz</a>: ceilometer-alarm-evaluator
<li><a href="screen-ceilometer-alarm-notifier.txt.gz">screen-ceilometer-alarm-notifier.txt.gz</a>: ceilometer-alarm-notifier
<li><a href="screen-ceilometer-anotification.txt.gz">screen-ceilometer-anotification.txt.gz</a>: ceilometer-agent-notifier
<li><a href="screen-ceilometer-api.txt.gz">screen-ceilometer-api.txt.gz</a>: ceilometer-api
<li><a href="screen-ceilometer-collector.txt.gz">screen-ceilometer-collector.txt.gz</a>: ceilometer-collector
</ul>
<li> <b>designate</b>
<ul>
<li><a href="screen-designate-agent.txt.gz">screen-designate-agent.txt.gz</a>: designate-agent
<li><a href="screen-designate-api.txt.gz">screen-designate-api.txt.gz</a>: designate-api
<li><a href="screen-designate-central.txt.gz">screen-designate-central.txt.gz</a>: designate-central
<li><a href="screen-designate-mdns.txt.gz">screen-designate-mdns.txt.gz</a>: designate-mdns
<li><a href="screen-designate-pool-manager.txt.gz">screen-designate-pool-manager.txt.gz</a>: designate-pool-manager
<li><a href="screen-designate-sink.txt.gz">screen-designate-sink.txt.gz</a>: designate-sink
<li><a href="screen-designate-zone-manager.txt.gz">screen-designate-zone-manager.txt.gz</a>: designate-zone-manager
</ul>
<li> <b>glance</b>
<ul>
<li><a href="screen-g-api.txt.gz">screen-g-api.txt.gz</a>: glance-api
<li><a href="screen-g-reg.txt.gz">screen-g-reg.txt.gz</a>: glance-registry
</ul>
<li><b>heat</b>
<ul>
<li><a href="screen-h-api-cfn.txt.gz">screen-h-api-cfn.txt.gz</a>: heat-api-cfn
<li><a href="screen-h-api-cw.txt.gz">screen-h-api-cw.txt.gz</a>: heat-api-cloudwatch
<li><a href="screen-h-api.txt.gz">screen-h-api.txt.gz</a>: heat-api
<li><a href="screen-h-eng.txt.gz">screen-h-eng.txt.gz</a>: heat-engine
</ul>
<li> <b>horizon</b>
<ul>
<li><a href="apache/horizon_error.txt.gz">horizon_error.txt.gz</a>: horizon logs
</ul>
<li> <b>ironic</b>
<ul>
<li><a href="ironic-bm-logs/">ironic-bm-logs/</a>: output from the last successful boot of an ironic "bare metal" VM
<li><a href="screen-ir-api.txt.gz">screen-ir-api.txt.gz</a>: ironic-api
<li><a href="screen-ir-cond.txt.gz">screen-ir-cond.txt.gz</a>: ironic-conductor
</ul>
<li> <b>keystone</b>
<ul>
<li><a href="apache/keystone.txt.gz">keystone.txt.gz</a>: keystone log (Apache Httpd)
<li><a href="apache/keystone_access.txt.gz">keystone_access.txt.gz</a>: keystone access log (Apache Httpd)
<li><a href="screen-key.txt.gz">screen-key.txt.gz</a>: keystone log (eventlet)
</ul>
<li> <b>nova</b>
<ul>
<li><a href="screen-n-api.txt.gz">screen-n-api.txt.gz</a>: nova-api
<li><a href="screen-n-cond.txt.gz">screen-n-cond.txt.gz</a>: nova-conductor
<li><a href="screen-n-cpu.txt.gz">screen-n-cpu.txt.gz</a>: nova-compute
<li><a href="screen-n-crt.txt.gz">screen-n-crt.txt.gz</a>: nova-cert
<li><a href="screen-n-net.txt.gz">screen-n-net.txt.gz</a>: nova-network
<li><a href="screen-n-obj.txt.gz">screen-n-obj.txt.gz</a>: nova-objectstore
<li><a href="screen-n-sch.txt.gz">screen-n-sch.txt.gz</a>: nova-scheduler
</ul>
<li> <b>neutron</b>
<ul>
<li><a href="screen-q-agt.txt.gz">screen-q-agt.txt.gz</a>: neutron-openvswitch-agent
<li><a href="screen-q-dhcp.txt.gz">screen-q-dhcp.txt.gz</a>: neutron-dhcp-agent
<li><a href="screen-q-lbaas.txt.gz">screen-q-lbaas.txt.gz</a>: neutron-lbaas-agent
<li><a href="screen-q-meta.txt.gz">screen-q-meta.txt.gz</a>: neutron-metadata-agent
<li><a href="screen-q-metering.txt.gz">screen-q-metering.txt.gz</a>: neutron-metering-agent
<li><a href="screen-q-svc.txt.gz">screen-q-svc.txt.gz</a>: neutron-server
<li><a href="screen-q-l3.txt.gz">screen-q-l3.txt.gz</a>: neutron-l3-agent
</ul>
<li> <b>swift</b>
<ul>
<li><a href="screen-s-account.txt.gz">screen-s-account.txt.gz</a>: swift-account-server
<li><a href="screen-s-container.txt.gz">screen-s-container.txt.gz</a>: swift-container-server
<li><a href="screen-s-object.txt.gz">screen-s-object.txt.gz</a>: swift-object-server
<li><a href="screen-s-proxy.txt.gz">screen-s-proxy.txt.gz</a>: swift-proxy-server
</ul>
<li> <b>system</b>
<ul>
<li><a href="pip-freeze.txt.gz">pip-freeze.txt.gz</a>: List of pip installed python packages. Output of 'pip freeze'
<li><a href="dpkg-l.txt.gz">dpkg-l.txt.gz</a>: List of apt-get installed packages. Output of 'dpkg -l'
<li><a href="df.txt.gz">df.txt.gz</a>:
<li><a href="rpm-qa.txt.gz">rpm-qa.txt.gz</a>: List of rpm installed packages. Output of 'rpm -qa'
<li><a href="syslog.txt.gz">syslog.txt.gz</a>: syslog for the test slave
<li><a href="screen-dstat.txt.gz">screen-dstat.txt.gz</a>: dstat output during the test job
<li><a href="sudoers.txt.gz">sudoers.txt.gz</a>: sudoers file
</ul>
<li> <b>trove</b>
<ul>
<li><a href="screen-tr-api.txt.gz">screen-tr-api.txt.gz</a>: trove-api
<li><a href="screen-tr-cond.txt.gz">screen-tr-cond.txt.gz</a>: trove-conductor
<li><a href="screen-tr-tmgr.txt.gz">screen-tr-tmgr.txt.gz</a>: trove-taskmanager
</ul>
<li> <b>tempest</b>
<ul>
<li><a href="tempest.txt.gz">tempest.txt.gz</a>: Tempest log file
<li><a href="tempest_conf.txt.gz">tempest_conf.txt.gz</a>: Tempest config file
<li><a href="subunit_log.txt.gz">subunit_log.txt.gz</a>: Subunit v1 stream from tempest run
<li><a href="testr_results.html.gz">testr_results.html.gz</a>: html formatted output of test results
</ul>
<li> <b>devstack</b>
<ul>
<li><a href=devstacklog.txt.gz>devstacklog.txt.gz</a>: Devstack log
<li><a href=devstacklog.summary.txt.gz>devstacklog.summary.txt.gz</a>: Devstack summary log
</ul>
</ul>
</p>
<h2>Nova Compute Fails</h2>
<p>
If there is a compute test failure, especially a server not getting
created correctly, or being in an unexpected state, the following is
typically the most fruitful order to look at things:
<ul>
<li><a href="screen-n-api.txt.gz">screen-n-api.txt.gz</a> - the nova
api log, which will show top level failures. Make sure the request
that was being sent in actually succeeded.
<li><a href="screen-n-cpu.txt.gz">screen-n-cpu.txt.gz</a> - the nova
compute log. If a libvirt or qemu issue happened during guest
creation it will be here.
<li><a href="screen-n-sch.txt.gz">screen-n-sch.txt.gz</a> - the nova
scheduler. Sometimes there are races in allocating resources, and
the scheduler will throw a WARNING if it couldn't allocate the
requested resources.
<li>all other nova logs
</ul>
<h2>Cinder Volume Fails</h2>
<p>
If there is a volume failure in the test, the following is typically
the most fruitful order to look at things:
<ul>
<li><a href="screen-c-api.txt.gz">screen-c-api.txt.gz</a> - the
cinder api log, which will show top level failures. Make sure
the request that was being sent in actually succeeded.
<li><a href="screen-c-vol.txt.gz">screen-c-vol.txt.gz</a> - the
cinder agent log. If there was a local allocation error it will be
here.
<li><a href="screen-c-sch.txt.gz">screen-c-sch.txt.gz</a> - the
cinder scheduler. Sometimes there are races in allocating
resources, and the scheduler will throw a WARNING if it couldn't
allocate the requested resources.
</ul>
</p>
<h2>Designate Zone Fails</h2>
<p>
If a DNS Zone fails to reach the DNS Server, the following is
typically the most fruitful order to look at things:
<ul>
<li><a href="screen-designate-pool-manager.txt.gz">screen-designate-pool-manager.txt.gz</a> - the
pool manager logs will show if there was an error creating the zone on the
server, or if it had issues loading the driver for the DNS Server
<li><a href="screen-designate-mdns.txt.gz">screen-designate-mdns.txt.gz</a> - the
mini dns server will log if it received an AXFR request, or if the DNS Server has too low a
serial number. This shows either a communications problem or a server configuration issue.
<li><a href="screen-designate-central.txt.gz">screen-designate-central.txt.gz</a> - designate
central will most likely contain the error if it is not in pool manager or mini dns.
All of the DB access runs though here, and all of the validation logic is also here.
</ul>
</p>
<h2>About this Help</h2>
<p>
This help file is part of the
<a href="https://opendev.org/openstack/devstack-gate">
openstack/devstack-gate</a>
project, and can be found at
<a href="https://opendev.org/openstack/devstack-gate/src/branch/master/help/tempest-logs.html">
help/tempest-logs.html
</a>.
The file can be updated via the standard OpenStack Gerrit Review process.
</p>

View File

@ -1,56 +0,0 @@
<h1>Guide to Tempest Results Runs</h1>
<p>
You are looking at the full test results of a devstack setup and
tempest run for the OpenStack gate, as well as all the logs of all the
relevant services that were running during that tempest test run. From
them you should have enough information to debug.
</p>
<h2>How To Debug - Quickstart</h2>
<p>
<ol>
<li>scroll to the end of console.html
<li>work your way backwards until the first failing test
<li>copy the timestamp of that failing test
<li>go into logs directory and look at that time stamp in related
service logs for traces, failures, or other oddities
</ol>
</p>
<h2>File Overview</h2>
<h3>console.html</h3>
This file contains the stdout/stderr console of the devstack-gate
job. The basic flow of the file goes as follows:
<ul>
<li>Boot guest from base Linux image
<li>Use devstack to install all required packages (debs and pips)
<li>Use devstack to setup all the services from upstream master except
for the particular project being tested, which is pulled from
gerrit.
<li>Run devstack exercises (very basic sanity checking)
<li>Run tempest tests
</ul>
<p>
All the devstack setup is done under bash tracing, so the output
is <b>extremely</b> verbose. This is to ensure enough data is captured
so that you can debug failures in the gate with the information provided.
</p>
<p>
The tempest tests are the last 1% of the console.html. When looking
at failures it is typically best to start at the end of the file and
work backwards.
</p>
<h3>logs</h3>
<p>
The <a href="logs">logs</a> directory contains all the screen logs
from all the services during the devstack-gate run.
</p>
<h2>About this Help</h2>
<p>
This help file is part of the
<a href="https://opendev.org/openstack/devstack-gate">
openstack/devstack-gate</a>
project, and can be found at
<a href="https://opendev.org/openstack/devstack-gate/src/branch/master/help/tempest-overview.html">
help/tempest-overview.html
</a>.
The file can be updated via the standard OpenStack Gerrit Review process.
</p>

View File

@ -1,149 +0,0 @@
The basic requirement we have for running multinode openstack tests is that
tempest must be able to ssh and ping the nested VMs booted by openstack and
these nested VMs need to be able to talk to each other. This is due to how
the tests are run by tempest.
We run devstack-gate on multiple public clouds. In order to meet the above
requirement we need some control over l2 and l3 networking in the test envs,
but not all of our clouds provide this control. To work around this we set up
overlay networks across the VMs using software bridges and tunnels between
the hosts. This provides routing for the floating IP network between tempest
and VMs and between VMs themselves.
To map this onto a real deployment, the overlay networks would be the networking
provided by your datacenter for OpenStack, and the existing eth0 on each test
node would be a management interface or ILO. We just have to set up our own
datacenter networking because we are running in clouds.
Some useful IP ranges:
172.24.4.0/23 This is our "public" IP range. Test nodes get IPs in the first
half of this subnet.
172.24.5.0/24 This is our floating IP range. VMs get assigned floating IPs
from this range. The test nodes know how to "route" to these VMs due to the
interfaces on 172.24.4.0/23.
Now to network solution specifics. Nova network and neutron are different
enough that they deserve their own specific documentation below.
Nova Network
============
Subnode1 Primary Node Subnode2
+--------------------------+ +--------------------------+ +--------------------------+
| | | | | |
| | | | | |
| | | | | |
|172.24.4.2/23 | |172.24.4.1/23 | |172.24.4.3/23 |
|+------+ +--------+ | |+-------+ +-------+ | |+-------+ +-------+ |
||br_pub| | br_flat| | ||br_pub | |br_flat| | ||br_pub | |br_flat| |
|+--+---+ +---+----+ | |+---+---+ +---+---+ | |+---+---+ +---+---+ |
| | | | | | | | | | | |
| | +------------------vxlan-tunnel-+-----------------vxlan-tunnel-+ |
| | | | | | | | |
| +--------vxlan-tunnel-----------+--------vxlan-tunnel----------+ |
| | | | | |
+--------------------------+ +--------------------------+ +--------------------------+
In addition to the requirement for floating IP connectivity above, nova net
also requires that the private network for the VMs be shared so that the
nested VMs can get access to nova services like dhcp and metadata. While not
strictly necessary when using nova net multihost, we support both, and this
allows nested VM to nested VM communication over private IPs.
In this setup we have two soft bridges on the primary node (the controller).
The br_flat bridge handles the l2 traffic for the VMs' private interfaces.
The br_pub bridge is where floating IPs are configured and allows for test
node to nested VM communication and nested VM to nested VM communication.
We cannot share the l2 bridge for separate l3 communication because nova net
uses ebtables to prevent public IPs from talking to private IPs and we lose
packets on a shared bridge as a result.
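For illustration, here is a rough sketch of how one of these overlay bridges
is stitched together with Open vSwitch. The real work is done by the
ovs_vxlan_bridge ansible roles in this repository; the bridge name, port
name, VNI and addresses below are only examples.

  # on the primary node, create the bridge and one vxlan port per subnode
  ovs-vsctl --may-exist add-br br_pub
  ovs-vsctl --may-exist add-port br_pub br_pub_subnode1 \
      -- set interface br_pub_subnode1 type=vxlan \
         options:remote_ip=$SUBNODE_PRIVATE_IP \
         options:local_ip=$PRIMARY_PRIVATE_IP options:key=1000002
  # give the primary node its "public" address on the overlay
  ip addr add 172.24.4.1/23 dev br_pub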
This is what it all looks like after you run devstack and boot some nodes.
Subnode1 Primary Node Subnode2
+--------------------------+ +--------------------------+ +--------------------------+
| +--+ +-----+ | | +--+ +-----+ | | +--+ +-----+ |
| |vm|---------|br100| | | |vm|----------|br100| | | |vm|----------|br100| |
| +--+ +-----+ | | +--+ +-----+ | | +--+ +-----+ |
| | | | | | | | | | | |
|172.25.5.1/24 | | |172.25.5.2/24 | | |172.25.5.3/24 | |
|172.24.4.2/23 | | |172.24.4.1/23 | | |172.24.4.3/23 | |
|+------+ +--------+ | |+-------+ +-------+ | |+-------+ +-------+ |
||br_pub| | br_flat| | ||br_pub | |br_flat| | ||br_pub | |br_flat| |
|+--+---+ +---+----+ | |+---+---+ +---+---+ | |+---+---+ +---+---+ |
| | | | | | | | | | | |
| | +------------------vxlan-tunnel-+-----------------vxlan-tunnel-+ |
| | | | | | | | |
| +--------vxlan-tunnel-----------+--------vxlan-tunnel----------+ |
| | | | | |
+--------------------------+ +--------------------------+ +--------------------------+
Neutron
=======
Neutron is a bit different and comes in two flavors. The base case is
neutron without DVR. In this case all of the l3 networking runs on the
primary node. The other case is with DVR where each test node handles
l3 for the nested VMs running on that test node.
For the non DVR case we don't need to do anything special. Devstack and
neutron set up br-int between the nodes for us and all public floating
IP traffic is backhauled over br-int to the primary node where br-ex
exists. br-ex is created on the primary node as it is on the single node
tests and all tempest to floating IP and nested VM to nested VM communication
happens here.
Subnode1 Primary Node Subnode2
+--------------------------+ +--------------------------+ +--------------------------+
| | | | | |
| | | | | |
| | | | | |
|172.24.4.2/23 | |172.24.4.1/23 | |172.24.4.3/23 |
|+------+ | |+-------+ | |+-------+ |
||br-ex | | ||br-ex | | ||br-ex | |
|+--+---+ | |+---+---+ | |+---+---+ |
| | | | | | | | |
| | | | | | | | |
| +--------vxlan-tunnel-----------+--------vxlan-tunnel----------+ |
| | | | | |
+--------------------------+ +--------------------------+ +--------------------------+
The DVR case is a bit more complicated. Devstack and neutron still configure
br-int for us so we don't need two overlay networks like with nova net, but
we do need an overlay for floating IP public networking due to our original
requirements. If floating IPs are configured on arbitrary test nodes we need
to know how to get packets to them.
Neutron uses br-ex for the floating IP network; unfortunately, Devstack and
neutron do not configure br-ex except in the trivial detached-from-everything
case described earlier. This means we have to configure br-ex
ourselves and the simplest way to do that is to just make br-ex the overlay
itself. Doing this allows neutron to work properly with nested VMs talking
to nested VMs and it also allows the test nodes to talk to VMs over br-ex as
well.
This is what it all looks like after you run devstack and boot some nodes.
Subnode1 Primary Node Subnode2
+--------------------------+ +--------------------------+ +--------------------------+
| +------+ | | +------+ | | +------+ |
| |br-tun|--------tunnel---------|br-tun|--------tunnel---------|br-tun| |
| +------+ | | +------+ | | +------+ |
| |br-int| | | |br-int| | | |br-int| |
| +------+ | | +------+ | | +------+ |
| | | | | | | | |
|172.24.4.2/23 +--+ | |172.24.4.1/23 +--+ | |172.24.4.3/23 +--+ |
|172.24.5.1/24--NAT--|vm| | |172.24.5.2/24--NAT--|vm| | |172.24.5.3/24--NAT--|vm| |
|+------+ +--+ | |+-------+ +--+ | |+-------+ +--+ |
||br-ex | | ||br-ex | | ||br-ex | |
|+--+---+ | |+---+---+ | |+---+---+ |
| | | | | | | | |
| | | | | | | | |
| +--------vxlan-tunnel-----------+--------vxlan-tunnel----------+ |
| | | | | |
+--------------------------+ +--------------------------+ +--------------------------+
When DVR is enabled, agent_mode in l3_agent.ini is set to "dvr" for the primary node
and to "dvr_snat" for any remaining subnodes. DVR HA jobs need a 3-node setup with this
configuration, where "dvr_snat" represents a network node with centralized SNAT,
and "dvr" represents a compute node. There should be at least 2 "dvr_snat" nodes.

View File

@ -1,15 +0,0 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs

View File

@ -1,52 +0,0 @@
- hosts: all
name: Autoconverted job legacy-dg-hooks-dsvm from old job gate-dg-hooks-dsvm
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export PYTHONUNBUFFERED=true
# place calls for all hooks in here
function pre_test_hook {
echo "I am totally an awesome pre_test_hook"
}
export -f pre_test_hook
function gate_hook {
echo "I am totally an awesome gate_hook"
}
export -f gate_hook
function post_test_hook {
echo "I am totally an awesome post_test_hook"
}
export -f post_test_hook
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'

View File

@ -1,6 +0,0 @@
---
BASE: "{{ lookup('env', 'BASE')|default('/opt/stack', true) }}"
CI_USER: "{{ lookup('env', 'CI_USER')|default(ansible_user_id, true) }}"
PING_TIMES: 20
HTTP_TIMES: 10
MIRROR_INFO_FILE: "{{ lookup('env', 'MIRROR_INFO_FILE')|default('/etc/ci/mirror_info.sh', true) }}"

View File

@ -1,15 +0,0 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs

View File

@ -1,63 +0,0 @@
- hosts: all
name: legacy-tempest-neutron-full-stable
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
ENABLE_FILE_INJECTION=True
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export PYTHONUNBUFFERED=true
export DEVSTACK_GATE_TEMPEST=1
export DEVSTACK_GATE_TEMPEST_FULL=1
export DEVSTACK_GATE_NEUTRON=1
export DEVSTACK_GATE_TLSPROXY=1
# NOTE(gmann): Overriding the branch to the most recent
# stable branch. The idea here is to make sure devstack-gate,
# which is used in legacy jobs for stable branches, works
# fine. Testing it on the most recent stable branch should
# be enough to make sure devstack-gate keeps working for
# stable branch legacy jobs which are not yet migrated
# to Zuul v3 native jobs.
export BRANCH_OVERRIDE=stable/wallaby
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'

View File

@ -1,25 +0,0 @@
---
- name: Set up openvswitch requirements
hosts: all
gather_facts: yes
become: yes
roles:
- ovs_vxlan_bridge_prereqs
- name: Set up bridge on the primary node
hosts: primary
gather_facts: no
become: yes
vars_files:
- devstack_gate_vars.yaml
roles:
- ovs_vxlan_bridge_primary
- name: Set up the bridge on the sub nodes
hosts: subnodes
gather_facts: no
become: yes
vars_files:
- devstack_gate_vars.yaml
roles:
- ovs_vxlan_bridge_peers

View File

@ -1,141 +0,0 @@
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from ansible import constants as C
from ansible.plugins.callback import CallbackBase
from ansible.plugins.callback import strip_internal_keys
import datetime
import yaml
def _get_timestamp():
return str(datetime.datetime.now())[:-3]
class CallbackModule(CallbackBase):
'''Callback plugin for devstack-gate.
Based on the minimal callback plugin from the ansible tree. Adds
timestamps to the start of the lines, squishes responses that are only
messages, returns facts in yaml not json format and strips facter facts
from the reported facts.
'''
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'devstack'
def _command_generic_msg(self, host, result, task, caption):
'''output the result of a command run'''
if caption == 'SUCCESS':
buf = "%s | %s | %s | %s >>\n" % (
_get_timestamp(), host, caption, task.get_name().strip())
else:
buf = "%s | %s | %s | %s | rc=%s >>\n" % (
_get_timestamp(), host, caption, task.get_name().strip(),
result.get('rc', 0))
buf += result.get('stdout', '')
buf += result.get('stderr', '')
buf += result.get('msg', '')
return buf + "\n"
def v2_runner_on_failed(self, result, ignore_errors=False):
if 'exception' in result._result:
self._display.display(
"An exception occurred during task execution."
" The full traceback is:\n" + result._result['exception'])
if result._task.action in C.MODULE_NO_JSON:
self._display.display(
self._command_generic_msg(
result._host.get_name(), result._result, result._task,
"FAILED"))
else:
self._display.display(
"%s | %s | FAILED! => %s" % (
_get_timestamp(),
result._host.get_name(), self._dump_results(
result._result, indent=4)))
def v2_runner_on_ok(self, result):
self._clean_results(result._result, result._task.action)
if 'ansible_facts' in result._result:
return
elif 'hostvars[inventory_hostname]' in result._result:
facts = result._result['hostvars[inventory_hostname]']
facter_keys = [k for k in facts.keys() if k.startswith('facter_')]
for key in facter_keys:
del facts[key]
result._result['ansible_facts'] = facts
self._display.display(
"%s | %s | Gathered facts:\n%s" % (
_get_timestamp(),
result._host.get_name(),
yaml.safe_dump(facts, default_flow_style=False)))
return
if result._task.action in C.MODULE_NO_JSON:
self._display.display(
self._command_generic_msg(
result._host.get_name(), result._result, result._task,
"SUCCESS"))
else:
if 'changed' in result._result and result._result['changed']:
self._display.display(
"%s | %s | SUCCESS => %s" % (
_get_timestamp(),
result._host.get_name(), self._dump_results(
result._result, indent=4)))
else:
abriged_result = strip_internal_keys(result._result)
if 'msg' in abriged_result and len(abriged_result.keys()) == 1:
result_text = result._result['msg']
else:
result_text = self._dump_results(result._result, indent=4)
self._display.display(
"%s | %s | %s | %s" % (
_get_timestamp(),
result._host.get_name(),
result._task.get_name().strip(),
result_text))
self._handle_warnings(result._result)
def v2_runner_on_skipped(self, result):
self._display.display(
"%s | %s | SKIPPED" % (
_get_timestamp(), result._host.get_name()))
def v2_runner_on_unreachable(self, result):
self._display.display(
"%s | %s | UNREACHABLE! => %s" % (
_get_timestamp(),
result._host.get_name(), self._dump_results(
result._result, indent=4)))
def v2_on_file_diff(self, result):
if 'diff' in result._result and result._result['diff']:
self._display.display(self._get_diff(result._result['diff']))

View File

@ -1,12 +0,0 @@
- name: Get status of pydistutils.cfg file
stat: path={{ '~' + CI_USER | expanduser }}/.pydistutils.cfg
register: st
- block:
- name: Install CI_USER .pydistutils on root home folder
command: install -D -m0644 -o root -g root {{ '~' + CI_USER | expanduser }}/.pydistutils.cfg /root/.pydistutils.cfg
- name: Install CI_USER .pydistutils on stack home folder
command: install -D -m0644 -o stack -g stack {{ '~' + CI_USER | expanduser }}/.pydistutils.cfg {{ BASE }}/new/.pydistutils.cfg
- name: Install CI_USER .pydistutils on tempest home folder
command: install -D -m0644 -o tempest -g tempest {{ '~' + CI_USER | expanduser }}/.pydistutils.cfg /home/tempest/.pydistutils.cfg
when: st.stat.exists
become: yes

View File

@ -1,3 +0,0 @@
- name: Create BASE folder
file: path={{ BASE }} state=directory
become: yes

View File

@ -1,106 +0,0 @@
---
# TODO: Turn this into proper Ansible
- name: Fix the disk layout
become: yes
args:
executable: /bin/bash
creates: /etc/fixed_disk_layout
shell: |
set -ex
# Don't attempt to fix disk layout more than once
touch /etc/fixed_disk_layout
# Ensure virtual machines from different providers all have at least 8GB of
# swap.
# Use an ephemeral disk if there is one or create and use a swapfile.
# Rackspace also doesn't have enough space on / for two devstack installs,
# so we partition the disk and mount it on /opt, syncing the previous
# contents of /opt over.
SWAPSIZE=8192
swapcurrent=$(( $(grep SwapTotal /proc/meminfo | awk '{ print $2; }') / 1024 ))
if [[ $swapcurrent -lt $SWAPSIZE ]]; then
if [ -b /dev/xvde ]; then
DEV='/dev/xvde'
else
EPHEMERAL_DEV=$(blkid -L ephemeral0 || true)
if [ -n "$EPHEMERAL_DEV" -a -b "$EPHEMERAL_DEV" ]; then
DEV=$EPHEMERAL_DEV
fi
fi
if [ -n "$DEV" ]; then
# If an ephemeral device is available, use it
swap=${DEV}1
lvmvol=${DEV}2
optdev=${DEV}3
if mount | grep ${DEV} > /dev/null; then
echo "*** ${DEV} appears to already be mounted"
echo "*** ${DEV} unmounting and reformating"
umount ${DEV}
fi
parted ${DEV} --script -- \
mklabel msdos \
mkpart primary linux-swap 1 ${SWAPSIZE} \
mkpart primary ext2 8192 -1
sync
# We are only interested in scanning $DEV, not all block devices
sudo partprobe ${DEV}
# The device partitions might not show up immediately, make sure
# they are ready and available for use
udevadm settle --timeout=0 || echo "Block device not ready yet. Waiting for up to 10 seconds for it to be ready"
udevadm settle --timeout=10 --exit-if-exists=${DEV}1
udevadm settle --timeout=10 --exit-if-exists=${DEV}2
mkswap ${DEV}1
mkfs.ext4 ${DEV}2
swapon ${DEV}1
mount ${DEV}2 /mnt
find /opt/ -mindepth 1 -maxdepth 1 -exec mv {} /mnt/ \;
umount /mnt
mount ${DEV}2 /opt
# Sanity check
grep -q ${DEV}1 /proc/swaps || exit 1
grep -q ${DEV}2 /proc/mounts || exit 1
else
# If no ephemeral devices are available, use root filesystem
# Don't use sparse device to avoid wedging when disk space and
# memory are both unavailable.
swapfile='/root/swapfile'
touch ${swapfile}
swapdiff=$(( $SWAPSIZE - $swapcurrent ))
if df -T ${swapfile} | grep -q ext ; then
fallocate -l ${swapdiff}M ${swapfile}
else
# Cannot fallocate on filesystems like XFS
dd if=/dev/zero of=${swapfile} bs=1M count=${swapdiff}
fi
chmod 600 ${swapfile}
mkswap ${swapfile}
swapon ${swapfile}
# Sanity check
grep -q ${swapfile} /proc/swaps || exit 1
fi
fi
# dump vm settings for reference (Ubuntu 12 era procps can get
# confused with certain proc trigger nodes that are write-only and
# return an EPERM; ignore this)
sysctl vm || true
# ensure a standard level of swappiness. Some platforms
# (rax+centos7) come with swappiness of 0 (presumably because the
# vm doesn't come with swap setup ... but we just did that above),
# which depending on the kernel version can lead to the OOM killer
# kicking in on some processes despite swap being available;
# particularly things like mysql which have a very high ratio of
# anonymous-memory to file-backed mappings.
# make sure reload of sysctl doesn't reset this
sed -i '/vm.swappiness/d' /etc/sysctl.conf
# This sets swappiness low; we really don't want to be relying on
# cloud I/O based swap during our runs
sysctl -w vm.swappiness=30

View File

@ -1,11 +0,0 @@
---
- name: Check whether /etc/hosts contains hostname
command: grep {{ ansible_hostname }} /etc/hosts
changed_when: False
failed_when: False
register: grep_out
- name: Add hostname to /etc/hosts
lineinfile: dest=/etc/hosts insertafter=EOF line='127.0.1.1 {{ ansible_hostname }}'
become: yes
when: grep_out.rc != 0

View File

@ -1,9 +0,0 @@
---
# this is what prints the facts to the logs
- debug: var=hostvars[inventory_hostname]
- command: locale
name: "Gather locale"
- command: cat /proc/cpuinfo
name: "Gather kernel cpu info"

View File

@ -1,6 +0,0 @@
- name: Perform HTTP check
uri: url={{ url }}
register: uri_result
until: ('status' in uri_result and uri_result['status'] == 200)
changed_when: False
retries: "{{ HTTP_TIMES }}"

View File

@ -1,20 +0,0 @@
---
- name: Get status of file MIRROR_INFO_FILE
stat: path={{ MIRROR_INFO_FILE }}
register: st
- block:
# Get the shell parsed values so that we are consistent with what is used
# and don't have to do our own parsing.
- name: Get NODEPOOL_MIRROR_HOST
shell: source {{ MIRROR_INFO_FILE }} && echo $NODEPOOL_MIRROR_HOST
register: mirror_host
args:
executable: /bin/bash
- name: Get NODEPOOL_PYPI_MIRROR
shell: source {{ MIRROR_INFO_FILE }} && echo $NODEPOOL_PYPI_MIRROR
register: pypi_mirror
args:
executable: /bin/bash
- include: ping_check.yaml host={{ mirror_host.stdout }}
- include: http_check.yaml url={{ pypi_mirror.stdout }}
when: st.stat.exists
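
The file referenced by MIRROR_INFO_FILE is not shown in this repository; it is typically provided on the node by the CI image. A hypothetical example of its contents, consistent with the two variables the tasks above extract (hostnames are illustrative):

    # Hypothetical MIRROR_INFO_FILE contents:
    export NODEPOOL_MIRROR_HOST=mirror.region.provider.opendev.org
    export NODEPOOL_PYPI_MIRROR=https://mirror.region.provider.opendev.org/pypi/simple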


@ -1,4 +0,0 @@
- name: Perform ping check
command: ping -c 1 {{ host }}
changed_when: False
retries: "{{ PING_TIMES }}"


@ -1,6 +0,0 @@
ovs_starting_offset: 1
ovs_vni_offset: 1000000
# Set this value to override the dynamic determination of the
# bridge MTU.
# ovs_bridge_mtu: 1450


@ -1,88 +0,0 @@
---
# This dynamically configures a unique offset for this peer
- name: Set offset
set_fact:
offset: "{{ ovs_starting_offset | int + 1 + groups['all'].index(inventory_hostname) }}"
- name: Add additional vni offset
set_fact:
vni: "{{ offset | int + ovs_vni_offset | int }}"
# To make things more readable in the following tasks
- name: Alias the primary node private IP
set_fact:
primary_private_ip: "{{ hostvars[groups['primary'][0]]['nodepool']['private_ipv4'] }}"
- name: Add port to bridge on primary
shell: >-
ovs-vsctl --may-exist add-port {{ bridge_name }}
{{ bridge_name }}_{{ nodepool['private_ipv4'] }}
-- set interface {{ bridge_name}}_{{ nodepool['private_ipv4'] }}
type=vxlan options:remote_ip={{ nodepool['private_ipv4'] }} options:key={{ vni }}
options:local_ip={{ primary_private_ip }}
delegate_to: "{{ groups['primary'][0] }}"
- name: Create bridge on subnode
openvswitch_bridge:
bridge: "{{ bridge_name }}"
- when: ovs_bridge_mtu is not defined
block:
- name: Determine bridge mtu
shell: |
# Find all interfaces with a permanent mac address type.
# Permanent mac addrs imply "real" hardware and not interfaces we have
# created through this system. This makes our MTU determination mostly
# idempotent allowing us to create multiple overlays without
# perpetually smaller MTUs.
SMALLEST_MTU=""
for X in $(ls /sys/class/net) ; do
MAC_TYPE=$(cat "/sys/class/net/${X}/addr_assign_type")
if [ "$MAC_TYPE" -ne "0" ] ; then
# Type 0 is a permanent address implying a "real"
# interface. We ignore other interfaces as that is what we
# create here
continue
fi
MTU=$(cat "/sys/class/net/${X}/mtu")
if [ -z "$SMALLEST_MTU" ] || [ "$SMALLEST_MTU" -gt "$MTU" ] ; then
SMALLEST_MTU=$MTU
fi
done
# 50 byte overhead for vxlan
echo $(( SMALLEST_MTU - 50 ))
args:
executable: /bin/bash
environment:
PATH: '{{ ansible_env.PATH }}:/bin:/sbin:/usr/sbin'
register: mtu_output
- name: Set ovs_bridge_mtu
set_fact:
ovs_bridge_mtu: "{{ mtu_output.stdout }}"
- name: Set MTU on subnode bridge
command: ip link set mtu {{ ovs_bridge_mtu }} dev {{ bridge_name }}
- name: Add port to bridge on subnode
shell: >-
ovs-vsctl --may-exist add-port {{ bridge_name }}
{{ bridge_name }}_{{ primary_private_ip }}
-- set interface {{ bridge_name}}_{{ primary_private_ip }}
type=vxlan options:remote_ip={{ primary_private_ip }} options:key={{ vni }}
options:local_ip={{ nodepool['private_ipv4'] }}
- when: set_ips
block:
- name: Verify if the bridge address is set
shell: ip addr show dev {{ bridge_name }} | grep -q {{ pub_addr_prefix }}.{{ offset }}/{{ pub_addr_mask }}
register: ip_addr_var
failed_when: False
changed_when: False
- name: Set the bridge address
command: ip addr add {{ pub_addr_prefix }}.{{ offset }}/{{ pub_addr_mask }} dev {{ bridge_name }}
become: yes
when: ip_addr_var.rc == 1
- name: Bring subnode bridge interface up
command: ip link set dev {{ bridge_name }} up
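
For orientation, the two vxlan port tasks above are symmetric. With purely illustrative values (primary private IP 172.24.4.1, subnode private IP 172.24.4.2, a bridge_name of br_infra, computed offset 3 and therefore VNI 1000003), the command rendered on the primary would look roughly like:

    # Illustrative rendering of the delegated ovs-vsctl task (values are made up):
    ovs-vsctl --may-exist add-port br_infra br_infra_172.24.4.2 \
        -- set interface br_infra_172.24.4.2 type=vxlan \
        options:remote_ip=172.24.4.2 options:key=1000003 options:local_ip=172.24.4.1

The subnode side runs the mirror image of this, with the port named after the primary IP and remote_ip/local_ip swapped.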


@ -1,13 +0,0 @@
---
- name: Gather OS specific package and service names
include_vars: "{{ ansible_os_family }}.yaml"
- name: Install openvswitch package
package:
name: "{{ ovs_package }}"
state: installed
- name: Start openvswitch service
service:
name: "{{ ovs_service }}"
state: started


@ -1,3 +0,0 @@
---
ovs_package: "openvswitch-switch"
ovs_service: "openvswitch-switch"


@ -1,3 +0,0 @@
---
ovs_package: "openvswitch"
ovs_service: "openvswitch"


@ -1,3 +0,0 @@
---
ovs_package: "openvswitch"
ovs_service: "openvswitch"


@ -1,5 +0,0 @@
ovs_starting_offset: 1
# Set this value to override the dynamic determination of the
# bridge MTU.
# ovs_bridge_mtu: 1450


@ -1,56 +0,0 @@
- name: Create bridge
openvswitch_bridge:
bridge: "{{ bridge_name }}"
- when: ovs_bridge_mtu is not defined
block:
- name: Determine bridge mtu
shell: |
# Find all interfaces with a permanent mac address type.
# Permanent mac addrs imply "real" hardware and not interfaces we have
# created through this system. This makes our MTU determination mostly
# idempotent allowing us to create multiple overlays without
# perpetually smaller MTUs.
SMALLEST_MTU=""
for X in $(ls /sys/class/net) ; do
MAC_TYPE=$(cat "/sys/class/net/${X}/addr_assign_type")
if [ "$MAC_TYPE" -ne "0" ] ; then
# Type 0 is a permanent address implying a "real"
# interface. We ignore other interfaces as that is what we
# create here
continue
fi
MTU=$(cat "/sys/class/net/${X}/mtu")
if [ -z "$SMALLEST_MTU" ] || [ "$SMALLEST_MTU" -gt "$MTU" ] ; then
SMALLEST_MTU=$MTU
fi
done
# 50 byte overhead for vxlan
echo $(( SMALLEST_MTU - 50 ))
args:
executable: /bin/bash
environment:
PATH: '{{ ansible_env.PATH }}:/bin:/sbin:/usr/sbin'
register: mtu_output
- name: Set ovs_bridge_mtu
set_fact:
ovs_bridge_mtu: "{{ mtu_output.stdout }}"
- name: Set bridge MTU
command: ip link set mtu {{ ovs_bridge_mtu }} dev {{ bridge_name }}
- when: set_ips
block:
- name: Verify if the bridge address is set
shell: ip addr show dev {{ bridge_name }} | grep -q {{ pub_addr_prefix }}.{{ ovs_starting_offset }}/{{ pub_addr_mask }}
register: ip_addr_var
failed_when: False
changed_when: False
- name: Set the bridge address
command: ip addr add {{ pub_addr_prefix }}.{{ ovs_starting_offset }}/{{ pub_addr_mask }} dev {{ bridge_name }}
become: yes
when: ip_addr_var.rc == 1
- name: Bring bridge interface up
command: ip link set dev {{ bridge_name }} up


@ -1 +0,0 @@
stack ALL=(root) NOPASSWD:ALL


@ -1,20 +0,0 @@
---
- name: Create stack group
group: name=stack state=present
become: yes
- name: Create stack user
user: name=stack shell=/bin/bash home={{ BASE }}/new group=stack
become: yes
- name: Set home folder permissions
file: path={{ BASE }}/new mode=0755
become: yes
- name: Copy 50_stack_sh file to /etc/sudoers.d
copy: src=50_stack_sh dest=/etc/sudoers.d mode=0440 owner=root group=root
become: yes
- name: Create new/.cache folder within BASE
file: path={{ BASE }}/new/.cache state=directory owner=stack group=stack
become: yes


@ -1,4 +0,0 @@
tempest ALL=(root) NOPASSWD:/sbin/ip
tempest ALL=(root) NOPASSWD:/sbin/iptables
tempest ALL=(root) NOPASSWD:/usr/bin/ovsdb-client
tempest ALL=(root) NOPASSWD:/usr/bin/virt-filesystems


@ -1,12 +0,0 @@
---
- name: Create tempest group
group: name=tempest state=present
become: yes
- name: Create tempest user
user: name=tempest shell=/bin/bash group=tempest
become: yes
- name: Copy 51_tempest_sh to /etc/sudoers.d
copy: src=51_tempest_sh dest=/etc/sudoers.d owner=root group=root mode=0440
become: yes


@ -1,57 +0,0 @@
---
- name: Check for /bin/journalctl file
command: which journalctl
changed_when: False
failed_when: False
register: which_out
- block:
- name: Get current date
command: date +"%Y-%m-%d %H:%M:%S"
register: date_out
- name: Copy current date to log-start-timestamp.txt
copy:
dest: "{{ BASE }}/log-start-timestamp.txt"
content: "{{ date_out.stdout }}"
when: which_out.rc == 0
become: yes
- block:
- name: Stop rsyslog
service: name=rsyslog state=stopped
- name: Save syslog file prior to devstack run
command: mv /var/log/syslog /var/log/syslog-pre-devstack
- name: Save kern.log file prior to devstack run
command: mv /var/log/kern.log /var/log/kern_log-pre-devstack
- name: Recreate syslog file
file: name=/var/log/syslog state=touch
- name: Recreate syslog file owner and group
command: chown /var/log/syslog --ref /var/log/syslog-pre-devstack
- name: Recreate syslog file permissions
command: chmod /var/log/syslog --ref /var/log/syslog-pre-devstack
- name: Add read permissions to all on syslog file
file: name=/var/log/syslog mode=a+r
- name: Recreate kern.log file
file: name=/var/log/kern.log state=touch
- name: Recreate kern.log file owner and group
command: chown /var/log/kern.log --ref /var/log/kern_log-pre-devstack
- name: Recreate kern.log file permissions
command: chmod /var/log/kern.log --ref /var/log/kern_log-pre-devstack
- name: Add read permissions to all on kern.log file
file: name=/var/log/kern.log mode=a+r
- name: Start rsyslog
service: name=rsyslog state=started
when: which_out.rc == 1
become: yes


@ -1,15 +0,0 @@
---
- hosts: all
gather_facts: yes
vars_files:
- devstack_gate_vars.yaml
roles:
- gather_host_info
- fix_etc_hosts
- fix_disk_layout
- create_base_folder
- start_fresh_logging
- setup_stack_user
- setup_tempest_user
- copy_mirror_config
- network_sanity_check
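
This playbook applies the node-preparation roles above to every host in the inventory. A hypothetical manual invocation (the playbook and inventory paths are illustrative; only devstack_gate_vars.yaml appears in the source):

    # Hypothetical manual run of the preparation playbook:
    ansible-playbook -i /path/to/zuul/inventory.yaml playbooks/devstack-gate-pre.yaml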


@ -1,28 +0,0 @@
Set the enabled_services fact based on the test matrix
**Role Variables**
.. zuul:rolevar:: test_matrix_features
:default: files/features.yaml
The YAML file that defines the test matrix.
.. zuul:rolevar:: test_matrix_branch
:default: {{ zuul.override_checkout | default(zuul.branch) }}
The git branch for which to calculate the test matrix.
.. zuul:rolevar:: test_matrix_role
:default: primary
The role of the node for which the test matrix is calculated.
Valid values are 'primary' and 'subnode'.
.. zuul:rolevar:: test_matrix_configs
:default: []
:type: list
Feature configuration for the test matrix. This option allows enabling
more features, as defined in ``test_matrix_features``.
The default value is an empty list; however, 'neutron' is added by default
from stable/ocata onwards.
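
The role wraps the test_matrix.py library shown later in this diff, so the same computation can be reproduced by hand; test_matrix_configs entries correspond to DEVSTACK_GATE_* environment variables when the script is run directly (a sketch using the defaults above; PyYAML must be importable and, as the role's venv task pins it, older than 6):

    # Roughly what the role computes for a subnode with the 'neutron' config:
    DEVSTACK_GATE_NEUTRON=1 python3 roles/test-matrix/library/test_matrix.py \
        -n -r subnode -b stable/train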


@ -1,4 +0,0 @@
test_matrix_features: files/features.yaml
test_matrix_branch: "{{ zuul.override_checkout | default(zuul.branch) }}"
test_matrix_role: primary
test_matrix_configs: []


@ -1,324 +0,0 @@
config:
default:
master: [default, glance, nova, placement, swift, cinder, keystone]
wallaby: [default, glance, nova, placement, swift, cinder, keystone]
victoria: [default, glance, nova, placement, swift, cinder, keystone]
ussuri: [default, glance, nova, placement, swift, cinder, keystone]
train: [default, ceilometer, glance, nova, placement, swift, cinder, keystone]
stein: [default, ceilometer, glance, nova, placement, swift, cinder, keystone]
rocky: [default, ceilometer, glance, nova, placement, swift, cinder, keystone]
queens: [default, ceilometer, glance, nova, placement, swift, cinder, keystone]
pike: [default, ceilometer, glance, nova, placement, swift, cinder, keystone]
ocata: [default, ceilometer, glance, nova, placement, swift, cinder, keystone]
newton: [default, ceilometer, glance, nova, swift, cinder, keystone]
mitaka: [default, ceilometer, glance, nova, swift, cinder, keystone]
liberty: [default, ceilometer, glance, nova, swift, cinder, keystone]
kilo: [default, ceilometer, glance, nova, swift, cinder, keystone]
# This can be used by functional jobs that only want their dependencies installed
# and don't need to incur the overhead of installing all services in the process.
no_services: [default]
neutron:
features: [neutron, neutron-adv]
# different backends
postgres:
features: [postgresql]
# feature changes for different test matrixes
grenade:
rm-features: [trove, sahara, neutron-adv, horizon]
# Disable c-bak and etcd3 since they are not used in grenade runs
# and just take up resources (iops) which can cause gate failures.
rm-services: [c-bak, etcd3]
tempest:
features: [tempest]
cells:
features: [nova-cells]
# feature declarations for incubated or recently integrated projects (so they
# can be tested outside the releases they were supported in)
trove:
features: [trove]
marconi:
features: [marconi]
zaqar:
features: [zaqar]
sahara:
features: [sahara]
ironic:
features: [ironic]
ironic_inspector:
features: [ironic-inspector]
qpid:
features: [qpid]
zeromq:
features: [zeromq]
ceph:
features: [ceph]
heat:
features: [heat]
tlsproxy:
features: [tlsproxy]
cinder_mn_grenade:
features: [cinder-mn-grenade]
cinder_mn_grenade_sub_volschbak:
features: [cinder-mn-grenade-sub-volschbak]
cinder_mn_grenade_sub_bak:
features: [cinder-mn-grenade-sub-bak]
neutron_dvr:
features: [neutron-dvr]
swift:
features: [swift]
keystone:
features: [keystone]
horizon:
features: [horizon]
branches:
# The value of ""default" is the name of the "trunk" branch
default: master
# Normalized branch names only here, e.g. stable/ocata => ocata
allowed: [master, wallaby, victoria, ussuri, train, stein, rocky, queens, pike, ocata, newton, mitaka, liberty, kilo]
primary:
default:
base:
services: [mysql, rabbit, dstat, etcd3]
ceilometer:
base:
services: [ceilometer-acompute, ceilometer-acentral, ceilometer-collector, ceilometer-api, ceilometer-alarm-notifier, ceilometer-alarm-evaluator, ceilometer-anotification]
glance:
base:
services: [g-api, g-reg]
keystone:
base:
services: [key]
horizon:
base:
services: [horizon]
nova:
base:
services: [n-api, n-cond, n-cpu, n-novnc, n-sch, n-api-meta]
rocky:
services: [n-cauth, n-net]
queens:
services: [n-cauth, n-net]
pike:
services: [n-cauth, n-net]
ocata:
services: [n-cauth, n-net]
nova-cells:
base:
services: [n-cell]
rm-compute-ext: [agregates, hosts]
placement:
base:
services: [placement-api]
neutron:
base:
services: [q-svc, q-agt, q-dhcp, q-l3, q-meta, q-metering]
rm-services: [n-net]
neutron-adv:
base:
rm-services: [n-net]
mitaka:
services: [q-lbaas]
liberty:
services: [q-lbaas]
kilo:
services: [q-vpn]
neutron-dvr:
base:
services: []
swift:
base:
services: [s-proxy, s-account, s-container, s-object]
cinder:
base:
services: [c-api, c-vol, c-sch, c-bak]
# This will be used to disable c-vol, c-bak on primary node when running multinode grenade
# job that will test compatibility of new c-api, c-sch (primary) and old c-vol and c-bak (sub).
cinder-mn-grenade:
base:
rm-services: [c-vol, c-bak]
# This will be used to disable c-vol, c-sch, c-bak on primary node when running multinode grenade
# job that will test compatibility of new c-api (primary) and old c-vol, c-sch and c-bak (sub).
cinder-mn-grenade-sub-volschbak:
base:
rm-services: [c-vol, c-sch, c-bak]
# This will be used to disable c-bak on primary node when running multinode grenade
# job that will test compatibility of new c-api, c-sch, c-vol (primary) and old c-bak (sub).
cinder-mn-grenade-sub-bak:
base:
rm-services: [c-bak]
heat:
base:
services: [heat, h-api, h-api-cfn, h-api-cw, h-eng]
trove:
base:
services: [trove, tr-api, tr-tmgr, tr-cond]
ironic:
base:
services: [ir-api, ir-cond]
rm-services: [cinder, c-api, c-vol, c-sch, c-bak]
ironic-inspector:
base:
services: [ironic-inspector,ironic-inspector-dhcp]
sahara:
base:
services: [sahara]
marconi:
base:
services: [marconi-server]
zaqar:
base:
services: [zaqar-server]
tempest:
base:
services: [tempest]
# service overrides
postgresql:
base:
services: [postgresql]
rm-services: [mysql]
zeromq:
base:
services: [zeromq]
rm-services: [rabbit]
qpid:
base:
services: [qpid]
rm-services: [rabbit]
ceph:
base:
services: [ceph]
tlsproxy:
base:
services: [tls-proxy]
# TLS proxy didn't work properly until ocata
liberty:
rm-services: [tls-proxy]
mitaka:
rm-services: [tls-proxy]
newton:
rm-services: [tls-proxy]
subnode:
default:
base:
services: [dstat]
ceilometer:
base:
services: [ceilometer-acompute]
cinder:
base:
services: [c-vol, c-bak]
cinder-mn-grenade:
base:
services: []
cinder-mn-grenade-sub-volschbak:
base:
services: [c-vol, c-sch, c-bak]
cinder-mn-grenade-sub-bak:
base:
rm-services: [c-vol]
glance:
base:
services: [g-api]
horizon:
base:
services: []
ironic:
base:
rm-services: [c-vol, c-bak]
services: [ir-api, ir-cond]
ironic-inspector:
base:
services: []
keystone:
base:
services: []
neutron:
base:
rm-services: [n-net, n-api-meta]
services: [q-agt]
neutron-adv:
base:
services: []
neutron-dvr:
base:
rm-services: [n-net, n-api-meta]
services: [q-agt, q-l3, q-meta]
nova:
base:
services: [n-cpu, n-api-meta]
rocky:
services: [n-net]
queens:
services: [n-net]
pike:
services: [n-net]
ocata:
services: [n-net]
placement:
base:
services: [placement-client]
swift:
base:
services: []
tempest:
base:
services: []
tlsproxy:
base:
services: [tls-proxy]
# TLS proxy didn't work properly until ocata
liberty:
rm-services: [tls-proxy]
mitaka:
rm-services: [tls-proxy]
newton:
rm-services: [tls-proxy]
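
To make the add/remove semantics concrete: services listed under a feature's base (plus any branch-specific additions) are unioned first, and rm-services entries are applied afterwards, so removals always win. For example, enabling the grenade and tempest configs on master keeps the default primary services but strips c-bak and etcd3; a sketch of checking this by hand, whose expected output is the GRENADE_NEW_MASTER list asserted in the test script reproduced later in this diff:

    # Worked example (deletes trump adds): grenade + tempest on master
    DEVSTACK_GATE_GRENADE=pullup DEVSTACK_GATE_TEMPEST=1 \
        python3 roles/test-matrix/library/test_matrix.py -n
    # output should match GRENADE_NEW_MASTER (notably no c-bak, no etcd3)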


@ -1,238 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import sys
import yaml
GRID = None
ALLOWED_BRANCHES = []
FALSE_VALUES = [None, '', '0', 'false', 'False', 'FALSE']
FORMAT = '%(asctime)s %(levelname)s: %(message)s'
logging.basicConfig(format=FORMAT)
LOG = logging.getLogger(__name__)
def parse_features(fname):
with open(fname) as f:
return yaml.load(f)
def normalize_branch(branch):
if branch.startswith(("feature/", "bug/")):
# Feature and bug branches chase master and should be tested
# as if they were the master branch.
branch = GRID['branches']['default']
elif branch.startswith("stable/"):
branch = branch[len("stable/"):]
elif branch.startswith("proposed/"):
branch = branch[len("proposed/"):]
for allowed in GRID['branches']['allowed']:
# If the branch name starts with one of our known
# named integrated release names treat that branch
# as belonging to the integrated release. This means
# proposed/foo* will be treated as the foo release.
if branch.startswith(allowed):
branch = allowed
break
else:
# Releases that are not named integrated releases
# should be tested as if they were the master branch
# as they occur between integrated releases when other
# projects are developing master.
branch = GRID['branches']['default']
if branch not in ALLOWED_BRANCHES:
LOG.error("branch not allowed by features matrix: %s" % branch)
sys.exit(1)
return branch
def configs_from_env():
configs = []
for k, v in os.environ.items():
if k.startswith('DEVSTACK_GATE_'):
if v not in FALSE_VALUES:
f = k.split('DEVSTACK_GATE_')[1]
configs.append(f.lower())
return configs
def calc_services(branch, features, configs, role):
LOG.debug('Role: %s', role)
services = set()
for feature in features:
grid_feature = GRID[role][feature]
add_services = grid_feature['base'].get('services', [])
if add_services:
LOG.debug('Adding services for feature %s: %s',
feature, add_services)
services.update(add_services)
if branch in grid_feature:
update_services = grid_feature[branch].get('services', [])
if update_services:
LOG.debug('Updating branch: %s specific services for '
'feature %s: %s', branch, feature, update_services)
services.update(update_services)
# deletes always trump adds
for feature in features:
grid_feature = GRID[role][feature]
rm_services = grid_feature['base'].get('rm-services', [])
if rm_services:
LOG.debug('Removing services for feature %s: %s',
feature, rm_services)
services.difference_update(rm_services)
if branch in grid_feature:
services.difference_update(
grid_feature[branch].get('rm-services', []))
# Finally, calculate any services to add/remove per config.
# TODO(mriedem): This is not role-based so any per-config service
# modifications are dealt with globally across all nodes.
# do all the adds first
for config in configs:
if config in GRID['config']:
add_services = GRID['config'][config].get('services', [])
if add_services:
LOG.debug('Adding services for config %s: %s',
config, add_services)
services.update(add_services)
# deletes always trump adds
for config in configs:
if config in GRID['config']:
rm_services = GRID['config'][config].get('rm-services', [])
if rm_services:
LOG.debug('Removing services for config %s: %s',
config, rm_services)
services.difference_update(rm_services)
return sorted(list(services))
def calc_features(branch, configs=[]):
LOG.debug("Branch: %s" % branch)
LOG.debug("Configs: %s" % configs)
if os.environ.get('DEVSTACK_GATE_NO_SERVICES') not in FALSE_VALUES:
features = set(GRID['config']['default']['no_services'])
else:
features = set(GRID['config']['default'][branch])
# do all the adds first
for config in configs:
if config in GRID['config']:
add_features = GRID['config'][config].get('features', [])
if add_features:
LOG.debug('Adding features for config %s: %s',
config, add_features)
features.update(add_features)
# removes always trump
for config in configs:
if config in GRID['config']:
rm_features = GRID['config'][config].get('rm-features', [])
if rm_features:
LOG.debug('Removing features for config %s: %s',
config, rm_features)
features.difference_update(rm_features)
return sorted(list(features))
def get_opts():
usage = """
Compute the test matrix for devstack gate jobs from a combination
of environmental feature definitions and flags.
"""
parser = argparse.ArgumentParser(description=usage)
parser.add_argument('-f', '--features',
default='roles/test-matrix/files/features.yaml',
help="Yaml file describing the features matrix")
parser.add_argument('-b', '--branch',
default="master",
help="Branch to compute the matrix for")
parser.add_argument('-m', '--mode',
default="services",
help="What to return (services, compute-ext)")
parser.add_argument('-r', '--role',
default='primary',
help="What role this node will have",
choices=['primary', 'subnode'])
parser.add_argument('-a', '--ansible',
dest='ansible',
help="Behave as an Ansible Module",
action='store_true')
parser.add_argument('-n', '--not-ansible',
dest='ansible',
help="Behave as python CLI",
action='store_false')
parser.add_argument('-v', '--verbose',
default=False, action='store_true',
help='Log verbose output')
parser.set_defaults(ansible=True)
return parser.parse_args()
def main():
global GRID
global ALLOWED_BRANCHES
opts = get_opts()
if opts.verbose:
LOG.setLevel(logging.DEBUG)
if opts.ansible:
ansible_module = get_ansible_module()
features = ansible_module.params['features']
branch = ansible_module.params['branch']
role = ansible_module.params['role']
configs = ansible_module.params['configs']
else:
features = opts.features
branch = opts.branch
role = opts.role
configs = configs_from_env()
GRID = parse_features(features)
ALLOWED_BRANCHES = GRID['branches']['allowed']
branch = normalize_branch(branch)
features = calc_features(branch, configs)
LOG.debug("Features: %s " % features)
services = calc_services(branch, features, configs, role)
LOG.debug("Services: %s " % services)
if opts.ansible:
ansible_module.exit_json(changed=True, services=services)
else:
if opts.mode == "services":
print(",".join(services))
def get_ansible_module():
from ansible.module_utils.basic import AnsibleModule
return AnsibleModule(
argument_spec=dict(
features=dict(type='str'),
branch=dict(type='str'),
role=dict(type='str'),
configs=dict(type='list')
)
)
if __name__ == "__main__":
sys.exit(main())


@ -1,44 +0,0 @@
- name: Deploy the features matrix and d-g bash functions
copy:
src: "{{ test_matrix_features }}"
dest: "{{ ansible_user_dir }}"
- name: Check for venv
command: python3 -m venv -h
changed_when: false
failed_when: false
register: venv_available
- name: Ensure venv
fail:
msg: Cannot find the venv module
when:
- venv_available.rc != 0
- name: Create testmatrix venv
command: python3 -m venv /tmp/.test_matrix_venv
- name: Install PyYAML to venv
command: |
/tmp/.test_matrix_venv/bin/pip install 'PyYAML<6'
- name: Append neutron to configs for stable/ocata+
set_fact:
test_matrix_configs: "{{ test_matrix_configs }} + [ 'neutron' ]"
when:
- '"neutron" not in test_matrix_configs'
- test_matrix_branch is match("^(stable/[o-z].*|master)$")
- name: Run the test matrix
test_matrix:
features: "{{ ansible_user_dir }}/{{ test_matrix_features | basename }}"
branch: "{{ test_matrix_branch }}"
role: "{{ test_matrix_role }}"
configs: "{{ test_matrix_configs }}"
vars:
ansible_python_interpreter: "/tmp/.test_matrix_venv/bin/python"
register: test_matrix_result
- name: Set the enabled_services fact
set_fact:
enabled_services: "{{ test_matrix_result.services }}"


@ -1,36 +0,0 @@
#!/bin/bash
# Run all test-*.sh functions, fail if any of them fail
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
# this is mostly syntactic sugar to make it easy on the reader of the tests
trap exit_trap EXIT
function exit_trap {
local r=$?
if [[ "$r" -eq "0" ]]; then
echo "All tests run successfully"
else
echo "ERROR! some tests failed, please see detailed output"
fi
}
for testfile in test-*.sh; do
echo "Running $testfile"
./$testfile
echo
done


@ -1,93 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
ERRORS=0
TEMPEST_FULL_MASTER="n-api,n-api-meta,n-cpu,n-sch,n-cond,n-novnc,g-api,g-reg,key,c-api,c-vol,c-sch,c-bak,s-proxy,s-account,s-container,s-object,mysql,rabbit,dstat,etcd3,tempest,placement-api"
TEMPEST_NEUTRON_MASTER="n-api,n-api-meta,n-cpu,n-sch,n-cond,n-novnc,g-api,g-reg,key,c-api,c-vol,c-sch,c-bak,s-proxy,s-account,s-container,s-object,mysql,rabbit,dstat,etcd3,tempest,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,placement-api"
TEMPEST_HEAT_SLOW_MASTER="n-api,n-api-meta,n-cpu,n-sch,n-cond,n-novnc,g-api,g-reg,key,c-api,c-vol,c-sch,c-bak,s-proxy,s-account,s-container,s-object,mysql,rabbit,dstat,etcd3,tempest,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,placement-api"
GRENADE_NEW_MASTER="n-api,n-api-meta,n-cpu,n-sch,n-cond,n-novnc,g-api,g-reg,key,c-api,c-vol,c-sch,s-proxy,s-account,s-container,s-object,mysql,rabbit,dstat,tempest,placement-api"
GRENADE_SUBNODE_MASTER="n-api-meta,n-cpu,g-api,c-vol,dstat,placement-client"
# Utility function for tests
function assert_list_equal {
local source
local target
source=$(echo $1 | awk 'BEGIN{RS=",";} {print $1}' | sort -V | xargs echo)
target=$(echo $2 | awk 'BEGIN{RS=",";} {print $1}' | sort -V | xargs echo)
if [[ "$target" != "$source" ]]; then
echo -n `caller 0 | awk '{print $2}'`
echo -e " - ERROR\n $target \n != $source"
ERRORS=1
else
# simple backtrace progress detector
echo -n `caller 0 | awk '{print $2}'`
echo " - ok"
fi
}
function test_full_master {
local results
results=$(DEVSTACK_GATE_TEMPEST=1 python ./roles/test-matrix/library/test_matrix.py -n)
assert_list_equal $TEMPEST_FULL_MASTER $results
}
function test_full_feature_ec {
local results
results=$(DEVSTACK_GATE_TEMPEST=1 python ./roles/test-matrix/library/test_matrix.py -n -b feature/ec)
assert_list_equal $TEMPEST_FULL_MASTER $results
}
function test_neutron_master {
local results
results=$(DEVSTACK_GATE_NEUTRON=1 DEVSTACK_GATE_TEMPEST=1 python ./roles/test-matrix/library/test_matrix.py -n)
assert_list_equal $TEMPEST_NEUTRON_MASTER $results
}
function test_heat_slow_master {
local results
results=$(DEVSTACK_GATE_TEMPEST_HEAT_SLOW=1 DEVSTACK_GATE_NEUTRON=1 DEVSTACK_GATE_TEMPEST=1 python ./roles/test-matrix/library/test_matrix.py -n)
assert_list_equal $TEMPEST_HEAT_SLOW_MASTER $results
}
function test_grenade_new_master {
local results
results=$(DEVSTACK_GATE_TEMPEST_HEAT_SLOW=1 DEVSTACK_GATE_GRENADE=pullup DEVSTACK_GATE_TEMPEST=1 python ./roles/test-matrix/library/test_matrix.py -n)
assert_list_equal $GRENADE_NEW_MASTER $results
}
function test_grenade_subnode_master {
local results
results=$(DEVSTACK_GATE_GRENADE=pullup DEVSTACK_GATE_TEMPEST=1 python ./roles/test-matrix/library/test_matrix.py -n -r subnode)
assert_list_equal $GRENADE_SUBNODE_MASTER $results
}
test_full_master
test_full_feature_ec
test_neutron_master
test_heat_slow_master
test_grenade_new_master
test_grenade_subnode_master
if [[ "$ERRORS" -ne 0 ]]; then
echo "Errors detected, job failed"
exit 1
fi


@ -1,606 +0,0 @@
#!/bin/bash
# Copyright (C) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
# This script tests the checkout functions defined in functions.sh.
source functions.sh
SUDO=""
LOCAL_AAR_VARS="TEST_GIT_CHECKOUTS TEST_ZUUL_REFS GIT_CLONE_AND_CD_ARG"
# Mock out the checkout function since the refs we're checking out do
# not exist.
function git_checkout_branch {
local project=$1
local branch=$2
project=`basename $project`
if [[ "$branch" == "FETCH_HEAD" ]]; then
branch=$FETCH_HEAD
fi
TEST_GIT_CHECKOUTS[$project]=$branch
}
# Mock out the fetch function since the refs we're fetching do not
# exist.
function git_fetch_at_ref {
local project=$1
local ref=$2
project=`basename $project`
if [ "$ref" != "" ]; then
if [[ "${TEST_ZUUL_REFS[$project]}" =~ "$ref" ]]; then
FETCH_HEAD="$ref"
return 0
fi
return 1
else
# return failing
return 1
fi
}
# Mock out git repo functions so the git repos don't have to exist.
function git_has_branch {
local project=$1
local branch=$2
case $branch in
master) return 0 ;;
stable/havana)
case $project in
openstack/glance) return 0 ;;
openstack/swift) return 0 ;;
openstack/nova) return 0 ;;
openstack/keystone) return 0 ;;
openstack/tempest) return 0 ;;
esac
esac
return 1
}
function git_prune {
return 0
}
function git_remote_update {
return 0
}
function git_remote_set_url {
return 0
}
function git_clone_and_cd {
if [[ "x${2}" == "x" ]]; then
GIT_CLONE_AND_CD_ARG["ERROR"]="ERROR"
return 1
else
GIT_CLONE_AND_CD_ARG[$2]="$1,$3"
fi
return 0
}
# Utility function for tests
function assert_equal {
local lineno
local function
lineno=$(caller 0 | awk '{print $1}')
function=$(caller 0 | awk '{print $2}')
if [[ "$1" != "$2" ]]; then
echo "ERROR: $1 != $2 in $function:L$lineno!"
ERROR=1
else
echo "$function:L$lineno - ok"
fi
}
function assert_raises {
local lineno
local function
lineno=$(caller 0 | awk '{print $1}')
function=$(caller 0 | awk '{print $2}')
eval "$@" &>/dev/null
if [[ $? -eq 0 ]]; then
ERROR=1
echo "ERROR: \`\`$@\`\` returned OK instead of error in $function:L$lineno!"
fi
}
# Tests follow:
function test_one_on_master {
# devstack-gate master ZA
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/devstack-gate'
local ZUUL_BRANCH='master'
local ZUUL_REF='refs/zuul/master/ZA'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZA'
setup_project openstack/devstack-gate $ZUUL_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZA'
}
function test_two_on_master {
# devstack-gate master ZA
# glance master ZB
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/glance'
local ZUUL_BRANCH='master'
local ZUUL_REF='refs/zuul/master/ZB'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZA'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[glance]+=' refs/zuul/master/ZB'
setup_project openstack/devstack-gate $ZUUL_BRANCH
setup_project openstack/glance $ZUUL_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZB'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'refs/zuul/master/ZB'
}
function test_multi_branch_on_master {
# devstack-gate master ZA
# glance stable/havana ZB
# python-glanceclient master ZC
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/python-glanceclient'
local ZUUL_BRANCH='master'
local ZUUL_REF='refs/zuul/master/ZC'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZA'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZC'
TEST_ZUUL_REFS[glance]+=' refs/zuul/stable/havana/ZB'
TEST_ZUUL_REFS[glance]+=' refs/zuul/stable/havana/ZC'
TEST_ZUUL_REFS[python-glanceclient]+=' refs/zuul/master/ZC'
setup_project openstack/devstack-gate $ZUUL_BRANCH
setup_project openstack/glance $ZUUL_BRANCH
setup_project openstack/python-glanceclient $ZUUL_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZC'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'master'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'refs/zuul/master/ZC'
}
function test_multi_branch_project_override {
# main branch is stable/havana
# devstack-gate master ZA
# devstack-gate master ZB
# python-glanceclient master ZC
# glance stable/havana ZD
# tempest not in queue (override to master)
# oslo.config not in queue (master because no stable/havana branch)
# nova not in queue (stable/havana)
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/glance'
local ZUUL_BRANCH='stable/havana'
local OVERRIDE_TEMPEST_PROJECT_BRANCH='master'
local ZUUL_REF='refs/zuul/stable/havana/ZD'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZA'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZC'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZD'
TEST_ZUUL_REFS[python-glanceclient]+=' refs/zuul/master/ZC'
TEST_ZUUL_REFS[python-glanceclient]+=' refs/zuul/master/ZD'
TEST_ZUUL_REFS[glance]+=' refs/zuul/stable/havana/ZD'
setup_project openstack/devstack-gate $ZUUL_BRANCH
setup_project openstack/glance $ZUUL_BRANCH
setup_project openstack/python-glanceclient $ZUUL_BRANCH
setup_project openstack/tempest $ZUUL_BRANCH
setup_project openstack/nova $ZUUL_BRANCH
setup_project openstack/oslo.config $ZUUL_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZD'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'refs/zuul/stable/havana/ZD'
assert_equal "${TEST_GIT_CHECKOUTS[tempest]}" 'master'
assert_equal "${TEST_GIT_CHECKOUTS[nova]}" 'stable/havana'
assert_equal "${TEST_GIT_CHECKOUTS[oslo.config]}" 'master'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'refs/zuul/master/ZD'
}
function test_multi_branch_on_stable {
# devstack-gate master ZA
# glance stable/havana ZB
# python-glanceclient not in queue
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/glance'
local ZUUL_BRANCH='stable/havana'
local ZUUL_REF='refs/zuul/stable/havana/ZB'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZA'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[glance]+=' refs/zuul/stable/havana/ZB'
setup_project openstack/devstack-gate $ZUUL_BRANCH
setup_project openstack/glance $ZUUL_BRANCH
setup_project openstack/python-glanceclient $ZUUL_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZB'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'refs/zuul/stable/havana/ZB'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'master'
}
function test_multi_git_base_project_override {
# osrg/ryu https://github.com
# test/devstack-gate https://example.com
# openstack/keystone https://opendev.org
# openstack/glance http://tarballs.openstack.org
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
GIT_CLONE_AND_CD_ARG["ERROR"]="NULL"
local ZUUL_PROJECT='openstack/neutron'
local ZUUL_BRANCH='master'
local ZUUL_REF='refs/zuul/master/ZA'
local GIT_BASE=""
local GIT_BASE_DEF="https://opendev.org"
local OVERRIDE_RYU_GIT_BASE='https://github.com'
setup_project "osrg/ryu" $ZUUL_BRANCH
local OVERRIDE_DEVSTACK_GATE_GIT_BASE='https://example.com'
setup_project "test/devstack-gate" $ZUUL_BRANCH
setup_project "openstack/keystone" $ZUUL_BRANCH
local GIT_BASE="http://tarballs.openstack.org"
setup_project "openstack/glance" $ZUUL_BRANCH
assert_equal "${GIT_CLONE_AND_CD_ARG["ryu"]}" "osrg/ryu,$OVERRIDE_RYU_GIT_BASE"
assert_equal "${GIT_CLONE_AND_CD_ARG["devstack-gate"]}" "test/devstack-gate,$OVERRIDE_DEVSTACK_GATE_GIT_BASE"
assert_equal "${GIT_CLONE_AND_CD_ARG["keystone"]}" "openstack/keystone,$GIT_BASE_DEF"
assert_equal "${GIT_CLONE_AND_CD_ARG["glance"]}" "openstack/glance,$GIT_BASE"
assert_equal "${GIT_CLONE_AND_CD_ARG["ERROR"]}" "NULL"
}
function test_grenade_backward {
# devstack-gate master ZA
# nova stable/havana ZB
# keystone stable/havana ZC
# keystone master ZD
# glance master ZE
# swift not in queue
# python-glanceclient not in queue
# havana -> master (with changes)
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/glance'
local ZUUL_BRANCH='master'
local ZUUL_REF='refs/zuul/master/ZE'
local GRENADE_OLD_BRANCH='stable/havana'
local GRENADE_NEW_BRANCH='master'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZA'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZC'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZD'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZE'
TEST_ZUUL_REFS[nova]+=' refs/zuul/stable/havana/ZB'
TEST_ZUUL_REFS[nova]+=' refs/zuul/stable/havana/ZC'
TEST_ZUUL_REFS[nova]+=' refs/zuul/stable/havana/ZD'
TEST_ZUUL_REFS[nova]+=' refs/zuul/stable/havana/ZE'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/stable/havana/ZC'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/stable/havana/ZD'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/stable/havana/ZE'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/master/ZD'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/master/ZE'
TEST_ZUUL_REFS[glance]+=' refs/zuul/master/ZE'
setup_project openstack/devstack-gate $GRENADE_OLD_BRANCH
setup_project openstack/nova $GRENADE_OLD_BRANCH
setup_project openstack/keystone $GRENADE_OLD_BRANCH
setup_project openstack/glance $GRENADE_OLD_BRANCH
setup_project openstack/swift $GRENADE_OLD_BRANCH
setup_project openstack/python-glanceclient $GRENADE_OLD_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[nova]}" 'refs/zuul/stable/havana/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[keystone]}" 'refs/zuul/stable/havana/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'stable/havana'
assert_equal "${TEST_GIT_CHECKOUTS[swift]}" 'stable/havana'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'master'
declare -A TEST_GIT_CHECKOUTS
setup_project openstack/devstack-gate $GRENADE_NEW_BRANCH
setup_project openstack/nova $GRENADE_NEW_BRANCH
setup_project openstack/keystone $GRENADE_NEW_BRANCH
setup_project openstack/glance $GRENADE_NEW_BRANCH
setup_project openstack/swift $GRENADE_NEW_BRANCH
setup_project openstack/python-glanceclient $GRENADE_NEW_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[nova]}" 'master'
assert_equal "${TEST_GIT_CHECKOUTS[keystone]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[swift]}" 'master'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'master'
}
function test_grenade_forward {
# devstack-gate master ZA
# nova master ZB
# keystone stable/havana ZC
# keystone master ZD
# glance stable/havana ZE
# swift not in queue
# python-glanceclient not in queue
# havana (with changes) -> master
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/glance'
local ZUUL_BRANCH='stable/havana'
local ZUUL_REF='refs/zuul/stable/havana/ZE'
local GRENADE_OLD_BRANCH='stable/havana'
local GRENADE_NEW_BRANCH='master'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZA'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZC'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZD'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZE'
TEST_ZUUL_REFS[nova]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[nova]+=' refs/zuul/master/ZC'
TEST_ZUUL_REFS[nova]+=' refs/zuul/master/ZD'
TEST_ZUUL_REFS[nova]+=' refs/zuul/master/ZE'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/stable/havana/ZC'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/stable/havana/ZD'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/stable/havana/ZE'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/master/ZD'
TEST_ZUUL_REFS[keystone]+=' refs/zuul/master/ZE'
TEST_ZUUL_REFS[glance]+=' refs/zuul/stable/havana/ZE'
setup_project openstack/devstack-gate $GRENADE_OLD_BRANCH
setup_project openstack/nova $GRENADE_OLD_BRANCH
setup_project openstack/keystone $GRENADE_OLD_BRANCH
setup_project openstack/glance $GRENADE_OLD_BRANCH
setup_project openstack/swift $GRENADE_OLD_BRANCH
setup_project openstack/python-glanceclient $GRENADE_OLD_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[nova]}" 'stable/havana'
assert_equal "${TEST_GIT_CHECKOUTS[keystone]}" 'refs/zuul/stable/havana/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'refs/zuul/stable/havana/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[swift]}" 'stable/havana'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'master'
declare -A TEST_GIT_CHECKOUTS
setup_project openstack/devstack-gate $GRENADE_NEW_BRANCH
setup_project openstack/nova $GRENADE_NEW_BRANCH
setup_project openstack/keystone $GRENADE_NEW_BRANCH
setup_project openstack/glance $GRENADE_NEW_BRANCH
setup_project openstack/swift $GRENADE_NEW_BRANCH
setup_project openstack/python-glanceclient $GRENADE_NEW_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[nova]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[keystone]}" 'refs/zuul/master/ZE'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'master'
assert_equal "${TEST_GIT_CHECKOUTS[swift]}" 'master'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'master'
}
function test_branch_override {
# glance stable/havana ZA
# devstack-gate master ZB
# swift not in queue
# python-glanceclient not in queue
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_PROJECT='openstack/devstack-gate'
local ZUUL_BRANCH='master'
local ZUUL_REF='refs/zuul/master/ZB'
local OVERRIDE_ZUUL_BRANCH='stable/havana'
TEST_ZUUL_REFS[devstack-gate]+=' refs/zuul/master/ZB'
TEST_ZUUL_REFS[glance]+=' refs/zuul/stable/havana/ZA'
TEST_ZUUL_REFS[glance]+=' refs/zuul/stable/havana/ZB'
setup_project openstack/devstack-gate $OVERRIDE_ZUUL_BRANCH
setup_project openstack/glance $OVERRIDE_ZUUL_BRANCH
setup_project openstack/swift $OVERRIDE_ZUUL_BRANCH
setup_project openstack/python-glanceclient $OVERRIDE_ZUUL_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[devstack-gate]}" 'refs/zuul/master/ZB'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'refs/zuul/stable/havana/ZB'
assert_equal "${TEST_GIT_CHECKOUTS[swift]}" 'stable/havana'
assert_equal "${TEST_GIT_CHECKOUTS[python-glanceclient]}" 'master'
}
function test_periodic {
# No queue
for aar_var in $LOCAL_AAR_VARS; do
eval `echo "declare -A $aar_var"`
done
local ZUUL_BRANCH='stable/havana'
local ZUUL_PROJECT='openstack/glance'
setup_project openstack/glance $ZUUL_BRANCH
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'stable/havana'
}
# Run setup_project without setting a ZUUL_BRANCH which is how a subset of
# periodic jobs operate
function test_periodic_no_branch {
declare -A TEST_GIT_CHECKOUTS
declare -A TEST_ZUUL_REF
local ZUUL_PROJECT='openstack/glance'
setup_project openstack/glance 'master'
assert_equal "${TEST_GIT_CHECKOUTS[glance]}" 'master'
}
# setup_workspace fails without argument
function test_workspace_branch_arg {
assert_raises setup_workspace
}
function test_call_hook_if_defined {
local filename=test_call_hook_if_defined.txt
local save_dir
save_dir=$(pwd)/tmp
mkdir -p $save_dir
function demo_script {
local filename=$1
local save_dir=$2
# Clean up any files from previous tests
rm -f $save_dir/$filename
call_hook_if_defined test_hook $filename $save_dir
ret_val=$?
return $ret_val
}
# No hook defined returns success 0 & no file created
demo_script $filename $save_dir
ret_val=$?
assert_equal "$ret_val" "0"
[[ -e $save_dir/$filename ]]
file_exists=$?
assert_equal $file_exists 1
# Hook defined returns its error code and file with output
function test_hook {
echo "hello test_hook"
return 123
}
demo_script $filename $save_dir
ret_val=$?
assert_equal "$ret_val" "123"
[[ -e $save_dir/$filename ]]
file_exists=$?
assert_equal $file_exists 0
# Make sure the expected content has length > 0
result_expected=`cat $save_dir/$filename | grep "hello test_hook"`
[[ ${#result_expected} -eq "0" ]]
assert_equal $? 1
# Hook defined with invalid file fails
demo_script /invalid/file.txt $save_dir
ret_val=$?
assert_equal "$ret_val" "1"
# Clean up
rm -rf $save_dir
}
# test that reproduce file is populated correctly
function test_reproduce {
# expected result
read -d '' EXPECTED_VARS << EOF
declare -x ZUUL_VAR="zuul-var"
declare -x DEVSTACK_VAR="devstack-var"
declare -x ZUUL_VAR_MULTILINE="zuul-var-setting1
zuul-var-setting2"
declare -x DEVSTACK_VAR_MULTILINE="devstack-var-setting1
devstack-var-setting2"
gate_hook ()
{
echo "The cake is a lie"
}
declare -fx gate_hook
EOF
# prepare environment for test
WORKSPACE=.
export DEVSTACK_VAR=devstack-var
export DEVSTACK_VAR_MULTILINE="devstack-var-setting1
devstack-var-setting2"
export ZUUL_VAR=zuul-var
export ZUUL_VAR_MULTILINE="zuul-var-setting1
zuul-var-setting2"
function gate_hook {
echo "The cake is a lie"
}
export -f gate_hook
mkdir $WORKSPACE/logs
# execute call and assert
reproduce
[[ -e $WORKSPACE/logs/reproduce.sh ]]
file_exists=$?
assert_equal $file_exists 0
result_expected=`cat $WORKSPACE/logs/reproduce.sh | grep "$EXPECTED_VARS"`
[[ ${#result_expected} -eq "0" ]]
assert_equal $? 1
# clean up environment
rm -rf $WORKSPACE/logs
rm -rf $WORKSPACE/workspace
unset WORKSPACE
unset DEVSTACK_VAR
unset DEVSTACK_VAR_MULTILINE
unset ZUUL_VAR
unset ZUUL_VAR_MULTILINE
unset gate_hook
}
# Run tests:
#set -o xtrace
test_branch_override
test_grenade_backward
test_grenade_forward
test_multi_branch_on_master
test_multi_branch_on_stable
test_multi_branch_project_override
test_multi_git_base_project_override
test_one_on_master
test_periodic
test_periodic_no_branch
test_two_on_master
test_workspace_branch_arg
test_call_hook_if_defined
test_reproduce
if [[ ! -z "$ERROR" ]]; then
echo
echo "FAIL: Tests have errors! See output above."
echo
exit 1
else
echo
echo "Tests completed successfully!"
echo
fi


@ -1 +0,0 @@
PyYAML>=3.1.0

tox.ini

@ -1,24 +0,0 @@
[tox]
envlist = bashate
minversion = 1.6
skipsdist = True
[testenv]
basepython = python2
install_command = pip install -U {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
commands =
bash -c "./run-tests.sh"
[testenv:bashate]
deps=
{env:BASHATE_INSTALL_PATH:bashate==0.5.0}
whitelist_externals=
bash
# bashate options:
# -i E006 : ignore long lines
# -e E005 : error if not starting with #!
# E042 : error if "local" hides exit status
commands =
bash -c "ls *.sh | xargs bashate -v {posargs} -iE006 -eE005,E042"