- Add molecule support to test ansible roles.
- Also fix flake8 failures.
- Also drop python36 from the templates
Change-Id: Ib647d93144a02a6be7340991c31f65842fdf3f29
Signed-off-by: Charles Short <chucks@redhat.com>
1. Added containerized_overcloud for queens
2. Set containerized_overcloud and undercloud for rocky
3. Set the NTP server to clock.corp.redhat.com for rocky, as
rocky builds fail during undercloud deployment while NTP is syncing.
Change-Id: I77434b104ab48eec097b28da92cc7fe93f6cfc79
This commit is to ensure that the rdo jobs don't use internal
config files.
Also, testing out exactly what the current_build and
hash vars look like.
Change-Id: Id77ae847323fafddd891707c59f633fb11e8cd65
Perf-CI has more parameters that are passed down
from JJB, such as TOOL, which don't exist in browbeat-ci.
Change-Id: I3f4f4ccc4e8a7fdc6e98d3c50b252101e0c00bb8
TripleO Quickstart changed its default node files
to use a mapping that renames compute/controller-0.
This commit will allow us to keep using the old version.
https://review.openstack.org/#/c/531504/2
Change-Id: Iec82b61b2088d70c30142df27b251f28a68738e8
Reasons to move from json, json.j2 to yaml/yaml.j2 + GrafYaml:
* YAML is fewer lines
* YAML allows comments
* YAML means fewer curly braces and quotes
* GrafYaml manages panel ids and target refIds
* GrafYaml defaults reduce the number of lines stored
* GrafYaml allows easier cut/paste management of dashboards
Identified Downsides:
* GrafYaml will be behind the Grafana Dashboard Model (Until code is updated)
* JSON will always allow the full feature set
* Installing Dashboards now requires GrafYaml
New Dashboards/Features:
* Templated Dashboards (Reduce line count in "static" dashboards)
* Cloud Specific networks - all dashboard
* Three Node Performance Food groups Enhanced
* Networker Node added for Cloud Specific Dashboards and Total Memory
Change-Id: I55ce9f9f6c28497c8b4ed7a19d42657a8eb14170
We can no longer pull from the current-passed-ci.yml files,
as they point to a puddle that no longer exists, and
the CI had already moved away from using them a
couple of months back.
Change-Id: I172250690fa402fce3488f020955ff47849fc4f9
Tripleo-environments had deleted the rhos-*-current-passed-ci
files that were needed to test commits made to Browbeat.
Re-added the files in tripleo-environments
https://code.engineering.redhat.com/gerrit/#/c/129983/
and updated the path accordingly.
Change-Id: If1b8f59b87fbd0dc58d7f6dde994cf1efb53883b
* Mix and Match Workloads
* rerun_type - ['iteration', 'complete'] - allows you to rerun the complete
Browbeat workload or rerun iteratively
* browbeat/config.py for loading and validating config schema
* tests/test_config.py to test config.py
Change-Id: I99ea87c01c17d5d06cf7f8d1eec3299aa8d483a5
* Use the built-in pykwalify cli validator
* Use set -e on the loop inside tox.ini to ensure invalid configs fail CI
Change-Id: I251f7ead8393b97e93de03dc3b6accbdd9670092
current_build for rhos jobs is a puddle, not a DLRN hash,
so when we run it through the hash expander we get nothing
back and pass a blank current_build. The build then fails
because quickstart can't figure out which puddle to download.
Change-Id: I011e358cd26e553275d00e02e198d3744e1c4cbe
When patching our CI to work with the new overcloud-image-based
oooq I neglected the promotion pipelines.
Change-Id: Ibd6770a168b300e78a29734101a099b29f54569b
This implements rsyslog -> elasticsearch logging as well
as rsyslog forwarder -> rsyslog aggregator -> elasticsearch logging
using the common logging template as a base and adding
in dynamic detection of containerized services and log path
detection.
Services can be moved into and out of containers and add
or remove log files and the log detector script will create a template
that reflects these changes dynamically.
Logging inherits the cloud name and Elasticsearch info from the existing
group_vars variables, so there should be no additional setup work
beyond setting logging_backend: rsyslog and running either the install
playbook or the rsyslog-logging playbook.
Finally, additional variables can be passed into the deployment with
-e or simply by being in the Ansible namespace; this way things like a
unique build ID can be templated into the logs automatically. I've
added support for browbeat_uuid, dlrn_hash, and rhos_puddle; others
should be trivial to add.
There are also additional tunables to configure whether logging instances
should be standalone (viable for small clouds) or rely on a server-side
aggregator service (more efficient for large deployments).
Disk-backed mode is another tunable that creates a variable
disk load that may be undesirable in some deployments, but if
collecting every last log is important it can be turned on, creating
a one- or two-layer queueing structure (depending on whether the
aggregation server is in use) in case of Elasticsearch downtime or overload.
If you want to see examples from both containerized and
non-containerized clouds, check out elk.browbeatproject.org's logstash
index.
Change-Id: I3e6652223a08ab8a716a40b7a0e21b7fcea6c000
Unfortunately connmon hasn't been used in a while and isn't well tested
on the latest releases. Ideally, to prevent any more cruft issues, let's
remove it for now; if it becomes relevant again we can add it back in.
Change-Id: I0759d164621f3aac1c36dbe1fac49acd7dde97e3
The recent CI short circuit patch runs into an issue
when run in promotion pipelines because the gerrit vars
may not be set.
Change-Id: If9b5c7abb78130c9f8fc02fc9bee158840a21efe
So this is very easy; the problem is determining whether the user
actually intended to mark their commit as a work in progress
(no testing), or has WIP in their commit title
for some other reason. To help with that, WIP or wip has to
be at the start of the commit message and have a trailing space.
Maybe ':' should also be accepted as a trailing char.
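The prefix rule described above can be sketched as a small check (is_wip is a hypothetical helper, not the actual CI script; the regex includes the proposed ':' trailing character):

```python
import re

# Treat a commit as work-in-progress only when "WIP" or "wip" is the
# very first token, followed by a space (or the proposed ':'), so that
# commits merely mentioning "wip" elsewhere are still tested.
WIP_RE = re.compile(r"^(WIP|wip)[ :]")

def is_wip(commit_message):
    return bool(WIP_RE.match(commit_message))

print(is_wip("WIP do not test this yet"))  # True
print(is_wip("wip: refactor CI scripts"))  # True
print(is_wip("Fix wip detection logic"))   # False
```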
Change-Id: Ic44114dbf50abbe2a9f135a41257208fe7feddf8
This seems to have finally settled down: first off, the Ocata exception
can finally be removed, and the promotion images are now universally in
the rdo_trunk folder instead of delorean.
Change-Id: I45e7cbc0bafff5b637a75702aacb3ba874905b42
This commit splits the Browbeat CI out by workload by passing in
different playbooks. Mostly this just makes a different playbook for yoda,
because deploying an overcloud before testing yoda makes no sense and
adds another couple of hours to the test. We also add an integration test
playbook, but that doesn't seriously diverge from the normal testing
playbook at the moment.
Change-Id: Ic83412cac668cbf7d2fb6d8d86c2c86eb354c9dd
Now that we're using Graphite/Grafana for the automated testing
regularly I should take advantage of the dashboard link indexing
feature.
Change-Id: I9098e9ff0ac81dbeb6b3c9aca82e54b59e7e0a80
Moving CI documentation to the modern location so that I can
patch in documentation for these new workloads etc.
Change-Id: I6d7fc4561d14d95f39df311cb2f76fd01fe215d1
Avoid dangerous file parsing and object serialization libraries.
yaml.load is the obvious function to use, but it is dangerous[1]:
because yaml.load can return arbitrary Python objects, it is unsafe
when you receive a YAML document from an untrusted source such as the
Internet. The function yaml.safe_load limits this ability to
simple Python objects like integers or lists.
In addition, Bandit flags yaml.load() as a security risk.
Thus I replace all occurrences of yaml.load() with yaml.safe_load().
[1]https://security.openstack.org/guidelines/dg_avoid-dangerous-input-parsing-libraries.html
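The replacement amounts to a one-word change at each call site; a minimal sketch with PyYAML (the document content below is illustrative):

```python
import yaml

untrusted_document = "uuid: 1234\nconcurrency: [8, 16, 32]"

# yaml.safe_load restricts parsing to simple Python types (dicts,
# lists, strings, numbers), whereas yaml.load with the default Loader
# can be tricked into constructing arbitrary Python objects.
data = yaml.safe_load(untrusted_document)
print(data)  # {'uuid': 1234, 'concurrency': [8, 16, 32]}
```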
Change-Id: Iaa2b7d9c880f3e20243bb2a9cbd8f9db29ecc267
This commit allows CI to run the collectd-openstack role directly.
To do this I take the relevant variables from group_vars/all.yml
and duplicate them into the defaults. If you're not making mix-and-match
playbooks with Browbeat roles and run the playbooks 'normally',
these defaults will never affect you in any way and will be overridden
by the group_vars/all.yml file without any interaction from you.
If, on the other hand, you want a playbook where you can toss a collectd
install wherever you need it, this makes life much easier. Also note
that the trick I'm using to get the name of the executing host's first
group could be used to make the collectd-openstack playbook run in
parallel across all hosts at once rather than one group at a time.
As one final tweak, this adds 10 minutes of retries to the epel install
role, specifically because two of its tasks rely on external internet
access and fail often enough to cause trouble. Previously I would retry
the whole collectd playbook when that happened; this is more efficient.
This restructures some of the metadata to better reflect how builds
are tracked up- and downstream. Upstream we have a delorean hash which
is used to track builds as they come down the pipeline; downstream we end
up with RHOS puddles. Ideally we will track the transition between these two,
since every puddle has a delorean hash but not the other way around. Right now
I'll settle for collecting the information.
Change-Id: I9a9030523ec5f586b7e696abe892e0c8167e9869
This updates the url building for the rhos pipeline builds
so that we grab the correct image ID when the promotion pipeline
attempts to promote a build.
Change-Id: I13da83b7ecdf4cd145e7ffc1ae2b9e6e0a9690f1
This commit enables Ansible linting and does some
minor refactoring to make existing Ansible roles
compatible with the new rules.
Several Ansible linting rules have been excluded to keep the number
of changes from being too onerous.
Also, a new script in ci-scripts is used to check every config file
included in the Browbeat repo for validity, using the template
Browbeat uses when it runs.
Here's a list of the new linting rules:
* Ansible tasks must have names
* When you use shell you must use become, not sudo
* Using become_user without become is not allowed
* If a repo is pulled, it must be pinned to a version or commit, not latest
* always_run is deprecated; don't use it
* Bare variables without {{ }} outside of when statements are deprecated; don't use them
* No trailing whitespace
* YAML checking, which catches big syntax errors but not less obvious ones
Change-Id: Ic531c91c408996d4e7d8899afe8b21d364998680
The internal pipeline image mirror has a slightly different
folder structure that doesn't include the /current-tripleo/ folder;
this was causing Ocata pipeline builds to fail.
Change-Id: I688218dda2c32222df3752bceb771427c2ba9425
As the latest version moves from master to Ocata, some folder structure
changes are required to make sure we get the right image. Eventually master
will point to 12 instead of 11.
Change-Id: Ie83cf1add5418c77054b67610c4f35318420b93e
See here for more details
https://review.openstack.org/#/c/437946/
The reason the old pin isn't working is that --bootstrap in quickstart
doesn't work like it should and forces reinstallation of the virtualenv,
so we carry over the prep internal script, which now uses extras itself.
Change-Id: I5479738565017941df71313f10816f2c9a4debea
This commit sets the variables for network creation to the
appropriate values. Since the network details are internal, this CI
variables file is being moved to internal git.
Change-Id: Ib55d2896991c74562f01e3cc56117af7110dc403
StatsD support works; dashboards are still in progress.
As documented here http://docs.openstack.org/developer/ironic/deploy/metrics.html
Ironic allows performance metrics to be dumped via StatsD, this installs StatsD
on the undercloud via epel, sets up a service for it, enables it, and updates
Ironic to use it.
Change-Id: I793d4d3211ecf6113bd4863a0672ea0cb0de9dd3
Reverting commit 2ba6da9022.
This needed much more testing before merging; my bad.
I strongly suggest we don't add this functionality
in a separate commit again; it doesn't make sense
to reorg and test all of this and then the pip commit
right after. Just add its functionality there.
Change-Id: Iee7aa439fbc077c3c71f67b625b67fc55a86f199
* If the results and log paths are not found, create them.
* Also resolve $FOO in Browbeat config paths. -jkilpatr
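The $FOO resolution above can be sketched with os.path.expandvars, followed by creating the missing results directory (the variable name and path below are illustrative, not Browbeat's real layout):

```python
import os

# Expand environment variables in a configured path, then create the
# results/log directory if it doesn't exist yet.
os.environ["BROWBEAT_RESULTS"] = "/tmp/browbeat-demo"  # illustrative value
path = os.path.expandvars("$BROWBEAT_RESULTS/results")
os.makedirs(path, exist_ok=True)
print(path)  # /tmp/browbeat-demo/results
```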
Change-Id: Ie5ec32386ca0d6db9177d9a3a55387b5b1e88a69
While the Grafana infra is having trouble it's causing tests to fail,
so until it's all squared away I'm going to make it non-mandatory.
Change-Id: Iaa779f19325bb5aef830bd93086995cb2beef3b5
https://review.openstack.org/#/c/403677 The above commit is currently
required for the Microbrow CI to work well. It should be clean enough
to be mainlined now, but no one wants to merge someone else's hack. Anyway,
this just moves the pin forward for the sake of staying close to upstream.
Change-Id: I1d455e3ecc490c62289be25eaec4bfd0a124f346
In some cases the user might want to run the same scenario with the
same concurrency value multiple times. Currently, in that case, Browbeat
exits with an ugly traceback, since the test name is based on the
concurrency, and having the same concurrency twice results in trying to
create a file that already exists.
This commit makes the code smart enough to recognize duplicate concurrency
values and name files accordingly. For example, having 16 twice as the
concurrency value for the ceilometer_list_meters scenario produces two different
log files: 'results/20160718-202008/CeilometerMeters/ceilometer_list_meters/
20160718-202008-browbeat-ceilometer_list_meters-16-iteration-0.log' and
'results/20160718-202008/CeilometerMeters/ceilometer_list_meters/
20160718-202008-browbeat-ceilometer_list_meters-16-2-iteration-0.log'.
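A minimal sketch of that deduplication (unique_log_name is a hypothetical helper, not Browbeat's actual function): when the concurrency-based file name already exists, append a counter before the extension.

```python
import os
import tempfile

def unique_log_name(directory, base):
    # Hypothetical helper: return a log path based on the
    # concurrency-derived name, appending -2, -3, ... when a file
    # with that name already exists in the results directory.
    candidate = os.path.join(directory, base + ".log")
    counter = 2
    while os.path.exists(candidate):
        candidate = os.path.join(directory, "{0}-{1}.log".format(base, counter))
        counter += 1
    return candidate

results_dir = tempfile.mkdtemp()
first = unique_log_name(results_dir, "browbeat-ceilometer_list_meters-16")
open(first, "w").close()  # simulate the first run writing its log
second = unique_log_name(results_dir, "browbeat-ceilometer_list_meters-16")
print(first)   # ends with ...-16.log
print(second)  # ends with ...-16-2.log
```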
Change-Id: Ie39c8c54ddf0435ff46975bfc4a5fd62995b2a32