This patch introduces the following changes.
1. Playbooks have been created to start collectd on different hosts.
2. A feature has been added that allows a user to start collectd containers
before running workloads, and stop the collectd containers after running
the workloads. This will help us minimise the space used for storing
collectd data.
Change-Id: I7926884f461e97bc67453f46eef0121c46c7f19e
This commit
1. Provides a playbook to install the filebeat agent on all
undercloud/overcloud nodes
2. Provides another playbook that adds the browbeat uuid to the
filebeat config file and starts filebeat during browbeat run
3. Corresponding changes in browbeat.py and browbeat/tools.py
to run the playbook to insert custom browbeat uuid in the
filebeat configuration.
Change-Id: Idd2efaf931f4ff581db715a04adef738f81d281c
Move the perfkit, rally and shaker workloads to their own submodule
in the browbeat namespace, so we don't pollute the top-level namespace
and make the code a bit more organized.
Change-Id: Ib833e86e71e595d336c27b08774f164e8f8c49bd
Signed-off-by: Charles Short <chucks@redhat.com>
This fixes several flake8 failures and integrates flake8 into linters
tox environment.
Rule W504 is disabled because at this moment there is no known way
to avoid flapping between W504 and W503.
This allows us to retire the openstack-tox-pep8 job because the more
generic openstack-tox-linters includes it. Still, developers will
be able to conveniently run only pep8 if they want.
Change-Id: I7da0f6f09a533dd1c4dc303029e8c587bc200f66
Right now we depend on Kibana to do our comparisons. This will give the
user a CLI mechanism to compare two different browbeat runs.
+ Small fix to browbeat metadata comparison to not query _all
+ Changing how the metadata comparison is displayed
Change-Id: I3881486100c91dcf3cc4eeeb4ddfa532ff01a7f1
This commit adds functionality for a user to exit out of a long-running
browbeat configuration by using SIGINT. On catching a SIGINT, browbeat
completes the current workload and then exits.
To keep the code lightweight, browbeat termination on SIGINT happens
only at a workload level. The current running workload (in case of
rally, the scenarios and concurrencies associated with the workload) is
finished.
Change-Id: Iaa1f10233dabd053293327f42f8bd1320f1af95d
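The workload-level termination described above can be sketched as follows. This is an illustrative sketch, not Browbeat's actual implementation; the class and method names are hypothetical. The key idea is that the signal handler only records the request, and the runner checks the flag between workloads so the current one always finishes.

```python
import signal


class GracefulExit:
    """Catch SIGINT but exit only between workloads (illustrative sketch)."""

    def __init__(self):
        self.terminate = False
        signal.signal(signal.SIGINT, self._handler)

    def _handler(self, signum, frame):
        # Do not raise here; just record the request so the current
        # workload (and its scenarios/concurrencies) can run to completion.
        self.terminate = True

    def run(self, workloads):
        completed = []
        for workload in workloads:
            completed.append(workload())  # current workload always finishes
            if self.terminate:
                break                     # exit before starting the next one
        return completed
```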
After code was added to mix and match workloads, the stats of executed,
passed and failed tests kept by each workload class have become
inconsistent. It makes sense to only maintain a count of the total
number of tests executed/passed/failed rather than per workload.
Change-Id: I4f84f4580ac29206e7ce660222a2a396e419cac8
* Mix and Match Workloads
* rerun_type - ['iteration', 'complete'] - allows you to rerun the
browbeat workload completely or iteratively
* browbeat/config.py for loading and validating config schema
* tests/test_config.py to test config.py
Change-Id: I99ea87c01c17d5d06cf7f8d1eec3299aa8d483a5
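The load-and-validate step that browbeat/config.py performs might look roughly like this. The required keys and error messages here are hypothetical, not Browbeat's actual schema; this only illustrates the shape of a validation pass over an already-parsed config dict.

```python
# Hypothetical minimal schema; the real Browbeat schema is richer.
REQUIRED_KEYS = {"browbeat", "workloads"}


def validate_config(config):
    """Validate a parsed config dict; raise ValueError on problems."""
    missing = REQUIRED_KEYS - set(config)
    if missing:
        raise ValueError(
            "missing required sections: %s" % ", ".join(sorted(missing)))
    if not isinstance(config["workloads"], list) or not config["workloads"]:
        raise ValueError("'workloads' must be a non-empty list")
    for position, workload in enumerate(config["workloads"]):
        if "name" not in workload:
            raise ValueError("workload %d has no 'name'" % position)
    return config
```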
This change gives the user more output to determine what the
differences between Browbeat-uuids are.
+ browbeat version information
+ newlines to help the cluttered output
Change-Id: I3dfe3f89c07c615addbacd035c512da952faf624
Closes-bug: 1702546
Method to query ElasticSearch for a specific set of browbeat_uuids and
compare the metadata to determine if there are differences.
This work will also tell the user if an option or value is missing.
Eventually, I would like to see us query Elastic for collectd data to
see if there has been CPU/Memory/DiskIO increases during a specific
Browbeat run -- this is a longer-term goal.
Example of this :
https://gist.github.com/jtaleric/ffc1508eba3cba9515ca24cfcf23583c
Change-Id: Ie65e2c3d505aa2f19ba10109276ba982ee4ab67b
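The comparison itself can be sketched as a recursive diff of two metadata dicts, reporting options missing from either run. This is a simplified illustration, not the actual Browbeat implementation.

```python
def diff_metadata(left, right, path=""):
    """Return human-readable differences between two metadata dicts."""
    diffs = []
    for key in sorted(set(left) | set(right)):
        where = "%s.%s" % (path, key) if path else key
        if key not in left:
            diffs.append("%s missing from first run" % where)
        elif key not in right:
            diffs.append("%s missing from second run" % where)
        elif isinstance(left[key], dict) and isinstance(right[key], dict):
            # Recurse into nested option groups.
            diffs.extend(diff_metadata(left[key], right[key], where))
        elif left[key] != right[key]:
            diffs.append("%s: %r != %r" % (where, left[key], right[key]))
    return diffs
```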
Yoda is a Browbeat workload for Ironic and TripleO
It can perform and monitor the following
* Introspection bulk or batch
* Cloud deployment with varying node types and numbers
* Baremetal node import timing (actually done during introspection tests)
Metrics that are gathered include
* Time to pxe
* Time till pingable
* Success/Failure rate and times
* Overcloud Metadata after each deploy
Potential issues
Change-Id: I89809cc35db2cfaa39f8ede49ec853572c0e468e
At the end of a Browbeat run, loop through all the tests and their results
to check if any failed to index; if so, set the return code to 1.
This way tests will complete even if indexing fails but scripts and other
software checking the return code will fail out properly instead of allowing
data to be lost silently.
Change-Id: If1a7b2b41df35abbb8a6cc63d6003c1ae36664dd
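A minimal sketch of the return-code logic described above. The result structure (a list of dicts with an `indexed` flag) is an assumption for illustration; the real Browbeat result objects may differ. Note the loop does not short-circuit, so every failure can still be reported.

```python
def final_return_code(results):
    """Return 1 if any test failed to index, 0 otherwise (sketch)."""
    return_code = 0
    for result in results:
        if not result.get("indexed", False):
            # Keep looping so every indexing failure can be logged;
            # data is never lost silently, callers see rc == 1.
            return_code = 1
    return return_code
```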
This adds a very simple check for failed tests before Browbeat's exit;
if failed tests are found, Browbeat will exit with a return code of one.
This will provide Browbeat CI failure when tests fail without interrupting
ongoing tests if a single one fails.
Change-Id: I5382f684fe03d85692a275dc5c03a136004f34d9
This work is to take a results directory and find all the
Rally/Shaker/PerfKit JSONs and create the Elasticsearch JSONs that can be
pushed to Elasticsearch.
This is also a minor refactor of the Rally lib, changing how we push
results into Elasticsearch. This creates a generic function so we can
have file- and/or task-id-based metrics created.
+ (sai) Fix way how we crawl for files
+ (sai) Exclude already postprocessed files
+ (sai) Fix filenaming of dumped postprocessed files
+ (sai) flake8
Co-Authored-By: Sai Sindhur Malleni <smalleni@redhat.com>
Change-Id: I5ca8877f26e889856c9773b51ba38f24562a80af
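The "crawl for files / exclude already postprocessed" fixes above can be sketched like this. The `.es.json` suffix used to mark postprocessed files is a hypothetical convention for illustration, not necessarily what Browbeat writes.

```python
import os


def find_result_jsons(results_dir, processed_suffix=".es.json"):
    """Walk results_dir and yield raw result JSONs, skipping files
    already postprocessed (identified here by a hypothetical suffix)."""
    for root, _dirs, files in os.walk(results_dir):
        for name in files:
            if name.endswith(".json") and not name.endswith(processed_suffix):
                yield os.path.join(root, name)
```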
Reverting Commit: 2ba6da9022
This needed much more testing before merging, my bad.
I strongly suggest we don't add this functionality
in a separate commit again; it doesn't make sense
to reorg and test all of this and then the pip commit
right after. Just add its functionality there.
Change-Id: Iee7aa439fbc077c3c71f67b625b67fc55a86f199
* If the results and log paths are not found, create them.
* Also resolve $FOO in browbeat config paths -jkilpatr
Change-Id: Ie5ec32386ca0d6db9177d9a3a55387b5b1e88a69
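The two bullets above amount to a small path-resolution helper; a minimal sketch using the standard library (the function name is illustrative):

```python
import os


def resolve_path(path):
    """Expand $FOO / ${FOO} and ~ in a configured path, then create
    the directory if it does not exist (illustrative sketch)."""
    resolved = os.path.expanduser(os.path.expandvars(path))
    if not os.path.isdir(resolved):
        os.makedirs(resolved)
    return resolved
```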
This commit adds logic to run Metadata at the beginning of every browbeat
run through browbeat.py. Also some imports have been fixed.
Co-Authored-By: Joe Talerico <jtaleric@redhat.com>
Change-Id: Ibc13a64710209b25a755f606ea7fddc80232cbc4
Handle the case where the metadata file list is empty
This commit specifically fixes the code to handle the case where metadata files
are configured in the browbeat configuration file but are not present in the
metadata directory. Previously, since the combine_metadata method returned a
boolean value of false in such cases and the index_result method treated the
result as a dictionary, a traceback was seen. With this commit, combine_metadata
stops returning false and instead exits the run if an intended metadata file is
absent.
+ Changing how we return result so that the code doesn't err out even when
the metadata file list is empty in the config file
+ Checking in browbeat.py too
Change-Id: I52712beaa2dec6209394a2f3ef605d5a9a13f5cb
With directory names, graphs and everything else in UTC, it becomes hard to
correlate things if the logger is not in UTC.
Change-Id: I8d1f88103d6eb413b11046cf997e146ae9820731
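Switching a Python logger to UTC is essentially a one-line change to the formatter's time converter; a minimal sketch (logger name and format are illustrative):

```python
import logging
import time


def utc_logger(name="browbeat"):
    """Build a logger whose timestamps are rendered in UTC (sketch)."""
    formatter = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    formatter.converter = time.gmtime  # render times in UTC, not local time
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger
```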
In some parts of the code we import objects.
The OpenStack style guidelines recommend importing only
modules.
http://docs.openstack.org/developer/hacking/#imports
Change-Id: Icae231f06f3c4fd15256f06a464d3ba3e2845e33
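The guideline amounts to qualifying names at the call site instead of pulling them out of their module; a small before/after illustration (the helper function is hypothetical):

```python
# Discouraged by the guideline: importing an object hides its origin
# and makes it harder to monkeypatch in tests.
#   from os.path import join

# Preferred: import the module and qualify the name where it is used.
import os.path


def results_file(directory, name):
    """Join a results directory and a file name (illustrative helper)."""
    return os.path.join(directory, name)
```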
Currently we use UTC in some places and local time in some. Changing everything
to UTC to avoid confusion.
Change-Id: I19bab4e41870bbb2fcb68f1dc6c6b9069271c9f9
Creates a UUID at the start of browbeat, logs it at (start/end) and
adds it to each indexed document. This should make it easier to
remove large runs if they are invalid, or to display only a single run
from the ELK/EFK stack.
Change-Id: I8bbc1cda522d609cf27bbe88dce1d74a96afaa93
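A minimal sketch of the per-run UUID mechanism described above. The field name `browbeat_uuid` matches the term used elsewhere in this log; the helper names are illustrative.

```python
import uuid


def generate_run_id():
    """Create the per-run UUID logged at the start and end of a run."""
    return str(uuid.uuid4())


def tag_document(document, run_id):
    """Attach the run UUID to a result document before indexing, so a
    whole run can be filtered or deleted in Elasticsearch by UUID."""
    document["browbeat_uuid"] = run_id
    return document
```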
Having the workload classes inherit from the abstract base class will
enforce certain methods and help the summarized output
+ Moving logger here
+ Implementing functionality to report max test time
+ Browbeat report in yaml format
+ Renaming workload base method
+ Removing an unnecessary method
+ Formatting methods in WorkloadBase
+ autopep8
Change-Id: I090a863b4b00068a48cf5d914c337e15fd5739f5
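The inheritance scheme can be sketched with Python's abc module. The method names here are illustrative, not necessarily Browbeat's exact interface; the point is that the base class enforces the methods each workload must provide.

```python
import abc


class WorkloadBase(abc.ABC):
    """Abstract base class every workload class inherits from (sketch)."""

    @abc.abstractmethod
    def run_workload(self, config):
        """Execute the workload described by config."""

    @abc.abstractmethod
    def update_total_tests(self, count):
        """Feed test counts into the summarized report."""


class ExampleWorkload(WorkloadBase):
    """Toy concrete workload showing the enforced interface."""

    def __init__(self):
        self.total = 0

    def run_workload(self, config):
        return "ran %s" % config["name"]

    def update_total_tests(self, count):
        self.total += count
```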
+ Adjusted workloads to positional args; flexible enough to take:
- ./browbeat.py ---> Runs all workloads in order: perfkit rally shaker
- ./browbeat.py all ---> Runs all workloads in order: perfkit rally shaker
- ./browbeat.py perfkit rally ---> Runs workloads in order: perfkit rally
- ./browbeat.py shaker rally perfkit ---> Runs workloads in order: shaker rally perfkit
+ --debug now displays debug messages in stdout in addition to previous locations it logged
+ --setup or -s to take a config, Defaults to browbeat-config.yaml (Same as before), Examples:
- ./browbeat.py -s browbeat-complete.yaml rally
- ./browbeat.py -s conf/browbeat-keystone-complete.yaml --debug
+ Use __init__.py to allow cleaner importing of files under lib/
+ Remove ansible version restriction in requirements.txt
+ Separate connmon config from browbeat config for clarity.
Change-Id: Ifb74e5868be128fb378c7b052ba5a1bea46b4dff
+ Adding more benchmarks + Grafana snapshots
+ Benchmarks in list for ordering now.
+ Adding connmon
Change-Id: I9fa4f5d31f9575ad7636218ae6091c8e11343410
Can now specify the workload the user wants to run or simply run all
Added enable option for rally workloads as well
+ (akrzos) Handle empty list as 'all' workloads.
+ (akrzos) Few tweaks to logging
Change-Id: Ie5ae8444408f79d106eabfbf0ad33fdf7819d8f4
Signed-off-by: Sindhur <smalleni@redhat.com>
Modified Install Playbook and added playbook for building image
+ flavor and port can now be modified
+ error checking and logging
+ Result Tracking
+ venv is configurable
+ null in config file to deploy all agents at once
+ removing udp in scenarios as known to cause scale problems
+ (akrzos) streamlining shaker install
+ (akrzos) small changes to build_image.yml
+ (akrzos) Small config changes
Change-Id: I2d5c4a1503d5181d5c28597a2b764ce4b775d7b3
Signed-off-by: Sindhur <smalleni@redhat.com>
We have a solution that works better internally and externally. Pbench
had a set of challenges.
(akrzos) Moved pbench ansible install playbook into pbench folder
(akrzos) cleaned up other garbage
(akrzos) Fix merge conflict
Change-Id: Ie0a59d799846c3c9f5970c1a3b83ae50ebced0b8
Store a debug log in ./log/ to help the user determine any possible
issues that might pop up. The stdout (info) might not provide enough
information.
+Added Logic for Rally
Change-Id: I9ca1f42c061ae912fde9af414ada0d328615f458
Learning the Ansible API for this migration. Very simple script
that will use the browbeat checks
+ Added Pbench start/stop 01/11/16
+ Moved ansible to config 01/11/16
+ Adding ansible hosts option 01/11/16
+ Connmon added (start/stop) still nothing with results 01/12/16
+ New Rally YAML format... (nova example) 01/12/16
+ Create results directory 01/12/16
+ Created lib with classes 01/13/16
+ Updated Scenarios 01/14/16
+ Updated other workloads to new format 01/15/16
+ Switched to dict get method 01/15/16
+ Removed pyc files and updated 01/15/16
+ updated genhost file 01/15/16
+ Update Ansible for connmon finished pbench work 01/15/16
+ Catch if user tries to run without Ansible or Ansible2 01/26/16
+ Minor changes 01/26/16
+ Bug fix... 01/27/16
+ (akrzos) added keystone yamls and browbeat-complete.yaml
+ Moved BrowbeatRally to Rally and broke connmon out of Tools
+ (akrzos) Implemented per Rally test scenario task args.
+ Updated Docs, removed old browbeat.sh
+ (akrzos) Cleaned up lib/Rally.py and added cinder scenarios to browbeat configs.
+ Fix Connmon install issue
+ (akrzos) Added parameters to neutron task yamls
+ (akrzos) Changed connmon to stop logging immediately after rally task completes.
Change-Id: I338c3463e25f38c2ec7667c7dfc8b5424acba8c2