Two policies are added to handle short- and long-term indices.
The lifetime of the indices can be configured using the
'elasticsearch.life' field in the browbeat-config.yml file:
shortterm indices are kept for 125 days
longterm indices are kept for 2 years
The policy and the policy-based templates can be created using the
'es-template' install playbook.
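A minimal sketch of the configuration described above; the key layout is assumed from the field name and the exact schema may differ:

```yaml
# browbeat-config.yml (sketch): choose the index lifetime policy
elasticsearch:
  life: shortterm   # indices kept for 125 days; 'longterm' keeps them for 2 years
```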
Change-Id: I0f4a4a9acc03092fd582ae4ff50f688850def953
This patch adds an option in browbeat-config.yaml to create annotations
on a Grafana dashboard for a Browbeat scenario. This is useful for
CI as it provides information in Grafana about which Browbeat scenario
was running at a particular time.
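A hedged sketch of what enabling this might look like in browbeat-config.yaml; the key name here is hypothetical, not the actual option:

```yaml
# hypothetical key name for illustration only; the real schema may differ
grafana:
  enabled: true
  create_annotations: true   # annotate dashboards with the running Browbeat scenario
```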
Change-Id: I83a9c74a56379da35ec9466a7492aecc2ee64ea9
Browbeat uses cirros as the default image for many scenarios.
In OSP17 we encountered issues with interface creation in cirros version 0.3.5.
This patch changes the default image from cirros 0.3.5 to cirros 0.5.1 (cirro5).
Change-Id: Idfb0aff3a6b79eef5a6f0252c0f948f3cd207427
This patch introduces the following changes:
1. Playbooks have been created to start collectd on different hosts.
2. A feature has been added that allows a user to start collectd containers
before running workloads and stop them after the workloads finish.
This helps minimize the space used for storing collectd data.
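A hypothetical configuration sketch for the start/stop behaviour described above; the key names are invented for illustration:

```yaml
# hypothetical keys; the real Browbeat schema may differ
collectd:
  enabled: true                # start collectd containers before the workloads run
  stop_after_workloads: true   # stop the containers once the workloads finish
```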
Change-Id: I7926884f461e97bc67453f46eef0121c46c7f19e
This commit:
1. Provides a playbook to install the filebeat agent on all
undercloud/overcloud nodes
2. Provides another playbook that adds the browbeat uuid to the
filebeat config file and starts filebeat during the browbeat run
3. Makes the corresponding changes in browbeat.py and browbeat/tools.py
to run the playbook that inserts the custom browbeat uuid into the
filebeat configuration
Change-Id: Idd2efaf931f4ff581db715a04adef738f81d281c
With the addition of a8b256cad6
and 3791692021 we can have stockpile
set the var, so that's one less var for the user to set.
Change-Id: Ic0e31549685d0f66fe09b4dc1694945f3071b873
In OSP15 the default containers are podman, so we need
to pass the container_cli environment variable as "podman"
so that the podman container configuration files on the
undercloud can be parsed.
Change-Id: I223c46baf4cf36596c8ff1e7468eb9fd1a0f1126
Signed-off-by: Charles Short <chucks@redhat.com>
Moving to stockpile instead of the traditional gather script. Soon
we will remove the old gather work entirely.
Change-Id: Ia37aabb4b110930cae0cc1d5af6d4d405e41d4f3
Sysbench is an open source benchmarking tool used to evaluate system
performance. This patch integrates the Sysbench CPU test into
the Rally workload.
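As a rough illustration of the kind of work sysbench's CPU test performs (verifying primes up to a limit), here is a minimal, self-contained Python sketch; it is not the Rally integration itself:

```python
import time

def count_primes(limit):
    """Count the odd primes up to `limit` by trial division,
    the same style of CPU-bound work sysbench's cpu test performs."""
    count = 0
    for n in range(3, limit + 1):
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                break
        else:
            count += 1
    return count

# Report throughput the way a benchmark would: work done over elapsed time.
start = time.time()
found = count_primes(10000)
elapsed = time.time() - start
print(f"primes found: {found} in {elapsed:.3f}s")
```

The real tool is driven via its CLI (e.g. `sysbench cpu run`); the sketch only mirrors the workload shape.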
Change-Id: I032d4bc5621d598c0c8ebc6b6367bdd50a15d929
* Mix and Match Workloads
* rerun_type - ['iteration', 'complete'] - allows you to rerun the
browbeat workload either completely or iteratively
* browbeat/config.py for loading and validating config schema
* tests/test_config.py to test config.py
Change-Id: I99ea87c01c17d5d06cf7f8d1eec3299aa8d483a5
* Use the built-in pykwalify CLI validator
* Use set -e in the loop inside tox.ini to ensure invalid configs fail CI
Change-Id: I251f7ead8393b97e93de03dc3b6accbdd9670092
This implements rsyslog -> elasticsearch logging, as well
as rsyslog forwarder -> rsyslog aggregator -> elasticsearch logging,
using the common logging template as a base and adding
dynamic detection of containerized services and log paths.
Services can be moved into and out of containers, and log files can be
added or removed; the log detector script will create a template
that reflects these changes dynamically.
Logging inherits the cloud name and elasticsearch info from the existing
group_vars variables, so there should be no additional setup work
beyond setting logging_backend: rsyslog and running either the install
playbook or the rsyslog-logging playbook.
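Per the description above, opting in is a one-line change in the existing group_vars:

```yaml
# ansible group_vars; cloud name and elasticsearch info are inherited
logging_backend: rsyslog
```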
Finally, additional variables can be passed into the deployment with
-e or simply by being in the ansible namespace; this way things like a
unique build ID can be templated into the logs automatically. I've
added support for browbeat_uuid, dlrn_hash, and rhos_puddle; others
should be trivial to add.
There are also additional tunables to configure whether logging instances
should be standalone (viable for small clouds) or rely on a
server-side aggregator service (more efficient for large deployments).
Disk-backed mode is another tunable that creates a variable
disk load that may be undesirable in some deployments, but if
collecting every last log is important it can be turned on, creating
a one- or two-layer queueing structure (depending on whether the
aggregation server is in use) in case of Elasticsearch downtime or overload.
If you want to see examples from both containerized and
non-containerized clouds, check out elk.browbeatproject.org's logstash
index.
Change-Id: I3e6652223a08ab8a716a40b7a0e21b7fcea6c000
Unfortunately connmon hasn't been used in a while and isn't well tested
on the latest releases. To prevent any more cruft issues, let's remove it
for now; if it becomes relevant again we can add it back in.
Change-Id: I0759d164621f3aac1c36dbe1fac49acd7dde97e3
* Rally context to insert delay for specific scenarios
* Boot a persisting instance with NIC and a volume
* Boot a persisting instance with NIC and associate a FIP
* Boot a persisting instance with a NIC and a volume and associate
a FIP
Change-Id: I3735495148ef88e69fc13be23fb53f29c184ed87
1. Make the overcloud credentials a configurable parameter in the
browbeat configuration
2. Make the venv format consistent in the browbeat configuration
Change-Id: I2fa05725c89e1bdb9487af70567efaf8ff19bd34
* Small fix for PerfKitBenchmarker results directory
* Update to v1.12.0 PerfKitBenchmarker
* Fix which requirements are needed inside perfkit-venv
Change-Id: Icfc497a6fe411691f8bf33c1a34e3c807c627344
This commit enables Ansible linting and does some
minor refactoring to make existing Ansible roles
compatible with the new rules.
Several Ansible linting rules have been excluded to keep the number
of changes from being too onerous.
Also, a new script in ci-scripts is used to check every config file
included in the Browbeat repo for validity, using the template
Browbeat uses when it runs.
Here's a list of the new linting rules:
* Ansible tasks must have names
* When you use shell, you must use become, not sudo
* Using become_user without using become is not allowed
* If a repo is pulled, it must be pinned to a version or commit, not latest
* always_run is deprecated; don't use it
* Bare variables without {{ }} outside of when statements are deprecated; don't use them
* No trailing whitespace
* YAML checking; catches big syntax errors but not the less obvious ones
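An illustrative task (service name and variable invented) that satisfies several of these rules at once: it is named, uses become rather than sudo, and only uses a bare variable inside a when statement:

```yaml
# example task conforming to the lint rules above; names are illustrative
- name: Restart the collectd service
  service:
    name: collectd
    state: restarted
  become: true
  when: collectd_enabled | bool
```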
Change-Id: Ic531c91c408996d4e7d8899afe8b21d364998680
+ Adjust boot and persist instances from 200 to 1000 instances
+ Boot and persist instances attached to a network (1000 instances)
+ Neutron context plugin to persist network over entire browbeat run
+ Added docs for the plugins
Change-Id: I58802218f1e2201063cf9ec3f82efa71b28ac1a0
Working:
* Benchmark output makes it to browbeat results directory
* v1.7 PerfKitBenchmarker is installed
* Many benchmarks are tested, however some still
require testing/configuration
* Examples are provided in conf/perfkit-benchmarks.yml
Change-Id: I62dec86fe7e8c6f71b7c5654abfd4b2079904e4b
This commit adds logic to run Metadata at the beginning of every browbeat
run through browbeat.py. Also some imports have been fixed.
Co-Authored-By: Joe Talerico <jtaleric@redhat.com>
Change-Id: Ibc13a64710209b25a755f606ea7fddc80232cbc4
+ Adding status_code to Kibana tasks to check for HTTP 201 Created
+ Removes existing Searches/Dashboards/Visualizations before uploading
+ Templatized Keystone Version Comparison Dashboard Visualizations
+ Dashboards use latest metadata format/structure
+ Add version.json to config files and provide example version.json
+ Added a line reminding to update version.json in the metadata directory
+ s/browbeat/Browbeat/ when referenced not as file
Change-Id: If5f200b9d4557c6ef5a13ed1880cd46e8172ac86
+ Added all Rally Ceilometer scenarios and added minimal ceilometer
config to ceilometer-minimal.yaml
+ BrowbeatPlugin: Nova boot and persist scenario which boots
persisting extra tiny instances
+ Two additional Browbeat Configs that use nova boot and persist
plugin to stress telemetry services over time
Change-Id: Ib52c60559c974c2e63478305a610df6afca5f087
Adding some common scenarios for stressing glance.
+ Create and list images
+ Create and delete images
+ Create image and boot instances off it
+ List images by placing image in context
Also adding a separate configuration file for glance only.
Change-Id: I5327e9c65cb1b045f686e442e88b3e71f73ac5cf
This change adds the cloud name to the indexed data. This allows you
to filter out other clouds' results if you have multiple clouds
feeding browbeat result data into ElasticSearch.
Change-Id: I92c764af115736380660157ad4da54f737e1db98
+ Move metadata to its own .gitignore-d directory
+ Add elasticsearch configuration to other browbeat config files in conf/
+ Adjustments to base .gitignore to ignore ansible retry files and removing legacy pbench-hosts-file
Change-Id: I7e3205f070e5f66e508cb486ae2306d28d4982bd
+ Use a specific version of perfkit (v1.4.0).
+ Ensure names are unique to avoid name conflict with results directory.
+ Adjust the validator mapping
+ Reduce number of time stamp variables for readability
Change-Id: Iad9e4417ff0800985914a57dd3d00bfc44dd9c07
+ Anonymous access on grafana allows pngs to be rendered without authentication
+ Fixed an ansible 2.0 deprecation warning.
+ Specify ansible_python_interpreter to avoid following issue:
- https://github.com/ansible/ansible/issues/13773
Change-Id: I2f68d8e9ad5f9f39befb05a023cc68b1de754e94
+ Adjusted workloads to positional args; it is flexible enough to take:
- ./browbeat.py ---> Runs all workloads in order: perfkit rally shaker
- ./browbeat.py all ---> Runs all workloads in order: perfkit rally shaker
- ./browbeat.py perfkit rally ---> Runs workloads in order: perfkit rally
- ./browbeat.py shaker rally perfkit ---> Runs workloads in order: shaker rally perfkit
+ --debug now displays debug messages on stdout in addition to the previous locations it logged to
+ --setup or -s takes a config; defaults to browbeat-config.yaml (same as before). Examples:
- ./browbeat.py -s browbeat-complete.yaml rally
- ./browbeat.py -s conf/browbeat-keystone-complete.yaml --debug
+ Use __init__.py to allow cleaner importing of files under lib/
+ Remove ansible version restriction in requirements.txt
+ Separate connmon config from browbeat config for clarity.
Change-Id: Ifb74e5868be128fb378c7b052ba5a1bea46b4dff
+ Adding more benchmarks + Grafana snapshots
+ Benchmarks are now in a list for ordering
+ Adding connmon
Change-Id: I9fa4f5d31f9575ad7636218ae6091c8e11343410