Commit Graph

76 Commits

Author SHA1 Message Date
Sanjay Chari da7b97c08a Google Sheets: Add support for run comparison
This patch adds support to pass multiple Rally
result json files as arguments to the Google Sheet
generation script. This helps in comparing the results
of different runs.
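
As a minimal, illustrative sketch (the argument names are assumptions, not the
script's actual interface), a generation script can accept several Rally result
files for comparison like this:

    import argparse
    import json

    parser = argparse.ArgumentParser(description="Generate Google Sheets from Rally results")
    parser.add_argument("json_files", nargs="+",
                        help="one or more Rally JSON result files to compare")
    args = parser.parse_args()

    # load every run so each one can be rendered side by side for comparison
    runs = [json.load(open(path)) for path in args.json_files]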

Change-Id: Ic833b10b3fb94e8d5771a5ab3235576fedebfec6
2022-08-08 16:45:40 +05:30
Zuul d3133b8b12 Merge "Add SLA failures worksheets to Rally json Google Sheets" 2022-06-02 11:38:34 +00:00
rajeshP524 6699909997 Add SLA failures worksheets to Rally json Google Sheets
This patch adds a new feature to the Rally json Google Sheets Python script
to generate additional worksheets for atomic actions that do not pass SLA criteria.

Change-Id: Ic56672b78ac844f2deffdd5b1d8f5e498943beb3
2022-06-02 16:50:56 +05:30
Sanjay Chari a71506b31c Add playbook to clean up sqlalchemy collectd configuration
Browbeat adds configuration for sqlalchemy collectd to the configuration files
of many OpenStack API containers on controller hosts. This causes issues in the
next overcloud deployment. This patch adds a playbook to clean up sqlalchemy
collectd configuration.

Closes-Bug: #1975693
Change-Id: I2574676aa444f76e11cec91d9e0e2a66282301ac
2022-06-02 13:21:28 +05:30
Asma Syed Hameed 2417bb0178 Update Docs
This patch updates the docs

Change-Id: I103eaed34bcae3745667ae4ef11d87abfc844996
2022-03-09 17:03:17 +05:30
Sanjay Chari a0d555812b Multiple worksheets for Rally json Google Sheets
This patch introduces multiple worksheets within one Google sheet
for multiple atomic actions.
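
A rough sketch of the per-action worksheet idea, using gspread purely for
illustration (the actual script's client library, spreadsheet name, and data
layout may differ):

    import gspread

    # assumed credentials file and spreadsheet name, for illustration only
    gc = gspread.service_account(filename="service_account.json")
    sh = gc.open("rally-results")

    atomic_action_durations = {"nova.boot_server": [1.2, 1.4, 1.3]}  # assumed parsed Rally data
    for action, durations in atomic_action_durations.items():
        # one worksheet per atomic action inside the single spreadsheet
        ws = sh.add_worksheet(title=action, rows=len(durations) + 1, cols=2)
        ws.update(range_name="A1",
                  values=[["iteration", "duration"]] +
                         [[i + 1, d] for i, d in enumerate(durations)])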

Change-Id: Ia6542cd7646a87c53cd633bde070939729a7ef9b
2022-01-28 13:17:24 +05:30
Sanjay Chari b0a101bdc1 Rally json google sheets
This patch introduces a script that takes the
Rally json report as input and creates a Google sheet
with duration data for an atomic action.

Change-Id: Ia3a116da0a39f2e3754f79970d997d1bf87eb167
2022-01-07 12:07:33 +05:30
Sanjay Chari bf5a1f3657 Enhance Rally HTML reports
This patch introduces the following changes.
1. We often have multiple executions of the same atomic action in
a single rally iteration. Existing rally charts do not show duration
for duplicate actions in an iteration accurately. This patch introduces
line charts for each occurrence of a duplicate atomic action for each
instance, by passing it as additive data.
2. This patch also adds a per iteration stacked area chart that shows
data per iteration for each occurrence of all atomic actions in the iteration.
3. This patch also adds a duration line chart for each resource created by
Rally.

Co-authored-by: venkata anil <anilvenkata@redhat.com>
Change-Id: I44dafad69cdbcd6db7c8fd9148f1b35d89924e03
2021-12-08 17:55:14 +05:30
Zuul 1e7fbe05d6 Merge "rally cleanup manually" 2021-09-29 11:35:09 +00:00
venkata anil 8c094190ed rally cleanup manually
Rally cleans up resources automatically at the end of testing.
However, we disable cleanup in rally sometimes during testing
and later try to manually delete these resources.

Cleaning up the resources at scale is very time-consuming,
so we came up with a Python process to speed up this activity.

To clean up:

$ source browbeat/.rally-venv/bin/activate
$ source ~/overcloudrc
$ python browbeat/rally_cleanup.py

It spawns a user-specified number of processes for cleanup.
Each process cleans up the provided resource (for example, a given
network). We are using the Python multiprocessing Pool with map,
which distributes the list of resources (e.g., networks) to the processes
for deletion.

We also add retries to check and delete the leftover resources
from previous iterations.
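
A minimal sketch of this approach (the client calls, name filter, and
concurrency are illustrative; rally_cleanup.py is the authoritative
implementation):

    from multiprocessing import Pool

    import openstack

    def delete_network(network_id):
        # one openstacksdk connection per worker call; "overcloud" is an assumed clouds.yaml entry
        conn = openstack.connect(cloud="overcloud")
        conn.network.delete_network(network_id, ignore_missing=True)

    def cleanup_networks(concurrency=16, retries=3):
        conn = openstack.connect(cloud="overcloud")
        for _ in range(retries):
            # the "rally" name filter is an assumption about how Rally names resources
            network_ids = [n.id for n in conn.network.networks() if "rally" in n.name]
            if not network_ids:
                break
            with Pool(concurrency) as pool:
                pool.map(delete_network, network_ids)  # distribute deletions across processes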

Change-Id: Ifeb08e6eb6b8251422ba5777bda15c6701b7fefe
2021-09-29 15:50:13 +05:30
Asma Syed Hameed 73380ad35d Fix typos in doc
Change-Id: I7072a4aa3e21d9b50ebbb53eb801945a254c304b
2021-09-29 13:11:12 +05:30
Sanjay Chari d4bb0a95a7 Dynamic workloads: Support for multiple OCP clusters
This patch adds support for multiple OpenShift clusters in
kube-burner dynamic workloads.

Change-Id: I15a629c43fe359e40a28a75493fe27483e31d1d2
2021-09-15 16:17:22 +05:30
Sanjay Chari f82e467c2e Clone e2e-benchmarking repo during browbeat installation
This patch adds an option to clone the e2e-benchmarking repo during
browbeat installation, so that we can use it to run shift-on-stack
workloads in browbeat.

Change-Id: I56f0b7e2e1e7d214cbed10b06f8c8d7a0f0f2395
2021-09-08 14:13:04 +05:30
Sanjay Chari 199feefecb Updated undercloud installation instructions
Change-Id: Ie11f8dcb737680a087aebdf3336923629425f388
2021-07-08 17:13:04 +05:30
Zuul 28a42fbe19 Merge "Build the custom images for workload-plugin if enabled" 2020-10-23 10:50:43 +00:00
Asma Syed Hameed 02e56b8258 Automate collectd_container var
This patch automates the collectd_container var so that it is set to true if running on OpenStack
version Stein or later and to false for earlier OpenStack versions, which was done manually earlier.

Change-Id: Idd61f060a088caa1eae9d2a80e3c7ceb5debc424
2020-10-23 08:55:56 +00:00
Asma Syed Hameed 881a60196b Build the custom images for workload-plugin if enabled
Change-Id: I864f288ad9d680e696d06cc9086fb9ab01430a0c
2020-08-12 10:53:52 +05:30
Lucas H. Xu 219a5122fd Add new support for Prometheus/Grafana/Collectd
This patch adds support for using Prometheus with
Grafana and Collectd. It automates most of the panels within the
OpenStack system performance dashboard.

To test it:

$ ansible-playbook -i hosts install/grafana-prometheus-dashboards.yml

And make sure to add the grafana-api-key in group_vars/all.yml

Co-authored-by: Asma Syed Hameed <asyedham@redhat.com>
Co-authored-by: Sai Sindhur Malleni <smalleni@redhat.com>

Change-Id: Icb9fa058324165c0d304dc1dcf2dd843662307cf
2020-08-06 02:20:32 +00:00
Sai Sindhur Malleni fbf309baee Add common logging with filebeat
This commit
1. Provides a playbook to install the filebeat agent on all
   undercloud/overcloud nodes
2. Provides another playbook that adds the browbeat uuid to the
   filebeat config file and starts filebeat during browbeat run
3. Corresponding changes in browbeat.py and browbeat/tools.py
   to run the playbook to insert custom browbeat uuid in the
   filebeat configuration.

Change-Id: Idd2efaf931f4ff581db715a04adef738f81d281c
2020-04-03 19:27:24 +00:00
Charles Short 0fa8454fd1 Remove PerfKitBenchmarker
No longer supported.

Change-Id: Iae8ff4e0a1f55af67b49df16e8ecf276877f2525
Signed-off-by: Charles Short <chucks@redhat.com>
2019-11-20 14:54:43 -05:00
Charles Short 2ba39b30ab Refresh collectd for "train"
This commit does several things at once:

- Use ansible_distribution_major_version to detect which version of the
  EPEL repository to use, so we don't have to hard-code the URL for either
  epel7 or epel8.
- Remove "stein" workaround for the collectd-openstack role. The "stein"
  workaround has been removed in favor of running the collectd daemon
  in a podman container.
- Drop opendaylight support for collectd since it is no longer
  supported.
- Add the collectd playbook so we can run collectd in a centos 7
  container going forward for "train". This commit still needs
  to be tested on "stein" but it will probably work anyway.
- Add browbeat-containers to tox.ini for flake8
- Simplify detection of docker or podman for older versions of OSP.
(sai)
- Fixed typo from compute_compute to collectd_compute that caused failures on computes
- clear graphite_host in install/group_vars/all.yml
- Move container Dockerfiles into the browbeat tree
- Conditionally copy required Dockerfiles to node instead of git clone
- Fix up some log file paths
- Use Docker/Podman depending on release
- Provide single interface(collectd.yml) which has container and baremetal playbooks
- Introduce variable collectd_container in install/group_vars/all
- remove unneeded selinux relabelling (already running as privileged) when running the container
- remove unneeded hostfs mount
- collectd container logs to file instead of STDOUT for easier debug
- add collectd-ping package to collectd-openstack Dockerfile
- Improve docs to reflect changes
- dynamically set rabbitmq and swift paths as well for tail plugin

Co-Authored-By: Sai Sindhur Malleni <smalleni@redhat.com>

Change-Id: I627a696f6f1240d96a0e1d85c26d59bbbfae2b1b
Signed-off-by: Charles Short <chucks@redhat.com>
Signed-off-by: Sai Sindhur Malleni <smalleni@redhat.com>
2019-11-05 08:08:37 -05:00
Sai Sindhur Malleni 94d0d29f52 Remove checks as they are not maintained anymore
Change-Id: Iad45c255a62c2eac0c1feaaa5fa60604508a72cb
2019-08-26 19:59:44 -04:00
Charles Short 00f04b0dff Disable PerfKitBenchmarker on RHEL8
Disable PerfKitBenchmarker on RHEL8 since
it does not reliably work with Python 3.

Change-Id: I04a1805db2a04194d8bb77d31b310e9539273781
Signed-off-by: Charles Short <chucks@redhat.com>
2019-07-08 13:56:03 -04:00
Charles Short 30fe497de4 Fix doc theme
Fix doc build failures due to newer oslosphinx.

Change-Id: I010990ebc067d8863951d66fedcdeeab9b07b871
Signed-off-by: Charles Short <chucks@redhat.com>
2019-03-29 14:33:37 -04:00
Zuul 189da7eb04 Merge "Removing ELK Server Install" 2019-02-27 21:16:53 +00:00
agopi 4969f31ce1 Update ansible-lint execution
Updated ansible-lint to run via pre-commit only on ansible files.

Moved config file to its standard location, repository root, which
simplifies synchronization and usage.

Contains bumping ansible-lint to the current version, which also required
adding a few more rule excludes. These excludes are going to be removed
one by one in follow-up changes. This gradual approach allows us to
improve code style without endless merge conflicts.

Config settings mostly based on those used by tripleo repos.

Bumping linters can now be done by running 'pre-commit autoupdate'.

Pre-commit always locks versions so there is no chance that a newer
linter (ansible-lint) would break CI.

Some documentation can be found at https://github.com/openstack/tripleo-quickstart/blob/master/doc/source/contributing.rst
and applies mostly to any project using pre-commit.

Co-Authored-By: Sorin Sbarnea <ssbarnea@redhat.com>
Change-Id: I05eb561c4e353b5fe0bc7c6d3ab2f8ea6c6ea2f4
2019-01-29 18:36:59 +00:00
agopi abcff17b82 Browbeat to use fs001 of stockpile and minor bugfix in prescribe
1. Browbeat shall make use of fs001, which is the targeted version
   of stockpile, so it won't run all roles against all hosts.
2. Also fixed the bug where node_name wasn't added to the dictionary when
   prescribe first hits data that was gathered from outside the config file.

Change-Id: Ieac2c090713b307b4971aee3fd4d5b24f14b9fc9
2019-01-23 15:53:20 +00:00
Joe Talerico 2e3ec425ad Removing ELK Server Install
This was built to help users stand up infrastructure. However, we no
longer maintain these playbooks, and there are other solutions out there
to help users deploy ELK.

Change-Id: I001ca4ed75c55dce617b7efe9ac9e38f2f0b9060
2019-01-22 21:22:52 +00:00
agopi 4ebe2cc331 Updated docs for creating grafyaml key
There was some confusion around installing grafana dashboards since
the move to grafyaml, so the docs were updated to make the required
steps clearer.

Change-Id: I536a82b1d9800e9a648ef3d5e67a1437f097ff2a
Closes-Bug: #1757545
2018-11-26 13:55:39 -05:00
Zuul 7c3aed604e Merge "Adding stockpile to collect data" 2018-11-14 14:35:08 +00:00
agopi 0b56097272 Adding stockpile to collect data
This facilitates browbeat users taking advantage of the
work being done on the browbench utilities.

To use stockpile, just update metadata_playbook to
ansible/gather/stockpile_gather.yml

Change-Id: I4c12920007f66bc3378439b437676e4cb162b082
2018-11-13 21:52:40 +00:00
zhouxinyong 029f47184a Advancing the protocol of the website to HTTPS in README.rst.
Change-Id: I396cdd04623310eb7db95123a7800540f2e29bac
2018-11-13 23:06:11 +08:00
zhouxinyong d44cc7599f Omit the duplicated words in ci.rst
Change-Id: I901057e8be3ce1369149505e9790eeeaa6bab93e
2018-11-13 10:43:08 +08:00
agopi 03be425102 Add numpy to extras
Move numpy out of requirements into extras, as it's not required
for running browbeat, but only used for insights such as comparing
results.

So, to install with insights and perform CLI operations such as
compare, run pip install .[insights].

Added a tox test to ensure no dep conflict arises.
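
For illustration, an optional extra like this can be declared roughly as
follows (the project may declare it via pbr/setup.cfg rather than a literal
setup.py):

    from setuptools import setup, find_packages

    setup(
        name="browbeat",
        packages=find_packages(),
        # numpy is only needed for insights, so it lives in an extra:
        # plain "pip install ." skips it, "pip install .[insights]" pulls it in
        extras_require={"insights": ["numpy"]},
    )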

Change-Id: Id8aafcd479003ae79ab8c2d0f1fa378ea38d60d2
Closes-Bug: #1799690
2018-11-12 15:41:24 +00:00
Sai Sindhur Malleni d090700eeb Update docs for grafana-dashboards playbook
Grafyaml needs to be installed for dashboard uploading to work. So,
the browbeat installation playbook needs to be run before uploading dashboards.
This commit adds text in docs to emphasize the order in which playbooks are run.

Change-Id: Iec154f600db156907a2bf78fcb1c71b4bceb5469
2018-10-10 09:20:59 -04:00
Zuul 005ee423ab Merge "Adding elastic methods to query results" 2018-07-19 14:53:49 +00:00
Joe Talerico 55d4aa5a9f Adding elastic methods to query results
Right now we depend on Kibana to do our comparisons. This will give the
user a CLI mechanism to compare two different browbeat runs.

+ Small fix to browbeat metadata comparison to not query _all

+ Changing how the metadata comparison is displayed
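
A rough sketch of the kind of query such a CLI comparison can issue (the host,
index pattern, and field name are assumptions, not the actual Browbeat schema):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://elk.example.com:9200"])  # assumed host

    def fetch_run(browbeat_uuid):
        # assumed index pattern and field name, for illustration only
        return es.search(index="browbeat-rally-*",
                         body={"size": 1000,
                               "query": {"match": {"browbeat_uuid": browbeat_uuid}}})

    run_a = fetch_run("<uuid of first run>")
    run_b = fetch_run("<uuid of second run>")
    # compare the returned hits (e.g. per-scenario durations) between run_a and run_b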

Change-Id: I3881486100c91dcf3cc4eeeb4ddfa532ff01a7f1
2018-07-19 09:56:50 -04:00
Chuck Short 0f553b7103 Update documentation
openstack-dashboards.yml had been renamed to grafana-dashboards.yml.
Also the networking guide was 404 when clicking on the link in the
installation documentation, so point it to the right(?) documentation.

Change-Id: I60dc3d797d38ac0280a5347a4c5e580531169f54
Signed-off-by: Chuck Short <chucks@redhat.com>
2018-07-16 11:48:09 -04:00
akrzos df1a4764bb Python ssh-config+Ansible Inventory Generator
Converting some of generate_tripleo_hostfile.sh into Python
* Use API bindings rather than cli commands (see the sketch below)
* Pluggable design to allow other Ansible ssh-config/inventory generators
* Two integration tests for testing the cli of bootstrap.py and bootstrap/tripleo.py
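
A minimal sketch of the API-bindings idea referenced above (the network name,
user, and output format are assumptions; the real tripleo.py plugin differs):

    import openstack

    conn = openstack.connect(cloud="undercloud")  # assumed clouds.yaml entry
    with open("ssh-config", "w") as cfg:
        for server in conn.compute.servers():
            ip = server.addresses["ctlplane"][0]["addr"]  # assumed provisioning network name
            cfg.write(f"Host {server.name}\n"
                      f"    HostName {ip}\n"
                      f"    User heat-admin\n\n")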

Change-Id: I0669d96904891f1d54d0b805fbb0acadb4a7bf57
2018-05-25 11:28:47 -04:00
zhangdebo 1d275959cb Fix typo
Change-Id: Idc204b736ec2352c96b1303f2aaf1f222ff5d1f4
2018-05-23 17:09:27 +00:00
Joe Talerico 04f720ebf0 Checks no longer maintained
Checks were built to verify that the overcloud installation had
performance tunings applied. However, they have not been maintained over
the past year.

Change-Id: I48e99a6ea6266da30186b0b07079d96dccdac3d8
2018-05-23 11:16:08 -04:00
agopi e6b789a53b Updated documentation: fixed trivial stuff
Added grafana_apikey to documentation.
Fixed trivial spellings.

Change-Id: I504c9ed806f7edf7eafd2f9c70c36dc4083f700a
2018-03-23 12:13:49 -04:00
gaofei c31d117117 Fix spelling mistake in document
Fix spelling mistake in document.
Trivial fix.

Change-Id: I83979ce05962f083173f119040f1d209e0ea50bf
2018-02-07 17:46:04 +08:00
akrzos 8c84205e01 Upgrade Rally and PerfKit
* Also add Plugin Scenario to boot persisting instances with a volume

Change-Id: Ia06b3336a6856e83b76114d6ddaff2aee5bd20fa
2017-12-21 14:38:52 +00:00
akrzos 155a0cef15 Tripleo Quickstart Browbeat Install script
* Installs Browbeat either on local machine or oooq Undercloud

Change-Id: I2c536da9ab7c84cc32809b0f09574861ca1fece9
2017-12-13 10:58:06 -05:00
akrzos 6091bd8772 A few small changes to developing with quickstart.
Change-Id: Id06840c661adbf0ab8e322f87b51233b8de29741
2017-10-20 13:36:06 -04:00
jkilpatr bb44cd830c Rsyslog -> Elasticsearch logging
This implements rsyslog -> elasticsearch logging, as well
as rsyslog forwarder -> rsyslog aggregator -> elasticsearch logging,
using the common logging template as a base and adding
dynamic detection of containerized services and log paths.

Services can be moved into and out of containers, and log files can be
added or removed; the log detector script will create a template
that reflects these changes dynamically.

Logging inherits the cloud name and elasticsearch info from the existing
group_vars variables, so this should require no additional work to set up
beyond setting logging_backend: rsyslog and either running the install
playbook or the rsyslog-logging playbook.

Finally, additional variables can be passed into the deployment with
-e or just by being in the ansible namespace; this way things like a
unique build ID can be templated into the logs automatically. I've
added support for browbeat_uuid, dlrn_hash, and rhos_puddle; others
should be trivial to add.

There are also additional tunables to configure whether logging instances
should be standalone (viable for small clouds) or rely on a server-side
aggregator service (more efficient for large deployments).
Disk-backed mode is another tunable that will create a variable
disk load that may be undesirable in some deployments, but if
collecting every last log is important it can be turned on, creating
a one- or two-layer queueing structure in case of Elasticsearch downtime
or overload, depending on whether the aggregation server is in use.

If you want to see examples from both containerized and
non-containerized clouds, check out elk.browbeatproject.org's logstash
index.

Change-Id: I3e6652223a08ab8a716a40b7a0e21b7fcea6c000
2017-10-16 12:08:26 +00:00
akrzos e585fb3df4 Upgrade Rally to 0.9.1
Migrated plugins, cleaned up config files, and improved the ease of
running some scenarios/plugins.

Change-Id: If76ce233f3067b85aa086be7f615dbb900a1bcb9
2017-10-13 19:33:06 +00:00
akrzos 203cfd7926 Adding more nova boot and persist scenarios.
* Rally context to insert delay for specific scenarios
* Boot a persisting instance with NIC and a volume
* Boot a persisting instance with NIC and associate a FIP
* Boot a persisting instance with a NIC and a volume and associate
  a FIP

Change-Id: I3735495148ef88e69fc13be23fb53f29c184ed87
2017-10-10 14:39:38 -04:00
Sai Sindhur Malleni c5c558e79a Removing the browbeat_network.yml playbook
This playbook has caused much pain, with people assuming it works
irrespective of how their underlay is configured. Removing it to
avoid confusion.

Co-Authored-By: Alex Krzos <akrzos@redhat.com>
Change-Id: I21ea865e31176be3ad7ceafd0696634d89a0d86a
2017-10-08 13:25:27 -04:00