Migrate browbeat.sh to Python

Learning the Ansible API for this migration. The result is a very simple
script that drives the Browbeat checks.

+ Added Pbench start/stop 01/11/16
+ Moved ansible to config 01/11/16
+ Adding ansible hosts option 01/11/16
+ Connmon added (start/stop) still nothing with results 01/12/16
+ New Rally YAML format... (nova example) 01/12/16
+ Create results directory 01/12/16
+ Created lib with classes 01/13/16
+ Updated Scenarios 01/14/16
+ Updated other workloads to new format 01/15/16
+ Switched to dict get method 01/15/16
+ Removed pyc files and updated 01/15/16
+ Updated gen_hostfiles script 01/15/16
+ Updated Ansible for connmon; finished pbench work 01/15/16
+ Catch when the user runs without Ansible installed or with Ansible 2 01/26/16
+ Minor changes 01/26/16
+ Bug fix... 01/27/16
+ (akrzos) added keystone yamls and browbeat-complete.yaml
+ Moved BrowbeatRally to Rally and broke connmon out of Tools
+ (akrzos) Implemented per Rally test scenario task args.
+ Updated Docs, removed old browbeat.sh
+ (akrzos) Cleaned up lib/Rally.py and added cinder scenarios to browbeat configs.
+ Fix Connmon install issue
+ (akrzos) Added parameters to neutron task yamls
+ (akrzos) Changed connmon to stop logging immediately after rally task completes.
Change-Id: I338c3463e25f38c2ec7667c7dfc8b5424acba8c2
Joe Talerico authored 2016-01-08 09:58:20 -05:00, committed by Alex Krzos
parent 2fc699e7d2
commit 8a542ef8ec
38 changed files with 941 additions and 657 deletions


@ -34,7 +34,7 @@ overcloud.
# How to run Browbeat?
On the Red Hat OpenStack Director host, as the stack user, jump into a venv with Rally installed and simply run:
./browbeat.sh <test name>
./browbeat.py --help
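A few common invocations, shown here as a hedged sketch (the option names come from the argparse definitions in the new browbeat.py; the exact combinations are illustrative):
```
./browbeat.py -c                            # run the Overcloud check playbook
./browbeat.py -w                            # run the Rally workloads from browbeat-config.yaml
./browbeat.py -i connmon                    # install a tool via Ansible (pbench, connmon or browbeat)
./browbeat.py -s browbeat-complete.yaml -w  # use an alternate setup YAML
./browbeat.py -n ansible/hosts -c           # point at a specific Ansible inventory
```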
# What is necessary?
* Red Hat OpenStack Director
@ -42,7 +42,7 @@ On the Red Hat OpenStack Director host, as the Stack user jump into a venv w/ Ra
* OpenStack Rally
* Why? We are using Rally to stress the control plane of the environment.
* Ansible
* Why? We started out using bash to make changes to the Overcloud, building complex sed/awk commands that we now get (for the most part) for free with Ansible. If you prefer not to use Ansible, the older versions (no longer maintained) of browbeat.sh can be found here.
* Why? We started out using bash to make changes to the Overcloud, building complex sed/awk commands that we now get (for the most part) for free with Ansible. If you prefer not to use Ansible, the older versions (no longer maintained) of browbeat.sh can be found in an older commit.
# Detailed Install, Check and Run
@ -90,8 +90,8 @@ $ ssh undercloud-root
[stack@ospd ~]$ screen -S browbeat
[stack@ospd ~]$ . browbeat-venv/bin/activate
(browbeat-venv)[stack@ospd ~]$ cd browbeat/
(browbeat-venv)[stack@ospd browbeat]$ vi browbeat.cfg # Edit browbeat.cfg to control how many stress tests are run.
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.sh test01
(browbeat-venv)[stack@ospd browbeat]$ vi browbeat-config.yaml # Edit browbeat-config.yaml to control how many stress tests are run.
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.py -w
...
(browbeat-venv)[stack@ospd browbeat]$ ./graphing/rallyplot.py test01
```
@ -137,8 +137,8 @@ Your Overcloud check output is located in check/bug_report.log
```
[stack@ospd ansible]$ . ../../browbeat-venv/bin/activate
(browbeat-venv)[stack@ospd ansible]$ cd ..
(browbeat-venv)[stack@ospd browbeat]$ vi browbeat.cfg # Edit browbeat.cfg to control how many stress tests are run.
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.sh test01
(browbeat-venv)[stack@ospd browbeat]$ vi browbeat-config.yaml # Edit browbeat-config.yaml to control how many stress tests are run.
(browbeat-venv)[stack@ospd browbeat]$ ./browbeat.py -w
...
(browbeat-venv)[stack@ospd browbeat]$ ./graphing/rallyplot.py test01
```


@ -1,12 +1,13 @@
#!/bin/bash
if [ ! $# == 2 ]; then
echo "Usage: ./gen_hostfiles.sh <ospd_ip_address> <ssh_config_file>"
if [ ! $# -ge 2 ]; then
echo "Usage: ./gen_hostfiles.sh <ospd_ip_address> <ssh_config_file> <OPTIONAL pbench_host_file> "
echo "Generates ssh config file to use OSP undercloud host as a jumpbox and creates ansible inventory file."
exit
fi
ospd_ip_address=$1
ansible_inventory_file='hosts'
ssh_config_file=$2
pbench_host_file=$3
# "Hackish" copy ssh key to self if we are on directly on the undercloud machine:
if [[ "${ospd_ip_address}" == "localhost" ]]; then
@ -14,7 +15,7 @@ if [[ "${ospd_ip_address}" == "localhost" ]]; then
sudo bash -c "cat ~stack/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys"
fi
nodes=$(ssh -t -o "StrictHostKeyChecking no" stack@${ospd_ip_address} ". ~/stackrc; nova list | grep -i running")
nodes=$(ssh -t -o "StrictHostKeyChecking no" stack@${ospd_ip_address} ". ~/stackrc; nova list | grep -i -E 'active|running'")
controller_id=$(ssh -t -o "StrictHostKeyChecking no" stack@${ospd_ip_address} ". ~/stackrc; heat resource-show overcloud Controller | grep physical_resource_id" | awk '{print $4}')
compute_id=$(ssh -t -o "StrictHostKeyChecking no" stack@${ospd_ip_address} ". ~/stackrc; heat resource-show overcloud Compute | grep physical_resource_id" | awk '{print $4}')
@ -54,6 +55,7 @@ echo " IdentityFile ~/.ssh/id_rsa" | tee -a ${ssh_config_file}
echo " StrictHostKeyChecking no" | tee -a ${ssh_config_file}
echo " UserKnownHostsFile=/dev/null" | tee -a ${ssh_config_file}
echo "[hosts]" > ${pbench_host_file}
compute_hn=()
controller_hn=()
ceph_hn=()
@ -80,6 +82,7 @@ for line in $nodes; do
echo " IdentityFile ~/.ssh/heat-admin-id_rsa" | tee -a ${ssh_config_file}
echo " StrictHostKeyChecking no" | tee -a ${ssh_config_file}
echo " UserKnownHostsFile=/dev/null" | tee -a ${ssh_config_file}
echo "${IP}" >> ${pbench_host_file}
done
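A hedged example invocation of the updated script, run from the ansible directory on the undercloud itself (using localhost triggers the self-copy of the ssh key above; the pbench-host-file name simply matches the ansible/pbench-host-file path referenced in the new Browbeat configs):
```bash
./gen_hostfiles.sh localhost ~/.ssh/config pbench-host-file
```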


@ -29,6 +29,12 @@
when: undercloud
changed_when: false
- name: Change connmon result owner
command: chown stack:stack /tmp/connmon_results.csv
when: undercloud
changed_when: false
ignore_errors: true
- name: check iptables
shell: iptables -nvL | grep -q "dpt:5800"
changed_when: false


@ -1,5 +1,5 @@
[connmon_service_default]
name: default
csv_dump: ./csv_results.csv
csv_dump: /tmp/connmon_results.csv
nodes:
node1 hostname={{ item.ip_address }}:5800 bind=0.0.0.0

browbeat-complete.yaml (new file, 117 lines)

@ -0,0 +1,117 @@
# Complete set of stress tests; this can take a long time (a day or more)
browbeat:
results : results/
sudo: true
debug: false
connmon: true
rerun: 3
pbench:
enabled: false
hosts: ansible/pbench-host-file
interval: 2
tools:
- mpstat
- iostat
- sar
- vmstat
- pidstat
num_workers: None
ansible:
hosts: ansible/hosts
install:
connmon: ansible/install/connmon.yml
pbench: ansible/install/pbench.yml
browbeat: ansible/install/browbeat.yml
check: ansible/check/site.yml
adjust:
workers: ansible/browbeat/adjustment.yml
keystone-token: browbeat/keystone_token_type.yml
rally:
benchmarks:
cinder:
enabled: true
concurrency:
- 64
- 128
- 256
times: 1024
scenarios:
create-attach-volume-centos:
enabled: true
file: rally/cinder/cinder-create-and-attach-volume-cc.yml
create-attach-volume-cirros:
enabled: true
image_name: cirros
file: rally/cinder/cinder-create-and-attach-volume-cc.yml
flavor_name: m1.tiny
keystone:
enabled: true
concurrency:
- 64
- 128
- 192
- 256
- 320
- 384
- 448
- 512
times: 5000
scenarios:
authentic-keystone:
enabled: true
file: rally/keystone/authenticate-keystone-cc.yml
authentic-neutron:
enabled: true
file: rally/keystone/authenticate-neutron-cc.yml
authentic-nova:
enabled: true
file: rally/keystone/authenticate-nova-cc.yml
create-list-tenant:
enabled: true
file: rally/keystone/keystone-create-list-tenant-cc.yml
create-list-user:
enabled: true
file: rally/keystone/keystone-create-list-user-cc.yml
nova:
enabled: false
concurrency:
- 16
- 32
- 48
- 64
- 80
- 96
times: 128
scenarios:
boot-list-centos:
enabled: true
file: rally/nova/nova-boot-list-cc.yml
boot-list-cirros:
enabled: true
image_name: cirros
file: rally/nova/nova-boot-list-cc.yml
flavor_name: m1.tiny
neutron:
enabled: true
concurrency:
- 16
- 32
- 48
- 64
times: 500
scenarios:
create-list-network:
enabled: true
file: rally/neutron/neutron-create-list-network-cc.yml
create-list-port:
enabled: true
file: rally/neutron/neutron-create-list-port-cc.yml
create-list-router:
enabled: true
file: rally/neutron/neutron-create-list-router-cc.yml
create-list-security-group:
enabled: true
file: rally/neutron/neutron-create-list-security-group-cc.yml
create-list-subnet:
enabled: true
file: rally/neutron/neutron-create-list-subnet-cc.yml
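To make the per-scenario task args concrete: for the create-attach-volume-cirros entry above, lib/Rally.py (added later in this commit) drops the enabled and file keys, folds in the benchmark-level times and one concurrency value, and hands the rest to Rally. A hedged sketch of the resulting command on the non-pbench path of Rally.run_scenario, with an illustrative log name:
```bash
rally task start rally/cinder/cinder-create-and-attach-volume-cc.yml \
  --task-args '{"image_name": "cirros", "flavor_name": "m1.tiny", "times": 1024, "concurrency": 64}' \
  2>&1 | tee create-attach-volume-cirros-run.log
```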

browbeat-config.yaml (new file, 100 lines)

@ -0,0 +1,100 @@
# Basic set of initial stress tests
browbeat:
results : results/
sudo: true
debug: false
connmon: true
rerun: 1
pbench:
enabled: false
hosts: ansible/pbench-host-file
interval: 2
tools:
- mpstat
- iostat
- sar
- vmstat
- pidstat
num_workers: None
ansible:
hosts: ansible/hosts
install:
connmon: ansible/install/connmon.yml
pbench: ansible/install/pbench.yml
browbeat: ansible/install/browbeat.yml
check: ansible/check/site.yml
adjust:
workers: ansible/browbeat/adjustment.yml
keystone-token: browbeat/keystone_token_type.yml
rally:
benchmarks:
cinder:
enabled: true
concurrency:
- 8
times: 40
scenarios:
create-attach-volume-centos:
enabled: true
file: rally/cinder/cinder-create-and-attach-volume-cc.yml
create-attach-volume-cirros:
enabled: true
image_name: cirros
file: rally/cinder/cinder-create-and-attach-volume-cc.yml
flavor_name: m1.tiny
keystone:
enabled: true
concurrency:
- 64
times: 1000
scenarios:
authentic-keystone:
enabled: true
file: rally/keystone/authenticate-keystone-cc.yml
authentic-neutron:
enabled: false
file: rally/keystone/authenticate-neutron-cc.yml
authentic-nova:
enabled: true
file: rally/keystone/authenticate-nova-cc.yml
create-list-tenant:
enabled: false
file: rally/keystone/keystone-create-list-tenant-cc.yml
create-list-user:
enabled: false
file: rally/keystone/keystone-create-list-user-cc.yml
nova:
enabled: false
concurrency:
- 8
times: 40
scenarios:
boot-list-centos:
enabled: false
file: rally/nova/nova-boot-list-cc.yml
boot-list-cirros:
enabled: true
image_name: cirros
file: rally/nova/nova-boot-list-cc.yml
flavor_name: m1.tiny
neutron:
enabled: false
concurrency:
- 8
times: 100
scenarios:
create-list-network:
enabled: false
file: rally/neutron/neutron-create-list-network-cc.yml
create-list-port:
enabled: true
file: rally/neutron/neutron-create-list-port-cc.yml
create-list-router:
enabled: false
file: rally/neutron/neutron-create-list-router-cc.yml
create-list-security-group:
enabled: false
file: rally/neutron/neutron-create-list-security-group-cc.yml
create-list-subnet:
enabled: false
file: rally/neutron/neutron-create-list-subnet-cc.yml


@ -1,9 +1,11 @@
#!/bin/bash
task_file=$1
test_name=$2
test_args=$3
echo "task_file: ${task_file}"
echo "test_name: ${test_name}"
echo "test_args: ${test_args}"
echo "Before Rally task start."
rally task start --task ${task_file} 2>&1 | tee ${test_name}.log
rally task start --task ${task_file} ${test_args} 2>&1 | tee ${test_name}.log
echo "After Rally task start."


@ -1,41 +0,0 @@
# Block run if we haven't updated the config
UPDATED=false
DEBUG=true
CONNMON=true
# Number of workers to test. This is a loop.
NUM_WORKERS="36 32 24 12 6"
RESET_WORKERS="24"
CONNMON_PID=0
# Number of times we should rerun a Rally Scenario
RERUN=3
CONTROLLERS=$(nova list | grep control)
PBENCH=true
PBENCH_INTERVAL=2
SSH_OPTS="StrictHostKeyChecking no"
declare -A WORKERS
WORKERS["keystone"]="public_workers|admin_workers|processes"
WORKERS["nova"]="metadata_workers|osapi_compute_workers|ec2_workers|workers|#workers"
WORKERS["neutron"]="rpc_workers|api_workers"
WORKERS["cinder"]="osapi_volume_workers"
declare -A TIMES
TIMES["keystone"]=5000
TIMES["neutron"]=500
TIMES["nova"]=128
TIMES["cinder"]=1024
declare -A CONCURRENCY
CONCURRENCY["keystone"]="64 96 128 160 192 224 256"
CONCURRENCY["neutron"]="8 16 32 48 54"
CONCURRENCY["nova"]="8 16 32 48 54"
CONCURRENCY["cinder"]="64 128 256"
ROOT=false
LOGIN_USER="heat-admin"
if [[ $(whoami) == "root" ]]; then
LOGIN_USER="root"
ROOT=true
fi

browbeat.py (new executable file, 131 lines)

@ -0,0 +1,131 @@
#!/usr/bin/env python
import argparse
import yaml
import logging
import sys
sys.path.append('lib/')
from Pbench import *
from Tools import *
from Rally import *
import ConfigParser, os
# Setting up our logger
_logger = logging.getLogger('browbeat')
_logger.setLevel(logging.INFO)
_formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)5s - %(message)s')
_ch = logging.StreamHandler()
_ch.setFormatter(_formatter)
_logger.addHandler(_ch)
# import ansible
try :
from ansible.playbook import PlayBook
from ansible import callbacks
from ansible import utils
except ImportError :
_logger.error("Unable to import Ansible API. This code is not Ansible 2.0 ready")
exit(1)
# Browbeat specific options
_install_opts=['pbench','connmon','browbeat']
_config_file = 'browbeat-config.yaml'
_config = None
# Load Config file
def _load_config(path):
stream = open(path, 'r')
config=yaml.load(stream)
stream.close()
return config
# Run Ansible Playbook
def _run_playbook(path, hosts, only_tag=None, skip_tag=None):
stats = callbacks.AggregateStats()
playbook_cb = callbacks.PlaybookCallbacks(verbose=1)
runner_cb = callbacks.PlaybookRunnerCallbacks(stats, verbose=1)
play = PlayBook(playbook=path,
host_list=hosts,
stats=stats,
only_tags=only_tag,
skip_tags=skip_tag,
callbacks=playbook_cb,
runner_callbacks=runner_cb)
return play.run()
#
# Browbeat Main
#
if __name__ == '__main__':
_cli=argparse.ArgumentParser(description="Browbeat automated scripts")
_cli.add_argument('-n','--hosts',nargs=1,
help='Provide Ansible hosts file to use. Default is ansible/hosts')
_cli.add_argument('-s','--setup',nargs=1,
help='Provide Setup YAML for browbeat. Default is ./browbeat-config.yaml')
_cli.add_argument('-c','--check',action='store_true',
help='Run the Browbeat Overcloud Checks')
_cli.add_argument('-w','--workloads',action='store_true',
help='Run the Browbeat workloads')
_cli.add_argument('-i','--install',nargs=1,choices=_install_opts,dest='install',
help='Install Browbeat Tools')
_cli.add_argument('--debug',action='store_true',
help='Enable Debug messages')
_cli_args = _cli.parse_args()
if _cli_args.debug :
_logger.setLevel(logging.DEBUG)
#
# Install Tool(s)
#
if _cli_args.install :
if _cli_args.setup :
_config=_load_config(_cli_args.setup[0])
else:
_config=_load_config(_config_file)
hosts_path=_config['ansible']['hosts']
if _cli_args.hosts :
_logger.info("Loading new hosts file : %s"% _cli_args.hosts[0])
hosts_path=_cli_args.hosts[0]
if _cli_args.install[0] == 'all' :
for tool in _install_opts:
_run_playbook(_config['ansible']['install'][tool],hosts_path)
elif _cli_args.install[0] in _install_opts :
_run_playbook(_config['ansible']['install'][_cli_args.install[0]],hosts_path)
#
# Overcloud check
#
if _cli_args.check :
if _cli_args.setup :
_config=_load_config(_cli_args.setup[0])
else:
_config=_load_config(_config_file)
hosts_path=_config['ansible']['hosts']
if _cli_args.hosts :
_logger.info("Loading new hosts file : %s"% _cli_args.hosts[0])
hosts_path=_cli_args.hosts[0]
_run_playbook(_config['ansible']['check'],hosts_path)
#
# Run Workloads
#
if _cli_args.workloads :
hosts = None
if _cli_args.setup :
_config=_load_config(_cli_args.setup[0])
else:
_config=_load_config(_config_file)
hosts_path=_config['ansible']['hosts']
if _config['browbeat']['pbench']['enabled'] :
pbench_hosts_path=_config['browbeat']['pbench']['hosts']
if _cli_args.hosts :
_logger.info("Loading new hosts file : %s"% _cli_args.hosts[0])
hosts_path=_cli_args.hosts[0]
if _config['browbeat']['pbench']['enabled'] :
hosts = ConfigParser.ConfigParser(allow_no_value=True)
hosts.read(pbench_hosts_path)
tools = Tools(_config)
rally = Rally(_config,hosts)
rally.start_workloads()


@ -1,267 +0,0 @@
#!/bin/bash
source ~/stackrc
source browbeat.cfg
log()
{
echo "[$(date)]: $*"
}
check_controllers()
{
for IP in $(echo "$CONTROLLERS" | awk '{print $12}' | cut -d "=" -f 2); do
# Number of cores?
CORES=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo cat /proc/cpuinfo | grep processor | wc -l)
log Controller : $IP
log Number of cores : $CORES
log Service : Keystone
log "\_Admin:" $(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo cat /etc/httpd/conf.d/10-keystone_wsgi_admin.conf | grep -vi "NONE" | grep -v "#" | grep -E ${WORKERS["keystone"]})
log "\_Main:" $(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo cat /etc/httpd/conf.d/10-keystone_wsgi_main.conf | grep -vi "NONE" | grep -v "#" | grep -E ${WORKERS["keystone"]})
log $(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo cat /etc/keystone/keystone.conf | grep -vi "NONE" | grep -v "#" |grep -E ${WORKERS["keystone"]})
log Service : Nova
log $(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo cat /etc/nova/nova.conf | grep -vi "NONE" | grep -v "#" |grep -E ${WORKERS["nova"]})
log Service : Neutron
log $(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo cat /etc/neutron/neutron.conf | grep -vi "NONE" | grep -v "#" |grep -E ${WORKERS["neutron"]})
log Service : Cinder
log $(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo cat /etc/cinder/cinder.conf | grep -vi "NONE" | grep -v "#" |grep -E ${WORKERS["cinder"]})
done
}
check_running_workers()
{
for IP in $(echo "$CONTROLLERS" | awk '{print $12}' | cut -d "=" -f 2); do
log Validate number of workers
keystone_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Kk]eystone" | wc -l)
keystone_admin_httpd_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Kk]eystone-admin" | wc -l)
keystone_main_httpd_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Kk]eystone-main" | wc -l)
nova_api_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Nn]ova-api" | wc -l)
nova_conductor_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Nn]ova-conductor" | wc -l)
nova_scheduler_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Nn]ova-scheduler" | wc -l)
nova_consoleauth_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Nn]ova-consoleauth" | wc -l)
nova_novncproxy_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Nn]ova-novncproxy" | wc -l)
cinder_worker_num=$(ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo ps afx | grep "[Cc]inder-api" | wc -l)
log $IP : keystone : $keystone_num workers admin/main combined
log $IP : "keystone(httpd)" : $keystone_admin_httpd_num admin workers, $keystone_main_httpd_num main workers
log $IP : nova-api : $nova_api_num workers
log $IP : nova-conductor : $nova_conductor_num workers
log $IP : nova-scheduler : $nova_scheduler_num workers
log $IP : nova-consoleauth : $nova_consoleauth_num workers
log $IP : nova-novncproxy : $nova_novncproxy_num workers
log $IP : cinder-api : $cinder_worker_num workers
# Keystone should be 2x for admin and main + 1 for main process
# Nova should be 3x + 1 nova-api, core_count + 1 for conductor, and scheduler+consoleauth+novncproxy
# Neutron ?
done
}
run_rally()
{
if [ -z "$1" ] ; then
echo "ERROR : Pass which service to run rally tests against"
echo "Usage : run_rally SERVICE TEST_PREFIX"
echo "Valid services : keystone, nova, neutron"
exit 1
else
echo "Benchmarking : $1"
osp_service=$1
fi
if [ -z "$2" ] ; then
echo "ERROR : Pass test_prefix to run rally tests"
echo "Usage : run_rally SERVICE TEST_PREFIX"
echo "Valid services : keystone, nova, neutron"
exit 1
else
test_prefix=$2
fi
for task_file in `ls rally/${osp_service}`
do
task_dir=rally/$osp_service
if [ ${task_file: -3} == "-cc" ]
then
for concur in ${CONCURRENCY[${osp_service}]}
do
for ((run_count=1; run_count<=${RERUN}; run_count++))
do
times=${TIMES[${osp_service}]}
concur_padded="$(printf "%04d" ${concur})"
test_name="${test_prefix}-iteration_$run_count-${task_file}-${concur_padded}"
log Test-Name ${test_name}
sed -i "s/\"concurrency\": 1,/\"concurrency\": ${concur},/g" ${task_dir}/${task_file}
sed -i "s/\"times\": 1,/\"times\": ${times},/g" ${task_dir}/${task_file}
truncate_token_bloat
results_dir=results/${test_prefix}/$osp_service/${task_file}/run-$run_count
mkdir -p $results_dir
if $CONNMON ; then
log Starting connmon
# Kill any existing connmond session in screen, ansible install script creates this
sudo screen -X -S connmond kill
sudo sed -i "s/csv_dump:.*/csv_dump: results\/$test_prefix\/$osp_service\/$task_file\/run-$run_count\/current-run.csv/g" /etc/connmon.cfg
connmond --config /etc/connmon.cfg > /tmp/connmond-${test_name} 2>&1 &
CONNMON_PID=$!
fi
if $PBENCH ; then
setup_pbench
user-benchmark --config=${test_name} -- "./browbeat-run-rally.sh ${task_dir}/${task_file} ${test_name}"
else
# pbench is off, just run rally directly
rally task start --task ${task_dir}/${task_file} 2>&1 | tee ${test_name}.log
fi
if $CONNMON ; then
log Stopping connmon
kill -9 $CONNMON_PID
mv ${results_dir}/current-run.csv ${results_dir}/${test_name}.csv
fi
post_process $results_dir
# grep the log file for the results to be run
test_id=`grep "rally task results" ${test_name}.log | awk '{print $4}'`
pbench_results_dir=`find /var/lib/pbench-agent/ -name "*${test_prefix}*" -print`
if [ -n "${test_id}" ]; then
rally task report ${test_id} --out ${test_name}.html
if $PBENCH; then
cp ${test_name}.html ${pbench_results_dir}
fi
fi
if $PBENCH ; then
cp ${test_name}.log ${pbench_results_dir}
if $CONNMON ; then
mkdir -p ${pbench_results_dir}/connmon/
for connmon_graph in `find ${results_dir}/ | grep -E "png$|csv$"`
do
cp ${connmon_graph} ${pbench_results_dir}/connmon/
done
fi
move-results --prefix=${test_prefix}/${task_file}-${concur}
clear-tools
fi
if [ -n "${test_id}" ]; then
mv ${test_name}.html $results_dir
fi
mv ${test_name}.log $results_dir
sed -i "s/\"concurrency\": ${concur},/\"concurrency\": 1,/g" ${task_dir}/${task_file}
sed -i "s/\"times\": ${times},/\"times\": 1,/g" ${task_dir}/${task_file}
done # RERUN
done # Concurrency
fi
done # Task Files
}
post_process()
{
if [ -z "$1" ] ; then
echo "Error result path not passed"
exit 1
else
log Post-Processing : $1
results=$1
fi
if $CONNMON ; then
log Building Connmon Graphs
for i in `ls -talrh $results | grep -E "*\.csv$" | awk '{print $9}'` ; do
python graphing/connmonplot.py $results/$i;
done
fi
}
setup_pbench()
{
log "Setting up pbench tools"
clear-tools
kill-tools
sudo /opt/pbench-agent/util-scripts/register-tool --name=mpstat -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=iostat -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=sar -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=vmstat -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=pidstat -- --interval=${PBENCH_INTERVAL}
for IP in $(echo "$CONTROLLERS" | awk '{print $12}' | cut -d "=" -f 2); do
sudo /opt/pbench-agent/util-scripts/register-tool --name=mpstat --remote=${IP} -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=iostat --remote=${IP} -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=sar --remote=${IP} -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=vmstat --remote=${IP} -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=pidstat --remote=${IP} -- --interval=${PBENCH_INTERVAL}
sudo /opt/pbench-agent/util-scripts/register-tool --name=user-tool --remote=${IP} -- --tool-name=mariadb-conntrack --start-script=/opt/usertool/mariadb-track.sh
done
}
truncate_token_bloat()
{
log "Truncating Token Bloat"
IP=`echo "$CONTROLLERS" | head -n 1 | awk '{print $12}' | cut -d "=" -f 2`
ssh -o "${SSH_OPTS}" ${LOGIN_USER}@$IP sudo "mysql keystone -e 'truncate token;'"
}
if [ "$UPDATED" = false ]; then
log "Usage: ./browbeat.sh <test_prefix>"
log "Please update the browbeat.cfg before running"
exit
fi
if [ ! $# == 1 ]; then
log "Usage: ./browbeat.sh <test_prefix>"
exit
fi
if [ ! -f ansible/hosts ]; then
log "ERROR: Ansible inventory file does not exist."
log "In ansible directory, run: ./gen_hosts.sh <ospd ip address> ~/.ssh/config"
exit
fi
complete_test_prefix=$1
if $DEBUG ; then
log $CONTROLLERS
fi
#
# 1) Show the current # of workers
# 2) Run Tests (Keystone, Nova, Neutron)
# 3) Update # of workers per-service
# 4) Re-Run tests above
#
mkdir -p results
check_controllers
# Clean logs before run
ansible-playbook -i ansible/hosts ansible/browbeat/cleanlogs.yml
for num_wkrs in ${NUM_WORKERS} ; do
num_wkr_padded="$(printf "%02d" ${num_wkrs})"
ansible-playbook -i ansible/hosts ansible/browbeat/adjustment.yml -e "workers=${num_wkrs}"
check_running_workers
check_controllers
run_rally keystone "${complete_test_prefix}-keystone-${num_wkr_padded}" ${num_wkrs}
check_controllers
run_rally neutron "${complete_test_prefix}-neutron-${num_wkr_padded}" ${num_wkrs}
check_controllers
run_rally nova "${complete_test_prefix}-nova-${num_wkr_padded}" ${num_wkrs}
check_controllers
run_rally cinder "${complete_test_prefix}-cinder-${num_wkr_padded}" ${num_wkrs}
done
ansible-playbook -i ansible/hosts ansible/browbeat/adjustment.yml -e "workers=${RESET_WORKERS}"
check_running_workers
check_controllers

lib/Connmon.py (new file, 45 lines)

@ -0,0 +1,45 @@
from Tools import *
class Connmon :
def __init__(self,config):
self.logger = logging.getLogger('browbeat.Connmon')
self.config = config
self.tools = Tools(self.config)
return None
# Start connmond
def start_connmon(self):
self.stop_connmon()
tool="connmond"
connmond=self.tools.find_cmd(tool)
if not connmond :
self.logger.error("Unable to find {}".format(tool))
as_sudo = self.config['browbeat']['sudo']
cmd = ""
if as_sudo :
cmd +="sudo "
cmd += "screen -X -S connmond kill"
self.tools.run_cmd(cmd)
self.logger.info("Starting connmond")
cmd = ""
cmd +="{} --config /etc/connmon.cfg > /tmp/connmond 2>&1 &".format(connmond)
return self.tools.run_cmd(cmd)
# Stop connmond
def stop_connmon(self):
self.logger.info("Stopping connmond")
return self.tools.run_cmd("pkill -9 connmond")
# Create Connmon graphs
def connmon_graphs(self,result_dir,test_name):
cmd="python graphing/connmonplot.py {}/connmon/{}.csv".format(result_dir,
test_name)
return self.tools.run_cmd(cmd)
# Move connmon results
def move_connmon_results(self,result_dir,test_name):
path = "%s/connmon" % result_dir
if not os.path.exists(path) :
os.mkdir(path)
return shutil.move("/tmp/connmon_results.csv",
"{}/connmon/{}.csv".format(result_dir,test_name))

lib/Pbench.py (new file, 93 lines)

@ -0,0 +1,93 @@
import logging
import sys
sys.path.append("./")
from Tools import *
class Pbench:
def __init__(self,config,hosts):
self.logger = logging.getLogger('browbeat.Pbench')
self.tools = Tools()
self.config = config
self.hosts = hosts
return None
# PBench Start Tools
def register_tools(self):
tool="register-tool"
register_tool=self.tools.find_cmd(tool)
tool="clear-tools"
clear_tools=self.tools.find_cmd(tool)
interval = self.config['browbeat']['pbench']['interval']
as_sudo = self.config['browbeat']['sudo']
# Clear out old tools
cmd = ""
if as_sudo :
cmd +="sudo "
cmd = "%s" % clear_tools
self.logger.info('PBench Clear : Command : %s' % cmd)
self.tools.run_cmd(cmd)
# Now Register tools
self.logger.info('PBench register tools')
for tool in self.config['browbeat']['pbench']['tools'] :
cmd = ""
if as_sudo :
cmd +="sudo "
cmd += "%s " % register_tool
cmd += "--name=%s -- --interval=%s" % (tool,interval)
self.logger.debug('PBench Start : Command : %s' % cmd)
if not self.tools.run_cmd(cmd) :
self.logger.error("Issue registering tool.")
return False
return self.register_remote_tools()
def get_results_dir(self,prefix):
cmd="find /var/lib/pbench-agent/ -name \"*%s*\" -print"%prefix
return self.tools.run_cmd(cmd)
def register_remote_tools(self):
tool="register-tool"
register_tool=self.tools.find_cmd(tool)
interval = self.config['browbeat']['pbench']['interval']
if len(self.hosts.options('hosts')) > 0 :
for node in self.hosts.options('hosts'):
cmd = ""
as_sudo = self.config['browbeat']['sudo']
if as_sudo :
cmd +="sudo "
cmd = ""
for tool in self.config['browbeat']['pbench']['tools'] :
cmd = ""
if as_sudo :
cmd +="sudo "
cmd += "%s " % register_tool
cmd += "--name=%s --remote=%s -- --interval=%s" % (tool,node,interval)
self.logger.debug('PBench register-remote: Command : %s' % cmd)
if not self.tools.run_cmd(cmd) :
self.logger.error("Issue registering tool.")
return False
return True
# PBench Stop Tools
def stop_pbench(self,sudo=False):
tool="stop-tools"
stop_tool=self.tools.find_cmd(tool)
cmd = ""
if sudo :
cmd +="sudo "
cmd = "%s" % stop_tool
self.logger.info('PBench Stop : Command : %s' % cmd)
self.tools.run_cmd(cmd)
return True
# Move Results
def move_results(self,sudo=False):
tool="move-results"
move_tool=self.tools.find_cmd(tool)
cmd = ""
if sudo :
cmd +="sudo "
cmd = "%s" % move_tool
self.logger.info('PBench move-results : Command : %s' % cmd)
self.tools.run_cmd(cmd)
return True
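A hedged sketch of the pbench-agent commands register_tools, stop_pbench and move_results end up issuing (the tool name and interval come from the browbeat config; the remote address is illustrative):
```bash
sudo clear-tools
sudo register-tool --name=mpstat -- --interval=2
sudo register-tool --name=mpstat --remote=192.0.2.10 -- --interval=2
# ... workload runs ...
sudo stop-tools
sudo move-results
```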

lib/Rally.py (new file, 139 lines)

@ -0,0 +1,139 @@
from Connmon import Connmon
from Pbench import Pbench
from Tools import Tools
import datetime
import glob
import logging
import shutil
class Rally:
def __init__(self, config, hosts=None):
self.logger = logging.getLogger('browbeat.Rally')
self.config = config
self.tools = Tools(self.config)
self.connmon = Connmon(self.config)
if hosts is not None:
self.pbench = Pbench(self.config, hosts)
def run_scenario(self, task_file, scenario_args, result_dir, test_name):
self.logger.debug("--------------------------------")
self.logger.debug("task_file: {}".format(task_file))
self.logger.debug("scenario_args: {}".format(scenario_args))
self.logger.debug("result_dir: {}".format(result_dir))
self.logger.debug("test_name: {}".format(test_name))
self.logger.debug("--------------------------------")
if self.config['browbeat']['pbench']['enabled']:
task_args = str(scenario_args).replace("'", "\\\"")
self.pbench.register_tools()
self.logger.info("Starting Scenario")
tool = "rally"
rally = self.tools.find_cmd(tool)
cmd = ("user-benchmark --config={1} -- \"./browbeat-run-rally.sh"
" {0} {1} \'{2}\'\"".format(task_file, test_name, task_args))
self.tools.run_cmd(cmd)
else:
task_args = str(scenario_args).replace("'", "\"")
cmd = "rally task start {} --task-args \'{}\' 2>&1 | tee {}.log".format(task_file,
task_args, test_name)
self.tools.run_cmd(cmd)
def get_task_id(self, test_name):
cmd = "grep \"rally task results\" {}.log | awk '{{print $4}}'".format(test_name)
return self.tools.run_cmd(cmd)
def gen_scenario_html(self, task_id, test_name):
self.logger.info("Generating Rally HTML for task_id : {}".format(task_id))
cmd = "rally task report {} --out {}.html".format(task_id, test_name)
return self.tools.run_cmd(cmd)
# Iterate through all the Rally scenarios to run.
# If rerun is > 1, execute the test the desired number of times.
def start_workloads(self):
self.logger.info("Starting Rally workloads")
time_stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
self.logger.debug("Time Stamp (Prefix): {}".format(time_stamp))
benchmarks = self.config.get('rally')['benchmarks']
if len(benchmarks) > 0:
for benchmark in benchmarks:
if benchmarks[benchmark]['enabled']:
self.logger.info("Benchmark: {}".format(benchmark))
scenarios = benchmarks[benchmark]['scenarios']
def_concurrencies = benchmarks[benchmark]['concurrency']
def_times = benchmarks[benchmark]['times']
self.logger.debug("Default Concurrencies: {}".format(def_concurrencies))
self.logger.debug("Default Times: {}".format(def_times))
for scenario in sorted(scenarios):
if scenarios[scenario]['enabled']:
self.logger.info("Running Scenario: {}".format(scenario))
self.logger.debug("Scenario File: {}".format(
scenarios[scenario]['file']))
scenario_args = dict(scenarios[scenario])
del scenario_args['enabled']
del scenario_args['file']
if len(scenario_args) > 0:
self.logger.debug("Overriding Scenario Args: {}".format(
scenario_args))
result_dir = self.tools.create_results_dir(
self.config['browbeat']['results'], time_stamp, benchmark,
scenario)
self.logger.debug("Created result directory: {}".format(result_dir))
# Override concurrency/times
if 'concurrency' in scenario_args:
concurrencies = scenario_args['concurrency']
del scenario_args['concurrency']
else:
concurrencies = def_concurrencies
if 'times' not in scenario_args:
scenario_args['times'] = def_times
for concurrency in concurrencies:
scenario_args['concurrency'] = concurrency
for run in range(self.config['browbeat']['rerun']):
test_name = "{}-browbeat-{}-{}-iteration-{}".format(time_stamp,
scenario, concurrency, run)
if not result_dir:
self.logger.error("Failed to create result directory")
exit(1)
# Start connmon before rally
if self.config['browbeat']['connmon']:
self.connmon.start_connmon()
self.run_scenario(scenarios[scenario]['file'], scenario_args,
result_dir, test_name)
# Stop connmon at end of rally task
if self.config['browbeat']['connmon']:
self.connmon.stop_connmon()
self.connmon.move_connmon_results(result_dir, test_name)
self.connmon.connmon_graphs(result_dir, test_name)
# Find task id (if task succeeded in running)
task_id = self.get_task_id(test_name)
if task_id:
self.gen_scenario_html(task_id, test_name)
else:
self.logger.error("Cannot find task_id")
for data in glob.glob("./{}*".format(test_name)):
shutil.move(data, result_dir)
if self.config['browbeat']['pbench']['enabled']:
pbench_results_dir = self.pbench.get_results_dir(time_stamp)
shutil.copytree(result_dir,
"{}/results/".format(pbench_results_dir))
self.pbench.move_results()
else:
self.logger.info("Skipping {} scenario enabled: false".format(scenario))
else:
self.logger.info("Skipping {} benchmarks enabled: false".format(benchmark))
else:
self.logger.error("Config file contains no rally benchmarks.")

lib/Tools.py (new file, 51 lines)

@ -0,0 +1,51 @@
import logging
import os
import shutil
from subprocess import Popen, PIPE
class Tools:
def __init__(self,config=None):
self.logger = logging.getLogger('browbeat.Tools')
self.config = config
return None
# Run command, return stdout as result
def run_cmd(self,cmd):
self.logger.debug("Running command : %s" % cmd)
process = Popen(cmd,shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
if len(stderr) > 0 :
return None
else :
return stdout.strip()
# Find Command on host
def find_cmd(self,cmd):
_cmd = "which %s" % cmd
self.logger.debug('Find Command : Command : %s' % _cmd)
command = self.run_cmd(_cmd)
if command is None:
self.logger.error("Unable to find %s"%cmd)
raise Exception("Unable to find command : '%s'"%cmd)
return False
else:
return command.strip()
def create_run_dir(self,results_dir,run):
try :
os.makedirs("%s/run-%s" %(results_dir,run))
return "%s/run-%s" % (results_dir,run)
except OSError as e:
return False
# Create directory for results
def create_results_dir(self, results_dir, timestamp, service, scenario):
try :
os.makedirs("{}/{}/{}/{}".format(results_dir, timestamp, service, scenario))
self.logger.debug("{}/{}/{}/{}".format(os.path.dirname(results_dir), timestamp, service,
scenario))
return "{}/{}/{}/{}".format(os.path.dirname(results_dir), timestamp, service, scenario)
except OSError as e:
return False


@ -1,42 +0,0 @@
{% set flavor_name = flavor_name or "m1.tiny" %}
{
"CinderVolumes.create_and_attach_volume": [
{
"args": {
"size": 1,
"image": {
"name": "centos7"
},
"flavor": {
"name": "{{flavor_name}}"
}
},
"runner": {
"times": 1,
"concurrency": 1,
"type": "constant"
},
"context": {
"users": {
"tenants": 2,
"users_per_tenant": 2
},
"quotas": {
"neutron": {
"network": -1,
"port": -1
},
"nova": {
"instances": -1,
"cores": -1,
"ram": -1
},
"cinder": {
"gigabytes": -1,
"volumes": -1
}
}
}
}
]
}


@ -0,0 +1,30 @@
{% set image_name = image_name or "centos7" %}
{% set flavor_name = flavor_name or "m1.small" %}
---
CinderVolumes.create_and_attach_volume:
-
args:
size: 1
image:
name: {{image_name}}
flavor:
name: {{flavor_name}}
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"
context:
users:
tenants: 2
users_per_tenant: 2
quotas:
neutron:
network: -1
port: -1
nova:
instances: -1
cores: -1
ram: -1
cinder:
gigabytes: -1
volumes: -1


@ -1,21 +0,0 @@
{
"Authenticate.keystone": [
{
"args": {},
"context": {
"users": {
"project_domain": "default",
"resource_management_workers": 30,
"tenants": 1,
"user_domain": "default",
"users_per_tenant": 8
}
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
}
}
]
}


@ -0,0 +1,15 @@
---
Authenticate.keystone:
-
args: {}
context:
users:
project_domain: "default"
resource_management_workers: 30
tenants: 1
user_domain: "default"
users_per_tenant: 8
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"


@ -1,23 +0,0 @@
{
"Authenticate.validate_neutron": [
{
"args": {
"repetitions": 2
},
"context": {
"users": {
"project_domain": "default",
"resource_management_workers": 30,
"tenants": 1,
"user_domain": "default",
"users_per_tenant": 8
}
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
}
}
]
}


@ -0,0 +1,17 @@
{% set repetitions = repetitions or 2 %}
---
Authenticate.validate_neutron:
-
args:
repetitions: {{repetitions}}
context:
users:
project_domain: "default"
resource_management_workers: 30
tenants: 1
user_domain: "default"
users_per_tenant: 8
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"


@ -1,23 +0,0 @@
{
"Authenticate.validate_nova": [
{
"args": {
"repetitions": 2
},
"context": {
"users": {
"project_domain": "default",
"resource_management_workers": 30,
"tenants": 1,
"user_domain": "default",
"users_per_tenant": 8
}
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
}
}
]
}


@ -0,0 +1,17 @@
{% set repetitions = repetitions or 2 %}
---
Authenticate.validate_nova:
-
args:
repetitions: {{repetitions}}
context:
users:
project_domain: "default"
resource_management_workers: 30
tenants: 1
user_domain: "default"
users_per_tenant: 8
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"


@ -1,21 +0,0 @@
{
"KeystoneBasic.create_and_list_tenants": [
{
"args": {},
"context": {
"users": {
"project_domain": "default",
"resource_management_workers": 30,
"tenants": 1,
"user_domain": "default",
"users_per_tenant": 8
}
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
}
}
]
}


@ -0,0 +1,15 @@
---
KeystoneBasic.create_and_list_tenants:
-
args: {}
context:
users:
project_domain: "default"
resource_management_workers: 30
tenants: 1
user_domain: "default"
users_per_tenant: 8
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"


@ -1,21 +0,0 @@
{
"KeystoneBasic.create_and_list_users": [
{
"args": {},
"context": {
"users": {
"project_domain": "default",
"resource_management_workers": 30,
"tenants": 1,
"user_domain": "default",
"users_per_tenant": 8
}
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
}
}
]
}


@ -0,0 +1,15 @@
---
KeystoneBasic.create_and_list_users:
-
args: {}
context:
users:
project_domain: "default"
resource_management_workers: 30
tenants: 1
user_domain: "default"
users_per_tenant: 8
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"


@ -1,28 +0,0 @@
{
"NeutronNetworks.create_and_list_networks": [
{
"args": {
"network_create_args": ""
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 8
},
"quotas": {
"neutron": {
"network": -1,
"port": -1,
"router": -1,
"subnet": -1
}
}
}
}
]
}


@ -0,0 +1,19 @@
---
NeutronNetworks.create_and_list_networks:
-
args:
network_create_args: ""
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"
context:
users:
tenants: 1
users_per_tenant: 8
quotas:
neutron:
network: -1
port: -1
router: -1
subnet: -1


@ -1,29 +0,0 @@
{
"NeutronNetworks.create_and_list_ports": [
{
"args": {
"network_create_args": "",
"ports_per_network": 4
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 8
},
"quotas": {
"neutron": {
"network": -1,
"port": -1,
"router": -1,
"subnet": -1
}
}
}
}
]
}


@ -0,0 +1,21 @@
{% set ports_per_network = ports_per_network or 4 %}
---
NeutronNetworks.create_and_list_ports:
-
args:
network_create_args: ""
ports_per_network: {{ports_per_network}}
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"
context:
users:
tenants: 1
users_per_tenant: 8
quotas:
neutron:
network: -1
port: -1
router: -1
subnet: -1


@ -1,32 +0,0 @@
{
"NeutronNetworks.create_and_list_routers": [
{
"args": {
"network_create_args": "",
"subnet_create_args": "",
"subnet_cidr_start": "1.1.0.0/30",
"subnets_per_network": 2,
"router_create_args": ""
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 8
},
"quotas": {
"neutron": {
"network": -1,
"port": -1,
"router": -1,
"subnet": -1
}
}
}
}
]
}


@ -0,0 +1,24 @@
{% set subnets_per_network = subnets_per_network or 2 %}
---
NeutronNetworks.create_and_list_routers:
-
args:
network_create_args: ""
subnet_create_args: ""
subnet_cidr_start: "1.1.0.0/30"
subnets_per_network: {{subnets_per_network}}
router_create_args: ""
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"
context:
users:
tenants: 1
users_per_tenant: 8
quotas:
neutron:
network: -1
port: -1
router: -1
subnet: -1


@ -1,29 +0,0 @@
{
"NeutronSecurityGroup.create_and_list_security_groups": [
{
"args": {
"security_group_create_args": ""
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 8
},
"quotas": {
"neutron": {
"network": -1,
"port": -1,
"router": -1,
"subnet": -1,
"security_group": -1
}
}
}
}
]
}


@ -0,0 +1,20 @@
---
NeutronSecurityGroup.create_and_list_security_groups:
-
args:
security_group_create_args: ""
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"
context:
users:
tenants: 1
users_per_tenant: 8
quotas:
neutron:
network: -1
port: -1
router: -1
subnet: -1
security_group: -1


@ -1,31 +0,0 @@
{
"NeutronNetworks.create_and_list_subnets": [
{
"args": {
"network_create_args": "",
"subnet_create_args": "",
"subnet_cidr_start": "1.1.0.0/30",
"subnets_per_network": 2
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 8
},
"quotas": {
"neutron": {
"network": -1,
"port": -1,
"router": -1,
"subnet": -1
}
}
}
}
]
}


@ -0,0 +1,23 @@
{% set subnets_per_network = subnets_per_network or 2 %}
---
NeutronNetworks.create_and_list_subnets:
-
args:
network_create_args: ""
subnet_create_args: ""
subnet_cidr_start: "1.1.0.0/30"
subnets_per_network: {{subnets_per_network}}
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"
context:
users:
tenants: 1
users_per_tenant: 8
quotas:
neutron:
network: -1
port: -1
router: -1
subnet: -1


@ -1,38 +0,0 @@
{% set flavor_name = flavor_name or "m1.small" %}
{
"NovaServers.boot_and_list_server": [
{
"args": {
"flavor": {
"name": "{{flavor_name}}"
},
"image": {
"name": "centos7"
},
"detailed": true
},
"runner": {
"concurrency": 1,
"times": 1,
"type": "constant"
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 1
},
"quotas": {
"neutron": {
"network": -1,
"port": -1
},
"nova": {
"instances": -1,
"cores": -1,
"ram": -1
}
}
}
}
]
}


@ -0,0 +1,27 @@
{% set image_name = image_name or "centos7" %}
{% set flavor_name = flavor_name or "m1.small" %}
---
NovaServers.boot_and_list_server:
-
args:
flavor:
name: {{flavor_name}}
image:
name: {{image_name}}
detailed: true
runner:
concurrency: {{concurrency}}
times: {{times}}
type: "constant"
context:
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
port: -1
nova:
instances: -1
cores: -1
ram: -1