Add tool for Rally reliability analytics

Change-Id: I160580f4f5f4ef7dd9cfdb1fc887a1fce8e2c4d2
Ilya Shakhat 2016-09-29 16:57:54 +03:00
parent 1ef9268e1a
commit c83599d45b
43 changed files with 1805 additions and 0 deletions

@@ -0,0 +1,6 @@
[run]
branch = True
source = rally_runners
[report]
ignore_errors = True

scripts/rally-runners/.gitignore
@@ -0,0 +1,58 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
cover/
.coverage*
!.coveragerc
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?
# Files created by releasenotes build
releasenotes/build

@@ -0,0 +1,7 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -0,0 +1,6 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@@ -0,0 +1,5 @@
Rally Runners
-------------
**A collection of Rally runners, scenarios and report generators**

@@ -0,0 +1,75 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'rally-runners'
copyright = u'2016, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -0,0 +1,4 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

@@ -0,0 +1,25 @@
.. rally-runners documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to rally-runners's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@@ -0,0 +1,12 @@
============
Installation
============
At the command line::
$ pip install rally-runners
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv rally-runners
$ pip install rally-runners

@@ -0,0 +1 @@
.. include:: ../../README.rst

@@ -0,0 +1,7 @@
========
Usage
========
To use rally-runners in a project::
import rally_runners
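The package also installs two console scripts (declared in ``setup.cfg``):
``rally-reliability`` runs a reliability scenario and stores the raw Rally
output, and ``rally-reliability-report`` builds an RST book from that output.
A session could look like the following (the output file and book folder
names here are placeholders)::
$ rally-reliability -s SCENARIO -o raw_results.json -b report_book
$ rally-reliability-report -s SCENARIO -i raw_results.json -b report_book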

@@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo(
'rally_runners').version_string()

@@ -0,0 +1,382 @@
# coding=utf-8
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import math
from interval import interval
import numpy as np
from scipy import stats
from sklearn import cluster as skl
from rally_runners.reliability import types
MIN_CLUSTER_WIDTH = 3  # filter out clusters with fewer items
MAX_CLUSTER_GAP = 6  # max allowed gap inside a cluster (otherwise it is split)
WINDOW_SIZE = 21  # window size for average duration calculation
WARM_UP_CUTOFF = 10  # drop the first N points from the etalon
DEGRADATION_THRESHOLD = 4  # how many sigmas the duration may differ from the etalon mean
def find_clusters(arr, filter_fn, max_gap=MAX_CLUSTER_GAP,
min_cluster_width=MIN_CLUSTER_WIDTH):
"""Find clusters of 1 in the sequence containing (0, 1)
The given array is filtered through filter_fn function which produces
sequence of 0s or 1s. Then 1s are grouped into clusters so that:
* there can not be more than max_gap 0s inside
* there are at least min_cluster_width of 1s
:param arr: initial array
:param filter_fn: transformation x -> [0, 1]
:param max_gap: maximum allowed number of consequent 0s inside the cluster
:param min_cluster_width: minimum cluster width
:return: multi-interval (i.e. list of intervals)
"""
clusters = interval()
start = None
end = None
for i, y in enumerate(arr):
v = filter_fn(y)
if v:
if start is None:
start = i
end = i
else:
if end is not None and i - end > max_gap:
if end - start >= min_cluster_width:
clusters |= interval([start, end])
start = end = None
if end is not None:
if end - start >= min_cluster_width:
clusters |= interval([start, end])
return clusters
def convert_rally_data(data):
"""Convert raw Rally data into [DataRow]
:param data: raw Rally data
:return: ([DataRow], index of hook)
"""
results = data['result']
start = results[0]['timestamp'] # start of the run
hooks = data['hooks']
hook_index = 0
if hooks:
# when the hook started
hook_start_time = hooks[0]['started_at'] - start
else:
# let all data be etalon
hook_start_time = results[-1]['timestamp']
table = []
for index, result in enumerate(results):
time = result['timestamp'] - start
duration = result['duration']
if time + duration < hook_start_time:
hook_index = index
table.append(types.DataRow(index=index, time=time, duration=duration,
error=bool(result['error'])))
return table, hook_index
def calculate_array_stats(data):
data = np.array(data)
return types.ArrayStats(mean=np.mean(data), median=np.median(data),
p95=np.percentile(data, 95), var=np.var(data),
std=np.std(data), count=len(data))
def indexed_interval_to_time_interval(table, src_interval):
"""For given indexes in the table return time interval
:param table: [DataRow] source data
:param src_interval: interval of array indexes
:return: ClusterStats
"""
start_index = int(src_interval.inf)
end_index = int(src_interval.sup)
if start_index > 0:
d_start = (table[start_index].time - table[start_index - 1].time) / 2
else:
d_start = 0
if end_index < len(table) - 1:
d_end = (table[end_index + 1].time - table[end_index].time) / 2
else:
d_end = 0
start_time = table[start_index].time - d_start
end_time = table[end_index].time + d_end
var = d_start + d_end
duration = end_time - start_time
count = sum(1 if start_time <= p.time <= end_time else 0 for p in table)
return types.ClusterStats(start=start_time, end=end_time, count=count,
duration=types.MeanVar(duration, var))
def calculate_error_area(table):
"""Calculates error statistics
:param table:
:return: list of time intervals where errors occur
"""
error_clusters = find_clusters(
(p.error for p in table),
filter_fn=lambda x: 1 if x else 0,
min_cluster_width=0
)
error_stats = [indexed_interval_to_time_interval(table, cluster)
for cluster in error_clusters]
return error_stats
def calculate_anomaly_area(table, quantile=0.9):
"""Find anomalies
:param quantile: float, default 0.3
:param table:
:return: list of time intervals where anomalies occur
"""
table = [p for p in table if not p.error] # rm errors
x = [p.duration for p in table]
X = np.array(zip(x, np.zeros(len(x))), dtype=np.float)
bandwidth = skl.estimate_bandwidth(X, quantile=quantile)
mean_shift_algo = skl.MeanShift(bandwidth=bandwidth, bin_seeding=True)
mean_shift_algo.fit(X)
labels = mean_shift_algo.labels_
lm = stats.mode(labels)
# filter out the largest cluster
vl = [(0 if labels[i] == lm.mode else 1) for i, p in enumerate(x)]
anomaly_clusters = find_clusters(vl, filter_fn=lambda y: y)
anomaly_stats = [indexed_interval_to_time_interval(table, cluster)
for cluster in anomaly_clusters]
return anomaly_stats
def calculate_smooth_data(table, window_size):
"""Calculate mean for the data
:param table:
:param window_size:
:return: list of points in mean data
"""
table = [p for p in table if not p.error] # rm errors
smooth = []
for i in range(0, len(table) - window_size):
durations = [p.duration for p in table[i: i + window_size]]
time = np.mean([p.time for p in table[i: i + window_size]])
duration = np.mean(durations)
var = abs(time - np.mean(
[p.time for p in table[i + 1: i + window_size - 1]]))
smooth.append(types.SmoothData(time=time, duration=duration, var=var))
return smooth
def calculate_degradation_area(table, smooth, etalon_stats, etalon_threshold):
table = [p for p in table if not p.error] # rm errors
if len(table) <= WINDOW_SIZE:
return []
mean_times = [p.time for p in smooth]
mean_durations = [p.duration for p in smooth]
mean_vars = [p.var for p in smooth]
clusters = find_clusters(
mean_durations,
filter_fn=lambda y: 0 if abs(y) < etalon_threshold else 1)
# calculate cluster duration
degradation_cluster_stats = []
for cluster in clusters:
start_idx = int(cluster.inf)
end_idx = int(cluster.sup)
start_time = mean_times[start_idx]
end_time = mean_times[end_idx]
duration = end_time - start_time
var = np.mean(mean_vars[start_idx: end_idx])
# point durations
point_durations = []
for p in table:
if start_time < p.time < end_time:
point_durations.append(p.duration)
# calculate difference between means
# http://onlinestatbook.com/2/tests_of_means/difference_means.html
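# The code below follows the standard unpooled two-sample comparison:
#   mean_diff = mean(degraded durations) - mean(etalon durations)
#   se = sqrt(var(degraded) / n_degraded + var(etalon) / n_etalon)
#   dof = n_etalon + n_degraded - 2
# and the 95% confidence interval for mean_diff is taken from the Student
# t distribution with dof degrees of freedom, centered at mean_diff with
# scale se.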
anomaly_mean = np.mean(point_durations)
anomaly_var = np.var(point_durations)
se = math.sqrt(anomaly_var / len(point_durations) +
etalon_stats.var / etalon_stats.count)
dof = etalon_stats.count + len(point_durations) - 2
mean_diff = anomaly_mean - etalon_stats.mean
conf_interval = stats.t.interval(0.95, dof, loc=mean_diff, scale=se)
degradation = types.MeanVar(
mean_diff, np.mean([mean_diff - conf_interval[0],
conf_interval[1] - mean_diff]))
degradation_ratio = types.MeanVar(
anomaly_mean / etalon_stats.mean,
np.mean([(mean_diff - conf_interval[0]) / etalon_stats.mean,
(conf_interval[1] - mean_diff) / etalon_stats.mean]))
logging.debug('Mean diff: %s' % mean_diff)
logging.debug('Conf int: %s' % str(conf_interval))
degradation_cluster_stats.append(types.DegradationClusterStats(
start=start_time, end=end_time,
duration=types.MeanVar(duration, var),
degradation=degradation, degradation_ratio=degradation_ratio,
count=len(point_durations)
))
return degradation_cluster_stats
def process_one_run(rally_data):
"""Process single Rally run (raw output for single task iteration)
This function calculates statistics for a single run, including
baseline stats (etalon), error stats, anomalies and areas with degraded
performance.
:param rally_data: raw Rally data
:return: RunResult
"""
data, hook_index = convert_rally_data(rally_data)
etalon = [p.duration for p in data[WARM_UP_CUTOFF:hook_index]]
etalon_stats = calculate_array_stats(etalon)
etalon_threshold = abs(etalon_stats.mean +
DEGRADATION_THRESHOLD * etalon_stats.std)
etalon_interval = interval([data[WARM_UP_CUTOFF].time,
data[hook_index].time])[0]
logging.debug('Hook index: %s' % hook_index)
logging.debug('Etalon stats: %s' % str(etalon_stats))
# Calculate stats
error_area = calculate_error_area(data)
anomaly_area = calculate_anomaly_area(data)
smooth_data = calculate_smooth_data(data, window_size=WINDOW_SIZE)
degradation_area = calculate_degradation_area(
data, smooth_data, etalon_stats, etalon_threshold)
# log the calculated stats
logging.debug('Error area: %s' % error_area)
logging.debug('Anomaly area: %s' % anomaly_area)
logging.debug('Degradation area: %s' % degradation_area)
return types.RunResult(
data=data,
error_area=error_area,
anomaly_area=anomaly_area,
degradation_area=degradation_area,
etalon_stats=etalon_stats,
etalon_interval=etalon_interval,
etalon_threshold=etalon_threshold,
smooth_data=smooth_data,
)
def process_all_runs(runs):
"""Process all runs from Rally raw data report
This function returns summary stats for all runs, including downtime
duration, MTTR, performance degradation.
:param runs: collection of Rally runs
:return: SummaryResult
"""
run_results = []
downtime_statistic = []
downtime_var = []
ttr_statistic = []
ttr_var = []
degradation_statistic = []
degradation_var = []
degradation_ratio_statistic = []
degradation_ratio_var = []
for i, one_run in enumerate(runs):
run_result = process_one_run(one_run)
run_results.append(run_result)
ds = 0
for index, stat in enumerate(run_result.error_area):
ds += stat.duration.statistic
downtime_var.append(stat.duration.var)
if run_result.error_area:
downtime_statistic.append(ds)
ts = ss = sr = 0
for index, stat in enumerate(run_result.degradation_area):
ts += stat.duration.statistic
ttr_var.append(stat.duration.var)
ss += stat.degradation.statistic
degradation_var.append(stat.degradation.var)
sr += stat.degradation_ratio.statistic
degradation_ratio_var.append(stat.degradation_ratio.var)
if run_result.degradation_area:
ttr_statistic.append(ts)
degradation_statistic.append(ss)
degradation_ratio_statistic.append(sr)
downtime = None
if downtime_statistic:
downtime_mean = np.mean(downtime_statistic)
se = math.sqrt((sum(downtime_var) +
np.var(downtime_statistic)) / len(downtime_statistic))
downtime = types.MeanVar(downtime_mean, se)
mttr = None
if ttr_statistic:
ttr_mean = np.mean(ttr_statistic)
se = math.sqrt((sum(ttr_var) +
np.var(ttr_statistic)) / len(ttr_statistic))
mttr = types.MeanVar(ttr_mean, se)
degradation = None
degradation_ratio = None
if degradation_statistic:
degradation = types.MeanVar(np.mean(degradation_statistic),
np.mean(degradation_var))
degradation_ratio = types.MeanVar(np.mean(degradation_ratio_statistic),
np.mean(degradation_ratio_var))
return types.SummaryResult(run_results=run_results, mttr=mttr,
degradation=degradation,
degradation_ratio=degradation_ratio,
downtime=downtime)

@@ -0,0 +1,80 @@
# coding=utf-8
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import matplotlib as mpl
mpl.use('Agg') # do not require X server
import matplotlib.pyplot as plt
def draw_area(plot, area, color, label):
for i, c in enumerate(area):
plot.axvspan(c.start, c.end, color=color, label=label)
label = None # show label only once
def draw_plot(run_result, show_etalon=True, show_errors=True,
show_anomalies=False, show_degradation=True):
table = run_result.data
x = [p.time for p in table]
y = [p.duration for p in table]
x2 = [p.time for p in table if p.error]
y2 = [p.duration for p in table if p.error]
figure = plt.figure()
plot = figure.add_subplot(111)
plot.plot(x, y, 'b.', label='Successful operations')
plot.plot(x2, y2, 'r.', label='Failed operations')
plot.set_ylim(0)
plot.axhline(run_result.etalon_threshold, color='violet',
label='Degradation threshold')
# highlight etalon
if show_etalon:
plot.axvspan(run_result.etalon_interval.inf,
run_result.etalon_interval.sup,
color='#b0efa0', label='Baseline')
# highlight anomalies
if show_anomalies:
draw_area(plot, run_result.anomaly_area,
color='#f0f0f0', label='Anomaly')
# highlight degradation
if show_degradation:
draw_area(plot, run_result.degradation_area,
color='#f8efa8', label='Degradation')
# highlight errors
if show_errors:
draw_area(plot, run_result.error_area,
color='#ffc0a7', label='Downtime')
# draw mean
plot.plot([p.time for p in run_result.smooth_data],
[p.duration for p in run_result.smooth_data],
color='cyan', label='Mean duration')
plot.grid(True)
plot.set_xlabel('time, s')
plot.set_ylabel('operation duration, s')
# add legend
legend = plot.legend(loc='right', shadow=True)
for label in legend.get_texts():
label.set_fontsize('small')
return figure

@@ -0,0 +1,49 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os_faults
from rally.common import logging
from rally import consts
from rally.task import hook
LOG = logging.getLogger(__name__)
@hook.configure(name="fault_injection")
class FaultInjectionHook(hook.Hook):
"""Performs fault injection."""
CONFIG_SCHEMA = {
"type": "object",
"$schema": consts.JSON_SCHEMA,
"properties": {
"action": {"type": "string"},
},
"required": [
"action",
],
"additionalProperties": False,
}
def run(self):
LOG.debug("Injecting fault: %s", self.config["action"])
injector = os_faults.connect()
try:
os_faults.human_api(injector, self.config["action"])
self.set_status(consts.HookStatus.SUCCESS)
except Exception as e:
self.set_status(consts.HookStatus.FAILED)
self.set_error(exception_name=type(e).__name__,
description='Fault injection failure',
details=str(e))

@@ -0,0 +1,190 @@
# coding=utf-8
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import functools
import json
import logging
import math
import os
import jinja2
from tabulate import tabulate
import yaml
from rally_runners.reliability import analytics
from rally_runners.reliability import graphics
from rally_runners import utils
REPORT_TEMPLATE = 'rally_runners/reliability/templates/report.rst'
SCENARIOS_DIR = 'rally_runners/reliability/scenarios/'
def round2(number, variance=None):
if not variance:
variance = number
return round(number, int(math.ceil(-(math.log10(variance)))) + 1)
def mean_var_to_str(mv):
if not mv:
return 'N/A'
if mv.var == 0:
precision = 4
else:
precision = int(math.ceil(-(math.log10(mv.var)))) + 1
if precision > 0:
pattern = '%%.%df' % precision
pattern_1 = '%%.%df' % (precision)
else:
pattern = pattern_1 = '%d'
return '%s ~%s' % (pattern % round(mv.statistic, precision),
pattern_1 % round(mv.var, precision + 1))
def tabulate2(*args, **kwargs):
return (u'%s' % tabulate(*args, **kwargs)).replace(' ~', u'\u00A0±')
def get_runs(raw_rally_reports):
for one_report in raw_rally_reports:
for one_run in one_report:
yield one_run
def indent(text, distance):
return '\n'.join((' ' * distance + line) for line in text.split('\n'))
def process(raw_rally_reports, book_folder, scenario, scenario_name):
scenario_text = indent(scenario, 4)
report = dict(runs=[], scenario=scenario_text, scenario_name=scenario_name)
summary = analytics.process_all_runs(get_runs(raw_rally_reports))
logging.debug('Summary: %s', summary)
has_errors = False
has_degradation = False
for i, one_run in enumerate(summary.run_results):
report_one_run = {}
plot = graphics.draw_plot(one_run)
plot.savefig(os.path.join(book_folder, 'plot_%d.svg' % (i + 1)))
headers = ['Samples', 'Median, s', 'Mean, s', 'Std dev, s',
'95th percentile, s']
t = [[one_run.etalon_stats.count,
round2(one_run.etalon_stats.median),
round2(one_run.etalon_stats.mean),
round2(one_run.etalon_stats.std),
round2(one_run.etalon_stats.p95)]]
report_one_run['etalon_table'] = tabulate2(
t, headers=headers, tablefmt='grid')
headers = ['#', 'Downtime, s']
t = []
for index, stat in enumerate(one_run.error_area):
t.append([index + 1, mean_var_to_str(stat.duration)])
if one_run.error_area:
has_errors = True
report_one_run['errors_table'] = tabulate2(
t, headers=headers, tablefmt='grid')
headers = ['#', 'Time to recover, s', 'Absolute degradation, s',
'Relative degradation']
t = []
for index, stat in enumerate(one_run.degradation_area):
t.append([index + 1,
mean_var_to_str(stat.duration),
mean_var_to_str(stat.degradation),
mean_var_to_str(stat.degradation_ratio)])
if one_run.degradation_area:
has_degradation = True
report_one_run['degradation_table'] = tabulate2(
t, headers=headers, tablefmt="grid")
report['runs'].append(report_one_run)
headers = ['Service downtime, s', 'MTTR, s',
'Absolute performance degradation, s',
'Relative performance degradation, ratio']
t = [[mean_var_to_str(summary.downtime),
mean_var_to_str(summary.mttr),
mean_var_to_str(summary.degradation),
mean_var_to_str(summary.degradation_ratio)]]
report['summary_table'] = tabulate2(t, headers=headers, tablefmt='grid')
report['has_errors'] = has_errors
report['has_degradation'] = has_degradation
jinja_env = jinja2.Environment()
jinja_env.filters['json'] = json.dumps
jinja_env.filters['yaml'] = functools.partial(
yaml.safe_dump, indent=2, default_flow_style=False)
path = utils.resolve_relative_path(REPORT_TEMPLATE)
with open(path) as fd:
template = fd.read()
compiled_template = jinja_env.from_string(template)
rendered_template = compiled_template.render(dict(report=report))
index_path = os.path.join(book_folder, 'index.rst')
with open(index_path, 'w') as fd2:
fd2.write(rendered_template.encode('utf8'))
logging.info('The book is written to: %s', book_folder)
def make_report(scenario_name, raw_rally_file_names, book_folder):
scenario_dir = utils.resolve_relative_path(SCENARIOS_DIR)
scenario_path = os.path.join(scenario_dir, scenario_name)
if not scenario_path.endswith('.yaml'):
scenario_path += '.yaml'
with open(scenario_path) as fd:
scenario = fd.read()
raw_rally_reports = []
for file_name in raw_rally_file_names:
with open(file_name) as fd:
raw_rally_reports.append(json.loads(fd.read()))
utils.mkdir_tree(book_folder)
process(raw_rally_reports, book_folder, scenario, scenario_name)
def main():
parser = argparse.ArgumentParser(prog='rally-reliability-report')
parser.add_argument('-d', '--debug', action='store_true')
parser.add_argument('-i', '--input', dest='input', nargs='+',
help='Rally raw json output')
parser.add_argument('-b', '--book', dest='book', required=True,
help='folder where to write RST book')
parser.add_argument('-s', '--scenario', dest='scenario', required=True,
help='Rally scenario')
args = parser.parse_args()
logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
level=logging.DEBUG if args.debug else logging.INFO)
make_report(args.scenario, args.input, args.book)
if __name__ == '__main__':
main()

@@ -0,0 +1,89 @@
# coding=utf-8
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import functools
import itertools
import logging
import os
import shlex
from oslo_concurrency import processutils
import rally_runners.reliability as me
import rally_runners.reliability.rally_plugins as plugins
from rally_runners.reliability import report
from rally_runners import utils
SCENARIOS_DIR = 'rally_runners/reliability/scenarios/'
def make_help_options(base, type_filter=None):
path = utils.resolve_relative_path(base)
files = itertools.chain.from_iterable(
[map(functools.partial(os.path.join, root), files)
for root, dirs, files in os.walk(path)]) # list of files in a tree
if type_filter:
files = (f for f in files if type_filter(f)) # filtered list
rel_files = map(functools.partial(os.path.relpath, start=path), files)
return '\n '.join('%s' % f.partition('.')[0] for f in sorted(rel_files))
SCENARIOS_LIST = make_help_options(SCENARIOS_DIR,
type_filter=lambda x: x.endswith('.yaml'))
USAGE = """rally-reliability [-h] -s SCENARIO -o OUTPUT -b BOOK
Scenario is one of:
%s
""" % SCENARIOS_LIST
def main():
parser = argparse.ArgumentParser(prog='rally-reliability', usage=USAGE)
parser.add_argument('-d', '--debug', action='store_true')
parser.add_argument('-s', '--scenario', dest='scenario', required=True,
help='Rally scenario')
parser.add_argument('-o', '--output', dest='output', required=True,
help='raw Rally output')
parser.add_argument('-b', '--book', dest='book', required=True,
help='folder where to write RST book')
args = parser.parse_args()
logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
level=logging.DEBUG if args.debug else logging.INFO)
plugin_paths = os.path.dirname(plugins.__file__)
scenario_dir = os.path.join(os.path.dirname(me.__file__), 'scenarios')
scenario_path = os.path.join(scenario_dir, args.scenario)
if not scenario_path.endswith('.yaml'):
scenario_path += '.yaml'
run_cmd = ('rally --plugin-paths %(path)s task start --task %(scenario)s' %
dict(path=plugin_paths, scenario=scenario_path))
logging.info('Executing %s' % run_cmd)
command_stdout, command_stderr = processutils.execute(
*shlex.split(run_cmd))
logging.info('Execution is done: %s' % command_stdout)
command_stdout, command_stderr = processutils.execute(
*shlex.split('rally task results'))
with open(args.output, 'w') as fd:
fd.write(command_stdout)
report.make_report(args.scenario, [args.output], args.book)
if __name__ == '__main__':
main()

@@ -0,0 +1,24 @@
---
{% set repeat = repeat|default(5) %}
Authenticate.keystone:
{% for iteration in range(repeat) %}
-
runner:
type: "constant_for_duration"
duration: 30
concurrency: 20
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: kill keystone service on one node
trigger:
name: event
args:
unit: iteration
at: [100]
{% endfor %}

@@ -0,0 +1,21 @@
---
Authenticate.keystone:
-
runner:
type: "constant_for_duration"
duration: 60
concurrency: 5
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: kill mysql service on one node
trigger:
name: event
args:
unit: iteration
at: [150]

@@ -0,0 +1,24 @@
---
{% set repeat = repeat|default(5) %}
Authenticate.keystone:
{% for iteration in range(repeat) %}
-
runner:
type: "constant_for_duration"
duration: 30
concurrency: 5
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: restart keystone service on one node
trigger:
name: event
args:
unit: iteration
at: [100]
{% endfor %}

@@ -0,0 +1,24 @@
---
{% set repeat = repeat|default(5) %}
Authenticate.keystone:
{% for iteration in range(repeat) %}
-
runner:
type: "constant_for_duration"
duration: 30
concurrency: 5
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: restart memcached service on one node
trigger:
name: event
args:
unit: iteration
at: [100]
{% endfor %}

@@ -0,0 +1,29 @@
---
{% set repeat = repeat|default(3) %}
NeutronNetworks.create_and_list_networks:
{% for iteration in range(repeat) %}
-
args:
network_create_args: {}
runner:
type: "constant_for_duration"
duration: 60
concurrency: 4
context:
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
hooks:
-
name: fault_injection
args:
action: kill mysql service on one node
trigger:
name: event
args:
unit: iteration
at: [100]
{% endfor %}

@@ -0,0 +1,27 @@
---
NovaServers.boot_and_delete_server:
-
args:
flavor:
name: "m1.micro"
image:
name: "(^cirros.*uec$|TestVM)"
force_delete: false
runner:
type: "constant_for_duration"
duration: 600
concurrency: 4
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: disconnect management network on one node with nova-scheduler service
trigger:
name: event
args:
unit: iteration
at: [50]

@@ -0,0 +1,27 @@
---
NovaServers.boot_and_delete_server:
-
args:
flavor:
name: "m1.micro"
image:
name: "(^cirros.*uec$|TestVM)"
force_delete: false
runner:
type: "constant_for_duration"
duration: 300
concurrency: 4
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: disconnect storage network on one node with nova-compute service
trigger:
name: event
args:
unit: iteration
at: [50]

@@ -0,0 +1,27 @@
---
NovaServers.boot_and_delete_server:
-
args:
flavor:
name: "m1.micro"
image:
name: "(^cirros.*uec$|TestVM)"
force_delete: false
runner:
type: "constant_for_duration"
duration: 240
concurrency: 4
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: kill mysql service on one node
trigger:
name: event
args:
unit: iteration
at: [60]

@@ -0,0 +1,27 @@
---
NovaServers.boot_and_delete_server:
-
args:
flavor:
name: "m1.micro"
image:
name: "(^cirros.*uec$|TestVM)"
force_delete: false
runner:
type: "constant_for_duration"
duration: 240
concurrency: 4
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: kill rabbitmq service on one node
trigger:
name: event
args:
unit: iteration
at: [60]

@@ -0,0 +1,27 @@
---
NovaServers.boot_and_delete_server:
-
args:
flavor:
name: "m1.micro"
image:
name: "(^cirros.*uec$|TestVM)"
force_delete: false
runner:
type: "constant_for_duration"
duration: 600
concurrency: 4
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: reboot one node with rabbitmq service
trigger:
name: event
args:
unit: iteration
at: [50]

@@ -0,0 +1,24 @@
---
{% set repeat = repeat|default(3) %}
NovaFlavors.list_flavors:
{% for iteration in range(repeat) %}
-
runner:
type: "constant_for_duration"
duration: 60
concurrency: 4
context:
users:
tenants: 1
users_per_tenant: 1
hooks:
-
name: fault_injection
args:
action: restart keystone service on one node
trigger:
name: event
args:
unit: iteration
at: [100]
{% endfor %}

@@ -0,0 +1,35 @@
---
{% set repeat = repeat|default(1) %}
VMTasks.boot_runcommand_delete:
{% for iteration in range(repeat) %}
-
args:
flavor:
name: "m1.micro"
image:
name: "(^cirros.*uec$|TestVM)"
floating_network: "admin_floating_net"
command:
script_inline: "echo '{}'"
interpreter: "/bin/sh"
username: "cirros"
runner:
type: "constant_for_duration"
duration: 900
concurrency: 2
context:
users:
tenants: 1
users_per_tenant: 1
network: {}
hooks:
-
name: fault_injection
args:
action: restart keystone service on one node
trigger:
name: event
args:
unit: iteration
at: [60]
{% endfor %}

@@ -0,0 +1,73 @@
Scenario "{{ report.scenario_name }}"
=========={{ '=' * report.scenario_name | length }}=
This report is generated from the results collected by executing the following
Rally scenario:
.. code-block:: yaml
{{ report.scenario }}
Summary
-------
{% if report.has_errors or report.has_degradation %}
{{ report.summary_table }}
Metrics:
* `Service downtime` is the time interval between the first and
the last observed errors.
* `MTTR` is the mean time to recover service performance after
the fault.
* `Absolute performance degradation` is the difference between the mean
operation duration during the recovery period and the baseline mean.
* `Relative performance degradation` is the ratio of the mean operation
duration during the recovery period to the baseline mean.
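In terms of the values computed in ``analytics.py``, the last two metrics are
``mean(degraded duration) - mean(baseline duration)`` and
``mean(degraded duration) / mean(baseline duration)`` respectively.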
{% else %}
Neither errors nor performance degradation were observed.
{% endif %}
Details
-------
This section contains individual data for each scenario run.
{% for item in report.runs %}
Run #{{ loop.index }}
^^^^^^^^^^^^
.. image:: plot_{{ loop.index }}.svg
Baseline
~~~~~~~~
Baseline samples are collected before the start of fault injection. They are
used to estimate service performance degradation after the fault.
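The degradation threshold drawn on the plot is the baseline mean plus
``DEGRADATION_THRESHOLD`` (4 by default) standard deviations, as computed
in ``analytics.py``.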
{{ item.etalon_table }}
{% if item.errors_table %}
Service downtime
~~~~~~~~~~~~~~~~
The tested service is not available during the following time period(s).
{{ item.errors_table }}
{% endif %}
{% if item.degradation_table %}
Service performance degradation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The tested service has measurable performance degradation during the
following time period(s).
{{ item.degradation_table }}
{% endif %}
{% endfor %}

@@ -0,0 +1,36 @@
# coding=utf-8
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
MinMax = collections.namedtuple('MinMax', ('min', 'max'))
Mean = collections.namedtuple('Mean', ('statistic', 'minmax'))
MeanVar = collections.namedtuple('MeanVar', ('statistic', 'var'))
ArrayStats = collections.namedtuple(
'ArrayStats', ['mean', 'median', 'p95', 'var', 'std', 'count'])
ClusterStats = collections.namedtuple(
'ClusterStats', ['start', 'end', 'duration', 'count'])
DegradationClusterStats = collections.namedtuple(
'DegradationClusterStats',
['start', 'end', 'duration', 'count', 'degradation', 'degradation_ratio'])
RunResult = collections.namedtuple(
'RunResult', ['data', 'error_area', 'anomaly_area', 'degradation_area',
'etalon_stats', 'etalon_interval', 'etalon_threshold',
'smooth_data'])
SummaryResult = collections.namedtuple(
'SummaryResult', ['run_results', 'mttr', 'degradation',
'degradation_ratio', 'downtime'])
SmoothData = collections.namedtuple('SmoothData', ['time', 'duration', 'var'])
DataRow = collections.namedtuple(
'DataRow', ['index', 'time', 'duration', 'error'])

@@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
from rally_runners.reliability import report
class TestReport(testtools.TestCase):
def test_indent(self):
src = ('lorem ipsum\n'
'dolor sit amet')
expected = (' lorem ipsum\n'
' dolor sit amet')
observed = report.indent(src, 4)
self.assertEqual(observed, expected)

@@ -0,0 +1,34 @@
# coding=utf-8
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import errno
import os
def resolve_relative_path(file_name):
path = os.path.normpath(os.path.join(
os.path.dirname(
__import__('rally_runners').__file__), '../', file_name))
if os.path.exists(path):
return path
def mkdir_tree(path):
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise

@@ -0,0 +1,15 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
Jinja2>=2.8 # BSD License (3 clause)
oslo.concurrency>=3.5.0 # Apache-2.0
matplotlib
numpy
pyinterval
PyYAML>=3.1.0 # MIT
scipy
scikit-learn
tabulate

@@ -0,0 +1,34 @@
[metadata]
name = rally-runners
summary = A collection of Rally runners, scenarios and report generators
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
[files]
packages =
rally_runners
[entry_points]
console_scripts =
rally-reliability = rally_runners.reliability.runner:main
rally-reliability-report = rally_runners.reliability.report:main
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html

@@ -0,0 +1,29 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

@@ -0,0 +1,13 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.12,>=0.11.0 # Apache-2.0
coverage>=3.6 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT

@@ -0,0 +1,36 @@
[tox]
minversion = 2.0
envlist = py27,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = python setup.py test --slowest --testr-args='{posargs}'
[testenv:pep8]
commands = flake8 {posargs}
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py test --coverage --testr-args='{posargs}'
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:debug]
commands = oslo_debug_helper {posargs}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build