pandas>=1.2.0 introduced changes in numeric rounding which make
the following test fail when executed under py38:
- test_get_numeric_data_diff_build_name
with the following assertion error:
'2018-06-23T00:00:00': 4766.42225,
'2018-06-23T00:00:00': 4766.4222500000005,
++++++++
'2018-06-24T00:00:00': 4899.55725,
'2018-06-25T00:00:00': 5042.148499999999}}
^^^^^^^^^
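The difference above is ordinary binary floating-point noise rather than a
data error; a minimal sketch of a tolerance-based comparison that would make
the assertion robust across pandas versions (the values are taken straight
from the failing assertion):

```python
import math

# The two values from the failing assertion differ only in the last
# few bits of their float representation.
expected = 4766.42225
actual = 4766.4222500000005

# Exact equality is fragile across pandas versions:
exact_match = expected == actual
# A relative tolerance makes the comparison version-independent:
close_match = math.isclose(expected, actual, rel_tol=1e-9)
```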
Change-Id: Icd33d84d94d58ba36b296d46f4397227fa36ef9c
Building the docs fails due to incompatibilities between Sphinx and
other packages. Sync the requirements with global requirements; this
fixes the build problems.
Change-Id: I2060945854515fa54a8c5e3e9598a9255e5493e4
This commit fixes an RSS feed unavailable error by returning a
zero-length entry list when the specified run_metadata key and value
have no failures.
Closes-Bug: #1573630
Change-Id: I48659140314a2d8737611dd0a86097f10c1f3ac8
This commit fixes pandas deprecation warnings. Some pandas functions,
such as rolling_mean() and resample(how=xx), are deprecated as of
v0.18.0.
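A minimal sketch of the migration on a toy series (the exact call sites in
the repo may differ):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0],
              index=pd.date_range("2016-05-01", periods=4, freq="D"))

# Deprecated since pandas 0.18.0:
#     pd.rolling_mean(s, window=2)
#     s.resample("D", how="mean")
# Current method-chaining equivalents:
rolled = s.rolling(window=2).mean()
resampled = s.resample("D").mean()
```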
Change-Id: I1465e50821af2aaa77d5458205469c4eec1dab58
Closes-Bug: #1580447
In the latest release of dogpile.cache, 0.6.0, some backwards
incompatible import reorganization broke our usage in openstack-health's
distributed dbm backend proxy. This commit updates the imports to work
with 0.6.0 and newer versions of dogpile.cache.
Change-Id: Iad50ab66c88a2164146fea98edf435a445e0ee6c
The new dogpile.cache 0.6.0 release seems to be breaking our gate
tests. As a temporary fix, this caps the dogpile.cache version to
the previously-working 0.5.7.
Change-Id: I8f6f721f0b08a0501e711f2718baa91546c56f12
This commit updates the flask jsonpify extension import to stop using
the deprecated form. Starting in Flask 0.11, importing jsonpify via
flask.ext.jsonpify emits a deprecation warning asking users to update
the import.
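The change amounts to swapping the deprecated flask.ext shim for the
extension package's direct import, sketched here as a fragment (the exact
names used in the repo may vary):

```python
# Deprecated since Flask 0.11 (importing through the flask.ext shim
# emits a DeprecationWarning):
#     from flask.ext.jsonpify import jsonify
#
# Direct import of the extension package, which is the supported form:
#     from flask_jsonpify import jsonify
```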
Change-Id: I266d0dcb0b71d97f4fb7edd0d5d7bf1cc8b36205
This commit adds a new dogpile.cache backend that uses the memcached
distributed locking mechanism to enable async workers but still uses
the dbm file storage. This should enable us to cache large JSON blobs
without worrying about the memcached size constraints, but at the
same time reap the benefits of having an async worker update the
cache in the background. The tradeoff here is configuration complexity
because you still need to install memcached to leverage this.
Change-Id: Ied241ca1762c62a047bd366d7bd105028a884f30
We recently added elastic-recheck/elastic-search as an additional data
source for openstack-health. However, adding the current configuration
was neglected in that patch. This commit fixes that oversight by
including a field for elastic-recheck in the status response. It will
let API users know if elastic-recheck is installed, configured, and if
so what the health of the elastic-search cluster is.
Change-Id: Ia76a26de930b13a4a7cd90dc0ef45bbcecc714f6
This commit is the start of adding support for elastic-recheck data
to openstack-health. This will get the bug numbers for the recent
failed runs it's looking at. To actually use this feature it requires
having elastic-recheck installed and a new api config option for the
location on disk of the query files. This does add some install-time
complexity, especially as elastic-recheck is only available via git.
The elastic recheck support is optional and will only be activated if
it is present at import time and the config flag telling the api
server where the queries are located on disk is set.
In order to keep response times reasonable when elastic-recheck
support is enabled, dogpile caching is used to keep a cached copy
of the recent results list and limit how often we query
elasticsearch. By default this is configured to use a dbm file
and refresh the cache every 30 minutes.
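The caching setup described above can be sketched as a dogpile.cache region
configuration (the filename is illustrative, not the repo's actual value):

```python
from dogpile.cache import make_region

# 30 minute expiration, backed by a dbm file on disk
region = make_region().configure(
    'dogpile.cache.dbm',
    expiration_time=1800,
    arguments={'filename': '/var/cache/openstack-health/recent.dbm'},
)

# Cached lookups then go through region.get_or_create(key, creator_fn),
# which only re-runs the creator when the cached value has expired.
```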
Depends-On: Icf203668690419c38f6d6be6b6fe4af8462845f3
Change-Id: Iccd9ec6d62e2249ec7c09d42ec02ea27c71144cc
This commit removes the file storing from the rss endpoint. The code
is buggy and also a potential security issue. It also isn't actually
needed, as most (if not all) rss readers will handle missing entries
fine; as long as the ids are unique it shouldn't matter. So even if
we lose old entries on a crash or restart it won't be an issue.
Change-Id: I595abc566880ae6c778b00affde12e2227c9ec35
This commit adds a new api route to return an rss feed with the
recent failed test runs for all runs matching a given run_metadata
key value pair.
Depends-On: Ic3463c1f9d9b4a20145467b3da1cdbf5344bbceb
Change-Id: Ib4583d2cf0e6890f86ee4216d07237be4bfce3b0
This adds 2 new REST API methods for getting a list of test prefixes
('/tests/prefix') and getting a paginated list of tests for a given
prefix ('/tests/prefix/<prefix>').
Depends-On: I55e37c763f3f3ffc2840fddc8109d601291fcc0f
Change-Id: Ib6584a7a5600037dd33cce8fb4466495ed8e83d1
This commit adds a new table to the front page which contains a list
of the tests which failed as part of the 10 most recent failed runs.
Change-Id: I5ff8ed38742d31e0b1a972901b7907030ec99bea
This commit updates the requirements that come from global
requirements. However, some of the libraries are not in
global-requirements [1], so I've added the licenses for them; we may
need to add them to global-requirements later.
[1] flask-jsonpify, numpy, pandas, subunit2sql
Change-Id: I05a04406f966917e35c91aca3f8b0cd870e9a498
This commit adds a new rest api method to return a list of recent
runs for a given run_metadata key value pair. It will be useful in
constructing lists with links to recent runs.
Change-Id: Ic5557e42ab158b865c024dc539d291c061a69dce
Depends-On: I855c47e0056f459b1f71e919d0331c2e8ecbd3a8
This commit reworks the get_test_runs_for() rest api call and changes
its basic behavior. First, the numeric data is split from the
non-numeric data in the response. This was done to enable mean-based
resampling on the numeric data. This is necessary because the other
large change to the api is that a running mean and a running std dev
are added to the response. This was done to enable more detailed
graphs in the UI for the run_time of a single test.
To perform the running mean and std dev pandas is added to the
requirements for the rest api.
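A toy sketch of the resample-then-rolling computation described above
(the window size and sample values are illustrative, not the api's actual
parameters):

```python
import pandas as pd

idx = pd.date_range("2018-06-20", periods=6, freq="D")
run_time = pd.Series([4.0, 5.0, 6.0, 5.0, 4.0, 6.0], index=idx)

# Mean-based resampling of the numeric data...
daily = run_time.resample("D").mean()
# ...plus a running mean and a running std dev over it
avg = daily.rolling(window=3).mean()
std = daily.rolling(window=3).std()
```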
Change-Id: I3416dc0cad24c90405ac437df31c39b59f524d88
This commit adds a new REST API method to get a time series of
test_runs given the test_id for a test. The intent here is to enable
a per test view with a graph of success and failures over time, as
well as a test run_time graphed over time.
Change-Id: Id7fe36c3e1ca069d942fe246d688648f719d3168
The release has happened, the migration worked, all's right with the
world.
This reverts commit 6b636200f5.
Change-Id: I4b19c5acc6c30f2379a30d2c68422057fdb83a90
This commit adds a temporary version cap on subunit2sql to be < 1.0.0.
The 1.0.0 release includes a very large database migration which will
be slow to execute. The python DB api from >=v1.0.0 will not work with
a database that doesn't have the updated schema. So let's cap the
version we install to prevent everything from breaking while the
migration is running (which might take days).
Change-Id: Ieadcc6847a1648bc247fd840ec95653f438be664
This commit adds a rest api endpoint for the second page view. It
takes in a key value pair from the url and returns a time series dict
of all the runs which had that key value pair in their respective run
metadata. The datetime resolution, start date, and stop date are all
adjustable with url parameters.
The second page view will use this with the key being project and the
value being whatever project the page is being generated for.
Co-Authored-By: Glauco Oliveira <gvinici@thoughtworks.com>
Co-Authored-By: Moises Trovó <mtrovo@thoughtworks.com>
Change-Id: I7837073c9029014e03b2faca642f77f997ebdf82
This commit adds a new rest endpoint /runs/group_by/metadata_key to
return a json dictionary of the run stats for all the runs in a given
time resolution grouped by the metadata_key provided in the url.
This will be used for the first page view to return a json dict that
is grouped by project by making a call like: GET /runs/group_by/project
which will return all the runs grouped by project.
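The grouping can be sketched in plain Python (the helper name and the shape
of the run dicts are illustrative, not the endpoint's actual code):

```python
from collections import defaultdict

def group_runs_by(runs, metadata_key):
    # Bucket run stats by the value of a metadata key (e.g. "project").
    grouped = defaultdict(lambda: {"passes": 0, "failures": 0})
    for run in runs:
        key = run["metadata"].get(metadata_key)
        if key is None:
            continue
        grouped[key]["passes"] += run["passes"]
        grouped[key]["failures"] += run["failures"]
    return dict(grouped)
```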
Change-Id: I73a9a040a8950b72b399d1866246422b08f2e60d
As PyMySQL is a dependency for running the api, it should be in the
requirements file.
Change-Id: I8c587235088897db96bf6c78238046a094dbe879
Co-Authored-By: Dhiana Deva <ddeva@thoughtworks.com>
JSONP is required to allow cross-origin API requests from
JavaScript clients. This adds the `flask-jsonpify` library to
support this additional request method, and uses it in all current
routes.
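What the library does can be sketched in a few lines of stdlib Python (the
helper name mirrors the library's, but this is an illustration, not its
actual source):

```python
import json

def jsonpify(data, callback=None):
    # Plain JSON when no callback is requested; otherwise wrap the
    # body in the JavaScript callback named by the ?callback= query
    # parameter, so a <script> tag can consume a cross-origin response.
    body = json.dumps(data)
    if callback:
        return "{}({})".format(callback, body)
    return body
```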
Change-Id: Ifb7d5abaa16cf165bfc763fec1189fdfb48a4b88
This commit adds the basic python infrastructure to the repo. The
backend rest api on top of the data stores will live in the repo,
but before we can add that we need to be able to support python code
in the repo.
Change-Id: I869f42e148c2f5c2369fb5613d43b4ec25aaa2db