Merge "Build pdf docs"

This commit is contained in:
Zuul 2019-09-19 01:53:55 +00:00 committed by Gerrit Code Review
commit f0d600447b
4 changed files with 74 additions and 47 deletions


@@ -227,6 +227,17 @@ texinfo_documents = [
'Miscellaneous'),
]
# Disable usage of xindy https://bugzilla.redhat.com/show_bug.cgi?id=1643664
latex_use_xindy = False
latex_domain_indices = False
latex_elements = {
'makeindex': '',
'printindex': '',
'preamble': r'\setcounter{tocdepth}{3}',
}
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []


@@ -1,5 +1,6 @@
 pbr>=2.0.0,!=2.1.0 # Apache-2.0
-sphinx>=1.6.2 # BSD
+sphinx!=1.6.6,!=1.6.7,>=1.6.5,<2.0.0;python_version=='2.7' # BSD
+sphinx!=1.6.6,!=1.6.7,>=1.6.5,!=2.1.0;python_version>='3.4' # BSD
 testtools>=1.4.0
 yasfb>=0.8.0
 openstackdocstheme>=1.19.0 # Apache-2.0


@@ -124,63 +124,63 @@ That said, the following changes are going to be implemented:
* In the watcher/common package:
* ScoringEngine class defining an abstract base class for all Scoring
Engine implementations. The abstract class will include the following
abstract methods:
:get_engine_id:
Method will return a unique string identifier of the Scoring Engine.
This ID will be used by factory classes and `Strategies`_ wanting to
use a specific Scoring Engine
:Input:
none
:Result:
unique string ID (must be unique across all Scoring Engines)
:get_model_metadata:
Method will return a map with metadata information about the data
model. This might include information about the algorithm used and
labels for the data returned by the score method (useful for
interpreting the results)
:Input:
none
:Result:
dictionary with metadata, both keys and values as strings
For example, the metadata can contain the following information (real
world example):
* scoring engine is a classifier, which is based on the learning data
with these column labels (last column is the result used for
learning): [MEM_USAGE, PROC_USAGE, PCI_USAGE, POWER_CONSUMPTION,
CLASSIFICATION_ID]
* during the learning process, the machine learning decides that it
actually only needs these columns to calculate the expected
CLASSIFICATION_ID: [MEM_USAGE, PROC_USAGE]
* because the scoring result is a list of doubles, we need to know
what it means, e.g. 0.0 == CLASSIFICATION_ID_2, 1.0 ==
CLASSIFICATION_ID_1, etc.
* there is no guarantee of the order of the columns, or even of their
  existence, in the input/output list
* this information must be passed as metadata, so the user of the
scoring engine is able to "understand" the results
* in addition, the metadata might provide some insights like what was
the algorithm used for learning or how many training records were
used
:calculate_score:
Method responsible for performing the actual scoring, such as
classifying or predicting data
:Input:
list of float numbers (e.g. feature values)
:Result:
list of float numbers (e.g. classified values, predicted results)
* In the `Watcher Decision Engine`_:

tox.ini

@@ -21,11 +21,26 @@ commands = {posargs}
[testenv:docs]
basepython = python3
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
setenv = PYTHONHASHSEED=0
commands =
find . -type f -name "*.pyc" -delete
python setup.py build_sphinx
[testenv:pdf-docs]
basepython = python3
envdir = {toxworkdir}/docs
deps = {[testenv:docs]deps}
whitelist_externals =
rm
make
commands =
rm -rf doc/build/pdf
sphinx-build -W -b latex doc/source doc/build/pdf
make -C doc/build/pdf
[testenv:pep8]
basepython = python3
deps =
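Assuming tox and a LaTeX toolchain (e.g. `make` plus `pdflatex`/`latexmk`) are installed locally, the new environment added above can be exercised with:

```shell
# Build LaTeX sources with sphinx-build, then run make on them,
# mirroring the pdf-docs commands; output lands in doc/build/pdf/
tox -e pdf-docs
ls doc/build/pdf/*.pdf
```

Because `envdir = {toxworkdir}/docs`, the environment shares its virtualenv with `[testenv:docs]`, so running both targets does not duplicate the Sphinx install.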