Switch to stestr

According to the OpenStack Summit session [1],
stestr is the maintained project to which all OpenStack projects
should migrate. Let's switch to stestr, as other projects have
already moved to it.

[1] https://etherpad.openstack.org/p/YVR-python-pti

Change-Id: I7daa364d2016e1b7951e6240b3254287361f9d0f
Vu Cong Tuan 2018-07-09 14:09:58 +07:00 committed by Eric Kao
parent 2c423c83c2
commit 006a529286
7 changed files with 84 additions and 68 deletions

.gitignore

@@ -4,7 +4,7 @@
 *.swp
 *.swo
 *.pyc
-.testrepository
+.stestr/
 # Editors
 *~

.stestr.conf (new file)

@@ -0,0 +1,3 @@
+[DEFAULT]
+test_path=./tests
+top_dir=./

.testr.conf (deleted file)

@@ -1,4 +0,0 @@
-[DEFAULT]
-test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
-test_id_option=--load-list $IDFILE
-test_list_option=--list


@@ -1,5 +1,5 @@
 oslosphinx
 pbr>=0.6,!=0.7,<1.0
 sphinx>=1.1.2,!=1.2.0,<1.3
-testrepository>=0.0.18
+stestr>=2.0.0
 testtools>=0.9.34


@@ -13,8 +13,9 @@ https://blueprints.launchpad.net/congress/+spec/configuration-files-validation
 Congress could be used by cloud operators to formalize the constraints between
 options defined in configuration files with the help of business rules written
 in Datalog and verify the compliance of their deployments automatically.
-It is intended to be complementary to config management systems that should only
-create valid configuration files and to integration tests such as RefStack.
+It is intended to be complementary to config management systems
+that should only create valid configuration files
+and to integration tests such as RefStack.
 Problem description
@@ -41,13 +42,13 @@ library. Configuration options are not independent:
 Constraints may be defined by different actors:
-* Service developers know the constraints on the options they define and usually
-  document them informally in the source code in the description field associated
-  to the option.
+* Service developers know the constraints on the options they define and
+  usually document them informally in the source code in the description field
+  associated to the option.
 * Cloud integrators will discover additional constraints when they develop
-  deployment code. Most of the time those options are only implicitly defined in
-  the source code of the scripts.
+  deployment code. Most of the time those options are only implicitly defined
+  in the source code of the scripts.
 * Administrators may want to enforce additional constraints reflecting the
   operational environment constraints.
@@ -80,13 +81,15 @@ deployer impact' section).
 The agent is responsible for conveying information on option values but also on
 their meta-data: layout in groups, type, flags such as deprecation, secret
-status, etc. Meta-data are described in templates which are in fact a collection
-of namespaces. Namespace contain the actual definition of meta-data. To avoid
-versionning problems, Meta-data must be obtained directly from the service.
+status, etc. Meta-data are described in templates which are in fact
+a collection of namespaces. Namespace contain the actual definition of
+meta-data. To avoid versionning problems, Meta-data must be obtained directly
+from the service.
-To limit the amount of traffic between the driver and agents, template files are
-hashed. Agents first reply to queries with hashes. The driver will request the
-content only if the hash is unknown. The same process is used for namespaces.
+To limit the amount of traffic between the driver and agents, template files
+are hashed. Agents first reply to queries with hashes. The driver will request
+the content only if the hash is unknown.
+The same process is used for namespaces.
The only processing performed by agents on the option files is the removal of
the values of secret options.
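The hash-then-fetch exchange described in this spec can be sketched roughly as below; the `Agent` and `Driver` classes, their method names, and the template data are illustrative assumptions, not Congress's actual implementation:

```python
import hashlib

def digest(content):
    # Hash used purely for deduplication, not for security.
    return hashlib.md5(content.encode('utf-8')).hexdigest()

class Agent(object):
    """Replies to queries with hashes first; sends content only on request."""
    def __init__(self, templates):
        self.templates = templates  # name -> template content

    def describe(self):
        return {name: digest(body) for name, body in self.templates.items()}

    def fetch(self, name):
        return self.templates[name]

class Driver(object):
    """Requests only the resources whose hashes it has not seen before."""
    def __init__(self):
        self.cache = {}  # hash -> content

    def sync(self, agent):
        fetched = []
        for name, h in agent.describe().items():
            if h not in self.cache:
                self.cache[h] = agent.fetch(name)
                fetched.append(name)
        return fetched

agent = Agent({'nova.tpl': 'namespace nova', 'congress.tpl': 'namespace congress'})
driver = Driver()
first = driver.sync(agent)   # both templates fetched on first contact
second = driver.sync(agent)  # nothing fetched: hashes already known
```

A second `sync` transfers no content at all, which is the point of exchanging hashes before files.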
@@ -124,11 +127,11 @@ The definition of an option is split between three tables:
 defined in a specialized table. The option id is used again as a key to refer
 to the option table.
-Regarding the uniqueness of configuration meta-data in the extensional database,
-the driver must ensure that the ids are deterministic. An option identified
-by the same name, same group name and same namespace name should always be
-given the same unique id. The use of the MD5 hash function (there is no
-cryptographic requirement) guarantees uniqueness and determinism.
+Regarding the uniqueness of configuration meta-data in the extensional
+database, the driver must ensure that the ids are deterministic. An option
+identified by the same name, same group name and same namespace name should
+always be given the same unique id. The use of the MD5 hash function (there is
+no cryptographic requirement) guarantees uniqueness and determinism.
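A minimal sketch of that deterministic-id scheme, assuming a hypothetical `option_id` helper (the spec does not prescribe this exact function or field order):

```python
import hashlib

def option_id(name, group, namespace):
    """Derive a deterministic id for a configuration option.

    The same (name, group, namespace) triple always yields the same id;
    MD5 is used only for determinism, not for any cryptographic purpose.
    """
    key = '%s|%s|%s' % (namespace, group, name)
    return hashlib.md5(key.encode('utf-8')).hexdigest()

# Stable across runs and drivers; distinct options get distinct ids.
assert option_id('debug', 'DEFAULT', 'nova') == option_id('debug', 'DEFAULT', 'nova')
assert option_id('debug', 'DEFAULT', 'nova') != option_id('debug', 'DEFAULT', 'neutron')
```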
Alternatives
------------
@@ -141,13 +144,14 @@ space beyond what is necessary.
 The proposed change must be considered as a work in progress. It does not fully
 address the problem of the location and the management of constraints. Most
 constraints are known by the service developers and should be maintained in the
-source code of services in a dedicated meta-data field of the option definition.
+source code of services in a dedicated meta-data field of
+the option definition.
 The use of an external agent to push a service configuration is not the only
 solution. The oslo-config library could be modified to push the configuration
-read by the service to the datasource driver. This could be done through the use
-of a hook in oslo-config. It would require additional configuration of the
-services to identify the endpoint.
+read by the service to the datasource driver. This could be done through
+the use of a hook in oslo-config. It would require additional configuration of
+the services to identify the endpoint.
 Policy
 ------
@@ -165,7 +169,8 @@ Z: lb-agt host
 ::
-  warn('Multinode OVS and linuxbridge use incompatible UDP ports', 'vxlan_conflicting_ovs_lb_udp_ports') :-
+  warn('Multinode OVS and linuxbridge use incompatible UDP ports',
+       'vxlan_conflicting_ovs_lb_udp_ports') :-
     vxlan_conflicting_ovs_lb_udp_ports()
   vxlan_conflicting_ovs_lb_udp_ports(Y, Z) :-
@@ -232,8 +237,8 @@ Other end user impact
 ---------------------
 None other than the usual management of the datasource and policy.
-Eventually, we would like to feed the engine with rules that are coming from and
-maintained in the services source code.
+Eventually, we would like to feed the engine with rules that are coming from
+and maintained in the services source code.
 Performance impact
 ------------------
@@ -252,16 +257,16 @@ the duplicated sending of files and templates, and to prevent overloading the
 driver. When the driver is activated, it periodically notifies agents, over the
 communication bus requesting their data description. An agent send description
 of the files it has been set to provide. The description contains hashes of
-namespaces, templates and configs. The driver then requests the resources, which
-hashes have not been recognized.
+namespaces, templates and configs. The driver then requests the resources,
+which hashes have not been recognized.
 We use the RPC server of the datasource associated DseNode.
 Other deployer impact
 ---------------------
-We add a dedicated group and options to configure an agent and the configuration
-files to manage.
+We add a dedicated group and options to configure an agent and the
+configuration files to manage.
 *validator.host*
@@ -282,7 +287,8 @@ An dict option describing the OpenStack services activated on this node. The
 values are also dictionaries. Keys are paths to config-files,
 values are paths to the associated templates. For instance::
-  congress: { /etc/congress/congress.conf:/opt/stack/congress/etc/congress-config-generator.conf }
+  congress: { /etc/congress/congress.conf:
+              /opt/stack/congress/etc/congress-config-generator.conf }
 Example config :
@@ -293,7 +299,8 @@ Example config :
   transport_url = rabbit://..@control:5672/
   [validator]
-  services = nova : { /etc/nova.conf:/opt/stack/nova/etc/nova/nova-config-generator.conf:
+  services = nova : { /etc/nova.conf:
+                      /opt/stack/nova/etc/nova/nova-config-generator.conf:
   version = ocata
   host = node_A
@@ -349,8 +356,8 @@ sending of meta-data and files.
 Documentation impact
 ====================
-This feature introduces an agent component that requires separate configuration.
-It also defines new datasources.
+This feature introduces an agent component that requires separate
+configuration. It also defines new datasources.
References


@@ -37,9 +37,10 @@ policy engines), and backwards-compatible with the standard ``agnostic`` engine
 and existing data source drivers.
 As an illustration of the need for extensibility and loose-coupling, consider
-that a SMT-based Z3 prover needs types that are as narrow as possible (e.g., enum{'ingress', 'egress'}), but another engine does not understand such narrow
-types. A data source driver then must specify its types in a way that
-works well for both engines.
+that a SMT-based Z3 prover needs types that are as narrow as possible
+(e.g., enum{'ingress', 'egress'}), but another engine does not understand
+such narrow types. A data source driver then must specify its types
+in a way that works well for both engines.
 Proposed change
@@ -57,9 +58,9 @@ we expect new ones to be introduced over time). To allow a policy engine to
 handle custom data types it does not "understand", we require that every
 custom data type inherit from another data type, all of which ultimately
 inherit from one of the base types all policy engines must handle. That way,
-when a policy engine encounters a value of a custom type it does not understand,
-the policy engine can fall back to handling the value according to an ancestor
-type the engine does understand.
+when a policy engine encounters a value of a custom type
+it does not understand, the policy engine can fall back to handling the value
+according to an ancestor type the engine does understand.
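That fallback behavior can be sketched with plain Python classes; the type names and the `fallback_type` helper below are simplified stand-ins for the spec's CongressDataType machinery, not its actual API:

```python
class CongressStr(object):
    """Stand-in for a base type every policy engine must handle."""
    pass

class CongressIPAddress(CongressStr):
    """Custom type; engines that don't know it fall back to CongressStr."""
    pass

def fallback_type(custom_type, supported_types):
    """Walk up the class hierarchy to the nearest type the engine supports."""
    for klass in custom_type.__mro__:
        if klass in supported_types:
            return klass
    raise TypeError('no supported ancestor for %s' % custom_type.__name__)

# An engine that only understands the base string type falls back to it:
assert fallback_type(CongressIPAddress, {CongressStr}) is CongressStr
# An engine that knows the custom type uses it directly:
assert fallback_type(CongressIPAddress, {CongressIPAddress, CongressStr}) is CongressIPAddress
```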
What is a type?
---------------
@@ -67,8 +68,8 @@ What is a type?
 To define what a type is in this spec, we first establish four related
 concepts:
-#. *source representation*: the representation used in the data received from an
-   external source. Each external data source has its own source
+#. *source representation*: the representation used in the data received from
+   an external source. Each external data source has its own source
    representation defined outside of Congress. In an IP address example, one
    source could use IPv4 dotted decimal string ``"1.0.0.1"`` while another
    source could use IPv6 (short) hexadecimal string ``"::ffff:100:1"``.
@@ -193,19 +194,26 @@ could be specified for the ``flavors`` table in the Nova data source driver.
     'selector-type': 'DOT_SELECTOR',
     'field-translators':
         ({'fieldname': 'id', 'desc': 'ID of the flavor',
-          'translator': {'translation-type': 'VALUE', 'data-type': CongressStr}},
+          'translator': {'translation-type': 'VALUE',
+                         'data-type': CongressStr}},
          {'fieldname': 'name', 'desc': 'Name of the flavor',
-          'translator': {'translation-type': 'VALUE', 'data-type': CongressStr}},
+          'translator': {'translation-type': 'VALUE',
+                         'data-type': CongressStr}},
          {'fieldname': 'vcpus', 'desc': 'Number of vcpus',
-          'translator': {'translation-type': 'VALUE', 'data-type': CongressInt}},
+          'translator': {'translation-type': 'VALUE',
+                         'data-type': CongressInt}},
          {'fieldname': 'ram', 'desc': 'Memory size in MB',
-          'translator': {'translation-type': 'VALUE', 'data-type': CongressInt}},
+          'translator': {'translation-type': 'VALUE',
+                         'data-type': CongressInt}},
          {'fieldname': 'disk', 'desc': 'Disk size in GB',
-          'translator': {'translation-type': 'VALUE', 'data-type': CongressInt}},
+          'translator': {'translation-type': 'VALUE',
+                         'data-type': CongressInt}},
          {'fieldname': 'ephemeral', 'desc': 'Ephemeral space size in GB',
-          'translator': {'translation-type': 'VALUE', 'data-type': CongressInt}},
+          'translator': {'translation-type': 'VALUE',
+                         'data-type': CongressInt}},
          {'fieldname': 'rxtx_factor', 'desc': 'RX/TX factor',
-          'translator': {'translation-type': 'VALUE', 'data-type': CongressFloat}})
+          'translator': {'translation-type': 'VALUE',
+                         'data-type': CongressFloat}})
     }
@@ -360,9 +368,9 @@ of an ancestor type. Ideally, every policy engine would recognize and support
 The only exception is that values of the non-string types
 inheriting from CongressStr need to be converted to string to be interprable
 as a value of CongressStr type.
-The CongressDataType abstract base class can include additional helper methods to
-make the interpretation easy. Below is an expanded CongressDataType definition
-including the additional helper methods.
+The CongressDataType abstract base class can include additional helper methods
+to make the interpretation easy. Below is an expanded
+CongressDataType definition including the additional helper methods.
 .. code-block:: python
@@ -386,7 +394,8 @@ including the additional helper methods.
         this type among the types the data consumer supports.
         :param supported_types: iterable collection of types
-        :returns: the subclass of CongressDataType which is the least ancestor
+        :returns: the subclass of CongressDataType
+                  which is the least ancestor
         '''
         target_types = frozenset(target_types)
         current_class = cls
@@ -399,15 +408,16 @@ including the additional helper methods.
     @classmethod
     def convert_to_ancestor(cls, value, ancestor_type):
-        '''Convert this type's exchange value to ancestor_type's exchange value
+        '''Convert this type's exchange value
+        to ancestor_type's exchange value
         Generally there is no actual conversion because descendant type value
         is directly interpretable as ancestor type value. The only exception
         is the conversion from non-string descendents to string. This
         conversion is needed by Agnostic engine does not support boolean.
-        .. warning:: undefined behavior if ancestor_type is not an ancestor of
-           this type.
+        .. warning:: undefined behavior if ancestor_type is not
+           an ancestor of this type.
         '''
         if ancestor_type == CongressStr:
             return json.dumps(value)
@@ -427,7 +437,7 @@ including the additional helper methods.
             raise cls.CongressDataTypeHierarchyError(
                 'More than one parent type found for {0}: {1}'
                 .format(cls, congress_parents))
     class CongressDataTypeNoParent(TypeError):
         pass
@@ -543,11 +553,11 @@ No impact on REST API.
 Security impact
 ---------------
-There is little new security impact. Because new/custom types include new/custom
-data handling methods, it does theoretically increase the attack surface. Care
-needs to be taken to make sure the data handling methods are safe against
-malformed or possibility malicious input. There are well-known best practices to
-minimize such risks.
+There is little new security impact. Because new/custom types include
+new/custom data handling methods, it does theoretically increase the attack
+surface. Care needs to be taken to make sure the data handling methods are safe
+against malformed or possibility malicious input. There are well-known best
+practices to minimize such risks.
Notifications impact
--------------------

tox.ini

@@ -10,7 +10,7 @@ usedevelop = True
 setenv = VIRTUAL_ENV={envdir}
 install_command = pip install -U {opts} {packages}
 deps = -r{toxinidir}/requirements.txt
-commands = python setup.py testr --slowest --testr-args='{posargs}'
+commands = stestr run --slowest {posargs}
 [testenv:venv]
 commands = {posargs}