Enable specs sphinx build

The Congress specs Sphinx doc build is currently broken: none of the
rst files can be built into html, so previewing the html version of
the specs is impossible. Fix the build so that specs can be reviewed
in a human-readable way.

Some formatting issues exist in the current rst files; fix these at
the same time so that the tox py27 and docs builds can pass.

Change-Id: I0dcdfb315e8314fb54d7333c2395d41ff6a0c9a6
Rui Chen 2015-08-12 15:13:39 +08:00
parent 7a14a0f114
commit 1f6fc8525e
22 changed files with 355 additions and 979 deletions

View File

@ -4,13 +4,21 @@
Congress Project Specifications
===============================
Contents:
Liberty approved specs:
.. toctree::
:glob:
:maxdepth: 1
specs/*
specs/liberty/*
Kilo approved specs:
.. toctree::
:glob:
:maxdepth: 1
specs/kilo/*
Juno approved specs:
@ -20,6 +28,14 @@ Juno approved specs:
specs/juno/*
Spec Template:
.. toctree::
:glob:
:maxdepth: 1
specs/*
==================
Indices and tables
==================

doc/source/specs Symbolic link
View File

@ -0,0 +1 @@
../../specs
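With this symlink in place, the ``specs/liberty/*`` and ``specs/kilo/*`` glob
patterns in the index above resolve to the rst files kept in the top-level
``specs`` tree, so Sphinx picks up new specs automatically. A rough sketch of
one such index section (the heading text here is illustrative)::

    Liberty approved specs:

    .. toctree::
       :glob:
       :maxdepth: 1

       specs/liberty/*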

View File

@ -1,354 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/congress/+spec/example
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 79 columns.
* The filename in the git repository should match the launchpad URL, for
example a URL of: https://blueprints.launchpad.net/congress/+spec/awesome-thing
should be named awesome-thing.rst
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox, or see:
http://rst.ninjs.org
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which can not be viewed in gerrit. It
will also allow inline feedback on the diagram itself.
Problem description
===================
A detailed description of the problem:
* For a new feature this might be use cases. Ensure you are clear about the
actors in each use case: End User vs Deployer
* For a major reworking of something existing it would describe the
problems in that feature that are being addressed.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Policy
------
Using the Congress datalog syntax, write out an example policy using
https://wiki.openstack.org/wiki/Congress#Policy_Language
Example:
error(vm) :-
nova:virtual_machine(vm),
ids:ip_packet(src_ip, dst_ip),
neutron:port(vm, src_ip), //finds out the port that has the VMs IP
ids:ip_blacklist(dst_ip).
Policy Actions
--------------
Describe the policy activities in terms of monitoring, reactive, proactive,
and other ways to explain how the policy will implement its desired state.
Data Sources
------------
Describe which projects and/or services the data is coming from
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change.
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API impact
---------------
Each API method which is either added or changed should have the following
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* Parameters which can be passed via the url
* JSON schema definition for the body data if allowed
* JSON schema definition for the response data if any
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Example JSON schema definitions can be found in the Nova tree
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted (e.g.,
additionalProperties should be False).
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Notifications impact
--------------------
Please specify any changes to notifications. Be that an extra notification,
changes to an existing notification, or removing a notification.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-congressclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* Scheduler filters get called once per host for every instance being created,
so any latency they introduce is linear with the size of the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Other Deployer Impacts
----------------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer Impact
----------------
Discuss things that will affect other developers working on OpenStack,
such as:
* If the blueprint proposes a change to the driver API, discussion of how
other hypervisors would implement the feature is required.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in congress, or in other
projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by congress (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
What is the impact on the docs team of this change? Some changes might require
donating resources to the docs team to have the documentation updated. Don't
repeat details discussed above, but please reference them here.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to

View File

@ -1,354 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/congress/+spec/example
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 79 columns.
* The filename in the git repository should match the launchpad URL, for
example a URL of: https://blueprints.launchpad.net/congress/+spec/awesome-thing
should be named awesome-thing.rst
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox, or see:
http://rst.ninjs.org
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which can not be viewed in gerrit. It
will also allow inline feedback on the diagram itself.
Problem description
===================
A detailed description of the problem:
* For a new feature this might be use cases. Ensure you are clear about the
actors in each use case: End User vs Deployer
* For a major reworking of something existing it would describe the
problems in that feature that are being addressed.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Policy
------
Using the Congress datalog syntax, write out an example policy using
https://wiki.openstack.org/wiki/Congress#Policy_Language
Example:
error(vm) :-
nova:virtual_machine(vm),
ids:ip_packet(src_ip, dst_ip),
neutron:port(vm, src_ip), //finds out the port that has the VMs IP
ids:ip_blacklist(dst_ip).
Policy Actions
--------------
Describe the policy activities in terms of monitoring, reactive, proactive,
and other ways to explain how the policy will implement its desired state.
Data Sources
------------
Describe which projects and/or services the data is coming from
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change.
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API impact
---------------
Each API method which is either added or changed should have the following
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* Parameters which can be passed via the url
* JSON schema definition for the body data if allowed
* JSON schema definition for the response data if any
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Example JSON schema definitions can be found in the Nova tree
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/schemas/v3
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted (e.g.,
additionalProperties should be False).
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Notifications impact
--------------------
Please specify any changes to notifications. Be that an extra notification,
changes to an existing notification, or removing a notification.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-congressclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* Scheduler filters get called once per host for every instance being created,
so any latency they introduce is linear with the size of the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Other Deployer Impacts
----------------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer Impact
----------------
Discuss things that will affect other developers working on OpenStack,
such as:
* If the blueprint proposes a change to the driver API, discussion of how
other hypervisors would implement the feature is required.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in congress, or in other
projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by congress (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
What is the impact on the docs team of this change? Some changes might require
donating resources to the docs team to have the documentation updated. Don't
repeat details discussed above, but please reference them here.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to

View File

@ -31,7 +31,8 @@ Proposed change
This change will create a new (mix-in) class called ExecutionDriver that has
one main function: execute(name, positional-args, named-args). That interface
takes the name of the action, a list of its positional arguments, and a
dictionary of its named arguments and executes that action on the cloud service.
dictionary of its named arguments and executes that action on the cloud
service.
This change will also overload the functionality of
@ -111,31 +112,33 @@ Question: Should we rename the "data-sources" in the API to "cloud-services"?
If so, then we'd have...
Execute an action on a specified service.
POST v1/cloud-services/nova?action=execute -d {'name': 'disconnectNetwork',
'args': ['vm123', 'net456'],
'options': {}}
Execute an action on a specified service. ::
POST v1/cloud-services/nova?action=execute -d {'name': 'disconnectNetwork',
'args': ['vm123', 'net456'],
'options': {}}
Execute an action as if the given policy dictated it should be executed.
POST v1/policies/alice?action=execute -d {'name': 'nova:disconnectNetwork',
'args': ['vm123', 'net456'],
'options': {}}
Execute an action as if the given policy dictated it should be executed. ::
POST v1/policies/alice?action=execute -d {'name': 'nova:disconnectNetwork',
'args': ['vm123', 'net456'],
'options': {}}
These API calls will have an empty response (HTTP code 204 perhaps) or an
appropriate error response if there was an error raised *before* the API call
was actually made. These API calls will return before the call we are executing
returns.
was actually made. These API calls will return before the call we are
executing returns.
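Most of the fixes in this file follow the same convention: the sentence that
introduces a sample ends with ``::``, and the sample itself is indented after
a blank line, which Sphinx renders as a literal block instead of running it
into the surrounding paragraph. A minimal sketch of the pattern (the request
body is abbreviated)::

    Execute an action on a specified service. ::

        POST v1/cloud-services/nova?action=execute -d {'name': 'disconnectNetwork', ...}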
Security impact
---------------
This change gives anyone with the ability to write policy the ability to execute
API calls using the same rights Congress has been granted. Since
This change gives anyone with the ability to write policy the ability to
execute API calls using the same rights Congress has been granted. Since
we typically run Congress with administrative rights, we need to secure
Congress authentication properly. But that is already accomplished because
we use the standard Keystone authentication system.
@ -153,7 +156,7 @@ python-congressclient would need a new endpoint if we expose the execute()
functionality via the API.
Performance Impact
Performance impact
------------------
Because the DatasourceDriver and ExecutionDriver will most often be implemented
@ -164,12 +167,12 @@ run in a single thread, i.e. either one runs or the other runs. This is
actually beneficial because we would not want the driver to be pulling new
data at the same time it executes a change in that data.
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
None
Developer Impact
Developer impact
----------------
None
@ -186,19 +189,19 @@ Primary assignee:
Other contributors:
<launchpad-id or None>
Work Items
Work items
----------
- Rename folder congress/datasources to congress/cloudservices and change
etc/congress/datasources.conf.sample appropriately. There may be other places
that reference 'datasources' explictly.
etc/congress/datasources.conf.sample appropriately. There may be other
places that reference 'datasources' explicitly.
- Add congress/cloudservices/execution_driver.py to include the mixin class
ExecutionDriver
ExecutionDriver
- Implement policy/dseruntime.py:DseRuntime.execute() to send a unicast
message to the appropriate service or raise an error if that service does
not exist on the bus.
message to the appropriate service or raise an error if that service does
not exist on the bus.
- Implement the ExecutionDriver for an existing service, e.g. Nova.
@ -213,13 +216,14 @@ Testing
=======
- Unit tests that ensure a call to DseRuntime.execute() invokes the appropriate
ExecutionDriver.execute().
ExecutionDriver.execute().
- Tempest tests that invoke execute() and ensure the proper change actually happens.
- Tempest tests that invoke execute() and ensure the proper change actually
happens.
Documentation Impact
Documentation impact
====================
Need to add description of actions and action format to docs, along with

View File

@ -21,9 +21,9 @@ general and declarative networking in particular.
Problem description
===================
Congress needs an expressive policy language that is both easy-to-use as well as
efficient for information processing and declarative networking. Datalog is a
declarative language that provides a higher-level abstraction for querying
Congress needs an expressive policy language that is both easy-to-use as well
as efficient for information processing and declarative networking. Datalog is
a declarative language that provides a higher-level abstraction for querying
graphs and relational structures. It defines an efficient, recursive, query
execution and incremental data update mechanisms based on the relational model.
This design specification specifies a formal EBNF grammar for Datalog-ng, an
@ -37,32 +37,32 @@ Proposed change
This specification proposes a generalization of Datalog, called Datalog-ng,
which is suitable for declarative networking. The grammar will be defined in
Extended Backus-Naur Form (EBNF) [2]. An EBNF specification is platform-neutral,
and formally defines the syntax of a language. Note that the full expressive
power of EBNF will NOT be used; this enables access to a wider variety of tools
that may not be able to understand all of the EBNF standard. Hence, a wider
variety of tools can be used to implement a parser, an interpreter, and/or a
compiler. See [3] for more information on the benefits of formally defining a
grammar.
Extended Backus-Naur Form (EBNF) [2]_. An EBNF specification is
platform-neutral, and formally defines the syntax of a language. Note that the
full expressive power of EBNF will NOT be used; this enables access to a wider
variety of tools that may not be able to understand all of the EBNF standard.
Hence, a wider variety of tools can be used to implement a parser, an
interpreter, and/or a compiler. See [3]_ for more information on the benefits
of formally defining a grammar.
Change #1: Table Referencing
[1] says: “Conceptually, Datalog describes policy in terms of a collection of
tables.” Tables are a simple way of conveying information, and lend themselves
to querying, editing, and reporting. Policy rules can be thought of as how input
from one or more tables can be transformed into output in one or more tables.
Tables are full-fledged objects, so this enables us to not only reuse tables
(i.e., the actual data), but also reuse the policies that created the data.
However, tables that have a large number of columns are hard to use, since there
is no simple way to reference their name, or even know the number of columns
they have. In addition, there is no way for a policy to know about changes in
the table, so it would likely break as well.
[1]_ says: “Conceptually, Datalog describes policy in terms of a collection
of tables.” Tables are a simple way of conveying information, and lend
themselves to querying, editing, and reporting. Policy rules can be thought of
as how input from one or more tables can be transformed into output in one or
more tables. Tables are full-fledged objects, so this enables us to not only
reuse tables (i.e., the actual data), but also reuse the policies that created
the data. However, tables that have a large number of columns are hard to use,
since there is no simple way to reference their name, or even know the number
of columns they have. In addition, there is no way for a policy to know about
changes in the table, so it would likely break as well.
Change #2: Fact Semantics
Datalog (and Datalog-ng) work by adding facts into, and removing facts from, the
database. However, there is no standard way for a policy author to introduce
semantics associated with these operations. For example, if a fact is removed by
one policy rule, it could adversely affect other policy rules that were going to
use that rule.
Datalog (and Datalog-ng) work by adding facts into, and removing facts from,
the database. However, there is no standard way for a policy author to
introduce semantics associated with these operations. For example, if a fact is
removed by one policy rule, it could adversely affect other policy rules that
were going to use that rule.
Change #3: Query and Rule Semantics
Formal semantics need to be added to standardize how queries and rules are
@ -71,46 +71,49 @@ very easy to add.
Change #4: Constraints
Currently, there is no way to express constraints in Datalog. For example, if
table A has four columns, and if one of the columns must not be null, then there
is no way to check that an insert of three values into a row will fail or not.
Similarly, if there are semantic constraints (e.g., row #3 must use one of a set
of enumerated values), or data type constraints, incorrect entries will either
be very hard or impossible to check, and will fail when applied. This may be
delayed, due to its difficulty and potential to introduce problems.
table A has four columns, and if one of the columns must not be null, then
there is no way to check that an insert of three values into a row will fail or
not. Similarly, if there are semantic constraints (e.g., row #3 must use one of
a set of enumerated values), or data type constraints, incorrect entries will
either be very hard or impossible to check, and will fail when applied. This
may be delayed, due to its difficulty and potential to introduce problems.
Change #5: Safety
Currently, there is no way to guarantee rule safety. For example, the following
queries are NOT safe:
queries are NOT safe: ::
foo(X, Y, Z) :- rel1(X, y) & X < Z
foo(X, Y, Z) :- rel1(X, Y) & NOT rel2(X, Y, Z)
foo(X, Y) :- rel1(X)
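Each of the rules above is unsafe because some variable that appears in the
head, in a negated literal, or in a built-in comparison never appears in a
positive body atom. For illustration, a safe variant of the first rule could
bind every head variable through the rel1 and rel2 relations used above::

    foo(X, Y, Z) :- rel1(X, Y) & rel2(X, Y, Z) & X < Z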
Change #6: Grammatical Improvements
Datalog is powerful, but somewhat hard to use. A set of “syntactical sugar” will
be defined to make Datalog easier to use, especially for the native Python
Datalog is powerful, but somewhat hard to use. A set of “syntactical sugar”
will be defined to make Datalog easier to use, especially for the native Python
developer.
Alternatives
------------
N/
N/A
Policy
------
Traditional policy rules take the form of statements that have either two or
three clauses [5]; they are called ECA (event-condition-action) and CA
(condition-action):
three clauses [5]_ ; they are called ECA (event-condition-action) and CA
(condition-action): ::
ON <event-clause> IF <condition-clause> THEN <action-clause>
or
or
IF <condition-clause> THEN <action-clause>
In both ECA and CA, each clause can be a Boolean combination of atoms. However,
there are other types of policy rules:
• Goal policies
• Utility functions
• Promises
[6] covers the first two, and [7] is the latest of Marks publications on
[6]_ covers the first two, and [7]_ is the latest of Mark's publications on
promise theory. All three of the above are different in form and function than
ECA and CA policy rules. Datalog-ng can model the intent of most of these forms
of policy rules, which is what is needed in Congress the ability to
@ -122,12 +125,14 @@ Policy Actions
Uses of Congress Policy Rules
Possible candidates include:
* Monitoring
* Reporting (including filtering for selected values, so the user is not
inundated with data)
inundated with data)
* Configuring a device reactively (e.g., a threshold was violated)
* Configuring a device proactively (e.g., trending analysis predicts that a
threshold will be violated in the future)
threshold will be violated in the future)
The first three are straightforward; the latter may be pushed beyond the Kilo
release.
@ -135,11 +140,11 @@ Policy Rule Implementation Alternatives
The advantages of Datalog are that it is a declarative subset of first-order
logic. Declarative languages express the logic in a task without specifying the
flow of control to perform the task. First-order logic is a formal system of
logic in which each statement consists of a subject and a predicate. A predicate
can only refer to a single subject. Sentences are combined and manipulated using
the same rules as those used in Boolean algebra. Two quantifiers exist: “for
all” and “for some” (higher-order logics have additional quantifiers, such as
“for every property of an object”).
logic in which each statement consists of a subject and a predicate. A
predicate can only refer to a single subject. Sentences are combined and
manipulated using the same rules as those used in Boolean algebra. Two
quantifiers exist: “for all” and “for some” (higher-order logics have
additional quantifiers, such as “for every property of an object”).
Datalog is thus more powerful than simple propositional logic, but not as
powerful as first-order logic. However, it provides a combination of power and
@ -180,9 +185,10 @@ N/A
Security impact
---------------
Policy can contain the proverbial “keys to the kingdom”. So, if someone hacks
their way into the system and can start issuing policies, game over. Therefore,
some type of access control should be used with policy-based systems.
Policy can contain the proverbial “keys to the kingdom”. So, if someone
hacks their way into the system and can start issuing policies, game over.
Therefore, some type of access control should be used with policy-based
systems.
Notifications impact
@ -197,20 +203,20 @@ Other end user impact
Datalog-ng is intended for Developers and Administrators, not End Users.
Performance Impact
Performance impact
------------------
N/A
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
For this to work in a secure manner, I really think we need to operate in a
secure environment, such as role-based access control (RBAC).
Developer Impact
Developer impact
----------------
Implementation will provide policy writers with a richer set of options
@ -228,7 +234,7 @@ Primary assignee:
Other contributors:
thinrichs, sarob
Work Items
Work items
----------
The following is a short description of how the above changes will be addressed
@ -254,19 +260,20 @@ examples comes from relational theory. For the following definitions, assume
that a table represents an entity, such as a router or a network. Note that
these types of constraints are critical for being able to safely reference
different tables from different namespaces.
* Entity integrity: a row has a unique identifier, called a primary key. The
primary key is unique and not null. This enables each row in the entity to
be identified.
primary key is unique and not null. This enables each row in the entity to
be identified.
* Referential integrity: sometimes, tables reference other tables. A foreign
key is a set of attributes in one table that uniquely identifies a row of
another table a primary key from one table that appears in another table.
Referential integrity defines the dependency of one table on another table.
key is a set of attributes in one table that uniquely identifies a row of
another table a primary key from one table that appears in another table.
Referential integrity defines the dependency of one table on another table.
* Value constraints: data has constraint(s) on the value(s) that it can take
on. For example, a physical chassis can only be mounted in a single rack.
on. For example, a physical chassis can only be mounted in a single rack.
* Domain constraints: entity attributes in a given domain are restricted in
one or more ways. For example, the sum of two values from two columns in the
same row is less than or equal to another value. This generally include data
type, data value, and the defaulting of values.
one or more ways. For example, the sum of two values from two columns in the
same row is less than or equal to another value. This generally includes data
type, data value, and the defaulting of values.
The second set of examples comes from applications of security. In this view, a
constraint is an assertion that needs to be satisfied. The typical example is
@ -290,16 +297,17 @@ developers from themselves. :-)
Change #6: Grammatical Improvements
A set of grammatical improvements will be defined to simplify the use of
Datalog-ng, and especially to make its syntax friendlier to Python developers.
Examples include more recognizable comments (e.g., familiar “//” or “/*..*/”
instead of the native Datalog %), the ability to use single and/or double
quotes, and English equivalents to some commands (e.g., ! or NOT or not).
Examples include more recognizable comments (e.g., familiar “//” or
“/*..*/” instead of the native Datalog %), the ability to use single
and/or double quotes, and English equivalents to some commands
(e.g., ! or NOT or not).
Dependencies
============
This spec may be broken up into multiple blueprints for implementation. The list
of spec and/or blueprints will be listed at the top of this spec.
This spec may be broken up into multiple blueprints for implementation. The
list of spec and/or blueprints will be listed at the top of this spec.
Testing
@ -308,7 +316,7 @@ Testing
N/A
Documentation Impact
Documentation impact
====================
N/A
@ -318,15 +326,15 @@ References
==========
The following are references for this specification.
[1] Congress Design, http://goo.gl/YFd2Fr
[2] ISO/IEC, “Information technology Syntactic metalanguage Extended BNF”,
14977, 12/15/1996
[3] J. Strassner, “A Gentle Introduction to EBNF”, TBD
[4] Congress Policy Workshop, TBD
[5] J. Strassner, “Policy Based Network Management”, Morgan Kaufman Publishing,
978-1558608597, 9/2003
[6] J. Strassner, J. Kephart, “Autonomic Systems and Networks Theory and
Practice”, NOMS 2006 Tutorial
[7] M. Burgess, J. Bergstra, “Promise Theory An Introduction”, xtAxis Press,
2014
.. [1] Congress Design, http://goo.gl/YFd2Fr
.. [2] ISO/IEC, "Information technology - Syntactic metalanguage - Extended
BNF", 14977, 12/15/1996
.. [3] J. Strassner, "A Gentle Introduction to EBNF", TBD
.. [4] Congress Policy Workshop, TBD
.. [5] J. Strassner, "Policy Based Network Management", Morgan Kaufman
Publishing, 978-1558608597, 9/2003
.. [6] J. Strassner, J. Kephart, "Autonomic Systems and Networks Theory and
Practice", NOMS 2006 Tutorial
.. [7] M. Burgess, J. Bergstra, "Promise Theory - An Introduction", xtAxis
Press, 2014
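The rewritten references use Sphinx footnote syntax: each entry is declared
with ``.. [N]`` and cited in the body as ``[N]_``, so the numbers above
resolve to hyperlinks when the specs are built. A minimal sketch of the
pairing::

    The grammar will be defined in Extended Backus-Naur Form (EBNF) [2]_.

    .. [2] ISO/IEC, "Information technology - Syntactic metalanguage -
       Extended BNF", 14977, 12/15/1996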

View File

@ -16,8 +16,8 @@ broadcast mechanism is used for this purpose. While effective, these
broadcasts are inefficient, and complicate debugging.
Instead of using the existing broadcast-based discovery mechanism, nodes on
the DSE should publish events on a well-known "collections registry,” that every
node subscribes to.
the DSE should publish events on a well-known "collections registry,”
that every node subscribes to.
Problem description
@ -38,12 +38,12 @@ Proposed change
registry.
* Each deepsix instance will maintain a list of its data_indexes that it has
a desired subscription for. The instance will do a lookup in its local
copy of the collection registry to find associated endpoints to send
copy of the collection registry to find associated endpoints to send
subscription requests to. As changes occur to the collection registry,
each instance will re-evaluate its own subscriptions against these
updates.
* There is a collection registry per instance of the DSE. In a hierarchical
DSE, there could be multiple registries.
DSE, there could be multiple registries.
* The periodic broadcast of data_index information can be removed after
these changes.
@ -104,7 +104,7 @@ Other end user impact
N/A
Performance Impact
Performance impact
------------------
At runtime, this change will reduce the time it takes to distribute
@ -113,13 +113,13 @@ information about new nodes in the system. It will also reduce the overall
the reduction in chatter will greatly simplify debugging the message bus.
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
N/A
Developer Impact
Developer impact
----------------
N/A
@ -138,7 +138,7 @@ Other contributors:
None
Work Items
Work items
----------
TBD
@ -156,7 +156,7 @@ Testing
The existing testing framework is sufficient to test the change.
Documentation Impact
Documentation impact
====================
N/A

View File

@ -10,9 +10,9 @@ Explicit reactive enforcement
https://blueprints.launchpad.net/congress/+spec/explicit-reactive-enforcement
Enable policy writers to describe conditions under which Congress should execute
specific actions, e.g. when a network is connected to a VM that contains a
virus, disconnect it from the internet.
Enable policy writers to describe conditions under which Congress should
execute specific actions, e.g. when a network is connected to a VM that
contains a virus, disconnect it from the internet.
Problem description
===================
@ -30,12 +30,11 @@ the one below to encode instructions for how Congress ought to
react to changes in the cloud. It enables the policy writer to dictate
that Congress must execute certain actions under certain conditions.
// disconnect a VM from the network whenever it is infected by a virus
execute[disconnectNetwork(vm, network=net)] :-
antivirus:infected(vm),
nova:network(vm, net)
// disconnect a VM from the network whenever it is infected by a virus::
execute[disconnectNetwork(vm, network=net)] :-
antivirus:infected(vm),
nova:network(vm, net)
Alternatives
------------
@ -99,18 +98,18 @@ Other end user impact
None
Performance Impact
Performance impact
------------------
None for this spec; other, dependent specs have performance impacts.
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
None.
Developer Impact
Developer impact
----------------
None.
@ -128,14 +127,14 @@ Primary assignee:
Other contributors:
<launchpad-id or None>
Work Items
Work items
----------
- Modify trigger framework to operate on modal operators directly, e.g. so
that we can register a trigger whenever execute[x] changes.
that we can register a trigger whenever execute[x] changes.
- Set up a trigger on execute[x]. When that trigger fires, it invokes
the action on the appropriate ExecutionDriver.
the action on the appropriate ExecutionDriver.
Dependencies
@ -154,7 +153,7 @@ This change will require unit tests for the enhanced trigger framework.
It should include tempest tests to test end-to-end functionality.
Documentation Impact
Documentation impact
====================
This will require a new documentation section for reactive enforcement.

View File

@ -94,17 +94,17 @@ Other end user impact
None
Performance Impact
Performance impact
------------------
Preliminary testing shows a 10x reduction in CPU use and a 3x reduction in
memory use in initialize_table() for 7M facts where the payload is 700MB.
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
None
Developer Impact
Developer impact
----------------
A Theory object will internally contain Rules and Facts. The caller can insert
@ -120,7 +120,7 @@ Assignee(s)
Primary assignee:
ayip
Work Items
Work items
----------
* Implement FactSet
@ -140,7 +140,7 @@ Testing
Add a unit test for FactSet
Documentation Impact
Documentation impact
====================
None

View File

@ -1,5 +1,4 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
@ -12,14 +11,16 @@ Congress OpenStack Horizon : Data Source Status Table
https://blueprints.launchpad.net/congress/+spec/horizon-datasource-status-table
This blue print describes integration of datasources status with Horizon dashboard.
Admin will be able to see detailed status of each datasource.
This blueprint describes integration of datasource status with Horizon
dashboard. Admin will be able to see detailed status of each datasource.
Problem description
===================
A detailed description of the problem:
* In the existing horizon implementation congress data source status is not shown
* In the existing horizon implementation congress data source status is not
shown
* This poses problem to admins who want to see the status of data sources
@ -38,7 +39,7 @@ none
Screens
------
-------
none
@ -88,8 +89,8 @@ Performance impact
none
Other deployer impacts
----------------------
Other deployer impact
---------------------
none

View File

@ -1,5 +1,6 @@
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
@ -21,7 +22,7 @@ Problem description
====================
Datalog is not intuitive to use, even difficult for users to express their
policices who are not familiar with it. And it may cause some misinterpretation
policies who are not familiar with it. And it may cause some misinterpretation
when translating real intent to Datalog because of the complex logic.
Proposed change
@ -31,37 +32,46 @@ Congress makes the whole cloud compliant by defining violation state and action
for violation.
Policies in Congress can be expressed by BNF as below.
Congress Policy ::= violation condition, “do” action for violation
::
Congress Policy ::= violation condition, “do” action for violation
So, policy abstraction is to abstract violation state and corresponding action
to make the policy more intuitive and easy to use.
By analyzing typical scenarios, violation mainly can be divided into two parts.
One is the constraint of objects attributes, and another is the constraint of
relationship between several objects attributes.
All the objects and constraints are not just a simple set of data source tables,
but they can be divided into some categories according to their functions and
relations. So users just need to choose objects they care about without worrying
about which tables they are in.
One is the constraint of objects attributes, and another is the constraint
of relationship between several objects attributes.
All the objects and constraints are not just a simple set of data source
tables, but they can be divided into some categories according to their
functions and relations. So users just need to choose objects they care about
without worrying about which tables they are in.
The violation condition can be expressed by BNF as below.
violation condition ::=object attribute constraint (value | object attribute)
object attribute::=object “.” attribute
::
violation condition ::=object attribute constraint (value | object attribute)
object attribute::=object “.” attribute
For any violation state, congress will take some actions, such as monitoring,
proactive and reactive. Though congress so far only monitors violations,
changing cloud state to make the cloud compliant is also an important function
for congress. So, policy abstraction will provide some optional reactive actions
for different objects to resolve violations.
for congress. So, policy abstraction will provide some optional reactive
actions for different objects to resolve violations.
The action for violation state can be expressed by BNF as below.
action ::= (“monitor”| “proactive”| “reactive”) data
::
action ::= (“monitor”| “proactive”| “reactive”) data
So policies in Congress can be abstracted into "name", "objects",
"violation condition", "action" and "data".
Among these, element “name” defines a marker of a policy, which is used to be
a unique identification for a policy.
Among these, element “name” defines a marker of a policy, which is used to
be a unique identification for a policy.
Element “objects” defines all objects which are concerned by this policy.
They are not just a simple display of data source tables, but an organized set
@ -105,23 +115,24 @@ There is one example to express typical policy by abstraction form in Horizon.
Example: every network connected to a VM must either be public or
owned by someone in the same group as the VM.
For this example, users care about "servers" and "networks", so users will choose
these two objects from a drop-down list.
For this example, users care about "servers" and "networks", so users will
choose these two objects from a drop-down list.
After users decide the objects, corresponding optional violation state will
be decided, which will include these two objects attributes and some predefined
relationship, so users can choose "not same_group" and choose who are not in the same
group. All the choices will be show as drop-down lists, too.
And users need to choose the action and data for this violation.
be decided, which will include these two objects attributes and some
predefined relationship, so users can choose "not same_group" and choose who
are not in the same group. All the choices will be shown as drop-down lists,
too. And users need to choose the action and data for this violation.
For example, users choose "monitoring", attributes of servers and networks
will appear in "data".
In this policy, users can create a policy as below.
+------ ---+----------+-----------------------------------------------+------------+--------------+
+----------+----------+-----------------------------------------------+------------+--------------+
| name | objects | violation state | action | data |
+------ ---+----------+-----------------------------------------------+------------+--------------+
+----------+----------+-----------------------------------------------+------------+--------------+
| policy_1 | servers |not equal(networks.share, public) | monitoring | servers.name |
| | networks |not same_group(servers.tenant, networks.tenant)| | |
+------ ---+----------+-----------------------------------------------+------------+--------------+
+----------+----------+-----------------------------------------------+------------+--------------+
Policy Actions
--------------
@ -150,8 +161,8 @@ All parameters inputted by users need satisfy predefined standard, for example,
if values inputted in "violation condition" in reasonable range
(e.g. 0-100% for CPU utilization).
Notification impact
-------------------
Notifications impact
--------------------
N/A
@ -167,8 +178,18 @@ Performance impact
N/A
Other deployer impact
---------------------
N/A
Developer impact
----------------
N/A
Implementation
===============
==============
Assignee(s)
-----------

View File

@ -4,17 +4,17 @@
http://creativecommons.org/licenses/by/3.0/legalcode
===============================
==================================
Create an ironic Datasource Driver
===============================
==================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/congress/+spec/ironic-datasource-driver
This ironic driver will allow Congress to interact with the Openstack ironic API
for orchestration. The first version will provide data from ironic's read API
calls until Congress does have infrastructure to handle writing to drivers.
This ironic driver will allow Congress to interact with the Openstack ironic
API for orchestration. The first version will provide data from ironic's read
API calls until Congress does have infrastructure to handle writing to drivers.
Subsequent versions may be able to send requests and write to the ironic API.
@ -141,8 +141,8 @@ Testing
This work must include a unit test and a tempest test. The driver translator
infrastructure makes most of the translation code robust, but the driver is
still dependent on the ironic API, so the tempest test is particularly important
as an integration test.
still dependent on the ironic API, so the tempest test is particularly
important as an integration test.
Documentation impact

View File

@ -134,17 +134,17 @@ Other end user impact
None
Performance Impact
Performance impact
------------------
None
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
None
Developer Impact
Developer impact
----------------
None
@ -162,7 +162,7 @@ Primary assignee:
Other contributors:
<launchpad-id or None>
Work Items
Work items
----------
- Modify grammar as described above
@ -183,7 +183,7 @@ Unit tests are sufficient. Ensure that the new syntactic constructs
can be used anywhere and that the runtime produces the proper results
when the new syntactic constructs are in place.
Documentation Impact
Documentation impact
====================
End-user documentation is not necessary for this change. But documentation

View File

@ -82,17 +82,17 @@ Other end user impact
N/A
Performance Impact
Performance impact
------------------
N/A
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
N/A
Developer Impact
Developer impact
----------------
N/A
@ -106,7 +106,7 @@ Assignee(s)
Primary assignee:
Murano Team
Work Items
Work items
----------
Data source driver
@ -122,7 +122,7 @@ Testing
Tempest and unit tests will be added.
Documentation Impact
Documentation impact
====================
N/A

@ -33,14 +33,16 @@ data tables.
* *object_id* - uuid of the object as used in Murano
* *owner_id* - uuid of the owner object as used in Murano
* *type* - string with full type identification as used in Murano (e.g., io.murano.Environment,...)
* *type* - string with full type identification as used in Murano
(e.g., io.murano.Environment,...)
* *murano:parent-types(id, parent_type)*
This table holds parent types of *obj_id* object.
* *id* - uuid of the object as used in Murano
* *parent_type* - string with full type identification of the parent object
* *parent_type* - string with full type identification of the parent
object
Note that Murano supports multiple inheritance, so there can be several
parent types for one object
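Purely as an illustration (not part of this spec), a driver could flatten
such objects into (id, parent_type) rows along the lines of the sketch
below; the input structure and names are assumptions, not Murano's API::

    # Illustrative only: flatten hypothetical Murano objects with multiple
    # parent types into (id, parent_type) rows. The dict layout below is an
    # assumption for the example, not data returned by the Murano API.

    def parent_type_rows(objects):
        """Yield one (id, parent_type) row per parent type of an object."""
        for obj in objects:
            for parent in obj.get("parent_types", []):
                yield (obj["id"], parent)

    sample = [{"id": "obj-1",
               "parent_types": ["io.murano.Application", "io.murano.Object"]}]
    print(list(parent_type_rows(sample)))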

@ -27,11 +27,11 @@ But there are several use cases when other entities on the DSE message bus
defined within policy.
- Building proof of concepts where an external service is informed of
policy violations and reacts accordingly, implemented for the sake of
convenience as another entity on the DSE message bus.
policy violations and reacts accordingly, implemented for the sake of
convenience as another entity on the DSE message bus.
- Interoperable policy engines that publish their monitoring results
on the bus for other policy engines to consume.
on the bus for other policy engines to consume.
Proposed change
@ -109,7 +109,7 @@ Other end user impact
N/A
Performance Impact
Performance impact
------------------
Performance will be impacted, but little more so than by the triggers alone.
@ -118,12 +118,12 @@ be published on the message bus. Publishing is fast (especially compared
to computing the contents of tables and then their deltas).
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
N/A
Developer Impact
Developer impact
----------------
N/A
@ -140,18 +140,18 @@ Primary assignee:
Other contributors:
<launchpad-id or None>
Work Items
Work items
----------
- Create subclass of DeepSix that includes delta publication functionality
and have DatasourceDriver and DseRuntime inherit from that subclass
instead of DeepSix
and have DatasourceDriver and DseRuntime inherit from that subclass
instead of DeepSix
- Alter DseRuntime so that every subscribe message sets up the appropriate
trigger.
trigger.
- Alter DseRuntime so that every unsubscribe message removes the appropriate
trigger.
trigger.
Dependencies
@ -167,7 +167,7 @@ changes to those tables, and verify that the appropriate deltas
are sent on the bus are adequate.
Documentation Impact
Documentation impact
====================
None required--we're just making a policy engine implement the same interface

@ -116,17 +116,17 @@ Other end user impact
N/A
Performance Impact
Performance impact
------------------
N/A
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
N/A
Developer Impact
Developer impact
----------------
N/A
@ -140,7 +140,7 @@ Assignee(s)
Primary assignee:
arosen
Work Items
Work items
----------
Push code to do this.
@ -155,7 +155,7 @@ Testing
Unit tests and tempest tests will be present to confirm this works as desired.
Documentation Impact
Documentation impact
====================
Will update docs for how this now works.

View File

@ -81,7 +81,7 @@ REST API impact
* JSON schema definition for the response data if any:
::
::
{
"type": "object",
@ -126,6 +126,7 @@ REST API impact
"required": ["versions"]
}
* Example use case:
::
@ -163,7 +164,7 @@ The related works in python-congressclient will also be added.
After this modification, user could get the API version details, like this:
::
::
openstack congress version list

@ -1,11 +1,12 @@
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================================
=========================================
Abstraction policy definitions in Horizon
===========================================
=========================================
https://blueprints.launchpad.net/congress/+spec/horizon-policy-abstraction
@ -18,10 +19,10 @@ an abstraction for policies, and shows it in an abstraction form in Horizon,
which will facilitate users to express their policies.
Problem description
====================
===================
Datalog is not intuitive to use, even difficult for users to express their
policices who are not familiar with it. And it may cause some misinterpretation
policies who are not familiar with it. And it may cause some misinterpretation
when translating real intent to Datalog because of the complex logic.
Proposed change
@ -32,41 +33,47 @@ for violation.
Policies in Congress can be expressed by BNF as below.
Congress Policy ::= violation-condition, “do” action for violation
::
Congress Policy ::= violation-condition, “do” action for violation
So, policy abstraction is to abstract violation state and corresponding action
to make the policy more intuitive and easy to use.
By analyzing typical scenarios, violation mainly can be divided into two parts.
One is the constraint of objects attributes, and another is the constraint of
relationship between several objects attributes.
One is a constraint on objects' attributes, and another is a constraint
on the relationship between several objects' attributes.
All the objects and constraints are not just a simple set of data source tables,
but they can be divided into some categories according to their functions and
relations. So users just need to choose objects they care about without worrying
about which tables they are in.
All the objects and constraints are not just a simple set of data source
tables, but they can be divided into some categories according to their
functions and relations. So users just need to choose objects they care about
without worrying about which tables they are in.
The violation-condition can be expressed by BNF as below.
violation-condition ::=object attribute constraint (value | object-attribute)
object-attribute::=object “.” attribute
::
violation-condition ::=object attribute constraint (value | object-attribute)
object-attribute::=object “.” attribute
For any violation state, congress will take some actions, such as monitoring,
proactive and reactive. Of course, there may be more than one action defined to
a violation. Though monitoring violation is the fundamental function of congress,
changing cloud state to make the cloud compliant is also an important function.
So, policy abstraction will provide some optional reactive actions for different
objects to resolve violations.
a violation. Though monitoring violation is the fundamental function of
congress, changing cloud state to make the cloud compliant is also an important
function. So, policy abstraction will provide some optional reactive actions
for different objects to resolve violations.
The action for violation state can be expressed by BNF as below.
action ::= (“monitoring”| “proactive”| “reactive action”) data
::
action ::= (“monitoring”| “proactive”| “reactive action”) data
So policies in Congress can be abstracted into "name", "objects",
"violation-condition", "action" and "data".
Among these, element “name” defines a marker of a policy, which is used to be
a unique identification for a policy.
Among these, element “name” defines a marker of a policy, which is used as
a unique identifier for the policy.
Element “objects” defines all objects which are concerned by this policy.
They are not just a simple display of data source tables, but an organized set
@ -119,12 +126,12 @@ There is one example to express typical policy by abstraction form in Horizon.
Example: every network connected to a VM must either be public or
owned by someone in the same group as the VM.
For this example, users care about "servers" and "networks", so users will choose
these two objects from a drop-down list.
After users decide the objects,users could make use of these attributes to define
violation state. In this example, violation-condition is that servers tenant's
group is not same with networks tenant's group. So users could choose these two
attributes and set their relation is "not equal".
For this example, users care about "servers" and "networks", so users will
choose these two objects from a drop-down list.
After users decide on the objects, they can make use of these attributes to
define the violation state. In this example, the violation-condition is that
the servers' tenant group is not the same as the networks' tenant group. So
users can choose these two attributes and set their relation to "not equal".
All the choices will be show as drop-down lists, too.
And users need to choose the action and data to define which actions should be
@ -132,24 +139,26 @@ applied to this violation. For example, users choose "monitoring", attributes
of servers and networks will appear in "data".
In this policy, users can create a policy as below.
+------ ---+----------+------------------------------------------------------+------------+--------------+
+----------+----------+------------------------------------------------------+------------+--------------+
| name | objects | violation-condition | action | data |
+------ ---+----------+------------------------------------------------------+------------+--------------+
+----------+----------+------------------------------------------------------+------------+--------------+
| policy_1 | servers |not equal(networks.share, public) | monitoring | servers.name |
| | networks |not equal(servers.tenant.group, networks.tenant.group)| | |
+------ ---+----------+------------------------------------------------------+------------+--------------+
+----------+----------+------------------------------------------------------+------------+--------------+
If the user has defined a packaging function for same_group, it is added into
violation-condition, so user could choose this function and set which two attributes
are the parameters of this function.
violation-condition, so user could choose this function and set which two
attributes are the parameters of this function.
If the user takes this approach, the above policy will be shown as below.
+------ ---+----------+-----------------------------------------------+------------+--------------+
+----------+----------+-----------------------------------------------+------------+--------------+
| name | objects | violation-condition | action | data |
+------ ---+----------+-----------------------------------------------+------------+--------------+
+----------+----------+-----------------------------------------------+------------+--------------+
| policy_1 | servers |not equal(networks.share, public) | monitoring | servers.name |
| | networks |not same_group(servers.tenant, networks.tenant)| | |
+------ ---+----------+-----------------------------------------------+------------+--------------+
+----------+----------+-----------------------------------------------+------------+--------------+
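As a purely illustrative aside (nothing below is defined by this spec), the
abstracted policy_1 from the table above could be captured as plain data
before being translated into a Congress rule; the layout is an assumption::

    # Illustrative only: the policy_1 row from the table above as plain data.
    # The keys mirror the table columns; nothing here is a Congress API.
    policy_1 = {
        "name": "policy_1",
        "objects": ["servers", "networks"],
        "violation_condition": [
            "not equal(networks.share, public)",
            "not same_group(servers.tenant, networks.tenant)",
        ],
        "action": "monitoring",
        "data": ["servers.name"],
    }

    def summarize(policy):
        """Return a one-line, human-readable summary of the policy."""
        condition = " and ".join(policy["violation_condition"])
        return "%s: when %s, %s %s" % (policy["name"], condition,
                                       policy["action"],
                                       ", ".join(policy["data"]))

    print(summarize(policy_1))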
Policy Actions
--------------
@ -178,8 +187,8 @@ All parameters inputted by users need satisfy predefined standard, for example,
if values inputted in "violation-condition" are in a reasonable range
(e.g. 0-100% for CPU utilization).
Notification impact
-------------------
Notifications impact
--------------------
N/A
@ -195,6 +204,16 @@ Performance impact
N/A
Other deployer impact
---------------------
N/A
Developer impact
----------------
N/A
Implementation
===============

@ -49,6 +49,7 @@ Data-sources would support:
* status: status
Policy-engines would support:
* schema: available tables (e.g. classification:connected_to_internet)
* actions: available actions (e.g. scripts built into a policy-engine for
carrying out some task)
@ -124,8 +125,8 @@ And maybe we can have arbitrary aliases for services as well, so that we can
upgrade any service without changing policy.
One worry with providing the /v1/policies, etc. endpoints is that it may
seem to mask Congress's overall status, policies, actions, and tables. That
is, people might expect those endpoints to aggregate all the potential policies,
seem to mask Congress's overall status, policies, actions, and tables. That is,
people might expect those endpoints to aggregate all the potential policies,
actions, tables, and statuses. But if such functionality ever becomes
necessary, we can attach those endpoints to /v1/services, giving us the
following end points.
@ -135,7 +136,8 @@ following end points.
/services/tables
/services/status
Here is an example of the entry points if we had Nova, GBP, and our policy engine. The name of the service is whatever name DSE expects.
Here is an example of the entry points if we had Nova, GBP, and our policy
engine. The name of the service is whatever name DSE expects.
/policies
/actions
@ -171,7 +173,10 @@ we already need adapters that translate Datalog into the native language of
each policy engine, so here we expose that functionality directly to the user
as well.
Alternatives
------------
N/A
Tradeoffs
@ -182,6 +187,7 @@ Pros for the type-based approach:
* Simple extension of the current API
Cons for the type-based approach:
* Awkward that data-sources and policy-engines implement almost exactly the
same interface and have separate namespaces, but are represented as
distinct classes in the API.
@ -191,10 +197,12 @@ Cons for the type-based approach:
and interfaces in the underlying implementation.
Pros for the the service-based approach:
* All services running on the DSE are accessed identically from the API. This
is a more natural reflection of the reality of the nature of those services.
Cons for the service-based approach:
* Bigger change
* May be more difficult for users to understand initially.
* Eventually the policy-engine class will include functionality that the
@ -294,10 +302,11 @@ Once we decide on the approach, we will figure out the necessary work items.
But here's a rough cut.
- type-based approach: add routes, create congress/api/engine_model.py, modify
congress/api/*_model to enable tables/actions/policies/etc. for engines.
congress/api/\*_model to enable tables/actions/policies/etc. for engines.
- service-based approach: add routes, create congress/api/service_model.py,
(including an API to list different types of objects), modify the
congress/api/*_model to eliminate distinction between datasources and policies
congress/api/\*_model to eliminate distinction between datasources and
policies

@ -32,10 +32,10 @@ to other projects, but that mechanism is underutilized today.
Proposed change
===============
Solving this problem means making it even easier for other projects to send their
data to Congress than oslo.messaging does. This change will include middleware,
perhaps made available via oslo, that publishes all changes to the underlying
database tables on the bus. Moreover, it will
Solving this problem means making it even easier for other projects to send
their data to Congress than oslo.messaging does. This change will include
middleware, perhaps made available via oslo, that publishes all changes to the
underlying database tables on the bus. Moreover, it will
send not the entire table but rather the delta on the table that occurred.
It will be tightly integrated into existing oslo.db so that existing projects
need make no code changes; they need only include and configure the code.
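A hand-wavy sketch of such middleware is given below; it is not Congress or
oslo code, just an outline built on SQLAlchemy ORM events and oslo.messaging
notifications, and every name in it is an assumption::

    # Illustrative only: publish per-row deltas for configured tables.
    # Assumes SQLAlchemy declarative models; names here are placeholders.
    from oslo_config import cfg
    import oslo_messaging as messaging
    from sqlalchemy import event

    TRANSPORT = messaging.get_notification_transport(cfg.CONF)
    NOTIFIER = messaging.Notifier(TRANSPORT, driver='messaging',
                                  publisher_id='db-delta-middleware')

    def _publish(table, action, target):
        # Send only the changed row (the delta), never the whole table.
        row = dict((c.name, getattr(target, c.name))
                   for c in target.__table__.columns)
        NOTIFIER.info({}, 'congress.table.delta',
                      {'table': table, 'action': action, 'row': row})

    def watch(model, tables_to_publish):
        """Attach delta publication to a model if its table is configured."""
        name = model.__tablename__
        if name not in tables_to_publish:
            return

        def _after(action):
            def handler(mapper, connection, target):
                _publish(name, action, target)
            return handler

        event.listen(model, 'after_insert', _after('insert'))
        event.listen(model, 'after_update', _after('update'))
        event.listen(model, 'after_delete', _after('delete'))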
@ -104,7 +104,7 @@ Other end user impact
None
Performance Impact
Performance impact
------------------
There will be minimal performance impact on the projects that utilize this
@ -114,13 +114,13 @@ are published on the bus, the project owner can tune any
possible performance impact.
Other Deployer Impacts
----------------------
Other deployer impact
---------------------
When configuring the middleware, we propose one key configuration option:
which tables should be published on the bus.
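One possible shape for that option, sketched with oslo.config (the option
and group names below are assumptions, not anything this spec defines)::

    # Illustrative only: a single knob naming the tables whose row deltas
    # should be published on the bus. Option and group names are made up.
    from oslo_config import cfg

    delta_opts = [
        cfg.ListOpt('published_tables', default=[],
                    help='Database tables whose row deltas are published '
                         'on the message bus.'),
    ]
    cfg.CONF.register_opts(delta_opts, group='delta_publisher')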
Developer Impact
Developer impact
----------------
None
@ -136,7 +136,7 @@ Primary assignee:
Other contributors:
<launchpad-id or None>
Work Items
Work items
----------
- Write basic middleware functionality
@ -161,7 +161,7 @@ Testing
o Verify that changes to tables configured NOT to be published do not
get published
Documentation Impact
Documentation impact
====================
Need documentation in oslo for the basic middleware.

@ -68,6 +68,9 @@ class TestTitles(testtools.TestCase):
for i, line in enumerate(raw.split("\n")):
if "http://" in line or "https://" in line:
continue
# skip long table
elif "-+-" in line or " | " in line:
continue
self.assertTrue(
len(line) < 80,
msg="%s:%d: Line limited to a maximum of 79 characters." %