Add DevRef for all major features

This is a partial fix, since it doesn't fully address the dev-side
documentation of all GBP-related resources. More work needs to be done
on top of this to have a full DevRef.

Change-Id: I135a3d23a5a1df136c04a7114f94274bd4921cb6
Partial-Bug: #1571385
Igor Duarte Cardoso 2016-04-17 19:03:56 +01:00 committed by Sumit Naiksatam
parent 07855c36d1
commit ebfd92fa2d
15 changed files with 1717 additions and 10 deletions


@@ -0,0 +1,501 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Group-Based Policy Abstraction
==============================
The OpenStack networking model of networks, ports, subnets, routers,
and security groups provides the necessary building blocks to build a logical
network topology for connectivity. However, it does not provide the right level
of abstraction for an application administrator who understands the
application's details (like application port numbers), but not the
infrastructure details like networks and routers. Not only that, the current
abstraction puts the burden of maintaining the consistency of the network
topology on the user. The lack of application developer/administrator focused
abstractions supported by a declarative model makes it hard for those users
to consume the existing connectivity layer.
The GBP framework complements the OpenStack networking model with the
notion of policies that can be applied between groups of network endpoints.
As users look beyond basic connectivity, richer network services with diverse
implementations and network properties are naturally expressed as policies.
Examples include service chaining, QoS, path properties, access control, etc.
The model allows application administrators to express their networking
requirements using group and policy abstractions, with the specifics of policy
enforcement and implementation left to the underlying policy driver. The main
advantage of the abstractions described here is that they allow for an
application-centric interface to OpenStack networking.
These abstractions achieve the following:
* Show a clear separation of concerns between the application and
infrastructure administrators.
- The application administrator can then deal with a higher-level abstraction
that does not concern itself with networking specifics like
networks/routers/etc.
- The infrastructure administrator deals with infrastructure-specific
policy abstractions and does not have to understand application-specific
concerns like which ports have been opened or which of them expect to be
limited to secure or insecure traffic. The infrastructure admin also has
the ability to direct which technologies and approaches are used in
rendering, for example whether VLAN or VXLAN is used.
- Allow the infrastructure admin to introduce connectivity constraints
without the application administrator having to be aware of them (e.g. audit
all traffic between two application tiers).
* Allow for independent provider/consumer model with late binding and n-to-m
relationships between them.
* Allow for automatic orchestration that can respond to changes in policy or
infrastructure without requiring human interaction to translate intent to
specific actions.
* Complement the governance model proposed in the OpenStack Congress project by
making Policy Tags available for enforcement.
Terminology
-----------
**Policy Target (PT):** It is the smallest unit of resource abstraction at
which policy can be applied.
**Policy Target Group (PTG):** A collection of policy targets.
**Policy Rule Set (PRS):** It defines how the application services provided by
a PTG can be accessed. In effect it specifies how a PTG communicates with other
PTGs. A Policy Rule Set consists of Policy Rules.
**Policy Rule (PR):** These are individual rules used to define the communication
criteria between PTGs. Each rule references a Policy Classifier and one or more
Policy Actions.
**Policy Classifier:** Characterizes the traffic that a particular Policy Rule acts on.
Corresponding action is taken on traffic that satisfies this classification
criteria.
**Policy Action:** The action that is taken for a matching Policy Rule defined in a
Policy Rule Set.
**L2 Policy (L2P):** Used to define a L2 boundary and impose additional
constraints (such as no broadcast) within that L2 boundary.
**L3 Policy (L3P):** Used to define a non-overlapping IP address space.
**Network Service Policy (NSP):** Used to define policies that are used for
assigning resources in a PTG to be consumed by network services.
Resource Model
---------------
.. code-block:: python
RESOURCE_ATTRIBUTE_MAP = {
POLICY_TARGETS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None}, 'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None}, 'default': '',
'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True, 'is_visible': True},
'policy_target_group_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid_or_none': None},
'required': True, 'is_visible': True},
'cluster_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'default': '', 'is_visible': True}
},
POLICY_TARGET_GROUPS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None}, 'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True, 'is_visible': True},
'policy_targets': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list,
'default': None, 'is_visible': True},
'l2_policy_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid_or_none': None},
'default': None, 'is_visible': True},
'provided_policy_rule_sets': {'allow_post': True, 'allow_put': True,
'validate': {'type:dict_or_none': None},
'convert_to':
attr.convert_none_to_empty_dict,
'default': None, 'is_visible': True},
'consumed_policy_rule_sets': {'allow_post': True, 'allow_put': True,
'validate': {'type:dict_or_none': None},
'convert_to':
attr.convert_none_to_empty_dict,
'default': None, 'is_visible': True},
'network_service_policy_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid_or_none': None},
'default': None, 'is_visible': True},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
'service_management': {'allow_post': True, 'allow_put': True,
'default': False,
'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
},
L2_POLICIES: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None}, 'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True, 'is_visible': True},
'policy_target_groups': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list,
'default': None, 'is_visible': True},
'l3_policy_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid_or_none': None},
'default': None, 'is_visible': True,
'required': True},
'inject_default_route': {'allow_post': True, 'allow_put': True,
'default': True, 'is_visible': True,
'convert_to': attr.convert_to_boolean,
'required': False},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
},
L3_POLICIES: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None}, 'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True, 'is_visible': True},
'ip_version': {'allow_post': True, 'allow_put': False,
'convert_to': attr.convert_to_int,
'validate': {'type:values': [4, 6]},
'default': 4, 'is_visible': True},
'ip_pool': {'allow_post': True, 'allow_put': False,
'validate': {'type:subnet': None},
'default': '10.0.0.0/8', 'is_visible': True},
'subnet_prefix_length': {'allow_post': True, 'allow_put': True,
'convert_to': attr.convert_to_int,
# for ipv4 legal values are 2 to 30
# for ipv6 legal values are 2 to 127
'default': 24, 'is_visible': True},
'l2_policies': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list,
'default': None, 'is_visible': True},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
'external_segments': {
'allow_post': True, 'allow_put': True, 'default': None,
'validate': {'type:external_dict': None},
'convert_to': attr.convert_none_to_empty_dict, 'is_visible': True},
},
POLICY_CLASSIFIERS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True, 'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
'protocol': {'allow_post': True, 'allow_put': True,
'is_visible': True, 'default': None,
'convert_to': convert_protocol},
'port_range': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_port_range': None},
'convert_to': convert_port_to_string,
'default': None, 'is_visible': True},
'direction': {'allow_post': True, 'allow_put': True,
'validate': {'type:values': gp_supported_directions},
'default': gp_constants.GP_DIRECTION_BI,
'is_visible': True},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
},
POLICY_ACTIONS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
'action_type': {'allow_post': True, 'allow_put': False,
'convert_to': convert_action_to_case_insensitive,
'validate': {'type:values': gp_supported_actions},
'is_visible': True, 'default': 'allow'},
'action_value': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid_or_none': None},
'default': None, 'is_visible': True},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
},
POLICY_RULES: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True, 'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True, 'is_visible': True},
'enabled': {'allow_post': True, 'allow_put': True,
'default': True, 'convert_to': attr.convert_to_boolean,
'is_visible': True},
'policy_classifier_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid': None},
'is_visible': True, 'required': True},
'policy_actions': {'allow_post': True, 'allow_put': True,
'default': None, 'is_visible': True,
'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
},
POLICY_RULE_SETS: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '',
'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True,
'is_visible': True},
'parent_id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None},
'is_visible': True},
'child_policy_rule_sets': {'allow_post': True, 'allow_put': True,
'default': None, 'is_visible': True,
'validate': {'type:uuid_list': None},
'convert_to':
attr.convert_none_to_empty_list},
'policy_rules': {'allow_post': True, 'allow_put': True,
'default': None, 'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list,
'is_visible': True},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
},
NETWORK_SERVICE_POLICIES: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None}, 'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:gbp_resource_name': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True, 'is_visible': True},
'policy_target_groups': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list,
'default': None, 'is_visible': True},
'network_service_params': {'allow_post': True, 'allow_put': False,
'validate':
{'type:network_service_params': None},
'default': None, 'is_visible': True},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
}
}
The following defines the mapping to Neutron resources using attribute extensions:
.. code-block:: python
EXTENDED_ATTRIBUTES_2_0 = {
gp.POLICY_TARGETS: {
'port_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid_or_none': None},
'is_visible': True, 'default': None},
},
gp.POLICY_TARGET_GROUPS: {
'subnets': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list,
'is_visible': True, 'default': None},
},
gp.L2_POLICIES: {
'network_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid_or_none': None},
'is_visible': True, 'default': None},
},
gp.L3_POLICIES: {
'routers': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid_list': None},
'convert_to': attr.convert_none_to_empty_list,
'is_visible': True, 'default': None},
}
}
All resources have the following common attributes:
* id - standard object uuid
* name - optional name
* description - optional annotation
The ip_pool in L3Policies is a supernet used for implicitly assigning subnets
to PTGs.
The subnet_prefix_length in L3Policies is the default subnet length used when
implicitly assigning a subnet to a PTG.
The way ip_pool and subnet_prefix_length work is as follows: when an
L3Policy is created, it gets a default ip_pool and subnet_prefix_length.
When a user then creates a PTG, a subnet is pulled from the ip_pool using
the subnet_prefix_length.
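As a hedged illustration of this implicit allocation (not the actual driver
code; the function name and signature are hypothetical), the behavior can be
sketched with the stdlib ``ipaddress`` module:

```python
import ipaddress


def allocate_subnet(ip_pool, prefix_length, allocated):
    """Return the first /prefix_length subnet of ip_pool that does not
    overlap any already-allocated subnet (illustrative sketch only)."""
    pool = ipaddress.ip_network(ip_pool)
    taken = {ipaddress.ip_network(s) for s in allocated}
    for candidate in pool.subnets(new_prefix=prefix_length):
        if not any(candidate.overlaps(t) for t in taken):
            return str(candidate)
    raise ValueError("ip_pool exhausted")


# First PTG on a fresh L3Policy with the defaults shown above:
allocate_subnet('10.0.0.0/8', 24, [])                 # '10.0.0.0/24'
# A second PTG gets the next free /24:
allocate_subnet('10.0.0.0/8', 24, ['10.0.0.0/24'])    # '10.0.1.0/24'
```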
The protocol in PolicyClassifier supports the names “tcp”, “icmp”, and “udp”,
as well as protocol numbers 0 to 255.
The port_range in PolicyClassifier can be a single port number
or a range (separated by a colon).
The direction in PolicyClassifier can be “in”, “out”, or “bi”.
The action_type in PolicyAction can be “allow” or “redirect”.
The action_value in PolicyAction is used only in the case of “redirect” and
corresponds to a service_chain_spec.
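A hypothetical helper mirroring the classifier constraints just described (this
is a sketch, not the actual GBP validators):

```python
def validate_classifier(protocol=None, port_range=None, direction='bi'):
    """Check classifier fields against the rules described above."""
    names = {'tcp', 'udp', 'icmp'}
    if protocol is not None:
        p = str(protocol)
        # Either a known protocol name or a protocol number 0..255.
        if p not in names and not (p.isdigit() and 0 <= int(p) <= 255):
            raise ValueError('bad protocol: %s' % protocol)
    if port_range is not None:
        # A single port, or "start:end" separated by a colon.
        parts = str(port_range).split(':')
        if len(parts) > 2 or not all(
                x.isdigit() and 0 < int(x) <= 65535 for x in parts):
            raise ValueError('bad port range: %s' % port_range)
        if len(parts) == 2 and int(parts[0]) > int(parts[1]):
            raise ValueError('range start exceeds end')
    if direction not in ('in', 'out', 'bi'):
        raise ValueError('bad direction: %s' % direction)
    return True
```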
The default route injection in VMs can be controlled by using the
inject_default_route in the L2Policies. This is set to True by default.
When set to False, the default route propagation is suppressed for all
the PTGs belonging to a specific L2Policy. This is useful in the cases
when a VM is associated with more than one PTG, and we want it to get a
specific default route and suppress others.
NetworkServiceParams have the following attributes:
* type - one of “ip_single”, “ip_pool”, “string”
* name - a user-defined string, e.g. vip
* value - self_subnet or external_subnet when the type is
ip_single or ip_pool; a free-form string value when the type is string
The type and value are validated; the name is treated as a literal.
The name of the param is chosen by the service chain implementation,
and as such is validated by the service chain provider.
The supported values are self_subnet and external_subnet,
but values are not validated when the type is 'string'.
Valid combinations are:
* ip_single, self_subnet: allocate a single IP address from the PTG subnet,
e.g. a VIP (in the private network)
* ip_single, external_subnet: allocate a single floating IP address,
e.g. a public address for the VIP
* ip_pool, external_subnet: allocate a floating IP for every PT in the PTG
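The combinations above can be checked with a small sketch (names here are
illustrative, not the actual GBP validator):

```python
# Valid (type, value) pairs per the list above.
VALID_COMBINATIONS = {
    ('ip_single', 'self_subnet'),
    ('ip_single', 'external_subnet'),
    ('ip_pool', 'external_subnet'),
}


def is_valid_ns_param(param):
    """Return True when a NetworkServiceParam dict is well-formed."""
    ptype, value = param.get('type'), param.get('value')
    if ptype == 'string':
        # Free-form value; not validated beyond being a string.
        return isinstance(value, str)
    return (ptype, value) in VALID_COMBINATIONS
```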
Database models
---------------
Database Objects to support Group-Based Policy:
::
+----------+ +-------------+
| | | |
| Policy | | Policy |
| Target |1 *| Rule |
| Groups +--------> Sets |
| (PTG) | | (PRS) |
| | | |
+----------+ +-------------+
1| 1|
| |
|* |*
+-----v----+ +------v------+ +-------------+
| | | | | |
| Policy | | Policy |1 *| Policy |
| Targets | | Rules +-------> Actions |
| (PT) | | (PR) | | (PA) |
| | | | | |
+----------+ +-------------+ +-------------+
1|
|
|1
+------v------+
| |
| Policy |
| Classifiers |
| (PC) |
| |
+-------------+
``*`` denotes a 0..n multiplicity.
Internals
---------
The GBP plugin class is located at `gbpservice/neutron/services/grouppolicy/plugin.py:GroupPolicyPlugin`.
The GBP plugin driver that maps resources to Neutron is located at `gbpservice/neutron/services/grouppolicy/drivers/resource_mapping.py:ResourceMappingDriver`.


@@ -0,0 +1,68 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Extension Drivers
=================
The extension driver framework in GBP works much like Neutron's ML2, allowing
GBP resources to be extended with additional attributes.
Requirements
------------
Eventual GBP documentation will need to address configuring extension
drivers and the fact that different policy drivers may require
different API extensions.
Database models
---------------
Extension drivers implemented within the framework each have
their own data models.
Internals
---------
An ExtensionDriver abstract base class exists within the
group_policy_driver_api module and contains the following methods and
properties:
* initialize(self) - Perform driver initialization.
* extension_alias(self) - Supported extension alias.
* process_create_<resource>(self, session, data, result) - Process
extended attributes for <resource> creation.
* process_update_<resource>(self, session, data, result) - Process
extended attributes for <resource> update.
* extend_<resource>_dict(self, session, result) - Add extended
attributes to <resource> dictionary.
See the ML2 extension driver specification and code review listed
below in the references for more details.
Developers of policy drivers are able to define any needed
extensions to the GBP API by defining extension drivers.
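A minimal sketch of what such a driver might look like (the class name,
extension alias, and attribute name are hypothetical; the real abstract base
class lives in the group_policy_driver_api module):

```python
class DummyExtensionDriver(object):
    """Illustrative only: adds one extra attribute to Policy Targets."""

    def initialize(self):
        # Real drivers would set up their config and DB models here.
        pass

    @property
    def extension_alias(self):
        return 'dummy_pt_extension'  # hypothetical alias

    def process_create_policy_target(self, session, data, result):
        # A real driver would persist this to its own table,
        # keyed by result['id'].
        result['dummy_attr'] = data.get('dummy_attr', '')

    def process_update_policy_target(self, session, data, result):
        if 'dummy_attr' in data:
            result['dummy_attr'] = data['dummy_attr']

    def extend_policy_target_dict(self, session, result):
        # Fill in the extended attribute when the resource is shown.
        result.setdefault('dummy_attr', '')
```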
Configuration
-------------
The extension_drivers configuration variable will need to be set to
enable any extensions required by the driver(s) specified in the
policy_drivers configuration variable.
References
----------
* ML2 extension driver blueprint:
https://blueprints.launchpad.net/neutron/+spec/extensions-in-ml2
* ML2 extension driver specification:
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/juno/neutron-ml2-mechanismdriver-extensions.rst
* ML2 extension driver code review: https://review.openstack.org/#/c/89211/


@@ -0,0 +1,132 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
External Connectivity
=====================
Group-Based Policy includes API objects to model the external
connectivity policy. Although the objective is always to capture
the user's intent, it has to be noted that this particular case usually
requires a lot of manual configuration by the admin *outside* the cloud
boundaries (e.g. configuring an external router), which means that the
usual automation provided by GBP has to be paired with meaningful tools
that allow detailed configuration when needed.
Terminology
-----------
**NAT Policy:** A pool of IP addresses (range/CIDR) that will be used
by the drivers to implement NAT capabilities when needed.
**External Segment (ES):** A CIDR representing the L3 Policy interface
to a given portion of the external world. The L3 Policy specifies
which address it exposes on a given external segment.
**External Route (ER):** A combination of a CIDR and a next hop,
representing a portion of the external world reachable by the L3 Policy
via that next hop.
**External Policy (EP):** A collection of ESs that provides and
consumes Policy Rule Sets in order to define the data-path filtering
for north-south traffic.
Requirements
------------
In order to talk to the external world, a given Policy Target Group
needs to satisfy the following:
- The L3P it belongs to must have at least one External Segment
and one IP allocated;
- The External Segment must have at least one route;
- The External Segment must have an External Policy;
- The PTG must provide/consume a PRS provided/consumed by said EP;
- The traffic has to satisfy the filtering rules defined in the PRS.
Notes and restrictions on the Neutron resource mapping side:
- The external segment maps to a Neutron subnet;
- The network in which the ES's subnet resides must be external;
- To avoid overloading the model, in this iteration the external
subnet must always be explicit;
- Restriction: Only one External Policy per tenant is allowed
(side effect of https://bugs.launchpad.net/group-based-policy/+bug/1398156)
- Restriction: Only one ES per EP is allowed;
- Restriction: Only one ES per L3P is allowed;
- When no next hop is specified in an ER, the subnet gateway IP will be used;
- When no address is specified by the L3P when an ES is added, one will be
assigned automatically if available;
- Restriction: In this cycle, any NAT policy operation is completely ignored.
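The next-hop fallback noted above can be sketched as follows (a hypothetical
helper, not the mapping driver itself):

```python
def effective_routes(external_routes, subnet_gateway_ip):
    """Resolve (cidr, nexthop) pairs, falling back to the subnet
    gateway IP when a route does not specify a next hop."""
    return [(cidr, nexthop or subnet_gateway_ip)
            for cidr, nexthop in external_routes]


# A default route with no next hop falls back to the subnet gateway:
effective_routes([('0.0.0.0/0', None)], '172.16.0.1')
# → [('0.0.0.0/0', '172.16.0.1')]
```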
Database models
---------------
External connectivity is represented with::
+----------+
| External |
| Policy |
+----+-----+
|m
|
|n
+----+-------+ +---------+
| Ext. |1 m| NAT |
| Segment +----------+ Policy |
+----+-------+ +---------+
|
| +---------+
|1 n| Ext. |
+------------------+ Route |
| +---------+
|
| +------------+
|1 n | L3P Address|
+------------------+ Allocation |
+------------+
All objects (excluding ER and L3PAA) have the following common attributes:
* id - standard object uuid
* name - optional name
* description - optional annotation
* shared - whether the object is shared or not
External Segment
* ip_version - [4, 6]
* cidr - string of the form <subnet>/<prefix_length> which describes
the external segment subnet
* l3_policies - a list of l3_policies UUIDs
* port_address_translation - boolean, specifies whether PAT needs to be performed
using the addresses allocated for the L3P
NAT Policy
* ip_version - [4,6]
* ip_pool - string, IPSubnet with mask used to pull addresses from
for NAT purposes
* external_segments - UUID list of the ESs using this NAT policy
External Route
* cidr - string, IPSubnet with mask used to represent a portion of the
external world
* nexthop - string, IP address describing where the traffic should be sent
in order to reach cidr
* external_segment_id - UUID of the ES through which this ER is
consumable
External Policy
* external_segments - a list of external access segments UUIDs
* provided_policy_rules_set - a list of provided policy rules set UUIDs
* consumed_policy_rules_set - a list of consumed policy rules set UUIDs
L3P Address Allocation
* external_segment_id - ES UUID
* l3_policy_id - L3P UUID
* allocated_address - IP address belonging to the ES subnet
Furthermore, L3 Policies contain the following relevant attribute:
* external_segments - A dictionary of the form
{<es_uuid>: [<my_es_ip>, ...]}. It represents which ESs the L3P is connected
to, and which addresses it uses on each.
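For illustration, the shape of this attribute and a trivial consistency check
(the keys, addresses, and function name below are made up for the sketch):

```python
import ipaddress

# Illustrative shape of the L3P external_segments attribute:
external_segments = {
    'es-uuid-1': ['172.16.0.10'],
}


def addresses_in_segment(external_segments, es_cidrs):
    """Check that every address the L3P claims on an ES actually
    belongs to that ES's CIDR (a sketch, not driver code)."""
    for es_id, addrs in external_segments.items():
        net = ipaddress.ip_network(es_cidrs[es_id])
        if not all(ipaddress.ip_address(a) in net for a in addrs):
            return False
    return True
```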


@@ -0,0 +1,43 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Floating IP Support
===================
The Network Service Policy resource in GBP supports different features through
the definition of the associated Network Service Parameters, a dictionary that
includes a type and a value.
To allocate a single IP address from a given PTG subnet, the NSP should be
created with type ip_single and value self_subnet, indicating that a single
IP address has to be allocated from the PTG subnet to which the
Network Service Policy is attached. This is used in network services: for
example, when a redirect sends traffic to a load balancer, the IP for the
VIP comes from GBP.
To represent a Floating IP (FIP) allocation policy, the NSP type ip_pool is
used alongside the value nat_pool. This allows something like an Advanced
Services management PTG, where we require all PTs in that PTG to get a
floating IP associated with their port.
Requirements
------------
Policy Targets within the cloud can reach the external world, and the outside
world can reach the Policy Targets once they get a FIP.
The security implications depend on the way the PRSs are composed by the cloud admin.
For the Policy Target Group's policy targets to be accessible at their floating IPs from the external world, the PTG needs to have the following properties:
- The L3P it belongs to must have at least one external segment and one nat pool associated with the external segment.
- The External Segment must have at least one route.
- An External Policy should be associated with the External Segment.
- The PTG should provide/consume a PRS that allows traffic with a specific classifier.
- The External Policy should consume/provide that particular PRS.
- Traffic destined to the Floating IP, and satisfying the PRS criteria, will be NAT'ed and will reach the corresponding internal IP in the PTG.
Internals
---------
When a PT is created, the Resource Mapping Driver retrieves the Network Service Policy defined for the PTG. If there is a Network Service Parameter with type ip_pool and value nat_pool, the external segment is retrieved from the L3 Policy the PTG belongs to. From the external segment the nat pool is fetched and then a Floating IP is allocated out of the nat pool and associated with the PT port.
For backward compatibility with Juno, the nat_pool resource is forced to have the same ip_pool CIDR as the external subnet.
Network Service Policy create or update operations raise an error if an external segment having a nat pool is not associated with the L3Policy for the PTGs the NSP refers to.
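The decision the Resource Mapping Driver makes in this flow can be sketched as
follows (a hypothetical helper, not the actual driver method):

```python
def pt_needs_floating_ip(network_service_params):
    """True when the PTG's NSP carries an ip_pool/nat_pool parameter,
    per the flow described above (illustrative sketch)."""
    return any(p.get('type') == 'ip_pool' and p.get('value') == 'nat_pool'
               for p in (network_service_params or []))
```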


@@ -0,0 +1,43 @@
===================================================
Group Based Policy Driver for Cisco APIC Controller
===================================================
Launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/group-policy-apic-driver
This blueprint proposes a Group Based Policy (GBP) driver to enable
GBP plugin to be used with Cisco APIC controller.
Problem description
===================
The Cisco APIC controller enables you to create an application-centric fabric.
If you require policy-driven network control in an OpenStack deployment
using the ACI fabric, the reference driver for GBP cannot leverage the
efficiency or scalability provided by the native fabric interfaces available
in the APIC controller.
Since the GBP plugin defines a multi-driver based framework
to support various implementation technologies (like ML2 for L2 support),
a GBP driver is available to support the APIC controller. This driver
interfaces with the APIC controller and allows efficient and scalable use of
the ACI fabric for policy-based control from the GBP plugin.
This driver should allow for a more efficient and scalable solution
for group-based policy control of deployments using an ACI fabric.
Internals
---------
The PolicyDriver interface is defined in the abstract base class
gbpservice/neutron/services/grouppolicy/group_policy_driver_api.py:
PolicyDriver.
Configuration
-------------
The configuration files require specific information to use this driver.
These parameters include the addresses, credentials, and any configuration
required for accessing or using the APIC controller. Where possible, it
shares the configuration with the APIC ML2 driver.


@@ -0,0 +1,39 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Heat Support
============
To enable simplified, application-oriented interfaces, OpenStack networking
was extended with policy and connectivity abstractions. Heat is extended
with corresponding resources for these abstractions through a new
Heat plugin.
Terminology
-----------
The terminology is consistent with the GBP resources
that the Heat resources refer to.
Internals
---------
The following Group-Based Policy Heat resources are available for consumption:
OS::GroupBasedPolicy::ExternalPolicy
OS::GroupBasedPolicy::ExternalSegment
OS::GroupBasedPolicy::L2Policy
OS::GroupBasedPolicy::L3Policy
OS::GroupBasedPolicy::NATPool
OS::GroupBasedPolicy::NetworkServicePolicy
OS::GroupBasedPolicy::PolicyAction
OS::GroupBasedPolicy::PolicyClassifier
OS::GroupBasedPolicy::PolicyRule
OS::GroupBasedPolicy::PolicyRuleSet
OS::GroupBasedPolicy::PolicyTarget
OS::GroupBasedPolicy::PolicyTargetGroup
OS::GroupBasedPolicy::ServiceChainNode
OS::GroupBasedPolicy::ServiceChainSpec
OS::GroupBasedPolicy::ServiceProfile
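As a hedged illustration, a minimal HOT template wiring two of these resources
together might look like the following; the property names shown are
assumptions and may not match the plugin's actual schema:

```yaml
heat_template_version: 2013-05-23

resources:
  web_ptg:
    type: OS::GroupBasedPolicy::PolicyTargetGroup
    properties:
      name: web-tier            # illustrative name

  web_server_pt:
    type: OS::GroupBasedPolicy::PolicyTarget
    properties:
      name: web-server-1        # illustrative name
      policy_target_group_id: { get_resource: web_ptg }
```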


@@ -31,16 +31,21 @@ GBP Design
.. toctree::
:maxdepth: 3
gbp_resource_model
gbp_plugin_and_driver_architecture
gbp_driver_extensions
implicit_mapping_policy_driver
resource_mapping_policy_driver
gbp_external_connectivity
gbp_service_chaining_model
gbp_node_composition_plugin_and_driver_architecture
gbp_traffic_stitching_plumber
abstraction
development.environment
extension-drivers
external-connectivity
floating-ip-support
group-policy-apic-driver
heat-support
images
odl-policy-driver
policy-target-cluster-id
service-chain-abstractions
service-chain-driver
shared-resources
traffic-stitching-plumber-model
traffic-stitching-plumber-placement-type
Module Reference
----------------
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Group Based Policy Driver for OpenDaylight Controller
=====================================================
The GBP plugin defines a multi-driver-based framework to support
various implementation technologies (like ML2 has done for L2 support).
One of these drivers is meant to be used with
the OpenDaylight (ODL) controller.
Internals
---------
An ODL GBP Policy Mapping Driver supports OpenDaylight GBP. It implements the
PolicyDriver interface as defined in the abstract base class
services.group_policy_driver_api.PolicyDriver. The GBP/ODL Mapping Driver
interfaces with the ODL controller for GBP related operations, and with
Neutron ML2 for network/subnet/port related operations.
An ODL GBP Mechanism Driver for the Neutron ML2 Plugin provides a feedback
loop to the ODL GBP Policy Mapping Driver to trigger policy target related
operations when a VM is plugged into the Neutron port.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Policy Target HA mode
=====================
With the introduction of more sophisticated network services and service
chaining modes, one limitation we encountered was that GBP could not
describe a cluster of HA endpoints, defined as a collection of Policy
Targets that can freely interchange their datapath identity
(MAC and IP address) in order to replace one another whenever the network
service requires it (for example, during an HA failover).
In the GBP universe, members of the same PTG share the same policy
characteristics, such as security, quality, and connectivity constraints, but
not the same network "identity", qualified as MAC or IP address in the
datapath. With this feature, Policy Targets can impersonate other Policy
Targets in the network.
Internals
---------
The "cluster_id" attribute of the PT is simply a string
that identifies a specific Policy Target as belonging to an HA cluster.
Whenever cluster_id is set, the PTs that share the same cluster identity will
be able to impersonate one another depending on the backend implementation.
In the reference implementation (Neutron) this is achieved by leveraging
Neutron's "allowed-address-pair" extension. In the current iteration, for the
resource_mapping driver, cluster_id will not be allowed to be just any generic
string, but a UUID pointing to an existing Policy Target. That Policy Target
will be identified as the "Master" of the cluster. Any member of the cluster
will be given the ability to impersonate the Master by having the Master's IP
and MAC addresses set in the "allowed-address-pair" list of the member's
Neutron Port.
A "Master" PT (defined as a PT pointed to by the cluster_id field of another
PT) can itself be part of the same cluster (for debuggability purposes),
although this is not mandatory.
By default, this attribute is only exposed to the Admin role.
Database models
---------------
Policy Target:
* cluster_id: String
REST API
--------
Changes to the PT API::
POLICY_TARGETS: {
'cluster_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'default': '', 'is_visible': True}
}
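As an illustrative sketch (not the actual driver code) of the
allowed-address-pair mapping described above, a cluster member's Neutron port
can be updated so that the Master's IP/MAC become legitimate source addresses
for it; the dict shapes below mirror Neutron's port API::

```python
# Sketch only: how the resource_mapping driver could realize cluster
# membership via Neutron's allowed-address-pair extension. Every member
# port is allowed to send traffic with the Master's IP and MAC.
def member_port_update(master_port, member_port):
    """Return the body of a Neutron port update for a cluster member."""
    pairs = list(member_port.get('allowed_address_pairs', []))
    for fixed_ip in master_port['fixed_ips']:
        pairs.append({'ip_address': fixed_ip['ip_address'],
                      'mac_address': master_port['mac_address']})
    return {'port': {'allowed_address_pairs': pairs}}


master = {'mac_address': 'fa:16:3e:00:00:01',
          'fixed_ips': [{'ip_address': '10.0.0.10'}]}
member = {'allowed_address_pairs': []}
print(member_port_update(master, member))
```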
Security
--------
As a Policy Target can now impersonate another PT in the datapath, this
includes a potential risk when done for malicious reasons. The API, however,
is open only to Admins, and its scope is limited to a single PTG (so no Group
escape can happen).
Notifications
-------------
When notifying a member of the cluster of a Datapath change, all the cluster's
members are notified in order to take coherent action.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Service Chaining
================
Group-Based Policy provides intent-based, application-oriented
abstractions for the specification of networking requirements to
deploy applications. Network Services are an essential component for
the deployment of applications.
One common use-case for using multiple services can be described as a
"service-chain" - that is, application of specific services in a specific
order for every packet on a specific datapath. This API provides an
abstraction to specify that behavior with clear definition of the expected
semantics.
The goal of this specification is to provide the user with a tool that
captures their high level intent without getting coupled with incidental
details of their specific deployment. This then provides a path for migrating
such services across technology changes or for hybrid deployments. By design,
this specification does not mandate the technology used to provide that
service.
Typical scenarios for combining services can usually be specified quite
succinctly (e.g. traffic on this interface must first be inspected by a
firewall and then processed by a load balancer). Unfortunately, specifying
that often gets mired in incidental complexity around service insertion and
traffic steering. If the user is able to describe that intent, we can
orchestrate the required service lifecycle events and steer the traffic as
required without exposing that complexity (and the resulting coupling)
outside the specific implementation details.
This has the added benefits that:
1. As we are only specifying the intent, it is back-end technology agnostic
2. The additional information allows us to provide technology upgrade without
breaking service usage from a user perspective
3. It can support hybrid deployments, or migrations across vendors, even
when the underlying technology used by those vendors is different.
Also, when specifying abstractions that are required to be implemented across
technologies, it is critical that the semantics that are expected by the API
(or implied) be clearly defined so that the usage can actually be portable
across those technologies.
Database models
---------------
1. ServiceChainNode
+-------------------+--------+---------+----------+-------------+---------------+
|Attribute |Type |Access |Default |Validation/ |Description |
|Name | | |Value |Conversion | |
+===================+========+=========+==========+=============+===============+
|id |string |RO, all |generated |N/A |identity |
| |(UUID) | | | | |
+-------------------+--------+---------+----------+-------------+---------------+
|name |string |RW, all |'' |string |human-readable |
| | | | | |name |
+-------------------+--------+---------+----------+-------------+---------------+
|type |string |RW, all |required |foreign-key |service-type |
|(flavor?) | | | | | |
| | | | | | |
+-------------------+--------+---------+----------+-------------+---------------+
|config |string |RW, all |'' |string | service |
| | | | | | configuration |
| | | | | | (as a HEAT |
| | | | | | template) |
| | | | | | [1]_ |
+-------------------+--------+---------+----------+-------------+---------------+
|service_params |list of |RW, all |'' |list of |list of |
| |strings | | |strings |required |
| | | | | |service config |
| | | | | |param names |
+-------------------+--------+---------+----------+-------------+---------------+
2. ServiceChainSpec
+-------------------+--------+---------+----------+-------------+-----------------+
|Attribute |Type |Access |Default |Validation/ |Description |
|Name | | |Value |Conversion | |
+===================+========+=========+==========+=============+=================+
|id |string |RO, all |generated |N/A |identity |
| |(UUID) | | | | |
+-------------------+--------+---------+----------+-------------+-----------------+
|name |string |RW, all |'' |string |human-readable |
| | | | | |name |
+-------------------+--------+---------+----------+-------------+-----------------+
|nodes |string |RW, all |required |list of |list of |
| | | | |strings |ServiceChainNode |
| | | | |(UUIDs) | |
+-------------------+--------+---------+----------+-------------+-----------------+
|service_params |list of |RO, all |generated |N/A |list of required |
| |strings | | | |service config |
| | | | | |parameter names |
+-------------------+--------+---------+----------+-------------+-----------------+
service_params is generated by aggregating the service_params of each of
the ServiceChainNodes in the ServiceChainSpec. The parameter is not specified
in the API to create the ServiceChainSpec resource.
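A minimal sketch of that aggregation (an illustration, not the actual plugin
code): each node's service_params are collected in chain order, dropping
duplicates, to form the spec's read-only service_params::

```python
# Derive a ServiceChainSpec's service_params from its ServiceChainNodes,
# preserving chain order and skipping duplicate parameter names.
def aggregate_service_params(nodes):
    seen, params = set(), []
    for node in nodes:
        for name in node['service_params']:
            if name not in seen:
                seen.add(name)
                params.append(name)
    return params


fw = {'service_params': ['vip']}
lb = {'service_params': ['vip', 'pool_size']}
print(aggregate_service_params([fw, lb]))  # -> ['vip', 'pool_size']
```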
3. ServiceChainInstance
+--------------------+-------+---------+---------+-----------------+-----------------+
|Attribute |Type |Access |Default |Validation/ |Description |
|Name | | |Value |Conversion | |
+====================+=======+=========+=========+=================+=================+
|id |string |RO, all |generated|N/A |identity |
| |(UUID) | | | | |
+--------------------+-------+---------+---------+-----------------+-----------------+
|name |string |RW, all |'' |string |human-readable |
| | | | | |name |
+--------------------+-------+---------+---------+-----------------+-----------------+
|service-chain-spec |string |RW, all |required |foreign-key for |service-chain |
| | | | |ServiceChainSpec |spec for this |
| | | | | |instance |
+--------------------+-------+---------+---------+-----------------+-----------------+
|provider_ptg |string |RW, all |required |foreign-key |Destination |
| |(UUID) | | | |PolicyTargetGroup|
| | | | | | |
+--------------------+-------+---------+---------+-----------------+-----------------+
|consumer_ptg |string |RW, all |required |foreign-key |Source |
| |(UUID) | | | |PolicyTargetGroup|
| | | | | | |
+--------------------+-------+---------+---------+-----------------+-----------------+
|classifier |string |RW, all |required |foreign-key |Classifier |
| |(UUID) | | | | |
| | | | | | |
+--------------------+-------+---------+---------+-----------------+-----------------+
|service_param_values|string |RW, all |required |dictionary |configuration |
| | | | | |parameter names |
| | | | | |and values |
+--------------------+-------+---------+---------+-----------------+-----------------+
SEMANTICS:
The expected semantics are the equivalent of:
1. As if the services were created to process traffic from consumer_ptg
to provider_ptg that matches the classifier
NOTE: This is just specifying that the service chain needs to be
applied to all traffic that is traversing between the PolicyTargetGroups.
The provider may implement it using any valid insertion strategy.
2. In the order of ServiceChainNodes in the ServiceChainSpec for
inbound traffic to the Destination PolicyTargetGroup, and in opposite order
for outbound traffic from the Destination PolicyTargetGroup
3. Not all providers will honor arbitrary ordering of services
for application of the service.
In that case, the provider will raise a "NotImplemented"
exception.
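The ordering semantics above, in miniature: nodes are applied in spec order
for traffic inbound to the provider PTG, and in reverse for outbound traffic::

```python
# Node order as listed in the ServiceChainSpec.
nodes = ['FW', 'LB']

inbound = list(nodes)             # consumer -> FW -> LB -> provider
outbound = list(reversed(nodes))  # provider -> LB -> FW -> consumer

print(inbound, outbound)  # -> ['FW', 'LB'] ['LB', 'FW']
```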
USAGE WORKFLOW:
1. Assume an application policy that defines connectivity between
a provider PolicyTargetGroup (ptg1) and a consumer PolicyTargetGroup (ptg2)
2. Assume that the desired semantics are that all traffic between
ptg1 and ptg2 needs to be (a) first inspected
by a firewall, and then (b) load balanced by a load balancer.
3. Then I would create a ServiceChainSpec with 2 ServiceChainNodes.
The first node would be of type FW and the second one LB.
The FW node would have config string as the HEAT template for
FWaaS configuration and the LB would have the config string as
the HEAT template for the LBaaS configuration. CLI for that
would look like::
gbp servicechain-node-create --type flavor_id --config_file fw_heat_template fw_node
gbp servicechain-node-create --type flavor_id --config_file lb_heat_template lb_node
gbp servicechain-spec-create --nodes "fw_node;lb_node" fwlb_spec
This creates the ordered-list ["FW", "LB"] as the list of services in the
chain.
4. The spec fwlb_spec created in step 3 would be used as the target of a
policy-rule in the application policy
5. Finally the GBP provider would create a ServiceChainInstance from
this ServiceChainSpec. An equivalent CLI command would look
like::
gbp servicechain-instance-create --servicechain_spec_id fwlb_spec --provider_ptg ptg1 --consumer_ptg ptg2 --classifier classifier-all --config_param_values "vip=IP1" service-chain
This creates a chain that applies services in the order:
* FW->LB->ptg1 for ingress traffic, and
* ptg1->LB->FW for egress traffic.
Notifications
-------------
1. All updates to service-chain-spec resources are relayed to the
configured service-chain-providers
2. Updates to ServiceChainNode or ServiceChainSpec generate notifications
to backend to "fixup" the ServiceChainInstances as required.
3. It is assumed that the existing notifications exception handling
meets the needs for this API and no new constructs are specified.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Service Chain Driver Refactor
=============================
Previously, service chain drivers were a monolithic entity coupling service
chaining logic along with the service configuration logic. Today, these
entities are decoupled, allowing development of a service configuration
driver independent of the chaining mechanism.
Internals
---------
An API object called "Service Profile" contains a set of attributes that
describe the service (e.g. service_type, vendor, insertion_mode, and so forth).
The "Node Composition Plugin" can load one or multiple "Node Driver(s)".
A Node Driver is capable of deploying, destroying and updating Service Chain
Node instances depending on their profile. The plumbing info of all the
scheduled nodes is used by the NCP for traffic stitching/steering.
This is a pluggable module (NodePlumber).
The relationship between the Services Plugin and Node Drivers is as shown below:
The Node Composition Plugin implementation is designed as the following class
hierarchy:
asciiflow::
+--------------------------------------+ +-----------------------------------------+
|NodeComposPlugin(ServiceChainDbPlugin)| | NodeDriverBase |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| |1 N| |
| +------+ |
+--------------------------------------+ +-----------------------------------------+
| *create *update *delete | | *get_plumbing_info() |
| *SCI *SCI *SCI | | *validate_create(NodeContext) |
| *SCS *SCS *SCS | | *validate_update(NodeContext) |
| *SCN *SCN *SCN | | *create(NodeContext) |
+-----------------+--------------------+ | *delete(NodeContext) |
| | *update(NodeContext) |
+-----------------+--------------------+ | *update_policy_target_added(NContext,PT)|
|NodePlumber | | *update_policy_target_removed(...) |
| | | |
| | | |
+--------------------------------------+ | |
| | +---------v----------v----------v---------+
| *plug_services(NContext,Deployment) | | | |
| *unplug_services(NContext,Deployment)| | | |
| | +------+------+ | +------+------+
+--------------------------------------+ | | | | |
| Nova | | | Neutron |
+--------------------------------------+ | Node | | | Node |
| NodeContext | | Driver | | | Driver |
| | | | | | |
| *core plugin | | | | | |
| *sc plugin | +-----v-------+ | +------v------+
| *provider ptg | | | |
| *consumer ptg | | | |
| *policy target(s) | | | |
| *management ptg | +-----+----+ +----+---+ +----+-----+
| *service chain instance | | SC Node | | SC Node| | SC Node |
| *service chain node | | Driver | | Driver | | Driver |
| *service chain spec | +----------+ +--------+ +----------+
| *service_targets |
| *l3_plugin |
| *gbp_plugin |
| |
+--------------------------------------+
| |
| |
+--------------------------------------+
Node Driver Base
This supports operations for CRUD of a service, and to query the number of
data-path and management interfaces required for this service.
Also supports call backs for notifications on added and removed Policy Targets
on a relevant PTG. This can be used for example to support auto-scaling by
adding new members to a loadbalancer pool.
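A hypothetical minimal node driver illustrating this contract (the real
NodeDriverBase has more methods, e.g. validate_create/validate_update, and the
context is a NodeDriverContext object rather than a dict)::

```python
# Sketch of a trivial node driver honoring the NodeDriverBase shape.
class DummyNodeDriver:

    def get_plumbing_info(self, context):
        # One data-path interface on each side, no management interface.
        return {'management': [], 'provider': [{}], 'consumer': [{}]}

    def create(self, context):
        # Deploy the service instance for this node.
        print('deploying node %s' % context['node_id'])

    def update_policy_target_added(self, context, policy_target):
        # e.g. add the new member to a load balancer pool (auto-scaling).
        pass


driver = DummyNodeDriver()
print(driver.get_plumbing_info({'node_id': 'n1'}))
```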
Node Context
Provides useful attributes and methods for the Node Driver to use.
CRUD on "service targets" are useful to create service specific
Policy Targets in defined PTGs (provider/consumer/management)
The Node Driver operations are called as pre-/post-commit hooks.
Service Targets
This is an *internal only* construct. It's basically a normal Policy Target,
but with some metadata that makes it easy to understand which service it
belongs to, in which order, on which side of the relationship, for which
Node, and deployed by which driver. A new table is required to store this
information.
Nova Node Driver
This provides a reusable implementation for managing the lifecycle of a
service VM.
Neutron Node Driver
This provides a reusable implementation for managing existing Neutron
services.
Node Driver
This configures the service based on the “config” provided in the Service
Node definition.
Node Plumber
The Node Plumber is a pluggable module that performs the network orchestration
required to insert the service nodes, and plumb traffic to them per the user's
intent captured in the service chain specification. It achieves this by creating
the appropriate Neutron and GBP constructs (e.g. Ports, Networks, Policy Targets,
Policy Target Groups) based on the requirements of the Node Driver and in the
context of realizing the Service Chain.
Deployment (input parameter in plug and unplug services methods)
A deployment is a list composed as follows::
[{'context': node_context,
'driver': deploying_driver,
'plumbing_info': node_plumbing_needs},
...]
The position of a given node in the service chain can be retrieved by the Node Driver
using node_context.current_position
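A sketch of how a plumber might walk such a Deployment list (illustration
only; the real plug_services creates the Neutron/GBP constructs instead of
just counting needs)::

```python
# Each deployment entry pairs a node context with the driver that will
# deploy the node and the plumbing needs that driver declared.
def plug_services(deployment):
    results = []
    for entry in deployment:
        info = entry['plumbing_info']
        # ... here the plumber would create Service Targets / proxy PTGs ...
        results.append((entry['context']['current_position'],
                        len(info.get('provider', [])),
                        len(info.get('consumer', []))))
    return results


deployment = [{'context': {'current_position': 0},
               'driver': 'dummy',
               'plumbing_info': {'provider': [{}], 'consumer': [{}]}}]
print(plug_services(deployment))  # -> [(0, 1, 1)]
```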
Management Policy Target Group
A PTG can be marked for service management by setting the newly added
"service_management" attribute to True. In the default policy.json this
operation can only be done by an Admin, who can create one (and only one)
Management PTG per tenant, plus a globally shared one.
Whenever a SCI is created, the NCP will first look for an existing Management
PTG belonging to the SCI owner's tenant. If none exists, the NCP plugin will
query for an existing shared PTG, which could potentially belong to any tenant
(typically one with admin capabilities). If no Management PTG is found, the
service instantiation will proceed without it, and it is the Node Driver's
duty to refuse a service instantiation if it requires a Management PTG.
Database models
---------------
Service Target
* policy_target_id - PT UUID
* service_chain_instance_id - SCI UUID
* service_chain_node_id - SCN UUID, the one of the specific node this ST belongs to
* relationship - Enum, PROVIDER|CONSUMER|MANAGEMENT
* order - Int, order of the node within the chain
Service Profile
* id - standard object uuid
* name - optional name
* description - optional annotation
* shared - whether the object is shared or not
* vendor - optional string indicating the vendor
* insertion_mode - string L2|L3|BITW|TAP
* service_type - generic string (eg. LOADBALANCER|FIREWALL|...)
* service_flavor - generic string (eg. m1.tiny)
Service Chain Node
* REMOVE service_type
* service_profile_id - SP UUID
Policy Target Group
* service_management - bool (default False)
Service Chain Instance
* management_ptg_id - PTG UUID
REST API
--------
The REST API changes look like follows::
SERVICE_PROFILES: {
'id': {'allow_post': False, 'allow_put': False,
'validate': {'type:uuid': None}, 'is_visible': True,
'primary_key': True},
'name': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'default': '', 'is_visible': True},
'description': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'tenant_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': True, 'is_visible': True},
attr.SHARED: {'allow_post': True, 'allow_put': True,
'default': False, 'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
'vendor': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'default': ''},
'insertion_mode': {'allow_post': True, 'allow_put': True,
'validate': {'type:values':
scc.VALID_INSERTION_MODES},
'is_visible': True, 'default': None},
'service_type': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'required': True},
'service_flavor': {'allow_post': True, 'allow_put': True,
'validate': {'type:string': None},
'is_visible': True, 'required': True},
}
The following is added to servicechain node::
SERVICECHAIN_NODES: {
'service_profile_id': {'allow_post': True, 'allow_put': True,
'validate': {'type:uuid': None},
'required': True, 'is_visible': True},
}
The following is added to policy target group::
POLICY_TARGET_GROUPS: {
'service_management': {'allow_post': True, 'allow_put': True,
'default': False,
'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
}
The following is added to service chain instance::
SERVICECHAIN_INSTANCES: {
'management_ptg_id': {'allow_post': True, 'allow_put': False,
'validate': {'type:uuid_or_none': None},
'is_visible': True, 'default': None,
'required': True}
}
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Shared Resources
================
In the context of separation of concerns, it is very important that a user
(e.g. the admin) can share some of the resources they created so that
different kinds of users are able to consume them.
To achieve this, the API offers a way to specify whether a resource is shared
or not. The behavior is consistent with Neutron's existing sharing
policy, which means that a given resource can be either consumable by a single
tenant or shared globally. This document defines what a shared resource means
in the context of GBP and each individual resource type.
The following resources can be shared:
* Policy Rule Sets;
* Policy Target Groups;
* L2 Policies;
* L3 Policies;
* Network Service policies;
* Policy Rules;
* Policy Classifiers;
* Policy Actions;
* Service Chain Nodes;
* Service Chain Specs.
Shared resources are modifiable only by the owner or the
admin, where applicable.
The Policy Target resource has been excluded from the list above
since it is intrinsically something that the user creates and
consumes for themselves.
One use case for sharing a PTG is when the cloud admin provides a
common management PTG to all the tenants. They can then create
multi-homed VMs and use it according to the policies.
Requirements
------------
The sharing constraints are the following:
- A shared resource can only be associated with other shared
resources. For example, a shared L2_Policy can only exist on
a shared L3_Policy;
- CRUD operations on a shared resource are governed by the
rules described in the policy.json file;
- A shared resource can't be reverted to non-shared while it is
in use by either shared or other tenants' resources.
- Although the model provides as much flexibility as possible
(constrained by the above rules), each driver should limit
the sharing capabilities based on its own implementation.
Any datapath impact caused by a shared resource has to be
defined by the driver itself.
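The first sharing constraint can be sketched as a simple validation step
(hypothetical helper, not actual GBP code): a shared resource may only
reference other shared resources, e.g. a shared L2_Policy needs a shared
L3_Policy::

```python
# Reject associations where a shared resource depends on a non-shared one.
def validate_shared_association(resource, parent):
    if resource['shared'] and not parent['shared']:
        raise ValueError('a shared resource cannot depend on a '
                         'non-shared one')


l3p = {'shared': True}
l2p = {'shared': True}
validate_shared_association(l2p, l3p)  # OK, both shared

try:
    validate_shared_association({'shared': True}, {'shared': False})
except ValueError as exc:
    print(exc)
```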
The Neutron mapping driver refactor includes sharing of the
following resources:
- L3_Policy: only usable by the same tenant;
- L2_Policy: only usable by the same tenant;
- PTG: usable by any tenant when shared for PT placement;
- Policy Classifiers: usable by any tenant when shared;
- Policy Actions: usable by any tenant when shared;
- Policy Rules: usable by any tenant when shared;
- Service Chain Specs: usable by any tenant when shared;
- Service Chain Nodes: usable by any tenant when shared.
L3 and L2 policies need to be shareable to allow PTG sharing.
However, no external tenant can actually use them, because there is
currently no way in Neutron to share a Router.
Security groups are also not shareable in Neutron, which is why the
PRS is not included in the Neutron mapping driver's list above.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Traffic Stitching Plumber
=========================
As part of the service chain refactor effort, GBP now supports the ability to provision
"node centric" service chains that are composed of interoperable multi-vendor service
nodes linked by a Plumber, which takes care of placing the services in the underlying
infrastructure in a way that complies with the user intent.
Each Node Driver will expose a set of networking requirements via the get_plumbing_info
API, that will be used by the plumber to ensure that the traffic flows correctly.
The Traffic Stitching Plumber (TScP) uses the GBP underlying constructs in
order to guarantee a correct traffic flow across services from their provider
to the consumer and vice versa. The output of the plumbing operations are
either the creation or deletion of a set of Service Targets, which effectively
result in creation of Policy Targets exposed to the specific Node Driver for
its own use. In addition to that, TScP creates a set of L2Ps and/or PTGs
that are "stitched" together and host the actual service PTs.
Internals
---------
The PROXY_GROUP driver extension is an extension of the base
GBP API that introduces a way to "proxy" a PTG with another one that "eats"
all the incoming and outgoing traffic of the original PTG. The Resource
Mapping Driver is already compliant with the PROXY_GROUP extension.
The TScP exclusively makes use of existing GBP constructs
(and proxy_group extension) in order to deploy a chain. This guarantees
interoperability across multiple backend solutions.
PROXY_GROUP driver extension.
The main functionality exposed by this extension is the ability to put a PTG
"in front" of an existing one (in a linked list fashion). Whenever a PTG
proxies another, all the traffic going to the original PTG will have to go
through the proxy first. Traffic from the Proxy to the original group will
go through a special PT that has the proxy_gateway flag set to True.
In addition to the above, a new "group_default_gateway" attribute is introduced
for PTs, that indicates a special PT that will take the default gateway address
for a given group (typically enforced by reserving the corresponding Neutron
Subnet address). The flow representation below describes how traffic flows
from a L3P (for simplicity) to a PTG proxied by another:
asciiflow::
+---+ +-------+
| | | | +-------+
| | +--------------+ +-----------+ | |
|PTG<-----> proxy_gateway| PROXY | group dgw <-----> L3P |
| | +--------------+ +-----------+ | |
| | | | +-------+
+---+ +-------+
There are two types of Proxies: L2 and L3. An L2 proxy shares the
same subnets as the original group (at least at the CIDR level; they may be
different Neutron subnets), and traffic is supposed to go through the proxy
without being routed. An L3 proxy routes traffic to the original group,
and has its own subnet coming from a proxy_ip_pool, a new attribute that
extends the L3P.
Note that this extension is exclusively for *internal* use, meaning that
the TScP is likely the only entity that will ever make use of this API.
RMD compliance with PROXY_GROUP extension.
The RMD will use existing Neutron constructs for implementing the PROXY_GROUP
extension using the semantics described above. More specifically, whenever a
Proxy is put in front of a PTG, the latter is detached from the L3P router and
replaced by the Proxy. It is then expected that a proxy_gateway PT is created
and a proper function (depending on the proxy type) is set up to ensure
traffic flow.
In case of L3 proxies, the subnet allocation will happen just like it does for
normal PTGs, but from a different Pool (proxy_ip_pool). Also, routes need
to be updated properly across the Neutron subnets/routers when an L3 Proxy
is used.
TScP implementation.
The last step of the implementation is the TScP itself. By using the new
PROXY_GROUP constructs, the TScP plumber will take care of setting up the
Service Chain datapath.
Depending on the plumbing type used (defined in a different blueprint) the
TScP will create the correct proxy type and PTs to be provided to the node
driver for the actual service plugging to happen. Support for a management
PTG will also be implemented.
Database models
---------------
A number of tables are created for the PROXY_GROUP extension to work:
GroupProxyMapping (gp_group_proxy_mappings):
* policy_target_group_id - proxy PTG UUID;
* proxied_group_id - UUID of the proxied PTG
* proxy_group_id - UUID of the current PTG's proxy
* proxy_type - ENUM (L2/L3)
ProxyGatewayMapping(gp_proxy_gateway_mappings):
* policy_target_id - PT UUID
* proxy_gateway - Bool indicating whether this PT is a gateway to proxy
* group_default_gateway - Bool indicating whether this PT is the DG for its PTG
ProxyIPPoolMapping(gp_proxy_ip_pool_mapping):
* l3_policy_id - L3P UUID
* proxy_ip_pool - IP pool (address/cidr) to be used for L3 proxies
* proxy_subnet_prefix_length - Integer value, prefix length for the proxy subnets
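As an illustration of L3-proxy subnet allocation, proxy subnets of
proxy_subnet_prefix_length can be carved out of the L3P's proxy_ip_pool; the
values below are the API defaults (192.168.0.0/16 and /29) and the stdlib
ipaddress module stands in for whatever allocator the driver actually uses::

```python
import ipaddress

# Carve /29 proxy subnets out of the default proxy_ip_pool.
pool = ipaddress.ip_network('192.168.0.0/16')
subnets = pool.subnets(new_prefix=29)

first = next(subnets)
second = next(subnets)
print(first, second)  # -> 192.168.0.0/29 192.168.0.8/29
```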
REST API
--------
The REST API changes look like follows (note that they only apply if the
PROXY_GROUP extension is configured)::
EXTENDED_ATTRIBUTES_2_0 = {
gp.POLICY_TARGET_GROUPS: {
'proxied_group_id': {
'allow_post': True, 'allow_put': False,
'validate': {'type:uuid_or_none': None}, 'is_visible': True,
'default': attr.ATTR_NOT_SPECIFIED,
'enforce_policy': True},
'proxy_type': {
'allow_post': True, 'allow_put': False,
'validate': {'type:values': ['l2', 'l3', None]},
'is_visible': True, 'default': attr.ATTR_NOT_SPECIFIED,
'enforce_policy': True},
'proxy_group_id': {
'allow_post': False, 'allow_put': False,
'validate': {'type:uuid_or_none': None}, 'is_visible': True,
'enforce_policy': True},
# TODO(ivar): The APIs should allow the creation of a group with a
# custom subnet prefix length. It may be useful for both the proxy
# groups and traditional ones.
},
gp.L3_POLICIES: {
'proxy_ip_pool': {'allow_post': True, 'allow_put': False,
'validate': {'type:subnet': None},
'default': '192.168.0.0/16', 'is_visible': True},
'proxy_subnet_prefix_length': {'allow_post': True, 'allow_put': True,
'convert_to': attr.convert_to_int,
# for ipv4 legal values are 2 to 30
# for ipv6 legal values are 2 to 127
'default': 29, 'is_visible': True},
# Proxy IP version is the same as the standard L3 pool ip version
},
gp.POLICY_TARGETS: {
# This policy target will be used to reach the -proxied- PTG
'proxy_gateway': {
'allow_post': True, 'allow_put': False, 'default': False,
'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
# This policy target is the default gateway for the -current- PTG
# Only for internal use.
'group_default_gateway': {
'allow_post': True, 'allow_put': False, 'default': False,
'convert_to': attr.convert_to_boolean,
'is_visible': True, 'required_by_policy': True,
'enforce_policy': True},
},
}
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Traffic Stitching Plumber - Placement Type
==========================================
As part of the service chain refactor effort, GBP now supports the ability to provision
"node centric" service chains, composed of interoperable multi-vendor service
nodes linked by a Plumber, which takes care of placing the services in the underlying
infrastructure in a way that complies with the user's intent.
Each Node Driver exposes a set of networking requirements via the get_plumbing_info
API, which the plumber uses to ensure that the traffic flows correctly.
But how does get_plumbing_info work?
Let's go through all the use cases and iteratively define what a get_plumbing_info
call should provide in order for any Plumber (and so TScP) to be able to do its job.

Get Plumbing Info
-----------------

The plumbing info is defined as a collection of needed policy targets, each on a specific role;
this may vary based on the node (obtained from the NodeDriverContext) that the specific
driver is asked to deploy. An example of plumbing info is the following::
{
"management": <list of updated PT body dicts, one for each needed>,
"provider": <list of updated PT body dicts, one for each needed>,
"consumer": <list of updated PT body dicts, one for each needed>
}
The role (the key of the above dictionary) specifies on which "side" the policy target has to
exist. Depending on the kind of chaining, the Neutron port could actually be placed somewhere else!
The value is a list of attributes intended to override the PT body. This can be used, for example,
to provide explicit Neutron ports when the driver requires them, or to establish a naming
convention for the PTs. An empty dictionary will be used in most cases, indicating
a basic PT creation::
{
"management": [{}], # One PT needed in the management
"provider": [{}, {port_id: 'a'}], # Two PT needed in the provider
"consumer": [] # Zero PT needed in the consumer
}
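A minimal sketch of a node driver producing such a dictionary might look like the following (the driver class name, the PT name override, and the unused context parameter are assumptions for illustration; only the get_plumbing_info method name and the returned structure come from the text above):

```python
# Sketch of a node driver for a hypothetical reverse-proxy service that
# needs a single PT on the provider side (e.g. to host a VIP) and no
# management or consumer side PTs.
class ReverseProxyNodeDriver(object):
    def get_plumbing_info(self, context):
        # context would be the NodeDriverContext; this sketch ignores it.
        return {
            "management": [],                     # zero PTs in the management
            "provider": [{"name": "lb-vip-pt"}],  # one PT, with a name override
            "consumer": [],                       # zero PTs in the consumer
        }
```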
The above dictionary tells the plumber how many interfaces the node needs and where to place them.
What it doesn't tell, however, is how the service will behave in the network, which is fundamental
information when it comes to defining the interaction with its neighboring services.
The proposal is to add a "plumbing_type" attribute to the above dictionary that defines well-known
placement options for nodes. For every option, there has to be a rationale identifying how all the services of
that class will work and what kind of neighbors they require (or disallow), and at least one supporting plumber
has to exist in order to validate that the placement works as expected. Last but not least, a clear
use case should be brought up in the form of an example service.
Ideally, the limitations and behaviors of each plumbing_type will be the same across plumbers.
In this iteration, the following plumbing/placement types are supported by TScP:

Endpoint
--------

* Rationale: An Endpoint needs to be directly reachable by the consumers; it is basically a traditional PT presented
  in the form of a service. This kind of service is typically useful only when directly addressed, and
  is irrelevant to the traffic course otherwise. Endpoint Services typically get a VIP on the provider subnet.
* Neighborhood Limitations: Because of the above, the provider side interface of an Endpoint Service is typically
  on the provider itself (i.e. the first node of the chain).
* Cardinality Limitations: Because of the above, the number of Endpoint Services in any given chain should be one.
  Having more than one Endpoint is certainly possible, but it would defy the definition of "chain", since the consumers can
  only address one of them at a time.
* Initial Supporting Plumber(s): Traffic Stitching Plumber
* Example Services: L4-7 Load Balancer (Reverse Proxy)

Gateway
-------

* Rationale: A Gateway service is a router that the PTs will use for reaching certain (or all) destinations.
  This kind of service usually works on the packets that it is entitled to route, never modifying the Source IP Address.
  Traffic can indeed be dropped, inspected or otherwise manipulated by this kind of service.
* Neighborhood Limitations: None
* Cardinality Limitations: None
* Initial Supporting Plumber(s): Traffic Stitching Plumber
* Example Services: Router, Firewall, Transport-Mode VPN

Transparent
-----------

* Rationale: A Transparent service is either an L2 service or a bump-in-the-wire (BITW) service. This kind of service usually has 2 logical data
  interfaces, and everything that is received on either of them is pushed to the other after processing. The 2 interfaces
  typically exist in the same subnet, so traffic is not routed but switched (or simply mirrored) instead.
* Neighborhood Limitations: None
* Cardinality Limitations: None
* Initial Supporting Plumber(s): Traffic Stitching Plumber
* Example Services: Transparent FW, IDS, IPS
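Putting the proposal together, the plumbing info for a transparent service could carry the proposed "plumbing_type" attribute alongside the per-role PT lists (the variable name and the inclusion of a management PT are assumptions for this sketch):

```python
# Hypothetical plumbing info for a transparent (bump-in-the-wire) IDS:
# one PT on each data side so traffic can be switched through the node,
# plus one management PT, and the proposed "plumbing_type" attribute.
ids_plumbing_info = {
    "plumbing_type": "transparent",  # one of: endpoint, gateway, transparent
    "management": [{}],              # one basic PT in the management
    "provider": [{}],                # one data interface on the provider side
    "consumer": [{}],                # one data interface on the consumer side
}
```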
TODO: We have defined a service such as a tunnel mode VPN to be characterizable as a Gateway plus a Floating IP (somewhat similar
to a Gateway+Endpoint kind of service). We will add this new plumbing type in a subsequent update once it is completely defined.

   usage
   contributing

Developer Docs
==============

.. toctree::
   :maxdepth: 2

   devref/index

Indices and tables
==================