Fix inconsistency in headline format.

Several reStructuredText files do not follow the documentation contribution guide. This change also adds a rule requiring an additional blank line between a section body and the following section headline.

Change-Id: I2dfe9aea36299e3986acb16b1805a4dc0cd952d2
This commit is contained in:
parent facc629005
commit e32d796ac7
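The convention these hunks converge on is the heading template quoted inside the files themselves ("Avoid deeper levels because they do not render well."), plus the new rule of an extra blank line before each headline. Reconstructed from the underline characters used across the hunks below, the intended hierarchy appears to be::

    ==============
    Document Title
    ==============


    Section
    -------


    Subsection
    ~~~~~~~~~~


    Sub-subsection
    ++++++++++++++


    Heading 4
    '''''''''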
@@ -1,3 +1,4 @@
 ===================================
 kuryr-kubernetes Style Commandments
 ===================================
+
@@ -1,3 +1,4 @@
 ========================
 Team and repository tags
 ========================
+
@@ -6,6 +7,7 @@ Team and repository tags
 .. Change things from this point on

+
 Project description
 ===================
@@ -26,4 +28,5 @@ require it or to use different segments and, for example, route between them.

+
 Contribution guidelines
 -----------------------

 For the process of new feature addition, refer to the `Kuryr Policy <https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies>`_
@@ -1,3 +1,4 @@
 ====================
 Kuryr Heat Templates
 ====================
+
@@ -5,8 +6,9 @@ This set of scripts and Heat templates are useful for deploying devstack
 scenarios. It handles the creation of an allinone devstack nova instance and its
 networking needs.

+
 Prerequisites
-~~~~~~~~~~~~~
+-------------

 Packages to install on the host you run devstack-heat (not on the cloud server):
@@ -29,8 +31,9 @@ After creating the instance, devstack-heat will immediately start creating a
 devstack `stack` user and using devstack to stack kuryr-kubernetes. When it is
 finished, there'll be a file names `/opt/stack/ready`.

+
 How to run
-~~~~~~~~~~
+----------

 In order to run it, make sure that you have sourced your OpenStack cloud
 provider openrc file and tweaked `hot/parameters.yml` to your liking then launch
@@ -53,8 +56,11 @@ This will create a stack named *gerrit_465657*. Further devstack-heat
 subcommands should be called with the whole name of the stack, i.e.,
 *gerrit_465657*.

+
 Getting inside the deployment
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-You can then ssh into the deployment in two ways:
+You can then ssh into the deployment in two ways::
@@ -79,8 +85,9 @@ To delete the deployment::

     ./devstack-heat unstack name_of_my_stack

+
 Supported images
-----------------
+~~~~~~~~~~~~~~~~

 It should work with the latest centos7 image. It is not tested with the latest
 ubuntu 16.04 cloud image but it will probably work.
@@ -1,14 +1,17 @@
 ====================
 Kuryr kubectl plugin
 ====================
+
 This plugin aims to bring kuryr introspection an interaction to the kubectl and
 oc command line tools.

+
 Installation
 ------------

 Place the kuryr directory in your ~/.kube/plugins

+
 Usage
 -----
@@ -16,6 +19,7 @@ The way to use it is via the kubectl/oc plugin facility::

     kubectl plugin kuryr get vif -o wide -l deploymentconfig=demo

+
 Media
 -----
@@ -1,3 +1,4 @@
 =============================
 Subport pools management tool
 =============================
+
@@ -1,4 +1,5 @@
 ============
 Contributing
 ============
+
 .. include:: ../../CONTRIBUTING.rst
@@ -16,9 +16,9 @@
 Kuryr Kubernetes Health Manager Design
 ======================================

 Purpose
 -------

 The purpose of this document is to present the design decision behind
 Kuryr Kubernetes Health Managers.
@@ -26,6 +26,7 @@ The main purpose of the Health Managers is to perform Health verifications that
 assures readiness and liveness to Kuryr Controller and CNI pod, and so improve
 the management that Kubernetes does on Kuryr-Kubernetes pods.

+
 Overview
 --------
@@ -46,8 +47,10 @@ configurations are properly verified to assure CNI daemon is in a good shape.
 On this way, the CNI Health Manager will check and serve the health state to
 Kubernetes readiness and liveness probes.

+
 Proposed Solution
 -----------------

 One of the endpoints provided by the Controller Health Manager will check
 whether it is able to watch the Kubernetes API, authenticate with Keystone
 and talk to Neutron, since these are services needed by Kuryr Controller.
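Since the point of these endpoints is to feed Kubernetes readiness and liveness probes, a probe wiring along these lines is the expected consumer. This is a sketch only; the port (8082) and the ``/ready``/``/alive`` paths are assumptions about kuryr's defaults, not something this diff defines::

    # Hypothetical probe section of a kuryr-controller pod spec.
    readinessProbe:
      httpGet:
        path: /ready
        port: 8082
    livenessProbe:
      httpGet:
        path: /alive
        port: 8082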
@@ -16,9 +16,9 @@
 Active/Passive High Availability
 ================================

 Overview
 --------

 Initially it was assumed that there will only be a single kuryr-controller
 instance in the Kuryr-Kubernetes deployment. While it simplified a lot of
 controller code, it is obviously not a perfect situation. Having redundant
@@ -29,16 +29,20 @@ Now with introduction of possibility to run Kuryr in Pods on Kubernetes cluster
 HA is much easier to be implemented. The purpose of this document is to explain
 how will it work in practice.

+
 Proposed Solution
 -----------------

 There are two types of HA - Active/Passive and Active/Active. In this document
 we'll focus on the former. A/P basically works as one of the instances being
 the leader (doing all the exclusive tasks) and other instances waiting in
 *standby* mode in case the leader *dies* to take over the leader role. As you
 can see a *leader election* mechanism is required to make this work.

+
 Leader election
-+++++++++++++++
+~~~~~~~~~~~~~~~

 The idea here is to use leader election mechanism based on Kubernetes
 endpoints. The idea is neatly `explained on Kubernetes blog
 <https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/>`_.
@@ -67,8 +71,10 @@ This adds a new container to the pod. This container will do the
 leader-election and expose the simple JSON API on port 16401 by default. This
 API will be available to kuryr-controller container.

+
 Kuryr Controller Implementation
-+++++++++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 The main issue with having multiple controllers is task division. All of the
 controllers are watching the same endpoints and getting the same notifications,
 but those notifications cannot be processed by multiple controllers at once,
@@ -93,8 +99,10 @@ Please note that this means that in HA mode Watcher will not get started on
 controller startup, but only when periodic task will notice that it is the
 leader.

+
 Issues
-++++++
+~~~~~~

 There are certain issues related to orphaned OpenStack resources that we may
 hit. Those can happen in two cases:
@@ -116,8 +124,10 @@ The latter of the issues can also be tackled by saving last seen
 ``resourceVersion`` of watched resources list when stopping the Watcher and
 restarting watching from that point.

+
 Future enhancements
-+++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~

 It would be useful to implement the garbage collector and
 ``resourceVersion``-based protection mechanism described in section above.
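For illustration, the sidecar approach described above could look roughly like this in the kuryr-controller Deployment. The image and election name are assumptions; only the default port 16401 comes from the text::

    # Hypothetical leader-elector sidecar container.
    - name: leader-elector
      image: gcr.io/google_containers/leader-elector:0.5
      args:
        - --election=kuryr-controller
        - --http=0.0.0.0:16401
      ports:
        - containerPort: 16401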
@@ -20,6 +20,7 @@
 (Avoid deeper levels because they do not render well.)

+
 ===========================
 Design and Developer Guides
 ===========================
@@ -48,6 +49,7 @@ Design documents
    network_policy
    updating_pod_resources_api

+
 Indices and tables
 ------------------
@@ -16,22 +16,26 @@
 Kuryr Kubernetes Integration Design
 ===================================

 Purpose
 -------

 The purpose of this document is to present the main Kuryr-K8s integration
 components and capture the design decisions of each component currently taken
 by the kuryr team.

+
 Goal Statement
 --------------

 Enable OpenStack Neutron realization of the Kubernetes networking. Start by
 supporting network connectivity and expand to support advanced features, such
 as Network Policies. In the future, it may be extended to some other
 openstack services.

+
 Overview
 --------

 In order to integrate Neutron into kubernetes networking, 2 components are
 introduced: Controller and CNI Driver.
 Controller is a supervisor component responsible to maintain translation of
@@ -47,8 +51,10 @@ Please see below the component view of the integrated system:
    :align: center
    :width: 100%

+
 Design Principles
 -----------------

 1. Loose coupling between integration components.
 2. Flexible deployment options to support different project, subnet and
    security groups assignment profiles.
@@ -64,8 +70,10 @@ Design Principles
    configuration. If some vendor requires some extra code, it should be handled
    in one of the stevedore drivers.

+
 Kuryr Controller Design
 -----------------------

 Controller is responsible for watching Kubernetes API endpoints to make sure
 that the corresponding model is maintained in Neutron. Controller updates K8s
 resources endpoints' annotations to keep neutron details required by the CNI
@@ -73,16 +81,20 @@ driver as well as for the model mapping persistency.

 Controller is composed from the following components:

+
 Watcher
 ~~~~~~~

 Watcher is a common software component used by both the Controller and the CNI
 driver. Watcher connects to Kubernetes API. Watcher's responsibility is to observe the
 registered (either on startup or dynamically during its runtime) endpoints and
 invoke registered callback handler (pipeline) to pass all events from
 registered endpoints.

+
 Event Handler
 ~~~~~~~~~~~~~

 EventHandler is an interface class for the Kubernetes event handling. There are
 several 'wrapper' event handlers that can be composed to implement Controller
 handling pipeline.
@@ -107,8 +119,10 @@ facility.
   handlers based on event content and handler predicate provided during event
   handler registration.

+
 ControllerPipeline
 ~~~~~~~~~~~~~~~~~~

 ControllerPipeline serves as an event dispatcher of the Watcher for Kuryr-K8s
 controller Service. Currently watched endpoints are 'pods', 'services' and
 'endpoints'. Kubernetes resource event handlers (Event Consumers) are registered into
@@ -127,8 +141,10 @@ order arrival. Events of different Kubernetes objects are handled concurrently.
    :align: center
    :width: 100%

+
 ResourceEventHandler
 ~~~~~~~~~~~~~~~~~~~~

 ResourceEventHandler is a convenience base class for the Kubernetes event processing.
 The specific Handler associates itself with specific Kubernetes object kind (through
 setting OBJECT_KIND) and is expected to implement at least one of the methods
@@ -139,8 +155,10 @@ actions, Handler has 'on_present' method that is invoked for both event types.
 The specific Handler implementation should strive to put all the common ADDED
 and MODIFIED event handling logic in this method to avoid code duplication.

+
 Pluggable Handlers
 ~~~~~~~~~~~~~~~~~~

 Starting with the Rocky release, Kuryr-Kubernetes includes a pluggable
 interface for the Kuryr controller handlers.
 The pluggable handlers framework allows :
@@ -170,6 +188,7 @@ at kuryr.conf::

+
 Providers
 ~~~~~~~~~

 Provider (Drivers) are used by ResourceEventHandlers to manage specific aspects
 of the Kubernetes resource in the OpenStack domain. For example, creating a Kubernetes Pod
 will require a neutron port to be created on a specific network with the proper
@@ -185,8 +204,10 @@ drivers. There are drivers that handle the Pod based on the project, subnet
 and security groups specified via configuration settings during cluster
 deployment phase.

+
 NeutronPodVifDriver
 ~~~~~~~~~~~~~~~~~~~

 PodVifDriver subclass should implement request_vif, release_vif and
 activate_vif methods. In case request_vif returns Vif object in down state,
 Controller will invoke activate_vif. Vif 'active' state is required by the
@@ -194,6 +215,7 @@ CNI driver to complete pod handling.
 The NeutronPodVifDriver is the default driver that creates neutron port upon
 Pod addition and deletes port upon Pod removal.

+
 CNI Driver
 ----------
@@ -208,6 +230,7 @@ supposed to be maintained.

 .. _cni-daemon:

+
 CNI Daemon
 ----------
@@ -232,6 +255,7 @@ kubernetes API and added to the registry by Watcher thread, Server will
 eventually get VIF it needs to connect for a given pod. Then it waits for the
 VIF to become active before returning to the CNI Driver.

+
 Communication
 ~~~~~~~~~~~~~
@@ -247,8 +271,10 @@ For reference see updated pod creation flow diagram:
    :align: center
    :width: 100%

+
 /addNetwork
 +++++++++++

 **Function**: Is equivalent of running ``K8sCNIPlugin.add``.

 **Return code:** 201 Created
@@ -257,8 +283,10 @@
 oslo.versionedobject from ``os_vif`` library. On the other side it can be
 deserialized using o.vo's ``obj_from_primitive()`` method.

+
 /delNetwork
 +++++++++++

 **Function**: Is equivalent of running ``K8sCNIPlugin.delete``.

 **Return code:** 204 No content
@@ -271,5 +299,6 @@ to perform its tasks and wait on socket for result.

+
 Kubernetes Documentation
 ------------------------

 The `Kubernetes reference documentation <https://kubernetes.io/docs/reference/>`_
 is a great source for finding more details about Kubernetes API, CLIs, and tools.
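Taken together, the /addNetwork and /delNetwork endpoints reduce the CNI driver to two HTTP calls, sketched below with curl. The daemon's bind address (127.0.0.1:5036 here) is an assumption to be checked against kuryr.conf::

    # /addNetwork: returns 201 Created plus a serialized os_vif VIF object
    $ curl -X POST -H "Content-Type: application/json" \
          -d @cni_params.json http://127.0.0.1:5036/addNetwork

    # /delNetwork: returns 204 No content
    $ curl -X POST -H "Content-Type: application/json" \
          -d @cni_params.json http://127.0.0.1:5036/delNetwork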
@@ -18,11 +18,14 @@ Kuryr Kubernetes Ingress integration design

 Purpose
 -------

 The purpose of this document is to present how Kubernetes Ingress controller
 is supported by the kuryr integration.

+
 Overview
 --------

 A Kubernetes Ingress [1]_ is used to give services externally-reachable URLs,
 load balance traffic, terminate SSL, offer name based virtual
 hosting, and more.
@@ -32,8 +35,10 @@ security configuration.
 A Kubernetes Ingress Controller [2]_ is an entity that watches the apiserver's
 /ingress resources for updates. Its job is to satisfy requests for Ingresses.

+
 Proposed Solution
 -----------------

 The suggested solution is based on extension of the kuryr-kubernetes controller
 handlers functionality to support kubernetes Ingress resources.
 This extension should watch kubernetes Ingresses resources, and the
@@ -48,8 +53,10 @@ content in HTTP header, e.g: HOST_NAME).
 Kuryr will use neutron LBaaS L7 policy capability [3]_ to perform
 the L7 routing task.

+
 SW architecture:
 ----------------

 The following scheme describes the SW modules that provides Ingress controller
 capability in Kuryr Kubernetes context:
@@ -68,8 +75,10 @@ modules:

 Each one of this modules is detailed described below.

+
 Ingress resource creation
 ~~~~~~~~~~~~~~~~~~~~~~~~~

 The kuryr-kubernetes controller will create the L7 router,
 and both Ingress and Service/Endpoints handlers should update the L7
 rules database of the L7 router.
@@ -82,8 +91,10 @@ ingress controller SW :
    :align: center
    :width: 100%

+
 The L7 Router
 ~~~~~~~~~~~~~

 In Kuryr context, a L7 router is actually an externally reachable
 loadbalancer with L7 capabilities.
 For achieving external connectivity the L7 router is attached to a floating
@@ -112,8 +123,10 @@ The next diagram illustrates data flow from external user to L7 router:
    :align: center
    :width: 100%

+
 Ingress Handler
 ~~~~~~~~~~~~~~~

 The Ingress Handler watches the apiserver's for updates to
 the Ingress resources and should satisfy requests for Ingresses.
 Each Ingress being translated to a L7 policy in L7 router, and the rules on
@@ -126,16 +139,20 @@ Since the Service/Endpoints resource is not aware of changes in Ingress objects
 pointing to it, the Ingress handler should trigger this notification,
 the notification will be implemented using annotation.

+
 Service/Endpoints Handler
 ~~~~~~~~~~~~~~~~~~~~~~~~~

 The Service/Endpoints handler should be **extended** to support the flows
 involving Ingress resources.
 The Service/Endpoints handler should add/delete all its members to/from the
 LBaaS pool mentioned above, in case an Ingress is pointing this
 Service/Endpoints as its destination.

+
 The L7 router driver
 ~~~~~~~~~~~~~~~~~~~~

 The L7 router, Ingress handler and Service/Endpoints handler will
 call the L7 router driver services to create the L7 routing entities chain.
 The L7 router driver will rely on neutron LBaaS functionality.
@@ -156,8 +173,10 @@ entities is given below:
 - The green components are created/released by Ingress handler.
 - The red components are created/released by Service/Endpoints handler.

+
 Use cases examples
 ~~~~~~~~~~~~~~~~~~

 This section describe in details the following scenarios:

 A. Create Ingress, create Service/Endpoints.
@@ -243,7 +262,7 @@ This section describe in details the following scenarios:

 References
 ==========

 .. [1] https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
 .. [2] https://github.com/kubernetes/ingress-nginx/blob/master/README.md
 .. [3] https://wiki.openstack.org/wiki/Neutron/LBaaS/l7
@@ -18,11 +18,14 @@ Kuryr Kubernetes Openshift Routes integration design

 Purpose
 -------

 The purpose of this document is to present how Openshift Routes are supported
 by kuryr-kubernetes.

+
 Overview
 --------

 OpenShift Origin [1]_ is an open source cloud application development and
 hosting platform that automates the provisioning, management and scaling
 of applications.
@@ -39,19 +42,25 @@ incoming connections.
 The Openshift Routes concept introduced before Ingress [3]_ was supported by
 kubernetes, the Openshift Route matches the functionality of kubernetes Ingress.

+
 Proposed Solution
 -----------------

 The solution will rely on L7 router, Service/Endpoints handler and
 L7 router driver components described at kuryr-kubernetes Ingress integration
 design, where a new component - OCP-Route handler, will satisfy requests for
 Openshift Route resources.

+
 Controller Handlers impact:
 ---------------------------

 The controller handlers should be extended to support OCP-Route resource.

+
 The OCP-Route handler
 ~~~~~~~~~~~~~~~~~~~~~

 The OCP-Route handler watches the apiserver's for updates to Openshift
 Route resources.
 The following scheme describes OCP-Route controller SW architecture:
@@ -72,8 +81,10 @@ pointing to it, the OCP-Route handler should trigger this notification,
 the notification will be implemented using annotation of the relevant
 Endpoint resource.

+
 Use cases examples
 ~~~~~~~~~~~~~~~~~~

 This section describes in details the following scenarios:

 A. Create OCP-Route, create Service/Endpoints.
@@ -151,8 +162,10 @@ This section describes in details the following scenarios:
   * As a result to the OCP-Route handler notification, the Service/Endpoints
     handler will set its internal state to 'no Ingress is pointing' state.

+
 References
 ==========

 .. [1] https://www.openshift.org/
 .. [2] https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
 .. [3] https://kubernetes.io/docs/concepts/Services-networking/ingress/
@@ -12,24 +12,29 @@
 ''''''' Heading 4
 (Avoid deeper levels because they do not render well.)

-================================
+==============
 Network Policy
-================================
+==============

 Purpose
 --------

 The purpose of this document is to present how Network Policy is supported by
 Kuryr-Kubernetes.

+
 Overview
 --------

 Kubernetes supports a Network Policy object to express ingress and egress rules
 for pods. Network Policy reacts on labels to qualify multiple pods, and defines
 rules based on differents labeling and/or CIDRs. When combined with a
 networking plugin, those policy objetcs are enforced and respected.

+
 Proposed Solution
 -----------------

 Kuryr-Kubernetes relies on Neutron security groups and security group rules to
 enforce a Network Policy object, more specifically one security group per policy
 with possibly multiple rules. Each object has a namespace scoped Network Policy
@@ -70,67 +75,85 @@ side effects/actions of when a Network Policy is being enforced.
   expressions, mix of namespace and pod selector, ip block
 * named port

+
 New handlers and drivers
-++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~

 The Network Policy handler
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++

 This handler is responsible for triggering the Network Policy Spec processing,
 and the creation or removal of security group with appropriate security group
 rules. It also, applies the security group to the pods and services affected
 by the policy.

+
 The Pod Label handler
-~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++

 This new handler is responsible for triggering the update of a security group
 rule upon pod labels changes, and its enforcement on the pod port and service.

+
 The Network Policy driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++

 Is the main driver. It ensures a Network Policy by processing the Spec
 and creating or updating the Security group with appropriate
 security group rules.

+
 The Network Policy Security Group driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++++++++++++

 It is responsible for creating, deleting, or updating security group rules
 for pods, namespaces or services based on different Network Policies.

+
 Modified handlers and drivers
-+++++++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 The VIF handler
-~~~~~~~~~~~~~~~
++++++++++++++++

 As network policy rules can be defined based on pod labels, this handler
 has been enhanced to trigger a security group rule creation or deletion,
 depending on the type of pod event, if the pod is affected by the network
 policy and if a new security group rule is needed. Also, it triggers the
 translation of the pod rules to the affected service.

+
 The Namespace handler
-~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++

 Just as the pods labels, namespaces labels can also define a rule in a
 Network Policy. To account for this, the namespace handler has been
 incremented to trigger the creation, deletion or update of a
 security group rule, in case the namespace affects a Network Policy rule,
 and the translation of the rule to the affected service.

+
 The Namespace Subnet driver
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++

 In case of a namespace event and a Network Policy enforcement based
 on the namespace, this driver creates a subnet to this namespace,
 and restrict the number of security group rules for the Network Policy
 to just one with the subnet CIDR, instead of one for each pod in the namespace.

+
 The LBaaS driver
-~~~~~~~~~~~~~~~~
+++++++++++++++++

 To restrict the incoming traffic to the backend pods, the LBaaS driver
 has been enhanced to translate pods rules to the listener port, and react
 to Service ports updates. E.g., when the target port is not allowed by the
 policy enforced in the pod, the rule should not be added.

+
 The VIF Pool driver
-~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++

 The VIF Pool driver is responsible for updating the Security group applied
 to the pods ports. It has been modified to embrace the fact that with Network
 Policies pods' ports changes their security group while being used, meaning the
@@ -141,13 +164,16 @@ and host id. Thus if there is no ports on the pool with the needed
 security group id(s), one of the existing ports in the pool is updated
 to match the requested sg Id.

+
 Use cases examples
-++++++++++++++++++
+~~~~~~~~~~~~~~~~~~

 This section describes some scenarios with a Network Policy being enforced,
 what Kuryr componenets gets triggered and what resources are created.

+
 Deny all incoming traffic
-~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++

 By default, Kubernetes clusters do not restrict traffic. Only once a network
 policy is enforced to a namespace, all traffic not explicitly allowed in the
@@ -194,8 +220,9 @@ are assumed to assumed to affect Ingress.
     securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
     securityGroupName: sg-default-deny

+
 Allow traffic from pod
-~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++

 The following Network Policy specification has a single rule allowing traffic
 on a single port from the group of pods that have the label ``role=monitoring``.
@@ -264,8 +291,9 @@ restriction was enforced.
     securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
     securityGroupName: sg-allow-monitoring-via-pod-selector

+
 Allow traffic from namespace
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++

 The following network policy only allows allowing ingress traffic
 from namespace with the label ``purpose=test``:
@@ -339,16 +367,19 @@ egress rule allowing traffic to everywhere.
   that affects ingress traffic is created, and also everytime
   a pod or namespace is created.

+
 Create network policy flow
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++

 .. image:: ../../images/create_network_policy_flow.svg
    :alt: Network Policy creation flow
    :align: center
    :width: 100%

+
 Create pod flow
-~~~~~~~~~~~~~~~
++++++++++++++++

 The following diagram only covers the implementation part that affects
 network policy.
@@ -357,8 +388,10 @@ network policy.
    :align: center
    :width: 100%

+
 Network policy rule definition
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++++

 ======================== ======================= ==============================================
 NamespaceSelector        podSelector             Expected result
 ======================== ======================= ==============================================
@@ -381,8 +414,10 @@ ingress: [] Deny all traffic
 No ingress               Blocks all traffic
 ======================== ================================================

+
 Policy types definition
-~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++

 =============== ===================== ======================= ======================
 PolicyType      Spec Ingress/Egress   Ingress generated rules Egress generated rules
 =============== ===================== ======================= ======================
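For reference, the "deny all incoming traffic" scenario discussed in the hunks above corresponds to a standard Kubernetes manifest like the following; enforcing it is what makes Kuryr create a security group such as *sg-default-deny*::

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny
    spec:
      podSelector: {}
      policyTypes:
        - Ingress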
@@ -18,6 +18,7 @@ Kuryr Kubernetes Port CRD Usage

+
 Purpose
 -------

 The purpose of this document is to present Kuryr Kubernetes Port and PortPool
 CRD [1]_ usage, capturing the design decisions currently taken by the Kuryr
 team.
@@ -32,8 +33,10 @@ Having the details in K8s data model should also serve the case where Kuryr is
 used as generic SDN K8s integration framework. This means that Port CRD can be
 not neutron specific.

+
 Overview
 --------

 Interactions between Kuryr and Neutron may take more time than desired from
 the container management perspective.
@@ -46,8 +49,10 @@ them available in case of Kuryr Controller restart. Since Kuryr is stateless
 service, the details should be kept either as part of Neutron or Kubernetes
 data. Due to the perfromance costs, K8s option is more performant.

+
 Proposed Solution
 -----------------

 The proposal is to start relying on K8s CRD objects more and more.
 The first action is to create a KuryrPort CRD where the needed information
 about the Neutron Ports will be stored (or any other SDN).
@@ -192,7 +197,8 @@ Note this is similar to the approach already followed by the network per
 namespace subnet driver and it could be similarly applied to other SDN
 resources, such as LoadBalancers.

 References
 ==========
-.. [1] https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
+
+.. [1] https://kubernetes.io/docs/concepts/api-extension/custom-resources/#custom-resources
@@ -16,9 +16,9 @@
 Kuryr Kubernetes Port Manager Design
 ====================================

 Purpose
 -------

 The purpose of this document is to present Kuryr Kubernetes Port Manager,
 capturing the design decision currently taken by the kuryr team.
@@ -28,8 +28,10 @@ the amount of calls to Neutron by ensuring port reusal as well as performing
 bulk actions, e.g., creating/deleting several ports within the same Neutron
 call.

+
 Overview
 --------

 Interactions between Kuryr and Neutron may take more time than desired from
 the container management perspective.
@@ -47,8 +49,10 @@ consequently remove the waiting time for:
 - Creating ports and waiting for them to become active when booting containers
 - Deleting ports when removing containers

+
 Proposed Solution
 -----------------

 The Port Manager will be in charge of handling Neutron ports. The main
 difference with the current implementation resides on when and how these
 ports are managed. The idea behind is to minimize the amount of calls to the
@@ -61,6 +65,7 @@ can be added.

+
 Ports Manager
 ~~~~~~~~~~~~~

 The Port Manager will handle different pool of Neutron ports:

 - Available pools: There will be a pool of ports for each tenant, host (or
@@ -105,8 +110,10 @@ In addition, a Time-To-Live (TTL) could be set to the ports at the pool, so
 that if they are not used during a certain period of time, they are removed --
 if and only if the available_pool size is still larger than the target minimum.

+
 Recovery of pool ports upon Kuryr-Controller restart
-++++++++++++++++++++++++++++++++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 If the Kuryr-Controller is restarted, the pre-created ports will still exist
 on the Neutron side but the Kuryr-controller will be unaware of them, thus
 pre-creating more upon pod allocation requests. To avoid having these existing
@@ -134,8 +141,10 @@ attached to each existing trunk port to find where the filtered ports are
 attached and then obtain all the needed information to re-add them into the
 corresponding pools.

+
 Kuryr Controller Impact
-+++++++++++++++++++++++
+~~~~~~~~~~~~~~~~~~~~~~~

 A new VIF Pool driver is created to manage the ports pools upon pods creation
 and deletion events. It will ensure that a pool with at least X ports is
 available for each tenant, host or trunk port, and security group, when the
@@ -151,16 +160,20 @@ changes related to the VIF drivers. The VIF drivers (neutron-vif and nested)
 will be extended to support bulk ports creation of Neutron ports and similarly
 for the VIF objects requests.

+
 Future enhancement
-''''''''''''''''''
+++++++++++++++++++

 The VIFHandler needs to be aware of the new Pool driver, which will load the
 respective VIF driver to be used. In a sense, the Pool Driver will be a proxy
 to the VIF Driver, but also managing the pools. When a mechanism to load and
 set the VIFHandler drivers is in place, this will be reverted so that the
 VIFHandlers becomes unaware of the pool drivers.

+
 Kuryr CNI Impact
-++++++++++++++++
+~~~~~~~~~~~~~~~~

 For the nested vlan case, the subports at the different pools are already
 attached to the VMs trunk ports, therefore they are already in ACTIVE status.
 However, for the generic case the ports are not really bond to anything (yet),
@@ -175,6 +188,8 @@ OVS agent sees them as 'still connected' and maintains their ACTIVE status.
 This modification must ensure the OVS (br-int) ports where these veth devices
 are connected are not deleted after container deletion by the CNI.

+
 Future enhancement
-''''''''''''''''''
+++++++++++++++++++

 The CNI modifications will be implemented in a second phase.
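Enabling the pool-based behaviour described above is a kuryr.conf change roughly like the following; the option names are quoted from memory of kuryr's ports-pool settings and should be verified against the sample config::

    [kubernetes]
    vif_pool_driver = nested   # or neutron for the non-nested case

    [vif_pool]
    ports_pool_min = 5         # target minimum kept per pool
    ports_pool_max = 0         # 0 disables the upper limit
    ports_pool_batch = 10      # bulk creation size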
@@ -16,15 +16,17 @@
 Kuryr Kubernetes Services Integration Design
 ============================================

+
 Purpose
 -------

 The purpose of this document is to present how Kubernetes Service is supported
 by the kuryr integration and to capture the design decisions currently taken
 by the kuryr team.

+
 Overview
 --------

 A Kubernetes Service is an abstraction which defines a logical set of Pods and
 a policy by which to access them. Service is a Kubernetes managed API object.
 For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
@@ -33,8 +35,10 @@ please refer to `Kubernetes service <http://kubernetes.io/docs/user-guide/servic
 Kubernetes supports services with kube-proxy component that runs on each node,
 `Kube-Proxy <http://kubernetes.io/docs/admin/kube-proxy/>`_.

+
 Proposed Solution
 -----------------

 Kubernetes service in its essence is a Load Balancer across Pods that fit the
 service selection. Kuryr's choice is to support Kubernetes services by using
 Neutron LBaaS service. The initial implementation is based on the OpenStack
@@ -45,13 +49,17 @@ This may be affected once Kubernetes Network Policies will be supported.
 Oslo versioned objects are used to keep translation details in Kubernetes entities
 annotation. This will allow future changes to be backward compatible.

+
 Data Model Translation
 ~~~~~~~~~~~~~~~~~~~~~~

 Kubernetes service is mapped to the LBaaSv2 Load Balancer with associated
 Listeners and Pools. Service endpoints are mapped to Load Balancer Pool members.

+
 Kuryr Controller Impact
 ~~~~~~~~~~~~~~~~~~~~~~~

 Two Kubernetes Event Handlers are added to the Controller pipeline.

 - LBaaSSpecHandler manages Kubernetes Service creation and modification events.
@@ -16,9 +16,9 @@
 HowTo Update PodResources gRPC API
 ==================================

 Purpose
 -------

 The purpose of this document is to describe how to update gRPC API files in
 kuryr-kubernetes repository in case of upgrading to a new version of Kubernetes
 PodResources API. These files are ``api_pb2_grpc.py``, ``api_pb2.py`` and
@@ -42,8 +42,10 @@ Kubernetes source tree.
   version (this is highly unlikely). In this case ``protobuf`` could fail
   to use our python bindings.

+
 Automated update
 ----------------

 ``contrib/regenerate_pod_resources_api.sh`` script could be used to re-generate
 PodResources gRPC API files. By default, this script will download ``v1alpha1``
 version of ``api.proto`` file from the Kubernetes GitHub repo and create
@@ -61,6 +63,7 @@ Define ``API_VERSION`` environment variable to use specific version of

     $ export API_VERSION=v1alpha1

+
 Manual update steps
 -------------------
@@ -82,6 +85,7 @@ Don't forget to update the file header that should point to the original
   // To regenerate api.proto, api_pb2.py and api_pb2_grpc.py follow instructions
   // from doc/source/devref/updating_pod_resources_api.rst.

+
 Generating the python bindings
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -116,4 +120,3 @@ Generating the python bindings
    (venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
                               --python_out=. --grpc_python_out=. \
                               kuryr_kubernetes/pod_resources/api.proto
-
@@ -18,12 +18,15 @@ VIF-Handler And Vif Drivers Design

 Purpose
 -------

 The purpose of this document is to present an approach for implementing
 design of interaction between VIF-handler and the drivers it uses in
 Kuryr-Kubernetes Controller.

+
 VIF-Handler
 -----------

 VIF-handler is intended to handle VIFs. The main aim of VIF-handler is to get
 the pod object, send it to 1) the VIF-driver for the default network, 2)
 enabled Multi-VIF drivers for the additional networks, and get VIF objects
@@ -31,8 +34,10 @@ from both. After that VIF-handler is able to activate, release or update VIFs.
 VIF-handler should stay clean whereas parsing of specific pod information
 should be done by Multi-VIF drivers.

+
 Multi-VIF driver
 ~~~~~~~~~~~~~~~~

 The new type of drivers which is used to call other VIF-drivers to attach
 additional interfaces to Pods. The main aim of this kind of drivers is to get
 additional interfaces from the Pods definition, then invoke real VIF-drivers
@@ -53,8 +58,10 @@ Diagram describing VifHandler - Drivers flow is giver below:
    :align: center
    :width: 100%

+
 Config Options
 ~~~~~~~~~~~~~~

 Add new config option "multi_vif_drivers" (list) to config file that shows
 what Multi-VIF drivers should be used in to specify the addition VIF objects.
 It is allowed to have one or more multi_vif_drivers enabled, which means that
@@ -78,8 +85,10 @@ Or like this:

     multi_vif_drivers = npwg_multiple_interfaces

+
 Additional Subnets Driver
 ~~~~~~~~~~~~~~~~~~~~~~~~~

 Since it is possible to request additional subnets for the pod through the pod
 annotations it is necessary to have new driver. According to parsed information
 (requested subnets) by Multi-vif driver it has to return dictionary containing
@@ -104,6 +113,7 @@ Here's how a Pod Spec with additional subnets requests might look like:

+
 SRIOV Driver
 ~~~~~~~~~~~~

 SRIOV driver gets pod object from Multi-vif driver, according to parsed
 information (sriov requests) by Multi-vif driver. It should return a list of
 created vif objects. Method request_vif() has unified interface with
@@ -123,6 +133,7 @@ Here's how a Pod Spec with sriov requests might look like:

+
 Specific ports support
 ----------------------

 Specific ports support is enabled by default and will be a part of the drivers
 to implement it. It is possile to have manually precreated specific ports in
 neutron and specify them in pod annotations as preferably used. This means that
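Putting the config option described above into practice, a kuryr.conf enabling two Multi-VIF drivers could look like this; the ``[kubernetes]`` section and the ``additional_subnets`` driver name are assumptions, while ``npwg_multiple_interfaces`` is shown in the hunks themselves::

    [kubernetes]
    multi_vif_drivers = npwg_multiple_interfaces,additional_subnets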
@@ -8,6 +8,7 @@ Welcome to kuryr-kubernetes's documentation!

+
 Contents
 --------

 .. toctree::
    :maxdepth: 3
@@ -16,6 +17,7 @@ Contents
    usage
    contributing

+
 Developer Docs
 --------------
@@ -24,6 +26,7 @@ Developer Docs

    devref/index

+
 Design Specs
 ------------
@@ -37,9 +40,9 @@ Design Specs
    specs/rocky/npwg_spec_support
    specs/stein/vhostuser

 Indices and tables
 ------------------

 * :ref:`genindex`
 * :ref:`search`
@@ -1,3 +1,4 @@
 ================================================
 Kuryr installation as a Kubernetes network addon
 ================================================
+
@@ -24,6 +25,7 @@ Deployment and kuryr-cni DaemonSet definitions to use pre-built
 `controller <https://hub.docker.com/r/kuryr/controller/>`_ and `cni <https://hub.docker.com/r/kuryr/cni/>`_
 images from the Docker Hub. Those definitions will be generated in next step.

+
 Generating Kuryr resource definitions for Kubernetes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -113,6 +115,7 @@ This should generate 5 files in your ``<output_dir>``:
 In case when Open vSwitch keeps vhostuser socket files not in /var/run/openvswitch, openvswitch
 mount point in cni_ds.yaml and [vhostuser] section in config_map.yml should be changed properly.

+
 Deploying Kuryr resources on Kubernetes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1,3 +1,4 @@
 =============================
 Inspect default Configuration
 =============================
+
@@ -1,3 +1,4 @@
 ===========================
 Basic DevStack installation
 ===========================
+
@@ -9,6 +10,7 @@ operating systems. It is also assumed that ``git`` is already installed on the
 system. DevStack will make sure to install and configure OpenStack, Kubernetes
 and dependencies of both systems.

+
 Cloning required repositories
 -----------------------------
@@ -139,4 +141,4 @@ be found in `DevStack Documentation
 <https://docs.openstack.org/devstack/latest/>`_, especially in section
 `Using Systemd in DevStack
 <https://docs.openstack.org/devstack/latest/systemd.html>`_, which explains how
-to use ``systemctl`` to control services and ``journalctl`` to read its logs.
+to use ``systemctl`` to control services and ``journalctl`` to read its logs.
@@ -1,3 +1,4 @@
 ==========================
 Containerized installation
 ==========================
+
@@ -5,6 +6,7 @@ It is possible to configure DevStack to install kuryr-controller and kuryr-cni
 on Kubernetes as pods. Details can be found on :doc:`../containerized` page,
 this page will explain DevStack aspects of running containerized.

+
 Installation
 ------------
@@ -17,6 +19,7 @@ line to your ``local.conf``: ::
 This will trigger building the kuryr-controller and kuryr-cni containers during
 installation, as well as will deploy those on Kubernetes cluster it installed.

+
 Rebuilding container images
 ---------------------------
@@ -24,6 +27,7 @@ Instructions on how to manually rebuild both kuryr-controller and kuryr-cni
 container images are presented on :doc:`../containerized` page. In case you want
 to test any code changes, you need to rebuild the images first.

+
 Changing configuration
 ----------------------
@@ -38,12 +42,14 @@ present in the ConfigMap: kuryr.conf and kuryr-cni.conf. First one is attached
 to kuryr-controller and second to kuryr-cni. Make sure to modify both when doing
 changes important for both services.

+
 Restarting services
 -------------------

 Once any changes are made to docker images or the configuration, it is crucial
 to restart pod you've modified.

+
 kuryr-controller
 ~~~~~~~~~~~~~~~~
@@ -56,6 +62,7 @@ kill existing pod: ::

 Deployment controller will make sure to restart the pod with new configuration.

+
 kuryr-cni
 ~~~~~~~~~
@@ -1,6 +1,6 @@
-=========================================
+=======================================
 Kuryr Kubernetes Dragonflow Integration
-=========================================
+=======================================

 Dragonflow is a distributed, modular and extendable SDN controller that
 enables to connect cloud network instances (VMs, Containers and Bare Metal
@@ -21,14 +21,15 @@ networking interface for Dragonflow.

 Testing with DevStack
-=====================
+---------------------

 The next points describe how to test OpenStack with Dragonflow using DevStack.
 We will start by describing how to test the baremetal case on a single host,
 and then cover a nested environemnt where containers are created inside VMs.

+
 Single Node Test Environment
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 1. Create a test system.
@@ -98,7 +99,7 @@ rewritten to your network controller's ip address and sent out on the network:

 Inspect default Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++

 In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
@@ -108,7 +109,7 @@ you can check the `Inspect default Configuration`_.

 Testing Network Connectivity
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++

 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.
@@ -117,7 +118,7 @@ among pods. To do that check out `Testing Network Connectivity`_.

 Nested Containers Test Environment (VLAN)
------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Another deployment option is the nested-vlan where containers are created
 inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
@@ -129,7 +130,7 @@ the kuryr components.

 Undercloud deployment
-~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++

 The steps to deploy the undercloud environment are the same as described above
 for the `Single Node Test Environment` with the different sample local.conf to
@@ -165,7 +166,7 @@ steps detailed at `Boot VM with a Trunk Port`_.

 Overcloud deployment
-~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++

 Once the VM is up and running, we can start with the overcloud configuration.
 The steps to perform are the same as without Dragonflow integration, i.e., the
@@ -182,10 +183,12 @@ same steps as for ML2/OVS:

 Testing Nested Network Connectivity
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++

 Similarly to the baremetal testing, we can create a demo deployment at the
 overcloud VM, scale it to any number of pods and expose the service to check if
 the deployment was successful. To do that check out
 `Testing Nested Network Connectivity`_.

+
 .. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html
@@ -20,6 +20,7 @@
 (Avoid deeper levels because they do not render well.)

+
 ===========================
 DevStack based Installation
 ===========================
@@ -27,6 +28,7 @@ This section describes how you can install and configure kuryr-kubernetes with
 DevStack for testing different functionality, such as nested or different
 ML2 drivers.

+
 .. toctree::
    :maxdepth: 1
@@ -1,3 +1,4 @@
 ============================================
 How to try out nested-pods locally (MACVLAN)
 ============================================
+
@@ -1,3 +1,4 @@
 =================================================
 How to try out nested-pods locally (VLAN + trunk)
 =================================================
+
@@ -16,14 +16,15 @@ deployment. Kuryr acts as the container networking interface for OpenDaylight.

 Testing with DevStack
-=====================
+---------------------

 The next points describe how to test OpenStack with ODL using DevStack.
 We will start by describing how to test the baremetal case on a single host,
 and then cover a nested environemnt where containers are created inside VMs.

+
 Single Node Test Environment
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 1. Create a test system.
@@ -106,7 +107,7 @@ ip address and sent out on the network:

 Inspect default Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++

 In order to check the default configuration, in term of networks, subnets,
 security groups and loadbalancers created upon a successful devstack stacking,
@@ -116,7 +117,7 @@ you can check the `Inspect default Configuration`_.

 Testing Network Connectivity
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++++++++++

 Once the environment is ready, we can test that network connectivity works
 among pods. To do that check out `Testing Network Connectivity`_.
@@ -125,7 +126,7 @@ among pods. To do that check out `Testing Network Connectivity`_.

 Nested Containers Test Environment (VLAN)
------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Another deployment option is the nested-vlan where containers are created
 inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
@@ -137,7 +138,7 @@ components.

 Undercloud deployment
-~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++

 The steps to deploy the undercloud environment are the same described above
 for the `Single Node Test Environment` with the different of the sample
@@ -172,7 +173,7 @@ steps detailed at `Boot VM with a Trunk Port`_.

 Overcloud deployment
-~~~~~~~~~~~~~~~~~~~~
+++++++++++++++++++++

 Once the VM is up and running, we can start with the overcloud configuration.
 The steps to perform are the same as without ODL integration, i.e., the
@@ -189,7 +190,8 @@ same steps as for ML2/OVS:

 Testing Nested Network Connectivity
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++

 Similarly to the baremetal testing, we can create a demo deployment at the
 overcloud VM, scale it to any number of pods and expose the service to check if
 the deployment was successful. To do that check out
@ -13,14 +13,15 @@ nested) containers and VM networking in a OVN-based OpenStack deployment.
|
|||
|
||||
|
||||
Testing with DevStack
|
||||
=====================
|
||||
---------------------
|
||||
|
||||
The next points describe how to test OpenStack with OVN using DevStack.
|
||||
We will start by describing how to test the baremetal case on a single host,
|
||||
and then cover a nested environment where containers are created inside VMs.
|
||||
|
||||
|
||||
Single Node Test Environment
|
||||
----------------------------
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
1. Create a test system.
|
||||
|
||||
|
@ -105,21 +106,21 @@ ip address and sent out on the network:
|
|||
|
||||
|
||||
Inspect default Configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
+++++++++++++++++++++++++++++
|
||||
|
||||
In order to check the default configuration, in term of networks, subnets,
|
||||
security groups and loadbalancers created upon a successful devstack stacking,
|
||||
you can check the :doc:`../default_configuration`
|
||||
|
||||
Testing Network Connectivity
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
++++++++++++++++++++++++++++
|
||||
|
||||
Once the environment is ready, we can test that network connectivity works
|
||||
among pods. To do that check out :doc:`../testing_connectivity`
|
||||
|
||||
|
||||
Nested Containers Test Environment (VLAN)
|
||||
-----------------------------------------
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Another deployment option is the nested-vlan where containers are created
|
||||
inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
|
||||
|
@ -131,7 +132,7 @@ components.
|
|||
|
||||
|
||||
Undercloud deployment
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
+++++++++++++++++++++
|
||||
|
||||
The steps to deploy the undercloud environment are the same described above
|
||||
for the `Single Node Test Environment` with the different of the sample
|
||||
|
@ -164,7 +165,7 @@ steps detailed at :doc:`../trunk_ports`
|
|||
|
||||
|
||||
Overcloud deployment
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
++++++++++++++++++++
|
||||
|
||||
Once the VM is up and running, we can start with the overcloud configuration.
|
||||
The steps to perform are the same as without OVN integration, i.e., the
|
||||
|
@@ -178,7 +179,8 @@ same steps as for ML2/OVS:

+
 Testing Nested Network Connectivity
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++

 Similarly to the baremetal testing, we can create a demo deployment at the
 overcloud VM, scale it to any number of pods and expose the service to check if
 the deployment was successful. To do that check out
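That create/scale/expose cycle looks roughly as follows (a sketch; the
deployment name, image and ports are illustrative)::

    $ kubectl create deployment demo --image=quay.io/kuryr/demo
    $ kubectl scale deployment demo --replicas=3
    $ kubectl expose deployment demo --port=80 --target-port=8080
    $ kubectl get svc demo            # then curl the CLUSTER-IP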
@@ -1,3 +1,4 @@
+======================================
 How to enable ports pool with devstack
 ======================================
@@ -1,3 +1,4 @@
+=========================================
 Watching Kubernetes api-server over HTTPS
 =========================================

@@ -20,4 +21,3 @@ If you want to query the HTTPS Kubernetes api-server in ``--insecure`` mode::

     [kubernetes]
     ssl_verify_server_crt = False
-
@@ -1,3 +1,4 @@
+===============
 IPv6 networking
 ===============

@@ -5,6 +6,7 @@ Kuryr Kubernetes can be used with IPv6 networking. In this guide we'll show how
 you can create the Neutron resources and configure Kubernetes and
 Kuryr-Kubernetes to achieve an IPv6-only Kubernetes cluster.

+
 Setting it up
 -------------
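The Neutron side of such a setup looks roughly like the following sketch
(names and the ULA prefix are illustrative, and SLAAC is only one of the
possible address modes)::

    $ openstack network create pod-net-v6
    $ openstack subnet create --network pod-net-v6 --ip-version 6 \
          --ipv6-ra-mode slaac --ipv6-address-mode slaac \
          --subnet-range fd10:0:0:1::/64 pod-subnet-v6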
@@ -193,6 +195,7 @@ Setting it up
 the host Kubernetes API. You should also make sure that the Kubernetes API
 server binds on the IPv6 address of the host.
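With a flag-configured kube-apiserver, for example, that comes down to its
bind/advertise flags (the address here is illustrative)::

    $ kube-apiserver --bind-address='::' --advertise-address='fd10:0:0:1::5' \
          <the rest of the usual flags>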
+
 Troubleshooting
 ---------------
@@ -1,3 +1,4 @@
+====================================
 Installing kuryr-kubernetes manually
 ====================================

@@ -103,6 +104,7 @@ Alternatively you may run it in screen::

     $ screen -dm kuryr-k8s-controller --config-file /etc/kuryr/kuryr.conf -d

+
 Configure kuryr-cni
 -------------------
@@ -157,8 +159,9 @@ to work correctly::

     deactivate
     sudo pip install 'oslo.privsep>=1.20.0' 'os-vif>=1.5.0'

+
 Configure Kuryr CNI Daemon
--------------------------------------
+--------------------------

 Kuryr CNI Daemon is a service designed to increase the scalability of the Kuryr
 operations done on Kubernetes nodes. More information can be found on
@@ -201,6 +204,7 @@ Alternatively you may run it in screen::

     $ screen -dm kuryr-daemon --config-file /etc/kuryr/kuryr.conf -d
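For anything longer-lived than a development session, a process supervisor is
preferable to screen. A minimal, illustrative systemd unit (not shipped by the
project; adjust the binary path to your installation)::

    [Unit]
    Description=Kuryr CNI Daemon
    After=network-online.target

    [Service]
    ExecStart=/usr/local/bin/kuryr-daemon --config-file /etc/kuryr/kuryr.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target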
+
 Kuryr CNI Daemon health checks
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1,3 +1,4 @@
+========================================
 Configure Pod with Additional Interfaces
 ========================================

@@ -89,6 +90,7 @@ defined in step 1.

 You may put a comma-separated list of networks to attach Pods to more
 networks, as sketched below.
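Assuming the NPWG-style annotation consumed by Kuryr's multi-VIF support (the
network names and image are illustrative), such a pod would look like::

    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-if-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: net-a,net-b
    spec:
      containers:
      - name: app
        image: quay.io/kuryr/demo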
+
 Reference
 ---------
@@ -1,3 +1,4 @@
+=============================================================
 Enable network per namespace functionality (handler + driver)
 =============================================================

@@ -92,6 +93,7 @@ to add the namespace handler and state the namespace subnet driver with::

 To disable the enforcement, you need to set the following variable:
 KURYR_ENFORCE_SG_RULES=False
+
 Testing the network per namespace functionality
 -----------------------------------------------
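A quick way to see the driver at work; a sketch with illustrative names (the
exact Neutron resource naming may differ)::

    $ kubectl create namespace test
    $ kubectl run -n test demo --image=quay.io/kuryr/demo
    $ openstack subnet list | grep test   # a subnet for the namespace should appear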
@@ -1,3 +1,4 @@
+===========================================
 Enable network policy support functionality
 ===========================================

@@ -72,6 +73,7 @@ to add the policy, pod_label and namespace handler and drivers with::

 To disable the enforcement, you need to set the following variable:
 KURYR_ENFORCE_SG_RULES=False
+
 Testing the network policy support functionality
 ------------------------------------------------
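For reference, a minimal Kubernetes NetworkPolicy of the kind this
functionality enforces (names and labels are illustrative)::

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-app
    spec:
      podSelector:
        matchLabels:
          app: demo
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: demo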
@@ -1,3 +1,4 @@
+===============================
 Enable OCP-Router functionality
 ===============================

@@ -6,6 +7,7 @@ To enable OCP-Router functionality we should set the following:

 - Setting L7 Router.
 - Configure Kuryr to support L7 Router and OCP-Route resources.

+
 Setting L7 Router
 ------------------
@@ -1,3 +1,4 @@
+================================
 How to enable ports pool support
 ================================

@@ -138,6 +139,7 @@ the right pod-vif driver set.

 Note that if no annotation is set on a node, the default pod_vif_driver is
 used.

+
 Populate pools on subnets creation for namespace subnet driver
 --------------------------------------------------------------
@@ -1,3 +1,4 @@
+==============================
 Kubernetes services networking
 ==============================

@@ -38,6 +39,7 @@ It is beyond the scope of this document to explain in detail the inner workings
 of these two possible Neutron LBaaSv2 backends; thus, only a brief explanation
 will be offered on each.

+
 Legacy Neutron HAProxy agent
 ----------------------------
@@ -63,6 +65,7 @@ listeners and pools are added. Thus you should take into consideration the
 memory requirements that arise from having one HAProxy process per Kubernetes
 Service.

+
 Octavia
 -------
@@ -455,8 +458,10 @@ The services and pods subnets should be created.

 In both 'User' and 'Pool' methods, the external IP address can be found
 in the k8s service status information (under loadbalancer/ingress/ip).
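For example, with kubectl (the service name is illustrative)::

    $ kubectl get svc demo \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}'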
+
 Alternative configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~

 It is actually possible to avoid this routing by performing a deployment change
 that was successfully pioneered by the people at EasyStack Inc., which consists
 of doing the following:
@@ -563,6 +568,7 @@ of doing the following:
 the pod subnet, follow the `Making the Pods be able to reach the Kubernetes API`_
 section.

+
 .. _k8s_lb_reachable:

 Making the Pods be able to reach the Kubernetes API
@@ -685,6 +691,7 @@ Kubernetes service to be accessible to Pods.

     | updated_at                | 2017-08-10T16:46:55                  |
     +---------------------------+--------------------------------------+

+
 .. _services_troubleshooting:

 Troubleshooting
@@ -1,5 +1,6 @@
 .. _sriov:

+=============================
 How to configure SR-IOV ports
 =============================

@@ -165,6 +166,7 @@ To make neutron ports active kuryr-k8s makes requests to neutron API to update
 ports with binding:profile information. Due to this, these actions need to be
 performed by a privileged user with admin rights.

+
 Reference
 ---------
@@ -1,3 +1,4 @@
+============================
 Testing Network Connectivity
 ============================
@@ -1,3 +1,4 @@
+===================================
 Testing Nested Network Connectivity
 ===================================
@@ -1,3 +1,4 @@
+===========================
 Testing SRIOV functionality
 ===========================
@@ -1,3 +1,4 @@
+====================
 Testing UDP Services
 ====================

@@ -151,5 +152,6 @@ Since the `kuryr-udp-demo`_ application concatenates the pod's name to the
 replied message, it is plain to see that both of the service's pods are
 replying to the requests from the client.

+
 .. _kuryr-udp-demo: https://hub.docker.com/r/yboaron/kuryr-udp-demo/
-.. _udp-client: https://github.com/yboaron/udp-client-script
+.. _udp-client: https://github.com/yboaron/udp-client-script
@@ -1,3 +1,4 @@
+=========================
 Boot VM with a Trunk Port
 =========================
@@ -1,5 +1,6 @@
+==========================
 Upgrading kuryr-kubernetes
-===========================
+==========================

 Kuryr-Kubernetes supports the standard OpenStack utility for checking whether
 an upgrade is possible and safe:
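The command itself is elided in this hunk; following the OpenStack-wide
``$PROJECT-status`` convention for such tools, the invocation for this project
would presumably be::

    $ kuryr-k8s-status upgrade check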
@@ -19,6 +20,7 @@ If any issue is found, the utility will give you an explanation and possible
 remediations. Also note that *Warning* results aren't blocking an upgrade, but
 are worth investigating.

+
 Stein (0.6.x) to T (0.7.x) upgrade
 ----------------------------------
@@ -1,6 +1,6 @@
-=====================================
-Kuryr-Kubernetes Release Notes Howto
-=====================================
+====================================
+Kuryr-Kubernetes Release Notes Howto
+====================================

 Release notes are a new feature for documenting new features in
 OpenStack projects. Background on the process, tooling, and
@@ -1,8 +1,9 @@
+========================================================
 Welcome to Kuryr-Kubernetes Release Notes documentation!
 ========================================================

 Contents
-========
+--------

 .. toctree::
    :maxdepth: 1
@@ -1,6 +1,6 @@
-===================================
-Queens Series Release Notes
-===================================
+===========================
+Queens Series Release Notes
+===========================

 .. release-notes::
    :branch: stable/queens
@@ -1,6 +1,6 @@
-===================================
-Rocky Series Release Notes
-===================================
+==========================
+Rocky Series Release Notes
+==========================

 .. release-notes::
    :branch: stable/rocky
@@ -1,6 +1,6 @@
-===================================
-Stein Series Release Notes
-===================================
+==========================
+Stein Series Release Notes
+==========================

 .. release-notes::
    :branch: stable/stein