[Trivial] Doc fix K8S/K8s -> Kubernetes

Change-Id: I9883eca5a73423971493d05e70536ed5571ec553
Janonymous 2017-08-08 05:13:04 +00:00 committed by codevulture
parent aaa5150252
commit 3a8f4196c0
6 changed files with 36 additions and 36 deletions


@@ -35,7 +35,7 @@ Overview
In order to integrate Neutron into Kubernetes networking, two components are
introduced: Controller and CNI Driver.
Controller is a supervisor component responsible for maintaining the translation of
-networking relevant K8s model into the OpenStack (i.e. Neutron) model.
+the networking-relevant Kubernetes model into the OpenStack (i.e. Neutron) model.
This can be considered as a centralized service (supporting HA mode in the
future).
CNI driver is responsible for binding Kubernetes pods on worker nodes into
@@ -56,9 +56,9 @@ Design Principles
should rely on existing communication channels, currently added to the pod
metadata via annotations.
4. CNI Driver should not depend on Neutron. It gets all required details
-from K8s API server (currently through K8s annotations), therefore
+from the Kubernetes API server (currently through Kubernetes annotations), therefore
depending on Controller to perform its translation tasks.
-5. Allow different neutron backends to bind K8s pods without code modification.
+5. Allow different Neutron backends to bind Kubernetes pods without code modification.
This means that both the Controller and the CNI binding mechanism should allow
loading of the vif management and binding components, manifested via
configuration. If some vendor requires some extra code, it should be handled
@@ -76,14 +76,14 @@ Controller is composed from the following components:
Watcher
~~~~~~~
Watcher is a common software component used by both the Controller and the CNI
-driver. Watcher connects to K8s API. Watchers responsibility is to observe the
+driver. Watcher connects to the Kubernetes API. The Watcher's responsibility is to observe the
registered (either on startup or dynamically during its runtime) endpoints and
invoke the registered callback handler (pipeline) to pass all events from
the registered endpoints.
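The Watcher contract can be sketched as follows (a minimal Python sketch; the
class and method names here are illustrative, not the actual kuryr-kubernetes
code)::

    class Watcher(object):
        """Observes registered Kubernetes API endpoints with one handler."""

        def __init__(self, handler):
            self._handler = handler    # the callback pipeline
            self._endpoints = set()

        def add(self, endpoint):
            # e.g. '/api/v1/pods'; may be called on startup or at runtime
            self._endpoints.add(endpoint)

        def start(self):
            for endpoint in self._endpoints:
                # a real implementation streams '?watch=true' events for
                # all endpoints concurrently rather than looping like this
                for event in self._stream(endpoint):
                    self._handler(event)

        def _stream(self, endpoint):
            raise NotImplementedError  # HTTP watch against the API server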
Event Handler
~~~~~~~~~~~~~
-EventHandler is an interface class for the K8s event handling. There are
+EventHandler is an interface class for Kubernetes event handling. There are
several 'wrapper' event handlers that can be composed to implement the Controller
handling pipeline, as in the sketch below.
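A minimal sketch of that composition (the wrapper name is hypothetical; only
the single-callable interface is taken from the text above)::

    import logging

    class EventHandler(object):
        """Interface: consume one Kubernetes event (a dict)."""
        def __call__(self, event):
            raise NotImplementedError

    class LogExceptions(EventHandler):
        """Wrapper handler: delegate, and log unrecovered failures."""
        def __init__(self, handler):
            self._handler = handler

        def __call__(self, event):
            try:
                self._handler(event)
            except Exception:
                logging.exception('Failed to handle %s', event.get('type'))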
@@ -111,16 +111,16 @@ ControllerPipeline
~~~~~~~~~~~~~~~~~~
ControllerPipeline serves as an event dispatcher of the Watcher for the Kuryr-K8s
controller Service. Currently watched endpoints are 'pods', 'services' and
-'endpoints'. K8s resource event handlers (Event Consumers) are registered into
+'endpoints'. Kubernetes resource event handlers (Event Consumers) are registered into
the Controller Pipeline. There is a special EventConsumer, ResourceEventHandler,
-that provides API for K8s event handling. When a watched event arrives, it is
-processed by all Resource Event Handlers registered for specific K8s object
+that provides an API for Kubernetes event handling. When a watched event arrives, it is
+processed by all Resource Event Handlers registered for the specific Kubernetes object
kind. The Pipeline retries resource event handler invocation in
case of a ResourceNotReady exception until it succeeds or the (time-based)
number of retries is reached. Any unrecovered failure is logged without
affecting other Handlers (of the current and other events).
-Events of the same group (same K8s object) are handled sequentially in the
-order arrival. Events of different K8s objects are handled concurrently.
+Events of the same group (same Kubernetes object) are handled sequentially in
+order of arrival. Events of different Kubernetes objects are handled concurrently.
.. image:: ../../images/controller_pipeline.png
:alt: controller pipeline
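The retry behaviour described above amounts to roughly the following (a
sketch reusing the EventHandler interface sketched earlier; the real pipeline
additionally dispatches by object kind and serializes events per object)::

    import time

    class ResourceNotReady(Exception):
        """Raised by a handler when a dependency is not yet available."""

    class Retry(EventHandler):
        """Wrapper: retry on ResourceNotReady within a time budget."""
        def __init__(self, handler, timeout=60, interval=3):
            self._handler = handler
            self._timeout = timeout
            self._interval = interval

        def __call__(self, event):
            deadline = time.time() + self._timeout
            while True:
                try:
                    return self._handler(event)
                except ResourceNotReady:
                    if time.time() >= deadline:
                        raise
                    time.sleep(self._interval)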
@@ -129,11 +129,11 @@ order arrival. Events of different K8s objects are handled concurrently.
ResourceEventHandler
~~~~~~~~~~~~~~~~~~~~
-ResourceEventHandler is a convenience base class for the K8s event processing.
-The specific Handler associates itself with specific K8s object kind (through
+ResourceEventHandler is a convenience base class for Kubernetes event processing.
+The specific Handler associates itself with a specific Kubernetes object kind (through
setting OBJECT_KIND) and is expected to implement at least one of the methods
of the base class to handle at least one of the ADDED/MODIFIED/DELETED events
-of the k8s object. For details, see `k8s-api <https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds>`_.
+of the Kubernetes object. For details, see `k8s-api <https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds>`_.
Since both ADDED and MODIFIED event types trigger a very similar sequence of
actions, the Handler has an on_present method that is invoked for both event types.
The specific Handler implementation should strive to put all the common ADDED
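Put together, the dispatch amounts to roughly this (a sketch; OBJECT_KIND,
on_present and the ADDED/MODIFIED/DELETED event types come from the text
above, the rest is illustrative)::

    class ResourceEventHandler(EventHandler):
        OBJECT_KIND = None   # e.g. 'Pod', set by the specific Handler

        def __call__(self, event):
            obj = event.get('object', {})
            if obj.get('kind') != self.OBJECT_KIND:
                return
            event_type = event.get('type')
            if event_type in ('ADDED', 'MODIFIED'):
                # common ADDED/MODIFIED logic lives in on_present
                self.on_present(obj)
            elif event_type == 'DELETED':
                self.on_deleted(obj)

        def on_present(self, obj):
            pass

        def on_deleted(self, obj):
            pass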
@@ -142,13 +142,13 @@ and MODIFIED event handling logic in this method to avoid code duplication.
Providers
~~~~~~~~~
Providers (Drivers) are used by ResourceEventHandlers to manage specific aspects
-of the K8s resource in the OpenStack domain. For example, creating a K8s Pod
+of the Kubernetes resource in the OpenStack domain. For example, creating a Kubernetes Pod
will require a Neutron port to be created on a specific network with the proper
security groups applied to it. There will be dedicated Drivers for Project,
Subnet, Port and Security Groups settings in Neutron. For instance, the Handler
that processes pod events will use PodVIFDriver, PodProjectDriver,
PodSubnetsDriver and PodSecurityGroupsDriver. The Drivers model is introduced
-in order to allow flexibility in the K8s model mapping to the OpenStack. There
+in order to allow flexibility in mapping the Kubernetes model to OpenStack. There
can be different drivers that do Neutron resources management, i.e. create on
demand or grab one from a precreated pool. There can be different drivers for
the Project management, i.e. single Tenant or multiple. Same goes for the other
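A hedged sketch of such a pluggable driver (the interface below is
illustrative, and the stevedore namespace is an assumption, not the project's
actual entry-point namespace)::

    from stevedore import driver

    class PodVIFDriver(object):
        """Illustrative interface: how a Pod obtains its Neutron port."""
        def request_vif(self, pod, project_id, subnets, security_groups):
            raise NotImplementedError

        def release_vif(self, pod, vif):
            raise NotImplementedError

    def load_driver(name):
        # 'name' comes from configuration, enabling vendor drivers
        # without code modification
        return driver.DriverManager(
            namespace='kuryr_kubernetes.vif_drivers',  # hypothetical
            name=name,
            invoke_on_load=True).driver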
@@ -171,11 +171,11 @@ Kuryr kubernetes integration takes advantage of the kubernetes `CNI plugin <http
and introduces the Kuryr-K8s CNI Driver. Based on the design decisions above, the
kuryr-kubernetes CNI Driver should get all the information required to plug and bind
a Pod via the Kubernetes control plane, and should not depend on Neutron. The CNI
plugin/driver
-is invoked in a blocking manner by kubelet (k8s node agent), therefore it is
+is invoked in a blocking manner by kubelet (the Kubernetes node agent), therefore it is
expected to return once either a success or an error state is determined.
The Kuryr-K8s CNI Driver has two sources of Pod binding information: the kubelet/node
-environment and K8s API. The Kuryr-K8s Controller Service and CNI share the
+environment and the Kubernetes API. The Kuryr-K8s Controller Service and CNI share the
contract that defines the Pod annotation that the Controller Service adds and the CNI
driver reads. The contract is the `os_vif VIF <https://github.com/openstack/os-vif/blob/master/os_vif/objects/vif.py>`_.
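On the CNI side, consuming that contract reduces to roughly the following (a
sketch: the annotation key and the helper name are illustrative, and the real
driver rehydrates the primitive with os_vif/oslo.versionedobjects rather than
treating it as a plain dict)::

    import json

    VIF_ANNOTATION = 'openstack.org/kuryr-vif'   # illustrative key name

    def vif_primitive_for(pod):
        # 'pod' is the Pod object fetched from the Kubernetes API server;
        # the Controller has stored a serialized os_vif VIF in the
        # annotation, which is all the CNI driver needs to plug the port.
        annotations = pod['metadata']['annotations']
        return json.loads(annotations[VIF_ANNOTATION])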


@@ -41,8 +41,8 @@ Neutron LBaaS service. The initial implementation is based on the OpenStack
LBaaSv2 API, so it is compatible with any LBaaSv2 API provider.
In order to be compatible with Kubernetes networking, Kuryr-Kubernetes
makes sure that services' Load Balancers have access to the Pods' Neutron ports.
-This may be affected once K8s Network Policies will be supported.
-Oslo versioned objects are used to keep translation details in K8s entities
+This may be affected once Kubernetes Network Policies are supported.
+Oslo versioned objects are used to keep translation details in Kubernetes entities'
annotations. This will allow future changes to be backward compatible.
Data Model Translation
@@ -66,7 +66,7 @@ Two Kubernetes Event Handlers are added to the Controller pipeline.
- LoadBalancerHandler manages Kubernetes Endpoints events. It manages the
LoadBalancer, LoadBalancerListener, LoadBalancerPool and LoadBalancerPool
-members to reflect and keep in sync with the K8s service. It keeps details of
+members to reflect and keep in sync with the Kubernetes service. It keeps details of
the Neutron resources by annotating the Kubernetes Endpoints object, as in the
sketch below.
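A sketch of that handler's shape (illustrative; only the Endpoints kind, the
on_present role and the annotation-based bookkeeping come from this document;
it reuses the ResourceEventHandler shape sketched earlier)::

    class LoadBalancerHandler(ResourceEventHandler):
        OBJECT_KIND = 'Endpoints'

        def on_present(self, endpoints):
            desired = set(self._members(endpoints))
            # compare against the members recorded in the Endpoints
            # annotation, add/remove LBaaS pool members, re-annotate
            ...

        @staticmethod
        def _members(endpoints):
            for subset in endpoints.get('subsets', []):
                ports = [p.get('port') for p in subset.get('ports', [])]
                for address in subset.get('addresses', []):
                    for port in ports:
                        yield (address.get('ip'), port)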
Both Handlers use Project, Subnet and SecurityGroup service drivers to get


@@ -61,7 +61,7 @@ kuryr-cni
It's important to understand that kuryr-cni is only a storage pod, i.e. it is
actually idling with ``sleep infinity`` once all the files are copied into
-correct locations on k8s host.
+the correct locations on the Kubernetes host.
You can force it to redeploy the new files by killing it. The DaemonSet controller
should then make sure to restart it with the new image and configuration files. ::
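    # Illustrative only: the namespace and the label selector depend on
    # how your kuryr-cni DaemonSet is actually defined.
    $ kubectl -n kube-system delete pod -l name=kuryr-cni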


@@ -1,7 +1,7 @@
How to try out nested-pods locally (VLAN + trunk)
=================================================
-Following are the instructions for an all-in-one setup where K8s will also be
+Following are the instructions for an all-in-one setup where Kubernetes will also be
running inside the same Nova VM in which Kuryr-controller and Kuryr-cni will be
running. 4 GB of memory and 2 vCPUs are the minimum resource requirements for the VM:


@@ -1,7 +1,7 @@
-Watching K8S api-server over HTTPS
-==================================
+Watching Kubernetes api-server over HTTPS
+=========================================
-Add absolute path of client side cert file and key file for K8S server
+Add the absolute path of the client-side cert file and the key file for the Kubernetes server
in ``kuryr.conf``::
[kubernetes]
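# The two option names below are an assumption; verify them against the
# sample kuryr.conf in your tree:
ssl_client_crt_file = <absolute file path e.g. /etc/kubernetes/admin.crt>
ssl_client_key_file = <absolute file path e.g. /etc/kubernetes/admin.key>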
@@ -16,7 +16,7 @@ path to the ca cert::
ssl_ca_crt_file = <absolute file path e.g. /etc/kubernetes/ca.crt>
ssl_verify_server_crt = True
-If want to query HTTPS K8S api server with ``--insecure`` mode::
+If you want to query the HTTPS Kubernetes api-server in ``--insecure`` mode::
[kubernetes]
ssl_verify_server_crt = False


@@ -83,10 +83,10 @@ provisioned, updated, or deleted in OpenStack. The volume provisioner will
implement the 'ResourceEventHandler' interface of Kuryr-kubernetes for
handling PVC events.
-For each creation of PVC in k8s, the Kuryr-kubernetes's API watcher will
+For each creation of a PVC in Kubernetes, the Kuryr-kubernetes API watcher will
trigger an event that will eventually be handled by the volume provisioner.
On receiving the event, the volume provisioner will provision the appropriate
-storage asset in OpenStack and create a PV in k8s to represent the provisioned
+storage asset in OpenStack and create a PV in Kubernetes to represent the provisioned
storage asset. The volume provisioning workflow will be in compliance with
the Kubernetes out-of-tree provisioning specification [2]. The provisioned
PV will be populated with the necessary information for the volume driver to
@@ -100,12 +100,12 @@ for fuxi Kubernetes.
Similarly, for each update or deletion of a PVC, the volume provisioner will
call the fuxi server to update or delete the corresponding storage assets in
-OpenStack and PVs at k8s.
+OpenStack and PVs in Kubernetes.
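A minimal sketch of that event flow (names are illustrative; the fuxi and
Kubernetes clients are placeholders, and the handler reuses the
ResourceEventHandler shape sketched earlier)::

    class VolumeProvisioner(ResourceEventHandler):
        OBJECT_KIND = 'PersistentVolumeClaim'

        def __init__(self, fuxi_client, k8s_client):
            self._fuxi = fuxi_client    # provisions storage in OpenStack
            self._k8s = k8s_client      # creates/deletes PVs

        def on_present(self, pvc):
            asset = self._fuxi.create_volume(pvc['spec'])
            self._k8s.create_pv(self._pv_for(pvc, asset))

        def on_deleted(self, pvc):
            self._fuxi.delete_volume(pvc)

        def _pv_for(self, pvc, asset):
            # populate the PV with the details the volume driver needs
            raise NotImplementedError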
FlexVolume Driver
-----------------
-FlexVolume [3] is a k8s volume plugin that allows vendor to write their own
+FlexVolume [3] is a Kubernetes volume plugin that allows vendors to write their own
driver to support custom storage solutions. This spec proposes to implement
a FlexVolume driver that enables Kubelet to consume the provisioned storage
assets. The FlexVolume driver will implement the FlexVolume driver interface
@@ -140,16 +140,16 @@ Note that FlexVolume has several known drawbacks. For example, it invokes
drivers via shells, which requires executables to be pre-installed in the specified
path. This deployment model doesn't work with operating systems like CoreOS
in which the root file system is immutable. This proposal suggests to continue
-monitoring the evolution of k8s and switch to a better solution if there is
+monitoring the evolution of Kubernetes and to switch to a better solution if
one shows up.
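For context, the shell-based call convention looks roughly like this (a
sketch of a FlexVolume entry point; the operation names and the JSON status
output follow the FlexVolume convention, everything else, including the fuxi
wiring left out here, is illustrative)::

    #!/usr/bin/env python
    import json
    import sys

    def main(argv):
        # Kubelet invokes the executable as: <driver> <operation> [args...]
        operation = argv[1] if len(argv) > 1 else 'init'
        if operation == 'init':
            result = {'status': 'Success'}
        else:
            # attach/detach/mount/unmount would call into fuxi here
            result = {'status': 'Not supported', 'message': operation}
        print(json.dumps(result))

    if __name__ == '__main__':
        main(sys.argv)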
Alternatives
============
-An alternative to FlexVolume driver is provide an implementation of k8s volume
-plugin. An obstacle of this approach is that k8s doesn't support out-of-tree
+An alternative to the FlexVolume driver is to provide an implementation of a Kubernetes
+volume plugin. An obstacle to this approach is that Kubernetes doesn't support out-of-tree
volume plugins (besides using FlexVolume) right now. Therefore, the fuxi volume
-plugin needs to be reside in k8s tree and released with a different schedule
+plugin would need to reside in the Kubernetes tree and be released on a different schedule
from OpenStack.
@@ -165,8 +165,8 @@ Hongbin Lu
Work Items
----------
-1. Implement a k8s volume provisioner.
-2. Implement a k8s FlexVolume driver.
+1. Implement a Kubernetes volume provisioner.
+2. Implement a Kubernetes FlexVolume driver.
References