Merge "Readme Update"

This commit is contained in:
Jenkins 2016-09-27 09:47:42 +00:00 committed by Gerrit Code Review
commit 058dfd364c
1 changed file with 129 additions and 85 deletions


@@ -1,10 +1,10 @@
Murano-deployed Kubernetes Cluster application
==============================================
The packages in this folder are required to deploy both Google Kubernetes and
the applications that run on top of it.
The contents of each folder need to be zipped and uploaded to the Murano Catalog.
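
For example, a minimal sketch of packaging and importing one package, assuming
the python-muranoclient CLI is installed and OpenStack credentials are sourced
(the package path and archive name are illustrative)::

    # Zip the contents of one package folder (not the folder itself)
    # and import the resulting archive into the Murano Catalog.
    cd KubernetesCluster/package      # illustrative path
    zip -r ../../kubernetes-cluster.zip .
    cd ../..
    murano package-import kubernetes-cluster.zip
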
You will also need to build a proper image for Kubernetes.
This can be done using `diskimage-builder <https://git.openstack.org/cgit/openstack/diskimage-builder>`_
@@ -16,30 +16,76 @@ The image has to be named *debian8-x64-kubernetes.qcow2*
Overview of Kubernetes
----------------------
Kubernetes is an open-source platform for automating deployment, scaling, and
operations of application containers across clusters of hosts.
For a more in-depth review of Kubernetes please refer to the official
`documentation <http://kubernetes.io/v1.1/docs/user-guide/README.html>`_.
How Murano installs/upgrades a Kubernetes Cluster
=================================================
Installation
------------
Minimum requirements for OpenStack in order to deploy a Kubernetes cluster with Murano:
* Deployed Murano and Heat OpenStack services
* 3 instances of m1.medium flavor (Master Node, Kubernetes Node, Gateway Node)
* 1 floating IP for the Gateway, if applications need to be exposed externally
* 2 floating IPs for the Master and Kubernetes Nodes, to access the kubectl CLI or
for troubleshooting
A Kubernetes cluster deployed by Murano provisions 3 types of VMs that can be observed in
the OpenStack Horizon Dashboard with this naming convention:
A single **Master Node** (murano-kube-1) represents the Kubernetes control
plane and runs the API server, Scheduler and Controller Manager. In the current
implementation of the Kubernetes Cluster deployed by Murano, the Master Node does
not run in HA mode. Additionally, it is not possible to schedule containers
on the Master Node.
One or several **Kubernetes Nodes** (murano-kube-2..n) - Kubernetes worker nodes
that are responsible for running the actual containers. Each Kubernetes Node runs
the Docker, kubelet and kube-proxy services.
One or several **Gateway nodes** (murano-gateway-1..n) - used as an interconnection
between the internal Kubernetes Networking_ and the OpenStack external network
(Neutron-managed). The Gateway node provides the Kubernetes cluster with
external endpoints and allows users and services to reach Kubernetes pods from
the outside. Each Gateway node runs the confd and HAProxy services. When the end
user deploys an application and exposes it via a service, confd automatically
detects it and adds it to the HAProxy configuration. HAProxy will expose
the application via the floating IP of the Gateway node and the required port.
If the user chooses multiple Gateways, the result will be several endpoints for
the application, which can be registered in a physical load balancer or DNS.
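
To illustrate the flow, a hypothetical example, assuming an application already
runs in the cluster as an RC named my-app (names, IP and port are illustrative)::

    # Expose the RC as a service; confd on the Gateway node detects it
    # and regenerates the HAProxy configuration automatically.
    kubectl expose rc my-app --port=80
    # The application then becomes reachable through the Gateway:
    curl http://<gateway-floating-IP>:80/
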
**ETCD** - Kubernetes uses etcd as a key-value store as well as for cluster
consensus between different software components. Additionally, if the Kubernetes
cluster is configured to run Calico networking, etcd will be configured to
support the Calico configuration. In the current implementation of the Kubernetes
Cluster deployed by Murano, the etcd cluster does not run on dedicated nodes.
Instead, etcd runs on each node deployed by Murano. For example, if the
Kubernetes Cluster deployed by Murano is running in the minimum available
configuration with 3 nodes: Master Node, Kubernetes Node and Gateway, then
etcd will run as a 3-node cluster.
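
A minimal sketch for inspecting the resulting etcd cluster from any of the
three nodes, assuming the etcd v2 command-line client is available::

    # List the members of the 3-node etcd cluster and check its health.
    etcdctl member list
    etcdctl cluster-health
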
Upgrade
-------
In the current implementation of the Kubernetes Cluster deployed by Murano it is
not possible to upgrade the Kubernetes Cluster from a previous version to a newer one.
Features
========
The Murano-deployed Kubernetes Cluster supports the following features:
* Networking_: Calico by default, Flannel optional
* `Container runtime`_: Docker
* `Rolling updates`_ of the Kubernetes application
* Publishing services: ClusterIP type
.. _Networking:
@@ -48,70 +94,88 @@ Networking
----------
The Kubernetes Cluster deployed by Murano supports Calico networking by default.
Calico provides a highly scalable networking and network policy solution for
connecting Kubernetes pods, based on the same IP networking principles as a
layer 3 approach.
Calico networking, deployed by Murano as a CNI plugin, contains the following components:
* **etcd** - a distributed key-value store, which ensures Calico can always build
an accurate network; used primarily for data storage and communication
* **Felix**, the Calico worker process, which primarily routes and provides
the desired connectivity to and from the workloads on a host, and provides
the interface to the kernel for outgoing endpoint traffic
* **BIRD**, a BGP client that exchanges routing information between hosts
* **confd**, a templating process to auto-generate the configuration for BIRD
* **calicoctl**, the command-line tool used to configure and start the Calico service
See `Calico <https://www.projectcalico.org/>`_ for more information.
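
As a quick check, a sketch of inspecting Calico on a node; the exact
subcommands vary between calicoctl releases, so this assumes the calicoctl CLI
of that era::

    # Show the BGP/Calico status of the local node.
    calicoctl status
    # List the IP pools that pod addresses are allocated from.
    calicoctl pool show
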
Support for Flannel is disabled by default, but can be enabled as an option.
Flannel is a simple overlay network that satisfies the Kubernetes requirements.
See `flannel <https://github.com/coreos/flannel>`_ for more information.
.. _Container runtime:
Container runtime
-----------------
A container runtime is responsible for pulling container images from a registry,
unpacking the container and running the application. Kubernetes by default
supports the Docker runtime. Recently in Kubernetes version 1.3 support for the
rkt runtime has been added. More runtimes are planned to be added in the future.
The Kubernetes Cluster deployed by Murano currently supports only the Docker
runtime, but we plan to add the rkt runtime in the near future.
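
A small sketch for verifying the runtime on a Kubernetes Node; the systemd
unit names here are assumptions and may differ in the image::

    # On a Kubernetes Node (murano-kube-2), confirm Docker is up and the
    # node services are running (unit names are assumptions).
    docker version
    systemctl status kubelet kube-proxy
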
.. _Rolling updates:
Rolling updates of the Kubernetes application
---------------------------------------------
The Kubernetes Cluster deployed by Murano supports rolling updates with the use
of “Deployments” and “Replication Controllers (RC)” abstractions. Rolling updates
using Deployments is the recommended way to perform updates. Rolling update via
Deployments provides the following benefits over RC:
* Declarative way to control how service updates are performed
* Rollback to an earlier Deployment version
* Pause and resume a Deployment
To use rolling updates via Deployments, refer to the `Kubernetes documentation <http://kubernetes.io/docs/user-guide/deployments/#updating-a-deployment>`_.
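
For illustration, a minimal sketch of a Deployment-based rolling update; the
deployment and image names are illustrative::

    # Trigger a rolling update by changing the container image.
    kubectl set image deployment/my-app my-app=my-app:v2
    # Watch the rollout, and pause, resume or undo it if needed.
    kubectl rollout status deployment/my-app
    kubectl rollout pause deployment/my-app
    kubectl rollout resume deployment/my-app
    kubectl rollout undo deployment/my-app
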
**NOTE:** Currently all applications deployed from the Apps Catalog have been
created as Replication Controllers (RC), so Rolling updates via Deployments
are not available for those applications.
If an application running as a Replication Controller (RC) requires an update,
please refer to the Kubernetes documentation `here <http://kubernetes.io/docs/user-guide/rolling-updates>`_.
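
A minimal sketch for the RC case; the controller name and image tag are
illustrative::

    # Perform a rolling update of an RC by replacing its pods one by one.
    kubectl rolling-update my-app-rc --image=my-app:v2
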
Interacting with the Kubernetes Cluster deployed by Murano
==========================================================
There are several ways to create and manage applications on the Kubernetes cluster:
Using the Murano Environments view in Horizon:
----------------------------------------------
Users can perform the following actions:
* Deploy/Destroy the Kubernetes Cluster.
* Perform Kubernetes Cluster related actions, such as scaling Nodes and Gateways.
* Perform Kubernetes Pod related actions, such as scaling or recreating pods and restarting containers.
* Deploy a selected application from the Apps Catalog via the Murano Dashboard.
* Deploy any Docker image from the Docker Hub using the Docker Container apps from the Apps Catalog.
Using kubectl CLI:
------------------
You can also deploy and manage applications using the Kubernetes command-line
tool ``kubectl`` from your laptop or any local environment:
* `Download and install <http://kubernetes.io/docs/getting-started-guides/minikube/#install-kubectl>`_ the ``kubectl`` executable for the OS of your choice.
* Configure the kubectl context in the local environment:
* ``kubectl config set-cluster kubernetes --server=http://<kube1-floating_IP>:8080``
* ``kubectl config set-context kubelet-context --cluster=kubernetes --user=""``
@@ -122,37 +186,17 @@ from your laptop or any local environment:
* ``kubectl config view``
* ``kubectl get nodes``
The resulting kubeconfig file will be stored in ~/.kube/config and
can be sourced at any time afterwards.
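
Putting the steps above together, a consolidated sketch; the ``use-context``
step is an assumption here to activate the newly created context, and the
floating IP is illustrative::

    kubectl config set-cluster kubernetes --server=http://172.16.0.10:8080
    kubectl config set-context kubelet-context --cluster=kubernetes --user=""
    kubectl config use-context kubelet-context   # assumed activation step
    kubectl config view
    kubectl get nodes
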
Additionally, it is possible to access the ``kubectl`` CLI from the Master Node (kube-1),
where ``kubectl`` is installed and configured by default.
**NOTE:** If the application has been deployed using the kubectl CLI, it will be
automatically exposed outside based on the port information provided in the
service yaml file. However, you will need to manually update the OpenStack
Security Groups configuration with the required port information in order to be
able to reach the application from the outside.
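
A hypothetical example of opening such a port with the OpenStack client; the
security group name and port number are illustrative::

    # Allow external TCP traffic to the port the service is exposed on.
    openstack security group rule create --proto tcp --dst-port 30080 <cluster-security-group>
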
KubernetesCluster KubernetesCluster
@@ -167,8 +211,8 @@ The procedure is:
for worker and gateway nodes.
#. Join them into the etcd cluster. etcd is a distributed key-value storage
used by Kubernetes to store and synchronize cluster state.
#. Set up networking (Calico or Flannel) over the etcd cluster. The networking
layer uses etcd to track the network and nodes.
#. Configure required services on the master node.
#. Configure worker nodes. They will register themselves with the master node
using etcd.
@@ -317,7 +361,7 @@ deploying both Kubernetes and its nodes.
`restartContainers(podName)`
* `podName` string holding the name of the pod.
Call `restartContainers($podName)` on each Kubernetes node.
KubernetesNode
~~~~~~~~~~~~~~
@@ -344,7 +388,7 @@ fact that the function has been called.
master node and start etcd member service on the underlying instance.
`setupNode()`
Set up the node by first setting up Calico or Flannel and
then setting up the HAProxy load balancer on the underlying instance.
`removeFromCluster()`
@@ -366,7 +410,7 @@ fact that the function has been called.
Set up etcd master node config and launch the etcd service on the master node.
`setupNode()`
Set up the node. This includes setting up Calico or Flannel for the master and
configuring and launching the `kube-apiserver`, `kube-scheduler` and
`kube-controller-manager` services
on the underlying instance.
@@ -388,8 +432,8 @@ fact that the function has been called.
master node and start etcd member service on the underlying instance.
`setupNode()`
Set up the node by first setting up Calico or Flannel and
then joining the Kubernetes Node into the cluster. If `dockerRegistry` or
`dockerMirror` are supplied for the underlying cluster, those are appended to
the list of docker parameters. If `gcloudKey` is supplied for the underlying
cluster, then the current node attempts to log in to the Google Cloud registry.