Fix typo in developer user doc

Change-Id: Ie6129eab9acb94de4ae0438d35b8a78042f218d1
Harry Zhang 2017-08-31 16:10:56 +08:00
parent fb8b06a0f1
commit c5e3a3f539
2 changed files with 20 additions and 16 deletions


@@ -8,19 +8,23 @@ CentOS. These instructions assume you have already installed git, golang and pytho
Design Tips
===========
-The Stackube project is very simple. The main part of it is a stackube-controller, which use Kubernetes Customized Resource Definition (CRD, previous TPR) to:
+The Stackube project is very simple. Its main part is the ``stackube-controller``, which uses the Kubernetes ``Custom Resource Definition`` (CRD, formerly TPR) mechanism to:
1. Manage tenants based on namespace changes in k8s
2. Manage RBAC based on namespace changes in k8s
3. Manage networks based on tenant changes in k8s
-The tenant is a CRD which maps to Keystone tenant, the network is a CRD which maps to Neutron network. We also have a kubestack binary which is the CNI plug-in for Neutron.
+The tenant is a CRD which maps to a Keystone tenant, and the network is a CRD which maps to a Neutron network. We also have a ``kubestack`` binary, which is the CNI plug-in for Neutron.
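As a hypothetical illustration (the API group, version, and spec fields below are assumptions made for this sketch, not taken from this document), a Tenant object handled by the controller could look like:

::

    # Hypothetical sketch only: field names here are assumed, not authoritative.
    apiVersion: stackube.kubernetes.io/v1
    kind: Tenant
    metadata:
      name: test
    spec:
      username: test
      password: password

The idea is that creating such an object in Kubernetes drives the controller to create the matching Keystone tenant and Neutron network.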
-Also, Stackube has it's own stackube-proxy to replace kube-proxy because network in Stackube is L2 isolated, so we need a multi-tenant version kube-proxy here.
+Also, Stackube has its own ``stackube-proxy`` to replace ``kube-proxy``, because networks in Stackube are L2 isolated, so a multi-tenant version of ``kube-proxy`` is needed.
-We also replaced kube-dns in k8s for the same reason: we need to have a kube-dns running in every namespace instead of a global DNS server because namespaces are isolated.
+We also replaced ``kube-dns`` in k8s for the same reason: we need a ``kube-dns`` running in every namespace instead of a global DNS server, because namespaces are isolated.
-You can see that: Stackube cluster = upstream Kubernetes + several our own add-ons + standalone OpenStack components.
+You can see that:
+
+::
+
+    Stackube cluster = upstream Kubernetes + our own add-ons + standalone OpenStack components
Please note: the Cinder RBD based block device volume is implemented in https://github.com/kubernetes/frakti; if you have ideas, please contribute there and build a new ``stackube/flex-volume`` Docker image for Stackube to use.
@@ -64,20 +68,20 @@ If you deployed Stackube by following the official guide, you can skip this part.
But if not, the steps below are needed to make sure your Stackube cluster works.
-Please note the following parts suppose you have already deployed an environment of OpenStack and Kubernetes on same baremetal host. And don't forget to setup `--experimental-keystone-url` for kube-apiserver, e.g.
+Please note the following parts assume you have already deployed OpenStack and Kubernetes on the same baremetal host. Don't forget to set ``--experimental-keystone-url`` for ``kube-apiserver``, e.g.
::

    kube-apiserver --experimental-keystone-url=https://192.168.128.66:5000/v2.0 ...
-Remove kube-dns deployment and kube-proxy daemonset if you have already running them.
+Remove the ``kube-dns`` deployment and the ``kube-proxy`` daemonset if they are already running.
::

    kubectl -n kube-system delete deployment kube-dns
    kubectl -n kube-system delete daemonset kube-proxy
-If you have also configured a CNI network plugin, you should also remove it togather with CNI network config.
+If you have also configured a CNI network plugin, you should remove it together with the CNI network config.
::


@@ -5,7 +5,7 @@ Stackube User Guide
Tenant and Network Management
=============================
-In this part, we will introduce tenant management and networking in Stackube. The tenant, which is 1:1 mapped with k8s namespace, is managed by using k8s CRD (previous TPR) to interact with Keystone. And the tenant is also 1:1 mapped with a network automatically, which is also implemented by CRD with standalone Neutron.
+In this part, we introduce tenant management and networking in Stackube. The tenant, which is ``1:1`` mapped to a k8s namespace, is managed by using a k8s CRD (formerly TPR) to interact with Keystone. The tenant is also automatically ``1:1`` mapped to a network, which is likewise implemented as a CRD backed by standalone Neutron.
1. Create a new tenant
@@ -61,7 +61,7 @@ In this part, we will introduce tenant management and networking in Stackube. Th
+--------------------------------------+----------------------+----------------------------------+----------------------------------------------------------+
| 421d913a-a269-408a-9765-2360e202ad5b | kube-test-test | 915b36add7e34018b7241ab63a193530 | bb446a53-de4d-4546-81fc-8736a9a88e3a 10.244.0.0/16 |
-4. Check the kube-dns pods created in the new namespace.
+4. Check the ``kube-dns`` pods created in the new namespace.
::
@@ -168,9 +168,9 @@ Stackube is a standard upstream Kubernetes cluster, so any type of `Kubernetes v
server: 10.244.1.4
path: "/exports"
-Please note since Stackube is a baremetal k8s cluster, cloud provider based volume like GCE, AWS etc is not supported by default.
+Please note that since Stackube is a baremetal k8s cluster, cloud provider based volumes are not supported by default.
-But unless you are using emptyDir or hostPath, we will recommend always you the "Cinder RBD based block device as volume" described below in Stackube, this will bring you much higher performance.
+But unless you are using ``emptyDir`` or ``hostPath``, we recommend the "Cinder RBD based block device as volume" approach described below, as it brings much higher performance.
=======================================
Cinder RBD based block device as volume
@@ -207,11 +207,11 @@ In Stackube, we use a flexvolume to directly use Cinder RBD based block device a
options:
  volumeID: daa7b4e6-1792-462d-ad47-78e900fed429
-Please note the name of flexvolume is: "cinder/flexvolume_driver". Users are expected to provide a valid volume ID created with Cinder beforehand. Then a related RBD device will be attached to the VM-based Pod.
+Please note the name of the flexvolume is ``cinder/flexvolume_driver``. Users are expected to provide a valid volume ID created with Cinder beforehand; a related RBD device will then be attached to the VM-based Pod.
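To make the fragment above concrete, a full Pod spec using this flexvolume might look like the following sketch (the Pod name, container image, and mount path are placeholders; only the driver name and volume ID come from this document):

::

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-flexvolume       # placeholder name
    spec:
      containers:
      - name: web
        image: nginx              # placeholder image
        volumeMounts:
        - name: cinder-vol
          mountPath: /data        # placeholder mount path
      volumes:
      - name: cinder-vol
        flexVolume:
          driver: cinder/flexvolume_driver
          options:
            volumeID: daa7b4e6-1792-462d-ad47-78e900fed429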
-If your cluster is installed by stackube/devstack or following other stackube official guide, a /etc/kubernetes/cinder.conf file will be generated automatically on every node.
+If your cluster is installed by ``stackube/devstack`` or by following another official Stackube guide, a ``/etc/kubernetes/cinder.conf`` file will be generated automatically on every node.
-Otherwise, users are expected to write a /etc/kubernetes/cinder.conf on every node. The contents is like:
+Otherwise, users are expected to write a ``/etc/kubernetes/cinder.conf`` on every node. Its contents look like:
::
@@ -225,4 +225,4 @@ Otherwise, users are expected to write a /etc/kubernetes/cinder.conf on every no
keyring = _KEYRING_
-and also, users need to make sure flexvolume_driver binary is in /usr/libexec/kubernetes/kubelet-plugins/volume/exec/cinder~flexvolume_driver/ of every node.
+Also, users need to make sure the ``flexvolume_driver`` binary is in ``/usr/libexec/kubernetes/kubelet-plugins/volume/exec/cinder~flexvolume_driver/`` on every node.