Fix typos in doc

Fix some typos that I found in documents (except specs):
 * fix trivial mistakes (typos)
 * fix the link error (rst -> html)
 * fix it's -> its
 * fix k8s -> K8s

Change-Id: I6ec65e9d04441adac210cc9fd476a37a1cb9644f
ardentpark 2018-10-17 23:53:20 +09:00
parent 0fbf00585f
commit b067309b89
10 changed files with 29 additions and 29 deletions

View File

@@ -63,7 +63,7 @@ endpoints are defined:
These values define all the endpoints that the Neutron chart may need in
order to build full URL compatible endpoints to various services.
Long-term, these will also include database, memcached, and rabbitmq
-elements in one place. Essentially, all external connectivity can be be
+elements in one place. Essentially, all external connectivity can be
defined centrally.
The macros that help translate these into the actual URLs necessary are
@@ -101,5 +101,5 @@ various namespaces.
By default, each endpoint is located in the same namespace as the current
service's helm chart. To connect to a service which is running in a different
-Kubernetes namespace, a ``namespace`` can be provided to each individual
+Kubernetes namespace, a ``namespace`` can be provided for each individual
endpoint.
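For readers following these endpoint hunks, a per-endpoint namespace override of the kind described above might look roughly like this in a chart's ``values.yaml`` (an illustrative sketch only, not part of this commit; the service name, path, and port are placeholders):

.. code-block:: yaml

    endpoints:
      oslo_db:
        # Hypothetical override: reach a MariaDB instance running in a
        # different Kubernetes namespace than this chart's release.
        namespace: openstack
        hosts:
          default: mariadb
        host_fqdn_override:
          default: null
        path: /neutron
        scheme: mysql+pymysql
        port:
          mysql:
            default: 3306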

View File

@@ -123,7 +123,7 @@ for serving the request should be wired.
# openvswitch or linuxbridge
interface_driver: openvswitch
-Another place where the DHCP agent is dependent of L2 agent is the dependency
+Another place where the DHCP agent is dependent on L2 agent is the dependency
for the L2 agent daemonset:
.. code-block:: yaml
@@ -273,7 +273,7 @@ Configuration of OVS bridges can be done via
`neutron/templates/bin/_neutron-openvswitch-agent-init.sh.tpl`. The
script is configuring the external network bridge and sets up any
bridge mappings defined in :code:`network.auto_bridge_add`. These
-values should be align with
+values should align with
:code:`conf.plugins.openvswitch_agent.ovs.bridge_mappings`.
openvswitch-db and openvswitch-vswitchd
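The alignment between ``network.auto_bridge_add`` and the OVS agent's ``bridge_mappings`` discussed in the hunk above could be sketched as follows (assumed key layout with placeholder bridge and interface names; the exact shape of ``auto_bridge_add`` can vary between chart versions):

.. code-block:: yaml

    network:
      # Ask the init script to create br-ex and plug eth1 into it.
      auto_bridge_add:
        br-ex: eth1
    conf:
      plugins:
        openvswitch_agent:
          ovs:
            # Must refer to the same bridge that auto_bridge_add creates.
            bridge_mappings: public:br-ex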

View File

@@ -69,7 +69,7 @@ each gate. The contents of the log directory are as follows:
gate fails, the reason should be apparent in the dry-runs output. The logs
found here are helpful in identifying issues resulting from using helm-toolkit
functions incorrectly or other rendering issues with gotpl.
-- The k8s directory contains the logs and output of the Kubernetes objects. It
+- The K8s directory contains the logs and output of the Kubernetes objects. It
includes: pods, nodes, secrets, services, namespaces, configmaps, deployments,
daemonsets, and statefulsets. Descriptions for the state of all resources
during execution are found here, and this information can prove valuable when

View File

@@ -1,6 +1,6 @@
-===============================
-Commmon Deployment Requirements
-===============================
+==============================
+Common Deployment Requirements
+==============================
Passwordless Sudo
=================

View File

@@ -15,7 +15,7 @@ Requirements
.. warning:: Until the Ubuntu kernel shipped with 16.04 supports CephFS
subvolume mounts by default the `HWE Kernel
-<../../troubleshooting/ubuntu-hwe-kernel.rst>`__ is required to use CephFS.
+<../../troubleshooting/ubuntu-hwe-kernel.html>`__ is required to use CephFS.
System Requirements
-------------------

View File

@@ -44,7 +44,7 @@ External DNS and FQDN
=====================
Prepare ahead of time your FQDN and DNS layouts. There are a handful of OpenStack endpoints
-you will want exposed for API and Dashboard access.
+you will want to expose for API and Dashboard access.
Update your lab/environment DNS server with your appropriate host values creating A Records
for the edge node IP's and various FQDN's. Alternatively you can test these settings locally by
@@ -74,7 +74,7 @@ The default FQDN's for OpenStack-Helm are
metadata.openstack.svc.cluster.local
glance.openstack.svc.cluster.local
-We want to change the ***public*** configurations to match our DNS layouts above. In each Chart
+We want to change the **public** configurations to match our DNS layouts above. In each Chart
``values.yaml`` is a ``endpoints`` configuration that has ``host_fqdn_override``'s for each API
that the Chart either produces or is dependent on. `Read more about how Endpoints are developed
<https://docs.openstack.org/openstack-helm/latest/devref/endpoints.html>`__.
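As a rough example of the ``host_fqdn_override`` mechanism referenced above, a public endpoint override could be set along these lines (the hostname is taken from the example FQDN used later in this document; treat the key layout as an assumption):

.. code-block:: yaml

    endpoints:
      dashboard:
        host_fqdn_override:
          # Expose only the public endpoint on the external DNS name;
          # internal and admin endpoints keep their cluster-local FQDNs.
          public: horizon.os.foo.org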
@@ -141,7 +141,7 @@ repeat code.
-Note if you need to make a DNS change, you will have to do a uninstall (``helm delete <chart>``)
+Note if you need to make a DNS change, you will have to do uninstall (``helm delete <chart>``)
and install again.
Once installed, access the API's or Dashboard at `http://horizon.os.foo.org`

View File

@@ -22,7 +22,7 @@ Setup:
- 6 Nodes (VM based) env
- Only 3 nodes will have Ceph and OpenStack related labels. Each of these 3
nodes will have one MON and one OSD running on them.
-- Followed OSH multinode guide steps to setup nodes and install k8 cluster
+- Followed OSH multinode guide steps to setup nodes and install K8s cluster
- Followed OSH multinode guide steps to install Ceph and OpenStack charts up to
Cinder.
@@ -30,12 +30,12 @@ Steps:
======
1) Initial Ceph and OpenStack deployment:
Install Ceph and OpenStack charts on 3 nodes (mnode1, mnode2 and mnode3).
-Capture Ceph cluster status as well as k8s PODs status.
+Capture Ceph cluster status as well as K8s PODs status.
2) Node reduction (failure):
Shutdown 1 of 3 nodes (mnode3) to test node failure. This should cause
Ceph cluster to go in HEALTH_WARN state as it has lost 1 MON and 1 OSD.
-Capture Ceph cluster status as well as k8s PODs status.
+Capture Ceph cluster status as well as K8s PODs status.
3) Node expansion:
Add Ceph and OpenStack related labels to 4th node (mnode4) for expansion.
@@ -53,7 +53,7 @@ Step 1: Initial Ceph and OpenStack deployment
.. note::
Make sure only 3 nodes (mnode1, mnode2, mnode3) have Ceph and OpenStack
-related labels. k8s would only schedule PODs on these 3 nodes.
+related labels. K8s would only schedule PODs on these 3 nodes.
``Ceph status:``
@@ -336,8 +336,8 @@ In this test env, let's shutdown ``mnode3`` node.
openstack rabbitmq-rabbitmq-0 0 (0%) 0 (0%) 0 (0%) 0 (0%)
.. note::
-In this test env, MariaDb chart is deployed with only 1 replicas. In order to
-test properly, the node with MariaDb server POD (mnode2) should not be shutdown.
+In this test env, MariaDB chart is deployed with only 1 replica. In order to
+test properly, the node with MariaDB server POD (mnode2) should not be shutdown.
.. note::
In this test env, each node has Ceph and OpenStack related PODs. Due to this,
@@ -624,7 +624,7 @@ In this test env, let's shutdown ``mnode3`` node.
Step 3: Node Expansion
======================
-Let's add more resources for k8s to schedule PODs on.
+Let's add more resources for K8s to schedule PODs on.
In this test env, let's use ``mnode4`` and apply Ceph and OpenStack related
labels.
@@ -1113,7 +1113,7 @@ As shown above, Ceph status is now HEALTH_OK and and shows 3 MONs available.
As shown in Ceph status above, ``osd: 4 osds: 3 up, 3 in`` 1 of 4 OSDs is still
down. Let's remove that OSD.
-First run ``ceph osd tree`` command to get list of OSDs.
+First, run ``ceph osd tree`` command to get list of OSDs.
.. code-block:: console

View File

@@ -100,7 +100,7 @@ Note: To find the daemonset associated with a failed OSD, check out the followin
(voyager1)$ kubectl get ds <daemonset-name> -n ceph -o yaml
-3. Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:
+2. Remove the failed OSD (OSD ID = 2 in this example) from the Ceph cluster:
.. code-block:: console
@@ -109,7 +109,7 @@ Note: To find the daemonset associated with a failed OSD, check out the followin
(mon-pod):/# ceph auth del osd.2
(mon-pod):/# ceph osd rm 2
-4. Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):
+3. Find that Ceph is healthy with a lost OSD (i.e., a total of 23 OSDs):
.. code-block:: console
@@ -133,20 +133,20 @@ Note: To find the daemonset associated with a failed OSD, check out the followin
usage: 2551 MB used, 42814 GB / 42816 GB avail
pgs: 182 active+clean
-5. Replace the failed disk with a new one. If you repair (not replace) the failed disk,
+4. Replace the failed disk with a new one. If you repair (not replace) the failed disk,
you may need to run the following:
.. code-block:: console
(voyager4)$ parted /dev/sdh mklabel msdos
-6. Start a new OSD pod on ``voyager4``:
+5. Start a new OSD pod on ``voyager4``:
.. code-block:: console
$ kubectl label nodes voyager4 --overwrite ceph_maintenance_window=inactive
-7. Validate the Ceph status (i.e., one OSD is added, so the total number of OSDs becomes 24):
+6. Validate the Ceph status (i.e., one OSD is added, so the total number of OSDs becomes 24):
.. code-block:: console

View File

@@ -69,7 +69,7 @@ Case: A OSD pod is deleted
==========================
This is to test a scenario when an OSD pod is deleted by ``kubectl delete $OSD_POD_NAME``.
-Meanwhile, we monitor the status of Ceph and noted that it takes about 90 seconds for the OSD running in deleted pod to recover from ``down`` to ``up``.
+Meanwhile, we monitor the status of Ceph and note that it takes about 90 seconds for the OSD running in deleted pod to recover from ``down`` to ``up``.
.. code-block:: console
@@ -102,6 +102,6 @@ Meanwhile, we monitor the status of Ceph and noted that it takes about 90 second
We also monitored the pod status through ``kubectl get pods -n ceph``
during this process. The deleted OSD pod status changed as follows:
``Terminating`` -> ``Init:1/3`` -> ``Init:2/3`` -> ``Init:3/3`` ->
-``Running``, and this process taks about 90 seconds. The reason is
+``Running``, and this process takes about 90 seconds. The reason is
that Kubernetes automatically restarts OSD pods whenever they are
deleted.

View File

@@ -13,7 +13,7 @@ to OSH components.
Setup:
======
- 3 Node (VM based) env.
-- Followed OSH multinode guide steps to setup nodes and install k8 cluster
+- Followed OSH multinode guide steps to setup nodes and install K8s cluster
- Followed OSH multinode guide steps upto Ceph install
Plan:
@@ -590,4 +590,4 @@ pods are running. No interruption to OSH pods.
Conclusion:
===========
-Ceph can be upgreade without downtime for Openstack components in a multinoe env.
+Ceph can be upgraded without downtime for Openstack components in a multinode env.