Merge "Fix typo"

commit 5b33e0a08b
@@ -94,7 +94,7 @@ Each stage shows the inputs taken and the artifacts produced.
 *Tag the containers* as ``current-tripleo-rdo-internal``
-Run futher downstream jobs (scale etc.)
+Run further downstream jobs (scale etc.)
 rdo-promote-master-rdo_trunk-nonvoting

 5. **OSP Phase 0**
|
@@ -97,7 +97,7 @@ The following steps are done to create the ``tripleo-admin`` user:
 deleted from ``~/.ssh/authorized_keys`` on each overcloud node, and the
 temporary keypair is then deleted from the undercloud.

-With these steps, the deployer-specified ssh key which is used for the inital
+With these steps, the deployer-specified ssh key which is used for the initial
 connection is never sent or stored by any API service.

 To override the deployer specified ssh private key and user, there are cli args
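For the cli args the hunk above refers to, a likely invocation looks like the following sketch. The flag names ``--overcloud-ssh-user`` and ``--overcloud-ssh-key`` are an assumption based on python-tripleoclient; the user name and key path are illustrative, not prescribed by this document.

```shell
# Assumed flags; confirm with `openstack help overcloud deploy`
# on your undercloud before relying on them.
openstack overcloud deploy \
  --templates \
  --overcloud-ssh-user heat-admin \
  --overcloud-ssh-key ~/.ssh/overcloud_rsa   # illustrative key path
```

The named key is only used for the initial connection; per the steps above, it is never stored by any API service.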
@@ -79,7 +79,7 @@ If the storage network uses VLAN, include storage network in
 subnet doesn't overlap with IP allocation pool used for Overcloud storage
 nodes (controlled by ``StorageAllocationPools`` heat parameter).
 ``StorageAllocationPools`` is by default set to
-``[{'start': '172.16.1.4', 'end': '172.16.1.250'}]``. It may be neccessary
+``[{'start': '172.16.1.4', 'end': '172.16.1.250'}]``. It may be necessary
 to shrink this pool, for example::

     StorageAllocationPools: [{'start': '172.16.1.4', 'end': '172.16.1.99'}]
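As a concrete sketch, the shrunken pool shown above would normally be set under ``parameter_defaults`` in a Heat environment file passed to the deploy command. The file name here is hypothetical; only the ``StorageAllocationPools`` value comes from the text above.

```yaml
# storage-net.yaml (hypothetical file name)
parameter_defaults:
  # Keep 172.16.1.100-250 out of the pool so those addresses stay
  # free for hosts outside the Overcloud; adjust to your subnet.
  StorageAllocationPools: [{'start': '172.16.1.4', 'end': '172.16.1.99'}]
```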
@@ -96,7 +96,7 @@ you will find the CI job statistics and the last 100 (or less, it
 can be edited) job executions. Each of the job executions contains::

 - Date: Time and date the CI job was triggered
-- Lenght: Job duration
+- Length: Job duration
 - Reason: CI job result or failure reason.
 - Patch: Git ref of the patch that triggered the job.
 - Logs: Link to the logs.
@@ -129,7 +129,7 @@ for?
 (1) Find the job result

 A good string to search is *PLAY RECAP*. At this point, all the
-playbooks have been executed and a sumary of the runs per node
+playbooks have been executed and a summary of the runs per node
 is displayed::

     PLAY RECAP *********************************************************************
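Searching for *PLAY RECAP* can be done from the command line as well. A minimal sketch, assuming the console log has been downloaded locally (the file name ``job-output.txt`` is illustrative; actual CI log names vary by job):

```shell
# Build a tiny sample console log so the search can be demonstrated
# end to end; a real log would be downloaded from the job's Logs link.
cat > job-output.txt <<'EOF'
TASK [overcloud upgrade] *******************************************************
PLAY RECAP *********************************************************************
undercloud  : ok=42  changed=7  unreachable=0  failed=1
EOF

# Jump straight to the per-node summary printed after all playbooks ran;
# -A1 also shows the line below the match, where the failed= count lives.
grep -A1 'PLAY RECAP' job-output.txt
```

A non-zero ``failed=`` count in the recap line is the quickest sign that one of the playbooks broke the run.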
@@ -147,7 +147,7 @@ for?
 "start": "2017-11-14 16:55:07.949779", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}

 From this task, we can guess that something went wrong during the
-overcloud upgrading proces. But, where can I find the log
+overcloud upgrading process. But, where can I find the log
 *overcloud_upgrade_console.log* referenced in the task?

 (2) Undercloud logs
@@ -144,7 +144,7 @@ Upgrading the overcloud from Newton to Queens
 .. note::

    Generic Fast Forward Upgrade testing in the overcloud cannot cover all
-   possible deployment configurations. Before performing Fast Foward Upgrade
+   possible deployment configurations. Before performing Fast Forward Upgrade
    testing in the overcloud, test it in a matching staging environment, and
    create a backup of the production environment (your controller nodes and your
    workloads).