[docs] Fix lint failures

This patch fixes:

doc/source/contributor/testing.rst:281: D000 Explicit markup ends without a blank line; unexpected unindent.
doc/source/user/test/example.rst:28: D001 Line too long
doc/source/admin/maintenance-tasks.rst:8: D000 Title level inconsistent:
doc/source/admin/maintenance-tasks.rst:22: D000 Title level inconsistent:
doc/source/admin/troubleshooting.rst:630: D001 Line too long
doc/source/admin/troubleshooting.rst:650: D001 Line too long
doc/source/admin/maintenance-tasks/inventory-backups.rst:11: D001 Line too long

For consistency, the maintenance-tasks/ files now all
have the same markup hierarchy.

Depends-On: https://review.openstack.org/567804
Change-Id: Id1cf9cb45543daa7c39d5141d8dc5827a76c6413
Jesse Pretorius 2018-05-11 09:48:59 +01:00 committed by Jean-Philippe Evrard
parent abc0c35b13
commit 52a11834ef
9 changed files with 48 additions and 46 deletions


@@ -45,7 +45,7 @@ For more information, see `Inventory <http://docs.ansible.com/ansible/intro_inve
and `Patterns <http://docs.ansible.com/ansible/intro_patterns.html>`_.
Running the shell module
-~~~~~~~~~~~~~~~~~~~~~~~~
+------------------------
The two most common modules used are the ``shell`` and ``copy`` modules. The
``shell`` module takes the command name followed by a list of space delimited
@@ -78,7 +78,7 @@ For more information, see `shell - Execute commands in nodes
<http://docs.ansible.com/ansible/shell_module.html>`_.
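For context, the ad-hoc ``shell`` invocation described above looks roughly
like the following sketch; the host group and command are illustrative, not
taken from this patch.

.. code-block:: bash

   # Run a one-off command through the shell module on an example group
   ansible compute_hosts -m shell -a "free -mh"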
Running the copy module
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------
The copy module copies a file on a local machine to remote locations. Use the
fetch module to copy files from remote locations to the local machine. If you
@@ -126,7 +126,7 @@ from a single Compute host:
-rw-r--r-- 1 root root 2428624 Dec 15 01:23 /tmp/aio1/var/log/nova/nova-compute.log
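A hedged sketch of both transfer directions mentioned above; the host pattern
and paths are examples only.

.. code-block:: bash

   # Push a local file out to the targeted hosts
   ansible aio1 -m copy -a "src=/etc/motd dest=/tmp/motd"

   # Pull a remote log back; fetch writes a per-host tree such as
   # /tmp/aio1/var/log/nova/nova-compute.log, as in the listing above
   ansible aio1 -m fetch -a "src=/var/log/nova/nova-compute.log dest=/tmp"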
Using tags
-~~~~~~~~~~
+----------
Tags are similar to the limit flag for groups except tags are used to only run
specific tasks within a playbook. For more information on tags, see
@@ -135,7 +135,7 @@ and `Understanding ansible tags
<http://www.caphrim.net/ansible/2015/05/24/understanding-ansible-tags.html>`_.
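Assuming a playbook whose tasks carry tags (the playbook and tag names below
are hypothetical), a tagged run looks like this.

.. code-block:: bash

   # Run only the tasks tagged 'nova-config' within the playbook
   openstack-ansible os-nova-install.yml --tags nova-config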
Ansible forks
-~~~~~~~~~~~~~
+-------------
The default ``MaxSessions`` setting for the OpenSSH Daemon is 10. Each Ansible
fork makes use of a session. By default, Ansible sets the number of forks to
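The fork count can also be raised per run when an inventory is large; the
value here is only an example.

.. code-block:: bash

   # Override the default fork count for a single playbook run
   openstack-ansible setup-hosts.yml --forks 20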


@@ -10,7 +10,7 @@ adding new deployment groups. It is also possible to destroy containers
if needed after changes and modifications are complete.
Scale individual services
-~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------
Individual OpenStack services, and other open source project services,
run within containers. It is possible to scale out these services by
@@ -63,7 +63,7 @@ modifying the ``/etc/openstack_deploy/openstack_user_config.yml`` file.
$ openstack-ansible lxc-containers-create.yml rabbitmq-install.yml
Destroy and recreate containers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------
Resolving some issues may require destroying a container, and rebuilding
that container from the beginning. It is possible to destroy and
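A minimal sketch of that destroy-and-recreate cycle, using the playbook
naming convention shown earlier in this file; the limit value is a
placeholder.

.. code-block:: bash

   # Destroy one container, then build it again
   openstack-ansible lxc-containers-destroy.yml --limit "<container_name>"
   openstack-ansible lxc-containers-create.yml --limit "<container_name>"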


@@ -24,7 +24,7 @@ if you want to use a different port.
configure your firewall.
Finding ports for your external load balancer
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------------------
As explained in the previous section, you can find (in each role
documentation) the default variables used for the public
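One way to locate those defaults on a deployed system is a simple search; the
role path and variable pattern here are assumptions, not part of this patch.

.. code-block:: bash

   # Look for service port variables shipped in a role's defaults
   grep -R "service_port" /etc/ansible/roles/os_glance/defaults/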


@@ -10,7 +10,7 @@ node, when the service is not running, or when changes are made to the
``/etc/mysql/my.cnf`` configuration file.
Verify cluster status
-~~~~~~~~~~~~~~~~~~~~~
+---------------------
Compare the output of the following command with the following output.
It should give you information about the status of your cluster.
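A minimal version of such a check, assuming it is run directly on one of the
Galera nodes (the exact command in the elided output may differ).

.. code-block:: bash

   # Show the wsrep cluster size and status for this node
   mysql -h localhost -e "SHOW STATUS LIKE 'wsrep_cluster_%';"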
@@ -42,7 +42,7 @@ processing SQL requests. When gracefully shutting down multiple nodes,
perform the actions sequentially to retain operation.
Start a cluster
-~~~~~~~~~~~~~~~
+---------------
Gracefully shutting down all nodes destroys the cluster. Starting or
restarting a cluster from zero nodes requires creating a new cluster on
@@ -147,7 +147,7 @@ one of the nodes.
.. _galera-cluster-recovery:
Galera cluster recovery
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------
Run the ``galera-install`` playbook using the ``galera-bootstrap`` tag
to automatically recover a node or an entire environment.
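Expressed as a command, that is roughly the following; the playbook name
follows the convention used elsewhere in these docs, so treat it as a sketch.

.. code-block:: bash

   # Bootstrap the Galera cluster back into existence
   openstack-ansible galera-install.yml --tags galera-bootstrap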
@@ -161,7 +161,7 @@ to automatically recover a node or an entire environment.
The cluster comes back online after completion of this command.
Recover a single-node failure
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a single node fails, the other nodes maintain quorum and
continue to process SQL requests.
@@ -202,7 +202,7 @@ continue to process SQL requests.
for the node.
Recover a multi-node failure
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When all but one node fails, the remaining node cannot achieve quorum and
stops processing SQL requests. In this situation, failed nodes that
@@ -290,7 +290,7 @@ recover cannot join the cluster because it no longer exists.
last resort, rebuild the container for the node.
Recover a complete environment failure
---------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Restore from backup if all of the nodes in a Galera cluster fail (do not
shutdown gracefully). Change to the ``playbook`` directory and run the
@@ -332,7 +332,7 @@ restart the cluster using the ``--wsrep-new-cluster`` command on one
node.
Rebuild a container
--------------------
+~~~~~~~~~~~~~~~~~~~
Recovering from certain failures require rebuilding one or more containers.


@@ -1,41 +1,41 @@
Prune Inventory Backup Archive
==============================
-The inventory backup archive will require maintenance over a long enough period
-of time.
+The inventory backup archive will require maintenance over a long enough
+period of time.
Bulk pruning
------------
-It's possible to do mass pruning of the inventory backup. The following example will
-prune all but the last 15 inventories from the running archive.
+It's possible to do mass pruning of the inventory backup. The following
+example will prune all but the last 15 inventories from the running archive.
.. code-block:: bash
ARCHIVE="/etc/openstack_deploy/backup_openstack_inventory.tar"
tar -tvf ${ARCHIVE} | \
head -n -15 | awk '{print $6}' | \
xargs -n 1 tar -vf ${ARCHIVE} --delete
Selective Pruning
-----------------
-To prune the inventory archive selectively first identify the files you wish to
-remove by listing them out.
+To prune the inventory archive selectively first identify the files you wish
+to remove by listing them out.
.. code-block:: bash
tar -tvf /etc/openstack_deploy/backup_openstack_inventory.tar
-rw-r--r-- root/root 110096 2018-05-03 10:11 openstack_inventory.json-20180503_151147.json
-rw-r--r-- root/root 110090 2018-05-03 10:11 openstack_inventory.json-20180503_151205.json
-rw-r--r-- root/root 110098 2018-05-03 10:12 openstack_inventory.json-20180503_151217.json
Now delete the targeted inventory archive.
.. code-block:: bash
tar -vf /etc/openstack_deploy/backup_openstack_inventory.tar --delete openstack_inventory.json-20180503_151205.json
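A quick sanity check after either pruning method (purely illustrative):

.. code-block:: bash

   # List the newest entries left in the archive after pruning
   tar -tvf /etc/openstack_deploy/backup_openstack_inventory.tar | tail -n 5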


@@ -27,7 +27,7 @@ restrictive environments. For more details on that setup, see
be released in Ansible version 2.3.
Create a RabbitMQ cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------
RabbitMQ clusters can be formed in two ways:
@@ -86,7 +86,7 @@ cluster of the first node.
Starting node rabbit@rabbit2 ...done.
Check the RabbitMQ cluster status
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------
#. Run ``rabbitmqctl cluster_status`` from either node.
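For reference, the manual clustering commands this section relies on look
like the following; the node name is an example.

.. code-block:: bash

   # Join this node to an existing cluster, then verify membership
   rabbitmqctl stop_app
   rabbitmqctl join_cluster rabbit@rabbit1
   rabbitmqctl start_app
   rabbitmqctl cluster_status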
@@ -119,7 +119,7 @@ process by stopping the rabbitmq application on the third node.
...done.
Stop and restart a RabbitMQ cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------
To stop and start the cluster, keep in mind the order in
which you shut the nodes down. The last node you stop, needs to be the
@@ -130,7 +130,7 @@ it thinks the current `master` should not be the master and drops the messages
to ensure that no new messages are queued while the real master is down.
RabbitMQ and mnesia
-~~~~~~~~~~~~~~~~~~~
+-------------------
Mnesia is a distributed database that RabbitMQ uses to store information about
users, exchanges, queues, and bindings. Messages, however
@@ -143,7 +143,7 @@ To view the locations of important Rabbit files, see
`File Locations <https://www.rabbitmq.com/relocate.html>`_.
Repair a partitioned RabbitMQ cluster for a single-node
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------------
Invariably due to something in your environment, you are likely to lose a
node in your cluster. In this scenario, multiple LXC containers on the same host
@@ -195,7 +195,7 @@ the failing node.
...done.
Repair a partitioned RabbitMQ cluster for a multi-node cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------------------------------
The same concepts apply to a multi-node cluster that exist in a single-node
cluster. The only difference is that the various nodes will actually be


@@ -627,11 +627,11 @@ containers.
Restoring inventory from backup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-OpenStack-Ansible maintains a running archive of inventory. If a change has been
-introduced into the system that has broken inventory or otherwise has caused an
-unforseen issue, the inventory can be reverted to an early version. The backup
-file ``/etc/openstack_deploy/backup_openstack_inventory.tar`` contains a set of
-timestamped inventories that can be restored as needed.
+OpenStack-Ansible maintains a running archive of inventory. If a change has
+been introduced into the system that has broken inventory or otherwise has
+caused an unforeseen issue, the inventory can be reverted to an earlier
+version. The backup file ``/etc/openstack_deploy/backup_openstack_inventory.tar``
+contains a set of timestamped inventories that can be restored as needed.
Example inventory restore process.
@@ -647,5 +647,5 @@ Example inventory restore process.
rm -rf /tmp/inventory_restore
-At the completion of this operation the inventory will be restored to the ealier
-version.
+At the completion of this operation the inventory will be restored to the
+earlier version.
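The restore steps elided from this hunk likely follow this shape; the
timestamped filename is an example, not taken from the patch.

.. code-block:: bash

   # Extract one timestamped inventory and put it back in place
   mkdir -p /tmp/inventory_restore
   tar -xvf /etc/openstack_deploy/backup_openstack_inventory.tar \
       -C /tmp/inventory_restore openstack_inventory.json-20180503_151147.json
   cp /tmp/inventory_restore/openstack_inventory.json-20180503_151147.json \
      /etc/openstack_deploy/openstack_inventory.json
   rm -rf /tmp/inventory_restore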


@@ -278,6 +278,7 @@ Testing a new role with an AIO
deployment requirements (secrets and var files, HAProxy yml fragments,
repo_package files, etc.) in their own files it makes it easy for you to
automate these additional steps when testing your role.
+
Integrated repo functional or scenario testing
----------------------------------------------


@@ -25,10 +25,11 @@ Network configuration
Switch port configuration
-------------------------
-The following example provides a good reference for switch configuration and cab
-layout. This example may be more that what is required for basic setups however
-it can be adjusted to just about any configuration. Additionally you will need
-to adjust the VLANS noted within this example to match your environment.
+The following example provides a good reference for switch configuration and
+cabling layout. This example may be more than what is required for basic
+setups; however, it can be adjusted to just about any configuration.
+Additionally, you will need to adjust the VLANs noted within this example to
+match your environment.
.. image:: ../figures/example-switchport-config-and-cabling.png
:width: 100%