diff --git a/doc/source/admin/maintenance-tasks/ansible-modules.rst b/doc/source/admin/maintenance-tasks/ansible-modules.rst index b729067047..cd8d5dff77 100644 --- a/doc/source/admin/maintenance-tasks/ansible-modules.rst +++ b/doc/source/admin/maintenance-tasks/ansible-modules.rst @@ -45,7 +45,7 @@ For more information, see `Inventory `_. Running the shell module -~~~~~~~~~~~~~~~~~~~~~~~~ +------------------------ The two most common modules used are the ``shell`` and ``copy`` modules. The ``shell`` module takes the command name followed by a list of space delimited @@ -78,7 +78,7 @@ For more information, see `shell - Execute commands in nodes `_. Running the copy module -~~~~~~~~~~~~~~~~~~~~~~~ +----------------------- The copy module copies a file on a local machine to remote locations. Use the fetch module to copy files from remote locations to the local machine. If you @@ -126,7 +126,7 @@ from a single Compute host: -rw-r--r-- 1 root root 2428624 Dec 15 01:23 /tmp/aio1/var/log/nova/nova-compute.log Using tags -~~~~~~~~~~ +---------- Tags are similar to the limit flag for groups except tags are used to only run specific tasks within a playbook. For more information on tags, see @@ -135,7 +135,7 @@ and `Understanding ansible tags `_. Ansible forks -~~~~~~~~~~~~~ +------------- The default ``MaxSessions`` setting for the OpenSSH Daemon is 10. Each Ansible fork makes use of a session. By default, Ansible sets the number of forks to diff --git a/doc/source/admin/maintenance-tasks/containers.rst b/doc/source/admin/maintenance-tasks/containers.rst index 67a589f486..222f1b189d 100644 --- a/doc/source/admin/maintenance-tasks/containers.rst +++ b/doc/source/admin/maintenance-tasks/containers.rst @@ -10,7 +10,7 @@ adding new deployment groups. It is also possible to destroy containers if needed after changes and modifications are complete. Scale individual services -~~~~~~~~~~~~~~~~~~~~~~~~~ +------------------------- Individual OpenStack services, and other open source project services, run within containers. It is possible to scale out these services by @@ -63,7 +63,7 @@ modifying the ``/etc/openstack_deploy/openstack_user_config.yml`` file. $ openstack-ansible lxc-containers-create.yml rabbitmq-install.yml Destroy and recreate containers -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +------------------------------- Resolving some issues may require destroying a container, and rebuilding that container from the beginning. It is possible to destroy and diff --git a/doc/source/admin/maintenance-tasks/firewalls.rst b/doc/source/admin/maintenance-tasks/firewalls.rst index e500072df0..e0fc7201a4 100644 --- a/doc/source/admin/maintenance-tasks/firewalls.rst +++ b/doc/source/admin/maintenance-tasks/firewalls.rst @@ -24,7 +24,7 @@ if you want to use a different port. configure your firewall. Finding ports for your external load balancer -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +--------------------------------------------- As explained in the previous section, you can find (in each role documentation) the default variables used for the public diff --git a/doc/source/admin/maintenance-tasks/galera.rst b/doc/source/admin/maintenance-tasks/galera.rst index 5f9ac71704..45ed4830d4 100644 --- a/doc/source/admin/maintenance-tasks/galera.rst +++ b/doc/source/admin/maintenance-tasks/galera.rst @@ -10,7 +10,7 @@ node, when the service is not running, or when changes are made to the ``/etc/mysql/my.cnf`` configuration file. 
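As a consolidated illustration of the ad-hoc module usage discussed in ``ansible-modules.rst`` above, the following is a minimal sketch; the ``nova_compute`` group name, the log path, and the fork count are illustrative assumptions rather than values mandated by this guide:

.. code-block:: bash

   # Run an ad-hoc shell command against an assumed "nova_compute" group
   ansible nova_compute -m shell -a "free -m"

   # Pull a remote log back to the deployment host; the fetch module writes
   # one copy per host under the dest directory (source path is illustrative)
   ansible nova_compute -m fetch -a "src=/var/log/nova/nova-compute.log dest=/tmp"

   # Raise the fork count for a single playbook run, keeping it at or below
   # the OpenSSH MaxSessions limit described above
   openstack-ansible setup-hosts.yml --forks 10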
Verify cluster status
-~~~~~~~~~~~~~~~~~~~~~
+---------------------

Compare the output of the following command with the following output.
It should give you information about the status of your cluster.

@@ -42,7 +42,7 @@ processing SQL requests. When gracefully shutting down multiple nodes,
perform the actions sequentially to retain operation.

Start a cluster
-~~~~~~~~~~~~~~~
+---------------

Gracefully shutting down all nodes destroys the cluster. Starting or
restarting a cluster from zero nodes requires creating a new cluster on
@@ -147,7 +147,7 @@ one of the nodes.

.. _galera-cluster-recovery:

Galera cluster recovery
-~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------

Run the ``galera-install`` playbook using the ``galera-bootstrap`` tag
to automatically recover a node or an entire environment.
@@ -161,7 +161,7 @@ to automatically recover a node or an entire environment.

The cluster comes back online after completion of this command.

Recover a single-node failure
------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a single node fails, the other nodes maintain quorum and
continue to process SQL requests.
@@ -202,7 +202,7 @@ continue to process SQL requests.
for the node.

Recover a multi-node failure
-----------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When all but one node fails, the remaining node cannot achieve quorum
and stops processing SQL requests. In this situation, failed nodes that
@@ -290,7 +290,7 @@ recover cannot join the cluster because it no longer exists.
last resort, rebuild the container for the node.

Recover a complete environment failure
---------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restore from backup if all of the nodes in a Galera cluster fail (do not
shutdown gracefully). Change to the ``playbook`` directory and run the
@@ -332,7 +332,7 @@ restart the cluster using the ``--wsrep-new-cluster`` command on one
node.

Rebuild a container
--------------------
+~~~~~~~~~~~~~~~~~~~

Recovering from certain failures require rebuilding one or more containers.

diff --git a/doc/source/admin/maintenance-tasks/inventory-backups.rst b/doc/source/admin/maintenance-tasks/inventory-backups.rst
index d6b67b734b..4bbfa828fd 100644
--- a/doc/source/admin/maintenance-tasks/inventory-backups.rst
+++ b/doc/source/admin/maintenance-tasks/inventory-backups.rst
@@ -1,41 +1,41 @@
Prune Inventory Backup Archive
==============================

-The inventory backup archive will require maintenance over a long enough period
-of time.
+Over a long enough period of time, the inventory backup archive will
+require maintenance.

Bulk pruning
------------

-It's possible to do mass pruning of the inventory backup. The following example will
-prune all but the last 15 inventories from the running archive.
+It's possible to do mass pruning of the inventory backup. The following
+example will prune all but the last 15 inventories from the running archive.

.. code-block:: bash

- ARCHIVE="/etc/openstack_deploy/backup_openstack_inventory.tar"
- tar -tvf ${ARCHIVE} | \
- head -n -15 | awk '{print $6}' | \
- xargs -n 1 tar -vf ${ARCHIVE} --delete
+ ARCHIVE="/etc/openstack_deploy/backup_openstack_inventory.tar"
+ tar -tvf ${ARCHIVE} | \
+ head -n -15 | awk '{print $6}' | \
+ xargs -n 1 tar -vf ${ARCHIVE} --delete

Selective Pruning
-----------------

-To prune the inventory archive selectively first identify the files you wish to
-remove by listing them out.
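Before running the bulk prune shown above, it can help to preview exactly which archive members the pipeline would delete; this sketch reuses the same listing commands with the destructive step omitted:

.. code-block:: bash

   # List the members the bulk prune would remove (all but the newest 15)
   ARCHIVE="/etc/openstack_deploy/backup_openstack_inventory.tar"
   tar -tvf ${ARCHIVE} | head -n -15 | awk '{print $6}'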
+To prune the inventory archive selectively, first identify the files you
+wish to remove by listing them out.

.. code-block:: bash

- tar -tvf /etc/openstack_deploy/backup_openstack_inventory.tar
+ tar -tvf /etc/openstack_deploy/backup_openstack_inventory.tar

- -rw-r--r-- root/root 110096 2018-05-03 10:11 openstack_inventory.json-20180503_151147.json
- -rw-r--r-- root/root 110090 2018-05-03 10:11 openstack_inventory.json-20180503_151205.json
- -rw-r--r-- root/root 110098 2018-05-03 10:12 openstack_inventory.json-20180503_151217.json
+ -rw-r--r-- root/root 110096 2018-05-03 10:11 openstack_inventory.json-20180503_151147.json
+ -rw-r--r-- root/root 110090 2018-05-03 10:11 openstack_inventory.json-20180503_151205.json
+ -rw-r--r-- root/root 110098 2018-05-03 10:12 openstack_inventory.json-20180503_151217.json

Now delete the targeted inventory archive.

.. code-block:: bash

- tar -vf /etc/openstack_deploy/backup_openstack_inventory.tar --delete openstack_inventory.json-20180503_151205.json
+ tar -vf /etc/openstack_deploy/backup_openstack_inventory.tar --delete openstack_inventory.json-20180503_151205.json

diff --git a/doc/source/admin/maintenance-tasks/rabbitmq-maintain.rst b/doc/source/admin/maintenance-tasks/rabbitmq-maintain.rst
index 308848b935..7040cc729e 100644
--- a/doc/source/admin/maintenance-tasks/rabbitmq-maintain.rst
+++ b/doc/source/admin/maintenance-tasks/rabbitmq-maintain.rst
@@ -27,7 +27,7 @@ restrictive environments. For more details on that setup, see
be released in Ansible version 2.3.

Create a RabbitMQ cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------

RabbitMQ clusters can be formed in two ways:

@@ -86,7 +86,7 @@ cluster of the first node.
Starting node rabbit@rabbit2 ...done.

Check the RabbitMQ cluster status
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+---------------------------------

#. Run ``rabbitmqctl cluster_status`` from either node.

@@ -119,7 +119,7 @@ process by stopping the rabbitmq application on the third node.
...done.

Stop and restart a RabbitMQ cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------

To stop and start the cluster, keep in mind the order in
which you shut the nodes down. The last node you stop, needs to be the
@@ -130,7 +130,7 @@ it thinks the current `master` should not be the master and drops the messages
to ensure that no new messages are queued while the real master is down.

RabbitMQ and mnesia
-~~~~~~~~~~~~~~~~~~~
+-------------------

Mnesia is a distributed database that RabbitMQ uses to store information
about users, exchanges, queues, and bindings. Messages, however
@@ -143,7 +143,7 @@ To view the locations of important Rabbit files, see
`File Locations `_.

Repair a partitioned RabbitMQ cluster for a single-node
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-------------------------------------------------------

Invariably due to something in your environment, you are likely to lose a
node in your cluster. In this scenario, multiple LXC containers on the same host
@@ -195,7 +195,7 @@ the failing node.
...done.

Repair a partitioned RabbitMQ cluster for a multi-node cluster
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+--------------------------------------------------------------

The same concepts apply to a multi-node cluster that exist in a single-node
cluster.
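The single-node repair described above is commonly resolved by resetting the partitioned node and rejoining it to an intact member. The following is a generic sketch of that approach rather than a verbatim excerpt from this guide; the node name ``rabbit@rabbit1`` is illustrative, and ``rabbitmqctl reset`` erases the local mnesia state of the node it runs on:

.. code-block:: bash

   # On the partitioned node: stop the application, clear local state,
   # and rejoin a healthy member of the cluster
   rabbitmqctl stop_app
   rabbitmqctl reset
   rabbitmqctl join_cluster rabbit@rabbit1
   rabbitmqctl start_app

   # Confirm membership from any node
   rabbitmqctl cluster_status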
The only difference is that the various nodes will actually be

diff --git a/doc/source/admin/troubleshooting.rst b/doc/source/admin/troubleshooting.rst
index 5850129ec5..47070cf2b6 100644
--- a/doc/source/admin/troubleshooting.rst
+++ b/doc/source/admin/troubleshooting.rst
@@ -627,11 +627,11 @@ containers.
Restoring inventory from backup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-OpenStack-Ansible maintains a running archive of inventory. If a change has been
-introduced into the system that has broken inventory or otherwise has caused an
-unforseen issue, the inventory can be reverted to an early version. The backup
-file ``/etc/openstack_deploy/backup_openstack_inventory.tar`` contains a set of
-timestamped inventories that can be restored as needed.
+OpenStack-Ansible maintains a running archive of inventory. If a change has
+been introduced into the system that has broken inventory or otherwise has
+caused an unforeseen issue, the inventory can be reverted to an earlier version.
+The backup file ``/etc/openstack_deploy/backup_openstack_inventory.tar``
+contains a set of timestamped inventories that can be restored as needed.

Example inventory restore process.

@@ -647,5 +647,5 @@ Example inventory restore process.
rm -rf /tmp/inventory_restore

-At the completion of this operation the inventory will be restored to the ealier
-version.
+At the completion of this operation the inventory will be restored to the
+earlier version.

diff --git a/doc/source/contributor/testing.rst b/doc/source/contributor/testing.rst
index 7687779b4b..4fd1ccaa43 100644
--- a/doc/source/contributor/testing.rst
+++ b/doc/source/contributor/testing.rst
@@ -278,6 +278,7 @@ Testing a new role with an AIO deployment
requirements (secrets and var files, HAProxy yml fragments,
repo_package files, etc.) in their own files it makes it easy
for you to automate these additional steps when testing your role.
+
Integrated repo functional or scenario testing
----------------------------------------------

diff --git a/doc/source/user/test/example.rst b/doc/source/user/test/example.rst
index dbbff50ace..5fb8b94b24 100644
--- a/doc/source/user/test/example.rst
+++ b/doc/source/user/test/example.rst
@@ -25,10 +25,11 @@ Network configuration
Switch port configuration
-------------------------

-The following example provides a good reference for switch configuration and cab
-layout. This example may be more that what is required for basic setups however
-it can be adjusted to just about any configuration. Additionally you will need
-to adjust the VLANS noted within this example to match your environment.
+The following example provides a good reference for switch configuration and
+cabling layout. This example may be more than what is required for basic
+setups; however, it can be adjusted to just about any configuration.
+Additionally, you will need to adjust the VLANs noted within this example to
+match your environment.

.. image:: ../figures/example-switchport-config-and-cabling.png
   :width: 100%
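As a companion to the ``Restoring inventory from backup`` hunk above, here is a minimal sketch of the restore flow it references; the timestamped file name is illustrative, borrowed from the pruning listing earlier in this change:

.. code-block:: bash

   # Unpack the backup archive into a scratch directory
   mkdir /tmp/inventory_restore
   tar -xf /etc/openstack_deploy/backup_openstack_inventory.tar \
       -C /tmp/inventory_restore

   # Copy one timestamped inventory back into place (file name illustrative)
   cp /tmp/inventory_restore/openstack_inventory.json-20180503_151147.json \
      /etc/openstack_deploy/openstack_inventory.json

   # Remove the scratch directory
   rm -rf /tmp/inventory_restore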