Retire Tripleo: remove repo content

TripleO project is retiring
- https://review.opendev.org/c/openstack/governance/+/905145

This commit removes the content of this project repo.

Change-Id: Ic84ac5786ecc68729827ff35ad7ee3ef21a6652f
This commit is contained in:
Ghanshyam Mann 2024-02-24 11:31:11 -08:00
parent 2ab288f14e
commit 7350bf7eb8
41 changed files with 8 additions and 2285 deletions

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,385 +0,0 @@
# Backup and Restore Operations #
The `openstack-operations` role includes some foundational backup and restore Ansible tasks to help with automatically backing up and restoring OpenStack services. The current services available to backup and restore include:
* MySQL on a galera cluster
* Redis
Scenarios tested:
* TripleO, 1 Controller, 1 Compute, backup to the undercloud
* TripleO, 1 Controller, 1 Compute, backup to remote server
* TripleO, 3 Controllers, 1 Compute, backup to the undercloud
* TripleO, 3 Controllers, 1 Compute, backup to remote server
## Architecture ##
The architecture uses three main host types:
* Target Hosts - The OpenStack nodes with data to back up. For example, any nodes with database servers running.
* Backup Host - The destination to store the backup.
* Control Host - The host that executes the playbook. For example, this would be the undercloud on TripleO.
You can also unify the Backup Host and Control Host onto a single host. For example, a host that runs playbooks AND stores the backup data.
## Requirements ##
General Requirements:
* Backup Host needs access to the `rsync` package. A task in `initialize_backup_host.yml` will attempt to install it.
MySQL/Galera
* Target Hosts need access to the `mysql` package. Tasks in the backup and restore files will attempt to install it.
* When restoring to Galera, the Control Host requires the `pacemaker_resource` module. You can obtain this module from the `ansible-pacemaker` RPM. If your operating system does not have access to this package, you can clone the [ansible-pacemaker git repo](https://github.com/redhat-openstack/ansible-pacemaker). When running a restore playbook, include the `ansible-pacemaker` module using the `-M` option (e.g. `ansible-playbook -M /usr/share/ansible-modules ...`)
Filesystem
* No special requirements; only the `tar` command is used.
Redis
* Target Hosts need access to the `redis` package. Tasks in the backup and restore files will attempt to install it.
* When restoring Redis, the Control Host requires the `pacemaker_resource` module. You can obtain this module from the `ansible-pacemaker` RPM. If your operating system does not have access to this package, you can clone the [ansible-pacemaker git repo](https://github.com/redhat-openstack/ansible-pacemaker). When running a restore playbook, include the `ansible-pacemaker` module using the `-M` option (e.g. `ansible-playbook -M /usr/share/ansible-modules ...`)
## Task Files ##
The following is a list of the task files used in the backup and restore process.
Initialization Tasks:
* `initialize_backup_host.yml` - Makes sure the Backup Host (destination) has an SSH key pair and rsync installed.
* `enable_ssh.yml` - Enables SSH access from the Backup Host to the Target Hosts. This is so rsync can pull the backed up data and push the data during a restore.
* `disable_ssh.yml` - Disables SSH access from the Backup Host to the Target Hosts. This ensures that access is only granted during the backup.
* `set_bootstrap.yml` - In situations with high availability, some restore tasks (such as Pacemaker functions) only need to be carried out by one of the Target Hosts. The tasks in `set_bootstrap.yml` set a "bootstrap" node to help execute single tasks on only one Target Host. This is usually the first node in your list of targets.
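The bootstrap idea can be pictured with a minimal sketch. This is illustrative only and not the actual contents of `set_bootstrap.yml`; the fact name and the placeholder task are assumptions:

~~~~
# Illustrative sketch of the bootstrap mechanism, not the role's real task file.
- name: Pick the first Target Host in the play as the bootstrap node
  set_fact:
    bootstrap_node: "{{ ansible_play_hosts | first }}"  # hypothetical fact name

- name: Run a one-time operation (e.g. a Pacemaker action) only on the bootstrap node
  debug:
    msg: "Running single-host restore step"  # placeholder for the real task
  when: inventory_hostname == bootstrap_node
~~~~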
Backup Tasks:
* `backup_mysql.yml` - Performs a backup of the OpenStack MySQL data and grants, archives them, and sends them to the desired backup host.
* `backup_filesystem.yml` - Creates a tar file of a given list of files/directories and sends them to a desired backup host.
* `backup_redis.yml` - Performs a backup of Redis data from one node, archives them, and sends them to the desired backup host.
Restore Tasks:
* `restore_galera.yml` - Performs a restore of the OpenStack MySQL data and grants on a containerized galera cluster. This involves shutting down the current galera cluster, creating a brand new MySQL database, then importing the data and grants from the archive. In addition, the playbook saves a copy of the old data in case the restore process fails.
* `restore_redis.yml` - Performs a restore of Redis data from one node to all nodes and resets the permissions using a redis container.
Validation Tasks:
* `validate_galera.yml` - Performs the equivalent of `clustercheck`, i.e. checks that `wsrep_local_state` is 4 ("Synced").
* `validate_redis.yml` - Performs a Redis check with `redis-cli ping`.
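As a rough sketch, the two validations amount to checks like the following; the exact tasks in the role may differ:

~~~~
# Approximate equivalents of the validation tasks; not the role's exact code.
- name: Check that galera is synced (wsrep_local_state == 4)
  command: mysql --batch --skip-column-names -e "SHOW STATUS LIKE 'wsrep_local_state'"
  register: wsrep_state
  failed_when: wsrep_state.stdout.split()[-1] != '4'
  changed_when: false

- name: Check that Redis answers a ping
  command: redis-cli ping
  register: redis_ping
  failed_when: "'PONG' not in redis_ping.stdout"
  changed_when: false
~~~~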
## Variables ##
Use the following variables to customize how you want to run these tasks.
Variables for all backup tasks:
* `backup_directory` - The location on the backup host to rsync archives. If unset, defaults to the home directory of the chosen inventory user for the Backup Host. If you aim to have recurring backup jobs and store multiple iterations of the backup, you should set this to a dynamic value such as a timestamp or UUID.
* `backup_server_hostgroup` - The name of the host group containing the backup server. Ideally, this host group only contains the Backup Host. If more than one host exists in this group, the tasks pick the first host in the group. Note the following:
* The chosen Backup Host Group must be in your inventory.
* The Backup Host must be initialized using `initialize_backup_host.yml`. You can do this by placing the Backup Host in a single-host group called `backup` and referring to it as `hosts: backup[0]` in a play that runs the `initialize_backup_host` tasks.
* You can only use one Backup Host. This is because the delegation for the `synchronize` module allows only one host.
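A minimal initialization play might look like the following sketch. The timestamped `backup_directory` is an illustrative choice for keeping multiple backup iterations, not a role default, and assumes fact gathering is enabled:

~~~~
---
- name: Initialize the Backup Host before any backup or restore plays
  hosts: backup[0]
  vars:
    # Illustrative: a dated directory keeps successive backups separate.
    backup_directory: "/home/backup/{{ ansible_date_time.date | default('latest') }}/"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: initialize_backup_host
~~~~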
MySQL and galera backup and restore variables:
* `kolla_path` - The location of the configuration for Kolla containers. Defaults to `/var/lib/config-data/puppet-generated`.
* `mysql_bind_host` - The IP address for database server access. The tasks place a temporary firewall block on this IP address to prevent services writing to the database during the restore.
* `mysql_root_password` - The original root password to access the database. If unset, it checks the Puppet hieradata for the password.
* `mysql_clustercheck_password` - The original password for the clustercheck user. If unset, it checks the Puppet hieradata for the password.
* `galera_container_image` - The image to use for the temporary container to restore the galera database. If unset, it tries to determine the image from the existing galera container.
Filesystem backup variables:
* `backup_dirs` - List of the files and directories to back up.
* `backup_excludes` - List of files to exclude from the backup.
* `backup_file` - The name of the backup archive file.
Redis backup and restore variables:
* `redis_vip` - The VIP address of the Redis cluster. If unset, it checks the Puppet hieradata for the VIP.
* `redis_masterauth_password` - The master password for the Redis cluster. If unset, it checks the Puppet hieradata for the password.
* `redis_container_image` - The image to use for the temporary container that restores the permissions to the Redis data directory. If unset, it tries to determine the image from the existing redis container.
## Inventory and Playbooks ##
You ultimately define how to use the tasks with your own playbooks and inventory. The inventory should include the host groups and users to access each host type. For example:
~~~~
[my_backup_host]
192.0.2.200 ansible_user=backup
[my_target_host]
192.0.2.101 ansible_user=openstack
192.0.2.102 ansible_user=openstack
192.0.2.103 ansible_user=openstack
[all:vars]
backup_directory="/home/backup/my-backup-folder/"
~~~~
The process for your playbook depends largely on whether you want to back up or restore. However, the general process usually follows:
1. Initialize the backup host
2. Ensure SSH access from the backup host to your OpenStack nodes
3. Perform the backup or restore. If necessary, set a bootstrap to isolate single-host tasks on one Target Host.
4. (Optional) If using a separate Backup Host (i.e. not the Control Host), disable SSH access from the backup host to your OpenStack nodes.
## Examples ##
The following examples show how to use the backup and restore tasks.
### Backup and restore galera and redis to a remote backup server ###
This example shows how to backup data to the `root` user on a remote backup server, and then restore it. The inventory file for both functions are the same:
~~~~
[backup]
192.0.2.250 ansible_user=root
[mysql]
192.0.2.101 ansible_user=heat-admin
192.0.2.102 ansible_user=heat-admin
192.0.2.103 ansible_user=heat-admin
[redis]
192.0.2.101 ansible_user=heat-admin
192.0.2.102 ansible_user=heat-admin
192.0.2.103 ansible_user=heat-admin
[all:vars]
backup_directory="/root/backup-test/"
~~~~
Backup Playbook:
~~~~
---
- name: Initialize backup host
  hosts: "{{ backup_hosts | default('backup') }}[0]"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: initialize_backup_host

- name: Backup MySQL database
  hosts: "{{ target_hosts | default('mysql') }}[0]"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: backup_mysql
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: disable_ssh

- name: Backup Redis database
  hosts: "{{ target_hosts | default('redis') }}[0]"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: backup_redis
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: disable_ssh
~~~~
We do not need to include the bootstrap tasks with the backup since all tasks are performed by one of the Target Hosts.
Restore Playbook:
~~~~
---
- name: Initialize backup host
  hosts: "{{ backup_hosts | default('backup') }}[0]"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: initialize_backup_host

- name: Restore MySQL database on galera cluster
  hosts: "{{ target_hosts | default('mysql') }}"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: set_bootstrap
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: restore_galera
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: disable_ssh

- name: Restore Redis data
  hosts: "{{ target_hosts | default('redis') }}"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: set_bootstrap
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: restore_redis
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: disable_ssh
~~~~
We include the bootstrap tasks with the restore since all Target Hosts are required for the restore, but certain operations are performed on only one of the hosts.
### Backup and restore galera and redis to a combined control/backup host ###
This example shows how to back up to a directory on the Control Host using the same user. In this case, we use the `stack` user for both Ansible and rsync operations. We also use the `heat-admin` user to access the OpenStack nodes. Both the backup and restore operations use the same inventory file:
~~~~
[backup]
localhost ansible_user=stack
[mysql]
192.0.2.101 ansible_user=heat-admin
192.0.2.102 ansible_user=heat-admin
192.0.2.103 ansible_user=heat-admin
[redis]
192.0.2.101 ansible_user=heat-admin
192.0.2.102 ansible_user=heat-admin
192.0.2.103 ansible_user=heat-admin
[all:vars]
backup_directory="/home/stack/backup-test/"
~~~~
Backup Playbook:
~~~~
---
- name: Initialize backup host
  hosts: "{{ backup_hosts | default('backup') }}[0]"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: initialize_backup_host

- name: Backup MySQL database
  hosts: "{{ target_hosts | default('mysql') }}[0]"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: backup_mysql

- name: Backup Redis database
  hosts: "{{ target_hosts | default('redis') }}[0]"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: backup_redis
~~~~
Restore Playbook:
~~~~
---
- name: Initialize backup host
  hosts: "{{ backup_hosts | default('backup') }}[0]"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: initialize_backup_host

- name: Restore MySQL database on galera cluster
  hosts: "{{ target_hosts | default('mysql') }}"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: set_bootstrap
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: restore_galera

- name: Restore Redis data
  hosts: "{{ target_hosts | default('redis') }}"
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: set_bootstrap
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: restore_redis
~~~~
In this situation, we do not include the `disable_ssh` tasks since this would disable access from the Control Host to the OpenStack nodes for future Ansible operations.
### Backup filesystem from controller ###
Inventory file
~~~~
[backup]
undercloud-0 ansible_connection=local
[filesystem]
controller-0 ansible_user=heat-admin ansible_host=192.168.24.6
controller-1 ansible_user=heat-admin ansible_host=192.168.24.20
controller-2 ansible_user=heat-admin ansible_host=192.168.24.8
[all:vars]
backup_directory="/var/tmp/backup"
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
~~~~
Filesystem Backup Playbook:
~~~~
---
- name: Initialize backup host
  hosts: "{{ backup_hosts | default('backup') }}[0]"
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: initialize_backup_host

- name: Backup Filesystem
  hosts: "{{ target_hosts | default('filesystem') }}"
  become: yes
  vars:
    backup_server_hostgroup: "{{ backup_hosts | default('backup') }}"
    backup_file: "filesystem.bck.tar"
    backup_dirs:
      - /etc
      - /var/lib/nova
      - /var/lib/glance
      - /var/lib/heat-config
      - /var/lib/heat-cfntools
      - /var/lib/openvswitch
      - /var/lib/config-data
      - /var/lib/tripleo-config
      - /srv/node
      - /usr/libexec/os-apply-config/
      - /root
  tasks:
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: enable_ssh
    - import_role:
        name: ansible-role-openstack-operations
        tasks_from: backup_filesystem
~~~~


@@ -1,193 +1,10 @@
OpenStack Operations
====================
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
Perform various common OpenStack operations by calling this role with an action and appropriate variables.
Restart Services
----------------
Restarting OpenStack services is complex. This role aims to intelligently evaluate the environment and determine how a service is running, what components constitute that service, and restart those components appropriately. This allows the operator to think in terms of the service that needs restarting rather than having to remember all the details required to restart that service.
This role uses a service map located in ``vars/main.yml``. The service map is a dictionary with one key per service name. Each service name key contains three additional keys that list the SystemD unit files, container names, and vhosts used by each service. It is possible to extend this list of services by defining custom services in ``operations_custom_service_map``. This will be combined with the default service map and passed to the ``service_map_facts`` module.
The ``service_map_facts`` module will evaluate the target system and return a list of services or containers that need to be restarted. Those lists will then be used in subsequent tasks to restart a service or a container.
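The shape of a service map entry can be sketched roughly as follows; note that the key names below are assumptions for illustration, since the authoritative names live in ``vars/main.yml``:

.. code-block::

   # Hypothetical shape of a custom service entry; key names are assumed.
   operations_custom_service_map:
     my_service:
       systemd_unit_files:
         - my-service.service
       container_names:
         - my_service_api
       vhosts:
         - my-service-vhost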
Fetch Logs
----------
To fetch logs with this role, use the ``fetch_logs.yml`` tasks file. By default, every log file in ``/var/log`` matching the ``*.log`` pattern will be fetched from the remote and put into a folder adjacent to the playbook named for each host, preserving the directory structure as found on the remote host.
See ``defaults/main.yml`` for the dictionary of options to control logs that are fetched.
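For example, overriding a few of those options might look like the following sketch, based on the keys in ``defaults/main.yml``; the commented ``age`` value is an assumption about the underlying find-style filter:

.. code-block::

   operations_log_destination: /tmp/collected-logs
   operations_logs:
     paths: /var/log/containers
     patterns: '*.log'
     recurse: yes
     # age: 2d  # assumed pass-through to a find-style age filter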
Cleanup Container Items
-----------------------
**WARNING:** This will delete images, containers, and volumes from the target system(s).
To perform the most common cleanup tasks --- delete dangling images and volumes and delete exited or dead containers --- use the ``container_cleanup.yml`` tasks file.
This role includes modules for listing image, volume, and container IDs. The filtered lists (one each for images, containers, and volumes) returned by this module are used to determine which items to remove. Specifying multiple filters creates an ``and`` match, so all filters must match.
If using Docker, see these guides for `images <https://docs.docker.com/engine/reference/commandline/images/#filtering>`_, `containers <https://docs.docker.com/engine/reference/commandline/ps/#filtering>`_, and `volumes <https://docs.docker.com/engine/reference/commandline/volume_ls/#filtering>`_ for filter options.
Backup and Restore Operations
-----------------------------
See `Backup and Restore Operations`__ for more details.
__ https://github.com/openstack/ansible-role-openstack-operations/blob/master/README-backup-ops.md
Requirements
------------
- ansible >= 2.4
If using Docker:
- docker-py >= 1.7.0
- Docker API >= 1.20
Role Variables
--------------
.. list-table:: Variables used for cleaning up Docker
:widths: auto
:header-rows: 1
* - Name
- Default Value
- Description
* - `operations_container_runtime`
- `docker`
- Container runtime to use. Currently supports `docker` and `podman`.
* - `operations_image_filter`
- `['dangling=true']`
- List of image filters.
* - `operations_volume_filter`
- `['dangling=true']`
- List of volume filters.
* - `operations_container_filter`
- `['status=exited', 'status=dead']`
- List of container filters.
.. list-table:: Variables for fetching logs
:widths: auto
:header-rows: 1
* - Name
- Default Value
- Description
* - `operations_log_destination`
- `{{ playbook_dir }}`
- Path where logs will be stored when fetched from remote systems.
.. list-table:: Variables for restarting services
:widths: auto
:header-rows: 1
* - Name
- Default Value
- Description
* - `operations_services_to_restart`
- `[]`
- List of services to restart on target systems.
* - `operations_custom_service_map`
- `{}`
- Dictionary of services and their systemd unit files, container names, and vhosts. This will be combined with the builtin list of services in `vars/main.yml`.
.. list-table:: Variables for backup
:widths: auto
:header-rows: 1
* - Name
- Default Value
- Description
* - `backup_tmp_dir`
- `/var/tmp/openstack-backup`
- Temporary directory created on host to store backed up data.
* - `backup_directory`
- `/home/{{ ansible_user }}`
- Directory on backup host where backup archive will be saved.
* - `backup_host`
- `{{ hostvars[groups[backup_server_hostgroup][0]]['inventory_hostname'] }}`
- Backup host where data will be archived.
* - `backup_host_ssh_args`
- `-F /var/tmp/{{ ansible_hostname }}_config`
- ssh arguments used for connecting to backup host.
Dependencies
------------
None
Example Playbooks
-----------------
Restart Services playbook
-------------------------
.. code-block::

   - hosts: all
     tasks:
       - name: Restart a service
         import_role:
           name: openstack-operations
           tasks_from: restart_service.yml
         vars:
           operations_services_to_restart:
             - docker
             - keystone
             - mariadb
Cleanup Container Items playbook
--------------------------------
.. code-block::

   - name: Cleanup dangling and dead images, containers, and volumes
     hosts: all
     tasks:
       - name: Cleanup unused images, containers, and volumes
         import_role:
           name: openstack-operations
           tasks_from: container_cleanup.yml

   - name: Use custom filters for cleaning
     hosts: all
     tasks:
       - name: Cleanup unused images, containers, and volumes
         import_role:
           name: openstack-operations
           tasks_from: container_cleanup.yml
         vars:
           operations_image_filter:
             - before=image1
           operations_volume_filter:
             - label=my_volume
           operations_container_filter:
             - name=keystone
Fetch Logs playbook
-------------------
.. code-block::

   - hosts: all
     tasks:
       - name: Fetch logs
         import_role:
           name: openstack-operations
           tasks_from: fetch_logs.yml
License
-------
Apache 2.0
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.


@@ -1,6 +0,0 @@
# These are required here because ansible can't be in global-requirements due
# to licensing conflicts. But we still need to be able to pull them in for
# lint checks and want to document these as ansible specific things that may
# be required for this repository.
ansible==2.7.6
ansible-lint<=4.0.0


@@ -1,21 +0,0 @@
#!/bin/bash
# ANSIBLE0006: Using command rather than module
# we have a few use cases where we need to use curl and rsync
# ANSIBLE0007: Using command rather than an argument to e.g file
# we have a lot of 'rm' command and we should use file module instead
# ANSIBLE0010: Package installs should not use latest.
# Sometimes we need to update some packages.
# ANSIBLE0012: Commands should not change things if nothing needs doing
# ANSIBLE0013: Use Shell only when shell functionality is required
# ANSIBLE0016: Tasks that run when changed should likely be handlers
# this requires refactoring roles, skipping for now
SKIPLIST="ANSIBLE0006,ANSIBLE0007,ANSIBLE0010,ANSIBLE0012,ANSIBLE0013,ANSIBLE0016"
# Lint the role.
ansible-lint -vvv -x $SKIPLIST ./ || lint_error=1
# Exit with 1 if we had at least one error or warning.
if [[ -n "$lint_error" ]]; then
exit 1;
fi

@ -1,12 +0,0 @@
#!/bin/bash
_TOP=$(dirname "$(readlink -f -- "$0")")/..
if [ ! -d ~/.ansible/plugins/modules ]; then
    mkdir -p ~/.ansible/plugins/modules
fi
cd ~/.ansible/plugins/modules
# Clone each non-galaxy module repository listed in non-galaxy.txt.
xargs -I{} git clone {} < "$_TOP/non-galaxy.txt"

@ -1,57 +0,0 @@
# Cleanup Container
operations_container_runtime: docker
operations_image_filter:
- dangling=true
operations_volume_filter:
- dangling=true
operations_container_filter:
- status=exited
- status=dead
# Fetch Logs
operations_log_destination: "{{ playbook_dir }}"
operations_logs:
  # age:
  # contains:
  # file_type:
  # follow:
  paths: /var/log
  patterns: '*.log'
  recurse: yes
  # size:
  # use_regex:
# Restart Service
operations_services_to_restart: []
operations_custom_service_map: {}
# Backup
backup_tmp_dir: /var/tmp/openstack-backup
backup_directory: "/home/{{ ansible_user }}"
backup_host: "{{ hostvars[groups[backup_server_hostgroup][0]]['inventory_hostname'] }}"
backup_host_ssh_args: "-F /var/tmp/{{ ansible_hostname }}_config"
# Filesystem
backup_file: "system.bck.tar"
backup_dirs:
- /etc
- /var/lib/nova
- /var/lib/glance
- /var/lib/keystone
- /var/lib/cinder
- /var/lib/heat
- /var/lib/heat-config
- /var/lib/heat-cfntools
- /var/lib/rabbitmq
- /var/lib/neutron
- /var/lib/haproxy
- /var/lib/openvswitch
- /var/lib/redis
- /srv/node
- /usr/libexec/os-apply-config/
- /home/heat-admin
- /root
backup_excludes:
- /var/lib/nova/instances

@ -1,235 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2018 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = """
---
module: docker_facts
author:
- Sam Doran (@samdoran)
version_added: '2.6'
short_description: Gather list of volumes, images, containers
notes:
- When specifying multiple filters, only assets matching B(all) filters
will be returned.
description:
- Gather a list of volumes, images, and containers on a running system
- Return both filtered and unfiltered lists of volumes, images,
and containers.
options:
image_filter:
description:
- List of k=v pairs to use as a filter for images.
type: list
required: false
volume_filter:
description:
- List of k=v pairs to use as a filter for volumes.
type: list
required: false
container_filter:
description:
- List of k=v pairs to use as a filter for containers.
type: list
required: false
"""
EXAMPLES = """
- name: Gather Docker facts
docker_facts:
- name: Gather filtered Docker facts
docker_facts:
image_filter:
- dangling=true
volume_filter:
- dangling=true
container_filter:
- status=exited
- status=dead
- name: Remove containers that matched filters
docker_container:
name: "{{ item }}"
state: absent
loop: "{{ docker.containers_filtered | map(attribute='id') | list }}"
"""
RETURN = """
docker:
description: >
Lists of container, volume, and image IDs,
both filtered and unfiltered.
returned: always
type: complex
contains:
containers:
description: List of dictionaries of container name, state, and ID
returned: always
type: complex
containers_filtered:
description: >
List of dictionaries of container name, state, and ID
that matched the filter(s)
returned: always
type: complex
images:
description: List of image UUIDs
returned: always
type: list
images_filtered:
description: List of UUIDs that matched the filter(s)
returned: always
type: list
volumes:
description: List of volume UUIDs
returned: always
type: list
volumes_filtered:
description: List of UUIDs that matched the filter(s)
returned: always
type: list
"""
from ansible.module_utils.docker_common import AnsibleDockerClient # noqa: E402,E501
def _list_or_dict(value):
    if isinstance(value, (list, dict)):
        return value
    raise TypeError
def _to_dict(client, filters):
    # Accept a list of 'k=v' strings, a single 'k=v' string, or a dict.
    # partition() keeps any '=' inside the value intact.
    filter_dict = {}
    if isinstance(filters, list):
        for item in filters:
            key, _, value = item.partition('=')
            filter_dict[key] = value
        return filter_dict
    elif isinstance(filters, str):
        key, _, value = filters.partition('=')
        filter_dict[key] = value
        return filter_dict
    elif isinstance(filters, dict):
        return filters
def get_facts(client, docker_type, filters=None):
result = []
function_to_call = globals()['get_{}'.format(docker_type)]
if filters and len(filters) > 1:
for f in filters:
result.extend(function_to_call(client, f))
else:
result = function_to_call(client, filters)
return result
def get_images(client, filters=None):
result = []
if filters:
filters = _to_dict(client, filters)
images = client.images(filters=filters)
if images:
images = [i['Id'].split(':')[-1] for i in images]  # 'sha256:<hash>' -> '<hash>'
result = images
return result
def get_containers(client, filters=None):
result = []
if filters:
filters = _to_dict(client, filters)
containers = client.containers(filters=filters)
if containers:
containers = [c['Id'].split(':')[-1] for c in containers]
result = containers
return result
def get_volumes(client, filters=None):
result = []
if filters:
filters = _to_dict(client, filters)
volumes = client.volumes(filters=filters)
if volumes['Volumes']:
volumes = [v['Name'] for v in volumes['Volumes']]
result = volumes
return result
def main():
argument_spec = dict(
image_filter=dict(type='list', default=[]),
volume_filter=dict(type='list', default=[]),
container_filter=dict(type='list', default=[]),
)
docker_client = AnsibleDockerClient(
argument_spec=argument_spec,
supports_check_mode=True,
)
docker_facts = {}
types_to_get = ['volumes', 'images', 'containers']
for t in types_to_get:
singular = t.rstrip('s')
filter_key = '{}_filter'.format(singular)
docker_facts[t] = get_facts(docker_client, t)
filters = docker_client.module.params[filter_key]
# Ensure we got a list of k=v filters
if filters and len(filters[0]) <= 1:
docker_client.module.fail_json(
msg='The supplied {filter_key} does not appear to be a list of'
' k=v filters: {filter_value}'.format(
filter_key=filter_key, filter_value=filters)
)
else:
docker_facts['{}_filtered'.format(t)] = get_facts(
docker_client, t,
filters=docker_client.module.params[filter_key])
results = dict(
ansible_facts=dict(
docker=docker_facts
)
)
docker_client.module.exit_json(**results)
if __name__ == '__main__':
main()

@ -1,126 +0,0 @@
# Copyright 2016 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import docker
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
def get_docker_containers(module, container_list):
from requests.exceptions import ConnectionError
if len(container_list) == 0:
    # Nothing to look up; skip connecting to the Docker daemon.
    return []
client = docker.from_env()
try:
containers = client.containers.list()
docker_list = [{'container_name': i.attrs['Name'].strip('/')} for i
in containers if i.attrs['Name'].strip('/') in
container_list]
return docker_list
except docker.errors.APIError as e:
module.fail_json(
msg='Error listing containers: {}'.format(to_native(e)))
except ConnectionError as e:
module.fail_json(
msg='Error connecting to Docker: {}'.format(to_native(e))
)
def get_systemd_services(module, service_unit_list):
if len(service_unit_list) == 0:
    # Nothing to check; skip the systemctl lookups.
    return []
systemctl_path = \
module.get_bin_path("systemctl",
opt_dirs=["/usr/bin", "/usr/local/bin"])
if systemctl_path is None:
return None
systemd_list = []
for i in service_unit_list:
rc, stdout, stderr = \
module.run_command("{} is-enabled {}".format(systemctl_path, i),
use_unsafe_shell=True)
if stdout == "enabled\n":
state_val = "enabled"
else:
state_val = "disabled"
systemd_list.append({"name": i, "state": state_val})
return systemd_list
def run_module():
module_args = dict(service_map=dict(type='dict', required=True),
                   services=dict(type='list', required=True),
)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
service_map = module.params.get('service_map')
service_names = module.params.get('services')
services_to_restart = {i: service_map[i] for i in service_names}
container_list = []
for name in service_names:
try:
for item in services_to_restart[name]['container_name']:
container_list.append(item)
except KeyError:
# be tolerant if only a systemd unit is defined for the service
pass
service_unit_list = []
for svc_name in service_names:
try:
for i in services_to_restart[svc_name]['systemd_unit']:
service_unit_list.append(i)
except KeyError:
# be tolerant if only a container name is defined for the service
pass
result = dict(
ansible_facts=dict(
docker_containers_to_restart=get_docker_containers(
module, container_list),
systemd_services_to_restart=get_systemd_services(
module, service_unit_list),
)
)
if module.check_mode:
return result
module.exit_json(**result)
def main():
run_module()
if __name__ == "__main__":
main()

@ -1,17 +0,0 @@
galaxy_info:
author: Sam Doran
description: "Perform common OpenStack operations"
company: Ansible by Red Hat
license: Apache 2.0
min_ansible_version: 2.4
platforms:
- name: EL
versions:
- 7
galaxy_tags:
- openstack
- cloud
dependencies: []

@ -1 +0,0 @@
https://github.com/redhat-openstack/ansible-pacemaker.git

@ -1 +0,0 @@
pbr>=1.6

@ -1,38 +0,0 @@
[metadata]
name = ansible-role-openstack-operations
summary = ansible-openstack-operations - Ansible role to perform some OpenStack Day 2 Operations
description_file =
README.rst
author = TripleO Team
author_email = openstack-discuss@lists.openstack.org
home_page = https://opendev.org/openstack/ansible-role-openstack-operations
classifier =
License :: OSI Approved :: Apache Software License
Development Status :: 4 - Beta
Intended Audience :: Developers
Intended Audience :: System Administrators
Intended Audience :: Information Technology
Topic :: Utilities
[global]
setup_hooks =
pbr.hooks.setup_hook
[files]
data_files =
share/ansible/roles/openstack-operations/defaults = defaults/*
share/ansible/roles/openstack-operations/handlers = handlers/*
share/ansible/roles/openstack-operations/meta = meta/*
share/ansible/roles/openstack-operations/tasks = tasks/*
share/ansible/roles/openstack-operations/templates = templates/*
share/ansible/roles/openstack-operations/tests = tests/*
share/ansible/roles/openstack-operations/vars = vars/*
share/ansible/roles/openstack-operations/files = files/*
[wheel]
universal = 1
[pbr]
skip_authors = True
skip_changelog = True

@ -1,19 +0,0 @@
# Copyright Red Hat, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

@ -1,36 +0,0 @@
- name: Remove any existing backup directory
file:
path: "{{ backup_tmp_dir }}/filesystem"
state: absent
- name: Create a new backup directory
file:
path: "{{ backup_tmp_dir }}/filesystem"
state: directory
- name: Backup files and directories
shell: |
/bin/tar --ignore-failed-read --xattrs \
-zcf "{{ backup_tmp_dir }}/filesystem/{{ inventory_hostname }}-{{ backup_file }}" \
{% if backup_excludes is defined %}
{% for backup_exclude in backup_excludes %}
--exclude {{ backup_exclude }} \
{% endfor %}
{% endif %}
{{' '.join(backup_dirs) }}
ignore_errors: yes
delay: 60
- name: Copy the archive to the backup server
synchronize:
mode: pull
src: "{{ backup_tmp_dir }}/filesystem/{{ inventory_hostname }}-{{ backup_file }}"
dest: "{{ backup_directory }}"
set_remote_user: false
ssh_args: "{{ backup_host_ssh_args }}"
delegate_to: "{{ backup_host }}"
- name: Remove any existing backup directory
file:
path: "{{ backup_tmp_dir }}/filesystem"
state: absent

@ -1,63 +0,0 @@
# Tasks for dumping a MySQL backup on a single host and pulling it to the
# Backup Server.
- name: Make sure mysql client is installed on the Target Hosts
yum:
name: mariadb
state: installed
- name: Remove any existing database backup directory
file:
path: "{{ backup_tmp_dir }}/mysql"
state: absent
- name: Create a new MySQL database backup directory
file:
path: "{{ backup_tmp_dir }}/mysql"
state: directory
- name: Get the database root password
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password
when: mysql_root_password is undefined
register: mysql_root_password_cmd_output
become: true
no_log: true
- name: Convert the database root password if unknown
set_fact:
mysql_root_password: "{{ mysql_root_password_cmd_output.stdout_lines[0] }}"
when: mysql_root_password is undefined
no_log: true
# Originally used the script module for this but it had issues with
# command piping. Using a script to perform the MySQL dumps.
- name: Create MySQL backup script
template:
src: backup_mysql.sh.j2
dest: "{{ backup_tmp_dir }}/mysql/backup_mysql.sh"
mode: u+rwx
- name: Run the MySQL backup script
command: "{{ backup_tmp_dir }}/mysql/backup_mysql.sh"
# The archive module is pretty limited. Using a script instead.
- name: Archive the OpenStack databases
shell: |
/bin/tar --ignore-failed-read --xattrs \
-zcf {{ backup_tmp_dir }}/mysql/openstack-backup-mysql.tar \
{{ backup_tmp_dir }}/mysql/*.sql
- name: Copy the archive to the backup server
synchronize:
mode: pull
src: "{{ backup_tmp_dir }}/mysql/openstack-backup-mysql.tar"
dest: "{{ backup_directory }}"
set_remote_user: false
ssh_args: "{{ backup_host_ssh_args }}"
delegate_to: "{{ backup_host }}"
- name: Remove the database backup directory
file:
path: "{{ backup_tmp_dir }}/mysql"
state: absent

@ -1,41 +0,0 @@
# Tasks for backing up Pacemaker configuration
- name: Remove any existing Pacemaker backup directory
file:
path: "{{ backup_tmp_dir }}/pcs"
state: absent
- name: Create a new Pacemaker backup directory
file:
path: "{{ backup_tmp_dir }}/pcs"
state: directory
- name: Create Pacemaker backup script
template:
src: backup_pacemaker.sh.j2
dest: "{{ backup_tmp_dir }}/pcs/backup_pacemaker.sh"
mode: u+rwx
- name: Run the Pacemaker backup script
command: "{{ backup_tmp_dir }}/pcs/backup_pacemaker.sh"
- name: Archive the Pacemaker configuration
shell: |
/bin/tar --ignore-failed-read --xattrs \
-zcf {{ backup_tmp_dir }}/pcs/openstack-backup-pacemaker.tar \
{{ backup_tmp_dir }}/pcs/cib.xml \
{{ backup_tmp_dir }}/pcs/pacemaker_backup.tar.bz2
- name: Copy the archive to the backup server
synchronize:
mode: pull
src: "{{ backup_tmp_dir }}/pcs/openstack-backup-pacemaker.tar"
dest: "{{ backup_directory }}"
set_remote_user: false
ssh_args: "{{ backup_host_ssh_args }}"
delegate_to: "{{ backup_host }}"
- name: Remove the database backup directory
file:
path: "{{ backup_tmp_dir }}/pcs"
state: absent

@ -1,75 +0,0 @@
# Tasks for dumping a Redis backup from each host and pulling it to the
# Backup Server.
- name: Make sure Redis client is installed on the Target Hosts
yum:
name: redis
state: installed
- name: Remove any existing Redis backup directory
file:
path: "{{ backup_tmp_dir }}/redis"
state: absent
- name: Create a new Redis backup directory
file:
path: "{{ backup_tmp_dir }}/redis"
state: directory
- name: Get the Redis masterauth password
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml redis::masterauth
when: redis_masterauth_password is undefined
register: redis_masterauth_password_cmd_output
become: true
no_log: true
- name: Convert the Redis masterauth password if unknown
set_fact:
redis_masterauth_password: "{{ redis_masterauth_password_cmd_output.stdout_lines[0] }}"
when: redis_masterauth_password is undefined
no_log: true
- name: Get the Redis VIP
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml redis_vip
when: redis_vip is undefined
register: redis_vip_cmd_output
become: true
- name: Convert the Redis VIP if unknown
set_fact:
redis_vip: "{{ redis_vip_cmd_output.stdout_lines[0] }}"
when: redis_vip is undefined
- name: Run the redis backup command
command: /bin/redis-cli -h {{ redis_vip }} -a {{ redis_masterauth_password }} save
no_log: true
- name: Copy the Redis dump
copy:
src: /var/lib/redis/dump.rdb
dest: "{{ backup_tmp_dir }}/redis/dump.rdb"
remote_src: yes
become: true
# The archive module is pretty limited. Using a shell instead.
- name: Archive the OpenStack Redis dump
shell: |
/bin/tar --ignore-failed-read --xattrs \
-zcf {{ backup_tmp_dir }}/redis/openstack-backup-redis.tar \
{{ backup_tmp_dir }}/redis/dump.rdb
- name: Copy the archive to the backup server
synchronize:
mode: pull
src: "{{ backup_tmp_dir }}/redis/openstack-backup-redis.tar"
dest: "{{ backup_directory }}"
set_remote_user: false
ssh_args: "{{ backup_host_ssh_args }}"
delegate_to: "{{ backup_host }}"
- name: Remove the Redis backup directory
file:
path: "{{ backup_tmp_dir }}/redis"
state: absent

@ -1,7 +0,0 @@
- name: Ensure a valid container runtime is used
assert:
msg: Invalid container runtime specified. Only 'docker' and 'podman' are valid.
that:
- operations_container_runtime in ['docker', 'podman']
- include_tasks: "{{ operations_container_runtime }}.yml"

@ -1,11 +0,0 @@
- name: Remove Backup Host authorized key on the OpenStack nodes
authorized_key:
user: root
state: absent
key: "{{ hostvars[groups[backup_server_hostgroup][0]]['backup_ssh_key']['content'] | b64decode }}"
- name: Remove temporary SSH config for each OpenStack node on Backup Host
file:
path: /var/tmp/{{ ansible_hostname }}_config
state: absent
delegate_to: "{{ hostvars[groups[backup_server_hostgroup][0]]['inventory_hostname'] }}"

@ -1,23 +0,0 @@
- name: Gather Docker facts
docker_facts:
image_filter: "{{ operations_image_filter }}"
volume_filter: "{{ operations_volume_filter }}"
container_filter: "{{ operations_container_filter }}"
- name: Remove images
docker_image:
name: "{{ item }}"
state: absent
loop: "{{ docker.images_filtered }}"
- name: Remove containers
docker_container:
name: "{{ item }}"
state: absent
loop: "{{ docker.containers_filtered }}"
- name: Remove dangling volumes
docker_volume:
name: "{{ item }}"
state: absent
loop: "{{ docker.volumes_filtered }}"

@ -1,14 +0,0 @@
---
- name: Allow SSH access from Backup Host to OpenStack nodes
authorized_key:
user: "{{ ansible_user }}"
state: present
key: "{{ hostvars[groups[backup_server_hostgroup][0]]['backup_ssh_key']['content'] | b64decode }}"
# The synchronize module has issues with delegation and remote users. This
# task creates SSH config to set the SSH user for each host.
- name: Add temporary SSH config for each OpenStack node on Backup Host
template:
src: backup_ssh_config.j2
dest: /var/tmp/{{ ansible_hostname }}_config
delegate_to: "{{ hostvars[groups[backup_server_hostgroup][0]]['inventory_hostname'] }}"

@ -1,18 +0,0 @@
- name: Find logs
find:
age: "{{ operations_logs.age | default(omit) }}"
contains: "{{ operations_logs.contains | default(omit) }}"
file_type: "{{ operations_logs.file_type | default(omit) }}"
follow: "{{ operations_logs.follow | default(omit) }}"
paths: "{{ operations_logs.paths | default('/var/log') }}"
patterns: "{{ operations_logs.patterns | default('*.log') }}"
recurse: "{{ operations_logs.recurse | default('yes') }}"
size: "{{ operations_logs.size | default(omit) }}"
use_regex: "{{ operations_logs.use_regex | default(omit) }}"
register: _logs
- name: Fetch logs and place in {{ operations_log_destination }}
fetch:
src: "{{ item.path }}"
dest: "{{ operations_log_destination }}"
with_items: "{{ _logs.files }}"

@ -1,19 +0,0 @@
- name: Make sure the Backup Host has an SSH key
user:
name: "{{ ansible_user }}"
generate_ssh_key: yes
- name: Get the contents of the Backup Host's public key
slurp:
src: "{{ ansible_user_dir }}/.ssh/id_rsa.pub"
register: backup_ssh_key
- name: Install rsync on the Backup Host
yum:
name: rsync
state: installed
- name: Make sure the backup directory exists
file:
path: "{{ backup_directory }}"
state: directory

@ -1,2 +0,0 @@
- debug:
msg: Podman tasks here

@ -1,18 +0,0 @@
- name: Gather container list
service_map_facts:
services: "{{ operations_services_to_restart }}"
service_map: "{{ operations_service_map | combine(operations_custom_service_map) }}"
- name: Restart containerized OpenStack services
docker_container:
name: "{{ item['container_name'] }}"
state: started
restart: yes
loop: "{{ docker_containers_to_restart }}"
- name: Restart OpenStack services
service:
name: "{{ item['name'] }}"
state: restarted
loop: "{{ systemd_services_to_restart }}"
when: item['state'] == "enabled"

@ -1,177 +0,0 @@
# Tasks for restoring a MySQL backup on a galera cluster
- name: Make sure mysql client is installed on the Target Hosts
yum:
name: mariadb
state: installed
- name: Get the galera container image if not user-defined
command: "docker ps --filter name=.*galera.* --format='{{ '{{' }} .Image {{ '}}' }}'"
when: galera_container_image is undefined
register: galera_container_image_cmd_output
become: true
- name: Convert the galera container image variable if unknown
set_fact:
galera_container_image: "{{ galera_container_image_cmd_output.stdout_lines[0] }}"
when: galera_container_image is undefined
- name: Get the database root password
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password
when: mysql_root_password is undefined
register: mysql_root_password_cmd_output
become: true
no_log: true
- name: Convert the database root password variable if unknown
set_fact:
mysql_root_password: "{{ mysql_root_password_cmd_output.stdout_lines[0] }}"
when: mysql_root_password is undefined
no_log: true
- name: Get the database clustercheck password
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml mysql_clustercheck_password
when: mysql_clustercheck_password is undefined
register: mysql_clustercheck_password_cmd_output
become: true
no_log: true
- name: Convert the database clustercheck password variable if unknown
set_fact:
mysql_clustercheck_password: "{{ mysql_clustercheck_password_cmd_output.stdout_lines[0] }}"
when: mysql_clustercheck_password is undefined
no_log: true
- name: Remove any existing database backup directory
file:
path: /var/tmp/openstack-backup/mysql
state: absent
when: bootstrap_node | bool
- name: Create a new mysql database backup directory
file:
path: /var/tmp/openstack-backup/mysql
state: directory
when: bootstrap_node | bool
- name: Copy MySQL backup archive from the backup server
synchronize:
mode: push
src: "{{ backup_directory | default('~/.') }}/openstack-backup-mysql.tar"
dest: /var/tmp/openstack-backup/mysql/
set_remote_user: false
ssh_args: "-F /var/tmp/{{ ansible_hostname }}_config"
delegate_to: "{{ hostvars[groups[backup_server_hostgroup][0]]['inventory_hostname'] }}"
when: bootstrap_node | bool
- name: Unarchive the database archive
shell: |
/bin/tar --xattrs \
-zxf /var/tmp/openstack-backup/mysql/openstack-backup-mysql.tar \
-C /
when: bootstrap_node | bool
- name: Get the database bind host IP on each node
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml mysql_bind_host
when: mysql_bind_host is undefined
register: mysql_bind_host
become: true
- name: Temporarily block the database port from external access on each node
iptables:
chain: 'INPUT'
destination: "{{ mysql_bind_host.stdout|trim }}"
destination_port: 3306
protocol: tcp
jump: DROP
become: true
- name: Disable galera-bundle
pacemaker_resource:
resource: galera-bundle
state: disable
wait_for_resource: true
become: true
when: bootstrap_node | bool
- name: Get a timestamp
set_fact:
timestamp: "{{ ansible_date_time.iso8601_basic_short }}"
- name: Create directory for the old MySQL database
file:
path: /var/tmp/openstack-backup/mysql-old-{{ timestamp }}
state: directory
- name: Copy old MySQL database
synchronize:
src: "/var/lib/mysql/"
dest: "/var/tmp/openstack-backup/mysql-old-{{ timestamp }}/"
delegate_to: "{{ inventory_hostname }}"
become: true
- name: Create a temporary directory for database creation script
file:
path: /var/tmp/galera-restore
state: directory
- name: Create MySQL backup script
template:
src: create_new_db.sh.j2
dest: /var/tmp/galera-restore/create_new_db.sh
mode: u+rwx
- name: Create a galera restore container, remove the old database, and create a new empty database
docker_container:
name: galera_restore
detach: false
command: "/var/tmp/galera-restore/create_new_db.sh"
image: "{{ galera_container_image }}"
volumes:
- /var/lib/mysql:/var/lib/mysql:rw
- /var/tmp/galera-restore:/var/tmp/galera-restore:ro
become: true
- name: Remove galera restore container
docker_container:
name: galera_restore
state: absent
become: true
- name: Enable galera
pacemaker_resource:
resource: galera-bundle
state: enable
wait_for_resource: true
become: true
when: bootstrap_node | bool
- name: Perform a local database port check
wait_for:
port: 3306
host: "{{ mysql_bind_host.stdout|trim }}"
- name: Import OpenStack MySQL data
shell: |
/bin/mysql -u root -p{{ mysql_root_password }} < /var/tmp/openstack-backup/mysql/openstack-backup-mysql.sql
when: bootstrap_node | bool
no_log: true
- name: Import OpenStack MySQL grants data
shell: |
/bin/mysql -u root -p{{ mysql_root_password }} < /var/tmp/openstack-backup/mysql/openstack-backup-mysql-grants.sql
when: bootstrap_node | bool
no_log: true
- name: Re-enable the database port externally
iptables:
chain: 'INPUT'
destination: "{{ mysql_bind_host.stdout|trim }}"
destination_port: 3306
protocol: tcp
jump: DROP
state: absent
become: true

@ -1,100 +0,0 @@
# Tasks for restoring Redis backups on a cluster
- name: Make sure Redis client is installed on the Target Hosts
yum:
name: redis
state: installed
- name: Get the Redis container image if not user-defined
command: "docker ps --filter name=.*redis.* --format='{{ '{{' }} .Image {{ '}}' }}'"
when: redis_container_image is undefined
register: redis_container_image_cmd_output
become: true
- name: Convert the Redis container image variable if unknown
set_fact:
redis_container_image: "{{ redis_container_image_cmd_output.stdout_lines[0] }}"
when: redis_container_image is undefined
- name: Get the Redis VIP
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml redis_vip
when: redis_vip is undefined
register: redis_vip_cmd_output
become: true
- name: Convert the Redis VIP if unknown
set_fact:
redis_vip: "{{ redis_vip_cmd_output.stdout_lines[0] }}"
when: redis_vip is undefined
- name: Remove any existing Redis backup directory
file:
path: "{{ backup_tmp_dir }}/redis"
state: absent
- name: Create a new Redis backup directory
file:
path: "{{ backup_tmp_dir }}/redis"
state: directory
- name: Copy Redis backup archive from the backup server
synchronize:
mode: push
src: "{{ backup_directory }}/openstack-backup-redis.tar"
dest: "{{ backup_tmp_dir }}/redis/"
set_remote_user: false
ssh_args: "{{ backup_host_ssh_args }}"
delegate_to: "{{ backup_host }}"
- name: Unarchive the database archive
shell: |
/bin/tar --xattrs \
-zxf {{ backup_tmp_dir }}/redis/openstack-backup-redis.tar \
-C /
- name: Disable redis-bundle
pacemaker_resource:
resource: redis-bundle
state: disable
wait_for_resource: true
become: true
when: bootstrap_node | bool
- name: Delete the old Redis dump
file:
path: /var/lib/redis/dump.rdb
state: absent
become: true
- name: Copy the new Redis dump
copy:
src: "{{ backup_tmp_dir }}/redis/dump.rdb"
dest: /var/lib/redis/dump.rdb
remote_src: yes
become: true
- name: Create a redis_restore container to restore container-based permissions
docker_container:
name: redis_restore
user: root
detach: false
command: "/usr/bin/chown -R redis: /var/lib/redis"
image: "{{ redis_container_image }}"
volumes:
- /var/lib/redis:/var/lib/redis:rw
become: true
- name: Remove redis_restore container
docker_container:
name: redis_restore
state: absent
become: true
- name: Enable redis
pacemaker_resource:
resource: redis-bundle
state: enable
wait_for_resource: true
become: true
when: bootstrap_node | bool

@ -1,8 +0,0 @@
- name: Set bootstrap status to false on all nodes
set_fact:
bootstrap_node: false
- name: Set the bootstrap status on the first node
set_fact:
bootstrap_node: true
when: inventory_hostname == ansible_play_hosts[0]

@ -1,22 +0,0 @@
- name: Get the database clustercheck password
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml mysql_clustercheck_password
when: mysql_clustercheck_password is undefined
register: mysql_clustercheck_password_cmd_output
become: true
no_log: true
- name: Convert the database clustercheck password if unknown
set_fact:
mysql_clustercheck_password: "{{ mysql_clustercheck_password_cmd_output.stdout_lines[0] }}"
when: mysql_clustercheck_password is undefined
no_log: true
- name: Check that the Galera cluster is synced
  shell: |
    /bin/mysql -u clustercheck -p{{ mysql_clustercheck_password }} -nNE -e "SHOW STATUS LIKE 'wsrep_local_state';" | tail -1
register: clustercheck_state
until: clustercheck_state.stdout | trim | int == 4
retries: 10
delay: 5
no_log: true

@ -1,44 +0,0 @@
- name: Get the Redis masterauth password
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml redis::masterauth
when: redis_masterauth_password is undefined
register: redis_masterauth_password_cmd_output
become: true
no_log: true
- name: Convert the Redis masterauth password if unknown
set_fact:
redis_masterauth_password: "{{ redis_masterauth_password_cmd_output.stdout_lines[0] }}"
when: redis_masterauth_password is undefined
no_log: true
- name: Get the Redis VIP
shell: |
/bin/hiera -c /etc/puppet/hiera.yaml redis_vip
when: redis_vip is undefined
register: redis_vip_cmd_output
become: true
- name: Convert the Redis VIP if unknown
set_fact:
redis_vip: "{{ redis_vip_cmd_output.stdout_lines[0] }}"
when: redis_vip is undefined
- name: Perform a Redis check
command: /bin/redis-cli -h {{ redis_vip }} -a {{ redis_masterauth_password }} ping
register: redis_status_check_output
no_log: true
- name: Convert the Redis status
set_fact:
redis_status_check: "{{ redis_status_check_output.stdout_lines[0] }}"
- name: Fail if Redis is not running on the node
fail:
msg: "Redis not running on node: {{ inventory_hostname }}. Check the service is running on the node and try again."
when: redis_status_check != "PONG"
- name: Report Redis success
debug:
msg: "Redis running on node: {{ inventory_hostname }}"
when: redis_status_check == "PONG"

@ -1,5 +0,0 @@
#!/bin/bash
mysql -uroot -p{{ mysql_root_password }} -s -N -e "select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';" | xargs mysqldump -uroot -p{{ mysql_root_password }} --single-transaction --databases > {{ backup_tmp_dir }}/mysql/openstack-backup-mysql.sql
mysql -uroot -p{{ mysql_root_password }} -s -N -e "SELECT CONCAT('\"SHOW GRANTS FOR ''',user,'''@''',host,''';\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')" | xargs -n1 mysql -uroot -p{{ mysql_root_password }} -s -N -e | sed 's/$/;/' > {{ backup_tmp_dir }}/mysql/openstack-backup-mysql-grants.sql


@@ -1,6 +0,0 @@
#!/bin/bash
# Save the Pacemaker CIB and a full cluster configuration backup.
pushd {{ backup_tmp_dir }}/pcs
pcs cluster cib cib.xml
pcs config backup pacemaker_backup
popd


@@ -1,2 +0,0 @@
Host {{ ansible_host }}
User {{ ansible_user }}


@@ -1,18 +0,0 @@
#!/bin/bash
# Wipe the datadir, reinitialize MariaDB, and recreate the accounts the
# cluster needs before shutting the temporary server back down.
rm -rf /var/lib/mysql/*
mysql_install_db --datadir=/var/lib/mysql --user=mysql
chown -R mysql:mysql /var/lib/mysql/
restorecon -R /var/lib/mysql
/usr/bin/mysqld_safe --datadir='/var/lib/mysql' &
# Wait until the server accepts connections.
while ! mysql -u root -e ";" ; do
    echo "Waiting for database to become active..."
    sleep 1
done
echo "Database active!"
# Recreate the clustercheck user and set the root password.
/usr/bin/mysql -u root -e "CREATE USER 'clustercheck'@'localhost';"
/usr/bin/mysql -u root -e "GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY '{{ mysql_clustercheck_password }}';"
/usr/bin/mysqladmin -u root password {{ mysql_root_password }}
mysqladmin -u root -p{{ mysql_root_password }} shutdown
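The wait loop in the middle of the script is a generic poll-until-ready pattern: keep probing until `mysql -u root -e ";"` succeeds. The same pattern with the probe stubbed to succeed on the third call, so the sketch runs without a database:

```shell
# Poll-until-ready loop from the restore script; the mysql probe is
# replaced by a stub that succeeds on the third attempt.
attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }
while ! probe; do
    echo "Waiting for database to become active..."
    sleep 0  # the real script sleeps 1 second between probes
done
echo "Database active!"
```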


@@ -1,2 +0,0 @@
hacking>=4.0.0,<4.1.0 # Apache-2.0
pyflakes>=2.2.0

tox.ini

@@ -1,68 +0,0 @@
[tox]
minversion = 2.0
envlist = docs, linters
skipsdist = True
[testenv]
usedevelop = True
install_command = pip install -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
whitelist_externals = bash
[testenv:bindep]
basepython = python3
# Do not install any requirements. We want this to be fast and work even if
# system dependencies are missing, since it's used to tell you what system
# dependencies are missing! This also means that bindep must be installed
# separately, outside of the requirements files.
deps = bindep
commands = bindep test
[testenv:pep8]
basepython = python3
commands =
# Run hacking/flake8 check for all python files
bash -c "git ls-files | grep -v releasenotes | xargs grep --binary-files=without-match \
--files-with-match '^.!.*python$' \
--exclude-dir .tox \
--exclude-dir .git \
--exclude-dir .eggs \
--exclude-dir *.egg-info \
--exclude-dir dist \
--exclude-dir *lib/python* \
--exclude-dir doc \
| xargs flake8 --verbose"
[testenv:ansible-lint]
basepython = python2
commands =
bash ci-scripts/install-non-galaxy.sh
bash ci-scripts/ansible-lint.sh
[testenv:linters]
basepython = python3
deps =
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/ansible-requirements.txt
commands =
{[testenv:pep8]commands}
{[testenv:ansible-lint]commands}
[testenv:releasenotes]
basepython = python3
whitelist_externals = bash
commands = bash -c ci-scripts/releasenotes_tox.sh
[testenv:venv]
basepython = python3
commands = {posargs}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
# E265 deals with spaces inside of comments
# FIXME(aschultz): not sure how to fix this for ansible libraries:
# H236 Python 3.x incompatible __metaclass__, use six.add_metaclass()
show-source = True
ignore = E123,E125,E265,H236
builtins = _
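The pep8 env selects files by shebang rather than extension: `git ls-files` lists tracked files, `grep --files-with-match` keeps those whose first line names python, and the survivors go to flake8. The matching step can be shown on its own, using a simplified pattern assumed equivalent to the one in the tox command:

```shell
# Match files by python shebang, as the tox pep8 pipeline does
# (-l lists the names of matching files).
tmpdir="$(mktemp -d)"
printf '#!/usr/bin/env python\nprint("hi")\n' > "$tmpdir/tool"
printf 'just text\n' > "$tmpdir/notes"
matched="$(grep -l --binary-files=without-match '^#!.*python' "$tmpdir"/*)"
echo "$matched"
rm -rf "$tmpdir"
```

Only `tool` is reported, since `notes` has no python shebang.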


@@ -1,209 +0,0 @@
operations_service_map:
aodh:
systemd_unit:
- openstack-aodh-evaluator
- openstack-aodh-listener
- openstack-aodh-notifier
container_name:
- aodh_api
- aodh_evaluator
- aodh_listener
- aodh_notifier
vhost:
- aodh
- aodh_wsgi
barbican:
systemd_unit:
- openstack-barbican-api
vhost: ""
ceilometer:
ceilometer-agent:
systemd_unit:
- openstack-ceilometer-central
- openstack-ceilometer-compute
- openstack-ceilometer-polling
- openstack-ceilometer-ipmi
- openstack-ceilometer-notification
container_name:
- ceilometer_agent_central
- ceilometer_agent_notification
cinder:
systemd_unit:
- openstack-cinder-api
- openstack-cinder-scheduler
- openstack-cinder-volume
container_name:
- cinder_api
- cinder_api_cron
- cinder_scheduler
- cinder_volume
vhost: ""
congress:
systemd_unit:
- openstack-congress-server
container_name:
vhost: ""
type: ""
glance:
systemd_unit:
- openstack-glance-api
container_name:
- glance_api
vhost: ""
gnocchi:
systemd_unit:
- openstack-gnocchi-api
container_name:
- gnocchi_api
- gnocchi_metricd
- gnocchi_statsd
vhost:
- gnocchi
haproxy:
systemd_unit:
- haproxy
container_name:
- haproxy
heat:
systemd_unit:
- openstack-heat-api
- openstack-heat-engine
- openstack-heat-api-cfn
container_name:
- heat_api
- heat_api_cfn
- heat_api_cron
- heat_engine
vhost:
- heat_api_wsgi
horizon:
systemd_unit: ""
container_name:
- horizon
vhost:
- horizon_vhost
ironic:
systemd_unit:
- openstack-ironic-api
- openstack-ironic-conductor
container_name:
vhost:
- ironic
keepalived:
systemd_unit:
- keepalived
container_name:
- keepalived
keystone:
systemd_unit: ""
container_name:
vhost:
- keystone_wsgi
manila:
systemd_unit:
- openstack-manila-scheduler
- openstack-manila-share
container_name:
vhost: ""
mistral:
systemd_unit:
- openstack-mistral-api
- openstack-mistral-engine
- openstack-mistral-event-engine
- openstack-mistral-executor
container_name:
vhost:
- mistral
memcached:
systemd_unit:
- memcached
container_name:
- memcached
mysql:
systemd_unit: mariadb
container_name:
- mysql
neutron:
systemd_unit: ""
container_name:
- neutron_ovs_agent
- neutron_l3_agent
- neutron_metadata_agent
- neutron_dhcp
nova-compute:
systemd_unit:
- openstack-nova-compute
- libvirtd
container_name:
- nova_migration_target
- nova_compute
- nova_libvirt
- nova_virtlogd
nova-controller:
systemd_unit:
- openstack-nova-api
- openstack-nova-conductor
- openstack-nova-consoleauth
- openstack-nova-scheduler
- openstack-nova-novncproxy
container_name:
- nova_metadata
- nova_api
- nova_vnc_proxy
- nova_conductor
- nova_consoleauth
- nova_api_cron
- nova_scheduler
- nova_placement
vhost:
- nova
- placement_wsgi
octavia:
systemd_unit: ""
container_name: ""
openvswitch:
systemd_unit:
- openvswitch
- ovs-vswitchd
container_name:
- neutron_ovs_agent
rabbitmq:
systemd_unit:
- rabbitmq-server
container_name:
- rabbitmq
redis:
systemd_unit:
- redis
container_name:
- redis
swift:
systemd_unit:
- ""
container_name:
- swift_account_auditor
- swift_account_reaper
- swift_account_replicator
- swift_account_server
- swift_container_auditor
- swift_container_replicator
- swift_container_server
- swift_container_updater
- swift_object_auditor
- swift_object_expirer
- swift_object_replicator
- swift_object_server
- swift_object_updater
- swift_proxy
- swift_rsync
tripleo-ui:
systemd_unit:
container_name:
vhost:
- tripleo-ui
zaqar:
systemd_unit: ""
container_name:
vhost:
- zaqar_wsgi
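A playbook consuming `operations_service_map` typically iterates a service's `systemd_unit` entries and checks each unit's state. A minimal sketch of that loop with one unit name taken from the map above; the `systemctl` call is stubbed as a comment so the sketch runs anywhere:

```shell
# Hypothetical consumer of operations_service_map: walk one service's
# systemd units. A real check would run: systemctl is-active "$unit"
units="openstack-glance-api"
checked=0
for unit in $units; do
    echo "would check unit: $unit"
    checked=$((checked + 1))
done
```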


@@ -1,12 +0,0 @@
- project:
templates:
- publish-to-pypi
check:
jobs:
- openstack-tox-linters
gate:
jobs:
- openstack-tox-linters
post:
jobs:
- publish-openstack-python-branch-tarball