Remove existing TQE collect-logs to test new role

It removes the existing TQE collect-logs role in order to
test with the new opendev/ansible-role-collect-logs role.

https://tree.taiga.io/project/tripleo-ci-board/task/1001

Change-Id: Ib7892fca145a8c1947f54bfa8f7a35675e625e4d
Signed-off-by: Chandan Kumar <chkumar@redhat.com>
Chandan Kumar 2019-04-23 15:37:20 +05:30 committed by Chandan Kumar (raukadah)
parent d71e549372
commit 920762abf4
39 changed files with 0 additions and 3321 deletions


@ -1,157 +0,0 @@
collect-logs
============
An Ansible role for aggregating logs from TripleO nodes.
Requirements
------------
This role gathers logs and debug information from a target system and
collates them in a designated directory, `artcl_collect_dir`, on the localhost.
Additionally, the role will convert templated bash scripts, created and used by
TripleO-Quickstart during deployment, into rST files. These rST files are
combined with static rST files and fed into Sphinx to create user-friendly
post-build documentation specific to the original deployment.
Finally, the role optionally handles uploading these logs to an rsync server or
to OpenStack Swift object storage. Logs from Swift can be exposed with
[os-loganalyze](https://github.com/openstack-infra/os-loganalyze).
Role Variables
--------------
### Collection related
* `artcl_collect_list` -- A list of files and directories to gather from
the target. Directories are collected recursively and need to end with a
"/" to get collected. Should be specified as a YaML list, e.g.:
```yaml
artcl_collect_list:
- /etc/nova/
- /home/stack/*.log
- /var/log/
```
* `artcl_collect_list_append` -- A list of files and directories to be appended
to the default list. This is useful for users who want to keep the original
list and just add more relevant paths (see the example after this list).
* `artcl_exclude_list` -- A list of files and directories to exclude from
collecting. This list is passed to rsync as an exclude filter and it takes
precedence over the collection list. For details see the "FILTER RULES" topic
in the rsync man page.
* `artcl_collect_dir` -- A local directory where the logs should be
gathered, without a trailing slash.
* `artcl_gzip_only`: false/true -- When true, gathered files are gzipped one
by one in `artcl_collect_dir`; when false, a single tar.gz file will contain all the
logs.
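For example, a minimal sketch combining the collection variables above (the extra paths shown are illustrative, not defaults shipped with the role):
```yaml
artcl_collect_list_append:
  - /var/log/messages        # appended to the default collect list
artcl_exclude_list:
  - /var/log/journal         # excluded even though /var/log/ is collected
artcl_collect_dir: /tmp/collected_logs
artcl_gzip_only: true        # gzip files one by one instead of a single tar.gz
```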
### Documentation generation related
* `artcl_gen_docs`: false/true -- If true, the role will use build artifacts
and Sphinx to produce user-friendly documentation (default: false).
* `artcl_docs_source_dir` -- a local directory that serves as the Sphinx source
directory.
* `artcl_docs_build_dir` -- A local directory that serves as the Sphinx build
output directory.
* `artcl_create_docs_payload` -- Dictionary of lists that direct what and how
to construct documentation.
* `included_deployment_scripts` -- List of templated bash scripts to be
converted to rST files.
  * `included_static_docs` -- List of static rST files that will be
included in the output documentation.
* `table_of_contents` -- List that defines the order in which rST files
will be laid out in the output documentation.
* `artcl_verify_sphinx_build` -- false/true -- If true, verify items defined
in `artcl_create_docs_payload.table_of_contents` exist in sphinx generated
index.html (default: false)
```yaml
artcl_create_docs_payload:
included_deployment_scripts:
- undercloud-install
- undercloud-post-install
included_static_docs:
- env-setup-virt
table_of_contents:
- env-setup-virt
- undercloud-install
- undercloud-post-install
```
### Publishing related
* `artcl_publish`: true/false -- If true, the role will attempt to rsync logs
to the target specified by `artcl_rsync_url`. Uses `BUILD_URL`, `BUILD_TAG`
vars from the environment (set during a Jenkins job run) and requires the
following variables to be set (a combined example follows this list).
* `artcl_txt_rename`: false/true -- rename text-based files to end in .txt.gz so
that upstream log servers display them in the browser instead of offering them
for download.
* `artcl_publish_timeout`: the maximum time, in seconds, that the role can spend
uploading the logs; the default is 1800 (30 minutes).
* `artcl_use_rsync`: false/true -- use rsync to upload the logs
* `artcl_rsync_use_daemon`: false/true -- use rsync daemon instead of ssh to connect
* `artcl_rsync_url` -- rsync target for uploading the logs. The localhost
needs passwordless authentication to the target, or the `PROVISIONER_KEY`
variable must be specified in the environment.
* `artcl_use_swift`: false/true -- use swift object storage to publish the logs
* `artcl_swift_auth_url` -- the OpenStack auth URL for Swift
* `artcl_swift_username` -- OpenStack username for Swift
* `artcl_swift_password` -- password for the Swift user
* `artcl_swift_tenant_name` -- OpenStack tenant name for Swift
* `artcl_swift_container` -- the name of the Swift container to use,
default is `logs`
* `artcl_swift_delete_after` -- The number of seconds after which Swift will
remove the uploaded objects, the default is 2678400 seconds = 31 days.
* `artcl_artifact_url` -- an HTTP URL at which the uploaded logs will be
accessible after upload.
* `artcl_collect_sosreport` -- true/false -- If true, create and collect a
sosreport for each host.
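A hedged example of a publishing configuration (host names and values below are placeholders, not defaults of the role):
```yaml
artcl_publish: true
artcl_publish_timeout: 1800        # give up the upload after 30 minutes
artcl_txt_rename: true             # rename text files to .txt.gz for the log server
# rsync target; needs passwordless access or PROVISIONER_KEY in the environment
artcl_use_rsync: true
artcl_rsync_url: logserver.example.com:/var/www/logs
# ...or publish to Swift object storage instead
artcl_use_swift: false
artcl_swift_container: logs
artcl_swift_delete_after: 2678400  # objects are removed after 31 days
artcl_artifact_url: http://logs.example.com
```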
Example Playbook
----------------
```yaml
---
- name: Gather logs
hosts: all:!localhost
roles:
- collect-logs
```
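The same playbook can also enable the optional features described above; a sketch with illustrative values:
```yaml
---
- name: Gather logs, generate docs and create sosreports
  hosts: all:!localhost
  vars:
    artcl_gen_docs: true
    artcl_collect_sosreport: true
  roles:
    - collect-logs
```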
Templated Bash to rST Conversion Notes
--------------------------------------
Templated bash scripts used during deployment are converted to rST files
during the `create-docs` portion of the role's call. Shell scripts are
fed into an awk script and output as restructured text. The awk script
has several simple rules:
1. Only lines between `### ---start_docs` and `### ---stop_docs` will be
parsed.
2. Lines containing `# nodoc` will be excluded.
3. Lines containing `## ::` indicate subsequent lines should be formatted
as code blocks.
4. Other lines beginning with `## <anything else>` will have the prepended
`## ` removed. This is how and where general rST formatting is added.
5. All other lines, including shell comments, will be indented by four spaces.
Enabling sosreport Collection
-----------------------------
[sosreport](https://github.com/sosreport/sos) is a unified tool for collecting
system logs and other debug information. To enable creation of sosreport(s)
with this role, create a custom config (you can use centosci-logs.yml
as a template) and ensure that `artcl_collect_sosreport: true` is set.
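For instance, a custom config could simply add the following (``--batch`` mirrors the role's default ``artcl_sosreport_options`` value):
```yaml
artcl_collect_sosreport: true
artcl_sosreport_options: "--batch"   # run sosreport non-interactively
```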
License
-------
Apache 2.0
Author Information
------------------
RDO-CI Team


@ -1,214 +0,0 @@
---
artcl_collect: true
artcl_collect_list:
- /var/lib/heat-config/
- /var/lib/kolla/config_files
- /var/lib/mistral/
- /var/lib/oooq-images/*/*.log
- /var/lib/oooq-images/*/*.sh
- /var/lib/pacemaker/cib/cib*
- /var/lib/pacemaker/pengine/pe-input*
- /var/log/atop*
- /var/log/dmesg.txt
- /var/log/host_info.txt
- /var/log/journal.txt
- /var/log/postci.txt
- /var/log/secure
- /var/log/bootstrap-subnodes.log
- /var/log/unbound.log
- /var/log/{{ ansible_pkg_mgr }}.log
- /var/log/aodh/
- /var/log/audit/
- /var/log/barbican/
- /var/log/ceilometer/
- /var/log/ceph/
- /var/log/cinder/
- /var/log/cloudkitty/
- /var/log/cluster/
- /var/log/config-data/
- /var/log/congress/
- /var/log/containers/
- /var/log/deployed-server-enable-ssh-admin.log
- /var/log/deployed-server-os-collect-config.log
- /var/log/designate/
- /var/log/dmesg/
- /var/log/extra/
- /var/log/ec2api/
- /var/log/glance/
- /var/log/gnocchi/
- /var/log/heat/
- /var/log/heat-launcher/
- /var/log/horizon/
- /var/log/httpd/
- /var/log/ironic/
- /var/log/ironic-inspector/
- /var/log/libvirt/
- /var/log/keystone/
- /var/log/manila/
- /var/log/mariadb/
- /var/log/mistral/
- /var/log/monasca/
- /var/log/murano/
- /var/log/neutron/
- /var/log/nova/
- /var/log/novajoin/
- /var/log/octavia/
- /var/log/openvswitch/
- /var/log/ovn/
- /var/log/pacemaker/
- /var/log/panko/
- /var/log/qdr/
- /var/log/rabbitmq/
- /var/log/redis/
- /var/log/sahara/
- /var/log/sensu/
- /var/log/swift/
- /var/log/tacker/
- /var/log/tempest/
- /var/log/trove/
- /var/log/tripleo-container-image-prepare.log
- /var/log/vitrage/
- /var/log/watcher/
- /var/log/zaqar/
- /var/tmp/sosreport*
- /etc/
- /home/*/.instack/install-undercloud.log
- /home/*/stackrc
- /home/*/overcloudrc*
- /home/*/*.log
- /home/*/*.json
- /home/*/*.conf
- /home/*/*.yml
- /home/*/*.yaml
- /home/*/*.sh
- /home/*/*.rst
- /home/*/*.pem
- /home/*/deploy-overcloudrc
- /home/*/network-environment.yaml
- /home/*/skip_file
- /home/*/*.subunit
- /home/*/tempest/*.xml
- /home/*/tempest/*.html
- /home/*/tempest/*.log
- /home/*/tempest/etc/*.conf
- /home/*/tempest/*.subunit
- /home/*/tempest/*.json
- /home/*/tripleo-heat-installer-templates/
- /home/*/local_tht/
- /home/*/gating_repo.tar.gz
- /home/*/browbeat/
- /usr/share/openstack-tripleo-heat-templates/
- /home/*/tripleo-heat-templates/
- /tmp/tripleoclient*
# The next 2 items are temporary until config-download is executed
# from a Mistral workflow (WIP in Queens)
- /home/*/inventory
- /home/*/tripleo-config-download/
artcl_exclude_list:
- /etc/udev/hwdb.bin
- /etc/puppet/modules
- /etc/project-config
- /etc/services
- /etc/selinux/targeted
- /etc/pki/ca-trust/extracted
- /etc/alternatives
- /var/log/journal
# artcl_collect_dir is defaulted in extras-common
artcl_gzip_only: true
artcl_tar_gz: false
## publishing related vars
artcl_publish: false
artcl_env: default
artcl_readme_file: "{{ artcl_collect_dir }}/README-logs.html"
artcl_txt_rename: false
# give up log upload after 30 minutes
artcl_publish_timeout: 1800
artcl_artifact_url: "file://{{ local_working_dir }}"
artcl_full_artifact_url: "{{ artcl_artifact_url }}/{{ lookup('env', 'BUILD_TAG') }}/"
artcl_use_rsync: false
artcl_rsync_use_daemon: false
artcl_use_swift: false
# clean up the logs after 31 days
artcl_swift_delete_after: 2678400
artcl_swift_container: logs
artcl_use_zuul_swift_upload: false
artcl_zuul_swift_upload_path: /usr/local/bin
artcl_collect_sosreport: false
artcl_sosreport_options: "--batch"
# Doc generation specific vars
artcl_gen_docs: false
artcl_create_docs_payload:
included_deployment_scripts: []
included_static_docs: []
table_of_contents: []
artcl_docs_source_dir: "{{ local_working_dir }}/usr/local/share/ansible/roles/collect-logs/docs/source"
artcl_docs_build_dir: "{{ artcl_collect_dir }}/docs/build"
artcl_verify_sphinx_build: false
artcl_logstash_files:
- /home/*/deployed_server_prepare.txt
- /home/*/docker_journalctl.log
- /home/*/failed_deployment_list.log
- /home/*/hostname.sh.log
- /home/*/install_built_repo.log
- /home/*/install_packages.sh.log
- /home/*/install-undercloud.log
- /home/*/ironic-python-agent.log
- /home/*/nova_actions_check.log
- /home/*/overcloud_create_ssl_cert.log
- /home/*/overcloud_custom_tht_script.log
- /home/*/overcloud_delete.log
- /home/*/overcloud_deploy.log
- /home/*/overcloud_deploy_post.log
- /home/*/overcloud_failed_prepare_resources.log
- /home/*/overcloud-full.log
- /home/*/overcloud_image_build.log
- /home/*/overcloud_prep_containers.log
- /home/*/overcloud_prep_images.log
- /home/*/overcloud_prep_network.log
- /home/*/overcloud_validate.log
- /home/*/standalone_deploy.log
- /home/*/*upgrade*.log
- /home/*/*update*.log
- /home/*/repo_setup.log
- /home/*/repo_setup.sh.*.log
- /home/*/undercloud_install.log
- /home/*/undercloud_reinstall.log
- /home/*/undercloud_custom_tht_script.log
- /home/*/upgrade-undercloud-repo.sh.log
- /home/*/validate-overcloud-ipmi-connection.log
- /home/*/vxlan_networking.sh.log
- /home/*/workload_launch.log
- /home/*/pkg_mgr_mirror_error.log
- /home/*/pkg_mgr_mirror.log
- /home/*/tempest.log
- /var/log/bootstrap-subnodes.log
- /var/log/tripleo-container-image-prepare.log
# ara_graphite_server: graphite.tripleo.org
ara_graphite_prefix: "tripleo.{{ lookup('env', 'STABLE_RELEASE')|default('master', true) }}.{{ lookup('env', 'TOCI_JOBTYPE') }}."
ara_only_successful_tasks: true
ara_tasks_map:
"overcloud-deploy : Deploy the overcloud": overcloud.deploy.seconds
"undercloud-deploy : Install the undercloud": undercloud.install.seconds
"build-images : run the image build script (direct)": overcloud.images.seconds
"overcloud-prep-images : Prepare the overcloud images for deploy": prepare_images.seconds
"validate-simple : Validate the overcloud": overcloud.ping_test.seconds
"validate-tempest : Execute tempest": overcloud.tempest.seconds
# InfluxDB module settings
influxdb_only_successful_tasks: true
influxdb_measurement: test
# influxdb_url:
influxdb_port: 8086
influxdb_user:
influxdb_password:
influxdb_dbname: testdb
influxdb_data_file_path: "{{ lookup('env', 'LOCAL_WORKING_DIR')|default('/tmp', true) }}/influxdb_data"
influxdb_create_data_file: true
odl_extra_log_dir: /var/log/extra/odl
odl_extra_info_log: "{{ odl_extra_log_dir }}/odl_info.log"
ara_overcloud_db_path: "/var/lib/mistral/overcloud/ara_overcloud.sqlite"
ara_generate_html: true


@ -1,5 +0,0 @@
# Doc requirements
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
oslosphinx>=2.2.0 # Apache-2.0
sphinx_rtd_theme==0.1.7
ansible


@ -1,226 +0,0 @@
@import url("css/theme.css");
@import url("rdo_styling.css");
/* CUSTOM CSS OVERRIDES GO HERE */
/* ============================ */
/* LAYOUT */
.wy-nav-side {
overflow: visible;
}
.wy-side-nav-search {
margin-bottom: 0;
}
.wy-nav-content-wrap {
background: white;
}
.wy-nav-content {
max-width: 100%;
box-sizing: border-box;
}
.rst-content .section ol li p.first:last-child {
margin-bottom: 24px;
}
/* LOGO */
.wy-side-nav-search a {
margin-bottom: 5px;
}
.wy-side-nav-search img {
background: none;
border-radius: 0;
height: 60px;
width: auto;
margin: 0;
}
/* TYPOGRAPHY */
p {
margin-bottom: 16px;
}
p + ul, p + ol.simple {
margin-top: -12px;
}
h1, h2, h3, h4, h5, h6, p.rubric {
margin-top: 48px;
}
h2 {
border-bottom: 1px solid rgba(0, 0, 0, 0.2);
}
/* BREADCRUMBS */
.wy-breadcrumbs {
font-size: 85%;
color: rgba(0, 0, 0, 0.45);
}
.wy-breadcrumbs a {
text-decoration: underline;
color: inherit;
}
.wy-breadcrumbs a:hover,
.wy-breadcrumbs a:focus {
color: rgba(0, 0, 0, 0.75);
text-decoration: none;
}
/* FOOTER */
footer {
font-size: 70%;
margin-top: 48px;
}
footer p {
font-size: inherit;
}
/* NOTES, ADMONITIONS AND TAGS */
.admonition {
font-size: 85%; /* match code size */
background: rgb(240, 240, 240);
color: rgba(0, 0, 0, 0.55);
border: 1px solid rgba(0, 0, 0, 0.1);
padding: 0.5em 1em 0.75em 1em;
margin-bottom: 24px;
}
.admonition p {
font-size: inherit;
}
.admonition p.last {
margin-bottom: 0;
}
.admonition p.first.admonition-title {
display: inline;
background: none;
font-weight: bold;
color: rgba(0, 0, 0, 0.75);
}
/* notes */
.rst-content .note {
background: rgb(240, 240, 240);
}
.note > p.first.admonition-title {
display: inline-block;
background: rgba(0, 0, 0, 0.55);
color: rgba(255, 255, 255, 0.95);
}
/* optional */
.rst-content .optional {
background: white;
}
/* tags */
.rhel {background: #fee;}
.portal {background-color: #ded;}
.satellite {background-color: #dee;}
.centos {background: #fef;}
.baremetal {background: #eef;}
.virtual {background: #efe;}
.ceph {background: #eff;}
/* admonition selector */
#admonition_selector {
color: white;
font-size: 85%;
line-height: 1.4;
background: #2980b9;
border-top: 1px solid rgba(255, 255, 255, 0.4);
}
.trigger {
display: block;
font-size: 110%;
color: rgba(255, 255, 255, 0.75);
line-height: 2.5;
position: relative;
cursor: pointer;
padding: 0 1.618em;
}
.trigger:after {
content: '';
display: block;
font-family: FontAwesome;
font-size: 70%;
position: absolute;
right: 1.618em;
top: 6px;
}
.trigger:hover {
color: white;
}
.content {
display: none;
border-top: 1px solid rgba(255, 255, 255, 0.1);
background: rgba(255, 255, 255, 0.1);
padding: 0.5em 1.618em;
}
.displayed .trigger:after {
content: '';
}
#admonition_selector .title {
color: rgba(255, 255, 255, 0.45);
}
#admonition_selector ul {
margin-bottom: 0.75em;
}
#admonition_selector ul li {
display: block;
}
#admonition_selector label {
display: inline;
color: inherit;
text-decoration: underline dotted;
}
/* LINKS */
a.external:after {
font-family: FontAwesome;
content: '';
visibility: visible;
display: inline-block;
font-size: 70%;
position: relative;
padding-left: 0.5em;
top: -0.5em;
}
/* LIST */
.wy-plain-list-decimal > li > ul,
.rst-content .section ol > li > ul,
.rst-content ol.arabic > li > ul,
article ol > li > ul {
margin-bottom: 24px;
}


@ -1,208 +0,0 @@
/* general settings */
body {
font-family: "Open Sans", Helvetica, Arial, sans-serif;
font-weight: 300;
font-size: 16px;
}
/* remove backgrounds */
.wy-nav-content,
.wy-body-for-nav,
.wy-nav-side,
#admonition_selector {
background: none !important;
color: black !important;
}
/* page header */
.wy-side-nav-search,
.wy-nav-top {
background: rgba(0, 0, 0, 0.05) !important;
}
.wy-nav-top {
line-height: 40px;
border-bottom: 1px solid rgba(0, 0, 0, 0.1);
}
.wy-side-nav-search a,
.wy-nav-top a,
.wy-nav-top i {
color: rgb(160, 0, 0) !important;
}
.wy-nav-top i {
position: relative;
top: 0.1em;
}
.wy-side-nav-search input[type="text"] {
border-color: rgba(0, 0, 0, 0.25);
}
/* sidebar*/
.wy-nav-side {
border-right: 1px solid rgba(0, 0, 0, 0.2);
}
/* admonition selector */
#admonition_selector {
border-top: 0 none !important;
}
.trigger {
color: rgba(0, 0, 0, 0.7) !important;
border-top: 1px solid rgba(0, 0, 0, 0.2);
border-bottom: 1px solid rgba(0, 0, 0, 0.2);
background: rgba(0, 0, 0, 0.05);
}
.trigger:hover {
color: rgba(0, 0, 0, 0.9) !important;
}
.content {
border-top: 0 none !important;
border-bottom: 1px solid rgba(0, 0, 0, 0.2) !important;
background: rgba(0, 0, 0, 0.025) !important;
}
#admonition_selector .title {
color: rgba(0, 0, 0, 0.6) !important;
}
/* menu */
.wy-menu li a,
.wy-menu-vertical li a {
font-size: 100%;
line-height: 1.6;
color: rgb(80, 80, 80);
}
.wy-menu-vertical li a:hover,
.wy-menu-vertical li a:focus,
.wy-menu-vertical li.current a:hover,
.wy-menu-vertical li.current a:focus {
color: black;
text-decoration: underline;
background: none;
}
.wy-menu-vertical li.current,
.wy-menu-vertical li.current a {
border: 0 none;
color: rgb(80, 80, 80);
font-weight: inherit;
background: none;
}
/* level-1 menu item */
.wy-menu-vertical li.toctree-l1.current > a,
.wy-menu-vertical li.toctree-l1.current > a:hover,
.wy-menu-vertical li.toctree-l1.current > a:focus {
background: rgb(230, 230, 230);
}
.wy-menu li.toctree-l1 > a:before {
font-family: FontAwesome;
content: "";
display: inline-block;
position: relative;
padding-right: 0.5em;
}
/* level-2 menu item */
.toctree-l2 {
font-size: 90%;
color: inherit;
}
.wy-menu-vertical .toctree-l2 a {
padding: 0.4045em 0.5em 0.4045em 2.8em !important;
}
.wy-menu-vertical li.toctree-l2.current > a,
.wy-menu-vertical li.toctree-l2.current > a:hover,
.wy-menu-vertical li.toctree-l2.current > a:focus,
.wy-menu-vertical li.toctree-l2.active > a,
.wy-menu-vertical li.toctree-l2.active > a:hover,
.wy-menu-vertical li.toctree-l2.active > a:focus {
background: rgb(242, 242, 242);
}
.wy-menu li.toctree-l2 > a:before {
font-family: FontAwesome;
content: "";
font-size: 30%;
display: inline-block;
position: relative;
bottom: 0.55em;
padding-right: 1.5em;
}
/* typography */
h1 {
color: rgb(160, 0, 0);
font-weight: 300;
margin-top: 36px !important;
}
h3 {
font-size: 135%;
}
h2, h3, h4, h5 {
font-weight: 200;
}
a, a:visited {
color: #2275b4;
text-decoration: none;
}
a:hover, a:focus {
color: #1c6094;
text-decoration: underline;
}
.rst-content .toc-backref {
color: inherit;
}
strong {
font-weight: 600;
}
/* code */
.codeblock,
pre.literal-block,
.rst-content .literal-block,
.rst-content pre.literal-block,
div[class^="highlight"] {
background: rgba(0, 0, 0, 0.05);
color: black;
}
/* notes */
.admonition {
color: rgba(0, 0, 0, 0.5) !important;
font-weight: 400;
}
.rst-content .note {
background: none !important;
}
.note > p.first.admonition-title {
background: rgba(0, 0, 0, 0.5) !important;
color: rgba(255, 255, 255, 0.9) !important;
}


@ -1 +0,0 @@
{% extends "!layout.html" %}


@ -1,144 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# flake8: noqa
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# instack-undercloud documentation build configuration file, created by
# sphinx-quickstart on Wed Feb 25 10:56:57 2015.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sphinx_rtd_theme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_theme_options = {}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'TripleO'
copyright = u'2016, RDO CI Team'
bug_tracker = u'Bugzilla'
bug_tracker_url = u'https://bugzilla.redhat.com'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '3.0.0'
# The full version, including alpha/beta/rc tags.
release = '3.0.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
html_static_path = ['_custom']
html_style = 'custom.css'
html_last_updated_fmt = '%b %d, %Y'
# Output file base name for HTML help builder.
htmlhelp_basename = 'tripleo-documentor'
html_show_sourcelink = True
html_show_sphinx = True
html_show_copyright = True
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
rst_prolog = """
.. |project| replace:: %s
.. |bug_tracker| replace:: %s
.. |bug_tracker_url| replace:: %s
""" % (project, bug_tracker, bug_tracker_url)

Binary file not shown.



@ -1,110 +0,0 @@
Copying over customized instackenv.json, network-environment.yaml, nic-configs
------------------------------------------------------------------------------
instackenv.json
^^^^^^^^^^^^^^^
``instackenv.json`` file is generated from a template in tripleo-quickstart:
<https://github.com/openstack/tripleo-quickstart/blob/master/roles/libvirt/setup/overcloud/tasks/main.yml#L91>.
A customized ``instackenv.json`` can be copied to the undercloud by overwriting the
``undercloud_instackenv_template`` variable with the path to the customized file.
Below is an explanation of, and example of, the ``instackenv.json`` file:
The JSON file describing your Overcloud baremetal nodes is called
``instackenv.json``. The file should contain a JSON object with a single field,
``nodes``, containing a list of node descriptions.
Each node description should contain the required fields:
* ``pm_type`` - driver for Ironic nodes, see `Ironic Drivers`_ for details
* ``pm_addr`` - node BMC IP address (hypervisor address in case of virtual
environment)
* ``pm_user``, ``pm_password`` - node BMC credentials
Some fields are optional if you're going to use introspection later:
* ``mac`` - list of MAC addresses, optional for bare metal
* ``cpu`` - number of CPUs in the system
* ``arch`` - CPU architecture (common values are ``i386`` and ``x86_64``)
* ``memory`` - memory size in MiB
* ``disk`` - hard drive size in GiB
It is also possible (but optional) to set Ironic node capabilities directly
in the JSON file. This can be useful for assigning node profiles or setting
boot options at registration time:
* ``capabilities`` - Ironic node capabilities. For example::
"capabilities": "profile:compute,boot_option:local"
For example::
{
"nodes": [
{
"pm_type":"pxe_ipmitool",
"mac":[
"fa:16:3e:2a:0e:36"
],
"cpu":"2",
"memory":"4096",
"disk":"40",
"arch":"x86_64",
"pm_user":"admin",
"pm_password":"password",
"pm_addr":"10.0.0.8"
},
{
"pm_type":"pxe_ipmitool",
"mac":[
"fa:16:3e:da:39:c9"
],
"cpu":"2",
"memory":"4096",
"disk":"40",
"arch":"x86_64",
"pm_user":"admin",
"pm_password":"password",
"pm_addr":"10.0.0.15"
},
{
"pm_type":"pxe_ipmitool",
"mac":[
"fa:16:3e:51:9b:68"
],
"cpu":"2",
"memory":"4096",
"disk":"40",
"arch":"x86_64",
"pm_user":"admin",
"pm_password":"password",
"pm_addr":"10.0.0.16"
}
]
}
network-environment.yaml
^^^^^^^^^^^^^^^^^^^^^^^^
Similarly, the ``network-environment.yaml`` file is generated from a template,
<https://github.com/openstack/tripleo-quickstart/blob/master/roles/tripleo/undercloud/tasks/post-install.yml#L32>
A customized ``network-environment.yaml`` file can be copied to the undercloud by overwriting the
``network_environment_file`` variable with the path to the customized file.
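For example, both templates can be overridden in a settings file passed to
tripleo-quickstart with ``--extra-vars`` (the paths below are hypothetical):
::
undercloud_instackenv_template: /path/to/custom/instackenv.json
network_environment_file: /path/to/custom/network-environment.yaml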
nic-configs
^^^^^^^^^^^
By default, the virtual environment deployment uses the standard nic-configs files and there is no
ready-made step to copy custom nic-configs files.
The ``ansible-role-tripleo-overcloud-prep-config`` repo includes a task that copies the nic-configs
files if they are defined:
<https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-prep-config/blob/master/tasks/main.yml#L15>
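If custom nic-configs files are used, a sketch of the corresponding setting (the
``baremetal_nic_configs`` variable name is taken from the OVB example later in
these docs; the path is hypothetical):
::
baremetal_nic_configs: /path/to/custom/nic-configs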


@ -1,21 +0,0 @@
Customizing external network vlan
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If network-isolation is used in the deployment, tripleo-quickstart will, by default,
add a NIC on the external vlan to the undercloud,
<https://github.com/openstack/tripleo-quickstart/blob/master/roles/tripleo/undercloud/templates/undercloud-install-post.sh.j2#L88>.
When working with a baremetal overcloud, the vlan values must be customized with the correct
system-related values. The default vlan values can be overwritten in a settings file passed
to tripleo-quickstart as in the following example:
::
undercloud_networks:
external:
address: 10.0.7.13
netmask: 255.255.255.192
device_type: ovs
type: OVSIntPort
ovs_bridge: br-ctlplane
ovs_options: '"tag=102"'
tag: 102


@ -1,21 +0,0 @@
Customizing undercloud.conf
===========================
The undercloud.conf file is copied to the undercloud VM using a template where the system values
are variables. <https://github.com/openstack/tripleo-quickstart/blob/master/roles/tripleo/undercloud/templates/undercloud.conf.j2>.
The tripleo-quickstart defaults for these variables are suited to a virtual overcloud,
but can be overwritten by passing custom settings to tripleo-quickstart in a settings file
(--extra-vars @<file_path>). For example:
::
undercloud_network_cidr: 10.0.5.0/24
undercloud_local_ip: 10.0.5.1/24
undercloud_network_gateway: 10.0.5.1
undercloud_undercloud_public_vip: 10.0.5.2
undercloud_undercloud_admin_vip: 10.0.5.3
undercloud_local_interface: eth1
undercloud_masquerade_network: 10.0.5.0/24
undercloud_dhcp_start: 10.0.5.5
undercloud_dhcp_end: 10.0.5.24
undercloud_inspection_iprange: 10.0.5.100,10.0.5.120


@ -1,43 +0,0 @@
Install the dependencies
------------------------
You need some software available on your local system before you can run
`quickstart.sh`. You can install the necessary dependencies by running:
::
bash quickstart.sh --install-deps
Setup your virtual environment
------------------------------
tripleo-quickstart includes steps to set up libvirt on the undercloud host
machine and to create and setup the undercloud VM.
Deployments on baremetal hardware require steps from third-party repos,
in addition to the steps in tripleo-quickstart.
Below is an example of a complete call to quickstart.sh to run a full deploy
on baremetal overcloud nodes:
::
# $HW_ENV_DIR is the directory where the baremetal environment-specific
# files are stored
pushd $WORKSPACE/tripleo-quickstart
bash quickstart.sh \
--ansible-debug \
--bootstrap \
--working-dir $WORKSPACE/ \
--tags all \
--no-clone \
--teardown all \
--requirements quickstart-role-requirements.txt \
--requirements $WORKSPACE/$HW_ENV_DIR/requirements_files/$REQUIREMENTS_FILE \
--config $WORKSPACE/$HW_ENV_DIR/config_files/$CONFIG_FILE \
--extra-vars @$WORKSPACE/$HW_ENV_DIR/env_settings.yml \
--playbook $PLAYBOOK \
--release $RELEASE \
$VIRTHOST
popd


@ -1,16 +0,0 @@
Additional steps preparing the environment for deployment
---------------------------------------------------------
Depending on the parameters of the baremetal overcloud environment in use,
other pre-deployment steps may be needed to ensure that the deployment succeeds.
<https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-prep-baremetal/tree/master/tasks>
includes a number of these steps. Whether each step is run depends on variable values
that can be set per environment.
Some examples of additional steps are:
- Adding disk size hints
- Adding disk hints per node, supporting all Ironic hints
- Adjusting MTU values
- Rerunning introspection on failure


@ -1,97 +0,0 @@
Settings for hardware environments
==================================
Throughout the documentation, there are example settings and custom files to
overwrite the virt defaults in TripleO Quickstart. It is recommended to use an
organized directory structure to store the settings and files for each hardware
environment.
Example Directory Structure
---------------------------
Each baremetal environment will need a directory structured as follows:
|-- environment_name
| |-- instackenv.json
| |-- vendor_specific_setup
| |-- <architecture diagram/explanation document>
| |-- network_configs
| | |--<network-isolation-type-1>
| | | |-- <network-environment.yaml file>
| | | |-- env_settings.yml
| | | |-- nic_configs
| | | | |-- ceph-storage.yaml
| | | | |-- cinder-storage.yaml
| | | | |-- compute.yaml
| | | | |-- controller.yaml
| | | | |-- swift-storage.yaml
| | | |-- config_files
| | | | |--config.yml
| | | | |--<other config files>
| | | |-- requirements_files
| | | | |--requirements1.yml
| | | | |--requirements2.yml
| | |--<network-isolation-type-2>
| | | |-- <network-environment.yaml file>
| | | |-- env_settings.yml
| | | |-- nic_configs
| | | | |-- ceph-storage.yaml
| | | | |-- cinder-storage.yaml
| | | | |-- compute.yaml
| | | | |-- controller.yaml
| | | | |-- swift-storage.yaml
| | | |-- config_files
| | | | |--config.yml
| | | | |--<other config files>
| | | |-- requirements_files
| | | | |--requirements1.yml
| | | | |--requirements2.yml
Explanation of Directory Contents
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- instackenv.json (required)
The instackenv.json file added at this top-level directory will replace the templated instackenv.json file for virt deployments.
- vendor_specific_setup (optional)
If any script needs to run to do environment setup before deployment, such as RAID configuration, it can be included here.
- architecture diagram (optional)
Although not required, if there is a diagram or document detailing the network architecture, it is useful to include that document or diagram here as all the settings and network isolation files will be based off of it.
- network_configs (required)
This directory is used to house the directories divided by network isolation type.
- network-isolation-type (required)
Even if deploying without network isolation, the files should be included in a 'none' directory.
There are example files for the following network isolation types: single-nic-vlans, multiple-nics, bond-with-vlans, public-bond, none [1].
The network isolation types 'single_nic_vlans', 'bond_with_vlans' and 'multi-nic' will be deprecated.
[1] Names are derived from the `tripleo-heat-templates configuration <https://github.com/openstack/tripleo-heat-templates/tree/master/network/config>`_
- network-environment.yaml (required, unless deploying with no network isolation)
This file should be named after the network-isolation type, for example: bond_with_vlans.yaml. This naming convention follows the same pattern used by the default, virt workflow.
- env_settings.yaml (required)
This file stores all environment-specific settings to override default settings in TripleO Quickstart and related repos, for example: the location of the instackenv.json file, and setting 'overcloud_nodes' to empty so that quickstart does not create VMs for overcloud nodes. All settings required for undercloud.conf are included here (see the sketch after this list).
- nic_configs (optional)
If the default nic-config files are not suitable for a particular hardware environment, specific ones can be added here and copied to the undercloud. Ensure that the network-environment.yaml file points to the correct location for the nic-configs to be used in deploy.
- config_files (required)
The deployment details are stored in the config file. Different config files can be created for scaling up nodes, HA, and other deployment combinations.
- requirements_files (required)
Multiple requirements files can be passed to quickstart.sh to include additional repos. For example, to include IPMI validation, the requirements file that needs to be included is `here <https://github.com/redhat-openstack/ansible-role-tripleo-validate-ipmi>`_.
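A minimal sketch of an ``env_settings.yml`` for a baremetal environment (paths and values are illustrative; see the OVB settings example elsewhere in these docs for a fuller set):
::
# leave empty so quickstart does not create VMs for overcloud nodes
overcloud_nodes:
# environment-specific files copied to the undercloud
undercloud_instackenv_template: /path/to/environment_name/instackenv.json
network_isolation: true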


@ -1,21 +0,0 @@
TripleO Quickstart
==================
TripleO Quickstart is a fast and easy way to setup and configure your virtual environment for TripleO.
Further documentation can be found at https://github.com/openstack/tripleo-quickstart
A quick way to test that your virthost machine is ready to rock is:
::
ssh root@$VIRTHOST uname -a
Getting the script
------------------
You can download the `quickstart.sh` script with `wget`:
::
wget https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh


@ -1,37 +0,0 @@
Networking
----------
With a Virtual Environment, tripleo-quickstart sets up the networking as part of the workflow.
The networking arrangement needs to be set up prior to working with tripleo-quickstart.
The overcloud nodes will be deployed from the undercloud machine and therefore the
machines need to have their network settings modified to allow for the
overcloud nodes to be PXE booted using the undercloud machine.
As such, the setup requires that:
* All overcloud machines in the setup must support IPMI
* A management provisioning network is setup for all of the overcloud machines.
One NIC from every machine needs to be in the same broadcast domain of the
provisioning network. In the tested environment, this required setting up a new
VLAN on the switch. Note that you should use the same NIC on each of the
overcloud machines (for example: use the second NIC on each overcloud
machine). This is because during installation we will need to refer to that NIC
using a single name across all overcloud machines, e.g. em2.
* The provisioning network NIC should not be the same NIC that you are using
for remote connectivity to the undercloud machine. During the undercloud
installation, an openvswitch bridge will be created for Neutron and the
provisioning NIC will be bridged to the openvswitch bridge. As such,
connectivity would be lost if the provisioning NIC was also used for remote
connectivity to the undercloud machine.
* The overcloud machines can PXE boot off the NIC that is on the private VLAN.
In the tested environment, this required disabling network booting in the BIOS
for all NICs other than the one we wanted to boot and then ensuring that the
chosen NIC is at the top of the boot order (ahead of the local hard disk drive
and CD/DVD drives).
* For each overcloud machine you have: the MAC address of the NIC that will PXE
boot on the provisioning network, and the IPMI information for the machine (i.e. the IP
address of the IPMI NIC, and the IPMI username and password).
Refer to the following diagram for more information.
.. image:: _images/TripleO_Network_Diagram_.jpg


@ -1,23 +0,0 @@
Minimum System Requirements
---------------------------
By default, tripleo-quickstart requires 3 machines:
* 1 Undercloud (can be a Virtual Machine)
* 1 Overcloud Controller
* 1 Overcloud Compute
Commonly, deployments include HA (3 Overcloud Controllers) and multiple Overcloud Compute nodes.
Each Overcloud machine requires at least:
* 1 quad core CPU
* 8 GB free memory
* 60 GB disk space
The undercloud VM or baremetal machine requires:
* 1 quad core CPU
* 16 GB free memory
* 80 GB disk space


@ -1,9 +0,0 @@
Validating the environment prior to deployment
----------------------------------------------
In a baremetal overcloud deployment there is a custom environment and many related settings
and steps. As such, it is worthwhile to validate the environment and custom configuration
files prior to deployment.
A collection of validation tools is available in the 'clapper' repo:
<https://github.com/rthallisey/clapper/>.


@ -1,17 +0,0 @@
Virtual Undercloud VS. Baremetal Undercloud
-------------------------------------------
When deploying the overcloud on baremetal nodes, there is the option of using an undercloud
deployed on a baremetal machine or creating a virtual machine (VM) on that same baremetal machine
and using the VM to serve as the undercloud.
The advantages of using a VM undercloud are:
* The VM can be rebuilt and reinstalled without reprovisioning the entire baremetal machine
* The tripleo-quickstart default workflow is written for a Virtual Environment deployment.
Using a VM undercloud requires less customization of the default workflow.
.. note:: When using a VM undercloud, but baremetal nodes for the overcloud
deployment, the ``overcloud_nodes`` variable in tripleo-quickstart
must be overwritten and set to empty.
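A sketch of the corresponding override in a settings file passed via ``--extra-vars``:
::
# baremetal overcloud nodes are used, so no overcloud VMs are created
overcloud_nodes: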


@ -1,82 +0,0 @@
Virtual Environment
===================
|project| Quickstart can be used in a virtual environment using virtual machines instead
of actual baremetal. However, one baremetal machine (VIRTHOST) is still
needed to act as the host for the virtual machines.
Minimum System Requirements
---------------------------
By default, this setup creates 3 virtual machines:
* 1 Undercloud
* 1 Overcloud Controller
* 1 Overcloud Compute
.. note::
Each virtual machine must consist of at least 4 GB of memory and 40 GB of disk
space.
The virtual machine disk files are thinly provisioned and will not take up
the full 40GB initially.
You will need a baremetal host machine (referred to as ``$VIRTHOST``) with at least
**16G** of RAM, preferably **32G**, and you must be able to ``ssh`` to the
virthost machine as root without a password from the machine running ansible.
Currently the virthost machine must be running a recent Red Hat-based Linux
distribution (CentOS 7, RHEL 7, Fedora 22 - only CentOS 7 is currently tested),
but we hope to add support for non-Red Hat distributions too.
|project| Quickstart currently supports the following operating systems:
* CentOS 7 x86_64
TripleO Quickstart
------------------
TripleO Quickstart is a fast and easy way to setup and configure your virtual environment for TripleO.
Further documentation can be found at https://github.com/openstack/tripleo-quickstart
A quick way to test that your virthost machine is ready to rock is::
ssh root@$VIRTHOST uname -a
Getting the script
^^^^^^^^^^^^^^^^^^
You can download the `quickstart.sh` script with `wget`::
wget https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
Install the dependencies
^^^^^^^^^^^^^^^^^^^^^^^^
You need some software available on your local system before you can run
`quickstart.sh`. You can install the necessary dependencies by running::
bash quickstart.sh --install-deps
Setup your virtual environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deploy with the most basic default options virtual environment by running::
bash quickstart.sh $VIRTHOST
There are many configuration options available in
tripleo-quickstart/config/general_config/ and also in
tripleo-quickstart-extras/config/general_config/
In the examples below the ha.yml config is located in the tripleo-quickstart repository
and the containers_minimal.yml is located in the tripleo-quickstart-extras repository.
All the configuration files will be installed to your working directory.
This does require the user to know what the working directory is set to. The variable OPT_WORKDIR
defaults to ~/.quickstart but can be overridden with -w or --working-dir.
Please review these options and use the appropriate configuration for your deployment.
Below are some examples::
bash quickstart.sh --config=~/.quickstart/config/general_config/ha.yml $VIRTHOST
bash quickstart.sh --config=~/.quickstart/config/general_config/containers_minimal.yml $VIRTHOST


@ -1,88 +0,0 @@
Install the dependencies for TripleO Quickstart
-----------------------------------------------
You need some software available on your local system before you can run
`quickstart.sh`. You can install the necessary dependencies by running:
::
bash quickstart.sh --install-deps
Deploy TripleO using Quickstart on Openstack Instances
------------------------------------------------------
Deployments on Openstack instances require steps from third-party repos,
in addition to the steps in TripleO Quickstart.
Below is an example of a complete call to quickstart.sh to run a full deploy
on Openstack Instances launched via Openstack Virtual Baremetal (OVB/Heat):
::
# $HW_ENV_DIR is the directory where the environment-specific
# files are stored
pushd $WORKSPACE/tripleo-quickstart
bash quickstart.sh \
--ansible-debug \
--bootstrap \
--working-dir $WORKSPACE/ \
--tags all \
--no-clone \
--requirements quickstart-role-requirements.txt \
--requirements $WORKSPACE/$HW_ENV_DIR/requirements_files/$REQUIREMENTS_FILE \
--config $WORKSPACE/$HW_ENV_DIR/config_files/$CONFIG_FILE \
--extra-vars @$OPENSTACK_CLOUD_SETTINGS_FILE \
--extra-vars @$OPENSTACK_CLOUD_CREDS_FILE \
--extra-vars @$WORKSPACE/$HW_ENV_DIR/env_settings.yml \
--playbook $PLAYBOOK \
--release $RELEASE \
localhost
popd
Modify the settings
^^^^^^^^^^^^^^^^^^^
After the undercloud connectivity has been set up, the undercloud is installed and the
overcloud is deployed following the 'baremetal' workflow, using settings relevant to the
undercloud and baremetal nodes created on the Openstack cloud.
Below is a list of example settings (overwriting defaults) that would be passed to quickstart.sh:
::
# undercloud.conf
undercloud_network_cidr: 192.0.2.0/24
undercloud_local_ip: 192.0.2.1/24
undercloud_network_gateway: 192.0.2.1
undercloud_undercloud_public_vip: 192.0.2.2
undercloud_undercloud_admin_vip: 192.0.2.3
undercloud_local_interface: eth1
undercloud_masquerade_network: 192.0.2.0/24
undercloud_dhcp_start: 192.0.2.5
undercloud_dhcp_end: 192.0.2.24
undercloud_inspection_iprange: 192.0.2.25,192.0.2.39
overcloud_nodes:
undercloud_type: ovb
introspect: true
# file locations to be copied to the undercloud (for network-isolation deployment)
undercloud_instackenv_template: "{{ local_working_dir }}/instackenv.json"
network_environment_file: "{{ local_working_dir }}/openstack-virtual-baremetal/network-templates/network-environment.yaml"
baremetal_nic_configs: "{{ local_working_dir }}/openstack-virtual-baremetal/network-templates/nic-configs"
network_isolation: true
# used for access to external network
external_interface: eth2
external_interface_ip: 10.0.0.1
external_interface_netmask: 255.255.255.0
external_interface_hwaddr: fa:05:04:03:02:01
# used for validation
floating_ip_cidr: 10.0.0.0/24
public_net_pool_start: 10.0.0.50
public_net_pool_end: 10.0.0.100
public_net_gateway: 10.0.0.1


@ -1,15 +0,0 @@
TripleO Quickstart
==================
TripleO Quickstart is a fast and easy way to setup and configure your virtual environment for TripleO.
Further documentation can be found at https://github.com/openstack/tripleo-quickstart.
Getting the script
------------------
You can download the `quickstart.sh` script with `wget`:
::
wget https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh


@ -1,85 +0,0 @@
Running TripleO Quickstart on Openstack instances
-------------------------------------------------
By default, TripleO Quickstart uses libvirt to create virtual machines (VM) to serve
as undercloud and overcloud nodes for a TripleO deployment.
With some additional steps and modifications, TripleO Quickstart can set up an undercloud and
deploy the overcloud on instances launched on an Openstack cloud rather than libvirt VMs.
Beginning assumptions
^^^^^^^^^^^^^^^^^^^^^
This document details the workflow for running TripleO Quickstart on Openstack
instances. In particular, the example case is instances created via Heat and
Openstack Virtual Baremetal <https://github.com/openstack/openstack-virtual-baremetal>.
The following are assumed to have been completed before following this document:
* An Openstack cloud exists and has been set up
(and configured as described in
`Patching the Host Cloud <https://openstack-virtual-baremetal.readthedocs.io/en/latest/host-cloud/patches.html>`_
if the cloud is a pre-Mitaka release). From the Mitaka release onward, the cloud should
not require patching.
* The undercloud image under test has been uploaded to Glance in the Openstack cloud.
* A heat stack has been deployed with instances for the undercloud, bmc, and overcloud nodes.
* The nodes.json file has been created (later to be copied to the undercloud as instackenv.json)
Below is an example `env.yaml` file used to create the heat stack that will support a
tripleo-quickstart undercloud and overcloud deployment with network isolation:
::
parameters:
os_user: admin
os_password: password
os_tenant: admin
os_auth_url: http://10.10.10.10:5000/v2.0
bmc_flavor: m1.medium
bmc_image: 'bmc-base'
bmc_prefix: 'bmc'
baremetal_flavor: m1.large
baremetal_image: 'ipxe-boot'
baremetal_prefix: 'baremetal'
key_name: 'key'
private_net: 'private'
node_count: {{ node_count }}
public_net: 'public'
provision_net: 'provision'
# QuintupleO-specific params ignored by virtual-baremetal.yaml
undercloud_name: 'undercloud'
undercloud_image: '{{ latest_undercloud_image }}'
undercloud_flavor: m1.xlarge
external_net: '{{ external_net }}'
undercloud_user_data: |
#!/bin/sh
sed -i "s/no-port-forwarding.*sleep 10\" //" /root/.ssh/authorized_keys
#parameter_defaults:
## Uncomment and customize the following to use an existing floating ip
# undercloud_floating_ip_id: 'uuid of floating ip'
# undercloud_floating_ip: 'address of floating ip'
resource_registry:
## Uncomment the following to use an existing floating ip
# OS::OVB::UndercloudFloating: templates/undercloud-floating-existing.yaml
## Uncomment the following to use no floating ip
# OS::OVB::UndercloudFloating: templates/undercloud-floating-none.yaml
## Uncomment the following to create a private network
OS::OVB::PrivateNetwork: {{ templates_dir }}/private-net-create.yaml
## Uncomment to create all networks required for network-isolation.
## parameter_defaults should be used to override default parameter values
## in baremetal-networks-all.yaml
OS::OVB::BaremetalNetworks: {{ templates_dir }}/baremetal-networks-all.yaml
OS::OVB::BaremetalPorts: {{ templates_dir }}/baremetal-ports-all.yaml
## Uncomment to deploy a quintupleo environment without an undercloud.
# OS::OVB::UndercloudEnvironment: OS::Heat::None


@ -1,61 +0,0 @@
#!/usr/bin/env python
# Copyright 2016 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Usage: openstack stack event list -f json overcloud | \
# heat-deploy-times.py [list of resource names]
# If no resource names are provided, all of the resources will be output.
import json
import sys
import time
def process_events(all_events, events):
times = {}
for event in all_events:
name = event['resource_name']
status = event['resource_status']
# Older clients return timestamps in the first format, newer ones
# append a Z. This way we can handle both formats.
try:
strptime = time.strptime(event['event_time'],
'%Y-%m-%dT%H:%M:%S')
except ValueError:
strptime = time.strptime(event['event_time'],
'%Y-%m-%dT%H:%M:%SZ')
etime = time.mktime(strptime)
if name in events:
if status == 'CREATE_IN_PROGRESS':
times[name] = {'start': etime, 'elapsed': None}
elif status == 'CREATE_COMPLETE':
times[name]['elapsed'] = etime - times[name]['start']
for name, data in sorted(times.items(),
key=lambda x: x[1]['elapsed'],
reverse=True):
elapsed = 'Still in progress'
if times[name]['elapsed'] is not None:
elapsed = times[name]['elapsed']
print('%s %s' % (name, elapsed))
if __name__ == '__main__':
stdin = sys.stdin.read()
all_events = json.loads(stdin)
events = sys.argv[1:]
if not events:
events = set()
for event in all_events:
events.add(event['resource_name'])
process_events(all_events, events)


@ -1,169 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DOCUMENTATION = '''
---
module: ara_graphite
version_added: "1.0"
short_description: Send ARA stats to graphite
description:
- Python ansible module to send ARA stats to graphite
options:
graphite_host:
description:
- The hostname of the Graphite server with optional port:
graphite.example.com:2004. The default port is 2003
required: True
ara_mapping:
description:
- Mapping task names to Graphite paths
required: True
ara_data:
description:
- List of ARA results: ara result list --all -f json
required: True
ara_only_successful:
description:
- Whether to send only successful tasks, ignoring skipped and failed,
by default True.
required: True
'''
EXAMPLES = '''
- name: Get ARA json data
shell: "{{ local_working_dir }}/bin/ara task list --all -f json"
register: ara_data
- ara_graphite:
graphite_host: 10.2.2.2
ara_data: "{{ ara_task_output.stdout }}"
ara_mapping:
- "Name of task that deploys overcloud": overcloud.deploy.seconds
'''
import ast
import datetime
import socket
def stamp(x):
'''Convert ISO timestamp to Unix timestamp
:param x: string with timestamp
:return: string with Unix timestamp
'''
return datetime.datetime.strptime(x, "%Y-%m-%d %H:%M:%S").strftime('%s')
def task_length(x):
'''Calculate task length in seconds from "%H:%M:%S" format
:param x: datetime string
:return: number of seconds spent for task
'''
t = datetime.datetime.strptime(x, "%H:%M:%S")
return datetime.timedelta(hours=t.hour, minutes=t.minute,
seconds=t.second).total_seconds()
def translate(mapping, json_data, only_ok):
'''Create data to send to Graphite server in format:
GraphitePath Timestamp TaskDuration
GraphitePath is taken from mapping dictionary according to task name.
:param mapping: dictionary of mapping task names to graphite paths
:param json_data: JSON data with tasks and times
:return: list of graphite data
'''
items = []
data = ast.literal_eval(json_data)
for task in data:
if not only_ok or (only_ok and task['Status'] in ['changed', 'ok']):
if task['Name'] in mapping:
timestamp, duration = stamp(task['Time Start']), task_length(
task['Duration'])
items.append([mapping[task['Name']], duration, timestamp])
return items
def send(data, gr_host, gr_port, prefix):
'''Actual sending of data to Graphite server via network
:param data: list of items to send to Graphite
:param gr_host: Graphite host (with optional port)
:param prefix: prefix to append before Graphite path
:return: True if sent successfully, otherwise False
'''
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3.0)
try:
s.connect((gr_host, gr_port))
except Exception as e:
return False, str(e)
for content in data:
s.send(prefix + " ".join([str(i) for i in content]) + "\n")
s.close()
return True, ''
def send_stats(gr_host, gr_port, mapping, json_data, prefix, only_ok):
'''Send ARA statistics to Graphite server
:param gr_host: Graphite host (with optional port)
:param mapping: dictionary of mapping task names to graphite paths
:param json_data: JSON data with tasks and times
:param prefix: prefix to append before Graphite path
:return: JSON ansible result
'''
data2send = translate(mapping, json_data, only_ok)
response, reason = send(data2send, gr_host, gr_port, prefix)
if not response:
return {
'changed': False,
'failed': True,
'graphite_host': gr_host,
'msg': "Can't connect to Graphite: %s" % reason
}
return {
'changed': True,
'graphite_host': gr_host,
'sent_data': data2send,
}
def main():
module = AnsibleModule( # noqa
argument_spec=dict(
graphite_host=dict(required=True, type='str'),
graphite_port=dict(required=False, type='int', default=2003),
ara_mapping=dict(required=True, type='dict'),
ara_data=dict(required=True, type='str'),
graphite_prefix=dict(required=False, type='str', default=''),
only_successful_tasks=dict(required=False, type='bool',
default=True)
)
)
result = send_stats(module.params['graphite_host'],
module.params['graphite_port'],
module.params['ara_mapping'],
module.params['ara_data'],
module.params['graphite_prefix'],
module.params['only_successful_tasks'])
module.exit_json(**result)
from ansible.module_utils.basic import * # noqa
if __name__ == "__main__":
main()


@ -1,516 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DOCUMENTATION = '''
---
module: ara_influxdb
version_added: "1.0"
short_description: Send ARA stats to InfluxDB
description:
- Python ansible module to send ARA stats to InfluxDB timeseries database
options:
influxdb_url:
description:
- The URL of the InfluxDB HTTP API, for example https://influxdb.example.com
required: True
influxdb_port:
description:
- The port of the InfluxDB HTTP API, 8086 by default
required: True
influxdb_user:
description:
- User for authentication to InfluxDB server
required: False
influxdb_password:
description:
- Password for authentication to InfluxDB server
required: False
influxdb_db:
description:
- Name of the InfluxDB database to send data to
required: True
measurement:
description:
- Name of Influx measurement in database
required: True
data_file:
description:
- Path to a file in which to save the InfluxDB data
required: True
ara_data:
description:
- List of ARA results produced by 'ara result list --all -f json'
required: True
only_successful_tasks:
description:
- Whether to send only successful tasks, ignoring skipped and failed ones;
True by default.
required: True
mapped_fields:
description:
- Whether to use the configured static map of fields to task names;
True by default.
required: False
standard_fields:
description:
- Whether to send the standard per-job fields (timings);
True by default.
required: False
longest_tasks:
description:
- How many of the longest tasks to send; 0 (disabled) by default.
required: False
'''
EXAMPLES = '''
- name: Get ARA json data
shell: "{{ local_working_dir }}/bin/ara result list --all -f json"
register: ara_data
- name: Collect and send data to InfluxDB
ara_influxdb:
influxdb_url: https://influxdb.example.com
influxdb_port: 8086
influxdb_user: db_user
influxdb_password: db_password
influxdb_db: db_name
ara_data: "{{ ara_data.stdout }}"
measurement: test
data_file: /tmp/test_data
only_successful_tasks: true
mapped_fields: false
standard_fields: false
longest_tasks: 15
when: ara_data.stdout != "[]"
'''
import ast # noqa pylint: disable=C0413
import datetime # noqa pylint: disable=C0413
import os # noqa pylint: disable=C0413
import re # noqa pylint: disable=C0413
import requests # noqa pylint: disable=C0413
from requests.auth import HTTPBasicAuth # noqa pylint: disable=C0413
SCHEME = '{measure},{tags} {fields} {timestamp}'
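# Illustrative example of a rendered SCHEME line (assumed values, for clarity):
# "overcloud,branch=master,cloud=rax-iad,pipeline=check,toci_jobtype=featureset001 job_duration=5400,undercloud_install=1200 1555555555"
# i.e. InfluxDB line protocol: measurement, comma-separated tags, a space,
# comma-separated fields, a space, and a timestamp in seconds.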
CUSTOM_MAP = {
'undercloud_install': ["undercloud-deploy : Install the undercloud"],
'prepare_images': [
"overcloud-prep-images : Prepare the overcloud images for deploy"],
'images_update': [
"modify-image : Convert image",
"modify-image : Run script on image",
"modify-image : Close qcow2 image"
],
'images_build': ["build-images : run the image build script (direct)"],
'containers_prepare': [
"overcloud-prep-containers : "
"Prepare for the containerized deployment"],
'overcloud_deploy': ["overcloud-deploy : Deploy the overcloud"],
'pingtest': ["validate-simple : Validate the overcloud"],
'tempest_run': ["validate-tempest : Execute tempest"],
'undercloud_reinstall': [
"validate-undercloud : Reinstall the undercloud to check idempotency"],
'overcloud_delete': [
"overcloud-delete : check for delete command to complete or fail"],
'overcloud_upgrade': ["overcloud-upgrade : Upgrade the overcloud",
"tripleo-upgrade : run docker upgrade converge step",
"tripleo-upgrade : run docker upgrade composable "
"step"],
'undercloud_upgrade': ["tripleo-upgrade : upgrade undercloud"],
}
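# Illustrative example (assumption, not in the original source): if the ARA
# task "undercloud-deploy : Install the undercloud" took 1200 seconds in
# total, InfluxConfiguredFields renders the field "undercloud_install=1200,"
# by summing the durations of every task name listed under that key.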
class InfluxStandardTags(object):
'''InfluxStandardTags calculates the standard job-describing tags:
* release branch
* nodepool provider cloud
* zuul pipeline name
* toci_jobtype
and renders them into the tags template.
'''
def branch(self):
return os.environ.get('STABLE_RELEASE') or 'master'
def cloud(self):
return os.environ.get('NODEPOOL_PROVIDER', 'null')
def pipeline(self):
if os.environ.get('ZUUL_PIPELINE'):
if 'check' in os.environ['ZUUL_PIPELINE']:
return 'check'
elif 'gate' in os.environ['ZUUL_PIPELINE']:
return 'gate'
elif 'periodic' in os.environ['ZUUL_PIPELINE']:
return 'periodic'
return 'null'
def toci_jobtype(self):
return os.environ.get('TOCI_JOBTYPE', 'null')
def render(self):
return ('branch=%s,'
'cloud=%s,'
'pipeline=%s,'
'toci_jobtype=%s') % (
self.branch(),
self.cloud(),
self.pipeline(),
self.toci_jobtype(),
)
class InfluxStandardFields(object):
'''InfluxStandardFields calculates the timings of job steps:
* whole job duration
* testing environment preparation
* quickstart files and environment preparation
* zuul host preparation
and renders them into the fields template.
'''
def job_duration(self):
if os.environ.get('START_JOB_TIME'):
return int(
datetime.datetime.utcnow().strftime("%s")) - int(
os.environ.get('START_JOB_TIME'))
return 0
def logs_size(self):
# not implemented
return 0
def timestamp(self):
return datetime.datetime.utcnow().strftime("%s")
def testenv_prepare(self):
return os.environ.get('STATS_TESTENV', 0)
def quickstart_prepare(self):
return os.environ.get('STATS_OOOQ', 0)
def zuul_host_prepare(self):
if (os.environ.get('DEVSTACK_GATE_TIMEOUT') and # noqa: W504
os.environ.get('REMAINING_TIME')):
return (int(
os.environ['DEVSTACK_GATE_TIMEOUT']) - int(
os.environ['REMAINING_TIME'])) * 60
return 0
def render(self):
return ('job_duration=%d,'
'logs_size=%d,'
'testenv_prepare=%s,'
'quickstart_prepare=%s,'
'zuul_host_prepare=%d,'
) % (
self.job_duration(),
self.logs_size(),
self.testenv_prepare(),
self.quickstart_prepare(),
self.zuul_host_prepare()
)
class InfluxConfiguredFields(object):
'''InfluxConfiguredFields calculates durations for fields defined in a
static map of field names to ansible task names (CUSTOM_MAP) and
renders them into the fields template.
'''
def __init__(self, match_map, json_data, only_ok=True):
"""Set up data for configured field
:param match_map {dict} -- Map of tasks from ansible playbook to
names of data fields in influxDB.
:param json_data: {dict} -- JSON data generated by ARA
:param only_ok=True: {bool} -- to count only passed tasks
"""
self.map = match_map
self.only_ok = only_ok
self.data = json_data
def task_maps(self):
times_dict = tasks_times_dict(self.data, self.only_ok)
tasks = {}
for i in self.map:
tasks[i] = sum([int(times_dict.get(k, 0)) for k in self.map[i]])
return tasks
def render(self):
tasks = self.task_maps()
result = ''
for task, timest in tasks.items():
result += "%s=%d," % (task, timest)
return result
class InfluxLongestFields(object):
'''InfluxLongestFields collects the tasks that took the longest time.
The tasks may come from either the undercloud or the overcloud playbooks.
'''
def __init__(self, json_data, only_ok=True, top=15):
"""Constructor for InfluxLongestFields
:param json_data: {dict} -- JSON data generated by ARA
:param only_ok=True: {bool} -- to count only passed tasks
:param top=15: {int} -- how many tasks to send to DB
"""
self.top = top
self.only_ok = only_ok
self.data = json_data
def collect_tasks(self):
tasks_dict = tasks_times_dict(self.data, self.only_ok)
return sorted(
[[k, v] for k, v in tasks_dict.items()],
key=lambda x: x[1],
reverse=True
)[:self.top]
def translate_names(self, names):
for i in names:
i[0] = re.sub(
r'[^0-9A-z\-_]+',
'',
i[0].replace(":", "__").replace(" ", "_"))
i[1] = int(i[1])
return names
def render(self):
result = ''
for i in self.translate_names(self.collect_tasks()):
result += "{0}={1},".format(*i)
return result
def tasks_times_dict(tasks, only_ok=True):
times_dict = {}
for task in tasks:
if not only_ok or task['Status'] in ['changed', 'ok']:
name = task['Name']
if name in times_dict:
times_dict[name].append(task['Duration'])
else:
times_dict[name] = [task['Duration']]
# because some tasks are executed multiple times, we need to count
# all of them and sum up all of their durations
for i in times_dict:
times_dict[i] = sum([task_length(t) for t in times_dict[i]])
return times_dict
def task_length(x):
'''Calculate task length in seconds from "%H:%M:%S" format
Arguments:
x {string} -- a timestamp
Returns:
int -- total seconds for the task
'''
t = datetime.datetime.strptime(x, "%H:%M:%S")
return datetime.timedelta(hours=t.hour, minutes=t.minute,
seconds=t.second).total_seconds()
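# Worked example (illustrative): task_length("0:03:25") parses to 0 hours,
# 3 minutes and 25 seconds, and therefore returns 205.0.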
def translate(measure, json_data, only_ok,
mapped_fields=True,
standard_fields=True,
longest_tasks=0):
'''Create data to send to the InfluxDB server in the SCHEME format
Fields keys are taken from ARA data according to task names.
:param measure: name of the InfluxDB measurement
:param json_data: JSON data with tasks and times
:param only_ok: boolean, whether to count only successful tasks
:param mapped_fields: whether to render the configured map of fields
:param standard_fields: whether to render the standard per-job fields
:param longest_tasks: how many of the longest tasks to render (0 = none)
:return: full InfluxDB scheme line
'''
data = ast.literal_eval(json_data)
tags = InfluxStandardTags()
std_fields = InfluxStandardFields()
map_fields = InfluxConfiguredFields(
match_map=CUSTOM_MAP, json_data=data, only_ok=only_ok)
longest_fields = InfluxLongestFields(json_data=data,
top=longest_tasks,
only_ok=only_ok)
fields = ''
if standard_fields:
fields += std_fields.render()
if mapped_fields:
fields += map_fields.render()
if longest_tasks:
fields += longest_fields.render()
fields = fields.rstrip(",")
result = SCHEME.format(
measure=measure,
tags=tags.render(),
fields=fields,
timestamp=std_fields.timestamp()
)
return result
def create_file_with_data(data, path):
'''Append InfluxDB data to a file, creating it if it does not exist
:param data: data to write
:param path: path of the file
:return:
'''
with open(path, "a") as f:
f.write(data + "\n")
def send(file_path, in_url, in_port, in_user, in_pass, in_db):
'''Actual sending of data to InfluxDB server via network
:param file_path: path to file with data to send
:param in_url: InfluxDB URL
:param in_port: InfluxDB port
:param in_user: InfluxDB user
:param in_pass: InfluxDB password
:param in_db: InfluxDB database name
:return: True if sent successfully, otherwise False
'''
url = in_url.rstrip("/")
if in_port != 80:
url += ":%d" % in_port
url += "/write"
params = {"db": in_db, "precision": "s"}
if in_user:
if not in_pass:
if os.environ.get('INFLUXDB_PASSWORD'):
with open(os.environ['INFLUXDB_PASSWORD']) as f:
in_pass = f.read().strip()
else:
return False, 'InfluxDB password was not provided!'
auth = HTTPBasicAuth(in_user, in_pass)
else:
auth = None
with open(file_path, "rb") as payload:
req = requests.post(url, params=params, data=payload, auth=auth,
verify=False)
if not req or req.status_code != 204:
return False, "HTTP: %s\nResponse: %s" % (req.status_code, req.content)
return True, ''
def send_stats(in_url, in_port, in_user, in_pass, in_db, json_data,
measure, data_file, only_ok, mapped_fields=True,
standard_fields=True, longest_tasks=0):
'''Send ARA statistics to InfluxDB server
:param in_url: InfluxDB URL
:param in_port: InfluxDB port
:param in_user: InfluxDB user
:param in_pass: InfluxDB password
:param in_db: InfluxDB database name
:param json_data: JSON data with tasks and times from ARA
:param measure: InfluxDB measurement name
:param data_file: path to file with data to send
:param only_ok: boolean, whether to count only successful tasks
:param mapped_fields: whether to use the configured map of fields and tasks
:param standard_fields: whether to send the standard per-job fields (timings)
:param longest_tasks: how many of the longest tasks to send (0 = none)
:return: JSON ansible result
'''
data2send = translate(measure, json_data, only_ok, mapped_fields,
standard_fields, longest_tasks)
create_file_with_data(data2send, data_file)
if in_url:
response, reason = send(data_file, in_url, in_port, in_user, in_pass,
in_db)
if not response:
return {
'changed': False,
'failed': True,
'influxdb_url': in_url,
'msg': reason
}
return {
'changed': True,
'influxdb_url': in_url,
'sent_data': data2send,
}
else:
return {
'changed': True,
'data_file': data_file,
'sent_data': data2send,
}
def main():
module = AnsibleModule( # noqa
argument_spec=dict(
influxdb_url=dict(required=True, type='str'),
influxdb_port=dict(required=True, type='int'),
influxdb_user=dict(required=False, type='str', default=None),
influxdb_password=dict(required=False, type='str',
default=None, no_log=True),
influxdb_db=dict(required=True, type='str'),
ara_data=dict(required=True, type='str'),
measurement=dict(required=True, type='str'),
data_file=dict(required=True, type='str'),
only_successful_tasks=dict(required=True, type='bool'),
mapped_fields=dict(default=True, type='bool'),
standard_fields=dict(default=True, type='bool'),
longest_tasks=dict(default=0, type='int'),
)
)
result = send_stats(module.params['influxdb_url'],
module.params['influxdb_port'],
module.params['influxdb_user'],
module.params['influxdb_password'],
module.params['influxdb_db'],
module.params['ara_data'],
module.params['measurement'],
module.params['data_file'],
module.params['only_successful_tasks'],
module.params['mapped_fields'],
module.params['standard_fields'],
module.params['longest_tasks'],
)
module.exit_json(**result)
# pylint: disable=W0621,W0622,W0614,W0401,C0413
from ansible.module_utils.basic import * # noqa
if __name__ == "__main__":
main()

View File

@ -1,3 +0,0 @@
---
dependencies:
- extras-common

View File

@ -1,42 +0,0 @@
# AWK script used to parse shell scripts, created during TripleO deployments,
# and convert them into rST files for digestion by Sphinx.
#
# General notes:
#
# - Only blocks between `### --start_docs` and `### --stop_docs` will be
# parsed
# - Lines ending with ` #nodocs` will be excluded from rST output
# - Lines containing `## ::` indicate subsequent lines should be formatted
# as code blocks
# - Other lines beginning with `## <anything else>` will have the prepended
# `## ` removed. (This is how you would add general rST formatting)
# - All other lines (including shell comments) will be indented by four spaces
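#
# Illustrative example (assumed input, not from a real deployment script):
#   ### --start_docs
#   ## Install the undercloud
#   ## ::
#   openstack undercloud install
#   ### --stop_docs
# would be rendered as the plain text "Install the undercloud", a literal
# block marker "::" and the indented line "    openstack undercloud install".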
/^### --start_docs/ {
for (;;) {
if ((getline line) <= 0)
unexpected_eof()
if (line ~ /^### --stop_docs/)
break
if (match(line, ".* #nodocs$"))
continue
if (substr(line, 0, 5) == "## ::") {
line = "\n::\n"
} else if (substr(line, 0, 3) == "## ") {
line = substr(line, 4)
} else if (line != "") {
line = "    "line
}
print line > "/dev/stdout"
}
}
function unexpected_eof() {
printf("%s:%d: unexpected EOF or error\n", FILENAME, FNR) > "/dev/stderr"
exit 1
}
END {
if (curfile)
close(curfile)
}

View File

@ -1,12 +0,0 @@
---
- name: Get ARA json data
shell: "{{ local_working_dir }}/bin/ara result list --all -f json"
register: ara_data
- name: Send to graphite
ara_graphite:
graphite_host: "{{ ara_graphite_server }}"
ara_mapping: "{{ ara_tasks_map }}"
ara_data: "{{ ara_data.stdout }}"
graphite_prefix: "{{ ara_graphite_prefix }}"
only_successful_tasks: "{{ ara_only_successful_tasks }}"

View File

@ -1,59 +0,0 @@
---
- name: Get ARA json data
shell: "{{ local_working_dir }}/bin/ara result list --all -f json"
register: ara_data
- name: Collect and send data to InfluxDB
ara_influxdb:
influxdb_url: "{{ influxdb_url|default('') }}"
influxdb_port: "{{ influxdb_port }}"
influxdb_user: "{{ influxdb_user }}"
influxdb_password: "{{ influxdb_password }}"
influxdb_db: "{{ influxdb_dbname}}"
ara_data: "{{ ara_data.stdout }}"
measurement: "{{ influxdb_measurement }}"
data_file: "{{ influxdb_data_file_path }}"
only_successful_tasks: "{{ influxdb_only_successful_tasks }}"
- name: Get ARA json data for undercloud
become: true
shell: "{{ local_working_dir }}/bin/ara result list --all -f json"
register: ara_root_data
- name: Collect and send data to InfluxDB
ara_influxdb:
influxdb_url: "{{ influxdb_url|default('') }}"
influxdb_port: "{{ influxdb_port }}"
influxdb_user: "{{ influxdb_user }}"
influxdb_password: "{{ influxdb_password }}"
influxdb_db: "{{ influxdb_dbname}}"
ara_data: "{{ ara_root_data.stdout }}"
measurement: "undercloud"
data_file: "{{ influxdb_data_file_path }}"
only_successful_tasks: "{{ influxdb_only_successful_tasks }}"
mapped_fields: false
standard_fields: false
longest_tasks: 15
when: ara_root_data.stdout != "[]"
- name: Get ARA json data for overcloud
shell: "{{ local_working_dir }}/bin/ara result list --all -f json"
register: ara_oc_data
environment:
ARA_DATABASE: 'sqlite:///{{ ara_overcloud_db_path }}'
- name: Collect and send data to InfluxDB
ara_influxdb:
influxdb_url: "{{ influxdb_url|default('') }}"
influxdb_port: "{{ influxdb_port }}"
influxdb_user: "{{ influxdb_user }}"
influxdb_password: "{{ influxdb_password }}"
influxdb_db: "{{ influxdb_dbname}}"
ara_data: "{{ ara_oc_data.stdout }}"
measurement: "overcloud"
data_file: "{{ influxdb_data_file_path }}"
only_successful_tasks: "{{ influxdb_only_successful_tasks }}"
mapped_fields: false
standard_fields: false
longest_tasks: 15
when: ara_oc_data.stdout != "[]"

View File

@ -1,430 +0,0 @@
---
- become: true
ignore_errors: true
block:
- name: Ensure required rpms for logging are installed
package:
state: present
name:
- gzip
- rsync
- socat
- tar
- name: Prepare directory with extra logs
file: dest=/var/log/extra state=directory
- name: rpm -qa
shell: rpm -qa | sort -f >/var/log/extra/rpm-list.txt
- name: package list installed
shell: "{{ ansible_pkg_mgr }} list installed >/var/log/extra/package-list-installed.txt"
- name: Collecting /proc/cpuinfo|meminfo|swaps
shell: "cat /proc/{{item}} &> /var/log/extra/{{item}}.txt"
with_items:
- cpuinfo
- meminfo
- swaps
- name: Collect installed cron jobs
shell: |
for user in $(cut -f1 -d':' /etc/passwd); do \
echo $user; crontab -u $user -l | grep -v '^$\|^\s*\#\|^\s*PATH'; done \
&> /var/log/extra/installed_crons.txt
# used by OSP Release Engineering to import into internal builds
- name: package import delorean
shell: |
repoquery --disablerepo='*' --enablerepo='delorean'\
-a --qf '%{sourcerpm}'|sort -u|sed 's/.src.rpm//g' >> /var/log/extra/import-delorean.txt
# used by OSP Release Engineering to import into internal builds
- name: package import delorean-testing
shell: |
repoquery --disablerepo='*' --enablerepo='delorean-*-testing'\
-a --qf '%{sourcerpm}'|sort -u|sed 's/.src.rpm//g' >> /var/log/extra/import-delorean-testing.txt
- name: Collect logs from all failed systemd services
shell: >
systemctl -t service --failed --no-legend | awk '{print $1}'
| xargs -r -n1 journalctl -u > /var/log/extra/failed_services.txt 2>&1
- name: Collect network status info
shell: >
echo "netstat" > /var/log/extra/network.txt;
netstat -i &> /var/log/extra/network.txt;
for ipv in 4 6; do
echo "### IPv${ipv} addresses" >> /var/log/extra/network.txt;
ip -${ipv} a &>> /var/log/extra/network.txt;
echo "### IPv${ipv} routing" >> /var/log/extra/network.txt;
ip -${ipv} r &>> /var/log/extra/network.txt;
echo "### IPTables (IPv${ipv})" &>> /var/log/extra/network.txt;
test $ipv -eq 4 && iptables-save &>> /var/log/extra/network.txt;
test $ipv -eq 6 && ip6tables-save &>> /var/log/extra/network.txt;
done;
(for NS in $(ip netns list); do
for ipv in 4 6; do
echo "==== $NS (${ipv})====";
echo "### IPv${ipv} addresses";
ip netns exec $NS ip -${ipv} a;
echo "### IPv${ipv} routing";
ip netns exec $NS ip -${ipv} r;
echo "### IPTables (IPv${ipv})";
test $ipv -eq 4 && ip netns exec $NS iptables-save;
test $ipv -eq 6 && ip netns exec $NS ip6tables-save;
done
PIDS="$(ip netns pids $NS)";
[[ ! -z "$PIDS" ]] && ps --no-headers -f --pids "$PIDS";
echo "";
done) &>> /var/log/extra/network-netns;
(for NB in $(ovs-vsctl show | grep Bridge |awk '{print $2}'); do
echo "==== Bridge name - $NB ====";
ovs-ofctl show $NB;
ovs-ofctl dump-flows $NB;
echo "";
done;
ovsdb-client dump) &> /var/log/extra/network-bridges;
- name: lsof -P -n
shell: "lsof -P -n &> /var/log/extra/lsof.txt"
- name: pstree -p
shell: "pstree -p &> /var/log/extra/pstree.txt"
- name: sysctl -a
shell: "sysctl -a &> /var/log/extra/sysctl.txt"
- name: netstat -lnp
shell: "netstat -lnp &> /var/log/extra/netstat.txt"
- name: openstack-status
shell: "which openstack-status &> /dev/null && (. ~/keystonerc_admin; openstack-status &> /var/log/extra/openstack-status.txt)"
when: "'controller' in inventory_hostname"
- name: List nova servers on undercloud
shell: >
if [[ -e {{ working_dir }}/stackrc ]]; then
source {{ working_dir }}/stackrc;
nova list &> /var/log/extra/nova_list.txt;
fi
- name: Get haproxy stats
shell: >
pgrep haproxy && \
test -S /var/lib/haproxy/stats && \
echo 'show info;show stat;show table' | socat /var/lib/haproxy/stats stdio &> /var/log/extra/haproxy-stats.txt || \
echo "No HAProxy or no socket on host" > /var/log/extra/haproxy-stats.txt
- name: lsmod
shell: "lsmod &> /var/log/extra/lsmod.txt"
- name: lspci
shell: "lspci &> /var/log/extra/lspci.txt"
- name: pip list
shell: "pip list &> /var/log/extra/pip.txt"
- name: lvm debug
shell: "(vgs; pvs; lvs) &> /var/log/extra/lvm.txt"
- name: Collect services status
shell: |
systemctl list-units --full --all &> /var/log/extra/services.txt
systemctl status "*" &>> /var/log/extra/services.txt
- name: check if ODL is enabled via docker
shell: docker ps | grep opendaylight_api
register: odl_container_enabled
- name: check if ODL is enabled via podman
shell: podman ps | grep opendaylight_api
register: odl_container_enabled
when: odl_container_enabled.rc != 0
- name: check if ODL is enabled via rpm
shell: rpm -qa | grep opendaylight
register: odl_rpm_enabled
- name: Create ODL log directory
file: dest="{{ odl_extra_log_dir }}" state=directory
when: (odl_rpm_enabled.rc == 0) or (odl_container_enabled.rc == 0)
- name: Create rsync filter file
template:
src: "odl_extra_logs.j2"
dest: "/tmp/odl_extra_logs.sh"
- name: Collect OVS outputs for ODL
shell: "bash /tmp/odl_extra_logs.sh"
when: (odl_rpm_enabled.rc == 0) or (odl_container_enabled.rc == 0)
- name: Collect ODL info and logs (RPM deployment)
shell: >
cp /opt/opendaylight/data/log/* /var/log/extra/odl/;
journalctl -u opendaylight > /var/log/extra/odl/odl_journal.log
when: odl_rpm_enabled.rc == 0
- name: Generate human-readable SAR logs
shell: "[[ -f /usr/lib64/sa/sa2 ]] && /usr/lib64/sa/sa2 -A"
- name: check for dstat log file
stat: path=/var/log/extra/dstat-csv.log
register: dstat_logfile
- name: kill dstat
shell: "pkill dstat"
become: true
when: dstat_logfile.stat.exists
- name: Get dstat_graph tool
git:
repo: "https://github.com/Dabz/dstat_graph.git"
dest: "/tmp/dstat_graph"
version: master
when: dstat_logfile.stat.exists
- name: Generate HTML dstat graphs if it exists
shell: "/tmp/dstat_graph/generate_page.sh /var/log/extra/dstat-csv.log > /var/log/extra/dstat.html"
when: dstat_logfile.stat.exists
args:
chdir: "/tmp/dstat_graph"
- name: Search for AVC denied
shell: >
grep -i denied /var/log/audit/audit* &&
grep -i denied /var/log/audit/audit* > /var/log/extra/denials.txt
- name: Search for segfaults in logs
shell: >
grep -v ansible-command /var/log/messages | grep segfault &&
grep -v ansible-command /var/log/messages | grep segfault > /var/log/extra/segfaults.txt
- name: Search for oom-killer instances in logs
shell: >
grep -v ansible-command /var/log/messages | grep oom-killer &&
grep -v ansible-command /var/log/messages | grep oom-killer > /var/log/extra/oom-killers.txt
- name: Ensure sos package is installed when collect sosreport(s)
package:
name: sos
state: present
when: artcl_collect_sosreport|bool
- name: Collect sosreport
command: >
sosreport {{ artcl_sosreport_options }}
when: artcl_collect_sosreport|bool
- name: Collect delorean logs
shell: >
if [[ -e /home/{{ undercloud_user }}/DLRN/data/repos ]]; then
rm -rf /tmp/delorean_logs && mkdir /tmp/delorean_logs;
find /home/{{ undercloud_user }}/DLRN/data/repos/ -name '*.log' -exec cp --parents \{\} /tmp/delorean_logs/ \; ;
find /tmp/delorean_logs -name '*.log' -exec gzip \{\} \; ;
find /tmp/delorean_logs -name '*.log.gz' -exec sh -c 'x="{}"; mv "$x" "${x%.log.gz}.log.txt.gz"' \; ;
rm -rf {{ artcl_collect_dir }}/delorean_logs && mkdir {{ artcl_collect_dir }}/delorean_logs;
mv /tmp/delorean_logs/home/{{ undercloud_user }}/DLRN/data/repos/* {{ artcl_collect_dir }}/delorean_logs/;
fi
- name: Collect container info and logs
shell: >
for engine in docker podman; do
if [ $engine = 'docker' ]; then
(command -v docker && systemctl is-active docker) || continue
# container_cp CONTAINER SRC DEST
container_cp() {
docker cp ${1}:${2} $3
}
fi
if [ $engine = 'podman' ]; then
command -v podman || continue
# NOTE(cjeanner): podman has no "cp" subcommand, we hence have to mount the container, copy,
# umount it. More info: https://www.mankier.com/1/podman-cp
# See also: https://github.com/containers/libpod/issues/613
container_cp() {
mnt=$(podman mount $1)
cp -rT ${mnt}${2} $3
podman umount $1
}
fi
BASE_CONTAINER_EXTRA=/var/log/extra/${engine};
mkdir -p $BASE_CONTAINER_EXTRA;
ALL_FILE=$BASE_CONTAINER_EXTRA/${engine}_allinfo.log;
CONTAINER_INFO_CMDS=(
"${engine} ps --all --size"
"${engine} images"
"${engine} stats --all --no-stream"
"${engine} version"
"${engine} info"
);
if [ $engine = 'docker' ]; then
CONTAINER_INFO_CMDS+=("${engine} volume ls")
fi
for cmd in "${CONTAINER_INFO_CMDS[@]}"; do
echo "+ $cmd" >> $ALL_FILE;
$cmd >> $ALL_FILE;
echo "" >> $ALL_FILE;
echo "" >> $ALL_FILE;
done;
# Get only failed containers, in a dedicated file
${engine} ps -a | grep -vE ' (IMAGE|Exited \(0\)|Up) ' &>> /var/log/extra/failed_containers.log;
for cont in $(${engine} ps | awk {'print $NF'} | grep -v NAMES); do
INFO_DIR=$BASE_CONTAINER_EXTRA/containers/${cont};
mkdir -p $INFO_DIR;
(
set -x;
if [ $engine = 'docker' ]; then
${engine} top $cont auxw;
# NOTE(cjeanner): `podman top` does not support `ps` options.
elif [ $engine = 'podman' ]; then
${engine} top $cont;
fi
${engine} exec $cont top -bwn1;
${engine} exec $cont bash -c "\$(command -v dnf || command -v yum) list installed";
${engine} inspect $cont;
) &> $INFO_DIR/${engine}_info.log;
container_cp $cont /var/lib/kolla/config_files/config.json $INFO_DIR/config.json;
# NOTE(flaper87): This should go away. Services should be
# using a `logs` volume
# NOTE(mandre) Do not copy logs if the container is bind-mounting the /var/log directory
if ! ${engine} exec $cont stat $BASE_CONTAINER_EXTRA &> /dev/null; then
container_cp $cont /var/log $INFO_DIR/log;
fi;
# Delete symlinks because they break log collection and are generally
# not useful
find $INFO_DIR -type l -delete;
done;
# NOTE(cjeanner) the previous loop cannot have the "-a" flag because of the
# "exec" calls. So we just loop a second time, over ALL containers,
# in order to get all the logs we can. For instance, the previous loop
# would not let us know why a container is "Exited (1)", preventing
# efficient debugging.
for cont in $(${engine} ps -a | awk {'print $NF'} | grep -v NAMES); do
INFO_DIR=$BASE_CONTAINER_EXTRA/containers/${cont};
mkdir -p $INFO_DIR;
${engine} logs $cont &> $INFO_DIR/stdout.log;
done;
# NOTE(flaper87) Copy contents from the logs volume. We can expect this
# volume to exist in a containerized environment.
# NOTE(cjeanner): Rather test the existence of the volume, as podman does not
# have such a thing
if [ -d /var/lib/docker/volumes/logs/_data ]; then
cp -r /var/lib/docker/volumes/logs/_data $BASE_CONTAINER_EXTRA/logs;
fi
done
- name: Collect config-data
shell: cp -r /var/lib/config-data/puppet-generated /var/log/config-data
- name: Collect text version of the journal from last four hours
shell: journalctl --since=-4h --lines=100000 > /var/log/journal.txt
- name: Collect errors and rename if more than 10 MB
shell: >
grep -rE '^[-0-9]+ [0-9:\.]+ [0-9 ]*ERROR ' /var/log/ |
sed "s/\(.*\)\(20[0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]\.[0-9]\+\)\(.*\)/\2 ERROR \1\3/g" > /tmp/errors.txt;
if (( $(stat -c "%s" /tmp/errors.txt) > 10485760 )); then
ERR_NAME=big-errors.txt;
else
ERR_NAME=errors.txt;
fi;
mv /tmp/errors.txt /var/log/extra/${ERR_NAME}.txt
- name: Create an index file for logstash
shell: >
for i in {{ artcl_logstash_files|default([])|join(" ") }}; do
cat $i; done | grep "^20.*|" | sort -sk1,2 |
sed "s/\(20[0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]\.*[0-9]*\)\(.*\)/\1 INFO \2/g" > /var/log/extra/logstash.txt
- name: Set default collect list
set_fact:
collect_list: "{{ artcl_collect_list }} + {{ artcl_collect_list_append|default([]) }}"
- name: Override collect list
set_fact:
collect_list: "{{ artcl_collect_override[inventory_hostname] }}"
when:
- artcl_collect_override is defined
- artcl_collect_override[inventory_hostname] is defined
- name: Create temp directory before gathering logs
file:
dest: "/tmp/{{ inventory_hostname }}"
state: directory
- name: Create rsync filter file
template:
src: "rsync-filter.j2"
dest: "/tmp/{{ inventory_hostname }}-rsync-filter"
- name: Gather the logs to /tmp
become: true
shell: >
set -o pipefail &&
rsync --quiet --recursive --copy-links --prune-empty-dirs
--filter '. /tmp/{{ inventory_hostname }}-rsync-filter' / /tmp/{{ inventory_hostname }};
find /tmp/{{ inventory_hostname }} -type d -print0 | xargs -0 chmod 755;
find /tmp/{{ inventory_hostname }} -type f -print0 | xargs -0 chmod 644;
find /tmp/{{ inventory_hostname }} -not -type f -not -type d -delete;
chown -R {{ ansible_user }}: /tmp/{{ inventory_hostname }};
- name: Compress logs to tar.gz
shell: >
chdir=/tmp
tar czf {{ inventory_hostname }}.tar.gz {{ inventory_hostname }};
when: artcl_tar_gz|bool
- name: gzip logs individually and tar them
shell: >
chdir=/tmp
gzip -r ./{{ inventory_hostname }};
tar cf {{ inventory_hostname }}.tar {{ inventory_hostname }};
when: artcl_gzip_only|bool
- name: Fetch log archive (tar.gz)
fetch:
src: "/tmp/{{ inventory_hostname }}.tar.gz"
dest: "{{ artcl_collect_dir }}/{{ inventory_hostname }}.tar.gz"
flat: true
validate_checksum: false
when: artcl_tar_gz|bool
- name: Fetch log archive (tar)
fetch:
src: "/tmp/{{ inventory_hostname }}.tar"
dest: "{{ artcl_collect_dir }}/{{ inventory_hostname }}.tar"
flat: true
validate_checksum: false
when: artcl_gzip_only|bool
- name: Delete temporary log directory after collection
file:
path: "/tmp/{{ inventory_hostname }}"
state: absent
ignore_errors: true
- delegate_to: localhost
when: artcl_gzip_only|bool
block:
- name: Extract the logs
shell: >
chdir={{ artcl_collect_dir }}
tar xf {{ inventory_hostname }}.tar;
- name: delete the tar file after extraction
file:
path: "{{ artcl_collect_dir }}/{{ inventory_hostname }}.tar"
state: absent

View File

@ -1,44 +0,0 @@
---
- name: Ensure required python packages are installed
pip:
requirements: "{{ local_working_dir }}/usr/local/share/ansible/roles/collect-logs/doc-requirements.txt"
- name: Unarchive shell scripts
shell: >
gunzip "{{ artcl_collect_dir }}/undercloud/home/{{ undercloud_user }}/{{ item }}.sh.gz";
with_items: "{{ artcl_create_docs_payload.included_deployment_scripts }}"
ignore_errors: true
when: artcl_gzip_only|bool
- name: Generate rST docs from scripts and move to Sphinx src dir
shell: >
awk -f "{{ local_working_dir }}/usr/local/share/ansible/roles/collect-logs/scripts/doc_extrapolation.awk" \
"{{ artcl_collect_dir }}/undercloud/home/{{ undercloud_user }}/{{ item }}.sh" > \
"{{ artcl_docs_source_dir }}/{{ item }}.rst"
with_items: "{{ artcl_create_docs_payload.included_deployment_scripts }}"
ignore_errors: true
- name: Fetch static rST docs to include in output docs
shell: >
cp "{{ artcl_docs_source_dir }}/../static/{{ item }}.rst" "{{ artcl_docs_source_dir }}"
with_items: "{{ artcl_create_docs_payload.included_static_docs }}"
ignore_errors: true
- name: Generate fresh index.rst for Sphinx
template:
src: index.rst.j2
dest: "{{ artcl_docs_source_dir }}/index.rst"
force: true
- name: Ensure docs dir exists
file:
path: "{{ artcl_collect_dir }}/docs"
state: directory
- name: Build docs with Sphinx
shell: >
set -o pipefail &&
sphinx-build -b html "{{ artcl_docs_source_dir }}" "{{ artcl_docs_build_dir }}"
2>&1 {{ timestamper_cmd }} > {{ artcl_collect_dir }}/docs/sphinx_build.log
ignore_errors: true

View File

@ -1,31 +0,0 @@
---
- name: gather facts used by role
setup:
gather_subset: "!min,pkg_mgr"
when: ansible_pkg_mgr is not defined
- name: Collect logs
include: collect.yml
when: artcl_collect|bool
- name: Generate docs
include: create-docs.yml
when:
- artcl_gen_docs|bool
- not artcl_collect|bool
- name: Publish logs
include: publish.yml
when:
- artcl_publish|bool
- not artcl_collect|bool
- name: Verify Sphinx build
shell:
grep -q "{{ item }}" "{{ artcl_collect_dir }}/docs/build/index.html"
with_items: "{{ artcl_create_docs_payload.table_of_contents }}"
changed_when: false
when:
- artcl_gen_docs|bool
- artcl_verify_sphinx_build|bool
- not artcl_collect|bool

View File

@ -1,170 +0,0 @@
---
# collection dir could be either a dir or a link
# file module cannot be used here, because it changes link to dir
# when called with state: directory
- name: Ensure the collection directory exists
shell: |
if [[ ! -d "{{ artcl_collect_dir }}" && ! -h "{{ artcl_collect_dir }}" ]]; then
mkdir -p "{{ artcl_collect_dir }}"
fi
- name: fetch and gzip the console log
shell: >
set -o pipefail &&
curl -k "{{ lookup('env', 'BUILD_URL') }}/timestamps/?time=yyyy-MM-dd%20HH:mm:ss.SSS%20|&appendLog&locale=en_GB"
| gzip > {{ artcl_collect_dir }}/console.txt.gz
when: lookup('env', 'BUILD_URL') != ""
- when: ara_generate_html|bool
block:
- name: Generate and retrieve the ARA static playbook report
shell: >
{{ local_working_dir }}/bin/ara generate html {{ local_working_dir }}/ara_oooq;
{{ local_working_dir }}/bin/ara task list --all -f json > {{ artcl_collect_dir }}/ara.json;
cp -r {{ local_working_dir }}/ara_oooq {{ artcl_collect_dir }}/;
{% if artcl_gzip_only|bool %}gzip --best --recursive {{ artcl_collect_dir }}/ara_oooq;{% endif %}
ignore_errors: true
- name: Generate and retrieve the root ARA static playbook report
become: true
shell: >
{{ local_working_dir }}/bin/ara generate html {{ local_working_dir }}/ara_oooq_root;
{{ local_working_dir }}/bin/ara task list --all -f json > {{ artcl_collect_dir }}/ara.oooq.root.json;
cp -r {{ local_working_dir }}/ara_oooq_root {{ artcl_collect_dir }}/;
{% if artcl_gzip_only|bool %}gzip --best --recursive {{ artcl_collect_dir }}/ara_oooq_root;{% endif %}
ignore_errors: true
- name: Generate and retrieve the ARA static playbook report for OC deploy
become: true
shell: >
{{ local_working_dir }}/bin/ara generate html {{ local_working_dir }}/ara_oooq_oc;
{{ local_working_dir }}/bin/ara task list --all -f json > {{ artcl_collect_dir }}/ara.oooq.oc.json;
cp -r {{ local_working_dir }}/ara_oooq_oc {{ artcl_collect_dir }}/;
{% if artcl_gzip_only|bool %}gzip --best --recursive {{ artcl_collect_dir }}/ara_oooq_oc;{% endif %}
ignore_errors: true
environment:
ARA_DATABASE: 'sqlite:///{{ ara_overcloud_db_path }}'
- name: Copy ara files to ara-report directories
shell: |
mkdir -p {{ artcl_collect_dir }}/{{ item.dir }}/ara-report;
cp {{ item.file }} {{ artcl_collect_dir }}/{{ item.dir }}/ara-report/ansible.sqlite;
loop:
- dir: ara_oooq
file: "{{ local_working_dir }}/ara.sqlite"
- dir: ara_oooq_overcloud
file: "{{ ara_overcloud_db_path }}"
ignore_errors: true
when: not ara_generate_html|bool
- include: ara_graphite.yml
when: ara_graphite_server is defined
ignore_errors: true
- include: ara_influxdb.yml
when: influxdb_url is defined or influxdb_create_data_file|bool
ignore_errors: true
- name: fetch stackviz results to the root of the collect_dir
shell: >
if [ -d {{ artcl_collect_dir }}/undercloud/var/log/extra/stackviz/data ]; then
cp -r {{ artcl_collect_dir }}/undercloud/var/log/extra/stackviz {{ artcl_collect_dir }};
gunzip -fr {{ artcl_collect_dir }}/stackviz;
fi;
- name: fetch stackviz results to the root of the collect_dir for os_tempest
shell: >
if [ -d {{ artcl_collect_dir }}/undercloud/var/log/tempest/stackviz/data ]; then
cp -r {{ artcl_collect_dir }}/undercloud/var/log/tempest/stackviz {{ artcl_collect_dir }};
gunzip -fr {{ artcl_collect_dir }}/stackviz;
fi;
when: use_os_tempest is defined
- name: Copy tempest results to the root of the collect_dir
shell: >
cp {{ artcl_collect_dir }}/undercloud/home/stack/tempest/tempest.{xml,html}{,.gz} {{ artcl_collect_dir }} || true;
gunzip {{ artcl_collect_dir }}/tempest.{xml,html}.gz || true;
- name: Gunzip tempest results to the root of the collect_dir for os_tempest
shell: >
gunzip -c {{ artcl_collect_dir }}/undercloud/var/log/tempest/stestr_results.html.gz > {{ artcl_collect_dir }}/stestr_results.html || true
when: use_os_tempest is defined
- name: Copy testrepository.subunit file to the root of collect_dir for os_tempest
shell: >
cp {{ artcl_collect_dir }}/undercloud/var/log/tempest/testrepository.subunit.gz {{ artcl_collect_dir }}/testrepository.subunit.gz || true
when: use_os_tempest is defined
- name: Fetch .sh and .log files from local working directory on localhost
shell: >
cp {{ item }} {{ artcl_collect_dir }}/
with_items:
- "{{ local_working_dir }}/*.sh"
- "{{ local_working_dir }}/*.log"
ignore_errors: "yes"
- name: Rename compressed text based files to end with txt.gz extension
shell: >
set -o pipefail &&
find {{ artcl_collect_dir }}/ -type f |
awk 'function rename(orig)
{ new=orig; sub(/\.gz$/, ".txt.gz", new); system("mv " orig " " new) }
/\.(conf|ini|json|sh|log|yaml|yml|repo|cfg|j2|py)\.gz$/ { rename($0) }
/(\/var\/log\/|\/etc\/)[^ \/\.]+\.gz$/ { rename($0) }';
when: artcl_txt_rename|bool
- name: Create the reproducer script
include_role:
name: create-reproducer-script
when: ansible_env.TOCI_JOBTYPE is defined
- name: Create the zuul-based reproducer script if we are running on zuul
include_role:
name: create-zuul-based-reproducer
when: zuul is defined
- name: upload to the artifact server using pubkey auth
shell: rsync -av --quiet -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" {{ artcl_collect_dir }}/ {{ artcl_rsync_path }}/{{ lookup('env', 'BUILD_TAG') }}
async: "{{ artcl_publish_timeout }}"
poll: 15
retries: 5
delay: 60
when: artcl_use_rsync|bool and not artcl_rsync_use_daemon|bool
- name: upload to the artifact server using password auth
environment:
RSYNC_PASSWORD: "{{ lookup('env', 'RSYNC_PASSWORD') }}"
shell: rsync -av --quiet {{ artcl_collect_dir }}/ {{ artcl_rsync_path }}/{{ lookup('env', 'BUILD_TAG') }}
async: "{{ artcl_publish_timeout }}"
poll: 15
retries: 5
delay: 60
when: artcl_use_rsync|bool and artcl_rsync_use_daemon|bool
- name: upload to swift based artifact server
shell: swift upload --quiet --header "X-Delete-After:{{ artcl_swift_delete_after }}" {{ artcl_swift_container }}/{{ lookup('env', 'BUILD_TAG') }} *
args:
chdir: "{{ artcl_collect_dir }}"
changed_when: true
environment:
OS_AUTH_URL: "{{ artcl_swift_auth_url }}"
OS_USERNAME: "{{ artcl_swift_username }}"
OS_PASSWORD: "{{ artcl_swift_password }}"
OS_TENANT_NAME: "{{ artcl_swift_tenant_name }}"
async: "{{ artcl_publish_timeout }}"
poll: 15
when: artcl_use_swift|bool
- name: use zuul_swift_upload.py to publish the files
shell: "{{ artcl_zuul_swift_upload_path }}/zuul_swift_upload.py --name {{ artcl_swift_container }} --delete-after {{ artcl_swift_delete_after }} {{ artcl_collect_dir }}"
async: "{{ artcl_publish_timeout }}"
poll: 15
when: artcl_use_zuul_swift_upload|bool
- name: create the artifact location redirect file
template:
src: full_logs.html.j2
dest: "{{ artcl_collect_dir }}/full_logs.html"
when: artcl_env != 'tripleo-ci'

View File

@ -1,14 +0,0 @@
<!DOCTYPE HTML>
<html lang="en-US">
<head>
<meta charset="UTF-8">
<meta http-equiv="refresh" content="1; url={{ artcl_full_artifact_url }}">
<script type="text/javascript">
window.location.href = "{{ artcl_full_artifact_url }}"
</script>
<title>Redirection to logs</title>
</head>
<body>
If you are not redirected automatically, follow the <a href='{{ artcl_full_artifact_url }}'>link to the logs</a>.
</body>
</html>

View File

@ -1,59 +0,0 @@
#!/bin/bash
echo "This script will be deprecated in favor of collect-logs files"
set -x
export PATH=$PATH:/sbin
ps -eaufxZ
ls -Z /var/run/
df -h
uptime
sudo netstat -lpn
sudo iptables-save
sudo ovs-vsctl show
ip addr
ip route
ip -6 route
free -h
top -n 1 -b -o RES
rpm -qa | sort
{{ ansible_pkg_mgr }} repolist -v
sudo os-collect-config --print
which pcs &> /dev/null && sudo pcs status --full
which pcs &> /dev/null && sudo pcs constraint show --full
which pcs &> /dev/null && sudo pcs stonith show --full
which crm_verify &> /dev/null && sudo crm_verify -L -VVVVVV
which ceph &> /dev/null && sudo ceph status
sudo facter
find ~jenkins -iname tripleo-overcloud-passwords -execdir cat '{}' ';'
sudo systemctl list-units --full --all
if [ -e {{ working_dir }}/stackrc ] ; then
bash <<-EOF &>/var/log/postci.txt
source {{ working_dir }}/stackrc
openstack <<-EOC
server list -f yaml
workflow list -f yaml
workflow execution list -f yaml
EOC
# If there's no overcloud then there's no point in continuing
openstack stack list | grep -q overcloud
if [ $? -eq 1 ]; then
echo 'No active overcloud found'
exit 0
fi
openstack <<-EOC
stack show overcloud -f yaml
stack resource list -n5 overcloud -f yaml
stack event list overcloud -f yaml
EOC
# --nested-depth 2 seems to get us a reasonable list of resources without
# taking an excessive amount of time
openstack stack event list --nested-depth 2 -f json overcloud | /tmp/heat-deploy-times.py | tee /var/log/heat-deploy-times.log || echo 'Failed to process resource deployment times. This is expected for stable/liberty.'
# useful to see what failed when puppet fails
# NOTE(bnemec): openstack stack failures list only exists in Newton and above.
# On older releases we still need to manually query the deployments.
openstack stack failures list --long overcloud || for failed_deployment in \$(heat resource-list --nested-depth 5 overcloud | grep FAILED | grep 'StructuredDeployment ' | cut -d '|' -f3); do heat deployment-show \$failed_deployment; done;
# NOTE(emilien) "openstack overcloud failures" was introduced in Rocky
openstack overcloud failures || true
EOF
fi

View File

@ -1,22 +0,0 @@
Welcome to |project| Documentation:
===================================
.. note:: This documentation was generated by the collect-logs_ role. If you
find any problems, please note the TripleO-Quickstart invocation that was
used to deploy the environment (if available) and create a bug on Launchpad
for tripleo-quickstart_.
.. _collect-logs: https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs#documentation-generation-related
.. _tripleo-quickstart: https://bugs.launchpad.net/tripleo-quickstart/+filebug
--------
Contents
--------
.. toctree::
:maxdepth: 2
:numbered:
{% for doc in artcl_create_docs_payload.table_of_contents %}
{{ doc }}
{% endfor %}

View File

@ -1,20 +0,0 @@
echo "+ ip -o link" > {{ odl_extra_info_log }};
ip -o link &>> {{ odl_extra_info_log }};
echo "+ ip -o addr" >> {{ odl_extra_info_log }};
ip -o addr &>> {{ odl_extra_info_log }};
echo "+ arp -an" >> {{ odl_extra_info_log }};
arp -an &>> {{ odl_extra_info_log }};
echo "+ ip netns list" >> {{ odl_extra_info_log }};
ip netns list &>> {{ odl_extra_info_log }};
echo "+ ovs-ofctl -OOpenFlow13 show br-int" >> {{ odl_extra_info_log }};
ovs-ofctl -OOpenFlow13 show br-int &>> {{ odl_extra_info_log }};
echo "+ ovs-ofctl -OOpenFlow13 dump-flows br-int" >> {{ odl_extra_info_log }};
ovs-ofctl -OOpenFlow13 dump-flows br-int &>> {{ odl_extra_info_log }};
echo "+ ovs-ofctl -OOpenFlow13 dump-groups br-int" >> {{ odl_extra_info_log }};
ovs-ofctl -OOpenFlow13 dump-groups br-int &>> {{ odl_extra_info_log }};
echo "+ ovs-ofctl -OOpenFlow13 dump-group-stats br-int" >> {{ odl_extra_info_log }};
ovs-ofctl -OOpenFlow13 dump-group-stats br-int &>> {{ odl_extra_info_log }};
echo "+ ovs-vsctl list Open_vSwitch" >> {{ odl_extra_info_log }};
ovs-vsctl list Open_vSwitch &>> {{ odl_extra_info_log }};
echo "+ ovs-vsctl show" >> {{ odl_extra_info_log }};
ovs-vsctl show &>> {{ odl_extra_info_log }};

View File

@ -1,29 +0,0 @@
# Exclude these paths to speed up the filtering
# These need to be removed/made more specific if we decide to collect
# anything under these paths
- /dev
- /proc
- /run
- /sys
# Exclude paths
{% for exclude_path in artcl_exclude_list|default([]) %}
- {{ exclude_path }}
{% endfor %}
# Include all subdirectories in the check
# See "INCLUDE/EXCLUDE PATTERN RULES" section about --recursive
# in the rsync man page
+ */
# Include paths
{% for include_path in collect_list|default([]) %}
{% if include_path|list|last == "/" %}
+ {{ include_path }}**
{% else %}
+ {{ include_path }}
{% endif %}
{% endfor %}
# Exclude everything else
- *
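# Illustrative rendered result (assuming collect_list contains "/var/log/" and
# "/home/stack/*.log", with no extra excludes):
#   - /dev
#   - /proc
#   - /run
#   - /sys
#   + */
#   + /var/log/**
#   + /home/stack/*.log
#   - *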