As log storage takes care of compression, it is better to avoid
performing any gzip activity on *text* files that we want to access
with the browser.
Change-Id: I7dcd1cf569fea5e536926f7188af953c3301af0f
Since we'll be installing collections into quickstart, we need to set
the ansible collections path correctly in the toci quickstart script.
Change-Id: I39c33da8af98ef01ddf020e4cdf96403bc0edc20
Related-Blueprint: tripleo-operator-ansible
Follows the same configuration that was used in
tripleo-quickstart-extras and documented in tripleo-docs.
Change-Id: Iba8a2db92137f9f6ad28f498627eb1b87039d99f
Story: https://tree.taiga.io/project/tripleo-ci-board/task/381
Implement the standalone upgrade in CI.
This deploys an N-1 standalone deployment, upgrades the repository and
the packages, and runs the tripleo upgrade.
The upgrade coverage is limited to:
- upgrade_tasks.
- non-HA services.
This means that no upgrade orchestration is tested and that HA is not
covered.
Change-Id: Id4877152ca6c233e193593995dd13890e17a535b
Depends-On: I2a4ffd8ae26e0965028422c649243a65fef79e65
https://review.openstack.org/#/c/596432 added the vexxhost
provider; since then, jobs running under the "rdo-cloud"
provider have been broken. This patch adds a regex match with
"rdo-cloud"* to handle both rdo-cloud and rdo-cloud-tripleo.
Closes-Bug: #1803133
Change-Id: I9a7ddef4e4f57dc599b9030a0afbe116b1075b68
Fixes problem where log collection was using config files from
old location after we already switched deployment to use new location.
This was not observed because the file content was the same but
once we attempted to remove the old files we discovered the issue.
Prepares for removal of config files from their old location.
Change-Id: I70aeb3724d6d162aa6d8b00024f3ce1a74dbf79f
Story: https://tree.taiga.io/project/tripleo-ci-board/task/7
Add vexxhost provider data for tripleo-ci jobs
Depends-On: I53851edbb8bb562dc4194fb99d6ade259227d2f9
Change-Id: I39efbfc94fe3650704b636e94445ad859f3ac801
Since we create the collect logs script only at the end of the playbooks
run, when a job times out, there is no collect_logs.sh script created,
and the post-run playbook doesn't actually run anything.
This change splits the function into two parts: the first part creates
the script, and its call is moved to before the playbooks run. The
second part is moved to toci_quickstart directly and runs after the
playbooks.
The second part should not be needed anymore with zuulv3 in rdo
provider.
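The split described above can be sketched roughly as follows; the
function names and paths are illustrative, not the exact toci code:

```shell
# Hypothetical sketch of the two-part split: the collect_logs.sh script
# is written up front, so a timed-out job still has something for the
# post-run playbook to execute.
LOGS_DIR=$(mktemp -d)

create_collect_logs_script() {    # part 1: called before the playbooks run
    cat > "$LOGS_DIR/collect_logs.sh" <<'EOF'
#!/bin/bash
echo "collecting logs"
EOF
    chmod +x "$LOGS_DIR/collect_logs.sh"
}

run_collect_logs() {              # part 2: called after the playbooks
    "$LOGS_DIR/collect_logs.sh"
}

create_collect_logs_script       # the script now exists even if a
                                 # playbook later times out
out=$(run_collect_logs)
echo "$out"
```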
Closes-Bug: #1784417
Change-Id: Iee897f780c93bdf847e188fc033e8be112e12a4f
This change makes the undercloud jobs use the new base jobs, nodeset,
and playbooks required to run the job with a configuration that is
closer to zuulv3.
It also adds logic to not run any vxlan networking on our own for all
the v3 runners.
Change-Id: Idf7b8a54499ef252bf7d34b3b5f16a9a34e6a83f
With dry run, playbooks are not executed.
The env variables, releases script output, and playbook
commands are written out to files for debugging.
The review also moves the collect logs lines to a
function for easier organization.
Documentation for the dry run option and variable
files is added.
Change-Id: I0bad5ee3150d94115bed018de9488590305a2b80
With dry run, playbooks are not executed.
The env variables, releases script output,
and playbook commands are written out to
files for debugging.
The review also moves the collect logs lines
to a function for easier organization.
Documentation for the dry run option and
variable files is added.
Depends-On: I4c8f389978717848e755f12856dd454d605c9137
Change-Id: Ib6ae184a07ba291d719cabb48b51d890b0485a49
We are changing the way we are passing releases to the playbooks.
This change adds the ability to use the new release handling script. It
enables it only for 4 jobs. Two should be moved to the new way of
handling releases, two are there to test the backwards compatibility
offered by the script.
If the script is used and produces output, the new method is activated
by filling a dictionary with the playbook as key and the release as
value.
If there is no output from the script, the dictionary will be empty and
the default QUICKSTART_RELEASE file will be used instead.
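The playbook-to-release dictionary can be sketched as a bash
associative array; the script name and output format below are
assumptions for illustration only:

```shell
# Hypothetical sketch: fill a playbook->release map from the release
# handling script's output; the real script name and format may differ.
declare -A RELEASES_MAP

# Stand-in for the release handling script's output, assumed to be one
# "playbook release" pair per line.
script_output="multinode.yml master
multinode-upgrade.yml queens"

if [[ -n "$script_output" ]]; then
    # Fill the dictionary: playbook as key, release as value.
    while read -r playbook release; do
        RELEASES_MAP[$playbook]=$release
    done <<< "$script_output"
fi
# An empty map means the default QUICKSTART_RELEASE file is used instead.

echo "${#RELEASES_MAP[@]} playbooks mapped"
```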
Co-Authored-By: Gabriele Cerami <gcerami@redhat.com>
Co-Authored-By: Ronelle Landy <rlandy@redhat.com>
Change-Id: I6514ba15ff4300ac4bd4fe543d3a4954aeb6f175
When running multiple playbooks, the quickstart_install log contains
only the last playbook's log. Add "-a" to "tee" to save the logs from
all playbooks.
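A minimal sketch of the behavior being fixed (file and playbook names
are illustrative):

```shell
# Sketch: each plain "tee" truncates the log file, so only the last
# playbook's output survives; "tee -a" appends instead.
logfile=$(mktemp)
for playbook in deploy.yml validate.yml; do
    echo "running $playbook" | tee -a "$logfile" > /dev/null
done
lines=$(( $(wc -l < "$logfile") ))
echo "$lines lines kept"
```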
Change-Id: Ib02190f0bfaad03383c1c3b09579711e5a5f8c27
Featureset047 includes all needed parameters
to run an undercloud upgrade using the
tripleo-upgrade role.
Change the toci_gate_test.sh logic to include the mixed_upgrade
environment file *-undercloud-*-overcloud.yml only when an overcloud
deploy is being performed. Otherwise, it would try to load it for the
undercloud upgrade too.
Closes-Bug: #1735792
Depends-On: Ib204c89ad88ae4581e556710976325c2b2723039
Change-Id: Iaed38c620f39b66b0b560ae561580dce21ea199b
Add support to run each playbook with its own custom args to override
the default ones. Required in upgrade jobs to override release vars
when running the upgrade playbook.
Change-Id: I060283f19a9738e55c2665b94eb1f746a790de1e
This reverts commit 765389d178.
Collect logs for all multinode jobs after the job is finished, and
for OVB jobs during the job run, since OVB nodes are wiped after the
job finishes.
Change-Id: I39ce8ef935c887ad650fcf5155639ed50efeae96
We run logs collection in a post playbook in upstream infra CI;
do the same in third-party CI with OVB and multinode jobs.
Collect logs after the job is finished.
Depends on https://review.rdoproject.org/r/#/c/12442/
Change-Id: I4c2ec1847a47bda6a30e5045b04b750e91b1ebda
After merging [0], it seems that an extra command to run the CI job
via ansible-playbook was left behind. An error was displayed because
the leftover command didn't include the playbook to run.
[0] https://review.openstack.org/#/c/525765/9
Change-Id: Ibb9eb78c8e4ef72da820ac64f755176356f9ab3c
Closes-Bug: #1750518
This will allow us to run multiple playbooks instead of just a
single playbook. This depends on the two changes that split
the ovb and multinode playbooks.
Depends-On: I6cc171641c8390e458eb474be3479e732eb2c985
Change-Id: I461a83a3d1b162457a2f607736ca4feb7fdd3e14
All playbooks were moved to oooq extras repository, so remove them
from CI repo.
Depends-on: Ie778d8893d0a92798dfada33260a656234d57350
Change-Id: I92880074e995768f566c193505216dbee3cce9bb
When calculating job statistics we need a few environment vars to be
available in the post playbook. Write them to a file and export them
when running the logs collection role in the post playbook.
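The write-then-export pattern can be sketched like this; the file path
and variable name are assumptions for illustration:

```shell
# Sketch: persist the variables that the statistics step needs, so the
# post playbook can source them after the original shell is gone.
STATS_ENV_FILE=$(mktemp)
JOB_NAME="tripleo-ci-example"        # assumed example variable
printf 'export JOB_NAME=%q\n' "$JOB_NAME" > "$STATS_ENV_FILE"

# Later, in the post playbook's shell:
unset JOB_NAME
source "$STATS_ENV_FILE"
echo "$JOB_NAME"
```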
Change-Id: I2719999a7f3abda43ab322a23bbe4750c7233a76
The condition fails because of a missing space before
tripleo-test-cloud-rh1. Rewrite it with correct syntax; it was breaking
logs collection in OVB jobs.
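For reference, a correctly spaced version of this kind of check (the
variable name and pattern are illustrative, not the original line):

```shell
# Sketch: "[[" is a shell word, so it must be separated from its
# operands by whitespace; "[[$var" would be parsed as a single unknown
# command and the test would fail.
CLOUD_NAME=tripleo-test-cloud-rh1
matched=no
if [[ "$CLOUD_NAME" == *rh1 ]]; then
    matched=yes
fi
echo "$matched"
```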
Change-Id: I99e4b7b7da3bde615dd0f0a6604773745d8d1b97
In some jobs, variables may not be available, so don't export them if
they are not defined. This happens in periodic jobs, which don't have
ZUUL_CHANGES defined.
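The guard can be sketched as follows; the exports file and second
variable are assumptions for illustration:

```shell
set -u
# Only write exports for variables that are actually defined; periodic
# jobs lack ZUUL_CHANGES, and referencing it bare under "set -u" would
# abort the script.
ZUUL_PROJECT="openstack/tripleo-ci"   # defined; ZUUL_CHANGES is not
exports_file=$(mktemp)
for var in ZUUL_CHANGES ZUUL_PROJECT; do
    value="${!var:-}"                 # :- default avoids unbound errors
    [[ -n "$value" ]] && echo "export $var=\"$value\"" >> "$exports_file"
done
cat "$exports_file"
```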
Change-Id: Ic406599e2838fdffb6bdb2ccf40d2169c4474771
We use some generated environment variables in the logs collection
role, but since it runs in the post playbook for multinode jobs, we
don't have them available there. Preserve them in the collect_log.sh
file.
Closes-Bug: #1743140
Change-Id: I0614d2e4e576b818bffb093327f0f8857d122810
When logs collection runs in the post playbook we need to limit it in
time, to prevent it from getting stuck and prolonging the job for
hours.
Change-Id: I6b9418a2020df74b07fbb6c6b00ada0ce1706878
Collect logs after the main job is finished, so that we'll have logs
regardless of the job result.
Do it for multinode jobs only, because OVB nodes are wiped right after
the job finishes, and collecting logs there requires a somewhat
different effort.
Depends-On: I7b7582469b01116bbe754af07c81cc698355d8c4
Change-Id: I600ade65052d28978c9d395323c8b86ed213fd38
When running this script in a terminal, it fails with the error that
_OLD_VIRTUAL_PATH / _OLD_VIRTUAL_PYTHONHOME is unbound when activating
the virtualenv. This seems to be a known issue in venv:
https://github.com/pypa/virtualenv/issues/1029
Running "set +u" before activation works around the issue, and "set -u"
is restored afterwards for the rest of the script.
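The workaround pattern looks like this; the activate stub below only
simulates the unguarded reference from the linked virtualenv issue:

```shell
#!/usr/bin/env bash
set -u   # the script otherwise runs with nounset

venv_dir=$(mktemp -d)
# Stand-in for virtualenv's activate script, which dereferences
# _OLD_VIRTUAL_PATH without a ${...:-} guard (pypa/virtualenv#1029).
cat > "$venv_dir/activate" <<'EOF'
if [ -n "$_OLD_VIRTUAL_PATH" ]; then
    PATH="$_OLD_VIRTUAL_PATH"
fi
EOF

set +u                        # relax nounset just for activation
source "$venv_dir/activate"
set -u                        # restore strict mode for the rest
status=activated
echo "$status"
```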
Change-Id: I5095118609f8d2dc46112c3761952a109ba4b93e
Currently the overcloud deploy timeout is derived from
DEVSTACK_GATE_TIMEOUT, which is always 170, and comes out equal to 80.
That is not appropriate for all jobs. Instead, calculate it dynamically
depending on how much time remains in the job for the deployment.
Pass the end time of the CI job to the ansible playbook to calculate it.
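The calculation can be sketched as follows; the variable names and the
170-minute budget taken from the text above are illustrative:

```shell
# Sketch: derive the deploy timeout from the time actually left in the
# job instead of a fixed value computed from DEVSTACK_GATE_TIMEOUT.
START_JOB_TIME=$(date +%s)
JOB_DURATION_SECS=$(( 170 * 60 ))              # the job's overall budget
END_JOB_TIME=$(( START_JOB_TIME + JOB_DURATION_SECS ))

# What a playbook would compute once it receives END_JOB_TIME:
remaining_minutes=$(( (END_JOB_TIME - $(date +%s)) / 60 ))
echo "deploy timeout: ${remaining_minutes} minutes"
```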
Partial-Bug: #1738038
Change-Id: I89b743fc16ea4c100ba21ddbe29081fa3e5479e9
This changes CI to only use quickstart.sh to bootstrap the virtualenv.
The ansible-playbook command is then run directly for the deploy and
log collection.
This is a first step in breaking the deploy into multiple
ansible-playbook calls, so that we CI the ability to run them in stages
for development purposes.
Change-Id: Ie0a3729277fb608c653e7bc2ab85781d9b815880
Since we will have several ansible-playbook calls in the future, we
need a function that calculates the remaining time for each call. This
function takes START_JOB_TIME, calculates how much time is left, and
executes the command with /usr/bin/timeout.
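A minimal sketch of such a wrapper; the function name and the job
budget are assumptions, not the exact toci implementation:

```shell
# Hypothetical sketch: wrap each ansible-playbook call so it only gets
# the time remaining in the job.
START_JOB_TIME=$(date +%s)
JOB_TIMEOUT=10800   # assumed total job budget, in seconds

run_with_timeout() {
    local elapsed remaining
    elapsed=$(( $(date +%s) - START_JOB_TIME ))
    remaining=$(( JOB_TIMEOUT - elapsed ))
    /usr/bin/timeout --preserve-status "${remaining}s" "$@"
}

result=$(run_with_timeout echo "deploy step")
echo "$result"
```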
Change-Id: Ib4766ea144baaaf1b47899c4923adf40a3be4582
Debug unbound DNS queries in the job for the investigation of
DNS failures.
Dump the unbound DNS server's cache.
Depends-On: Ia76ac9b20d6b8402060b71e11e00e515bc74077d
Related-Bug: #1730931
Change-Id: Ic8b9e33067141366655fad1b99dee1adb40dfba3