Since log storage already takes care of compression, it is better to
avoid performing any gzip activity on *text* files that we want to
access with the browser.
Change-Id: I7dcd1cf569fea5e536926f7188af953c3301af0f
- reverted previous workarounds
- ensures we run ansible-lint on our files
- bumped linters
- fixed new problems found by the linters
Change-Id: I7812fcfd17569b0c438f58bec73ab0f4b07e625c
These changes were missed by the Sysadmin's proposed
migration patch; this patch takes care of those
omissions.
Also updated the kolla review pull task to use
review.opendev.org instead of opendev.org as the
remote url, because opendev.org currently doesn't
have review refs/changes synced due to a gitea bug.
Related-Bug: #1825732
Change-Id: I30646b0c5b95f391e9ab4bd767b468280ccb3842
Uses the new location of openstack-virtual-baremetal as it was imported
to openstack organization earlier today.
Change-Id: I409c0cd172bbec4c19089b6ebe3fd7200ba2fa75
Follows the same configuration that was used on
tripleo-quickstart-extras and documented for use in tripleo-docs.
Change-Id: Iba8a2db92137f9f6ad28f498627eb1b87039d99f
Story: https://tree.taiga.io/project/tripleo-ci-board/task/381
Add vexxhost provider data for tripleo-ci jobs
Depends-On: I53851edbb8bb562dc4194fb99d6ade259227d2f9
Change-Id: I39efbfc94fe3650704b636e94445ad859f3ac801
This change gathers all node consoles and puts them into an html log
directory, divided by node name. They can be retrieved by the job with
curl so they are gathered with the job logs themselves.
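A minimal sketch of the gathering step, assuming the console text for each node has already been fetched (e.g. from nova's console-log output); the directory layout and file naming here are illustrative, not the exact paths used by the job:

```python
import os

def save_consoles(consoles, log_dir):
    """Write each node's console output under log_dir, one file per node.

    consoles: dict mapping node name -> console text (already fetched).
    Returns the list of files written, so the job can curl them later.
    """
    os.makedirs(log_dir, exist_ok=True)
    written = []
    for node, text in sorted(consoles.items()):
        # One file per node, named after the node,
        # e.g. <log_dir>/compute-0.console.txt
        path = os.path.join(log_dir, "%s.console.txt" % node)
        with open(path, "w") as f:
            f.write(text)
        written.append(path)
    return written
```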
Change-Id: I8ac885eb61dd633238bcdb4c9bc11de8974d37f6
Set variables in the extra node template according to what is available
in RDO Cloud, such as the ci.* flavors and the baremetal image available
on the cloud.
Change-Id: Id730f9c83b04ad9bc481fcad9fe01b407dc24d65
Closes-Bug: #1745064
When we updated the client code on the te-broker to support some
changes that required newer clients we broke the instance liveness
check thanks to a backwards incompatibility in novaclient. Since
all future te-brokers will be using the newer novaclient this change
just updates the Client creation call to use the new syntax.
Change-Id: Ib3331ee21c533f028007acae355528d9c5edbf76
There are situations where we may need to deploy additional
undercloud-like nodes in a test environment. Support for this has
been added to OVB and this commit wires it into the te-broker.
Change-Id: I84bfba3ee67cd5564ad0a4372c424a2622a97e6f
Previously OVB was configured using a monolithic environment file
that had to contain all of the parameters and resource_registry
entries needed for the stack. Recently support was added for a more
TripleO-like system of sample environment files, which makes the
configuration of OVB much cleaner. While the old method will be
supported for a period of time, we should go ahead and switch to the
new method because the old one is now deprecated.
The role files unfortunately do not support this sort of configuration
yet, but while working on this change I remembered that roles
inherit their configuration from the parent stack so we don't
actually need to duplicate the network-isolation resource_registry
entries in the role file. This simplifies the configuration quite a
bit too.
Change-Id: Iab6db834d587081f13c35b5270f15187b0b5fb3b
With network isolation disabled the public network will not be
attached to the undercloud instance, which means there will be no
port on that network to change the port-security setting on.
Change-Id: I8938b91c5c2ef80d02e6d05a82fd7acd1f926aaf
Adding settings to make tripleo-ci OVB jobs work in RDO Cloud.
- add and conditionally load infrastructure servers IP settings
- disable network sharing since we are not admins in RDO Cloud
- modify flavor settings to match RDO Cloud requirements
- add additional ovb environment file for RDO Cloud settings
Change-Id: I032d6e3558a44d4f90a0de092dec980e95d1f33e
These were added recently to support running on clouds that have
the neutron firewall driver enabled. While we don't currently need
them in rh1, we will in RDO cloud so we might as well start using
them.
Change-Id: Ic1764383e0b89e902590c746ee04dad74073626d
This adds support for heterogeneous OVB environments to the
te-broker. It is primarily intended for scale testing jobs, since
the normal test jobs only deploy a single compute node. We wouldn't
gain much by using a smaller flavor for that one vm and there is a
cost in complexity to setting up the environment.
Right now this will only work for jobs that deploy just control and
compute nodes. Support for a third role type for ceph or others
could be added in a similar fashion.
Change-Id: I398d13356b3c15c0c7cd448366186b7589ad93e4
OVB now has the ability to derive these from the environment, so we
don't need to put them in the env file explicitly.
Change-Id: I5a88074ceb9f1dfcc07de6d4ed33500865a2ca23
I recently pushed a change to disable caching in the bmc by default,
in the interest of avoiding developer confusion. However, for ci
we want to use caching since nobody should be messing with the
baremetal instances without going through the bmc.
Change-Id: I3ce66d8256266cfab23a7e2bceb80fb22e99beea
There is a permanent redirection from http to https in the buildlogs,
cbs and trunk repos that might create issues when the redirection fails
for some reason.
Let's use https directly.
Change-Id: If36d006d76cc712c582cde2265f41f4243b12622
Save the end of the bmc console log so we can look for issues. This
will not be visible in the job logs, but is on the te-broker so
ci-admins can debug the problem.
Change-Id: I9f2f21f9e723c5e01532b74ba2b4617a2bc046dd
Related-Bug: 1657188
We sometimes see more testenvs in existence than there are running
Jenkins jobs. The likely cause of this is that a job gets killed
in some unusual way (maybe a new patch set gets pushed), and the
gear client doesn't get a chance to signal back.
Since we can't rely on anything in the instance to handle this
scenario, let's add a check to the testenv-worker that makes sure
the Jenkins instance is still around, and if not signals the gear
client to proceed and delete the testenv.
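As an illustration of what the liveness decision might look like, here is a sketch that inspects a Jenkins build status document (the `building` and `result` fields come from the Jenkins JSON API; fetching the document from `<jenkins>/job/<name>/<number>/api/json` is left out, and a dead or missing build is what would trigger signalling the gear client):

```python
def job_is_alive(build_json):
    """Decide whether the Jenkins build that requested this testenv is
    still running, based on the build's JSON status document.

    A missing document (None) means the build is gone entirely; a
    document with "building": false means it finished or was killed
    without releasing the testenv, so the worker should clean up.
    """
    if build_json is None:
        return False
    return bool(build_json.get("building", False))
```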
Note that this replaces the previously added zuul status check,
which proved ineffective because the instance on which it was
running gets deleted before it has a chance to do anything.
Change-Id: I0270ee2ea1498247e0aeb007f4707f9502af8324
The nonha jobs don't use net-iso, so there's no need to spend the
time creating a lot of networks and ports. In addition, OVB now
has the ability to deploy a network environment that supports
basic bonding, and this change adds support for deploying that as
well. No jobs currently use bonding, but that will be added in a
follow-up patch.
Change-Id: Ifb65d962293b8b69b2a84597c29c1ffae5d9bc2c
Ib01a0b4ee76c033739ce9f7003070aa29714875e makes use of osc, but it
turns out it wasn't installed on the te-broker. This change adds
it to the system setup package list.
Note that I've already installed it manually on the existing
te-broker instance. This is just to ensure it doesn't get lost if
we have to rebuild it.
Change-Id: I34da1da71b7ab2ac5890ef4a443079113baace05
This is done in case there is a failure creating the ovb stack; this
way we can get the basic error message from the creation failure.
Change-Id: I34a6f1bb4d03d49a336d11c5df71fee0dc78dc9b
If we tell the create-env script to create an extra node (with OVB's
undercloud environment), this tells the build-nodes-json command that it
should add the details for this extra node as well. This is added in an
extra section that is ignored by ironic.
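The shape of the resulting nodes json might look like the sketch below; the `extra_nodes` key name is illustrative (the actual key comes from the build-nodes-json change), the point being simply a separate section that ironic's enrollment will not read:

```python
def build_nodes_json(bm_nodes, extra_nodes):
    """Assemble the nodes json document.

    Baremetal nodes go in the "nodes" list that ironic enrolls; extra
    (undercloud-like) nodes go in a separate section that ironic
    ignores but the job can still read for connection details.
    """
    doc = {"nodes": list(bm_nodes)}
    if extra_nodes:
        # Key name is hypothetical; any section outside "nodes" works.
        doc["extra_nodes"] = list(extra_nodes)
    return doc
```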
Depends on https://github.com/cybertron/openstack-virtual-baremetal/pull/33
Change-Id: I77589b4e9b97c9bae6d695213b1327dcc86fe8aa
This allows us to deploy and boot an additional node which we can have
ready alongside the undercloud node itself.
Change-Id: I352de761841568e2820ba334757496702980d65a
For the sake of simplicity, we delete all of the ports on the
provisioning and public networks before deleting the heat stack.
This is to ensure that the ports we attached to the undercloud vm
are removed before heat tries to delete the subnets. In some jobs
this can mean we delete 12 ports in series. Starting the deletes
in parallel should reduce the overall time it takes to delete the
ports by a fair amount since the controller is capable of working
on more than one request at a time.
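A sketch of the parallel-delete pattern, with the actual neutron call stubbed out (the real script invokes whatever deletes a port against the cloud API; injecting it here keeps the sketch self-contained):

```python
from concurrent.futures import ThreadPoolExecutor

def delete_ports(port_ids, delete_fn, workers=8):
    """Fire off one delete per port concurrently instead of in series.

    delete_fn is whatever actually removes a port (an API call in the
    real script). Waits for all deletes to finish before returning, so
    heat only starts deleting subnets once every port is gone.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # list() forces iteration so any exception from a delete is
        # re-raised here instead of being silently dropped.
        list(pool.map(delete_fn, port_ids))
```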
Note that we're doing something similar for log collection in
If65e0e460467185a5695cf214272ef35e41c3f03 and it seems to be
working well there.
Change-Id: Iec46b14a5531b287ed70b3dffb7e42cb6bb855c9
This reverts commit 0030f6c664.
Test env requests are queuing up and nothing is getting an env in a
timely manner; I think we are still trying to create envs for jobs that
have timed out. Revert this to our behaviour before last week while we
investigate the problem.
Change-Id: I79353941d838628e492b46acce6dde8ef2ec3aff
Adds a named semaphore to the testenv-worker code that will
prevent more than 5 testenvs from creating at once. This requires
the use of the posix_ipc Python module, which is added to the
te-broker puppet manifest.
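The gating pattern can be sketched as below. The real worker uses a posix_ipc named semaphore so the cap holds across separate testenv-worker processes; `threading.Semaphore` only works within one process, but it shows the same behaviour and keeps this sketch self-contained:

```python
import threading

# In the real worker this would be a posix_ipc named semaphore with an
# initial value of 5, shared by all worker processes on the te-broker.
CREATE_SEMAPHORE = threading.Semaphore(5)

def create_testenv(build_stack_fn):
    """Block until one of the 5 creation slots is free, then build.

    build_stack_fn stands in for the Heat stack creation; the context
    manager guarantees the slot is released even if creation fails.
    """
    with CREATE_SEMAPHORE:
        return build_stack_fn()
```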
Right now if someone pushes a patch series of tripleo-ci changes,
which each start 8 jobs at the moment, we can end up with 20 or 30
testenvs creating concurrently. Not surprisingly, this does not
work well and most of those testenvs will fail or timeout. Using
an approximate testenv creation time of 3 minutes (which is based
solely on my observation of how long the Heat stacks generally take),
the 5 concurrent creations allowed should process up to 30 testenvs
in the 20 minute timeout of the te-broker. Even if we do exceed
that number, the testenvs that _do_ get processed are far more
likely to succeed in a less loaded environment so our overall pass
rate will be higher than all 30+ at once. And of course we can
increase the concurrency if we find that 5 at once can be handled
easily, which would only increase the potential throughput.
The one major drawback of this approach is that Linux IPC is fairly
terrible at error-handling, so if a testenv-worker process holding
a semaphore dies in some ungraceful fashion that doesn't allow the
Python code to release the semaphore, we may have to reboot the
te-broker to clean up. I don't anticipate such a situation happening
often in the simple te-broker environment, but it's something to be
aware of.
Change-Id: Id80105a1578aa2120d2508018e53846affe254a0