This replaces hard-coding of the host "bridge.openstack.org" with
hard-coding of the first (and only) host in the group "bastion".
The idea here is that, as much as possible, we can simply switch a
single definition to an alternative hostname for the bastion, such as
"bridge.opendev.org", when we upgrade. This is just the testing path
for now; a follow-on will modify the production path (which doesn't
really get speculatively tested).
This needs to be defined in two places:
1) We need to define this in the run jobs for Zuul to use in the
playbooks/zuul/run-*.yaml playbooks, as it sets up and collects
logs from the testing bastion host.
2) The nested Ansible run will then use the inventory defined in
inventory/service/groups.yaml.
Various other places are updated to use this abstracted group as the
bastion host.
Variables are moved into the bastion group (which only has one host --
the actual bastion host) which means we only have to update the group
mapping to the new host.
This is intended to be a no-op change; all the jobs should work the
same, but just using the new abstractions.
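As a rough sketch of the pattern (the file contents here are
illustrative, not the exact production inventory):

  # inventory sketch -- the "bastion" group has exactly one member
  bastion:
    hosts:
      bridge.openstack.org:

  # playbook sketch -- reference the group, not a hard-coded hostname
  - hosts: localhost
    tasks:
      - name: Show the bastion host
        debug:
          msg: 'The bastion is {{ groups["bastion"][0] }}'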
Change-Id: Iffb462371939989b03e5d6ac6c5df63aa7708513
As a short history diversion, at one point we tried building
diskimage-builder based images for upload to our control-plane
(instead of using upstream generic cloud images). This didn't really
work because the long-lived production servers led to leaked images,
and nodepool wasn't really meant to deal with this lifecycle.
Before this the only thing that needed credentials for the
control-plane clouds was bridge.
Id1161bca8f23129202599dba299c288a6aa29212 reworked things to have a
control-plane-clouds group which would have access to the credential
variables.
So at this point we added
zuul/templates/group_vars/control-plane-clouds.yaml.j2 with stub
variables for testing.
However, we also have the same cloud: variable with stub variables in
zuul/templates/host_vars/bridge.openstack.org.yaml.j2. This overrides
the version from control-plane-clouds because it is more specific (a
host variable). Over time this has skewed from the
control-plane-clouds definition, but I think we have not noticed
because we are no longer updating the control-plane clouds on the
non-bridge (nodepool) nodes.
This is a long way of saying remove the bridge-specific definitions,
and just keep the stub variables in the control-plane-clouds group.
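To make the precedence concrete, a sketch with illustrative stub
contents (the real templates differ):

  # zuul/templates/group_vars/control-plane-clouds.yaml.j2
  cloud:
    auth:
      username: group-stub-user

  # zuul/templates/host_vars/bridge.openstack.org.yaml.j2
  # host_vars are more specific than group_vars, so for bridge this
  # definition silently wins over -- and can skew from -- the group one
  cloud:
    auth:
      username: bridge-stub-user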
Change-Id: I6c1bfe7fdca27d6e34d9691099b0e1c6d30bb967
This adds the new inmotion cloud to the clouds.yaml files and the
cloud launcher config. This cloud runs on an OpenStack-as-a-service
platform, so we have quite a bit of freedom to make changes within its
resource limitations if necessary.
Change-Id: I2aed6dffde4a1d6e3044c4bd8df4ca60065ae1ea
Otherwise you get
BadRequest: Expecting to find domain in project - the server could
not comply with the request since it is either malformed or otherwise
incorrect. The client is assumed to be in error.
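This keystone v3 error generally means the project is not scoped to a
domain. A sketch of the relevant clouds.yaml stanza (cloud name and
values are placeholders):

  clouds:
    example-cloud:
      auth:
        auth_url: https://keystone.example.com:5000/v3
        username: ci-user
        project_name: ci
        user_domain_name: Default
        # omitting this is what produces the BadRequest above
        project_domain_name: Default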
Change-Id: If8869fe888c9f1e9c0a487405574d59dd3001b65
The Oregon State University Open Source Lab (OSUOSL;
https://osuosl.org/) has kindly donated some ARM64 resources. Add
initial cloud config.
Change-Id: I43ed7f0cb0b193db52d9908e39c04e351b3887e3
The OpenEdge cloud has been offline for five months, initially
disabled in I4e46c782a63279d9c18ff4ba2944c15b3027114b, so go ahead
and clean up lingering references. If it is restored later, this can
be reverted fairly easily.
Depends-On: https://review.opendev.org/783989
Depends-On: https://review.opendev.org/783990
Change-Id: I544895003344bc8202363993b52f978e1c07d061
This exports Rackspace DNS domains to bind format for backup and
migration purposes.
This installs a small tool to query and export all the domains we can
see via the Rackspace DNS API.
Because we don't want to publish the backups (they're the equivalent
of a zone transfer), the tool runs on, and logs its output to,
bridge.openstack.org from cron once a day.
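The shape of the cron setup in Ansible terms (task and paths are
illustrative; only the general approach is from this change):

  - name: Export Rackspace DNS domains daily
    cron:
      name: rax-dns-backup
      special_time: daily
      # hypothetical path; output stays on the bastion, not published
      job: '/usr/local/bin/rax-dns-backup >> /var/log/rax-dns-backup.log 2>&1'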
Change-Id: I50fd33f5f3d6440a8f20d6fec63507cb883f2d56
Sister change to Ia5caff34d3fafaffc459e7572a4eef6bd94422ea, removing
earlier references to the mirror server in preparation for building
and adding the new one.
Change-Id: I7d506be85326835d5e77a0c9c461f2d457b1dfd3
This is a new cloud provided via citycloud that will add resources
capable of running Airship jobs. The goal is to use this as a stepping
stone to having Airship jobs run on our generic CI resources. This cloud
will provide both generic and larger resources to support this.
Change-Id: I63fd9023bc11f1382424c8906dc306cee5b3f58d
As a follow-on to Ie37abb4fd3eb3342b66ade52ab65024c420d7264 remove the
linaro credentials that were related to the (now removed) linaro-cn1
cloud.
Change-Id: Ia1e8dd3732164708c2e9fd82509e350829c438ba
This takes a similar approach to the extant ansible_cron_install_cron
variable to disable the cron job for the cloud launcher when running
under CI.
If your CI job happens to run while the cron job fires, you end up
with a harmless but confusing failed run of the cloud launcher (which
has tried to contact real clouds) in the ARA results.
Use the "disabled" flag to ensure the cron job doesn't run. Using
"disabled" means we can still check via testinfra that the job was
installed.
Convert ansible_cron_install_cron to a similar method using disabled,
document the variable in the README, and add a test for the run_all.sh
script in the crontab too.
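A sketch of the flag in use (the variable and job names here are
illustrative):

  - name: Install cloud launcher cron job
    cron:
      name: cloud-launcher
      hour: '*/6'
      minute: '0'
      job: /opt/system-config/run_cloud_launcher.sh
      # under CI this evaluates true, so the entry is written to the
      # crontab (and visible to testinfra) but commented out
      disabled: '{{ cloud_launcher_disable_cron | default(false) }}'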
Change-Id: If4911a5fa4116130c39b5a9717d610867ada7eb1
Donnyd has kindly offered us access to fortnebula's test cloud. This
adds clouds.yaml entries to bridge and nodepool so that we can take
advantage of these resources.
Change-Id: I4ebc261c6f548aca0b3f37dc9b60ffac08029e67
The run_all cron running in test jobs is unawesome because it can
cause the inventory overrides we put in for testing to get overwritten
with the real inventory. We don't want test jobs attempting to run
against real hosts.
Change-Id: I733f66ff24b329d193799e6063953e88dd6a35b1
Add the gitea k8s cluster to root's .kube/config file on bridge.
The default context is deliberately left unset in order to force us to
explicitly specify a context for every command (so that we do not
inadvertently deploy something on the wrong k8s cluster).
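Sketched roughly (names are illustrative), with no current-context set
kubectl will not talk to any cluster unless one is named:

  # /root/.kube/config (abridged)
  apiVersion: v1
  kind: Config
  current-context: ''    # deliberately left unset
  contexts:
    - name: gitea
      context:
        cluster: gitea
        user: gitea-admin

  # so every invocation must be explicit:
  #   kubectl --context gitea get pods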
Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
This manages the clouds.yaml files in Ansible so that we can get them
updated automatically on bridge.openstack.org (which is not managed by
puppet).
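The core of this is a task along these lines (paths and names are
illustrative):

  - name: Write out clouds.yaml
    template:
      src: clouds.yaml.j2
      dest: /etc/openstack/clouds.yaml
      owner: root
      group: root
      mode: '0640'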
Co-Authored-By: James E. Blair <jeblair@redhat.com>
Depends-On: https://review.openstack.org/598378
Change-Id: I2071f2593f57024bc985e18eaf1ffbf6f3d38140