Commit Graph

26 Commits

Author SHA1 Message Date
Jeremy Stanley f18e06e011 Farewell limestone
The mirror in our Limestone Networks donor environment is now
unreachable, but we ceased using this region years ago due to
persistent networking trouble and the admin hasn't been around for
roughly as long, so it's probably time to go ahead and say goodbye
to it.

Change-Id: Ibad440a3e9e5c210c70c14a34bcfec1fb24e07ce
2023-02-13 23:54:59 +00:00
Ian Wienand 9e2d9f6aef bridge: switch OSC from container to local install
Currently the "openstack" command on bridge doesn't work, because we need
the cinder client pinned to an older version for RAX support.  The
upstream container uses the latest versions of everything and it fails
to parse the "volume_api_version: 2" pin for RAX in the config file.

In general, the version of openstackclient we can most reliably count
on to work is the one from the launch-node virtualenv.  It also
means we have just one place to manage a broadly-compatible version,
instead of trying to manage versions in separate containers, etc.

This converts the /usr/local/bin/openstack command from calling into
the container, to calling into the launch venv.

Change-Id: I604d5c17268a8219d51d432ba21feeb2e752a693
2022-11-25 09:37:40 +00:00
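The converted command amounts to a thin wrapper script dispatching into the venv; a minimal sketch, assuming a venv prefix like /usr/launch-env (not the actual production path, and /tmp standing in for /usr/local/bin):

```shell
# Hypothetical /usr/local/bin/openstack wrapper: instead of exec'ing
# into the container, dispatch into the launch-node venv's client.
cat > /tmp/openstack-wrapper <<'EOF'
#!/bin/bash
exec /usr/launch-env/bin/openstack "$@"
EOF
chmod +x /tmp/openstack-wrapper
```

Because the wrapper uses `exec "$@"`, all arguments and the exit status pass straight through to the venv's client.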
Ian Wienand ed7083ed88
launch-node : make into a small package
This turns launch-node into an installable package.  This is not meant
for distribution, we just encapsulate the installation in a virtualenv
on the bastion host.  Small updates to documentation and simple
testing are added (also remove some spaces to make test_bridge.py
consistent).

Change-Id: Ibcb4774114d73600753ca155ed277d775964bc79
2022-11-21 16:29:22 +11:00
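The encapsulation described above can be sketched as follows; the venv path is an assumption for illustration (the real bastion location is not shown here), and the launch-node package itself would then be pip-installed into the venv:

```shell
# Create an isolated virtualenv to hold the launch-node package;
# /tmp/launch-env stands in for the real path on the bastion host.
python3 -m venv /tmp/launch-env
# The package install step would be something like:
#   /tmp/launch-env/bin/pip install <path-to-launch-node>
/tmp/launch-env/bin/python -c 'import sys; print(sys.prefix)'
```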
Zuul 83df65a252 Merge "bastion host: add global known_hosts values" 2022-11-20 23:48:27 +00:00
Ian Wienand d03f4b1f22
bastion host: add global known_hosts values
Write out the ssh host keys from the inventory as part of the bastion
host bootstrap.

Change-Id: I0823c09165c445e9178c75ac5083f1988e8d3055
2022-11-19 11:18:07 +11:00
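The bootstrap step might look like the following Ansible task; the `host_keys` variable name and the single-file layout are assumptions for illustration:

```yaml
# Hypothetical task: render a global /etc/ssh/ssh_known_hosts from
# per-host keys recorded in the inventory.
- name: Write global known_hosts from inventory
  copy:
    dest: /etc/ssh/ssh_known_hosts
    content: |
      {% for host in groups['all'] %}
      {% for key in hostvars[host]['host_keys'] | default([]) %}
      {{ host }} {{ key }}
      {% endfor %}
      {% endfor %}
```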
Ian Wienand 9c76ebf4af
Update a few s/bridge01/bridge99 references
These were forgotten in I137ab824b9a09ccb067b8d5f0bb2896192291883
when we switched the testing bridge host to bridge99.

Change-Id: I742965c61ed00be05f1daea2d6110413cff99e2a
2022-11-11 15:05:39 +11:00
Ian Wienand 102534fdb8
Switch bridge to bridge01.opendev.org
This switches the bridge name to bridge01.opendev.org.

The testing path is updated along with some final references still in
testinfra.

The production jobs are updated in add-bastion-host, and will have the
correct setup on the new host after the dependent change.

Everything else is abstracted behind the "bastion" group; the entry is
changed here which will make all the relevant playbooks run on the new
host.

Depends-On: https://review.opendev.org/c/opendev/base-jobs/+/862551
Change-Id: I21df81e45a57f1a4aa5bc290e9884e6dc9b4ca13
2022-10-25 16:08:10 +11:00
Ian Wienand 7e9229c86d
bootstrap-bridge: drop pip3 role, add venv
The pip3 role installs the latest upstream pip, overwriting the
packaged versions.  We would prefer to install things in
venv/virtualenvs moving forward to keep better isolation.

Unfortunately, with the passage of time, the Bionic-era packaged pip
is now so old that it can't install anything modern like Ansible.
Thus we have to squash installing Ansible into a separate venv into
this change as well.

Although the venv created by default on the Bionic host also has an
old pip, luckily we already worked around that in
I81fd268a9354685496a75e33a6f038a32b686352 which provides a create-venv
role that creates a fully updated venv for us.

To minimise other changes, this symlinks ansible/ansible-playbook into
/usr/local/bin.  On our current production bastion host this will make
a bit of a mess -- but we are looking at replacing that with a fresh
system soon.  The idea is that this new system will not be
bootstrapped with a globally installed Ansible, so we won't have
things lying around in multiple places.

Change-Id: I7551eb92bb6dc5918c367cc347f046ff562eab0c
2022-10-11 15:09:40 +11:00
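The venv-plus-symlink approach can be sketched as below, with /tmp paths standing in for the real prefixes; in production, Ansible would be pip-installed into the venv before the symlinks are made:

```shell
# Create the venv that will hold Ansible (create-venv would also
# upgrade pip inside it on Bionic), then symlink the entry points
# into a directory on PATH; /tmp stands in for /usr/local/bin.
python3 -m venv /tmp/ansible-venv
ln -sf /tmp/ansible-venv/bin/ansible /tmp/usr-local-bin-ansible
ln -sf /tmp/ansible-venv/bin/ansible-playbook /tmp/usr-local-bin-ansible-playbook
```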
David Moreau Simard fb8a5145df Update ARA
ARA's master branch now has static site generation, so we can move
away from the stable branch and get the new reports.

In the meantime ARA upstream has moved to GitHub, so this updates the
references for the -devel job.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/793530
Change-Id: I008b35562994f1205a4f66e53f93b9885a6b8754
2021-06-01 09:38:32 +10:00
Ian Wienand ccd3ac2344 Add tool to export Rackspace DNS domains to bind format
This exports Rackspace DNS domains to bind format for backup and
migration purposes.

This installs a small tool to query and export all the domains we can
see via the Rackspace DNS API.

Because we don't want to publish the backups (they're the equivalent
of a zone transfer), the tool runs on, and logs output to,
bridge.openstack.org from cron once a day.

Change-Id: I50fd33f5f3d6440a8f20d6fec63507cb883f2d56
2020-06-12 16:49:23 +10:00
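The shape of the export loop can be sketched like this; `list_domain_ids` and `export_zone` are stubs standing in for the Rackspace DNS API calls the real tool makes, and the paths are illustrative:

```shell
# Stubs standing in for the API calls: list all visible domain IDs,
# then export each one in BIND zone-file format.
list_domain_ids() { echo "1234"; }
export_zone() { printf '$ORIGIN example.org.\n@ IN SOA ns.example.org. root.example.org. 1 3600 600 86400 300\n'; }

mkdir -p /tmp/zone-backups
for id in $(list_domain_ids); do
  export_zone "$id" > "/tmp/zone-backups/$id.zone"
done
```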
Ian Wienand 0d004ea73d testinfra: pass inventory and zuul data
Create a zuul_data fixture for testinfra.

The fixture directly loads the inventory from the inventory YAML file
written out.  This lets you get easy access to the IP addresses of the
hosts.

We pass in the "zuul" variable by writing it out to a YAML file on
disk, and then passing an environment variable to this.  This is
useful for things like determining which job is running.  Additional
arbitrary data could be added to this if required.

Change-Id: I8adb7601f7eec6d48509f8f1a42840beca70120c
2020-05-20 13:41:04 +10:00
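The job-side plumbing described above can be sketched as follows; the file path and environment variable name are assumptions for illustration:

```shell
# Write the "zuul" variable out as YAML, then point an environment
# variable at it so the testinfra fixture can load it.
cat > /tmp/zuul-vars.yaml <<'EOF'
zuul:
  job: system-config-run-base
EOF
export TESTINFRA_ZUUL_DATA=/tmp/zuul-vars.yaml
```

The fixture on the testinfra side then just reads the file named by the environment variable and parses the YAML.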
Zuul 1bf78e6c1f Merge "service-bridge: skip osc/kubectl things for arm64" 2020-05-12 00:57:09 +00:00
Ian Wienand 1dd2026087 service-bridge: skip osc/kubectl things for arm64
There's no clients for arm64 at this time, skip.

Change-Id: I0783a9d2b06c76072dd2e9234a8a794ca0594204
2020-05-07 15:21:13 +10:00
Monty Taylor 1b126ef48a Run cloud_launcher from zuul
This is running on a cron right now, let's run it from zuul.

This moves the contents from clouds_layouts into the hostvars
for bridge and changes the playbook to run against bridge
instead of localhost. This lets us not pass in the variables
on the CLI, which we don't have support for in the apply job.
It is also made possible by the absence of all-clouds.yaml.

Change-Id: If0d2aacc49b599a0b51bf7d84f8367f56ed2d003
2020-04-30 12:37:38 -05:00
Zuul e3ad9e79eb Merge "Get rid of all-clouds.yaml" 2020-04-16 15:41:55 +00:00
Monty Taylor ebae022d07 Use project-config from zuul instead of direct clones
We use project-config for gerrit, gitea and nodepool config. That's
cool, because we can clone that from zuul too and make sure that each
prod run uses the contents of the patch in question.

Introduce a flag file that can be touched in /home/zuulcd that will
block zuul from running prod playbooks. By default, if the file is
there, zuul will wait for an hour before giving up.

Rename zuulcd to zuul

To better align prod and test, name the zuul user zuul.

Change-Id: I83c38c9c430218059579f3763e02d6b9f40c7b89
2020-04-15 12:29:33 -05:00
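The flag-file guard can be sketched like this; the flag path here is a stand-in (the real one lives under the zuul user's home directory), the one-hour timeout comes from the description above, and the polling interval is an assumption:

```shell
# Before running prod playbooks, wait up to an hour for the flag file
# to be removed; give up if it is still present.
FLAG=/tmp/DISABLE-ANSIBLE
waited=0
while [ -f "$FLAG" ] && [ "$waited" -lt 3600 ]; do
  sleep 10
  waited=$((waited + 10))
done
if [ -f "$FLAG" ]; then
  echo "flag still present after an hour, giving up"
else
  echo "proceeding with prod playbooks"
fi
```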
Monty Taylor 8af7b47812 Get rid of all-clouds.yaml
We had the clouds split from back when we used the openstack
dynamic inventory plugin. We don't use that anymore, so we don't
need these to be split. Any other usage we have directly references
a cloud.

Change-Id: I5d95bf910fb8e2cbca64f92c6ad4acd3aaeed1a3
2020-04-09 16:44:20 -05:00
Monty Taylor 589521fd18 Remove run_all.sh and ansible cron job
Remove the script and the cronjob on bridge that runs it.

Change-Id: I45e4d9713f3ba4760ba384d13487c6214d068800
2020-04-08 10:46:55 -05:00
Ian Wienand 82c6dec4fa Disable cloud launcher cron job during CI
This takes a similar approach to the extant ansible_cron_install_cron
variable to disable the cron job for the cloud launcher when running
under CI.

If your CI jobs happen to run when the cron job decides to fire, you
end up with a harmless but confusing failed run of the cloud launcher
(which has tried to contact real clouds) in the ARA results.

Use the "disabled" flag to ensure the cron job doesn't run.  Using
"disabled" means we can still check via testinfra that the job was
installed, however.

Convert ansible_cron_install_cron to a similar method using disabled,
document the variable in the README and add a test for the run_all.sh
script in crontab too.

Change-Id: If4911a5fa4116130c39b5a9717d610867ada7eb1
2019-07-16 15:01:55 +10:00
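The pattern amounts to always managing the cron entry but marking it disabled under CI; a sketch using Ansible's cron module, where the variable name and schedule are assumptions:

```yaml
# Hypothetical task: the entry is always installed (so testinfra can
# assert its presence) but commented out in the crontab when the
# disable variable is set by the CI job.
- name: Install cloud launcher cron job
  cron:
    name: run-cloud-launcher
    minute: '0'
    hour: '*'
    job: /usr/local/bin/run-cloud-launcher.sh
    disabled: '{{ cloud_launcher_disable_cron | default(false) }}'
```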
Clark Boylan 9342c2aa6d Add zuul user to bridge.openstack.org
We want to trigger ansible runs on bridge.o.o from zuul jobs. First
iteration of this tried to log in as root but this is not allowed by our
ssh config. That config seems reasonable, so we add a zuul user instead
which we can ssh in as then run things as root from zuul jobs. This
makes use of our existing user management system.

Change-Id: I257ebb6ffbade4eb645a08d3602a7024069e60b3
2019-03-04 14:47:51 -08:00
James E. Blair 94d404a535 Install kubectl on bridge
With a snap package.  Because apparently that's how that's done.

Change-Id: I0462cc062c2706509215158bca99e7a2ad58675a
2019-02-11 10:16:58 -08:00
James E. Blair 7610682b6f Configure .kube/config on bridge
Add the gitea k8s cluster to root's .kube/config file on bridge.

No default context is set, in order to force us to explicitly
specify a context for all commands (so that we do not inadvertently
deploy something on the wrong k8s cluster).

Change-Id: I53368c76e6f5b3ab45b1982e9a977f9ce9f08581
2019-02-06 15:43:19 -08:00
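The resulting kubeconfig might look like the sketch below: with an empty `current-context`, kubectl refuses to act until a context is named explicitly (the cluster, user, and server names here are assumptions):

```yaml
# Hypothetical .kube/config with no default context set.
apiVersion: v1
kind: Config
current-context: ""
clusters:
- name: gitea-k8s
  cluster:
    server: https://gitea-k8s.example.org:6443
contexts:
- name: gitea
  context:
    cluster: gitea-k8s
    user: admin
users:
- name: admin
  user: {}
```

Every invocation then has to spell out its target, e.g. `kubectl --context gitea get pods`.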
Ian Wienand 3bed6e0fd3
Enable ARA reports for system-config bridge CI jobs
This change takes the ARA report from the "inner" run of the base
playbooks on our bridge.o.o node and publishes it into the final log
output.  This is then displayed by the middleware.

Create a new log hierarchy with a "bridge.o.o" prefix to make it clear
the logs there are related to the test running on that node.  Move the
ansible config under there too.

Change-Id: I74122db09f0f712836a0ee820c6fac87c3c9c734
2018-12-04 17:46:47 -05:00
James E. Blair c49d5d6f2b Allow Zuul to log into bridge
Allow post-review jobs running under system-config and project-config
to ssh into bridge in order to run Ansible.

Change-Id: I841f87425349722ee69e2f4265b99b5ee0b5a2c8
2018-09-12 10:20:26 -06:00
Clark Boylan c4461e3d02 Run cloud launcher on bridge.o.o
This formerly ran on puppetmaster.openstack.org but needs to be
transitioned to bridge.openstack.org so that we properly configure new
clouds.

Depends-On: https://review.openstack.org/#/c/598404
Change-Id: I2d1067ef5176ecabb52815752407fa70b64a001b
2018-09-05 13:33:26 -07:00
James E. Blair 4477291111 Add testinfra tests for bridge
Change-Id: I4df79669c9daa3eb998ee666be6c53c957467748
2018-09-05 14:24:00 +10:00