Our .zuul.yaml file has grown quite large. Try to make this more
manageable by splitting it into a zuul.d/ directory with jobs organized by
function.
Change-Id: I0739eb1e2bc64dcacebf92e25503f67302f7c882
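Zuul loads every *.yaml file under zuul.d/ as if the contents were concatenated, so the split can be purely mechanical. A possible layout (file names here are illustrative, not taken from the change itself):

```
zuul.d/
├── project.yaml          # the project stanza and pipeline attachments
├── infra-prod-jobs.yaml  # production deployment jobs
├── system-config-run.yaml
└── docker-images.yaml    # image build/upload jobs
```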
We want to replace the current executors with focal executors.
Make sure zuul-executor can run there.
Kubic is apparently the new source for libcontainers stuff:
https://podman.io/getting-started/installation.html
Use only timesyncd on focal
ntp and timesyncd have a hard conflict with each other. Our test
images install ntp. Remove it and just stay with timesyncd.
Change-Id: I0126f7c77d92deb91711f38a19384a9319955cf5
We have two standalone roles, puppet and cloud-launcher, but we
currently install them with galaxy so depends-on patches don't
work. We also install them every time we run anything, even if
we don't need them for the playbook in question.
Add two roles, one to install a set of ansible roles needed by
the host in question, and the other to encapsulate the sequence
of running puppet, which now includes installing the puppet
role, installing puppet, disabling the puppet agent and then
running puppet.
As a followup, we'll do the same thing with the puppet modules,
so that we aren't cloning and rsyncing ALL of the puppet modules
all the time no matter what.
Change-Id: I69a2e99e869ee39a3da573af421b18ad93056d5b
This is running on a cron right now, let's run it from zuul.
This moves the contents from clouds_layouts into the hostvars
for bridge and changes the playbook to run against bridge
instead of localhost. This lets us not pass in the variables
on the CLI, which we don't have support for in the apply job.
It is also made possible by the absence of all-clouds.yaml.
Change-Id: If0d2aacc49b599a0b51bf7d84f8367f56ed2d003
We have a mirror job for arm64, but it runs infrequently and can get
broken by base changes (as described inline, we can't currently have a
mixed environment). Let's run the base playbook too to give more
visibility.
Change-Id: I557bdadf7fe09463b4bb51130df6b88737bb9a46
Rather than running a local zookeeper, just run a real zookeeper.
Also, get rid of nb01-test and just use nb04 - what could possibly
go wrong?
Dynamically write zookeeper host information to nodepool.yaml
So that we can run an actual zk using the new zk role on hosts in
ansible inventory, we need to write out the ip addresses of the
hosts that we build in zuul. This means having the info baked into
the file in project-config isn't going to work.
We can do this in prod too, it shouldn't hurt anything.
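One way to do that is a template task that pulls the addresses out of the inventory at run time; this is a hypothetical sketch (the group name, fact name, and paths are assumptions, not from the change):

```yaml
# Hypothetical sketch: collect the addresses of the zookeeper hosts from
# the ansible inventory and template them into nodepool.yaml, instead of
# hard-coding them in project-config.
- name: Write zookeeper servers into nodepool.yaml
  vars:
    zk_hosts: "{{ groups['zookeeper']
                  | map('extract', hostvars, 'ansible_host')
                  | list }}"
  template:
    src: nodepool.yaml.j2
    dest: /etc/nodepool/nodepool.yaml
```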
Increase timeout for run-service-nodepool
We need to fix the playbook, but we'll do that after we get the
puppet gone.
Change-Id: Ib01d461ae2c5cec3c31ec5105a41b1a99ff9d84a
This job compiles openafs with dkms among other things that cause it to run
over the default half hour timeout occasionally. Bump the timeout to an
hour to deal with that.
Change-Id: I8a56a7f42ce2ee8331befb45aceb1d511a33d9e6
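In Zuul job configuration the timeout is expressed in seconds, so the bump looks roughly like this (the job name below is illustrative):

```yaml
# Hypothetical job stanza; Zuul's "timeout" attribute is in seconds,
# so an hour is 3600 (the default half hour is 1800).
- job:
    name: system-config-build-openafs  # illustrative name
    timeout: 3600
```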
This adds a necessary newline, removes port numbers, and sets the
executor ssh key to the correct path.
Change-Id: I6b4afa876b6cd7d8f87cc35bc51b4e9d6e31ee2b
When we install packages on ubuntu, we should use their actual
package names rather than incorrect or otherwise fictional
package names.
Also, fix the hostname in the test job - because when we don't
do that, we don't run all of the roles, and thus we don't
catch these things.
Change-Id: I18e676ef0fe343513db4c8ad7e340ee45092c0a3
Zuul is publishing lovely container images, so we should
go ahead and start using them.
We can't use containers for zuul-executor because of the
docker->bubblewrap->AFS issue, so install from pip there.
Don't start any of the containers by default, which should
let us safely roll this out and then do a rolling restart.
For things (like web or mergers) where it's safe to do so,
a followup change will swap the flag.
Change-Id: I37dcce3a67477ad3b2c36f2fd3657af18bc25c40
We run puppet with ansible now pretty much all the time. It's not
helpful for the puppet output to go to syslog on the remote host.
What's more helpful is for it to come back to the stdout in the
ansible playbook so that we can see it.
Also turn off ansi color from the output.
Depends-On: https://review.opendev.org/721732
Change-Id: I604081d5400bd53b8dda5a3a7685323c1443991b
Extract eavesdrop into its own service playbook and
puppet manifest. While doing that, stop using jenkinsuser
on eavesdrop in favor of zuul-user.
Add the ability to override the keys for the zuul user.
Remove openstack_project::server, it doesn't do anything.
Containerize and ansiblize accessbot. The structure of
how we're doing it in puppet makes it hard to actually
run the puppet in the gate. Run the script in its own
playbook so that we can avoid running it in the gate.
Change-Id: I53cb63ffa4ae50575d4fa37b24323ad13ec1bac3
Make a service playbook, manifest and jobs for codesearch.
Remove openstack_project::server - it doesn't do anything.
Change-Id: I44c140de4ae0b283940f8e23e8c47af983934471
These use legacy-base, which sucks, but what sucks even more is
that they are in openstack-zuul-jobs, which makes them extra
awkward to try to adjust.
Change-Id: I87b3d56de41f0ba5658c1240ddfc7ecf1c3c43af
This doesn't actually do anything useful any more, but it spends
a lot of time not doing it.
Basically, this is only testing the things in
modules/openstack_project/spec/acceptance/basic_spec.rb, which
are things we install and test in ansible now.
There are related jobs, puppet-beaker-rspec-puppet-4-infra,
which are run on puppet- repos and run their rspec tests, but
that won't be affected by this.
Change-Id: I21b01d360b50dba10673c2986e8a2868b8747522
The jobs which use install-docker and pip3 should be triggered
by changes to install-docker or pip3.
Change-Id: Ia6ec8da72fee38377760cb27dd7df26fa169760b
Zuul uses an re.match() check on file list regexes. This means that the
leading ^ is redundant, as is a trailing .*
Attempt to make this clearer by dropping those leading and trailing
regex operators to be consistent across the file. This makes the rules
easier to read and should make them easier to reason about.
Change-Id: Id4cd17d816c9af023a655bdadeedb9421e51cdca
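The redundancy is easy to demonstrate: re.match() already anchors at the start of the string, and it only needs to match a prefix, so neither the leading ^ nor the trailing .* changes the outcome. A small sketch (the path is taken from the install-docker example above):

```python
import re

# re.match() anchors at the start of the string, so a leading "^" adds
# nothing; and because match() only requires a prefix match, a trailing
# ".*" adds nothing either.
pattern_with_ops = r"^playbooks/roles/install-docker/.*"
pattern_plain = r"playbooks/roles/install-docker/"

path = "playbooks/roles/install-docker/tasks/main.yaml"
print(bool(re.match(pattern_with_ops, path)))  # True
print(bool(re.match(pattern_plain, path)))     # True: same result, simpler rule
print(bool(re.match(pattern_plain, "doc/source/index.rst")))  # False
```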