We're noticing that Mailman's uwsgi queue is filling up. Before we try
to extend the queue, apply our user-agent filter to Apache to reduce
the number of requests that hit the queue in the first place.
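The filter takes roughly this shape (a sketch only; the real agent
list and rule live in our Ansible-managed Apache config):

  <IfModule mod_rewrite.c>
    RewriteEngine On
    # Deny agents matching our filter list (names here are illustrative)
    RewriteCond %{HTTP_USER_AGENT} "badbot|examplecrawler" [NC]
    RewriteRule .* - [F,L]
  </IfModule>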
Change-Id: Ib821a7fffa6239a9affcc4c6698eef2dc9378cd1
We've noticed that our uwsgi queues are filling up and that a lot of
requests are being made to robots.txt, which end up erroring with
500/503. Add a robots.txt file which allows crawling of our lists and
archives with a crawl-delay value, in hopes this will cause bots to
cache results and not fill up the queue with repetitive requests.
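Roughly what the new file will say (the exact delay value here is
illustrative):

  User-agent: *
  Allow: /
  Crawl-delay: 2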
Change-Id: I660d8d43f6b2d96663212d93ec48e67d86e9e761
Crawlers that ignore our robots.txt are triggering archive creation
so rapidly that our rootfs fills up between weekly purges; purging
once a day should hopefully mitigate further problems.
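As a sketch, the change amounts to switching the cron entry in the
Ansible role from weekly to daily, something like this (the job name
and command are placeholders, not the real ones):

  - name: Purge mailman generated archives daily
    cron:
      name: mailman-archive-purge
      special_time: daily
      job: /usr/local/bin/purge-archives.sh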
Change-Id: Ib4e56fbd666f7bf93c017739697d8443d527b8c7
Adding the information about which host we were checking for certcheck
did help in debugging. It pointed out that a specific host was at fault
(nb02 in this case, though it could change over time), and further
investigation of that host showed acme.sh was not running there at all
due to earlier failures. Rather than ending at that point, the playbook
continued to run until it built the certcheck list and then hit a fatal
error, leading to the confusion.
Add a breadcrumb comment to the Ansible role to help point this behavior
out in the future.
Change-Id: Ib607665d75eb666d19c8508346eb217783b98eb5
We don't need the Mailman 2 service deployment playbook, as we're no
longer running that service. It was simply overlooked in the earlier
mass cleanup change, and it even refers to a role that no longer exists.
Change-Id: I7e65fdf9e81858f780bef8dce15ef88823345be8
We are currently running MariaDB 10.6 for Mailman. We use the
MARIADB_AUTO_UPGRADE flag to automatically upgrade the mariadb
install to 10.11 when switching the image version over to 10.11.
This was successfully performed against several other services
already.
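For reference, the mechanism is just the new image tag plus the env
var in the docker-compose config; a minimal sketch, not our actual
file:

  services:
    mariadb:
      image: mariadb:10.11
      environment:
        MARIADB_AUTO_UPGRADE: "1"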
Change-Id: I675753df142d635eca60c15728ece2870b406134
This increases the InnoDB buffer pool size from the default of
128M to 4G. Some increase is necessary for creating large indexes,
though probably not this much; however, a large pool allows for
significant performance improvement, so allocate half of our RAM
to it.
https://mariadb.com/kb/en/innodb-buffer-pool/#innodb_buffer_pool_size
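The setting itself is a one-liner in the server config, however we end
up delivering it:

  [mysqld]
  innodb_buffer_pool_size = 4G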
Change-Id: I0a20cb2e11edc88dac6a55191a05637e7634773f
This adds a robots.txt that kindly asks bots not to crawl anything on
Zuul. We've seen some bots crawling, which leads to them trawling the
build logs; that seems like overkill and increases bandwidth usage in
our donor clouds. Ask them to stop and quiet everything down a bit.
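The file is the standard deny-all form:

  User-agent: *
  Disallow: /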
Change-Id: I88d85c7a51159b5b020aa179e24acec55fb42931
There is some evidence these vhosts are impacted. Mitigate that with
our rules.
While we are at it, modify the ruleset to add a newly noticed item.
Change-Id: I8c20193e4e474898a0bdc395b25fd9de94469dd6
This should clean up our mirror update server so that we no longer have
configs (cron jobs, scripts, logrotate rules, etc.) for mirroring
opensuse. It won't clean up the AFS volume, but we can get to that
later (and it will probably require manual intervention). The cleanup
is structured so that it can be reused for future cleanups too (like
when CentOS 8 Stream goes away and everything becomes CentOS Stream
specific), as sketched below.
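The rough shape of the reusable cleanup (paths and names below are
illustrative, not the actual role contents):

  - name: Remove opensuse mirror configs
    file:
      path: "{{ item }}"
      state: absent
    loop:
      - /usr/local/bin/opensuse-mirror-update
      - /etc/logrotate.d/opensuse-mirror

  - name: Remove opensuse mirror cron job
    cron:
      name: opensuse-mirror
      state: absent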
Change-Id: Ib5d15ce800ff0620187345e1cfec0b7b5d65bee5
There are a number of issues with the opensuse mirroring content
cleanup that this change aims to address. First up, we fix the prefix
for the CentOS 7 networking content; it needed a repositories/ prefix.
At the same time, we don't bother deleting the leaf data and instead
delete the more top-level directory, since we're cleaning this all up.
We then apply this top-level cleanup to all of the repositories,
distributions, and updates. This is largely a noop (just some directory
removals) except in the case of update/, which still contains Leap 15.2
update packages. These were apparently missed in the initial opensuse
cleanup.
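Illustratively, the deletions now operate on the top-level trees rather
than on individual leaves (paths are examples, not the exact script
contents):

  for d in repositories distribution update; do
      rm -rf "/afs/.openstack.org/mirror/opensuse/$d"
  done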
After this lands we should end up with a largely empty volume.
Change-Id: Ic854fcecd1a0fabc388640a33da7e4e1f9ec07c0
We have removed CentOS 7 from Nodepool, so now we can stop mirroring
packages for it. This deletes official CentOS 7 package mirror content
and the OBS packages mirrored by the OpenSUSE mirror script for CentOS 7.
A followup change will remove the OpenSUSE mirroring entirely as this
was the last thing it was used for.
Change-Id: I484651b0845eaab933e98106684e0a2a6215b3d7
The clouds.yaml and rackdns config files do not need to use two
different Ansible vars to refer to the same credentials. Note that
the forward DNS account is separate, and so we still keep those
intact.
Change-Id: I9dd657f357d32083f2cfd7f074ba0d122ca803c3
After this merges, the temporary credential set opendevci_rax_*
and opendevzuul_rax_* can be removed from hostvars.
Depends-On: https://review.opendev.org/911163
Change-Id: I2e9067aa2f11100d311c86beb4df5bf15c72db69
We will be doing our first set of project renames in about a month.
We've noticed that the waiting queue is not draining properly, and its
contents produce many scary-looking exceptions (on the order of
thousands) during Gerrit startup. Move the queue aside to avoid
confusion and uncertainty about problems during the project rename
process.
Change-Id: I3cd8fd17de340d536dd7e4e07fa1e18c86107832
Rackspace is requiring multi-factor authentication for all users
beginning 2024-03-26. Enabling MFA on our accounts will immediately
render password-based authentication inoperable for the API. In
preparation for this switch, add new cloud entries for the provider
which authenticate by API key so that we can test and move more
smoothly between the two while we work out any unanticipated kinks.
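A sketch of the sort of entry being added (this assumes the
rackspaceauth keystoneauth plugin; names and values below are
placeholders):

  clouds:
    openstackci-rax-apikey:
      profile: rackspace
      auth_type: rackspace_apikey
      auth:
        username: exampleuser
        api_key: REDACTED
        project_id: '000000'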
Change-Id: I787df458aa048ad80e246128085b252bb5888285
We are currently running MariaDB 10.4 for etherpad. We use the
MARIADB_AUTO_UPGRADE flag to automatically upgrade the mariadb install
to 10.11 when switching the image version over to 10.11. This was
successfully performed against the lodgeit paste service.
Change-Id: Id7dae260f3611fc1f88858730567455fef782b1c
We are currently running MariaDB 10.4 for refstack. We use the
MARIADB_AUTO_UPGRADE flag to automatically upgrade the mariadb install
to 10.11 when switching the image version over to 10.11. This was
successfully performed against the lodgeit paste service.
Change-Id: I75262bc8eba3dd59d5869be9bf568fd66dc7f608
Those repos are produced by the Automotive SIG [1]; they are not used
by OpenStack and needlessly increase the size of the CentOS Stream
repositories.
[1] https://sigs.centos.org/automotive/
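A hypothetical sketch of the exclusion, assuming the usual rsync-based
mirror script (flags and paths are illustrative):

  rsync -rltvz --delete \
      --exclude="*automotive*" \
      $UPSTREAM_MIRROR/SIGs/ $LOCAL_MIRROR/SIGs/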
Change-Id: I8a12956aa2079ce851ad0bb5ff60f49677f5b7d3
We have successfully removed debian buster from nodepool and zuul at
this point. The last major TODO in debian buster cleanup is to remove it
from our package mirrors. This change is the first step in making that
happen.
For step two we follow the manual process documented in our reprepro
docs [0] for cleaning up mirror components. We will need to perform
these actions against the debian, debian security, and ceph octopus
mirrors.
[0] https://docs.opendev.org/opendev/system-config/latest/reprepro.html#removing-components
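Per those docs, the component removal boils down to dropping the
component from the relevant conf/distributions entries and then letting
reprepro discard the orphaned databases and files, e.g.:

  reprepro --delete clearvanished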
Depends-On: https://review.opendev.org/c/openstack/project-config/+/910031
Change-Id: Ic1fc6a45cb7f644d7862312589254b6100e17222
This change updates the opensuse mirror script to stop mirroring
opensuse 15. However, we do not entirely remove the opensuse mirroring
script, as it is currently mirroring some CentOS 7 packages from OBS
for kolla. We will clean this up more fully when we remove CentOS 7.
Depends-On: https://review.opendev.org/c/openstack/project-config/+/909776
Change-Id: I0c3546b79219180b796ca02fa8d82dba2316878a
I have tested this upgrade on a held node going straight from 10.4 to
10.11 in one go. The resulting logs can be found in this paste [0].
The resulting backups of system tables are small enough that it seems
reasonable to keep those enabled (though they can be disabled). Also, we
can either land this change and let docker-compose do the upgrade for
us, or we can put the host in the emergency file, do the upgrade by
hand, then merge this change to reflect the new state of the world.
One advantage to doing this by hand is that we can manually run a db
backup with the service turned off, avoiding the loss of any data
written between our last backup and the time the upgrade occurs, should
anything go wrong.
In either case we should probably double check that db backups look good
in borg before proceeding. Comments on approach are very much welcome.
[0] https://paste.opendev.org/show/bWhZZH97IMLv44eeiWlB/
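For the by-hand path, the rough sequence would look like this (service
and file names are illustrative):

  # with the host in the emergency file:
  docker-compose stop etherpad    # stop writes to the db
  docker-compose exec -T mariadb sh -c \
      'mysqldump --all-databases' > pre-upgrade.sql
  # bump the image tag to 10.11 in docker-compose.yaml, then:
  docker-compose up -d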
Change-Id: I1bfcaeb9b90838a80d002732215f45a14a158fed
Our deployment tasks wait for Jaeger to be listening on its network
socket, but storage-related delays and slowdowns can sometimes cause
it to take longer than the 120 seconds we budgeted. Increase this to
300 seconds so we can be sure we've given it plenty of time to sort
that out.
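A sketch of the adjusted task (the host and port shown are assumptions
about the deployment, not copied from it):

  - name: Wait for jaeger to start
    wait_for:
      host: localhost
      port: 16686
      timeout: 300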
Change-Id: I4eaffe2d00fca8b9c10ed9235583fca671413dab
We should really be backing this up before it begins to get used by
additional services. Also, since our newer deployment uses a separate
RDBMS, make sure that database is backed up safely as well.
Change-Id: I4510dd05204f4b0f450d1925ed7be148d7d73e6e