On recent systems with SELinux enforcing, yum_update.sh cannot be executed,
because SELinux prevents container_t from opening user_tmp_t files:
type=AVC msg=audit(1674049913.380:22858): avc: denied { open } for
pid=70472 comm="bash" path="/tmp/yum_update.sh" dev="vda4" ino=218200014
scontext=system_u:system_r:container_t:s0:c65,c705
tcontext=unconfined_u:object_r:user_tmp_t:s0
tclass=file permissive=0
This patch ensures the script gets properly relabelled when bind-mounted
during the image build. Using the "z" option also ensures it remains usable
even when running multiple builds at the same time.
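A minimal sketch of the resulting invocation (container name and the helper are illustrative, not part of the change): the ":z" suffix asks the runtime to relabel the file with a shared SELinux label so container_t can open it.

```shell
#!/bin/sh
# Build a --volume argument with the ":z" shared-relabel suffix
# (hypothetical helper; the real change edits the playbook's mount list).
vol_z() { printf '%s:%s:z' "$1" "$1"; }

# Example use:
#   buildah run --volume "$(vol_z /tmp/yum_update.sh)" "$container" \
#     -- bash /tmp/yum_update.sh
vol_z /tmp/yum_update.sh
```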
Change-Id: I4085865965f48c9fa6a88cde7010a51cd8c653d8
When a gating, component or delorean current repo
is present, containers should be updated with
the latest rpms.
This change checks the installed rpms and
greps for the repos from which the updated rpms
are sourced.
Change-Id: Ie29c7c33c8d66bc3729c03c2d72cbdbf85ad443a
/etc/{{ pkg_mgr_suffix }}/vars exists on stream and
some other platforms but is missing from RHEL 8.x.
This patch checks that the directory exists
before mounting it.
Change-Id: I048434b38eb1d6b1c83a89d90e01f71d467d7fb7
With [1] it used yum/vars, but in CentOS 8 Stream
that is not available, so detect and use the dnf or
yum vars directory.
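The detection described above can be sketched as a small helper (the function name and the test-root parameter are illustrative; the real change lives in the playbook):

```shell
#!/bin/sh
# Pick dnf/vars when the platform provides it, else fall back to yum/vars.
# $1 is an optional root prefix to inspect, injectable for testing;
# it defaults to the real filesystem.
pkg_vars_dir() {
  root="${1:-}"
  if [ -d "${root}/etc/dnf/vars" ]; then
    echo /etc/dnf/vars
  else
    echo /etc/yum/vars
  fi
}
```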
Closes-Bug: #1927302
Change-Id: Idda53d1b68b97e5bb65314b1e07d507736932531
Since repos may rely on yum vars, yum/vars must be
mounted along with the yum repos.
Container builds already mount yum/vars along
with the yum repos. Molecule jobs also add it with [1].
Since the repo is branchless and also used on CentOS 7,
yum/vars is used instead of dnf/vars.
[1] https://review.opendev.org/c/openstack/tripleo-ansible/+/787423
Change-Id: I36f175d97a86d4221b09dadf62f64a16b5c527e2
We are seeing occasional failures to pull due to intermittent registry
errors downstream. A retry should help to avoid the whole update
failing because of this.
Change-Id: Ib9415e46a52cc6ad6459ec3f170d7e23aa9aca03
If the yum cache path exists and is already mounted by another worker,
do not attempt to write to it; use the overlay mode instead.
This still leaves a window of opportunity for other workers to
RW-mount the cache after the Ansible check has reported a stale
fact that no other mounts were found, but that is unlikely
to happen.
Also, if the update has to be retried in the rescue block, do not use the
yum cache at all, both for maximum data safety and to start from a clean
(scratch) cache state.
This drastically reduces the chances of having multiple writers for the
cache.
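The fallback can be sketched as follows (the helper and the in-container path are assumptions; buildah/podman accept an ":O" volume suffix for overlay mounts on overlayfs):

```shell
#!/bin/sh
# Choose the mount mode for the shared yum cache: fall back to an overlay
# (":O") mount when the host directory is already mounted elsewhere, so
# this worker never writes to a lower layer another worker owns.
cache_vol() {
  dir="$1"
  if grep -qs " ${dir} " /proc/mounts; then
    printf '%s:/var/cache/yum:O' "$dir"   # read-only lower + ephemeral upper
  else
    printf '%s:/var/cache/yum:rw' "$dir"  # this worker populates the cache
  fi
}
```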
Closes-bug: #1860804
Change-Id: I19491a162e5bf6d6517fd343d675aff12bdc9719
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
The undercloud registry currently doesn't handle OCI formatted images
correctly. We need to ensure that when buildah is run, we specify that
we want the docker format until we correctly support the OCI metadata.
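A sketch of the intent (the toggle variable is hypothetical; "--format docker" is a real flag on buildah commit and buildah bud):

```shell
#!/bin/sh
# Pin the image format to "docker" until OCI metadata is supported by the
# undercloud registry. $OCI_SUPPORTED is an illustrative feature toggle.
img_format() {
  if [ "${OCI_SUPPORTED:-0}" = "1" ]; then echo oci; else echo docker; fi
}
# Example use:
#   buildah commit --format "$(img_format)" "$container" "$image"
```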
Change-Id: Icf1a1c8f3a353239f2d244aa0bc811f8f86f6867
Related-Bug: #1860585
This is needed if we want to only update installed packages and not
hit dependency issues encountered when updating packages with rpm_install.sh.
Change-Id: I5095d7b04cb10fde1bd82afd1bc406445b7595fd
Closes-bug: #1858837
In order to make sure RHUI repos work within a container,
the PKI cert directories need to be mounted into the container
so that the RHUI repos can be resolved and their content downloaded.
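A sketch of the mounts involved (the exact directory list is an assumption; RHUI client certificates commonly live under these paths):

```shell
#!/bin/sh
# Emit read-only --volume flags for the host PKI directories so the
# containerized yum can validate and fetch RHUI content.
rhui_vols() {
  for d in /etc/pki/rhui /etc/pki/entitlement; do
    printf -- '--volume %s:%s:ro ' "$d" "$d"
  done
}
# Example use:
#   buildah run $(rhui_vols) "$container" -- yum makecache
```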
Related-Bug: #1854685
Change-Id: Id09059559b5c207ef6f604e4bb999528118ae096
Signed-off-by: Chandan Kumar (raukadah) <chkumar@redhat.com>
We want to try a best effort to remove the buildah containers but since
it's run with multiple processes we occasionally get layer conflicts.
Let's add a bit of a retry and ultimately skip the error since that was
the previous behavior.
Change-Id: I75a85745aed652a85f4c143c987cd5cccbf31cac
Related-Bug: #1846413
Currently in the yum update output, we see buildah trying to rmi the
image we were working with however it is currently in use. After we
commit our changes we need to cleanup our working container so we can
remove the image we were using (if not used by another process
elsewhere).
Change-Id: I54e37b43346b97be0a7cfab12e6cac9809537c83
When yum_cache is set, that directory will be automatically
picked as either the source or destination for the containers
being updated, as follows:
* when the host directory is missing (or empty), the container
under update will start populating it while it gets updated.
That path will become the lower overlay FS layer for future
use by other containers under concurrent yum update executions.
* when the yum_cache directory exists and is not empty, it will be
bind-mounted as an upper overlay FS layer for other containers under
update. So those can benefit from some of the already prefetched
contents in its yum cache without data races or conflicts when
concurrently accessing the cached data.
Overlaying ensures data safety as each container can only see the lower
layer of the overlay, while storing its local changes on top of it as
an ephemeral layer. The yum_cache directory existence & non-emptiness facts
act as a single mutex, which grants dedicated write access to the
lower layer to only a single "populating" container at a time. This
behavior may be forcefully reset via the force_purge_yum_cache flag.
When the container update playbook is invoked with that flag, it instantly
creates a new populator and a fresh yum cache.
Note that 100% saturation of the cache is only expected once the
populating container finishes its execution.
The feature can be used only for buildah in yum update scenarios using
yum or dnf.
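The populate-vs-consume decision described above can be sketched as (function name is illustrative; the real logic lives in the playbook):

```shell
#!/bin/sh
# The "mutex": an absent or empty yum_cache dir makes this worker the
# single populator; a non-empty one is consumed through an overlay so
# local changes stay ephemeral and the lower layer is never contended.
yum_cache_mode() {
  dir="$1"
  if [ ! -d "$dir" ] || [ -z "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo populate    # RW bind mount; becomes the shared lower layer
  else
    echo overlay     # ":O" mount; reads the lower layer only
  fi
}
```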
Change-Id: I30c6dd12454a0b1781803ab16ef79b5914178114
Related-bug: #1844446
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
Using `|exists` as a filter is currently deprecated:
[DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using
`result|exists` use `result is exists`. This feature will be removed in version
2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
Change-Id: I0c32686062e79142aa5c664a4a42ac590263b64e
Executes all linters via pre-commit, which is much faster, guarantees
their version locking and allows upgrading them with a single command.
Before this change the only linter running via pre-commit was
ansible-lint.
Now we also run bashate, flake8 and yamllint via pre-commit.
For developer convenience we still keep the old tox environments
which allow running a single linter.
Added long_description_content_type to fix a twine check failure.
Change-Id: I037eae61921b2a84aa99838804f70e96ee8d8b13
The "buildah run" randomly fails on centos7 kernel, with:
standard_init_linux.go:203: exec user process caused "no such file or directory"
We think it's related to:
https://github.com/containers/libpod/issues/1844
To work around this issue, we'll retry up to 3 times, with a 3 second
delay between attempts, any "buildah run" command that fails to produce
an exit code of 0.
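A minimal sketch of that retry logic (the wrapper is illustrative; RETRY_DELAY defaults to the 3 seconds used by the playbook and is overridable only for demonstration):

```shell
#!/bin/sh
# Re-run a command up to 3 times, pausing between attempts, mirroring the
# workaround for the flaky "buildah run" on the CentOS 7 kernel.
retry3() {
  n=1
  until "$@"; do
    [ "$n" -ge 3 ] && return 1
    n=$((n + 1))
    sleep "${RETRY_DELAY:-3}"
  done
}
# Example use:
#   retry3 buildah run "$container" -- bash /tmp/yum_update.sh
```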
Change-Id: Ic50fd359c9bf50a6e0247d7743b26191d2f3dcb5
Rather than copy the ephemeral script yum_update.sh, just mount it in
during the buildah run call which runs it.
This results in one less layer, and may work around an issue seen in
the gate where the file is sometimes not in the image when expected.
Change-Id: I1303be08ed162318f4b4b8f3aabf873c13ae9b99
This is consistent with the tagging done at the end of
modify_image.yml, and fixes an issue with the tagged image being
missing.
Change-Id: Ia98d2ecaf718d6cb9d6f859bfadbbcb07acfd775
In I8a3769c0b55572ba05cc29ecd28a131cc94e8c4d, we switched the playbook
to use buildah CLI and run the yum_update.sh from a directory that
wasn't found by the playbook:
no files found matching "files/yum_update.sh":
no such file or directory
This patch first copies the script to /tmp, so it can be copied from the
host afterward.
Change-Id: I6da0850386c0e3ca51f5f42dbd97c26bf5364a24
This allows directly mounting directories instead of copying them
twice. Also the resulting image has only one extra layer instead of
one per Dockerfile directive.
Change-Id: I8a3769c0b55572ba05cc29ecd28a131cc94e8c4d