Due to a bug [1] in CentOS packaging, systemd-udev is substituted with
systemd-boot-unsigned, so the full NVR (Name-Version-Release) must be
used to install systemd-udev properly until the bug is fixed.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2183279
Change-Id: I3c112b74b4777b9443f3c3041a51ecb770d48021
We rely on udev existing for glusterfs, since we apply overrides for
it as well as attempting to restart it. However, systemd-udev no
longer appears to be pre-installed in all CentOS containers.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/878926
Change-Id: Ia03ee4aeb381da00a538e3775b824f2a5ce4e01e
Currently it is the deployer's responsibility to install the binaries
required for mounts to succeed, which is not really convenient.
Instead, we can provide at least a basic binary installation when a
supported mount type is set.
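As a sketch of the idea only: the mapping below is illustrative, and
both the package names and the lookup helper are assumptions, not
taken from the role's actual variables or tasks.

```python
# Hypothetical mapping from mount type to the packages providing its
# mount helper binaries; real names vary per distribution and role.
MOUNT_TYPE_PACKAGES = {
    "nfs": ["nfs-utils"],
    "glusterfs": ["glusterfs-fuse"],
}

def required_packages(mount_type):
    """Return the packages to install for a mount type, if supported."""
    return MOUNT_TYPE_PACKAGES.get(mount_type, [])

print(required_packages("nfs"))    # packages assumed for NFS mounts
print(required_packages("ext4"))   # unsupported type: nothing to install
```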
Depends-On: https://review.opendev.org/755484
Change-Id: I680359ca655d0f69a40e9d29dbf1694cd0aa4ca2
The mount role was using the systemd module to start / stop mounts;
however, if a mount was restarted when it could have been reloaded,
the role could create a fair amount of chaos in a running environment.
This change maps the mount states appropriately to systemctl command
options to ensure we're not needlessly restarting mounts should the
unit files change. The `systemd_mount_states` variable has been added,
which maps the normal Ansible states to suitable systemd mount states,
and the mount state is now managed using the `systemctl` command
instead of the Ansible module.
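A minimal sketch of that mapping, under stated assumptions: the real
`systemd_mount_states` lives in the role's Ansible defaults, and the
exact keys, values, and unit name below are illustrative, not copied
from the role.

```python
# Sketch: map Ansible-style states to systemctl verbs so that a
# changed unit file triggers a reload where possible rather than a
# disruptive full restart. "reload-or-restart" is a real systemctl
# verb: it reloads the unit if it supports reloading, else restarts.
SYSTEMD_MOUNT_STATES = {
    "started": "start",
    "stopped": "stop",
    "restarted": "reload-or-restart",
}

def systemctl_args(state, unit):
    """Build the systemctl invocation for a given state and mount unit."""
    return ["systemctl", SYSTEMD_MOUNT_STATES[state], unit]

print(systemctl_args("restarted", "var-lib-glusterd.mount"))
```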
Change-Id: I5c7e5105e54d3ff9ad040f2a1d003d3dd12e4efb
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>