Since ansible-core 2.10 it is recommended to reference modules by their
FQCN (fully qualified collection name).
In order to align with this recommendation, we perform the migration by
applying the suggestions made by `ansible-lint --fix=fqcn`.
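As an illustration (not a literal diff from this patch; the task and
variable names are made up), a short-name module call becomes its
collection-qualified form:

  # before: short module name
  - name: Install cinder distro packages
    apt:
      name: "{{ cinder_distro_packages }}"
      state: present

  # after: fully qualified collection name
  - name: Install cinder distro packages
    ansible.builtin.apt:
      name: "{{ cinder_distro_packages }}"
      state: present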
Change-Id: I7220eec2957ea2ef8acfa9dba3e443ab39251646
In order to reduce divergence from ansible-lint rules, we apply
auto-fixing of violations.
In the current patch we replace all kinds of truthy values with
`true` or `false` to align with the recommendations.
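For illustration, this is the kind of change the ansible-lint truthy
rule produces (the task itself is a made-up example):

  # before: YAML truthy value flagged by ansible-lint
  - name: Ensure cinder rootwrap directory exists
    ansible.builtin.file:
      path: /etc/cinder/rootwrap.d
      state: directory
    become: yes

  # after: canonical boolean literal
  - name: Ensure cinder rootwrap directory exists
    ansible.builtin.file:
      path: /etc/cinder/rootwrap.d
      state: directory
    become: true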
Change-Id: I7f7dfc3aec4578777e1c886504f5b7e589f17c26
Cinder API v2 has been deprecated and removed for a while now, so we can
safely clean up its leftovers.
We also clean up/align variable names which were not used.
Change-Id: Ie49d145775d753323406fd5588a479c60c8807cc
With ansible-core 2.16 a breaking change landed [1] in some filters,
making their results be returned in arbitrary order. Until now we were
relying on them to always return identically ordered lists.
With that we need to ensure that we still keep deterministic behaviour
where ordering is important.
[1] https://github.com/ansible/ansible/issues/82554
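A minimal sketch of the kind of adjustment this implies, assuming one of
the affected set-theory filters such as `difference` (the variable names
are illustrative):

  # before: result order is no longer guaranteed on ansible-core 2.16
  services_to_remove: "{{ all_services | difference(enabled_services) }}"

  # after: enforce a deterministic order explicitly
  services_to_remove: "{{ all_services | difference(enabled_services) | sort }}"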
Change-Id: I66d23f9e52ef53b1a462878ea2d94cb01faed717
The default value of heartbeat_in_pthread has been reverted to False in
oslo.messaging [1] and backported back to Yoga.
At the moment this setting causes intermittent issues during live
migrations of instances and some other operations, so it makes sense
to align it with the default value.
[1] https://review.opendev.org/c/openstack/oslo.messaging/+/852251
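For deployers who had previously overridden the option, a hedged example
of pinning it back to the upstream default via config overrides (assuming
the os_cinder override dict `cinder_cinder_conf_overrides`; other roles
have their own equivalents):

  cinder_cinder_conf_overrides:
    oslo_messaging_rabbit:
      # explicitly keep the oslo.messaging default
      heartbeat_in_pthread: false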
Change-Id: I06dae436639562c88ef917fd06f1f73e2ce74720
Due to a shortcoming in the QManager implementation [1], when uWSGI is
used on metal hosts, the flow ends up with the same hostname/processname
set for every service, making them fight over the same file under SHM.
In order to avoid this, we prepend the service_name to the hostname.
We can not change the processname instead, since that would lead to a
fight between different processes of the same service.
[1] https://bugs.launchpad.net/oslo.messaging/+bug/2065922
Change-Id: I19484946c3049484f27e6e6029afd8152e1b835d
Right now it is not possible to avoid creation of a default volume type
which has the same name as a backend. If an operator wants to rename the
volume type or use a different name, that is not possible as of today.
While we should migrate to using the openstack_resources role for
managing QoS and volume_types [1], the patch/modules are not ready and
will not be backportable anyway.
[1] https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/906372
Change-Id: Ic1e190fc86bd3b6fd2cc771273e28b5bb095a322
<service>-config tags are quite broad and have a long execution time.
When you only need to modify a service's '.conf' file or something
similar, it is useful to have a quicker method to do so.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/919816
Change-Id: I6f4d5f3388b71ce874650558b2566f795361a6c7
During the last release cycle oslo.messaging landed a series [1] of
extremely useful changes that implement modern messaging techniques for
RabbitMQ quorum queues.
Since these changes are breaking and require the queues to be re-created,
it makes sense to align them with the migration to quorum queues by
default.
[1] https://review.opendev.org/q/topic:%22bug-2031497%22
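A hedged sketch of how a deployer could opt back out of the new default,
assuming the shared toggle is named `oslomsg_rabbit_quorum_queues` (check
the actual variable exposed by your release):

  # keep classic queues instead of the quorum default
  oslomsg_rabbit_quorum_queues: false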
Change-Id: I02db2ad91ad036ff24c35882e1746b3d6a8f3cc0
In order to be able to globally enable notification reporting for all
services, without the need to deploy ceilometer or add a bunch of
overrides for each service, we add the `oslomsg_notify_enabled` variable
that controls whether notifications are enabled.
The presence of ceilometer is still respected and referenced by default.
A potential use case is various billing panels that rely on notifications
but do not require the presence of Ceilometer.
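A minimal usage example (the variable name comes from this patch;
placing it in `user_variables.yml` is the usual convention):

  # emit oslo.messaging notifications even without ceilometer deployed
  oslomsg_notify_enabled: true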
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/914144
Change-Id: Ifb16270b41eb795dca9b4bd6a1a3e6eabf2f45eb
In order to allow the definition of policies per service, we need to add
variables to the service roles that will be passed to
openstack.osa.mq_setup.
Currently this can be handled by leveraging group_vars and overriding
`oslomsg_rpc_policies` as a whole, but that is not obvious and can be
non-trivial for groups which co-locate multiple services or in case of
metal deployments.
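A hedged illustration of the per-service override this enables; both the
variable name (`cinder_oslomsg_rpc_policies`) and the policy structure
are assumptions to verify against the role defaults and
openstack.osa.mq_setup:

  cinder_oslomsg_rpc_policies:
    - name: "CLASSIC_cinder"   # hypothetical policy name
      pattern: "^cinder"
      priority: 0
      tags:
        expires: 3600000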
Change-Id: I69fee673ebc1fa454a4af0f9a251e9f50999edd8
The default value of Restart for any service whose type is not `oneshot`
is `on-failure`. While this suits most use cases, it leads to unexpected
consequences for cinder-purge-deleted.service.
If there are historical inconsistencies in the database which make it
impossible to flush deleted volumes from the database (e.g. due to prior
manual intervention), cinder-manage exits with code 1, which triggers
systemd to restart the service and attempt the cleanup again.
The troublesome part is the transactional behaviour of the script. With
each run it locks records in its failing transaction, which is reverted
in a loop with a 2 second delay. That not only causes unnecessary load
on the database itself, but also causes deadlocks during operations on
volumes, which are not retried and fail with a 500 return code in
cinder-api.
Changing Restart to `on-abnormal` will leave the service in a failed
state and systemd won't attempt to restart it.
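For clarity, the effect is equivalent to the following manual drop-in;
this is only a sketch, the role itself sets Restart through the
systemd_service role rather than via a drop-in file:

  - name: Stop restarting cinder-purge-deleted on clean non-zero exits
    ansible.builtin.copy:
      dest: /etc/systemd/system/cinder-purge-deleted.service.d/override.conf
      content: |
        [Service]
        # only restart on signals, timeouts or watchdog events
        Restart=on-abnormal
      mode: "0644"
    # followed by a `systemctl daemon-reload`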
Change-Id: Ib091cc11a16fcd31ef351d9ec21d070d25829791
While <service>_galera_port is defined and used for the db_setup role,
it is in fact not used in the connection string for oslo.db.
Change-Id: I6b910817ddc6eab68f815f776faeee432e55012e
With the update of ansible-lint to version >=6.0.0 a lot of new linters
were added that are enabled by default. In order to comply with the
linter rules we're applying changes to the role.
With that we also update the metadata to reflect the current state.
Depends-On: https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/888223
Change-Id: I671cc35a055b35fb249ad3054c45ec65f2b54ab4
This patch reduces memory usage for Cinder Volume and Backup services by
tuning glibc.
The specific tuning consists of disabling the per-thread arenas and
disabling the dynamic thresholds.
This is the equivalent of the devstack proposed patch from Change-Id
Ic9030d01468b3189350f83b04a8d1d346c489d3c
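In practice this maps to glibc's malloc environment variables for the
affected services; a hedged sketch follows (the override variable name is
hypothetical and the threshold values are just the glibc defaults made
static, the real values come from the referenced change):

  cinder_service_environment:        # hypothetical variable name
    MALLOC_ARENA_MAX: "1"            # disable per-thread arenas
    MALLOC_MMAP_THRESHOLD_: "131072" # setting these explicitly disables
    MALLOC_TRIM_THRESHOLD_: "131072" # their dynamic adjustment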
Related-bug: #1908805
Change-Id: I066ee76fe0cef9443f9e9f1ed3c8062d6c6f8566
In order to cover OSSA-2023-003, cinder has added a requirement to
define a service_user section for all cinder services.
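For reference, a sketch of what such a section boils down to, expressed
as a config override; the option names are the standard keystoneauth
ones, while the endpoint and credential values are placeholders:

  cinder_cinder_conf_overrides:
    service_user:
      send_service_user_token: true
      auth_type: password
      auth_url: "https://keystone.example.com:5000/v3"  # placeholder
      username: cinder
      password: "{{ cinder_service_password }}"
      project_name: service
      user_domain_name: Default
      project_domain_name: Default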
Change-Id: I19c2b03c61f714fedb593da8489e50d3fa08d933
We're adding a service that is responsible for executing the db purge.
The service will be deployed by default, but left stopped/disabled.
This way we allow deployers to enable/disable the feature by changing
the value of cinder_purge_deleted.
Otherwise, once the variable had been set to true, setting it back to
false would not stop the DB trimming, so the timer would need to be
stopped manually.
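To enable the feature, a deployer would set (variable name as above,
placed in user_variables.yml per the usual convention):

  # enable the periodic purge of soft-deleted rows from the cinder DB
  cinder_purge_deleted: true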
Change-Id: Ic5ae8c778bff2858fcb31c85d4b910805e452c3f
By overriding the variable `cinder_backend_ssl: True`, HTTPS will be
enabled, disabling HTTP support on the cinder backend API.
The ansible-role-pki role is used to generate the required TLS
certificates if this functionality is enabled.
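For example (variable name as stated above; certificate generation is
then handled by the PKI role):

  # terminate TLS on the cinder API backends themselves
  cinder_backend_ssl: true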
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085
Change-Id: Ib682499e900071db38cc2fd7c30822d0c33dba38
Online data migrations are supposed to be executed once the services
have been upgraded and restarted. Alternatively, you can run the online
migrations any time before the next upgrade, according to the docs [1].
So we move them to a separate file that is executed after all services
are upgraded and handlers are flushed. Tasks are delegated to the API
hosts and we clean up facts for them as well.
[1] https://docs.openstack.org/cinder/latest/admin/upgrades.html#database-upgrades
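The underlying command is cinder's own `cinder-manage db
online_data_migrations`; a sketch of roughly how such a delegated task
looks (names, delegation target and become user are illustrative, not
the role's literal task):

  - name: Run cinder online data migrations
    ansible.builtin.command: cinder-manage db online_data_migrations
    become: true
    become_user: cinder                            # illustrative
    delegate_to: "{{ groups['cinder_api'][0] }}"   # illustrative
    run_once: true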
Change-Id: Ic3ecdddd7dcc2dd617c8606278590c8e59230fdf
At the moment we don't restart services if the systemd unit file is
changed. We knowingly prevent the systemd_service role handlers from
executing by providing `state: started`, as otherwise the service would
be restarted twice.
With that, we now ensure that the role handlers will also listen for
systemd unit changes.
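A hedged sketch of the handler pattern involved; the handler name, loop
variable and listen topic below are illustrative rather than the role's
literal ones:

  - name: Restart cinder services
    ansible.builtin.service:
      name: "{{ item.service_name }}"
      state: restarted
    with_items: "{{ filtered_cinder_services }}"   # illustrative
    listen:
      - "Restart cinder services"
      - "systemd service changed"                  # illustrative topic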
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879671
Change-Id: I8140add1a4e4fdacee89bd29bd2e3c87eff0953a
We used rsync to synchronize filters from rootwrap.d. However, with
smart-source that is not needed anymore, since /etc/cinder is simply a
symlink to the rsync source directory. We still need the os-brick
rootwrap linkage though.
Change-Id: Ib1571c5be67155b584c412da8336de49bc80d948
Add file to the reno documentation build to show release notes for
stable/zed.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/zed.
Sem-Ver: feature
Change-Id: Id4cda2eb6ffdb55a80e555b33b1cae9ee4c5f67c
With ansible-core 2.13 the apt module tries to substitute packages
during name resolution.
However, git-core is only a transitional package name in Debian, yet
ansible tries to select it and provide a version, which is not correct
behaviour.
Since git-core is not really valid anyway, we just replace it to work
around ansible's shortcoming.
Change-Id: Ib0a75886baffec27c8a7d38d729623c7b41216eb
As the cinder code changes, we can potentially break some backends.
In order to detect this in time we are adding ceph and nfs scenarios.
We also fix the lvm backend for use on RedHat.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/859339
Change-Id: Ifceb2b816199339ec7725bd95cc890595eed95d9
This line was introduced by I21f84809c44ac4be0165fadfb8da67bbcbc9b05c
for centos-7 support, and should already be covered by the
distribution_major_version line above.
Change-Id: I5d5f84b84de35763024709212e0673607127e264
The role was never migrated to use the haproxy-endpoints role and the
included task was used instead the whole time.
To reduce complexity and to have a unified approach, all mentions of the
role and handler are removed from the code.
Change-Id: I0c055393ccb1c8d61affc2c1bb6d01f0c329afe9
Implement support for service_tokens. For that we convert role_name to
be a list, along with renaming the corresponding variable.
Additionally, service_type is now defined for keystone_authtoken, which
makes it possible to validate tokens with restricted access rules.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/845690
Change-Id: I1d0156a2ad829aa730419e1d9dfa1cd49026a6be
Related-Bug: #1948456
Nowadays Cinder does not support the v2 API, so it makes sense to ensure
that neither its endpoints nor the service are present in the catalog.
Change-Id: I62a4ba182cc752a5bc4f6e8c4d2430f7e7aafe54