Task vars have higher precedence than inventory group vars, so they
cannot be overridden except in user_variables (Ansible extra vars),
which then becomes a global setting; that is almost certainly
incorrect for this case.
Change-Id: Ie43e339df50adbe8240ffe43159c28f132e50000
At the moment Cloudkitty is targeted at all LXC hosts along with containers,
which is neither needed nor intended.
Unfortunately, no good backwards-compatible fix exists, so operator action is required to handle the transition to the new naming for
the service.
Change-Id: I9360495e3b3347568969e36e0e96bb1325efd59f
It is not possible to install Gnocchi 4.5 with 2024.1 due to a conflicting
pyparsing dependency. This is fixed in 4.6 with [1]
[1] a565df6923
Change-Id: I056a4a382abffc2d2b70a0cead787f22dd737fdc
With the current inventory state, the ironic_compute group is not the
same as ironic-compute_hosts, since the latter also includes the hosts
on which the ironic_compute LXC container resides in the LXC scenario.
For example, in an AIO LXC deployment, 'ironic-compute_hosts' includes
aio1, while ironic_compute includes aio1_ironic_compute_container-5fd060b3.
This results in `nova_virt_type` being set for the real nova-compute
that resides on the AIO host, breaking it.
Change-Id: I47b2e9af86b5dceafe68c7e56e149a8b34c30439
Due to a formatting issue, the healthcheck address was merged with the
method, which resulted in an invalid haproxy configuration when SPICE
is used as a console.
Closes-Bug: #2052891
Change-Id: I38b2ff6887382164e4b28852274ec6dfee4d7d78
With the changes to the config_template module that restored usage of
{% raw %} tags [1], rendering of mapping keys was broken when they are
defined as variables. Ansible, by design [2], does not render mapping
keys. Moreover, it was not working as intended anyway, since rendering
happened in the post-copy stage, so identical records were not merged
together, which resulted in #1812245.
As this behaviour is expected by Ansible design, instead of adding a
workaround to the config_template module, I suggest working around the
issue by defining the troublesome mapping with Jinja, which allows it
to render properly.
[1] https://review.opendev.org/c/openstack/ansible-config_template/+/881887
[2] https://github.com/ansible/ansible/issues/17324#issuecomment-685102595
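A minimal sketch of the suggested workaround (variable and key names
here are hypothetical examples, not the actual ones touched by this
patch):

```yaml
# Broken: Ansible does not template mapping keys, so the literal
# string "{{ listener_name }}" would end up in the rendered file.
example_mapping:
  "{{ listener_name }}":
    port: 5672

# Workaround: build the whole mapping with a Jinja expression, so
# the key is rendered as part of the value.
example_mapping: "{{ { listener_name: {'port': 5672} } }}"
```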
Closes-Bug: #2048036
Related-Bug: #1812245
Change-Id: I8a32736239c6326d817c620451799c13d5d8938c
Despite not being documented, the order of http-check options is
important. Defining `expect` before `check` leads to a configuration
error. In order to avoid that, we fix some definitions in the
haproxy_services variable.
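To illustrate the ordering constraint described above, a sketch of the
rendered backend (backend name and healthcheck path are hypothetical):

```
# Fails: the expect rule precedes the check definition
backend example-back
    http-check expect status 200
    option httpchk GET /healthcheck

# Works: define the check first, then the expected result
backend example-back
    option httpchk GET /healthcheck
    http-check expect status 200
```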
Related-Bug: #2046223
Needed-By: https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/903463
Change-Id: I6153e1ba5a4c45e2ed78d69da73e6524e3911db0
During the PTG we agreed to disable quorum queues by default for this
cycle and wait for the improvements proposed as part of [1] before
enabling them by default.
This also adds a separate job that will test the scenario with quorum
queues enabled.
[1] https://review.opendev.org/q/topic:bug-2031497
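Deployers who still want quorum queues this cycle can opt back in via
user_variables; a sketch, assuming the roles keep using the
oslo.messaging toggle for this:

```yaml
# /etc/openstack_deploy/user_variables.yml
# Re-enable quorum queues (disabled by default this cycle)
oslo_messaging_rabbit_quorum_queues: True
```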
Change-Id: I0807cc1ed991fd85f9f74d4a360d3fd23cde227c
This logic was added to handle the TLS condition for glance when Ceph
is used.
However, it never worked, as `ceph` is not a valid store type: it
should be `rbd` instead. At the same time, the logic around available
stores is considerably more complex.
Needed-By: https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/901034
Change-Id: I426be7d21ba9267879eadf282f5dd055485b37c3
The switch of the policy for classic queues to version 2 was wrong in
the initial patch [1], both in format and in policy name. This patch
aims to fix the policy definition.
[1] https://review.opendev.org/c/openstack/openstack-ansible/+/895806
Change-Id: I163126097459d5d07563c384b7f92f8ecccb78f2
Because implicit localhost is not a part of 'all' or any other group,
playbooks executed with '--limit' may not take it into account.
The problem was extensively described in bug #2041717.
This change explicitly adds localhost to OSA inventory to avoid
unexpected behavior.
Closes-Bug: #2041717
Change-Id: Ib44ed22d7132b42a4185a91f12c66ced5a1a6209
At the moment all haproxy backends decide whether TLS should be used
via the `haproxy_ssl` variable. If deployers don't want SSL, they are
supposed to use that variable. However, the only service not respecting
it is the RabbitMQ management interface.
As a result haproxy fails with an invalid configuration, since
certificates are not provisioned when `haproxy_ssl` is False: the
resulting configuration refers to a certificate that does not exist on
the host and was never issued.
Change-Id: Idc924d4ee485c8e6efc15b90df90ba5021a106e4
Since 2023.2 has been released, we're switching to track and test code
against the 2023.2 stable branch and updating SHAs to the HEAD of the
branch.
Change-Id: I59951bce68fb898a3b0845b5c5f2443e5d57e3bb
When Nova is deployed with a mix of x86 and arm systems
(for example), it may be necessary to deploy both 'novnc' and
'serialconsole' proxy services on the same host in order to
service the mixed compute estate.
This patch introduces a list which defines the required proxy
console types.
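A sketch of such an override (the list variable name comes from the
depends-on patch; the values are examples):

```yaml
# Run both console proxies on the same compute host
nova_console_proxy_types:
  - novnc
  - serialconsole
```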
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/890521
Change-Id: I5ed49878c192516a504a4a77902271214800c5b8
This breaks the use of the ansible synchronize module
when the parameter use_ssh_args is true with an error
from ssh via rsync that there is an unknown parameter.
Removing the newline makes the synchronize module
work correctly.
Change-Id: Ib7fc3068ecc339e02d641196513c1b676a9a9f69
At the moment all compute nodes are explicitly added as
OVN gateway nodes. At the same time, one of the recommended setups
is to not pass public networks to compute hosts and instead have
standalone network nodes running the OVN gateways, which is
not possible to configure with the current setup.
Change-Id: If99ddc47d32acf41cdb542b4e56d90b6e3589a56
HA policies have been superseded by quorum queues [1] and were
discouraged and marked for removal in RabbitMQ 4.0 [2].
Based on that, we perform a migration from HA queues to quorum queues,
since they're already supported in oslo.messaging.
Per-service patches are required to enable quorum queues in service
configuration.
This also adjusts the upgrade doc to contain a variable required for
a proper nova cell update on the changed vhost.
[1] https://www.rabbitmq.com/quorum-queues.html
[2] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
Change-Id: Icd5eabcad4801b454f29b388613d7241bb9b0ad0
At the moment we assume that haproxy should be fine listening on
internal_lb_vip_address, but in real-life deployments this is an FQDN,
and when DNS round-robin is used this assumption is invalid.
We can be smarter and check whether the haproxy_bind_internal_lb_vip_*
variables are defined, falling back to the previous behaviour if not.
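A sketch of the fallback logic (the helper variable name here is
hypothetical; exact names may differ in the role):

```yaml
# Bind to the explicitly configured internal address when given,
# otherwise fall back to the VIP, as before.
_internal_bind_address: "{{ haproxy_bind_internal_lb_vip_address | default(internal_lb_vip_address) }}"
```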
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/852039
Change-Id: Ic0b9646d566425878930eb88745e35f9e6cc2e11