This change implements quorum queue support for RabbitMQ, enables it
by default, and provides default variables to tune its behaviour
globally.
To preserve the upgrade path and the ability to switch back to HA
queues, we rename vhosts by removing the leading `/`, since enabling
quorum queues requires removing the exchange, which is tricky to do
with running services.
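The rename can be illustrated with the resulting transport_url; hostnames, credentials, and the option value below are illustrative placeholders rather than values taken from this change, though `rabbit_quorum_queue` is the standard oslo.messaging option:

```ini
# Before (vhost `/zun`, leading slash percent-encoded in the URL,
# HA/mirrored queues in use):
# transport_url = rabbit://zun:secret@10.0.3.10:5671/%2Fzun

# After (vhost `zun`): a fresh vhost, and therefore a fresh exchange,
# is created with quorum queues enabled
# transport_url = rabbit://zun:secret@10.0.3.10:5671/zun

[oslo_messaging_rabbit]
rabbit_quorum_queue = True
```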
Change-Id: I2e3f464534bffe9edd9d969c8d6a24adce06c02c
While <service>_galera_port is defined and used by the db_setup
role, it is not in fact used in the oslo.db connection string.
Change-Id: I65cbe26804fab48aed3c88ed75bfc7f28d3b5f9e
my_ip is leveraged by multiple other options as a default value, so it
makes sense to set it to zun_service_address, which in turn defaults
to management_address.
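A sketch of the resulting defaulting chain in zun.conf; the address shown is a placeholder for the host's management address:

```ini
[DEFAULT]
# my_ip defaults to zun_service_address, which itself defaults to
# management_address, so every option deriving its default from my_ip
# resolves to the management network unless overridden
my_ip = 172.29.236.100
```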
Change-Id: Iaa409cde1246b4aacdc0b22cd165f64aa2ca2418
Implement support for service_tokens. For that we convert
role_name to be a list and rename the corresponding variable.
Additionally, service_type is now defined for keystone_authtoken,
which enables token validation with restricted access rules.
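A hedged sketch of the resulting keystone_authtoken section; the values are illustrative, though `service_type` and `service_token_roles` are standard keystonemiddleware options:

```ini
[keystone_authtoken]
# Declaring the service type lets tokens carrying restricted access
# rules be validated against this service
service_type = container
# role_name converted to a list of roles accepted for service tokens
service_token_roles = service
service_token_roles_required = True
```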
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/845690
Change-Id: Id451d06bcc40c94e9ef021dd7e3c1d14703e73cc
- Implemented a new variable ``connection_recycle_time`` controlling SQLAlchemy's connection recycling
- Set new default values for DB pooling variables, which are inherited from the global ones.
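The oslo.db options involved look roughly like this in the service config; the values are illustrative, not the defaults set by this change:

```ini
[database]
# Replace pooled connections after this many seconds so the server
# does not close them first (e.g. MySQL wait_timeout)
connection_recycle_time = 600
# Pooling values now inherited from the global defaults
max_pool_size = 120
max_overflow = 20
pool_timeout = 30
```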
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/819424
Change-Id: Ib258eeb4989236215d645b21ed25f9d35c3a2a0a
With the PKI role in place, in most cases you don't need to explicitly
provide a path to the CA file, because the PKI role ensures the CA is
trusted system-wide. PyMySQL [1], however, requires you to either
provide a CA file, provide a cert/key pair, or enable verification.
Since the current behaviour is to provide a path to the custom CA, we
expect the certificate to be trusted system-wide. Thus we enable
certificate verification when galera_use_ssl is True.
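Conceptually, the connection string switches from pointing at a custom CA file to requesting verification against the system trust store; the host, credentials, and exact query-argument spelling below are assumptions for illustration, not values produced by the role:

```ini
[database]
# Hypothetical example: certificate verification requested instead of
# a custom ssl_ca path
connection = mysql+pymysql://zun:secret@10.0.3.10/zun?charset=utf8&ssl_verify_cert=true
```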
[1] 78f0cf99e5/pymysql/connections.py (L267)
Change-Id: I8b7b266d2a0633b40d38581e734ad00714b89885
This adds periodic cleanup of the directory which zun uses to
temporarily cache images loaded from Glance, preventing it from
growing too large.
Docker image cleanup is adjusted to make it less aggressive, as
the 'until' filter has been seen to clear images which were
created more recently than one hour ago.
The network pruning is removed as this causes zun to become out
of sync with Docker which can prevent creation of new containers
on pruned networks.
Finally, the default is to leave cleanup disabled so that it can
be enabled purely based upon user preference.
As systemd timers cannot be disabled, this is achieved via a file
presence check which can be overridden for manual execution.
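One way to express such a guard in the systemd service started by the timer; the unit contents, paths, and use of ExecCondition here are hypothetical, shown only to illustrate the pattern:

```ini
[Service]
Type=oneshot
# The cleanup only runs while the marker file exists; deleting the
# file disables the periodic job, while the service can still be run
# manually once the file is recreated
ExecCondition=/usr/bin/test -f /etc/zun/image-cleanup-enabled
ExecStart=/usr/local/bin/zun-image-cleanup.sh
```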
Change-Id: I4532d9975a2e68a12a7755ca3798a59f4928593c
This fixes the configuration for the zun-wsproxy service which
relays messages from the Docker daemon, providing output from
containers' consoles to the Horizon dashboard.
Depends-On: https://review.opendev.org/769142
Change-Id: I7158e202be2e778a7a64e9ef2656f496caae97be
This issue is preventing metal upgrade jobs for
victoria->master from deploying haproxy correctly following the
merge of https://review.opendev.org/769142/.
This is intended to be a minimal patch to fix the binding
so that it can be backported in order to fix the upgrades.
Change-Id: I1c3dcbc21bee1bf6c66c9c2f77c4ff832db49f19
This adds support for kata containers by installing and configuring
the relevant runtime.
The default remains as 'runc' but can be adjusted using the
variable added to the defaults.
Change-Id: Iea07012d092333c656b397f97b541a2f0a5f0e44
This ensures that only one record is generated in placement
for each compute host when both Nova and Zun run alongside
each other.
Change-Id: Ie5c741d47d114222934ad01097710fa8dc56dd4c
The Docker image cache does not get emptied automatically and
can take up significant disk space. In addition, old networks can
leave iptables rules, network devices and routing table entries
behind.
This patch adds a periodic timer job to delete this data where it
is safe to do so and won't impact existing containers.
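A sketch of such a timer/service pair; the schedule and filter value are illustrative, although `docker image prune --filter until=...` is the standard Docker CLI form:

```ini
[Timer]
OnCalendar=daily

[Service]
Type=oneshot
# Remove only unused images older than a day, leaving data referenced
# by existing containers untouched
ExecStart=/usr/bin/docker image prune --all --force --filter until=24h
```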
Change-Id: I7045fcbb8bcd7a9744cc35fb2668016bacab4f1b
Brings together a set of existing patches and attempts to address
permissions issues with the kuryr-libnetwork plugin.
Defaults are chosen to match the requirements of the tempest tests.
Change-Id: Ie674947ba6673a92e53f85de2cc8acdae5788f8f
Depends-On: https://review.opendev.org/767469
To make configs more readable, it's worth dropping commented-out
lines. Some issues in the configs have also been fixed.
Change-Id: I1d2316fbe9ae0c74b9d516d3a143c7a58ff59365
This patch adds a per-role prefix for memcached_server to give
deployers the ability to override the location of the memcached
cluster, e.g. when a user wants to create a single memcached cluster
with k8s for each service.
We also add pymemcache based on [1] and fix zun-docker
systemd config.
[1] https://review.opendev.org/711429
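With a per-role prefix, each service can point its cache section at a dedicated cluster; `memcache_servers` is the standard oslo.cache option, while the address is a placeholder:

```ini
[cache]
# Overriding the role-level memcached variable redirects only this
# service, e.g. to a cluster hosted on k8s
memcache_servers = zun-memcached.example.com:11211
```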
Change-Id: Ic7b31506177ebb0f4f24eaff4db134aace5c0b1a
This patch migrates the service from regular syslog files to
journald. We also disable uwsgi logging, since it duplicates
requests that are already logged by the service itself.
Change-Id: Id466ac20d9d18fa86a4615a73433a51720bafc8e
As of change https://review.openstack.org/#/c/596502/ nova
has deprecated the RamFilter and DiskFilter since they are
not necessary when using the default scheduler driver
(filter_scheduler). This change removes their usage from
this deployment project.
Change-Id: I9c05016817cb03933292f09d06119795f8f451a0
This introduces oslo.messaging variables that define the RPC and
Notify transports for the OpenStack services. These parameters replace
the rabbitmq values and are used to generate the messaging
transport_url for the service.
This patch:
* Add oslo.messaging variables for RPC and Notify to defaults
* Update transport_url generation
* Add oslo.messaging to tests inventory and update tests
* Install extra packages for optional drivers
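The generated transport_url then takes the usual oslo.messaging form for both RPC and Notify; hosts and credentials here are placeholders:

```ini
[DEFAULT]
# RPC transport built from the new oslo.messaging RPC variables
transport_url = rabbit://zun:secret@172.29.238.10:5671/zun

[oslo_messaging_notifications]
# The Notify transport may point at a different backend than RPC
transport_url = rabbit://zun:secret@172.29.238.10:5671/zun
```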
Change-Id: I0b2138ca9eb49387948f2ca87800cf966a9414a8