In cases where certificates were regenerated for OVN, a service restart
is required in order to apply and use the new certificates.
We also provide a unique handler name to distinguish the certificates
installed for neutron-server from those installed for OVN.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/912768
Change-Id: Iedea6f1a67349bafecca5c792072fcd8f95cc546
At the moment it's possible to deploy VPNaaS for non-OVN environments only.
The OVN implementation is slightly different and requires a standalone agent
to run on gateway hosts where the OVN router is active.
This agent spawns namespaces, as the legacy implementation did, and talks
to the API through RPC.
A more detailed spec on the feature can be found here [1]. A configuration
reference is also being written [2].
[1] https://opendev.org/openstack/neutron-specs/src/branch/master/specs/xena/vpnaas-ovn.rst
[2] https://review.opendev.org/c/openstack/neutron-vpnaas/+/895651
Change-Id: Idb223ee0d8187f372682aafda1b8d6fd78cb71d1
Change-Id: Iad163ac7b032a97bd49164d94490b0f0deb83d90
As of today we run some agents, like the neutron-ovn-metadata agent, as
the root user, since they need access to the ovsdb socket, which has 750
permissions by default.
At the same time, for OVN we already connect via host:port to the same
ovsdb manager, which allows running the agent as an arbitrary user.
In order to align connection methods and to run services with lower
privileges, we introduce a couple of new variables that allow creating
valid connection strings for both OpenFlow listeners and regular
connections to the manager.
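As a sketch of the idea (the port and addresses here are illustrative, not the role's actual defaults): the local ovsdb exposes a TCP manager target, so an unprivileged agent can connect via host:port instead of the root-owned unix socket.

```shell
# Illustrative only: expose the local ovsdb over TCP so an unprivileged
# agent can connect via host:port instead of the root-owned unix socket.
ovs-vsctl set-manager ptcp:6640:127.0.0.1

# The agent would then point at the TCP endpoint, e.g. in its config:
#   [ovs]
#   ovsdb_connection = tcp:127.0.0.1:6640
```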
Change-Id: Iceab27aa1fdacc8b13f7ef6974b6a9076b8b7cd9
At the moment we set 640 permissions on the /etc/neutron/rootwrap.d
directory.
While this doesn't cause any issues right now, since root is still able
to read the files in there, it forces us to run services as root when
that should not be needed.
The playbook is also not idempotent, as it changes permissions on the
same directory multiple times during runtime.
The task for setting rootwrap permissions is removed, since its behaviour
is awkward by design of the file module: a single mode is applied to the
directory and its contents alike, meaning that either the directory will
not have execution permissions or all files inside it will have the
executable flag.
Change-Id: I577221e94d6cf9d940ee310757383cee24b80a03
There is a regression in the CentOS 9 Stream libvirt version 9.10 which
makes it impossible to spawn VMs on this OS and breaks CI.
We still leave some non-voting jobs in place just in case.
Change-Id: I1237769d637d318a68b1891eba7fa44671eb9ac1
At the moment the only way to configure multi-AZ support in Neutron was
config overrides, which work quite nicely with LXB/OVS scenarios. However,
with OVN changing the configuration is not enough, and the command that
sets up the OVN gateway must also provide an extra CMS option.
In order to improve AZ support in the Neutron role, we add a couple of
variables that control this behaviour and allow performing the required
configuration without config overrides for OVS/LXB/OVN.
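For OVN, a gateway chassis advertises its availability zones through CMS options. A hand-run equivalent of this kind of configuration might look like the following (the AZ names are examples):

```shell
# Illustrative: tag a gateway chassis with its availability zones via the
# ovn-cms-options external id (comma-separated options; AZ names are
# colon-separated after the "availability-zones=" prefix).
ovs-vsctl set open_vswitch . \
    external-ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=az1:az2"
```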
Co-Authored-By: Danila Balagansky <dbalagansky@me.com>
Closes-Bug: #2002040
Change-Id: Ic964329c06765176692f7b0c32f33ec46360a3fb
The openswan package for IPsec has been replaced with libreswan in EL9.
We failed to reflect that while adding EL9 support.
Closes-Bug: #2039098
Change-Id: I04742324ff472b3c40ee4c7d333305c67046aba2
Debian 12 ships OVS 3.1.0, which is affected by the bug [1]. Until that
is fixed, we're masking the ovs-record-hostname service.
While this was fixed by an OVS version bump in Ubuntu and RHEL, it is
still an issue on Debian 12.
[1] https://bugs.launchpad.net/cloud-archive/+bug/2017757
Change-Id: I90454ba50840f7cb900586a7b870161a0f4adc01
OpenDaylight support has been deprecated by the Neutron team in 2023.2 [1].
We remove support from our code to address that decision.
[1] 517df91c9e
Change-Id: Iaaf87b6d5400fe88c7edf86995ea9ba891866678
In an LXB environment, the neutron_ovn_controller group still
contains all of the compute nodes, which causes this task to
fail.
Change-Id: I7a63a79e8b9012c9f32b9316d9590ccd9e641c01
The OVS bridge creation logic for OVN deployments may fail
when the provider bridge has not been defined. This patch uses
logic that exists in the OVS deployment scenario to check the
length of neutron_provider_networks.network_mappings to ensure
a value has been set before attempting to create the bridge.
Change-Id: I34256e4ad22169ae6907a3c40270cb714cf33466
The condition accidentally checked a group against `group_name`, while it
should be `group_names`. As a result, whenever
neutron_vpnaas_custom_config is defined, the role fails with an undefined
variable.
Change-Id: Ia5b44729858dd9f742f1094f46e3cde1ceb70495
With the update of ansible-lint to version >=6.0.0, a lot of new
rules were added that are enabled by default. In order to comply
with the linter rules we're applying changes to the role.
With that we also update the metadata to reflect the current state.
Depends-On: https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/888223
Change-Id: I3905e334cfbeb7ccb976358016f81c5edd6cd284
This task runs immediately after one which may start the OVN
services, and the unix socket files may not yet be present
when the command is run to configure the connection settings.
Introduce retries on the task to give the services time to
start and the sockets to appear.
See https://paste.opendev.org/show/bPgVSIHyVPY5MwC373Zj/
Change-Id: I286169ca9ec493ef9ff1923249336cdc168619d0
This reverts commit 74b0884fc2.
Reason for revert: UCA and the OVS SIG have updated the package and marked the corresponding bugs as resolved.
Change-Id: Idbb9f4ee84a075bfa6e7e63c8d5b81951ce0ae65
Right now we are not using any constraints for docs and releasenotes
builds.
This resulted in docs job failures once Sphinx 7.2.0 was released.
The patch ensures that constraints are used, so we should not face a
similar issue again.
TOX_CONSTRAINTS_FILE is updated by the release bot once a new branch is
created, so it should always track the relevant constraints.
Some extra syntax-related changes may apply, since the patch is passed
through ConfigParser, which does not preserve comments and aligns
indentation.
Change-Id: I877b57ba117a820be7ca05d01037069295099f06
While <service>_galera_port is defined and used for the db_setup
role, it is not in fact used in the connection string for oslo.db.
Change-Id: I74735ad2f127a4c62d4e5c4d24dd1af76e5b76a3
Allow configuration of `inactivity_probe` in Connection table in NB and
SB for new installations.
Issues which are successfully resolved by using this as a workaround:
1. https://www.mail-archive.com/ovs-discuss@openvswitch.org/msg07431.html
2. https://bugs.launchpad.net/kolla-ansible/+bug/1917484
According to the OVN ML, specifically this part [1], there is no other
way to set `inactivity_probe` than via the Connection table. And the
only valid address option for it would be `0.0.0.0`, so that it can be
applied to all connections.
`ovn-ctl` forces `ovsdb-server` to look for addresses to listen on in the
Connection table via the `db-nb-use-remote-in-db` and
`db-sb-use-remote-in-db` options, which are enabled by default.
If `db-nb-create-insecure-remote` and `db-sb-create-insecure-remote` are
set to `yes` (when `neutron_ovn_ssl` is `False`), this results in
flooding the OVN logs with `Address already in use` errors.
So from now on we rely on their default value of `no` and only listen
on the addresses, and with the options, provided in the Connection tables.
[1] https://www.mail-archive.com/ovs-discuss@openvswitch.org/msg07476.html
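For reference, setting the probe on an existing deployment can be done directly against the Connection table; the interval value below is just an example, not a recommended default:

```shell
# Illustrative: adjust the inactivity probe on all NB/SB connections.
# inactivity_probe is in milliseconds; 0 disables probing entirely.
ovn-nbctl set connection . inactivity_probe=60000
ovn-sbctl set connection . inactivity_probe=60000
```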
Change-Id: If87cf7cfa1788d68c9a4013d7f4877692f2bb11c
This change implements and enables by default quorum queue support
for RabbitMQ, as well as providing default variables to globally tune
its behaviour.
In order to ensure an upgrade path and the ability to switch back to HA
queues, we change vhost names by removing the leading `/`, as enabling
quorum queues requires removing the exchange, which is a tricky thing to
do with running services.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/875399
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/873618
Change-Id: I43840a397ea6da6c3187291a74591c2205e1dca1
We're dropping Ubuntu Focal support early in 2023.2 release,
so we need to switch all jobs to Jammy before this happens.
Change-Id: I677494ad02d58f891b376b44230ce9d137ca34a9
OVN packages are installed as part of the common package installation,
as they're appended during neutron_package_list population. So
there should be no need for another set of tasks that installs
these packages.
Change-Id: I119dd30b6e11e9ba373367a1b65d56d723ef0b45
By overriding the variable `neutron_backend_ssl: True`, HTTPS will
be enabled, disabling HTTP support on the neutron backend API.
The ansible-role-pki is used to generate the required TLS
certificates if this functionality is enabled.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085
Change-Id: I9f16f916d1ef3e5937c91f6b09a3d4073594ecb4
The UCA repo for Antelope ships OVS 3.1.0, which is affected by the
bug [1]. Until that is fixed, we're masking the ovs-record-hostname
service.
[1] https://bugs.launchpad.net/cloud-archive/+bug/2017757
Change-Id: Iead62b464a68bbfcffb0e79a4db004760287e89b
When import is used, Ansible loads the imported role or tasks, which
results in plenty of skipped tasks that also consume time. With
includes, Ansible does not try to load the play, so no time is wasted
on skipping things.
Change-Id: I50b99306a52f1a2379e55f390653b274afd5885f
At the moment we don't restart services if the systemd unit file changes.
We knowingly prevent the systemd_service role handlers from executing
by providing `state: started`, as otherwise the service would be
restarted twice.
With this change we ensure that the role handlers also listen for systemd
unit changes.
Change-Id: I831f6d62f0d31384258571e01a4e7cdd75b73e2c
After RDO bumped the OVS version from 2.17 to 3.1, CentOS/Rocky fails
tempest testing because the systemd unit records the hostname [1],
while `ovs-vsctl add` in 3.1 actually behaves exactly like `set`, which
simply resets the configured hostname on each service restart. To avoid
that, we add the `--no-record-hostname` flag, which prevents this
behaviour.
[1] https://github.com/openvswitch/ovs/blob/branch-3.1/utilities/ovs-ctl.in#L51
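The flag is consumed by ovs-ctl at service start; conceptually, the invocation performed by the systemd unit looks like the following sketch:

```shell
# Illustrative: start OVS without recording the hostname in the database,
# so a hostname configured explicitly with `set` survives service restarts.
ovs-ctl start --no-record-hostname
```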
Change-Id: I8bee1850e3a120f7b76f586909e6d74361696e32
Related-Bug: #2013189
At the moment the systemd-udev package is being resolved to
systemd-boot-unsigned due to a CentOS packaging issue. Resolving it
would require providing the full path to a file that is provided by
systemd-udev but not by systemd-boot-unsigned, which is not a clean
workaround.
So we're disabling the CentOS LXC jobs for now and waiting for CentOS
to fix this. There are a bunch of bug reports, and systemd packaging
there is in quite a messy state overall.
Change-Id: I6e744d1e708df11204b3436c53ea6ed723683b18
Previously only /etc/neutron/neutron.conf was passed; this patch
uses the uwsgi pyargv option to pass multiple instances of
--config-file to the service.
Depends-On: https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/872195
Change-Id: Ifa1645a9585360e15142cac929e671e60e301bdc
Closes-Bug: 1987405