In recent Ansible versions, handlers are no longer triggered from
within the same handler flush, which means that triggering the mysql
restart the way we did no longer works. So instead of notifying
from inside handlers, we add a `listen` key to the tasks
that should be triggered by these newly produced notifications.
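A minimal sketch of the pattern, with hypothetical task and handler names:

```yaml
# handlers/main.yml -- instead of one handler notifying another,
# the restart handler subscribes to the notification via `listen`
- name: Restart mysql
  ansible.builtin.service:
    name: mysql
    state: restarted
  listen: mysql config changed

# tasks/main.yml -- the task emits the notification directly
- name: Template mysql configuration
  ansible.builtin.template:
    src: my.cnf.j2
    dest: /etc/mysql/my.cnf
  notify: mysql config changed
```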
This may be caused by bug [1], but an ansible-core version that
includes the backported fix still shows inconsistent behaviour.
[1] https://github.com/ansible/ansible/issues/80880
Change-Id: I0d97e0b90a8d18a7b69e880e4effa851238d51d1
The update of ansible-lint to version >=6.0.0 added a lot of new
linter rules that are enabled by default. In order to comply
with these rules we're applying changes to the role.
With that we also update the metadata to reflect the current state.
Change-Id: I8c316dd62ac22ccd9578bb0199ab8f25c0104f9a
`extra_lb_tls_vip_addresses` is a list of additional internal VIP
addresses, which is parsed into `haproxy_tls_vip_binds` without
an `interface` attribute.
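For illustration, a hypothetical override might look like this (the
addresses and the exact list structure are assumptions):

```yaml
# Extra internal VIPs; note there is no `interface` attribute,
# so they are bound without an interface qualifier.
extra_lb_tls_vip_addresses:
  - 203.0.113.10
  - 203.0.113.11
```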
Change-Id: I184021b65d6f3f28526c9fa09bea90a2baef77b2
At the moment PKI and haproxy listen for the same notification, which results
in haproxy trying to generate certs in inappropriate places. This patch starts
leveraging the `pki_handler_cert_installed` variable, which enables us to
trigger haproxy certificate assembly only when required and expected.
Co-Authored-By: Damian Dąbrowski <damian@dabrowski.cloud>
Depends-On: https://review.opendev.org/c/openstack/ansible-role-pki/+/875757
Change-Id: I66f648e5c3104f71d6601a493b09f8cdcc3332fc
HAProxy supports the use of map files for selecting backends, or
a number of other functions. See [1] and [2].
This patch adds a `maps` key to each service definition, allowing
fragments of a complete map to be defined across all the services,
with each service contributing some elements to the overall map file.
The service enabled/disabled and state flags are observed to add and
remove entries from the map file, and individual map entries can also
be marked as present/absent to make their inclusion conditional.
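A sketch of what a service-level map fragment could look like; the `maps`
key comes from this patch, while the surrounding field names and the entry
syntax are illustrative assumptions:

```yaml
haproxy_service_configs:
  - service:
      haproxy_service_name: glance_api
      # fragment contributed to the shared map file
      maps:
        - entries:
            - "images.example.com glance_api-back"
          state: present   # mark absent to drop the entry again
```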
[1] https://www.haproxy.com/blog/introduction-to-haproxy-maps/
[2] https://www.haproxy.com/documentation/hapee/latest/configuration/map-files/syntax/
Change-Id: I755c18a4d33ee69c42d68a50daa63614a2b2feb7
The ternary options appear to be evaluated whether they
are used or not, so item['interface'] is always accessed.
This patch checks for the key's presence before performing
ternary operations, or uses Ansible variables to postpone evaluation
until absolutely necessary.
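As a sketch: Jinja's inline `if` is evaluated lazily, unlike the arguments
to `ternary`, so a guard like the following never touches a missing key
(the variable name is hypothetical):

```yaml
# `ternary` evaluates both branch arguments eagerly, so
# item['interface'] would be accessed even when the key is absent;
# an inline `if` only dereferences it when the condition holds.
vip_interface_suffix: "{{ '%' ~ item['interface'] if 'interface' in item else '' }}"
```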
Change-Id: Ib1462c04d1a0820a37998f989e2ed16566f71f54
Right now we don't ensure the validity of the haproxy configuration, so
if it is incorrect the role will fail when attempting to reload haproxy.
It is worth adding a validation step and not proceeding when the
configuration is wrong.
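A possible shape for such a step, using haproxy's standard `-c` config
check (the config path is illustrative):

```yaml
- name: Validate haproxy configuration before reload
  ansible.builtin.command: haproxy -c -f /etc/haproxy/haproxy.cfg
  changed_when: false   # a pure check, never reports a change
```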
Change-Id: I54717d4f7230b8d8dff2d293592831cc88c51d24
In some user scenarios (like implementing DNS round-robin) it might be
useful to bind to 0.0.0.0 while at the same time not conflicting with
other services that are bound to the same ports. For that, we can specify
a specific interface on which haproxy will be bound to 0.0.0.0.
In netstat this is represented as `0.0.0.0%br-mgmt:5000`.
With that we also allow fully overriding `vip_binds` if the assumptions
that the role makes are not valid for some reason.
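For illustration, an override enabling this could look like the following
(the variable and field names are assumptions based on the description
above):

```yaml
# Bind to 0.0.0.0 but only on br-mgmt, which shows up in
# netstat as 0.0.0.0%br-mgmt:5000
haproxy_vip_binds:
  - address: 0.0.0.0
    interface: br-mgmt
```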
Change-Id: Ic4c58ef53abc5f454b6fbebbd87292a932d173ae
Right now we assume that a CA cert is always present. However, that might
not be the case for user-provided certs or Let's Encrypt, as their CAs
are already in ca-certificates.
Change-Id: I101f82c5e378596e76a160aacb34a9e1e7e0c123
We're providing an option to define settings per VIP
address. Currently it's used only for creating self-signed
certificates, signed with the internal CA, per each VIP. Follow-up
patches will also allow providing user certificates
per VIP, making it possible to cover internal and external
endpoints with different non-wildcard certs.
Change-Id: I0a9eb7689eb42b50daf5c94c874bb7429b271efe
The external PKI role can generate a self-signed CA and intermediate
certificate, and then create a server certificate for haproxy if
no defaults are overridden.
The new openstack_pki_* settings allow an external self-signed CA
to be used, while still creating valid haproxy server certificates from
that external CA in an openstack-ansible deployment.
The original behaviour of providing user-supplied certificates in the
haproxy_user_ssl_* variables will still work, disabling the generation
of certificates but using the external PKI role to just install the
supplied certs and keys.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/788031
Change-Id: I7482f55e991bacd9dccd2748c236dcd9d01124f3
The task fails if the host/container does not have rsyslog present. We
can simply skip the restart when it is not installed.
Change-Id: Ie4c9a42133c1f042c587cec48f53b4a87bd50952
This patch migrates the service from rsyslog to journald.
By this we mean dropping the rsyslog client installation and
setting the log address to /dev/log, which is served by journald.
Change-Id: I80dccb129e73fd58f7211bd56d36e55b55603c6a
When we restart HAProxy, we kill all the connections, which causes
all of the services to drop out. This is really not ideal and
causes things to be lost in the control plane.
This patch instead does a reload, which safely keeps the existing
clients connected until they drain, and then uses SO_REUSEPORT for
the new process.
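The resulting handler is essentially a state change from `restarted` to
`reloaded`; a sketch:

```yaml
# `reloaded` starts a new haproxy process that binds with
# SO_REUSEPORT while the old one keeps serving until clients drain
- name: Reload haproxy
  service:
    name: haproxy
    state: reloaded
```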
Change-Id: I502457f691ad66dfd68ace21ac1575cea23b538a
This patch changes include: to include_tasks: to avoid warnings
in Ansible 2.4+. It also removes systemd conditionals since all
supported distributions have systemd.
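The mechanical change, sketched with an illustrative task file name:

```yaml
# before (deprecated, warns on Ansible 2.4+):
# - include: haproxy_install.yml
# after:
- include_tasks: haproxy_install.yml
```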
Change-Id: Ic13886e8861d9fa00246eb849e6681d297291d2f
Consolidate distro package install tasks into a
single task using the package module and pass
the package list into the name instead of using
a with_items loop. Tidy up some other tasks to
reduce task file sprawl and consolidate some
task actions.
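The consolidated install task looks roughly like this (the package list
variable name is illustrative):

```yaml
# one task, no with_items loop: the package module accepts a list
- name: Install haproxy distro packages
  package:
    name: "{{ haproxy_distro_packages }}"
    state: present
```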
The minimum Ansible version is raised to 2.2 due to a
known bug [1] in Ansible's apt module which does not
update the cache properly if the cache update and the
install are combined in a single task.
[1] https://github.com/ansible/ansible-modules-core/issues/1497
Change-Id: I3717867208f1c379f0eda74e19c064a4b697cc53
The change adds logging for haproxy on localhost through the use
of rsyslog which is now a dependency. The logs will be stored in
/var/log/haproxy which will later be indexed and shipped to the
logging server. The change makes it possible to debug issues with
haproxy using specific log files instead of having to go digging
through syslog.
Change-Id: Id942ce159ea45703259f7aff0e5a85780a83370b
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
PEM generation should always start from the certificate closest
to the top of the chain. This commit fixes that.
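A sketch of assembling the PEM in the fixed order, with illustrative
file paths:

```yaml
# concatenate starting from the server certificate, then up the
# chain, with the private key last
- name: Create haproxy pem
  shell: >
    cat /etc/ssl/certs/haproxy.crt
        /etc/ssl/certs/intermediate.crt
        /etc/ssl/certs/ca.crt
        /etc/ssl/private/haproxy.key
    > /etc/haproxy/haproxy.pem
```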
Change-Id: I315bf4f818cc8eb606823a48843f1931e1779223
Closes-Bug: #1493421
This change brings changes similar to those made for horizon,
i.e.:
* The server key/certificate (and optionally a CA cert) are
distributed to all haproxy containers.
* Two new variables have been implemented for a user-provided
server key and certificate:
- haproxy_user_ssl_cert: <path to cert on deployment host>
- haproxy_user_ssl_key: <path to key on deployment host>
If either of these is not defined, then the missing cert/key
will be self-generated on each container. No distribution
of the self-generated certificates across all the hosts
is planned.
* A new variable has been implemented for a user-provided CA
certificate:
- haproxy_user_ssl_ca_cert: <path to cert on deployment host>
* The 'haproxy_cert_regen' variable has been renamed
to 'haproxy_ssl_self_signed_regen' to have the same
naming convention as horizon.
* A change of certificates, whether user-provided
or role-generated, triggers PEM generation and a server restart.
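Example user configuration (the paths are illustrative):

```yaml
haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/haproxy.crt
haproxy_user_ssl_key: /etc/openstack_deploy/ssl/haproxy.key
haproxy_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ca.crt
```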
DocImpact
Closes-Bug: #1487380
Change-Id: I0c88d197d8ede820ac4e0388e67a2da06b003c2b
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream ansible best practices.
Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler
approach. This change duplicates code within the roles but
ensures that the roles only ever run within their own scope.
* All roles have been built using an ansible galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
OpenStack practices.
* Dynamically generated inventory is now more organized; this should assist
anyone who may want or need to dive into the JSON blob that is created.
In the inventory, a properties field is used for items that customize
containers within the inventory.
* The environment map has been modified to support additional host groups to
enable the separation of infrastructure pieces. While the old infra_hosts
group will still work, this change allows for groups to be divided up into
separate chunks; e.g. deployment of a swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has all password/token
variables extracted into the separate file
etc/openstack_deploy/user_secrets.yml in order to allow separate
security settings on that file.
Items Excised:
* All of the roles have had the LXC logic removed from within them which
should allow roles to be consumed outside of the `os-ansible-deployment`
reference architecture.
Note:
* the directory rpc_deployment still exists and is presently pointed at plays
containing a deprecation warning instructing the user to move to the standard
playbooks directory.
* While all of the Rackspace-specific components and variables have been
removed and/or refactored, the repository still relies on an upstream mirror
of OpenStack-built python files and container images. This upstream mirror is
hosted at Rackspace at "http://rpc-repo.rackspace.com", though it is
not locked or tied to Rackspace-specific installations. This repository
contains all of the needed code to create and/or clone your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e