These are now in main_pre.yml and the role should be called separately
with tasks_from targeting all keystone hosts before being called
again with serial: settings appropriate for H/A deployments.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/843740
Change-Id: Iecb5567382d27ae6a875f8937f33aa7bb492252e
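The calling pattern described above could be sketched roughly as follows; the playbook layout and the serial value are illustrative, not the actual openstack-ansible playbooks:

```yaml
# Hypothetical sketch: run the pre-install tasks against all keystone
# hosts first, then run the full role a portion of hosts at a time.
- name: Keystone pre-install tasks on all hosts
  hosts: keystone_all
  tasks:
    - name: Run only the main_pre.yml tasks from the role
      include_role:
        name: os_keystone
        tasks_from: main_pre

- name: Full keystone deployment, serialized for H/A
  hosts: keystone_all
  serial: "{{ keystone_serial | default('20%') }}"
  roles:
    - role: os_keystone
```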
If keystone_security_txt_content is defined in user variables,
the keystone service will host this file at the following locations:
/security.txt and /.well-known/security.txt, as defined in
https://securitytxt.org/
Depends-On: https://review.opendev.org/766030
Change-Id: I3b418a7950cb1b89451e1f19d6e1c82b507aa1c0
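An illustrative user_variables.yml entry; the field values are example placeholders, with the format defined at securitytxt.org:

```yaml
# Example only: the content is served verbatim at /security.txt and
# /.well-known/security.txt when this variable is defined.
keystone_security_txt_content: |
  Contact: mailto:security@example.com
  Expires: 2030-01-01T00:00:00Z
```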
The Stein release candidate contained a bug which interfered with existing
keystone fernet keys. This task is no longer required.
Change-Id: I0d830e92bda5e92ee3046e7ac329358e91d941cf
We use the same condition, which defines which host some "service"
tasks should run against, several times. It is hard to keep it
consistent across the role, and ansible spends additional resources
evaluating it each time, so it is simpler and better for maintenance
to set a boolean variable which tells all tasks that should run
against only a single host whether they should run now.
Change-Id: Iac06d3f02b1c9ee5e3bfbd28043fbb70d8b1d328
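The pattern could look roughly like this; the fact name and the condition are illustrative assumptions:

```yaml
# Evaluate the "which host runs the one-off service tasks" condition
# once, store it as a boolean fact, then reuse the fact everywhere.
- name: Decide whether this host runs the one-off service tasks
  set_fact:
    _keystone_is_first_play_host: >-
      {{ inventory_hostname == (groups['keystone_all'] |
         intersect(ansible_play_hosts)) | first }}

- name: Bootstrap the keystone database
  command: keystone-manage db_sync
  when: _keystone_is_first_play_host | bool
```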
This applies only to source based installations.
The introduction of smart-sources in [1] created a code path
which deletes the /etc/keystone directory before symlinking it
into the keystone venv and creating the necessary config files.
Unfortunately this has the side effect of also deleting any fernet
and credential keys which pre-existed in the case of an upgrade from
Rocky. The original keys were deleted simultaneously across the whole
keystone_all group in a way which makes them unrecoverable in
the absence of a backup taken by the operator.
This change simplifies the smart-sources code to always keep the
keystone config files and fernet keys in the host /etc/keystone.
This ensures that the lifecycle of the fernet keys is not coupled
to the lifecycle of the keystone venvs.
In addition, a task is added to rescue any keys which have been
created in the keystone venv by installations from the Stein
release-candidate.
[1] https://review.opendev.org/#/c/588960/
Closes-Bug: 1833414
Change-Id: Ide611fd3d88e352367220f05dbcf4186ac20319f
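A sketch of the decoupling described above; the task shapes and the venv path expression are assumptions, not the role's literal tasks:

```yaml
# Keep config and fernet keys in the host's /etc/keystone and point
# the venv's etc directory at it, so keys survive venv rebuilds.
- name: Ensure the host config directory exists
  file:
    path: /etc/keystone
    state: directory
    owner: root
    group: keystone
    mode: "0750"

- name: Link the venv config path to the host config directory
  file:
    src: /etc/keystone
    dest: "{{ keystone_venv_path }}/etc/keystone"  # hypothetical variable
    state: link
```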
Current behavior only attempts to remove the keystone
directory from the first container and skips
additional containers past the first one. This
caused upgrades to break as the configs were
still present in any additional containers.
This ensures the keystone directory is removed on
all keystone containers when the install method
is source.
Change-Id: If588f9ed4bc5d0deeb2b9c1bbeea5e9eb5ce7c79
The files and templates we carry are almost always in a state of
maintenance. The upstream services maintain these files and
there's really no reason we need to carry duplicate copies of them. This
change removes all of the files we expect to get from the upstream
service. While the focus of this change is to remove the configuration
file maintenance burden, it also allows the role to execute faster.
* Source installs have the configuration files within the venv at
"<<VENV_PATH>>/etc/<<SERVICE_NAME>>". The role will now link the
default configuration path to this directory. When the service is
upgraded the link will move to the new venv path.
* Distro installs package all of the required configuration files.
To maintain our current capabilities to override configuration the
role will fetch files from the disk whenever an override is provided and
then push the fetched file back to the target using `config_template`.
Change-Id: I93cb6463ca1eb93ab7f4e7a3970a7de829efaf66
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
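For distro installs, the fetch-and-push flow could look roughly like this; `config_template` is the openstack-ansible action plugin named above, but the surrounding task shapes and variable names are illustrative:

```yaml
# Illustration: copy the packaged file off the target, then push it
# back through config_template so deployer overrides are merged in.
- name: Fetch the packaged configuration file
  fetch:
    src: /etc/keystone/keystone.conf
    dest: /tmp/keystone.conf.fetched
    flat: yes
  when: keystone_keystone_conf_overrides | default({}) | length > 0

- name: Push the file back with overrides applied
  config_template:
    src: /tmp/keystone.conf.fetched
    dest: /etc/keystone/keystone.conf
    owner: root
    group: keystone
    mode: "0640"
    config_overrides: "{{ keystone_keystone_conf_overrides }}"
    config_type: ini
  when: keystone_keystone_conf_overrides | default({}) | length > 0
```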
The systemd journal would normally be populated with the standard out of
a service; however, with the use of uwsgi this is not actually happening,
resulting in us only capturing the logs from the uwsgi process instead
of the service itself. This change implements journal logging in the
service config, which is part of oslo.log.
OSLO logging docs found here: <https://docs.openstack.org/oslo.log/3.28.1/journal.html>
Change-Id: I943bd5f1ac767f83d853cee09a5857f6f9f0efff
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
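The oslo.log option involved is `use_journal`; in keystone.conf it would look like:

```ini
[DEFAULT]
# Send logs to the systemd journal with native journal fields instead
# of relying on stdout capture through uwsgi.
use_journal = True
```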
We need to ensure that {{ keystone_credential_key_repository }} is
created along with the rest of directories in order to prevent problems
like the following one:
OSError: [Errno 2] No such file or directory: '/etc/keystone/credential-keys'
Depends-On: I5a78e2120e596d36629b4ba978b2b5df76b149b0
Change-Id: I394e069f9cbea7b85e5f6f53e3d3f9f54494dafe
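A sketch of the added directory task; the ownership and mode attributes here are assumptions in line with the role's conventions:

```yaml
- name: Ensure the credential key repository exists
  file:
    path: "{{ keystone_credential_key_repository }}"
    state: directory
    owner: "{{ keystone_system_user_name | default('keystone') }}"
    group: "{{ keystone_system_group_name | default('keystone') }}"
    mode: "0700"
```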
In order to allow an install and config split, but not
to leave ssh keys inside a pre-installed container,
the two tasks are split and tagged appropriately.
Change-Id: I468d1178179d70edfe4b19d40a9a32b35ad18258
Cleaning up the warnings like:
[WARNING]: when statements should not include jinja2 templating
delimiters such as {{ }} or {% %}. Found:
_apache2_module.stdout.find('{{ item.name }} already') == -1
Change-Id: I3180afb2f4a90179df1e3142eda906366ac4c9e8
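The fix is to drop the jinja2 delimiters from the `when` expression, since `when` is already evaluated in a jinja2 context. Schematically:

```yaml
# Before: triggers the warning because when is already jinja2
- name: Enable apache2 modules
  command: a2enmod "{{ item.name }}"
  when: _apache2_module.stdout.find('{{ item.name }} already') == -1

# After: reference the variable directly, concatenating with ~
- name: Enable apache2 modules
  command: a2enmod "{{ item.name }}"
  when: _apache2_module.stdout.find(item.name ~ ' already') == -1
```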
In https://review.openstack.org/452196 the use
of local facts was implemented, but there is no
guarantee that the facts folder exists. If this
is the case then the fact setting fails.
This patch ensures that the fact folder exists
before using it.
Change-Id: Ic0f9ba7406614870f337a965fa70993141e7a357
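A sketch of the guard described above; the fact file name and the recorded value are illustrative:

```yaml
# Make sure the local facts directory exists before writing an
# ini-format fact into it.
- name: Ensure the ansible local facts folder exists
  file:
    path: /etc/ansible/facts.d
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Record a local fact
  ini_file:
    dest: /etc/ansible/facts.d/openstack_ansible.fact
    section: keystone
    option: venv_tag
    value: "{{ keystone_venv_tag }}"  # hypothetical variable
```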
Fixing a bug that introduced duplicate when statements in
Ief28c6bed8daa38120207de61aba327c9fe49d3a.
Change-Id: I189325d2d8de17680a08ab1fefb2fe6628f58612
When a playbook runs os_keystone in serial, the SSH and fernet key
distribution are broken. This fixes both items allowing the role
to be run in a serialized playbook.
Change-Id: Ief28c6bed8daa38120207de61aba327c9fe49d3a
The security guide suggests that service config files
should be owned by root and in the service user group with 0640 permissions.
Change-Id: I5dc6e2c44ac5607fc1ff1c9fd2653eb23ef794bf
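An illustration of the recommended ownership and permissions when laying down a service config file; the file and template names are examples:

```yaml
- name: Deploy keystone.conf with hardened permissions
  template:
    src: keystone.conf.j2
    dest: /etc/keystone/keystone.conf
    owner: root
    group: keystone
    mode: "0640"
```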
According to the security guide config files should not be
reachable by any users except the owner and root.
Change-Id: I5caba528ae85a8209de7637ecfdd9407e10ea0df
Remove all tasks and variables related to toggling between installation
of keystone inside or outside of a Python virtual environment.
Installing within a venv is now the only supported deployment.
Additionally, a few changes have been made to make the creation of the
venv more resistant to interruptions during a run of the role.
* unarchiving a pre-built venv will now also occur when the venv
directory is created, not only after being downloaded
* virtualenv-tools is run against both pre-built and non pre-built venvs
to account for interruptions during or prior to unarchiving
Change-Id: Ic0a0dac84a26aba2ef0ce5410dc7c722570cd410
Implements: blueprint only-install-venvs
The numerous tags within the role have been condensed
to two tags: keystone-install and keystone-config
These tags have been chosen as they are namespaced
and cover the two major functions of the role.
Documentation has been updated to inform how each tag
influences the function of the role.
Change-Id: Iea4bff944ce0a35a4b1bc044171472ea44eda323
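With namespaced tags, a deployer can re-run only part of the role; for example (the playbook name is illustrative):

```shell
# Re-run only the configuration tasks
openstack-ansible os-keystone-install.yml --tags keystone-config

# Run only the installation tasks
openstack-ansible os-keystone-install.yml --tags keystone-install
```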
The sudoers file was being created in the pre-install tasks
which causes an incorrect configuration variable to be dropped
when the venv is not enabled. To correct this issue the
sudoers template is now dropped in the post install task file
after the bin_path fact has been set.
This change also removes the directory create task for heat, keystone,
glance, and swift because no sudoers files are needed for these services.
Re-Implementation-Of: https://review.openstack.org/#/c/277674/1
Change-Id: I609c9c12579dc1897787d19a1f58fe3e919b5e35
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This change makes it so that the OS_keystone role is an independent
role and can be installed / tested stand-alone.
Implements: blueprint independent-role-repositories
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This commit conditionally allows the os_keystone role to
install, build, and deploy within a venv. This is the new
default behavior of the role; however, the functionality
can be disabled.
Change-Id: Ie9e51926c96125a543e05eaa1912684fb01fecda
Implements: blueprint enable-venv-support-within-the-roles
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Presently all services use the single root virtual host within RabbitMQ.
While this is "OK" for small to mid-sized deployments, it
would be better to divide services into logical resource groups within
RabbitMQ, which will bring with it additional security. This change set
provides OSAD better compartmentalization of consumer services that use
RabbitMQ.
UpgradeImpact
DocImpact
Change-Id: I6f9d07522faf133f3c1c84a5b9046a55d5789e52
Implements: blueprint compartmentalize-rabbitmq
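A sketch of per-service compartmentalization using the standard rabbitmq modules; the vhost name and password variable are assumptions:

```yaml
# Each service gets its own vhost and a user restricted to that vhost.
- name: Create the keystone vhost
  rabbitmq_vhost:
    name: /keystone
    state: present

- name: Create the keystone rabbitmq user
  rabbitmq_user:
    user: keystone
    password: "{{ keystone_rabbitmq_password }}"
    vhost: /keystone
    configure_priv: ".*"
    read_priv: ".*"
    write_priv: ".*"
    state: present
```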
This change adds the bits necessary to configure Keystone as an
identity provider (IdP) for an external service provider (SP).
* New variables to configure Keystone as an identity provider are now
supported under a root `keystone_idp` variable. Example configurations
can be seen in Keystone's defaults file. This configuration includes
the location of the signing certificate, authentication endpoints and
list of allowed service providers.
* xmlsec1 is installed in the Keystone containers when IdP configuration
is enabled.
* The IdP metadata and signing certificate are generated and installed.
Implements: blueprint keystone-federation
Change-Id: I81455e593e3059633a55f7e341511d5ad9eba76f
This patch uses the authorized_key ansible module, as well as
the built-in "generate_ssh_key" flag for user creation, so that we can
avoid shelling out.
Additionally, this moves the key synchronisation to use ansible
variables instead of the memcache server.
Change-Id: I0072b8d0977ab9aea10dd95080756f6864612013
Closes-Bug: #1477512
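The module-based approach could be sketched as below; the home directory path and key size are illustrative assumptions:

```yaml
- name: Create the keystone user and generate its ssh key
  user:
    name: keystone
    group: keystone
    generate_ssh_key: yes
    ssh_key_bits: 2048

- name: Read the generated public key into a variable
  slurp:
    src: /var/lib/keystone/.ssh/id_rsa.pub  # path is an example
  register: keystone_pub_key

- name: Distribute the key via the authorized_key module
  authorized_key:
    user: keystone
    key: "{{ keystone_pub_key.content | b64decode }}"
```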
This change makes the use of fernet tokens production-ready. The
changes are as follows:
* Ensures that the keys are rotated on every playbook execution
* Removes the need to sync keys back to a deployment host when distributing
them to other keystone hosts.
* Creates an autonomous key rotation process that can rotate on the following
intervals [reboot, yearly, annually, monthly, weekly, daily, hourly] to all
hosts from any keystone fernet host.
* Fixes the section in `keystone.conf` which was named "fernet_key" instead
of "fernet_token".
Change-Id: I50f6a852930728631f5c681a8aa0f1321d7424ac
Related-Bug: #1463569
Closes-Bug: #1468256
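A rotation job on one of those intervals could be sketched with the cron module's `special_time` parameter, which accepts exactly the listed values; the variable name is an assumption:

```yaml
- name: Schedule autonomous fernet key rotation
  cron:
    name: keystone-fernet-rotate
    special_time: "{{ keystone_fernet_rotation | default('weekly') }}"
    user: keystone
    job: >-
      keystone-manage fernet_rotate
      --keystone-user keystone --keystone-group keystone
```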
This change adds a number of new tasks that are dependent on the value
of the Keystone token provider (keystone_token_provider) user variable.
If the keystone_token_provider user_variable is set to
keystone.token.providers.fernet.Provider then the playbooks will
appropriately create the fernet keys and distribute them to the rest of
the keystone containers.
This also implements key rotation for generated fernet keys similar to
how the os_nova roles implement key rotation.
Finally, we also need to build cryptography from master for now.
Currently, 0.8.x and 0.9.x use versions of cffi<1.0 which causes a bug
when used with mod_wsgi and Apache. This is fixed in cryptography master
and will be released in 1.0.
Closes-bug: 1463569
Change-Id: I8605e0490a8889d57c6b1b7e03e078fb0da978ab
Enables default domain support using ldap. This change moves the
ldap config to the default domain and enables domain specific
drivers.
Change-Id: I85f6610a25617fdea1fc216b53df0ab30260fed9
Closes-Bug: 1447768
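Domain-specific drivers are enabled in keystone.conf roughly as follows; the config directory path is the conventional default, shown here as an illustration:

```ini
[identity]
# Allow per-domain backend configuration files; the per-domain ldap
# settings then live in keystone.<domain_name>.conf inside this dir.
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains
```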
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream ansible best practices.
Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in-favor of a more
simplistic approach. This change duplicates code within the roles but
ensures that the roles only ever run within their own scope.
* All roles have been built using an ansible galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
OpenStack practices.
* Dynamically generated inventory is now more organized, this should assist
anyone who may want or need to dive into the JSON blob that is created.
In the inventory a properties field is used for items that customize containers
within the inventory.
* The environment map has been modified to support additional host groups to
enable the separation of infrastructure pieces. While the old infra_hosts group
will still work, this change allows for groups to be divided up into separate
chunks; e.g. deployment of a swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has all password/token
variables extracted into the separate file
etc/openstack_deploy/user_secrets.yml in order to allow separate
security settings on that file.
Items Excised:
* All of the roles have had the LXC logic removed from within them which
should allow roles to be consumed outside of the `os-ansible-deployment`
reference architecture.
Note:
* The directory rpc_deployment still exists and presently points at plays
containing a deprecation warning instructing the user to move to the standard
playbooks directory.
* While all of the rackspace-specific components and variables have been removed
or refactored, the repository still relies on an upstream mirror of
OpenStack-built python files and container images. This upstream mirror is
hosted at Rackspace at "http://rpc-repo.rackspace.com", though it is
not locked or tied to Rackspace-specific installations. This repository
contains all of the needed code to create and/or clone your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e