This change combines the previous puppet and docker files
into a single file that performs the docker service installation
and configuration. With this patch the baremetal version of
nova has been removed.
Change-Id: Ic577851f8d865d5eec41dbfb00c27520bedc3fdb
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal version of database service MySQL Client
has been removed.
Change-Id: I855524f30cfe3c8cdab6c52a67fba0dee157103d
Related-Blueprint: services-yaml-flattening
Previously the kolla config was merging the existing apache configuration
files in the container with our generated ones. This could lead to extra
configurations in the containers that we were not expecting. This change
updates the kolla configs to not merge the httpd conf.d folder, so we only
end up with our expected configurations.
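A hedged sketch of the shape of such a kolla config entry (the exact keys in the actual patch may differ); the point is that the conf.d directory is copied without merging, so stale files in the container are not kept:

```json
{
  "config_files": [
    {
      "source": "/var/lib/kolla/config_files/src/etc/httpd/conf.d",
      "dest": "/etc/httpd/conf.d",
      "merge": false,
      "preserve_properties": true
    }
  ]
}
```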
Change-Id: Ibb9bbeb12e73b2cf8887554f461873e42532edd7
Related-Bug: 1813084
Nova now allows use of templated urls in the database and mq
connections which will allow static configuration elements to be
applied to the urls read from the database per-node. This should
be a simpler and less obscure method of configuring things like
the per-node bind_address necessary for director's HA arrangement.
This patch addresses the templated transport_url as part 2.
Nova support added here:
https://review.openstack.org/#/c/578163/
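The general idea can be sketched as follows. This is an illustration only, not nova's implementation: a URL template and per-node static elements (like bind_address) are combined on each node.

```python
# Illustrative sketch only, not nova's implementation: the idea behind
# templated URLs is that a URL template and per-node static elements
# (like bind_address) are combined on each node.
def fill_template(templated_url, node_vars):
    """Substitute {placeholders} in a URL template with per-node values."""
    return templated_url.format(**node_vars)

# A template with placeholders for the node-specific parts:
TEMPLATE = "rabbit://{username}:{password}@{hostname}:5672/?bind_address={bind_address}"

# Each node fills in its own values from local configuration:
url = fill_template(TEMPLATE, {
    "username": "guest",
    "password": "secret",
    "hostname": "rabbit.example.com",
    "bind_address": "192.0.2.10",
})
print(url)  # rabbit://guest:secret@rabbit.example.com:5672/?bind_address=192.0.2.10
```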
Change-Id: I889dcf632b3306ce7e56ac5394884c7c72481833
Related-Bug: 1808134
Many services currently set an `is_bootstrap_node` fact, meaning they
override each other's results when the fact is set. As long as the fact
setting doesn't belong to a particular step but is executed on every
step, nothing bad happens, as the correct is_bootstrap_node
setting directly precedes any service upgrade tasks. However, we
intend to put the fact setting into step 0 in change
Ib04b051e8f4275e06be0cafa81e2111c9cced9b7, and at that point the name
collision would break upgrades (only one service would "win" in
setting the is_bootstrap_node fact).
This patch changes the is_bootstrap_node facts in upgrade_tasks to use
per-service naming.
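As an illustration of the renaming (task shape and variable names assumed, not copied from the patch):

```yaml
# Before: every service sets the same fact name, so the last service
# to run overwrites the others.
- name: Check if this is the bootstrap node
  set_fact:
    is_bootstrap_node: "{{ nova_api_short_bootstrap_node_name | lower == ansible_hostname | lower }}"

# After: a per-service fact name avoids the collision.
- name: Check if this is the nova_api bootstrap node
  set_fact:
    nova_api_is_bootstrap_node: "{{ nova_api_short_bootstrap_node_name | lower == ansible_hostname | lower }}"
```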
Note that fast_forward_upgrade_tasks use their own is_bootstrap_node
logic. We've uncovered some weirdness there while looking into the
is_bootstrap_node issue, but the fix is not a low-hanging fruit and
likely we'll be completely redoing the FFU tasks for the Q->T
upgrade. So the FFU tasks are left alone for now.
Change-Id: I9c585d3cb282b7e4eb0bacb3cf6909e04a9a495e
Closes-Bug: #1810408
Nova now allows use of templated urls in the database and mq
connections which will allow static configuration elements to be
applied to the urls read from the database per-node. This should
be a simpler and less obscure method of configuring things like
the per-node bind_address necessary for director's HA arrangement.
This patch addresses the templated DB urls as part 1.
Nova support added here:
https://review.openstack.org/#/c/578163/
Related-Bug: 1808134
Co-Authored-By: Martin Schuppert <mschuppert@redhat.com>
Change-Id: If30b4647bca210663a22fd653e752d4d57345bdd
If compute nodes are deployed without deploying/updating the controllers,
the computes will not have cellv2 mappings, as the mapping is created in the
controller deploy steps (nova-api).
This can happen if the controller nodes are blacklisted during a compute scale
out. It's also likely to be an issue going forward if the deployment is staged
(e.g. split control plane).
This change moves the cell_v2 discovery logic to the nova-compute/nova-ironic
deploy step.
Closes-bug: 1786961
Change-Id: I12a02f636f31985bc1b71bff5b744d346286a95f
We don't need upgrade_tasks that stop systemd services, since all
services are now containerized.
However, we decided to keep the tasks that remove the rpms in case some
deployments didn't clean them up in previous releases; they can still
do it now.
Change-Id: I6abdc9e37966cd818306f7af473958fd4662ccb5
Related-Bug: #1806733
When an update changes NovaPassword, we need to run the
nova_api_ensure_default_cell container for it to be able to change
the db_connection for the cells in the nova_api db. Otherwise
nova_api_discover_hosts, which runs all the time, fails with a DB error
because the connection string in the database would not change.
Currently, we don't mount either /var/lib/config-data/nova or
/var/lib/config-data/puppet-generated/nova, hence the
TRIPLEO_CONFIG_HASH is not generated for the container and it
does not run during update, and maybe not during upgrade either.
Change-Id: I0a972796e45a8df614619c95e9d9be9af183b4e5
Closes-Bug: #1805803
For all containers where restart=always is configured and that are not
managed by Pacemaker (this part will be handled later), we remove these
containers at step 1 of post_upgrade_tasks.
Change-Id: Id446dbf7b0a18bd1d4539856e6709d35c7cfa0f0
That task included a validation in the form of "if the container is
not there, fail". This was done to ensure that the online database
migration was run even in the case where the upgrade was run on an
environment where some container was stopped for some reason.
This validation proves to be problematic, as having the related host
services stopped and the container not running is a "legitimate" state
during the re-run of a failed upgrade.
That validation then completely blocks the upgrade.
We remove the validation part of that task: the case it guards against
is very unlikely, it belongs in validation tasks run outside of the
upgrade, and it blocks a valid path from working.
Change-Id: I6ca70cb913f7cdd6fc4fbcc70698992e2074dc9c
Closes-Bug: #1804459
During upgrade we may have container_cli be Podman but the containers
may still be running on Docker. Handle this situation in the upgrade
tasks, which are the last-resort online data migration if the user
forgot to trigger them earlier, as users seem to be hitting this issue.
We must support both options at the same time, because the upgrade
code must be idempotent (re-runnable). When running upgrade 1st time,
the containers will be running in Docker, when re-running the upgrade
(e.g. because a part of it failed), the containers will be running in
Podman.
Once we converge onto a single solution and do not have to support
migration, this commit can be reverted.
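A minimal sketch of handling both runtimes (container name and command assumed):

```shell
# The container may live under either runtime mid-upgrade; use whichever
# CLI actually knows about it.
if docker inspect nova_api >/dev/null 2>&1; then
  CLI=docker
else
  CLI=podman
fi
"$CLI" exec nova_api nova-manage db online_data_migrations
```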
Change-Id: I933ce754f081ee87ec53d5f8d9c901ab71dceb1e
Closes-Bug: #1802085
The current approach has several disadvantages:
- Requires shelling out to the hiera CLI, and is coupled to the puppet hieradata
- The bootstrap_nodeid is only unique per Role, not per service, so if you
deploy a service spanning more than one role it will evaluate true for
every role, not only once.
Instead let's use the per-service short_bootstrap_node_name, which is
now available directly via the ansible inventory, ref
https://review.openstack.org/#/c/605046/
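A hedged sketch of what the inventory-based check looks like (variable and task names illustrative):

```yaml
# No hiera CLI call needed: compare this host against the per-service
# bootstrap node name that the ansible inventory already provides.
- name: Run a task only on the nova_api bootstrap node
  debug:
    msg: "Running bootstrap-only step"
  when: nova_api_short_bootstrap_node_name | lower == ansible_hostname | lower
```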
This is the first part of a cleanup for inconsistent handling of
bootstrap node evaluation, triggered by bug #1792613
Change-Id: Iefe4a37e8ced6f4e9018ae0da00e2349390d4927
Partial-Bug: #1792613
Depends-On: Idcee177b21e85cff9e0bf10f4c43c71eff9364ec
This has been unused for a while, and even deprecation was scheduled
(although the patch never merged [1]). So, in order to stop folks
getting confused with this, it's being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Iada64874432146ef311682f26af5990469790ed2
This will pull the online data migrations out of the upgrade
maintenance window and let them be performed after the main upgrade
phase while the cloud is already operational.
The online part of the service upgrades can be run using:
openstack overcloud external-upgrade run --tags online_upgrade
or per-service like:
openstack overcloud external-upgrade run --tags online_upgrade_nova
openstack overcloud external-upgrade run --tags online_upgrade_cinder
openstack overcloud external-upgrade run --tags online_upgrade_ironic
Change-Id: I35c8d9985df21b3084fba558687e1f408e5a0878
Closes-Bug: #1793332
This will allow proper access from the containers without any
new SELinux policy
Depends-On: Ie9f5d3b6380caa6824ca940ca48ed0fcf6308608
Change-Id: I284126db5dcf9dc31ee5ee640b2684643ef3a066
We always run DB sync in deploy_tasks, ensuring that the database is
up to date. We should follow up with online data migrations
too.
Doing this via docker_config has 2 purposes:
* We can easily ensure this happens in a container with the right
config files mounted.
* We can even apply this via a minor update. This is important because
we'll have to backport this all the way to Pike and apply it there
using Pike containers, before upgrading to Queens containers.
There's an additional issue to consider: in the Puppet service variant we
ran the online migrations for release X before upgrading to X+1, but in
the proposed Docker variant, migrations for X run with the upgrade to
X. This means that when switching from non-containerized to
containerized, we'll need to run migrations twice to correctly switch
between the aforementioned approaches.
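For reference, the online migrations the container runs boil down to nova-manage's online migration command (wrapper details omitted):

```shell
nova-manage db online_data_migrations
```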
Change-Id: I2eb6c7c42d7e7aea4a78a892790e42bc5371f792
Closes-Bug: #1790474
This has been unused for a while, and even deprecation was scheduled
(although the patch never merged [1]). So, in order to stop folks
getting confused with this, it's being removed.
[1] https://review.openstack.org/#/c/543871/
Change-Id: Icc6b51044ccc826f5b629eb1abd3342813ed84c0
Add block to step_0 for all services
Add block to step_6 for neutron-api.yaml
Add block to step_1 for nova-compute.yaml
Change-Id: Ib4c59302ad5ad64f23419cd69ee9b2a80333924e
Problem: RHEL and CentOS 8 will deprecate the usage of Yum.
From DNF release note:
DNF is the next upcoming major version of yum, a package
manager for RPM-based Linux distributions.
It roughly maintains CLI compatibility with YUM and defines a strict API for
extensions.
Solution: use the "package" Ansible module instead of "yum".
The "package" module is smarter when it comes to detecting which package
manager runs on the system. The goal of this patch is to support both
yum and dnf (dnf will be the default in RHEL/CentOS 8) from a single
ansible module.
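For example, a task of the shape below runs unchanged on both yum- and dnf-based systems (package name illustrative):

```yaml
- name: Remove a package with whichever package manager the distro provides
  package:
    name: openstack-nova-common
    state: absent
```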
Change-Id: I8e67d6f053e8790fdd0eb52a42035dca3051999e
In case the nova-api service is not running on the MySQL
master node, the FFU tasks will fail, as the node might not have
MySQL installed.
Avoid executing Nova DB tasks during FFU if MySQL is not installed;
point them at the MySQL server instead.
Resolves: rhbz#1593910
Closes-bug: 1780425
Change-Id: I02bc48d535707d579ecd590f970b1a08962a0111
This would result in the failure of the online_data_migrations command
during the Ocata to Pike upgrade. The following Pike to Queens upgrade
would hide this failure by running both the sync and migrations again.
Change-Id: I51f40254acee435ed4e60c0e97b5ced86fd67fc2
Closes-bug: #1775868
To avoid redefining the variable multiple times in each service, we
split httpd_enabled into a per-service fact set in a step|int == 0 block.
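A hedged sketch of the per-service fact (command and names assumed):

```yaml
- name: Check if nova_api is running under httpd
  command: systemctl is-enabled --quiet httpd
  failed_when: false
  register: httpd_enabled_check
  when: step|int == 0

- name: Set the per-service httpd fact
  set_fact:
    nova_api_httpd_enabled: "{{ httpd_enabled_check.rc == 0 }}"
  when: step|int == 0
```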
Change-Id: Icea0865aadd9253ead464247bf78f45842b3a578
Since we moved services into containers, their logs aren't in the old
location but in /var/log/containers/<service>. This patch fixes the
generated Hiera hash used by Fluentd for its configuration.
Regarding the Designate config service: some of the yaml doesn't use the
`service_config_settings` parameter at all; those will need to be updated
accordingly once it's supported.
Co-Authored-By: Thomas Herve <therve@redhat.com>, Steven Hardy <shardy@redhat.com>
Change-Id: I1bc0930de4053dc1c34b50477e82d9ccdab7ae2e
Closes-Bug: 1769051
Related-Bug: 1674715
The new master branch should now point to rocky.
So, HOT templates should specify that they might contain features
for the rocky release [1].
Also, this submission updates the yaml validation to use only the latest
heat_version alias. There are cases in which we will need to set
the version for specific templates, i.e. mixed versions, so a variable
is added to assign specific templates to specific heat_version
aliases, avoiding the introduction of errors by bulk-replacing
the old version in new releases.
[1]: https://docs.openstack.org/heat/latest/template_guide/hot_spec.html#rocky
Change-Id: Ib17526d9cc453516d99d4659ee5fa51a5aa7fb4b
Instead of using host_prep_tasks (which are part of deployment tasks),
we'll use the upgrade tasks that are now well known and tested in
previous releases, when we containerized the overcloud.
Depends-On: Id25e6280b4b4f060d5e3f78a50ff83aaca9e6b1a
Change-Id: Ic199c7d431e155e2d37996acd0d7b924d14af2b7
Use the existing nova-compute cellv2 discovery logic for nova-ironic too, now
that we have the --by-service flag.
The nova_api_discover_hosts.sh script will now wait (up to 10 minutes) for all
nova-compute and nova-ironic services to register, then run host discovery
with --by-service to create host mappings for all services. We no longer need
ironic nodes to be deployed on the nova-ironic services for discovery to work.
We also no longer need to enable the periodic job.
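With the --by-service flag from the related nova change, the discovery call in the script reduces to (wait loop elided):

```shell
# Create host mappings for every registered nova-compute/nova-ironic
# service, whether or not any ironic node is deployed on it yet.
nova-manage cell_v2 discover_hosts --by-service
```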
Related nova change Ie9f064cb9caf6dcba2414acb24d12b825df45fab
Related-Bug: #1755602
Change-Id: I723237ae7285f3babd6eceb1ce7da4e2734d1e4f
Using host_prep_tasks interface to handle undercloud teardown before we
run the undercloud install.
The reason for not using upgrade_tasks is that the existing tasks were
created for the overcloud upgrade first, and there is too much logic
right now to easily re-use the bits for the undercloud. In the
future, we'll probably use upgrade_tasks for both the undercloud and
overcloud but right now this is not possible and a simple way to move
forward was to implement these tasks that work fine for the undercloud
containerization case.
Workflow will be:
- Services will be stopped and disabled (except mariadb)
- Neutron DB will be renamed, then mariadb stopped & disabled
- Remove cron jobs
- All packages will be upgraded with yum update.
Change-Id: I36be7f398dcd91e332687c6222b3ccbb9cd74ad2
We need to check the running services only on step 0. We also need
to provide the correct nova_cell0 DB url.
Change-Id: I1817f4da5578005c95570b77ce5e85380ac3ecf6
fast_forward_upgrade_tasks for Nova covering Ocata and Pike.
- Service status check
- Stop services when updating from Ocata to Pike
- Update nova packages
- Db sync
Change-Id: Iff416668f8b8d15bdf7712f09e145eb7c7a6b83e
If we use variables defined in a later step in a conditional before
checking which step we are on, we will fail.
Resolves: rhbz#1535457
Closes-Bug: #1743764
Change-Id: Ic21f6eb5c4101f230fa894cd0829a11e2f0ef39b
This converts "tags: stepN" to "when: step|int == N" for the direct
execution as an ansible playbook, with a loop variable 'step'.
The tasks all include the explicit cast |int.
This also adds a set_fact task for handling of the package removal
with the UpgradeRemovePackages parameter (no change to the interface)
The yaml-validate also now checks for duplicate 'when:' statements
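Schematically, the conversion looks like this (task bodies illustrative):

```yaml
# Before: selection via ansible tags
- name: Stop openstack-nova-compute
  tags: step1
  service:
    name: openstack-nova-compute
    state: stopped

# After: explicit conditional on the 'step' loop variable, with the |int cast
- name: Stop openstack-nova-compute
  when: step|int == 1
  service:
    name: openstack-nova-compute
    state: stopped
```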
Q upgrade spec @ Ibde21e6efae3a7d311bee526d63c5692c4e27b28
Related Blueprint: major-upgrade-workflow
[0]: 394a92f761/tripleo_common/utils/config.py (L141)
Change-Id: I6adc5619a28099f4e241351b63377f1e96933810
Resolves an issue during scale out where the Nova init
container that runs discover hosts wasn't executing on a Heat
stack update if the container name and configs all stayed the same.
Change-Id: Ie2ecd3dbddb1cf3ee5bba6f7b33e11bc9b6b8b4e
Closes-bug: 1733966