diff --git a/deploy-guide/source/_extra/.htaccess b/deploy-guide/source/_extra/.htaccess index 74a4d00..8495003 100644 --- a/deploy-guide/source/_extra/.htaccess +++ b/deploy-guide/source/_extra/.htaccess @@ -44,3 +44,8 @@ RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/percona- RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/charmhub-migration.html$ /charm-guide/$1/project/procedures/charmhub-migration.html RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/ovn-migration.html$ /charm-guide/$1/project/procedures/ovn-migration.html RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/upgrade-special.html$ /charm-guide/$1/project/issues-and-procedures.html +RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/upgrade-charms.html$ /charm-guide/$1/admin/upgrades/charms.html +RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/upgrade-series.html$ /charm-guide/$1/admin/upgrades/series.html +RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/upgrade-series-openstack.html$ /charm-guide/$1/admin/upgrades/series-openstack.html +RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/upgrade-openstack.html$ /charm-guide/$1/admin/upgrades/openstack.html +RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/upgrade-overview.html$ /charm-guide/$1/admin/upgrades/overview.html diff --git a/deploy-guide/source/configure-openstack.rst b/deploy-guide/source/configure-openstack.rst index 30a67f6..3ca221f 100644 --- a/deploy-guide/source/configure-openstack.rst +++ b/deploy-guide/source/configure-openstack.rst @@ -403,11 +403,12 @@ You now have a functional OpenStack cloud managed by MAAS-backed Juju. * The entire suite of charms used to manage the cloud should be upgraded to the latest stable charm revision before any major change is made to the cloud (e.g. migrating to new charms, upgrading cloud services, upgrading - machine series). See :doc:`Charms upgrade ` for details. + machine series). See :doc:`cg:admin/upgrades/charms` in the Charm Guide + for details. * The Juju machines that comprise the cloud should all be running the same series (e.g. 'focal' or 'jammy', but not a mix of the two). See - :doc:`Series upgrade ` for details. + :doc:`cg:admin/upgrades/series` in the Charm Guide for details. As next steps, consider browsing these documentation sources: diff --git a/deploy-guide/source/index.rst b/deploy-guide/source/index.rst index 5130d48..481ad0e 100644 --- a/deploy-guide/source/index.rst +++ b/deploy-guide/source/index.rst @@ -21,15 +21,6 @@ OpenStack Charms usage. To help improve it you can `file an issue`_ or install-openstack configure-openstack -.. toctree:: - :caption: Upgrades - :maxdepth: 1 - - Overview - upgrade-charms - upgrade-openstack - upgrade-series - .. LINKS .. _file an issue: https://bugs.launchpad.net/charm-deployment-guide/+filebug .. _submit a contribution: https://opendev.org/openstack/charm-deployment-guide diff --git a/deploy-guide/source/install-openstack.rst b/deploy-guide/source/install-openstack.rst index 8733d6e..aeb87ad 100644 --- a/deploy-guide/source/install-openstack.rst +++ b/deploy-guide/source/install-openstack.rst @@ -41,9 +41,8 @@ nodes will be used during the install of each OpenStack application. Note that some applications are not part of the OpenStack project per se and therefore do not apply (exceptionally, Ceph applications do use this method). 
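
As a hedged illustration (cinder, ceph-mon, and the 'focal-victoria' release
are assumptions for the sketch, not prescribed values), a cloud archive
release is selected at deploy time like so:

.. code-block:: none

   juju deploy --config openstack-origin=cloud:focal-victoria cinder
   juju deploy --config source=cloud:focal-victoria ceph-mon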
-See :ref:`Perform the upgrade ` on the :doc:`OpenStack -Upgrade ` page for more details on cloud archive releases -and how they are used when upgrading OpenStack. +See :ref:`cg:perform_the_upgrade` in the Charm Guide for more details on cloud +archive releases and how they are used when upgrading OpenStack. .. important:: diff --git a/deploy-guide/source/media/ubuntu-openstack-release-cycle.png b/deploy-guide/source/media/ubuntu-openstack-release-cycle.png deleted file mode 100644 index 6455246..0000000 Binary files a/deploy-guide/source/media/ubuntu-openstack-release-cycle.png and /dev/null differ diff --git a/deploy-guide/source/upgrade-charms.rst b/deploy-guide/source/upgrade-charms.rst deleted file mode 100644 index 04c1bed..0000000 --- a/deploy-guide/source/upgrade-charms.rst +++ /dev/null @@ -1,272 +0,0 @@ -============== -Charms upgrade -============== - -The Juju command to use is :command:`upgrade-charm`. For extra guidance see -`How to upgrade applications`_ in the Juju documentation. - -Please read the following before continuing: - -* :doc:`upgrade-overview` -* :doc:`cg:release-notes/index` -* :doc:`cg:project/issues-and-procedures` - -.. note:: - - A charm upgrade affects all corresponding units; upgrading on a per-unit - basis is not currently supported. - -Upgrade order -------------- - -There is no special order in which to upgrade the charms. The order described -here is based on the upgrade order for :ref:`OpenStack upgrades -`, which, in turn, is the order used by internal -testing. - -.. note:: - - Although it may be possible to upgrade some charms concurrently it is - recommended that charm upgrades be performed sequentially (i.e. one at a - time). Verify a charm upgrade before moving on to the next. - -The general order is: - -#. all principal charms -#. all subordinate charms - -The precise order within the group of principal charms is shown in the below -table. - -.. note:: - - At this time, only stable charms are listed in the upgrade order table. - -.. list-table:: Principal charms - :header-rows: 1 - :widths: auto - - * - Order - - Charm - - * - 1 - - `percona-cluster`_ or `mysql-innodb-cluster`_ - - * - 2 - - `rabbitmq-server`_ - - * - 3 - - `ceph-mon`_ - - * - 4 - - `keystone`_ - - * - 5 - - `aodh`_ - - * - 6 - - `barbican`_ - - * - 7 - - `ceilometer`_ - - * - 8 - - `ceph-fs`_ - - * - 9 - - `ceph-radosgw`_ - - * - 10 - - `cinder`_ - - * - 11 - - `designate`_ - - * - 12 - - `designate-bind`_ - - * - 13 - - `glance`_ - - * - 14 - - `gnocchi`_ - - * - 15 - - `heat`_ - - * - 16 - - `manila`_ - - * - 17 - - `manila-ganesha`_ - - * - 18 - - `neutron-api`_ - - * - 19 - - `neutron-gateway`_ or `ovn-dedicated-chassis`_ - - * - 20 - - `ovn-central`_ - - * - 21 - - `placement`_ - - * - 22 - - `nova-cloud-controller`_ - - * - 23 - - `nova-compute`_ - - * - 24 - - `openstack-dashboard`_ - - * - 25 - - `ceph-osd`_ - - * - 26 - - `swift-proxy`_ - - * - 27 - - `swift-storage`_ - - * - 28 - - `octavia`_ - -Upgrade testing for subordinate charms does not follow a prescribed order. Once -all the principal charms have been processed all the subordinate charms can -then be upgraded in any order. - -Perform the upgrade -------------------- - -Prior to upgrading a charm, say keystone, a (partial) output to :command:`juju -status` may look like: - -.. 
code-block:: console - - App Version Status Scale Charm Store Rev OS Notes - keystone 15.0.0 active 1 keystone jujucharms 306 ubuntu - - Unit Workload Agent Machine Public address Ports Message - keystone/0* active idle 3/lxd/1 10.248.64.69 5000/tcp Unit is ready - -Here, as deduced from the Keystone **service** version of '15.0.0', the cloud -is running Stein. The 'keystone' **charm** however shows a revision number of -'306'. Upon charm upgrade, the service version will remain unchanged but the -charm revision is expected to increase in number. - -So to upgrade the keystone charm: - -.. code-block:: none - - juju upgrade-charm keystone - -The upgrade progress can be monitored via :command:`juju status`. Any -encountered problem will surface as a message in its output. This sample -(partial) output reflects a successful upgrade: - -.. code-block:: console - - App Version Status Scale Charm Store Rev OS Notes - keystone 15.0.0 active 1 keystone jujucharms 309 ubuntu - - Unit Workload Agent Machine Public address Ports Message - keystone/0* active idle 3/lxd/1 10.248.64.69 5000/tcp Unit is ready - -This shows that the charm now has a revision number of '309' but Keystone -itself remains at '15.0.0'. - -.. caution:: - - Any software changes that may have (exceptionally) been made to a charm - currently running on a unit will be overwritten by the target charm during - the upgrade. - -Upgrade target revisions -~~~~~~~~~~~~~~~~~~~~~~~~ - -By default the :command:`upgrade-charm` command will upgrade a charm to its -latest stable revision (a possible multi-step upgrade). This means that -intervening revisions can be conveniently skipped. Use the ``--revision`` -option to specify a target revision. - -The current revision can be discovered via :command:`juju status` output (see -column 'Rev'). For the ceph-mon charm: - -.. code-block:: console - - App Version Status Scale Charm Store Rev OS Notes - ceph-mon 13.2.8 active 3 ceph-mon jujucharms 48 ubuntu - -.. important:: - - As stated earlier, any kind of upgrade should first be tested in a - pre-production environment. OpenStack charm upgrades have been tested for - single-step upgrades only (N+1). - -.. LINKS -.. _How to upgrade applications: https://juju.is/docs/olm/upgrade-applications -.. _Release Notes: https://docs.openstack.org/charm-guide/latest/release-notes.html - -.. _aodh: https://opendev.org/openstack/charm-aodh/ -.. _barbican: https://opendev.org/openstack/charm-barbican/ -.. _barbican-vault: https://opendev.org/openstack/charm-barbican-vault/ -.. _ceilometer: https://opendev.org/openstack/charm-ceilometer/ -.. _ceilometer-agent: https://opendev.org/openstack/charm-ceilometer-agent/ -.. _cinder: https://opendev.org/openstack/charm-cinder/ -.. _cinder-backup: https://opendev.org/openstack/charm-cinder-backup/ -.. _cinder-backup-swift-proxy: https://opendev.org/openstack/charm-cinder-backup-swift-proxy/ -.. _cinder-ceph: https://opendev.org/openstack/charm-cinder-ceph/ -.. _designate: https://opendev.org/openstack/charm-designate/ -.. _glance: https://opendev.org/openstack/charm-glance/ -.. _heat: https://opendev.org/openstack/charm-heat/ -.. _keystone: https://opendev.org/openstack/charm-keystone/ -.. _keystone-ldap: https://opendev.org/openstack/charm-keystone-ldap/ -.. _keystone-saml-mellon: https://opendev.org/openstack/charm-keystone-saml-mellon/ -.. _manila: https://opendev.org/openstack/charm-manila/ -.. _manila-ganesha: https://opendev.org/openstack/charm-manila-ganesha/ -.. 
_masakari: https://opendev.org/openstack/charm-masakari/ -.. _masakari-monitors: https://opendev.org/openstack/charm-masakari-monitors/ -.. _mysql-innodb-cluster: https://opendev.org/openstack/charm-mysql-innodb-cluster -.. _mysql-router: https://opendev.org/openstack/charm-mysql-router -.. _neutron-api: https://opendev.org/openstack/charm-neutron-api/ -.. _neutron-api-plugin-arista: https://opendev.org/openstack/charm-neutron-api-plugin-arista -.. _neutron-api-plugin-ovn: https://opendev.org/openstack/charm-neutron-api-plugin-ovn -.. _neutron-dynamic-routing: https://opendev.org/openstack/charm-neutron-dynamic-routing/ -.. _neutron-gateway: https://opendev.org/openstack/charm-neutron-gateway/ -.. _neutron-openvswitch: https://opendev.org/openstack/charm-neutron-openvswitch/ -.. _nova-cell-controller: https://opendev.org/openstack/charm-nova-cell-controller/ -.. _nova-cloud-controller: https://opendev.org/openstack/charm-nova-cloud-controller/ -.. _nova-compute: https://opendev.org/openstack/charm-nova-compute/ -.. _octavia: https://opendev.org/openstack/charm-octavia/ -.. _octavia-dashboard: https://opendev.org/openstack/charm-octavia-dashboard/ -.. _octavia-diskimage-retrofit: https://opendev.org/openstack/charm-octavia-diskimage-retrofit/ -.. _openstack-dashboard: https://opendev.org/openstack/charm-openstack-dashboard/ -.. _placement: https://opendev.org/openstack/charm-placement -.. _swift-proxy: https://opendev.org/openstack/charm-swift-proxy/ -.. _swift-storage: https://opendev.org/openstack/charm-swift-storage/ - -.. _ceph-fs: https://opendev.org/openstack/charm-ceph-fs/ -.. _ceph-iscsi: https://opendev.org/openstack/charm-ceph-iscsi/ -.. _ceph-mon: https://opendev.org/openstack/charm-ceph-mon/ -.. _ceph-osd: https://opendev.org/openstack/charm-ceph-osd/ -.. _ceph-proxy: https://opendev.org/openstack/charm-ceph-proxy/ -.. _ceph-radosgw: https://opendev.org/openstack/charm-ceph-radosgw/ -.. _ceph-rbd-mirror: https://opendev.org/openstack/charm-ceph-rbd-mirror/ -.. _cinder-purestorage: https://opendev.org/openstack/charm-cinder-purestorage/ -.. _designate-bind: https://opendev.org/openstack/charm-designate-bind/ -.. _glance-simplestreams-sync: https://opendev.org/openstack/charm-glance-simplestreams-sync/ -.. _gnocchi: https://opendev.org/openstack/charm-gnocchi/ -.. _hacluster: https://opendev.org/openstack/charm-hacluster/ -.. _ovn-central: https://opendev.org/x/charm-ovn-central -.. _ovn-chassis: https://opendev.org/x/charm-ovn-chassis -.. _ovn-dedicated-chassis: https://opendev.org/x/charm-ovn-dedicated-chassis -.. _pacemaker-remote: https://opendev.org/openstack/charm-pacemaker-remote/ -.. _percona-cluster: https://opendev.org/openstack/charm-percona-cluster/ -.. _rabbitmq-server: https://opendev.org/openstack/charm-rabbitmq-server/ -.. _trilio-data-mover: https://opendev.org/openstack/charm-trilio-data-mover/ -.. _trilio-dm-api: https://opendev.org/openstack/charm-trilio-dm-api/ -.. _trilio-horizon-plugin: https://opendev.org/openstack/charm-trilio-horizon-plugin/ -.. _trilio-wlm: https://opendev.org/openstack/charm-trilio-wlm/ -.. _vault: https://opendev.org/openstack/charm-vault/ diff --git a/deploy-guide/source/upgrade-openstack-example-pre-juju-status.rst b/deploy-guide/source/upgrade-openstack-example-pre-juju-status.rst deleted file mode 100644 index 14261ac..0000000 --- a/deploy-guide/source/upgrade-openstack-example-pre-juju-status.rst +++ /dev/null @@ -1,190 +0,0 @@ -:orphan: - -.. 
code-block:: console - - Model Controller Cloud/Region Version SLA Timestamp - openstack controller cloud/default 2.9.22 unsupported 02:25:29Z - - App Version Status Scale Charm Store Channel Rev OS Message - ceph-mon 16.2.6 active 3 ceph-mon charmstore stable 62 ubuntu Unit is ready and clustered - ceph-osd 16.2.6 active 3 ceph-osd charmstore stable 316 ubuntu Unit is ready (1 OSD) - ceph-radosgw 16.2.6 active 1 ceph-radosgw charmstore stable 300 ubuntu Unit is ready - cinder 18.1.0 active 1 cinder charmstore stable 319 ubuntu Unit is ready - cinder-ceph 18.1.0 active 1 cinder-ceph charmstore stable 268 ubuntu Unit is ready - cinder-mysql-router 8.0.28 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready - dashboard-mysql-router 8.0.28 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready - glance 22.1.0 active 1 glance charmstore stable 313 ubuntu Unit is ready - glance-mysql-router 8.0.28 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready - keystone 19.0.0 active 3 keystone charmstore stable 330 ubuntu Application Ready - keystone-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - keystone-mysql-router 8.0.28 active 3 mysql-router charmstore stable 15 ubuntu Unit is ready - mysql-innodb-cluster 8.0.28 active 3 mysql-innodb-cluster charmstore stable 15 ubuntu Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure. - neutron-api 18.1.1 active 1 neutron-api charmstore stable 304 ubuntu Unit is ready - neutron-api-plugin-ovn 18.1.1 active 1 neutron-api-plugin-ovn charmstore stable 50 ubuntu Unit is ready - neutron-mysql-router 8.0.28 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready - nova-cloud-controller 23.1.0 active 1 nova-cloud-controller charmstore stable 363 ubuntu Unit is ready - nova-compute 23.1.0 active 3 nova-compute charmstore stable 337 ubuntu Unit is ready - nova-mysql-router 8.0.28 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready - ntp 3.5 active 3 ntp charmstore stable 47 ubuntu chrony: Ready - openstack-dashboard 19.2.0 active 1 openstack-dashboard charmstore stable 318 ubuntu Unit is ready - ovn-central 20.12.0 active 3 ovn-central charmstore stable 16 ubuntu Unit is ready (leader: ovnnb_db, ovnsb_db) - ovn-chassis 20.12.0 active 3 ovn-chassis charmstore stable 21 ubuntu Unit is ready - placement 5.0.1 active 1 placement charmstore stable 32 ubuntu Unit is ready - placement-mysql-router 8.0.28 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready - rabbitmq-server 3.8.2 active 1 rabbitmq-server charmstore stable 118 ubuntu Unit is ready - vault 1.5.9 active 1 vault charmstore stable 54 ubuntu Unit is ready (active: true, mlock: disabled) - vault-mysql-router 8.0.28 active 1 mysql-router charmstore stable 15 ubuntu Unit is ready - - Unit Workload Agent Machine Public address Ports Message - ceph-mon/0* active idle 0/lxd/0 10.246.114.55 Unit is ready and clustered - ceph-mon/1 active idle 1/lxd/0 10.246.114.56 Unit is ready and clustered - ceph-mon/2 active idle 2/lxd/0 10.246.114.35 Unit is ready and clustered - ceph-osd/0 active idle 0 10.246.114.21 Unit is ready (1 OSD) - ceph-osd/1 active idle 1 10.246.114.22 Unit is ready (1 OSD) - ceph-osd/2* active idle 2 10.246.114.30 Unit is ready (1 OSD) - ceph-radosgw/0* active idle 0/lxd/1 10.246.114.47 80/tcp Unit is ready - cinder/0* active idle 2 10.246.114.30 8776/tcp Unit is ready - cinder-ceph/0* active idle 10.246.114.30 Unit is ready - cinder-mysql-router/0* active idle 10.246.114.30 Unit 
is ready - glance/0* active idle 2/lxd/1 10.246.114.34 9292/tcp Unit is ready - glance-mysql-router/0* active idle 10.246.114.34 Unit is ready - keystone/0* active idle 0/lxd/2 10.246.114.58 5000/tcp Unit is ready - keystone-hacluster/0* active idle 10.246.114.58 Unit is ready and clustered - keystone-mysql-router/0* active idle 10.246.114.58 Unit is ready - keystone/1 active idle 1/lxd/6 10.246.114.37 5000/tcp Unit is ready - keystone-hacluster/1 active idle 10.246.114.37 Unit is ready and clustered - keystone-mysql-router/1 active idle 10.246.114.37 Unit is ready - keystone/2 active idle 2/lxd/6 10.246.114.38 5000/tcp Unit is ready - keystone-hacluster/2 active idle 10.246.114.38 Unit is ready and clustered - keystone-mysql-router/2 active idle 10.246.114.38 Unit is ready - mysql-innodb-cluster/0 active idle 0/lxd/3 10.246.114.44 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure. - mysql-innodb-cluster/1* active idle 1/lxd/1 10.246.114.27 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure. - mysql-innodb-cluster/2 active idle 2/lxd/2 10.246.114.32 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure. - neutron-api/0* active idle 1/lxd/2 10.246.114.57 9696/tcp Unit is ready - neutron-api-plugin-ovn/0* active idle 10.246.114.57 Unit is ready - neutron-mysql-router/0* active idle 10.246.114.57 Unit is ready - nova-cloud-controller/0* active idle 0/lxd/4 10.246.114.25 8774/tcp,8775/tcp Unit is ready - nova-mysql-router/0* active idle 10.246.114.25 Unit is ready - nova-compute/0 active idle 0 10.246.114.21 Unit is ready - ntp/0* active idle 10.246.114.21 123/udp chrony: Ready - ovn-chassis/0* active idle 10.246.114.21 Unit is ready - nova-compute/1 active idle 1 10.246.114.22 Unit is ready - ntp/1 active idle 10.246.114.22 123/udp chrony: Ready - ovn-chassis/1 active idle 10.246.114.22 Unit is ready - nova-compute/2* active idle 2 10.246.114.30 Unit is ready - ntp/2 active idle 10.246.114.30 123/udp chrony: Ready - ovn-chassis/2 active idle 10.246.114.30 Unit is ready - openstack-dashboard/0* active idle 1/lxd/3 10.246.114.45 80/tcp,443/tcp Unit is ready - dashboard-mysql-router/0* active idle 10.246.114.45 Unit is ready - ovn-central/0* active idle 0/lxd/5 10.246.114.46 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db) - ovn-central/1 active idle 1/lxd/4 10.246.114.24 6641/tcp,6642/tcp Unit is ready (northd: active) - ovn-central/2 active idle 2/lxd/3 10.246.114.36 6641/tcp,6642/tcp Unit is ready - placement/0* active idle 2/lxd/4 10.246.114.33 8778/tcp Unit is ready - placement-mysql-router/0* active idle 10.246.114.33 Unit is ready - rabbitmq-server/0* active idle 2/lxd/5 10.246.114.59 5672/tcp Unit is ready - vault/0* active idle 1/lxd/5 10.246.114.26 8200/tcp Unit is ready (active: true, mlock: disabled) - vault-mysql-router/0* active idle 10.246.114.26 Unit is ready - - Machine State DNS Inst id Series AZ Message - 0 started 10.246.114.21 node-fontana focal default Deployed - 0/lxd/0 started 10.246.114.55 juju-8bef4d-0-lxd-0 focal default Container started - 0/lxd/1 started 10.246.114.47 juju-8bef4d-0-lxd-1 focal default Container started - 0/lxd/2 started 10.246.114.58 juju-8bef4d-0-lxd-2 focal default Container started - 0/lxd/3 started 10.246.114.44 juju-8bef4d-0-lxd-3 focal default Container started - 0/lxd/4 started 10.246.114.25 juju-8bef4d-0-lxd-4 focal default Container started - 0/lxd/5 started 10.246.114.46 juju-8bef4d-0-lxd-5 focal default Container started - 1 started 10.246.114.22 
node-sarabhai focal default Deployed - 1/lxd/0 started 10.246.114.56 juju-8bef4d-1-lxd-0 focal default Container started - 1/lxd/1 started 10.246.114.27 juju-8bef4d-1-lxd-1 focal default Container started - 1/lxd/2 started 10.246.114.57 juju-8bef4d-1-lxd-2 focal default Container started - 1/lxd/3 started 10.246.114.45 juju-8bef4d-1-lxd-3 focal default Container started - 1/lxd/4 started 10.246.114.24 juju-8bef4d-1-lxd-4 focal default Container started - 1/lxd/5 started 10.246.114.26 juju-8bef4d-1-lxd-5 focal default Container started - 1/lxd/6 started 10.246.114.37 juju-8bef4d-1-lxd-6 focal default Container started - 2 started 10.246.114.30 node-pytheas focal default Deployed - 2/lxd/0 started 10.246.114.35 juju-8bef4d-2-lxd-0 focal default Container started - 2/lxd/1 started 10.246.114.34 juju-8bef4d-2-lxd-1 focal default Container started - 2/lxd/2 started 10.246.114.32 juju-8bef4d-2-lxd-2 focal default Container started - 2/lxd/3 started 10.246.114.36 juju-8bef4d-2-lxd-3 focal default Container started - 2/lxd/4 started 10.246.114.33 juju-8bef4d-2-lxd-4 focal default Container started - 2/lxd/5 started 10.246.114.59 juju-8bef4d-2-lxd-5 focal default Container started - 2/lxd/6 started 10.246.114.38 juju-8bef4d-2-lxd-6 focal default Container started - - Relation provider Requirer Interface Type Message - ceph-mon:client cinder-ceph:ceph ceph-client regular - ceph-mon:client glance:ceph ceph-client regular - ceph-mon:client nova-compute:ceph ceph-client regular - ceph-mon:mon ceph-mon:mon ceph peer - ceph-mon:osd ceph-osd:mon ceph-osd regular - ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular - ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer - cinder-ceph:ceph-access nova-compute:ceph-access cinder-ceph-key regular - cinder-ceph:storage-backend cinder:storage-backend cinder-backend subordinate - cinder-mysql-router:shared-db cinder:shared-db mysql-shared subordinate - cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service cinder regular - cinder:cluster cinder:cluster cinder-ha peer - dashboard-mysql-router:shared-db openstack-dashboard:shared-db mysql-shared subordinate - glance-mysql-router:shared-db glance:shared-db mysql-shared subordinate - glance:cluster glance:cluster glance-ha peer - glance:image-service cinder:image-service glance regular - glance:image-service nova-cloud-controller:image-service glance regular - glance:image-service nova-compute:image-service glance regular - keystone-hacluster:ha keystone:ha hacluster subordinate - keystone-hacluster:hanode keystone-hacluster:hanode hacluster peer - keystone-mysql-router:shared-db keystone:shared-db mysql-shared subordinate - keystone:cluster keystone:cluster keystone-ha peer - keystone:identity-service ceph-radosgw:identity-service keystone regular - keystone:identity-service cinder:identity-service keystone regular - keystone:identity-service glance:identity-service keystone regular - keystone:identity-service neutron-api:identity-service keystone regular - keystone:identity-service nova-cloud-controller:identity-service keystone regular - keystone:identity-service openstack-dashboard:identity-service keystone regular - keystone:identity-service placement:identity-service keystone regular - mysql-innodb-cluster:cluster mysql-innodb-cluster:cluster mysql-innodb-cluster peer - mysql-innodb-cluster:coordinator mysql-innodb-cluster:coordinator coordinator peer - mysql-innodb-cluster:db-router cinder-mysql-router:db-router mysql-router regular - mysql-innodb-cluster:db-router 
dashboard-mysql-router:db-router mysql-router regular - mysql-innodb-cluster:db-router glance-mysql-router:db-router mysql-router regular - mysql-innodb-cluster:db-router keystone-mysql-router:db-router mysql-router regular - mysql-innodb-cluster:db-router neutron-mysql-router:db-router mysql-router regular - mysql-innodb-cluster:db-router nova-mysql-router:db-router mysql-router regular - mysql-innodb-cluster:db-router placement-mysql-router:db-router mysql-router regular - mysql-innodb-cluster:db-router vault-mysql-router:db-router mysql-router regular - neutron-api-plugin-ovn:neutron-plugin neutron-api:neutron-plugin-api-subordinate neutron-plugin-api-subordinate subordinate - neutron-api:cluster neutron-api:cluster neutron-api-ha peer - neutron-api:neutron-api nova-cloud-controller:neutron-api neutron-api regular - neutron-mysql-router:shared-db neutron-api:shared-db mysql-shared subordinate - nova-cloud-controller:cluster nova-cloud-controller:cluster nova-ha peer - nova-compute:cloud-compute nova-cloud-controller:cloud-compute nova-compute regular - nova-compute:compute-peer nova-compute:compute-peer nova peer - nova-compute:juju-info ntp:juju-info juju-info subordinate - nova-mysql-router:shared-db nova-cloud-controller:shared-db mysql-shared subordinate - ntp:ntp-peers ntp:ntp-peers ntp peer - openstack-dashboard:cluster openstack-dashboard:cluster openstack-dashboard-ha peer - ovn-central:ovsdb ovn-chassis:ovsdb ovsdb regular - ovn-central:ovsdb-cms neutron-api-plugin-ovn:ovsdb-cms ovsdb-cms regular - ovn-central:ovsdb-peer ovn-central:ovsdb-peer ovsdb-cluster peer - ovn-chassis:nova-compute nova-compute:neutron-plugin neutron-plugin subordinate - placement-mysql-router:shared-db placement:shared-db mysql-shared subordinate - placement:cluster placement:cluster openstack-ha peer - placement:placement nova-cloud-controller:placement placement regular - rabbitmq-server:amqp cinder:amqp rabbitmq regular - rabbitmq-server:amqp glance:amqp rabbitmq regular - rabbitmq-server:amqp neutron-api:amqp rabbitmq regular - rabbitmq-server:amqp nova-cloud-controller:amqp rabbitmq regular - rabbitmq-server:amqp nova-compute:amqp rabbitmq regular - rabbitmq-server:cluster rabbitmq-server:cluster rabbitmq-ha peer - vault-mysql-router:shared-db vault:shared-db mysql-shared subordinate - vault:certificates ceph-radosgw:certificates tls-certificates regular - vault:certificates cinder:certificates tls-certificates regular - vault:certificates glance:certificates tls-certificates regular - vault:certificates keystone:certificates tls-certificates regular - vault:certificates mysql-innodb-cluster:certificates tls-certificates regular - vault:certificates neutron-api-plugin-ovn:certificates tls-certificates regular - vault:certificates neutron-api:certificates tls-certificates regular - vault:certificates nova-cloud-controller:certificates tls-certificates regular - vault:certificates openstack-dashboard:certificates tls-certificates regular - vault:certificates ovn-central:certificates tls-certificates regular - vault:certificates ovn-chassis:certificates tls-certificates regular - vault:certificates placement:certificates tls-certificates regular - vault:cluster vault:cluster vault-ha peer diff --git a/deploy-guide/source/upgrade-openstack-example.rst b/deploy-guide/source/upgrade-openstack-example.rst deleted file mode 100644 index 3b27d2f..0000000 --- a/deploy-guide/source/upgrade-openstack-example.rst +++ /dev/null @@ -1,385 +0,0 @@ -:orphan: - -========================= -OpenStack upgrade example 
-========================= - -This document shows the specific steps used to perform an OpenStack upgrade. -They are based entirely on the :doc:`OpenStack upgrade ` -page. - -Cloud description ------------------ - -The original cloud deployment was performed via this 'focal-wallaby' -``openstack-base`` `bundle`_. The backing cloud is a MAAS cluster consisting of -three physical machines. - -In order to demonstrate upgrading an application running under HA with the aid -of the hacluster subordinate application, Keystone was scaled out post-deploy: - -.. code-block:: none - - juju add-unit -n 2 --to lxd:1,lxd:2 keystone - juju config keystone vip=10.246.116.11 - juju deploy --config cluster_count=3 hacluster keystone-hacluster - juju add-relation keystone-hacluster:ha keystone:ha - -The pre-upgrade state and topology of the cloud is given via this :doc:`model -status output `. - -Objective ---------- - -Since the cloud was deployed with a UCA OpenStack release of 'focal-wallaby', -the upgrade target is 'focal-xena'. The approach taken is one that minimises -service downtime while the upgrade is in progress. - -Prepare for the upgrade ------------------------ - -It is assumed that the :ref:`preparatory steps ` -have been completed. - -Perform the upgrade -------------------- - -Perform the upgrade by following the below sections. - -Disable unattended-upgrades -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Disable unattended-upgrades on the three cloud nodes. Recall that each one -directly hosts multiple applications (e.g. ceph-osd, cinder, and nova-compute -are deployed on machine 2): - -.. code-block:: none - - juju ssh 0 sudo dpkg-reconfigure -plow unattended-upgrades - juju ssh 1 sudo dpkg-reconfigure -plow unattended-upgrades - juju ssh 2 sudo dpkg-reconfigure -plow unattended-upgrades - -Answer 'No' to the resulting question. - -Perform a backup of the service databases -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Determine the existing service databases and then back them up. - -.. code-block:: none - - PASSWORD=$(juju run -u mysql-innodb-cluster/leader leader-get mysql.passwd) - juju ssh mysql-innodb-cluster/leader "mysql -u root -p${PASSWORD} -e 'SHOW DATABASES;'" - - +-------------------------------+ - | Database | - +-------------------------------+ - | cinder | - | glance | - | horizon | - | information_schema | - | keystone | - | mysql | - | mysql_innodb_cluster_metadata | - | neutron | - | nova | - | nova_api | - | nova_cell0 | - | performance_schema | - | placement | - | sys | - | vault | - +-------------------------------+ - -By omitting the system databases we are left with: - -* ``cinder`` -* ``glance`` -* ``horizon`` -* ``keystone`` -* ``neutron`` -* ``nova`` -* ``nova_api`` -* ``nova_cell0`` -* ``placement`` -* ``vault`` - -Now run the following commands: - -.. code-block:: none - - juju run-action --wait mysql-innodb-cluster/0 mysqldump \ - databases=cinder,glance,horizon,keystone,neutron,nova,nova_api,nova_cell0,placement,vault - juju run -u mysql-innodb-cluster/0 -- sudo chmod o+rx /var/backups/mysql - juju scp -- -r mysql-innodb-cluster/0:/var/backups/mysql . - juju run -u mysql-innodb-cluster/0 -- sudo chmod o-rx /var/backups/mysql - -Move the transferred archive to a safe location (off of the client host). - -Archive old database data -~~~~~~~~~~~~~~~~~~~~~~~~~ - -Archive old database data by running an action on any nova-cloud-controller -unit: - -.. 
code-block:: none - - juju run-action --wait nova-cloud-controller/0 archive-data - -Repeat this command until the action output reports 'Nothing was archived'. - -Purge old compute service entries -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Purge any old compute service entries for nova-compute units that are no longer -part of the model. These entries will show as 'down' in the list of compute -services: - -.. code-block:: none - - openstack compute service list - -To remove a compute service: - -.. code-block:: none - - openstack compute service delete - -List the upgrade order -~~~~~~~~~~~~~~~~~~~~~~ - -From an excerpt of the initial :command:`juju status` output, create an -inventory of running applications: - -.. code-block:: console - - ceph-mon - ceph-osd - ceph-radosgw - cinder - cinder-ceph - cinder-mysql-router - dashboard-mysql-router - glance - glance-mysql-router - keystone - keystone-mysql-router - mysql-innodb-cluster - neutron-api - neutron-api-plugin-ovn - neutron-mysql-router - nova-cloud-controller - nova-compute - nova-mysql-router - ntp - openstack-dashboard - ovn-central - ovn-chassis - placement - placement-mysql-router - rabbitmq-server - vault - vault-mysql-router - -Ignore from the above all subordinate applications and those applications that -are not part of the UCA. After applying the recommended upgrade order we arrive -at the following ordered list: - -#. ceph-mon -#. keystone -#. ceph-radosgw -#. cinder -#. glance -#. neutron-api -#. ovn-central -#. placement -#. nova-cloud-controller -#. openstack-dashboard -#. nova-compute -#. ceph-osd - -Upgrade each application -~~~~~~~~~~~~~~~~~~~~~~~~ - -Upgrade each application in turn. - -ceph-mon -^^^^^^^^ - -Although there are three units of the ceph-mon application, the all-in-one -method is used because the ceph-mon charm is able to maintain service -availability during the upgrade: - -.. code-block:: none - - juju config ceph-mon source=cloud:focal-xena - -keystone -^^^^^^^^ - -There are three units of the keystone application and its charm supports the -three actions that the paused-single-unit method demands. In addition, the -keystone application is running under HA with the aid of the hacluster -application, which allows for a more controlled upgrade. Application leader -``keystone/0`` is upgraded first: - -.. code-block:: none - - juju config keystone action-managed-upgrade=True - juju config keystone openstack-origin=cloud:focal-xena - - juju run-action --wait keystone-hacluster/0 pause - juju run-action --wait keystone/0 pause - juju run-action --wait keystone/0 openstack-upgrade - juju run-action --wait keystone/0 resume - juju run-action --wait keystone-hacluster/0 resume - - juju run-action --wait keystone-hacluster/1 pause - juju run-action --wait keystone/1 pause - juju run-action --wait keystone/1 openstack-upgrade - juju run-action --wait keystone/1 resume - juju run-action --wait keystone-hacluster/1 resume - - juju run-action --wait keystone-hacluster/2 pause - juju run-action --wait keystone/2 pause - juju run-action --wait keystone/2 openstack-upgrade - juju run-action --wait keystone/2 resume - juju run-action --wait keystone-hacluster/2 resume - -ceph-radosgw -^^^^^^^^^^^^ - -There is only a single unit of the ceph-radosgw application. Use the all-in-one -method: - -.. code-block:: none - - juju config ceph-radosgw source=cloud:focal-xena - -cinder -^^^^^^ - -There is only a single unit of the cinder application. Use the all-in-one -method: - -.. 
code-block:: none - - juju config cinder openstack-origin=cloud:focal-xena - -glance -^^^^^^ - -There is only a single unit of the glance application. Use the all-in-one -method: - -.. code-block:: none - - juju config glance openstack-origin=cloud:focal-xena - -neutron-api -^^^^^^^^^^^ - -There is only a single unit of the neutron-api application. Use the all-in-one -method: - -.. code-block:: none - - juju config neutron-api openstack-origin=cloud:focal-xena - -ovn-central -^^^^^^^^^^^ - -Although there are three units of the ovn-central application, based on the -actions supported by the ovn-central charm, only the all-in-one method is -available: - -.. code-block:: none - - juju config ovn-central source=cloud:focal-xena - -placement -^^^^^^^^^ - -There is only a single unit of the placement application. Use the all-in-one -method: - -.. code-block:: none - - juju config placement openstack-origin=cloud:focal-xena - -nova-cloud-controller -^^^^^^^^^^^^^^^^^^^^^ - -There is only a single unit of the nova-cloud-controller application. Use the -all-in-one method: - -.. code-block:: none - - juju config nova-cloud-controller openstack-origin=cloud:focal-xena - -openstack-dashboard -^^^^^^^^^^^^^^^^^^^ - -There is only a single unit of the openstack-dashboard application. Use the -all-in-one method: - -.. code-block:: none - - juju config openstack-dashboard openstack-origin=cloud:focal-xena - -nova-compute -^^^^^^^^^^^^ - -There are three units of the nova-compute application and its charm supports -the three actions that the paused-single-unit method requires. Application -leader ``nova-compute/2`` is upgraded first: - -.. code-block:: none - - juju config nova-compute action-managed-upgrade=True - juju config nova-compute openstack-origin=cloud:focal-xena - - juju run-action --wait nova-compute/2 pause - juju run-action --wait nova-compute/2 openstack-upgrade - juju run-action --wait nova-compute/2 resume - - juju run-action --wait nova-compute/1 pause - juju run-action --wait nova-compute/1 openstack-upgrade - juju run-action --wait nova-compute/1 resume - - juju run-action --wait nova-compute/0 pause - juju run-action --wait nova-compute/0 openstack-upgrade - juju run-action --wait nova-compute/0 resume - -ceph-osd -^^^^^^^^ - -Although there are three units of the ceph-osd application, the all-in-one -method is used because the ceph-osd charm is able to maintain service -availability during the upgrade: - -.. code-block:: none - - juju config ceph-osd source=cloud:focal-xena - -Re-enable unattended-upgrades -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Re-enable unattended-upgrades on the three cloud nodes: - -.. code-block:: none - - juju ssh 0 sudo dpkg-reconfigure -plow unattended-upgrades - juju ssh 1 sudo dpkg-reconfigure -plow unattended-upgrades - juju ssh 2 sudo dpkg-reconfigure -plow unattended-upgrades - -Answer 'Yes' to resulting the question. - -Verify the new deployment -~~~~~~~~~~~~~~~~~~~~~~~~~ - -Check for errors in :command:`juju status` output and any monitoring service. -Perform a routine battery of tests. - -.. LINKS -.. 
_bundle: https://raw.githubusercontent.com/openstack-charmers/openstack-bundles/b1817add83ba56458aca1aa171ed9b74c211474d/stable/openstack-base/bundle.yaml
diff --git a/deploy-guide/source/upgrade-openstack.rst b/deploy-guide/source/upgrade-openstack.rst
deleted file mode 100644
index 0fb8dea..0000000
--- a/deploy-guide/source/upgrade-openstack.rst
+++ /dev/null
@@ -1,656 +0,0 @@
-=================
-OpenStack upgrade
-=================
-
-This document outlines how to upgrade the OpenStack service components of a
-Charmed OpenStack cloud.
-
-.. warning::
-
-   Upgrading an OpenStack cloud is not risk-free. The procedures outlined in
-   this guide should first be tested in a pre-production environment.
-
-Please read the :doc:`upgrade-overview` page before continuing.
-
-.. note::
-
-   The charms only support single-step OpenStack upgrades (N+1). That is, to
-   upgrade two releases forward you need to upgrade twice. You cannot skip
-   releases when upgrading OpenStack with charms.
-
-It may be worthwhile to read the upstream OpenStack `Upgrades`_ guide.
-
-Software sources
-----------------
-
-A key part of an OpenStack upgrade is the stipulation of a unit's software
-sources. For an upgrade, the latter will naturally reflect a more recent
-combination of Ubuntu release (series) and OpenStack release. This combination
-is based on the `Ubuntu Cloud Archive`_ and translates to a "cloud archive
-OpenStack release". It takes on the following syntax:
-
-``<ubuntu-series>-<openstack-release>``
-
-The value is passed to a charm's ``openstack-origin`` configuration option. For
-example, to select the 'focal-victoria' release:
-
-``openstack-origin=cloud:focal-victoria``
-
-In this way the charm is informed of where to find updates for the packages
-that it is responsible for.
-
-Notes concerning the value of ``openstack-origin``:
-
-* The default is 'distro'. This denotes an Ubuntu release's default archive
-  (e.g. in the case of the focal series it corresponds to OpenStack Ussuri).
-  The value of 'distro' is therefore invalid in the context of an OpenStack
-  upgrade.
-
-* It should normally be the same across all charms.
-
-* Its series component must be that of the series currently in use (i.e. a
-  series upgrade and an OpenStack upgrade are two completely separate
-  procedures).
-
-.. note::
-
-   A few charms use option ``source`` instead of ``openstack-origin`` (both
-   options support identical values). The ``source`` option is used by charms
-   that don't deploy an actual OpenStack service.
-
-Upgradable services
--------------------
-
-Services whose software is not included in the `Ubuntu Cloud Archive`_ do not
-get upgraded during a charmed OpenStack upgrade. This software is upgraded by
-the administrator (on the units) using other means (e.g. manually via package
-utilities, the Landscape management tool, a snap, or as part of a series
-upgrade). Common charms where this applies are:
-
-* memcached
-* ntp
-* percona-cluster
-* mysql-innodb-cluster
-* mysql-router
-* rabbitmq-server
-* vault
-
-Services that are associated with subordinate charms are upgradable but only
-indirectly. They get upgraded along with their parent principal application.
-Subordinate charms do not support the ``openstack-origin`` (or ``source``)
-configuration option that is, as will be shown, a pre-requisite for initiating
-an OpenStack charm payload upgrade.
-
-.. _openstack_upgrade_prepare:
-
-Prepare for the upgrade
------------------------
-
-Pay special attention to the below pre-upgrade preparatory and informational
-sections.
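-
-Before working through them, it can be useful to confirm the currently
-configured software source of an application (a hedged sketch only; keystone
-stands in for any charm that uses ``openstack-origin``):
-
-.. code-block:: none
-
-   juju config keystone openstack-origin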
- -Release notes -~~~~~~~~~~~~~ - -The OpenStack Charms `Release notes`_ for the corresponding current and target -versions of OpenStack must be consulted for any special instructions. In -particular, pay attention to services and/or configuration options that may be -retired, deprecated, or changed. - -Manual intervention -~~~~~~~~~~~~~~~~~~~ - -By design, the latest stable charms will support the software changes related -to the OpenStack services being upgraded. During the upgrade, the charms will -also strive to preserve the existing configuration of their associated -services. Upstream OpenStack is also designed to support N+1 upgrades. However, -there may still be times when intervention on the part of the operator is -needed. The :doc:`cg:project/issues-and-procedures` page covers this topic. - -Ensure cloud node software is up to date -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Every machine in the cloud, including containers, should have their software -packages updated to ensure that the latest SRUs have been applied. This is done -in the usual manner: - -.. code-block:: none - - sudo apt update - sudo apt full-upgrade - -Verify the current deployment -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Confirm that the output for the :command:`juju status` command of the current -deployment is error-free. In addition, if monitoring is in use (e.g. Nagios), -ensure that all alerts have been resolved. You may also consider running a -battery of operational checks on the cloud. - -This step is to make certain that any issues that are apparent after the -upgrade are not due to pre-existing problems. - -Perform the upgrade -------------------- - -Perform the upgrade by following the below sections. - -.. _disable_unattended_upgrades: - -Disable unattended-upgrades -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When performing a service upgrade on a cloud node that hosts multiple principal -charms (e.g. nova-compute and ceph-osd), ensure that ``unattended-upgrades`` is -disabled on the underlying machine for the duration of the upgrade process. -This is to prevent the other services from being upgraded outside of Juju's -control. On a cloud node run: - -.. code-block:: none - - sudo dpkg-reconfigure -plow unattended-upgrades - -Perform a backup of the service databases -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Perform a backup of the cloud service databases by applying the ``mysqldump`` -action to any unit of the cloud's database application. Be sure to select all -applicable databases; the commands provided are examples only. - -The permissions on the remote backup directory will need to be adjusted in -order to access the data. Take note that the transfer method presented here -will capture all existing backups in that directory. - -.. important:: - - Store the backup archive in a safe place. - -The next two sections include the commands to run for the two possible database -applications. - -percona-cluster -^^^^^^^^^^^^^^^ - -The percona-cluster application requires a modification to its "strict mode" -(see `Percona strict mode`_ for an understanding of the implications). - -.. 
code-block:: none
-
-   juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=MASTER
-   juju run-action --wait percona-cluster/0 mysqldump \
-     databases=aodh,cinder,designate,glance,gnocchi,horizon,keystone,neutron,nova,nova_api,nova_cell0,placement
-   juju run-action --wait percona-cluster/0 set-pxc-strict-mode mode=ENFORCING
-
-   juju run -u percona-cluster/0 -- sudo chmod o+rx /var/backups/mysql
-   juju scp -- -r percona-cluster/0:/var/backups/mysql .
-   juju run -u percona-cluster/0 -- sudo chmod o-rx /var/backups/mysql
-
-mysql-innodb-cluster
-^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: none
-
-   juju run-action --wait mysql-innodb-cluster/0 mysqldump \
-     databases=cinder,designate,glance,gnocchi,horizon,keystone,neutron,nova,nova_api,nova_cell0,placement,vault
-
-   juju run -u mysql-innodb-cluster/0 -- sudo chmod o+rx /var/backups/mysql
-   juju scp -- -r mysql-innodb-cluster/0:/var/backups/mysql .
-   juju run -u mysql-innodb-cluster/0 -- sudo chmod o-rx /var/backups/mysql
-
-Archive old database data
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-During the upgrade, database migrations will be run. This operation can be
-optimised by first archiving any stale data (e.g. deleted instances). Do this
-by running the ``archive-data`` action on any nova-cloud-controller unit:
-
-.. code-block:: none
-
-   juju run-action --wait nova-cloud-controller/0 archive-data
-
-This action may need to be run multiple times until the action output reports
-'Nothing was archived'.
-
-Purge old compute service entries
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Old compute service entries for units which are no longer part of the model
-should be purged prior to upgrading. These entries will show as 'down' (and be
-hosted on machines no longer in the model) in the current list of compute
-services:
-
-.. code-block:: none
-
-   openstack compute service list
-
-To remove a compute service:
-
-.. code-block:: none
-
-   openstack compute service delete <ID>
-
-.. _openstack_upgrade_order:
-
-List the upgrade order
-~~~~~~~~~~~~~~~~~~~~~~
-
-Generally speaking, the upgrade order is determined by a dependency tree.
-Those services that have the most potential impact on other services are
-upgraded first and those services that have the least potential impact on
-other services are upgraded last.
-
-In the below table, charms are listed in the order in which their corresponding
-OpenStack services should be upgraded. Each service represented by a charm will
-need to be upgraded individually. Note that since charms merely modify a
-machine's apt sources, any co-located service will have its packages updated
-along with those of the service being targeted.
-
-.. warning::
-
-   Ceph may require one of its options to be set prior to upgrading, and
-   failure to consider this may result in a broken cluster. See the associated
-   :ref:`upgrade issue `.
-
-.. note::
-
-   At this time, only stable charms are listed in the upgrade order table.
-
-.. list-table::
-   :header-rows: 1
-   :widths: auto
-
-   * - Order
-     - Charm
-
-   * - 1
-     - `ceph-mon`_
-
-   * - 2
-     - `keystone`_
-
-   * - 3
-     - `aodh`_
-
-   * - 4
-     - `barbican`_
-
-   * - 5
-     - `ceilometer`_
-
-   * - 6
-     - `ceph-fs`_
-
-   * - 7
-     - `ceph-radosgw`_
-
-   * - 8
-     - `cinder`_
-
-   * - 9
-     - `designate`_
-
-   * - 10
-     - `designate-bind`_
-
-   * - 11
-     - `glance`_
-
-   * - 12
-     - `gnocchi`_
-
-   * - 13
-     - `heat`_
-
-   * - 14
-     - `manila`_
-
-   * - 15
-     - `manila-ganesha`_
-
-   * - 16
-     - `neutron-api`_
-
-   * - 17
-     - `neutron-gateway`_ or `ovn-dedicated-chassis`_
-
-   * - 18
-     - `ovn-central`_
-
-   * - 19
-     - `placement`_
-
-   * - 20
-     - `nova-cloud-controller`_
-
-   * - 21
-     - `nova-compute`_
-
-   * - 22
-     - `openstack-dashboard`_
-
-   * - 23
-     - `ceph-osd`_
-
-   * - 24
-     - `swift-proxy`_
-
-   * - 25
-     - `swift-storage`_
-
-   * - 26
-     - `octavia`_
-
-.. important::
-
-   The OVN control plane will not be available between the commencement of the
-   ovn-central upgrade and the completion of the nova-compute upgrade.
-
-Update the charm channel
-------------------------
-
-.. warning::
-
-   This step is only performed for charms that follow a channel (see
-   :ref:`Charm types <charm_types>`).
-
-A charm's channel needs to be updated according to the target OpenStack
-release. This is done as per the following syntax:
-
-.. code-block:: none
-
-   juju refresh --channel=<channel> <charm-name>
-
-For example, if the cloud is being upgraded to OpenStack Yoga then the keystone
-charm's channel should be updated to 'yoga/stable':
-
-.. code-block:: none
-
-   juju refresh --channel=yoga/stable keystone
-
-Charms whose services are not technically part of the OpenStack project will
-generally use a channel naming scheme that is not based on OpenStack release
-names. Here is the ovn-central charm:
-
-.. code-block:: none
-
-   juju refresh --channel=22.03/stable ovn-central
-
-.. _perform_the_upgrade:
-
-Perform the upgrade
--------------------
-
-There are three methods available for performing an OpenStack service upgrade,
-two of which have charm requirements in terms of supported actions. Each
-method also has advantages and disadvantages with regard to:
-
-* the time required to perform an upgrade
-* maintaining service availability during an upgrade
-
-This table summarises the characteristics and requirements of each method:
-
-+--------------------+----------+----------+--------------------------------------------------+
-| Method             | Time     | Downtime | Charm requirements (actions)                     |
-+====================+==========+==========+==================================================+
-| all-in-one         | shortest | most     | *none*                                           |
-+--------------------+----------+----------+--------------------------------------------------+
-| single-unit        | medium   | medium   | ``openstack-upgrade``                            |
-+--------------------+----------+----------+--------------------------------------------------+
-| paused-single-unit | longest  | least    | ``openstack-upgrade``, ``pause``, and ``resume`` |
-+--------------------+----------+----------+--------------------------------------------------+
-
-For example, although the all-in-one method upgrades a service the fastest, it
-also has the greatest potential for service downtime.
-
-.. note::
-
-   A charm's supported actions can be listed with command :command:`juju
-   actions <charm-name>`.
-
-All-in-one
-~~~~~~~~~~
-
-The all-in-one method upgrades all application units simultaneously. This
-method must be used if the application has a sole unit.
-
-Although it is the quickest route, it will also cause a temporary disruption of
-the corresponding service.
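-
-One way to keep an eye on that disruption window (an illustrative sketch, not
-a prescribed step; cinder stands in for the application being upgraded) is to
-watch the application's status while the upgrade runs:
-
-.. code-block:: none
-
-   watch --color juju status --color cinder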
-
-.. important::
-
-   Exceptionally, the ceph-osd and ceph-mon applications use the all-in-one
-   method but their charms are able to maintain service availability during the
-   upgrade.
-
-The syntax is:
-
-.. code-block:: none
-
-   juju config <openstack-charm> openstack-origin=cloud:<cloud-archive-release>
-
-For example, to upgrade Cinder across all units (currently running Focal) from
-Ussuri to Victoria:
-
-.. code-block:: none
-
-   juju config cinder openstack-origin=cloud:focal-victoria
-
-Charms whose services are not technically part of the OpenStack project will
-use the ``source`` charm option instead. The Ceph charms are a classic example:
-
-.. code-block:: none
-
-   juju config ceph-mon source=cloud:focal-victoria
-
-Single-unit
-~~~~~~~~~~~
-
-The single-unit method builds upon the all-in-one method by allowing for the
-upgrade of individual units in a controlled manner. The charm must support the
-``openstack-upgrade`` action, which in turn guarantees the availability of the
-``action-managed-upgrade`` option.
-
-This method is slower than the all-in-one method due to the need for each unit
-to be upgraded separately. There is a lesser chance of downtime as the unit
-being upgraded must be in the process of servicing client requests for downtime
-to occur.
-
-As a general rule, whenever there is the possibility of upgrading units
-individually, **always upgrade the application leader first**.
-
-.. note::
-
-   The leader is the unit with a ***** next to it in the :command:`juju status`
-   output. It can also be discovered via the CLI:
-
-   .. code-block:: none
-
-      juju run -a <application-name> is-leader
-
-For example, to upgrade a three-unit glance application from Ussuri to Victoria
-where ``glance/1`` is the leader:
-
-.. code-block:: none
-
-   juju config glance action-managed-upgrade=True
-   juju config glance openstack-origin=cloud:focal-victoria
-
-   juju run-action --wait glance/1 openstack-upgrade
-   juju run-action --wait glance/0 openstack-upgrade
-   juju run-action --wait glance/2 openstack-upgrade
-
-.. _paused_single_unit:
-
-Paused-single-unit
-~~~~~~~~~~~~~~~~~~
-
-The paused-single-unit method extends the single-unit method by allowing for
-the upgrade of individual units while paused. Additional charm requirements are
-the ``pause`` and ``resume`` actions.
-
-This method provides more versatility by allowing a unit to be removed from
-service, upgraded, and returned to service. Each of these is a distinct event
-whose timing is chosen by the operator.
-
-This is the slowest method due to the need for each unit to be upgraded
-separately in addition to the required pause/resume management. However, it is
-the method that will result in the least downtime as clients will not be able
-to solicit a paused service.
-
-For example, to upgrade a three-unit nova-compute application from Ussuri to
-Victoria where ``nova-compute/0`` is the leader:
-
-.. 
code-block:: none - - juju config nova-compute action-managed-upgrade=True - juju config nova-compute openstack-origin=cloud:focal-victoria - - juju run-action --wait nova-compute/0 pause - juju run-action --wait nova-compute/0 openstack-upgrade - juju run-action --wait nova-compute/0 resume - - juju run-action --wait nova-compute/1 pause - juju run-action --wait nova-compute/1 openstack-upgrade - juju run-action --wait nova-compute/1 resume - - juju run-action --wait nova-compute/2 pause - juju run-action --wait nova-compute/2 openstack-upgrade - juju run-action --wait nova-compute/2 resume - -In addition, this method also permits a possible hacluster subordinate unit, -which typically manages a VIP, to be paused so that client requests will never -even be directed to the associated parent unit. - -.. attention:: - - When there is an hacluster subordinate unit then it is recommended to always - take advantage of the pause-single-unit method's ability to pause it before - upgrading the parent unit. - -For example, to upgrade a three-unit keystone application from Ussuri to -Victoria where ``keystone/2`` is the leader: - -.. code-block:: none - - juju config keystone action-managed-upgrade=True - juju config keystone openstack-origin=cloud:focal-victoria - - juju run-action --wait keystone-hacluster/1 pause - juju run-action --wait keystone/2 pause - juju run-action --wait keystone/2 openstack-upgrade - juju run-action --wait keystone/2 resume - juju run-action --wait keystone-hacluster/1 resume - - juju run-action --wait keystone-hacluster/2 pause - juju run-action --wait keystone/1 pause - juju run-action --wait keystone/1 openstack-upgrade - juju run-action --wait keystone/1 resume - juju run-action --wait keystone-hacluster/2 resume - - juju run-action --wait keystone-hacluster/0 pause - juju run-action --wait keystone/0 pause - juju run-action --wait keystone/0 openstack-upgrade - juju run-action --wait keystone/0 resume - juju run-action --wait keystone-hacluster/0 resume - -.. warning:: - - The hacluster subordinate unit number may not necessarily match its parent - unit number. As in the above example, only for ``keystone/0`` do the unit - numbers correspond (i.e. ``keystone-hacluster/0`` is its subordinate unit). - -Re-enable unattended-upgrades ------------------------------ - -In a :ref:`previous step `, unattended-upgrades -were disabled on those cloud nodes that hosted multiple principal charms. Once -such a node has had all of its services upgraded, unattended-upgrades should be -re-enabled: - -.. code-block:: none - - sudo dpkg-reconfigure -plow unattended-upgrades - -Verify the new deployment -------------------------- - -Check for errors in :command:`juju status` output and any monitoring service. - -Example upgrade ---------------- - -The :doc:`OpenStack upgrade example ` page shows the -explicit steps used to upgrade a basic cloud. - -.. LINKS -.. _Ubuntu Cloud Archive: https://wiki.ubuntu.com/OpenStack/CloudArchive -.. _Upgrades: https://docs.openstack.org/operations-guide/ops-upgrades.html -.. _Percona strict mode: https://www.percona.com/doc/percona-xtradb-cluster/LATEST/features/pxc-strict-mode.html - -.. BUGS -.. _LP #1825999: https://bugs.launchpad.net/charm-nova-compute/+bug/1825999 -.. _LP #1809190: https://bugs.launchpad.net/charm-neutron-gateway/+bug/1809190 -.. _LP #1853173: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1853173 -.. _LP #1828534: https://bugs.launchpad.net/charm-designate/+bug/1828534 - -.. 
_aodh: https://opendev.org/openstack/charm-aodh/ -.. _barbican: https://opendev.org/openstack/charm-barbican/ -.. _barbican-vault: https://opendev.org/openstack/charm-barbican-vault/ -.. _ceilometer: https://opendev.org/openstack/charm-ceilometer/ -.. _ceilometer-agent: https://opendev.org/openstack/charm-ceilometer-agent/ -.. _cinder: https://opendev.org/openstack/charm-cinder/ -.. _cinder-backup: https://opendev.org/openstack/charm-cinder-backup/ -.. _cinder-backup-swift-proxy: https://opendev.org/openstack/charm-cinder-backup-swift-proxy/ -.. _cinder-ceph: https://opendev.org/openstack/charm-cinder-ceph/ -.. _designate: https://opendev.org/openstack/charm-designate/ -.. _glance: https://opendev.org/openstack/charm-glance/ -.. _heat: https://opendev.org/openstack/charm-heat/ -.. _keystone: https://opendev.org/openstack/charm-keystone/ -.. _keystone-ldap: https://opendev.org/openstack/charm-keystone-ldap/ -.. _keystone-saml-mellon: https://opendev.org/openstack/charm-keystone-saml-mellon/ -.. _manila: https://opendev.org/openstack/charm-manila/ -.. _manila-ganesha: https://opendev.org/openstack/charm-manila-ganesha/ -.. _masakari: https://opendev.org/openstack/charm-masakari/ -.. _masakari-monitors: https://opendev.org/openstack/charm-masakari-monitors/ -.. _mysql-innodb-cluster: https://opendev.org/openstack/charm-mysql-innodb-cluster -.. _mysql-router: https://opendev.org/openstack/charm-mysql-router -.. _neutron-api: https://opendev.org/openstack/charm-neutron-api/ -.. _neutron-api-plugin-arista: https://opendev.org/openstack/charm-neutron-api-plugin-arista -.. _neutron-api-plugin-ovn: https://opendev.org/openstack/charm-neutron-api-plugin-ovn -.. _neutron-dynamic-routing: https://opendev.org/openstack/charm-neutron-dynamic-routing/ -.. _neutron-gateway: https://opendev.org/openstack/charm-neutron-gateway/ -.. _neutron-openvswitch: https://opendev.org/openstack/charm-neutron-openvswitch/ -.. _nova-cell-controller: https://opendev.org/openstack/charm-nova-cell-controller/ -.. _nova-cloud-controller: https://opendev.org/openstack/charm-nova-cloud-controller/ -.. _nova-compute: https://opendev.org/openstack/charm-nova-compute/ -.. _octavia: https://opendev.org/openstack/charm-octavia/ -.. _octavia-dashboard: https://opendev.org/openstack/charm-octavia-dashboard/ -.. _octavia-diskimage-retrofit: https://opendev.org/openstack/charm-octavia-diskimage-retrofit/ -.. _openstack-dashboard: https://opendev.org/openstack/charm-openstack-dashboard/ -.. _placement: https://opendev.org/openstack/charm-placement -.. _swift-proxy: https://opendev.org/openstack/charm-swift-proxy/ -.. _swift-storage: https://opendev.org/openstack/charm-swift-storage/ - -.. _ceph-fs: https://opendev.org/openstack/charm-ceph-fs/ -.. _ceph-iscsi: https://opendev.org/openstack/charm-ceph-iscsi/ -.. _ceph-mon: https://opendev.org/openstack/charm-ceph-mon/ -.. _ceph-osd: https://opendev.org/openstack/charm-ceph-osd/ -.. _ceph-proxy: https://opendev.org/openstack/charm-ceph-proxy/ -.. _ceph-radosgw: https://opendev.org/openstack/charm-ceph-radosgw/ -.. _ceph-rbd-mirror: https://opendev.org/openstack/charm-ceph-rbd-mirror/ -.. _cinder-purestorage: https://opendev.org/openstack/charm-cinder-purestorage/ -.. _designate-bind: https://opendev.org/openstack/charm-designate-bind/ -.. _glance-simplestreams-sync: https://opendev.org/openstack/charm-glance-simplestreams-sync/ -.. _gnocchi: https://opendev.org/openstack/charm-gnocchi/ -.. _hacluster: https://opendev.org/openstack/charm-hacluster/ -.. 
_ovn-central: https://opendev.org/x/charm-ovn-central
-.. _ovn-chassis: https://opendev.org/x/charm-ovn-chassis
-.. _ovn-dedicated-chassis: https://opendev.org/x/charm-ovn-dedicated-chassis
-.. _pacemaker-remote: https://opendev.org/openstack/charm-pacemaker-remote/
-.. _percona-cluster: https://opendev.org/openstack/charm-percona-cluster/
-.. _rabbitmq-server: https://opendev.org/openstack/charm-rabbitmq-server/
-.. _trilio-data-mover: https://opendev.org/openstack/charm-trilio-data-mover/
-.. _trilio-dm-api: https://opendev.org/openstack/charm-trilio-dm-api/
-.. _trilio-horizon-plugin: https://opendev.org/openstack/charm-trilio-horizon-plugin/
-.. _trilio-wlm: https://opendev.org/openstack/charm-trilio-wlm/
-.. _vault: https://opendev.org/openstack/charm-vault/
diff --git a/deploy-guide/source/upgrade-overview.rst b/deploy-guide/source/upgrade-overview.rst
deleted file mode 100644
index 17e83d1..0000000
--- a/deploy-guide/source/upgrade-overview.rst
+++ /dev/null
@@ -1,252 +0,0 @@
-=================
-Upgrades overview
-=================
-
-The purpose of the Upgrades section is to show how to upgrade Charmed OpenStack
-as a whole. This page provides a summary of the involved components and how
-they relate to each other. The upgrade of each component is a distinct
-operation and is referred to as a separate upgrade type. The types are defined
-in this way:
-
-Charms upgrade
-  An upgrade of the charms that are used to deploy and manage Charmed
-  OpenStack. This includes charms that manage applications which are not
-  technically part of the OpenStack project such as Ceph, RabbitMQ, and Vault.
-
-OpenStack upgrade
-  An upgrade of the software deployed by the OpenStack charms. Each application
-  is upgraded via its corresponding charm. This constitutes an upgrade from one
-  major OpenStack version to the next (e.g. Xena to Yoga).
-
-Series upgrade
-  An upgrade of the Ubuntu operating system (e.g. Focal to Jammy) on the cloud
-  nodes. This includes containers.
-
-.. important::
-
-   Once initiated, an upgrade type should be completed to its fullest extent
-   across the cloud. Operating a cloud consisting of partially upgraded
-   components is neither tested nor supported.
-
-Cloud topology
---------------
-
-All upgrade procedures assume a specific hyperconverged cloud topology.
-
-.. caution::
-
-   Any deviation from the described topology may require adjustments to the
-   given procedural steps. In particular, look at differences in co-located
-   principal applications.
-
-The topology is defined in this way:
-
-* Only compute and storage charms (and their subordinates) are co-located.
-
-* Third-party charms either do not exist or have been thoroughly tested for all
-  three upgrade types.
-
-* The following services run in LXD containers:
-
-  * all API applications
-  * the database application (percona-cluster or mysql-innodb-cluster)
-  * the rabbitmq-server application
-  * the ceph-mon application
-
-* All applications, where possible, are under high availability, whether
-  natively (e.g. ceph-mon, rabbitmq-server) or via hacluster (e.g. keystone).
-
-Development notes
------------------
-
-This section includes charm development information that will better prepare
-the administrator for the task of upgrading Charmed OpenStack.
-
-* It is possible for a charm to gain new functionality that is only supported
-  starting with a specific OpenStack version (e.g. gnocchi S3 support with
-  Stein).
-
-* A charm may occasionally only support a maximum or minimum series (e.g.
-  percona-cluster ending with eoan and mysql-innodb-cluster starting with
-  focal). This is normally due to upstream limitations (e.g. Percona XtraDB
-  Cluster no longer being supported on Focal).
-
-
.. note::
-
-   A charm's limitations concerning OpenStack versions and application features
-   are stated in its README file.
-
-.. _charm_types:
-
-Charm types
-~~~~~~~~~~~
-
-There are two general types of OpenStack charms: one that does use channels and
-one that does not (legacy).
-
-.. note::
-
-   For an overview of how charms are distributed to the end-user see
-   :doc:`cg:project/charm-delivery` in the Charm Guide.
-
-Channels
-^^^^^^^^
-
-With the channels type, a channel is dedicated to a single OpenStack release
-(release N-1 will be technically supported to assist with upgrades). This means
-that a charm that works for a recent series-openstack combination will
-generally not work on an older combination. Furthermore, there is a need to
-switch to a different channel in order to upgrade to a new OpenStack version --
-but not to a new series.
-
-
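For example, moving a channels-type charm to the channel that tracks the next
-OpenStack release might look like this (the application name and channel shown
-are illustrative only):
-
-.. code-block:: none
-
-   juju upgrade-charm keystone --channel=yoga/stable
-
-Legacy
-^^^^^^
-
-For the legacy charms, unless stated otherwise, each new revision of a charm
-includes all the functionality of the previous revision. This means that a
-charm that works for a recent series-openstack combination will also work on an
-older combination.
-
-The development of legacy charms has stopped at the 21.10 release of OpenStack
-Charms (and at the 21.06 release of Trilio Charms). The last supported
-series-openstack combination is ``focal-xena``.
-
-Software release cycles
------------------------
-
-Each software component has a predictable release cycle.
-
-.. list-table:: **Software release cycles**
-   :header-rows: 1
-   :widths: 14 12 50
-
-   * - Software
-     - Cycle (months)
-     - Schedule
-
-   * - OpenStack Charms
-     - 6
-     - https://docs.openstack.org/charm-guide/latest/release-schedule.html
-
-   * - OpenStack
-     - 6
-     - https://releases.openstack.org
-
-   * - Ubuntu
-     - 6
-     - https://wiki.ubuntu.com/Releases
-
-Ubuntu LTS releases
-~~~~~~~~~~~~~~~~~~~
-
-One out of every four Ubuntu releases is an LTS release (i.e. 2 year cycle).
-Charmed OpenStack must be LTS-based as OpenStack upgrades are dependent upon
-the `Ubuntu Cloud Archive`_ (UCA), which only supports LTS releases.
-
-The below graphic shows the release schedule of Ubuntu LTS releases and
-upstream OpenStack versions. The Ubuntu project and the OpenStack project have
-deliberately synchronised their respective release cycles.
-
-.. figure:: ./media/ubuntu-openstack-release-cycle.png
-   :scale: 80%
-   :alt: Ubuntu OpenStack release cycle
-
-.. role:: raw-html(raw)
-   :format: html
-
-:raw-html:`<br/>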
`
-
-For example, a deployment can begin on Ubuntu 20.04 LTS (that supports
-OpenStack Ussuri in its default package archive) and have the ability, over
-time, to upgrade OpenStack through versions V, W, X, and Y.
-
-.. note::
-
-   Charmed OpenStack on non-LTS Ubuntu releases is supported but should be
-   considered for testing purposes only.
-
-Upgrade order
--------------
-
-The order in which to upgrade the different software components is critical.
-The generic upgrade order is:
-
-#. charms (to latest stable revision for the current charm type)
-#. OpenStack (to latest stable version on the current series)
-#. series
-#. OpenStack (to desired stable version on the new series)
-
-An upgrade type can occur without the need for it to be followed by another
-upgrade type. For instance, the charms can be upgraded without the necessity of
-performing an OpenStack upgrade.
-
-However, the inverse is not true: an upgrade type may have a prerequisite
-upgrade type that must be fulfilled first. For instance, in order to upgrade a
-series one needs to ensure that OpenStack has been upgraded to the most recent
-available version on the current series.
-
-.. note::
-
-   Irrespective of OpenStack or series upgrades, the charms should be upgraded
-   before making topological changes to the cloud, conducting charm application
-   migrations, or submitting bug reports.
-
-Two example scenarios are provided next.
-
-target: a specific Ubuntu release
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-* Current state: OpenStack Xena on Ubuntu 20.04 LTS
-* Goal state: Ubuntu 22.04 LTS
-
-Upgrade path:
-
-#. Upgrade charms to latest stable revision for the current charm type
-#. Upgrade OpenStack from Xena to Yoga
-#. Upgrade series from focal to jammy
-
-Final result: OpenStack Yoga on Ubuntu 22.04 LTS
-
-target: a specific OpenStack version
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-* Current state: OpenStack Ussuri on Ubuntu 18.04 LTS
-* Goal state: OpenStack Victoria
-
-Upgrade path:
-
-#. Upgrade charms to latest stable revision for the current charm type
-#. Upgrade series from bionic to focal
-#. Upgrade OpenStack from Ussuri to Victoria
-
-Final result: OpenStack Victoria on Ubuntu 20.04 LTS
-
-Disable automatic hook retries
-------------------------------
-
-For all upgrade types it is recommended to disable automatic hook retries
-within the model containing the cloud. This will prevent the charms from
-attempting to resolve any encountered problems, thus providing an early
-opportunity for the operator to respond accordingly.
-
-Assuming the cloud model is the current working model, turn off hook retries in
-this way:
-
-.. code-block:: none
-
-   juju model-config automatically-retry-hooks=false
-
-This change should normally be reverted once the upgrade is completed.
-
-
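For example, to set the option back to its default value of 'true' once the
-upgrade is finished:
-
-.. code-block:: none
-
-   juju model-config automatically-retry-hooks=true
-
-Next steps
-----------
-
-Each upgrade type is broken down into more detail on the following pages:
-
-* :doc:`upgrade-charms`
-* :doc:`upgrade-openstack`
-* :doc:`upgrade-series`
-
-.. LINKS
-.. _Ubuntu Cloud Archive: https://wiki.ubuntu.com/OpenStack/CloudArchive
diff --git a/deploy-guide/source/upgrade-series-openstack.rst b/deploy-guide/source/upgrade-series-openstack.rst
deleted file mode 100644
index bca6115..0000000
--- a/deploy-guide/source/upgrade-series-openstack.rst
+++ /dev/null
@@ -1,1101 +0,0 @@
-:orphan:
-
-========================
-Series upgrade OpenStack
-========================
-
-This document will provide specific steps for how to perform a series upgrade
-across the entirety of a Charmed OpenStack cloud.
-
-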
.. warning::
-
-   This document is based upon the foundational knowledge and guidelines set
-   forth on the more general `Series upgrade`_ page. That reference must be
-   studied in-depth prior to attempting the steps outlined here. In particular,
-   ensure that the :ref:`Pre-upgrade requirements <pre-upgrade_requirements>`
-   are satisfied and that the :ref:`Workload specific preparations
-   <workload_specific_preparations>` have been addressed during planning.
-
-Downtime
---------
-
-Although the goal is to minimise downtime, the series upgrade process across a
-cloud will nonetheless result in some level of downtime for the control plane.
-
-When the machines associated with stateful applications such as percona-cluster
-and rabbitmq-server undergo a series upgrade, all cloud APIs will experience
-downtime, in addition to the stateful applications themselves.
-
-When machines associated with a single API application undergo a series
-upgrade, that individual API will also experience downtime. This is because it
-is necessary to pause services in order to avoid race condition errors.
-
-For those applications working in tandem with hacluster, as will be shown, some
-hacluster units will need to be paused before the upgrade. One should assume
-that the commencement of an outage coincides with this step (it will cause
-cluster quorum heartbeats to fail and the service VIP will consequently go
-offline).
-
-Generalised OpenStack series upgrade
-------------------------------------
-
-This section will summarise the series upgrade steps in the context of specific
-OpenStack applications. It is an enhancement of the :ref:`Generic series
-upgrade <generic_series_upgrade>` section in the companion document.
-
-Generally, this summary is well-suited to API applications (e.g. neutron-api,
-keystone, nova-cloud-controller).
-
-Applications for which this summary does **not** apply include:
-
-#. those that do not require the pausing of units and where application
-   leadership is irrelevant:
-
-   * nova-compute
-   * ceph-mon
-   * ceph-osd
-
-#. those that require a special upgrade workflow due to payload/upstream
-   requirements:
-
-   * percona-cluster
-   * rabbitmq-server
-
-.. note::
-
-   Let the machine associated with the leader of the principal application be
-   called the "principal leader machine" and its unit the "principal leader
-   unit".
-
-   Let the machines associated with the non-leaders of the principal
-   application be called the "principal non-leader machines" and their units
-   the "principal non-leader units".
-
-The steps are as follows:
-
-#. Set the default series for the principal application.
-
-#. If hacluster is used, pause the hacluster units not associated with the
-   principal leader machine.
-
-#. Pause the principal non-leader units.
-
-#. Perform a series upgrade on each of the paused machines:
-
-   #. Disable :ref:`Unattended upgrades <unattended_upgrades>`.
-
-   #. Perform any pre-upgrade :ref:`workload maintenance tasks
-      <workload_maintenance>`.
-
-   #. Invoke the :command:`prepare` sub-command.
-
-   #. Upgrade the operating system (APT commands).
-
-   #. Perform any post-upgrade tasks at the machine/unit level.
-
-   #. Re-enable Unattended upgrades.
-
-   #. Reboot.
-
-   #. Invoke the :command:`complete` sub-command.
-
-#. Pause the principal leader unit.
-
-#. Repeat step 4 for the paused principal leader machine.
-
-#. Perform any remaining post-upgrade tasks.
-
-#. Update the software sources for the principal application's machines.
-
-Procedures
-----------
-
-The procedures are categorised based on application types. The example scenario
-
used throughout is a 'xenial' to 'bionic' series upgrade, within an OpenStack
-release of Queens (i.e. the starting point is a UCA release of
-'xenial-queens').
-
-New default series for the model
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Ensure that any newly-created application units are based on the next series by
-setting the model's default series appropriately:
-
-.. code-block:: none
-
-   juju model-config default-series=bionic
-
-Stateful applications
-~~~~~~~~~~~~~~~~~~~~~
-
-This section covers the series upgrade procedure for containerised stateful
-applications. These include:
-
-* ceph-mon
-* percona-cluster
-* rabbitmq-server
-
-A stateful application is one that maintains the state of various aspects of
-the cloud. Clustered stateful applications, such as the ones given above,
-require a quorum to function properly. Therefore, a stateful application should
-not have all of its units restarted simultaneously; it must have the series of
-its corresponding machines upgraded sequentially.
-
-ceph-mon
-^^^^^^^^
-
-.. important::
-
-   During this upgrade there will NOT be a Ceph service outage.
-
-   The MON cluster will be maintained during the upgrade by the ceph-mon charm,
-   rendering application leadership irrelevant. Notably, ceph-mon units do not
-   need to be paused.
-
-This scenario is represented by the following partial :command:`juju status`
-command output:
-
-.. code-block:: console
-
-   App       Version  Status  Scale  Charm     Store       Channel  Rev  OS      Message
-   ceph-mon  12.2.13  active      3  ceph-mon  charmstore  stable   483  ubuntu  Unit is ready and clustered
-
-   Unit         Workload  Agent  Machine  Public address  Ports  Message
-   ceph-mon/0   active    idle   0/lxd/0  10.246.114.57          Unit is ready and clustered
-   ceph-mon/1   active    idle   1/lxd/0  10.246.114.56          Unit is ready and clustered
-   ceph-mon/2*  active    idle   2/lxd/0  10.246.114.26          Unit is ready and clustered
-
-#. Perform any workload maintenance pre-upgrade steps.
-
-   For ceph-mon, there are no recommended steps to take.
-
-#. Set the default series for the principal application:
-
-   .. code-block:: none
-
-      juju set-series ceph-mon bionic
-
-#. Perform a series upgrade of the machines in any order:
-
-   .. code-block:: none
-
-      juju upgrade-series 0/lxd/0 prepare bionic
-      juju ssh 0/lxd/0 sudo apt update
-      juju ssh 0/lxd/0 sudo apt full-upgrade
-      juju ssh 0/lxd/0 sudo do-release-upgrade
-
-   For ceph-mon, there are no post-upgrade steps; the prompt to reboot can be
-   answered in the affirmative.
-
-   Invoke the :command:`complete` sub-command:
-
-   .. code-block:: none
-
-      juju upgrade-series 0/lxd/0 complete
-
-#. Repeat step 3 for each of the remaining machines:
-
-   .. code-block:: none
-
-      juju upgrade-series 1/lxd/0 prepare bionic
-      juju ssh 1/lxd/0 sudo apt update
-      juju ssh 1/lxd/0 sudo apt full-upgrade
-      juju ssh 1/lxd/0 sudo do-release-upgrade   # and reboot
-      juju upgrade-series 1/lxd/0 complete
-
-   .. code-block:: none
-
-      juju upgrade-series 2/lxd/0 prepare bionic
-      juju ssh 2/lxd/0 sudo apt update
-      juju ssh 2/lxd/0 sudo apt full-upgrade
-      juju ssh 2/lxd/0 sudo do-release-upgrade   # and reboot
-      juju upgrade-series 2/lxd/0 complete
-
-#. Perform any remaining post-upgrade tasks.
-
-   For ceph-mon, there are no remaining post-upgrade steps.
-
-#. Update the software sources for the application's machines.
-
-   For ceph-mon, set the value of the ``source`` configuration option to
-   'distro':
-
-   .. code-block:: none
-
-      juju config ceph-mon source=distro
-
-The final partial :command:`juju status` output looks like this:
-
-.. 
code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - ceph-mon 12.2.13 active 3 ceph-mon charmstore stable 483 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - ceph-mon/0 active idle 0/lxd/0 10.246.114.57 Unit is ready and clustered - ceph-mon/1 active idle 1/lxd/0 10.246.114.56 Unit is ready and clustered - ceph-mon/2* active idle 2/lxd/0 10.246.114.26 Unit is ready and clustered - -Note that the version of Ceph has not been upgraded (from 12.2.13 - Luminous) -since the OpenStack release (of Queens) remains unchanged. - -rabbitmq-server -^^^^^^^^^^^^^^^ - -To ensure proper cluster health, the RabbitMQ cluster is not reformed until all -rabbitmq-server units are series upgraded. An action is then used to complete -the upgrade by bringing the cluster back online. - -.. warning:: - - During this upgrade there will be a RabbitMQ service outage. - -This scenario is represented by the following partial :command:`juju status` -command output: - -.. code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - rabbitmq-server 3.5.7 active 3 rabbitmq-server charmstore stable 118 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - rabbitmq-server/0* active idle 0/lxd/0 10.0.0.162 5672/tcp Unit is ready and clustered - rabbitmq-server/1 active idle 1/lxd/0 10.0.0.164 5672/tcp Unit is ready and clustered - rabbitmq-server/2 active idle 2/lxd/0 10.0.0.163 5672/tcp Unit is ready and clustered - -In summary, the principal leader unit is rabbitmq-server/0 and is deployed on -machine 0/lxd/0 (the principal leader machine). - -#. Perform any workload maintenance pre-upgrade steps. - - For rabbitmq-server, there are no recommended steps to take. - -#. Set the default series for the principal application: - - .. code-block:: none - - juju set-series rabbitmq-server bionic - -#. Pause the principal non-leader units: - - .. code-block:: none - - juju run-action --wait rabbitmq-server/1 pause - juju run-action --wait rabbitmq-server/2 pause - -#. Perform a series upgrade of the principal leader machine: - - .. code-block:: none - - juju upgrade-series 0/lxd/0 prepare bionic - juju ssh 0/lxd/0 sudo apt update - juju ssh 0/lxd/0 sudo apt full-upgrade - juju ssh 0/lxd/0 sudo do-release-upgrade - - For rabbitmq-server, there are no post-upgrade steps; the prompt to reboot - can be answered in the affirmative. - - Invoke the :command:`complete` sub-command: - - .. code-block:: none - - juju upgrade-series 0/lxd/0 complete - -#. Repeat step 4 for each of the principal non-leader machines: - - .. code-block:: none - - juju upgrade-series 1/lxd/0 prepare bionic - juju ssh 1/lxd/0 sudo apt update - juju ssh 1/lxd/0 sudo apt full-upgrade - juju ssh 1/lxd/0 sudo do-release-upgrade # and reboot - juju upgrade-series 1/lxd/0 complete - - .. code-block:: none - - juju upgrade-series 2/lxd/0 prepare bionic - juju ssh 2/lxd/0 sudo apt update - juju ssh 2/lxd/0 sudo apt full-upgrade - juju ssh 2/lxd/0 sudo do-release-upgrade # and reboot - juju upgrade-series 2/lxd/0 complete - -#. Perform any remaining post-upgrade tasks. - - For rabbitmq-server, run an action: - - .. code-block:: none - - juju run-action --wait rabbitmq-server/leader complete-cluster-series-upgrade - -#. Update the software sources for the application's machines. - - For rabbitmq-server, set the value of the ``source`` configuration option to - 'distro': - - .. 
code-block:: none - - juju config rabbitmq-server source=distro - -The final partial :command:`juju status` output looks like this: - -.. code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - rabbitmq-server 3.6.10 active 3 rabbitmq-server charmstore stable 118 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - rabbitmq-server/0* active idle 0/lxd/0 10.0.0.162 5672/tcp Unit is ready and clustered - rabbitmq-server/1 active idle 1/lxd/0 10.0.0.164 5672/tcp Unit is ready and clustered - rabbitmq-server/2 active idle 2/lxd/0 10.0.0.163 5672/tcp Unit is ready and clustered - -Note that the version of RabbitMQ has been upgraded (from 3.5.7 to 3.6.10) -since more recent software has been found in the Ubuntu package archive for -Bionic. - -percona-cluster -^^^^^^^^^^^^^^^ - -.. warning:: - - During this upgrade there will be a MySQL service outage. - -.. note:: - - These upstream resources may also be useful: - - * `Upgrading Percona XtraDB Cluster`_ - * `Percona XtraDB Cluster In-Place Upgrading Guide From 5.5 to 5.6`_ - * `Galera replication - how to recover a PXC cluster`_ - -To ensure proper cluster health, the Percona cluster is not reformed until all -percona-cluster units are series upgraded. An action is then used to complete -the upgrade by bringing the cluster back online. - -.. warning:: - - The eoan series is the last series supported by the percona-cluster charm. - It is replaced by the `mysql-innodb-cluster`_ and `mysql-router`_ charms in - the focal series. The migration steps are documented in `percona-cluster - charm - series upgrade to focal`_. - - Do not upgrade the machines hosting percona-cluster units to the focal - series. To be clear, if percona-cluster is containerised then it is the LXD - container that must not be upgraded. - -This scenario is represented by the following partial :command:`juju status` -command output: - -.. code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - percona-cluster 5.6.37 active 3 percona-cluster charmstore stable 302 ubuntu Unit is ready - percona-cluster-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - percona-cluster/0* active idle 0/lxd/1 10.0.0.165 3306/tcp Unit is ready - percona-cluster-hacluster/2 active idle 10.0.0.165 Unit is ready and clustered - percona-cluster/1 active idle 1/lxd/1 10.0.0.166 3306/tcp Unit is ready - percona-cluster-hacluster/0* active idle 10.0.0.166 Unit is ready and clustered - percona-cluster/2 active idle 2/lxd/1 10.0.0.167 3306/tcp Unit is ready - percona-cluster-hacluster/1 active idle 10.0.0.167 Unit is ready and clustered - -In summary, the principal leader unit is percona-cluster/0 and is deployed on -machine 0/lxd/1 (the principal leader machine). - -#. Perform any workload maintenance pre-upgrade steps. - - For percona-cluster, take a backup and transfer it to a secure location: - - .. code-block:: none - - juju run-action --wait percona-cluster/leader backup - juju scp -- -r percona-cluster/leader:/opt/backups/mysql /path/to/local/directory - - Permissions will need to be altered on the remote machine, and note that the - :command:`scp` command transfers **all** existing backups. - -#. Set the default series for the principal application: - - .. code-block:: none - - juju set-series percona-cluster bionic - -#. 
Pause the hacluster units not associated with the principal leader machine:
-
-   .. code-block:: none
-
-      juju run-action --wait percona-cluster-hacluster/0 pause
-      juju run-action --wait percona-cluster-hacluster/1 pause
-
-#. Pause the principal non-leader units:
-
-   .. code-block:: none
-
-      juju run-action --wait percona-cluster/1 pause
-      juju run-action --wait percona-cluster/2 pause
-
-   Leaving the principal leader unit up will ensure it has the latest MySQL
-   sequence number; it will be considered the most up-to-date cluster member.
-
-   At this point the partial :command:`juju status` output looks like this:
-
-   .. code-block:: console
-
-      App                        Version  Status       Scale  Charm            Store       Channel  Rev  OS      Message
-      percona-cluster            5.6.37   maintenance      3  percona-cluster  charmstore  stable   302  ubuntu  Paused. Use 'resume' action to resume normal service.
-      percona-cluster-hacluster           maintenance      3  hacluster        charmstore  stable    81  ubuntu  Paused. Use 'resume' action to resume normal service.
-
-      Unit                            Workload     Agent  Machine  Public address  Ports     Message
-      percona-cluster/0*              active       idle   0/lxd/1  10.0.0.165      3306/tcp  Unit is ready
-        percona-cluster-hacluster/2   active       idle            10.0.0.165                Unit is ready and clustered
-      percona-cluster/1               maintenance  idle   1/lxd/1  10.0.0.166      3306/tcp  Paused. Use 'resume' action to resume normal service.
-        percona-cluster-hacluster/0*  maintenance  idle            10.0.0.166                Paused. Use 'resume' action to resume normal service.
-      percona-cluster/2               maintenance  idle   2/lxd/1  10.0.0.167      3306/tcp  Paused. Use 'resume' action to resume normal service.
-        percona-cluster-hacluster/1   maintenance  idle            10.0.0.167                Paused. Use 'resume' action to resume normal service.
-
-#. Perform a series upgrade of the principal leader machine:
-
-   .. code-block:: none
-
-      juju upgrade-series 0/lxd/1 prepare bionic
-      juju ssh 0/lxd/1 sudo apt update
-      juju ssh 0/lxd/1 sudo apt full-upgrade
-      juju ssh 0/lxd/1 sudo do-release-upgrade
-
-   For percona-cluster, there are no post-upgrade steps; the prompt to reboot
-   can be answered in the affirmative.
-
-   Invoke the :command:`complete` sub-command:
-
-   .. code-block:: none
-
-      juju upgrade-series 0/lxd/1 complete
-
-#. Repeat step 5 for each of the principal non-leader machines:
-
-   .. code-block:: none
-
-      juju upgrade-series 1/lxd/1 prepare bionic
-      juju ssh 1/lxd/1 sudo apt update
-      juju ssh 1/lxd/1 sudo apt full-upgrade
-      juju ssh 1/lxd/1 sudo do-release-upgrade   # and reboot
-      juju upgrade-series 1/lxd/1 complete
-
-   .. code-block:: none
-
-      juju upgrade-series 2/lxd/1 prepare bionic
-      juju ssh 2/lxd/1 sudo apt update
-      juju ssh 2/lxd/1 sudo apt full-upgrade
-      juju ssh 2/lxd/1 sudo do-release-upgrade   # and reboot
-      juju upgrade-series 2/lxd/1 complete
-
-#. Perform any remaining post-upgrade tasks.
-
-   For percona-cluster, a sanity check should be performed on the leader unit's
-   databases and data.
-
-   Also, an action must be run:
-
-   .. code-block:: none
-
-      juju run-action --wait percona-cluster/leader complete-cluster-series-upgrade
-
-#. Update the software sources for the application's machines.
-
-   For percona-cluster, set the value of the ``source`` configuration option to
-   'distro':
-
-   .. code-block:: none
-
-      juju config percona-cluster source=distro
-
-The final partial :command:`juju status` output looks like this:
-
-.. 
code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - percona-cluster 5.7.20 active 3 percona-cluster charmstore stable 302 ubuntu Unit is ready - percona-cluster-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - percona-cluster/0* active idle 0/lxd/1 10.0.0.165 3306/tcp Unit is ready - percona-cluster-hacluster/2 active idle 10.0.0.165 Unit is ready and clustered - percona-cluster/1 active idle 1/lxd/1 10.0.0.166 3306/tcp Unit is ready - percona-cluster-hacluster/0* active idle 10.0.0.166 Unit is ready and clustered - percona-cluster/2 active idle 2/lxd/1 10.0.0.167 3306/tcp Unit is ready - percona-cluster-hacluster/1 active idle 10.0.0.167 Unit is ready and clustered - -Note that the version of Percona has been upgraded (from 5.6.37 to 5.7.20) -since more recent software has been found in the Ubuntu package archive for -Bionic. - -API applications -~~~~~~~~~~~~~~~~ - -This section covers series upgrade procedures for containerised API -applications. These include, but are not limited to: - -* cinder -* glance -* keystone -* neutron-api -* nova-cloud-controller - -Machines hosting API applications can have their series upgraded concurrently -because those applications are stateless. This results in a dramatically -reduced downtime for the application. A sequential approach will not reduce -downtime as the HA services will still need to be brought down during the -upgrade associated with the application leader. - -The following two sub-sections will show how to perform a series upgrade -concurrently for a single API application and for multiple API applications. - -Upgrading a single API application concurrently -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -This example procedure will be based on the keystone application. - -This scenario is represented by the following partial :command:`juju status` -command output: - -.. code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - keystone 13.0.4 active 3 keystone charmstore stable 330 ubuntu Application Ready - keystone-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - keystone/0* active idle 0/lxd/0 10.0.0.198 5000/tcp Unit is ready - keystone-hacluster/2 active idle 10.0.0.198 Unit is ready and clustered - keystone/1 active idle 1/lxd/0 10.0.0.196 5000/tcp Unit is ready - keystone-hacluster/0* active idle 10.0.0.196 Unit is ready and clustered - keystone/2 active idle 2/lxd/0 10.0.0.197 5000/tcp Unit is ready - keystone-hacluster/1 active idle 10.0.0.197 Unit is ready and clustered - -In summary, the principal leader unit is keystone/0 and is deployed on machine -0/lxd/0 (the principal leader machine). - -#. Set the default series for the principal application: - - .. code-block:: none - - juju set-series keystone bionic - -#. Pause the hacluster units not associated with the principal leader machine: - - .. code-block:: none - - juju run-action --wait keystone-hacluster/0 pause - juju run-action --wait keystone-hacluster/1 pause - -#. Pause the principal non-leader units: - - .. code-block:: none - - juju run-action --wait keystone/1 pause - juju run-action --wait keystone/2 pause - -#. Perform any workload maintenance pre-upgrade steps on all machines. There - are no keystone-specific steps to perform. - -#. 
Invoke the :command:`prepare` sub-command on all machines, **starting with - the principal leader machine**: - - .. code-block:: none - - juju upgrade-series 0/lxd/0 prepare bionic - juju upgrade-series 1/lxd/0 prepare bionic - juju upgrade-series 2/lxd/0 prepare bionic - - At this point the :command:`juju status` output looks like this: - - .. code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - keystone 13.0.4 blocked 3 keystone charmstore stable 330 ubuntu Unit paused. - keystone-hacluster blocked 3 hacluster charmstore stable 81 ubuntu Ready for do-release-upgrade. Set complete when finished - - Unit Workload Agent Machine Public address Ports Message - keystone/0* blocked idle 0/lxd/0 10.0.0.198 5000/tcp Ready for do-release-upgrade and reboot. Set complete when finished., Unit paused. - keystone-hacluster/2 blocked idle 10.0.0.198 Ready for do-release-upgrade. Set complete when finished - keystone/1 blocked idle 1/lxd/0 10.0.0.196 5000/tcp Ready for do-release-upgrade and reboot. Set complete when finished., Unit paused. - keystone-hacluster/0* blocked idle 10.0.0.196 Ready for do-release-upgrade. Set complete when finished - keystone/2 blocked idle 2/lxd/0 10.0.0.197 5000/tcp Ready for do-release-upgrade and reboot. Set complete when finished., Unit paused. - keystone-hacluster/1 blocked idle 10.0.0.197 Ready for do-release-upgrade. Set complete when finished - -#. Upgrade the operating system on all machines. The non-interactive method is - used here: - - .. code-block:: none - - juju run --machine=0/lxd/0,1/lxd/0,2/lxd/0 --timeout=10m \ - -- sudo apt-get update - - juju run --machine=0/lxd/0,1/lxd/0,2/lxd/0 --timeout=60m \ - -- sudo DEBIAN_FRONTEND=noninteractive apt-get --assume-yes \ - -o "Dpkg::Options::=--force-confdef" \ - -o "Dpkg::Options::=--force-confold" dist-upgrade - - juju run --machine=0/lxd/0,1/lxd/0,2/lxd/0 --timeout=120m \ - -- sudo DEBIAN_FRONTEND=noninteractive \ - do-release-upgrade -f DistUpgradeViewNonInteractive - - .. important:: - - Choose values for the ``--timeout`` option that are appropriate for the - task at hand. - -#. Perform any post-upgrade tasks. - - For keystone, there are no specific steps to perform. - -#. Reboot all machines: - - .. code-block:: none - - juju run --machine=0/lxd/0,1/lxd/0,2/lxd/0 -- sudo reboot - -#. Invoke the :command:`complete` sub-command on all machines: - - .. code-block:: none - - juju upgrade-series 0/lxd/0 complete - juju upgrade-series 1/lxd/0 complete - juju upgrade-series 2/lxd/0 complete - -#. Perform any remaining post-upgrade tasks. - - For keystone, there are no remaining post-upgrade steps. - -#. Update the software sources for the application's machines. - - For keystone, set the value of the ``openstack-origin`` configuration option - to 'distro': - - .. code-block:: none - - juju config keystone openstack-origin=distro - -The final partial :command:`juju status` output looks like this: - -.. 
code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - keystone 13.0.4 active 3 keystone charmstore stable 330 ubuntu Application Ready - keystone-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - keystone/0* active idle 0/lxd/0 10.0.0.198 5000/tcp Unit is ready - keystone-hacluster/2 active idle 10.0.0.198 Unit is ready and clustered - keystone/1 active idle 1/lxd/0 10.0.0.196 5000/tcp Unit is ready - keystone-hacluster/0* active idle 10.0.0.196 Unit is ready and clustered - keystone/2 active idle 2/lxd/0 10.0.0.197 5000/tcp Unit is ready - keystone-hacluster/1 active idle 10.0.0.197 Unit is ready and clustered - -Note that the version of Keystone has not been upgraded (from 13.0.4) since the -OpenStack release (of Queens) remains unchanged. - -Upgrading multiple API applications concurrently -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -This example procedure will be based on the nova-cloud-controller and glance -applications. - -This scenario is represented by the following partial :command:`juju status` -command output: - -.. code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - glance 16.0.1 active 3 glance charmstore stable 484 ubuntu Unit is ready - glance-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - nova-cloud-controller 17.0.13 active 3 nova-cloud-controller charmstore stable 555 ubuntu Unit is ready - nova-cloud-controller-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - glance/0* active idle 2/lxd/1 10.246.114.27 9292/tcp Unit is ready - glance-hacluster/0* active idle 10.246.114.27 Unit is ready and clustered - glance/1 active idle 2/lxd/3 10.246.114.64 9292/tcp Unit is ready - glance-hacluster/2 active idle 10.246.114.64 Unit is ready and clustered - glance/2 active idle 1/lxd/4 10.246.114.65 9292/tcp Unit is ready - glance-hacluster/1 active idle 10.246.114.65 Unit is ready and clustered - nova-cloud-controller/0* active idle 2/lxd/2 10.246.114.25 8774/tcp,8778/tcp Unit is ready - nova-cloud-controller-hacluster/0* active idle 10.246.114.25 Unit is ready and clustered - nova-cloud-controller/1 active idle 1/lxd/2 10.246.114.61 8774/tcp,8778/tcp Unit is ready - nova-cloud-controller-hacluster/1 active idle 10.246.114.61 Unit is ready and clustered - nova-cloud-controller/2 active idle 0/lxd/4 10.246.114.62 8774/tcp,8778/tcp Unit is ready - nova-cloud-controller-hacluster/2 active idle 10.246.114.62 Unit is ready and clustered - -In summary, - -* The glance principal leader unit is glance/0 and is deployed on machine - 2/lxd/1 (the glance principal leader machine). -* The nova-cloud-controller principal leader unit is nova-cloud-controller/0 - and is deployed on machine 2/lxd/2 (the nova-cloud-controller principal - leader machine). - -#. Set the default series for the principal applications: - - .. code-block:: none - - juju set-series glance bionic - juju set-series nova-cloud-controller bionic - -#. Pause the hacluster units not associated with their principal leader - machines: - - .. code-block:: none - - juju run-action --wait glance-hacluster/1 pause - juju run-action --wait glance-hacluster/2 pause - juju run-action --wait nova-cloud-controller-hacluster/1 pause - juju run-action --wait nova-cloud-controller-hacluster/2 pause - -#. 
Pause the principal non-leader units:
-
-   .. code-block:: none
-
-      juju run-action --wait glance/1 pause
-      juju run-action --wait glance/2 pause
-      juju run-action --wait nova-cloud-controller/1 pause
-      juju run-action --wait nova-cloud-controller/2 pause
-
-#. Perform any workload maintenance pre-upgrade steps on all machines. There
-   are no glance-specific or nova-cloud-controller-specific steps to perform.
-
-#. Invoke the :command:`prepare` sub-command on all machines, **starting with
-   the principal leader machines**. The procedure has been expedited slightly
-   by adding the ``--yes`` confirmation option:
-
-   .. code-block:: none
-
-      juju upgrade-series --yes 2/lxd/1 prepare bionic
-      juju upgrade-series --yes 2/lxd/2 prepare bionic
-      juju upgrade-series --yes 2/lxd/3 prepare bionic
-      juju upgrade-series --yes 1/lxd/4 prepare bionic
-      juju upgrade-series --yes 1/lxd/2 prepare bionic
-      juju upgrade-series --yes 0/lxd/4 prepare bionic
-
-#. Upgrade the operating system on all machines. The non-interactive method is
-   used here:
-
-   .. code-block:: none
-
-      juju run --machine=2/lxd/1,2/lxd/2,2/lxd/3,1/lxd/4,1/lxd/2,0/lxd/4 \
-         --timeout=20m -- sudo apt-get update
-
-      juju run --machine=2/lxd/1,2/lxd/2,2/lxd/3,1/lxd/4,1/lxd/2,0/lxd/4 \
-         --timeout=120m -- sudo DEBIAN_FRONTEND=noninteractive apt-get --assume-yes \
-         -o "Dpkg::Options::=--force-confdef" \
-         -o "Dpkg::Options::=--force-confold" dist-upgrade
-
-      juju run --machine=2/lxd/1,2/lxd/2,2/lxd/3,1/lxd/4,1/lxd/2,0/lxd/4 \
-         --timeout=240m -- sudo DEBIAN_FRONTEND=noninteractive \
-         do-release-upgrade -f DistUpgradeViewNonInteractive
-
-#. Perform any workload maintenance post-upgrade steps on all machines. There
-   are no glance-specific or nova-cloud-controller-specific steps to perform.
-
-#. Reboot all machines:
-
-   .. code-block:: none
-
-      juju run --machine=2/lxd/1,2/lxd/2,2/lxd/3,1/lxd/4,1/lxd/2,0/lxd/4 \
-         -- sudo reboot
-
-#. Invoke the :command:`complete` sub-command on all machines:
-
-   .. code-block:: none
-
-      juju upgrade-series 2/lxd/1 complete
-      juju upgrade-series 2/lxd/2 complete
-      juju upgrade-series 2/lxd/3 complete
-      juju upgrade-series 1/lxd/4 complete
-      juju upgrade-series 1/lxd/2 complete
-      juju upgrade-series 0/lxd/4 complete
-
-#. Update the software sources for the applications' machines.
-
-   For glance and nova-cloud-controller, set the value of the
-   ``openstack-origin`` configuration option to 'distro':
-
-   .. code-block:: none
-
-      juju config glance openstack-origin=distro
-      juju config nova-cloud-controller openstack-origin=distro
-
-The final partial :command:`juju status` output looks like this:
-
-.. 
code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - glance 16.0.1 active 3 glance charmstore stable 484 ubuntu Unit is ready - glance-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - nova-cloud-controller 17.0.13 active 3 nova-cloud-controller charmstore stable 555 ubuntu Unit is ready - nova-cloud-controller-hacluster active 3 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - - Unit Workload Agent Machine Public address Ports Message - glance/0* active idle 2/lxd/1 10.246.114.27 9292/tcp Unit is ready - glance-hacluster/0* active idle 10.246.114.27 Unit is ready and clustered - glance/1 active idle 2/lxd/3 10.246.114.64 9292/tcp Unit is ready - glance-hacluster/2 active idle 10.246.114.64 Unit is ready and clustered - glance/2 active idle 1/lxd/4 10.246.114.65 9292/tcp Unit is ready - glance-hacluster/1 active idle 10.246.114.65 Unit is ready and clustered - nova-cloud-controller/0* active idle 2/lxd/2 10.246.114.25 8774/tcp,8778/tcp Unit is ready - nova-cloud-controller-hacluster/0* active idle 10.246.114.25 Unit is ready and clustered - nova-cloud-controller/1 active idle 1/lxd/2 10.246.114.61 8774/tcp,8778/tcp Unit is ready - nova-cloud-controller-hacluster/1 active idle 10.246.114.61 Unit is ready and clustered - nova-cloud-controller/2 active idle 0/lxd/4 10.246.114.62 8774/tcp,8778/tcp Unit is ready - nova-cloud-controller-hacluster/2 active idle 10.246.114.62 Unit is ready and clustered - -Physical machines -~~~~~~~~~~~~~~~~~ - -This section looks at series upgrades from the standpoint of an individual -(physical) machine. This is different from looking at series upgrades from the -standpoint of applications that happen to be running on certain machines. - -Since the standard topology for Charmed OpenStack is to optimise -containerisation (with one service per container), a physical machine is -expected to directly host only those applications which cannot generally be -containerised. These notably include: - -* ceph-osd -* neutron-gateway -* nova-compute - -Naturally, when the physical machine is rebooted all containerised applications -will also go offline. - -It is assumed that all affected services, as much as is possible, are under -HA. Note that a hypervisor (nova-compute) cannot be made highly available. - -When performing a series upgrade on a physical machine more attention should be -accorded to workload maintenance pre-upgrade steps: - -* For compute nodes migrate all running VMs to another hypervisor. -* For network nodes migrate routers to another cloud node. -* Any storage related tasks that may be required. -* Any site specific tasks that may be required. - -The following two sub-sections will examine series upgrades for a single -physical machine and, concurrently, for multiple physical machines. - -Upgrading a single physical machine -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -This scenario is represented by the following partial :command:`juju status` -command output: - -.. 
code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - ceph-mon 12.2.13 active 1 ceph-mon charmstore stable 483 ubuntu Unit is ready and clustered - ceph-osd 12.2.13 active 1 ceph-osd charmstore stable 502 ubuntu Unit is ready (1 OSD) - glance 16.0.1 active 1 glance charmstore stable 484 ubuntu Unit is ready - glance-hacluster active 0 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - nova-cloud-controller 17.0.13 active 1 nova-cloud-controller charmstore stable 555 ubuntu Unit is ready - nova-cloud-controller-hacluster active 0 hacluster charmstore stable 81 ubuntu Unit is ready and clustered - nova-compute 17.0.13 active 1 nova-compute charmstore stable 578 ubuntu Unit is ready - - Unit Workload Agent Machine Public address Ports Message - ceph-mon/1 active idle 1/lxd/0 10.246.114.56 Unit is ready and clustered - ceph-osd/1 active idle 1 10.246.114.22 Unit is ready (1 OSD) - glance/2 active idle 1/lxd/4 10.246.114.65 9292/tcp Unit is ready - glance-hacluster/1 active idle 10.246.114.65 Unit is ready and clustered - nova-cloud-controller/1 active idle 1/lxd/2 10.246.114.61 8774/tcp,8778/tcp Unit is ready - nova-cloud-controller-hacluster/1 active idle 10.246.114.61 Unit is ready and clustered - nova-compute/0* active idle 1 10.246.114.22 Unit is ready - neutron-openvswitch/0* active idle 10.246.114.22 Unit is ready - - Machine State DNS Inst id Series AZ Message - 1 started 10.246.114.22 node-fontana xenial default Deployed - 1/lxd/0 started 10.246.114.56 juju-0642e9-1-lxd-0 bionic default series upgrade completed: success - 1/lxd/2 started 10.246.114.61 juju-0642e9-1-lxd-2 bionic default series upgrade completed: success - 1/lxd/4 started 10.246.114.65 juju-0642e9-1-lxd-4 bionic default series upgrade completed: success - -As is evidenced by the noted series for each Juju machine, only the physical -machine remains to have its series upgraded. This example procedure will -therefore involve the nova-compute and ceph-osd applications. Note however that -the nova-compute application is coupled with the neutron-openvswitch -subordinate application. - -Discarding those applications whose machines have already been upgraded we -arrive at the following output: - -.. code-block:: console - - App Version Status Scale Charm Store Channel Rev OS Message - ceph-osd 12.2.13 active 1 ceph-osd charmstore stable 502 ubuntu Unit is ready (1 OSD) - neutron-openvswitch 12.1.1 active 0 neutron-openvswitch charmstore stable 454 ubuntu Unit is ready - nova-compute 17.0.13 active 1 nova-compute charmstore stable 578 ubuntu Unit is ready - - Unit Workload Agent Machine Public address Ports Message - ceph-osd/1 active idle 1 10.246.114.22 Unit is ready (1 OSD) - nova-compute/0* active idle 1 10.246.114.22 Unit is ready - neutron-openvswitch/0* active idle 10.246.114.22 Unit is ready - -In summary, the ceph-osd and nova-compute applications are hosted on machine 1. -Since application leadership does not play a significant role with these two -applications, and because the hacluster application is not present, there will -be no units to pause. - -.. important:: - - As was the case for the upgrade procedure involving the ceph-mon - application, during the upgrade involving ceph-osd, there will NOT be a Ceph - service outage. - -#. It is recommended to set the Ceph cluster OSDs to 'noout' to prevent the - rebalancing of data. This is typically done at the application level (i.e. - not at the unit or machine level): - - .. 
code-block:: none - - juju run-action --wait ceph-mon/leader set-noout - -#. Perform any workload maintenance pre-upgrade steps. - - All running VMs should be migrated to another hypervisor. See cloud - operation `Live migrate VMs from a running compute node`_. - -#. Perform a series upgrade of the machine: - - .. code-block:: none - - juju upgrade-series 1 prepare bionic - juju ssh 1 sudo apt update - juju ssh 1 sudo apt full-upgrade - juju ssh 1 sudo do-release-upgrade # and reboot - juju upgrade-series 1 complete - -#. Perform any remaining post-upgrade tasks. - - If OSDs were previously set to 'noout' then verify the up/in status of the - OSDs and then unset 'noout' for the cluster: - - .. code-block:: none - - juju run --unit ceph-mon/leader -- ceph status - juju run-action --wait ceph-mon/leader unset-noout - -#. Update the software sources for the machine. - - .. caution:: - - As was done in previous procedures, only set software sources once all - machines for the associated applications have had their series upgraded. - - For the principal applications ceph-osd and nova-compute, set the - appropriate configuration option to 'distro': - - .. code-block:: none - - juju config nova-compute openstack-origin=distro - juju config ceph-osd source=distro - - .. note:: - - Although updating the software sources more than once on the same machine - may appear redundant it is recommended to do so. - -Upgrading multiple physical hosts concurrently -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -When physical machines have their series upgraded concurrently Availability -Zones need to be taken into account. Machines should be placed into upgrade -groups such that any API services running on them have a maximum of one unit -per group. This is to ensure API availability at the reboot stage. - -This simplified bundle is used to demonstrate the general idea: - -.. code-block:: yaml - - series: xenial - machines: - 0: {} - 1: {} - 2: {} - 3: {} - 4: {} - 5: {} - applications: - nova-compute: - charm: cs:nova-compute - num_units: 3 - options: - openstack-origin: cloud:xenial-queens - to: - - 0 - - 2 - - 4 - keystone: - charm: cs:keystone - num_units: 3 - options: - vip: 10.85.132.200 - openstack-origin: cloud:xenial-queens - to: - - lxd:1 - - lxd:3 - - lxd:5 - keystone-hacluster: - charm: cs:hacluster - options: - cluster_count: 3 - -Three upgrade groups could consist of the following machines: - -#. Machines 0 and 1 -#. Machines 2 and 3 -#. Machines 4 and 5 - -In this way, a less time-consuming series upgrade can be performed while still -ensuring the availability of services. - -.. caution:: - - For the ceph-osd application, ensure that rack-aware replication rules exist - in the CRUSH map if machines are being rebooted together. This is to prevent - significant interruption to running workloads from occurring if the - same placement group is hosted on those machines. For example, if ceph-mon - is deployed with ``customize-failure-domain`` set to 'true' and the ceph-osd - units are hosted on machines in three or more separate Juju AZs you can - safely reboot ceph-osd machines simultaneously in the same zone. See - `Ceph AZ`_ in `Infrastructure high availability`_ for details. - -Automation ----------- - -Series upgrades across an OpenStack cloud can be time consuming, even when -using concurrent methods wherever possible. They can also be tedious and thus -susceptible to human error. - -The following code examples encapsulate the processes described in this -document. 
They are provided solely to illustrate the methods used to develop -and test the series upgrade primitives: - -* `Parallel tests`_: An example that is used as a functional verification of - a series upgrade in the OpenStack Charms project. Search for function - ``test_200_run_series_upgrade``. -* `Upgrade helpers`_: A set of helpers used in the above upgrade example. - -.. caution:: - - The example code should only be used for its intended use case of - development and testing. Do not attempt to automate a series upgrade on a - production cloud. - -.. LINKS -.. _Series upgrade: upgrade-series.html -.. _Parallel tests: https://github.com/openstack-charmers/zaza-openstack-tests/blob/master/zaza/openstack/charm_tests/series_upgrade/parallel_tests.py -.. _Upgrade helpers: https://github.com/openstack-charmers/zaza-openstack-tests/blob/master/zaza/openstack/utilities/parallel_series_upgrade.py -.. _Upgrading Percona XtraDB Cluster: https://www.percona.com/doc/percona-xtradb-cluster/LATEST/howtos/upgrade_guide.html -.. _Percona XtraDB Cluster In-Place Upgrading Guide From 5.5 to 5.6: https://www.percona.com/doc/percona-xtradb-cluster/5.6/upgrading_guide_55_56.html -.. _Galera replication - how to recover a PXC cluster: https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster -.. _mysql-innodb-cluster: https://jaas.ai/mysql-innodb-cluster -.. _mysql-router: https://jaas.ai/mysql-router -.. _percona-cluster charm - series upgrade to focal: percona-series-upgrade-to-focal.html -.. _Live migrate VMs from a running compute node: https://docs.openstack.org/charm-guide/latest/admin/ops-live-migrate-vms.html -.. _Ceph AZ: https://docs.openstack.org/charm-guide/latest/admin/ha.html#ceph-az -.. _Infrastructure high availability: https://docs.openstack.org/charm-guide/latest/admin/ha.html diff --git a/deploy-guide/source/upgrade-series.rst b/deploy-guide/source/upgrade-series.rst deleted file mode 100644 index 68611db..0000000 --- a/deploy-guide/source/upgrade-series.rst +++ /dev/null @@ -1,360 +0,0 @@ -============== -Series upgrade -============== - -The purpose of this document is to provide foundational knowledge for preparing -an administrator to perform a series upgrade across a Charmed OpenStack cloud. -This translates to upgrading the operating system of every cloud node to an -entirely new version. - -Please read the following before continuing: - -* :doc:`upgrade-overview` -* :doc:`cg:release-notes/index` -* :doc:`cg:project/issues-and-procedures` - -Once this document has been studied the administrator will be ready to graduate -to the :doc:`Series upgrade OpenStack ` guide that -describes the process in more detail. - -Concerning the cloud being operated upon, the following is assumed: - -* It is being upgraded from one LTS series to another (e.g. xenial to - bionic, bionic to focal, etc.). -* Its nodes are backed by MAAS. -* Its services are highly available. -* It is being upgraded with minimal downtime. - -.. warning:: - - Upgrading a single production machine from one LTS to another is a serious - task. Doing so for every cloud node can be that much harder. Attempting to - do this with minimal cloud downtime is an order of magnitude more complex. - - Such an undertaking should be executed by persons who are intimately - familiar with Juju and the currently deployed charms (and their related - applications). It should first be tested on a non-production cloud that - closely resembles the production environment. 
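-
-As a first sanity check, it can help to survey the series currently in use
-across the cloud. One way to do so (a sketch only):
-
-.. code-block:: none
-
-   juju machines --format=yaml | grep series | sort | uniq -c
-
-The Series column of regular :command:`juju machines` output carries the same
-information.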
- -Upgrade candidate availability ------------------------------- - -Ensure that there is an upgrade candidate available. Charmed OpenStack is -primarily designed to run on Ubuntu LTS releases, and an Ubuntu system is -configured, by default, to upgrade only to the next LTS. In addition, this will -be possible only once the first LTS point release is published (see the `Ubuntu -releases wiki page`_ for release date information). For example, an upgrade to -Focal was possible starting on August 6, 2020. - -.. caution:: - - The Juju tooling will initiate the upgrade process irrespective of whether - an upgrade candidate is available or not. A cancelled upgrade is not fatal, - but it will leave erroneous messaging in :command:`juju status` output. - -The Juju :command:`upgrade-series` command ------------------------------------------- - -The Juju :command:`upgrade-series` command is the cornerstone of the entire -procedure. This command manages an operating system upgrade of a targeted -machine and operates on every application unit hosted on that machine. The -command works in conjunction with either the :command:`prepare` or the -:command:`complete` sub-command. - -The basic process is to inform the units on a machine that a series upgrade -is about to commence, to perform the upgrade, and then inform the units that -the upgrade has finished. In most cases with the OpenStack charms, units will -first be paused and be left with a workload status of "blocked" and a message -of "Ready for do-release-upgrade and reboot." - -For example, to inform units on machine '0' that an upgrade (to series -'bionic') is about to occur: - -.. code-block:: none - - juju upgrade-series 0 prepare bionic - -The :command:`prepare` sub-command causes **all** the charms (including -subordinates) on the machine to run their ``pre-series-upgrade`` hook. - -The administrator must then perform the traditional steps involved in upgrading -the OS on the targeted machine (in this example, machine '0'). For example, -update/upgrade packages with :command:`apt update && apt full-upgrade`; invoke -the :command:`do-release-upgrade` command; and reboot the machine once -complete. - -The :command:`complete` sub-command causes **all** the charms (including -subordinates) on the machine to run their ``post-series-upgrade`` hook. In most -cases with the OpenStack charms, configuration files will be re-written, units -will be resumed automatically (if paused), and be left with a workload status -of "active" and a message of "Unit is ready": - -.. code-block:: none - - juju upgrade-series 0 complete - -At this point the series upgrade on the machine and its charms is now done. In -the :command:`juju status` output the machine's entry under the Series column -will have changed from 'xenial' to 'bionic'. - -.. note:: - - Charms are not obliged to support the two series upgrade hooks but they do - make for a more intelligent and a less error-prone series upgrade. - -Containers (and their charms) hosted on the target machine remain unaffected by -this command. However, during the required post-upgrade reboot of the host all -containerised services will naturally be unavailable. - -See the Juju documentation to learn more about the `series upgrade`_ feature. - -.. _pre-upgrade_requirements: - -Pre-upgrade requirements ------------------------- - -This is a list of requirements that apply to any cloud. They must be met before -making any changes. 
-
-.. _pre-upgrade_requirements:
-
-Pre-upgrade requirements
-------------------------
-
-This is a list of requirements that apply to any cloud. They must be met before
-making any changes.
-
-* All the cloud nodes should be using the same series, be in good working
-  order, and be updated with the latest stable software packages (APT
-  upgrades).
-
-* The cloud should be running the latest OpenStack release supported by the
-  current series. See `Ubuntu OpenStack release cycle`_ and `OpenStack
-  upgrade`_.
-
-* The cloud should be fully operational and error-free.
-
-* All currently deployed charms should be upgraded to the latest stable charm
-  revision. See `Charms upgrade`_.
-
-* The Juju model comprising the cloud should be error-free (e.g. there should
-  be no charm hook errors).
-
-.. _unattended_upgrades:
-
-Unattended upgrades
--------------------
-
-Automatic package updates should be disabled on a node that is about to undergo
-a series upgrade. This is to avoid potential conflicts with the manual (or
-scripted) APT steps. One way to achieve this is with:
-
-.. code-block:: none
-
-   sudo dpkg-reconfigure -plow unattended-upgrades
-
-Once the upgrade is complete it is advised to re-enable unattended upgrades for
-security reasons.
-
-.. _workload_specific_preparations:
-
-Workload specific preparations
-------------------------------
-
-These are preparations that are specific to the current cloud deployment.
-Completing them in advance is an integral part of the upgrade.
-
-Charm upgradability
-~~~~~~~~~~~~~~~~~~~
-
-Verify the documented series upgrade processes for all currently deployed
-charms. Some charms, especially third-party charms, may either not have
-implemented series upgrade yet or simply may not work with the target series.
-Pay particular attention to SDN (software defined networking) and storage
-charms as these play a crucial role in cloud operations.
-
-.. _workload_maintenance:
-
-Workload maintenance
-~~~~~~~~~~~~~~~~~~~~
-
-Any workload-specific pre- and post-series-upgrade maintenance tasks should be
-readied in advance. For example, if a node's workload requires a database then
-a pre-upgrade backup plan should be drawn up. Similarly, if a workload requires
-settings to be adjusted post-upgrade then those changes should be prepared
-ahead of time. Pay particular attention to stateful services due to their
-importance in cloud operations. Examples of such tasks include evacuating a
-compute node, switching an HA router to another node, and rebalancing storage.
-
-Pre-upgrade tasks are performed before issuing the :command:`prepare`
-sub-command, and post-upgrade tasks are done immediately prior to issuing the
-:command:`complete` sub-command.
-
-Workflow: sequential vs. concurrent
------------------------------------
-
-In terms of the workflow there are two approaches:
-
-* Sequential - upgrading one machine at a time
-* Concurrent - upgrading a group of machines simultaneously
-
-Normally, it is best to upgrade sequentially as this ensures data reliability
-and availability (an HA cloud is assumed). This approach also minimises
-adverse effects on the deployment if something goes wrong.
-
-However, for even moderately sized clouds, an intervention based purely on a
-sequential approach can take a very long time to complete. This is where the
-concurrent method becomes attractive.
-
-In general, a concurrent approach is a viable option for API applications but
-is not an option for stateful applications. Over the course of a cloud-wide
-series upgrade a hybrid strategy is therefore a reasonable choice; a sketch of
-the concurrent approach is given below.
-
-To be clear, the above pertains to upgrading the series on machines associated
-with a single application. It is, however, also possible to apply the same
-thinking to multiple applications.
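-
-As a minimal sketch of the concurrent approach for a stateless API application
-(the machine numbers and series are illustrative), each step can be fanned out
-over a group of machines with a shell loop; the ``--yes`` flag skips the
-confirmation prompt:
-
-.. code-block:: none
-
-   # Prepare machines 4, 5, and 6 for the series upgrade in one pass
-   for machine in 4 5 6; do
-       juju upgrade-series $machine prepare bionic --yes
-   done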
-
-Application leadership
-----------------------
-
-`Application leadership`_ plays a role in determining the order in which
-machines will have their series upgraded. The guiding principle is that an
-application's non-leader units (if they exist) are upgraded (in no particular
-order) prior to its leader unit. There are exceptions to this, however, and
-they will be indicated on the :doc:`Series upgrade OpenStack
-<upgrade-series-openstack>` page.
-
-.. note::
-
-   Juju will not transfer the leadership of an application (and any
-   subordinate) to another unit while the application is undergoing a series
-   upgrade. This allows a charm to make assumptions that will lead to a more
-   reliable outcome.
-
-Assuming that a cloud is intended to eventually undergo a series upgrade, this
-guideline will generally influence the cloud's topology. Containerisation is an
-effective response to this.
-
-.. important::
-
-   Applications should be co-located on the same machine only if leadership
-   plays a negligible role. Applications deployed with the compute and storage
-   charms fall into this category.
-
-.. _generic_series_upgrade:
-
-Generic series upgrade
-----------------------
-
-This section contains a generic overview of a series upgrade for three
-machines, each hosting a unit of the `ubuntu`_ application. The initial and
-target series are xenial and bionic, respectively.
-
-This scenario is represented by the following :command:`juju status` command
-output:
-
-.. code-block:: console
-
-   Model    Controller       Cloud/Region    Version  SLA          Timestamp
-   upgrade  maas-controller  mymaas/default  2.7.6    unsupported  18:33:49Z
-
-   App      Version  Status  Scale  Charm   Store       Rev  OS      Notes
-   ubuntu1  16.04    active      3  ubuntu  jujucharms  15   ubuntu
-
-   Unit        Workload  Agent  Machine  Public address  Ports  Message
-   ubuntu1/0*  active    idle   0        10.0.0.241             ready
-   ubuntu1/1   active    idle   1        10.0.0.242             ready
-   ubuntu1/2   active    idle   2        10.0.0.243             ready
-
-   Machine  State    DNS         Inst id  Series  AZ     Message
-   0        started  10.0.0.241  node2    xenial  zone3  Deployed
-   1        started  10.0.0.242  node3    xenial  zone4  Deployed
-   2        started  10.0.0.243  node1    xenial  zone5  Deployed
-
-.. important::
-
-   The asterisk in the Unit column denotes the leader. Here, ``ubuntu1/0`` is
-   the leader and its machine ID is 0.
-
-First ensure that any new applications will (by default) use the new series, in
-this case bionic. This is done by configuring at the model level:
-
-.. code-block:: none
-
-   juju model-config default-series=bionic
-
-Now do the same at the application level. This will affect any new units of the
-existing application, in this case 'ubuntu1':
-
-.. code-block:: none
-
-   juju set-series ubuntu1 bionic
-
-To perform the actual series upgrade we begin with a non-leader machine (1):
-
-.. code-block:: none
-   :linenos:
-
-   # Perform any workload maintenance pre-upgrade steps here
-   juju upgrade-series 1 prepare bionic
-   juju ssh 1 sudo apt update
-   juju ssh 1 sudo apt full-upgrade
-   juju ssh 1 sudo do-release-upgrade
-   # Perform any workload maintenance post-upgrade steps here
-   # Reboot the machine (if not already done)
-   juju upgrade-series 1 complete
-
-.. note::
-
-   It is recommended to use a terminal multiplexer (e.g. tmux) in order to
-   prevent a network disruption from breaking the invoked commands.
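-
-Once the :command:`complete` step has finished, the result can be verified
-from the Juju client (a minimal sketch; :command:`lsb_release` ships with
-Ubuntu):
-
-.. code-block:: none
-
-   # The machine's Series column should now read 'bionic'
-   juju status ubuntu1
-
-   # Confirm the running series directly on the machine
-   juju ssh 1 lsb_release -cs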
-In this generic example there are no `workload maintenance`_ steps to perform.
-If there were post-upgrade steps then the prompt to reboot the machine at the
-end of :command:`do-release-upgrade` should be answered in the negative, and
-the reboot initiated manually at line 7 (i.e. :command:`sudo reboot`).
-
-It is possible to invoke the :command:`complete` sub-command before the
-upgraded machine is ready to process it. In that case Juju will block until
-the unit is ready again after the restart.
-
-In lines 4 and 5 the upgrade proceeds in the usual interactive fashion. If a
-non-interactive mode is preferred, those two lines can be replaced with:
-
-.. code-block:: none
-
-   juju ssh 1 sudo DEBIAN_FRONTEND=noninteractive apt-get --assume-yes \
-      -o "Dpkg::Options::=--force-confdef" \
-      -o "Dpkg::Options::=--force-confold" dist-upgrade
-   juju ssh 1 sudo DEBIAN_FRONTEND=noninteractive \
-      do-release-upgrade -f DistUpgradeViewNonInteractive
-
-The :command:`apt-get` command is preferred over :command:`apt` for
-non-interactive use (and for scripting in general).
-
-By default, an LTS release will not have an upgrade candidate until the "point
-release" of the next LTS is published. You can override this policy by using
-the ``-d`` (development) option with the :command:`do-release-upgrade` command.
-
-.. caution::
-
-   Performing a series upgrade non-interactively can be risky so the decision
-   to do so should be made only after careful deliberation.
-
-The remaining non-leader machine (2) is then upgraded:
-
-.. code-block:: none
-
-   juju upgrade-series 2 prepare bionic
-   ...
-   ...
-
-Finally, the leader machine (0) is upgraded in the same way.
-
-Next steps
-----------
-
-When you are ready to perform a series upgrade across your cloud proceed to
-the :doc:`Series upgrade OpenStack <upgrade-series-openstack>` page.
-
-.. LINKS
-.. _Ubuntu releases wiki page: https://wiki.ubuntu.com/Releases
-.. _Charms upgrade: upgrade-charms.html
-.. _OpenStack upgrade: upgrade-openstack.html
-.. _Known OpenStack upgrade issues: upgrade-issues.html
-.. _series upgrade: https://discourse.charmhub.io/t/upgrading-a-machines-series
-.. _Ubuntu OpenStack release cycle: https://ubuntu.com/about/release-cycle#ubuntu-openstack-release-cycle
-.. _Application leadership: https://discourse.charmhub.io/t/implementing-leadership
-.. _ubuntu: https://jaas.ai/ubuntu
diff --git a/deploy-guide/test/redirect-tests.txt b/deploy-guide/test/redirect-tests.txt
index ae5a872..6e571b2 100644
--- a/deploy-guide/test/redirect-tests.txt
+++ b/deploy-guide/test/redirect-tests.txt
@@ -56,3 +56,8 @@
 /project-deploy-guide/charm-deployment-guide/latest/charmhub-migration.html 301 /charm-guide/latest/project/procedures/charmhub-migration.html
 /project-deploy-guide/charm-deployment-guide/latest/ovn-migration.html 301 /charm-guide/latest/project/procedures/ovn-migration.html
 /project-deploy-guide/charm-deployment-guide/latest/upgrade-special.html 301 /charm-guide/latest/project/issues-and-procedures.html
+/project-deploy-guide/charm-deployment-guide/latest/upgrade-charms.html 301 /charm-guide/latest/admin/upgrades/charms.html
+/project-deploy-guide/charm-deployment-guide/latest/upgrade-series.html 301 /charm-guide/latest/admin/upgrades/series.html
+/project-deploy-guide/charm-deployment-guide/latest/upgrade-series-openstack.html 301 /charm-guide/latest/admin/upgrades/series-openstack.html
+/project-deploy-guide/charm-deployment-guide/latest/upgrade-openstack.html 301 /charm-guide/latest/admin/upgrades/openstack.html
+/project-deploy-guide/charm-deployment-guide/latest/upgrade-overview.html 301 /charm-guide/latest/admin/upgrades/overview.html
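
The new redirect rules above can be spot-checked against the published site
once deployed (a minimal sketch using :command:`curl`; the docs.openstack.org
host is where these guides are published):

.. code-block:: none

   # Expect a 301 and the new Charm Guide path in the Location header
   curl -sI https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/upgrade-series.html \
       | grep -iE '^(HTTP|location)'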