Remove wait option from all run commands
The Charm Guide was recently updated for Juju 3.x. However, it was not known at the time that the semantics of the --wait option for the `juju run` command had changed. In Juju 3.x, the run command stays in the foreground by default (a --background option has been added to restore the previous 2.9 default behaviour). The --wait option is now a timeout and, if used, requires a value. Since the previous PR simply substituted commands, all `juju run --wait` commands will fail.

Change-Id: I6bb90762ad5cb5ca97ca311501b1ff7d3d9a3ccb
Signed-off-by: Peter Matulis <peter.matulis@canonical.com>
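The semantic change described above can be sketched as follows (the unit, action, and timeout value are illustrative placeholders, not commands from this change):

```
# Juju 2.9: juju run returned immediately by default;
# --wait (no value) blocked until the action completed
juju run --wait vault/leader generate-root-ca

# Juju 3.x: juju run blocks in the foreground by default,
# so the bare --wait flag is unnecessary (and now invalid)
juju run vault/leader generate-root-ca

# Juju 3.x: --wait takes a timeout value if used at all
juju run --wait=2m vault/leader generate-root-ca

# Juju 3.x: --background restores the 2.9 detach-by-default behaviour
juju run --background vault/leader generate-root-ca
```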
This commit is contained in:
parent c78a13323e
commit 995c6ed96c
@@ -92,7 +92,7 @@ To see more detail the ``show-deferred-events`` action is used:

 .. code-block:: none

-   juju run --wait neutron-openvswitch/1 show-deferred-events
+   juju run neutron-openvswitch/1 show-deferred-events

    unit-neutron-openvswitch-1:
      UnitId: neutron-openvswitch/1

@@ -122,7 +122,7 @@ action:

 .. code-block:: none

-   juju run --wait neutron-openvswitch/1 restart-services deferred-only=True
+   juju run neutron-openvswitch/1 restart-services deferred-only=True

 The argument ``deferred-only`` ensures that only the necessary services are
 restarted (for a charm that manages multiple services).

@@ -156,7 +156,7 @@ action:

 .. code-block:: none

-   juju run --wait neutron-openvswitch/1 run-deferred-hooks
+   juju run neutron-openvswitch/1 run-deferred-hooks

 .. LINKS

@@ -625,7 +625,7 @@ below is generated by the :command:`token create` subcommand:

    vault operator unseal KEY-2
    vault operator unseal KEY-3
    vault token create -ttl=10m
-   juju run --wait vault/leader authorize-charm token=s.ROnC91Y3ByWDDncoZJ3YMtaY
+   juju run vault/leader authorize-charm token=s.ROnC91Y3ByWDDncoZJ3YMtaY

 Here is output from the :command:`juju status` command for this deployment:

@@ -783,7 +783,7 @@ use a chain.

 .. code-block:: none

-   juju run --wait vault/leader generate-root-ca
+   juju run vault/leader generate-root-ca

 Here is select output from the :command:`juju status` command for a minimal
 deployment of OVN with MySQL 8:
@@ -283,7 +283,7 @@ well-known caveats, or just valuable tips.

 As noted under `Stopping and starting services`_, this document encourages the
 use of actions for managing application services. The general syntax is::

-   juju run --wait <unit> <action>
+   juju run <unit> <action>

 In the procedures that follow, <unit> will be replaced by an example only (e.g.
 ``nova-compute/0``). You will need to substitute in the actual unit for your

@@ -349,7 +349,7 @@ cluster

 3. Set the cluster-wide ``noout`` option, on any MON unit, to prevent data
    rebalancing from occurring when OSDs start disappearing from the network::

-   juju run --wait ceph-mon/1 set-noout
+   juju run ceph-mon/1 set-noout

 Query status again to ensure that the option is set::

@@ -375,7 +375,7 @@ cluster

 Now pause the service::

-   juju run --wait ceph-radosgw/0 pause
+   juju run ceph-radosgw/0 pause

 Verify that the service has stopped::

@@ -385,7 +385,7 @@ cluster

 5. Stop all of a unit's OSDs. Do this on **each** ``ceph-osd`` unit::

-   juju run --wait ceph-osd/1 stop osds=all
+   juju run ceph-osd/1 stop osds=all

 Once done, verify that all of the cluster's OSDs are down::

@@ -432,7 +432,7 @@ component

 a. Mark the OSD (with id 2) on a ``ceph-osd`` unit as 'out'::

-   juju run --wait ceph-osd/2 osd-out osds=2
+   juju run ceph-osd/2 osd-out osds=2

 b. Do not mark OSDs on another unit as 'out' until the cluster has recovered
    from the loss of the current one (run a status check).

@@ -441,13 +441,13 @@ component

 Mark the OSD (with id 2) on a ``ceph-osd`` unit as 'down'::

-   juju run --wait ceph-osd/2 stop osds=2
+   juju run ceph-osd/2 stop osds=2

 5. **ceph-osd** - To take 'out' all the OSDs on a single unit:

 a. Mark all the OSDs on a ``ceph-osd`` unit as 'out'::

-   juju run --wait ceph-osd/2 osd-out osds=all
+   juju run ceph-osd/2 osd-out osds=all

 b. Do not mark OSDs on another unit as 'out' until the cluster has recovered
    from the loss of the current ones (run a status check).

@@ -456,7 +456,7 @@ component

 Mark all the OSDs on a ``ceph-osd`` unit as 'down'::

-   juju run --wait ceph-osd/2 stop osds=all
+   juju run ceph-osd/2 stop osds=all

 startup
 ^^^^^^^
@@ -478,12 +478,12 @@ started in this order:

 a. the ``noout`` option was set, you will need to unset it. On any MON unit::

-   juju run --wait ceph-mon/0 unset-noout
+   juju run ceph-mon/0 unset-noout

 b. a RADOS Gateway service was paused, you will need to resume it. Do this for
    **each** ``ceph-radosgw`` unit::

-   juju run --wait ceph-radosgw/0 resume
+   juju run ceph-radosgw/0 resume

 Finally, ensure that the cluster is in a healthy state by running a status
 check on any MON unit::

@@ -510,13 +510,13 @@ component

 a. Re-insert the OSD (with id 2) on the ``ceph-osd`` unit::

-   juju run --wait ceph-osd/1 osd-in osds=2
+   juju run ceph-osd/1 osd-in osds=2

 4. **ceph-osd** - To set as 'in' all the OSDs on a unit:

 a. Re-insert the OSDs on the ``ceph-osd`` unit::

-   juju run --wait ceph-osd/1 osd-in osds=all
+   juju run ceph-osd/1 osd-in osds=all

 b. Do not re-insert OSDs on another unit until the cluster has recovered
    from the addition of the current ones (run a status check).

@@ -531,14 +531,14 @@ shutdown

 To pause the Cinder service::

-   juju run --wait cinder/0 pause
+   juju run cinder/0 pause

 startup
 ^^^^^^^

 To resume the Cinder service::

-   juju run --wait cinder/0 resume
+   juju run cinder/0 resume

 -------------------------------------------------------------------------------
@@ -569,7 +569,7 @@ read queries

 To see the etcd cluster status. On any ``etcd`` unit::

-   juju run --wait etcd/0 health
+   juju run etcd/0 health

 loss of etcd quorum
 ^^^^^^^^^^^^^^^^^^^

@@ -613,7 +613,7 @@ shutdown

 To pause the Glance service::

-   juju run --wait glance/0 pause
+   juju run glance/0 pause

 .. important::

@@ -625,7 +625,7 @@ startup

 To resume the Glance service::

-   juju run --wait glance/0 resume
+   juju run glance/0 resume

 .. important::

@@ -642,7 +642,7 @@ shutdown

 To pause the Keystone service::

-   juju run --wait keystone/0 pause
+   juju run keystone/0 pause

 .. important::

@@ -654,7 +654,7 @@ startup

 To resume the Keystone service::

-   juju run --wait keystone/0 resume
+   juju run keystone/0 resume

 .. important::

@@ -677,7 +677,7 @@ shutdown

 1. Pause the Landscape service::

-   juju run --wait landscape-server/0 pause
+   juju run landscape-server/0 pause

 2. Stop the PostgreSQL service::

@@ -685,7 +685,7 @@ shutdown

 3. Pause the RabbitMQ service::

-   juju run --wait rabbitmq-server/0 pause
+   juju run rabbitmq-server/0 pause

 .. caution::

@@ -699,7 +699,7 @@ The startup of Landscape should be done in the reverse order.

 1. Ensure the RabbitMQ service is started::

-   juju run --wait rabbitmq-server/0 resume
+   juju run rabbitmq-server/0 resume

 2. Ensure the PostgreSQL service is started::

@@ -707,7 +707,7 @@ The startup of Landscape should be done in the reverse order.

 3. Resume the Landscape service::

-   juju run --wait landscape-server/0 resume
+   juju run landscape-server/0 resume

 -------------------------------------------------------------------------------
@@ -721,7 +721,7 @@ To pause the MySQL InnoDB Cluster for a mysql-innodb-cluster unit:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/0 pause
+   juju run mysql-innodb-cluster/0 pause

 To gracefully shut down the cluster repeat the above for every unit.

@@ -769,7 +769,7 @@ action on any unit:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/1 reboot-cluster-from-complete-outage
+   juju run mysql-innodb-cluster/1 reboot-cluster-from-complete-outage

 Here we see, in the command's partial output, that the chosen unit does not
 correspond to the GTID node:

@@ -785,7 +785,7 @@ corresponds to unit ``mysql-innodb-cluster/2``. Therefore:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/2 reboot-cluster-from-complete-outage
+   juju run mysql-innodb-cluster/2 reboot-cluster-from-complete-outage

 This time, the output should include:

@@ -861,14 +861,14 @@ shutdown

 To pause a Neutron gateway service::

-   juju run --wait neutron-gateway/0 pause
+   juju run neutron-gateway/0 pause

 startup
 ^^^^^^^

 To resume a Neutron gateway service::

-   juju run --wait neutron-gateway/0 resume
+   juju run neutron-gateway/0 resume

 -------------------------------------------------------------------------------
@@ -880,14 +880,14 @@ shutdown

 To pause the Open vSwitch service::

-   juju run --wait neutron-openvswitch/0 pause
+   juju run neutron-openvswitch/0 pause

 startup
 ^^^^^^^

 To resume the Open vSwitch service::

-   juju run --wait neutron-openvswitch/0 resume
+   juju run neutron-openvswitch/0 resume

 -------------------------------------------------------------------------------

@@ -900,14 +900,14 @@ shutdown

 To pause Nova controller services (Nova scheduler, Nova api, Nova network, Nova
 objectstore)::

-   juju run --wait nova-cloud-controller/0 pause
+   juju run nova-cloud-controller/0 pause

 startup
 ^^^^^^^

 To resume Nova controller services::

-   juju run --wait nova-cloud-controller/0 resume
+   juju run nova-cloud-controller/0 resume

 -------------------------------------------------------------------------------

@@ -938,7 +938,7 @@ To stop a Nova service:

 3. Pause the Nova service::

-   juju run --wait nova-compute/0 pause
+   juju run nova-compute/0 pause

 .. tip::

@@ -951,7 +951,7 @@ startup

 To resume a Nova service::

-   juju run --wait nova-compute/0 resume
+   juju run nova-compute/0 resume

 Instances that fail to come up properly can be moved to another compute host
 (see `Evacuate instances`_).
@@ -968,7 +968,7 @@ To pause the Percona XtraDB service for a ``percona-cluster`` unit:

 .. code-block:: none

-   juju run --wait percona-cluster/0 pause
+   juju run percona-cluster/0 pause

 To gracefully shut down the cluster repeat the above for every unit.

@@ -1048,7 +1048,7 @@ to be a non-leader.

 .. code-block:: none

-   juju run --wait percona-cluster/2 bootstrap-pxc
+   juju run percona-cluster/2 bootstrap-pxc

 Notify the cluster of the new bootstrap UUID
 """"""""""""""""""""""""""""""""""""""""""""

@@ -1086,7 +1086,7 @@ leader, which here is ``percona-cluster/0``:

 .. code-block:: none

-   juju run --wait percona-cluster/0 notify-bootstrapped
+   juju run percona-cluster/0 notify-bootstrapped

 After the model settles, the status output should show all nodes in active and
 ready state:

@@ -1114,14 +1114,14 @@ shutdown

 To pause a RabbitMQ service::

-   juju run --wait rabbitmq-server/0 pause
+   juju run rabbitmq-server/0 pause

 startup
 ^^^^^^^

 To resume a RabbitMQ service::

-   juju run --wait rabbitmq-server/0 resume
+   juju run rabbitmq-server/0 resume

 read queries
 ^^^^^^^^^^^^

@@ -1129,7 +1129,7 @@ read queries

 Provided rabbitmq is running on a ``rabbitmq-server`` unit, you can perform a
 status check::

-   juju run --wait rabbitmq-server/1 cluster-status
+   juju run rabbitmq-server/1 cluster-status

 Example partial output is:

@@ -1147,7 +1147,7 @@ above).

 To list unconsumed queues (those with pending messages)::

-   juju run --wait rabbitmq-server/1 list-unconsumed-queues
+   juju run rabbitmq-server/1 list-unconsumed-queues

 See `Partitions`_ and `Queues`_ in the RabbitMQ documentation.
@@ -1160,10 +1160,10 @@ along the way:

 .. code-block:: none

-   juju run --wait rabbitmq-server/0 pause
-   juju run --wait rabbitmq-server/1 cluster-status
-   juju run --wait rabbitmq-server/0 pause
-   juju run --wait rabbitmq-server/1 cluster-status
+   juju run rabbitmq-server/0 pause
+   juju run rabbitmq-server/1 cluster-status
+   juju run rabbitmq-server/0 pause
+   juju run rabbitmq-server/1 cluster-status

 If errors persist, the mnesia database will need to be removed from the
 affected unit so it can be resynced from the other units. Do this by removing

@@ -1235,7 +1235,7 @@ e.g.

 .. code-block:: none

-   juju run --wait rabbitmq-server/0 force-boot
+   juju run rabbitmq-server/0 force-boot

 which makes use of the RabbitMQ `force_boot`_ option. The cluster will
 become operational, however, it will be running on fewer units and

@@ -1276,7 +1276,7 @@ shutdown

 To pause a Vault service::

-   juju run --wait vault/0 pause
+   juju run vault/0 pause

 The :command:`juju status` command will return: ``blocked, Vault service not
 running``.

@@ -1286,7 +1286,7 @@ startup

 To resume a Vault service::

-   juju run --wait vault/0 resume
+   juju run vault/0 resume

 The :command:`juju status` command will return: ``blocked, Unit is sealed``.
@@ -163,7 +163,7 @@ on the lead octavia unit:

 .. code-block:: none

-   juju run --wait octavia/0 configure-resources
+   juju run octavia/0 configure-resources

 This action must be run before Octavia is fully operational.

@@ -234,7 +234,7 @@ This is accomplished by running an action on one of the units.

 .. code-block:: none

-   juju run --wait octavia-diskimage-retrofit/leader retrofit-image
+   juju run octavia-diskimage-retrofit/leader retrofit-image

 Octavia will use this image for all Amphora instances.

@@ -116,7 +116,7 @@ action:

 .. code-block:: none

-   juju run --wait glance-simplestreams-sync/leader sync-images
+   juju run glance-simplestreams-sync/leader sync-images

 Sample output:

@@ -92,7 +92,7 @@ For a single unit (``vault/0``):

 .. code-block:: none

-   juju run --wait vault/0 restart
+   juju run vault/0 restart

 The output to :command:`juju status vault` should show that Vault is sealed:
@@ -183,7 +183,7 @@ corresponding unit:

 .. code-block:: none

-   juju run --wait nova-compute/0 disable
+   juju run nova-compute/0 disable

 This will stop nova-compute services and inform nova-scheduler to no longer
 assign new VMs to the host.

@@ -304,7 +304,7 @@ its corresponding unit:

 .. code-block:: none

-   juju run --wait nova-compute/0 enable
+   juju run nova-compute/0 enable

 This will start nova-compute services and allows nova-scheduler to run new VMs
 on this host.

@@ -80,7 +80,7 @@ To reissue new certificates to all TLS-enabled clients run the

 .. code-block:: none

-   juju run --wait vault/leader reissue-certificates
+   juju run vault/leader reissue-certificates

 The output to the :command:`juju status` command for the model will show
 activity for each affected service as their corresponding endpoints get updated

@@ -57,7 +57,7 @@ Check cluster status:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/leader cluster-status
+   juju run mysql-innodb-cluster/leader cluster-status

    unit-mysql-innodb-cluster-0:
      UnitId: mysql-innodb-cluster/0

@@ -183,13 +183,13 @@ While the instance is running:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/2 remove-instance address=<instance-ip-address>
+   juju run mysql-innodb-cluster/2 remove-instance address=<instance-ip-address>

 Use the force argument if the host is down (or no longer exists):

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/2 remove-instance address=<instance-ip-address> force=True
+   juju run mysql-innodb-cluster/2 remove-instance address=<instance-ip-address> force=True

 .. warning::

@@ -200,7 +200,7 @@ Check cluster status:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/2 cluster-status
+   juju run mysql-innodb-cluster/2 cluster-status

    {
      "clusterName":"jujuCluster",

@@ -282,7 +282,7 @@ functioning correctly:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/leader cluster-status
+   juju run mysql-innodb-cluster/leader cluster-status

 And run an openstack command:
@@ -110,7 +110,7 @@ Pause the RabbitMQ service on the unhealthy node/unit:

 .. code-block:: none

-   juju run --wait rabbitmq-server/0 pause
+   juju run rabbitmq-server/0 pause

 Identify the unhealthy node's hostname
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -150,7 +150,7 @@ from the cluster:

 .. code-block:: none

-   juju run --wait rabbitmq-server/2 forget-cluster-node node=rabbit@juju-64dabb-0-lxd-0
+   juju run rabbitmq-server/2 forget-cluster-node node=rabbit@juju-64dabb-0-lxd-0

 The cluster's status output should now include:

@@ -264,7 +264,7 @@ Resume the RabbitMQ service on the repaired node/unit:

 .. code-block:: none

-   juju run --wait rabbitmq-server/0 resume
+   juju run rabbitmq-server/0 resume

 Verify model health
 ~~~~~~~~~~~~~~~~~~~

@@ -81,7 +81,7 @@ being removed. Here, unit ``keystone-hacluster/2`` corresponds to unit

 .. code-block:: none

-   juju run --wait keystone-hacluster/2 pause
+   juju run keystone-hacluster/2 pause

 Remove the unwanted node
 ~~~~~~~~~~~~~~~~~~~~~~~~
@@ -85,7 +85,7 @@ Disable nova-compute services on the node:

 .. code-block:: none

-   juju run --wait nova-compute/0 disable
+   juju run nova-compute/0 disable

 Respawn any Octavia VMs
 ~~~~~~~~~~~~~~~~~~~~~~~

@@ -156,7 +156,7 @@ Unregister the compute node from the cloud:

 .. code-block:: none

-   juju run --wait nova-compute/0 remove-from-cloud
+   juju run nova-compute/0 remove-from-cloud

 See cloud operation :ref:`Scale back the nova-compute application
 <unregister_compute_node>` for more details on this step.

@@ -177,8 +177,8 @@ Remove OSD storage devices

 .. code-block:: none

-   juju run --wait ceph-osd/2 remove-disk osd-ids=osd.0 purge=true
-   juju run --wait ceph-osd/2 remove-disk osd-ids=osd.1 purge=true
+   juju run ceph-osd/2 remove-disk osd-ids=osd.0 purge=true
+   juju run ceph-osd/2 remove-disk osd-ids=osd.1 purge=true

 .. note::

@@ -227,7 +227,7 @@ First list all the disks on the new storage node:

 .. code-block:: none

-   juju run --wait ceph-osd/10 list-disks
+   juju run ceph-osd/10 list-disks

 Then query the charm option:

@@ -242,7 +242,7 @@ previously-assumed values:

 .. code-block:: none

-   juju run --wait ceph-osd/10 add-disk \
+   juju run ceph-osd/10 add-disk \
       osd-devices='/dev/nvme0n1 /dev/nvme0n2'

 Inspect Ceph cluster changes
@@ -69,7 +69,7 @@ being removed. Here, unit ``vault-hacluster/2`` corresponds to unit

 .. code-block:: none

-   juju run --wait vault-hacluster/2 pause
+   juju run vault-hacluster/2 pause

 Remove the principal application unit
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -57,7 +57,7 @@ Disable the compute node by referring to its corresponding unit, here

 .. code-block:: none

-   juju run --wait nova-compute/0 disable
+   juju run nova-compute/0 disable

 This will stop nova-compute services and inform nova-scheduler to no longer
 assign new VMs to the unit.

@@ -76,7 +76,7 @@ Now unregister the compute node from the cloud:

 .. code-block:: none

-   juju run --wait nova-compute/0 remove-from-cloud
+   juju run nova-compute/0 remove-from-cloud

 The workload status of the unit can be checked with:

@@ -99,7 +99,7 @@ application to settle and run:

 .. code-block:: none

-   juju run --wait <OVN_CENTRAL_UNIT> cluster-status
+   juju run <OVN_CENTRAL_UNIT> cluster-status

 This output will show yaml-formatted status of both Southbound and Northbound
 OVN clusters. Each cluster status will contain key "unit_map", if this list
@@ -39,7 +39,7 @@ action on any mysql-innodb-cluster unit:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/1 reboot-cluster-from-complete-outage
+   juju run mysql-innodb-cluster/1 reboot-cluster-from-complete-outage

 .. important::

@@ -53,7 +53,7 @@ To have Vault generate a self-signed root CA certificate:

 .. code-block:: none

-   juju run --wait vault/leader generate-root-ca
+   juju run vault/leader generate-root-ca

 You're done.

@@ -75,7 +75,7 @@ unit:

 .. code-block:: none

-   juju run --wait vault/leader get-csr
+   juju run vault/leader get-csr

 .. note::

@@ -179,7 +179,7 @@ action on the leader unit:

 .. code-block:: none

-   juju run --wait vault/leader upload-signed-csr \
+   juju run vault/leader upload-signed-csr \
       pem="$(cat ~/vault-charm-int.pem | base64)" \
       root-ca="$(cat ~/root-ca.pem | base64)" \
       allowed-domains='openstack.local'

@@ -304,7 +304,7 @@ backend:

 .. code-block:: none

-   juju run --wait vault/leader disable-pki
+   juju run vault/leader disable-pki

 This step deletes the existing root certificate and invalidates any previous
 CSR requests.

@@ -319,7 +319,7 @@ CA certificate to Vault:

 .. code-block:: none

-   juju run --wait vault/leader upload-signed-csr \
+   juju run vault/leader upload-signed-csr \
       pem="$(cat /path/to/vault-charm-int.pem | base64)" \
       root-ca="$(cat /path/to/root-ca.pem | base64)"

@@ -331,8 +331,8 @@ PKI secrets backend and then generate a root CA certificate:

 .. code-block:: none

-   juju run --wait vault/leader disable-pki
-   juju run --wait vault/leader generate-root-ca
+   juju run vault/leader disable-pki
+   juju run vault/leader generate-root-ca

 Configuring SSL certificates via charm options
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -271,14 +271,14 @@ protocol) or manually by the operator:

 .. code-block:: none

-   juju run --wait -m site-a site-a-ceph-mon/leader create-pool name=mypool app-name=rbd
-   juju run --wait -m site-a site-a-ceph-rbd-mirror/leader refresh-pools
+   juju run -m site-a site-a-ceph-mon/leader create-pool name=mypool app-name=rbd
+   juju run -m site-a site-a-ceph-rbd-mirror/leader refresh-pools

 This can be verified by listing the pools in site 'b':

 .. code-block:: none

-   juju run --wait -m site-b site-b-ceph-mon/leader list-pools
+   juju run -m site-b site-b-ceph-mon/leader list-pools

 .. note::

@@ -299,22 +299,22 @@ the latter is promoted. The rest of the commands are status checks:

 .. code-block:: none

-   juju run --wait -m site-a site-a-ceph-rbd-mirror/leader status verbose=true
-   juju run --wait -m site-b site-b-ceph-rbd-mirror/leader status verbose=true
+   juju run -m site-a site-a-ceph-rbd-mirror/leader status verbose=true
+   juju run -m site-b site-b-ceph-rbd-mirror/leader status verbose=true

-   juju run --wait -m site-a site-a-ceph-rbd-mirror/leader demote
+   juju run -m site-a site-a-ceph-rbd-mirror/leader demote

-   juju run --wait -m site-a site-a-ceph-rbd-mirror/leader status verbose=true
-   juju run --wait -m site-b site-b-ceph-rbd-mirror/leader status verbose=true
+   juju run -m site-a site-a-ceph-rbd-mirror/leader status verbose=true
+   juju run -m site-b site-b-ceph-rbd-mirror/leader status verbose=true

-   juju run --wait -m site-b site-b-ceph-rbd-mirror/leader promote
+   juju run -m site-b site-b-ceph-rbd-mirror/leader promote

 To fall back to site 'a' the actions are reversed:

 .. code-block:: none

-   juju run --wait -m site-b site-b-ceph-rbd-mirror/leader demote
-   juju run --wait -m site-a site-a-ceph-rbd-mirror/leader promote
+   juju run -m site-b site-b-ceph-rbd-mirror/leader demote
+   juju run -m site-a site-a-ceph-rbd-mirror/leader promote

 .. note::

@@ -341,13 +341,13 @@ Here, we make site 'a' be the primary by demoting site 'b' and promoting site

 .. code-block:: none

-   juju run --wait -m site-b site-b-ceph-rbd-mirror/leader demote
-   juju run --wait -m site-a site-a-ceph-rbd-mirror/leader promote force=true
+   juju run -m site-b site-b-ceph-rbd-mirror/leader demote
+   juju run -m site-a site-a-ceph-rbd-mirror/leader promote force=true

-   juju run --wait -m site-a site-a-ceph-rbd-mirror/leader status verbose=true
-   juju run --wait -m site-b site-b-ceph-rbd-mirror/leader status verbose=true
+   juju run -m site-a site-a-ceph-rbd-mirror/leader status verbose=true
+   juju run -m site-b site-b-ceph-rbd-mirror/leader status verbose=true

-   juju run --wait -m site-b site-b-ceph-rbd-mirror/leader resync-pools i-really-mean-it=true
+   juju run -m site-b site-b-ceph-rbd-mirror/leader resync-pools i-really-mean-it=true

 .. note::
@@ -194,7 +194,7 @@ Demote the site-a images with the ``demote`` action:

 .. code-block:: none

-   juju run --wait site-a-ceph-rbd-mirror/0 demote pools=cinder-ceph-a
+   juju run site-a-ceph-rbd-mirror/0 demote pools=cinder-ceph-a

 Flag the site-a images for a resync with the ``resync-pools`` action. The
 ``pools`` argument should point to the corresponding site's pool, which by

@@ -203,7 +203,7 @@ default is the name of the cinder-ceph application for the site (here

 .. code-block:: none

-   juju run --wait site-a-ceph-rbd-mirror/0 resync-pools i-really-mean-it=true pools=cinder-ceph-a
+   juju run site-a-ceph-rbd-mirror/0 resync-pools i-really-mean-it=true pools=cinder-ceph-a

 The Ceph RBD mirror daemon will perform the resync in the background.

@@ -216,7 +216,7 @@ are fully synchronised. Perform a check with the ceph-rbd-mirror charm's

 .. code-block:: none

-   juju run --wait site-a-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-
+   juju run site-a-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-

 This will take a while.

@@ -273,7 +273,7 @@ We can also check the status of the image as per :ref:`RBD image status

 .. code-block:: none

-   juju run --wait site-a-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-
+   juju run site-a-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-

    volume-c44d4d20-6ede-422a-903d-588d1b0d51b0:
      global_id: 3a4aa755-c9ee-4319-8ba4-fc494d20d783
@@ -154,7 +154,7 @@ Site a (primary),

 .. code-block:: none

-   juju run --wait site-a-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-
+   juju run site-a-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-

    volume-c44d4d20-6ede-422a-903d-588d1b0d51b0:
      global_id: f66140a6-0c09-478c-9431-4eb1eb16ca86
      state: up+stopped

@@ -164,7 +164,7 @@ Site b (secondary is in sync with the primary),

 .. code-block:: none

-   juju run --wait site-b-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-
+   juju run site-b-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-

    volume-c44d4d20-6ede-422a-903d-588d1b0d51b0:
      global_id: f66140a6-0c09-478c-9431-4eb1eb16ca86
      state: up+replaying

@@ -396,7 +396,7 @@ the ceph-rbd-mirror unit in site-b is the target:

 .. code-block:: none

-   juju run --wait site-b-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-
+   juju run site-b-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-

 If all images look good, perform the failover of site-a:

@@ -515,7 +515,7 @@ ceph-rbd-mirror charm's ``status`` action as per `RBD image status`_:

 .. code-block:: none

-   juju run --wait site-b-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-
+   juju run site-b-ceph-rbd-mirror/0 status verbose=true | grep -A3 volume-

 If all images look good, perform the failover of site-a:
@ -281,7 +281,7 @@ application). This is done with the trilio-wlm charm's
|
|||
|
||||
.. code-block:: none
|
||||
|
||||
juju run --wait trilio-wlm/leader create-cloud-admin-trust password=cloudadminpassword
|
||||
juju run trilio-wlm/leader create-cloud-admin-trust password=cloudadminpassword
|
||||
|
||||
Licensing
|
||||
---------
|
||||
|
|
|
@@ -113,7 +113,7 @@ Now run the following commands:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/0 mysqldump \
+   juju run mysql-innodb-cluster/0 mysqldump \
       databases=cinder,glance,horizon,keystone,neutron,nova,nova_api,nova_cell0,placement,vault
    juju exec -u mysql-innodb-cluster/0 -- sudo chmod o+rx /var/backups/mysql
    juju scp -- -r mysql-innodb-cluster/0:/var/backups/mysql .
@@ -129,7 +129,7 @@ unit:

 .. code-block:: none

-   juju run --wait nova-cloud-controller/0 archive-data
+   juju run nova-cloud-controller/0 archive-data

 Repeat this command until the action output reports 'Nothing was archived'.
@@ -233,23 +233,23 @@ application, which allows for a more controlled upgrade. Application leader
    juju config keystone action-managed-upgrade=True
    juju config keystone openstack-origin=cloud:focal-xena

-   juju run --wait keystone-hacluster/0 pause
-   juju run --wait keystone/0 pause
-   juju run --wait keystone/0 openstack-upgrade
-   juju run --wait keystone/0 resume
-   juju run --wait keystone-hacluster/0 resume
+   juju run keystone-hacluster/0 pause
+   juju run keystone/0 pause
+   juju run keystone/0 openstack-upgrade
+   juju run keystone/0 resume
+   juju run keystone-hacluster/0 resume

-   juju run --wait keystone-hacluster/1 pause
-   juju run --wait keystone/1 pause
-   juju run --wait keystone/1 openstack-upgrade
-   juju run --wait keystone/1 resume
-   juju run --wait keystone-hacluster/1 resume
+   juju run keystone-hacluster/1 pause
+   juju run keystone/1 pause
+   juju run keystone/1 openstack-upgrade
+   juju run keystone/1 resume
+   juju run keystone-hacluster/1 resume

-   juju run --wait keystone-hacluster/2 pause
-   juju run --wait keystone/2 pause
-   juju run --wait keystone/2 openstack-upgrade
-   juju run --wait keystone/2 resume
-   juju run --wait keystone-hacluster/2 resume
+   juju run keystone-hacluster/2 pause
+   juju run keystone/2 pause
+   juju run keystone/2 openstack-upgrade
+   juju run keystone/2 resume
+   juju run keystone-hacluster/2 resume

 ceph-radosgw
 ^^^^^^^^^^^^
@@ -344,17 +344,17 @@ leader ``nova-compute/2`` is upgraded first:
    juju config nova-compute action-managed-upgrade=True
    juju config nova-compute openstack-origin=cloud:focal-xena

-   juju run --wait nova-compute/2 pause
-   juju run --wait nova-compute/2 openstack-upgrade
-   juju run --wait nova-compute/2 resume
+   juju run nova-compute/2 pause
+   juju run nova-compute/2 openstack-upgrade
+   juju run nova-compute/2 resume

-   juju run --wait nova-compute/1 pause
-   juju run --wait nova-compute/1 openstack-upgrade
-   juju run --wait nova-compute/1 resume
+   juju run nova-compute/1 pause
+   juju run nova-compute/1 openstack-upgrade
+   juju run nova-compute/1 resume

-   juju run --wait nova-compute/0 pause
-   juju run --wait nova-compute/0 openstack-upgrade
-   juju run --wait nova-compute/0 resume
+   juju run nova-compute/0 pause
+   juju run nova-compute/0 openstack-upgrade
+   juju run nova-compute/0 resume

 ceph-osd
 ^^^^^^^^
@@ -144,10 +144,10 @@ The percona-cluster application requires a modification to its "strict mode"

 .. code-block:: none

-   juju run --wait percona-cluster/0 set-pxc-strict-mode mode=MASTER
-   juju run --wait percona-cluster/0 mysqldump \
+   juju run percona-cluster/0 set-pxc-strict-mode mode=MASTER
+   juju run percona-cluster/0 mysqldump \
       databases=aodh,cinder,designate,glance,gnocchi,horizon,keystone,neutron,nova,nova_api,nova_cell0,placement
-   juju run --wait percona-cluster/0 set-pxc-strict-mode mode=ENFORCING
+   juju run percona-cluster/0 set-pxc-strict-mode mode=ENFORCING

    juju exec -u percona-cluster/0 -- sudo chmod o+rx /var/backups/mysql
    juju scp -- -r percona-cluster/0:/var/backups/mysql .
@@ -158,7 +158,7 @@ mysql-innodb-cluster

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/0 mysqldump \
+   juju run mysql-innodb-cluster/0 mysqldump \
       databases=cinder,designate,glance,gnocchi,horizon,keystone,neutron,nova,nova_api,nova_cell0,placement,vault

    juju exec -u mysql-innodb-cluster/0 -- sudo chmod o+rx /var/backups/mysql
@@ -174,7 +174,7 @@ by running the ``archive-data`` action on any nova-cloud-controller unit:

 .. code-block:: none

-   juju run --wait nova-cloud-controller/0 archive-data
+   juju run nova-cloud-controller/0 archive-data

 This action may need to be run multiple times until the action output reports
 'Nothing was archived'.
@@ -462,9 +462,9 @@ the ``openstack-upgrade`` action applied first):

 .. code-block:: none

-   juju run --wait glance/1 openstack-upgrade
-   juju run --wait glance/0 openstack-upgrade
-   juju run --wait glance/2 openstack-upgrade
+   juju run glance/1 openstack-upgrade
+   juju run glance/0 openstack-upgrade
+   juju run glance/2 openstack-upgrade

 .. _paused_single_unit:
@@ -509,17 +509,17 @@ the ``openstack-upgrade`` action applied first):

 .. code-block:: none

-   juju run --wait nova-compute/0 pause
-   juju run --wait nova-compute/0 openstack-upgrade
-   juju run --wait nova-compute/0 resume
+   juju run nova-compute/0 pause
+   juju run nova-compute/0 openstack-upgrade
+   juju run nova-compute/0 resume

-   juju run --wait nova-compute/1 pause
-   juju run --wait nova-compute/1 openstack-upgrade
-   juju run --wait nova-compute/1 resume
+   juju run nova-compute/1 pause
+   juju run nova-compute/1 openstack-upgrade
+   juju run nova-compute/1 resume

-   juju run --wait nova-compute/2 pause
-   juju run --wait nova-compute/2 openstack-upgrade
-   juju run --wait nova-compute/2 resume
+   juju run nova-compute/2 pause
+   juju run nova-compute/2 openstack-upgrade
+   juju run nova-compute/2 resume

 Paused-single-unit with hacluster
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -562,23 +562,23 @@ the ``openstack-upgrade`` action applied first):

 .. code-block:: none

-   juju run --wait keystone-hacluster/1 pause
-   juju run --wait keystone/2 pause
-   juju run --wait keystone/2 openstack-upgrade
-   juju run --wait keystone/2 resume
-   juju run --wait keystone-hacluster/1 resume
+   juju run keystone-hacluster/1 pause
+   juju run keystone/2 pause
+   juju run keystone/2 openstack-upgrade
+   juju run keystone/2 resume
+   juju run keystone-hacluster/1 resume

-   juju run --wait keystone-hacluster/2 pause
-   juju run --wait keystone/1 pause
-   juju run --wait keystone/1 openstack-upgrade
-   juju run --wait keystone/1 resume
-   juju run --wait keystone-hacluster/2 resume
+   juju run keystone-hacluster/2 pause
+   juju run keystone/1 pause
+   juju run keystone/1 openstack-upgrade
+   juju run keystone/1 resume
+   juju run keystone-hacluster/2 resume

-   juju run --wait keystone-hacluster/0 pause
-   juju run --wait keystone/0 pause
-   juju run --wait keystone/0 openstack-upgrade
-   juju run --wait keystone/0 resume
-   juju run --wait keystone-hacluster/0 resume
+   juju run keystone-hacluster/0 pause
+   juju run keystone/0 pause
+   juju run keystone/0 openstack-upgrade
+   juju run keystone/0 resume
+   juju run keystone-hacluster/0 resume

 .. warning::
@@ -281,8 +281,8 @@ machine 0/lxd/0 (the principal leader machine).

 .. code-block:: none

-   juju run --wait rabbitmq-server/1 pause
-   juju run --wait rabbitmq-server/2 pause
+   juju run rabbitmq-server/1 pause
+   juju run rabbitmq-server/2 pause

 #. Perform a series upgrade of the principal leader machine:
@@ -326,7 +326,7 @@ machine 0/lxd/0 (the principal leader machine).

 .. code-block:: none

-   juju run --wait rabbitmq-server/leader complete-cluster-series-upgrade
+   juju run rabbitmq-server/leader complete-cluster-series-upgrade

 #. Update the software sources for the application's machines.
@@ -409,7 +409,7 @@ machine 0/lxd/1 (the principal leader machine).

 .. code-block:: none

-   juju run --wait percona-cluster/leader backup
+   juju run percona-cluster/leader backup
    juju scp -- -r percona-cluster/leader:/opt/backups/mysql /path/to/local/directory

 Permissions will need to be altered on the remote machine, and note that the
@@ -425,8 +425,8 @@ machine 0/lxd/1 (the principal leader machine).

 .. code-block:: none

-   juju run --wait percona-cluster/1 pause
-   juju run --wait percona-cluster/2 pause
+   juju run percona-cluster/1 pause
+   juju run percona-cluster/2 pause

 Leaving the principal leader unit up will ensure it has the latest MySQL
 sequence number; it will be considered the most up to date cluster member.
@@ -492,7 +492,7 @@ machine 0/lxd/1 (the principal leader machine).

 .. code-block:: none

-   juju run --wait percona-cluster/leader complete-cluster-series-upgrade
+   juju run percona-cluster/leader complete-cluster-series-upgrade

 #. Update the software sources for the application's machines.
@@ -579,8 +579,8 @@ In summary, the principal leader unit is keystone/0 and is deployed on machine

 .. code-block:: none

-   juju run --wait keystone/1 pause
-   juju run --wait keystone/2 pause
+   juju run keystone/1 pause
+   juju run keystone/2 pause

 #. Perform any workload maintenance pre-upgrade steps on all machines. There
    are no keystone-specific steps to perform.
@@ -732,10 +732,10 @@ In summary,

 .. code-block:: none

-   juju run --wait glance/1 pause
-   juju run --wait glance/2 pause
-   juju run --wait nova-cloud-controller/1 pause
-   juju run --wait nova-cloud-controller/2 pause
+   juju run glance/1 pause
+   juju run glance/2 pause
+   juju run nova-cloud-controller/1 pause
+   juju run nova-cloud-controller/2 pause

 #. Perform any workload maintenance pre-upgrade steps on all machines. There
    are no glance-specific nor nova-cloud-controller-specific steps to perform.
@@ -928,7 +928,7 @@ applications there will be no units to pause.

 .. code-block:: none

-   juju run --wait ceph-mon/leader set-noout
+   juju run ceph-mon/leader set-noout

 #. Perform any workload maintenance pre-upgrade steps.
@@ -953,7 +953,7 @@ applications there will be no units to pause.
 .. code-block:: none

    juju exec --unit ceph-mon/leader -- ceph status
-   juju run --wait ceph-mon/leader unset-noout
+   juju run ceph-mon/leader unset-noout

 #. Update the software sources for the machine.
@@ -100,7 +100,7 @@ querying the corresponding nova-compute-nvidia-vgpu application unit:

 .. code-block:: none

-   juju run --wait nova-compute-nvidia-vgpu/0 list-vgpu-types
+   juju run nova-compute-nvidia-vgpu/0 list-vgpu-types

 Sample output:
@@ -40,7 +40,7 @@ Vault generate the CA certificate:

 .. code-block:: none

-   juju run --wait vault/leader generate-root-ca
+   juju run vault/leader generate-root-ca

 See the :doc:`../admin/security/tls` page for further guidance.
@@ -269,7 +269,7 @@ Perform the migration

 .. code-block:: none

-   juju run --wait neutron-api-plugin-ovn/0 migrate-mtu
+   juju run neutron-api-plugin-ovn/0 migrate-mtu

 10. Enable the Neutron OVN plugin
@@ -293,7 +293,7 @@ Perform the migration

 .. code-block:: none

-   juju run --wait neutron-api-plugin-ovn/0 migrate-ovn-db
+   juju run neutron-api-plugin-ovn/0 migrate-ovn-db

 13. (Optional) Perform Neutron database surgery to update ``network_type`` of
     overlay networks to 'geneve'.
@@ -323,7 +323,7 @@ Perform the migration

 .. code-block:: none

-   juju run --wait neutron-api-plugin-ovn/0 offline-neutron-morph-db
+   juju run neutron-api-plugin-ovn/0 offline-neutron-morph-db

 14. Resume the Neutron API units
@@ -358,11 +358,11 @@ Perform the migration

 .. code-block:: none

-   juju run --wait neutron-openvswitch/0 cleanup
-   juju run --wait ovn-chassis/0 resume
+   juju run neutron-openvswitch/0 cleanup
+   juju run ovn-chassis/0 resume

-   juju run --wait neutron-gateway/0 cleanup
-   juju run --wait ovn-dedicated-chassis/0 resume
+   juju run neutron-gateway/0 cleanup
+   juju run ovn-dedicated-chassis/0 resume

 16. Post migration tasks
@@ -141,7 +141,7 @@ On a per-application basis:

 .. code-block:: none

-   juju run --wait percona-cluster/0 set-pxc-strict-mode mode=MASTER
+   juju run percona-cluster/0 set-pxc-strict-mode mode=MASTER

 * Here is a non-exhaustive example that lists databases using the :command:`mysql` client:
@@ -203,10 +203,10 @@ On a per-application basis:
 .. code-block:: none

    # Single DB
-   juju run --wait percona-cluster/0 mysqldump databases=keystone
+   juju run percona-cluster/0 mysqldump databases=keystone

    # Multiple DBs
-   juju run --wait percona-cluster/0 mysqldump \
+   juju run percona-cluster/0 mysqldump \
      databases=aodh,cinder,designate,glance,gnochii,horizon,keystone,neutron,nova,nova_api,nova_cell0,placement

 * Return Percona enforcing strict mode. See `Percona strict mode`_ to
@@ -214,7 +214,7 @@ On a per-application basis:

 .. code-block:: none

-   juju run --wait percona-cluster/0 set-pxc-strict-mode mode=ENFORCING
+   juju run percona-cluster/0 set-pxc-strict-mode mode=ENFORCING

 * Transfer the mysqldump file from the percona-cluster unit to the
   mysql-innodb-cluster RW unit. The RW unit of the mysql-innodb-cluster can be
@@ -230,7 +230,7 @@ On a per-application basis:

 .. code-block:: none

-   juju run --wait mysql-innodb-cluster/0 restore-mysqldump dump-file=/home/ubuntu/mysqldump-keystone-<DATE>.gz
+   juju run mysql-innodb-cluster/0 restore-mysqldump dump-file=/home/ubuntu/mysqldump-keystone-<DATE>.gz

 * Relate an instance of mysql-router for every application that requires a data
   store (i.e. every application that needed percona-cluster):
@@ -33,7 +33,7 @@ Here are example commands for the process just described:
 .. code-block:: none

    juju deploy --series bionic --config openstack-origin=cloud:bionic-train cs:placement
-   juju run --wait nova-cloud-controller/leader pause
+   juju run nova-cloud-controller/leader pause
    juju integrate placement percona-cluster
    juju integrate placement keystone
    juju integrate placement nova-cloud-controller
@@ -44,7 +44,7 @@ placement IP address. Follow this up by resuming nova-cloud-controller:
 .. code-block:: none

    openstack endpoint list
-   juju run --wait nova-cloud-controller/leader resume
+   juju run nova-cloud-controller/leader resume

 Finally, upgrade the nova-cloud-controller services. Below all units are
 upgraded simultaneously but see the :ref:`paused_single_unit` service upgrade
@@ -50,15 +50,15 @@ and 3.x. These are caused by the renaming and re-purposing of several commands
 In the context of this guide, the pertinent changes are shown here:

-+---------------------------+----------------------------+
-| 2.9.x                     | 3.x                        |
-+===========================+============================+
-| :command:`add-relation`   | :command:`integrate`       |
-+---------------------------+----------------------------+
-| :command:`run`            | :command:`exec`            |
-+---------------------------+----------------------------+
-| :command:`run-action`     | :command:`run`             |
-+---------------------------+----------------------------+
++------------------------------+----------------------+
+| 2.9.x                        | 3.x                  |
++==============================+======================+
+| :command:`add-relation`      | :command:`integrate` |
++------------------------------+----------------------+
+| :command:`run`               | :command:`exec`      |
++------------------------------+----------------------+
+| :command:`run-action --wait` | :command:`run`       |
++------------------------------+----------------------+

 See the `Juju 3.0 release notes`_ for the comprehensive list of changes.
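The substitutions throughout this diff follow the Juju 3.x behaviour described in the commit message: :command:`juju run` now stays in the foreground by default, ``--wait`` has become a timeout that requires a value, and a new ``--background`` flag restores the old 2.9 fire-and-forget behaviour. A minimal sketch of equivalent invocations (the unit, action, and token value below are illustrative, borrowed from the vault example above):

```shell
# Juju 2.9 equivalent for reference:
#   juju run-action --wait vault/leader authorize-charm token=EXAMPLE-TOKEN

# Juju 3.x: `run` blocks in the foreground by default, so no flag is needed
juju run vault/leader authorize-charm token=EXAMPLE-TOKEN

# `--wait` is now a timeout and requires a value; a bare `--wait` fails
juju run --wait=2m vault/leader authorize-charm token=EXAMPLE-TOKEN

# `--background` returns immediately, matching the old 2.9 default
juju run --background vault/leader authorize-charm token=EXAMPLE-TOKEN
```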