This adds support for ceph-osd requesting the movement of OSD
devices into various buckets. It also implements the OSD side,
with an action to move a disk.
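A rough sketch of what such a disk-move action handler could look like
on the OSD side; the action name, parameter names, and fixed weight are
illustrative assumptions, not the charm's actual implementation:

    import subprocess

    from charmhelpers.core.hookenv import action_fail, action_get

    def move_disk():
        # Both parameter names are hypothetical examples.
        osd = action_get('osd')        # e.g. 'osd.3'
        bucket = action_get('bucket')  # e.g. 'root=fast-storage'
        try:
            # 'ceph osd crush set <name> <weight> <location>' places the OSD
            # under the requested bucket; the weight of 1.0 is illustrative.
            subprocess.check_call(
                ['ceph', 'osd', 'crush', 'set', osd, '1.0', bucket])
        except subprocess.CalledProcessError as err:
            action_fail('Could not move {} to {}: {}'.format(osd, bucket, err))

    if __name__ == '__main__':
        move_disk()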
Change-Id: I609ceb8662b20ca06b71d66696d101bda799e25c
This action is fairly simple in that it returns
a list of unmounted disks
This also includes a git-sync to pull in charms.ceph
changes.
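Purely as an illustration, discovering unmounted whole disks for such an
action could look roughly like the following (the real action may rely on
a different mechanism, e.g. pyudev, via charms.ceph):

    import json
    import subprocess

    def unmounted_disks():
        """List whole disks with no mounted partitions (illustrative sketch)."""
        out = subprocess.check_output(
            ['lsblk', '--json', '--output', 'NAME,TYPE,MOUNTPOINT'])
        disks = []
        for dev in json.loads(out.decode())['blockdevices']:
            if dev['type'] != 'disk':
                continue
            parts = dev.get('children', [])
            # A disk counts as unmounted when neither it nor any of its
            # partitions has a mountpoint.
            if not dev.get('mountpoint') and all(
                    not p.get('mountpoint') for p in parts):
                disks.append('/dev/' + dev['name'])
        return disks

    if __name__ == '__main__':
        print(unmounted_disks())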
Change-Id: Idb6eabd565b0a9951bb0b212b81a57497ada56f1
Closes-Bug: 1645481
This changeset provides pause and resume actions to the ceph charm.
The pause action issues a 'ceph osd out <local_id>' for each of the
ceph osd ids that are on the unit. The action does not stop the
ceph osd processes.
Note that if the pause-health action is NOT used on the ceph charm then the
cluster will start trying to rebalance the PGs across the remaining OSDs. If
that rebalancing could push the cluster to its 'full ratio' then this action
will break the cluster. The charm does NOT check for this eventuality.
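An operator could eyeball the headroom by hand before pausing; a very
rough sketch only (the JSON key names vary across Ceph releases, and
0.95 is merely the default full ratio):

    import json
    import subprocess

    def has_headroom(full_ratio=0.95, reserve=0.10):
        """Rough check that overall utilisation stays clear of the full
        ratio even after data from the paused OSDs is rebalanced."""
        df = json.loads(
            subprocess.check_output(['ceph', 'df', '--format=json']).decode())
        stats = df['stats']
        used = stats['total_used_bytes'] / float(stats['total_bytes'])
        return used + reserve < full_ratio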
The resume action issues a 'ceph osd in <local_id>' for each of the
local ceph osd processes on the unit.
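In outline, the pair of actions amounts to the following; the way local
OSD ids are discovered here (from the default /var/lib/ceph/osd/ceph-<id>
directories) is an assumption, not necessarily what the charm does:

    import os
    import subprocess

    OSD_ROOT = '/var/lib/ceph/osd'

    def local_osd_ids():
        # Default data directories are named '<cluster>-<id>', e.g. 'ceph-3'.
        return [d.split('-', 1)[1] for d in os.listdir(OSD_ROOT)
                if d.startswith('ceph-')]

    def pause():
        # Mark each local OSD 'out' so it stops receiving data; the
        # ceph-osd processes themselves are left running.
        for osd_id in local_osd_ids():
            subprocess.check_call(['ceph', 'osd', 'out', osd_id])

    def resume():
        # Mark the local OSDs back 'in' so placement groups map to them again.
        for osd_id in local_osd_ids():
            subprocess.check_call(['ceph', 'osd', 'in', osd_id])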
The charm 'remembers' that a pause action was issued, and if
successful, it shows a 'maintenance' workload status as a reminder.
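One way to 'remember' the pause and surface it, sketched here with
charmhelpers' unit key/value store and status_set (the key name and
status message are illustrative):

    from charmhelpers.core.hookenv import status_set
    from charmhelpers.core.unitdata import kv

    def record_pause():
        db = kv()
        db.set('osd-paused', True)   # key name is hypothetical
        db.flush()
        # Shown until a resume action clears the flag and restores the
        # normal workload status.
        status_set('maintenance',
                   'OSDs marked out; run the resume action to re-enable')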
Change-Id: Ic5b5b33e59e72e13843d874a08e3d142a1befde3
Add actions to pause and resume cluster health monitoring within ceph for all osd devices.
This will ensure that no rebalancing is done whilst maintenance actions are happening within the cluster.
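A minimal sketch, assuming the standard 'noout' cluster flag is what
suppresses rebalancing during maintenance:

    import subprocess

    def pause_health():
        # With 'noout' set, Ceph will not mark OSDs out, so no PG
        # rebalancing is triggered while maintenance is in progress.
        subprocess.check_call(['ceph', 'osd', 'set', 'noout'])

    def resume_health():
        # Clear the flag so normal health handling resumes.
        subprocess.check_call(['ceph', 'osd', 'unset', 'noout'])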