Previously, we had no control over volume_backend_name other than
the default application name in the Juju model. A common backend name
shared by multiple backends with the same characteristics is useful
because those backends can be treated as a single virtual backend
associated with a single volume type.
Change-Id: I4b57f7979837d21a1b116007f3da707ee154792b
Closes-Bug: #1884511
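An explicit backend name could then be set per application; a minimal sketch, assuming two cinder-ceph style applications and a hypothetical volume-backend-name config key (check the charm's config.yaml for the exact spelling):

```shell
# Hypothetical config key name; verify against the charm's config.yaml.
# Both applications share one backend name, so Cinder can treat them as
# a single virtual backend tied to a single volume type.
juju config cinder-ceph volume-backend-name=shared-rbd
juju config cinder-ceph-b volume-backend-name=shared-rbd
```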
The new config option is applied only to the broker request that
creates the charm's replicated pool.
Co-authored-by: Marius Oprin <moprin@cloudbasesolutions.com>
Change-Id: I6bf9544af02d0622b8f714da97b5dbcf49d1d1af
Enable support for use of Erasure Coded (EC) pools for
Cinder volumes.
Add the standard set of EC based configuration options to the
charm.
Update Ceph broker request to create a replicated pool, an erasure
coding profile and an erasure coded pool (using the profile) when
pool-type == erasure-coded is specified.
Resync charm-helpers to pick up changes to the standard ceph.conf
template and associated contexts for mangling the rbd default data
pool, required due to the lack of explicit support in OpenStack
services.
Update context to use metadata pool name in cinder configuration
when erasure-coding is enabled.
Change-Id: Iae0b9ba2e57a0dcc4ba1074ebeba4c644f1d830c
Co-Authored-By: James Page <james.page@ubuntu.com>
Depends-On: Iec4de19f7b39f0b08158d96c5cc1561b40aefa10
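Enabling the erasure-coded pool type might look like the following; pool-type comes from the text above, while the ec-profile-k / ec-profile-m keys are assumptions drawn from the standard EC option set and should be checked against the charm's config.yaml:

```shell
# pool-type is described above; the EC profile keys are assumed names.
# With pool-type=erasure-coded the broker request creates a replicated
# (metadata) pool, an EC profile, and an EC (data) pool using it.
juju config cinder-ceph pool-type=erasure-coded \
    ec-profile-k=4 ec-profile-m=2
```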
We cannot rely on JUJU_AVAILABILITY_ZONE, so let admins set the AZ for
the storage backend explicitly through a charm config option. The
nova-compute charm, for example, can use JUJU_AVAILABILITY_ZONE because
the AZ can be set on a per-unit / per-compute-node basis. However, when
it comes to Cinder backends, we cannot use the JUJU_AVAILABILITY_ZONE
of cinder-{api,scheduler,volume} units, since those are not related to
where the storage backends reside. Ceph-mon units are not suitable
either, since a ceph-mon application usually consists of three units,
and a common JUJU_AVAILABILITY_ZONE across those units representing the
volume backend AZ is not assured.
Change-Id: I38f6926b859de46efde9219f4be7dde83e0a7985
Closes-Bug: #1884014
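Setting the AZ explicitly might look like this; the backend-availability-zone key name is an assumption, not taken from the text above:

```shell
# Assumed config key name; check the charm's config.yaml.
juju config cinder-ceph backend-availability-zone=az1
```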
Expose rbd_flatten_volume_from_snapshot to the user, which allows
them to flatten volumes created from snapshots and thereby remove
the dependency of a volume on its snapshot.
Change-Id: I22a3c82535efac5334dd5deaadbba0dd1eae83ab
Closes-Bug: #1824582
In some situations an existing rbd pool may already be populated
with images that are in use. This is the case when migrating
from the old topology where cinder had a direct relation to
ceph-mon.
Change-Id: I93eb801ca4a166f862d5d86711d9476c61851344
Sync charmhelpers and add configuration option to allow access
to ceph pools to be limited based on grouping.
Cinder requires rwx access to pools associated with volumes,
images and vms (to support rbd snapshots).
Change-Id: If09137f5e36d78ab35d27f88624de5533c34ce53
Partial-Bug: 1424771
This config option is used in the ceph.conf template; add to
charm configuration options to allow syslog to be enabled or
disabled for ceph library usage.
The existing context configuration for the charm will simply pick
up and use this configuration option, avoiding the current
behaviour of writing out 'none' to the ceph.conf file.
Change-Id: I40debfa5c8ee07999ed5e688e31d1c6ceffbea36
Closes-Bug: 1604575
Provide the weight option to the Ceph broker request API for requesting
the creation of a new Ceph storage pool. The weight is used to indicate
the percentage of the data that the pool is expected to consume. Each
environment may have slightly different needs based on the type of
workload so a config option labelled ceph-pool-weight is provided to
allow the operator to tune this value.
Closes-Bug: #1492742
Change-Id: I844353dc8b354751de1af5d30b6d512712d40a62
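As a sketch, an operator expecting the Cinder pool to hold roughly 40% of cluster data could set:

```shell
# ceph-pool-weight is the option named above; 40 indicates the pool is
# expected to consume about 40% of the data in the Ceph cluster.
juju config cinder-ceph ceph-pool-weight=40
```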