Previously, we had no control over volume_backend_name other than
the default application name in the Juju model. A common backend name
shared by multiple backends with the same characteristics is useful
because those backends can be treated as a single virtual backend
associated with a single volume type.
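As a minimal sketch (the option name and helper are assumptions based
on this description, not the charm's exact code), the backend context
can fall back to the application name only when the option is unset:

    from charmhelpers.core.hookenv import config, service_name

    def volume_backend_name():
        # Prefer the operator-supplied name; fall back to the Juju
        # application name, which was previously the only behaviour.
        return config('volume-backend-name') or service_name()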
Change-Id: I4b57f7979837d21a1b116007f3da707ee154792b
Closes-Bug: #1884511
* Re-trigger `ceph_access_joined` from `ceph_replication_device_changed`.
Without this, we could end up with an incomplete `ceph-access` relation
if `ceph_access_joined` is executed before `ceph_replication_device_changed`.
* Seed `replication-device-secret-uuid` early in `config-changed`, to make
sure that it's set if `storage_backend` is executed before `ceph_access_joined`.
* Set `secret_uuid` as part of the `replication_device` config in the backend
config. Without this, Nova won't be able to access the proper secret for
the Cinder Ceph volumes after a failover.
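A rough sketch of the re-trigger, assuming classic charm-helpers hooks
(relation data values are placeholders):

    from charmhelpers.core.hookenv import Hooks, relation_ids, relation_set

    hooks = Hooks()

    @hooks.hook('ceph-access-relation-joined')
    def ceph_access_joined(relation_id=None):
        # Publish the data nova-compute needs to create its libvirt
        # secret, including the replication device secret UUID.
        relation_set(relation_id=relation_id,
                     relation_settings={'secret-uuid': '<uuid>'})

    @hooks.hook('ceph-replication-device-relation-changed')
    def ceph_replication_device_changed():
        # Re-run the ceph-access joined logic so the relation ends up
        # complete regardless of hook execution order.
        for rid in relation_ids('ceph-access'):
            ceph_access_joined(relation_id=rid)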
Change-Id: Ic023d05d5d17a663e1719de393bdd15f18a40484
The new config option is applied only to the broker request used to
create the charm's replicated pool.
Co-authored-by: Marius Oprin <moprin@cloudbasesolutions.com>
Change-Id: I6bf9544af02d0622b8f714da97b5dbcf49d1d1af
When assessing the charm's status, check that the current
Ceph broker request has been completed. If it has not,
put the charm in a 'waiting' state and update the status
message.
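Roughly, using the charm-helpers broker API (the charm-side function
name is an assumption):

    from charmhelpers.contrib.storage.linux.ceph import is_request_complete
    from charmhelpers.core.hookenv import status_set

    def assess_broker_request(rq):
        # rq is the CephBrokerRq previously sent to ceph-mon.
        if not is_request_complete(rq):
            status_set('waiting', 'Ceph broker request incomplete')
            return False
        return True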
Change-Id: Iaaa2021a86b7e360f3255a52b27a49ef859beecd
Closes-Bug: #1899918
Enable support for use of Erasure Coded (EC) pools for
Cinder volumes.
Add the standard set of EC based configuration options to the
charm.
Update Ceph broker request to create a replicated pool, an erasure
coding profile and an erasure coded pool (using the profile) when
pool-type == erasure-coded is specified.
Resync charm-helpers to pick up changes to the standard ceph.conf
template and associated contexts that mangle 'rbd default data pool',
needed due to the lack of explicit erasure coding support in
OpenStack services.
Update context to use metadata pool name in cinder configuration
when erasure-coding is enabled.
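A sketch of the resulting broker request (pool names and k/m values
are illustrative, not the charm's defaults):

    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    def ceph_broker_request(service):
        rq = CephBrokerRq()
        # Replicated metadata pool; cinder is configured against this
        # pool name when pool-type == erasure-coded.
        rq.add_op_create_replicated_pool(name=service)
        # EC profile plus the erasure coded data pool using it.
        rq.add_op_create_erasure_profile(name=service + '-profile',
                                         erasure_type='jerasure',
                                         k_data=4, m_coding=2)
        rq.add_op_create_erasure_pool(name=service + '-data',
                                      erasure_profile=service + '-profile',
                                      allow_ec_overwrites=True)
        return rq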
Change-Id: Iae0b9ba2e57a0dcc4ba1074ebeba4c644f1d830c
Co-Authored-By: James Page <james.page@ubuntu.com>
Depends-On: Iec4de19f7b39f0b08158d96c5cc1561b40aefa10
We cannot rely on JUJU_AVAILABILITY_ZONE, so let administrators set the
AZ for the storage backend explicitly through a charm config option.
The nova-compute charm, for example, can use JUJU_AVAILABILITY_ZONE
because the AZ can be set on a per-unit / per-compute-node basis.
However, when it comes to Cinder backends, we cannot use the
JUJU_AVAILABILITY_ZONE of cinder-{api,scheduler,volume} units
since those are not related to where the storage backends reside.
Ceph-mon units are not suitable either, since a ceph-mon application
usually consists of three units and a common JUJU_AVAILABILITY_ZONE
across those units representing the volume backend AZ is not assured.
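A minimal sketch, assuming a backend-availability-zone charm option
(the exact option name may differ):

    from charmhelpers.core.hookenv import config

    def backend_az_context():
        # Only emit the setting when explicitly configured;
        # JUJU_AVAILABILITY_ZONE is deliberately not consulted.
        az = config('backend-availability-zone')
        return {'backend_availability_zone': az} if az else {}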
Change-Id: I38f6926b859de46efde9219f4be7dde83e0a7985
Closes-Bug: #1884014
Without a relation to at least one nova-compute application, a
cinder-ceph backend will not be functional, as the libvirt
secrets will not have been created to allow access to the
Ceph cluster from libvirt/qemu.
Add a simple context to check that the 'ceph-access' relation
is present. This will result in a blocked status if the
relation is not detected - for example:
Missing relations: nova-compute
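The context could look roughly like this (class and key names are
assumptions):

    from charmhelpers.contrib.openstack import context
    from charmhelpers.core.hookenv import relation_ids

    class CephAccessContext(context.OSContextGenerator):
        interfaces = ['ceph-access']

        def __call__(self):
            # Returning no data marks the context incomplete, which
            # surfaces as "Missing relations: nova-compute".
            if relation_ids('ceph-access'):
                return {'ceph_access_ready': True}
            return {}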
Change-Id: Iedbf4aafc2348cbf6f14257417e86aa9aeb48a81
Closes-Bug: 1718051
This patch adds a dummy update_status function so that the update-status
hook 'has' a function to run, thus silencing the log error.
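Something along these lines:

    from charmhelpers.core.hookenv import Hooks

    hooks = Hooks()

    @hooks.hook('update-status')
    def update_status():
        """No-op: exists only so the hook has a function to run."""
        pass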
Change-Id: I6a73943af9609810f1e40789c8d351278843dece
Closes-bug: #1837639
Also resolve a merge collision between commits 2f8b158b and
616ba364 that accidentally enabled rbd_flatten_volume_from_snapshot
for >= Ocata rather than >= Queens (which was also causing the
amulet test to fail).
Change-Id: I8a8b95d34f498cc3a7a52aaf90a8684ab80399b3
Expose rbd_flatten_volume_from_snapshot to the user, which allows
them to flatten volumes created from snapshots, removing the
dependency of the volume on the snapshot.
Change-Id: I22a3c82535efac5334dd5deaadbba0dd1eae83ab
Closes-Bug: #1824582
Use cases for Ceph pool application name tagging are emerging,
and thus far the protocols in use appear to be ``rbd`` and ``rgw``.
Others might emerge too.
We make use of this to provide "it just works" behaviour to the
ongoing ``rbd-mirror`` feature work in the Ceph charms.
Sync charm-helpers.
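With the synced helpers, the pool create op can carry the tag,
roughly (pool name and replica count are illustrative):

    from charmhelpers.contrib.storage.linux.ceph import CephBrokerRq

    rq = CephBrokerRq()
    # app_name tags the pool for Ceph; 'rbd' for block device use,
    # 'rgw' would be used for object gateway pools.
    rq.add_op_create_pool(name='cinder-ceph', replica_count=3,
                          app_name='rbd')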
Change-Id: Id8e59abdf5aaf578e9f11a223a79209fa971f51c
The pre-install operations may fail, yet that failure is not
surfaced to the user. This masks the failure and makes early
package install issues difficult to troubleshoot.
If the basic pre-install script fails, the charm should not
proceed to later hooks as the requirements may not be met.
The bash hashbangs in all of the pre-install scripts should
specify -e (errexit).
Change-Id: I67b3a965b11844a2768800225396927f17864116
Closes-bug: #1815243
Partial-bug: #1815231
In some situations an existing rbd pool may already be populated
with images that are in use. This is the case when migrating
from the old topology where cinder had a direct relation to
ceph-mon.
Change-Id: I93eb801ca4a166f862d5d86711d9476c61851344
When adding ceph-mon relation to cinder, the charm installs ceph.conf
with the update-alternatives via cinder_utils.resource_map().
However when the relation is removed, the alternative isn't cleaned up.
This can cause issues when a cinder-ceph subordinate charm is later installed.
The cinder-ceph charm also installs a ceph.conf alternative that will
point to the leftover ceph.conf installed by the ceph-mon charm.
Added remove_alternative() to the ceph-relation-broken hook to ensure
that the leftover ceph.conf alternative is removed upon relation removal.
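Roughly (the target path is illustrative):

    from charmhelpers.contrib.openstack.alternatives import remove_alternative
    from charmhelpers.core.hookenv import Hooks

    hooks = Hooks()

    @hooks.hook('ceph-relation-broken')
    def ceph_broken():
        # Drop this charm's update-alternatives entry for ceph.conf so
        # a later cinder-ceph subordinate does not resolve to it.
        remove_alternative('ceph.conf', '/var/lib/charm/cinder/ceph.conf')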
Change-Id: I308e62a626f31eb8ef690a09035fe3908920ccc9
Closes-Bug: 1778084
By default, nova/libvirt will not enable trim for
attached volumes, so to allow users to use this
feature we now enable it by default.
Also removed a < Icehouse unit test.
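Assuming the cinder RBD driver option in question is
report_discard_supported, the backend context gains, in effect:

    def discard_context():
        # Rendered into the backend section of cinder.conf; lets
        # nova/libvirt attach volumes with discard/trim enabled.
        return {'report_discard_supported': True}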
Change-Id: I58ffaa43e2836068aeed7795df670d279d5e28f8
Closes-Bug: #1781382
Add a tactical change which is already merged into charm-helpers.
This needs to go into all charms to solve the chicken-and-egg issue
where cosmic is untestable until this change exists.
Reference:
4835c6c167
Change-Id: Ic979610078651e4479f2c251c809e7ff3f542e73
Misc updates for rocky:
- Switch default smoke test to bionic-rocky
- Resync charm helpers
The change for this charm is minimal as it directly uses
ceph-common. The cinder charm actually deals with installation
of the required python-rados/rbd libraries for the ceph
integration.
Change-Id: Ic2ee4b845ab604d80b7e27492f522d57f9463af1
As of the Queens release, cinder supports this config
option which, if enabled, stops cinder from querying all
volumes in a pool every time it does a delete in order to
get accurate pool usage stats. The problem is that this
querying causes tons of non-fatal race conditions and slows down
deletes to the point where the RPC thread pool fills up, blocking
further requests. Our charms do not configure a shared pool by
default and we are not aware of anyone doing this in the field, so
this patch enables this option by default.
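The option here is presumably rbd_exclusive_cinder_pool; gated on
release, the change looks roughly like:

    from charmhelpers.contrib.openstack.utils import CompareOpenStackReleases

    def exclusive_pool_context(release):
        # The option only exists from Queens onward.
        if CompareOpenStackReleases(release) >= 'queens':
            return {'rbd_exclusive_cinder_pool': True}
        return {}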
Change-Id: I5377e2886a6e206d30bd7dc38a7e43a085aa524c
Closes-Bug: 1789828
Due to changes to the ceph-osd charm, it is
suggested to use Juju storage for testing.
Change-Id: I14ab9533a53105f8edc2c4af1d98b336a898df00
Related-Bug: #1698154
Drop generation of upstart override file and /etc/environment
and scrub any existing charm configuration in these locations
from an existing install.
These were required way back in the dawn of time, when ceph
support was alpha/beta in cinder.
Provide backend specific configuration file path, allowing
multiple ceph clusters to be used with a single cinder
application.
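In effect (the path is illustrative; cinder's rbd_ceph_conf option
then points the backend at it):

    from charmhelpers.core.hookenv import service_name

    def ceph_config_file():
        # Per-application path so each cinder-ceph subordinate, and
        # hence each ceph cluster, gets its own ceph.conf.
        return '/var/lib/charm/{}/ceph.conf'.format(service_name())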
Change-Id: I8a097e4de1c5c980f118a587a1a64792fad2fa05
Closes-Bug: 1769196
The charm has been assuming that the principal charm will install
the packages this charm needs to run. This is not always the case,
so this change makes the charm install what it needs.
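Something along these lines (the package list is an assumption,
based on the charm's direct use of ceph-common):

    from charmhelpers.fetch import apt_install, filter_installed_packages

    PACKAGES = ['ceph-common']  # assumed dependency list

    def install():
        # Install our own dependencies instead of relying on the
        # principal charm having pulled them in.
        apt_install(filter_installed_packages(PACKAGES), fatal=True)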
Change-Id: I1a394bd9f0a008a403d36ba5d7332b7fb5659006
Closes-Bug: #1754007
When using Ceph as a backend, request the additional class-read
privilege on rbd_children. This fixes bug 1696073.
Change-Id: I023781e01c1e314cb2755e7867cdf588432791fc
Closes-Bug: #1696073
Depends-On: Icf844ec7d33f2e558dee7935fe5fa3d7f08e0d59
Also sync charm-helpers
This change requires the following charm-helpers change
to land first:
- https://github.com/juju/charm-helpers/pull/32
Change-Id: I2c33e25e14ad49d65f5fc7eb000830086cf829c1
Nova-lxd requires that our Ceph images only contain the features
supported by the kernel RBD driver, and a discussion on the dev mailing
list suggests that a feature level of 1 should work fine with that driver.
This commit contains a charmhelpers sync to bring in the new
flag to support configuration sent from the ceph charms.
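A sketch of the context change (key names are assumptions):

    def ceph_features_context(rbd_features):
        # When the ceph charms publish a restricted feature set
        # (e.g. 1 == layering only), render 'rbd default features'
        # into ceph.conf so images stay kernel-RBD compatible.
        return {'rbd_features': rbd_features} if rbd_features else {}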
Change-Id: I860584810dc3b8923635d7d45cc468ea96e4ce07