Commit Graph

9 Commits

Author SHA1 Message Date
Alex Kavanagh c3e89c09bd Update to classic charms to build using charmcraft in CI
This update ensures that the Canonical Zuul CI builds the charm
before the functional tests run, and that the built artifact is the
one used for those tests.  The goal is that the charm that gets
landed to Charmhub is the same charm that was tested.

Change-Id: I83118e15ff91480370182b404b3d3b7d24b5c67c
2022-02-17 12:30:03 -05:00
James Page 2069e620b7 Add support for vault key management with vaultlocker
vaultlocker provides support for storage of encryption keys
for LUKS-based dm-crypt devices in HashiCorp Vault.

Add support for this key management approach for Ceph
Luminous or later.  Applications will block until vault
has been initialized and unsealed, at which point OSD devices
will be prepared and booted into the Ceph cluster.

The dm-crypt layer is placed between the block device
partition and the top-level LVM PV used to create the VGs
and LVs that support OSD operation.
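The layering described above can be sketched as a small helper; the mapping names below are illustrative assumptions, not the charm's actual naming scheme:

```python
# Hypothetical sketch of the storage layering described above:
# partition -> dm-crypt mapping (key held in Vault via vaultlocker)
#           -> LVM PV -> VG/LV -> OSD
def device_stack(partition):
    """Return the conceptual layering for an encrypted OSD device."""
    crypt = "crypt-{}".format(partition)   # dm-crypt mapping over the partition
    pv = "/dev/mapper/{}".format(crypt)    # LVM PV created on the dm-crypt device
    return [partition, crypt, pv]
```

The point of the ordering is that LVM only ever sees the decrypted dm-crypt device, so the PV, VG, and LVs are all inside the encryption boundary.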

Vaultlocker enables a systemd unit for each encrypted
block device to perform unlocking during reboots of the
unit; ceph-volume will then detect the new VGs/LVs and
boot the ceph-osd processes as required.

Note that vault/vaultlocker usage is only supported with
ceph-volume, which was introduced into the Ubuntu packages
as of the 12.2.4 point release for Luminous.  If vault is
configured as the key manager in deployments using older
versions, a hook error will be thrown with a blocked
status message to this effect.
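The version gate described above can be sketched as a simple comparison; the helper name is an assumption for illustration, but the 12.2.4 threshold comes from the commit message:

```python
# Hedged sketch: vaultlocker requires ceph-volume, which entered the Ubuntu
# packages at the 12.2.4 point release for Luminous. Older versions should
# be rejected (the charm raises a hook error with a blocked status message).
def vault_supported(ceph_version):
    """Return True if this Ceph version ships ceph-volume (>= 12.2.4)."""
    parts = tuple(int(p) for p in ceph_version.split("."))
    return parts >= (12, 2, 4)
```

Tuple comparison handles multi-digit components correctly (e.g. 12.2.10 > 12.2.4), which a plain string comparison would not.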

Change-Id: I713492d1fd8d371439e96f9eae824b4fe7260e47
Depends-On: If73e7bd518a7bc60c2db08e2aa3a93dcfe79c0dd
Depends-On: https://github.com/juju/charm-helpers/pull/159
2018-05-15 08:28:15 +01:00
James Page 36d5e14d17 Ensure upgrade keyring exists prior to upgrade checks
During ceph to ceph-osd/ceph-mon migrations, the bootstrap keyring
for the cluster will be in place as the ceph-osd units are started
alongside existing ceph units.

Switch this check to look for the upgrade keyring, which won't be
in place until the ceph-osd <-> ceph-mon relation is complete. At
that point in time, a) the unit has the correct access to perform
the upgrade and b) the previous/current version check code will
not trip over the previous value of the source option being None,
which results in a fallback to 'distro' as the previous source
of ceph.
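The gating logic above can be sketched as follows; the function shape and names are illustrative assumptions, not the charm's actual code:

```python
# Hedged sketch of the upgrade gate described above: proceed only once the
# upgrade keyring exists (laid down when the ceph-osd <-> ceph-mon relation
# completes), and fall back to 'distro' when no previous source was recorded.
def ready_to_upgrade(upgrade_keyring_exists, previous_source):
    """Return (ready, effective_previous_source)."""
    if not upgrade_keyring_exists:
        return (False, None)           # relation not yet complete; defer
    return (True, previous_source or "distro")
```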

Change-Id: I10895c60aeb543a10461676e4455ed6b5e2fdb46
Closes-Bug: 1729369
2017-11-01 16:35:17 +00:00
Chris MacNaughton 69b821d344 Clean up dependency chain
This includes a resync of charms_ceph that raises the directory up
one level.  The charms_ceph change being synced in renames the
ceph.py file to __init__.py to remove the second level of
namespacing.

Change-Id: I4eabbd313de2e9420667dc4acca177b2dbbf9581
2016-08-11 11:01:24 -04:00
Chris Holcombe 79c6c28649 Perf Optimizations
This patch starts down the road to automated performance
tuning.  It attempts to identify optimal settings for
hard drives and network cards and then persists them
across reboots.  It is conservative but configurable via
config.yaml settings.
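The "conservative but configurable" approach could look like the sketch below; the setting keys and default values are illustrative assumptions, not the charm's actual tuning profile:

```python
# Hedged sketch: start from safe defaults and let config.yaml overrides win,
# so tuning stays conservative unless the operator opts in to changes.
DEFAULTS = {
    "vm.vfs_cache_pressure": 100,   # illustrative kernel default
    "net.core.rmem_max": 212992,    # illustrative kernel default
}

def effective_settings(config_overrides):
    """Merge operator overrides over conservative defaults."""
    settings = dict(DEFAULTS)
    settings.update(
        {k: v for k, v in config_overrides.items() if v is not None}
    )
    return settings
```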

Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b
2016-07-14 15:24:59 -07:00
James Page c32211c8e1 Re-license charm as Apache-2.0
All contributions to this charm were made under Canonical
copyright; switch to the Apache-2.0 license as agreed so we
can move forward with official project status.

In order to make this change, this commit also drops the
inclusion of upstart configurations for very early versions
of Ceph (argonaut), as they are no longer required.

Change-Id: I9609dd79855b545a2c5adc12b7ac573c6f246d48
2016-06-28 12:01:05 +01:00
Chris Holcombe a8790f2303 Add support for replacing a failed OSD drive
This patch adds an action to replace a hard drive for a particular
OSD server.  The user executing the action gives the OSD number
and the device name of the replacement drive; the rest is
taken care of by the action.  The action will attempt to go through
all the OSD removal steps for the failed drive.  It will force
unmount the drive, and if that fails it will lazily unmount it.
This force-then-lazy pattern comes from experience with dead
hard drives not behaving nicely with umount.
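The force-then-lazy unmount pattern can be sketched as below; the runner is injectable so this stays a logic sketch under stated assumptions rather than the action's actual code:

```python
import subprocess

# Hedged sketch of the force-then-lazy unmount pattern described above:
# try a forced unmount first (umount -f), and if the dead drive still
# refuses, fall back to a lazy unmount (umount -l) that detaches the
# mount point immediately and cleans up once it is no longer busy.
def unmount(mount_point, run=subprocess.call):
    """Unmount a possibly-dead drive; returns which strategy succeeded."""
    if run(["umount", "-f", mount_point]) == 0:
        return "forced"
    if run(["umount", "-l", mount_point]) == 0:
        return "lazy"
    raise RuntimeError("unable to unmount {}".format(mount_point))
```

Passing a fake runner makes the control flow easy to exercise without touching real devices.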

Change-Id: I914cd484280ac3f9b9f1fad8b35ee53e92438a0a
2016-03-17 08:41:15 -07:00
James Page d938364366 Resync charm-helpers
Change-Id: Ibbfcc6d2f0086ee9baf347ccfdf0344ed9c0fb82
2016-03-02 12:06:15 +00:00
uoscibot 3d885c2543 Adapt imports and metadata for github move 2016-02-29 10:45:55 +00:00