This means that when servers have differing numbers of disks,
the same application can be deployed across all of them by
listing every possible disk in the osd-devices option.
Any devices in the list that aren't found on a given server
are simply ignored.
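For example, a uniform configuration might list a superset of the disks present on any one server (device paths here are illustrative):

```shell
# Set a superset of possible OSD devices; devices absent on a
# given server are ignored, so one config suits heterogeneous hosts.
juju config ceph-osd osd-devices='/dev/sdb /dev/sdc /dev/sdd /dev/sde'
```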
Change-Id: I7d0e32571845f790bb1ec42aa6eef72cc9b57b38
Using /dev/vdb as the default can conflict with using Juju storage
to attach devices dynamically as OSDs and journals, because Juju
may attach a volume intended as a journal as the first disk,
causing the device lists to overlap.
Change-Id: I97c7657a82ea463aa090fc6266a4988c8c6bfeb4
Closes-Bug: #1946504
The 'get-availability-zone' action retrieves information about an
availability zone, including details of the CRUSH structure,
specifically 'rack' and 'row'.
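Assuming a deployed ceph-osd unit, the action can be invoked as follows (the unit name is illustrative):

```shell
# Query the CRUSH availability-zone details (rack/row) for a unit
# and wait for the result.
juju run-action ceph-osd/0 get-availability-zone --wait
```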
Closes-Bug: #1911006
Change-Id: I99ebbef5f23d6efe3c848b089c7f2b0d26ad0077
* Correct and improve in/out and stop/start
actions
* Make examples for all actions more consistent
* Add link to Charmed Ceph documentation
* Add a bug reference for the zap-disk action
* Small miscellaneous improvements
Change-Id: Iafa3cfce4c5b3eff599b662bfe2cfc3d9183cad3
Explain how BlueStore vs traditional filesystems
are selected.
Improve config.yaml correspondingly. Move the
'bluestore' option's location in that file. Remove
blank lines for consistency.
Change-Id: Iebc21bdcac742a437719afb53f26729abbf8e87f
This improvement is part of a wave of polish
in preparation for the launch of the Ceph product.
In config.yaml, improve 'osd-journal' option description.
Also modernise example values for 'source' and use
consistent words with the ceph-osd, ceph-mon, and ceph-fs
charms.
Change-Id: Iefbf57078115181c67b320e0c5b6cbd7dc05ac55
Review of README.
Corrected doc URLs in actions.yaml.
The trailing spaces on these lines are deliberate
(forces a carriage return):
260
265
291
307
332
337
Change-Id: Ia61edbfcbf27bf9bc6b35a71793df39c7cb46907
Output of `juju list-action` is, at the time of this writing,
formatted in such a way that descriptions should be kept as terse
as possible, with references to documentation elsewhere.
Change-Id: Ib8e7a4804e696199803b9ac386da7bf02aafd465
Weed out some references to the now deprecated `ceph` charm.
Add example of juju storage usage and reference to juju storage
documentation.
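A sketch of Juju storage usage with this charm (the storage pool name, volume size, and count are illustrative):

```shell
# Deploy with Juju-managed block devices instead of a static
# osd-devices configuration value.
juju deploy ceph-osd --storage osd-devices=cinder,32G,2

# Additional devices can be attached to a running unit later.
juju add-storage ceph-osd/0 osd-devices=cinder,32G,1
```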
Change-Id: Ia9955e2b49589072fd2e1d265a88439d4aebe511
vaultlocker provides support for storage of encryption keys
for LUKS-based dm-crypt devices in Hashicorp Vault.
Add support for this key management approach for Ceph
Luminous or later. Applications will block until vault
has been initialized and unsealed, at which point OSD devices
will be prepared and booted into the Ceph cluster.
The dm-crypt layer is placed between the block device
partition and the top-level LVM PV used to create the VGs
and LVs that support OSD operation.
Vaultlocker enables a systemd unit for each encrypted
block device to perform unlocking during reboots of the
unit; ceph-volume will then detect the new VGs/LVs and
boot the ceph-osd processes as required.
Note that vault/vaultlocker usage is only supported with
ceph-volume, which was introduced into the Ubuntu packages
as of the 12.2.4 point release for Luminous. If vault is
configured as the key manager in deployments using older
versions, a hook error will be thrown with a blocked
status message to this effect.
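A minimal sketch of enabling this, assuming a deployed vault application and the charm's encryption options (option and endpoint names as understood from the charms; verify against the deployed versions):

```shell
# Enable dm-crypt encryption of OSD devices with Vault as the
# key manager.
juju config ceph-osd osd-encrypt=true osd-encrypt-keymanager=vault

# Relate ceph-osd to vault so encryption keys are stored there;
# OSDs remain blocked until vault is initialized and unsealed.
juju add-relation ceph-osd:secrets-storage vault:secrets
```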
Change-Id: I713492d1fd8d371439e96f9eae824b4fe7260e47
Depends-On: If73e7bd518a7bc60c2db08e2aa3a93dcfe79c0dd
Depends-On: https://github.com/juju/charm-helpers/pull/159
Juju 2.0 provides support for network spaces, allowing
charm authors to support direct binding of relations and
extra-bindings onto underlying network spaces.
Add public and cluster extra bindings to this charm to
support separation of client facing and cluster network
traffic using Juju network spaces.
Existing network configuration options will still be
preferred over any Juju provided network bindings, ensuring
that upgrades to existing deployments don't break.
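As a sketch, the bindings can be supplied at deploy time (space names are illustrative):

```shell
# Bind client-facing and cluster traffic to separate Juju
# network spaces via the charm's extra-bindings.
juju deploy ceph-osd --bind "public=public-space cluster=cluster-space"
```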
Change-Id: I78ab6993ad5bd324ea52e279c6ca2630f965544c