Commit Graph

46 Commits

Peter Sabaini 1bac66ee50 Remove FileStore support
Remove support for creating FileStore OSDs. Also prevent upgrade
attempts to Reef if a FileStore OSD is detected.

Change-Id: I9609bc0222365cb1f4059312b466a12ef4e0397f
2023-10-06 09:03:51 +02:00
Samuel Walladge 97be046f9b Save the crash module auth key
Read the key set on the mon relation,
and use ceph-authtool to save it to a keyring,
for use by the crash module for crash reporting.

When this auth key is set, the crash module (enabled by default)
will update ceph-mon with a report.
It also results in a neat summary of recent crashes
that can be viewed with `ceph health detail`.
For example:

```
$ juju ssh ceph-mon/leader -- sudo ceph health detail

HEALTH_WARN 1 daemons have recently crashed
[WRN] RECENT_CRASH: 1 daemons have recently crashed
    osd.1 crashed on host node-3 at 2023-01-04T05:25:18.218628Z
```

ref. https://docs.ceph.com/en/latest/mgr/crash/

See also https://review.opendev.org/c/openstack/charm-ceph-mon/+/869138
for where setting the client_crash_key relation data is implemented.

Depends-On: https://review.opendev.org/c/openstack/charm-ceph-mon/+/869138

Closes-Bug: #2000630
Change-Id: I77c84c368e6665e4988ebe9a735f000f99d0b78e
2023-01-20 15:13:13 +10:30
Luciano Lo Giudice 55720fa087 Implement the 'remove-disk' action
This new action allows users to either purge an OSD or remove it,
opening up the possibility of recycling the previous OSD id. In
addition, this action will clean up any bcache devices that were
created in previous steps.

Change-Id: If3566031ba3f02dac0bc86938dcf9e85a66a66f0
Depends-On: Ib959e81833eb2094d02c7bdd507b1c8b7fbcd3db
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/683
2022-03-31 18:50:22 +00:00
Frode Nordahl b49468fc10 Add BlueStore Compression support
Sync in updates from charm-helpers and charms.ceph.
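
As a sketch (not from this commit's diff), the kind of ceph.conf
output this enables might look like the following; option names follow
the upstream BlueStore settings, values are illustrative:

```
[osd]
# illustrative choices; upstream options are bluestore_compression_*
bluestore compression algorithm = zstd
bluestore compression mode = aggressive
```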

Depends-On: I153c22efb952fc38c5e3d36eed5d85c953e695f7
Depends-On: Ibec4e3221387199adbc1a920e130975d7b25343c
Change-Id: I028440002cdd36be13aaee4a0f50c6a0bca7abda
2020-08-26 16:30:24 +02:00
Liam Young 3e795f6a62 Apply OSD settings from mons.
Apply OSD settings requested by the mons via the juju relation.
The OSD settings are also added to the config. Before the settings
are applied, config-flags is checked to ensure there is no overlap.

Change-Id: Id69222217a1c99d0269831913abdf488791cb572
2020-05-04 08:23:00 +00:00
Andre Ruiz 56495eecba Implement new option to enable discard on SSDs
This change implements a new option called 'bdev-enable-discard' to control
the behaviour of issuing discards to SSDs on Ceph BlueStore. The new code
tries to autodetect cases where it should be enabled by default, but allows
forcing the behaviour if desired.
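
A sketch of the rendered result when discard ends up enabled, assuming
the charm maps the option onto the upstream bdev_enable_discard
setting:

```
[osd]
# discard enabled explicitly rather than via autodetection
bdev enable discard = true
```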

Change-Id: I7b83605c827eb4058bc4b46c92eb114c11108c93
Closes-Bug: #1788433
2019-03-01 15:26:56 +01:00
James Page 2f148f26b6 Fixup ceph.conf templating
Ensure that code snippets don't slurp too much whitespace,
resulting in a broken set of keys within the generated
configuration file.

Change-Id: I7cfe026c60c04ac19741a3a2b364cec3fb8746ba
2018-07-18 15:26:21 -04:00
James Page 2069e620b7 Add support for vault key management with vaultlocker
vaultlocker provides support for storage of encryption keys
for LUKS based dm-crypt device in Hashicorp Vault.

Add support for this key management approach for Ceph
Luminous or later. Applications will block until vault
has been initialized and unsealed, at which point OSD devices
will be prepared and booted into the Ceph cluster.

The dm-crypt layer is placed between the block device
partition and the top-level LVM PV used to create the VGs
and LVs that support OSD operation.

Vaultlocker enables a systemd unit for each encrypted
block device to perform unlocking during reboots of the
unit; ceph-volume will then detect the new VGs/LVs and
boot the ceph-osd processes as required.

Note that vault/vaultlocker usage is only supported with
ceph-volume, which was introduced into the Ubuntu packages
as of the 12.2.4 point release for Luminous.  If vault is
configured as the key manager in deployments using older
versions, a hook error will be thrown with a blocked
status message to this effect.

Change-Id: I713492d1fd8d371439e96f9eae824b4fe7260e47
Depends-On: If73e7bd518a7bc60c2db08e2aa3a93dcfe79c0dd
Depends-On: https://github.com/juju/charm-helpers/pull/159
2018-05-15 08:28:15 +01:00
Sandor Zeestraten 5ea1973062 Render crush-initial-weight option if set to 0
Fixes the conditional in the ceph.conf template so it renders the
crush-initial-weight config option if set to 0.
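
With the fix, a zero weight is rendered rather than skipped as falsy;
the expected output, assuming the charm maps the option onto the
upstream osd_crush_initial_weight setting:

```
[osd]
# new OSDs join the CRUSH map with zero weight until reweighted
osd crush initial weight = 0
```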

Change-Id: Iaecbdf52bd3731effa3132e61364918407116dbe
Closes-Bug: 1764077
2018-04-16 09:27:39 +02:00
Dmitrii Shcherbakov 189e7620c0 add bluestore-specific config options
Adds bluestore-specific options related to the metadata-only journal.

The options allow a user to control:

1. path to a bluestore wal (block special file or regular file)
2. path to a bluestore db (block special file or regular file)
3. size of both

Their configuration works similarly to the FileStore journal. If paths
are not specified, both WAL and DB will be colocated on the same block
device as the data.

Other options can be configured via an existing config-flags option if needed.
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
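
A sketch of the resulting ceph.conf, assuming the charm maps these
options onto the upstream BlueStore settings; paths and sizes are
illustrative:

```
[osd]
# WAL and DB on separate devices; sizes are in bytes
bluestore block wal path = /dev/nvme0n1p1
bluestore block db path = /dev/nvme0n1p2
bluestore block wal size = 1073741824
bluestore block db size = 10737418240
```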

Closes-Bug: #1710474
Change-Id: Ia85092230d4dcb0435354deb276012f923547393
Depends-On: I483ee9dae4ce69c71ae06359d0fb96aaa1c56cbc
Depends-On: Idbbb69acec92b2f2efca80691ca73a2030bcf633
2017-12-20 12:02:42 +00:00
Zuul 7d20f7e37a Merge "Add options for osd backfill pressure" 2017-11-10 14:09:41 +00:00
Billy Olsen efcd88640f Change `osd crush location` to `crush location`
Upstream Ceph removed the `osd crush location` option in
commit f9db479a14d9103a2b7c0a24d958fe5fff94100e [0]. This
causes new clusters deployed from the Pike UCA (Luminous)
using the customize-failure-domain option to not create or
move the OSD to the correct spot in the OSD tree. The end
result is that placement groups will fail to peer because
there are no racks to select hosts and OSDs from.

Instead, the charm should set the more generic `crush
location` option in the ceph.conf file. It has been
supported since the Firefly (trusty-icehouse) version of
Ceph.
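
For illustration, the generic directive takes the same key=value form
as the removed OSD-specific one (values hypothetical):

```
[osd]
# generic replacement for the removed `osd crush location`
crush location = root=default rack=rack1 host=node-1
```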

[0] f9db479a14

Change-Id: I0b7055b20f54096a2f33583079326aee17726355
Closes-Bug: 1730839
2017-11-07 19:59:27 -07:00
Xav Paice f0de861368 Add options for osd backfill pressure
Added options for osd_max_backfills and osd_recovery_max_active for
cases where the defaults need to be overridden.
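
A sketch of the rendered output when both overrides are set (values
illustrative):

```
[osd]
# throttle backfill/recovery to reduce impact on client I/O
osd max backfills = 1
osd recovery max active = 1
```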

Change-Id: Iaeb93d3068b1fab242acf2d741c36be5f4b29b57
Closes-bug: #1661560
2017-11-07 11:25:53 +13:00
Jenkins d5a584bc82 Merge "Add option for OSD initial weight" 2017-09-20 12:15:34 +00:00
Xav Paice ef3c3c7a0d Add option for OSD initial weight
In small clusters, adding OSDs at their full weight causes a massive IO
workload which makes performance unacceptable. This adds a config
option to change the initial weight; it can be set to 0 or something
small for clusters that would otherwise be affected.
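
For example, assuming the charm renders the upstream
osd_crush_initial_weight option, a small non-zero value would look
like:

```
[osd]
# new OSDs start near-empty and can be reweighted up gradually
osd crush initial weight = 0.1
```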

Closes-Bug: 1716783
Change-Id: Idadfd565fbda9ffc3952de73c5c58a0dc1dc69c9
2017-09-20 09:28:05 +01:00
James Page 14f5033815 Drop configuration for global keyring
Drop explicit global configuration of keyring, supporting
installation of the ceph/ceph-mon/ceph-osd charms on the
same machine.

Change-Id: Ib4afd01fbcc4478ce90de5bd464b7829ecc5da7e
Closes-Bug: 1681750
2017-09-14 13:10:13 -06:00
Dmitrii Shcherbakov 6c3fe64fb5 use a non-legacy bluestore option on Luminous+
the 'experimental' option is no longer needed as of the Luminous release
https://github.com/ceph/ceph/blob/luminous/src/common/legacy_config_opts.h#L79
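
The change amounts to dropping the legacy guard when rendering for
Luminous or newer; a sketch of the difference:

```
[osd]
# pre-Luminous (legacy, experimental) additionally required:
#   enable experimental unrecoverable data corrupting features = bluestore
# Luminous and later only need:
osd objectstore = bluestore
```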

Change-Id: Idbbb69acec92b2f2efca80691ca73a2030bcf633
2017-08-29 10:20:37 +03:00
James Page ca8a5c332c Add bluestore support for OSD's
Add highly experimental support for bluestore storage format for
OSD devices; this is disabled by default and should only be enabled
in deployments where loss of data does not present a problem!

Change-Id: I21beff9ce535f1b5c16d7f6f51c35126cc7da43e
Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
2017-07-07 10:00:23 +01:00
Billy Olsen 2c5406b6b3 Upgrade OSDs one at a time when changing ownership
Some upgrade scenarios (hammer->jewel) require that the ownership
of the ceph osd directories is changed from root:root to ceph:ceph.
This patch improves the upgrade experience by upgrading one OSD at
a time as opposed to stopping all services, changing file ownership,
and then restarting all services at once.

This patch makes use of the `setuser match path` directive in the
ceph.conf, which causes the ceph daemon to start as the owner of the
OSD's root directory. This allows the ceph OSDs to continue running
should an unforeseen incident occur as part of this upgrade.
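
A sketch of the directive, using the conventional value from the
Jewel release notes:

```
[osd]
# each daemon starts as the owner of its data path, so root-owned
# and ceph-owned OSDs can coexist during the rolling change
setuser match path = /var/lib/ceph/$type/$cluster-$id
```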

Change-Id: I00fdbe0fd113c56209429341f0a10797e5baee5a
Closes-Bug: #1662591
2017-03-28 12:45:42 -07:00
Chris Holcombe 79c6c28649 Perf Optimizations
This patch starts down the road to automated performance
tuning.  It attempts to identify optimal settings for
hard drives and network cards and then persist them
across reboots. It is conservative but configurable via
config.yaml settings.

Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b
2016-07-14 15:24:59 -07:00
Edward Hope-Morley 8f0347d692 Add support for user-provided ceph config
Adds a new config-flags option to the charm that
supports setting a dictionary of ceph configuration
settings that will be applied to ceph.conf.

This implementation supports config sections so that
settings can be applied to any section supported by
the ceph.conf template in the charm.
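
For example, a config-flags value such as
{'osd': {'osd max write size': 512}} (key and value illustrative)
would render into ceph.conf roughly as:

```
[osd]
# taken verbatim from the user-provided config-flags dictionary
osd max write size = 512
```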

Change-Id: I306fd138820746c565f8c7cd83d3ffcc388b9735
Closes-Bug: 1522375
2016-06-01 11:31:18 +01:00
Chris MacNaughton 20c89687de Fix Availability Zone support to not break when not set
In addition to ensuring that we have an AZ set, we need to ensure that
the user has asked to have the crush map customized, making the
availability zone features entirely opt-in.

Change-Id: Ie13f50d4d084317199813d417a8de6dab25d340d
Closes-Bug: 1582274
2016-05-17 16:22:47 -04:00
James Page 53d09832e5 Limit OSD object name lengths for Jewel + ext4
As of the Ceph Jewel release, certain limitations apply to
OSD object name lengths: specifically, if ext4 is in use for
block devices or a directory-based OSD is configured, OSDs
must be configured to limit object name length:

  osd max object name len = 256
  osd max object namespace len = 64

This may cause problems storing objects with long names via
the ceph-radosgw charm or for direct users of RADOS.

Also ensure that ceph.conf has a final newline, as Ceph requires
this.

Change-Id: I26f1d8a6f9560b307929f294d2d637c92986cf41
Closes-Bug: 1580320
Closes-Bug: 1578403
2016-05-17 10:21:22 +01:00
Chris Holcombe f1fc225748 Revert "add juju availability zone to ceph osd location when present"
This reverts commit c94e0b4b0e.

Support for juju provided zones was broken on older Ceph releases
where MAAS zones are not configured (i.e. nothing other than the
default zone).

Backing this change out until we can provide a more complete and
backwards compatible solution.

Closes-Bug: 1570960

Change-Id: I889d556d180d47b54af2991a65efcca09d685332
2016-04-20 08:49:23 +01:00
Chris Holcombe 4285f14a55 Rolling upgrades of ceph osd cluster
This change adds functionality to allow the ceph osd cluster to
upgrade in a serial, rolling fashion. This uses the ceph monitor
cluster to take a lock, allowing only one ceph osd server at a time to
upgrade. The upgrade is initiated by setting a config value for source
for the service, which will prompt the osd cluster to upgrade to that
new source and restart all osd processes server by server. If an osd
server has been waiting on a previous server for more than 10 minutes
and hasn't seen it finish, it will assume it died during the upgrade
and proceed with its own upgrade.

I had to modify the amulet test slightly to use the ceph-mon charm
instead of the default ceph charm.  I also changed the test so that
it uses 3 ceph-osd servers instead of 1.

Limitations of this patch: if the osd failure domain has been set to osd,
then this patch will cause brief temporary outages while osd processes
are being restarted. Future work will handle this case.

This reverts commit db09fdce93.

Change-Id: Ied010278085611b6d552e050a9d2bfdad7f3d35d
2016-03-31 11:36:27 -07:00
Chris Holcombe db09fdce93 Revert "Rolling upgrades of ceph osd cluster"
This reverts commit 5b2cebfdc4.

Change-Id: Ic6f371fcc2879886b705fdce4d59bc99e41eea89
2016-03-25 15:02:50 +00:00
Chris Holcombe 5b2cebfdc4 Rolling upgrades of ceph osd cluster
This change adds functionality to allow the ceph osd cluster to
upgrade in a serial, rolling fashion. This uses the ceph monitor
cluster to take a lock, allowing only one ceph osd server at a time to
upgrade. The upgrade is initiated by setting a config value for source
for the service, which will prompt the osd cluster to upgrade to that
new source and restart all osd processes server by server. If an osd
server has been waiting on a previous server for more than 10 minutes
and hasn't seen it finish, it will assume it died during the upgrade
and proceed with its own upgrade.

I had to modify the amulet test slightly to use the ceph-mon charm
instead of the default ceph charm.  I also changed the test so that
it uses 3 ceph-osd servers instead of 1.

Limitations of this patch: if the osd failure domain has been set to osd,
then this patch will cause brief temporary outages while osd processes
are being restarted. Future work will handle this case.

Change-Id: Id9f89241f3aebe4886310e9b208bcb19f88e1e3e
2016-03-23 10:22:04 -07:00
Chris MacNaughton c94e0b4b0e add juju availability zone to ceph osd location when present
The approach here is to use the availability zone as an imaginary rack.
All hosts that are in the same AZ will be in the same imaginary rack.
From Ceph's perspective this doesn't matter as it's just a bucket after all.
This will give users the ability to further customize their ceph deployment.

Change-Id: Ie25ac1b001db558d6a40fe3eaca014e8f4174241
2016-03-18 09:09:06 -04:00
James Page 19b2ecfc86 Add configuration option for toggling use of direct io for OSD journals 2016-01-18 16:42:36 +00:00
Edward Hope-Morley d6aeb13ce3 [hopem,r=]
Add loglevel config option.
Closes-Bug: 1520236
2016-01-13 14:48:57 +02:00
James Page 78991105d6 [coreycb,r=james-page] Add amulet tests. 2014-10-06 23:09:58 +01:00
Edward Hope-Morley d3373781da set private/public addr like in ceph charm 2014-09-29 12:00:26 +01:00
Corey Bryant faee718892 Remove leading whitespace from templates/ceph.conf
(ConfigParser can't parse)
2014-09-27 17:18:59 +00:00
Edward Hope-Morley 3efaa20c51 fixed ceph.conf newline issue 2014-09-24 14:51:16 +01:00
Edward Hope-Morley 9dc6e5bfbb fixed ceph.conf newline issue and get_host_ip() 2014-09-24 14:29:10 +01:00
Edward Hope-Morley 1913919d34 applied jamespage review fixes 2014-09-24 13:54:40 +01:00
Edward Hope-Morley 86b1c30ef5 [hopem,r=]
Adds IPv6 support for ceph osd.
2014-09-18 18:52:25 +01:00
James Page ca5b7b09c2 Rebase 2014-07-25 09:07:41 +01:00
James Page aba8e70f04 Use charm-helper for version comparisons 2014-07-23 12:34:57 +01:00
James Page dff29526a1 Add support for splitting public and cluster networks 2014-06-06 13:34:39 +01:00
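
For illustration, the split renders as two directives in ceph.conf
(subnets hypothetical):

```
[global]
# client traffic on the public network, OSD replication on the
# cluster network
public network = 10.0.0.0/24
cluster network = 10.0.1.0/24
```
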
Edward Hope-Morley 2aa82ffdbc [hopem] Added use-syslog cfg option to allow logging to syslog 2014-03-25 18:44:23 +00:00
Edward Hope-Morley 1b469dc0fa Adds configurable osd-journal-size option
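
Assuming this maps onto the upstream option (value in MB,
illustrative):

```
[osd]
# FileStore journal size in MB
osd journal size = 1024
```
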
Fixes: bug 1259919
2013-12-12 11:17:23 +00:00
James Page 80b4985348 Resync with ceph charm, updates for raring 2012-12-17 10:31:03 +00:00
James Page 341e163456 Added support for auth configuration from mons 2012-10-19 16:50:51 +01:00
James Page d5e627205b Enable cephx support by default 2012-10-09 12:19:16 +01:00
James Page 1683ffaa84 Initial ceph-osd charm 2012-10-08 15:07:16 +01:00