Commit Graph

85 Commits

Author SHA1 Message Date
Peter Sabaini ae6ee7f590 Reef default source bobcat
For the reef track, we want to default to bobcat as a source,
as this will provide Reef packages.

Change-Id: I3b5434cffc7e324c676ecbb6a146d29e2f553e5b
2023-12-15 19:49:09 +01:00
Zuul ca9e4fdf05 Merge "Document how multiple devices must be provided" 2023-12-07 15:11:02 +00:00
Luciano Lo Giudice d46e6b66d6 Revert default source to 'yoga'
The OpenStack libs don't recognize Ceph releases when specifying
the charm source. Instead, we have to use an OpenStack release.
Since it was set to quincy, reset it to yoga.

Change-Id: Ie9d485e89bd97d10774912691d657428758300ae
Closes-Bug: #2044052
2023-11-27 01:29:25 +00:00
Samuel Walladge a02b1e9e4a Document how multiple devices must be provided
osd-devices, bluestore-db, bluestore-wal, and osd-journal
accept multiple devices.

Note these must be strictly space separated (not newlines),
due to how .split(' ') is used.
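The strict space separation can be illustrated with a small Python sketch of how a `.split(' ')`-style parse behaves (illustrative only, not the charm's actual code):

```python
def parse_devices(raw: str) -> list[str]:
    # Mirrors a naive .split(' ') parse: only single spaces delimit
    # entries, so newlines or tabs end up embedded in a device name.
    return [d for d in raw.split(' ') if d]

# Space-separated input parses cleanly...
print(parse_devices("/dev/sdb /dev/sdc"))   # ['/dev/sdb', '/dev/sdc']

# ...but newline-separated input yields one mangled entry.
print(parse_devices("/dev/sdb\n/dev/sdc"))  # ['/dev/sdb\n/dev/sdc']
```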

Change-Id: Ic1b883b791fbd1801bbda4d9b9330117d6aea516
2023-10-11 14:19:12 +10:30
Peter Sabaini 1bac66ee50 Remove FileStore support
Remove support for creating FileStore OSDs. Also prevent upgrade
attempts to Reef if a FileStore OSD is detected.

Change-Id: I9609bc0222365cb1f4059312b466a12ef4e0397f
2023-10-06 09:03:51 +02:00
Samuel Walladge ba6186e5de Add config option for tuning osd memory target
Closes-Bug: #1934143

Depends-On: https://review.opendev.org/c/openstack/charm-ceph-mon/+/869896

Change-Id: I22dfc25c4ac2737f5d872ca2bdab3c533533dbff
2023-08-15 15:16:52 +09:30
Zuul 0505d53f92 Merge "Add 2023.2 Bobcat support" 2023-08-02 20:40:48 +00:00
Samuel Walladge f3d67ff89d Clarify that osd-devices not present are ignored
This means that for cases where servers may have a different
number of disks, the same application can be deployed across all,
listing all disks in the osd-devices option.
Any devices in the list that aren't found on the server
will simply be ignored.
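The skip-missing-devices behaviour can be sketched roughly as follows (a hypothetical helper for illustration, not the charm's actual code):

```python
import os

def select_existing(osd_devices: list[str]) -> list[str]:
    # Devices listed in osd-devices that are not present on this
    # server are silently skipped, so a single application config can
    # cover servers with different disk counts.
    return [dev for dev in osd_devices if os.path.exists(dev)]

# Only devices that actually exist on this machine are kept.
print(select_existing([os.devnull, "/no/such/device"]))
```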

Change-Id: I7d0e32571845f790bb1ec42aa6eef72cc9b57b38
2023-07-31 15:02:57 +09:30
Corey Bryant 986981c6f4 Add 2023.2 Bobcat support
* sync charm-helpers to classic charms
* change openstack-origin/source default to quincy
* add mantic to metadata series
* align testing with bobcat
* add new bobcat bundles
* add bobcat bundles to tests.yaml
* add bobcat tests to osci.yaml
* update build-on and run-on bases
* drop kinetic

Change-Id: I7449eba63107b43525359fb92ae1a0ad9e648bab
2023-07-25 17:26:56 -04:00
alitvinov 38407abdd5 Tweak apparmor profile to access OSD volumes.
Also add the aa-profile-mode enforce option to the test bundles.

Closes-Bug: #1860801
Change-Id: I8264ad760d92da3faa384c8edca5566fc622c57d
2023-01-11 08:14:26 +00:00
Chris MacNaughton 88ecd7f6ec Updates to enable jammy and finalise charmcraft builds
- Add 22.04 to charmcraft.yaml
- Update metadata to include jammy
- Remove impish from metadata
- ensure that the source is yoga

Change-Id: Ibb93704c6d66f522cf112ad115b3a294d7a1eb03
2022-03-30 17:03:00 +02:00
Felipe Reyes c5a2f2f776 Clear the default value for osd-devices
Using /dev/vdb as the default can conflict with using Juju storage to
attach devices dynamically as OSDs and journals, because Juju may
attach as the first disk a volume that is meant to be used as a
journal, making the lists of devices overlap.

Change-Id: I97c7657a82ea463aa090fc6266a4988c8c6bfeb4
Closes-Bug: #1946504
2021-10-08 16:44:44 -03:00
David Negreira 13cc2411e3 Add accepted formats on 'key' configuration
Add a bit more information on config.yaml about the type of keys
that can be passed as a parameter to the 'key' configuration.

Signed-off-by: David Negreira <david.negreira@canonical.com>
Change-Id: Ieeb0f598ca9a7188f81619c2b4fe88af14f260fd
Closes-Bug: #1942605
2021-09-03 15:02:30 +02:00
Peter Matulis 6abe0f8d3a Clarify config.yaml re dir-based OSDs
Closes-Bug: #1901058

Change-Id: I47a51e17366f3b5cc873b83c5ac92bbd368bf503
2020-10-22 14:46:45 -04:00
Peter Matulis a88ef0c5e7 Expose BlueStore
Explain how BlueStore vs traditional filesystems
are selected.

Improve config.yaml correspondingly. Move the
'bluestore' option's location in that file. Remove
blank lines for consistency.

Change-Id: Iebc21bdcac742a437719afb53f26729abbf8e87f
2020-10-13 14:21:58 -04:00
Frode Nordahl b49468fc10 Add BlueStore Compression support
Sync in updates from charm-helpers and charms.ceph.

Depends-On: I153c22efb952fc38c5e3d36eed5d85c953e695f7
Depends-On: Ibec4e3221387199adbc1a920e130975d7b25343c
Change-Id: I028440002cdd36be13aaee4a0f50c6a0bca7abda
2020-08-26 16:30:24 +02:00
Zuul bd06ebdf65 Merge "Improve README" 2020-07-23 05:41:50 +00:00
Peter Matulis 07e85175d9 Improve README
This improvement is part of a wave of polish
in preparation for the launch of the Ceph product.

In config.yaml, improve 'osd-journal' option description.
Also modernise example values for 'source' and use
consistent words with the ceph-osd, ceph-mon, and ceph-fs
charms.

Change-Id: Iefbf57078115181c67b320e0c5b6cbd7dc05ac55
2020-07-22 12:34:41 -04:00
Brett Milford 08d56bb040 Warning description for autotune config.
Change-Id: Ieaccc18a39d018d120ae8bd6ee62b97f30d90e41
Partial-Bug: #1798794
2020-07-10 08:15:03 +10:00
Ponnuvel Palaniyappan 791461a7cf Remove a duplicate key and fix typos in config.yaml
Change-Id: Ide6983c8d1b08eb43e5931ba7077c031c46b8ae3
2020-06-15 12:42:56 +01:00
Trent Lloyd cb05b567d1 bluestore-wal only needed if separate to DB device
Update config.yaml to clarify that bluestore-wal should only be set
where a separate (faster) device is being used for the WAL. Otherwise,
the WAL is automatically maintained within the space of the DB device
and does not need to be configured separately.

Additionally clarify that this device is used as an LVM PV and space is
allocated for each block device based on the
bluestore-block-{db,wal}-size setting.
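The placement rule described above can be sketched as follows (illustrative only; the helper and device names are assumptions, not the charm's code):

```python
def bluestore_layout(data_dev, db_dev=None, wal_dev=None):
    # The WAL only needs its own device when one faster than the DB
    # device is available; otherwise it is automatically kept within
    # the space of the DB device (or the data device if no separate
    # DB device is configured either).
    db = db_dev or data_dev
    wal = wal_dev or db
    return {"data": data_dev, "db": db, "wal": wal}

# With no separate devices, everything lives on the data device.
print(bluestore_layout("/dev/sdb"))
```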

Change-Id: I54fc582ecb2cee5de1302685e9103c636c7a307b
Ref: http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
2019-07-09 10:37:40 +08:00
Andre Ruiz 56495eecba Implement new option to enable discard on SSDs
This change implements a new option called 'bdev-enable-discard' to control
behaviour of issuing discards to SSDs on ceph bluestore. The new code tries
to autodetect cases where it should be enabled by default but will allow
forcing if desired.

Change-Id: I7b83605c827eb4058bc4b46c92eb114c11108c93
Closes-Bug: #1788433
2019-03-01 15:26:56 +01:00
Vladimir Grevtsev e35ee0399b Extending a Bluestore config options with examples
This proposal is about adding some usage instructions and
examples for future charm users.

Change-Id: If6698e37e32fc2b526d546cb046a7c8091f66634
2018-11-16 22:19:23 +03:00
James Page 8875a7a685 Enable bluestore by default
For Ceph Luminous (12.2.0) or later, enable the BlueStore block device
format as the default for Ceph OSDs. BlueStore can be disabled by
setting the bluestore config option to False.

For older releases, BlueStore cannot be enabled as it's not
supported; setting the config option will have no effect.

Change-Id: I5ca657b9c4da055c4e0ff12e8b91b39d0964be8c
2018-08-16 05:10:52 +01:00
Zuul 90a57eeadc Merge "Remove the duplicated word" 2018-07-17 15:08:12 +00:00
Bryan Quigley 3527bf4ae1 Removes vm.swappiness and vfs_cache_pressure
They were both set at 1 in the same commit without justification
and both can be bad things to set that low.  This commit will
just let the kernel defaults come through.

Details on how bad it is to set these to 1, courtesy of Jay Vosburgh.

vfs_cache_pressure
Setting vfs_cache_pressure to 1 for all cases is likely to cause
excessive memory usage in the dentry and inode caches for most
workloads. For most uses, the default value of 100 is reasonable.

The vfs_cache_pressure value specifies the percentage of objects
in each of the "dentry" and "inode_entry" slab caches used by
filesystems that will be viewed as "freeable" by the slab shrinking
logic. Some other variables also adjust the actual number of objects
that the kernel will try to free, but for the freeable quantity,
a vfs_cache_pressure of 100 will attempt to free 100 times as many
objects in a cache as a setting of 1. Similarly, a vfs_cache_pressure
of 200 will attempt to free twice as many as a setting of 100.

This only comes into play when the kernel has entered reclaim,
i.e., it is trying to free cached objects in order to make space to
satisfy an allocation that would otherwise fail (or an allocation
has already failed or watermarks have been reached and this is
occurring asynchronously). By setting vfs_cache_pressure to 1,
the kernel will disproportionately reclaim pages from the page
cache instead of from the dentry/inode caches, and those will
grow with almost no bound (if vfs_cache_pressure is 0, they will
literally grow without bound until memory is exhausted).

If the system as a whole has a low cache hit ratio on the objects
in the dentry and inode caches, they will simply consume memory
that is kept idle, and force out page cache pages (file data,
block data and anonymous pages). Eventually, the system will resort
to swapping of pages and if all else fails to killing processes to
free memory. With very low vfs_cache_pressure values, it is more
likely that processes will be killed to free memory before
dentry / inode cache objects are released.

We have had several customers alleviate problems by setting
this value back to the default, or having to raise it to clean
things up after being at 1 for so long.

vm.swappiness
Setting this to 1 will heavily favor (ratio 1:199) releasing file
backed pages over writing anonymous pages to swap ("swapping" a
file backed page just frees the page, as it can be re-read from
its backing file). So, this would, e.g., favor keeping almost all
process anonymous pages (stack, heap, etc), even for idle processes,
in memory over keeping file backed pages in the page cache.
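The linear scaling described above for vfs_cache_pressure can be written as a toy model (illustrative arithmetic only, not the kernel's actual shrinker accounting):

```python
def freeable_objects(cache_objects: int, vfs_cache_pressure: int) -> int:
    # Toy model: the pressure value acts as a percentage of dentry/inode
    # slab objects that the shrinking logic treats as freeable, so the
    # count scales linearly with the setting.
    return cache_objects * vfs_cache_pressure // 100

# The default of 100 exposes 100x more freeable objects than a
# setting of 1, and 200 doubles the default.
print(freeable_objects(10_000, 1))    # 100
print(freeable_objects(10_000, 100))  # 10000
print(freeable_objects(10_000, 200))  # 20000
```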

Change-Id: I94186f3e16f61223e362d3db0ddce799ae6120cb
Closes-Bug: 1770171
Signed-off-by: Bryan Quigley <bryan.quigley@canonical.com>
2018-07-05 11:07:42 -07:00
zhangzs 21127cbd3e Remove the duplicated word
Change-Id: I72159a39e392252e456cfd23c0f6e4f69175dcfe
2018-06-20 14:07:44 +08:00
Ryan Beisner 22ce311b0b No reformat
Do not reformat devices.  A subsequent change will be necessary
to account for conditions where a reformat is still desired,
such as a set of blocking states and user-driven actions.

Partial-bug: #1698154

Depends-On: I90a866aa138d18e4242783c42d4c7c587f696d7d
Change-Id: I3a41ab38e7a1679cf4f5380a7cc56556da3aaf2b
2018-06-04 12:40:47 +02:00
James Page 2069e620b7 Add support for vault key management with vaultlocker
vaultlocker provides support for storage of encryption keys
for LUKS based dm-crypt device in Hashicorp Vault.

Add support for this key management approach for Ceph
Luminous or later. Applications will block until vault
has been initialized and unsealed, at which point OSD devices
will be prepared and booted into the Ceph cluster.

The dm-crypt layer is placed between the block device
partition and the top-level LVM PV used to create the VGs
and LVs that support OSD operation.

Vaultlocker enables a systemd unit for each encrypted
block device to perform unlocking during reboots of the
unit; ceph-volume will then detect the new VGs/LVs and
boot the ceph-osd processes as required.

Note that vault/vaultlocker usage is only supported with
ceph-volume, which was introduced into the Ubuntu packages
as of the 12.2.4 point release for Luminous.  If vault is
configured as the key manager in deployments using older
versions, a hook error will be thrown with a blocked
status message to this effect.

Change-Id: I713492d1fd8d371439e96f9eae824b4fe7260e47
Depends-On: If73e7bd518a7bc60c2db08e2aa3a93dcfe79c0dd
Depends-On: https://github.com/juju/charm-helpers/pull/159
2018-05-15 08:28:15 +01:00
James Page a21be9a7ab Improve idempotency of block device processing
Resync charms.ceph to pickup improvements in recording
of block devices that have been processed as OSD devices
to support better idempotency of block device processing
codepaths.

This fixes a particularly nasty issue when osd-reformat is
set to True, where the charm can wipe and re-prepare an OSD
device before the systemd unit actually boots and mounts the
OSD's associated filesystem.

This change also makes the osd-reformat option a boolean
option which is more accessible to users of the charm
via the CLI and the Juju GUI.

Change-Id: I578203aeebf6da2efc21a10d2e157324186e2a66
Depends-On: I2c6e9d5670c8d1d70584ae19b34eaf16be5dea19
2018-04-10 14:34:53 +01:00
Ryan Beisner 2036e2ea39 Update readme for apparmor
Change-Id: I4afe123e8543441a9fee805dea1426ddd19a9416
2018-03-28 13:36:55 -05:00
Dmitrii Shcherbakov 189e7620c0 add bluestore-specific config options
Adds bluestore-specific options related to the metadata-only journal.

The options allow a user to control:

1. path to a bluestore wal (block special file or regular file)
2. path to a bluestore db (block special file or regular file)
3. size of both

Their configuration works similarly to the FileStore journal. If paths
are not specified both WAL and DB will be collocated on the same block
device as data.

Other options can be configured via an existing config-flags option if needed.
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/

Closes-Bug: #1710474
Change-Id: Ia85092230d4dcb0435354deb276012f923547393
Depends-On: I483ee9dae4ce69c71ae06359d0fb96aaa1c56cbc
Depends-On: Idbbb69acec92b2f2efca80691ca73a2030bcf633
2017-12-20 12:02:42 +00:00
Xav Paice f0de861368 Add options for osd backfill pressure
Added options for osd_max_backfills and osd_recovery_max_active, if we
should want to override the default.

Change-Id: Iaeb93d3068b1fab242acf2d741c36be5f4b29b57
Closes-bug: #1661560
2017-11-07 11:25:53 +13:00
Xav Paice ef3c3c7a0d Add option for OSD initial weight
In small clusters, adding OSDs at their full weight causes massive IO
workload which makes performance unacceptable.  This adds a config
option to change the initial weight, we can set it to 0 or something
small for clusters that would be affected.

Closes-Bug: 1716783
Change-Id: Idadfd565fbda9ffc3952de73c5c58a0dc1dc69c9
2017-09-20 09:28:05 +01:00
Edward Hope-Morley b4900ea86e Add explanation to bluestore config opt
Explain that for Ceph Luminous, which uses BlueStore
as the default backend for OSDs, if the bluestore
option is set to false then OSDs will continue to use
FileStore.

Change-Id: I0cb65310f98562ec959018fad538e9006f1c41f6
2017-08-25 10:14:41 +01:00
Frode Nordahl 7a21fbec82 config.yaml: Cleanup
Change-Id: I86e1695b0b08dd275b3f198835288d1d4a3c95d7
2017-07-16 18:29:26 +02:00
James Page ca8a5c332c Add bluestore support for OSDs
Add highly experimental support for bluestore storage format for
OSD devices; this is disabled by default and should only be enabled
in deployments where loss of data does not present a problem!

Change-Id: I21beff9ce535f1b5c16d7f6f51c35126cc7da43e
Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
2017-07-07 10:00:23 +01:00
Chris MacNaughton 877232630a Add availability_zone to the OSD configuration
Addition of configurable availability_zone allows the
administrator to deploy Ceph with two dimensions of
crush locations, one from config and one from Juju's
availability zone

Change-Id: Ic4410a94171b1d77f2a7c2bc56ed4c0dabb2b2d8
2016-12-09 15:06:47 -05:00
Nuno Santos 31215190af Add detail to ephemeral-unmount config option doc
Add additional detail on usage of the ephemeral-unmount configuration
option, making clear that the value should be the mountpoint path.

Change-Id: Ifd0345d0bb80625978476222445bf9875d33793b
2016-11-15 14:07:42 -05:00
Chris Holcombe 7d42f6e060 Add support for apparmor security profiles
Install apparmor profile for ceph-osd processes, and provide
associated configuration option to place any ceph-osd processes
into enforce, complain, or disable apparmor profile mode.

As this is the first release of this feature, default to disabled
and allow charm users to test and provide feedback for this
release.

Change-Id: I4524c587ac70de13aa3a0cb912033e6eb44b0403
2016-09-28 09:30:52 +01:00
Chris Holcombe 79c6c28649 Perf Optimizations
This patch starts down the road to automated performance
tuning.  It attempts to identify optimal settings for
hard drives and network cards and then persist them
for reboots.  It is conservative but configurable via
config.yaml settings.

Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b
2016-07-14 15:24:59 -07:00
Edward Hope-Morley 8f0347d692 Add support for user-provided ceph config
Adds a new config-flags option to the charm that
supports setting a dictionary of ceph configuration
settings that will be applied to ceph.conf.

This implementation supports config sections so that
settings can be applied to any section supported by
the ceph.conf template in the charm.
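A sectioned config-flags dictionary could be rendered into ceph.conf-style INI text roughly like this (a hypothetical sketch; the charm actually renders via its ceph.conf template):

```python
def render_ceph_conf(config_flags: dict) -> str:
    # Hypothetical rendering of a sectioned config-flags dictionary
    # into ceph.conf-style INI text.
    lines = []
    for section, options in config_flags.items():
        lines.append(f"[{section}]")
        lines.extend(f"{key} = {value}" for key, value in options.items())
        lines.append("")
    return "\n".join(lines)

print(render_ceph_conf({"osd": {"osd max backfills": "2"}}))
```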

Change-Id: I306fd138820746c565f8c7cd83d3ffcc388b9735
Closes-Bug: 1522375
2016-06-01 11:31:18 +01:00
Jenkins 7733790748 Merge "Add support for Storage hooks" 2016-05-19 15:12:29 +00:00
Chris MacNaughton 20c89687de Fix Availability Zone support to not break when not set
In addition to ensuring that we have an AZ set, we need to ensure
that the user has asked to have the crush map customized, ensuring
that using the availability zone features is entirely opt-in.

Change-Id: Ie13f50d4d084317199813d417a8de6dab25d340d
Closes-Bug: 1582274
2016-05-17 16:22:47 -04:00
Chris MacNaughton 7e2ef1d0ea Add support for Storage hooks
This adds support for Juju's storage hooks by merging the config
provided osd-devices with Juju storage provided osd-devices, in the
same way that the existing Ceph charm handles them.

In addition to providing support for ceph-osds via Juju storage,
we provide support for multiple journal devices through Juju storage
as well.

We have to add a shim hook to ensure that Ceph is installed prior
to storage hook invocation, because storage attached at deploy time
will execute hooks before the install hook.
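The hook-ordering problem the shim solves can be sketched as follows (hypothetical helpers for illustration, not the charm's actual hooks):

```python
events = []

def install_ceph():
    # Stand-in for the charm's install step.
    events.append("install")

def handle_osd_devices():
    # Stand-in for processing the attached storage.
    events.append("storage")

def storage_attached_shim(installed: bool) -> None:
    # Storage attached at deploy time can fire this hook before the
    # install hook has run, so install Ceph first if needed.
    if not installed:
        install_ceph()
    handle_osd_devices()

storage_attached_shim(installed=False)
print(events)  # ['install', 'storage']
```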

Change-Id: Idad46e8f4cc32e09fbd64d29cd93745662e9f542
2016-05-17 08:06:07 -04:00
Edward Hope-Morley 62cc614573 Add hardening support
Add charmhelpers.contrib.hardening and calls to install,
config-changed, upgrade-charm and update-status hooks. Also
add new config option to allow one or more hardening
modules to be applied at runtime.

Change-Id: Ic417d678d3b0f7bfda5b393628a67297d7e79107
2016-03-24 11:14:47 +00:00
Chris MacNaughton 130a309a15 support Ceph's --dmcrypt flag for OSD preparation
Tests now verify that Ceph OSDs are running, to ensure
they pass in either order.

Change-Id: Ia543f4b085d4e97976ba08db508761f8dde97c42
2016-03-03 17:03:30 -05:00
James Page 2585d6f872 Add support for multiple l3 networks. 2016-02-25 15:48:22 +00:00
Edward Hope-Morley 408b1c7a4e [hopem,r=]
Support multiple l3 segments.
Closes-Bug: 1523871
2016-02-10 15:21:47 +00:00
James Page 19b2ecfc86 Add configuration option for toggling use of direct io for OSD journals 2016-01-18 16:42:36 +00:00