For the reef track, default to bobcat as the source,
as this will provide reef packages.
Change-Id: I3b5434cffc7e324c676ecbb6a146d29e2f553e5b
The OpenStack libs don't recognize Ceph releases when specifying
the charm source; instead, we have to use an OpenStack release.
Since it was set to quincy, reset it to yoga.
Change-Id: Ie9d485e89bd97d10774912691d657428758300ae
Closes-Bug: #2044052
osd-devices, bluestore-db, bluestore-wal, and osd-journal
accept multiple devices.
Note these must be strictly space-separated (not newline-separated),
due to how .split(' ') is used.
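A minimal sketch of that constraint, assuming the parsing mirrors a plain str.split(' ') call as noted:

```python
# A literal str.split(' ') only breaks on single spaces, so a
# newline-separated device list collapses into one bogus entry.
spaces = "/dev/vdb /dev/vdc /dev/vdd"
print(spaces.split(' '))    # three separate device paths

newlines = "/dev/vdb\n/dev/vdc"
print(newlines.split(' '))  # a single malformed entry containing '\n'
```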
Change-Id: Ic1b883b791fbd1801bbda4d9b9330117d6aea516
Remove support for creating FileStore OSDs. Also prevent upgrade
attempts to Reef if a FileStore OSD is detected.
Change-Id: I9609bc0222365cb1f4059312b466a12ef4e0397f
This means that for cases where servers may have different
numbers of disks, the same application can be deployed across all
of them, listing all disks in the osd-devices option.
Any devices in the list that aren't found on the server
will simply be ignored.
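A hedged sketch of that behaviour (the plain existence check here is an assumption; the charm's actual device detection is more involved):

```python
import os

# Devices listed in osd-devices that are absent on this server are
# skipped; the remainder are processed as OSD devices.
configured = ["/dev/vdb", "/dev/vdc", "/dev/vdd"]
usable = [dev for dev in configured if os.path.exists(dev)]
print(usable)
```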
Change-Id: I7d0e32571845f790bb1ec42aa6eef72cc9b57b38
- Add 22.04 to charmcraft.yaml
- Update metadata to include jammy
- Remove impish from metadata
- Ensure that the source is yoga
Change-Id: Ibb93704c6d66f522cf112ad115b3a294d7a1eb03
Using /dev/vdb as the default can conflict with using Juju storage to
attach devices dynamically as OSDs and journals, because Juju may
attach a volume that is meant to be used as a journal as the first
disk, making the lists of devices overlap.
Change-Id: I97c7657a82ea463aa090fc6266a4988c8c6bfeb4
Closes-Bug: #1946504
Add a bit more information on config.yaml about the type of keys
that can be passed as a parameter to the 'key' configuration.
Signed-off-by: David Negreira <david.negreira@canonical.com>
Change-Id: Ieeb0f598ca9a7188f81619c2b4fe88af14f260fd
Closes-Bug: #1942605
Explain how BlueStore vs traditional filesystems
are selected.
Improve config.yaml correspondingly. Move the
'bluestore' option's location in that file. Remove
blank lines for consistency.
Change-Id: Iebc21bdcac742a437719afb53f26729abbf8e87f
Sync in updates from charm-helpers and charms.ceph.
Depends-On: I153c22efb952fc38c5e3d36eed5d85c953e695f7
Depends-On: Ibec4e3221387199adbc1a920e130975d7b25343c
Change-Id: I028440002cdd36be13aaee4a0f50c6a0bca7abda
This improvement is part of a wave of polish
in preparation for the launch of the Ceph product.
In config.yaml, improve 'osd-journal' option description.
Also modernise example values for 'source' and use
consistent words with the ceph-osd, ceph-mon, and ceph-fs
charms.
Change-Id: Iefbf57078115181c67b320e0c5b6cbd7dc05ac55
Update config.yaml to clarify that bluestore-wal should only be set
where a separate (faster) device is being used for the WAL. Otherwise,
the WAL is automatically maintained within the space of the DB device
and does not need to be configured separately.
Additionally clarify that this device is used as an LVM PV and space is
allocated for each block device based on the
bluestore-block-{db,wal}-size setting.
Change-Id: I54fc582ecb2cee5de1302685e9103c636c7a307b
Ref: http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
This change implements a new option called 'bdev-enable-discard' to control
the behaviour of issuing discards to SSDs on Ceph BlueStore. The new code
tries to autodetect cases where it should be enabled by default but allows
forcing if desired.
Change-Id: I7b83605c827eb4058bc4b46c92eb114c11108c93
Closes-Bug: #1788433
For Ceph Luminous (12.2.0) or later, enable the BlueStore block device
format as the default for Ceph OSDs. BlueStore can be disabled by
setting the bluestore config option to False.
For older releases, BlueStore cannot be enabled as it's not
supported; setting the config option will have no effect.
Change-Id: I5ca657b9c4da055c4e0ff12e8b91b39d0964be8c
They were both set to 1 in the same commit without justification,
and both can be bad things to set that low. This commit just
lets the kernel defaults come through.
Details on how bad it is to set these to 1, courtesy of Jay Vosburgh:
vfs_cache_pressure
Setting vfs_cache_pressure to 1 for all cases is likely to cause
excessive memory usage in the dentry and inode caches for most
workloads. For most uses, the default value of 100 is reasonable.
The vfs_cache_pressure value specifies the percentage of objects
in each of the "dentry" and "inode_entry" slab caches used by
filesystems that will be viewed as "freeable" by the slab shrinking
logic. Some other variables also adjust the actual number of objects
that the kernel will try to free, but for the freeable quantity,
a vfs_cache_pressure of 100 will attempt to free 100 times as many
objects in a cache as a setting of 1. Similarly, a vfs_cache_pressure
of 200 will attempt to free twice as many as a setting of 100.
This only comes into play when the kernel has entered reclaim,
i.e., it is trying to free cached objects in order to make space to
satisfy an allocation that would otherwise fail (or an allocation
has already failed or watermarks have been reached and this is
occurring asynchronously). By setting vfs_cache_pressure to 1,
the kernel will disproportionately reclaim pages from the page
cache instead of from the dentry/inode caches, and those will
grow with almost no bound (if vfs_cache_pressure is 0, they will
literally grow without bound until memory is exhausted).
If the system as a whole has a low cache hit ratio on the objects
in the dentry and inode caches, they will simply consume memory
that is kept idle, and force out page cache pages (file data,
block data and anonymous pages). Eventually, the system will resort
to swapping of pages and if all else fails to killing processes to
free memory. With very low vfs_cache_pressure values, it is more
likely that processes will be killed to free memory before
dentry / inode cache objects are released.
We have had several customers alleviate problems by setting
this value back to the default - or having to make it
higher to clean things up after being at 1 for so long.
vm.swappiness
Setting this to 1 will heavily favor (ratio 1:199) releasing file
backed pages over writing anonymous pages to swap ("swapping" a
file backed page just frees the page, as it can be re-read from
its backing file). So, this would, e.g., favor keeping almost all
process anonymous pages (stack, heap, etc), even for idle processes,
in memory over keeping file backed pages in the page cache.
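For reference, the kernel defaults being restored here can be inspected via procfs (a Linux-only sketch; stock defaults are typically 100 for vfs_cache_pressure and 60 for swappiness, though distributions may differ):

```python
import os

def read_sysctl(name, base="/proc/sys"):
    # Read an integer sysctl value from procfs; 'base' exists purely
    # so the lookup path can be overridden for testing.
    path = os.path.join(base, name.replace(".", "/"))
    with open(path) as f:
        return int(f.read())

# Print the current values where the knobs exist (skipped elsewhere).
for knob in ("vm.vfs_cache_pressure", "vm.swappiness"):
    if os.path.exists(os.path.join("/proc/sys", knob.replace(".", "/"))):
        print(knob, "=", read_sysctl(knob))
```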
Change-Id: I94186f3e16f61223e362d3db0ddce799ae6120cb
Closes-Bug: 1770171
Signed-off-by: Bryan Quigley <bryan.quigley@canonical.com>
Do not reformat devices. A subsequent change will be necessary
to account for conditions where a reformat is still desired,
such as a set of blocking states and user-driven actions.
Partial-bug: #1698154
Depends-On: I90a866aa138d18e4242783c42d4c7c587f696d7d
Change-Id: I3a41ab38e7a1679cf4f5380a7cc56556da3aaf2b
vaultlocker provides support for storage of encryption keys
for LUKS-based dm-crypt devices in Hashicorp Vault.
Add support for this key management approach for Ceph
Luminous or later. Applications will block until Vault
has been initialized and unsealed, at which point OSD devices
will be prepared and booted into the Ceph cluster.
The dm-crypt layer is placed between the block device
partition and the top-level LVM PV used to create the VGs
and LVs that support OSD operation.
Vaultlocker enables a systemd unit for each encrypted
block device to perform unlocking during reboots of the
unit; ceph-volume will then detect the new VGs/LVs and
boot the ceph-osd processes as required.
Note that vault/vaultlocker usage is only supported with
ceph-volume, which was introduced into the Ubuntu packages
as of the 12.2.4 point release for Luminous. If Vault is
configured as the key manager in deployments using older
versions, a hook error will be thrown with a blocked
status message to this effect.
Change-Id: I713492d1fd8d371439e96f9eae824b4fe7260e47
Depends-On: If73e7bd518a7bc60c2db08e2aa3a93dcfe79c0dd
Depends-On: https://github.com/juju/charm-helpers/pull/159
Resync charms.ceph to pickup improvements in recording
of block devices that have been processed as OSD devices
to support better idempotency of block device processing
codepaths.
This fixes a particularly nasty issue when osd-reformat
is set to True, where the charm can wipe and re-prepare
an OSD device before the systemd unit actually boots
and mounts the OSD's associated filesystem.
This change also makes the osd-reformat option a boolean
option which is more accessible to users of the charm
via the CLI and the Juju GUI.
Change-Id: I578203aeebf6da2efc21a10d2e157324186e2a66
Depends-On: I2c6e9d5670c8d1d70584ae19b34eaf16be5dea19
Adds bluestore-specific options related to the metadata-only journal.
The options allow a user to control:
1. path to a bluestore wal (block special file or regular file)
2. path to a bluestore db (block special file or regular file)
3. size of both
Their configuration works similarly to the FileStore journal. If paths
are not specified, both the WAL and DB will be colocated on the same
block device as the data.
Other options can be configured via an existing config-flags option if needed.
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
Closes-Bug: #1710474
Change-Id: Ia85092230d4dcb0435354deb276012f923547393
Depends-On: I483ee9dae4ce69c71ae06359d0fb96aaa1c56cbc
Depends-On: Idbbb69acec92b2f2efca80691ca73a2030bcf633
Added options for osd_max_backfills and osd_recovery_max_active,
should we want to override the defaults.
Change-Id: Iaeb93d3068b1fab242acf2d741c36be5f4b29b57
Closes-bug: #1661560
In small clusters, adding OSDs at their full weight causes massive IO
workload, which makes performance unacceptable. This adds a config
option to change the initial weight; we can set it to 0 or something
small for clusters that would be affected.
Closes-Bug: 1716783
Change-Id: Idadfd565fbda9ffc3952de73c5c58a0dc1dc69c9
Explain that for Ceph Luminous, which uses BlueStore
as the default backend for OSDs, if the bluestore
option is set to false then OSDs will continue to use
FileStore.
Change-Id: I0cb65310f98562ec959018fad538e9006f1c41f6
Add highly experimental support for bluestore storage format for
OSD devices; this is disabled by default and should only be enabled
in deployments where loss of data does not present a problem!
Change-Id: I21beff9ce535f1b5c16d7f6f51c35126cc7da43e
Depends-On: I36f7aa9d7b96ec5c9eaa7a3a970593f9ca14cb34
Addition of a configurable availability_zone allows the
administrator to deploy Ceph with two dimensions of
crush locations: one from config and one from Juju's
availability zone.
Change-Id: Ic4410a94171b1d77f2a7c2bc56ed4c0dabb2b2d8
Add additional detail on usage of the ephemeral-unmount configuration
option, making clear that the value should be the mountpoint path.
Change-Id: Ifd0345d0bb80625978476222445bf9875d33793b
Install apparmor profile for ceph-osd processes, and provide
associated configuration option to place any ceph-osd processes
into enforce, complain, or disable apparmor profile mode.
As this is the first release of this feature, default to disabled
and allow charm users to test and provide feedback for this
release.
Change-Id: I4524c587ac70de13aa3a0cb912033e6eb44b0403
This patch starts down the road to automated performance
tuning. It attempts to identify optimal settings for
hard drives and network cards and then persist them
for reboots. It is conservative but configurable via
config.yaml settings.
Change-Id: Id4e72ae13ec3cb594e667f57e8cc70b7e18af15b
Adds a new config-flags option to the charm that
supports setting a dictionary of ceph configuration
settings that will be applied to ceph.conf.
This implementation supports config sections so that
settings can be applied to any section supported by
the ceph.conf template in the charm.
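The section-keyed shape described above might look like the following (a hypothetical value; the exact string syntax accepted by the option is defined in the charm's config.yaml):

```python
# Hypothetical config-flags value: a dict keyed by ceph.conf section,
# rendered into ceph.conf-style lines per section.
config_flags = {
    "global": {"debug osd": "5/5"},
    "osd": {"osd max write size": "256"},
}

for section, opts in config_flags.items():
    print("[%s]" % section)
    for key, val in opts.items():
        print("%s = %s" % (key, val))
```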
Change-Id: I306fd138820746c565f8c7cd83d3ffcc388b9735
Closes-Bug: 1522375
In addition to ensuring that we have the AZ set, we need to ensure
that the user has asked to have the crush map customized, ensuring
that use of the availability zone features is entirely opt-in.
Change-Id: Ie13f50d4d084317199813d417a8de6dab25d340d
Closes-Bug: 1582274
This adds support for Juju's storage hooks by merging the config
provided osd-devices with Juju storage provided osd-devices, in the
same way that the existing Ceph charm handles them.
In addition to providing support for ceph-osds via Juju storage,
we provide support for multiple journal devices through Juju storage
as well.
We have to add a shim hook to ensure that Ceph is installed prior
to storage hook invocation, because storage attached at deploy time
will execute hooks before the install hook.
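The merge described above can be sketched roughly as follows (the device names are hypothetical; the charm discovers Juju-attached devices via its storage hooks):

```python
# Union of config-provided and Juju-storage-provided OSD devices,
# preserving order and dropping duplicates.
config_devices = "/dev/vdb /dev/vdc".split(' ')
storage_devices = ["/dev/vdc", "/dev/vdd"]  # e.g. from storage hooks

seen = set()
osd_devices = [d for d in config_devices + storage_devices
               if not (d in seen or seen.add(d))]
print(osd_devices)  # ['/dev/vdb', '/dev/vdc', '/dev/vdd']
```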
Change-Id: Idad46e8f4cc32e09fbd64d29cd93745662e9f542
Add charmhelpers.contrib.hardening and calls to install,
config-changed, upgrade-charm and update-status hooks. Also
add new config option to allow one or more hardening
modules to be applied at runtime.
Change-Id: Ic417d678d3b0f7bfda5b393628a67297d7e79107