charm-ceph-mon/config.yaml
options:
loglevel:
type: int
default: 1
description: Mon and OSD debug level. Max is 20.
use-syslog:
type: boolean
default: False
description: |
If set to True, supporting services will log to syslog.
source:
type: string
default: bobcat
description: |
Optional configuration to support use of additional sources such as:
.
- ppa:myteam/ppa
- cloud:bionic-ussuri
- cloud:xenial-proposed/queens
- http://my.archive.com/ubuntu main
.
The last option should be used in conjunction with the key configuration
option.
key:
type: string
default:
description: |
Key ID to import to the apt keyring to support use with arbitrary source
configuration from outside of Launchpad archives or PPAs.
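.
For example, to configure a third-party archive together with its
signing key (both values are illustrative, not real):
.
juju config ceph-mon source='http://my.archive.com/ubuntu main' key=<apt-key-id>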
harden:
type: string
default:
description: |
Apply system hardening. Supports a space-delimited list of modules
to run. Supported modules currently include os, ssh, apache and mysql.
fsid:
type: string
default:
description: |
The unique identifier (fsid) of the Ceph cluster.
.
WARNING: this option should only be used when performing an in-place
migration of an existing non-charm deployed Ceph cluster to a charm
managed deployment.
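.
For example (sketch only): on the cluster being migrated, the existing
fsid can be read with `ceph fsid` and supplied at deploy time:
.
juju deploy ceph-mon --config fsid=$(ceph fsid)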
config-flags:
type: string
default:
description: |
User-provided Ceph configuration. Supports a string representation of
a Python dictionary where each top-level key represents a section in
the ceph.conf template. You may only use sections supported in the
template.
.
WARNING: this is not the recommended way to configure the underlying
services that this charm installs and is used at the user's own risk.
This option is mainly provided as a stop-gap for users that either
want to test the effect of modifying some config or who have found
a critical bug in the way the charm has configured their services
and need it fixed immediately. We ask that whenever this is used,
that the user consider opening a bug on this charm at
http://bugs.launchpad.net/charms providing an explanation of why the
config was needed so that we may consider it for inclusion as a
natively supported config in the charm.
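.
For example (the settings shown are purely illustrative):
.
juju config ceph-mon config-flags="{'mon': {'mon osd down out interval': 600}}"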
auth-supported:
type: string
default: cephx
description: |
[DEPRECATED] Which authentication flavour to use.
.
This option no longer has any effect. Setting it to "none" is insecure
and breaks expected Ceph functionality. The charm now ignores this
option and always sets auth to cephx.
.
Original description:
.
[DEPRECATED] Valid options are "cephx" and "none". If "none" is
specified, keys will still be created and deployed so that it can be
enabled later.
monitor-secret:
type: string
default:
description: |
The Ceph secret key used by Ceph monitors. This value will become the
mon.key. To generate a suitable value use:
.
ceph-authtool /dev/stdout --name=mon. --gen-key
.
If left empty, a secret key will be generated.
.
NOTE: Changing this configuration after deployment is not supported and
new service units will not be able to join the cluster.
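.
For example (sketch), generate a key as above and pass it at deploy
time:
.
juju deploy ceph-mon --config monitor-secret=<generated-key>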
monitor-count:
type: int
default: 3
description: |
Number of ceph-mon units to wait for before attempting to bootstrap the
monitor cluster. For production clusters the default value of 3 ceph-mon
units is normally a good choice.
.
For test and development environments you can enable single-unit
deployment by setting this to 1.
.
NOTE: To establish quorum and enable partition tolerance an odd number
of ceph-mon units is required.
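.
For example (test/development environments only):
.
juju deploy -n 1 ceph-mon --config monitor-count=1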
monitor-hosts:
type: string
default:
description: |
A space-separated list of ceph mon hosts to use. This field is only used
to migrate an existing cluster to a juju-managed solution and should
otherwise be left unset.
monitor-data-available-warning:
type: int
default: 30
description: |
Raise HEALTH_WARN status when the filesystem that houses a monitor's data
store reports that its available capacity is less than or equal to this
percentage.
monitor-data-available-critical:
type: int
default: 5
description: |
Raise HEALTH_ERR status when the filesystem that houses a monitor's data
store reports that its available capacity is less than or equal to this
percentage.
expected-osd-count:
type: int
default: 0
description: |
The number of OSDs expected to be deployed in the cluster. This value can
influence the number of placement groups (PGs) to use for pools. The PG
calculation is based either on the actual number of OSDs or this option's
value, whichever is greater. The default value is '0', which tells the
charm to only consider the actual number of OSDs. If the actual number of
OSDs is less than three then this option must explicitly state that
number.
pgs-per-osd:
type: int
default: 100
description: |
The number of placement groups per OSD to target. It is important to
properly size the number of placement groups per OSD as too many
or too few placement groups per OSD may cause resource constraints and
performance degradation. This value is based on the Ceph placement
group calculator (http://ceph.com/pgcalc/); recommended values are:
.
100 - If the cluster OSD count is not expected to increase in the
foreseeable future.
200 - If the cluster OSD count is expected to increase (up to 2x) in the
foreseeable future.
300 - If the cluster OSD count is expected to increase between 2x and 3x
in the foreseeable future.
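.
As a rough worked example of the sizing this option feeds into
(following the pgcalc guidance above; the charm's exact calculation may
differ):
.
(12 OSDs x 100 pgs-per-osd) / 3 replicas = 400, rounded to the nearest
power of two = 512 PGs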
ceph-public-network:
type: string
default:
description: |
The IP address and netmask of the public (front-side) network (e.g.,
192.168.0.0/24)
.
If multiple networks are to be used, a space-delimited list of a.b.c.d/x
can be provided.
ceph-cluster-network:
type: string
default:
description: |
The IP address and netmask of the cluster (back-side) network (e.g.,
192.168.0.0/24)
.
If multiple networks are to be used, a space-delimited list of a.b.c.d/x
can be provided.
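.
For example (illustrative subnets):
.
juju config ceph-mon ceph-public-network=192.168.0.0/24 ceph-cluster-network=192.168.1.0/24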
prefer-ipv6:
type: boolean
default: False
description: |
If set to True, IPv6 support is enabled and the charm will expect
network interfaces to be configured with an IPv6 address. If set to
False (the default), IPv4 is expected.
.
NOTE: these charms do not currently support IPv6 privacy extensions. In
order for this charm to function correctly, the privacy extensions must
be disabled and a non-temporary address must be configured/available on
your network interface.
sysctl:
type: string
default: '{ kernel.pid_max : 2097152, vm.max_map_count : 524288,
kernel.threads-max: 2097152 }'
description: |
YAML-formatted associative array of sysctl key/value pairs to be set
persistently. By default we set pid_max, max_map_count and
threads-max to a high value to avoid problems with large numbers (>20)
of OSDs recovering. Very large clusters should set those values even
higher (e.g. the maximum for kernel.pid_max is 4194303).
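.
For example, to raise kernel.pid_max to its maximum (the other keys
shown are the defaults):
.
juju config ceph-mon sysctl='{ kernel.pid_max : 4194303, vm.max_map_count : 524288, kernel.threads-max: 2097152 }'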
customize-failure-domain:
type: boolean
default: false
description: |
Setting this to true will tell Ceph to replicate across Juju's
Availability Zones instead of specifically by host.
nagios_context:
type: string
default: "juju"
description: |
Used by the nrpe-external-master subordinate charm.
A string that will be prepended to the instance name to set the hostname
in Nagios. For example, the hostname would be something like:
.
juju-myservice-0
.
If you are running multiple environments with the same services in them,
this allows you to differentiate between them.
nagios_servicegroups:
type: string
default: ""
description: |
A comma-separated list of nagios servicegroups. If left empty, the
nagios_context will be used as the servicegroup.
nagios_degraded_thresh:
default: 0.1
type: float
description: "Threshold for degraded ratio (0.1 = 10%)"
nagios_misplaced_thresh:
default: 0.1
type: float
description: "Threshold for misplaced ratio (0.1 = 10%)"
nagios_recovery_rate:
default: '100'
type: string
description: |
Recovery rate (in objects/s) below which we consider recovery
to be stalled.
nagios_raise_nodeepscrub:
default: True
type: boolean
description: |
Whether to report Critical instead of Warning when the nodeep-scrub
flag is set.
nagios_check_num_osds:
default: False
type: boolean
description: |
Whether to report an error when the number of known OSDs does not equal
the number of OSDs that are in or up.
nagios_additional_checks:
default: ""
type: string
description: |
Dictionary describing additional checks. The key is the name of a check
as it will appear in Nagios. The value is a string (regular expression)
which is checked against status messages.
.
Example:
.
{'noout_set': 'noout', 'too_few_PGs': 'too few PGs', 'clock': 'clock skew',
'degraded_redundancy': 'Degraded data redundancy'}
.
nagios_additional_checks_critical:
default: False
type: boolean
description: |
Whether the additional checks report CRITICAL instead of WARNING when
they match.
nagios_rgw_zones:
default: ""
type: string
description: |
Comma-separated list of zones that are expected to be connected to this
radosgw. These will be checked against the "data sync source...
(zone-name)" lines in the output of `radosgw-admin sync status`.
.
Example:
.
zone1,zone2
nagios_rgw_additional_checks:
default: ""
type: string
description: |
List describing additional checks. Each item is a regular expression to
search for in the output of `radosgw-admin sync status`. Note that this
is a list, unlike `nagios_additional_checks`, which uses a dictionary.
.
Example:
.
['data is behind on']
.
use-direct-io:
type: boolean
default: True
description: Configure use of direct IO for OSD journals.
default-rbd-features:
type: int
default:
description: |
Default RBD features to use when creating new images. The value of this
configuration option will be shared with consumers of the ``ceph-client``
interface and client charms may choose to add this to the Ceph
configuration file on the units they manage.
.
Example:
.
rbd default features = 1
.
NOTE: If you have clients using the kernel RBD driver you must set this
configuration option to a value corresponding to the features the driver
in your kernel supports. The kernel RBD driver tends to be multiple
cycles behind the userspace driver available for libvirt/qemu. Nova LXD
is among the clients depending on the kernel RBD driver.
.
NOTE: If you want to use the RBD Mirroring feature you must either let
this configuration option be the default or make sure the value you set
includes the ``exclusive-lock`` and ``journaling`` features.
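.
The value is a bitmask of RBD feature bits. A worked example, assuming
the standard Ceph bit values (layering=1, exclusive-lock=4,
journaling=64):
.
juju config ceph-mon default-rbd-features=69  # 1 + 4 + 64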
no-bootstrap:
type: boolean
default: False
description: |
Causes the charm to skip the initial bootstrapping of the Ceph monitor
cluster. This is only intended to be used when migrating from the ceph
all-in-one charm to a ceph-mon / ceph-osd deployment.
Refer to the Charm Deployment guide at https://docs.openstack.org/charm-deployment-guide/latest/
for more information.
disable-pg-max-object-skew:
type: boolean
default: False
description: |
OpenStack clouds that use Ceph will typically start their life with at
least one pool (glance) loaded with a disproportionately high amount of
data/objects, while other pools may remain empty. This can trigger
HEALTH_WARN if mon_pg_warn_max_object_skew is exceeded, but in this case
it is a false positive. Set this option to True to disable the check.
pg-autotune:
type: string
default: auto
description: |
The default configuration ('auto') will automatically enable the
pg-autotune module for new cluster installs on Ceph Nautilus, but leave
it disabled for all cluster upgrades to Nautilus. To enable the
pg-autotune feature for upgraded clusters, set this option to 'true'.
To disable the autotuner for new clusters, set it to 'false'.
permit-insecure-cmr:
type: boolean
default: False
description: |
The charm does not properly segregate access to pools from different
models; with certain charm settings, a client in model B can end up with
access to the data from model A. Setting this option to True
acknowledges that risk and permits insecure cross-model relations (CMR).
balancer-mode:
type: string
default:
description: |
The balancer mode used by the Ceph manager. Can only be set for Luminous or
later versions, and only when the balancer module is enabled.
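.
For example ('upmap' is one of the balancer modes supported by recent
Ceph releases; check the documentation for your release):
.
juju config ceph-mon balancer-mode=upmap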
rbd-stats-pools:
type: string
default: ""
description: |
Set pools to collect RBD per-image IO statistics by enabling dynamic OSD
performance counters. It can be set to:
.
- a comma-separated list of RBD pools to enable (e.g. "pool1,pool2,poolN")
- "*" to enable for all RBD pools
- "" to disable statistics
.
For more information: https://docs.ceph.com/en/latest/mgr/prometheus/#rbd-io-statistics
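.
For example (pool names are illustrative):
.
juju config ceph-mon rbd-stats-pools="glance,cinder-ceph"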