Merge "Improve README"

commit bd06ebdf65
Zuul 2020-07-23 05:41:50 +00:00, committed by Gerrit Code Review
2 changed files with 80 additions and 31 deletions

README.md

@@ -10,11 +10,50 @@ cluster.
# Usage
## Storage devices
## Configuration
This section covers common and/or important configuration options. See file
`config.yaml` for the full list of options, along with their descriptions and
default values. A YAML file (e.g. `ceph-osd.yaml`) is often used to store
configuration options. See the [Juju documentation][juju-docs-config-apps] for
details on configuring applications.
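For example, a minimal `ceph-osd.yaml` might look like the following sketch
(the device paths and source value are illustrative only):

    ceph-osd:
      osd-devices: /dev/vdb /dev/vdc
      source: cloud:bionic-ussuri

The file is then passed at deploy time (e.g. `juju deploy --config
ceph-osd.yaml ceph-osd`).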
#### `customize-failure-domain`
The `customize-failure-domain` option determines how a Ceph CRUSH map is
configured.
A value of 'false' (the default) will lead to a map that will replicate data
across hosts (implemented as [Ceph bucket type][upstream-ceph-buckets] 'host').
With a value of 'true' all MAAS-defined zones will be used to generate a map
that will replicate data across Ceph availability zones (implemented as bucket
type 'rack').
This option is also supported by the ceph-mon charm. Its value must be the same
for both charms.
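For example, assuming a MAAS cloud with zones defined, the option might be set
identically on both applications at runtime:

    juju config ceph-osd customize-failure-domain=true
    juju config ceph-mon customize-failure-domain=true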
#### `osd-devices`
The `osd-devices` option lists what block devices can be used for OSDs across
the cluster. See section 'Storage devices' for an elaboration on this
fundamental topic.
#### `source`
The `source` option states the software sources. A common value is an OpenStack
UCA release (e.g. 'cloud:xenial-queens' or 'cloud:bionic-ussuri'). See [Ceph
and the UCA][cloud-archive-ceph]. The underlying host's existing apt sources
will be used if this option is not specified (this behaviour can be explicitly
chosen by using the value of 'distro').
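For example, to have a Bionic host track the Ussuri UCA packages (the value
shown is illustrative):

    juju config ceph-osd source=cloud:bionic-ussuri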
### Storage devices
A storage device is destined to become an OSD (Object Storage Device). There can be
multiple OSDs per storage node (ceph-osd unit).
The list of all possible storage devices for the cluster is defined by the
`osd-devices` option (default value is '/dev/vdb'). The following examples can
be used in the `ceph-osd.yaml` configuration file:
1. Block devices (regular)
@@ -65,26 +104,31 @@ detects pre-existing data on a device. In this case the operator can either
instruct the charm to ignore the disk (action `blacklist-add-disk`) or to have
it purge all data on the disk (action `zap-disk`).
> **Important**: The recommended minimum number of OSDs in the cluster is three
and this is what the ceph-mon charm expects (the cluster will not form with a
lesser number). See option `expected-osd-count` in the ceph-mon charm to
overcome this but beware that going below three is not a supported
configuration.
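If a smaller cluster is unavoidable (e.g. a single-OSD test environment), a
sketch of the corresponding ceph-mon setting is given below; again, this is
not a supported configuration:

    juju config ceph-mon expected-osd-count=1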
## Deployment
A cloud with three MON nodes is a typical design whereas three OSDs are
considered the minimum. For example, to deploy a Ceph cluster consisting of
three OSDs (one per ceph-osd unit) and three MONs:
    juju deploy -n 3 --config ceph-osd.yaml ceph-osd
    juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 ceph-mon
    juju add-relation ceph-osd:mon ceph-mon:osd
Here, a containerised MON is running alongside each storage node. We've assumed
that the machines spawned in the first command are assigned IDs of 0, 1, and 2.
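Once the units have settled, the deployment can be inspected with a status
query (the MON units should show up in containers on machines 0, 1, and 2):

    juju status ceph-mon ceph-osd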
> **Note**: Refer to the [Install OpenStack][cdg-install-openstack] page in the
OpenStack Charms Deployment Guide for instructions on installing the ceph-osd
application for use with OpenStack.
For each ceph-osd unit, the ceph-osd charm will scan for all the devices
configured via the `osd-devices` option and attempt to assign to the unit all
of the ones it finds. The cluster's initial pool of available storage is the
"sum" of all these assigned devices. For example, three units that each expose
two matching devices yield a cluster that starts out with six OSDs.
@@ -99,8 +143,8 @@ connected to.
The ceph-osd charm exposes the following Ceph traffic types (bindings):
* 'public' (front-side)
* 'cluster' (back-side)
For example, providing that spaces 'data-space' and 'cluster-space' exist, the
deploy command above could look like this:
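    juju deploy --config ceph-osd.yaml -n 3 --bind "public=data-space cluster=cluster-space" ceph-osd

This is a sketch of the binding syntax rather than a verbatim command; the two
space names are assumed to already be defined in the backing cloud.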
@@ -142,10 +186,10 @@ intended for production.
The profiles generated by the charm should **not** be used in the following
scenarios:
* On any version of Ubuntu older than 16.04
* On any version of Ceph older than Luminous
* When OSD journal devices are in use
* When Ceph BlueStore is enabled
## Block device encryption
@@ -246,12 +290,12 @@ Use the `list-disks` action to list disks known to a unit.
The action lists the unit's block devices by categorising them in three ways:
* `disks`: visible (known by udev), unused (not mounted), and not designated as
an OSD journal (via the `osd-journal` configuration option)
* `blacklist`: like `disks` but blacklisted (see action `blacklist-add-disk`)
* `non-pristine`: like `disks` but not eligible for use due to the presence of
existing data
Example:
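    juju run-action --wait ceph-osd/0 list-disks

(The unit name is illustrative; Juju 2.x `run-action` syntax is assumed here
and in the action examples below.)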
@@ -271,12 +315,12 @@ operator to manually add OSD volumes (for disks that are not listed by
<!-- The next line has two trailing spaces. -->
* `osd-devices` (required)
A space-separated list of devices to format and initialise as OSD volumes.
<!-- The next line has two trailing spaces. -->
* `bucket`
The name of a Ceph bucket to add these devices to.
Example:
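    juju run-action --wait ceph-osd/0 add-disk osd-devices=/dev/vde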
@@ -302,7 +346,7 @@ Use the `list-disks` action to list the unit's blacklist entries.
<!-- The next line has two trailing spaces. -->
* `osd-devices` (required)
A space-separated list of devices to add to a unit's blacklist.
Example:
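    juju run-action --wait ceph-osd/0 blacklist-add-disk osd-devices='/dev/vda /dev/vdf'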
@@ -319,7 +363,7 @@ blacklist.
<!-- The next line has two trailing spaces. -->
* `osd-devices` (required)
A space-separated list of devices to remove from a unit's blacklist.
Each device should have an existing entry in the unit's blacklist. Use the
@@ -344,12 +388,12 @@ the `add-disk` action.
<!-- The next line has two trailing spaces. -->
* `devices` (required)
A space-separated list of devices to be recycled.
<!-- The next line has two trailing spaces. -->
* `i-really-mean-it` (required)
An option that acts as a confirmation for performing the action.
Example:
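    juju run-action --wait ceph-osd/0 zap-disk devices=/dev/vdb i-really-mean-it=true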
@@ -371,7 +415,10 @@ For general charm questions refer to the OpenStack [Charm Guide][cg].
[juju-docs-storage]: https://jaas.ai/docs/storage
[juju-docs-actions]: https://jaas.ai/docs/actions
[juju-docs-spaces]: https://jaas.ai/docs/spaces
[juju-docs-config-apps]: https://juju.is/docs/configuring-applications
[ceph-docs-removing-osds]: https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
[ceph-docs-network-ref]: https://docs.ceph.com/docs/master/rados/configuration/network-config-ref
[lp-bugs-charm-ceph-osd]: https://bugs.launchpad.net/charm-ceph-osd/+filebug
[cdg-install-openstack]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-openstack.html
[upstream-ceph-buckets]: https://docs.ceph.com/docs/master/rados/operations/crush-map/#types-and-buckets
[cloud-archive-ceph]: https://wiki.ubuntu.com/OpenStack/CloudArchive#Ceph_and_the_UCA

config.yaml

@@ -10,7 +10,8 @@ options:
Optional configuration to support use of additional sources such as:
.
- ppa:myteam/ppa
- cloud:bionic-ussuri
- cloud:xenial-proposed/queens
- http://my.archive.com/ubuntu main
.
The last option should be used in conjunction with the key configuration
@@ -76,9 +77,10 @@ options:
type: string
default:
description: |
The device to use as a shared journal drive for all OSDs on a node. By
default a journal partition will be created on each OSD volume device for
use by that OSD. The default behaviour is also the fallback for the case
where the specified journal device does not exist on a node.
.
Only supported with ceph >= 0.48.3.
bluestore-wal: