Juju Charm - Ceph MON

Overview

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.

The ceph-mon charm deploys Ceph monitor nodes, allowing one to create a monitor cluster. It is used in conjunction with the ceph-osd charm. Together, these charms can scale out the amount of storage available in a Ceph cluster.

Usage

Deployment

A typical cloud design uses three MON nodes, while three OSD nodes are considered the minimum. For example, to deploy a Ceph cluster consisting of three OSDs and three MONs:

juju deploy -n 3 --config ceph-osd.yaml ceph-osd
juju deploy --to lxd:0 ceph-mon
juju add-unit --to lxd:1 ceph-mon
juju add-unit --to lxd:2 ceph-mon
juju add-relation ceph-osd ceph-mon

Here, a containerised MON is running alongside each OSD.
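
The ceph-osd.yaml file referenced above supplies configuration for the ceph-osd charm. A minimal sketch, where /dev/sdb is a hypothetical block device present on each machine (substitute the actual OSD devices):

    ceph-osd:
      osd-devices: /dev/sdb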

By default, the monitor cluster will not be complete until three ceph-mon units have been deployed. This is to ensure that a quorum is achieved prior to the addition of storage devices.
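
The expected number of monitors is governed by the charm's monitor-count configuration option, which defaults to three. As a sketch (omitting the container placements shown above), a larger monitor cluster could be requested at deploy time:

juju deploy -n 5 --config monitor-count=5 ceph-mon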

See the Ceph documentation for notes on monitor cluster deployment strategies.

Note: Refer to the Install OpenStack page in the OpenStack Charms Deployment Guide for instructions on installing a monitor cluster for use with OpenStack.

Network spaces

This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.

Note: Spaces must be configured in the backing cloud prior to deployment.

The ceph-mon charm exposes the following Ceph traffic types (bindings):

  • 'public' (front-side)
  • 'cluster' (back-side)

For example, provided that spaces 'data-space' and 'cluster-space' exist, the deploy command above could look like this:

juju deploy -n 3 --config ceph-mon.yaml ceph-mon \
   --bind "public=data-space cluster=cluster-space"

Alternatively, configuration can be provided as part of a bundle:

    ceph-mon:
      charm: cs:ceph-mon
      num_units: 1
      bindings:
        public: data-space
        cluster: cluster-space

Refer to the Ceph Network Reference to learn about the implications of segregating Ceph network traffic.

Note: Existing ceph-mon units configured with the ceph-public-network or ceph-cluster-network options will continue to honour them. Furthermore, these options override any space bindings, if set.
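
As a sketch of that legacy approach, both options accept network CIDRs (the addresses below are placeholders):

juju config ceph-mon ceph-public-network=10.10.0.0/24 ceph-cluster-network=10.20.0.0/24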

Monitoring

The charm supports Ceph metric monitoring with Prometheus. Add a relation to the prometheus2 application as follows:

juju deploy prometheus2
juju add-relation ceph-mon prometheus2

Note: Prometheus support is available starting with Ceph Luminous (xenial-queens UCA pocket).

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions ceph-mon. If the charm is not deployed then see file actions.yaml.

  • copy-pool
  • create-cache-tier
  • create-crush-rule
  • create-erasure-profile
  • create-pool
  • crushmap-update
  • delete-erasure-profile
  • delete-pool
  • get-erasure-profile
  • get-health
  • list-erasure-profiles
  • list-pools
  • pause-health
  • pool-get
  • pool-set
  • pool-statistics
  • remove-cache-tier
  • remove-pool-snapshot
  • rename-pool
  • resume-health
  • security-checklist
  • set-noout
  • set-pool-max-bytes
  • show-disk-free
  • snapshot-pool
  • unset-noout
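
For example, to run an action against the first unit (assuming Juju 2.x action syntax):

juju run-action --wait ceph-mon/0 list-pools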

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.