# Juju Charm - Ceph MON

# Overview

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.

The ceph-mon charm deploys Ceph monitor nodes, allowing one to create a monitor cluster. It is used in conjunction with the ceph-osd charm. Together, these charms can scale out the amount of storage available in a Ceph cluster.

# Usage

## Deployment

Three MON nodes are a typical design for a cloud, and three OSD nodes are considered the minimum. For example, to deploy a Ceph cluster consisting of three OSDs and three MONs:

    juju deploy --config ceph-osd.yaml -n 3 ceph-osd
    juju deploy --to lxd:0 ceph-mon
    juju add-unit --to lxd:1 ceph-mon
    juju add-unit --to lxd:2 ceph-mon
    juju add-relation ceph-osd ceph-mon

Here, a containerised MON is running alongside each OSD.
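
The ceph-osd.yaml file referenced above holds configuration for the ceph-osd application. A minimal sketch, assuming each machine has a spare block device at /dev/sdb (the device path is a placeholder; osd-devices is an option of the ceph-osd charm):

    ceph-osd:
      osd-devices: /dev/sdb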

By default, the monitor cluster will not be complete until three ceph-mon units have been deployed. This is to ensure that a quorum is achieved prior to the addition of storage devices.
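The expected number of monitors is governed by the charm's monitor-count configuration option, which defaults to 3. For a throwaway single-unit test cluster it could be lowered at deploy time, for example:

    juju deploy ceph-mon --config monitor-count=1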

See the Ceph documentation for notes on monitor cluster deployment strategies.

Note: Refer to the Install OpenStack page in the OpenStack Charms Deployment Guide for instructions on installing a monitor cluster for use with OpenStack.

## Network spaces

This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.

Note: Spaces must be configured in the backing cloud prior to deployment.

The ceph-mon charm exposes the following Ceph traffic types (bindings):

* 'public' (front-side)
* 'cluster' (back-side)

For example, providing that spaces 'data-space' and 'cluster-space' exist, the deploy command above could look like this:

    juju deploy --config ceph-mon.yaml -n 3 ceph-mon \
       --bind "public=data-space cluster=cluster-space"

Alternatively, configuration can be provided as part of a bundle:

    ceph-mon:
      charm: cs:ceph-mon
      num_units: 1
      bindings:
        public: data-space
        cluster: cluster-space
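
Assuming the snippet above is saved as part of a bundle file (here hypothetically named bundle.yaml), the bundle is deployed with:

    juju deploy ./bundle.yaml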

Refer to the Ceph Network Reference to learn about the implications of segregating Ceph network traffic.

Note: Existing ceph-mon units configured with the ceph-public-network or ceph-cluster-network options will continue to honour them. Furthermore, these options override any space bindings, if set.
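As a sketch, assuming hypothetical subnets 10.0.0.0/24 and 10.1.0.0/24, such a legacy configuration might look like this in a ceph-mon.yaml file:

    ceph-mon:
      ceph-public-network: 10.0.0.0/24
      ceph-cluster-network: 10.1.0.0/24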

# Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis.
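Actions are run with the Juju CLI (juju run-action in Juju 2.x). For example, to list the actions the charm exposes and then invoke one synchronously on the first unit:

    juju actions ceph-mon
    juju run-action --wait ceph-mon/0 list-pools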

## copy-pool

Copy contents of a pool to a new pool.

## create-cache-tier

Create a new cache tier.

## create-crush-rule

Create a new replicated CRUSH rule to use on a pool.

## create-erasure-profile

Create a new erasure code profile to use on a pool.

## create-pool

Create a pool.
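A minimal sketch; the name parameter shown here is an assumption, so consult the charm's actions.yaml for the authoritative parameter list (e.g. pool type and erasure profile options):

    juju run-action --wait ceph-mon/0 create-pool name=mypool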

## crushmap-update

Apply a new CRUSH map definition.

Warning: This action can break your cluster in unexpected ways if misused.

## delete-erasure-profile

Delete an erasure code profile.

## delete-pool

Delete a pool.

## get-erasure-profile

Display an erasure code profile.

## get-health

Display cluster health.
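For example, to query cluster health from the first unit:

    juju run-action --wait ceph-mon/0 get-health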

## list-erasure-profiles

List erasure code profiles.

## list-pools

List pools.

## pause-health

Pause the cluster's health operations.

## pool-get

Get a value for a pool.

## pool-set

Set a value for a pool.
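A sketch of setting a pool property; the parameter names used here (pool, key, value) are assumptions based on the action's likely schema and should be checked against actions.yaml:

    juju run-action --wait ceph-mon/0 pool-set pool=mypool key=size value=3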

## pool-statistics

Display a pool's utilisation statistics.

## remove-cache-tier

Remove a cache tier.

## remove-pool-snapshot

Remove a pool's snapshot.

## rename-pool

Rename a pool.

## resume-health

Resume the cluster's health operations.

## security-checklist

Validate the running configuration against the OpenStack security guides checklist.

## set-noout

Set the cluster's 'noout' flag.

## set-pool-max-bytes

Set a pool's quota for the maximum number of bytes.

## show-disk-free

Show disk utilisation by host and OSD.

## snapshot-pool

Create a pool snapshot.

## unset-noout

Unset the cluster's 'noout' flag.
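The noout pair is commonly used around planned maintenance: setting the flag prevents Ceph from marking down OSDs as out and triggering rebalancing while the work is in progress. A sketch:

    juju run-action --wait ceph-mon/0 set-noout
    # ... take OSD machines down for maintenance ...
    juju run-action --wait ceph-mon/0 unset-noout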

# Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.