Juju Charm - Ceph Proxy

Overview

Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.

This charm deploys a Ceph cluster.

Usage

The ceph charm has two pieces of mandatory configuration for which no defaults are provided. You must set these configuration options before deployment or the charm will not work:

fsid:
    A UUID specific to a Ceph cluster, used to ensure that different
    clusters don't get mixed up; use `uuid` to generate one, as shown
    below.
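
    For example (`uuid` is provided by the uuid package; `uuidgen`
    from util-linux also produces a suitable value):

        uuid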

monitor-secret:
    a ceph generated key used by the daemons that manage the cluster
    to control security.  You can use the ceph-authtool command to
    generate one:

        ceph-authtool /dev/stdout --name=mon. --gen-key

These two pieces of configuration must NOT be changed post bootstrap; attempting to do this will cause a reconfiguration error and new service units will not join the existing ceph cluster.

At a minimum you must provide a juju config file during initial deployment with the fsid and monitor-secret options (contents of ceph.yaml below):

    ceph:
        fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
        monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==

Boot things up by using:

    juju deploy -n 3 --config ceph.yaml ceph

By default the ceph cluster will not bootstrap until 3 service units have been deployed and started; this is to ensure that a quorum is achieved prior to adding storage devices.
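
You can watch the units come up and the cluster bootstrap with juju status; a minimal check, assuming the service is named ceph as above:

    juju status ceph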

Actions

This charm supports pausing and resuming ceph's health functions on a cluster, for example when doing maintenance on a machine. To pause or resume, run one of:

    juju action do --unit ceph-mon/0 pause-health
    juju action do --unit ceph-mon/0 resume-health
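
juju action do prints an id for the queued action; as a sketch, you can then retrieve its outcome (replace <action-id> with the printed id):

    juju action fetch <action-id>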

Scale Out Usage

You can scale the cluster out using the Ceph OSD and Ceph Radosgw charms; see the sketch below.
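
A minimal sketch, assuming this service is deployed as ceph and the ceph-osd and ceph-radosgw charms are available in your charm store:

    juju deploy -n 3 ceph-osd
    juju add-relation ceph-osd ceph

    juju deploy ceph-radosgw
    juju add-relation ceph-radosgw ceph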

Contact Information

Authors

Report bugs on Launchpad

Ceph

Technical Footnotes

This charm uses the new-style Ceph deployment as reverse-engineered from the Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected a different strategy to form the monitor cluster. Since we don't know the names or addresses of the machines in advance, we use the relation-joined hook to wait for all three nodes to come up, and then write their addresses to ceph.conf in the "mon host" parameter. After we initialize the monitor cluster a quorum forms quickly, and OSD bringup proceeds.
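
For illustration, the monitor section of the rendered ceph.conf ends up looking roughly like this (the addresses are hypothetical; the real values are taken from the joined units):

    [global]
    fsid = ecbb8960-0e21-11e2-b495-83a88f44db01
    mon host = 10.0.0.1,10.0.0.2,10.0.0.3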

See the documentation for more information on Ceph monitor cluster deployment strategies and pitfalls.