# Juju Charm - Ceph MON

## Overview

Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.

This charm deploys a Ceph cluster.

## Usage

The ceph charm has two pieces of mandatory configuration for which no defaults are provided. You must set these configuration options before deployment or the charm will not work:

fsid:
    A UUID specific to a Ceph cluster, used to ensure that different
    clusters don't get mixed up - use `uuid` to generate one.

monitor-secret:
    a Ceph-generated key used by the daemons that manage the cluster
    to control security.  You can use the ceph-authtool command to
    generate one:

        ceph-authtool /dev/stdout --name=mon. --gen-key

These two pieces of configuration must NOT be changed post-bootstrap; attempting to do so will cause a reconfiguration error, and new service units will not join the existing ceph cluster.
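
For illustration only, a minimal sketch of generating suitable values on an Ubuntu machine (assuming the `uuid` and `ceph-common` packages are installed; `uuidgen` from util-linux produces an equally valid fsid):

    # fsid: any RFC 4122 UUID will do
    uuid

    # monitor-secret: a cephx key generated with ceph-authtool
    ceph-authtool /dev/stdout --name=mon. --gen-key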

The charm also supports the specification of storage devices to be used in the ceph cluster.

osd-devices:
    A list of devices that the charm will attempt to detect, initialise and
    activate as ceph storage.

    This can be a superset of the actual storage devices presented to each
    service unit and can be changed post ceph bootstrap using `juju set`
    (see the example below).

    The full path of each device must be provided, e.g. /dev/vdb.

    For Ceph >= 0.56.6 (Raring or the Grizzly Cloud Archive) use of
    directories instead of devices is also supported.
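
As a hedged illustration (the device paths are hypothetical, and the syntax shown is for Juju 1.x), the setting can be updated on a running service with:

    juju set ceph osd-devices="/dev/vdb /dev/vdc /dev/vdd"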

At a minimum you must provide a juju config file during initial deployment with the fsid and monitor-secret options (contents of ceph.yaml below):

    ceph:
        fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
        monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==
        osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde

Specifying the osd-devices to use is also a good idea.

Boot things up by using:

    juju deploy -n 3 --config ceph.yaml ceph

By default the ceph cluster will not bootstrap until 3 service units have been deployed and started; this is to ensure that a quorum is achieved prior to adding storage devices.
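
Once the third unit has started, one way to sanity-check that the monitors have reached quorum is to query Ceph from any unit (the unit name and output are illustrative):

    juju ssh ceph/0 sudo ceph -s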

## Scale Out Usage

You can use the Ceph OSD and Ceph Radosgw charms to grow the cluster: ceph-osd adds dedicated OSD storage units, and ceph-radosgw provides a RADOS Gateway (object storage) front end.
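
For instance, a minimal sketch of scaling out with those charms (charm and relation names as commonly published in the charm store; device paths are hypothetical):

    juju deploy -n 3 ceph-osd
    juju set ceph-osd osd-devices="/dev/vdb"
    juju add-relation ceph-osd ceph

    juju deploy ceph-radosgw
    juju add-relation ceph-radosgw ceph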

## Contact Information

### Authors

Report bugs on Launchpad

### Ceph

## Technical Footnotes

This charm uses the new-style Ceph deployment as reverse-engineered from the Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected a different strategy to form the monitor cluster. Since we don't know the names or addresses of the machines in advance, we use the relation-joined hook to wait for all three nodes to come up, and then write their addresses to ceph.conf in the "mon host" parameter. After we initialize the monitor cluster a quorum forms quickly, and OSD bringup proceeds.
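
Illustratively, the relevant fragment of the rendered ceph.conf ends up looking something like the following (the addresses are hypothetical; the actual template lives in templates/ceph.conf):

    [global]
    fsid = ecbb8960-0e21-11e2-b495-83a88f44db01
    mon host = 192.168.0.10 192.168.0.11 192.168.0.12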

The OSDs use so-called "OSD hotplugging": ceph-disk-prepare is used to create the filesystems with a special GPT partition type, and udev is set up to mount such filesystems and start the OSD daemons as their storage becomes visible to the system (or after `udevadm trigger`).
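
A hedged sketch of what happens per device (normally driven by the charm rather than run by hand; the device path is hypothetical):

    # Partition the device and create the filesystem with Ceph's GPT type codes
    sudo ceph-disk-prepare /dev/vdb

    # Ask udev to re-evaluate block devices, mounting them and starting OSD daemons
    sudo udevadm trigger --subsystem-match=block --action=add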

The Chef cookbook mentioned above performs some extra steps to generate an OSD bootstrapping key and propagate it to the other nodes in the cluster. Since all OSDs run on nodes that also run mon, we don't need this and did not implement it.

See the documentation for more information on Ceph monitor cluster deployment strategies and pitfalls.