Documentation updates

This commit is contained in:
James Page 2012-10-08 17:10:10 +01:00
parent 1683ffaa84
commit f9ac39e6f0
6 changed files with 27 additions and 88 deletions

README

@@ -4,84 +4,39 @@ Overview
Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.
This charm deploys a Ceph cluster.
This charm deploys additional Ceph OSD storage service units and should be
used in conjunction with the 'ceph' charm to scale out the amount of storage
available in a Ceph cluster.
Usage
=====
The ceph charm has two pieces of mandatory configuration for which no defaults
are provided:
fsid:
uuid specific to a ceph cluster used to ensure that different
clusters don't get mixed up - use `uuid` to generate one.
monitor-secret:
a ceph-generated key used by the daemons that manage the cluster
to control security. You can use the ceph-authtool command to
generate one:
ceph-authtool /dev/stdout --name=mon. --gen-key
These two pieces of configuration must NOT be changed post bootstrap; attempting
to do this will cause a reconfiguration error and new service units will not join
the existing ceph cluster.
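For example, suitable values could be generated ahead of deployment with the
`uuid` and `ceph-authtool` tools mentioned above (a sketch; package names and
the generated values will differ on your system)::

    # Generate a new cluster fsid
    uuid
    # Generate a monitor secret for the mon. user
    ceph-authtool /dev/stdout --name=mon. --gen-key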
The charm also supports specification of the storage devices to use in the ceph
cluster.
cluster::
osd-devices:
A list of devices that the charm will attempt to detect, initialise and
activate as ceph storage.
This can be a superset of the actual storage devices presented to
each service unit and can be changed post ceph bootstrap using `juju set`.
At a minimum you must provide a juju config file during initial deployment
with the fsid and monitor-secret options (contents of ceph.yaml below):
each service unit and can be changed post ceph-osd deployment using
`juju set`.
ceph-brolin:
fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==
For example::
ceph-osd:
osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
Specifying the osd-devices to use is also a good idea.
Boot things up by using::
Boot things up by using:
juju deploy -n 3 --config ceph.yaml ceph-brolin
By default the ceph cluster will not bootstrap until 3 service units have been
deployed and started; this is to ensure that a quorum is achieved prior to adding
storage devices.
juju deploy -n 3 --config ceph.yaml ceph
Technical Bootnotes
===================
You can then deploy this charm by simply doing::
This charm is currently deliberately inflexible and potentially destructive.
It is designed to deploy on exactly three machines. Each machine will run mon
and osd.
This charm uses the new-style Ceph deployment as reverse-engineered from the
Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected
a different strategy to form the monitor cluster. Since we don't know the
names *or* addresses of the machines in advance, we use the relation-joined
hook to wait for all three nodes to come up, and then write their addresses
to ceph.conf in the "mon host" parameter. After we initialize the monitor
cluster a quorum forms quickly, and OSD bringup proceeds.
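The net effect is a ceph.conf stanza along the lines of the following (the
section placement and addresses shown are illustrative only)::

    [global]
    # addresses are examples; the charm fills these in from the relation data
    mon host = 192.168.1.10:6789 192.168.1.11:6789 192.168.1.12:6789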
The OSDs use so-called "OSD hotplugging". ceph-disk-prepare is used to create
the filesystems with a special GPT partition type. udev is set up to mount
such filesystems and start the osd daemons as their storage becomes visible to
the system (or after "udevadm trigger").
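In shell terms the hotplug flow is roughly the following sketch (the device
name is illustrative, not something this charm hard-codes)::

    # Prepare the disk with the special GPT partition type for Ceph OSDs
    ceph-disk-prepare /dev/vdb
    # Replay udev events so already-attached disks are detected and activated
    udevadm trigger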
The Chef cookbook above performs some extra steps to generate an OSD
bootstrapping key and propagate it to the other nodes in the cluster. Since
all OSDs run on nodes that also run mon, we don't need this and did not
implement it.
The charm does not currently implement cephx, and it is explicitly turned off in
the configuration generated for ceph.
See http://ceph.com/docs/master/dev/mon-bootstrap/ for more information on Ceph
monitor cluster deployment strategies and pitfalls.
juju deploy -n 10 --config ceph.yaml ceph-osd
juju add-relation ceph-osd ceph
Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd
charm which will scan for the configured storage devices and add them to the
pool of available storage.
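For example, extra capacity could later be added either by listing additional
devices or by adding more ceph-osd service units (device names are purely
illustrative)::

    juju set ceph-osd osd-devices="/dev/vdb /dev/vdc /dev/vdd /dev/vde"
    juju add-unit ceph-osd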

TODO

@@ -1,11 +1,4 @@
== Minor ==
Ceph OSD Charm
==============
* fix tunables (http://tracker.newdream.net/issues/2210)
* more than 192 PGs
* fixup data placement in crush to be host not osd driven
== Public Charm ==
* cephx support
* rel: remote MON clients (+client keys for cephx)
* rel: RADOS gateway (+client key for cephx)
* cephx support

config.yaml

@@ -1,11 +1,4 @@
options:
fsid:
type: string
description: |
fsid of the ceph cluster. To generate a suitable value use `uuid`
.
This configuration element is mandatory and the service will fail on
install if it is not provided.
osd-devices:
type: string
default: /dev/sdb /dev/sdc /dev/sdd /dev/sde

hooks/hooks.py

@@ -58,8 +58,6 @@ def config_changed():
def get_mon_hosts():
hosts = []
hosts.append('{}:6789'.format(utils.get_host_ip()))
for relid in utils.relation_ids('mon'):
for unit in utils.relation_list(relid):
hosts.append(

metadata.yaml

@@ -1,12 +1,12 @@
name: ceph-osd
summary: Highly scalable distributed storage - OSD nodes
summary: Highly scalable distributed storage - Ceph OSD storage
maintainer: James Page <james.page@ubuntu.com>
description: |
Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.
.
This charm provides the OSD personality for expanding storage nodes within
a ceph deployment.
This charm provides the Ceph OSD personality for expanding storage capacity
within a ceph deployment.
requires:
mon:
interface: ceph-osd
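The 'mon' relation declared here is the endpoint joined by the add-relation
step shown in the README; it can also be named explicitly, an equivalent
sketch being::

    juju add-relation ceph-osd:mon ceph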

revision

@@ -1 +1 @@
2
3