Misc Ceph changes

This commit is contained in:
James Page 2017-07-12 10:37:12 +01:00
parent 3c12bd774e
commit 0e96562c85
1 changed file with 7 additions and 35 deletions


@@ -77,7 +77,7 @@ to the new model. Each application will be installed from the `Charm
store <https://jujucharms.com>`__. We'll be providing the configuration for many
of the charms as a ``yaml`` file which we include as we deploy them.

-`Ceph-OSD <https://jujucharms.com/ceph-osd>`__
+`Ceph OSD <https://jujucharms.com/ceph-osd>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We're starting with the Ceph object storage daemon and we want to configure Ceph
@@ -387,44 +387,17 @@ Relations:
juju add-relation glance keystone
juju add-relation glance rabbitmq-server

-`Ceph monitor <https://jujucharms.com/ceph/>`__
+`Ceph monitor <https://jujucharms.com/ceph-mon/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Ceph, the distributed storage system, needs a couple of extra parameters.
-The first is a UUID, ensuring each cluster has a unique identifier. This is
-simply generated by running the ``uuid`` command (``apt install uuid``, if it's
-not already installed). We'll use this value as the ``fsid`` in the following
-``ceph-mon.yaml`` configuration file.
-
-The second parameter is a ``monitor-secret`` for the configuration file. This is
-generated on the MAAS machine by first installing the ``ceph-common`` package
-and then by typing the following:
+For Ceph monitors (which monitor the topology of the Ceph deployment and
+manage the CRUSH map which is used by clients to read and write data) no
+additional configuration over the defaults provided is required, so
+deploy three units with this:
-.. code:: bash
-
-   ceph-authtool /dev/stdout --name=mon. --gen-key
-
-The output will be similar to the following:
-
-.. code:: bash
-
-   [mon.]
-   key = AQAARuRYD1p/AhAAKvtuJtim255+E1sBJNUkcg==
-
-This is what the configuration file looks like with the required parameters:
-
-.. code:: yaml
-
-   ceph-mon:
-     fsid: "a1ee9afe-194c-11e7-bf0f-53d6"
-     monitor-secret: AQAARuRYD1p/AhAAKvtuJtim255+E1sBJNUkcg==
Finally, deploy and scale the application as follows:

.. code:: bash

-   juju deploy --to lxd:1 --config ceph-mon.yaml ceph-mon
+   juju deploy --to lxd:1 ceph-mon
   juju add-unit --to lxd:2 ceph-mon
   juju add-unit --to lxd:3 ceph-mon
@@ -506,7 +479,6 @@ make:
.. code:: bash

   juju add-relation neutron-gateway ntp
   juju add-relation nova-compute ntp
-   juju add-relation ceph-osd ntp
All that's now left to do is wait on the output from ``juju status`` to show