charm-ceph-fs/src
Liam Young f347a37d69 Add Support For Erasure-Coded Pools
Add support for erasure-coded pools.

1) The pool name of both replicated and EC pools can now be set
   via the pool-name config option.
2) The weight of both replicated and EC pools can now be set
   via the ceph-pool-weight config option.
3) The charm no longer uses initialize_mds from the ceph-mds
   interface. This brings the charm in line with ceph-client
   charms, where the charm explicitly creates the pools it
   needs.
4) The metadata pool name format is preserved with an underscore
   rather than a hyphen.

Change-Id: I97641c6daeeb2a1a65b081201772c89f6a7f539c
2020-08-25 08:48:23 +00:00
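
As a rough illustration of the options introduced by this commit on a deployed ceph-fs application (the values shown are placeholders only):

juju config ceph-fs pool-name=cephfs-data
juju config ceph-fs ceph-pool-weight=20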
actions Add .gitreview and clean up repo 2017-01-06 08:29:43 -08:00
files Sync charm/ceph helpers, tox, and requirements 2019-09-30 22:41:33 -05:00
lib Fix Ceph upgrade issue by porting charm to common framework 2020-06-12 08:24:57 +02:00
reactive Add Support For Erasure-Coded Pools 2020-08-25 08:48:23 +00:00
templates Fix Ceph upgrade issue by porting charm to common framework 2020-06-12 08:24:57 +02:00
tests Add Victoria test bundles 2020-07-09 16:22:39 +02:00
README.md Improve README 2020-07-09 13:06:40 -04:00
actions.yaml Add .gitreview and clean up repo 2017-01-06 08:29:43 -08:00
config.yaml Add Support For Erasure-Coded Pools 2020-08-25 08:48:23 +00:00
copyright Add .gitreview and clean up repo 2017-01-06 08:29:43 -08:00
icon.svg Update charm icon 2017-07-31 13:48:41 -05:00
layer.yaml Fix Ceph upgrade issue by porting charm to common framework 2020-06-12 08:24:57 +02:00
metadata.yaml Updates for 20.08 cycle start for groovy and libs 2020-07-09 08:48:49 +00:00
test-requirements.txt Release sync for 20.08 2020-07-27 20:49:28 +01:00
tox.ini Release sync for 20.08 2020-07-27 20:49:28 +01:00
wheelhouse.txt Remove duplicate requirement 2020-07-02 11:50:04 +00:00

README.md

Overview

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.

The ceph-fs charm deploys the metadata server daemon (MDS) for the Ceph distributed file system (CephFS). It is used in conjunction with the ceph-mon and the ceph-osd charms.

Highly available CephFS is achieved by deploying multiple MDS servers (i.e. multiple ceph-fs units).

Usage

Configuration

This section covers common and/or important configuration options. See file config.yaml for the full list of options, along with their descriptions and default values. A YAML file (e.g. ceph-fs.yaml) is often used to store configuration options. See the Juju documentation for details on configuring applications.
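
For instance, a file named ceph-fs.yaml could hold the desired options (the value below is only an illustration; any of the options in config.yaml can be listed):

ceph-fs:
  source: cloud:bionic-ussuri

It can then be supplied at deploy time:

juju deploy --config ceph-fs.yaml ceph-fs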

source

The source option specifies the software sources. A common value is an OpenStack UCA release (e.g. 'cloud:xenial-queens' or 'cloud:bionic-ussuri'). See Ceph and the UCA. The underlying host's existing apt sources will be used if this option is not specified (this behaviour can be selected explicitly by setting the option to 'distro').
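
As a sketch, the option can also be set (or changed) on a deployed application; the value 'distro' below explicitly selects the host's existing apt sources:

juju config ceph-fs source=distro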

Deployment

We are assuming a pre-existing Ceph cluster.

To deploy a single MDS node:

juju deploy ceph-fs

Then add a relation to the ceph-mon application:

juju add-relation ceph-fs:ceph-mds ceph-mon:mds
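
To work toward the highly available CephFS described above, additional ceph-fs units can be added afterwards (the unit count here is only an example):

juju add-unit -n 2 ceph-fs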

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions ceph-fs. If the charm is not deployed then see file actions.yaml.

  • get-quota
  • remove-quota
  • set-quota
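
As an illustration, an action can be run against a specific unit with juju run-action (the unit number is a placeholder, and any required action parameters, documented in actions.yaml, are omitted here):

juju run-action --wait ceph-fs/0 get-quota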

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.