
CephFS
======

CephFS has recently reached general availability (GA), so we can now offer it to users as a charm. Until now it was considered too experimental for storing production data.

Problem Description
===================

There is currently no charm for deploying CephFS. A new CephFS charm will be created, leveraging the Ceph base layer.

Proposed Change
===============

A new CephFS charm will be created. This charm will leverage the Ceph base layer, so the effort required should be small once that layer is ready.
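To make the proposal concrete, here is a minimal sketch of how the charm might render MDS-related charm options into a ``ceph.conf`` fragment. The helper name ``render_mds_config`` and the charm option keys are illustrative assumptions, not the charm's actual schema; only the Ceph option names (``mds cache size``, ``mds bal mode``) come from this spec.

```python
from configparser import ConfigParser
from io import StringIO


def render_mds_config(charm_config):
    """Render MDS-related charm options into a ceph.conf [mds] section.

    `charm_config` is a plain dict standing in for the charm's config();
    the "mds-cache-size"/"mds-bal-mode" keys are hypothetical.
    """
    conf = ConfigParser()
    conf.add_section("mds")
    if "mds-cache-size" in charm_config:
        conf.set("mds", "mds cache size", str(charm_config["mds-cache-size"]))
    if "mds-bal-mode" in charm_config:
        conf.set("mds", "mds bal mode", str(charm_config["mds-bal-mode"]))
    buf = StringIO()
    conf.write(buf)
    return buf.getvalue()


# Example: render a fragment for a hypothetical charm configuration.
print(render_mds_config({"mds-cache-size": 100000, "mds-bal-mode": 0}))
```

In the real charm this fragment would be written out (and the MDS restarted or signalled) whenever a config-changed hook fires.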

Alternatives
============

GlusterFS is an alternative to CephFS and will fit many users' needs. However, some users do not want to deploy additional hardware to create a separate cluster; for them CephFS is convenient because it runs on an existing Ceph deployment.

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  cholcombe973

Gerrit Topic
------------

Use Gerrit topic "cephfs" for all patches related to this spec::

    git-review -t cephfs

Work Items
----------

1. Create the ceph-fs charm utilizing the base Ceph layer.

   * Expose interesting config items such as: ``mds cache size``, ``mds bal mode``.
   * Create actions to allow blacklisting of misbehaving clients, breaking of
     locks, creating new filesystems, adding and removing data pools
     (``add_data_pool``, ``remove_data_pool``), setting quotas, etc.

2. Create an interface to allow other charms to mount the filesystem.
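As a sketch of how the actions above might drive Ceph, the following builds the corresponding ``ceph`` CLI invocations. The helper names are hypothetical (they are not the charm's actual action code); the underlying ``ceph osd blacklist add``, ``ceph fs new``, and ``ceph fs add_data_pool`` commands are standard Ceph CLI.

```python
def blacklist_client_cmd(client_addr):
    """Build the `ceph` CLI invocation to blacklist a misbehaving client.

    `client_addr` is the client's addr:nonce as reported by the cluster.
    """
    return ["ceph", "osd", "blacklist", "add", client_addr]


def create_filesystem_cmd(fs_name, metadata_pool, data_pool):
    """Build the `ceph` CLI invocation to create a new CephFS filesystem."""
    return ["ceph", "fs", "new", fs_name, metadata_pool, data_pool]


def add_data_pool_cmd(fs_name, pool):
    """Build the `ceph` CLI invocation to attach an extra data pool."""
    return ["ceph", "fs", "add_data_pool", fs_name, pool]


# In the charm these argument lists would be passed to subprocess.check_call();
# building them as pure functions keeps the actions easy to unit test.
print(create_filesystem_cmd("myfs", "myfs_metadata", "myfs_data"))
```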

Repositories
============

  1. github.com/openstack/charm-ceph-fs

Documentation
=============

A README.md will be created for the charm as part of the normal development workflow.

Security
========

No additional security concerns.

Testing
=======

A mojo spec will be developed to exercise this charm, along with Amulet tests if needed.

Dependencies
============

* This project depends on the Ceph Layering project being successful.