charm-ceph-rbd-mirror/src
Alex Kavanagh 418bd85392 Update to build using charmcraft
Due to a build problem with the reactive plugin, this change falls back
on overriding the steps and doing a manual build, but it also ensures
the CI system builds the charm using charmcraft.  Changes:

- add a build-requirements.txt
- modify charmcraft.yaml
- modify osci.yaml
    -> indicate build with charmcraft
- modify tox.ini
    -> tox -e build does charmcraft build/rename
    -> tox -e build-reactive does the reactive build
- modify bundles to use the <charm>.charm artifact in tests, and fix the deprecation warning re: prefix
- tox inception to enable tox -e func-test in the CI

This change also switches away from directory backed OSD devices
in the test bundles, as they are not supported anymore.

Change-Id: I57d1b47afbbeef211bb777fdbd0b4a091a021c19
Co-authored-by: Aurelien Lourot <aurelien.lourot@canonical.com>
2022-04-12 12:05:59 +02:00

README.md

Overview

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.

The ceph-rbd-mirror charm deploys the Ceph rbd-mirror daemon and helps automate remote creation and configuration of mirroring for Ceph pools used for hosting RBD images.

Note: RBD mirroring is only one aspect of datacentre redundancy. Refer to Ceph RADOS Gateway Multisite Replication and other work to arrive at a complete solution.

Functionality

The charm has the following major features:

  • Support for a maximum of two Ceph clusters. The clusters may reside within a single model or be contained within two separate models.

  • Specifically written for two-way replication. This provides the ability to fail over and fall back to/from a single secondary site. Ceph does have support for mirroring to any number of clusters but the charm does not support this.

  • Automatically creates and configures (for mirroring) pools in the remote cluster based on any pools in the local cluster that are labelled with the 'rbd' tag.

  • Mirroring of whole pools only. Ceph itself has support for the mirroring of individual images but the charm does not support this.

  • Network space aware. The mirror daemon can be informed about network configuration by binding the public and cluster endpoints. The daemon will use the network associated with the cluster endpoint for mirroring traffic (see the binding example after this list).

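As an illustration, assuming Juju network spaces named public-space and cluster-space already exist (hypothetical names), the endpoints can be bound at deploy time, and a pool can be labelled with the 'rbd' tag using the standard Ceph application tag (hypothetical pool name):

    # Bind the charm's endpoints so mirroring traffic uses the network
    # associated with the cluster endpoint (space names are hypothetical).
    juju deploy ceph-rbd-mirror --bind "public=public-space cluster=cluster-space"

    # Label an existing pool for RBD use; pools created by the Ceph charms
    # for RBD workloads normally carry this tag already.
    ceph osd pool application enable mypool rbd
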
Other notes on RBD mirroring:

  • Supports multiple running instances of the mirror daemon in each cluster. Doing so allows for the dynamic re-distribution of the mirroring load amongst the daemons. This addresses both high availability and performance concerns. Leverage this feature by scaling out the ceph-rbd-mirror application (i.e. add more units; see the sketch after this list).

  • Requires that every RBD image within each pool is created with the journaling and exclusive-lock image features enabled. The charm enables these features by default and the ceph-mon charm will announce them over the client relation when it has units connected to its rbd-mirror endpoint.

  • The feature first appeared in Ceph Luminous (OpenStack Queens).

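A minimal sketch of both points above, with hypothetical unit counts and pool/image names:

    # Add more mirror daemon units for availability and throughput.
    juju add-unit -n 2 ceph-rbd-mirror

    # Enable the required image features on a pre-existing image that was
    # created without them (journaling depends on exclusive-lock).
    rbd feature enable mypool/myimage exclusive-lock
    rbd feature enable mypool/myimage journaling
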
Usage

Configuration

See file config.yaml of the built charm (or see the charm in the Charm Store) for the full list of configuration options, along with their descriptions and default values. See the Juju documentation for details on configuring applications.

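For example, option values for a deployed application can be inspected and changed with juju config (the option shown is illustrative; consult config.yaml for the options this charm actually exposes):

    # List all options, descriptions, and current values.
    juju config ceph-rbd-mirror

    # Set a single option (illustrative option/value).
    juju config ceph-rbd-mirror source=cloud:focal-yoga
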
Deployment

A standard topology consists of two Ceph clusters with each cluster residing in a separate Juju model. The deployment steps are fairly involved and are therefore covered under Ceph RBD Mirroring in the OpenStack Charms Deployment Guide.

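As a minimal sketch only (the deploy guide is authoritative), assuming two models named site-a and site-b that each already contain a ceph-mon/ceph-osd cluster, one direction of the mirror relationship might be set up as shown below, with the mirror-image of these steps repeated in site-b for two-way replication:

    # In site-a: deploy the mirror daemon and relate it to the local cluster.
    juju deploy -m site-a ceph-rbd-mirror
    juju relate -m site-a ceph-rbd-mirror:ceph-local ceph-mon

    # Offer site-b's ceph-mon rbd-mirror endpoint and consume it in site-a
    # (model, offer, and user names are hypothetical).
    juju offer site-b.ceph-mon:rbd-mirror site-b-ceph-mon
    juju consume -m site-a admin/site-b.site-b-ceph-mon
    juju relate -m site-a ceph-rbd-mirror:ceph-remote site-b-ceph-mon
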
Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions ceph-rbd-mirror. If the charm is not deployed then see file actions.yaml.

  • copy-pool
  • demote
  • promote
  • refresh-pools
  • resync-pools
  • status

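For example (Juju 2.9 syntax, hypothetical unit number):

    # Show the mirroring status of the pools known to this unit.
    juju run-action --wait ceph-rbd-mirror/0 status

    # Refresh the charm's view of pools eligible for mirroring.
    juju run-action --wait ceph-rbd-mirror/0 refresh-pools
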
Operations

Operational procedures touch upon pool creation, failover & fallback, and recovering from an abrupt shutdown. These topics are also covered under Ceph RBD Mirroring in the OpenStack Charms Deployment Guide.

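As an illustrative sketch only (see the deploy guide for the full procedure), a planned failover from site-a to site-b demotes the mirrored pools on the primary site and then promotes them on the secondary site (model and unit names are hypothetical):

    # Demote the pools on the current primary site...
    juju run-action --wait -m site-a ceph-rbd-mirror/0 demote

    # ...then promote the corresponding pools on the secondary site.
    juju run-action --wait -m site-b ceph-rbd-mirror/0 promote
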
Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.