Juju Charm - Cinder Ceph backend

Overview

Cinder is the OpenStack block storage (volume) service. Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability. Using Ceph as the backend for Cinder therefore provides scalable and redundant storage volumes. This arrangement is intended for large-scale production deployments.

The cinder-ceph charm provides a Ceph (RBD) storage backend for Cinder and is used in conjunction with the cinder charm and an existing Ceph cluster (via the ceph-mon or the ceph-proxy charms).

Specialised use cases:

  • Through the use of multiple application names (e.g. cinder-ceph-1, cinder-ceph-2), multiple Ceph clusters can be associated with a single Cinder deployment.

  • A variety of storage types can be achieved with a single Ceph cluster by mapping different pools to multiple cinder-ceph applications. For instance, different pools could be used for HDD or SSD devices. See option rbd-pool-name below; a deployment sketch follows this list.
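
As a sketch of the second use case, two cinder-ceph applications could be deployed against the same cluster, each mapped to its own pool (the application and pool names below are purely illustrative):

juju deploy cinder-ceph cinder-ceph-ssd --config rbd-pool-name=cinder-ssd
juju deploy cinder-ceph cinder-ceph-hdd --config rbd-pool-name=cinder-hdd

Each application is then related to cinder and ceph-mon in the same way as shown in the Deployment section below.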

Note: There is currently no upgrade path to using the cinder-ceph charm for older deployments that have the cinder and ceph-mon applications related directly. This issue is tracked in bug LP #1727184.

Usage

Configuration

To display all configuration option information run juju config <application>. If the application is not deployed then see the charm's Configure tab in the Charmhub. Finally, the Juju documentation provides general guidance on configuring applications.
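
For example, to display all options for a deployed cinder-ceph application, or the value of a single option:

juju config cinder-ceph
juju config cinder-ceph rbd-pool-name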

Ceph pool type

Ceph storage pools can be configured to ensure data resiliency either through replication or by erasure coding. This charm supports both types via the pool-type configuration option, which can take on the values of 'replicated' and 'erasure-coded'. The default value is 'replicated'.

For this charm, the pool type will be associated with Cinder volumes.
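
For example, an erasure coded pool could be requested at deploy time (a sketch; see the erasure coded pool options further below):

juju deploy cinder-ceph --config pool-type=erasure-coded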

Note: Erasure-coded pools are supported starting with Ceph Luminous.

Replicated pools

Replicated pools use a simple replication strategy in which each written object is copied, in full, to multiple OSDs within the cluster.

The ceph-osd-replication-count option sets the replica count for any object stored within the 'cinder-ceph' rbd pool. Increasing this value increases data resilience at the cost of consuming more real storage in the Ceph cluster. The default value is '3'.

Important: The ceph-osd-replication-count option must be set prior to adding the relation to the ceph-mon (or ceph-proxy) application. Otherwise, the pool's configuration will need to be set by interfacing with the cluster directly.
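
For example, a non-default replica count could be set at deploy time, before the relation to ceph-mon is added (the value below is illustrative only):

juju deploy cinder-ceph --config ceph-osd-replication-count=5
juju add-relation cinder-ceph:ceph ceph-mon:client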

Erasure coded pools

Erasure coded pools use a technique that allows for the same resiliency as replicated pools, yet reduces the amount of space required. Written data is split into data chunks and error correction chunks, which are both distributed throughout the cluster.

Note: Erasure coded pools require more memory and CPU cycles than replicated pools do.

When using erasure coded pools for Cinder volumes, two pools will be created: a replicated pool (for storing RBD metadata) and an erasure coded pool (for storing the data written into the RBD). The ceph-osd-replication-count configuration option only applies to the metadata (replicated) pool.

Erasure coded pools can be configured via options whose names begin with the ec- prefix.

Important: It is strongly recommended to tailor the ec-profile-k and ec-profile-m options to the needs of the given environment. These latter options have default values of '1' and '2' respectively, which result in the same space requirements as those of a replicated pool.
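
For example, a deployment could select erasure coding and tailor the profile at deploy time (the k/m values below are illustrative only and should be chosen for the given environment):

juju deploy cinder-ceph --config pool-type=erasure-coded --config ec-profile-k=4 --config ec-profile-m=2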

See Ceph Erasure Coding in the OpenStack Charms Deployment Guide for more information.

Ceph BlueStore compression

This charm supports BlueStore inline compression for its associated Ceph storage pool(s). The feature is enabled by assigning a compression mode via the bluestore-compression-mode configuration option. The default behaviour is to disable compression.

The efficiency of compression depends heavily on what type of data is stored in the pool, and the charm provides a set of configuration options to fine-tune the compression behaviour.
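
For example, compression could be enabled with a single option change (this assumes 'aggressive' is among the modes accepted by the option; consult the option's description for the full list of modes):

juju config cinder-ceph bluestore-compression-mode=aggressive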

Note: BlueStore compression is supported starting with Ceph Mimic.

Deployment

The cinder-ceph application requires the cinder application and a Ceph cluster to be present.

First configure Cinder to not use a local block device. Then deploy cinder-ceph, and add a relation to both the cinder and ceph-mon applications:

juju config cinder block-device=None
juju deploy cinder-ceph
juju add-relation cinder-ceph:storage-backend cinder:storage-backend
juju add-relation cinder-ceph:ceph ceph-mon:client

Additionally, when both the nova-compute and cinder-ceph applications are deployed, a relation is needed between them:

juju add-relation cinder-ceph:ceph-access nova-compute:ceph-access
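
Once the relations are in place, one way to verify the backend (not part of the charm itself) is to create a small test volume with the OpenStack CLI and confirm it becomes available:

openstack volume create --size 1 test-volume
openstack volume show test-volume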

Documentation

The OpenStack Charms project maintains two documentation guides:

  • OpenStack Charm Guide: for project information, including development and support notes

  • OpenStack Charms Deployment Guide: for charm usage information

Bugs

Please report bugs on Launchpad.