author     Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>  2018-11-09 14:35:59 +0300
committer  James Page <james.page@ubuntu.com>  2018-12-11 11:30:35 +0000
commit     746876f8182bc9de01c808c2daf8b9a0a9f5f74a (patch)
tree       e7a347dadd499a935624f64e5b497009089e040a
parent     275721c90cd57451f248cc27413178d525b23ae3 (diff)
Add spec for Multisite Ceph RADOS Gateway
Add specification for implementation of multisite replication between
Ceph RADOS Gateway deployments. The primary intent of this feature is
to support disaster recovery capabilities between geographically
distant data centers.

Change-Id: I15475f57e1931dcc3a6aefff9ed0192e11e02292
Notes
Notes (review):
    Code-Review+1: James Page <james.page@canonical.com>
    Code-Review+1: Chris MacNaughton (icey) <chris.macnaughton@canonical.com>
    Code-Review+2: Frode Nordahl <frode.nordahl@canonical.com>
    Workflow+1: Frode Nordahl <frode.nordahl@canonical.com>
    Verified+2: Zuul
    Submitted-by: Zuul
    Submitted-at: Wed, 19 Dec 2018 13:34:06 +0000
    Reviewed-on: https://review.openstack.org/616884
    Project: openstack/charm-specs
    Branch: refs/heads/master
-rw-r--r--  specs/stein/approved/radosgw-multi-site.rst  140
1 file changed, 140 insertions(+), 0 deletions(-)
diff --git a/specs/stein/approved/radosgw-multi-site.rst b/specs/stein/approved/radosgw-multi-site.rst
new file mode 100644
index 0000000..5a036d8
--- /dev/null
+++ b/specs/stein/approved/radosgw-multi-site.rst
@@ -0,0 +1,140 @@
..
  Copyright 2018 Canonical Ltd.

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.
  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in ReSTructured text. Please do not delete
  any of the sections in this template. If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html

====================================
RadosGW Charm Multi-site Replication
====================================

Problem Description
===================

RadosGW `multi-site configuration
<http://docs.ceph.com/docs/luminous/radosgw/multisite/>`__ can be set up to
provide object sync for disaster recovery and for other purposes, such as
serving the same object data from a Ceph cluster local to each cloud region.
A typical setup would look like this:

* One zone per zone group (one cluster per “region”);
* Multiple zone groups (“regions”);
* One realm;
* Mode of operation: active-active or active-passive.

.. note::

   Ceph does support active-passive configurations but, to simplify
   deployment choices, the charms will only support active-active.

There could also be more complex configurations with multiple zones
(clusters) per zone group.

In order to set this up, independent radosgw application deployments in
different Juju models have to be aware of each other and set up the necessary
configuration (a manual ``radosgw-admin`` sketch follows the list):

* Realm name for radosgw;
* Master zone group and master zone configuration;
* A system user for authentication between daemons;
* Access key and secret key setup for master zone authentication;
* A period update after configuration changes, to move to a new epoch.
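
For orientation, the equivalent manual bootstrap of a master zone with
``radosgw-admin`` is sketched below; the realm, zone group, zone and user
names, the endpoint and the key variables are illustrative placeholders, and
the charms are expected to automate these steps:

.. code-block:: bash

    # Illustrative manual equivalent of what the charms will automate.
    radosgw-admin realm create --rgw-realm=replicated --default
    radosgw-admin zonegroup create --rgw-zonegroup=us \
        --endpoints=http://rgw-east:80 --rgw-realm=replicated \
        --master --default
    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
        --endpoints=http://rgw-east:80 --master --default

    # System user whose keys secondary zones use to authenticate.
    radosgw-admin user create --uid=multisite-sync \
        --display-name="Multisite Sync" \
        --access-key=$SYNC_ACCESS_KEY --secret=$SYNC_SECRET_KEY --system
    radosgw-admin zone modify --rgw-zone=us-east \
        --access-key=$SYNC_ACCESS_KEY --secret=$SYNC_SECRET_KEY

    # Commit the configuration under a new period epoch.
    radosgw-admin period update --commit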

.. note::

   Migration of an existing single site ceph-radosgw deployment to a
   multi-zone deployment will not be supported by the charms.

Proposed Change
===============

To be able to configure multi-site radosgw deployments it is necessary to
modify the radosgw charm to support cross-model relations between multiple
radosgw applications. This relation will be used to exchange endpoint and
authentication information between the RADOS gateway deployments for
configuration of replication.
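
As a rough sketch of how the two sites could be wired together with Juju
cross-model relations (model, offer and application names are illustrative;
only the ``rgw-peer`` endpoint name comes from this spec):

.. code-block:: bash

    # In the model hosting the master site, offer the rgw-peer endpoint.
    juju offer ceph-radosgw:rgw-peer

    # In the model hosting the secondary site, consume the offer under a
    # local alias and relate it to the local ceph-radosgw application.
    juju consume us-east.ceph-radosgw east-rgw
    juju add-relation ceph-radosgw:rgw-peer east-rgw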

The charms will target a fixed topology with a single realm, a single zone
group and two zones. It is assumed that the zones will be backed by separate
Ceph clusters; this is not a hard requirement, but it is recommended.

Actions will be provided to promote and demote a RADOS gateway cluster
to and from master status. No automatic failover will be provided and
these operations must be performed by an operator in the event of site
failover/failback.
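
A site failover could then look roughly like the following; the action and
application names are placeholders to be settled during implementation:

.. code-block:: bash

    # Hypothetical action and application names.
    # Demote the current master site (if still reachable) ...
    juju run-action --wait rgw-us-east/0 demote
    # ... then promote the secondary site to master.
    juju run-action --wait rgw-us-west/0 promote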

Alternatives
------------

As this is a RADOS gateway specific feature, no alternatives have been
considered.

Implementation
==============

Assignee(s)
-----------

Primary assignee:

Gerrit Topic
------------

Use Gerrit topic "radosgw-multi-site" for all patches related to this spec.

.. code-block:: bash

    git-review -t radosgw-multi-site

Work Items
----------

* Implement support for a new (cross-model) relation ``rgw-peer`` between
  radosgw applications associated with different Ceph clusters.
* Add support for additional configuration keys to set up the realm, zone
  group and zone for each ceph-radosgw deployment (see the deployment sketch
  after this list).
* Implement functionality to set up a master zone and add secondary zones to
  it.
* Write unit tests for newly added features.
* Write functional tests that include the deployment of multiple clusters and
  verification of object synchronization.
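
For illustration, deploying one of the two sites might then look like this;
the application name and the configuration key names (``realm``,
``zonegroup``, ``zone``) are placeholders until the implementation fixes
them:

.. code-block:: bash

    # Hypothetical application and configuration key names for one site.
    juju deploy ceph-radosgw rgw-us-east \
        --config realm=replicated \
        --config zonegroup=us \
        --config zone=us-east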

Repositories
------------

No new git repositories will be created.

Documentation
-------------

The ``ceph-radosgw`` charm README should contain instructions on deploying
the charm with the new functionality enabled.

Security
--------

- TLS termination can be enabled on either side and needs to be supported
  without manual steps to synchronize CA certificates between sites. SSL CA
  certificates will be shared between RADOS gateway peers using the new
  rgw-peer relation.
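
For instance, assuming the charm's existing ``ssl_cert``/``ssl_key``/
``ssl_ca`` options (base64-encoded PEM data), TLS could be enabled on one
site as follows, with the CA certificate then propagating to the peer site
over the rgw-peer relation:

.. code-block:: bash

    # Assumes the existing ssl_cert/ssl_key/ssl_ca options, which take
    # base64-encoded PEM data; the rgw-peer relation would then carry the
    # CA certificate to the other site.
    juju config rgw-us-east \
        ssl_cert="$(base64 -w0 cert.pem)" \
        ssl_key="$(base64 -w0 key.pem)" \
        ssl_ca="$(base64 -w0 ca.pem)"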

Testing
-------

Code written or changed will be covered by unit tests; functional testing
will be done using ``Zaza``.
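
Assuming the standard OpenStack charm tox targets, the test suites might be
driven as follows:

.. code-block:: bash

    # Unit tests.
    tox -e py3

    # Zaza-driven functional tests.
    tox -e func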

Dependencies
============

The ceph-radosgw charm currently uses the old-style radosgw systemd unit and
a global cephx key for access to the underlying Ceph cluster.

The charm should be migrated to use the new per-instance ceph-radosgw systemd
units and switch to cephx keys which are specific to individual radosgw
units.
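
To illustrate the difference, following the usual Ceph packaging conventions
(exact unit and key names here are indicative, not final):

.. code-block:: bash

    # Old style: one shared service and a global cephx key.
    systemctl status radosgw.service
    ceph auth get client.radosgw.gateway

    # New style: per-instance units, each with its own cephx key.
    systemctl status ceph-radosgw@rgw.$(hostname -s).service
    ceph auth get client.rgw.$(hostname -s)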