author	Zuul <>	2018-06-07 20:03:01 +0000
committer	Gerrit Code Review <>	2018-06-07 20:03:01 +0000
commit	bddc7e50d5ac98b70b5791ab9af08652a62378b7 (patch)
parent	8cdccfe68a2a66a289a5ffafc9bad5df57d874b8 (diff)
parent	90a913dfa34a59681f4df07c2b1b5a2c58ee0c0a (diff)
Merge "Multi-store problem description"
1 file changed, 73 insertions, 0 deletions
diff --git a/specs/rocky/approved/glance/multi-store.rst b/specs/rocky/approved/glance/multi-store.rst
new file mode 100644
index 0000000..297ac68
--- /dev/null
+++ b/specs/rocky/approved/glance/multi-store.rst
@@ -0,0 +1,73 @@
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
Multi-store backend support
===========================
The Image service supports several back ends for storing virtual machine
images, namely the Block Storage service (cinder), Filesystem (a directory
on a local file system), HTTP, Ceph RBD, Sheepdog, the Object Storage
service (swift), and VMware ESX. Today an operator can configure a single
backend on a per-scheme basis, but it is not possible to configure multiple
backends for the same or different stores. For example, if a cloud
deployment has multiple Ceph clusters deployed, glance will not be able to
use all of those backends at once.
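As a rough sketch of what per-backend configuration could look like, the
fragment below names two RBD backends in ``glance-api.conf``. The option
names (``enabled_backends``, ``default_backend``) and section layout are
illustrative assumptions only; the actual interface is still under
discussion in this spec.

.. code-block:: ini

    # glance-api.conf -- illustrative sketch, not a settled interface.
    [DEFAULT]
    # Two named backends, both using the rbd driver but pointing at
    # different Ceph clusters.
    enabled_backends = ceph-fast:rbd, ceph-archive:rbd

    [glance_store]
    default_backend = ceph-fast

    [ceph-fast]
    rbd_store_pool = images
    rbd_store_ceph_conf = /etc/ceph/fast.conf

    [ceph-archive]
    rbd_store_pool = images
    rbd_store_ceph_conf = /etc/ceph/archive.conf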
Consider the following use cases for providing multi-store backend support:
* A deployer might want to provide different levels of cost for different
  tiers of storage, i.e. one backend for SSDs and another for spindles.
  Customers may choose one of these based on their needs.
* Old storage is being retired and the deployer wants all new images to be
  added to the new storage, while the old storage remains operational until
  its data is migrated.
* An operator wants to differentiate operator-provided images from images
  added by users.
* Different hypervisors are served from different backends (for example,
  Ceph, Cinder, VMware, etc.).
* Each site has its own local backends which nova hosts access directly
  (Ceph), and users can select the site where an image will be stored.
Problem description
===================
At the moment glance only supports a single store per scheme. For example,
if an operator wanted to configure the Ceph store (RBD) driver for two
backend Ceph servers (one per store), this is not possible today without
substantial changes to the store driver code itself. Even if the store
driver code were changed, the operator would still have no means to upload
or download image bits to or from a targeted store without using direct
image URLs.
As a result, operators today need to perform a number of manual steps in
order to replicate or target image bits on backend glance stores. For
example, in order to replicate an existing image's bits to secondary
storage of the same type / scheme as the primary:
* Copying image bits to secondary storage is a manual, out-of-band task.
* The operator must manage store locations manually; there is no way to
  query glance for the available stores that can back an image's bits.
* The operator must remember to register the secondary location URL using
  the glance API.
* Constructing the location URL by hand is error prone, as some URLs are
  lengthy and complex. Moreover, they require knowledge of the backing
  store in order to construct properly.
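To illustrate why hand-built location URLs are error prone, here is a
minimal Python sketch of assembling an RBD location URL. The fsid, pool,
and image ID are hypothetical values, and the
``rbd://<fsid>/<pool>/<image-id>/<snapshot>`` layout applies to only one
store; each backing store has its own URL shape the operator must know.

```python
from urllib.parse import quote

# Hypothetical backend details the operator must know in advance.
ceph_fsid = "b9127830-0f52-4d39-8de5-6a8d3eeccb0a"  # Ceph cluster fsid
pool = "images"                                      # RBD pool name
image_id = "8a2f03a1-1a3b-4cc5-b8f2-0d1e9e8c5d55"   # glance image UUID
snapshot = "snap"                                    # RBD snapshot name

# Every component must be present and URL-quoted correctly; a typo in any
# one of them silently produces an unusable location.
location_url = "rbd://{}/{}/{}/{}".format(
    quote(ceph_fsid, safe=""), quote(pool, safe=""),
    quote(image_id, safe=""), quote(snapshot, safe=""))
```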
Also consider the case where a glance API consumer wants to download image
bits from a secondary backend location which was added out-of-band. Today
the consumer must use the direct location URL, which implies the consumer
needs the logic necessary to translate that direct URL into a connection
to the backend.
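The translation burden pushed onto the consumer can be sketched as
follows. The URL and its components are hypothetical; a real client would
additionally need Ceph credentials and an RBD client library to turn the
parsed pieces into an actual connection.

```python
from urllib.parse import urlparse

# Hypothetical direct location URL registered out-of-band.
url = ("rbd://b9127830-0f52-4d39-8de5-6a8d3eeccb0a"
       "/images/8a2f03a1-1a3b-4cc5-b8f2-0d1e9e8c5d55/snap")

parsed = urlparse(url)
scheme = parsed.scheme    # tells the consumer which store driver to use
fsid = parsed.netloc      # Ceph cluster fsid
# The path layout is store-specific knowledge the consumer must carry.
pool, image, snapshot = parsed.path.lstrip("/").split("/")
```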
Current state
=============
The glance community agrees to address the problem described above during
the Rocky/S cycles. The actual detailed specification is still under
discussion, and this spec will be amended once the implementation details
are agreed on.