Add ability to import image into multi-stores

The import image API now supports a list of stores to import data into.
This list can be specified through a new "stores" field that has been
added to the request body.
During the import stage, Glance iterates over this list and sends the data
to each store one by one.
If an invalid backend is requested by the user, an exception is raised.
If an error occurs during verification, already pushed data is removed and
the image state is unchanged.

Change-Id: Id3ac19488c0a693d7042be4a3c83f3b9f12313d0
Implements: blueprint import-multi-stores
Grégoire Unbekandt 2019-06-24 16:39:46 +02:00
parent 8649fdc2a2
commit 92492cf504
23 changed files with 1131 additions and 100 deletions


@@ -171,12 +171,12 @@ call.

In the ``web-download`` workflow, the data is made available to the Image
service by being posted to an accessible location with a URL that you know.
Beginning with API version 2.8, an optional ``stores`` parameter may be added
to the request body. When present, it contains the list of backing store
identifiers to import the image binary data to. If at least one store
identifier specified is not recognized, a 409 (Conflict) response is returned.
When the parameter is not present, the image data is placed into the default
backing store.
* Store identifiers are site-specific. Use the :ref:`Store
  Discovery <store-discovery-call>` call to determine what
@@ -184,13 +184,27 @@ data is placed into the default backing store.

* The default store may be determined from the :ref:`Store
  Discovery <store-discovery-call>` response.
* A default store is always defined, so if you do not have a need
  to use particular stores, simply omit this parameter and the default store
  will be used.
* For API versions before version 2.8, this parameter is silently
  ignored.
For backwards compatibility, if the ``stores`` parameter is not specified, the
``X-Image-Meta-Store`` header is evaluated.
To import the data into the entire set of stores you may consume from this
particular deployment of Glance without specifying each one of them, you can
use the optional boolean body parameter ``all_stores``.
Note that this can't be used simultaneously with the ``stores`` parameter.
To set the behavior of the import workflow in case of error, you can use the
optional boolean body parameter ``all_stores_must_succeed``.
When set to True, if an error occurs during the upload in at least one store,
the workflow fails, the data is deleted from stores where copying is done and
the state of the image remains unchanged.
When set to False (default), the workflow will fail only if the upload fails
on all stores specified. In case of a partial success, the locations added to
the image will be the stores where the data has been correctly uploaded.
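As a concrete sketch of these rules, a client could assemble the request body like this (a hypothetical helper, not part of Glance; it only enforces the mutual exclusion between ``stores`` and ``all_stores`` described above):

```python
import json


def build_import_body(method_name, uri=None, stores=None, all_stores=None,
                      all_stores_must_succeed=None):
    """Assemble an image-import request body (illustrative sketch).

    Mirrors the documented rule that 'stores' and 'all_stores' are
    mutually exclusive; omitted parameters fall back to server defaults.
    """
    if stores and all_stores:
        raise ValueError("'stores' cannot be combined with 'all_stores'")
    method = {'name': method_name}
    if uri is not None:
        method['uri'] = uri
    body = {'method': method}
    if stores is not None:
        body['stores'] = stores
    if all_stores is not None:
        body['all_stores'] = all_stores
    if all_stores_must_succeed is not None:
        body['all_stores_must_succeed'] = all_stores_must_succeed
    return json.dumps(body)
```

The resulting JSON can then be POSTed to ``/v2/images/{image_id}/import`` with the usual auth headers.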
The JSON request body specifies what import method you wish to use
for this image request.
@@ -259,6 +273,9 @@ Request

   - X-Image-Meta-Store: store-header
   - image_id: image_id-in-path
   - method: method-in-request
   - all_stores: all-stores-in-request
   - all_stores_must_succeed: all-stores-succeed-in-request
   - stores: stores-in-request
Request Example - glance-direct import method
---------------------------------------------


@@ -286,6 +286,30 @@ visibility-in-query:

  type: string

# variables in body
all-stores-in-request:
  description: |
    When set to True the data will be imported to the set of stores you may
    consume from this particular deployment of Glance (i.e. the same set of
    stores returned by a call to /v2/info/stores on the glance-api the
    request hits).
    This can't be used simultaneously with the ``stores`` parameter.
  in: body
  required: false
  type: boolean
all-stores-succeed-in-request:
  description: |
    A boolean parameter indicating the behavior of the import workflow when
    an error occurs.
    When set to True, if an error occurs during the upload in at least one
    store, the workflow fails, the data is deleted from stores where copying
    is done (not staging), and the state of the image is unchanged.
    When set to False, the workflow will fail (data deleted from stores, ...)
    only if the import fails on all stores specified by the user. In case of
    a partial success, the locations added to the image will be the stores
    where the data has been correctly uploaded.
  in: body
  required: false
  type: boolean
checksum:
  description: |
    Hash that is used over the image data. The Image
@@ -606,6 +630,13 @@ status:

  in: body
  required: true
  type: string
stores-in-request:
  description: |
    If present, contains the list of store identifiers to import the image
    binary data to.
  in: body
  required: false
  type: array
tags:
  description: |
    List of tags for this image, possibly an empty list.


@@ -1,5 +1,7 @@

{
    "method": {
        "name": "glance-direct"
    },
    "stores": ["common", "cheap", "fast", "reliable"],
    "all_stores_must_succeed": false
}


@@ -2,5 +2,7 @@

    "method": {
        "name": "web-download",
        "uri": "https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-ppc64le-disk.img"
    },
    "all_stores": true,
    "all_stores_must_succeed": true
}


@@ -223,6 +223,111 @@ be either 80 or 443.)

.. _iir_plugins:
Importing in multiple stores
----------------------------
Starting with Ussuri, it is possible to import data into multiple stores
using the interoperable image import workflow.
The status of the image is set to ``active`` according to the value of the
``all_stores_must_succeed`` parameter:

* If set to False, the image will be available as soon as an import to
  one store has succeeded.
* If set to True (default), the status is set to ``active`` only when all
  stores have been successfully processed.
Check progress
~~~~~~~~~~~~~~
As each store is processed sequentially, it can take quite some time for the
workflow to complete depending on the size of the image and the number of
stores to import data to.
It is possible to follow task progress by looking at two reserved image
properties:

* ``os_glance_importing_to_stores``: This property contains a list of stores
  that have not yet been processed. At the beginning of the import flow, it
  is filled with the stores provided in the request. Each time a store is
  fully handled, it is removed from the list.
* ``os_glance_failed_import``: Each time an import to a store fails, that
  store is added to this list. This property is emptied at the beginning of
  the import flow.
These two properties are also available in the notifications sent during the
workflow:

.. note:: Example

   An operator calls the image import API with the following parameters::

       curl -i -X POST -H "X-Auth-Token: $token"
            -H "Content-Type: application/json"
            -d '{"method": {"name":"glance-direct"},
                 "stores": ["ceph1", "ceph2"],
                 "all_stores_must_succeed": false}'
            $image_url/v2/images/{image_id}/import

   The upload fails for 'ceph2' but succeeds on 'ceph1'. Since the parameter
   ``all_stores_must_succeed`` has been set to 'false', the task ends
   successfully and the image is now active.
   Notifications sent by Glance look like (payload is truncated for
   clarity)::
       {
           "priority": "INFO",
           "event_type": "image.prepare",
           "timestamp": "2019-08-27 16:10:30.066867",
           "payload": {"status": "importing",
                       "name": "example",
                       "backend": "ceph1",
                       "os_glance_importing_to_stores": ["ceph1", "ceph2"],
                       "os_glance_failed_import": [],
                       ...},
           "message_id": "1c8993ad-e47c-4af7-9f75-fa49596eeb10",
           ...
       }

       {
           "priority": "INFO",
           "event_type": "image.upload",
           "timestamp": "2019-08-27 16:10:32.058812",
           "payload": {"status": "active",
                       "name": "example",
                       "backend": "ceph1",
                       "os_glance_importing_to_stores": ["ceph2"],
                       "os_glance_failed_import": [],
                       ...},
           "message_id": "8b8993ad-e47c-4af7-9f75-fa49596eeb11",
           ...
       }

       {
           "priority": "INFO",
           "event_type": "image.prepare",
           "timestamp": "2019-08-27 16:10:33.066867",
           "payload": {"status": "active",
                       "name": "example",
                       "backend": "ceph2",
                       "os_glance_importing_to_stores": ["ceph2"],
                       "os_glance_failed_import": [],
                       ...},
           "message_id": "1c8993ad-e47c-4af7-9f75-fa49596eeb18",
           ...
       }

       {
           "priority": "ERROR",
           "event_type": "image.upload",
           "timestamp": "2019-08-27 16:10:34.058812",
           "payload": "Error Message",
           "message_id": "8b8993ad-e47c-4af7-9f75-fa49596eeb11",
           ...
       }
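The two reserved properties can also be read back off the image record itself to track progress. A minimal sketch (assuming the properties are exposed as comma-separated strings, as the import flow writes them; the function name is illustrative, not a Glance API):

```python
def parse_import_progress(image):
    """Summarize multi-store import progress from an image record.

    'image' is a dict as returned by GET /v2/images/{image_id}; the two
    reserved properties are comma-separated strings maintained by the
    import flow.
    """
    def split(value):
        # An empty or missing property means an empty list of stores.
        return [s for s in (value or '').split(',') if s]

    return {
        'status': image.get('status'),
        'pending': split(image.get('os_glance_importing_to_stores')),
        'failed': split(image.get('os_glance_failed_import')),
    }
```

Polling this until ``pending`` is empty gives a simple progress indicator without consuming notifications.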
Customizing the image import process
------------------------------------


@@ -105,6 +105,7 @@ class ImagesController(object):

        task_repo = self.gateway.get_task_repo(req.context)
        import_method = body.get('method').get('name')
        uri = body.get('method').get('uri')
        all_stores_must_succeed = body.get('all_stores_must_succeed', True)

        try:
            image = image_repo.get(image_id)
@@ -127,24 +128,26 @@ class ImagesController(object):

                msg = _("'disk_format' needs to be set before import")
                raise exception.Conflict(msg)

            stores = [None]
            if CONF.enabled_backends:
                try:
                    stores = utils.get_stores_from_request(req, body)
                except glance_store.UnknownScheme as exc:
                    LOG.warn(exc.msg)
                    raise exception.Conflict(exc.msg)
        except exception.Conflict as e:
            raise webob.exc.HTTPConflict(explanation=e.msg)
        except exception.NotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.msg)
        if (not all_stores_must_succeed) and (not CONF.enabled_backends):
            msg = (_("All_stores_must_succeed can only be set with "
                     "enabled_backends %s") % uri)
            raise webob.exc.HTTPBadRequest(explanation=msg)

        task_input = {'image_id': image_id,
                      'import_req': body,
                      'backend': stores}

        if (import_method == 'web-download' and
                not utils.validate_import_uri(uri)):


@@ -16,6 +16,7 @@ import os

import glance_store as store_api
from glance_store import backend
from glance_store import exceptions as store_exceptions
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import encodeutils
@@ -188,13 +189,16 @@ class _VerifyStaging(task.Task):

class _ImportToStore(task.Task):

    def __init__(self, task_id, task_type, image_repo, uri, image_id, backend,
                 all_stores_must_succeed, set_active):
        self.task_id = task_id
        self.task_type = task_type
        self.image_repo = image_repo
        self.uri = uri
        self.image_id = image_id
        self.backend = backend
        self.all_stores_must_succeed = all_stores_must_succeed
        self.set_active = set_active
        super(_ImportToStore, self).__init__(
            name='%s-ImportToStore-%s' % (task_type, task_id))
@@ -245,13 +249,74 @@ class _ImportToStore(task.Task):

        # will need the file path anyways for our delete workflow for now.
        # For future proofing keeping this as is.
        image = self.image_repo.get(self.image_id)
        if image.status == "deleted":
            raise exception.ImportTaskError("Image has been deleted, aborting"
                                            " import.")
        try:
            image_import.set_image_data(image, file_path or self.uri,
                                        self.task_id, backend=self.backend,
                                        set_active=self.set_active)
        # NOTE(yebinama): set_image_data catches Exception and raises from
        # them. Can't be more specific on the exceptions caught.
        except Exception:
            if self.all_stores_must_succeed:
                raise
            msg = (_("%(task_id)s of %(task_type)s failed but since "
                     "all_stores_must_succeed is set to false, continue.") %
                   {'task_id': self.task_id, 'task_type': self.task_type})
            LOG.warning(msg)
            if self.backend is not None:
                failed_import = image.extra_properties.get(
                    'os_glance_failed_import', '').split(',')
                failed_import.append(self.backend)
                image.extra_properties['os_glance_failed_import'] = ','.join(
                    failed_import).lstrip(',')
        if self.backend is not None:
            importing = image.extra_properties.get(
                'os_glance_importing_to_stores', '').split(',')
            try:
                importing.remove(self.backend)
                image.extra_properties[
                    'os_glance_importing_to_stores'] = ','.join(
                    importing).lstrip(',')
            except ValueError:
                LOG.debug("Store %s not found in property "
                          "os_glance_importing_to_stores.", self.backend)
        # NOTE(flaper87): We need to save the image again after
        # the locations have been set in the image.
        self.image_repo.save(image)
    def revert(self, result, **kwargs):
        """
        Remove location from image in case of failure

        :param result: taskflow result object
        """
        image = self.image_repo.get(self.image_id)
        for i, location in enumerate(image.locations):
            if location.get('metadata', {}).get('store') == self.backend:
                try:
                    image.locations.pop(i)
                except (store_exceptions.NotFound,
                        store_exceptions.Forbidden):
                    msg = (_("Error deleting from store %(store)s when "
                             "reverting.") % {'store': self.backend})
                    LOG.warning(msg)
                # NOTE(yebinama): Some store drivers don't document which
                # exceptions they throw.
                except Exception:
                    msg = (_("Unexpected exception when deleting from store "
                             "%(store)s.") % {'store': self.backend})
                    LOG.warning(msg)
                else:
                    if len(image.locations) == 0:
                        image.checksum = None
                        image.os_hash_algo = None
                        image.os_hash_value = None
                        image.size = None
                    self.image_repo.save(image)
                break
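The revert step above boils down to dropping the image location that points at the failed store, and clearing the hash fields when no location remains. A pure-function sketch of that pruning logic (names are illustrative, not Glance APIs):

```python
def prune_store_location(locations, store):
    """Remove the location written to `store`, as the revert step does.

    `locations` is a list of dicts shaped like Glance image locations,
    e.g. {'url': ..., 'metadata': {'store': 'ceph1'}, 'status': 'active'}.
    Returns True when a matching location was removed.
    """
    for i, location in enumerate(locations):
        # Locations are tagged with their backing store in metadata.
        if location.get('metadata', {}).get('store') == store:
            locations.pop(i)
            return True
    return False
```

In the real task, an empty resulting list additionally resets ``checksum``, ``os_hash_algo``, ``os_hash_value`` and ``size`` before saving the image.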
class _SaveImage(task.Task):

@@ -337,7 +402,9 @@ def get_flow(**kwargs):

    image_id = kwargs.get('image_id')
    import_method = kwargs.get('import_req')['method']['name']
    uri = kwargs.get('import_req')['method'].get('uri')
    stores = kwargs.get('backend', [None])
    all_stores_must_succeed = kwargs.get('import_req').get(
        'all_stores_must_succeed', True)

    separator = ''
    if not CONF.enabled_backends and not CONF.node_staging_uri.endswith('/'):
@@ -368,13 +435,20 @@ def get_flow(**kwargs):

    for plugin in import_plugins.get_import_plugins(**kwargs):
        flow.add(plugin)

    for idx, store in enumerate(stores, 1):
        set_active = (not all_stores_must_succeed) or (idx == len(stores))
        task_name = task_type + "-" + (store or "")
        import_task = lf.Flow(task_name)
        import_to_store = _ImportToStore(task_id,
                                         task_name,
                                         image_repo,
                                         file_uri,
                                         image_id,
                                         store,
                                         all_stores_must_succeed,
                                         set_active)
        import_task.add(import_to_store)
        flow.add(import_task)

    delete_task = lf.Flow(task_type).add(_DeleteFromFS(task_id, task_type))
    flow.add(delete_task)
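The per-store ``set_active`` computation in the loop above decides which store is allowed to flip the image to ``active``: only the last store when ``all_stores_must_succeed`` is True, every store otherwise. It can be isolated as a small helper (a sketch for illustration, not part of the change):

```python
def activation_plan(stores, all_stores_must_succeed):
    """Return (store, set_active) pairs mirroring get_flow's loop.

    With all_stores_must_succeed=True only the final store in the
    sequence may activate the image; otherwise any successful store may.
    """
    n = len(stores)
    return [(store, (not all_stores_must_succeed) or (idx == n))
            for idx, store in enumerate(stores, 1)]
```

This makes the semantics easy to check: with strict mode on, intermediate stores upload without activating, so a later failure can still abort the whole import.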
@@ -394,6 +468,11 @@ def get_flow(**kwargs):

    image = image_repo.get(image_id)
    from_state = image.status
    image.status = 'importing'
    image.extra_properties[
        'os_glance_importing_to_stores'] = ','.join((store for store in
                                                     stores if
                                                     store is not None))
    image.extra_properties['os_glance_failed_import'] = ''
    image_repo.save(image, from_state=from_state)

    return flow


@@ -137,13 +137,13 @@ def create_image(image_repo, image_factory, image_properties, task_id):

    return image


def set_image_data(image, uri, task_id, backend=None, set_active=True):
    data_iter = None
    try:
        LOG.info(_LI("Task %(task_id)s: Got image data uri %(data_uri)s to be "
                     "imported"), {"data_uri": uri, "task_id": task_id})
        data_iter = script_utils.get_image_data_iter(uri)
        image.set_data(data_iter, backend=backend, set_active=set_active)
    except Exception as e:
        with excutils.save_and_reraise_exception():
            LOG.warn(_LW("Task %(task_id)s failed with exception %(error)s") %


@@ -29,6 +29,7 @@ except ImportError:

    from eventlet.green import socket

import functools
import glance_store
import os
import re
@@ -668,3 +669,35 @@ def evaluate_filter_op(value, operator, threshold):

        msg = _("Unable to filter on a unknown operator.")
        raise exception.InvalidFilterOperatorValue(msg)
def get_stores_from_request(req, body):
    """Processes a supplied request and extracts stores from it

    :param req: request to process
    :param body: request body
    :raises glance_store.UnknownScheme: if a store is not valid
    :return: a list of stores
    """
    if body.get('all_stores', False):
        if 'stores' in body or 'x-image-meta-store' in req.headers:
            msg = _("All_stores parameter can't be used with "
                    "x-image-meta-store header or stores parameter")
            raise exc.HTTPBadRequest(explanation=msg)
        stores = list(CONF.enabled_backends)
    else:
        try:
            stores = body['stores']
        except KeyError:
            stores = [req.headers.get('x-image-meta-store',
                                      CONF.glance_store.default_backend)]
        else:
            if 'x-image-meta-store' in req.headers:
                msg = _("Stores parameter and x-image-meta-store header "
                        "can't be both specified")
                raise exc.HTTPBadRequest(explanation=msg)
    # Validate each store
    for store in stores:
        glance_store.get_store_from_store_identifier(store)
    return stores
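The precedence implemented by ``get_stores_from_request`` can be sketched as a pure function over the body and headers (hypothetical names, not the Glance API; it raises ``ValueError`` where the real code returns HTTP 400, and skips store validation):

```python
def resolve_stores(body, headers, enabled_backends, default_backend):
    """Illustrate store-selection precedence: all_stores > stores > header.

    'body' is the parsed import request body; 'headers' a dict of request
    headers; 'enabled_backends' the configured store identifiers.
    """
    if body.get('all_stores', False):
        # all_stores expands to every enabled backend and excludes the rest.
        if 'stores' in body or 'x-image-meta-store' in headers:
            raise ValueError("'all_stores' excludes 'stores' and "
                             "'x-image-meta-store'")
        return list(enabled_backends)
    if 'stores' in body:
        if 'x-image-meta-store' in headers:
            raise ValueError("'stores' excludes 'x-image-meta-store'")
        return body['stores']
    # Legacy path: single store from the header, else the default backend.
    return [headers.get('x-image-meta-store', default_backend)]
```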


@@ -289,7 +289,7 @@ class Image(object):

    def get_data(self, *args, **kwargs):
        raise NotImplementedError()

    def set_data(self, data, size=None, backend=None, set_active=True):
        raise NotImplementedError()


@@ -194,8 +194,8 @@ class Image(object):

    def reactivate(self):
        self.base.reactivate()

    def set_data(self, data, size=None, backend=None, set_active=True):
        self.base.set_data(data, size, backend=backend, set_active=set_active)

    def get_data(self, *args, **kwargs):
        return self.base.get_data(*args, **kwargs)


@@ -436,31 +436,17 @@ class ImageProxy(glance.domain.proxy.Image):

                                         self.image.image_id,
                                         location)

    def _upload_to_store(self, data, verifier, store=None, size=None):
        """
        Upload data to store

        :param data: data to upload to store
        :param verifier: for signature verification
        :param store: store to upload data to
        :param size: data size
        :return:
        """
        hashing_algo = self.image.os_hash_algo or CONF['hashing_algorithm']
        if CONF.enabled_backends:
            (location, size, checksum,
             multihash, loc_meta) = self.store_api.add_with_multihash(
@@ -469,7 +455,7 @@ class ImageProxy(glance.domain.proxy.Image):

                utils.LimitingReader(utils.CooperativeReader(data),
                                     CONF.image_size_cap),
                size,
                store,
                hashing_algo,
                context=self.context,
                verifier=verifier)
@@ -487,16 +473,33 @@ class ImageProxy(glance.domain.proxy.Image):

                hashing_algo,
                context=self.context,
                verifier=verifier)

        self._verify_signature(verifier, location, loc_meta)
        for attr, data in {"size": size, "os_hash_value": multihash,
                           "checksum": checksum}.items():
            self._verify_uploaded_data(data, attr)
        self.image.locations.append({'url': location, 'metadata': loc_meta,
                                     'status': 'active'})
        self.image.checksum = checksum
        self.image.os_hash_value = multihash
        self.image.size = size
        self.image.os_hash_algo = hashing_algo

    def _verify_signature(self, verifier, location, loc_meta):
        """
        Verify signature of uploaded data.

        :param verifier: for signature verification
        """
        # NOTE(bpoulos): if verification fails, exception will be raised
        if verifier is not None:
            try:
                verifier.verify()
                msg = _LI("Successfully verified signature for image %s")
                LOG.info(msg, self.image.image_id)
            except crypto_exception.InvalidSignature:
                if CONF.enabled_backends:
                    self.store_api.delete(location,
                                          loc_meta.get('store'),
                                          context=self.context)
                else:
                    self.store_api.delete_from_backend(location,
@@ -505,13 +508,46 @@ class ImageProxy(glance.domain.proxy.Image):

                    _('Signature verification failed')
                )

    def _verify_uploaded_data(self, value, attribute_name):
        """
        Verify value of attribute_name uploaded data

        :param value: value to compare
        :param attribute_name: attribute name of the image to compare with
        """
        image_value = getattr(self.image, attribute_name)
        if image_value is not None and value != image_value:
            msg = _("%s of uploaded data is different from current "
                    "value set on the image.")
            LOG.error(msg, attribute_name)
            raise exception.UploadException(msg % attribute_name)

    def set_data(self, data, size=None, backend=None, set_active=True):
        if size is None:
            size = 0  # NOTE(markwash): zero -> unknown size
        # Create the verifier for signature verification (if correct properties
        # are present)
        extra_props = self.image.extra_properties
        verifier = None
        if signature_utils.should_create_verifier(extra_props):
            # NOTE(bpoulos): if creating verifier fails, exception will be
            # raised
            img_signature = extra_props[signature_utils.SIGNATURE]
            hash_method = extra_props[signature_utils.HASH_METHOD]
            key_type = extra_props[signature_utils.KEY_TYPE]
            cert_uuid = extra_props[signature_utils.CERT_UUID]
            verifier = signature_utils.get_verifier(
                context=self.context,
                img_signature_certificate_uuid=cert_uuid,
                img_signature_hash_method=hash_method,
                img_signature=img_signature,
                img_signature_key_type=key_type
            )
        self._upload_to_store(data, verifier, backend, size)
        if set_active and self.image.status != 'active':
            self.image.status = 'active'

    def get_data(self, offset=0, chunk_size=None):
        if not self.image.locations:


@@ -400,6 +400,17 @@ class ImageProxy(NotificationProxy, domain_proxy.Image):

            'receiver_user_id': self.context.user_id,
        }

    def _format_import_properties(self):
        importing = self.repo.extra_properties.get(
            'os_glance_importing_to_stores')
        importing = importing.split(',') if importing else []
        failed = self.repo.extra_properties.get('os_glance_failed_import')
        failed = failed.split(',') if failed else []
        return {
            'os_glance_importing_to_stores': importing,
            'os_glance_failed_import': failed
        }
    def _get_chunk_data_iterator(self, data, chunk_size=None):
        sent = 0
        for chunk in data:
@@ -426,12 +437,15 @@ class ImageProxy(NotificationProxy, domain_proxy.Image):

        data = self.repo.get_data(offset=offset, chunk_size=chunk_size)
        return self._get_chunk_data_iterator(data, chunk_size=chunk_size)

    def set_data(self, data, size=None, backend=None, set_active=True):
        self.send_notification('image.prepare', self.repo, backend=backend,
                               extra_payload=self._format_import_properties())
        notify_error = self.notifier.error
        status = self.repo.status
        try:
            self.repo.set_data(data, size, backend=backend,
                               set_active=set_active)
        except glance_store.StorageFull as e:
            msg = (_("Image storage media is full: %s") %
                   encodeutils.exception_to_unicode(e))
@@ -486,8 +500,11 @@ class ImageProxy(NotificationProxy, domain_proxy.Image):

                    'error': encodeutils.exception_to_unicode(e)})
            _send_notification(notify_error, 'image.upload', msg)
        else:
            extra_payload = self._format_import_properties()
            self.send_notification('image.upload', self.repo,
                                   extra_payload=extra_payload)
            if set_active and status != 'active':
                self.send_notification('image.activate', self.repo)


class ImageMemberProxy(NotificationProxy, domain_proxy.ImageMember):

@@ -306,7 +306,7 @@ class ImageProxy(glance.domain.proxy.Image):

        super(ImageProxy, self).__init__(image)
        self.orig_props = set(image.extra_properties.keys())

    def set_data(self, data, size=None, backend=None, set_active=True):
        remaining = glance.api.common.check_quota(
            self.context, size, self.db_api, image_id=self.image.image_id)
        if remaining is not None:

@@ -315,7 +315,8 @@ class ImageProxy(glance.domain.proxy.Image):

            data = utils.LimitingReader(
                data, remaining, exception_class=exception.StorageQuotaFull)

        self.image.set_data(data, size=size, backend=backend,
                            set_active=set_active)

        # NOTE(jbresnah) If two uploads happen at the same time and neither
        # properly sets the size attribute[1] then there is a race condition


@@ -5134,6 +5134,332 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):

        self.stop_servers()
    def test_image_import_multi_stores(self):
        self.config(node_staging_uri="file:///tmp/staging/")
        self.start_servers(**self.__dict__.copy())

        # Image list should be empty
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(0, len(images))

        # web-download should be available in discovery response
        path = self._url('/v2/info/import')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        discovery_calls = jsonutils.loads(
            response.text)['import-methods']['value']
        self.assertIn("web-download", discovery_calls)

        # file1 and file2 should be available in discovery response
        available_stores = ['file1', 'file2']
        path = self._url('/v2/info/stores')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        discovery_calls = jsonutils.loads(
            response.text)['stores']
        # os_glance_staging_store should not be available in discovery response
        for stores in discovery_calls:
            self.assertIn('id', stores)
            self.assertIn(stores['id'], available_stores)
            self.assertFalse(stores["id"].startswith("os_glance_"))

        # Create an image
        path = self._url('/v2/images')
        headers = self._headers({'content-type': 'application/json'})
        data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
                                'disk_format': 'aki',
                                'container_format': 'aki'})
        response = requests.post(path, headers=headers, data=data)
        self.assertEqual(http.CREATED, response.status_code)

        # Check 'OpenStack-image-store-ids' header present in response
        self.assertIn('OpenStack-image-store-ids', response.headers)
        for store in available_stores:
            self.assertIn(store, response.headers['OpenStack-image-store-ids'])

        # Returned image entity should have a generated id and status
        image = jsonutils.loads(response.text)
        image_id = image['id']
        checked_keys = set([
            u'status',
            u'name',
            u'tags',
            u'created_at',
            u'updated_at',
            u'visibility',
            u'self',
            u'protected',
            u'id',
            u'file',
            u'min_disk',
            u'type',
            u'min_ram',
            u'schema',
            u'disk_format',
            u'container_format',
            u'owner',
            u'checksum',
            u'size',
            u'virtual_size',
            u'os_hidden',
            u'os_hash_algo',
            u'os_hash_value'
        ])
        self.assertEqual(checked_keys, set(image.keys()))
        expected_image = {
            'status': 'queued',
            'name': 'image-1',
            'tags': [],
            'visibility': 'shared',
            'self': '/v2/images/%s' % image_id,
            'protected': False,
            'file': '/v2/images/%s/file' % image_id,
            'min_disk': 0,
            'type': 'kernel',
            'min_ram': 0,
            'schema': '/v2/schemas/image',
        }
        for key, value in expected_image.items():
            self.assertEqual(value, image[key], key)

        # Image list should now have one entry
        path = self._url('/v2/images')
        response = requests.get(path, headers=self._headers())
        self.assertEqual(http.OK, response.status_code)
        images = jsonutils.loads(response.text)['images']
        self.assertEqual(1, len(images))
        self.assertEqual(image_id, images[0]['id'])

        # Verify image is in queued state and checksum is None
        func_utils.verify_image_hashes_and_status(self, image_id,
                                                  status='queued')

        # Import image to multiple stores
path = self._url('/v2/images/%s/import' % image_id)
headers = self._headers({
'content-type': 'application/json',
'X-Roles': 'admin'
})
# Start http server locally
thread, httpd, port = test_utils.start_standalone_http_server()
image_data_uri = 'http://localhost:%s/' % port
data = jsonutils.dumps(
{'method': {'name': 'web-download', 'uri': image_data_uri},
'stores': ['file1', 'file2']})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.ACCEPTED, response.status_code)
# Verify image is in active state and checksum is set
# NOTE(abhishekk): As import is an async call we need to allow
# some time for the call to complete.
path = self._url('/v2/images/%s' % image_id)
func_utils.wait_for_status(request_path=path,
request_headers=self._headers(),
status='active',
max_sec=40,
delay_sec=0.2,
start_delay_sec=1)
with requests.get(image_data_uri) as r:
expect_c = six.text_type(hashlib.md5(r.content).hexdigest())
expect_h = six.text_type(hashlib.sha512(r.content).hexdigest())
func_utils.verify_image_hashes_and_status(self,
image_id,
checksum=expect_c,
os_hash_value=expect_h,
status='active')
# kill the local http server
httpd.shutdown()
httpd.server_close()
# Ensure image is created in the two stores
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file2', jsonutils.loads(response.text)['stores'])
self.assertIn('file1', jsonutils.loads(response.text)['stores'])
# Deleting image should work
path = self._url('/v2/images/%s' % image_id)
response = requests.delete(path, headers=self._headers())
self.assertEqual(http.NO_CONTENT, response.status_code)
# Image list should now be empty
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(0, len(images))
self.stop_servers()
def test_image_import_multi_stores_specifying_all_stores(self):
self.config(node_staging_uri="file:///tmp/staging/")
self.start_servers(**self.__dict__.copy())
# Image list should be empty
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(0, len(images))
# web-download should be available in discovery response
path = self._url('/v2/info/import')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
discovery_calls = jsonutils.loads(
response.text)['import-methods']['value']
self.assertIn("web-download", discovery_calls)
# file1 and file2 should be available in discovery response
available_stores = ['file1', 'file2']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
discovery_calls = jsonutils.loads(
response.text)['stores']
# os_glance_staging_store should not be available in discovery response
for stores in discovery_calls:
self.assertIn('id', stores)
self.assertIn(stores['id'], available_stores)
self.assertFalse(stores["id"].startswith("os_glance_"))
# Create an image
path = self._url('/v2/images')
headers = self._headers({'content-type': 'application/json'})
data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
'disk_format': 'aki',
'container_format': 'aki'})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.CREATED, response.status_code)
# Check 'OpenStack-image-store-ids' header present in response
self.assertIn('OpenStack-image-store-ids', response.headers)
for store in available_stores:
self.assertIn(store, response.headers['OpenStack-image-store-ids'])
# Returned image entity should have a generated id and status
image = jsonutils.loads(response.text)
image_id = image['id']
checked_keys = set([
u'status',
u'name',
u'tags',
u'created_at',
u'updated_at',
u'visibility',
u'self',
u'protected',
u'id',
u'file',
u'min_disk',
u'type',
u'min_ram',
u'schema',
u'disk_format',
u'container_format',
u'owner',
u'checksum',
u'size',
u'virtual_size',
u'os_hidden',
u'os_hash_algo',
u'os_hash_value'
])
self.assertEqual(checked_keys, set(image.keys()))
expected_image = {
'status': 'queued',
'name': 'image-1',
'tags': [],
'visibility': 'shared',
'self': '/v2/images/%s' % image_id,
'protected': False,
'file': '/v2/images/%s/file' % image_id,
'min_disk': 0,
'type': 'kernel',
'min_ram': 0,
'schema': '/v2/schemas/image',
}
for key, value in expected_image.items():
self.assertEqual(value, image[key], key)
# Image list should now have one entry
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(1, len(images))
self.assertEqual(image_id, images[0]['id'])
# Verify image is in queued state and checksum is None
func_utils.verify_image_hashes_and_status(self, image_id,
status='queued')
# Import image to multiple stores
path = self._url('/v2/images/%s/import' % image_id)
headers = self._headers({
'content-type': 'application/json',
'X-Roles': 'admin'
})
# Start http server locally
thread, httpd, port = test_utils.start_standalone_http_server()
image_data_uri = 'http://localhost:%s/' % port
data = jsonutils.dumps(
{'method': {'name': 'web-download', 'uri': image_data_uri},
'all_stores': True})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.ACCEPTED, response.status_code)
# Verify image is in active state and checksum is set
# NOTE(abhishekk): As import is an async call we need to allow
# some time for the call to complete.
path = self._url('/v2/images/%s' % image_id)
func_utils.wait_for_status(request_path=path,
request_headers=self._headers(),
status='active',
max_sec=40,
delay_sec=0.2,
start_delay_sec=1)
with requests.get(image_data_uri) as r:
expect_c = six.text_type(hashlib.md5(r.content).hexdigest())
expect_h = six.text_type(hashlib.sha512(r.content).hexdigest())
func_utils.verify_image_hashes_and_status(self,
image_id,
checksum=expect_c,
os_hash_value=expect_h,
status='active')
# kill the local http server
httpd.shutdown()
httpd.server_close()
# Ensure image is created in the two stores
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file2', jsonutils.loads(response.text)['stores'])
self.assertIn('file1', jsonutils.loads(response.text)['stores'])
# Deleting image should work
path = self._url('/v2/images/%s' % image_id)
response = requests.delete(path, headers=self._headers())
self.assertEqual(http.NO_CONTENT, response.status_code)
# Image list should now be empty
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(0, len(images))
self.stop_servers()
def test_image_lifecycle(self):
# Image list should be empty
self.start_servers(**self.__dict__.copy())

View File

@@ -18,13 +18,20 @@ import mock
from oslo_config import cfg
import glance.async_.flows.api_image_import as import_flow
from glance.common.exception import ImportTaskError
from glance import context
from glance import gateway
import glance.tests.utils as test_utils
from cursive import exception as cursive_exception
CONF = cfg.CONF
TASK_TYPE = 'api_image_import'
TASK_ID1 = 'dbbe7231-020f-4311-87e1-5aaa6da56c02'
IMAGE_ID1 = '41f5b3b0-f54c-4cef-bd45-ce3e376a142f'
UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
class TestApiImageImportTask(test_utils.BaseTestCase):
@@ -88,3 +95,74 @@ class TestApiImageImportTask(test_utils.BaseTestCase):
import_req=self.wd_task_input['import_req'])
self._pass_uri(uri=test_uri, file_uri=expected_uri,
import_req=self.gd_task_input['import_req'])
class TestImportToStoreTask(test_utils.BaseTestCase):
def setUp(self):
super(TestImportToStoreTask, self).setUp()
self.gateway = gateway.Gateway()
self.context = context.RequestContext(user_id=TENANT1,
project_id=TENANT1,
overwrite=False)
self.img_factory = self.gateway.get_image_factory(self.context)
def test_raises_when_image_deleted(self):
img_repo = mock.MagicMock()
image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
img_repo, "http://url",
IMAGE_ID1, "store1", False,
True)
image = self.img_factory.new_image(image_id=UUID1)
image.status = "deleted"
img_repo.get.return_value = image
self.assertRaises(ImportTaskError, image_import.execute)
@mock.patch("glance.async_.flows.api_image_import.image_import")
def test_remove_store_from_property(self, mock_import):
img_repo = mock.MagicMock()
image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
img_repo, "http://url",
IMAGE_ID1, "store1", True,
True)
extra_properties = {"os_glance_importing_to_stores": "store1,store2"}
image = self.img_factory.new_image(image_id=UUID1,
extra_properties=extra_properties)
img_repo.get.return_value = image
image_import.execute()
self.assertEqual(
image.extra_properties['os_glance_importing_to_stores'], "store2")
@mock.patch("glance.async_.flows.api_image_import.image_import")
def test_raises_when_all_stores_must_succeed(self, mock_import):
img_repo = mock.MagicMock()
image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
img_repo, "http://url",
IMAGE_ID1, "store1", True,
True)
image = self.img_factory.new_image(image_id=UUID1)
img_repo.get.return_value = image
mock_import.set_image_data.side_effect = \
cursive_exception.SignatureVerificationError(
"Signature verification failed")
self.assertRaises(cursive_exception.SignatureVerificationError,
image_import.execute)
@mock.patch("glance.async_.flows.api_image_import.image_import")
def test_doesnt_raise_when_not_all_stores_must_succeed(self, mock_import):
img_repo = mock.MagicMock()
image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
img_repo, "http://url",
IMAGE_ID1, "store1", False,
True)
image = self.img_factory.new_image(image_id=UUID1)
img_repo.get.return_value = image
mock_import.set_image_data.side_effect = \
cursive_exception.SignatureVerificationError(
"Signature verification failed")
try:
image_import.execute()
self.assertEqual(image.extra_properties['os_glance_failed_import'],
"store1")
except cursive_exception.SignatureVerificationError:
self.fail("Exception shouldn't be raised")
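The `test_remove_store_from_property` case above pins down the bookkeeping on the `os_glance_importing_to_stores` image property: once a store finishes importing, its id is dropped from the comma-separated list. A minimal standalone sketch of that behavior (the helper name is illustrative, not Glance's actual implementation):

```python
def remove_store_from_property(extra_properties, store_id,
                               prop='os_glance_importing_to_stores'):
    """Drop store_id from a comma-separated store list property.

    Mirrors the property update the test above asserts on; names are
    illustrative only.
    """
    stores = extra_properties.get(prop, '')
    remaining = [s for s in stores.split(',') if s and s != store_id]
    extra_properties[prop] = ','.join(remaining)


# After 'store1' completes, only 'store2' should remain pending.
props = {'os_glance_importing_to_stores': 'store1,store2'}
remove_store_from_property(props, 'store1')
```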


@@ -14,9 +14,11 @@
# License for the specific language governing permissions and limitations
# under the License.
import glance_store as store
import mock
import tempfile
from oslo_config import cfg
from oslo_log import log as logging
import six
import webob
@@ -27,6 +29,9 @@ from glance.common import utils
from glance.tests import utils as test_utils
CONF = cfg.CONF
class TestStoreUtils(test_utils.BaseTestCase):
"""Test glance.common.store_utils module"""
@@ -408,6 +413,104 @@ class TestUtils(test_utils.BaseTestCase):
utils.parse_valid_host_port,
pair)
def test_get_stores_from_request_returns_default(self):
enabled_backends = {
"ceph1": "rbd",
"ceph2": "rbd"
}
self.config(enabled_backends=enabled_backends)
store.register_store_opts(CONF)
self.config(default_backend="ceph1", group="glance_store")
req = webob.Request.blank('/some_request')
mp = "glance.common.utils.glance_store.get_store_from_store_identifier"
with mock.patch(mp) as mock_get_store:
result = utils.get_stores_from_request(req, {})
self.assertEqual(["ceph1"], result)
mock_get_store.assert_called_once_with("ceph1")
def test_get_stores_from_request_returns_stores_from_body(self):
enabled_backends = {
"ceph1": "rbd",
"ceph2": "rbd"
}
self.config(enabled_backends=enabled_backends)
store.register_store_opts(CONF)
self.config(default_backend="ceph1", group="glance_store")
body = {"stores": ["ceph1", "ceph2"]}
req = webob.Request.blank("/some_request")
mp = "glance.common.utils.glance_store.get_store_from_store_identifier"
with mock.patch(mp) as mock_get_store:
result = utils.get_stores_from_request(req, body)
self.assertEqual(["ceph1", "ceph2"], result)
mock_get_store.assert_any_call("ceph1")
mock_get_store.assert_any_call("ceph2")
self.assertEqual(mock_get_store.call_count, 2)
def test_get_stores_from_request_returns_store_from_headers(self):
enabled_backends = {
"ceph1": "rbd",
"ceph2": "rbd"
}
self.config(enabled_backends=enabled_backends)
store.register_store_opts(CONF)
self.config(default_backend="ceph1", group="glance_store")
headers = {"x-image-meta-store": "ceph2"}
req = webob.Request.blank("/some_request", headers=headers)
mp = "glance.common.utils.glance_store.get_store_from_store_identifier"
with mock.patch(mp) as mock_get_store:
result = utils.get_stores_from_request(req, {})
self.assertEqual(["ceph2"], result)
mock_get_store.assert_called_once_with("ceph2")
def test_get_stores_from_request_raises_bad_request(self):
enabled_backends = {
"ceph1": "rbd",
"ceph2": "rbd"
}
self.config(enabled_backends=enabled_backends)
store.register_store_opts(CONF)
self.config(default_backend="ceph1", group="glance_store")
headers = {"x-image-meta-store": "ceph2"}
body = {"stores": ["ceph1"]}
req = webob.Request.blank("/some_request", headers=headers)
self.assertRaises(webob.exc.HTTPBadRequest,
utils.get_stores_from_request, req, body)
def test_get_stores_from_request_returns_all_stores(self):
enabled_backends = {
"ceph1": "rbd",
"ceph2": "rbd"
}
self.config(enabled_backends=enabled_backends)
store.register_store_opts(CONF)
self.config(default_backend="ceph1", group="glance_store")
body = {"all_stores": True}
req = webob.Request.blank("/some_request")
mp = "glance.common.utils.glance_store.get_store_from_store_identifier"
with mock.patch(mp) as mock_get_store:
result = sorted(utils.get_stores_from_request(req, body))
self.assertEqual(["ceph1", "ceph2"], result)
mock_get_store.assert_any_call("ceph1")
mock_get_store.assert_any_call("ceph2")
self.assertEqual(mock_get_store.call_count, 2)
def test_get_stores_from_request_raises_bad_request_with_all_stores(self):
enabled_backends = {
"ceph1": "rbd",
"ceph2": "rbd"
}
self.config(enabled_backends=enabled_backends)
store.register_store_opts(CONF)
self.config(default_backend="ceph1", group="glance_store")
headers = {"x-image-meta-store": "ceph2"}
body = {"stores": ["ceph1"], "all_stores": True}
req = webob.Request.blank("/some_request", headers=headers)
self.assertRaises(webob.exc.HTTPBadRequest,
utils.get_stores_from_request, req, body)
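Taken together, the `get_stores_from_request` tests above fix the resolution rules: a `stores` list in the body wins, `all_stores` expands to every enabled backend, the legacy `X-Image-Meta-Store` header is honored only when the body names nothing, mixing the header or `all_stores` with an explicit `stores` list is a bad request, and with nothing specified the default backend is used. A self-contained sketch of that precedence (not Glance's code; names are illustrative):

```python
class BadRequest(Exception):
    """Stand-in for webob.exc.HTTPBadRequest in this sketch."""


def resolve_stores(body, header_store, enabled, default):
    """Resolve the target store list the way the tests above expect.

    body: parsed import request body; header_store: value of the
    legacy x-image-meta-store header (or None); enabled: configured
    backend ids; default: the configured default backend.
    """
    if body.get('all_stores'):
        if body.get('stores') or header_store:
            raise BadRequest('all_stores conflicts with an explicit store')
        return list(enabled)
    stores = body.get('stores')
    if stores:
        if header_store:
            raise BadRequest('stores body field conflicts with header')
        return list(stores)
    return [header_store] if header_store else [default]
```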
class SplitFilterOpTestCase(test_utils.BaseTestCase):


@@ -38,6 +38,7 @@ class ImageStub(object):
self.extra_properties = extra_properties
self.checksum = 'c1234'
self.size = 123456789
self.os_hash_algo = None
class TestCacheMiddlewareURLMatching(testtools.TestCase):


@@ -44,7 +44,7 @@ class ImageStub(glance.domain.Image):
def get_data(self, offset=0, chunk_size=None):
return ['01234', '56789']
def set_data(self, data, size, backend=None, set_active=True):
for chunk in data:
pass
@@ -272,10 +272,17 @@ class TestImageNotifications(utils.BaseTestCase):
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.prepare', output_log['event_type'])
self.assertEqual(self.image.image_id, output_log['payload']['id'])
self.assertEqual(['store1', 'store2'], output_log['payload'][
'os_glance_importing_to_stores'])
self.assertEqual([],
output_log['payload']['os_glance_failed_import'])
yield 'abcd'
yield 'efgh'
insurance['called'] = True
self.image_proxy.extra_properties[
'os_glance_importing_to_stores'] = 'store1,store2'
self.image_proxy.extra_properties['os_glance_failed_import'] = ''
self.image_proxy.set_data(data_iterator(), 8)
self.assertTrue(insurance['called'])
@@ -294,39 +301,83 @@ class TestImageNotifications(utils.BaseTestCase):
self.assertTrue(insurance['called'])
def test_image_set_data_upload_and_activate_notification(self):
image = ImageStub(image_id=UUID1, name='image-1', status='queued',
created_at=DATETIME, updated_at=DATETIME,
owner=TENANT1, visibility='public')
context = glance.context.RequestContext(tenant=TENANT2, user=USER1)
fake_notifier = unit_test_utils.FakeNotifier()
image_proxy = glance.notifier.ImageProxy(image, context, fake_notifier)
def data_iterator():
fake_notifier.log = []
yield 'abcde'
yield 'fghij'
image_proxy.extra_properties[
'os_glance_importing_to_stores'] = 'store2'
image_proxy.extra_properties[
'os_glance_importing_to_stores'] = 'store1,store2'
image_proxy.extra_properties['os_glance_failed_import'] = ''
image_proxy.set_data(data_iterator(), 10)
output_logs = fake_notifier.get_logs()
self.assertEqual(2, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.upload', output_log['event_type'])
self.assertEqual(self.image.image_id, output_log['payload']['id'])
self.assertEqual(['store2'], output_log['payload'][
'os_glance_importing_to_stores'])
self.assertEqual([],
output_log['payload']['os_glance_failed_import'])
output_log = output_logs[1]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.activate', output_log['event_type'])
self.assertEqual(self.image.image_id, output_log['payload']['id'])
def test_image_set_data_upload_and_not_activate_notification(self):
insurance = {'called': False}
def data_iterator():
self.notifier.log = []
yield 'abcde'
yield 'fghij'
self.image_proxy.extra_properties[
'os_glance_importing_to_stores'] = 'store2'
insurance['called'] = True
self.image_proxy.set_data(data_iterator(), 10)
output_logs = self.notifier.get_logs()
self.assertEqual(1, len(output_logs))
output_log = output_logs[0]
self.assertEqual('INFO', output_log['notification_type'])
self.assertEqual('image.upload', output_log['event_type'])
self.assertEqual(self.image.image_id, output_log['payload']['id'])
self.assertTrue(insurance['called'])
def test_image_set_data_upload_and_activate_notification_disabled(self):
insurance = {'called': False}
image = ImageStub(image_id=UUID1, name='image-1', status='queued',
created_at=DATETIME, updated_at=DATETIME,
owner=TENANT1, visibility='public')
context = glance.context.RequestContext(tenant=TENANT2, user=USER1)
fake_notifier = unit_test_utils.FakeNotifier()
image_proxy = glance.notifier.ImageProxy(image, context, fake_notifier)
def data_iterator():
fake_notifier.log = []
yield 'abcde'
yield 'fghij'
insurance['called'] = True
self.config(disabled_notifications=['image.activate', 'image.upload'])
image_proxy.set_data(data_iterator(), 10)
self.assertTrue(insurance['called'])
output_logs = fake_notifier.get_logs()
self.assertEqual(0, len(output_logs))
def test_image_set_data_storage_full(self):


@@ -43,7 +43,7 @@ class FakeImage(object):
locations = [{'url': 'file:///not/a/path', 'metadata': {}}]
tags = set([])
def set_data(self, data, size=None, backend=None, set_active=True):
self.size = 0
for d in data:
self.size += len(d)


@@ -12,6 +12,7 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cryptography import exceptions as crypto_exception
from cursive import exception as cursive_exception
from cursive import signature_utils
import glance_store
@@ -48,8 +49,11 @@ class ImageStub(object):
self.status = status
self.locations = locations or []
self.visibility = visibility
self.size = None
self.extra_properties = extra_properties or {}
self.os_hash_algo = None
self.os_hash_value = None
self.checksum = None
def delete(self):
self.status = 'deleted'
@@ -84,6 +88,104 @@ class FakeMemberRepo(object):
self.tenants.remove(member.member_id)
class TestStoreMultiBackends(utils.BaseTestCase):
def setUp(self):
self.store_api = unit_test_utils.FakeStoreAPI()
self.store_utils = unit_test_utils.FakeStoreUtils(self.store_api)
self.enabled_backends = {
"ceph1": "rbd",
"ceph2": "rbd"
}
super(TestStoreMultiBackends, self).setUp()
self.config(enabled_backends=self.enabled_backends)
@mock.patch("glance.location.signature_utils.get_verifier")
def test_set_data_calls_upload_to_store(self, msig):
context = glance.context.RequestContext(user=USER1)
extra_properties = {
'img_signature_certificate_uuid': 'UUID',
'img_signature_hash_method': 'METHOD',
'img_signature_key_type': 'TYPE',
'img_signature': 'VALID'
}
image_stub = ImageStub(UUID2, status='queued', locations=[],
extra_properties=extra_properties)
image = glance.location.ImageProxy(image_stub, context,
self.store_api, self.store_utils)
with mock.patch.object(image, "_upload_to_store") as mloc:
image.set_data('YYYY', 4, backend='ceph1')
msig.assert_called_once_with(context=context,
img_signature_certificate_uuid='UUID',
img_signature_hash_method='METHOD',
img_signature='VALID',
img_signature_key_type='TYPE')
mloc.assert_called_once_with('YYYY', msig.return_value, 'ceph1', 4)
self.assertEqual('active', image.status)
def test_image_set_data(self):
store_api = mock.MagicMock()
store_api.add_with_multihash.return_value = (
"rbd://ceph1", 4, "Z", "MH", {"backend": "ceph1"})
context = glance.context.RequestContext(user=USER1)
image_stub = ImageStub(UUID2, status='queued', locations=[])
image = glance.location.ImageProxy(image_stub, context,
store_api, self.store_utils)
image.set_data('YYYY', 4, backend='ceph1')
self.assertEqual(4, image.size)
# NOTE(markwash): FakeStore returns image_id for location
self.assertEqual("rbd://ceph1", image.locations[0]['url'])
self.assertEqual({"backend": "ceph1"}, image.locations[0]['metadata'])
self.assertEqual('Z', image.checksum)
self.assertEqual('active', image.status)
@mock.patch('glance.location.LOG')
def test_image_set_data_valid_signature(self, mock_log):
store_api = mock.MagicMock()
store_api.add_with_multihash.return_value = (
"rbd://ceph1", 4, "Z", "MH", {"backend": "ceph1"})
context = glance.context.RequestContext(user=USER1)
extra_properties = {
'img_signature_certificate_uuid': 'UUID',
'img_signature_hash_method': 'METHOD',
'img_signature_key_type': 'TYPE',
'img_signature': 'VALID'
}
image_stub = ImageStub(UUID2, status='queued',
extra_properties=extra_properties)
self.mock_object(signature_utils, 'get_verifier',
unit_test_utils.fake_get_verifier)
image = glance.location.ImageProxy(image_stub, context,
store_api, self.store_utils)
image.set_data('YYYY', 4, backend='ceph1')
self.assertEqual('active', image.status)
call = mock.call(u'Successfully verified signature for image %s',
UUID2)
mock_log.info.assert_has_calls([call])
@mock.patch("glance.location.signature_utils.get_verifier")
def test_image_set_data_invalid_signature(self, msig):
msig.return_value.verify.side_effect = \
crypto_exception.InvalidSignature
store_api = mock.MagicMock()
store_api.add_with_multihash.return_value = (
"rbd://ceph1", 4, "Z", "MH", {"backend": "ceph1"})
context = glance.context.RequestContext(user=USER1)
extra_properties = {
'img_signature_certificate_uuid': 'UUID',
'img_signature_hash_method': 'METHOD',
'img_signature_key_type': 'TYPE',
'img_signature': 'INVALID'
}
image_stub = ImageStub(UUID2, status='queued',
extra_properties=extra_properties)
image = glance.location.ImageProxy(image_stub, context,
store_api, self.store_utils)
self.assertRaises(cursive_exception.SignatureVerificationError,
image.set_data, 'YYYY', 4, backend='ceph1')
class TestStoreImage(utils.BaseTestCase):
def setUp(self):
locations = [{'url': '%s/%s' % (BASE_URI, UUID1),
@@ -124,7 +226,11 @@ class TestStoreImage(utils.BaseTestCase):
context = glance.context.RequestContext(user=USER1)
(image2, image_stub2) = self._add_image(context, UUID2, 'ZZZ', 3)
location_data = image2.locations[0]
with mock.patch("glance.location.store") as mock_store:
mock_store.get_size_from_uri_and_backend.return_value = 3
image1.locations.append(location_data)
self.assertEqual(2, len(image1.locations))
self.assertEqual(UUID2, location_data['url'])
@@ -603,8 +709,9 @@ class TestStoreImage(utils.BaseTestCase):
location2 = {'url': UUID2, 'metadata': {}}
location3 = {'url': UUID3, 'metadata': {}}
with mock.patch("glance.location.store") as mock_store:
mock_store.get_size_from_uri_and_backend.return_value = 4
image3.locations += [location2, location3]
self.assertEqual([location2, location3], image_stub3.locations)
self.assertEqual([location2, location3], image3.locations)
@@ -633,7 +740,9 @@ class TestStoreImage(utils.BaseTestCase):
location2 = {'url': UUID2, 'metadata': {}}
location3 = {'url': UUID3, 'metadata': {}}
with mock.patch("glance.location.store") as mock_store:
mock_store.get_size_from_uri_and_backend.return_value = 4
image3.locations += [location2, location3]
self.assertEqual(1, image_stub3.locations.index(location3))
@@ -660,7 +769,9 @@ class TestStoreImage(utils.BaseTestCase):
location2 = {'url': UUID2, 'metadata': {}}
location3 = {'url': UUID3, 'metadata': {}}
with mock.patch("glance.location.store") as mock_store:
mock_store.get_size_from_uri_and_backend.return_value = 4
image3.locations += [location2, location3]
self.assertEqual(1, image_stub3.locations.index(location3))
self.assertEqual(location2, image_stub3.locations[0])
@@ -690,7 +801,9 @@ class TestStoreImage(utils.BaseTestCase):
location3 = {'url': UUID3, 'metadata': {}}
location_bad = {'url': 'unknown://location', 'metadata': {}}
with mock.patch("glance.location.store") as mock_store:
mock_store.get_size_from_uri_and_backend.return_value = 4
image3.locations += [location2, location3]
self.assertIn(location3, image_stub3.locations)
self.assertNotIn(location_bad, image_stub3.locations)
@ -718,7 +831,9 @@ class TestStoreImage(utils.BaseTestCase):
image_stub3 = ImageStub('fake_image_id', status='queued', locations=[]) image_stub3 = ImageStub('fake_image_id', status='queued', locations=[])
image3 = glance.location.ImageProxy(image_stub3, context, image3 = glance.location.ImageProxy(image_stub3, context,
self.store_api, self.store_utils) self.store_api, self.store_utils)
image3.locations += [location2, location3] with mock.patch("glance.location.store") as mock_store:
mock_store.get_size_from_uri_and_backend.return_value = 4
image3.locations += [location2, location3]
image_stub3.locations.reverse() image_stub3.locations.reverse()
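The hunks above all apply the same pattern: `glance.location.store` is patched with a mock so that appending a location triggers a canned size lookup instead of a call to a real store backend. A minimal, self-contained sketch of that pattern follows; `FakeStore` and `add_location` are stand-ins for illustration, not Glance's real code.

```python
from unittest import mock

class FakeStore:
    def get_size_from_uri_and_backend(self, uri, backend):
        raise RuntimeError("would contact a real backend")

store = FakeStore()

def add_location(locations, url, backend):
    # The code under test validates a new location by asking the store
    # layer for the object's size before accepting it.
    size = store.get_size_from_uri_and_backend(url, backend)
    locations.append({"url": url, "metadata": {}, "size": size})
    return locations

# Patch the lookup so the test never touches a backend, mirroring
# mock_store.get_size_from_uri_and_backend.return_value = 4 above.
with mock.patch.object(store, "get_size_from_uri_and_backend",
                       return_value=4):
    locations = add_location([], "swift://container/object", "fast")

print(locations[0]["size"])  # → 4
```

Patching at the module boundary keeps the test focused on location bookkeeping rather than store connectivity, which is why every hunk wraps only the `image3.locations +=` statement.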

View File

@@ -71,7 +71,7 @@ class FakeImage(object):
             return self.data[offset:offset + chunk_size]
         return self.data[offset:]

-    def set_data(self, data, size=None, backend=None):
+    def set_data(self, data, size=None, backend=None, set_active=True):
         self.data = ''.join(data)
         self.size = size
         self.status = 'modified-by-fake'
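The new `set_active` flag lets the import flow write data without immediately activating the image, which matters when the same data must land in several stores. A hypothetical sketch of that behavior (names and statuses assumed for illustration, not Glance's real internals):

```python
class FakeImage:
    """Test double: stores data and flips status only when asked to."""

    def __init__(self):
        self.data = b""
        self.size = 0
        self.status = "importing"

    def set_data(self, data, size=None, backend=None, set_active=True):
        self.data = data
        self.size = size if size is not None else len(data)
        if set_active:
            self.status = "active"

image = FakeImage()
stores = ["fast", "cheap", "reliable"]
for i, store_id in enumerate(stores):
    # Activate only once the data has landed in every requested store.
    last = (i == len(stores) - 1)
    image.set_data(b"image-bits", backend=store_id, set_active=last)

print(image.status)  # → active
```

Without the flag, the first store's write would mark the image `active` while later stores are still being filled.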

View File

@@ -0,0 +1,31 @@
---
features:
- |
Add ability to import image into multiple stores during `interoperable
image import process`_.
upgrade:
- |
Add ability to import image into multiple stores during `interoperable
image import process`_.
This feature will only work if multiple stores are enabled in the
deployment.
It introduces 3 new optional body fields to the `import API path`:

- ``stores``: List containing the store IDs to import the image binary
  data to.
- ``all_stores``: To import the data in all configured stores.
- ``all_stores_must_succeed``: Controls whether the import has to succeed
  in all stores.

Users can follow workflow execution with 2 new reserved properties:

- ``os_glance_importing_to_stores``: list of stores that have not yet been
  processed.
- ``os_glance_failed_import``: Each time an import in a store fails, it is
  added to this list.

.. _`interoperable image import process`: https://developer.openstack.org/api-ref/image/v2/#interoperable-image-import
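The fields described in the release note travel in the JSON body of the import call. A sketch of such a body, assembled in Python; the store identifiers (``fast``, ``cheap``) are site-specific examples, and the endpoint shown in the comment follows the Image API v2 import path:

```python
import json

# Example import request body using the new multi-store fields.
body = {
    "method": {"name": "glance-direct"},
    "stores": ["fast", "cheap"],       # target store IDs (site-specific)
    "all_stores_must_succeed": False,  # tolerate per-store failures
}

# A client would POST this, e.g. to /v2/images/{image_id}/import,
# with the usual authentication headers.
print(json.dumps(body))
```

Alternatively, ``"all_stores": true`` can be sent instead of an explicit ``stores`` list to target every configured store.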