Merge "Copy existing image in multiple stores"

This commit is contained in:
Zuul 2020-02-13 14:05:25 +00:00 committed by Gerrit Code Review
commit 518b9cc5cb
16 changed files with 1045 additions and 38 deletions

View File

@ -51,12 +51,6 @@ Thus, the first step is:
of creating an image by making the :ref:`Stores Discovery
<store-discovery-call>` call.
.. note:: The Multi Store feature is introduced as EXPERIMENTAL in Rocky and
its use in production systems is currently **not supported**.
However we encourage people to use this feature for testing
purposes and report the issues so that we can make it stable and
fully supported in Stein release.
The glance-direct import method
-------------------------------
@ -83,6 +77,17 @@ The ``web-download`` workflow has **two** parts:
the import process. You will specify that you are using the
``web-download`` import method in the body of the import call.
The copy-image import method
------------------------------
The ``copy-image`` workflow has **two** parts:
1. Identify the existing image which you want to copy to other stores.
2. Issue the :ref:`Image Import <image-import-call>` call to complete
the import process. You will specify that you are using the
``copy-image`` import method in the body of the import call.
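As an illustrative sketch (not part of any official client library), the request body for a ``copy-image`` import call can be assembled like this; the store names are hypothetical examples:

```python
import json

def build_copy_image_body(stores=None, all_stores=False,
                          all_stores_must_succeed=True):
    """Assemble the JSON body for an image-import call that uses the
    copy-image method."""
    body = {'method': {'name': 'copy-image'},
            'all_stores_must_succeed': all_stores_must_succeed}
    if all_stores:
        # copy to every store configured via enabled_backends
        body['all_stores'] = True
    else:
        # copy only to an explicit list of stores
        body['stores'] = list(stores or [])
    return json.dumps(body)

# e.g. POST this body to /v2/images/{image_id}/import
print(build_copy_image_body(stores=['cheap', 'fast']))
```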
.. _image-stage-call:
Stage binary image data
@ -171,6 +176,9 @@ call.
In the ``web-download`` workflow, the data is made available to the Image
service by being posted to an accessible location with a URL that you know.
In the ``copy-image`` workflow, the data is made available to the Image
service by copying existing image data to the staging area.
Beginning with API version 2.8, an optional ``stores`` parameter may be added
to the body request. When present, it contains the list of backing store
identifiers to import the image binary data to. If at least one store
@ -249,10 +257,34 @@ If you are using the ``web-download`` import method:
restricted in a particular cloud. Consult the cloud's local documentation
for details.
If you are using the ``copy-image`` import method:
- The image status must be ``active``. (This indicates that image data is
associated with the image.)
- The body of your request must indicate that you are using the
  ``copy-image`` import method, and it must contain either the list of
  stores to which you want to copy your image, or ``all_stores``, which
  copies the image to all of the available stores configured in
  **glance-api.conf** using the ``enabled_backends`` configuration option.
- If the body of your request contains ``all_stores_must_succeed``
  (defaults to True) and an error occurs while copying to at least one
  store, the request is rejected, the data is deleted from the new stores
  to which copying was done (not from staging), and the state of the
  image remains unchanged.
- If the body of your request contains ``all_stores_must_succeed`` set to
  False and an error occurs, the request fails (with data deleted from
  the new stores) only if the copying fails on all of the stores
  specified by the user. In case of partial success, the locations added
  to the image are the stores where the data was uploaded correctly.
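The ``all_stores_must_succeed`` behavior described above can be summarized in a small sketch; this is illustrative only, not actual Glance code:

```python
def resolve_copy_outcome(results, all_stores_must_succeed=True):
    """Given per-store copy results (store name -> success flag),
    return which locations end up added to the image, per the rules
    documented above."""
    succeeded = [s for s, ok in results.items() if ok]
    failed = [s for s, ok in results.items() if not ok]
    if failed and (all_stores_must_succeed or not succeeded):
        # request rejected: data removed from the new stores,
        # image state unchanged
        return {'rejected': True, 'locations_added': []}
    # partial or full success: only the stores where data uploaded
    # correctly become new locations
    return {'rejected': False, 'locations_added': succeeded}
```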
**Synchronous Postconditions**
- With correct permissions, you can see the image status as
``importing`` through API calls. (Be aware, however, that if the import
``importing`` (only for glance-direct and web-download import methods)
through API calls. (Be aware, however, that if the import
process completes before you make the API call, the image may already
show as ``active``.)
@ -289,4 +321,8 @@ Request Example - web-download import method
.. literalinclude:: samples/image-import-w-d-request.json
:language: json
Request Example - copy-image import method
--------------------------------------------
.. literalinclude:: samples/image-import-c-i-request.json
:language: json

View File

@ -0,0 +1,9 @@
{
"method": {
"name": "copy-image"
},
"stores": ["common", "cheap", "fast", "reliable"],
"all_stores_must_succeed": false,
"all_stores": false
}

View File

@ -89,9 +89,9 @@ task-related policies:
Image Import Methods
--------------------
Glance provides two import methods that you can make available to your
users: ``glance-direct`` and ``web-download``. By default, both methods
are enabled.
Glance provides three import methods that you can make available to your
users: ``glance-direct``, ``web-download`` and ``copy-image``. By default,
all three methods are enabled.
* The ``glance-direct`` import method allows your users to upload image data
directly to Glance.
@ -108,6 +108,10 @@ are enabled.
Queens release of Glance (16.x.x) is the final version in which you can
expect to find the Image API v1.
* The ``copy-image`` method allows an end user to copy an existing image
  to other Glance backends available in the deployment. This import
  method can only be used if multiple glance backends are enabled in your
  deployment.
You control which methods are available to API users by the
``enabled_import_methods`` configuration option in the default section of the
**glance-api.conf** file.
@ -221,8 +225,33 @@ be either 80 or 443.)
subdirectory of the Glance source code tree. Make sure that you are looking
in the correct branch for the OpenStack release you are working with.
Configuring the copy-image method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For the ``copy-image`` method, make sure that ``copy-image`` is included
in the list specified by your ``enabled_import_methods`` setting, and
that you have multiple glance backends configured in your environment.
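For illustration, a minimal **glance-api.conf** fragment enabling the method with two file-backed stores might look like the following; the store names and paths are examples only:

```ini
[DEFAULT]
enabled_backends = fast:file, cheap:file
enabled_import_methods = [glance-direct,web-download,copy-image]

[glance_store]
default_backend = fast

[fast]
filesystem_store_datadir = /opt/stack/data/glance/fast

[cheap]
filesystem_store_datadir = /opt/stack/data/glance/cheap
```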
.. _iir_plugins:
Copying existing-image in multiple stores
-----------------------------------------
Starting with the Ussuri release, it is possible to copy existing image
data into multiple stores using the interoperable image import workflow.
An operator or end user can copy an existing image either by specifying
``all_stores`` as True in the request body or by passing a list of
desired stores in the request body. If ``all_stores`` is specified and
the image data is already present in some of the available stores, those
stores are silently excluded from the list of all configured stores. If,
on the other hand, ``all_stores`` is False and ``stores`` are specified
explicitly in the request body, the request is rejected if the image
data is already present in any of the specified stores.
The image is copied to the staging area from one of its available
locations, and import processing then continues using the import
workflow, as explained in the ``Importing in multiple stores`` section
below.
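The exclusion and rejection rules described above can be sketched as follows; this is an illustrative helper mirroring the documented semantics, not the actual Glance implementation:

```python
def plan_copy_stores(existing_stores, requested_stores, all_stores):
    """Decide which stores the image will be copied to."""
    if all_stores:
        # stores already holding the image data are silently excluded
        return [s for s in requested_stores if s not in existing_stores]
    # explicit 'stores' list: any overlap with existing data is an error
    for store in requested_stores:
        if store in existing_stores:
            raise ValueError(
                "Image is already present at store '%s'" % store)
    return list(requested_stores)
```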
Importing in multiple stores
----------------------------

View File

@ -40,7 +40,7 @@ from glance.common import utils
from glance.common import wsgi
import glance.db
import glance.gateway
from glance.i18n import _, _LW
from glance.i18n import _, _LI, _LW
import glance.notifier
import glance.schema
@ -109,9 +109,13 @@ class ImagesController(object):
try:
image = image_repo.get(image_id)
if image.status == 'active':
if image.status == 'active' and import_method != "copy-image":
msg = _("Image with status active cannot be target for import")
raise exception.Conflict(msg)
if image.status != 'active' and import_method == "copy-image":
msg = _("Only images with status active can be targeted for "
"copying")
raise exception.Conflict(msg)
if image.status != 'queued' and import_method == 'web-download':
msg = _("Image needs to be in 'queued' state to use "
"'web-download' method")
@ -135,6 +139,34 @@ class ImagesController(object):
except glance_store.UnknownScheme as exc:
LOG.warn(exc.msg)
raise exception.Conflict(exc.msg)
# NOTE(abhishekk): If all_stores is specified and import_method is
# 'copy-image', then remove those stores where the image is already
# present.
all_stores = body.get('all_stores', False)
if import_method == 'copy-image' and all_stores:
for loc in image.locations:
existing_store = loc['metadata']['store']
if existing_store in stores:
LOG.debug("Removing store '%s' from all stores as "
"image is already available in that "
"store." % existing_store)
stores.remove(existing_store)
if len(stores) == 0:
LOG.info(_LI("Exiting copying workflow as image is "
"available in all configured stores."))
return image_id
# validate if image is already existing in given stores when
# all_stores is False
if import_method == 'copy-image' and not all_stores:
for loc in image.locations:
existing_store = loc['metadata']['store']
if existing_store in stores:
msg = _("Image is already present at store "
"'%s'") % existing_store
raise webob.exc.HTTPBadRequest(explanation=msg)
except exception.Conflict as e:
raise webob.exc.HTTPConflict(explanation=e.msg)
except exception.NotFound as e:

View File

@ -0,0 +1,124 @@
# Copyright 2020 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import glance_store as store_api
from oslo_config import cfg
from oslo_log import log as logging
from taskflow.patterns import linear_flow as lf
from taskflow import task
from taskflow.types import failure
from glance.common import exception
from glance.i18n import _, _LE
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class _CopyImage(task.Task):
default_provides = 'file_uri'
def __init__(self, task_id, task_type, image_repo, image_id):
self.task_id = task_id
self.task_type = task_type
self.image_repo = image_repo
self.image_id = image_id
super(_CopyImage, self).__init__(
name='%s-CopyImage-%s' % (task_type, task_id))
self.staging_store = store_api.get_store_from_store_identifier(
'os_glance_staging_store')
def execute(self):
"""Create temp file into store and return path to it
:param image_id: Glance Image ID
"""
image = self.image_repo.get(self.image_id)
# NOTE (abhishekk): If ``all_stores_must_succeed`` is set to True and
# the copying task fails, we keep the data in the staging area as-is,
# so that a second call to copy the same image does not need to stage
# the data again.
file_path = "%s/%s" % (getattr(
CONF, 'os_glance_staging_store').filesystem_store_datadir,
self.image_id)
if os.path.exists(file_path):
return file_path, 0
# At first search image in default_backend
default_store = CONF.glance_store.default_backend
for loc in image.locations:
if loc['metadata'].get('store') == default_store:
try:
return self._copy_to_staging_store(loc)
except store_api.exceptions.NotFound:
msg = (_LE("Image not present in default store, searching "
"in all glance-api specific available "
"stores"))
LOG.error(msg)
break
available_backends = CONF.enabled_backends
for loc in image.locations:
image_backend = loc['metadata'].get('store')
if (image_backend in available_backends.keys()
and image_backend != default_store):
try:
return self._copy_to_staging_store(loc)
except store_api.exceptions.NotFound:
LOG.error(_LE('Image: %(img_id)s is not present in store '
'%(store)s.'),
{'img_id': self.image_id,
'store': image_backend})
continue
raise exception.NotFound(_("Image not found in any configured "
"store"))
def _copy_to_staging_store(self, loc):
store_backend = loc['metadata'].get('store')
image_data, size = store_api.get(loc['url'], store_backend)
msg = ("Found image, copying it in staging area")
LOG.debug(msg)
return self.staging_store.add(self.image_id, image_data, size)[0]
def revert(self, result, **kwargs):
if isinstance(result, failure.Failure):
LOG.error(_LE('Task: %(task_id)s failed to copy image '
'%(image_id)s.'),
{'task_id': self.task_id,
'image_id': self.image_id})
def get_flow(**kwargs):
"""Return task flow for web-download.
:param task_id: Task ID.
:param task_type: Type of the task.
:param image_repo: Image repository used.
:param uri: URI the image data is downloaded from.
"""
task_id = kwargs.get('task_id')
task_type = kwargs.get('task_type')
image_repo = kwargs.get('image_repo')
image_id = kwargs.get('image_id')
return lf.Flow(task_type).add(
_CopyImage(task_id, task_type, image_repo, image_id),
)

View File

@ -320,11 +320,13 @@ class _ImportToStore(task.Task):
class _SaveImage(task.Task):
def __init__(self, task_id, task_type, image_repo, image_id):
def __init__(self, task_id, task_type, image_repo, image_id,
import_method):
self.task_id = task_id
self.task_type = task_type
self.image_repo = image_repo
self.image_id = image_id
self.import_method = import_method
super(_SaveImage, self).__init__(
name='%s-SaveImage-%s' % (task_type, task_id))
@ -334,7 +336,8 @@ class _SaveImage(task.Task):
:param image_id: Glance Image ID
"""
new_image = self.image_repo.get(self.image_id)
if new_image.status == 'saving':
if (self.import_method != 'copy-image'
and new_image.status == 'importing'):
# NOTE(flaper87): THIS IS WRONG!
# we should be doing atomic updates to avoid
# race conditions. This happens in other places
@ -410,7 +413,7 @@ def get_flow(**kwargs):
if not CONF.enabled_backends and not CONF.node_staging_uri.endswith('/'):
separator = '/'
if not uri and import_method == 'glance-direct':
if not uri and import_method in ['glance-direct', 'copy-image']:
if CONF.enabled_backends:
separator, staging_dir = store_utils.get_dir_separator()
uri = separator.join((staging_dir, str(image_id)))
@ -419,9 +422,9 @@ def get_flow(**kwargs):
flow = lf.Flow(task_type, retry=retry.AlwaysRevert())
if import_method == 'web-download':
downloadToStaging = internal_plugins.get_import_plugin(**kwargs)
flow.add(downloadToStaging)
if import_method in ['web-download', 'copy-image']:
internal_plugin = internal_plugins.get_import_plugin(**kwargs)
flow.add(internal_plugin)
if CONF.enabled_backends:
separator, staging_dir = store_utils.get_dir_separator()
file_uri = separator.join((staging_dir, str(image_id)))
@ -437,6 +440,8 @@ def get_flow(**kwargs):
for idx, store in enumerate(stores, 1):
set_active = (not all_stores_must_succeed) or (idx == len(stores))
if import_method == 'copy-image':
set_active = False
task_name = task_type + "-" + (store or "")
import_task = lf.Flow(task_name)
import_to_store = _ImportToStore(task_id,
@ -456,7 +461,8 @@ def get_flow(**kwargs):
save_task = _SaveImage(task_id,
task_type,
image_repo,
image_id)
image_id,
import_method)
flow.add(save_task)
complete_task = _CompleteTask(task_id,
@ -467,7 +473,9 @@ def get_flow(**kwargs):
image = image_repo.get(image_id)
from_state = image.status
image.status = 'importing'
if import_method != 'copy-image':
image.status = 'importing'
image.extra_properties[
'os_glance_importing_to_stores'] = ','.join((store for store in
stores if

View File

@ -684,14 +684,15 @@ Related options:
cfg.ListOpt('enabled_import_methods',
item_type=cfg.types.String(quotes=True),
bounds=True,
default=['glance-direct', 'web-download'],
default=['glance-direct', 'web-download',
'copy-image'],
help=_("""
List of enabled Image Import Methods
Both 'glance-direct' and 'web-download' are enabled by default.
'glance-direct', 'copy-image' and 'web-download' are enabled by default.
Related options:
* [DEFAULT]/node_staging_uri""")),
]
CONF = cfg.CONF

View File

@ -553,6 +553,7 @@ class ApiServerForMultipleBackend(Server):
self.metadata_encryption_key = "012345678901234567890123456789ab"
self.image_dir_backend_1 = os.path.join(self.test_dir, "images_1")
self.image_dir_backend_2 = os.path.join(self.test_dir, "images_2")
self.image_dir_backend_3 = os.path.join(self.test_dir, "images_3")
self.staging_dir = os.path.join(self.test_dir, "staging")
self.pid_file = pid_file or os.path.join(self.test_dir,
"multiple_backend_api.pid")
@ -620,7 +621,7 @@ image_tag_quota=%(image_tag_quota)s
image_location_quota=%(image_location_quota)s
location_strategy=%(location_strategy)s
allow_additional_image_properties = True
enabled_backends=file1:file,file2:file
enabled_backends=file1:file,file2:file,file3:file
[oslo_policy]
policy_file = %(policy_file)s
policy_default_rule = %(policy_default_rule)s
@ -634,6 +635,8 @@ default_backend = %(default_backend)s
filesystem_store_datadir=%(image_dir_backend_1)s
[file2]
filesystem_store_datadir=%(image_dir_backend_2)s
[file3]
filesystem_store_datadir=%(image_dir_backend_3)s
[import_filtering_opts]
allowed_ports = []
[os_glance_staging_store]

View File

@ -82,3 +82,50 @@ def wait_for_status(request_path, request_headers, status='active',
entity_id = request_path.rsplit('/', 1)[1]
msg = "Entity {0} failed to reach status '{1}' within {2} sec"
raise Exception(msg.format(entity_id, status, max_sec))
def wait_for_copying(request_path, request_headers, stores=[],
max_sec=10, delay_sec=0.2, start_delay_sec=None,
failure_scenario=False):
"""
Performs a time-bounded wait until the image at the request_path
has been copied to all of the specified stores.
:param request_path: path to use to make the request
:param request_headers: headers to use when making the request
:param stores: list of stores to copy
:param max_sec: the maximum number of seconds to wait (default: 10)
:param delay_sec: seconds to sleep before the next request is
made (default: 0.2)
:param start_delay_sec: seconds to wait before making the first
request (default: None)
:raises Exception: if the entity fails to reach the status within
the requested time or if the server returns something
other than a 200 response
"""
start_time = time.time()
done_time = start_time + max_sec
if start_delay_sec:
time.sleep(start_delay_sec)
while time.time() <= done_time:
resp = requests.get(request_path, headers=request_headers)
if resp.status_code != http.OK:
raise Exception("Received {} response from server".format(
resp.status_code))
entity = jsonutils.loads(resp.text)
# The image counts as copied only when every requested store is
# present in the image's 'stores' list.
all_copied = all(store in entity['stores'] for store in stores)
if all_copied:
return
time.sleep(delay_sec)
if not failure_scenario:
entity_id = request_path.rsplit('/', 1)[1]
msg = "Entity {0} failed to copy image to stores '{1}' within {2} sec"
raise Exception(msg.format(entity_id, ",".join(stores), max_sec))

View File

@ -14,6 +14,7 @@
# under the License.
import hashlib
import os
import uuid
from oslo_serialization import jsonutils
@ -4409,7 +4410,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertIn("glance-direct", discovery_calls)
# file1, file2 and file3 should be available in discovery response
available_stores = ['file1', 'file2']
available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
@ -4572,7 +4573,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertIn("glance-direct", discovery_calls)
# file1, file2 and file3 should be available in discovery response
available_stores = ['file1', 'file2']
available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
@ -4736,7 +4737,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertIn("web-download", discovery_calls)
# file1, file2 and file3 should be available in discovery response
available_stores = ['file1', 'file2']
available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
@ -4899,7 +4900,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertIn("web-download", discovery_calls)
# file1, file2 and file3 should be available in discovery response
available_stores = ['file1', 'file2']
available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
@ -5063,7 +5064,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertIn("web-download", discovery_calls)
# file1, file2 and file3 should be available in discovery response
available_stores = ['file1', 'file2']
available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
@ -5206,6 +5207,436 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.stop_servers()
def test_copy_image_lifecycle(self):
self.start_servers(**self.__dict__.copy())
# Image list should be empty
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(0, len(images))
# copy-image should be available in discovery response
path = self._url('/v2/info/import')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
discovery_calls = jsonutils.loads(
response.text)['import-methods']['value']
self.assertIn("copy-image", discovery_calls)
# file1, file2 and file3 should be available in discovery response
available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
discovery_calls = jsonutils.loads(
response.text)['stores']
# os_glance_staging_store should not be available in discovery response
for stores in discovery_calls:
self.assertIn('id', stores)
self.assertIn(stores['id'], available_stores)
self.assertFalse(stores["id"].startswith("os_glance_"))
# Create an image
path = self._url('/v2/images')
headers = self._headers({'content-type': 'application/json'})
data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
'disk_format': 'aki',
'container_format': 'aki'})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.CREATED, response.status_code)
# Check 'OpenStack-image-store-ids' header present in response
self.assertIn('OpenStack-image-store-ids', response.headers)
for store in available_stores:
self.assertIn(store, response.headers['OpenStack-image-store-ids'])
# Returned image entity should have a generated id and status
image = jsonutils.loads(response.text)
image_id = image['id']
checked_keys = set([
u'status',
u'name',
u'tags',
u'created_at',
u'updated_at',
u'visibility',
u'self',
u'protected',
u'id',
u'file',
u'min_disk',
u'type',
u'min_ram',
u'schema',
u'disk_format',
u'container_format',
u'owner',
u'checksum',
u'size',
u'virtual_size',
u'os_hidden',
u'os_hash_algo',
u'os_hash_value'
])
self.assertEqual(checked_keys, set(image.keys()))
expected_image = {
'status': 'queued',
'name': 'image-1',
'tags': [],
'visibility': 'shared',
'self': '/v2/images/%s' % image_id,
'protected': False,
'file': '/v2/images/%s/file' % image_id,
'min_disk': 0,
'type': 'kernel',
'min_ram': 0,
'schema': '/v2/schemas/image',
}
for key, value in expected_image.items():
self.assertEqual(value, image[key], key)
# Image list should now have one entry
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(1, len(images))
self.assertEqual(image_id, images[0]['id'])
# Verify image is in queued state and checksum is None
func_utils.verify_image_hashes_and_status(self, image_id,
status='queued')
# Import image to multiple stores
path = self._url('/v2/images/%s/import' % image_id)
headers = self._headers({
'content-type': 'application/json',
'X-Roles': 'admin'
})
# Start http server locally
thread, httpd, port = test_utils.start_standalone_http_server()
image_data_uri = 'http://localhost:%s/' % port
data = jsonutils.dumps(
{'method': {'name': 'web-download', 'uri': image_data_uri},
'stores': ['file1']})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.ACCEPTED, response.status_code)
# Verify image is in active state and checksum is set
# NOTE(abhishekk): As import is an async call, we need to allow
# some time for the call to complete.
path = self._url('/v2/images/%s' % image_id)
func_utils.wait_for_status(request_path=path,
request_headers=self._headers(),
status='active',
max_sec=40,
delay_sec=0.2,
start_delay_sec=1)
with requests.get(image_data_uri) as r:
expect_c = six.text_type(hashlib.md5(r.content).hexdigest())
expect_h = six.text_type(hashlib.sha512(r.content).hexdigest())
func_utils.verify_image_hashes_and_status(self,
image_id,
checksum=expect_c,
os_hash_value=expect_h,
status='active')
# kill the local http server
httpd.shutdown()
httpd.server_close()
# Ensure image is created in the one store
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file1', jsonutils.loads(response.text)['stores'])
# Copy newly created image to file2 and file3 stores
path = self._url('/v2/images/%s/import' % image_id)
headers = self._headers({
'content-type': 'application/json',
'X-Roles': 'admin'
})
data = jsonutils.dumps(
{'method': {'name': 'copy-image'},
'stores': ['file2', 'file3']})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.ACCEPTED, response.status_code)
# Verify image is copied
# NOTE(abhishekk): As import is an async call, we need to allow
# some time for the call to complete.
path = self._url('/v2/images/%s' % image_id)
func_utils.wait_for_copying(request_path=path,
request_headers=self._headers(),
stores=['file2', 'file3'],
max_sec=40,
delay_sec=0.2,
start_delay_sec=1)
# Ensure image is copied to the file2 and file3 store
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file2', jsonutils.loads(response.text)['stores'])
self.assertIn('file3', jsonutils.loads(response.text)['stores'])
# Deleting image should work
path = self._url('/v2/images/%s' % image_id)
response = requests.delete(path, headers=self._headers())
self.assertEqual(http.NO_CONTENT, response.status_code)
# Image list should now be empty
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(0, len(images))
self.stop_servers()
def test_copy_image_revert_lifecycle(self):
# Test that if the copying task fails midway, the rollback
# deletes the data only from the stores to which it was copied,
# and not from the existing stores.
self.start_servers(**self.__dict__.copy())
# Image list should be empty
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(0, len(images))
# copy-image should be available in discovery response
path = self._url('/v2/info/import')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
discovery_calls = jsonutils.loads(
response.text)['import-methods']['value']
self.assertIn("copy-image", discovery_calls)
# file1, file2 and file3 should be available in discovery response
available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
discovery_calls = jsonutils.loads(
response.text)['stores']
# os_glance_staging_store should not be available in discovery response
for stores in discovery_calls:
self.assertIn('id', stores)
self.assertIn(stores['id'], available_stores)
self.assertFalse(stores["id"].startswith("os_glance_"))
# Create an image
path = self._url('/v2/images')
headers = self._headers({'content-type': 'application/json'})
data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
'disk_format': 'aki',
'container_format': 'aki'})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.CREATED, response.status_code)
# Check 'OpenStack-image-store-ids' header present in response
self.assertIn('OpenStack-image-store-ids', response.headers)
for store in available_stores:
self.assertIn(store, response.headers['OpenStack-image-store-ids'])
# Returned image entity should have a generated id and status
image = jsonutils.loads(response.text)
image_id = image['id']
checked_keys = set([
u'status',
u'name',
u'tags',
u'created_at',
u'updated_at',
u'visibility',
u'self',
u'protected',
u'id',
u'file',
u'min_disk',
u'type',
u'min_ram',
u'schema',
u'disk_format',
u'container_format',
u'owner',
u'checksum',
u'size',
u'virtual_size',
u'os_hidden',
u'os_hash_algo',
u'os_hash_value'
])
self.assertEqual(checked_keys, set(image.keys()))
expected_image = {
'status': 'queued',
'name': 'image-1',
'tags': [],
'visibility': 'shared',
'self': '/v2/images/%s' % image_id,
'protected': False,
'file': '/v2/images/%s/file' % image_id,
'min_disk': 0,
'type': 'kernel',
'min_ram': 0,
'schema': '/v2/schemas/image',
}
for key, value in expected_image.items():
self.assertEqual(value, image[key], key)
# Image list should now have one entry
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(1, len(images))
self.assertEqual(image_id, images[0]['id'])
# Verify image is in queued state and checksum is None
func_utils.verify_image_hashes_and_status(self, image_id,
status='queued')
# Import image to multiple stores
path = self._url('/v2/images/%s/import' % image_id)
headers = self._headers({
'content-type': 'application/json',
'X-Roles': 'admin'
})
# Start http server locally
thread, httpd, port = test_utils.start_standalone_http_server()
image_data_uri = 'http://localhost:%s/' % port
data = jsonutils.dumps(
{'method': {'name': 'web-download', 'uri': image_data_uri},
'stores': ['file1']})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.ACCEPTED, response.status_code)
# Verify image is in active state and checksum is set
# NOTE(abhishekk): As import is an async call, we need to allow
# some time for the call to complete.
path = self._url('/v2/images/%s' % image_id)
func_utils.wait_for_status(request_path=path,
request_headers=self._headers(),
status='active',
max_sec=40,
delay_sec=0.2,
start_delay_sec=1)
with requests.get(image_data_uri) as r:
expect_c = six.text_type(hashlib.md5(r.content).hexdigest())
expect_h = six.text_type(hashlib.sha512(r.content).hexdigest())
func_utils.verify_image_hashes_and_status(self,
image_id,
checksum=expect_c,
os_hash_value=expect_h,
status='active')
# kill the local http server
httpd.shutdown()
httpd.server_close()
# Ensure image is created in the one store
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file1', jsonutils.loads(response.text)['stores'])
# Copy newly created image to file2 and file3 stores
path = self._url('/v2/images/%s/import' % image_id)
headers = self._headers({
'content-type': 'application/json',
'X-Roles': 'admin'
})
data = jsonutils.dumps(
{'method': {'name': 'copy-image'},
'stores': ['file2', 'file3']})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.ACCEPTED, response.status_code)
# NOTE(abhishekk): Deleting file3 image directory to trigger the
# failure, so that we can verify that revert call does not delete
# the data from existing stores
os.rmdir(self.test_dir + "/images_3")
# Verify image is copied
# NOTE(abhishekk): As import is an async call, we need to allow
# some time for the call to complete.
path = self._url('/v2/images/%s' % image_id)
func_utils.wait_for_copying(request_path=path,
request_headers=self._headers(),
stores=['file2'],
max_sec=10,
delay_sec=0.2,
start_delay_sec=1,
failure_scenario=True)
# Ensure data is not deleted from existing stores on failure
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file1', jsonutils.loads(response.text)['stores'])
self.assertNotIn('file2', jsonutils.loads(response.text)['stores'])
self.assertNotIn('file3', jsonutils.loads(response.text)['stores'])
# Copy newly created image to file2 and file3 stores with
# all_stores_must_succeed set to False.
path = self._url('/v2/images/%s/import' % image_id)
headers = self._headers({
'content-type': 'application/json',
'X-Roles': 'admin'
})
data = jsonutils.dumps(
{'method': {'name': 'copy-image'},
'stores': ['file2', 'file3'],
'all_stores_must_succeed': False})
response = requests.post(path, headers=headers, data=data)
self.assertEqual(http.ACCEPTED, response.status_code)
# Verify image is copied
# NOTE(abhishekk): As import is an async call we need to allow
# some time for the call to complete.
path = self._url('/v2/images/%s' % image_id)
func_utils.wait_for_copying(request_path=path,
request_headers=self._headers(),
stores=['file2'],
max_sec=10,
delay_sec=0.2,
start_delay_sec=1,
failure_scenario=True)
# Ensure data is not deleted from existing stores as well as
# from the stores where it is copied successfully
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file1', jsonutils.loads(response.text)['stores'])
self.assertIn('file2', jsonutils.loads(response.text)['stores'])
self.assertNotIn('file3', jsonutils.loads(response.text)['stores'])
# Deleting image should work
path = self._url('/v2/images/%s' % image_id)
response = requests.delete(path, headers=self._headers())
self.assertEqual(http.NO_CONTENT, response.status_code)
# Image list should now be empty
path = self._url('/v2/images')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
images = jsonutils.loads(response.text)['images']
self.assertEqual(0, len(images))
self.stop_servers()
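The calls to ``func_utils.wait_for_copying`` above poll the image until the requested stores appear (or a timeout expires). A minimal sketch of that polling pattern, assuming a ``get_image`` callable that returns the image as a dict (the real helper queries the v2 API over HTTP):

```python
import time


def wait_for_stores(get_image, stores, max_sec=10, delay_sec=0.2):
    """Poll until the image reports all requested stores, or time out."""
    deadline = time.monotonic() + max_sec
    while time.monotonic() < deadline:
        # 'stores' is exposed as a comma-separated string on the image.
        current = get_image().get('stores', '')
        if all(s in current for s in stores):
            return True
        time.sleep(delay_sec)
    return False
```

With ``failure_scenario=True`` the real helper inverts this expectation, waiting out the timeout to confirm the store never appears.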
def test_image_import_multi_stores_specifying_all_stores(self):
self.config(node_staging_uri="file:///tmp/staging/")
self.start_servers(**self.__dict__.copy())
@@ -5226,7 +5657,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertIn("web-download", discovery_calls)
# file1 and file2 should be available in discovery response
-    available_stores = ['file1', 'file2']
+    available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
@@ -5352,6 +5783,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
path = self._url('/v2/images/%s' % image_id)
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
self.assertIn('file3', jsonutils.loads(response.text)['stores'])
self.assertIn('file2', jsonutils.loads(response.text)['stores'])
self.assertIn('file1', jsonutils.loads(response.text)['stores'])
@@ -5379,7 +5811,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertEqual(0, len(images))
# file1 and file2 should be available in discovery response
-    available_stores = ['file1', 'file2']
+    available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)
@@ -5548,7 +5980,7 @@ class TestImagesMultipleBackend(functional.MultipleBackendFunctionalTest):
self.assertEqual(0, len(images))
# file1 and file2 should be available in discovery response
-    available_stores = ['file1', 'file2']
+    available_stores = ['file1', 'file2', 'file3']
path = self._url('/v2/info/stores')
response = requests.get(path, headers=self._headers())
self.assertEqual(http.OK, response.status_code)


@@ -0,0 +1,153 @@
# Copyright 2020 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import mock
import glance_store as store_api
from oslo_config import cfg
from glance.async_.flows._internal_plugins import copy_image
import glance.common.exception as exception
from glance import domain
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils
CONF = cfg.CONF
DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355)
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
FAKEHASHALGO = 'fake-name-for-sha512'
CHKSUM = '93264c3edf5972c9f1cb309543d38a5c'
RESERVED_STORES = {
'os_glance_staging_store': 'file',
}
def _db_fixture(id, **kwargs):
obj = {
'id': id,
'name': None,
'visibility': 'shared',
'properties': {},
'checksum': None,
'os_hash_algo': FAKEHASHALGO,
'os_hash_value': None,
'owner': None,
'status': 'queued',
'tags': [],
'size': None,
'virtual_size': None,
'locations': [],
'protected': False,
'disk_format': None,
'container_format': None,
'deleted': False,
'min_ram': None,
'min_disk': None,
}
obj.update(kwargs)
return obj
class TestCopyImageTask(test_utils.BaseTestCase):
def setUp(self):
super(TestCopyImageTask, self).setUp()
self.db = unit_test_utils.FakeDB(initialize=False)
self._create_images()
self.image_repo = mock.MagicMock()
self.task_repo = mock.MagicMock()
self.image_id = UUID1
self.staging_store = mock.MagicMock()
self.task_factory = domain.TaskFactory()
task_input = {
"import_req": {
'method': {
'name': 'copy-image',
},
'stores': ['fast']
}
}
task_ttl = CONF.task.task_time_to_live
self.task_type = 'import'
self.task = self.task_factory.new_task(self.task_type, TENANT1,
task_time_to_live=task_ttl,
task_input=task_input)
stores = {'cheap': 'file', 'fast': 'file'}
self.config(enabled_backends=stores)
store_api.register_store_opts(CONF, reserved_stores=RESERVED_STORES)
self.config(default_backend='fast', group='glance_store')
store_api.create_multi_stores(CONF, reserved_stores=RESERVED_STORES)
def _create_images(self):
self.images = [
_db_fixture(UUID1, owner=TENANT1, checksum=CHKSUM,
name='1', size=512, virtual_size=2048,
visibility='public',
disk_format='raw',
container_format='bare',
status='active',
tags=['redhat', '64bit', 'power'],
properties={'hypervisor_type': 'kvm', 'foo': 'bar',
'bar': 'foo'},
locations=[{'url': 'file://%s/%s' % (self.test_dir,
UUID1),
'metadata': {'store': 'fast'},
'status': 'active'}],
created_at=DATETIME + datetime.timedelta(seconds=1)),
]
[self.db.image_create(None, image) for image in self.images]
self.db.image_tag_set_all(None, UUID1, ['ping', 'pong'])
@mock.patch.object(store_api, 'get_store_from_store_identifier')
def test_copy_image_to_staging_store(self, mock_store_api):
mock_store_api.return_value = self.staging_store
copy_image_task = copy_image._CopyImage(
self.task.task_id, self.task_type, self.image_repo,
self.image_id)
with mock.patch.object(self.image_repo, 'get') as get_mock:
get_mock.return_value = mock.MagicMock(
image_id=self.images[0]['id'],
locations=self.images[0]['locations'],
status=self.images[0]['status']
)
with mock.patch.object(store_api, 'get') as get_data:
get_data.return_value = (b"dddd", 4)
copy_image_task.execute()
self.staging_store.add.assert_called_once()
mock_store_api.assert_called_once_with(
"os_glance_staging_store")
@mock.patch.object(store_api, 'get_store_from_store_identifier')
def test_copy_non_existing_image_to_staging_store(self, mock_store_api):
mock_store_api.return_value = self.staging_store
copy_image_task = copy_image._CopyImage(
self.task.task_id, self.task_type, self.image_repo,
self.image_id)
with mock.patch.object(self.image_repo, 'get') as get_mock:
get_mock.side_effect = exception.NotFound()
self.assertRaises(exception.NotFound, copy_image_task.execute)
mock_store_api.assert_called_once_with(
"os_glance_staging_store")


@@ -35,7 +35,8 @@ class TestInfoControllers(test_utils.BaseTestCase):
def test_get_import_info(self):
# TODO(rosmaita): change this when import methods are
# listed in the config file
-        import_methods = ['glance-direct', 'web-download']
+        import_methods = ['glance-direct', 'web-download',
+                          'copy-image']
req = unit_test_utils.get_fake_request()
output = self.controller.get_image_import(req)


@@ -22,6 +22,7 @@ import uuid
from castellan.common import exception as castellan_exception
import glance_store as store
import mock
from oslo_config import cfg
from oslo_serialization import jsonutils
import six
from six.moves import http_client as http
@@ -41,6 +42,8 @@ from glance.tests.unit.keymgr import fake as fake_keymgr
import glance.tests.unit.utils as unit_test_utils
import glance.tests.utils as test_utils
CONF = cfg.CONF
DATETIME = datetime.datetime(2012, 5, 16, 15, 27, 36, 325355)
ISOTIME = '2012-05-16T15:27:36Z'
@@ -53,6 +56,8 @@ UUID2 = 'a85abd86-55b3-4d5b-b0b4-5d0a6e6042fc'
UUID3 = '971ec09a-8067-4bc8-a91f-ae3557f1c4c7'
UUID4 = '6bbe7cc2-eae7-4c0f-b50d-a7160b0c6a86'
UUID5 = '13c58ac4-210d-41ab-8cdb-1adfe4610019'
UUID6 = '6d33fd0f-2438-4419-acd0-ce1d452c97a0'
UUID7 = '75ddbc84-9427-4f3b-8d7d-b0fd0543d9a8'
TENANT1 = '6838eb7b-6ded-434a-882c-b344c77fe8df'
TENANT2 = '2c014f32-55eb-467d-8fcb-4bd706012f81'
@@ -127,12 +132,13 @@ def _db_image_member_fixture(image_id, member_id, **kwargs):
class FakeImage(object):
-    def __init__(self, status='active', container_format='ami',
-                 disk_format='ami'):
-        self.id = UUID4
+    def __init__(self, id=None, status='active', container_format='ami',
+                 disk_format='ami', locations=None):
+        self.id = id or UUID4
self.status = status
self.container_format = container_format
self.disk_format = disk_format
self.locations = locations
class TestImagesController(base.IsolatedUnitTest):
@@ -5098,6 +5104,9 @@ class TestMultiImagesController(base.MultiIsolatedUnitTest):
self.store = store
self._create_images()
self._create_image_members()
stores = {'cheap': 'file', 'fast': 'file'}
self.config(enabled_backends=stores)
self.store.register_store_opts(CONF)
self.controller = glance.api.v2.images.ImagesController(self.db,
self.policy,
self.notifier,
@@ -5147,6 +5156,38 @@ class TestMultiImagesController(base.MultiIsolatedUnitTest):
_db_fixture(UUID4, owner=TENANT4, name='4',
size=1024, virtual_size=3072,
created_at=DATETIME + datetime.timedelta(seconds=3)),
_db_fixture(UUID6, owner=TENANT3, checksum=CHKSUM1,
name='3', size=512, virtual_size=2048,
visibility='public',
disk_format='raw',
container_format='bare',
status='active',
tags=['redhat', '64bit', 'power'],
properties={'hypervisor_type': 'kvm', 'foo': 'bar',
'bar': 'foo'},
locations=[{'url': 'file://%s/%s' % (self.test_dir,
UUID6),
'metadata': {'store': 'fast'},
'status': 'active'},
{'url': 'file://%s/%s' % (self.test_dir2,
UUID6),
'metadata': {'store': 'cheap'},
'status': 'active'}],
created_at=DATETIME + datetime.timedelta(seconds=1)),
_db_fixture(UUID7, owner=TENANT3, checksum=CHKSUM1,
name='3', size=512, virtual_size=2048,
visibility='public',
disk_format='raw',
container_format='bare',
status='active',
tags=['redhat', '64bit', 'power'],
properties={'hypervisor_type': 'kvm', 'foo': 'bar',
'bar': 'foo'},
locations=[{'url': 'file://%s/%s' % (self.test_dir,
UUID7),
'metadata': {'store': 'fast'},
'status': 'active'}],
created_at=DATETIME + datetime.timedelta(seconds=1)),
]
[self.db.image_create(None, image) for image in self.images]
@@ -5245,3 +5286,76 @@ class TestMultiImagesController(base.MultiIsolatedUnitTest):
self.assertRaises(webob.exc.HTTPConflict,
self.controller.import_image, request, UUID4,
{'method': {'name': 'web-download'}})
def test_copy_image_stores_specified_in_header_and_body(self):
request = unit_test_utils.get_fake_request()
request.headers['x-image-meta-store'] = 'fast'
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage()
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.import_image, request, UUID7,
{'method': {'name': 'copy-image'},
'stores': ["fast"]})
def test_copy_image_non_existing_image(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.side_effect = exception.NotFound
self.assertRaises(webob.exc.HTTPNotFound,
self.controller.import_image, request, UUID1,
{'method': {'name': 'copy-image'},
'stores': ["fast"]})
def test_copy_image_with_all_stores(self):
request = unit_test_utils.get_fake_request()
locations = [{'url': 'file://%s/%s' % (self.test_dir,
                                       UUID7),
              'metadata': {'store': 'fast'},
              'status': 'active'}]
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(id=UUID7, status='active',
locations=locations)
self.assertIsNotNone(self.controller.import_image(request, UUID7,
{'method': {'name': 'copy-image'},
'all_stores': True}))
def test_copy_non_active_image(self):
request = unit_test_utils.get_fake_request()
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(status='uploading')
self.assertRaises(webob.exc.HTTPConflict,
self.controller.import_image, request, UUID1,
{'method': {'name': 'copy-image'},
'stores': ["fast"]})
def test_copy_image_in_existing_store(self):
request = unit_test_utils.get_fake_request()
self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.import_image, request, UUID6,
{'method': {'name': 'copy-image'},
'stores': ["fast"]})
def test_copy_image_to_other_stores(self):
request = unit_test_utils.get_fake_request()
locations = [{'url': 'file://%s/%s' % (self.test_dir,
                                       UUID7),
              'metadata': {'store': 'fast'},
              'status': 'active'}]
with mock.patch.object(
glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
mock_get.return_value = FakeImage(id=UUID7, status='active',
locations=locations)
output = self.controller.import_image(
request, UUID7, {'method': {'name': 'copy-image'},
'stores': ["cheap"]})
self.assertEqual(UUID7, output)


@@ -78,6 +78,7 @@ class BaseTestCase(testtools.TestCase):
self.addCleanup(CONF.reset)
self.mock_object(exception, '_FATAL_EXCEPTION_FORMAT_ERRORS', True)
self.test_dir = self.useFixture(fixtures.TempDir()).path
self.test_dir2 = self.useFixture(fixtures.TempDir()).path
self.conf_dir = os.path.join(self.test_dir, 'etc')
utils.safe_mkdirs(self.conf_dir)
self.set_policy()

View File

@@ -0,0 +1,16 @@
---
features:
- |
Added a new import method, ``copy-image``, which copies an existing image
into multiple stores.
upgrade:
- |
Added a new import method, ``copy-image``, which copies an existing image
into multiple stores. The new import method works only if multiple stores
are enabled in the deployment. To use this feature, the operator needs to
include the ``copy-image`` import method in the ``enabled_import_methods``
configuration option. Note that this new internal plugin applies *only* to
images imported via the `interoperable image import process`_.
.. _`interoperable image import process`: https://developer.openstack.org/api-ref/image/v2/#interoperable-image-import
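For reference, the request body that the ``copy-image`` import call expects (as exercised in the functional tests above) can be built like this sketch; the store names here are placeholders for whatever backends a deployment enables:

```python
import json


def build_copy_image_body(stores, all_stores_must_succeed=True):
    # Body for POST /v2/images/{image_id}/import using the copy-image
    # method; all_stores_must_succeed=False tolerates partial failures.
    return json.dumps({
        'method': {'name': 'copy-image'},
        'stores': stores,
        'all_stores_must_succeed': all_stores_must_succeed,
    })


body = build_copy_image_body(['file2', 'file3'],
                             all_stores_must_succeed=False)
```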


@@ -85,6 +85,7 @@ glance.image_import.plugins =
glance.image_import.internal_plugins =
web_download = glance.async_.flows._internal_plugins.web_download:get_flow
copy_image = glance.async_.flows._internal_plugins.copy_image:get_flow
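The ``module:attribute`` target strings above are resolved at runtime by the entry-point machinery; conceptually, that resolution amounts to the following sketch (demonstrated with a stdlib target, since importing the glance plugin module is assumed to require a full glance install):

```python
import importlib


def load_entry_point(target):
    # Resolve a "module:attribute" entry-point target, e.g.
    # "glance.async_.flows._internal_plugins.copy_image:get_flow".
    module_name, attr = target.split(':')
    return getattr(importlib.import_module(module_name), attr)


# Demo with a stdlib target instead of the glance plugin:
dumps = load_entry_point('json:dumps')
```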
[egg_info]