Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that explains where to find
ongoing work and how to recover the repo if needed at some
future point (as described in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Ibb0cc03c6212d0992aa31a859a1126d6c3c610c5
Tony Breeds 2017-09-12 15:59:27 -06:00
parent 36fb986434
commit b1bbbda4b1
100 changed files with 14 additions and 15139 deletions

.gitignore

@@ -1,33 +0,0 @@
*~
*.pyc
*.dat
TAGS
*.egg-info
*.egg
.eggs/*
build
.coverage
.tox
cover
venv
.venv
output.xml
*.sublime-workspace
*.sqlite
*.html
.*.swp
.DS_Store
.testrepository
versioninfo
var/*
ChangeLog
AUTHORS
subunit.log
covhtml/
doc/source/reference/api

# Files created by releasenotes build
releasenotes/build

# Files created by functional tests
functional_testing.conf

.gitreview

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/glance_store.git

.testr.conf

@@ -1,8 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./glance_store/tests/unit} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
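
For context, the test_command above simply wraps Python's standard unittest
discovery in a subunit stream. A rough stdlib-only sketch of the equivalent
(dropping the subunit protocol and the OS_* capture variables, which belong
to the testr tooling) would be:

import unittest

# Discover the same tests the test_command above targets,
# relative to the repository root.
loader = unittest.TestLoader()
suite = loader.discover(start_dir='glance_store/tests/unit', top_level_dir='.')

# Run them with the plain text runner instead of streaming subunit.
unittest.TextTestRunner(verbosity=2).run(suite)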

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README

@@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
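
As an aside, the recovery step the README describes is a single git
operation. A minimal sketch driving it from Python, where the local clone
path 'deb-glance-store' is a hypothetical example:

import subprocess

# Check out the last commit before retirement in an existing clone.
# 'deb-glance-store' is a hypothetical local clone directory.
subprocess.check_call(['git', 'checkout', 'HEAD^1'], cwd='deb-glance-store')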

README.rst

@@ -1,36 +0,0 @@
========================
Team and repository tags
========================

.. image:: http://governance.openstack.org/badges/glance_store.svg
    :target: http://governance.openstack.org/reference/tags/index.html
    :alt: The following tags have been asserted for the Glance Store
          Library:
          "project:official",
          "stable:follows-policy",
          "vulnerability:managed",
          "team:diverse-affiliation".
          Follow the link for an explanation of these tags.

.. NOTE(rosmaita): the alt text above will have to be updated when
   additional tags are asserted for glance_store. (The SVG in the
   governance repo is updated automatically.)

.. Change things from this point on

Glance Store Library
====================

Glance's stores library

This library has been extracted from the Glance source code for the
specific use of the Glance and Glare projects.

The API it exposes is not stable, has some shortcomings, and is not a
general purpose interface. We would eventually like to change this,
but for now using this library outside of Glance or Glare will not be
supported by the core team.

* License: Apache License, Version 2.0
* Documentation: http://docs.openstack.org/developer/glance_store
* Source: http://git.openstack.org/cgit/openstack/glance_store
* Bugs: http://bugs.launchpad.net/glance-store

babel.cfg

@@ -1 +0,0 @@
[python: **.py]

doc/source/conf.py

@@ -1,76 +0,0 @@
import os
import subprocess
import sys
import warnings

sys.path.insert(0, os.path.abspath('../..'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'openstackdocstheme']

# openstackdocstheme options
repository_name = 'openstack/glance_store'
bug_project = 'glance-store'
bug_tag = ''
html_last_updated_fmt = '%Y-%m-%d %H:%M'

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# Add any paths that contain templates here, relative to this directory.
# templates_path = []

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'glance_store'
copyright = u'2014, OpenStack Foundation'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
html_theme = 'openstackdocs'

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

modindex_common_prefix = ['glance_store.']

git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
           "-n1"]
try:
    html_last_updated_fmt = subprocess.check_output(git_cmd).decode('utf-8')
except Exception:
    warnings.warn('Cannot get last updated time from git repository. '
                  'Not setting "html_last_updated_fmt".')

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     '%s Documentation' % project,
     'OpenStack Foundation', 'manual'),
]

doc/source/index.rst

@@ -1,19 +0,0 @@
==============
glance_store
==============

The glance_store library supports the creation, deletion, and retrieval
of data assets across a set of several different storage technologies.

.. toctree::
   :maxdepth: 1

   user/index
   reference/index

.. rubric:: Indices and tables

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

doc/source/reference/index.rst

@@ -1,8 +0,0 @@
==============================
glance-store Reference Guide
==============================

.. toctree::
   :maxdepth: 1

   api/autoindex

doc/source/user/drivers.rst

@@ -1,26 +0,0 @@
Glance Store Drivers
====================

Glance store supports several different drivers. These drivers live
within the library's code base and they are maintained by either
members of the Glance community or OpenStack in general. Please find
below the table of supported drivers and their maintainers:

+-------------------+---------------------+------------------------------------+------------------+
| Driver | Maintainer | Email | IRC Nick |
+===================+=====================+====================================+==================+
| File System | Glance Team | openstack-dev@lists.openstack.org | openstack-glance |
+-------------------+---------------------+------------------------------------+------------------+
| HTTP | Glance Team | openstack-dev@lists.openstack.org | openstack-glance |
+-------------------+---------------------+------------------------------------+------------------+
| RBD | Fei Long Wang | flwang@catalyst.net.nz | flwang |
+-------------------+---------------------+------------------------------------+------------------+
| Cinder | Tomoki Sekiyama | tomoki.sekiyama@gmail.com | |
+-------------------+---------------------+------------------------------------+------------------+
| Swift | Matthew Oliver | matt@oliver.net.au | mattoliverau |
+-------------------+---------------------+------------------------------------+------------------+
| VMware | Sabari Murugesan | smurugesan@vmware.com | sabari |
+-------------------+---------------------+------------------------------------+------------------+
| Sheepdog | YAMADA Hideki | yamada.hideki@lab.ntt.co.jp | yamada-h |
+-------------------+---------------------+------------------------------------+------------------+

doc/source/user/index.rst

@@ -1,7 +0,0 @@
=================================
glance-store User Documentation
=================================

.. toctree::

   drivers

etc/glance/rootwrap.conf

@@ -1,27 +0,0 @@
# Configuration for glance-rootwrap
# This file should be owned by (and only-writable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root!
filters_path=/etc/glance/rootwrap.d,/usr/share/glance/rootwrap

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',').
# If not specified, defaults to the system PATH environment variable.
# These directories MUST all be only writeable by root!
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR

etc/glance/rootwrap.d/glance_cinder_store.filters

@@ -1,29 +0,0 @@
# glance-rootwrap command filters for glance cinder store
# This file should be owned by (and only-writable by) the root user

[Filters]
# cinder store driver
disk_chown: RegExpFilter, chown, root, chown, \d+, /dev/(?!.*/\.\.).*

# os-brick
mount: CommandFilter, mount, root
blockdev: RegExpFilter, blockdev, root, blockdev, (--getsize64|--flushbufs), /dev/.*
tee: CommandFilter, tee, root
mkdir: CommandFilter, mkdir, root
chown: RegExpFilter, chown, root, chown root:root /etc/pstorage/clusters/(?!.*/\.\.).*
ip: CommandFilter, ip, root
dd: CommandFilter, dd, root
iscsiadm: CommandFilter, iscsiadm, root
aoe-revalidate: CommandFilter, aoe-revalidate, root
aoe-discover: CommandFilter, aoe-discover, root
aoe-flush: CommandFilter, aoe-flush, root
read_initiator: ReadFileFilter, /etc/iscsi/initiatorname.iscsi
multipath: CommandFilter, multipath, root
multipathd: CommandFilter, multipathd, root
systool: CommandFilter, systool, root
sg_scan: CommandFilter, sg_scan, root
cp: CommandFilter, cp, root
drv_cfg: CommandFilter, /opt/emc/scaleio/sdc/bin/drv_cfg, root, /opt/emc/scaleio/sdc/bin/drv_cfg, --query_guid
sds_cli: CommandFilter, /usr/local/bin/sds/sds_cli, root
vgc-cluster: CommandFilter, vgc-cluster, root
scsi_id: CommandFilter, /lib/udev/scsi_id, root
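
As an aside, each line in this filter file names a filter class and its
arguments. A minimal sketch of how a RegExpFilter-style rule such as the
disk_chown line above could be matched against a command line; this is a
simplified stand-in, not oslo.rootwrap's actual implementation:

import re

# Argument patterns copied from the disk_chown rule above.
DISK_CHOWN_PATTERNS = ['chown', r'\d+', r'/dev/(?!.*/\.\.).*']

def matches(patterns, command):
    """Return True if every argument fully matches its pattern."""
    if len(command) != len(patterns):
        return False
    return all(re.match(pattern + '$', arg)
               for pattern, arg in zip(patterns, command))

# chown of a device node is allowed; chown of an arbitrary path is not.
print(matches(DISK_CHOWN_PATTERNS, ['chown', '1000', '/dev/sdb']))     # True
print(matches(DISK_CHOWN_PATTERNS, ['chown', '1000', '/etc/passwd']))  # False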

functional_testing.conf.sample

@@ -1,9 +0,0 @@
[tests]
stores = file,swift

[admin]
user = admin:admin
key = secretadmin
auth_version = 2
auth_address = http://localhost:35357/v2.0
region = RegionOne

glance_store/__init__.py

@@ -1,18 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from .backend import * # noqa
from .driver import * # noqa
from .exceptions import * # noqa
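
These wildcard imports are what let callers treat glance_store as a flat
namespace. A hedged usage sketch, assuming the entry points that backend.py
exposed at the time (register_opts, create_stores, get_from_backend; verify
against the actual module before reusing any of this):

from oslo_config import cfg

import glance_store

CONF = cfg.CONF

# Register the store options and initialize the configured backends
# (both names are assumptions based on the era's backend.py API).
glance_store.register_opts(CONF)
glance_store.create_stores(CONF)

# Fetch image data by location URI; per the driver docstrings below,
# stores return a (data iterator, image size) tuple. <ID> is a placeholder.
data, size = glance_store.get_from_backend('file:///var/lib/glance/images/<ID>')
for chunk in data:
    pass  # stream each chunk to the caller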

glance_store/_drivers/cinder.py

@@ -1,765 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage backend for Cinder"""
import contextlib
import errno
import hashlib
import logging
import math
import os
import shlex
import socket
import time
from keystoneauth1.access import service_catalog as keystone_sc
from keystoneauth1 import exceptions as keystone_exc
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import units
from glance_store import capabilities
from glance_store.common import utils
import glance_store.driver
from glance_store import exceptions
from glance_store.i18n import _, _LE, _LW, _LI
import glance_store.location
try:
from cinderclient import exceptions as cinder_exception
from cinderclient.v2 import client as cinderclient
from os_brick.initiator import connector
from oslo_privsep import priv_context
except ImportError:
cinder_exception = None
cinderclient = None
connector = None
priv_context = None
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
_CINDER_OPTS = [
cfg.StrOpt('cinder_catalog_info',
default='volumev2::publicURL',
help=_("""
Information to match when looking for cinder in the service catalog.
When the ``cinder_endpoint_template`` is not set and any of
``cinder_store_auth_address``, ``cinder_store_user_name``,
``cinder_store_project_name``, ``cinder_store_password`` is not set,
cinder store uses this information to lookup cinder endpoint from the service
catalog in the current context. ``cinder_os_region_name``, if set, is taken
into consideration to fetch the appropriate endpoint.
The service catalog can be listed by the ``openstack catalog list`` command.
Possible values:
* A string of the following form:
``<service_type>:<service_name>:<interface>``
At least ``service_type`` and ``interface`` should be specified.
``service_name`` can be omitted.
Related options:
* cinder_os_region_name
* cinder_endpoint_template
* cinder_store_auth_address
* cinder_store_user_name
* cinder_store_project_name
* cinder_store_password
""")),
cfg.StrOpt('cinder_endpoint_template',
default=None,
help=_("""
Override service catalog lookup with template for cinder endpoint.
When this option is set, this value is used to generate cinder endpoint,
instead of looking up from the service catalog.
This value is ignored if ``cinder_store_auth_address``,
``cinder_store_user_name``, ``cinder_store_project_name``, and
``cinder_store_password`` are specified.
If this configuration option is set, ``cinder_catalog_info`` will be ignored.
Possible values:
* URL template string for cinder endpoint, where ``%%(tenant)s`` is
replaced with the current tenant (project) name.
For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
Related options:
* cinder_store_auth_address
* cinder_store_user_name
* cinder_store_project_name
* cinder_store_password
* cinder_catalog_info
""")),
cfg.StrOpt('cinder_os_region_name', deprecated_name='os_region_name',
default=None,
help=_("""
Region name to lookup cinder service from the service catalog.
This is used only when ``cinder_catalog_info`` is used for determining the
endpoint. If set, the lookup for cinder endpoint by this node is filtered to
the specified region. It is useful when multiple regions are listed in the
catalog. If this is not set, the endpoint is looked up from every region.
Possible values:
* A string that is a valid region name.
Related options:
* cinder_catalog_info
""")),
cfg.StrOpt('cinder_ca_certificates_file',
help=_("""
Location of a CA certificates file used for cinder client requests.
The specified CA certificates file, if set, is used to verify cinder
connections via an HTTPS endpoint. If the endpoint is HTTP, this value is
ignored. ``cinder_api_insecure`` must be set to ``False`` for the
verification to take effect.
Possible values:
* Path to a ca certificates file
Related options:
* cinder_api_insecure
""")),
cfg.IntOpt('cinder_http_retries',
min=0,
default=3,
help=_("""
Number of cinderclient retries on failed HTTP calls.
When a call fails with any error, cinderclient will retry the call up to the
specified number of times, sleeping a few seconds between attempts.
Possible values:
* A positive integer
Related options:
* None
""")),
cfg.IntOpt('cinder_state_transition_timeout',
min=0,
default=300,
help=_("""
Time period, in seconds, to wait for a cinder volume transition to
complete.
When the cinder volume is created, deleted, or attached to the glance node to
read/write the volume data, the volume's state is changed. For example, the
newly created volume status changes from ``creating`` to ``available`` after
the creation process is completed. This specifies the maximum time to wait for
the status change. If a timeout occurs while waiting, or the status is changed
to an unexpected value (e.g. ``error``), the image creation fails.
Possible values:
* A positive integer
Related options:
* None
""")),
cfg.BoolOpt('cinder_api_insecure',
default=False,
help=_("""
Allow insecure SSL requests to cinder.
If this option is set to True, the HTTPS endpoint certificate is not verified.
If set to False, the connection is verified using the CA certificates file
specified by the ``cinder_ca_certificates_file`` option.
Possible values:
* True
* False
Related options:
* cinder_ca_certificates_file
""")),
cfg.StrOpt('cinder_store_auth_address',
default=None,
help=_("""
The address where the cinder authentication service is listening.
When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
``cinder_store_project_name``, and ``cinder_store_password`` options are
specified, the specified values are always used for the authentication.
This is useful to hide the image volumes from users by storing them in a
project/tenant specific to the image service. It also enables users to share
the image volume among other projects under the control of glance's ACL.
If any of these options is not set, the cinder endpoint is looked up
from the service catalog, and the current context's user and project are used.
Possible values:
* A valid authentication service address, for example:
``http://openstack.example.org/identity/v2.0``
Related options:
* cinder_store_user_name
* cinder_store_password
* cinder_store_project_name
""")),
cfg.StrOpt('cinder_store_user_name',
default=None,
help=_("""
User name to authenticate against cinder.
This must be used with all the following related options. If any of these are
not specified, the user of the current context is used.
Possible values:
* A valid user name
Related options:
* cinder_store_auth_address
* cinder_store_password
* cinder_store_project_name
""")),
cfg.StrOpt('cinder_store_password', secret=True,
help=_("""
Password for the user authenticating against cinder.
This must be used with all the following related options. If any of these are
not specified, the user of the current context is used.
Possible values:
* A valid password for the user specified by ``cinder_store_user_name``
Related options:
* cinder_store_auth_address
* cinder_store_user_name
* cinder_store_project_name
""")),
cfg.StrOpt('cinder_store_project_name',
default=None,
help=_("""
Project name where the image volume is stored in cinder.
If this configuration option is not set, the project in current context is
used.
This must be used with all the following related options. If any of these are
not specified, the project of the current context is used.
Possible values:
* A valid project name
Related options:
* ``cinder_store_auth_address``
* ``cinder_store_user_name``
* ``cinder_store_password``
""")),
cfg.StrOpt('rootwrap_config',
default='/etc/glance/rootwrap.conf',
help=_("""
Path to the rootwrap configuration file to use for running commands as root.
The cinder store requires root privileges to operate the image volumes (for
connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
The configuration file should allow the required commands by cinder store and
os-brick library.
Possible values:
* Path to the rootwrap config file
Related options:
* None
""")),
cfg.StrOpt('cinder_volume_type',
default=None,
help=_("""
Volume type that will be used for volume creation in cinder.
Some cinder backends can have several volume types to optimize storage usage.
Adding this option allows an operator to choose a specific volume type
in cinder that can be optimized for images.
If this is not set, then the default volume type specified in the cinder
configuration will be used for volume creation.
Possible values:
* A valid volume type from cinder
Related options:
* None
""")),
]
def get_root_helper():
return 'sudo glance-rootwrap %s' % CONF.glance_store.rootwrap_config
def is_user_overriden(conf):
return all([conf.glance_store.get('cinder_store_' + key)
for key in ['user_name', 'password',
'project_name', 'auth_address']])
def get_cinderclient(conf, context=None):
glance_store = conf.glance_store
user_overriden = is_user_overriden(conf)
if user_overriden:
username = glance_store.cinder_store_user_name
password = glance_store.cinder_store_password
project = glance_store.cinder_store_project_name
url = glance_store.cinder_store_auth_address
else:
username = context.user
password = context.auth_token
project = context.tenant
if glance_store.cinder_endpoint_template:
url = glance_store.cinder_endpoint_template % context.to_dict()
else:
info = glance_store.cinder_catalog_info
service_type, service_name, interface = info.split(':')
try:
catalog = keystone_sc.ServiceCatalogV2(context.service_catalog)
url = catalog.url_for(
region_name=glance_store.cinder_os_region_name,
service_type=service_type,
service_name=service_name,
interface=interface)
except keystone_exc.EndpointNotFound:
reason = _("Failed to find Cinder from a service catalog.")
raise exceptions.BadStoreConfiguration(store_name="cinder",
reason=reason)
c = cinderclient.Client(username,
password,
project,
auth_url=url,
insecure=glance_store.cinder_api_insecure,
retries=glance_store.cinder_http_retries,
cacert=glance_store.cinder_ca_certificates_file)
LOG.debug('Cinderclient connection created for user %(user)s using URL: '
'%(url)s.', {'user': username, 'url': url})
# noauth extracts user_id:project_id from auth_token
if not user_overriden:
c.client.auth_token = context.auth_token or '%s:%s' % (username,
project)
c.client.management_url = url
return c
class StoreLocation(glance_store.location.StoreLocation):
"""Class describing a Cinder URI."""
def process_specs(self):
self.scheme = self.specs.get('scheme', 'cinder')
self.volume_id = self.specs.get('volume_id')
def get_uri(self):
return "cinder://%s" % self.volume_id
def parse_uri(self, uri):
if not uri.startswith('cinder://'):
reason = _("URI must start with 'cinder://'")
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
self.scheme = 'cinder'
self.volume_id = uri[9:]
if not utils.is_uuid_like(self.volume_id):
reason = _("URI contains invalid volume ID")
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
@contextlib.contextmanager
def temporary_chown(path):
owner_uid = os.getuid()
orig_uid = os.stat(path).st_uid
if orig_uid != owner_uid:
processutils.execute('chown', owner_uid, path,
run_as_root=True,
root_helper=get_root_helper())
try:
yield
finally:
if orig_uid != owner_uid:
processutils.execute('chown', orig_uid, path,
run_as_root=True,
root_helper=get_root_helper())
class Store(glance_store.driver.Store):
"""Cinder backend store adapter."""
_CAPABILITIES = (capabilities.BitMasks.READ_RANDOM |
capabilities.BitMasks.WRITE_ACCESS |
capabilities.BitMasks.DRIVER_REUSABLE)
OPTIONS = _CINDER_OPTS
EXAMPLE_URL = "cinder://<VOLUME_ID>"
def __init__(self, *args, **kargs):
super(Store, self).__init__(*args, **kargs)
LOG.warning(_LW("Cinder store is considered experimental. "
"Current deployers should be aware that the use "
"of it in production right now may be risky."))
def get_schemes(self):
return ('cinder',)
def _check_context(self, context, require_tenant=False):
user_overriden = is_user_overriden(self.conf)
if user_overriden and not require_tenant:
return
if context is None:
reason = _("Cinder storage requires a context.")
raise exceptions.BadStoreConfiguration(store_name="cinder",
reason=reason)
if not user_overriden and context.service_catalog is None:
reason = _("Cinder storage requires a service catalog.")
raise exceptions.BadStoreConfiguration(store_name="cinder",
reason=reason)
def _wait_volume_status(self, volume, status_transition, status_expected):
max_recheck_wait = 15
timeout = self.conf.glance_store.cinder_state_transition_timeout
volume = volume.manager.get(volume.id)
tries = 0
elapsed = 0
while volume.status == status_transition:
if elapsed >= timeout:
msg = (_('Timeout while waiting for volume %(volume_id)s '
'to leave status %(status)s.')
% {'volume_id': volume.id, 'status': status_transition})
LOG.error(msg)
raise exceptions.BackendException(msg)
wait = min(0.5 * 2 ** tries, max_recheck_wait)
time.sleep(wait)
tries += 1
elapsed += wait
volume = volume.manager.get(volume.id)
if volume.status != status_expected:
msg = (_('The status of volume %(volume_id)s is unexpected: '
'status = %(status)s, expected = %(expected)s.')
% {'volume_id': volume.id, 'status': volume.status,
'expected': status_expected})
LOG.error(msg)
raise exceptions.BackendException(msg)
return volume
@contextlib.contextmanager
def _open_cinder_volume(self, client, volume, mode):
attach_mode = 'rw' if mode == 'wb' else 'ro'
device = None
root_helper = get_root_helper()
priv_context.init(root_helper=shlex.split(root_helper))
host = socket.gethostname()
properties = connector.get_connector_properties(root_helper, host,
False, False)
try:
volume.reserve(volume)
except cinder_exception.ClientException as e:
msg = (_('Failed to reserve volume %(volume_id)s: %(error)s')
% {'volume_id': volume.id, 'error': e})
LOG.error(msg)
raise exceptions.BackendException(msg)
try:
connection_info = volume.initialize_connection(volume, properties)
conn = connector.InitiatorConnector.factory(
connection_info['driver_volume_type'], root_helper,
conn=connection_info)
device = conn.connect_volume(connection_info['data'])
volume.attach(None, None, attach_mode, host_name=host)
volume = self._wait_volume_status(volume, 'attaching', 'in-use')
if (connection_info['driver_volume_type'] == 'rbd' and
not conn.do_local_attach):
yield device['path']
else:
with temporary_chown(device['path']), \
open(device['path'], mode) as f:
yield f
except Exception:
LOG.exception(_LE('Exception while accessing cinder volume '
'%(volume_id)s.'), {'volume_id': volume.id})
raise
finally:
if volume.status == 'in-use':
volume.begin_detaching(volume)
elif volume.status == 'attaching':
volume.unreserve(volume)
if device:
try:
conn.disconnect_volume(connection_info['data'], device)
except Exception:
LOG.exception(_LE('Failed to disconnect volume '
'%(volume_id)s.'),
{'volume_id': volume.id})
try:
volume.terminate_connection(volume, properties)
except Exception:
LOG.exception(_LE('Failed to terminate connection of volume '
'%(volume_id)s.'), {'volume_id': volume.id})
try:
client.volumes.detach(volume)
except Exception:
LOG.exception(_LE('Failed to detach volume %(volume_id)s.'),
{'volume_id': volume.id})
def _cinder_volume_data_iterator(self, client, volume, max_size, offset=0,
chunk_size=None, partial_length=None):
chunk_size = chunk_size if chunk_size else self.READ_CHUNKSIZE
partial = partial_length is not None
with self._open_cinder_volume(client, volume, 'rb') as fp:
if offset:
fp.seek(offset)
max_size -= offset
while True:
if partial:
size = min(chunk_size, partial_length, max_size)
else:
size = min(chunk_size, max_size)
chunk = fp.read(size)
if chunk:
yield chunk
max_size -= len(chunk)
if max_size <= 0:
break
if partial:
partial_length -= len(chunk)
if partial_length <= 0:
break
else:
break
@capabilities.check
def get(self, location, offset=0, chunk_size=None, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:param offset: offset to start reading
:param chunk_size: size to read, or None to get all the image
:param context: Request context
:raises `glance_store.exceptions.NotFound` if image does not exist
"""
loc = location.store_location
self._check_context(context)
try:
client = get_cinderclient(self.conf, context)
volume = client.volumes.get(loc.volume_id)
size = int(volume.metadata.get('image_size',
volume.size * units.Gi))
iterator = self._cinder_volume_data_iterator(
client, volume, size, offset=offset,
chunk_size=self.READ_CHUNKSIZE, partial_length=chunk_size)
return (iterator, chunk_size or size)
except cinder_exception.NotFound:
reason = _("Failed to get image size: volume %s "
"could not be found.") % loc.volume_id
LOG.error(reason)
raise exceptions.NotFound(reason)
except cinder_exception.ClientException as e:
msg = (_('Failed to get image volume %(volume_id)s: %(error)s')
% {'volume_id': loc.volume_id, 'error': e})
LOG.error(msg)
raise exceptions.BackendException(msg)
def get_size(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file and returns the image size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
:rtype: int
"""
loc = location.store_location
try:
self._check_context(context)
volume = get_cinderclient(self.conf,
context).volumes.get(loc.volume_id)
return int(volume.metadata.get('image_size',
volume.size * units.Gi))
except cinder_exception.NotFound:
raise exceptions.NotFound(image=loc.volume_id)
except Exception:
LOG.exception(_LE("Failed to get image size due to "
"internal error."))
return 0
@capabilities.check
def add(self, image_id, image_file, image_size, context=None,
verifier=None):
"""
Stores an image file with supplied identifier to the backend
storage system and returns a tuple containing information
about the stored image.
:param image_id: The opaque image identifier
:param image_file: The image data to write, as a file-like object
:param image_size: The size of the image data to write, in bytes
:param context: The request context
:param verifier: An object used to verify signatures for images
:retval tuple of URL in backing store, bytes written, checksum
and a dictionary with storage system specific information
:raises `glance_store.exceptions.Duplicate` if the image already
existed
"""
self._check_context(context, require_tenant=True)
client = get_cinderclient(self.conf, context)
checksum = hashlib.md5()
bytes_written = 0
size_gb = int(math.ceil(float(image_size) / units.Gi))
if size_gb == 0:
size_gb = 1
name = "image-%s" % image_id
owner = context.tenant
metadata = {'glance_image_id': image_id,
'image_size': str(image_size),
'image_owner': owner}
volume_type = self.conf.glance_store.cinder_volume_type
LOG.debug('Creating a new volume: image_size=%d size_gb=%d type=%s',
image_size, size_gb, volume_type or 'None')
if image_size == 0:
LOG.info(_LI("Since image size is zero, we will be doing "
"resize-before-write for each GB which "
"will be considerably slower than normal."))
volume = client.volumes.create(size_gb, name=name, metadata=metadata,
volume_type=volume_type)
volume = self._wait_volume_status(volume, 'creating', 'available')
size_gb = volume.size
failed = True
need_extend = True
buf = None
try:
while need_extend:
with self._open_cinder_volume(client, volume, 'wb') as f:
f.seek(bytes_written)
if buf:
f.write(buf)
bytes_written += len(buf)
while True:
buf = image_file.read(self.WRITE_CHUNKSIZE)
if not buf:
need_extend = False
break
checksum.update(buf)
if verifier:
verifier.update(buf)
if (bytes_written + len(buf) > size_gb * units.Gi and
image_size == 0):
break
f.write(buf)
bytes_written += len(buf)
if need_extend:
size_gb += 1
LOG.debug("Extending volume %(volume_id)s to %(size)s GB.",
{'volume_id': volume.id, 'size': size_gb})
volume.extend(volume, size_gb)
try:
volume = self._wait_volume_status(volume,
'extending',
'available')
size_gb = volume.size
except exceptions.BackendException:
raise exceptions.StorageFull()
failed = False
except IOError as e:
# Convert IOError reasons to Glance Store exceptions
errors = {errno.EFBIG: exceptions.StorageFull(),
errno.ENOSPC: exceptions.StorageFull(),
errno.EACCES: exceptions.StorageWriteDenied()}
raise errors.get(e.errno, e)
finally:
if failed:
LOG.error(_LE("Failed to write to volume %(volume_id)s."),
{'volume_id': volume.id})
try:
volume.delete()
except Exception:
LOG.exception(_LE('Failed to delete volume '
'%(volume_id)s.'),
{'volume_id': volume.id})
if image_size == 0:
metadata.update({'image_size': str(bytes_written)})
volume.update_all_metadata(metadata)
volume.update_readonly_flag(volume, True)
checksum_hex = checksum.hexdigest()
LOG.debug("Wrote %(bytes_written)d bytes to volume %(volume_id)s "
"with checksum %(checksum_hex)s.",
{'bytes_written': bytes_written,
'volume_id': volume.id,
'checksum_hex': checksum_hex})
return ('cinder://%s' % volume.id, bytes_written, checksum_hex, {})
@capabilities.check
def delete(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file to delete
:location `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises NotFound if image does not exist
:raises Forbidden if cannot delete because of permissions
"""
loc = location.store_location
self._check_context(context)
try:
volume = get_cinderclient(self.conf,
context).volumes.get(loc.volume_id)
volume.delete()
except cinder_exception.NotFound:
raise exceptions.NotFound(image=loc.volume_id)
except cinder_exception.ClientException as e:
msg = (_('Failed to delete volume %(volume_id)s: %(error)s') %
{'volume_id': loc.volume_id, 'error': e})
raise exceptions.BackendException(msg)
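
Worth noting: _wait_volume_status above polls cinder with a capped
exponential backoff, wait = min(0.5 * 2 ** tries, max_recheck_wait). A
standalone sketch of the schedule this produces, using the constants from
the code above (15-second cap, 300-second default timeout):

# Reproduce the wait schedule from _wait_volume_status.
max_recheck_wait = 15
timeout = 300  # cinder_state_transition_timeout default

schedule, elapsed, tries = [], 0.0, 0
while elapsed < timeout:
    wait = min(0.5 * 2 ** tries, max_recheck_wait)
    schedule.append(wait)
    elapsed += wait
    tries += 1

# Early polls back off quickly, later ones settle at the 15-second cap:
print(schedule[:8])  # [0.5, 1.0, 2.0, 4.0, 8.0, 15, 15, 15]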

glance_store/_drivers/filesystem.py

@@ -1,727 +0,0 @@
# Copyright 2010 OpenStack Foundation
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
A simple filesystem-backed store
"""
import errno
import hashlib
import logging
import os
import stat
import jsonschema
from oslo_config import cfg
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
from oslo_utils import excutils
from oslo_utils import units
from six.moves import urllib
import glance_store
from glance_store import capabilities
from glance_store.common import utils
import glance_store.driver
from glance_store import exceptions
from glance_store.i18n import _, _LE, _LW
import glance_store.location
LOG = logging.getLogger(__name__)
_FILESYSTEM_CONFIGS = [
cfg.StrOpt('filesystem_store_datadir',
default='/var/lib/glance/images',
help=_("""
Directory to which the filesystem backend store writes images.
Upon start up, Glance creates the directory if it doesn't already
exist and verifies write access to the user under which
``glance-api`` runs. If the write access isn't available, a
``BadStoreConfiguration`` exception is raised and the filesystem
store may not be available for adding new images.
NOTE: This directory is used only when filesystem store is used as a
storage backend. Either ``filesystem_store_datadir`` or
``filesystem_store_datadirs`` option must be specified in
``glance-api.conf``. If both options are specified, a
``BadStoreConfiguration`` will be raised and the filesystem store
may not be available for adding new images.
Possible values:
* A valid path to a directory
Related options:
* ``filesystem_store_datadirs``
* ``filesystem_store_file_perm``
""")),
cfg.MultiStrOpt('filesystem_store_datadirs',
help=_("""
List of directories and their priorities to which the filesystem
backend store writes images.
The filesystem store can be configured to store images in multiple
directories as opposed to using a single directory specified by the
``filesystem_store_datadir`` configuration option. When using
multiple directories, each directory can be given an optional
priority to specify the preference order in which they should
be used. Priority is an integer that is concatenated to the
directory path with a colon where a higher value indicates higher
priority. When two directories have the same priority, the directory
with most free space is used. When no priority is specified, it
defaults to zero.
More information on configuring filesystem store with multiple store
directories can be found at
http://docs.openstack.org/developer/glance/configuring.html
NOTE: This directory is used only when filesystem store is used as a
storage backend. Either ``filesystem_store_datadir`` or
``filesystem_store_datadirs`` option must be specified in
``glance-api.conf``. If both options are specified, a
``BadStoreConfiguration`` will be raised and the filesystem store
may not be available for adding new images.
Possible values:
* List of strings of the following form:
* ``<a valid directory path>:<optional integer priority>``
Related options:
* ``filesystem_store_datadir``
* ``filesystem_store_file_perm``
""")),
cfg.StrOpt('filesystem_store_metadata_file',
help=_("""
Filesystem store metadata file.
The path to a file which contains the metadata to be returned with
any location associated with the filesystem store. The file must
contain a valid JSON object. The object should contain the keys
``id`` and ``mountpoint``. The value for both keys should be a
string.
Possible values:
* A valid path to the store metadata file
Related options:
* None
""")),
cfg.IntOpt('filesystem_store_file_perm',
default=0,
help=_("""
File access permissions for the image files.
Set the intended file access permissions for image data. This provides
a way to enable other services, e.g. Nova, to consume images directly
from the filesystem store. The users running the services that are
intended to be given access can be made members of the group that
owns the created files. Assigning a value less than or equal to
zero for this configuration option signifies that no changes are made
to the default permissions. This value is decoded as an octal
digit.
For more information, please refer to the documentation at
http://docs.openstack.org/developer/glance/configuring.html
Possible values:
* A valid file access permission
* Zero
* Any negative integer
Related options:
* None
"""))]
MULTI_FILESYSTEM_METADATA_SCHEMA = {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {"type": "string"},
"mountpoint": {"type": "string"}
},
"required": ["id", "mountpoint"],
}
}
class StoreLocation(glance_store.location.StoreLocation):
"""Class describing a Filesystem URI."""
def process_specs(self):
self.scheme = self.specs.get('scheme', 'file')
self.path = self.specs.get('path')
def get_uri(self):
return "file://%s" % self.path
def parse_uri(self, uri):
"""
Parse URLs. This method fixes an issue where credentials specified
in the URL are interpreted differently in Python 2.6.1+ than prior
versions of Python.
"""
pieces = urllib.parse.urlparse(uri)
assert pieces.scheme in ('file', 'filesystem')
self.scheme = pieces.scheme
path = (pieces.netloc + pieces.path).strip()
if path == '':
reason = _("No path specified in URI")
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
self.path = path
class ChunkedFile(object):
"""
We send this back to the Glance API server as
something that can iterate over a large file
"""
def __init__(self, filepath, offset=0, chunk_size=4096,
partial_length=None):
self.filepath = filepath
self.chunk_size = chunk_size
self.partial_length = partial_length
self.partial = self.partial_length is not None
self.fp = open(self.filepath, 'rb')
if offset:
self.fp.seek(offset)
def __iter__(self):
"""Return an iterator over the image file."""
try:
if self.fp:
while True:
if self.partial:
size = min(self.chunk_size, self.partial_length)
else:
size = self.chunk_size
chunk = self.fp.read(size)
if chunk:
yield chunk
if self.partial:
self.partial_length -= len(chunk)
if self.partial_length <= 0:
break
else:
break
finally:
self.close()
def close(self):
"""Close the internal file pointer"""
if self.fp:
self.fp.close()
self.fp = None
class Store(glance_store.driver.Store):
_CAPABILITIES = (capabilities.BitMasks.READ_RANDOM |
capabilities.BitMasks.WRITE_ACCESS |
capabilities.BitMasks.DRIVER_REUSABLE)
OPTIONS = _FILESYSTEM_CONFIGS
READ_CHUNKSIZE = 64 * units.Ki
WRITE_CHUNKSIZE = READ_CHUNKSIZE
FILESYSTEM_STORE_METADATA = None
def get_schemes(self):
return ('file', 'filesystem')
def _check_write_permission(self, datadir):
"""
Checks if directory created to write image files has
write permission.
:datadir is a directory path in which glance writes image files.
:raises: BadStoreConfiguration exception if datadir is read-only.
"""
if not os.access(datadir, os.W_OK):
msg = (_("Permission to write in %s denied") % datadir)
LOG.exception(msg)
raise exceptions.BadStoreConfiguration(
store_name="filesystem", reason=msg)
def _set_exec_permission(self, datadir):
"""
Set the execute permission of owner-group and/or other-users on the
image directory if the image files contained in it need the relevant
access permissions.
:datadir is a directory path in which glance writes image files.
"""
if self.conf.glance_store.filesystem_store_file_perm <= 0:
return
try:
mode = os.stat(datadir)[stat.ST_MODE]
perm = int(str(self.conf.glance_store.filesystem_store_file_perm),
8)
if perm & stat.S_IRWXO > 0:
if not mode & stat.S_IXOTH:
# chmod o+x
mode |= stat.S_IXOTH
os.chmod(datadir, mode)
if perm & stat.S_IRWXG > 0:
if not mode & stat.S_IXGRP:
# chmod g+x
os.chmod(datadir, mode | stat.S_IXGRP)
except (IOError, OSError):
LOG.warning(_LW("Unable to set execution permission of "
"owner-group and/or other-users to datadir: %s")
% datadir)
def _create_image_directories(self, directory_paths):
"""
Create directories to write image files if
it does not exist.
:directory_paths is a list of directories belonging to glance store.
:raises: BadStoreConfiguration exception if creating a directory fails.
"""
for datadir in directory_paths:
if os.path.exists(datadir):
self._check_write_permission(datadir)
self._set_exec_permission(datadir)
else:
msg = _("Directory to write image files does not exist "
"(%s). Creating.") % datadir
LOG.info(msg)
try:
os.makedirs(datadir)
self._check_write_permission(datadir)
self._set_exec_permission(datadir)
except (IOError, OSError):
if os.path.exists(datadir):
# NOTE(markwash): If the path now exists, some other
# process must have beat us in the race condition.
# But it doesn't hurt, so we can safely ignore
# the error.
self._check_write_permission(datadir)
self._set_exec_permission(datadir)
continue
reason = _("Unable to create datadir: %s") % datadir
LOG.error(reason)
raise exceptions.BadStoreConfiguration(
store_name="filesystem", reason=reason)
def _validate_metadata(self, metadata_file):
"""Validate metadata against json schema.
If metadata is valid then cache metadata and use it when
creating new image.
:param metadata_file: JSON metadata file path
:raises: BadStoreConfiguration exception if metadata is not valid.
"""
try:
with open(metadata_file, 'rb') as fptr:
metadata = jsonutils.load(fptr)
if isinstance(metadata, dict):
# If metadata is of type dictionary
# i.e. - it contains only one mountpoint
# then convert it to list of dictionary.
metadata = [metadata]
# Validate metadata against json schema
jsonschema.validate(metadata, MULTI_FILESYSTEM_METADATA_SCHEMA)
glance_store.check_location_metadata(metadata)
self.FILESYSTEM_STORE_METADATA = metadata
except (jsonschema.exceptions.ValidationError,
exceptions.BackendException, ValueError) as vee:
err_msg = encodeutils.exception_to_unicode(vee)
reason = _('The JSON in the metadata file %(file)s is '
'not valid and it can not be used: '
'%(vee)s.') % dict(file=metadata_file,
vee=err_msg)
LOG.error(reason)
raise exceptions.BadStoreConfiguration(
store_name="filesystem", reason=reason)
except IOError as ioe:
err_msg = encodeutils.exception_to_unicode(ioe)
reason = _('The path for the metadata file %(file)s could '
'not be accessed: '
'%(ioe)s.') % dict(file=metadata_file,
ioe=err_msg)
LOG.error(reason)
raise exceptions.BadStoreConfiguration(
store_name="filesystem", reason=reason)
def configure_add(self):
"""
Configure the Store to use the stored configuration options
Any store that needs special configuration should implement
this method. If the store was not able to successfully configure
itself, it should raise `exceptions.BadStoreConfiguration`
"""
if not (self.conf.glance_store.filesystem_store_datadir or
self.conf.glance_store.filesystem_store_datadirs):
reason = (_("Specify at least 'filesystem_store_datadir' or "
"'filesystem_store_datadirs' option"))
LOG.error(reason)
raise exceptions.BadStoreConfiguration(store_name="filesystem",
reason=reason)
if (self.conf.glance_store.filesystem_store_datadir and
self.conf.glance_store.filesystem_store_datadirs):
reason = (_("Specify either 'filesystem_store_datadir' or "
"'filesystem_store_datadirs' option"))
LOG.error(reason)
raise exceptions.BadStoreConfiguration(store_name="filesystem",
reason=reason)
if self.conf.glance_store.filesystem_store_file_perm > 0:
perm = int(str(self.conf.glance_store.filesystem_store_file_perm),
8)
if not perm & stat.S_IRUSR:
reason = _LE("Specified an invalid "
"'filesystem_store_file_perm' option which "
"could make image files inaccessible to the "
"glance service.")
LOG.error(reason)
reason = _("Invalid 'filesystem_store_file_perm' option.")
raise exceptions.BadStoreConfiguration(store_name="filesystem",
reason=reason)
self.multiple_datadirs = False
directory_paths = set()
if self.conf.glance_store.filesystem_store_datadir:
self.datadir = self.conf.glance_store.filesystem_store_datadir
directory_paths.add(self.datadir)
else:
self.multiple_datadirs = True
self.priority_data_map = {}
for datadir in self.conf.glance_store.filesystem_store_datadirs:
(datadir_path,
priority) = self._get_datadir_path_and_priority(datadir)
priority_paths = self.priority_data_map.setdefault(
int(priority), [])
self._check_directory_paths(datadir_path, directory_paths,
priority_paths)
directory_paths.add(datadir_path)
priority_paths.append(datadir_path)
self.priority_list = sorted(self.priority_data_map,
reverse=True)
self._create_image_directories(directory_paths)
metadata_file = self.conf.glance_store.filesystem_store_metadata_file
if metadata_file:
self._validate_metadata(metadata_file)
def _check_directory_paths(self, datadir_path, directory_paths,
priority_paths):
"""
Checks if directory_path is already present in directory_paths.
:datadir_path is directory path.
:datadir_paths is set of all directory paths.
:raises: BadStoreConfiguration exception if same directory path is
already present in directory_paths.
"""
if datadir_path in directory_paths:
msg = (_("Directory %(datadir_path)s specified "
"multiple times in filesystem_store_datadirs "
"option of filesystem configuration") %
{'datadir_path': datadir_path})
# If present with different priority it's a bad configuration
if datadir_path not in priority_paths:
LOG.exception(msg)
raise exceptions.BadStoreConfiguration(
store_name="filesystem", reason=msg)
# Present with same prio (exact duplicate) only deserves a warning
LOG.warning(msg)
def _get_datadir_path_and_priority(self, datadir):
"""
Gets a directory path and its priority from the
filesystem_store_datadirs option in glance-api.conf.
:param datadir: directory path with an optional priority suffix.
:returns: datadir_path as directory path and
priority as the priority associated with datadir_path
:raises: BadStoreConfiguration exception if the priority is invalid
or an empty directory path is specified.
"""
priority = 0
parts = [part.strip() for part in datadir.rsplit(":", 1)]
datadir_path = parts[0]
if len(parts) == 2 and parts[1]:
priority = parts[1]
if not priority.isdigit():
msg = (_("Invalid priority value %(priority)s in "
"filesystem configuration") % {'priority': priority})
LOG.exception(msg)
raise exceptions.BadStoreConfiguration(
store_name="filesystem", reason=msg)
if not datadir_path:
msg = _("Invalid directory specified in filesystem configuration")
LOG.exception(msg)
raise exceptions.BadStoreConfiguration(
store_name="filesystem", reason=msg)
return datadir_path, priority
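# --- Illustrative sketch (not part of the original driver): how a
# 'filesystem_store_datadirs' entry of the form '<path>:<priority>' is
# split by the rsplit-based parsing above. Sample paths are hypothetical.
def _demo_parse_datadir(datadir):
    priority = 0
    parts = [part.strip() for part in datadir.rsplit(":", 1)]
    if len(parts) == 2 and parts[1]:
        priority = int(parts[1])
    return parts[0], priority

assert _demo_parse_datadir("/mnt/store1:200") == ("/mnt/store1", 200)
assert _demo_parse_datadir("/mnt/store2") == ("/mnt/store2", 0)  # default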
@staticmethod
def _resolve_location(location):
filepath = location.store_location.path
if not os.path.exists(filepath):
raise exceptions.NotFound(image=filepath)
filesize = os.path.getsize(filepath)
return filepath, filesize
def _get_metadata(self, filepath):
"""Return metadata dictionary.
If metadata is provided as list of dictionaries then return
metadata as dictionary containing 'id' and 'mountpoint'.
If there are multiple NFS directories (mountpoints) configured
for glance, then the metadata JSON file must be created as a list
of dictionaries containing all mountpoints, each with a unique id.
But Nova cannot tell which directory (mountpoint) holds the image
if we store the whole list of dictionaries in the glance image
metadata. So when there are multiple mountpoints we return only
the dict for the exact mountpoint where the image is stored.
If the image path does not start with any 'mountpoint' provided
in the metadata JSON file, an error is logged and an empty
dictionary is returned.
:param filepath: Path of image on store
:returns: metadata dictionary
"""
if self.FILESYSTEM_STORE_METADATA:
for image_meta in self.FILESYSTEM_STORE_METADATA:
if filepath.startswith(image_meta['mountpoint']):
return image_meta
reason = (_LE("The image path %(path)s does not match with "
"any of the mountpoint defined in "
"metadata: %(metadata)s. An empty dictionary "
"will be returned to the client.")
% dict(path=filepath,
metadata=self.FILESYSTEM_STORE_METADATA))
LOG.error(reason)
return {}
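# --- Standalone sketch (example ids and paths are invented) of how
# _get_metadata() picks the matching entry from the metadata JSON file
# when several NFS mountpoints are configured.
_DEMO_METADATA = [
    {'id': 'nfs1', 'mountpoint': '/var/lib/glance/store1'},
    {'id': 'nfs2', 'mountpoint': '/var/lib/glance/store2'},
]

def _demo_lookup(filepath):
    for image_meta in _DEMO_METADATA:
        if filepath.startswith(image_meta['mountpoint']):
            return image_meta
    return {}

assert _demo_lookup('/var/lib/glance/store2/abc-123')['id'] == 'nfs2'
assert _demo_lookup('/tmp/elsewhere/abc-123') == {}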
@capabilities.check
def get(self, location, offset=0, chunk_size=None, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
"""
filepath, filesize = self._resolve_location(location)
msg = _("Found image at %s. Returning in ChunkedFile.") % filepath
LOG.debug(msg)
return (ChunkedFile(filepath,
offset=offset,
chunk_size=self.READ_CHUNKSIZE,
partial_length=chunk_size),
chunk_size or filesize)
def get_size(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file and returns the image size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
:rtype: int
"""
filepath, filesize = self._resolve_location(location)
msg = _("Found image at %s.") % filepath
LOG.debug(msg)
return filesize
@capabilities.check
def delete(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file to delete
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: NotFound if image does not exist
:raises: Forbidden if cannot delete because of permissions
"""
loc = location.store_location
fn = loc.path
if os.path.exists(fn):
try:
LOG.debug(_("Deleting image at %(fn)s"), {'fn': fn})
os.unlink(fn)
except OSError:
raise exceptions.Forbidden(
message=(_("You cannot delete file %s") % fn))
else:
raise exceptions.NotFound(image=fn)
def _get_capacity_info(self, mount_point):
"""Calculates total available space for given mount point.
:mount_point is path of glance data directory
"""
# Calculate total available space
stvfs_result = os.statvfs(mount_point)
total_available_space = stvfs_result.f_bavail * stvfs_result.f_bsize
return max(0, total_available_space)
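# --- Standalone sketch of the statvfs arithmetic above: free space is the
# block count available to unprivileged users (f_bavail) multiplied by the
# filesystem block size (f_bsize). POSIX only; '/tmp' is just an example.
import os

def _demo_free_bytes(mount_point):
    st = os.statvfs(mount_point)
    return max(0, st.f_bavail * st.f_bsize)

print("free bytes on /tmp:", _demo_free_bytes('/tmp'))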
def _find_best_datadir(self, image_size):
"""Finds the best datadir by priority and free space.
Traverse directories returning the first one that has sufficient
free space, in priority order. If two suitable directories have
the same priority, choose the one with the most free space
available.
:param image_size: size of image being uploaded.
:returns: best_datadir as directory path of the best priority datadir.
:raises: exceptions.StorageFull if there is no datadir in
self.priority_data_map that can accommodate the image.
"""
if not self.multiple_datadirs:
return self.datadir
best_datadir = None
max_free_space = 0
for priority in self.priority_list:
for datadir in self.priority_data_map.get(priority):
free_space = self._get_capacity_info(datadir)
if free_space >= image_size and free_space > max_free_space:
max_free_space = free_space
best_datadir = datadir
# If datadir is found which can accommodate image and has maximum
# free space for the given priority then break the loop,
# else continue to lookup further.
if best_datadir:
break
else:
msg = (_("There is no enough disk space left on the image "
"storage media. requested=%s") % image_size)
LOG.exception(msg)
raise exceptions.StorageFull(message=msg)
return best_datadir
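# --- Minimal self-contained sketch of the selection policy above: walk
# priorities from highest to lowest and, within the first priority that
# can fit the image, pick the directory with the most free space. The
# free-space numbers are hard-coded stand-ins for _get_capacity_info().
def _demo_find_best(priority_data_map, free_space_map, image_size):
    for priority in sorted(priority_data_map, reverse=True):
        fitting = [d for d in priority_data_map[priority]
                   if free_space_map[d] >= image_size]
        if fitting:
            return max(fitting, key=lambda d: free_space_map[d])
    return None

_dirs = {200: ['/a', '/b'], 100: ['/c']}
_free = {'/a': 10, '/b': 50, '/c': 500}
assert _demo_find_best(_dirs, _free, 30) == '/b'     # fits at priority 200
assert _demo_find_best(_dirs, _free, 100) == '/c'    # falls back to 100
assert _demo_find_best(_dirs, _free, 1000) is None   # StorageFull above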
@capabilities.check
def add(self, image_id, image_file, image_size, context=None,
verifier=None):
"""
Stores an image file with supplied identifier to the backend
storage system and returns a tuple containing information
about the stored image.
:param image_id: The opaque image identifier
:param image_file: The image data to write, as a file-like object
:param image_size: The size of the image data to write, in bytes
:param verifier: An object used to verify signatures for images
:retval: tuple of URL in backing store, bytes written, checksum
and a dictionary with storage system specific information
:raises: `glance_store.exceptions.Duplicate` if the image already
existed
:note:: By default, the backend writes the image data to a file
`/<DATADIR>/<ID>`, where <DATADIR> is the value of
the filesystem_store_datadir configuration option and <ID>
is the supplied image ID.
"""
datadir = self._find_best_datadir(image_size)
filepath = os.path.join(datadir, str(image_id))
if os.path.exists(filepath):
raise exceptions.Duplicate(image=filepath)
checksum = hashlib.md5()
bytes_written = 0
try:
with open(filepath, 'wb') as f:
for buf in utils.chunkreadable(image_file,
self.WRITE_CHUNKSIZE):
bytes_written += len(buf)
checksum.update(buf)
if verifier:
verifier.update(buf)
f.write(buf)
except IOError as e:
if e.errno != errno.EACCES:
self._delete_partial(filepath, image_id)
errors = {errno.EFBIG: exceptions.StorageFull(),
errno.ENOSPC: exceptions.StorageFull(),
errno.EACCES: exceptions.StorageWriteDenied()}
raise errors.get(e.errno, e)
except Exception:
with excutils.save_and_reraise_exception():
self._delete_partial(filepath, image_id)
checksum_hex = checksum.hexdigest()
metadata = self._get_metadata(filepath)
LOG.debug(_("Wrote %(bytes_written)d bytes to %(filepath)s with "
"checksum %(checksum_hex)s"),
{'bytes_written': bytes_written,
'filepath': filepath,
'checksum_hex': checksum_hex})
if self.conf.glance_store.filesystem_store_file_perm > 0:
perm = int(str(self.conf.glance_store.filesystem_store_file_perm),
8)
try:
os.chmod(filepath, perm)
except (IOError, OSError):
LOG.warning(_LW("Unable to set permission to image: %s") %
filepath)
return ('file://%s' % filepath, bytes_written, checksum_hex, metadata)
@staticmethod
def _delete_partial(filepath, iid):
try:
os.unlink(filepath)
except Exception as e:
msg = _('Unable to remove partial image '
'data for image %(iid)s: %(e)s')
LOG.error(msg % dict(iid=iid,
e=encodeutils.exception_to_unicode(e)))

View File

@ -1,325 +0,0 @@
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from oslo_config import cfg
from oslo_utils import encodeutils
from six.moves import urllib
import requests
from glance_store import capabilities
import glance_store.driver
from glance_store import exceptions
from glance_store.i18n import _, _LI
import glance_store.location
LOG = logging.getLogger(__name__)
MAX_REDIRECTS = 5
_HTTP_OPTS = [
cfg.StrOpt('https_ca_certificates_file',
help=_("""
Path to the CA bundle file.
This configuration option enables the operator to use a custom
Certificate Authority file to verify the remote server certificate. If
this option is set, the ``https_insecure`` option will be ignored and
the CA file specified will be used to authenticate the server
certificate and establish a secure connection to the server.
Possible values:
* A valid path to a CA file
Related options:
* https_insecure
""")),
cfg.BoolOpt('https_insecure',
default=True,
help=_("""
Set verification of the remote server certificate.
This configuration option takes in a boolean value to determine
whether or not to verify the remote server certificate. If set to
True, the remote server certificate is not verified. If the option is
set to False, then the default CA truststore is used for verification.
This option is ignored if ``https_ca_certificates_file`` is set.
The remote server certificate will then be verified using the file
specified using the ``https_ca_certificates_file`` option.
Possible values:
* True
* False
Related options:
* https_ca_certificates_file
""")),
cfg.DictOpt('http_proxy_information',
default={},
help=_("""
The http/https proxy information to be used to connect to the remote
server.
This configuration option specifies the http/https proxy information
that should be used to connect to the remote server. The proxy
information should be a key value pair of the scheme and proxy, for
example, http:10.0.0.1:3128. You can also specify proxies for multiple
schemes by separating the key value pairs with a comma, for example,
http:10.0.0.1:3128, https:10.0.0.1:1080.
Possible values:
* A comma separated list of scheme:proxy pairs as described above
Related options:
* None
"""))]
class StoreLocation(glance_store.location.StoreLocation):
"""Class describing an HTTP(S) URI."""
def process_specs(self):
self.scheme = self.specs.get('scheme', 'http')
self.netloc = self.specs['netloc']
self.user = self.specs.get('user')
self.password = self.specs.get('password')
self.path = self.specs.get('path')
def _get_credstring(self):
if self.user:
return '%s:%s@' % (self.user, self.password)
return ''
def get_uri(self):
return "%s://%s%s%s" % (
self.scheme,
self._get_credstring(),
self.netloc,
self.path)
def parse_uri(self, uri):
"""
Parse URLs. This method fixes an issue where credentials specified
in the URL are interpreted differently in Python 2.6.1+ than prior
versions of Python.
"""
pieces = urllib.parse.urlparse(uri)
assert pieces.scheme in ('https', 'http')
self.scheme = pieces.scheme
netloc = pieces.netloc
path = pieces.path
try:
if '@' in netloc:
creds, netloc = netloc.split('@')
else:
creds = None
except ValueError:
# Python 2.6.1 compat
# see lp659445 and Python issue7904
if '@' in path:
creds, path = path.split('@')
else:
creds = None
if creds:
try:
self.user, self.password = creds.split(':')
except ValueError:
reason = _("Credentials are not well-formatted.")
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
else:
self.user = None
if netloc == '':
LOG.info(_LI("No address specified in HTTP URL"))
raise exceptions.BadStoreUri(uri=uri)
else:
# IPv6 address has the following format [1223:0:0:..]:<some_port>
# make sure the port is validated for both IPv4 and IPv6
delimiter = "]:" if netloc.count(":") > 1 else ":"
host, dlm, port = netloc.partition(delimiter)
# if port is present in location then validate port format
if port and not port.isdigit():
raise exceptions.BadStoreUri(uri=uri)
self.netloc = netloc
self.path = path
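# --- Standalone sketch (example URL only) of the pieces parse_uri()
# extracts: credentials before '@', then an optional port, using the "]:"
# delimiter so IPv6 netlocs are not mis-split.
from six.moves import urllib

_pieces = urllib.parse.urlparse('http://user:secret@203.0.113.5:8080/v2/img-1')
_netloc = _pieces.netloc                    # 'user:secret@203.0.113.5:8080'
_creds, _netloc = _netloc.split('@')
_user, _password = _creds.split(':')
_delim = "]:" if _netloc.count(":") > 1 else ":"
_host, _dlm, _port = _netloc.partition(_delim)
assert (_user, _host, _port) == ('user', '203.0.113.5', '8080')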
def http_response_iterator(conn, response, size):
"""
Return an iterator for a file-like object.
:param conn: HTTP(S) Connection
:param response: urllib3.HTTPResponse object
:param size: Chunk size to iterate with
"""
try:
chunk = response.read(size)
while chunk:
yield chunk
chunk = response.read(size)
finally:
conn.close()
class Store(glance_store.driver.Store):
"""An implementation of the HTTP(S) Backend Adapter"""
_CAPABILITIES = (capabilities.BitMasks.READ_ACCESS |
capabilities.BitMasks.DRIVER_REUSABLE)
OPTIONS = _HTTP_OPTS
@capabilities.check
def get(self, location, offset=0, chunk_size=None, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
"""
try:
conn, resp, content_length = self._query(location, 'GET')
except requests.exceptions.ConnectionError:
reason = _("Remote server where the image is present "
"is unavailable.")
LOG.exception(reason)
raise exceptions.RemoteServiceUnavailable(message=reason)
iterator = http_response_iterator(conn, resp, self.READ_CHUNKSIZE)
class ResponseIndexable(glance_store.Indexable):
def another(self):
try:
return next(self.wrapped)
except StopIteration:
return ''
return (ResponseIndexable(iterator, content_length), content_length)
def get_schemes(self):
return ('http', 'https')
def get_size(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns the size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
"""
conn = None
try:
conn, resp, size = self._query(location, 'HEAD')
except requests.exceptions.ConnectionError as exc:
err_msg = encodeutils.exception_to_unicode(exc)
reason = _("The HTTP URL is invalid: %s") % err_msg
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
finally:
# NOTE(sabari): Close the connection as the request was made with
# stream=True
if conn is not None:
conn.close()
return size
def _query(self, location, verb):
redirects_followed = 0
while redirects_followed < MAX_REDIRECTS:
loc = location.store_location
conn = self._get_response(loc, verb)
# NOTE(sigmavirus24): If it was generally successful, break early
if conn.status_code < 300:
break
self._check_store_uri(conn, loc)
redirects_followed += 1
# NOTE(sigmavirus24): Close the response so we don't leak sockets
conn.close()
location = self._new_location(location, conn.headers['location'])
else:
reason = (_("The HTTP URL exceeded %s maximum "
"redirects.") % MAX_REDIRECTS)
LOG.debug(reason)
raise exceptions.MaxRedirectsExceeded(message=reason)
resp = conn.raw
content_length = int(resp.getheader('content-length', 0))
return (conn, resp, content_length)
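# --- Standalone sketch (toy status codes) of the while/else pattern used
# in _query(): the else branch fires only when the loop exhausts its
# redirect budget without hitting the break taken on a non-redirect
# (< 300) response.
def _demo_follow(statuses, max_redirects=5):
    hops = 0
    while hops < max_redirects:
        if statuses[hops] < 300:
            break
        hops += 1
    else:
        raise RuntimeError('too many redirects')
    return hops

assert _demo_follow([301, 302, 200]) == 2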
def _new_location(self, old_location, url):
store_name = old_location.store_name
store_class = old_location.store_location.__class__
image_id = old_location.image_id
store_specs = old_location.store_specs
return glance_store.location.Location(store_name,
store_class,
self.conf,
uri=url,
image_id=image_id,
store_specs=store_specs)
@staticmethod
def _check_store_uri(conn, loc):
# Check for bad status codes
if conn.status_code >= 400:
if conn.status_code == requests.codes.not_found:
reason = _("HTTP datastore could not find image at URI.")
LOG.debug(reason)
raise exceptions.NotFound(message=reason)
reason = (_("HTTP URL %(url)s returned a "
"%(status)s status code. \nThe response body:\n"
"%(body)s") %
{'url': loc.path, 'status': conn.status_code,
'body': conn.text})
LOG.debug(reason)
raise exceptions.BadStoreUri(message=reason)
if conn.is_redirect and conn.status_code not in (301, 302):
reason = (_("The HTTP URL %(url)s attempted to redirect "
"with an invalid %(status)s status code."),
{'url': loc.path, 'status': conn.status_code})
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
def _get_response(self, location, verb):
if not hasattr(self, 'session'):
self.session = requests.Session()
ca_bundle = self.conf.glance_store.https_ca_certificates_file
disable_https = self.conf.glance_store.https_insecure
self.session.verify = ca_bundle if ca_bundle else not disable_https
self.session.proxies = self.conf.glance_store.http_proxy_information
return self.session.request(verb, location.get_uri(), stream=True,
allow_redirects=False)

View File

@ -1,538 +0,0 @@
# Copyright 2010-2011 Josh Durgin
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage backend for RBD
(RADOS (Reliable Autonomic Distributed Object Store) Block Device)"""
from __future__ import absolute_import
from __future__ import with_statement
import contextlib
import hashlib
import logging
import math
from oslo_config import cfg
from oslo_utils import units
from six.moves import urllib
from glance_store import capabilities
from glance_store.common import utils
from glance_store import driver
from glance_store import exceptions
from glance_store.i18n import _, _LE, _LI
from glance_store import location
try:
import rados
import rbd
except ImportError:
rados = None
rbd = None
DEFAULT_POOL = 'images'
DEFAULT_CONFFILE = '/etc/ceph/ceph.conf'
DEFAULT_USER = None # let librados decide based on the Ceph conf file
DEFAULT_CHUNKSIZE = 8 # in MiB
DEFAULT_SNAPNAME = 'snap'
LOG = logging.getLogger(__name__)
_RBD_OPTS = [
cfg.IntOpt('rbd_store_chunk_size', default=DEFAULT_CHUNKSIZE,
min=1,
help=_("""
Size, in megabytes, to chunk RADOS images into.
Provide an integer value representing the size in megabytes to chunk
Glance images into. The default chunk size is 8 megabytes. For optimal
performance, the value should be a power of two.
When Ceph's RBD object storage system is used as the storage backend
for storing Glance images, the images are chunked into objects of the
size set using this option. These chunked objects are then stored
across the distributed block data store to use for Glance.
Possible Values:
* Any positive integer value
Related options:
* None
""")),
cfg.StrOpt('rbd_store_pool', default=DEFAULT_POOL,
help=_("""
RADOS pool in which images are stored.
When RBD is used as the storage backend for storing Glance images, the
images are stored by means of logical grouping of the objects (chunks
of images) into a ``pool``. Each pool is defined with the number of
placement groups it can contain. The default pool that is used is
'images'.
More information on the RBD storage backend can be found here:
http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
Possible Values:
* A valid pool name
Related options:
* None
""")),
cfg.StrOpt('rbd_store_user', default=DEFAULT_USER,
help=_("""
RADOS user to authenticate as.
This configuration option takes in the RADOS user to authenticate as.
This is only needed when RADOS authentication is enabled and is
applicable only if the user is using Cephx authentication. If the
value for this option is not set by the user or is set to None, a
default value will be chosen, which will be based on the
client.<USER> section in rbd_store_ceph_conf.
Possible Values:
* A valid RADOS user
Related options:
* rbd_store_ceph_conf
""")),
cfg.StrOpt('rbd_store_ceph_conf', default=DEFAULT_CONFFILE,
help=_("""
Ceph configuration file path.
This configuration option takes in the path to the Ceph configuration
file to be used. If the value for this option is not set by the user
or is set to None, librados will locate the default configuration file
which is located at /etc/ceph/ceph.conf. If using Cephx
authentication, this file should include a reference to the right
keyring in a client.<USER> section
Possible Values:
* A valid path to a configuration file
Related options:
* rbd_store_user
""")),
cfg.IntOpt('rados_connect_timeout', default=0,
help=_("""
Timeout value for connecting to Ceph cluster.
This configuration option takes in the timeout value in seconds used
when connecting to the Ceph cluster i.e. it sets the time to wait for
glance-api before closing the connection. This prevents glance-api
hangups during the connection to RBD. If the value for this option
is set to less than or equal to 0, no timeout is set and the default
librados value is used.
Possible Values:
* Any integer value
Related options:
* None
"""))
]
class StoreLocation(location.StoreLocation):
"""
Class describing an RBD URI. This is of the form:
rbd://image
or
rbd://fsid/pool/image/snapshot
"""
def process_specs(self):
# convert to ascii since librbd doesn't handle unicode
for key, value in self.specs.items():
self.specs[key] = str(value)
self.fsid = self.specs.get('fsid')
self.pool = self.specs.get('pool')
self.image = self.specs.get('image')
self.snapshot = self.specs.get('snapshot')
def get_uri(self):
if self.fsid and self.pool and self.snapshot:
# ensure nothing contains / or any other url-unsafe character
safe_fsid = urllib.parse.quote(self.fsid, '')
safe_pool = urllib.parse.quote(self.pool, '')
safe_image = urllib.parse.quote(self.image, '')
safe_snapshot = urllib.parse.quote(self.snapshot, '')
return "rbd://%s/%s/%s/%s" % (safe_fsid, safe_pool,
safe_image, safe_snapshot)
else:
return "rbd://%s" % self.image
def parse_uri(self, uri):
prefix = 'rbd://'
if not uri.startswith(prefix):
reason = _('URI must start with rbd://')
msg = _LI("Invalid URI: %s") % reason
LOG.info(msg)
raise exceptions.BadStoreUri(message=reason)
# convert to ascii since librbd doesn't handle unicode
try:
ascii_uri = str(uri)
except UnicodeError:
reason = _('URI contains non-ascii characters')
msg = _LI("Invalid URI: %s") % reason
LOG.info(msg)
raise exceptions.BadStoreUri(message=reason)
pieces = ascii_uri[len(prefix):].split('/')
if len(pieces) == 1:
self.fsid, self.pool, self.image, self.snapshot = \
(None, None, pieces[0], None)
elif len(pieces) == 4:
self.fsid, self.pool, self.image, self.snapshot = \
map(urllib.parse.unquote, pieces)
else:
reason = _('URI must have exactly 1 or 4 components')
msg = _LI("Invalid URI: %s") % reason
LOG.info(msg)
raise exceptions.BadStoreUri(message=reason)
if any(map(lambda p: p == '', pieces)):
reason = _('URI cannot contain empty components')
msg = _LI("Invalid URI: %s") % reason
LOG.info(msg)
raise exceptions.BadStoreUri(message=reason)
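# --- A small illustration (hypothetical identifiers) of the two accepted
# RBD URI shapes handled by parse_uri() above: a bare image name, or the
# fully qualified fsid/pool/image/snapshot form (unquoting omitted here).
_uri = 'rbd://fsid-1234/images/img-1/snap'
_pieces = _uri[len('rbd://'):].split('/')
assert _pieces == ['fsid-1234', 'images', 'img-1', 'snap']
assert 'rbd://img-1'[len('rbd://'):].split('/') == ['img-1']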
class ImageIterator(object):
"""
Reads data from an RBD image, one chunk at a time.
"""
def __init__(self, pool, name, snapshot, store, chunk_size=None):
self.pool = pool or store.pool
self.name = name
self.snapshot = snapshot
self.user = store.user
self.conf_file = store.conf_file
self.chunk_size = chunk_size or store.READ_CHUNKSIZE
self.store = store
def __iter__(self):
try:
with self.store.get_connection(conffile=self.conf_file,
rados_id=self.user) as conn:
with conn.open_ioctx(self.pool) as ioctx:
with rbd.Image(ioctx, self.name,
snapshot=self.snapshot) as image:
size = image.size()
bytes_left = size
while bytes_left > 0:
length = min(self.chunk_size, bytes_left)
data = image.read(size - bytes_left, length)
bytes_left -= len(data)
yield data
# end the generator with return; raising StopIteration inside a
# generator becomes a RuntimeError under PEP 479
return
except rbd.ImageNotFound:
raise exceptions.NotFound(
_('RBD image %s does not exist') % self.name)
class Store(driver.Store):
"""An implementation of the RBD backend adapter."""
_CAPABILITIES = capabilities.BitMasks.RW_ACCESS
OPTIONS = _RBD_OPTS
EXAMPLE_URL = "rbd://<FSID>/<POOL>/<IMAGE>/<SNAP>"
def get_schemes(self):
return ('rbd',)
@contextlib.contextmanager
def get_connection(self, conffile, rados_id):
client = rados.Rados(conffile=conffile, rados_id=rados_id)
try:
client.connect(timeout=self.connect_timeout)
except rados.Error:
msg = _LE("Error connecting to ceph cluster.")
LOG.exception(msg)
raise exceptions.BackendException()
try:
yield client
finally:
client.shutdown()
def configure_add(self):
"""
Configure the Store to use the stored configuration options
Any store that needs special configuration should implement
this method. If the store was not able to successfully configure
itself, it should raise `exceptions.BadStoreConfiguration`
"""
try:
chunk = self.conf.glance_store.rbd_store_chunk_size
self.chunk_size = chunk * units.Mi
self.READ_CHUNKSIZE = self.chunk_size
self.WRITE_CHUNKSIZE = self.READ_CHUNKSIZE
# these must not be unicode since they will be passed to a
# non-unicode-aware C library
self.pool = str(self.conf.glance_store.rbd_store_pool)
self.user = str(self.conf.glance_store.rbd_store_user)
self.conf_file = str(self.conf.glance_store.rbd_store_ceph_conf)
self.connect_timeout = self.conf.glance_store.rados_connect_timeout
except cfg.ConfigFileValueError as e:
reason = _("Error in store configuration: %s") % e
LOG.error(reason)
raise exceptions.BadStoreConfiguration(store_name='rbd',
reason=reason)
@capabilities.check
def get(self, location, offset=0, chunk_size=None, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
"""
loc = location.store_location
return (ImageIterator(loc.pool, loc.image, loc.snapshot, self),
self.get_size(location))
def get_size(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns the size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
"""
loc = location.store_location
# if there is a pool specified in the location, use it; otherwise
# fall back to the default pool specified in the config
target_pool = loc.pool or self.pool
with self.get_connection(conffile=self.conf_file,
rados_id=self.user) as conn:
with conn.open_ioctx(target_pool) as ioctx:
try:
with rbd.Image(ioctx, loc.image,
snapshot=loc.snapshot) as image:
img_info = image.stat()
return img_info['size']
except rbd.ImageNotFound:
msg = _('RBD image %s does not exist') % loc.get_uri()
LOG.debug(msg)
raise exceptions.NotFound(msg)
def _create_image(self, fsid, conn, ioctx, image_name,
size, order, context=None):
"""
Create an rbd image. If librbd supports it,
make it a cloneable snapshot, so that copy-on-write
volumes can be created from it.
:param image_name: Image's name
:retval: `glance_store.rbd.StoreLocation` object
"""
librbd = rbd.RBD()
features = conn.conf_get('rbd_default_features')
if ((features is None) or (int(features) == 0)):
features = rbd.RBD_FEATURE_LAYERING
librbd.create(ioctx, image_name, size, order, old_format=False,
features=int(features))
return StoreLocation({
'fsid': fsid,
'pool': self.pool,
'image': image_name,
'snapshot': DEFAULT_SNAPNAME,
}, self.conf)
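# --- Sketch of the 'order' argument handed to librbd: RBD expresses
# object size as a power of two, and add() below derives it from the
# write chunk size via int(math.log(size, 2)). For the 8 MiB default
# that is order 23, shown here with exact integer arithmetic.
_WRITE_CHUNKSIZE = 8 * 1024 * 1024
_order = _WRITE_CHUNKSIZE.bit_length() - 1   # exact log2 of a power of two
assert _order == 23 and 2 ** _order == _WRITE_CHUNKSIZE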
def _delete_image(self, target_pool, image_name,
snapshot_name=None, context=None):
"""
Delete RBD image and snapshot.
:param image_name: Image's name
:param snapshot_name: Image snapshot's name
:raises: NotFound if image does not exist;
InUseByStore if image is in use or snapshot unprotect failed
"""
with self.get_connection(conffile=self.conf_file,
rados_id=self.user) as conn:
with conn.open_ioctx(target_pool) as ioctx:
try:
# First remove snapshot.
if snapshot_name is not None:
with rbd.Image(ioctx, image_name) as image:
try:
self._unprotect_snapshot(image, snapshot_name)
image.remove_snap(snapshot_name)
except rbd.ImageNotFound as exc:
msg = (_("Snap Operating Exception "
"%(snap_exc)s "
"Snapshot does not exist.") %
{'snap_exc': exc})
LOG.debug(msg)
except rbd.ImageBusy as exc:
log_msg = (_LE("Snap Operating Exception "
"%(snap_exc)s "
"Snapshot is in use.") %
{'snap_exc': exc})
LOG.error(log_msg)
raise exceptions.InUseByStore()
# Then delete image.
rbd.RBD().remove(ioctx, image_name)
except rbd.ImageHasSnapshots:
log_msg = (_LE("Remove image %(img_name)s failed. "
"It has snapshot(s) left.") %
{'img_name': image_name})
LOG.error(log_msg)
raise exceptions.HasSnapshot()
except rbd.ImageBusy:
log_msg = (_LE("Remove image %(img_name)s failed. "
"It is in use.") %
{'img_name': image_name})
LOG.error(log_msg)
raise exceptions.InUseByStore()
except rbd.ImageNotFound:
msg = _("RBD image %s does not exist") % image_name
raise exceptions.NotFound(message=msg)
def _unprotect_snapshot(self, image, snap_name):
try:
image.unprotect_snap(snap_name)
except rbd.InvalidArgument:
# NOTE(slaweq): if snapshot was unprotected already, rbd library
# raises InvalidArgument exception without any "clear" message.
# Such exception is not dangerous for us so it will be just logged
LOG.debug("Snapshot %s is unprotected already" % snap_name)
@capabilities.check
def add(self, image_id, image_file, image_size, context=None,
verifier=None):
"""
Stores an image file with supplied identifier to the backend
storage system and returns a tuple containing information
about the stored image.
:param image_id: The opaque image identifier
:param image_file: The image data to write, as a file-like object
:param image_size: The size of the image data to write, in bytes
:param verifier: An object used to verify signatures for images
:retval: tuple of URL in backing store, bytes written, checksum
and a dictionary with storage system specific information
:raises: `glance_store.exceptions.Duplicate` if the image already
existed
"""
checksum = hashlib.md5()
image_name = str(image_id)
with self.get_connection(conffile=self.conf_file,
rados_id=self.user) as conn:
fsid = None
if hasattr(conn, 'get_fsid'):
fsid = conn.get_fsid()
with conn.open_ioctx(self.pool) as ioctx:
order = int(math.log(self.WRITE_CHUNKSIZE, 2))
LOG.debug('creating image %s with order %d and size %d',
image_name, order, image_size)
if image_size == 0:
LOG.warning(_("since image size is zero we will be doing "
"resize-before-write for each chunk which "
"will be considerably slower than normal"))
try:
loc = self._create_image(fsid, conn, ioctx, image_name,
image_size, order)
except rbd.ImageExists:
msg = _('RBD image %s already exists') % image_id
raise exceptions.Duplicate(message=msg)
try:
with rbd.Image(ioctx, image_name) as image:
bytes_written = 0
offset = 0
chunks = utils.chunkreadable(image_file,
self.WRITE_CHUNKSIZE)
for chunk in chunks:
# If the image size provided is zero we need to do
# a resize for the amount we are writing. This will
# be slower so setting a higher chunk size may
# speed things up a bit.
if image_size == 0:
chunk_length = len(chunk)
length = offset + chunk_length
bytes_written += chunk_length
LOG.debug(_("resizing image to %s KiB") %
(length / units.Ki))
image.resize(length)
LOG.debug(_("writing chunk at offset %s") %
(offset))
offset += image.write(chunk, offset)
checksum.update(chunk)
if verifier:
verifier.update(chunk)
if loc.snapshot:
image.create_snap(loc.snapshot)
image.protect_snap(loc.snapshot)
except Exception as exc:
log_msg = (_LE("Failed to store image %(img_name)s "
"Store Exception %(store_exc)s") %
{'img_name': image_name,
'store_exc': exc})
LOG.error(log_msg)
# Delete image if one was created
try:
target_pool = loc.pool or self.pool
self._delete_image(target_pool, loc.image,
loc.snapshot)
except exceptions.NotFound:
pass
raise exc
# Make sure we send back the image size whether provided or inferred.
if image_size == 0:
image_size = bytes_written
return (loc.get_uri(), image_size, checksum.hexdigest(), {})
@capabilities.check
def delete(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file to delete.
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: NotFound if image does not exist;
InUseByStore if image is in use or snapshot unprotect failed
"""
loc = location.store_location
target_pool = loc.pool or self.pool
self._delete_image(target_pool, loc.image, loc.snapshot)

View File

@ -1,414 +0,0 @@
# Copyright 2013 Taobao Inc.
# Copyright (C) 2016 Nippon Telegraph and Telephone Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage backend for Sheepdog storage system"""
import hashlib
import logging
import six
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import excutils
from oslo_utils import units
import glance_store
from glance_store import capabilities
from glance_store.common import utils
import glance_store.driver
from glance_store import exceptions
from glance_store.i18n import _
import glance_store.location
LOG = logging.getLogger(__name__)
DEFAULT_ADDR = '127.0.0.1'
DEFAULT_PORT = 7000
DEFAULT_CHUNKSIZE = 64 # in MiB
_SHEEPDOG_OPTS = [
cfg.IntOpt('sheepdog_store_chunk_size',
min=1,
default=DEFAULT_CHUNKSIZE,
help=_("""
Chunk size for images to be stored in Sheepdog data store.
Provide an integer value representing the size in mebibytes
(1048576 bytes) to chunk Glance images into. The default
chunk size is 64 mebibytes.
When using Sheepdog distributed storage system, the images are
chunked into objects of this size and then stored across the
distributed data store to use for Glance.
Chunk sizes, if a power of two, help avoid fragmentation and
enable improved performance.
Possible values:
* Positive integer value representing size in mebibytes.
Related Options:
* None
""")),
cfg.PortOpt('sheepdog_store_port',
default=DEFAULT_PORT,
help=_("""
Port number on which the sheep daemon will listen.
Provide an integer value representing a valid port number on
which you want the Sheepdog daemon to listen. The default
port is 7000.
The Sheepdog daemon, also called 'sheep', manages the storage
in the distributed cluster by writing objects across the storage
network. It identifies and acts on the messages it receives on
the port number set using ``sheepdog_store_port`` option to store
chunks of Glance images.
Possible values:
* A valid port number (0 to 65535)
Related Options:
* sheepdog_store_address
""")),
cfg.HostAddressOpt('sheepdog_store_address',
default=DEFAULT_ADDR,
help=_("""
Address to bind the Sheepdog daemon to.
Provide a string value representing the address to bind the
Sheepdog daemon to. The default address set for the 'sheep'
is 127.0.0.1.
The Sheepdog daemon, also called 'sheep', manages the storage
in the distributed cluster by writing objects across the storage
network. It identifies and acts on the messages directed to the
address set using ``sheepdog_store_address`` option to store
chunks of Glance images.
Possible values:
* A valid IPv4 address
* A valid IPv6 address
* A valid hostname
Related Options:
* sheepdog_store_port
"""))
]
class SheepdogImage(object):
"""Class describing an image stored in Sheepdog storage."""
def __init__(self, addr, port, name, chunk_size):
self.addr = addr
self.port = port
self.name = name
self.chunk_size = chunk_size
def _run_command(self, command, data, *params):
cmd = ['collie', 'vdi']
cmd.extend(command.split(' '))
cmd.extend(['-a', self.addr, '-p', self.port, self.name])
cmd.extend(params)
try:
return processutils.execute(
*cmd, process_input=data)[0]
except processutils.ProcessExecutionError as exc:
LOG.error(exc)
raise glance_store.BackendException(exc)
def get_size(self):
"""
Return the size of this image.
Sheepdog Usage: collie vdi list -r -a address -p port image
"""
out = self._run_command("list -r", None)
return int(out.split(' ')[3])
def read(self, offset, count):
"""
Read up to 'count' bytes from this image starting at 'offset' and
return the data.
Sheepdog Usage: collie vdi read -a address -p port image offset len
"""
return self._run_command("read", None, str(offset), str(count))
def write(self, data, offset, count):
"""
Write up to 'count' bytes from the data to this image starting at
'offset'
Sheepdog Usage: collie vdi write -a address -p port image offset len
"""
self._run_command("write", data, str(offset), str(count))
def create(self, size):
"""
Create this image in the Sheepdog cluster with size 'size'.
Sheepdog Usage: collie vdi create -a address -p port image size
"""
if not isinstance(size, (six.integer_types, float)):
raise exceptions.Forbidden("Size is not a number")
self._run_command("create", None, str(size))
def resize(self, size):
"""Resize this image in the Sheepdog cluster with size 'size'.
Sheepdog Usage: collie vdi resize -a address -p port image size
"""
self._run_command("resize", None, str(size))
def delete(self):
"""
Delete this image in the Sheepdog cluster
Sheepdog Usage: collie vdi delete -a address -p port image
"""
self._run_command("delete", None)
def exist(self):
"""
Check if this image exists in the Sheepdog cluster via 'list' command
Sheepdog Usage: collie vdi list -r -a address -p port image
"""
out = self._run_command("list -r", None)
if not out:
return False
else:
return True
class StoreLocation(glance_store.location.StoreLocation):
"""
Class describing a Sheepdog URI. This is of the form:
sheepdog://addr:port:image
"""
def process_specs(self):
self.image = self.specs.get('image')
self.addr = self.specs.get('addr')
self.port = self.specs.get('port')
def get_uri(self):
return "sheepdog://%(addr)s:%(port)d:%(image)s" % {
'addr': self.addr,
'port': self.port,
'image': self.image}
def parse_uri(self, uri):
valid_schema = 'sheepdog://'
if not uri.startswith(valid_schema):
reason = _("URI must start with '%s'") % valid_schema
raise exceptions.BadStoreUri(message=reason)
pieces = uri[len(valid_schema):].split(':')
if len(pieces) == 3:
self.image = pieces[2]
self.port = int(pieces[1])
self.addr = pieces[0]
# This is used for backwards compatibility.
else:
self.image = pieces[0]
self.port = self.conf.glance_store.sheepdog_store_port
self.addr = self.conf.glance_store.sheepdog_store_address
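# --- Standalone sketch (made-up address) of the sheepdog URI layout
# parsed above: 'sheepdog://addr:port:image', with a bare image name
# accepted for backwards compatibility.
_pieces = 'sheepdog://127.0.0.1:7000:img-1'[len('sheepdog://'):].split(':')
assert _pieces == ['127.0.0.1', '7000', 'img-1']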
class ImageIterator(object):
"""
Reads data from a Sheepdog image, one chunk at a time.
"""
def __init__(self, image):
self.image = image
def __iter__(self):
image = self.image
total = left = image.get_size()
while left > 0:
length = min(image.chunk_size, left)
data = image.read(total - left, length)
left -= len(data)
yield data
# end the generator with return; raising StopIteration inside a
# generator becomes a RuntimeError under PEP 479
return
class Store(glance_store.driver.Store):
"""Sheepdog backend adapter."""
_CAPABILITIES = (capabilities.BitMasks.RW_ACCESS |
capabilities.BitMasks.DRIVER_REUSABLE)
OPTIONS = _SHEEPDOG_OPTS
EXAMPLE_URL = "sheepdog://addr:port:image"
def get_schemes(self):
return ('sheepdog',)
def configure_add(self):
"""
Configure the Store to use the stored configuration options
Any store that needs special configuration should implement
this method. If the store was not able to successfully configure
itself, it should raise `exceptions.BadStoreConfiguration`
"""
try:
chunk_size = self.conf.glance_store.sheepdog_store_chunk_size
self.chunk_size = chunk_size * units.Mi
self.READ_CHUNKSIZE = self.chunk_size
self.WRITE_CHUNKSIZE = self.READ_CHUNKSIZE
self.addr = self.conf.glance_store.sheepdog_store_address
self.port = self.conf.glance_store.sheepdog_store_port
except cfg.ConfigFileValueError as e:
reason = _("Error in store configuration: %s") % e
LOG.error(reason)
raise exceptions.BadStoreConfiguration(store_name='sheepdog',
reason=reason)
try:
processutils.execute("collie")
except processutils.ProcessExecutionError as exc:
reason = _("Error in store configuration: %s") % exc
LOG.error(reason)
raise exceptions.BadStoreConfiguration(store_name='sheepdog',
reason=reason)
@capabilities.check
def get(self, location, offset=0, chunk_size=None, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a generator for reading
the image file
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
"""
loc = location.store_location
image = SheepdogImage(loc.addr, loc.port, loc.image,
self.READ_CHUNKSIZE)
if not image.exist():
raise exceptions.NotFound(_("Sheepdog image %s does not exist")
% image.name)
return (ImageIterator(image), image.get_size())
def get_size(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file and returns the image size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
:rtype: int
"""
loc = location.store_location
image = SheepdogImage(loc.addr, loc.port, loc.image,
self.READ_CHUNKSIZE)
if not image.exist():
raise exceptions.NotFound(_("Sheepdog image %s does not exist")
% image.name)
return image.get_size()
@capabilities.check
def add(self, image_id, image_file, image_size, context=None,
verifier=None):
"""
Stores an image file with supplied identifier to the backend
storage system and returns a tuple containing information
about the stored image.
:param image_id: The opaque image identifier
:param image_file: The image data to write, as a file-like object
:param image_size: The size of the image data to write, in bytes
:param verifier: An object used to verify signatures for images
:retval: tuple of URL in backing store, bytes written, and checksum
:raises: `glance_store.exceptions.Duplicate` if the image already
existed
"""
image = SheepdogImage(self.addr, self.port, image_id,
self.WRITE_CHUNKSIZE)
if image.exist():
raise exceptions.Duplicate(_("Sheepdog image %s already exists")
% image_id)
location = StoreLocation({
'image': image_id,
'addr': self.addr,
'port': self.port
}, self.conf)
image.create(image_size)
try:
offset = 0
checksum = hashlib.md5()
chunks = utils.chunkreadable(image_file, self.WRITE_CHUNKSIZE)
for chunk in chunks:
chunk_length = len(chunk)
# If the image size provided is zero we need to do
# a resize for the amount we are writing. This will
# be slower so setting a higher chunk size may
# speed things up a bit.
if image_size == 0:
image.resize(offset + chunk_length)
image.write(chunk, offset, chunk_length)
offset += chunk_length
checksum.update(chunk)
if verifier:
verifier.update(chunk)
except Exception:
# Note(zhiyan): clean up already received data when
# error occurs such as ImageSizeLimitExceeded exceptions.
with excutils.save_and_reraise_exception():
image.delete()
return (location.get_uri(), offset, checksum.hexdigest(), {})
@capabilities.check
def delete(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file to delete
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: NotFound if image does not exist
"""
loc = location.store_location
image = SheepdogImage(loc.addr, loc.port, loc.image,
self.WRITE_CHUNKSIZE)
if not image.exist():
raise exceptions.NotFound(_("Sheepdog image %s does not exist") %
loc.image)
image.delete()

View File

@ -1,17 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from glance_store._drivers.swift import utils # noqa
from glance_store._drivers.swift.store import * # noqa

View File

@ -1,207 +0,0 @@
# Copyright 2010-2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Connection Manager for Swift connections that responsible for providing
connection with valid credentials and updated token"""
import logging
from oslo_utils import encodeutils
from glance_store import exceptions
from glance_store.i18n import _, _LI
LOG = logging.getLogger(__name__)
class SwiftConnectionManager(object):
"""Connection Manager class responsible for initializing and managing
swiftclient connections in store. The instance of that class can provide
swift connections with a valid(and refreshed) user token if the token is
going to expire soon.
"""
AUTH_HEADER_NAME = 'X-Auth-Token'
def __init__(self, store, store_location, context=None,
allow_reauth=False):
"""Initialize manager with parameters required to establish connection.
Initialize store and prepare it for interacting with swift. Also
initialize keystone client that need to be used for authentication if
allow_reauth is True.
The method invariant is the following: if method was executed
successfully and self.allow_reauth is True users can safely request
valid(no expiration) swift connections any time. Otherwise, connection
manager initialize a connection once and always returns that connection
to users.
:param store: store that provides connections
:param store_location: image location in store
:param context: user context to access data in Swift
:param allow_reauth: defines if re-authentication need to be executed
when a user request the connection
"""
self._client = None
self.store = store
self.location = store_location
self.context = context
self.allow_reauth = allow_reauth
self.storage_url = self._get_storage_url()
self.connection = self._init_connection()
def get_connection(self):
"""Get swift client connection.
Returns a swift client connection. If allow_reauth is True and the
connection token is going to expire soon, the method returns an
updated connection.
The method invariant is the following: if self.allow_reauth is False
then the method returns the same connection for every call, so the
connection may expire. If self.allow_reauth is True the returned
swift connection is always valid and cannot expire for at least
swift_store_expire_soon_interval.
"""
if self.allow_reauth:
# we refresh the token only if connection manager
# re-authentication is allowed. Token refreshing is set up by
# connection manager users. We also disable re-authentication
# if there is no way to execute it (trusts cannot be initialized
# for multi-tenant, or auth_version is not 3)
auth_ref = self.client.session.auth.get_auth_ref(
self.client.session)
# if the connection token is going to expire soon (keystone
# checks whether the token is about to expire or has expired)
if auth_ref.will_expire_soon(
self.store.conf.glance_store.swift_store_expire_soon_interval
):
LOG.info(_LI("Requesting new token for swift connection."))
# request new token with session and client provided by store
auth_token = self.client.session.get_auth_headers().get(
self.AUTH_HEADER_NAME)
LOG.info(_LI("Token has been successfully requested. "
"Refreshing swift connection."))
# initialize a new swiftclient connection with the fresh token
self.connection = self.store.get_store_connection(
auth_token, self.storage_url)
return self.connection
@property
def client(self):
"""Return keystone client to request a new token.
Initialize a client lazily from the method provided by glance_store.
The method invariant is the following: if the client cannot be
initialized, raise an exception; otherwise return an initialized
client that can be used for re-authentication at any time.
"""
if self._client is None:
self._client = self._init_client()
return self._client
def _init_connection(self):
"""Initialize and return valid Swift connection."""
auth_token = self.client.session.get_auth_headers().get(
self.AUTH_HEADER_NAME)
return self.store.get_store_connection(
auth_token, self.storage_url)
def _init_client(self):
"""Initialize Keystone client."""
return self.store.init_client(location=self.location,
context=self.context)
def _get_storage_url(self):
"""Request swift storage url."""
raise NotImplementedError()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
pass
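# --- Minimal standalone sketch of the refresh decision in
# get_connection(): a new connection is built only when reauth is allowed
# and the token will expire soon. _DemoAuthRef is a stand-in double, not
# a real keystoneauth object.
class _DemoAuthRef(object):
    def __init__(self, expiring):
        self.expiring = expiring

    def will_expire_soon(self, interval):
        return self.expiring

def _demo_needs_refresh(allow_reauth, auth_ref, interval=60):
    return allow_reauth and auth_ref.will_expire_soon(interval)

assert _demo_needs_refresh(True, _DemoAuthRef(True)) is True
assert _demo_needs_refresh(True, _DemoAuthRef(False)) is False
assert _demo_needs_refresh(False, _DemoAuthRef(True)) is False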
class SingleTenantConnectionManager(SwiftConnectionManager):
def _get_storage_url(self):
"""Get swift endpoint from keystone
Return endpoint for swift from service catalog. The method works only
Keystone v3. If you are using different version (1 or 2)
it returns None.
:return: swift endpoint
"""
if self.store.auth_version == '3':
try:
return self.client.session.get_endpoint(
service_type=self.store.service_type,
interface=self.store.endpoint_type,
region_name=self.store.region
)
except Exception as e:
# do the same that swift driver does
# when catching ClientException
msg = _("Cannot find swift service endpoint : "
"%s") % encodeutils.exception_to_unicode(e)
raise exceptions.BackendException(msg)
def _init_connection(self):
if self.store.auth_version == '3':
return super(SingleTenantConnectionManager,
self)._init_connection()
else:
# no re-authentication for v1 and v2
self.allow_reauth = False
# use good old connection initialization
return self.store.get_connection(self.location, self.context)
class MultiTenantConnectionManager(SwiftConnectionManager):
def __init__(self, store, store_location, context=None,
allow_reauth=False):
# no context - no party
if context is None:
reason = _("Multi-tenant Swift storage requires a user context.")
raise exceptions.BadStoreConfiguration(store_name="swift",
reason=reason)
super(MultiTenantConnectionManager, self).__init__(
store, store_location, context, allow_reauth)
def __exit__(self, exc_type, exc_val, exc_tb):
if self._client and self.client.trust_id:
# client has been initialized - need to cleanup resources
LOG.info(_LI("Revoking trust %s"), self.client.trust_id)
self.client.trusts.delete(self.client.trust_id)
def _get_storage_url(self):
return self.location.swift_url
def _init_connection(self):
if self.allow_reauth:
try:
return super(MultiTenantConnectionManager,
self)._init_connection()
except Exception as e:
LOG.debug("Cannot initialize swift connection for multi-tenant"
" store with trustee token: %s. Using user token for"
" connection initialization.", e)
# for multi-tenant store we have a token, so we can use it
# for connection initialization but we cannot fetch new token
# with client
self.allow_reauth = False
return self.store.get_store_connection(
self.context.auth_token, self.storage_url)

File diff suppressed because it is too large

View File

@ -1,186 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import sys
from oslo_config import cfg
from six.moves import configparser
from glance_store import exceptions
from glance_store.i18n import _, _LE
swift_opts = [
cfg.StrOpt('default_swift_reference',
default="ref1",
help=_("""
Reference to default Swift account/backing store parameters.
Provide a string value representing a reference to the default set
of parameters required for using swift account/backing store for
image storage. The default reference value for this configuration
option is 'ref1'. This configuration option dereferences the
parameters and facilitates image storage in Swift storage backend
every time a new image is added.
Possible values:
* A valid string value
Related options:
* None
""")),
cfg.StrOpt('swift_store_auth_version', default='2',
help=_('Version of the authentication service to use. '
'Valid versions are 2 and 3 for keystone and 1 '
'(deprecated) for swauth and rackspace.'),
deprecated_for_removal=True,
deprecated_reason=_("""
The option 'auth_version' in the Swift back-end configuration file is
used instead.
""")),
cfg.StrOpt('swift_store_auth_address',
help=_('The address where the Swift authentication '
'service is listening.'),
deprecated_for_removal=True,
deprecated_reason=_("""
The option 'auth_address' in the Swift back-end configuration file is
used instead.
""")),
cfg.StrOpt('swift_store_user', secret=True,
help=_('The user to authenticate against the Swift '
'authentication service.'),
deprecated_for_removal=True,
deprecated_reason=_("""
The option 'user' in the Swift back-end configuration file is set instead.
""")),
cfg.StrOpt('swift_store_key', secret=True,
help=_('Auth key for the user authenticating against the '
'Swift authentication service.'),
deprecated_for_removal=True,
deprecated_reason=_("""
The option 'key' in the Swift back-end configuration file is used
to set the authentication key instead.
""")),
cfg.StrOpt('swift_store_config_file',
default=None,
help=_("""
Absolute path to the file containing the swift account(s)
configurations.
Include a string value representing the path to a configuration
file that has references for each of the configured Swift
account(s)/backing stores. By default, no file path is specified
and customized Swift referencing is disabled. Configuring this
option is highly recommended while using Swift storage backend for
image storage as it avoids storage of credentials in the database.
NOTE: Please do not configure this option if you have set
``swift_store_multi_tenant`` to ``True``.
Possible values:
* String value representing an absolute path on the glance-api
node
Related options:
* swift_store_multi_tenant
""")),
]
_config_defaults = {'user_domain_id': 'default',
'user_domain_name': None,
'project_domain_id': 'default',
'project_domain_name': None}
if sys.version_info >= (3, 2):
CONFIG = configparser.ConfigParser(defaults=_config_defaults)
else:
CONFIG = configparser.SafeConfigParser(defaults=_config_defaults)
LOG = logging.getLogger(__name__)
def is_multiple_swift_store_accounts_enabled(conf):
return conf.glance_store.swift_store_config_file is not None
class SwiftParams(object):
def __init__(self, conf):
self.conf = conf
if is_multiple_swift_store_accounts_enabled(self.conf):
self.params = self._load_config()
else:
self.params = self._form_default_params()
def _form_default_params(self):
default = {}
if (
self.conf.glance_store.swift_store_user and
self.conf.glance_store.swift_store_key and
self.conf.glance_store.swift_store_auth_address
):
glance_store = self.conf.glance_store
default['user'] = glance_store.swift_store_user
default['key'] = glance_store.swift_store_key
default['auth_address'] = glance_store.swift_store_auth_address
default['project_domain_id'] = 'default'
default['project_domain_name'] = None
default['user_domain_id'] = 'default'
default['user_domain_name'] = None
default['auth_version'] = glance_store.swift_store_auth_version
return {glance_store.default_swift_reference: default}
return {}
def _load_config(self):
try:
scf = self.conf.glance_store.swift_store_config_file
conf_file = self.conf.find_file(scf)
CONFIG.read(conf_file)
except Exception as e:
msg = (_("swift config file "
"%(conf)s:%(exc)s not found"),
{'conf': self.conf.glance_store.swift_store_config_file,
'exc': e})
LOG.error(msg)
raise exceptions.BadStoreConfiguration(store_name='swift',
reason=msg)
account_params = {}
account_references = CONFIG.sections()
for ref in account_references:
reference = {}
try:
for param in ('auth_address',
'user',
'key',
'project_domain_id',
'project_domain_name',
'user_domain_id',
'user_domain_name'):
reference[param] = CONFIG.get(ref, param)
try:
reference['auth_version'] = CONFIG.get(ref, 'auth_version')
except configparser.NoOptionError:
av = self.conf.glance_store.swift_store_auth_version
reference['auth_version'] = av
account_params[ref] = reference
except (ValueError, SyntaxError, configparser.NoOptionError) as e:
LOG.exception(_LE("Invalid format of swift store config cfg"))
return account_params
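def _example_lookup(conf):
    # NOTE(editor): hedged usage sketch, not part of the original module.
    # Assumes ``conf`` is an oslo.config ConfigOpts object with the
    # glance_store options already registered. Returns the credential
    # dict for the configured default reference, or {} if it is missing.
    params = SwiftParams(conf).params
    return params.get(conf.glance_store.default_swift_reference, {})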


@ -1,780 +0,0 @@
# Copyright 2014 OpenStack, LLC
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage backend for VMware Datastore"""
import hashlib
import logging
import os
from oslo_config import cfg
from oslo_utils import excutils
from oslo_utils import netutils
from oslo_utils import units
try:
from oslo_vmware import api
import oslo_vmware.exceptions as vexc
from oslo_vmware.objects import datacenter as oslo_datacenter
from oslo_vmware.objects import datastore as oslo_datastore
from oslo_vmware import vim_util
except ImportError:
api = None
from six.moves import urllib
import six.moves.urllib.parse as urlparse
import requests
from requests import adapters
from requests.packages.urllib3.util import retry
import six
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
import glance_store
from glance_store import capabilities
from glance_store.common import utils
from glance_store import exceptions
from glance_store.i18n import _, _LE
from glance_store import location
LOG = logging.getLogger(__name__)
CHUNKSIZE = 1024 * 64 # 64kB
MAX_REDIRECTS = 5
DEFAULT_STORE_IMAGE_DIR = '/openstack_glance'
DS_URL_PREFIX = '/folder'
STORE_SCHEME = 'vsphere'
_VMWARE_OPTS = [
cfg.HostAddressOpt('vmware_server_host',
sample_default='127.0.0.1',
help=_("""
Address of the ESX/ESXi or vCenter Server target system.
This configuration option sets the address of the ESX/ESXi or vCenter
Server target system. This option is required when using the VMware
storage backend. The address can contain an IP address (127.0.0.1) or
a DNS name (www.my-domain.com).
Possible Values:
* A valid IPv4 or IPv6 address
* A valid DNS name
Related options:
* vmware_server_username
* vmware_server_password
""")),
cfg.StrOpt('vmware_server_username',
sample_default='root',
help=_("""
Server username.
This configuration option takes the username for authenticating with
the VMware ESX/ESXi or vCenter Server. This option is required when
using the VMware storage backend.
Possible Values:
* Any string that is the username for a user with appropriate
privileges
Related options:
* vmware_server_host
* vmware_server_password
""")),
cfg.StrOpt('vmware_server_password',
sample_default='vmware',
help=_("""
Server password.
This configuration option takes the password for authenticating with
the VMware ESX/ESXi or vCenter Server. This option is required when
using the VMware storage backend.
Possible Values:
* Any string that is a password corresponding to the username
specified using the "vmware_server_username" option
Related options:
* vmware_server_host
* vmware_server_username
"""),
secret=True),
cfg.IntOpt('vmware_api_retry_count',
default=10,
min=1,
help=_("""
The number of VMware API retries.
This configuration option specifies the number of times the VMware
ESX/VC server API must be retried upon connection related issues or
server API call overload. It is not possible to specify 'retry
forever'.
Possible Values:
* Any positive integer value
Related options:
* None
""")),
cfg.IntOpt('vmware_task_poll_interval',
default=5,
min=1,
help=_("""
Interval in seconds used for polling remote tasks invoked on VMware
ESX/VC server.
This configuration option takes the sleep time in seconds used when
polling an on-going async task as part of a VMware ESX/VC server API call.
Possible Values:
* Any positive integer value
Related options:
* None
""")),
cfg.StrOpt('vmware_store_image_dir',
default=DEFAULT_STORE_IMAGE_DIR,
help=_("""
The directory where the glance images will be stored in the datastore.
This configuration option specifies the path to the directory where the
glance images will be stored in the VMware datastore. If this option
is not set, the default directory where the glance images are stored
is openstack_glance.
Possible Values:
* Any string that is a valid path to a directory
Related options:
* None
""")),
cfg.BoolOpt('vmware_insecure',
default=False,
deprecated_name='vmware_api_insecure',
help=_("""
Set verification of the ESX/vCenter server certificate.
This configuration option takes a boolean value to determine
whether or not to verify the ESX/vCenter server certificate. If this
option is set to True, the ESX/vCenter server certificate is not
verified. If this option is set to False, then the default CA
truststore is used for verification.
This option is ignored if the "vmware_ca_file" option is set. In that
case, the ESX/vCenter server certificate will be verified using the
file specified via the "vmware_ca_file" option.
Possible Values:
* True
* False
Related options:
* vmware_ca_file
""")),
cfg.StrOpt('vmware_ca_file',
sample_default='/etc/ssl/certs/ca-certificates.crt',
help=_("""
Absolute path to the CA bundle file.
This configuration option enables the operator to use a custom
Certificate Authority file to verify the ESX/vCenter certificate.
If this option is set, the "vmware_insecure" option will be ignored
and the CA file specified will be used to authenticate the ESX/vCenter
server certificate and establish a secure connection to the server.
Possible Values:
* Any string that is a valid absolute path to a CA file
Related options:
* vmware_insecure
""")),
cfg.MultiStrOpt(
'vmware_datastores',
help=_("""
The datastores where the image can be stored.
This configuration option specifies the datastores where the image can
be stored in the VMWare store backend. This option may be specified
multiple times for specifying multiple datastores. The datastore name
should be specified after its datacenter path, separated by ":". An
optional weight may be given after the datastore name, separated again
by ":" to specify the priority. Thus, the required format becomes
<datacenter_path>:<datastore_name>:<optional_weight>.
When adding an image, the datastore with highest weight will be
selected, unless there is not enough free space available in cases
where the image size is already known. If no weight is given, it is
assumed to be zero and the datastore will be considered for selection
last. If multiple datastores have the same weight, then the one with
the most free space available is selected.
Possible Values:
* Any string of the format:
<datacenter_path>:<datastore_name>:<optional_weight>
Related options:
* None
"""))]
def http_response_iterator(conn, response, size):
"""Return an iterator for a file-like object.
:param conn: HTTP(S) Connection
:param response: http_client.HTTPResponse object
:param size: Chunk size to iterate with
"""
try:
chunk = response.read(size)
while chunk:
yield chunk
chunk = response.read(size)
finally:
conn.close()
class _Reader(object):
def __init__(self, data, verifier=None):
self._size = 0
self.data = data
self.checksum = hashlib.md5()
self.verifier = verifier
def read(self, size=None):
result = self.data.read(size)
self._size += len(result)
self.checksum.update(result)
if self.verifier:
self.verifier.update(result)
return result
@property
def size(self):
return self._size
class StoreLocation(location.StoreLocation):
"""Class describing an VMware URI.
An VMware URI can look like any of the following:
vsphere://server_host/folder/file_path?dcPath=dc_path&dsName=ds_name
"""
def __init__(self, store_specs, conf):
super(StoreLocation, self).__init__(store_specs, conf)
self.datacenter_path = None
self.datastore_name = None
def process_specs(self):
self.scheme = self.specs.get('scheme', STORE_SCHEME)
self.server_host = self.specs.get('server_host')
self.path = os.path.join(DS_URL_PREFIX,
self.specs.get('image_dir').strip('/'),
self.specs.get('image_id'))
self.datacenter_path = self.specs.get('datacenter_path')
        self.datastore_name = self.specs.get('datastore_name')
        param_list = {'dsName': self.datastore_name}
if self.datacenter_path:
param_list['dcPath'] = self.datacenter_path
self.query = urllib.parse.urlencode(param_list)
def get_uri(self):
if netutils.is_valid_ipv6(self.server_host):
base_url = '%s://[%s]%s' % (self.scheme,
self.server_host, self.path)
else:
base_url = '%s://%s%s' % (self.scheme,
self.server_host, self.path)
return '%s?%s' % (base_url, self.query)
# NOTE(flaper87): Commenting out for now, it's probably better to do
# it during image add/get. This validation relies on a config param
# which doesn't make sense to have in the StoreLocation instance.
# def _is_valid_path(self, path):
# sdir = self.conf.glance_store.vmware_store_image_dir.strip('/')
# return path.startswith(os.path.join(DS_URL_PREFIX, sdir))
def parse_uri(self, uri):
if not uri.startswith('%s://' % STORE_SCHEME):
reason = (_("URI %(uri)s must start with %(scheme)s://") %
{'uri': uri, 'scheme': STORE_SCHEME})
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
(self.scheme, self.server_host,
path, params, query, fragment) = urllib.parse.urlparse(uri)
if not query:
path, query = path.split('?')
self.path = path
self.query = query
# NOTE(flaper87): Read comment on `_is_valid_path`
# reason = 'Badly formed VMware datastore URI %(uri)s.' % {'uri': uri}
# LOG.debug(reason)
# raise exceptions.BadStoreUri(reason)
parts = urllib.parse.parse_qs(self.query)
dc_path = parts.get('dcPath')
if dc_path:
self.datacenter_path = dc_path[0]
ds_name = parts.get('dsName')
if ds_name:
self.datastore_name = ds_name[0]
@property
def https_url(self):
"""
        Creates an https url that can be used to upload/download data from a
vmware store.
"""
parsed_url = urlparse.urlparse(self.get_uri())
new_url = parsed_url._replace(scheme='https')
return urlparse.urlunparse(new_url)
class Store(glance_store.Store):
"""An implementation of the VMware datastore adapter."""
_CAPABILITIES = (capabilities.BitMasks.RW_ACCESS |
capabilities.BitMasks.DRIVER_REUSABLE)
OPTIONS = _VMWARE_OPTS
WRITE_CHUNKSIZE = units.Mi
def __init__(self, conf):
super(Store, self).__init__(conf)
self.datastores = {}
def reset_session(self):
self.session = api.VMwareAPISession(
self.server_host, self.server_username, self.server_password,
self.api_retry_count, self.tpoll_interval,
cacert=self.ca_file,
insecure=self.api_insecure)
return self.session
def get_schemes(self):
return (STORE_SCHEME,)
def _sanity_check(self):
if self.conf.glance_store.vmware_api_retry_count <= 0:
msg = _('vmware_api_retry_count should be greater than zero')
LOG.error(msg)
raise exceptions.BadStoreConfiguration(
store_name='vmware_datastore', reason=msg)
if self.conf.glance_store.vmware_task_poll_interval <= 0:
msg = _('vmware_task_poll_interval should be greater than zero')
LOG.error(msg)
raise exceptions.BadStoreConfiguration(
store_name='vmware_datastore', reason=msg)
def configure(self, re_raise_bsc=False):
self._sanity_check()
self.scheme = STORE_SCHEME
self.server_host = self._option_get('vmware_server_host')
self.server_username = self._option_get('vmware_server_username')
self.server_password = self._option_get('vmware_server_password')
self.api_retry_count = self.conf.glance_store.vmware_api_retry_count
self.tpoll_interval = self.conf.glance_store.vmware_task_poll_interval
self.ca_file = self.conf.glance_store.vmware_ca_file
self.api_insecure = self.conf.glance_store.vmware_insecure
if api is None:
msg = _("Missing dependencies: oslo_vmware")
raise exceptions.BadStoreConfiguration(
store_name="vmware_datastore", reason=msg)
self.session = self.reset_session()
super(Store, self).configure(re_raise_bsc=re_raise_bsc)
def _get_datacenter(self, datacenter_path):
search_index_moref = self.session.vim.service_content.searchIndex
dc_moref = self.session.invoke_api(
self.session.vim,
'FindByInventoryPath',
search_index_moref,
inventoryPath=datacenter_path)
dc_name = datacenter_path.rsplit('/', 1)[-1]
# TODO(sabari): Add datacenter_path attribute in oslo.vmware
dc_obj = oslo_datacenter.Datacenter(ref=dc_moref, name=dc_name)
dc_obj.path = datacenter_path
return dc_obj
def _get_datastore(self, datacenter_path, datastore_name):
dc_obj = self._get_datacenter(datacenter_path)
datastore_ret = self.session.invoke_api(
vim_util, 'get_object_property', self.session.vim, dc_obj.ref,
'datastore')
if datastore_ret:
datastore_refs = datastore_ret.ManagedObjectReference
for ds_ref in datastore_refs:
ds_obj = oslo_datastore.get_datastore_by_ref(self.session,
ds_ref)
if ds_obj.name == datastore_name:
ds_obj.datacenter = dc_obj
return ds_obj
def _get_freespace(self, ds_obj):
# TODO(sabari): Move this function into oslo_vmware's datastore object.
return self.session.invoke_api(
vim_util, 'get_object_property', self.session.vim, ds_obj.ref,
'summary.freeSpace')
def _parse_datastore_info_and_weight(self, datastore):
weight = 0
parts = [part.strip() for part in datastore.rsplit(":", 2)]
if len(parts) < 2:
msg = _('vmware_datastores format must be '
'datacenter_path:datastore_name:weight or '
'datacenter_path:datastore_name')
LOG.error(msg)
raise exceptions.BadStoreConfiguration(
store_name='vmware_datastore', reason=msg)
if len(parts) == 3 and parts[2]:
weight = parts[2]
if not weight.isdigit():
msg = (_('Invalid weight value %(weight)s in '
'vmware_datastores configuration') %
{'weight': weight})
LOG.exception(msg)
raise exceptions.BadStoreConfiguration(
store_name="vmware_datastore", reason=msg)
datacenter_path, datastore_name = parts[0], parts[1]
if not datacenter_path or not datastore_name:
msg = _('Invalid datacenter_path or datastore_name specified '
'in vmware_datastores configuration')
LOG.exception(msg)
raise exceptions.BadStoreConfiguration(
store_name="vmware_datastore", reason=msg)
return datacenter_path, datastore_name, weight
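    # NOTE(editor): worked example, not part of the original code. For the
    # entry 'dc1/cluster1:ds-fast:200' the method above returns
    # ('dc1/cluster1', 'ds-fast', '200'); for 'dc1:ds-bulk' it returns
    # ('dc1', 'ds-bulk', 0), since the weight defaults to zero.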
def _build_datastore_weighted_map(self, datastores):
"""Build an ordered map where the key is a weight and the value is a
Datastore object.
        :param datastores: a list of datastores in the format
            datacenter_path:datastore_name:weight
        :return: a map with key-value <weight>:<list of Datastore objects>
"""
ds_map = {}
for ds in datastores:
dc_path, name, weight = self._parse_datastore_info_and_weight(ds)
# Fetch the server side reference.
ds_obj = self._get_datastore(dc_path, name)
if not ds_obj:
msg = (_("Could not find datastore %(ds_name)s "
"in datacenter %(dc_path)s")
% {'ds_name': name,
'dc_path': dc_path})
LOG.error(msg)
raise exceptions.BadStoreConfiguration(
store_name='vmware_datastore', reason=msg)
ds_map.setdefault(int(weight), []).append(ds_obj)
return ds_map
def configure_add(self):
datastores = self._option_get('vmware_datastores')
self.datastores = self._build_datastore_weighted_map(datastores)
self.store_image_dir = self.conf.glance_store.vmware_store_image_dir
def select_datastore(self, image_size):
"""Select a datastore with free space larger than image size."""
for k, v in sorted(self.datastores.items(), reverse=True):
max_ds = None
max_fs = 0
for ds in v:
# Update with current freespace
ds.freespace = self._get_freespace(ds)
if ds.freespace > max_fs:
max_ds = ds
max_fs = ds.freespace
if max_ds and max_ds.freespace >= image_size:
return max_ds
msg = _LE("No datastore found with enough free space to contain an "
"image of size %d") % image_size
LOG.error(msg)
raise exceptions.StorageFull()
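    # NOTE(editor): worked example, not part of the original code. With a
    # weighted map like {200: [ds_a], 100: [ds_b, ds_c]}, select_datastore
    # first refreshes ds_a's free space and returns it if the image fits;
    # otherwise it considers the roomier of ds_b/ds_c, and raises
    # StorageFull if no datastore can hold the image.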
def _option_get(self, param):
result = getattr(self.conf.glance_store, param)
if not result:
reason = (_("Could not find %(param)s in configuration "
"options.") % {'param': param})
raise exceptions.BadStoreConfiguration(
store_name='vmware_datastore', reason=reason)
return result
def _build_vim_cookie_header(self, verify_session=False):
"""Build ESX host session cookie header."""
if verify_session and not self.session.is_current_session_active():
self.reset_session()
vim_cookies = self.session.vim.client.options.transport.cookiejar
if len(list(vim_cookies)) > 0:
cookie = list(vim_cookies)[0]
return cookie.name + '=' + cookie.value
@capabilities.check
def add(self, image_id, image_file, image_size, context=None,
verifier=None):
"""Stores an image file with supplied identifier to the backend
storage system and returns a tuple containing information
about the stored image.
:param image_id: The opaque image identifier
:param image_file: The image data to write, as a file-like object
:param image_size: The size of the image data to write, in bytes
:param verifier: An object used to verify signatures for images
:retval tuple of URL in backing store, bytes written, checksum
and a dictionary with storage system specific information
:raises: `glance.common.exceptions.Duplicate` if the image already
existed
`glance.common.exceptions.UnexpectedStatus` if the upload
request returned an unexpected status. The expected responses
are 201 Created and 200 OK.
"""
ds = self.select_datastore(image_size)
image_file = _Reader(image_file, verifier)
headers = {}
if image_size > 0:
headers.update({'Content-Length': six.text_type(image_size)})
data = image_file
else:
data = utils.chunkiter(image_file, CHUNKSIZE)
loc = StoreLocation({'scheme': self.scheme,
'server_host': self.server_host,
'image_dir': self.store_image_dir,
'datacenter_path': ds.datacenter.path,
'datastore_name': ds.name,
'image_id': image_id}, self.conf)
# NOTE(arnaud): use a decorator when the config is not tied to self
cookie = self._build_vim_cookie_header(True)
headers = dict(headers)
headers.update({'Cookie': cookie})
session = new_session(self.api_insecure, self.ca_file)
url = loc.https_url
try:
response = session.put(url, data=data, headers=headers)
except IOError as e:
# TODO(sigmavirus24): Figure out what the new exception type would
# be in requests.
# When a session is not authenticated, the socket is closed by
# the server after sending the response. http_client has an open
# issue with https that raises Broken Pipe
# error instead of returning the response.
# See http://bugs.python.org/issue16062. Here, we log the error
# and continue to look into the response.
msg = _LE('Communication error sending http %(method)s request '
'to the url %(url)s.\n'
'Got IOError %(e)s') % {'method': 'PUT',
'url': url,
'e': e}
LOG.error(msg)
raise exceptions.BackendException(msg)
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(_LE('Failed to upload content of image '
'%(image)s'), {'image': image_id})
res = response.raw
if res.status == requests.codes.conflict:
raise exceptions.Duplicate(_("Image file %(image_id)s already "
"exists!") %
{'image_id': image_id})
if res.status not in (requests.codes.created, requests.codes.ok):
msg = (_LE('Failed to upload content of image %(image)s. '
'The request returned an unexpected status: %(status)s.'
'\nThe response body:\n%(body)s') %
{'image': image_id,
'status': res.status,
'body': getattr(res, 'body', None)})
LOG.error(msg)
raise exceptions.BackendException(msg)
return (loc.get_uri(), image_file.size,
image_file.checksum.hexdigest(), {})
@capabilities.check
def get(self, location, offset=0, chunk_size=None, context=None):
"""Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
"""
conn, resp, content_length = self._query(location, 'GET')
iterator = http_response_iterator(conn, resp, self.READ_CHUNKSIZE)
class ResponseIndexable(glance_store.Indexable):
def another(self):
try:
return next(self.wrapped)
except StopIteration:
return ''
return (ResponseIndexable(iterator, content_length), content_length)
def get_size(self, location, context=None):
"""Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns the size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
"""
conn = None
try:
conn, resp, size = self._query(location, 'HEAD')
return size
finally:
# NOTE(sabari): Close the connection as the request was made with
# stream=True.
if conn is not None:
conn.close()
@capabilities.check
def delete(self, location, context=None):
"""Takes a `glance_store.location.Location` object that indicates
where to find the image file to delete
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: NotFound if image does not exist
"""
file_path = '[%s] %s' % (
location.store_location.datastore_name,
location.store_location.path[len(DS_URL_PREFIX):])
dc_obj = self._get_datacenter(location.store_location.datacenter_path)
delete_task = self.session.invoke_api(
self.session.vim,
'DeleteDatastoreFile_Task',
self.session.vim.service_content.fileManager,
name=file_path,
datacenter=dc_obj.ref)
try:
self.session.wait_for_task(delete_task)
except vexc.FileNotFoundException:
msg = _('Image file %s not found') % file_path
LOG.warning(msg)
raise exceptions.NotFound(message=msg)
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(_LE('Failed to delete image %(image)s '
'content.') % {'image': location.image_id})
def _query(self, location, method):
session = new_session(self.api_insecure, self.ca_file)
loc = location.store_location
redirects_followed = 0
# TODO(sabari): The redirect logic was added to handle cases when the
# backend redirects http url's to https. But the store never makes a
# http request and hence this can be safely removed.
while redirects_followed < MAX_REDIRECTS:
conn, resp = self._retry_request(session, method, location)
# NOTE(sigmavirus24): _retry_request handles 4xx and 5xx errors so
# if the response is not a redirect, we can return early.
if not conn.is_redirect:
break
redirects_followed += 1
location_header = conn.headers.get('location')
if location_header:
if resp.status not in (301, 302):
reason = (_("The HTTP URL %(path)s attempted to redirect "
"with an invalid %(status)s status code.")
% {'path': loc.path, 'status': resp.status})
LOG.info(reason)
raise exceptions.BadStoreUri(message=reason)
conn.close()
location = self._new_location(location, location_header)
else:
# NOTE(sigmavirus24): We exceeded the maximum number of redirects
msg = ("The HTTP URL exceeded %(max_redirects)s maximum "
"redirects.", {'max_redirects': MAX_REDIRECTS})
LOG.debug(msg)
raise exceptions.MaxRedirectsExceeded(redirects=MAX_REDIRECTS)
content_length = int(resp.getheader('content-length', 0))
return (conn, resp, content_length)
def _retry_request(self, session, method, location):
loc = location.store_location
# NOTE(arnaud): use a decorator when the config is not tied to self
for i in range(self.api_retry_count + 1):
cookie = self._build_vim_cookie_header()
headers = {'Cookie': cookie}
conn = session.request(method, loc.https_url, headers=headers,
stream=True)
resp = conn.raw
if resp.status >= 400:
if resp.status == requests.codes.unauthorized:
self.reset_session()
continue
if resp.status == requests.codes.not_found:
reason = _('VMware datastore could not find image at URI.')
LOG.info(reason)
raise exceptions.NotFound(message=reason)
msg = ('HTTP request returned a %(status)s status code.'
% {'status': resp.status})
LOG.debug(msg)
raise exceptions.BadStoreUri(msg)
break
return conn, resp
def _new_location(self, old_location, url):
store_name = old_location.store_name
store_class = old_location.store_location.__class__
image_id = old_location.image_id
store_specs = old_location.store_specs
# Note(sabari): The redirect url will have a scheme 'http(s)', but the
# store only accepts url with scheme 'vsphere'. Thus, replacing with
# store's scheme.
parsed_url = urlparse.urlparse(url)
new_url = parsed_url._replace(scheme='vsphere')
vsphere_url = urlparse.urlunparse(new_url)
return glance_store.location.Location(store_name,
store_class,
self.conf,
uri=vsphere_url,
image_id=image_id,
store_specs=store_specs)
def new_session(insecure=False, ca_file=None, total_retries=None):
session = requests.Session()
if total_retries is not None:
http_adapter = adapters.HTTPAdapter(
max_retries=retry.Retry(total=total_retries))
https_adapter = adapters.HTTPAdapter(
max_retries=retry.Retry(total=total_retries))
session.mount('http://', http_adapter)
session.mount('https://', https_adapter)
session.verify = ca_file if ca_file else not insecure
return session
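def _example_session():
    # NOTE(editor): hedged usage sketch, not part of the original module.
    # Builds a certificate-verifying session with up to three urllib3-level
    # retries; the CA bundle path is a made-up example.
    return new_session(insecure=False,
                       ca_file='/etc/ssl/certs/ca-certificates.crt',
                       total_retries=3)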


@ -1,471 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from oslo_config import cfg
from oslo_utils import encodeutils
import six
from stevedore import driver
from stevedore import extension
from glance_store import capabilities
from glance_store import exceptions
from glance_store.i18n import _
from glance_store import location
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
_STORE_OPTS = [
cfg.ListOpt('stores',
default=['file', 'http'],
help=_("""
List of enabled Glance stores.
Register the storage backends to use for storing disk images
as a comma separated list. The default stores enabled for
storing disk images with Glance are ``file`` and ``http``.
Possible values:
* A comma separated list that could include:
* file
* http
* swift
* rbd
* sheepdog
* cinder
* vmware
Related Options:
* default_store
""")),
cfg.StrOpt('default_store',
default='file',
choices=('file', 'filesystem', 'http', 'https', 'swift',
'swift+http', 'swift+https', 'swift+config', 'rbd',
'sheepdog', 'cinder', 'vsphere'),
help=_("""
The default scheme to use for storing images.
Provide a string value representing the default scheme to use for
storing images. If not set, Glance uses ``file`` as the default
scheme to store images with the ``file`` store.
NOTE: The value given for this configuration option must be a valid
scheme for a store registered with the ``stores`` configuration
option.
Possible values:
* file
* filesystem
* http
* https
* swift
* swift+http
* swift+https
* swift+config
* rbd
* sheepdog
* cinder
* vsphere
Related Options:
* stores
""")),
cfg.IntOpt('store_capabilities_update_min_interval',
default=0,
min=0,
help=_("""
Minimum interval in seconds to execute updating dynamic storage
capabilities based on current backend status.
Provide an integer value representing time in seconds to set the
minimum interval before an update of dynamic storage capabilities
for a storage backend can be attempted. Setting
``store_capabilities_update_min_interval`` does not mean updates
occur periodically based on the set interval. Rather, an update is
attempted only when a store operation is triggered and at least this
interval has elapsed since the last update.
By default, this option is set to zero and is disabled. Provide an
integer value greater than zero to enable this option.
NOTE: For more information on store capabilities and their updates,
please visit: https://specs.openstack.org/openstack/glance-specs/\
specs/kilo/store-capabilities.html
For more information on setting up a particular store in your
deployment and help with the usage of this feature, please contact
the storage driver maintainers listed here:
http://docs.openstack.org/developer/glance_store/drivers/index.html
Possible values:
* Zero
* Positive integer
Related Options:
* None
""")),
]
_STORE_CFG_GROUP = 'glance_store'
def _list_opts():
driver_opts = []
mgr = extension.ExtensionManager('glance_store.drivers')
    # NOTE(zhiyan): Handle the entry points provided by the available drivers.
# NOTE(nikhil): Return a sorted list of drivers to ensure that the sample
# configuration files generated by oslo config generator retain the order
# in which the config opts appear across different runs. If this order of
# config opts is not preserved, some downstream packagers may see a long
# diff of the changes though not relevant as only order has changed. See
# some more details at bug 1619487.
drivers = sorted([ext.name for ext in mgr])
handled_drivers = [] # Used to handle backwards-compatible entries
for store_entry in drivers:
driver_cls = _load_store(None, store_entry, False)
if driver_cls and driver_cls not in handled_drivers:
if getattr(driver_cls, 'OPTIONS', None) is not None:
driver_opts += driver_cls.OPTIONS
handled_drivers.append(driver_cls)
    # NOTE(zhiyan): This separated approach lists the common
    # store options before all driver ones, which is easier
    # for an operator to read and configure.
return ([(_STORE_CFG_GROUP, _STORE_OPTS)] +
[(_STORE_CFG_GROUP, driver_opts)])
def register_opts(conf):
opts = _list_opts()
for group, opt_list in opts:
LOG.debug("Registering options for group %s" % group)
for opt in opt_list:
conf.register_opt(opt, group=group)
class Indexable(object):
"""Indexable for file-like objs iterators
Wrapper that allows an iterator or filelike be treated as an indexable
data structure. This is required in the case where the return value from
Store.get() is passed to Store.add() when adding a Copy-From image to a
Store where the client library relies on eventlet GreenSockets, in which
case the data to be written is indexed over.
"""
def __init__(self, wrapped, size):
"""
Initialize the object
        :param wrapped: the wrapped iterator or file-like object.
:param size: the size of data available
"""
self.wrapped = wrapped
self.size = int(size) if size else (wrapped.len
if hasattr(wrapped, 'len') else 0)
self.cursor = 0
self.chunk = None
def __iter__(self):
"""
Delegate iteration to the wrapped instance.
"""
for self.chunk in self.wrapped:
yield self.chunk
def __getitem__(self, i):
"""
Index into the next chunk (or previous chunk in the case where
the last data returned was not fully consumed).
:param i: a slice-to-the-end
"""
start = i.start if isinstance(i, slice) else i
if start < self.cursor:
return self.chunk[(start - self.cursor):]
self.chunk = self.another()
if self.chunk:
self.cursor += len(self.chunk)
return self.chunk
def another(self):
"""Implemented by subclasses to return the next element."""
raise NotImplementedError
def getvalue(self):
"""
Return entire string value... used in testing
"""
return self.wrapped.getvalue()
def __len__(self):
"""
Length accessor.
"""
return self.size
def _load_store(conf, store_entry, invoke_load=True):
try:
LOG.debug("Attempting to import store %s", store_entry)
mgr = driver.DriverManager('glance_store.drivers',
store_entry,
invoke_args=[conf],
invoke_on_load=invoke_load)
return mgr.driver
except RuntimeError as e:
LOG.warning("Failed to load driver %(driver)s. The "
"driver will be disabled" % dict(driver=str([driver, e])))
def _load_stores(conf):
for store_entry in set(conf.glance_store.stores):
try:
# FIXME(flaper87): Don't hide BadStoreConfiguration
# exceptions. These exceptions should be propagated
# to the user of the library.
store_instance = _load_store(conf, store_entry)
if not store_instance:
continue
yield (store_entry, store_instance)
except exceptions.BadStoreConfiguration:
continue
def create_stores(conf=CONF):
"""
Registers all store modules and all schemes
from the given config. Duplicates are not re-registered.
"""
store_count = 0
for (store_entry, store_instance) in _load_stores(conf):
try:
schemes = store_instance.get_schemes()
store_instance.configure(re_raise_bsc=False)
except NotImplementedError:
continue
if not schemes:
raise exceptions.BackendException('Unable to register store %s. '
'No schemes associated with it.'
% store_entry)
else:
LOG.debug("Registering store %s with schemes %s",
store_entry, schemes)
scheme_map = {}
loc_cls = store_instance.get_store_location_class()
for scheme in schemes:
scheme_map[scheme] = {
'store': store_instance,
'location_class': loc_cls,
'store_entry': store_entry
}
location.register_scheme_map(scheme_map)
store_count += 1
return store_count
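def _example_bootstrap(conf=CONF):
    # NOTE(editor): hedged usage sketch, not part of the original module.
    # Shows the bootstrap order implied by this module: register the
    # options, instantiate and register every enabled store, then check
    # that default_store maps to a registered scheme.
    register_opts(conf)
    count = create_stores(conf)
    verify_default_store()
    return count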
def verify_default_store():
scheme = CONF.glance_store.default_store
try:
get_store_from_scheme(scheme)
except exceptions.UnknownScheme:
msg = _("Store for scheme %s not found") % scheme
raise RuntimeError(msg)
def get_known_schemes():
"""Returns list of known schemes."""
return location.SCHEME_TO_CLS_MAP.keys()
def get_store_from_scheme(scheme):
"""
Given a scheme, return the appropriate store object
for handling that scheme.
"""
if scheme not in location.SCHEME_TO_CLS_MAP:
raise exceptions.UnknownScheme(scheme=scheme)
scheme_info = location.SCHEME_TO_CLS_MAP[scheme]
store = scheme_info['store']
if not store.is_capable(capabilities.BitMasks.DRIVER_REUSABLE):
        # Driver instance isn't stateless so it can't
        # be reused safely and needs recreation.
store_entry = scheme_info['store_entry']
store = _load_store(store.conf, store_entry, invoke_load=True)
store.configure()
try:
scheme_map = {}
loc_cls = store.get_store_location_class()
for scheme in store.get_schemes():
scheme_map[scheme] = {
'store': store,
'location_class': loc_cls,
'store_entry': store_entry
}
location.register_scheme_map(scheme_map)
except NotImplementedError:
scheme_info['store'] = store
return store
def get_store_from_uri(uri):
"""
Given a URI, return the store object that would handle
operations on the URI.
:param uri: URI to analyze
"""
scheme = uri[0:uri.find('/') - 1]
return get_store_from_scheme(scheme)
def get_from_backend(uri, offset=0, chunk_size=None, context=None):
"""Yields chunks of data from backend specified by uri."""
loc = location.get_location_from_uri(uri, conf=CONF)
store = get_store_from_uri(uri)
return store.get(loc, offset=offset,
chunk_size=chunk_size,
context=context)
def get_size_from_backend(uri, context=None):
"""Retrieves image size from backend specified by uri."""
loc = location.get_location_from_uri(uri, conf=CONF)
store = get_store_from_uri(uri)
return store.get_size(loc, context=context)
def delete_from_backend(uri, context=None):
"""Removes chunks of data from backend specified by uri."""
loc = location.get_location_from_uri(uri, conf=CONF)
store = get_store_from_uri(uri)
return store.delete(loc, context=context)
def get_store_from_location(uri):
"""
Given a location (assumed to be a URL), attempt to determine
the store from the location. We use here a simple guess that
the scheme of the parsed URL is the store...
:param uri: Location to check for the store
"""
loc = location.get_location_from_uri(uri, conf=CONF)
return loc.store_name
def check_location_metadata(val, key=''):
if isinstance(val, dict):
for key in val:
check_location_metadata(val[key], key=key)
elif isinstance(val, list):
ndx = 0
for v in val:
check_location_metadata(v, key='%s[%d]' % (key, ndx))
ndx = ndx + 1
elif not isinstance(val, six.text_type):
raise exceptions.BackendException(_("The image metadata key %(key)s "
"has an invalid type of %(type)s. "
"Only dict, list, and unicode are "
"supported.")
% dict(key=key, type=type(val)))
def store_add_to_backend(image_id, data, size, store, context=None,
verifier=None):
"""
A wrapper around a call to each stores add() method. This gives glance
a common place to check the output
    :param image_id: The image to which the data is added
:param data: The data to be stored
:param size: The length of the data in bytes
:param store: The store to which the data is being added
:param context: The request context
:param verifier: An object used to verify signatures for images
:return: The url location of the file,
             the size of the data,
             the checksum of the data, and
             the storage system's metadata dictionary for the location
"""
(location, size, checksum, metadata) = store.add(image_id,
data,
size,
context=context,
verifier=verifier)
if metadata is not None:
if not isinstance(metadata, dict):
msg = (_("The storage driver %(driver)s returned invalid "
" metadata %(metadata)s. This must be a dictionary type")
% dict(driver=str(store), metadata=str(metadata)))
LOG.error(msg)
raise exceptions.BackendException(msg)
try:
check_location_metadata(metadata)
except exceptions.BackendException as e:
e_msg = (_("A bad metadata structure was returned from the "
"%(driver)s storage driver: %(metadata)s. %(e)s.") %
dict(driver=encodeutils.exception_to_unicode(store),
metadata=encodeutils.exception_to_unicode(metadata),
e=encodeutils.exception_to_unicode(e)))
LOG.error(e_msg)
raise exceptions.BackendException(e_msg)
return (location, size, checksum, metadata)
def add_to_backend(conf, image_id, data, size, scheme=None, context=None,
verifier=None):
if scheme is None:
scheme = conf['glance_store']['default_store']
store = get_store_from_scheme(scheme)
return store_add_to_backend(image_id, data, size, store, context,
verifier)
def set_acls(location_uri, public=False, read_tenants=[],
write_tenants=None, context=None):
if write_tenants is None:
write_tenants = []
loc = location.get_location_from_uri(location_uri, conf=CONF)
scheme = get_store_from_location(location_uri)
store = get_store_from_scheme(scheme)
try:
store.set_acls(loc, public=public,
read_tenants=read_tenants,
write_tenants=write_tenants,
context=context)
except NotImplementedError:
LOG.debug(_("Skipping store.set_acls... not implemented."))


@ -1,227 +0,0 @@
# Copyright (c) 2015 IBM, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Glance Store capability"""
import logging
import threading
import time
import enum
from eventlet import tpool
from oslo_utils import reflection
from glance_store import exceptions
from glance_store.i18n import _LW
_STORE_CAPABILITES_UPDATE_SCHEDULING_BOOK = {}
_STORE_CAPABILITES_UPDATE_SCHEDULING_LOCK = threading.Lock()
LOG = logging.getLogger(__name__)
class BitMasks(enum.IntEnum):
NONE = 0b00000000
ALL = 0b11111111
READ_ACCESS = 0b00000001
# Included READ_ACCESS
READ_OFFSET = 0b00000011
# Included READ_ACCESS
READ_CHUNK = 0b00000101
# READ_OFFSET | READ_CHUNK
READ_RANDOM = 0b00000111
WRITE_ACCESS = 0b00001000
# Included WRITE_ACCESS
WRITE_OFFSET = 0b00011000
# Included WRITE_ACCESS
WRITE_CHUNK = 0b00101000
# WRITE_OFFSET | WRITE_CHUNK
WRITE_RANDOM = 0b00111000
# READ_ACCESS | WRITE_ACCESS
RW_ACCESS = 0b00001001
# READ_OFFSET | WRITE_OFFSET
RW_OFFSET = 0b00011011
# READ_CHUNK | WRITE_CHUNK
RW_CHUNK = 0b00101101
# RW_OFFSET | RW_CHUNK
RW_RANDOM = 0b00111111
# driver is stateless and can be reused safely
DRIVER_REUSABLE = 0b01000000
class StoreCapability(object):
def __init__(self):
# Set static store capabilities base on
# current driver implementation.
self._capabilities = getattr(self.__class__, "_CAPABILITIES", 0)
@property
def capabilities(self):
return self._capabilities
@staticmethod
def contains(x, y):
return x & y == y
def update_capabilities(self):
"""
Update dynamic storage capabilities based on current
driver configuration and backend status when needed.
        As a hook, this function is triggered in two cases: once after
        the store driver is configured, to update dynamic storage
        capabilities based on the current driver configuration; and
        whenever the capability check for an operation fails, to
        refresh dynamic storage capabilities based on the current
        backend status.
        This function must not raise any exception.
"""
LOG.debug(("Store %s doesn't support updating dynamic "
"storage capabilities. Please overwrite "
"'update_capabilities' method of the store to "
"implement updating logics if needed.") %
reflection.get_class_name(self))
def is_capable(self, *capabilities):
"""
Check if requested capability(s) are supported by
current driver instance.
:param capabilities: required capability(s).
"""
caps = 0
for cap in capabilities:
caps |= int(cap)
return self.contains(self.capabilities, caps)
def set_capabilities(self, *dynamic_capabilites):
"""
Set dynamic storage capabilities based on current
driver configuration and backend status.
:param dynamic_capabilites: dynamic storage capability(s).
"""
for cap in dynamic_capabilites:
self._capabilities |= int(cap)
def unset_capabilities(self, *dynamic_capabilites):
"""
Unset dynamic storage capabilities.
:param dynamic_capabilites: dynamic storage capability(s).
"""
caps = 0
for cap in dynamic_capabilites:
caps |= int(cap)
# TODO(zhiyan): Cascaded capability removal is
# skipped currently, we can add it back later
# when a concrete requirement comes out.
# For example, when removing READ_ACCESS, all
# read related capabilities need to be removed
# together, e.g. READ_RANDOM.
self._capabilities &= ~caps
def _schedule_capabilities_update(store):
def _update_capabilities(store, context):
with context['lock']:
if context['updating']:
return
context['updating'] = True
try:
store.update_capabilities()
except Exception:
pass
finally:
context['updating'] = False
                # NOTE(zhiyan): Update the 'latest_update' field even if an
                # exception was raised, to avoid calling a problematic
                # routine repeatedly.
context['latest_update'] = int(time.time())
global _STORE_CAPABILITES_UPDATE_SCHEDULING_BOOK
book = _STORE_CAPABILITES_UPDATE_SCHEDULING_BOOK
if store not in book:
with _STORE_CAPABILITES_UPDATE_SCHEDULING_LOCK:
if store not in book:
book[store] = {'latest_update': int(time.time()),
'lock': threading.Lock(),
'updating': False}
else:
context = book[store]
        # NOTE(zhiyan): We don't need to lock the 'latest_update'
        # field for this check since time increases monotonically.
sec = (int(time.time()) - context['latest_update'] -
store.conf.glance_store.store_capabilities_update_min_interval)
if sec >= 0:
if not context['updating']:
                # NOTE(zhiyan): Use a real thread pool instead of a green
                # pool, because updating store capabilities will probably
                # call inevitably blocking I/O code against remote or
                # local storage.
                # Eventlet lets the operator use the environment variable
                # EVENTLET_THREADPOOL_SIZE to set the desired pool size.
tpool.execute(_update_capabilities, store, context)
def check(store_op_fun):
def op_checker(store, *args, **kwargs):
# NOTE(zhiyan): Trigger the hook of updating store
# dynamic capabilities based on current store status.
if store.conf.glance_store.store_capabilities_update_min_interval > 0:
_schedule_capabilities_update(store)
get_capabilities = [
BitMasks.READ_ACCESS,
BitMasks.READ_OFFSET if kwargs.get('offset') else BitMasks.NONE,
BitMasks.READ_CHUNK if kwargs.get('chunk_size') else BitMasks.NONE
]
op_cap_map = {
'get': get_capabilities,
'add': [BitMasks.WRITE_ACCESS],
'delete': [BitMasks.WRITE_ACCESS]}
op_exec_map = {
'get': (exceptions.StoreRandomGetNotSupported
if kwargs.get('offset') or kwargs.get('chunk_size') else
exceptions.StoreGetNotSupported),
'add': exceptions.StoreAddDisabled,
'delete': exceptions.StoreDeleteNotSupported}
op = store_op_fun.__name__.lower()
try:
req_cap = op_cap_map[op]
except KeyError:
LOG.warning(_LW('The capability of operation "%s" '
'could not be checked.'), op)
else:
if not store.is_capable(*req_cap):
kwargs.setdefault('offset', 0)
kwargs.setdefault('chunk_size', None)
raise op_exec_map[op](**kwargs)
return store_op_fun(store, *args, **kwargs)
return op_checker


@ -1,141 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
System-level utilities and helper functions.
"""
import logging
import uuid
try:
from eventlet import sleep
except ImportError:
from time import sleep
from glance_store.i18n import _
LOG = logging.getLogger(__name__)
def is_uuid_like(val):
"""Returns validation of a value as a UUID.
For our purposes, a UUID is a canonical form string:
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
"""
try:
return str(uuid.UUID(val)) == val
except (TypeError, ValueError, AttributeError):
return False
def chunkreadable(iter, chunk_size=65536):
"""
Wrap a readable iterator with a reader yielding chunks of
a preferred size, otherwise leave iterator unchanged.
:param iter: an iter which may also be readable
:param chunk_size: maximum size of chunk
"""
return chunkiter(iter, chunk_size) if hasattr(iter, 'read') else iter
def chunkiter(fp, chunk_size=65536):
"""
Return an iterator to a file-like obj which yields fixed size chunks
:param fp: a file-like object
:param chunk_size: maximum size of chunk
"""
while True:
chunk = fp.read(chunk_size)
if chunk:
yield chunk
else:
break
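def _example_chunks():
    # NOTE(editor): hedged usage sketch, not part of the original module.
    # Feeds an in-memory file through chunkiter with a tiny chunk size to
    # show the fixed-size chunking behaviour.
    import io
    fp = io.BytesIO(b'abcdefgh')
    return list(chunkiter(fp, chunk_size=3))  # [b'abc', b'def', b'gh']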
def cooperative_iter(iter):
"""
Return an iterator which schedules after each
iteration. This can prevent eventlet thread starvation.
:param iter: an iterator to wrap
"""
try:
for chunk in iter:
sleep(0)
yield chunk
except Exception as err:
msg = _("Error: cooperative_iter exception %s") % err
LOG.error(msg)
raise
def cooperative_read(fd):
"""
Wrap a file descriptor's read with a partial function which schedules
after each read. This can prevent eventlet thread starvation.
:param fd: a file descriptor to wrap
"""
def readfn(*args):
result = fd.read(*args)
sleep(0)
return result
return readfn
class CooperativeReader(object):
"""
An eventlet thread friendly class for reading in image data.
When accessing data either through the iterator or the read method
we perform a sleep to allow a co-operative yield. When there is more than
    one image being uploaded/downloaded, this prevents eventlet thread
    starvation, i.e. it allows all threads to be scheduled periodically
    rather than having the same thread be continuously active.
"""
def __init__(self, fd):
"""
:param fd: Underlying image file object
"""
self.fd = fd
self.iterator = None
        # NOTE(markwash): if the underlying object supports read(), override
        # the default iterator-based implementation with cooperative_read,
        # which is more straightforward.
if hasattr(fd, 'read'):
self.read = cooperative_read(fd)
def read(self, length=None):
"""Return the next chunk of the underlying iterator.
This is replaced with cooperative_read in __init__ if the underlying
fd already supports read().
"""
if self.iterator is None:
self.iterator = self.__iter__()
try:
return next(self.iterator)
except StopIteration:
return ''
def __iter__(self):
return cooperative_iter(self.fd.__iter__())


@ -1,172 +0,0 @@
# Copyright 2011 OpenStack Foundation
# Copyright 2012 RedHat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Base class for all storage backends"""
import logging
from oslo_config import cfg
from oslo_utils import encodeutils
from oslo_utils import importutils
from oslo_utils import units
from glance_store import capabilities
from glance_store import exceptions
from glance_store.i18n import _
LOG = logging.getLogger(__name__)
class Store(capabilities.StoreCapability):
OPTIONS = None
READ_CHUNKSIZE = 4 * units.Mi # 4M
WRITE_CHUNKSIZE = READ_CHUNKSIZE
def __init__(self, conf):
"""
Initialize the Store
"""
super(Store, self).__init__()
self.conf = conf
self.store_location_class = None
try:
if self.OPTIONS is not None:
self.conf.register_opts(self.OPTIONS, group='glance_store')
except cfg.DuplicateOptError:
pass
def configure(self, re_raise_bsc=False):
"""
Configure the store to use the stored configuration options
and initialize capabilities based on current configuration.
Any store that needs special configuration should implement
this method.
"""
try:
self.configure_add()
except exceptions.BadStoreConfiguration as e:
self.unset_capabilities(capabilities.BitMasks.WRITE_ACCESS)
msg = (_(u"Failed to configure store correctly: %s "
"Disabling add method.")
% encodeutils.exception_to_unicode(e))
LOG.warning(msg)
if re_raise_bsc:
raise
finally:
self.update_capabilities()
def get_schemes(self):
"""
Returns a tuple of schemes which this store can handle.
"""
raise NotImplementedError
def get_store_location_class(self):
"""
Returns the store location class that is used by this store.
"""
if not self.store_location_class:
class_name = "%s.StoreLocation" % (self.__module__)
LOG.debug("Late loading location class %s", class_name)
self.store_location_class = importutils.import_class(class_name)
return self.store_location_class
def configure_add(self):
"""
This is like `configure` except that it's specifically for
configuring the store to accept objects.
If the store was not able to successfully configure
itself, it should raise `exceptions.BadStoreConfiguration`.
"""
# NOTE(flaper87): This should probably go away
@capabilities.check
def get(self, location, offset=0, chunk_size=None, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns a tuple of generator
(for reading the image file) and image_size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance.exceptions.NotFound` if image does not exist
"""
raise NotImplementedError
def get_size(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file, and returns the size
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
"""
raise NotImplementedError
@capabilities.check
def add(self, image_id, image_file, image_size, context=None,
verifier=None):
"""
Stores an image file with supplied identifier to the backend
storage system and returns a tuple containing information
about the stored image.
:param image_id: The opaque image identifier
:param image_file: The image data to write, as a file-like object
:param image_size: The size of the image data to write, in bytes
:retval: tuple of URL in backing store, bytes written, checksum
and a dictionary with storage system specific information
:raises: `glance_store.exceptions.Duplicate` if the image already
existed
"""
raise NotImplementedError
@capabilities.check
def delete(self, location, context=None):
"""
Takes a `glance_store.location.Location` object that indicates
where to find the image file to delete
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:raises: `glance_store.exceptions.NotFound` if image does not exist
"""
raise NotImplementedError
def set_acls(self, location, public=False, read_tenants=None,
write_tenants=None, context=None):
"""
Sets the read and write access control list for an image in the
backend store.
:param location: `glance_store.location.Location` object, supplied
from glance_store.location.get_location_from_uri()
:param public: A boolean indicating whether the image should be public.
:param read_tenants: A list of tenant strings which should be granted
read access for an image.
:param write_tenants: A list of tenant strings which should be granted
write access for an image.
"""
raise NotImplementedError
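class _ExampleStore(Store):
    # NOTE(editor): minimal hedged sketch, not part of the original
    # module. It shows the smallest surface a driver provides: a scheme
    # tuple plus configure_add(); every other operation keeps the
    # NotImplementedError default from the base class above.
    _CAPABILITIES = capabilities.BitMasks.READ_ACCESS

    def get_schemes(self):
        return ('example',)

    def configure_add(self):
        # Raise exceptions.BadStoreConfiguration here on bad settings.
        pass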


@ -1,181 +0,0 @@
# Copyright (c) 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Glance Store exception subclasses"""
import six
import six.moves.urllib.parse as urlparse
import warnings
from glance_store.i18n import _
warnings.simplefilter('always')
class BackendException(Exception):
pass
class UnsupportedBackend(BackendException):
pass
class RedirectException(Exception):
def __init__(self, url):
self.url = urlparse.urlparse(url)
class GlanceStoreException(Exception):
"""
Base Glance Store Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred")
def __init__(self, message=None, **kwargs):
if not message:
message = self.message
try:
if kwargs:
message = message % kwargs
except Exception:
pass
self.msg = message
super(GlanceStoreException, self).__init__(message)
def __unicode__(self):
# NOTE(flwang): By default, self.msg is an instance of Message, which
# can't be converted by str(). Based on the definition of
# __unicode__, it should return unicode always.
return six.text_type(self.msg)
class MissingCredentialError(GlanceStoreException):
message = _("Missing required credential: %(required)s")
class BadAuthStrategy(GlanceStoreException):
message = _("Incorrect auth strategy, expected \"%(expected)s\" but "
"received \"%(received)s\"")
class AuthorizationRedirect(GlanceStoreException):
message = _("Redirecting to %(uri)s for authorization.")
class NotFound(GlanceStoreException):
message = _("Image %(image)s not found")
class UnknownScheme(GlanceStoreException):
message = _("Unknown scheme '%(scheme)s' found in URI")
class BadStoreUri(GlanceStoreException):
message = _("The Store URI was malformed: %(uri)s")
class Duplicate(GlanceStoreException):
message = _("Image %(image)s already exists")
class StorageFull(GlanceStoreException):
message = _("There is not enough disk space on the image storage media.")
class StorageWriteDenied(GlanceStoreException):
message = _("Permission to write image storage media denied.")
class AuthBadRequest(GlanceStoreException):
message = _("Connect error/bad request to Auth service at URL %(url)s.")
class AuthUrlNotFound(GlanceStoreException):
message = _("Auth service at URL %(url)s not found.")
class AuthorizationFailure(GlanceStoreException):
message = _("Authorization failed.")
class NotAuthenticated(GlanceStoreException):
message = _("You are not authenticated.")
class Forbidden(GlanceStoreException):
message = _("You are not authorized to complete this action.")
class Invalid(GlanceStoreException):
# NOTE(NiallBunting) This could be deprecated however the debtcollector
# seems to have problems deprecating this as well as the subclasses.
message = _("Data supplied was not valid.")
class BadStoreConfiguration(GlanceStoreException):
message = _("Store %(store_name)s could not be configured correctly. "
"Reason: %(reason)s")
class DriverLoadFailure(GlanceStoreException):
message = _("Driver %(driver_name)s could not be loaded.")
class StoreDeleteNotSupported(GlanceStoreException):
message = _("Deleting images from this store is not supported.")
class StoreGetNotSupported(GlanceStoreException):
message = _("Getting images from this store is not supported.")
class StoreRandomGetNotSupported(StoreGetNotSupported):
message = _("Getting images randomly from this store is not supported. "
"Offset: %(offset)s, length: %(chunk_size)s")
class StoreAddDisabled(GlanceStoreException):
message = _("Configuration for store failed. Adding images to this "
"store is disabled.")
class MaxRedirectsExceeded(GlanceStoreException):
message = _("Maximum redirects (%(redirects)s) was exceeded.")
class NoServiceEndpoint(GlanceStoreException):
message = _("Response from Keystone does not contain a Glance endpoint.")
class RegionAmbiguity(GlanceStoreException):
message = _("Multiple 'image' service matches for region %(region)s. This "
"generally means that a region is required and you have not "
"supplied one.")
class RemoteServiceUnavailable(GlanceStoreException):
message = _("Remote server where the image is present is unavailable.")
class HasSnapshot(GlanceStoreException):
message = _("The image cannot be deleted because it has snapshot(s).")
class InUseByStore(GlanceStoreException):
message = _("The image cannot be deleted because it is in use through "
"the backend store outside of Glance.")


@ -1,31 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n as i18n
_translators = i18n.TranslatorFactory(domain='glance_store')
# The primary translation function using the well-known name "_"
_ = _translators.primary
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
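
A short sketch of how these translators are typically consumed elsewhere in the library (the logger and message are illustrative):

import logging

from glance_store.i18n import _, _LE

LOG = logging.getLogger(__name__)

def report_failure(store_name):
    # Log messages use the log-level translators...
    LOG.error(_LE("Store %s failed to configure"), store_name)
    # ...while user-facing strings use the primary "_" translator.
    return _("Store %s failed to configure") % store_name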


@ -1,173 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: glance_store 0.20.1.dev18\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-03-22 21:38+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-05 01:51+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid ""
"A bad metadata structure was returned from the %(driver)s storage driver: "
"%(metadata)s. %(e)s."
msgstr ""
"A bad metadata structure was returned from the %(driver)s storage driver: "
"%(metadata)s. %(e)s."
msgid "An unknown exception occurred"
msgstr "An unknown exception occurred"
#, python-format
msgid "Auth service at URL %(url)s not found."
msgstr "Auth service at URL %(url)s not found."
msgid "Authorization failed."
msgstr "Authorisation failed."
msgid ""
"Configuration for store failed. Adding images to this store is disabled."
msgstr ""
"Configuration for store failed. Adding images to this store is disabled."
#, python-format
msgid "Connect error/bad request to Auth service at URL %(url)s."
msgstr "Connect error/bad request to Auth service at URL %(url)s."
msgid "Data supplied was not valid."
msgstr "Data supplied was not valid."
msgid "Deleting images from this store is not supported."
msgstr "Deleting images from this store is not supported."
#, python-format
msgid "Driver %(driver_name)s could not be loaded."
msgstr "Driver %(driver_name)s could not be loaded."
#, python-format
msgid "Error: cooperative_iter exception %s"
msgstr "Error: cooperative_iter exception %s"
#, python-format
msgid "Failed to configure store correctly: %s Disabling add method."
msgstr "Failed to configure store correctly: %s Disabling add method."
msgid "Getting images from this store is not supported."
msgstr "Getting images from this store is not supported."
#, python-format
msgid ""
"Getting images randomly from this store is not supported. Offset: "
"%(offset)s, length: %(chunk_size)s"
msgstr ""
"Getting images randomly from this store is not supported. Offset: "
"%(offset)s, length: %(chunk_size)s"
#, python-format
msgid "Image %(image)s already exists"
msgstr "Image %(image)s already exists"
#, python-format
msgid "Image %(image)s not found"
msgstr "Image %(image)s not found"
#, python-format
msgid ""
"Incorrect auth strategy, expected \"%(expected)s\" but received "
"\"%(received)s\""
msgstr ""
"Incorrect auth strategy, expected \"%(expected)s\" but received "
"\"%(received)s\""
#, python-format
msgid "Maximum redirects (%(redirects)s) was exceeded."
msgstr "Maximum redirects (%(redirects)s) was exceeded."
#, python-format
msgid "Missing required credential: %(required)s"
msgstr "Missing required credential: %(required)s"
#, python-format
msgid ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."
msgstr ""
"Multiple 'image' service matches for region %(region)s. This generally means "
"that a region is required and you have not supplied one."
msgid "Permission to write image storage media denied."
msgstr "Permission to write image storage media denied."
#, python-format
msgid "Redirecting to %(uri)s for authorization."
msgstr "Redirecting to %(uri)s for authorisation."
msgid "Remote server where the image is present is unavailable."
msgstr "Remote server where the image is present is unavailable."
msgid "Response from Keystone does not contain a Glance endpoint."
msgstr "Response from Keystone does not contain a Glance endpoint."
msgid "Skipping store.set_acls... not implemented."
msgstr "Skipping store.set_acls... not implemented."
#, python-format
msgid ""
"Store %(store_name)s could not be configured correctly. Reason: %(reason)s"
msgstr ""
"Store %(store_name)s could not be configured correctly. Reason: %(reason)s"
#, python-format
msgid "Store for scheme %s not found"
msgstr "Store for scheme %s not found"
#, python-format
msgid "The Store URI was malformed: %(uri)s"
msgstr "The Store URI was malformed: %(uri)s"
msgid "The image cannot be deleted because it has snapshot(s)."
msgstr "The image cannot be deleted because it has snapshot(s)."
msgid ""
"The image cannot be deleted because it is in use through the backend store "
"outside of Glance."
msgstr ""
"The image cannot be deleted because it is in use through the backend store "
"outside of Glance."
#, python-format
msgid ""
"The image metadata key %(key)s has an invalid type of %(type)s. Only dict, "
"list, and unicode are supported."
msgstr ""
"The image metadata key %(key)s has an invalid type of %(type)s. Only dict, "
"list, and unicode are supported."
#, python-format
msgid ""
"The storage driver %(driver)s returned invalid metadata %(metadata)s. This "
"must be a dictionary type"
msgstr ""
"The storage driver %(driver)s returned invalid metadata %(metadata)s. This "
"must be a dictionary type"
msgid "There is not enough disk space on the image storage media."
msgstr "There is not enough disk space on the image storage media."
#, python-format
msgid "Unknown scheme '%(scheme)s' found in URI"
msgstr "Unknown scheme '%(scheme)s' found in URI"
msgid "You are not authenticated."
msgstr "You are not authenticated."
msgid "You are not authorized to complete this action."
msgstr "You are not authorised to complete this action."


@ -1,169 +0,0 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
A class that describes the location of an image in Glance.
In Glance, an image can either be **stored** in Glance, or it can be
**registered** in Glance but actually be stored somewhere else.
We needed a class that could support the various ways that Glance
describes where exactly an image is stored.
An image in Glance has two location properties: the image URI
and the image storage URI.
The image URI is essentially the permalink identifier for the image.
It is displayed in the output of various Glance API calls and,
while read-only, is entirely user-facing. It shall **not** contain any
security credential information at all. The Glance image URI shall
be the host:port of that Glance API server along with /images/<IMAGE_ID>.
The Glance storage URI is an internal URI structure that Glance
uses to maintain critical information about how to access the images
that it stores in its storage backends. It **may contain** security
credentials and is **not** user-facing.
"""
import logging
from oslo_config import cfg
from six.moves import urllib
from glance_store import exceptions
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
SCHEME_TO_CLS_MAP = {}
def get_location_from_uri(uri, conf=CONF):
"""
Given a URI, return a Location object that has had an appropriate
store parse the URI.
:param uri: A URI that could come from the end-user in the Location
attribute/header.
:param conf: The global configuration.
Example URIs:
https://user:pass@example.com:80/images/some-id
http://images.oracle.com/123456
swift://example.com/container/obj-id
swift://user:account:pass@authurl.com/container/obj-id
swift+http://user:account:pass@authurl.com/container/obj-id
file:///var/lib/glance/images/1
cinder://volume-id
"""
pieces = urllib.parse.urlparse(uri)
if pieces.scheme not in SCHEME_TO_CLS_MAP:
raise exceptions.UnknownScheme(scheme=pieces.scheme)
scheme_info = SCHEME_TO_CLS_MAP[pieces.scheme]
return Location(pieces.scheme, scheme_info['location_class'],
conf, uri=uri)
def register_scheme_map(scheme_map):
"""
Given a mapping of 'scheme' to store_name, adds the mapping to the
known list of schemes.
This function overrides existing stores.
"""
for (k, v) in scheme_map.items():
LOG.debug("Registering scheme %s with %s", k, v)
SCHEME_TO_CLS_MAP[k] = v
class Location(object):
"""
Class describing the location of an image that Glance knows about
"""
def __init__(self, store_name, store_location_class, conf,
uri=None, image_id=None, store_specs=None):
"""
Create a new Location object.
:param store_name: The string identifier/scheme of the storage backend
:param store_location_class: The store location class to use
for this location instance.
:param image_id: The identifier of the image in whatever storage
backend is used.
:param uri: Optional URI to construct location from
:param store_specs: Dictionary of information about the location
of the image that is dependent on the backend
store
"""
self.store_name = store_name
self.image_id = image_id
self.store_specs = store_specs or {}
self.conf = conf
self.store_location = store_location_class(self.store_specs, conf)
if uri:
self.store_location.parse_uri(uri)
def get_store_uri(self):
"""
Returns the Glance image URI, which is the host:port of the API server
along with /images/<IMAGE_ID>
"""
return self.store_location.get_uri()
def get_uri(self):
return None
class StoreLocation(object):
"""
Base class that must be implemented by each store
"""
def __init__(self, store_specs, conf):
self.conf = conf
self.specs = store_specs
if self.specs:
self.process_specs()
def process_specs(self):
"""
Subclasses should implement any processing of the self.specs collection
such as storing credentials and possibly establishing connections.
"""
pass
def get_uri(self):
"""
Subclasses should implement a method that returns an internal URI that,
when supplied to the StoreLocation instance, can be interpreted by the
StoreLocation's parse_uri() method. The URI returned from this method
shall never be public and is only used internally within Glance, so it is
fine to encode credentials in this URI.
"""
raise NotImplementedError("StoreLocation subclass must implement "
"get_uri()")
def parse_uri(self, uri):
"""
Subclasses should implement a method that accepts a string URI and
sets appropriate internal fields such that a call to get_uri() will
return a proper internal URI
"""
raise NotImplementedError("StoreLocation subclass must implement "
"parse_uri()")


@ -1,83 +0,0 @@
# Copyright 2011 OpenStack Foundation
# Copyright 2014 Red Hat, Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import shutil
import fixtures
from oslo_config import cfg
from oslotest import base
import glance_store as store
from glance_store import location
class StoreBaseTest(base.BaseTestCase):
# NOTE(flaper87): temporary until we
# can move to a fully-local lib.
# (Swift store's fault)
_CONF = cfg.ConfigOpts()
def setUp(self):
super(StoreBaseTest, self).setUp()
self.conf = self._CONF
self.conf(args=[])
store.register_opts(self.conf)
self.config(stores=[])
# Ensure stores + locations cleared
location.SCHEME_TO_CLS_MAP = {}
store.create_stores(self.conf)
self.addCleanup(setattr, location, 'SCHEME_TO_CLS_MAP', dict())
self.test_dir = self.useFixture(fixtures.TempDir()).path
self.addCleanup(self.conf.reset)
def copy_data_file(self, file_name, dst_dir):
src_file_name = os.path.join('glance_store/tests/etc', file_name)
shutil.copy(src_file_name, dst_dir)
dst_file_name = os.path.join(dst_dir, file_name)
return dst_file_name
def config(self, **kw):
"""Override some configuration values.
The keyword arguments are the names of configuration options to
override and their values.
If a group argument is supplied, the overrides are applied to
the specified configuration option group.
All overrides are automatically cleared at the end of the current
test by the fixtures cleanup process.
"""
group = kw.pop('group', 'glance_store')
for k, v in kw.items():
self.conf.set_override(k, v, group)
def register_store_schemes(self, store, store_entry):
schemes = store.get_schemes()
scheme_map = {}
loc_cls = store.get_store_location_class()
for scheme in schemes:
scheme_map[scheme] = {
'store': store,
'location_class': loc_cls,
'store_entry': store_entry
}
location.register_scheme_map(scheme_map)
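
A hypothetical unit test built on this base class, showing the config() override helper (the test class is illustrative; the pattern matches the filesystem tests later in this commit):

from glance_store._drivers import filesystem
from glance_store.tests import base

class TestMyFilesystemStore(base.StoreBaseTest):
    def setUp(self):
        super(TestMyFilesystemStore, self).setUp()
        self.store = filesystem.Store(self.conf)
        # config() overrides land in the 'glance_store' group and are
        # cleared automatically when the test ends.
        self.config(filesystem_store_datadir=self.test_dir,
                    stores=['glance.store.filesystem.Store'])
        self.store.configure()
        self.register_store_schemes(self.store, 'file')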


@ -1,37 +0,0 @@
[ref1]
user = tenant:user1
key = key1
auth_address = example.com
[ref2]
user = user2
key = key2
user_domain_id = default
project_domain_id = default
auth_version = 3
auth_address = http://example.com
[store_2]
user = tenant:user1
key = key1
auth_address = https://localhost:8080
[store_3]
user = tenant:user2
key = key2
auth_address = https://localhost:8080
[store_4]
user = tenant:user1
key = key1
auth_address = http://localhost:80
[store_5]
user = tenant:user1
key = key1
auth_address = http://localhost
[store_6]
user = tenant:user1
key = key1
auth_address = https://localhost/v1


@ -1,22 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from glance_store import driver
from glance_store import exceptions
class UnconfigurableStore(driver.Store):
def configure(self, re_raise_bsc=False):
raise exceptions.BadStoreConfiguration()


@ -1,97 +0,0 @@
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try:
import configparser as ConfigParser
except ImportError:
from six.moves import configparser as ConfigParser
from io import BytesIO
import glance_store
from oslo_config import cfg
import testtools
CONF = cfg.CONF
UUID1 = '961973d8-3360-4364-919e-2c197825dbb4'
UUID2 = 'e03cf3b1-3070-4497-a37d-9703edfb615b'
UUID3 = '0d7f89b2-e236-45e9-b081-561cd3102e92'
UUID4 = '165e9681-ea56-46b0-a84c-f148c752ef8b'
IMAGE_BITS = b'I am a bootable image, I promise'
class Base(testtools.TestCase):
def __init__(self, driver_name, *args, **kwargs):
super(Base, self).__init__(*args, **kwargs)
self.driver_name = driver_name
self.config = ConfigParser.RawConfigParser()
self.config.read('functional_testing.conf')
glance_store.register_opts(CONF)
def setUp(self):
super(Base, self).setUp()
stores = self.config.get('tests', 'stores').split(',')
if self.driver_name not in stores:
self.skipTest('Not running %s store tests' % self.driver_name)
CONF.set_override('stores', [self.driver_name], group='glance_store')
CONF.set_override('default_store',
self.driver_name,
group='glance_store'
)
glance_store.create_stores()
self.store = glance_store.backend._load_store(CONF, self.driver_name)
self.store.configure()
class BaseFunctionalTests(Base):
def test_add(self):
image_file = BytesIO(IMAGE_BITS)
loc, written, _, _ = self.store.add(UUID1, image_file, len(IMAGE_BITS))
self.assertEqual(len(IMAGE_BITS), written)
def test_delete(self):
image_file = BytesIO(IMAGE_BITS)
loc, written, _, _ = self.store.add(UUID2, image_file, len(IMAGE_BITS))
location = glance_store.location.get_location_from_uri(loc)
self.store.delete(location)
def test_get_size(self):
image_file = BytesIO(IMAGE_BITS)
loc, written, _, _ = self.store.add(UUID3, image_file, len(IMAGE_BITS))
location = glance_store.location.get_location_from_uri(loc)
size = self.store.get_size(location)
self.assertEqual(len(IMAGE_BITS), size)
def test_get(self):
image_file = BytesIO(IMAGE_BITS)
loc, written, _, _ = self.store.add(UUID3, image_file, len(IMAGE_BITS))
location = glance_store.location.get_location_from_uri(loc)
image, size = self.store.get(location)
self.assertEqual(len(IMAGE_BITS), size)
data = b''
for chunk in image:
data += chunk
self.assertEqual(IMAGE_BITS, data)
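
For reference, the options read above imply a functional_testing.conf roughly like the following; the section and option names come from the code in this commit (the [admin] keys are read by the Swift functional test below), while the values are placeholders:

[tests]
stores = file,swift

[admin]
user = admin:admin
key = secretadmin
auth_address = http://127.0.0.1/identity
region = RegionOne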


@ -1,44 +0,0 @@
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import shutil
import tempfile
from oslo_config import cfg
from glance_store.tests.functional import base
CONF = cfg.CONF
logging.basicConfig()
class TestFilesystem(base.BaseFunctionalTests):
def __init__(self, *args, **kwargs):
super(TestFilesystem, self).__init__('file', *args, **kwargs)
def setUp(self):
self.tmp_image_dir = tempfile.mkdtemp(prefix='glance_store_')
CONF.set_override('filesystem_store_datadir',
self.tmp_image_dir,
group='glance_store')
super(TestFilesystem, self).setUp()
def tearDown(self):
shutil.rmtree(self.tmp_image_dir)
super(TestFilesystem, self).tearDown()


@ -1,33 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside gate_hook function in devstack gate.
# NOTE(NiallBunting) The store to test is passed in here from the
# project config.
GLANCE_STORE_DRIVER=${1:-swift}
ENABLED_SERVICES+=",key,glance"
case $GLANCE_STORE_DRIVER in
swift)
ENABLED_SERVICES+=",s-proxy,s-account,s-container,s-object,"
;;
esac
export GLANCE_STORE_DRIVER
export ENABLED_SERVICES
$BASE/new/devstack-gate/devstack-vm-gate.sh
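
The first positional argument selects the driver under test, so a project-config invocation looks roughly like this (paths illustrative):

./gate_hook.sh file    # only key and glance are enabled
./gate_hook.sh swift   # additionally enables the Swift s-* services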


@ -1,79 +0,0 @@
#!/bin/bash -xe
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside post_test_hook function in devstack gate.
set -xe
export GLANCE_STORE_DIR="$BASE/new/glance_store"
SCRIPTS_DIR="/usr/os-testr-env/bin/"
GLANCE_STORE_DRIVER=${1:-swift}
function generate_test_logs {
local path="$1"
# Compress all $path/*.txt files and move the directories holding those
# files to /opt/stack/logs. Files with .log suffix have their
# suffix changed to .txt (so browsers will know to open the compressed
# files and not download them).
if [ -d "$path" ]
then
sudo find $path -iname "*.log" -type f -exec mv {} {}.txt \; -exec gzip -9 {}.txt \;
sudo mv $path/* /opt/stack/logs/
fi
}
function generate_testr_results {
    if [ -f .testrepository/0 ]; then
        # Give job user rights to access tox logs
        sudo -H -u "$owner" chmod o+rw .
        sudo -H -u "$owner" chmod o+rw -R .testrepository
        # The test above already guarantees the subunit stream exists,
        # so convert and publish it directly.
        subunit-1to2 < .testrepository/0 > ./testrepository.subunit
        $SCRIPTS_DIR/subunit2html ./testrepository.subunit testr_results.html
        gzip -9 ./testrepository.subunit
        gzip -9 ./testr_results.html
        sudo mv ./*.gz /opt/stack/logs/
    fi
}
owner=jenkins
# Get admin credentials
cd $BASE/new/devstack
source openrc admin admin
# Go to the glance_store dir
cd $GLANCE_STORE_DIR
sudo chown -R $owner:stack $GLANCE_STORE_DIR
sudo cp $GLANCE_STORE_DIR/functional_testing.conf.sample $GLANCE_STORE_DIR/functional_testing.conf
# Set admin creds
iniset $GLANCE_STORE_DIR/functional_testing.conf admin key $ADMIN_PASSWORD
# Run tests
echo "Running glance_store functional test suite"
set +e
# Preserve env for OS_ credentials
sudo -E -H -u jenkins tox -e functional-$GLANCE_STORE_DRIVER
EXIT_CODE=$?
set -e
# Collect and parse result
generate_testr_results
exit $EXIT_CODE


@ -1,92 +0,0 @@
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import random
import time
from oslo_config import cfg
import swiftclient
from glance_store.tests.functional import base
CONF = cfg.CONF
logging.basicConfig()
class TestSwift(base.BaseFunctionalTests):
def __init__(self, *args, **kwargs):
super(TestSwift, self).__init__('swift', *args, **kwargs)
self.auth = self.config.get('admin', 'auth_address')
user = self.config.get('admin', 'user')
self.key = self.config.get('admin', 'key')
self.region = self.config.get('admin', 'region')
self.tenant, self.username = user.split(':')
CONF.set_override('swift_store_user',
user,
group='glance_store')
CONF.set_override('swift_store_auth_address',
self.auth,
group='glance_store')
CONF.set_override('swift_store_key',
self.key,
group='glance_store')
CONF.set_override('swift_store_create_container_on_put',
True,
group='glance_store')
CONF.set_override('swift_store_region',
self.region,
group='glance_store')
CONF.set_override('swift_store_create_container_on_put',
True,
group='glance_store')
def setUp(self):
self.container = ("glance_store_container_" +
str(int(random.random() * 1000)))
CONF.set_override('swift_store_container',
self.container,
group='glance_store')
super(TestSwift, self).setUp()
def tearDown(self):
for x in range(1, 4):
time.sleep(x)
try:
swift = swiftclient.client.Connection(auth_version='2',
user=self.username,
key=self.key,
tenant_name=self.tenant,
authurl=self.auth)
_, objects = swift.get_container(self.container)
for obj in objects:
swift.delete_object(self.container, obj.get('name'))
swift.delete_container(self.container)
except Exception:
if x < 3:
pass
else:
raise
else:
break
super(TestSwift, self).tearDown()


@ -1,115 +0,0 @@
# Copyright 2016 OpenStack, LLC
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests the backend store API's"""
import mock
from glance_store import backend
from glance_store import exceptions
from glance_store.tests import base
class TestStoreAddToBackend(base.StoreBaseTest):
def setUp(self):
super(TestStoreAddToBackend, self).setUp()
self.image_id = "animage"
self.data = "dataandstuff"
self.size = len(self.data)
self.location = "file:///ab/cde/fgh"
self.checksum = "md5"
def _bad_metadata(self, in_metadata):
mstore = mock.Mock()
mstore.add.return_value = (self.location, self.size,
self.checksum, in_metadata)
mstore.__str__ = lambda self: "hello"
mstore.__unicode__ = lambda self: "hello"
self.assertRaises(exceptions.BackendException,
backend.store_add_to_backend,
self.image_id,
self.data,
self.size,
mstore)
mstore.add.assert_called_once_with(self.image_id, mock.ANY,
self.size, context=None,
verifier=None)
def _good_metadata(self, in_metadata):
mstore = mock.Mock()
mstore.add.return_value = (self.location, self.size,
self.checksum, in_metadata)
(location,
size,
checksum,
metadata) = backend.store_add_to_backend(self.image_id,
self.data,
self.size,
mstore)
mstore.add.assert_called_once_with(self.image_id, mock.ANY,
self.size, context=None,
verifier=None)
self.assertEqual(self.location, location)
self.assertEqual(self.size, size)
self.assertEqual(self.checksum, checksum)
self.assertEqual(in_metadata, metadata)
def test_empty(self):
metadata = {}
self._good_metadata(metadata)
def test_string(self):
metadata = {'key': u'somevalue'}
self._good_metadata(metadata)
def test_list(self):
m = {'key': [u'somevalue', u'2']}
self._good_metadata(m)
def test_unicode_dict(self):
inner = {'key1': u'somevalue', 'key2': u'somevalue'}
m = {'topkey': inner}
self._good_metadata(m)
def test_unicode_dict_list(self):
inner = {'key1': u'somevalue', 'key2': u'somevalue'}
m = {'topkey': inner, 'list': [u'somevalue', u'2'], 'u': u'2'}
self._good_metadata(m)
def test_nested_dict(self):
inner = {'key1': u'somevalue', 'key2': u'somevalue'}
inner = {'newkey': inner}
inner = {'anotherkey': inner}
m = {'topkey': inner}
self._good_metadata(m)
def test_bad_top_level_nonunicode(self):
metadata = {'key': b'a string'}
self._bad_metadata(metadata)
def test_bad_nonunicode_dict_list(self):
inner = {'key1': u'somevalue', 'key2': u'somevalue',
'k3': [1, object()]}
m = {'topkey': inner, 'list': [u'somevalue', u'2'], 'u': u'2'}
self._bad_metadata(m)
def test_bad_metadata_not_dict(self):
self._bad_metadata([])
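
The cases above pin down the metadata contract: whatever a store's add() returns as its fourth element must be a dict containing only unicode strings, lists, and dicts. Illustratively (the values are made up):

# Accepted by store_add_to_backend:
good = {'mountpoint': u'/tmp', 'ids': [u'a', u'2'], 'nested': {'k': u'v'}}

# Rejected with BackendException: bytes are not unicode.
bad = {'mountpoint': b'/tmp'}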


@ -1,353 +0,0 @@
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import contextlib
import errno
import hashlib
import mock
import os
import six
import socket
import tempfile
import time
import uuid
from os_brick.initiator import connector
from oslo_concurrency import processutils
from oslo_utils import units
from glance_store._drivers import cinder
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
class FakeObject(object):
def __init__(self, **kwargs):
for name, value in kwargs.items():
setattr(self, name, value)
class TestCinderStore(base.StoreBaseTest,
test_store_capabilities.TestStoreCapabilitiesChecking):
def setUp(self):
super(TestCinderStore, self).setUp()
self.store = cinder.Store(self.conf)
self.store.configure()
self.register_store_schemes(self.store, 'cinder')
self.store.READ_CHUNKSIZE = 4096
self.store.WRITE_CHUNKSIZE = 4096
fake_sc = [{u'endpoints': [{u'publicURL': u'http://foo/public_url'}],
u'endpoints_links': [],
u'name': u'cinder',
u'type': u'volumev2'}]
self.context = FakeObject(service_catalog=fake_sc,
user='fake_user',
auth_token='fake_token',
tenant='fake_tenant')
def test_get_cinderclient(self):
cc = cinder.get_cinderclient(self.conf, self.context)
self.assertEqual('fake_token', cc.client.auth_token)
self.assertEqual('http://foo/public_url', cc.client.management_url)
def test_get_cinderclient_with_user_overridden(self):
self.config(cinder_store_user_name='test_user')
self.config(cinder_store_password='test_password')
self.config(cinder_store_project_name='test_project')
self.config(cinder_store_auth_address='test_address')
cc = cinder.get_cinderclient(self.conf, self.context)
self.assertIsNone(cc.client.auth_token)
self.assertEqual('test_address', cc.client.management_url)
def test_temporary_chown(self):
class fake_stat(object):
st_uid = 1
with mock.patch.object(os, 'stat', return_value=fake_stat()), \
mock.patch.object(os, 'getuid', return_value=2), \
mock.patch.object(processutils, 'execute') as mock_execute, \
mock.patch.object(cinder, 'get_root_helper',
return_value='sudo'):
with cinder.temporary_chown('test'):
pass
expected_calls = [mock.call('chown', 2, 'test', run_as_root=True,
root_helper='sudo'),
mock.call('chown', 1, 'test', run_as_root=True,
root_helper='sudo')]
self.assertEqual(expected_calls, mock_execute.call_args_list)
@mock.patch.object(time, 'sleep')
def test_wait_volume_status(self, mock_sleep):
fake_manager = FakeObject(get=mock.Mock())
volume_available = FakeObject(manager=fake_manager,
id='fake-id',
status='available')
volume_in_use = FakeObject(manager=fake_manager,
id='fake-id',
status='in-use')
fake_manager.get.side_effect = [volume_available, volume_in_use]
self.assertEqual(volume_in_use,
self.store._wait_volume_status(
volume_available, 'available', 'in-use'))
fake_manager.get.assert_called_with('fake-id')
mock_sleep.assert_called_once_with(0.5)
@mock.patch.object(time, 'sleep')
def test_wait_volume_status_unexpected(self, mock_sleep):
fake_manager = FakeObject(get=mock.Mock())
volume_available = FakeObject(manager=fake_manager,
id='fake-id',
status='error')
fake_manager.get.return_value = volume_available
self.assertRaises(exceptions.BackendException,
self.store._wait_volume_status,
volume_available, 'available', 'in-use')
fake_manager.get.assert_called_with('fake-id')
@mock.patch.object(time, 'sleep')
def test_wait_volume_status_timeout(self, mock_sleep):
fake_manager = FakeObject(get=mock.Mock())
volume_available = FakeObject(manager=fake_manager,
id='fake-id',
status='available')
fake_manager.get.return_value = volume_available
self.assertRaises(exceptions.BackendException,
self.store._wait_volume_status,
volume_available, 'available', 'in-use')
fake_manager.get.assert_called_with('fake-id')
def _test_open_cinder_volume(self, open_mode, attach_mode, error):
fake_volume = mock.MagicMock(id=str(uuid.uuid4()), status='available')
fake_volumes = FakeObject(get=lambda id: fake_volume,
detach=mock.Mock())
fake_client = FakeObject(volumes=fake_volumes)
_, fake_dev_path = tempfile.mkstemp(dir=self.test_dir)
fake_devinfo = {'path': fake_dev_path}
fake_connector = FakeObject(
connect_volume=mock.Mock(return_value=fake_devinfo),
disconnect_volume=mock.Mock())
@contextlib.contextmanager
def fake_chown(path):
yield
def do_open():
with self.store._open_cinder_volume(
fake_client, fake_volume, open_mode):
if error:
raise error
def fake_factory(protocol, root_helper, **kwargs):
self.assertEqual(fake_volume.initialize_connection.return_value,
kwargs['conn'])
return fake_connector
root_helper = "sudo glance-rootwrap /etc/glance/rootwrap.conf"
with mock.patch.object(cinder.Store,
'_wait_volume_status',
return_value=fake_volume), \
mock.patch.object(cinder, 'temporary_chown',
side_effect=fake_chown), \
mock.patch.object(cinder, 'get_root_helper',
return_value=root_helper), \
mock.patch.object(connector, 'get_connector_properties'), \
mock.patch.object(connector.InitiatorConnector, 'factory',
side_effect=fake_factory):
if error:
self.assertRaises(error, do_open)
else:
do_open()
fake_connector.connect_volume.assert_called_once_with(mock.ANY)
fake_connector.disconnect_volume.assert_called_once_with(
mock.ANY, fake_devinfo)
fake_volume.attach.assert_called_once_with(
None, None, attach_mode, host_name=socket.gethostname())
fake_volumes.detach.assert_called_once_with(fake_volume)
def test_open_cinder_volume_rw(self):
self._test_open_cinder_volume('wb', 'rw', None)
def test_open_cinder_volume_ro(self):
self._test_open_cinder_volume('rb', 'ro', None)
def test_open_cinder_volume_error(self):
self._test_open_cinder_volume('wb', 'rw', IOError)
def test_cinder_configure_add(self):
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._check_context, None)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._check_context,
FakeObject(service_catalog=None))
self.store._check_context(FakeObject(service_catalog='fake'))
def test_cinder_get(self):
expected_size = 5 * units.Ki
expected_file_contents = b"*" * expected_size
volume_file = six.BytesIO(expected_file_contents)
fake_client = FakeObject(auth_token=None, management_url=None)
fake_volume_uuid = str(uuid.uuid4())
fake_volume = mock.MagicMock(id=fake_volume_uuid,
metadata={'image_size': expected_size},
status='available')
fake_volume.manager.get.return_value = fake_volume
fake_volumes = FakeObject(get=lambda id: fake_volume)
@contextlib.contextmanager
def fake_open(client, volume, mode):
self.assertEqual('rb', mode)
yield volume_file
with mock.patch.object(cinder, 'get_cinderclient') as mock_cc, \
mock.patch.object(self.store, '_open_cinder_volume',
side_effect=fake_open):
mock_cc.return_value = FakeObject(client=fake_client,
volumes=fake_volumes)
uri = "cinder://%s" % fake_volume_uuid
loc = location.get_location_from_uri(uri, conf=self.conf)
(image_file, image_size) = self.store.get(loc,
context=self.context)
expected_num_chunks = 2
data = b""
num_chunks = 0
for chunk in image_file:
num_chunks += 1
data += chunk
self.assertEqual(expected_num_chunks, num_chunks)
self.assertEqual(expected_file_contents, data)
def test_cinder_get_size(self):
fake_client = FakeObject(auth_token=None, management_url=None)
fake_volume_uuid = str(uuid.uuid4())
fake_volume = FakeObject(size=5, metadata={})
fake_volumes = {fake_volume_uuid: fake_volume}
with mock.patch.object(cinder, 'get_cinderclient') as mocked_cc:
mocked_cc.return_value = FakeObject(client=fake_client,
volumes=fake_volumes)
uri = 'cinder://%s' % fake_volume_uuid
loc = location.get_location_from_uri(uri, conf=self.conf)
image_size = self.store.get_size(loc, context=self.context)
self.assertEqual(fake_volume.size * units.Gi, image_size)
def test_cinder_get_size_with_metadata(self):
fake_client = FakeObject(auth_token=None, management_url=None)
fake_volume_uuid = str(uuid.uuid4())
expected_image_size = 4500 * units.Mi
fake_volume = FakeObject(size=5,
metadata={'image_size': expected_image_size})
fake_volumes = {fake_volume_uuid: fake_volume}
with mock.patch.object(cinder, 'get_cinderclient') as mocked_cc:
mocked_cc.return_value = FakeObject(client=fake_client,
volumes=fake_volumes)
uri = 'cinder://%s' % fake_volume_uuid
loc = location.get_location_from_uri(uri, conf=self.conf)
image_size = self.store.get_size(loc, context=self.context)
self.assertEqual(expected_image_size, image_size)
def _test_cinder_add(self, fake_volume, volume_file, size_kb=5,
verifier=None):
expected_image_id = str(uuid.uuid4())
expected_size = size_kb * units.Ki
expected_file_contents = b"*" * expected_size
image_file = six.BytesIO(expected_file_contents)
expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
expected_location = 'cinder://%s' % fake_volume.id
fake_client = FakeObject(auth_token=None, management_url=None)
fake_volume.manager.get.return_value = fake_volume
fake_volumes = FakeObject(create=mock.Mock(return_value=fake_volume))
self.config(cinder_volume_type='some_type')
@contextlib.contextmanager
def fake_open(client, volume, mode):
self.assertEqual('wb', mode)
yield volume_file
with mock.patch.object(cinder, 'get_cinderclient') as mock_cc, \
mock.patch.object(self.store, '_open_cinder_volume',
side_effect=fake_open):
mock_cc.return_value = FakeObject(client=fake_client,
volumes=fake_volumes)
loc, size, checksum, _ = self.store.add(expected_image_id,
image_file,
expected_size,
self.context,
verifier)
self.assertEqual(expected_location, loc)
self.assertEqual(expected_size, size)
self.assertEqual(expected_checksum, checksum)
fake_volumes.create.assert_called_once_with(
1,
name='image-%s' % expected_image_id,
metadata={'image_owner': self.context.tenant,
'glance_image_id': expected_image_id,
'image_size': str(expected_size)},
volume_type='some_type')
def test_cinder_add(self):
fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
status='available',
size=1)
volume_file = six.BytesIO()
self._test_cinder_add(fake_volume, volume_file)
def test_cinder_add_with_verifier(self):
fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
status='available',
size=1)
volume_file = six.BytesIO()
verifier = mock.MagicMock()
self._test_cinder_add(fake_volume, volume_file, 1, verifier)
verifier.update.assert_called_with(b"*" * units.Ki)
def test_cinder_add_volume_full(self):
e = IOError()
volume_file = six.BytesIO()
e.errno = errno.ENOSPC
fake_volume = mock.MagicMock(id=str(uuid.uuid4()),
status='available',
size=1)
with mock.patch.object(volume_file, 'write', side_effect=e):
self.assertRaises(exceptions.StorageFull,
self._test_cinder_add, fake_volume, volume_file)
fake_volume.delete.assert_called_once_with()
def test_cinder_delete(self):
fake_client = FakeObject(auth_token=None, management_url=None)
fake_volume_uuid = str(uuid.uuid4())
fake_volume = FakeObject(delete=mock.Mock())
fake_volumes = {fake_volume_uuid: fake_volume}
with mock.patch.object(cinder, 'get_cinderclient') as mocked_cc:
mocked_cc.return_value = FakeObject(client=fake_client,
volumes=fake_volumes)
uri = 'cinder://%s' % fake_volume_uuid
loc = location.get_location_from_uri(uri, conf=self.conf)
self.store.delete(loc, context=self.context)
fake_volume.delete.assert_called_once_with()


@ -1,180 +0,0 @@
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from glance_store._drivers.swift import connection_manager
from glance_store._drivers.swift import store as swift_store
from glance_store import exceptions
from glance_store.tests import base
class TestConnectionManager(base.StoreBaseTest):
def setUp(self):
super(TestConnectionManager, self).setUp()
self.client = mock.MagicMock()
self.client.session.get_auth_headers.return_value = {
connection_manager.SwiftConnectionManager.AUTH_HEADER_NAME:
"fake_token"}
self.location = mock.create_autospec(swift_store.StoreLocation)
self.context = mock.MagicMock()
self.conf = mock.MagicMock()
def prepare_store(self, multi_tenant=False):
if multi_tenant:
store = mock.create_autospec(swift_store.MultiTenantStore,
conf=self.conf)
else:
store = mock.create_autospec(swift_store.SingleTenantStore,
service_type="swift",
endpoint_type="internal",
region=None,
conf=self.conf,
auth_version='3')
store.init_client.return_value = self.client
return store
def test_basic_single_tenant_cm_init(self):
store = self.prepare_store()
manager = connection_manager.SingleTenantConnectionManager(
store=store,
store_location=self.location
)
store.init_client.assert_called_once_with(self.location, None)
self.client.session.get_endpoint.assert_called_once_with(
service_type=store.service_type,
interface=store.endpoint_type,
region_name=store.region
)
store.get_store_connection.assert_called_once_with(
"fake_token", manager.storage_url
)
def test_basic_multi_tenant_cm_init(self):
store = self.prepare_store(multi_tenant=True)
manager = connection_manager.MultiTenantConnectionManager(
store=store,
store_location=self.location,
context=self.context
)
store.get_store_connection.assert_called_once_with(
self.context.auth_token, manager.storage_url)
def test_basic_multi_tenant_no_context(self):
store = self.prepare_store(multi_tenant=True)
self.assertRaises(exceptions.BadStoreConfiguration,
connection_manager.MultiTenantConnectionManager,
store=store, store_location=self.location)
def test_multi_tenant_client_cm_with_client_creation_fails(self):
store = self.prepare_store(multi_tenant=True)
store.init_client.side_effect = [Exception]
manager = connection_manager.MultiTenantConnectionManager(
store=store,
store_location=self.location,
context=self.context,
allow_reauth=True
)
store.init_client.assert_called_once_with(self.location,
self.context)
store.get_store_connection.assert_called_once_with(
self.context.auth_token, manager.storage_url)
self.assertFalse(manager.allow_reauth)
def test_multi_tenant_client_cm_with_no_expiration(self):
store = self.prepare_store(multi_tenant=True)
manager = connection_manager.MultiTenantConnectionManager(
store=store,
store_location=self.location,
context=self.context,
allow_reauth=True
)
store.init_client.assert_called_once_with(self.location,
self.context)
# return the same connection because it should not be expired
auth_ref = mock.MagicMock()
self.client.session.auth.get_auth_ref.return_value = auth_ref
auth_ref.will_expire_soon.return_value = False
manager.get_connection()
# check that we don't update connection
store.get_store_connection.assert_called_once_with("fake_token",
manager.storage_url)
self.client.session.get_auth_headers.assert_called_once_with()
def test_multi_tenant_client_cm_with_expiration(self):
store = self.prepare_store(multi_tenant=True)
manager = connection_manager.MultiTenantConnectionManager(
store=store,
store_location=self.location,
context=self.context,
allow_reauth=True
)
store.init_client.assert_called_once_with(self.location,
self.context)
# simulate a token that is about to expire, forcing re-authentication
auth_ref = mock.MagicMock()
self.client.session.auth.get_auth_ref.return_value = auth_ref
auth_ref.will_expire_soon.return_value = True
manager.get_connection()
# check that the connection was refreshed
self.assertEqual(2, store.get_store_connection.call_count)
self.assertEqual(2, self.client.session.get_auth_headers.call_count)
def test_single_tenant_client_cm_with_no_expiration(self):
store = self.prepare_store()
manager = connection_manager.SingleTenantConnectionManager(
store=store,
store_location=self.location,
allow_reauth=True
)
store.init_client.assert_called_once_with(self.location, None)
self.client.session.get_endpoint.assert_called_once_with(
service_type=store.service_type,
interface=store.endpoint_type,
region_name=store.region
)
# return the same connection because it should not be expired
auth_ref = mock.MagicMock()
self.client.session.auth.get_auth_ref.return_value = auth_ref
auth_ref.will_expire_soon.return_value = False
manager.get_connection()
# check that we don't update connection
store.get_store_connection.assert_called_once_with("fake_token",
manager.storage_url)
self.client.session.get_auth_headers.assert_called_once_with()
def test_single_tenant_client_cm_with_expiration(self):
store = self.prepare_store()
manager = connection_manager.SingleTenantConnectionManager(
store=store,
store_location=self.location,
allow_reauth=True
)
store.init_client.assert_called_once_with(self.location, None)
self.client.session.get_endpoint.assert_called_once_with(
service_type=store.service_type,
interface=store.endpoint_type,
region_name=store.region
)
# simulate a token that is about to expire, forcing re-authentication
auth_ref = mock.MagicMock()
self.client.session.auth.get_auth_ref.return_value = auth_ref
auth_ref.will_expire_soon.return_value = True
manager.get_connection()
# check that the connection was refreshed
self.assertEqual(2, store.get_store_connection.call_count)
self.assertEqual(2, self.client.session.get_auth_headers.call_count)


@ -1,57 +0,0 @@
# Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import encodeutils
from oslotest import base
import six
import glance_store
class TestExceptions(base.BaseTestCase):
"""Test routines in glance_store.common.utils."""
def test_backend_exception(self):
msg = glance_store.BackendException()
self.assertIn(u'', encodeutils.exception_to_unicode(msg))
def test_unsupported_backend_exception(self):
msg = glance_store.UnsupportedBackend()
self.assertIn(u'', encodeutils.exception_to_unicode(msg))
def test_redirect_exception(self):
# Just checks imports work ok
glance_store.RedirectException(url='http://localhost')
def test_exception_no_message(self):
msg = glance_store.NotFound()
self.assertIn('Image %(image)s not found',
encodeutils.exception_to_unicode(msg))
def test_exception_not_found_with_image(self):
msg = glance_store.NotFound(image='123')
self.assertIn('Image 123 not found',
encodeutils.exception_to_unicode(msg))
def test_exception_with_message(self):
msg = glance_store.NotFound('Some message')
self.assertIn('Some message', encodeutils.exception_to_unicode(msg))
def test_exception_with_kwargs(self):
msg = glance_store.NotFound('Message: %(foo)s', foo='bar')
self.assertIn('Message: bar', encodeutils.exception_to_unicode(msg))
def test_non_unicode_error_msg(self):
exc = glance_store.NotFound(str('test'))
self.assertIsInstance(encodeutils.exception_to_unicode(exc),
six.text_type)


@ -1,736 +0,0 @@
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests the filesystem backend store"""
import errno
import hashlib
import json
import mock
import os
import stat
import uuid
import fixtures
from oslo_utils import units
import six
from six.moves import builtins
# NOTE(jokke): simplified transition to py3, behaves like py2 xrange
from six.moves import range
from glance_store._drivers import filesystem
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
class TestStore(base.StoreBaseTest,
test_store_capabilities.TestStoreCapabilitiesChecking):
def setUp(self):
"""Establish a clean test environment."""
super(TestStore, self).setUp()
self.orig_chunksize = filesystem.Store.READ_CHUNKSIZE
filesystem.Store.READ_CHUNKSIZE = 10
self.store = filesystem.Store(self.conf)
self.config(filesystem_store_datadir=self.test_dir,
stores=['glance.store.filesystem.Store'],
group="glance_store")
self.store.configure()
self.register_store_schemes(self.store, 'file')
def tearDown(self):
"""Clear the test environment."""
super(TestStore, self).tearDown()
filesystem.ChunkedFile.CHUNKSIZE = self.orig_chunksize
def _create_metadata_json_file(self, metadata):
expected_image_id = str(uuid.uuid4())
jsonfilename = os.path.join(self.test_dir,
"storage_metadata.%s" % expected_image_id)
self.config(filesystem_store_metadata_file=jsonfilename,
group="glance_store")
with open(jsonfilename, 'w') as fptr:
json.dump(metadata, fptr)
def _store_image(self, in_metadata):
expected_image_id = str(uuid.uuid4())
expected_file_size = 10
expected_file_contents = b"*" * expected_file_size
image_file = six.BytesIO(expected_file_contents)
self.store.FILESYSTEM_STORE_METADATA = in_metadata
return self.store.add(expected_image_id, image_file,
expected_file_size)
def test_get(self):
"""Test a "normal" retrieval of an image in chunks."""
# First add an image...
image_id = str(uuid.uuid4())
file_contents = b"chunk00000remainder"
image_file = six.BytesIO(file_contents)
loc, size, checksum, _ = self.store.add(image_id,
image_file,
len(file_contents))
# Now read it back...
uri = "file:///%s/%s" % (self.test_dir, image_id)
loc = location.get_location_from_uri(uri, conf=self.conf)
(image_file, image_size) = self.store.get(loc)
expected_data = b"chunk00000remainder"
expected_num_chunks = 2
data = b""
num_chunks = 0
for chunk in image_file:
num_chunks += 1
data += chunk
self.assertEqual(expected_data, data)
self.assertEqual(expected_num_chunks, num_chunks)
def test_get_random_access(self):
"""Test a "normal" retrieval of an image in chunks."""
# First add an image...
image_id = str(uuid.uuid4())
file_contents = b"chunk00000remainder"
image_file = six.BytesIO(file_contents)
loc, size, checksum, _ = self.store.add(image_id,
image_file,
len(file_contents))
# Now read it back...
uri = "file:///%s/%s" % (self.test_dir, image_id)
loc = location.get_location_from_uri(uri, conf=self.conf)
data = b""
for offset in range(len(file_contents)):
(image_file, image_size) = self.store.get(loc,
offset=offset,
chunk_size=1)
for chunk in image_file:
data += chunk
self.assertEqual(file_contents, data)
data = b""
chunk_size = 5
(image_file, image_size) = self.store.get(loc,
offset=chunk_size,
chunk_size=chunk_size)
for chunk in image_file:
data += chunk
self.assertEqual(b'00000', data)
self.assertEqual(chunk_size, image_size)
def test_get_non_existing(self):
"""
Test that trying to retrieve a file that doesn't exist
raises an error
"""
loc = location.get_location_from_uri(
"file:///%s/non-existing" % self.test_dir, conf=self.conf)
self.assertRaises(exceptions.NotFound,
self.store.get,
loc)
def test_add(self):
"""Test that we can add an image via the filesystem backend."""
filesystem.ChunkedFile.CHUNKSIZE = units.Ki
expected_image_id = str(uuid.uuid4())
expected_file_size = 5 * units.Ki # 5K
expected_file_contents = b"*" * expected_file_size
expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
expected_location = "file://%s/%s" % (self.test_dir,
expected_image_id)
image_file = six.BytesIO(expected_file_contents)
loc, size, checksum, _ = self.store.add(expected_image_id,
image_file,
expected_file_size)
self.assertEqual(expected_location, loc)
self.assertEqual(expected_file_size, size)
self.assertEqual(expected_checksum, checksum)
uri = "file:///%s/%s" % (self.test_dir, expected_image_id)
loc = location.get_location_from_uri(uri, conf=self.conf)
(new_image_file, new_image_size) = self.store.get(loc)
new_image_contents = b""
new_image_file_size = 0
for chunk in new_image_file:
new_image_file_size += len(chunk)
new_image_contents += chunk
self.assertEqual(expected_file_contents, new_image_contents)
self.assertEqual(expected_file_size, new_image_file_size)
def test_add_with_verifier(self):
"""Test that 'verifier.update' is called when verifier is provided."""
verifier = mock.MagicMock(name='mock_verifier')
self.store.chunk_size = units.Ki
image_id = str(uuid.uuid4())
file_size = units.Ki # 1K
file_contents = b"*" * file_size
image_file = six.BytesIO(file_contents)
self.store.add(image_id, image_file, file_size, verifier=verifier)
verifier.update.assert_called_with(file_contents)
def test_add_check_metadata_with_invalid_mountpoint_location(self):
in_metadata = [{'id': 'abcdefg',
'mountpoint': '/xyz/images'}]
location, size, checksum, metadata = self._store_image(in_metadata)
self.assertEqual({}, metadata)
def test_add_check_metadata_list_with_invalid_mountpoint_locations(self):
in_metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'},
{'id': 'xyz1234', 'mountpoint': '/pqr/images'}]
location, size, checksum, metadata = self._store_image(in_metadata)
self.assertEqual({}, metadata)
def test_add_check_metadata_list_with_valid_mountpoint_locations(self):
in_metadata = [{'id': 'abcdefg', 'mountpoint': '/tmp'},
{'id': 'xyz1234', 'mountpoint': '/xyz'}]
location, size, checksum, metadata = self._store_image(in_metadata)
self.assertEqual(in_metadata[0], metadata)
def test_add_check_metadata_bad_nosuch_file(self):
expected_image_id = str(uuid.uuid4())
jsonfilename = os.path.join(self.test_dir,
"storage_metadata.%s" % expected_image_id)
self.config(filesystem_store_metadata_file=jsonfilename,
group="glance_store")
expected_file_size = 10
expected_file_contents = b"*" * expected_file_size
image_file = six.BytesIO(expected_file_contents)
location, size, checksum, metadata = self.store.add(expected_image_id,
image_file,
expected_file_size)
self.assertEqual(metadata, {})
def test_add_already_existing(self):
"""
Tests that adding an image with an existing identifier
raises an appropriate exception
"""
filesystem.ChunkedFile.CHUNKSIZE = units.Ki
image_id = str(uuid.uuid4())
file_size = 5 * units.Ki # 5K
file_contents = b"*" * file_size
image_file = six.BytesIO(file_contents)
location, size, checksum, _ = self.store.add(image_id,
image_file,
file_size)
image_file = six.BytesIO(b"nevergonnamakeit")
self.assertRaises(exceptions.Duplicate,
self.store.add,
image_id, image_file, 0)
def _do_test_add_write_failure(self, errno, exception):
filesystem.ChunkedFile.CHUNKSIZE = units.Ki
image_id = str(uuid.uuid4())
file_size = 5 * units.Ki # 5K
file_contents = b"*" * file_size
path = os.path.join(self.test_dir, image_id)
image_file = six.BytesIO(file_contents)
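# Patch the built-in open() so the write fails with the given errno;
# the store should raise the matching exception and leave no partial
# file behind.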
with mock.patch.object(builtins, 'open') as popen:
e = IOError()
e.errno = errno
popen.side_effect = e
self.assertRaises(exception,
self.store.add,
image_id, image_file, 0)
self.assertFalse(os.path.exists(path))
def test_add_storage_full(self):
"""
Tests that adding an image without enough space on disk
raises an appropriate exception
"""
self._do_test_add_write_failure(errno.ENOSPC, exceptions.StorageFull)
def test_add_file_too_big(self):
"""
Tests that adding an excessively large image file
raises an appropriate exception
"""
self._do_test_add_write_failure(errno.EFBIG, exceptions.StorageFull)
def test_add_storage_write_denied(self):
"""
Tests that adding an image with insufficient filestore permissions
raises an appropriate exception
"""
self._do_test_add_write_failure(errno.EACCES,
exceptions.StorageWriteDenied)
def test_add_other_failure(self):
"""
Tests that a non-space-related IOError does not raise a
StorageFull exception.
"""
self._do_test_add_write_failure(errno.ENOTDIR, IOError)
def test_add_cleanup_on_read_failure(self):
"""
Tests that the partial image file is cleaned up after a read
failure.
"""
filesystem.ChunkedFile.CHUNKSIZE = units.Ki
image_id = str(uuid.uuid4())
file_size = 5 * units.Ki # 5K
file_contents = b"*" * file_size
path = os.path.join(self.test_dir, image_id)
image_file = six.BytesIO(file_contents)
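# Fail the first read of the incoming data and verify the partially
# written image file is removed.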
def fake_Error(size):
raise AttributeError()
with mock.patch.object(image_file, 'read') as mock_read:
mock_read.side_effect = fake_Error
self.assertRaises(AttributeError,
self.store.add,
image_id, image_file, 0)
self.assertFalse(os.path.exists(path))
def test_delete(self):
"""
Test that we can delete an existing image in the filesystem store
"""
# First add an image
image_id = str(uuid.uuid4())
file_size = 5 * units.Ki # 5K
file_contents = b"*" * file_size
image_file = six.BytesIO(file_contents)
loc, size, checksum, _ = self.store.add(image_id,
image_file,
file_size)
# Now check that we can delete it
uri = "file:///%s/%s" % (self.test_dir, image_id)
loc = location.get_location_from_uri(uri, conf=self.conf)
self.store.delete(loc)
self.assertRaises(exceptions.NotFound, self.store.get, loc)
def test_delete_non_existing(self):
"""
Test that trying to delete a file that doesn't exist
raises an error
"""
loc = location.get_location_from_uri(
"file:///tmp/glance-tests/non-existing", conf=self.conf)
self.assertRaises(exceptions.NotFound,
self.store.delete,
loc)
def test_delete_forbidden(self):
"""
Tests that trying to delete a file without permissions
raises the correct error
"""
# First add an image
image_id = str(uuid.uuid4())
file_size = 5 * units.Ki # 5K
file_contents = b"*" * file_size
image_file = six.BytesIO(file_contents)
loc, size, checksum, _ = self.store.add(image_id,
image_file,
file_size)
uri = "file:///%s/%s" % (self.test_dir, image_id)
loc = location.get_location_from_uri(uri, conf=self.conf)
# Mock unlink to raise an OSError for lack of permissions
# and make sure we can't delete the image
with mock.patch.object(os, 'unlink') as unlink:
e = OSError()
e.errno = errno.EACCES
unlink.side_effect = e
self.assertRaises(exceptions.Forbidden,
self.store.delete,
loc)
# Make sure the image didn't get deleted
self.store.get(loc)
def test_configure_add_with_multi_datadirs(self):
"""
Tests that multiple filesystem directories specified by
filesystem_store_datadirs are parsed correctly.
"""
store_map = [self.useFixture(fixtures.TempDir()).path,
self.useFixture(fixtures.TempDir()).path]
self.conf.set_override('filesystem_store_datadir',
override=None,
group='glance_store')
self.conf.set_override('filesystem_store_datadirs',
[store_map[0] + ":100",
store_map[1] + ":200"],
group='glance_store')
self.store.configure_add()
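# Directories are grouped by priority, and the priority list is
# ordered highest first.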
expected_priority_map = {100: [store_map[0]], 200: [store_map[1]]}
expected_priority_list = [200, 100]
self.assertEqual(expected_priority_map, self.store.priority_data_map)
self.assertEqual(expected_priority_list, self.store.priority_list)
def test_configure_add_with_metadata_file_success(self):
metadata = {'id': 'asdf1234',
'mountpoint': '/tmp'}
self._create_metadata_json_file(metadata)
self.store.configure_add()
self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA)
def test_configure_add_check_metadata_list_of_dicts_success(self):
metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'},
{'id': 'xyz1234', 'mountpoint': '/tmp/'}]
self._create_metadata_json_file(metadata)
self.store.configure_add()
self.assertEqual(metadata, self.store.FILESYSTEM_STORE_METADATA)
def test_configure_add_check_metadata_success_list_val_for_some_key(self):
metadata = {'akey': ['value1', 'value2'], 'id': 'asdf1234',
'mountpoint': '/tmp'}
self._create_metadata_json_file(metadata)
self.store.configure_add()
self.assertEqual([metadata], self.store.FILESYSTEM_STORE_METADATA)
def test_configure_add_check_metadata_bad_data(self):
metadata = {'akey': 10, 'id': 'asdf1234',
'mountpoint': '/tmp'} # only unicode is allowed
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
def test_configure_add_check_metadata_with_no_id_or_mountpoint(self):
metadata = {'mountpoint': '/tmp'}
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
metadata = {'id': 'asdfg1234'}
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
def test_configure_add_check_metadata_id_or_mountpoint_is_not_string(self):
metadata = {'id': 10, 'mountpoint': '/tmp'}
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
metadata = {'id': 'asdf1234', 'mountpoint': 12345}
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
def test_configure_add_check_metadata_list_with_no_id_or_mountpoint(self):
metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'},
{'mountpoint': '/pqr/images'}]
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
metadata = [{'id': 'abcdefg'},
{'id': 'xyz1234', 'mountpoint': '/pqr/images'}]
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
def test_add_check_metadata_list_id_or_mountpoint_is_not_string(self):
metadata = [{'id': 'abcdefg', 'mountpoint': '/xyz/images'},
{'id': 1234, 'mountpoint': '/pqr/images'}]
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
metadata = [{'id': 'abcdefg', 'mountpoint': 1234},
{'id': 'xyz1234', 'mountpoint': '/pqr/images'}]
self._create_metadata_json_file(metadata)
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
def test_configure_add_same_dir_multiple_times(self):
"""
Tests that a BadStoreConfiguration exception is raised if the same
directory is listed more than once, at different priorities, in
filesystem_store_datadirs.
"""
store_map = [self.useFixture(fixtures.TempDir()).path,
self.useFixture(fixtures.TempDir()).path]
self.conf.clear_override('filesystem_store_datadir',
group='glance_store')
self.conf.set_override('filesystem_store_datadirs',
[store_map[0] + ":100",
store_map[1] + ":200",
store_map[0] + ":300"],
group='glance_store')
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
def test_configure_add_same_dir_multiple_times_same_priority(self):
"""
Tests that no BadStoreConfiguration exception is raised if the same
directory is listed more than once at the same priority in
filesystem_store_datadirs.
"""
store_map = [self.useFixture(fixtures.TempDir()).path,
self.useFixture(fixtures.TempDir()).path]
self.conf.set_override('filesystem_store_datadir',
override=None,
group='glance_store')
self.conf.set_override('filesystem_store_datadirs',
[store_map[0] + ":100",
store_map[1] + ":200",
store_map[0] + ":100"],
group='glance_store')
try:
self.store.configure()
except exceptions.BadStoreConfiguration:
self.fail("configure() raised BadStoreConfiguration unexpectedly!")
# Test that we can add an image via the filesystem backend
filesystem.ChunkedFile.CHUNKSIZE = 1024
expected_image_id = str(uuid.uuid4())
expected_file_size = 5 * units.Ki # 5K
expected_file_contents = b"*" * expected_file_size
expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
expected_location = "file://%s/%s" % (store_map[1],
expected_image_id)
image_file = six.BytesIO(expected_file_contents)
loc, size, checksum, _ = self.store.add(expected_image_id,
image_file,
expected_file_size)
self.assertEqual(expected_location, loc)
self.assertEqual(expected_file_size, size)
self.assertEqual(expected_checksum, checksum)
loc = location.get_location_from_uri(expected_location,
conf=self.conf)
(new_image_file, new_image_size) = self.store.get(loc)
new_image_contents = b""
new_image_file_size = 0
for chunk in new_image_file:
new_image_file_size += len(chunk)
new_image_contents += chunk
self.assertEqual(expected_file_contents, new_image_contents)
self.assertEqual(expected_file_size, new_image_file_size)
def test_add_with_multiple_dirs(self):
"""Test adding multiple filesystem directories."""
store_map = [self.useFixture(fixtures.TempDir()).path,
self.useFixture(fixtures.TempDir()).path]
self.conf.set_override('filesystem_store_datadir',
override=None,
group='glance_store')
self.conf.set_override('filesystem_store_datadirs',
[store_map[0] + ":100",
store_map[1] + ":200"],
group='glance_store')
self.store.configure()
# Test that we can add an image via the filesystem backend
filesystem.ChunkedFile.CHUNKSIZE = units.Ki
expected_image_id = str(uuid.uuid4())
expected_file_size = 5 * units.Ki # 5K
expected_file_contents = b"*" * expected_file_size
expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
expected_location = "file://%s/%s" % (store_map[1],
expected_image_id)
image_file = six.BytesIO(expected_file_contents)
loc, size, checksum, _ = self.store.add(expected_image_id,
image_file,
expected_file_size)
self.assertEqual(expected_location, loc)
self.assertEqual(expected_file_size, size)
self.assertEqual(expected_checksum, checksum)
loc = location.get_location_from_uri(expected_location,
conf=self.conf)
(new_image_file, new_image_size) = self.store.get(loc)
new_image_contents = b""
new_image_file_size = 0
for chunk in new_image_file:
new_image_file_size += len(chunk)
new_image_contents += chunk
self.assertEqual(expected_file_contents, new_image_contents)
self.assertEqual(expected_file_size, new_image_file_size)
def test_add_with_multiple_dirs_storage_full(self):
"""
Tests that a StorageFull exception is raised if no filesystem
directory has enough free space to store the image.
"""
store_map = [self.useFixture(fixtures.TempDir()).path,
self.useFixture(fixtures.TempDir()).path]
self.conf.set_override('filesystem_store_datadir',
override=None,
group='glance_store')
self.conf.set_override('filesystem_store_datadirs',
[store_map[0] + ":100",
store_map[1] + ":200"],
group='glance_store')
self.store.configure_add()
def fake_get_capacity_info(mount_point):
return 0
with mock.patch.object(self.store, '_get_capacity_info') as capacity:
capacity.return_value = 0
filesystem.ChunkedFile.CHUNKSIZE = units.Ki
expected_image_id = str(uuid.uuid4())
expected_file_size = 5 * units.Ki # 5K
expected_file_contents = b"*" * expected_file_size
image_file = six.BytesIO(expected_file_contents)
self.assertRaises(exceptions.StorageFull, self.store.add,
expected_image_id, image_file,
expected_file_size)
def test_configure_add_with_file_perm(self):
"""
Tests that the file permission specified by
filesystem_store_file_perm is parsed correctly.
"""
store = self.useFixture(fixtures.TempDir()).path
self.conf.set_override('filesystem_store_datadir', store,
group='glance_store')
self.conf.set_override('filesystem_store_file_perm', 700, # -rwx------
group='glance_store')
self.store.configure_add()
self.assertEqual(self.store.datadir, store)
def test_configure_add_with_unaccessible_file_perm(self):
"""
Tests that a BadStoreConfiguration exception is raised if an invalid
file permission is specified in filesystem_store_file_perm.
"""
store = self.useFixture(fixtures.TempDir()).path
self.conf.set_override('filesystem_store_datadir', store,
group='glance_store')
self.conf.set_override('filesystem_store_file_perm', 7, # -------rwx
group='glance_store')
self.assertRaises(exceptions.BadStoreConfiguration,
self.store.configure_add)
def test_add_with_file_perm_for_group_other_users_access(self):
"""
Test that we can add an image via the filesystem backend with a
required image file permission.
"""
store = self.useFixture(fixtures.TempDir()).path
self.conf.set_override('filesystem_store_datadir', store,
group='glance_store')
self.conf.set_override('filesystem_store_file_perm', 744, # -rwxr--r--
group='glance_store')
# -rwx------
os.chmod(store, 0o700)
self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))
self.store.configure_add()
filesystem.Store.WRITE_CHUNKSIZE = units.Ki
expected_image_id = str(uuid.uuid4())
expected_file_size = 5 * units.Ki # 5K
expected_file_contents = b"*" * expected_file_size
expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
expected_location = "file://%s/%s" % (store,
expected_image_id)
image_file = six.BytesIO(expected_file_contents)
location, size, checksum, _ = self.store.add(expected_image_id,
image_file,
expected_file_size)
self.assertEqual(expected_location, location)
self.assertEqual(expected_file_size, size)
self.assertEqual(expected_checksum, checksum)
# -rwx--x--x for store directory
self.assertEqual(0o711, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))
# -rwxr--r-- for image file
mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE]
perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8)
self.assertEqual(perm, stat.S_IMODE(mode))
def test_add_with_file_perm_for_owner_users_access(self):
"""
Test that we can add an image via the filesystem backend with a
required image file permission.
"""
store = self.useFixture(fixtures.TempDir()).path
self.conf.set_override('filesystem_store_datadir', store,
group='glance_store')
self.conf.set_override('filesystem_store_file_perm', 600, # -rw-------
group='glance_store')
# -rwx------
os.chmod(store, 0o700)
self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))
self.store.configure_add()
filesystem.Store.WRITE_CHUNKSIZE = units.Ki
expected_image_id = str(uuid.uuid4())
expected_file_size = 5 * units.Ki # 5K
expected_file_contents = b"*" * expected_file_size
expected_checksum = hashlib.md5(expected_file_contents).hexdigest()
expected_location = "file://%s/%s" % (store,
expected_image_id)
image_file = six.BytesIO(expected_file_contents)
location, size, checksum, _ = self.store.add(expected_image_id,
image_file,
expected_file_size)
self.assertEqual(expected_location, location)
self.assertEqual(expected_file_size, size)
self.assertEqual(expected_checksum, checksum)
# -rwx------ for store directory
self.assertEqual(0o700, stat.S_IMODE(os.stat(store)[stat.ST_MODE]))
# -rw------- for image file
mode = os.stat(expected_location[len('file:/'):])[stat.ST_MODE]
perm = int(str(self.conf.glance_store.filesystem_store_file_perm), 8)
self.assertEqual(perm, stat.S_IMODE(mode))

View File

@ -1,192 +0,0 @@
# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import requests
import glance_store
from glance_store._drivers import http
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
from glance_store.tests import utils
class TestHttpStore(base.StoreBaseTest,
test_store_capabilities.TestStoreCapabilitiesChecking):
def setUp(self):
super(TestHttpStore, self).setUp()
self.config(default_store='http', group='glance_store')
http.Store.READ_CHUNKSIZE = 2
self.store = http.Store(self.conf)
self.register_store_schemes(self.store, 'http')
def _mock_requests(self):
"""Mock requests session object.
Should be called when we need to mock request/response objects.
"""
request = mock.patch('requests.Session.request')
self.request = request.start()
self.addCleanup(request.stop)
def test_http_get(self):
self._mock_requests()
self.request.return_value = utils.fake_response()
uri = "http://netloc/path/to/file.tar.gz"
expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s',
'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n']
loc = location.get_location_from_uri(uri, conf=self.conf)
(image_file, image_size) = self.store.get(loc)
self.assertEqual(31, image_size)
chunks = [c for c in image_file]
self.assertEqual(expected_returns, chunks)
def test_http_partial_get(self):
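# No request mock is needed: the http store does not support random
# access, so ranged reads are rejected before any HTTP call is made.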
uri = "http://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.StoreRandomGetNotSupported,
self.store.get, loc, chunk_size=1)
def test_http_get_redirect(self):
# Add two layers of redirects to the response stack, which will
# return the default 200 OK with the expected data after resolving
# both redirects.
self._mock_requests()
redirect1 = {"location": "http://example.com/teapot.img"}
redirect2 = {"location": "http://example.com/teapot_real.img"}
responses = [utils.fake_response(),
utils.fake_response(status_code=301, headers=redirect2),
utils.fake_response(status_code=302, headers=redirect1)]
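# responses.pop() serves the list from the end: first the 302, then
# the 301, and finally the 200 carrying the image data.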
def getresponse(*args, **kwargs):
return responses.pop()
self.request.side_effect = getresponse
uri = "http://netloc/path/to/file.tar.gz"
expected_returns = ['I ', 'am', ' a', ' t', 'ea', 'po', 't,', ' s',
'ho', 'rt', ' a', 'nd', ' s', 'to', 'ut', '\n']
loc = location.get_location_from_uri(uri, conf=self.conf)
(image_file, image_size) = self.store.get(loc)
self.assertEqual(0, len(responses))
self.assertEqual(31, image_size)
chunks = [c for c in image_file]
self.assertEqual(expected_returns, chunks)
def test_http_get_max_redirects(self):
self._mock_requests()
redirect = {"location": "http://example.com/teapot.img"}
responses = ([utils.fake_response(status_code=302, headers=redirect)]
* (http.MAX_REDIRECTS + 2))
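# Queue more redirects than MAX_REDIRECTS allows so the store gives
# up with MaxRedirectsExceeded.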
def getresponse(*args, **kwargs):
return responses.pop()
self.request.side_effect = getresponse
uri = "http://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.MaxRedirectsExceeded, self.store.get, loc)
def test_http_get_redirect_invalid(self):
self._mock_requests()
redirect = {"location": "http://example.com/teapot.img"}
redirect_resp = utils.fake_response(status_code=307, headers=redirect)
self.request.return_value = redirect_resp
uri = "http://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.BadStoreUri, self.store.get, loc)
def test_http_get_not_found(self):
self._mock_requests()
fake = utils.fake_response(status_code=404, content="404 Not Found")
self.request.return_value = fake
uri = "http://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.NotFound, self.store.get, loc)
def test_http_delete_raise_error(self):
self._mock_requests()
self.request.return_value = utils.fake_response()
uri = "https://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.StoreDeleteNotSupported,
self.store.delete, loc)
self.assertRaises(exceptions.StoreDeleteNotSupported,
glance_store.delete_from_backend, uri, {})
def test_http_add_raise_error(self):
self.assertRaises(exceptions.StoreAddDisabled,
self.store.add, None, None, None, None)
self.assertRaises(exceptions.StoreAddDisabled,
glance_store.add_to_backend, None, None,
None, None, 'http')
def test_http_get_size_with_non_existent_image_raises_Not_Found(self):
self._mock_requests()
self.request.return_value = utils.fake_response(
status_code=404, content='404 Not Found')
uri = "http://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.NotFound, self.store.get_size, loc)
self.request.assert_called_once_with('HEAD', uri, stream=True,
allow_redirects=False)
def test_http_get_size_bad_status_line(self):
self._mock_requests()
# Note(sabari): Low-level httplib.BadStatusLine will be raised as
# ConnectionError after migrating to requests.
self.request.side_effect = requests.exceptions.ConnectionError
uri = "http://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.BadStoreUri, self.store.get_size, loc)
def test_http_store_location_initialization(self):
"""Test store location initialization from valid uris"""
uris = [
"http://127.0.0.1:8000/ubuntu.iso",
"http://openstack.com:80/ubuntu.iso",
"http://[1080::8:800:200C:417A]:80/ubuntu.iso"
]
for uri in uris:
location.get_location_from_uri(uri)
def test_http_store_location_initialization_with_invalid_url(self):
"""Test store location initialization from incorrect uris."""
incorrect_uris = [
"http://127.0.0.1:~/ubuntu.iso",
"http://openstack.com:some_text/ubuntu.iso",
"http://[1080::8:800:200C:417A]:some_text/ubuntu.iso"
]
for uri in incorrect_uris:
self.assertRaises(exceptions.BadStoreUri,
location.get_location_from_uri, uri)
def test_http_get_raises_remote_service_unavailable(self):
"""Test http store raises RemoteServiceUnavailable."""
uri = "http://netloc/path/to/file.tar.gz"
loc = location.get_location_from_uri(uri, conf=self.conf)
self.assertRaises(exceptions.RemoteServiceUnavailable,
self.store.get, loc)

View File

@ -1,139 +0,0 @@
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pkg_resources
from testtools import matchers
from glance_store import backend
from glance_store.tests import base
def load_entry_point(entry_point, verify_requirements=False):
"""Load an entry-point without requiring dependencies."""
resolve = getattr(entry_point, 'resolve', None)
require = getattr(entry_point, 'require', None)
if resolve is not None and require is not None:
if verify_requirements:
entry_point.require()
return entry_point.resolve()
else:
return entry_point.load(require=verify_requirements)
class OptsTestCase(base.StoreBaseTest):
def _check_opt_groups(self, opt_list, expected_opt_groups):
self.assertThat(opt_list, matchers.HasLength(len(expected_opt_groups)))
groups = [g for (g, _l) in opt_list]
self.assertThat(groups, matchers.HasLength(len(expected_opt_groups)))
for idx, group in enumerate(groups):
self.assertEqual(expected_opt_groups[idx], group)
def _check_opt_names(self, opt_list, expected_opt_names):
opt_names = [o.name for (g, l) in opt_list for o in l]
self.assertThat(opt_names, matchers.HasLength(len(expected_opt_names)))
for opt in opt_names:
self.assertIn(opt, expected_opt_names)
def _test_entry_point(self, namespace,
expected_opt_groups, expected_opt_names):
opt_list = None
for ep in pkg_resources.iter_entry_points('oslo.config.opts'):
if ep.name == namespace:
list_fn = load_entry_point(ep)
opt_list = list_fn()
break
self.assertIsNotNone(opt_list)
self._check_opt_groups(opt_list, expected_opt_groups)
self._check_opt_names(opt_list, expected_opt_names)
def test_list_api_opts(self):
opt_list = backend._list_opts()
expected_opt_groups = ['glance_store', 'glance_store']
expected_opt_names = [
'default_store',
'stores',
'store_capabilities_update_min_interval',
'cinder_api_insecure',
'cinder_ca_certificates_file',
'cinder_catalog_info',
'cinder_endpoint_template',
'cinder_http_retries',
'cinder_os_region_name',
'cinder_state_transition_timeout',
'cinder_store_auth_address',
'cinder_store_user_name',
'cinder_store_password',
'cinder_store_project_name',
'cinder_volume_type',
'default_swift_reference',
'https_insecure',
'filesystem_store_datadir',
'filesystem_store_datadirs',
'filesystem_store_file_perm',
'filesystem_store_metadata_file',
'http_proxy_information',
'https_ca_certificates_file',
'rbd_store_ceph_conf',
'rbd_store_chunk_size',
'rbd_store_pool',
'rbd_store_user',
'rados_connect_timeout',
'rootwrap_config',
'swift_store_expire_soon_interval',
'sheepdog_store_address',
'sheepdog_store_chunk_size',
'sheepdog_store_port',
'swift_store_admin_tenants',
'swift_store_auth_address',
'swift_store_cacert',
'swift_store_auth_insecure',
'swift_store_auth_version',
'swift_store_config_file',
'swift_store_container',
'swift_store_create_container_on_put',
'swift_store_endpoint',
'swift_store_endpoint_type',
'swift_store_key',
'swift_store_large_object_chunk_size',
'swift_store_large_object_size',
'swift_store_multi_tenant',
'swift_store_multiple_containers_seed',
'swift_store_region',
'swift_store_retry_get_count',
'swift_store_service_type',
'swift_store_ssl_compression',
'swift_store_use_trusts',
'swift_store_user',
'vmware_insecure',
'vmware_ca_file',
'vmware_api_retry_count',
'vmware_datastores',
'vmware_server_host',
'vmware_server_password',
'vmware_server_username',
'vmware_store_image_dir',
'vmware_task_poll_interval'
]
self._check_opt_groups(opt_list, expected_opt_groups)
self._check_opt_names(opt_list, expected_opt_names)
self._test_entry_point('glance.store',
expected_opt_groups, expected_opt_names)

View File

@ -1,431 +0,0 @@
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_utils import units
import six
from glance_store._drivers import rbd as rbd_store
from glance_store import exceptions
from glance_store import location as g_location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
class TestException(Exception):
pass
class MockRados(object):
class Error(Exception):
pass
class ioctx(object):
def __init__(self, *args, **kwargs):
pass
def __enter__(self, *args, **kwargs):
return self
def __exit__(self, *args, **kwargs):
return False
def close(self, *args, **kwargs):
pass
class Rados(object):
def __init__(self, *args, **kwargs):
pass
def __enter__(self, *args, **kwargs):
return self
def __exit__(self, *args, **kwargs):
return False
def connect(self, *args, **kwargs):
pass
def open_ioctx(self, *args, **kwargs):
return MockRados.ioctx()
def shutdown(self, *args, **kwargs):
pass
def conf_get(self, *args, **kwargs):
pass
class MockRBD(object):
class ImageExists(Exception):
pass
class ImageHasSnapshots(Exception):
pass
class ImageBusy(Exception):
pass
class ImageNotFound(Exception):
pass
class InvalidArgument(Exception):
pass
class Image(object):
def __init__(self, *args, **kwargs):
pass
def __enter__(self, *args, **kwargs):
return self
def __exit__(self, *args, **kwargs):
pass
def create_snap(self, *args, **kwargs):
pass
def remove_snap(self, *args, **kwargs):
pass
def protect_snap(self, *args, **kwargs):
pass
def unprotect_snap(self, *args, **kwargs):
pass
def read(self, *args, **kwargs):
raise NotImplementedError()
def write(self, *args, **kwargs):
raise NotImplementedError()
def resize(self, *args, **kwargs):
raise NotImplementedError()
def discard(self, offset, length):
raise NotImplementedError()
def close(self):
pass
def list_snaps(self):
raise NotImplementedError()
def parent_info(self):
raise NotImplementedError()
def size(self):
raise NotImplementedError()
class RBD(object):
def __init__(self, *args, **kwargs):
pass
def __enter__(self, *args, **kwargs):
return self
def __exit__(self, *args, **kwargs):
return False
def create(self, *args, **kwargs):
pass
def remove(self, *args, **kwargs):
pass
def list(self, *args, **kwargs):
raise NotImplementedError()
def clone(self, *args, **kwargs):
raise NotImplementedError()
RBD_FEATURE_LAYERING = 1
class TestStore(base.StoreBaseTest,
test_store_capabilities.TestStoreCapabilitiesChecking):
def setUp(self):
"""Establish a clean test environment."""
super(TestStore, self).setUp()
rbd_store.rados = MockRados
rbd_store.rbd = MockRBD
self.store = rbd_store.Store(self.conf)
self.store.configure()
self.store.chunk_size = 2
self.called_commands_actual = []
self.called_commands_expected = []
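# Tests append the RBD operations they trigger to
# called_commands_actual; tearDown() asserts the list matches
# called_commands_expected.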
self.store_specs = {'pool': 'fake_pool',
'image': 'fake_image',
'snapshot': 'fake_snapshot'}
self.location = rbd_store.StoreLocation(self.store_specs,
self.conf)
# Provide enough data to get more than one chunk iteration.
self.data_len = 3 * units.Ki
self.data_iter = six.BytesIO(b'*' * self.data_len)
def test_add_w_image_size_zero(self):
"""Assert that correct size is returned even though 0 was provided."""
self.store.chunk_size = units.Ki
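# With a declared size of 0, the driver must grow the image as
# chunks arrive and report the actual number of bytes written.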
with mock.patch.object(rbd_store.rbd.Image, 'resize') as resize:
with mock.patch.object(rbd_store.rbd.Image, 'write') as write:
ret = self.store.add('fake_image_id', self.data_iter, 0)
self.assertTrue(resize.called)
self.assertTrue(write.called)
self.assertEqual(ret[1], self.data_len)
@mock.patch.object(MockRBD.Image, '__enter__')
@mock.patch.object(rbd_store.Store, '_create_image')
@mock.patch.object(rbd_store.Store, '_delete_image')
def test_add_w_rbd_image_exception(self, delete, create, enter):
def _fake_create_image(*args, **kwargs):
self.called_commands_actual.append('create')
return self.location
def _fake_delete_image(target_pool, image_name, snapshot_name=None):
self.assertEqual(self.location.pool, target_pool)
self.assertEqual(self.location.image, image_name)
self.assertEqual(self.location.snapshot, snapshot_name)
self.called_commands_actual.append('delete')
def _fake_enter(*args, **kwargs):
raise exceptions.NotFound(image="fake_image_id")
create.side_effect = _fake_create_image
delete.side_effect = _fake_delete_image
enter.side_effect = _fake_enter
self.assertRaises(exceptions.NotFound, self.store.add,
'fake_image_id', self.data_iter, self.data_len)
self.called_commands_expected = ['create', 'delete']
def test_add_duplicate_image(self):
def _fake_create_image(*args, **kwargs):
self.called_commands_actual.append('create')
raise MockRBD.ImageExists()
with mock.patch.object(self.store, '_create_image') as create_image:
create_image.side_effect = _fake_create_image
self.assertRaises(exceptions.Duplicate, self.store.add,
'fake_image_id', self.data_iter, self.data_len)
self.called_commands_expected = ['create']
def test_add_with_verifier(self):
"""Assert 'verifier.update' is called when verifier is provided."""
self.store.chunk_size = units.Ki
verifier = mock.MagicMock(name='mock_verifier')
image_id = 'fake_image_id'
file_size = 5 * units.Ki # 5K
file_contents = b"*" * file_size
image_file = six.BytesIO(file_contents)
with mock.patch.object(rbd_store.rbd.Image, 'write'):
self.store.add(image_id, image_file, file_size, verifier=verifier)
verifier.update.assert_called_with(file_contents)
def test_delete(self):
def _fake_remove(*args, **kwargs):
self.called_commands_actual.append('remove')
with mock.patch.object(MockRBD.RBD, 'remove') as remove_image:
remove_image.side_effect = _fake_remove
self.store.delete(g_location.Location('test_rbd_store',
rbd_store.StoreLocation,
self.conf,
uri=self.location.get_uri()))
self.called_commands_expected = ['remove']
def test_delete_image(self):
def _fake_remove(*args, **kwargs):
self.called_commands_actual.append('remove')
with mock.patch.object(MockRBD.RBD, 'remove') as remove_image:
remove_image.side_effect = _fake_remove
self.store._delete_image('fake_pool', self.location.image)
self.called_commands_expected = ['remove']
def test_delete_image_exc_image_not_found(self):
def _fake_remove(*args, **kwargs):
self.called_commands_actual.append('remove')
raise MockRBD.ImageNotFound()
with mock.patch.object(MockRBD.RBD, 'remove') as remove:
remove.side_effect = _fake_remove
self.assertRaises(exceptions.NotFound, self.store._delete_image,
'fake_pool', self.location.image)
self.called_commands_expected = ['remove']
@mock.patch.object(MockRBD.RBD, 'remove')
@mock.patch.object(MockRBD.Image, 'remove_snap')
@mock.patch.object(MockRBD.Image, 'unprotect_snap')
def test_delete_image_w_snap(self, unprotect, remove_snap, remove):
def _fake_unprotect_snap(*args, **kwargs):
self.called_commands_actual.append('unprotect_snap')
def _fake_remove_snap(*args, **kwargs):
self.called_commands_actual.append('remove_snap')
def _fake_remove(*args, **kwargs):
self.called_commands_actual.append('remove')
remove.side_effect = _fake_remove
unprotect.side_effect = _fake_unprotect_snap
remove_snap.side_effect = _fake_remove_snap
self.store._delete_image('fake_pool', self.location.image,
snapshot_name='snap')
self.called_commands_expected = ['unprotect_snap', 'remove_snap',
'remove']
@mock.patch.object(MockRBD.RBD, 'remove')
@mock.patch.object(MockRBD.Image, 'remove_snap')
@mock.patch.object(MockRBD.Image, 'unprotect_snap')
def test_delete_image_w_unprotected_snap(self, unprotect, remove_snap,
remove):
def _fake_unprotect_snap(*args, **kwargs):
self.called_commands_actual.append('unprotect_snap')
raise MockRBD.InvalidArgument()
def _fake_remove_snap(*args, **kwargs):
self.called_commands_actual.append('remove_snap')
def _fake_remove(*args, **kwargs):
self.called_commands_actual.append('remove')
remove.side_effect = _fake_remove
unprotect.side_effect = _fake_unprotect_snap
remove_snap.side_effect = _fake_remove_snap
self.store._delete_image('fake_pool', self.location.image,
snapshot_name='snap')
self.called_commands_expected = ['unprotect_snap', 'remove_snap',
'remove']
@mock.patch.object(MockRBD.RBD, 'remove')
@mock.patch.object(MockRBD.Image, 'remove_snap')
@mock.patch.object(MockRBD.Image, 'unprotect_snap')
def test_delete_image_w_snap_with_error(self, unprotect, remove_snap,
remove):
def _fake_unprotect_snap(*args, **kwargs):
self.called_commands_actual.append('unprotect_snap')
raise TestException()
def _fake_remove_snap(*args, **kwargs):
self.called_commands_actual.append('remove_snap')
def _fake_remove(*args, **kwargs):
self.called_commands_actual.append('remove')
remove.side_effect = _fake_remove
unprotect.side_effect = _fake_unprotect_snap
remove_snap.side_effect = _fake_remove_snap
self.assertRaises(TestException, self.store._delete_image,
'fake_pool', self.location.image,
snapshot_name='snap')
self.called_commands_expected = ['unprotect_snap']
def test_delete_image_w_snap_exc_image_busy(self):
def _fake_unprotect_snap(*args, **kwargs):
self.called_commands_actual.append('unprotect_snap')
raise MockRBD.ImageBusy()
with mock.patch.object(MockRBD.Image, 'unprotect_snap') as mocked:
mocked.side_effect = _fake_unprotect_snap
self.assertRaises(exceptions.InUseByStore,
self.store._delete_image,
'fake_pool', self.location.image,
snapshot_name='snap')
self.called_commands_expected = ['unprotect_snap']
def test_delete_image_w_snap_exc_image_has_snap(self):
def _fake_remove(*args, **kwargs):
self.called_commands_actual.append('remove')
raise MockRBD.ImageHasSnapshots()
with mock.patch.object(MockRBD.RBD, 'remove') as remove:
remove.side_effect = _fake_remove
self.assertRaises(exceptions.HasSnapshot, self.store._delete_image,
'fake_pool', self.location.image)
self.called_commands_expected = ['remove']
def test_get_partial_image(self):
loc = g_location.Location('test_rbd_store', rbd_store.StoreLocation,
self.conf, store_specs=self.store_specs)
self.assertRaises(exceptions.StoreRandomGetNotSupported,
self.store.get, loc, chunk_size=1)
@mock.patch.object(MockRados.Rados, 'connect')
def test_rados_connect_timeout(self, mock_rados_connect):
socket_timeout = 1
self.config(rados_connect_timeout=socket_timeout)
self.store.configure()
with self.store.get_connection('conffile', 'rados_id'):
mock_rados_connect.assert_called_with(timeout=socket_timeout)
@mock.patch.object(MockRados.Rados, 'connect', side_effect=MockRados.Error)
def test_rados_connect_error(self, _):
rbd_store.rados.Error = MockRados.Error
def test():
with self.store.get_connection('conffile', 'rados_id'):
pass
self.assertRaises(exceptions.BackendException, test)
def test_create_image_conf_features(self):
# Tests that we use non-0 features from ceph.conf and cast to int.
fsid = 'fake'
features = '3'
conf_get_mock = mock.Mock(return_value=features)
conn = mock.Mock(conf_get=conf_get_mock)
ioctxt = mock.sentinel.ioctxt
name = '1'
size = 1024
order = 3
with mock.patch.object(rbd_store.rbd.RBD, 'create') as create_mock:
location = self.store._create_image(
fsid, conn, ioctxt, name, size, order)
self.assertEqual(fsid, location.specs['fsid'])
self.assertEqual(rbd_store.DEFAULT_POOL, location.specs['pool'])
self.assertEqual(name, location.specs['image'])
self.assertEqual(rbd_store.DEFAULT_SNAPNAME,
location.specs['snapshot'])
create_mock.assert_called_once_with(ioctxt, name, size, order,
old_format=False, features=3)
def tearDown(self):
self.assertEqual(self.called_commands_expected,
self.called_commands_actual)
super(TestStore, self).tearDown()

View File

@ -1,209 +0,0 @@
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_concurrency import processutils
from oslo_utils import units
import oslotest
import six
from glance_store._drivers import sheepdog
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
class TestStoreLocation(oslotest.base.BaseTestCase):
def test_process_spec(self):
mock_conf = mock.Mock()
fake_spec = {
'image': '6bd59e6e-c410-11e5-ab67-0a73f1fda51b',
'addr': '127.0.0.1',
'port': 7000,
}
loc = sheepdog.StoreLocation(fake_spec, mock_conf)
self.assertEqual(fake_spec['image'], loc.image)
self.assertEqual(fake_spec['addr'], loc.addr)
self.assertEqual(fake_spec['port'], loc.port)
def test_parse_uri(self):
mock_conf = mock.Mock()
fake_uri = ('sheepdog://127.0.0.1:7000'
':6bd59e6e-c410-11e5-ab67-0a73f1fda51b')
loc = sheepdog.StoreLocation({}, mock_conf)
loc.parse_uri(fake_uri)
self.assertEqual('6bd59e6e-c410-11e5-ab67-0a73f1fda51b', loc.image)
self.assertEqual('127.0.0.1', loc.addr)
self.assertEqual(7000, loc.port)
class TestSheepdogImage(oslotest.base.BaseTestCase):
@mock.patch.object(processutils, 'execute')
def test_run_command(self, mock_execute):
image = sheepdog.SheepdogImage(
'127.0.0.1', 7000, '6bd59e6e-c410-11e5-ab67-0a73f1fda51b',
sheepdog.DEFAULT_CHUNKSIZE,
)
image._run_command('create', None)
expected_cmd = (
'collie', 'vdi', 'create', '-a', '127.0.0.1', '-p', 7000,
'6bd59e6e-c410-11e5-ab67-0a73f1fda51b',
)
actual_cmd = mock_execute.call_args[0]
self.assertEqual(expected_cmd, actual_cmd)
class TestSheepdogStore(base.StoreBaseTest,
test_store_capabilities.TestStoreCapabilitiesChecking):
def setUp(self):
"""Establish a clean test environment."""
super(TestSheepdogStore, self).setUp()
def _fake_execute(*cmd, **kwargs):
pass
self.config(default_store='sheepdog',
group='glance_store')
execute = mock.patch.object(processutils, 'execute').start()
execute.side_effect = _fake_execute
self.addCleanup(execute.stop)
self.store = sheepdog.Store(self.conf)
self.store.configure()
self.store_specs = {'image': '6bd59e6e-c410-11e5-ab67-0a73f1fda51b',
'addr': '127.0.0.1',
'port': 7000}
@mock.patch.object(sheepdog.SheepdogImage, 'write')
@mock.patch.object(sheepdog.SheepdogImage, 'create')
@mock.patch.object(sheepdog.SheepdogImage, 'exist')
def test_add_image(self, mock_exist, mock_create, mock_write):
data = six.BytesIO(b'xx')
mock_exist.return_value = False
(uri, size, checksum, loc) = self.store.add('fake_image_id', data, 2)
mock_exist.assert_called_once_with()
mock_create.assert_called_once_with(2)
mock_write.assert_called_once_with(b'xx', 0, 2)
@mock.patch.object(sheepdog.SheepdogImage, 'write')
@mock.patch.object(sheepdog.SheepdogImage, 'exist')
def test_add_bad_size_with_image(self, mock_exist, mock_write):
data = six.BytesIO(b'xx')
mock_exist.return_value = False
self.assertRaises(exceptions.Forbidden, self.store.add,
'fake_image_id', data, 'test')
mock_exist.assert_called_once_with()
self.assertEqual(mock_write.call_count, 0)
@mock.patch.object(sheepdog.SheepdogImage, 'delete')
@mock.patch.object(sheepdog.SheepdogImage, 'write')
@mock.patch.object(sheepdog.SheepdogImage, 'create')
@mock.patch.object(sheepdog.SheepdogImage, 'exist')
def test_cleanup_when_add_image_exception(self, mock_exist, mock_create,
mock_write, mock_delete):
data = six.BytesIO(b'xx')
mock_exist.return_value = False
mock_write.side_effect = exceptions.BackendException
self.assertRaises(exceptions.BackendException, self.store.add,
'fake_image_id', data, 2)
mock_exist.assert_called_once_with()
mock_create.assert_called_once_with(2)
mock_write.assert_called_once_with(b'xx', 0, 2)
mock_delete.assert_called_once_with()
def test_add_duplicate_image(self):
def _fake_run_command(command, data, *params):
if command == "list -r":
return "= fake_volume 0 1000"
with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd:
cmd.side_effect = _fake_run_command
data = six.BytesIO(b'xx')
self.assertRaises(exceptions.Duplicate, self.store.add,
'fake_image_id', data, 2)
def test_get(self):
def _fake_run_command(command, data, *params):
if command == "list -r":
return "= fake_volume 0 1000"
with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd:
cmd.side_effect = _fake_run_command
loc = location.Location('test_sheepdog_store',
sheepdog.StoreLocation,
self.conf, store_specs=self.store_specs)
ret = self.store.get(loc)
self.assertEqual(1000, ret[1])
def test_partial_get(self):
loc = location.Location('test_sheepdog_store', sheepdog.StoreLocation,
self.conf, store_specs=self.store_specs)
self.assertRaises(exceptions.StoreRandomGetNotSupported,
self.store.get, loc, chunk_size=1)
def test_get_size(self):
def _fake_run_command(command, data, *params):
if command == "list -r":
return "= fake_volume 0 1000"
with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd:
cmd.side_effect = _fake_run_command
loc = location.Location('test_sheepdog_store',
sheepdog.StoreLocation,
self.conf, store_specs=self.store_specs)
ret = self.store.get_size(loc)
self.assertEqual(1000, ret)
def test_delete(self):
called_commands = []
def _fake_run_command(command, data, *params):
called_commands.append(command)
if command == "list -r":
return "= fake_volume 0 1000"
with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd:
cmd.side_effect = _fake_run_command
loc = location.Location('test_sheepdog_store',
sheepdog.StoreLocation,
self.conf, store_specs=self.store_specs)
self.store.delete(loc)
self.assertEqual(['list -r', 'delete'], called_commands)
def test_add_with_verifier(self):
"""Test that 'verifier.update' is called when verifier is provided."""
verifier = mock.MagicMock(name='mock_verifier')
self.store.chunk_size = units.Ki
image_id = 'fake_image_id'
file_size = units.Ki # 1K
file_contents = b"*" * file_size
image_file = six.BytesIO(file_contents)
def _fake_run_command(command, data, *params):
pass
with mock.patch.object(sheepdog.SheepdogImage, '_run_command') as cmd:
cmd.side_effect = _fake_run_command
self.store.add(image_id, image_file, file_size, verifier=verifier)
verifier.update.assert_called_with(file_contents)

View File

@ -1,40 +0,0 @@
# Copyright 2011-2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import glance_store as store
from glance_store import backend
from glance_store.tests import base
class TestStoreBase(base.StoreBaseTest):
def setUp(self):
super(TestStoreBase, self).setUp()
self.config(default_store='file', group='glance_store')
@mock.patch.object(store.driver, 'LOG')
def test_configure_does_not_raise_on_missing_driver_conf(self, mock_log):
self.config(stores=['file'], group='glance_store')
self.config(filesystem_store_datadir=None, group='glance_store')
self.config(filesystem_store_datadirs=None, group='glance_store')
for (__, store_instance) in backend._load_stores(self.conf):
store_instance.configure()
mock_log.warning.assert_called_once_with(
"Failed to configure store correctly: Store filesystem "
"could not be configured correctly. Reason: Specify "
"at least 'filesystem_store_datadir' or "
"'filesystem_store_datadirs' option Disabling add method.")

View File

@ -1,144 +0,0 @@
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from glance_store import capabilities as caps
from glance_store.tests import base
class FakeStoreWithStaticCapabilities(caps.StoreCapability):
_CAPABILITIES = caps.BitMasks.READ_RANDOM | caps.BitMasks.DRIVER_REUSABLE
class FakeStoreWithDynamicCapabilities(caps.StoreCapability):
def __init__(self, *cap_list):
super(FakeStoreWithDynamicCapabilities, self).__init__()
if not cap_list:
cap_list = [caps.BitMasks.READ_RANDOM,
caps.BitMasks.DRIVER_REUSABLE]
self.set_capabilities(*cap_list)
class FakeStoreWithMixedCapabilities(caps.StoreCapability):
_CAPABILITIES = caps.BitMasks.READ_RANDOM
def __init__(self):
super(FakeStoreWithMixedCapabilities, self).__init__()
self.set_capabilities(caps.BitMasks.DRIVER_REUSABLE)
class TestStoreCapabilitiesChecking(object):
def test_store_capabilities_checked_on_io_operations(self):
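# The store's I/O methods are wrapped by the capabilities checker,
# so each bound method reports the wrapper's name, 'op_checker'.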
self.assertEqual('op_checker', self.store.add.__name__)
self.assertEqual('op_checker', self.store.get.__name__)
self.assertEqual('op_checker', self.store.delete.__name__)
class TestStoreCapabilities(base.StoreBaseTest):
def _verify_store_capabilities(self, store):
# This function tests is_capable() as well.
self.assertTrue(store.is_capable(caps.BitMasks.READ_RANDOM))
self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE))
self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS))
def test_static_capabilities_setup(self):
self._verify_store_capabilities(FakeStoreWithStaticCapabilities())
def test_dynamic_capabilities_setup(self):
self._verify_store_capabilities(FakeStoreWithDynamicCapabilities())
def test_mixed_capabilities_setup(self):
self._verify_store_capabilities(FakeStoreWithMixedCapabilities())
def test_set_unset_capabilities(self):
store = FakeStoreWithStaticCapabilities()
self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS))
# Set and unset a single capability at a time
store.set_capabilities(caps.BitMasks.WRITE_ACCESS)
self.assertTrue(store.is_capable(caps.BitMasks.WRITE_ACCESS))
store.unset_capabilities(caps.BitMasks.WRITE_ACCESS)
self.assertFalse(store.is_capable(caps.BitMasks.WRITE_ACCESS))
# Set and unset multiple capabilities at a time
cap_list = [caps.BitMasks.WRITE_ACCESS, caps.BitMasks.WRITE_OFFSET]
store.set_capabilities(*cap_list)
self.assertTrue(store.is_capable(*cap_list))
store.unset_capabilities(*cap_list)
self.assertFalse(store.is_capable(*cap_list))
def test_store_capabilities_property(self):
store1 = FakeStoreWithDynamicCapabilities()
self.assertTrue(hasattr(store1, 'capabilities'))
store2 = FakeStoreWithMixedCapabilities()
self.assertEqual(store1.capabilities, store2.capabilities)
def test_cascaded_unset_capabilities(self):
# Test read capability
store = FakeStoreWithMixedCapabilities()
self._verify_store_capabilities(store)
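# Unsetting the base READ_ACCESS bit should also clear every
# derived read capability.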
store.unset_capabilities(caps.BitMasks.READ_ACCESS)
cap_list = [caps.BitMasks.READ_ACCESS, caps.BitMasks.READ_OFFSET,
caps.BitMasks.READ_CHUNK, caps.BitMasks.READ_RANDOM]
for cap in cap_list:
# Make sure all of them are unset.
self.assertFalse(store.is_capable(cap))
self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE))
# Test write capability
store = FakeStoreWithDynamicCapabilities(caps.BitMasks.WRITE_RANDOM,
caps.BitMasks.DRIVER_REUSABLE)
self.assertTrue(store.is_capable(caps.BitMasks.WRITE_RANDOM))
self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE))
store.unset_capabilities(caps.BitMasks.WRITE_ACCESS)
cap_list = [caps.BitMasks.WRITE_ACCESS, caps.BitMasks.WRITE_OFFSET,
caps.BitMasks.WRITE_CHUNK, caps.BitMasks.WRITE_RANDOM]
for cap in cap_list:
# Make sure all of them are unset.
self.assertFalse(store.is_capable(cap))
self.assertTrue(store.is_capable(caps.BitMasks.DRIVER_REUSABLE))
class TestStoreCapabilityConstants(base.StoreBaseTest):
def test_one_single_capability_own_one_bit(self):
cap_list = [
caps.BitMasks.READ_ACCESS,
caps.BitMasks.WRITE_ACCESS,
caps.BitMasks.DRIVER_REUSABLE,
]
for cap in cap_list:
self.assertEqual(1, bin(cap).count('1'))
def test_combined_capability_bits(self):
# contains() returns a bool; assert on it so the test actually
# verifies the bit layout.
check = caps.StoreCapability.contains
self.assertTrue(check(caps.BitMasks.READ_OFFSET, caps.BitMasks.READ_ACCESS))
self.assertTrue(check(caps.BitMasks.READ_CHUNK, caps.BitMasks.READ_ACCESS))
self.assertTrue(check(caps.BitMasks.READ_RANDOM, caps.BitMasks.READ_CHUNK))
self.assertTrue(check(caps.BitMasks.READ_RANDOM, caps.BitMasks.READ_OFFSET))
self.assertTrue(check(caps.BitMasks.WRITE_OFFSET, caps.BitMasks.WRITE_ACCESS))
self.assertTrue(check(caps.BitMasks.WRITE_CHUNK, caps.BitMasks.WRITE_ACCESS))
self.assertTrue(check(caps.BitMasks.WRITE_RANDOM, caps.BitMasks.WRITE_CHUNK))
self.assertTrue(check(caps.BitMasks.WRITE_RANDOM, caps.BitMasks.WRITE_OFFSET))
self.assertTrue(check(caps.BitMasks.RW_ACCESS, caps.BitMasks.READ_ACCESS))
self.assertTrue(check(caps.BitMasks.RW_ACCESS, caps.BitMasks.WRITE_ACCESS))
self.assertTrue(check(caps.BitMasks.RW_OFFSET, caps.BitMasks.READ_OFFSET))
self.assertTrue(check(caps.BitMasks.RW_OFFSET, caps.BitMasks.WRITE_OFFSET))
self.assertTrue(check(caps.BitMasks.RW_CHUNK, caps.BitMasks.READ_CHUNK))
self.assertTrue(check(caps.BitMasks.RW_CHUNK, caps.BitMasks.WRITE_CHUNK))
self.assertTrue(check(caps.BitMasks.RW_RANDOM, caps.BitMasks.READ_RANDOM))
self.assertTrue(check(caps.BitMasks.RW_RANDOM, caps.BitMasks.WRITE_RANDOM))

File diff suppressed because it is too large

View File

@ -1,87 +0,0 @@
# Copyright 2014 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
from glance_store._drivers.swift import utils as sutils
from glance_store import exceptions
from glance_store.tests import base
class TestSwiftParams(base.StoreBaseTest):
def setUp(self):
super(TestSwiftParams, self).setUp()
conf_file = "glance-swift.conf"
test_dir = self.useFixture(fixtures.TempDir()).path
self.swift_config_file = self.copy_data_file(conf_file, test_dir)
self.config(swift_store_config_file=self.swift_config_file)
def test_multiple_swift_account_enabled(self):
self.config(swift_store_config_file="glance-swift.conf")
self.assertTrue(
sutils.is_multiple_swift_store_accounts_enabled(self.conf))
def test_multiple_swift_account_disabled(self):
self.config(swift_store_config_file=None)
self.assertFalse(
sutils.is_multiple_swift_store_accounts_enabled(self.conf))
def test_swift_config_file_doesnt_exist(self):
self.config(swift_store_config_file='fake-file.conf')
self.assertRaises(exceptions.BadStoreConfiguration,
sutils.SwiftParams, self.conf)
def test_swift_config_uses_default_values_multiple_account_disabled(self):
default_user = 'user_default'
default_key = 'key_default'
default_auth_address = 'auth@default.com'
default_account_reference = 'ref_default'
conf = {'swift_store_config_file': None,
'swift_store_user': default_user,
'swift_store_key': default_key,
'swift_store_auth_address': default_auth_address,
'default_swift_reference': default_account_reference}
self.config(**conf)
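# With the config file disabled, SwiftParams should fall back to a
# single reference assembled from the individual swift_store_*
# options.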
swift_params = sutils.SwiftParams(self.conf).params
self.assertEqual(1, len(swift_params.keys()))
self.assertEqual(default_user,
swift_params[default_account_reference]['user']
)
self.assertEqual(default_key,
swift_params[default_account_reference]['key']
)
self.assertEqual(default_auth_address,
swift_params[default_account_reference]
['auth_address']
)
def test_swift_store_config_validates_for_creds_auth_address(self):
swift_params = sutils.SwiftParams(self.conf).params
self.assertEqual('tenant:user1',
swift_params['ref1']['user']
)
self.assertEqual('key1',
swift_params['ref1']['key']
)
self.assertEqual('example.com',
swift_params['ref1']['auth_address'])
self.assertEqual('user2',
swift_params['ref2']['user'])
self.assertEqual('key2',
swift_params['ref2']['key'])
self.assertEqual('http://example.com',
swift_params['ref2']['auth_address']
)

View File

@ -1,637 +0,0 @@
# Copyright 2014 OpenStack, LLC
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests the VMware Datastore backend store"""
import hashlib
import uuid
import mock
from oslo_utils import units
from oslo_vmware import api
from oslo_vmware import exceptions as vmware_exceptions
from oslo_vmware.objects import datacenter as oslo_datacenter
from oslo_vmware.objects import datastore as oslo_datastore
import six
import glance_store._drivers.vmware_datastore as vm_store
from glance_store import backend
from glance_store import exceptions
from glance_store import location
from glance_store.tests import base
from glance_store.tests.unit import test_store_capabilities
from glance_store.tests import utils
FAKE_UUID = str(uuid.uuid4())
FIVE_KB = 5 * units.Ki
VMWARE_DS = {
'debug': True,
'known_stores': ['vmware_datastore'],
'default_store': 'vsphere',
'vmware_server_host': '127.0.0.1',
'vmware_server_username': 'username',
'vmware_server_password': 'password',
'vmware_store_image_dir': '/openstack_glance',
'vmware_insecure': 'True',
'vmware_datastores': ['a:b:0'],
}
def format_location(host_ip, folder_name, image_id, datastores):
"""
Helper method that returns a VMware Datastore store URI given
the component pieces.
"""
scheme = 'vsphere'
(datacenter_path, datastore_name, weight) = datastores[0].split(':')
return ("%s://%s/folder%s/%s?dcPath=%s&dsName=%s"
% (scheme, host_ip, folder_name,
image_id, datacenter_path, datastore_name))
def fake_datastore_obj(*args, **kwargs):
dc_obj = oslo_datacenter.Datacenter(ref='fake-ref',
name='fake-name')
dc_obj.path = args[0]
return oslo_datastore.Datastore(ref='fake-ref',
datacenter=dc_obj,
name=args[1])
class TestStore(base.StoreBaseTest,
test_store_capabilities.TestStoreCapabilitiesChecking):
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch('oslo_vmware.api.VMwareAPISession')
def setUp(self, mock_api_session, mock_get_datastore):
"""Establish a clean test environment."""
super(TestStore, self).setUp()
vm_store.Store.CHUNKSIZE = 2
default_store = VMWARE_DS['default_store']
self.config(default_store=default_store, stores=['vmware'])
backend.register_opts(self.conf)
self.config(group='glance_store',
vmware_server_username='admin',
vmware_server_password='admin',
vmware_server_host=VMWARE_DS['vmware_server_host'],
vmware_insecure=VMWARE_DS['vmware_insecure'],
vmware_datastores=VMWARE_DS['vmware_datastores'])
mock_get_datastore.side_effect = fake_datastore_obj
backend.create_stores(self.conf)
self.store = backend.get_store_from_scheme('vsphere')
self.store.store_image_dir = (
VMWARE_DS['vmware_store_image_dir'])
def _mock_http_connection(self):
return mock.patch('six.moves.http_client.HTTPConnection')
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_get(self, mock_api_session):
"""Test a "normal" retrieval of an image in chunks."""
expected_image_size = 31
expected_returns = ['I am a teapot, short and stout\n']
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glance/%s"
"?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response()
(image_file, image_size) = self.store.get(loc)
self.assertEqual(expected_image_size, image_size)
chunks = [c for c in image_file]
self.assertEqual(expected_returns, chunks)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_get_non_existing(self, mock_api_session):
"""
Test that trying to retrieve an image that doesn't exist
raises an error
"""
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glan"
"ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response(status_code=404)
self.assertRaises(exceptions.NotFound, self.store.get, loc)
@mock.patch.object(vm_store.Store, '_build_vim_cookie_header')
@mock.patch.object(vm_store.Store, 'select_datastore')
@mock.patch.object(vm_store._Reader, 'size')
@mock.patch.object(api, 'VMwareAPISession')
def test_add(self, fake_api_session, fake_size, fake_select_datastore,
fake_cookie):
"""Test that we can add an image via the VMware backend."""
fake_select_datastore.return_value = self.store.datastores[0][0]
expected_image_id = str(uuid.uuid4())
expected_size = FIVE_KB
expected_contents = b"*" * expected_size
hash_code = hashlib.md5(expected_contents)
expected_checksum = hash_code.hexdigest()
fake_size.__get__ = mock.Mock(return_value=expected_size)
expected_cookie = 'vmware_soap_session=fake-uuid'
fake_cookie.return_value = expected_cookie
expected_headers = {'Content-Length': six.text_type(expected_size),
'Cookie': expected_cookie}
with mock.patch('hashlib.md5') as md5:
md5.return_value = hash_code
expected_location = format_location(
VMWARE_DS['vmware_server_host'],
VMWARE_DS['vmware_store_image_dir'],
expected_image_id,
VMWARE_DS['vmware_datastores'])
image = six.BytesIO(expected_contents)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response()
location, size, checksum, _ = self.store.add(expected_image_id,
image,
expected_size)
_, kwargs = HttpConn.call_args
self.assertEqual(expected_headers, kwargs['headers'])
self.assertEqual(utils.sort_url_by_qs_keys(expected_location),
utils.sort_url_by_qs_keys(location))
self.assertEqual(expected_size, size)
self.assertEqual(expected_checksum, checksum)
@mock.patch.object(vm_store.Store, 'select_datastore')
@mock.patch.object(vm_store._Reader, 'size')
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_add_size_zero(self, mock_api_session, fake_size,
fake_select_datastore):
"""
Test that when specifying size zero for the image to add,
the actual size of the image is returned.
"""
fake_select_datastore.return_value = self.store.datastores[0][0]
expected_image_id = str(uuid.uuid4())
expected_size = FIVE_KB
expected_contents = b"*" * expected_size
hash_code = hashlib.md5(expected_contents)
expected_checksum = hash_code.hexdigest()
fake_size.__get__ = mock.Mock(return_value=expected_size)
with mock.patch('hashlib.md5') as md5:
md5.return_value = hash_code
expected_location = format_location(
VMWARE_DS['vmware_server_host'],
VMWARE_DS['vmware_store_image_dir'],
expected_image_id,
VMWARE_DS['vmware_datastores'])
image = six.BytesIO(expected_contents)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response()
location, size, checksum, _ = self.store.add(expected_image_id,
image, 0)
self.assertEqual(utils.sort_url_by_qs_keys(expected_location),
utils.sort_url_by_qs_keys(location))
self.assertEqual(expected_size, size)
self.assertEqual(expected_checksum, checksum)
@mock.patch.object(vm_store.Store, 'select_datastore')
@mock.patch('glance_store._drivers.vmware_datastore._Reader')
def test_add_with_verifier(self, fake_reader, fake_select_datastore):
"""Test that the verifier is passed to the _Reader during add."""
verifier = mock.MagicMock(name='mock_verifier')
image_id = str(uuid.uuid4())
size = FIVE_KB
contents = b"*" * size
image = six.BytesIO(contents)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response()
self.store.add(image_id, image, size, verifier=verifier)
fake_reader.assert_called_with(image, verifier)
@mock.patch.object(vm_store.Store, 'select_datastore')
@mock.patch('glance_store._drivers.vmware_datastore._Reader')
def test_add_with_verifier_size_zero(self, fake_reader, fake_select_ds):
"""Test that the verifier is passed to the _ChunkReader during add."""
verifier = mock.MagicMock(name='mock_verifier')
image_id = str(uuid.uuid4())
size = FIVE_KB
contents = b"*" * size
image = six.BytesIO(contents)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response()
self.store.add(image_id, image, 0, verifier=verifier)
fake_reader.assert_called_with(image, verifier)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_delete(self, mock_api_session):
"""Test we can delete an existing image in the VMware store."""
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glance/%s?"
"dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response()
vm_store.Store._service_content = mock.Mock()
self.store.delete(loc)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response(status_code=404)
self.assertRaises(exceptions.NotFound, self.store.get, loc)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_delete_non_existing(self, mock_api_session):
"""
Test that trying to delete an image that doesn't exist raises an error
"""
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glance/%s?"
"dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch.object(self.store.session,
'wait_for_task') as mock_task:
mock_task.side_effect = vmware_exceptions.FileNotFoundException
self.assertRaises(exceptions.NotFound, self.store.delete, loc)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_get_size(self, mock_api_session):
"""
Test we can get the size of an existing image in the VMware store
"""
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glance/%s"
"?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response()
image_size = self.store.get_size(loc)
            self.assertEqual(31, image_size)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_get_size_non_existing(self, mock_api_session):
"""
Test that trying to retrieve an image size that doesn't exist
raises an error
"""
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glan"
"ce/%s?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response(status_code=404)
self.assertRaises(exceptions.NotFound, self.store.get_size, loc)
def test_reader_full(self):
content = b'XXX'
image = six.BytesIO(content)
expected_checksum = hashlib.md5(content).hexdigest()
reader = vm_store._Reader(image)
ret = reader.read()
self.assertEqual(content, ret)
self.assertEqual(expected_checksum, reader.checksum.hexdigest())
self.assertEqual(len(content), reader.size)
def test_reader_partial(self):
content = b'XXX'
image = six.BytesIO(content)
expected_checksum = hashlib.md5(b'X').hexdigest()
reader = vm_store._Reader(image)
ret = reader.read(1)
self.assertEqual(b'X', ret)
self.assertEqual(expected_checksum, reader.checksum.hexdigest())
self.assertEqual(1, reader.size)
def test_reader_with_verifier(self):
content = b'XXX'
image = six.BytesIO(content)
verifier = mock.MagicMock(name='mock_verifier')
reader = vm_store._Reader(image, verifier)
reader.read()
verifier.update.assert_called_with(content)
def test_sanity_check_api_retry_count(self):
"""Test that sanity check raises if api_retry_count is <= 0."""
self.store.conf.glance_store.vmware_api_retry_count = -1
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._sanity_check)
self.store.conf.glance_store.vmware_api_retry_count = 0
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._sanity_check)
self.store.conf.glance_store.vmware_api_retry_count = 1
try:
self.store._sanity_check()
except exceptions.BadStoreConfiguration:
self.fail()
def test_sanity_check_task_poll_interval(self):
"""Test that sanity check raises if task_poll_interval is <= 0."""
self.store.conf.glance_store.vmware_task_poll_interval = -1
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._sanity_check)
self.store.conf.glance_store.vmware_task_poll_interval = 0
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._sanity_check)
self.store.conf.glance_store.vmware_task_poll_interval = 1
try:
self.store._sanity_check()
except exceptions.BadStoreConfiguration:
self.fail()
def test_sanity_check_multiple_datastores(self):
self.store.conf.glance_store.vmware_api_retry_count = 1
self.store.conf.glance_store.vmware_task_poll_interval = 1
self.store.conf.glance_store.vmware_datastores = ['a:b:0', 'a:d:0']
try:
self.store._sanity_check()
except exceptions.BadStoreConfiguration:
self.fail()
def test_parse_datastore_info_and_weight_less_opts(self):
datastore = 'a'
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._parse_datastore_info_and_weight,
datastore)
def test_parse_datastore_info_and_weight_invalid_weight(self):
datastore = 'a:b:c'
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._parse_datastore_info_and_weight,
datastore)
def test_parse_datastore_info_and_weight_empty_opts(self):
datastore = 'a: :0'
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._parse_datastore_info_and_weight,
datastore)
datastore = ':b:0'
self.assertRaises(exceptions.BadStoreConfiguration,
self.store._parse_datastore_info_and_weight,
datastore)
def test_parse_datastore_info_and_weight(self):
datastore = 'a:b:100'
parts = self.store._parse_datastore_info_and_weight(datastore)
self.assertEqual('a', parts[0])
self.assertEqual('b', parts[1])
self.assertEqual('100', parts[2])
def test_parse_datastore_info_and_weight_default_weight(self):
datastore = 'a:b'
parts = self.store._parse_datastore_info_and_weight(datastore)
self.assertEqual('a', parts[0])
self.assertEqual('b', parts[1])
self.assertEqual(0, parts[2])
@mock.patch.object(vm_store.Store, 'select_datastore')
@mock.patch.object(api, 'VMwareAPISession')
def test_unexpected_status(self, mock_api_session, mock_select_datastore):
expected_image_id = str(uuid.uuid4())
expected_size = FIVE_KB
expected_contents = b"*" * expected_size
image = six.BytesIO(expected_contents)
self.session = mock.Mock()
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response(status_code=401)
self.assertRaises(exceptions.BackendException,
self.store.add,
expected_image_id, image, expected_size)
@mock.patch.object(vm_store.Store, 'select_datastore')
@mock.patch.object(api, 'VMwareAPISession')
def test_unexpected_status_no_response_body(self, mock_api_session,
mock_select_datastore):
expected_image_id = str(uuid.uuid4())
expected_size = FIVE_KB
expected_contents = b"*" * expected_size
image = six.BytesIO(expected_contents)
self.session = mock.Mock()
with self._mock_http_connection() as HttpConn:
HttpConn.return_value = utils.fake_response(status_code=500,
no_response_body=True)
self.assertRaises(exceptions.BackendException,
self.store.add,
expected_image_id, image, expected_size)
@mock.patch.object(api, 'VMwareAPISession')
def test_reset_session(self, mock_api_session):
self.store.reset_session()
self.assertTrue(mock_api_session.called)
@mock.patch.object(api, 'VMwareAPISession')
def test_build_vim_cookie_header_active(self, mock_api_session):
self.store.session.is_current_session_active = mock.Mock()
self.store.session.is_current_session_active.return_value = True
self.store._build_vim_cookie_header(True)
self.assertFalse(mock_api_session.called)
@mock.patch.object(api, 'VMwareAPISession')
def test_build_vim_cookie_header_expired(self, mock_api_session):
self.store.session.is_current_session_active = mock.Mock()
self.store.session.is_current_session_active.return_value = False
self.store._build_vim_cookie_header(True)
self.assertTrue(mock_api_session.called)
@mock.patch.object(api, 'VMwareAPISession')
def test_build_vim_cookie_header_expired_noverify(self, mock_api_session):
self.store.session.is_current_session_active = mock.Mock()
self.store.session.is_current_session_active.return_value = False
self.store._build_vim_cookie_header()
self.assertFalse(mock_api_session.called)
@mock.patch.object(vm_store.Store, 'select_datastore')
@mock.patch.object(api, 'VMwareAPISession')
def test_add_ioerror(self, mock_api_session, mock_select_datastore):
mock_select_datastore.return_value = self.store.datastores[0][0]
expected_image_id = str(uuid.uuid4())
expected_size = FIVE_KB
expected_contents = b"*" * expected_size
image = six.BytesIO(expected_contents)
self.session = mock.Mock()
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.request.side_effect = IOError
self.assertRaises(exceptions.BackendException,
self.store.add,
expected_image_id, image, expected_size)
def test_qs_sort_with_literal_question_mark(self):
url = 'scheme://example.com/path?key2=val2&key1=val1?sort=true'
exp_url = 'scheme://example.com/path?key1=val1%3Fsort%3Dtrue&key2=val2'
self.assertEqual(exp_url,
utils.sort_url_by_qs_keys(url))
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch.object(api, 'VMwareAPISession')
def test_build_datastore_weighted_map(self, mock_api_session, mock_ds_obj):
datastores = ['a:b:100', 'c:d:100', 'e:f:200']
mock_ds_obj.side_effect = fake_datastore_obj
ret = self.store._build_datastore_weighted_map(datastores)
ds = ret[200]
self.assertEqual('e', ds[0].datacenter.path)
self.assertEqual('f', ds[0].name)
ds = ret[100]
self.assertEqual(2, len(ds))
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch.object(api, 'VMwareAPISession')
def test_build_datastore_weighted_map_equal_weight(self, mock_api_session,
mock_ds_obj):
datastores = ['a:b:200', 'a:b:200']
mock_ds_obj.side_effect = fake_datastore_obj
ret = self.store._build_datastore_weighted_map(datastores)
ds = ret[200]
self.assertEqual(2, len(ds))
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch.object(api, 'VMwareAPISession')
def test_build_datastore_weighted_map_empty_list(self, mock_api_session,
mock_ds_ref):
datastores = []
ret = self.store._build_datastore_weighted_map(datastores)
self.assertEqual({}, ret)
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch.object(vm_store.Store, '_get_freespace')
def test_select_datastore_insufficient_freespace(self, mock_get_freespace,
mock_ds_ref):
datastores = ['a:b:100', 'c:d:100', 'e:f:200']
image_size = 10
self.store.datastores = (
self.store._build_datastore_weighted_map(datastores))
freespaces = [5, 5, 5]
def fake_get_fp(*args, **kwargs):
return freespaces.pop(0)
mock_get_freespace.side_effect = fake_get_fp
self.assertRaises(exceptions.StorageFull,
self.store.select_datastore, image_size)
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch.object(vm_store.Store, '_get_freespace')
def test_select_datastore_insufficient_fs_one_ds(self, mock_get_freespace,
mock_ds_ref):
# Tests if fs is updated with just one datastore.
datastores = ['a:b:100']
image_size = 10
self.store.datastores = (
self.store._build_datastore_weighted_map(datastores))
freespaces = [5]
def fake_get_fp(*args, **kwargs):
return freespaces.pop(0)
mock_get_freespace.side_effect = fake_get_fp
self.assertRaises(exceptions.StorageFull,
self.store.select_datastore, image_size)
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch.object(vm_store.Store, '_get_freespace')
def test_select_datastore_equal_freespace(self, mock_get_freespace,
mock_ds_obj):
datastores = ['a:b:100', 'c:d:100', 'e:f:200']
image_size = 10
mock_ds_obj.side_effect = fake_datastore_obj
self.store.datastores = (
self.store._build_datastore_weighted_map(datastores))
freespaces = [11, 11, 11]
def fake_get_fp(*args, **kwargs):
return freespaces.pop(0)
mock_get_freespace.side_effect = fake_get_fp
ds = self.store.select_datastore(image_size)
self.assertEqual('e', ds.datacenter.path)
self.assertEqual('f', ds.name)
@mock.patch.object(vm_store.Store, '_get_datastore')
@mock.patch.object(vm_store.Store, '_get_freespace')
def test_select_datastore_contention(self, mock_get_freespace,
mock_ds_obj):
datastores = ['a:b:100', 'c:d:100', 'e:f:200']
image_size = 10
mock_ds_obj.side_effect = fake_datastore_obj
self.store.datastores = (
self.store._build_datastore_weighted_map(datastores))
freespaces = [5, 11, 12]
def fake_get_fp(*args, **kwargs):
return freespaces.pop(0)
mock_get_freespace.side_effect = fake_get_fp
ds = self.store.select_datastore(image_size)
self.assertEqual('c', ds.datacenter.path)
self.assertEqual('d', ds.name)
def test_select_datastore_empty_list(self):
datastores = []
self.store.datastores = (
self.store._build_datastore_weighted_map(datastores))
self.assertRaises(exceptions.StorageFull,
self.store.select_datastore, 10)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_get_datacenter_ref(self, mock_api_session):
datacenter_path = 'Datacenter1'
self.store._get_datacenter(datacenter_path)
self.store.session.invoke_api.assert_called_with(
self.store.session.vim,
'FindByInventoryPath',
self.store.session.vim.service_content.searchIndex,
inventoryPath=datacenter_path)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_http_get_redirect(self, mock_api_session):
# Add two layers of redirects to the response stack, which will
# return the default 200 OK with the expected data after resolving
# both redirects.
redirect1 = {"location": "https://example.com?dsName=ds1&dcPath=dc1"}
redirect2 = {"location": "https://example.com?dsName=ds2&dcPath=dc2"}
responses = [utils.fake_response(),
utils.fake_response(status_code=302, headers=redirect1),
utils.fake_response(status_code=301, headers=redirect2)]
def getresponse(*args, **kwargs):
return responses.pop()
expected_image_size = 31
expected_returns = ['I am a teapot, short and stout\n']
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glance/%s"
"?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.side_effect = getresponse
(image_file, image_size) = self.store.get(loc)
self.assertEqual(expected_image_size, image_size)
chunks = [c for c in image_file]
self.assertEqual(expected_returns, chunks)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_http_get_max_redirects(self, mock_api_session):
redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"}
responses = ([utils.fake_response(status_code=302, headers=redirect)]
* (vm_store.MAX_REDIRECTS + 1))
def getresponse(*args, **kwargs):
return responses.pop()
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glance/%s"
"?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.side_effect = getresponse
self.assertRaises(exceptions.MaxRedirectsExceeded, self.store.get,
loc)
@mock.patch('oslo_vmware.api.VMwareAPISession')
def test_http_get_redirect_invalid(self, mock_api_session):
redirect = {"location": "https://example.com?dsName=ds1&dcPath=dc1"}
loc = location.get_location_from_uri(
"vsphere://127.0.0.1/folder/openstack_glance/%s"
"?dsName=ds1&dcPath=dc1" % FAKE_UUID, conf=self.conf)
with mock.patch('requests.Session.request') as HttpConn:
HttpConn.return_value = utils.fake_response(status_code=307,
headers=redirect)
self.assertRaises(exceptions.BadStoreUri, self.store.get, loc)
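
As the parsing tests above exercise, entries in ``vmware_datastores`` follow a ``datacenter_path:datastore_name:weight`` format where the weight is optional and defaults to 0, and where empty components or a non-integer weight are configuration errors. A minimal sketch of that contract (an illustration of the tested behaviour, not the driver's actual code):

def parse_datastore_entry(entry):
    # Split from the right so the datacenter path may itself contain ':'.
    parts = entry.rsplit(':', 2)
    if len(parts) == 2:
        parts.append('0')  # weight is optional and defaults to 0
    if len(parts) != 3:
        raise ValueError('expected datacenter_path:datastore_name[:weight]')
    dc_path, ds_name, weight = (p.strip() for p in parts)
    if not dc_path or not ds_name or not weight.isdigit():
        raise ValueError('bad vmware_datastores entry: %r' % entry)
    return dc_path, ds_name, int(weight)

assert parse_datastore_entry('a:b:100') == ('a', 'b', 100)
assert parse_datastore_entry('a:b') == ('a', 'b', 0)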

View File

@ -1,75 +0,0 @@
# Copyright 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from six.moves import urllib
import requests
def sort_url_by_qs_keys(url):
# NOTE(kragniz): this only sorts the keys of the query string of a url.
# For example, an input of '/v2/tasks?sort_key=id&sort_dir=asc&limit=10'
# returns '/v2/tasks?limit=10&sort_dir=asc&sort_key=id'. This is to prevent
# non-deterministic ordering of the query string causing problems with unit
# tests.
parsed = urllib.parse.urlparse(url)
# In python2.6, for arbitrary url schemes, query string
# is not parsed from url. http://bugs.python.org/issue9374
path = parsed.path
query = parsed.query
if not query:
path, query = parsed.path.split('?', 1)
queries = urllib.parse.parse_qsl(query, True)
sorted_query = sorted(queries, key=lambda x: x[0])
encoded_sorted_query = urllib.parse.urlencode(sorted_query, True)
url_parts = (parsed.scheme, parsed.netloc, path,
parsed.params, encoded_sorted_query,
parsed.fragment)
return urllib.parse.urlunparse(url_parts)
class FakeHTTPResponse(object):
def __init__(self, status=200, headers=None, data=None, *args, **kwargs):
data = data or 'I am a teapot, short and stout\n'
self.data = six.StringIO(data)
self.read = self.data.read
self.status = status
self.headers = headers or {'content-length': len(data)}
if not kwargs.get('no_response_body', False):
self.body = None
def getheader(self, name, default=None):
return self.headers.get(name.lower(), default)
def getheaders(self):
return self.headers or {}
    def read(self, amt):
        return self.data.read(amt)
def release_conn(self):
pass
def close(self):
self.data.close()
def fake_response(status_code=200, headers=None, content=None, **kwargs):
r = requests.models.Response()
r.status_code = status_code
r.headers = headers or {}
    r.raw = FakeHTTPResponse(status_code, headers, content, **kwargs)
return r
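
For example, matching the NOTE in ``sort_url_by_qs_keys`` above:

sort_url_by_qs_keys('/v2/tasks?sort_key=id&sort_dir=asc&limit=10')
# returns '/v2/tasks?limit=10&sort_dir=asc&sort_key=id'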

View File

@ -1,29 +0,0 @@
---
prelude: >
Improved configuration options for glance_store. Please
refer to the ``other`` section for more information.
other:
- The glance_store configuration options have been
improved with detailed help texts, defaults for
sample configuration files, explicit choices
of values for operators to choose from, and a
strict range defined with ``min`` and ``max``
boundaries.
    Note that the configuration options that take integer values now
    have a strict range defined with "min" and/or "max" boundaries
    where appropriate, so these options can no longer be set to
    certain values that may have been accepted before but were
    actually invalid. For example, options specifying counts, where a
    negative value was undefined, used to accept a supplied negative
    value; such options now reject negative values. Options where a
    negative value was previously defined (for example, -1 to mean
    unlimited) remain unaffected by this change.
Values that do not comply with the appropriate restrictions
will prevent the service from starting. The logs will contain
a message indicating the problematic configuration option and
the reason why the supplied value has been rejected.
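
A minimal sketch of what such a bounded option looks like with oslo.config; the option name here is hypothetical, not an actual glance_store option:

from oslo_config import cfg

opts = [
    cfg.IntOpt('store_retry_count',  # hypothetical option name
               default=3,
               min=0,
               max=10,
               help='How many times to retry a failed operation.'),
]

conf = cfg.ConfigOpts()
conf.register_opts(opts)
# A configured value outside [0, 10] is rejected when the option is read,
# which prevents the service from starting, as described above.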

View File

@ -1,5 +0,0 @@
---
upgrade:
- Packagers should be aware that the rootwrap configuration
files have been moved from etc/ to etc/glance/ in order to
be consistent with where other projects place these files.

View File

@ -1,9 +0,0 @@
---
upgrade:
- If using Swift in the multi-tenant mode for storing
images in Glance, please note that the configuration
options ``swift_store_multi_tenant`` and
``swift_store_config_file`` are now mutually exclusive
    and cannot be configured together. If you intend to
    use the multi-tenant store, please make sure that you
    have not set a swift configuration file.
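
A sketch of the kind of check this implies; the names mirror the options above, but the real validation lives inside glance_store's swift store configuration handling:

def check_swift_store_opts(conf):
    if conf.swift_store_multi_tenant and conf.swift_store_config_file:
        # The two options are mutually exclusive; refuse to start.
        raise ValueError('swift_store_multi_tenant and '
                         'swift_store_config_file must not be set together')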

View File

@ -1,45 +0,0 @@
---
prelude: >
This was a quiet development cycle for the ``glance_store`` library.
No new features were added. Several bugs were fixed and some code
changes were committed to increase stability.
fixes:
- |
The following bugs were fixed during the Pike release cycle.
* Bug 1618666_: Fix SafeConfigParser DeprecationWarning in Python 3.2+
* Bug 1668848_: PBR 2.0.0 will break projects not using constraints
* Bug 1657710_: Unit test passes only because is launched as non-root user
* Bug 1686063_: RBD driver can't delete image with unprotected snapshot
* Bug 1691132_: Fixed tests failing due to updated oslo.config
* Bug 1693670_: Fix doc generation for Python3
* Bug 1643516_: Cinder driver: TypeError in _open_cinder_volume
* Bug 1620214_: Sheepdog: command execution failure
.. _1618666: https://code.launchpad.net/bugs/1618666
.. _1668848: https://code.launchpad.net/bugs/1668848
.. _1657710: https://code.launchpad.net/bugs/1657710
.. _1686063: https://code.launchpad.net/bugs/1686063
.. _1691132: https://code.launchpad.net/bugs/1691132
.. _1693670: https://code.launchpad.net/bugs/1693670
.. _1643516: https://code.launchpad.net/bugs/1643516
.. _1620214: https://code.launchpad.net/bugs/1620214
other:
- |
The following improvements were made during the Pike release cycle.
* `Fixed string formatting in log message
<https://git.openstack.org/cgit/openstack/glance_store/commit/?id=802c5a785444ba9ea5888c7cd131d004ec2a19ad>`_
* `Correct error msg variable that could be unassigned
<https://git.openstack.org/cgit/openstack/glance_store/commit/?id=ccc9696e3f071383cd05d88ba2488f5a5ee98120>`_
* `Use HostAddressOpt for store opts that accept IP and hostnames
<https://git.openstack.org/cgit/openstack/glance_store/commit/?id=d6f3c4e2d921d8a6db8be79e4a81e393334cfa4c>`_
* `Replace six.iteritems() with .items()
<https://git.openstack.org/cgit/openstack/glance_store/commit/?id=edc19a290b05a12f39f3059b11e2b978a9362052>`_
* `Add python 3.5 in classifier and envlist
<https://git.openstack.org/cgit/openstack/glance_store/commit/?id=963e2a0fd1c173556a2c40915ad26db28d8375a6>`_
* `Initialize privsep root_helper command
<https://git.openstack.org/cgit/openstack/glance_store/commit/?id=d16dff9a08d1104540182f3aa36758dc89603fc0>`_
* `Documentation was reorganized according to the new standard layout
<http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html>`_

View File

@ -1,12 +0,0 @@
---
prelude: >
  Prevent Unauthorized errors during uploading or
  downloading data to the Swift store.
features:
  - Allow glance_store to refresh the token when uploading or
    downloading data to the Swift store. glance_store detects when the
    token is about to expire while executing a request to Swift and
    refreshes it. For the multi-tenant swift store glance_store uses
    trusts; for the single-tenant swift store it uses the credentials
    from the swift store configuration. Please also note that this
    feature is enabled if and only if the Keystone V3 API is available
    and enabled.
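
An illustrative sketch of the expiry check this describes; the names and the margin are assumptions, not glance_store internals:

import time

REFRESH_MARGIN = 60  # assumed margin of remaining validity, in seconds

def token_needs_refresh(expires_at):
    # Refresh proactively when the token is about to expire instead of
    # letting the Swift request fail with 401 Unauthorized.
    return expires_at - time.time() < REFRESH_MARGIN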

View File

@ -1,14 +0,0 @@
---
prelude: >
Some deprecated exceptions have been removed. See
upgrade section for more details.
upgrade:
  - The following exceptions have been deprecated since
    the 0.10.0 release -- ``Conflict``, ``ForbiddenPublicImage``,
    ``ProtectedImageDelete``, ``BadDriverConfiguration``,
    ``InvalidRedirect``, ``WorkerCreationFailure``,
    ``SchemaLoadError``, ``InvalidObject``,
    ``UnsupportedHeaderFeature``, ``ImageDataNotFound``,
    ``InvalidParameterValue``, ``InvalidImageStatusTransition``.
    This release removes these exceptions, so any remaining
    use of them must be removed.

View File

@ -1,7 +0,0 @@
---
prelude: >
glance_store._drivers.gridfs
deprecations:
  - The gridfs driver has been removed from the tree.
    Environments using this driver that have not been
    migrated will stop working after the upgrade.

View File

@ -1,15 +0,0 @@
---
prelude: >
glance_store._drivers.s3 removed from tree.
upgrade:
  - The S3 driver has been removed completely from the
    glance_store source tree. All environments running
    and (or) using the s3 driver that have not been
    migrated will stop working after the upgrade.
    We recommend you use a different storage backend that
    is still supported by Glance. The standard
    deprecation path was used for this removal. The
    process requiring store driver maintainers was initiated
    at http://lists.openstack.org/pipermail/openstack-dev/2015-December/081966.html .
    Since the S3 driver did not get a maintainer, it was
    decided to remove it.

View File

@ -1,5 +0,0 @@
---
other:
  - For years, `/var/lib/glance/images` has been presented as the default
    directory for the filesystem store, but it was not part of the default
    value until now. New deployments and people overriding config files
    should watch for this.

View File

@ -1,16 +0,0 @@
---
prelude: >
Return list of store drivers in sorted order for
generating configs. More info in ``Upgrade Notes``
and ``Bug Fixes`` section.
upgrade:
  - This version of glance_store results in Glance
    generating the configs in a sorted (deterministic)
    order. Store releases on or after this one should
    preferably be used for generating any new configs if
    mismatched ordering of the configs causes an issue
    in your environment.
fixes:
  - Fixed bug 1619487, which was causing the configs for
    Glance to be generated in a random order. See the
    ``upgrade`` section for more details.
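
The fix reduces to iterating the drivers in sorted order when emitting their options, which makes repeated config generation deterministic. A sketch under assumed names (the actual generation goes through ``glance_store.backend:_list_opts``):

def list_opts_sorted(driver_opts):
    # driver_opts: dict mapping driver name -> list of oslo.config opts.
    # Sorting the names makes the emitted sample config deterministic.
    return [(name, driver_opts[name]) for name in sorted(driver_opts)]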

View File

@ -1,3 +0,0 @@
---
other:
- Start using reno to manage release notes.

View File

@ -1,8 +0,0 @@
---
features:
  - Implemented image uploading, downloading and deletion for the cinder
    store. It also supports new settings to put image volumes into a
    specific project to hide them from users and to control them based
    on the ACLs of the images. Note that the cinder store is currently
    considered experimental, so current deployers should be aware that
    using it in production right now may be risky.

View File

@ -1,6 +0,0 @@
---
security:
  - Previously the VMware Datastore backend used HTTPS connections from
    httplib, which do not verify the connection. By switching to the
    requests library, the VMware storage backend now verifies the HTTPS
    connection to the vCenter server and thus addresses the
    vulnerabilities described in OSSN-0033.
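
For illustration, the behavioural difference being relied on is that requests verifies TLS certificates by default, whereas httplib's HTTPSConnection performed no verification; the host below is hypothetical:

import requests

# verify defaults to True and may also point at a CA bundle path.
requests.get('https://vcenter.example.com/folder', verify=True)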

View File

@ -1,287 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Glance_store Release Notes documentation build configuration file
#
# Modified from corresponding configuration file in Glance.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'openstackdocstheme',
'reno.sphinxext',
]
# openstackdocstheme options
repository_name = 'openstack/glance_store'
bug_project = 'glance-store'
bug_tag = ''
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Glance_store Release Notes'
copyright = u'2015, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
import pbr.version
glance_store_version = pbr.version.VersionInfo('glance_store')
# The full version, including alpha/beta/rc tags.
release = glance_store_version.version_string_with_vcs()
# The short X.Y version.
version = glance_store_version.canonical_version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'GlanceStoreReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index',
'GlanceStoreReleaseNotes.tex',
u'Glance_store Release Notes Documentation',
u'Glance_store Developers',
'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index',
'glancestorereleasenotes',
u'Glance_store Release Notes Documentation',
[u'Glance_store Developers'],
1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index',
'GlanceStoreReleaseNotes',
u'Glance_store Release Notes Documentation',
u'Glance_store Developers',
'GlanceStoreReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']

View File

@ -1,12 +0,0 @@
============================
Glance_store Release Notes
============================
.. toctree::
:maxdepth: 1
unreleased
ocata
newton
mitaka
liberty

View File

@ -1,6 +0,0 @@
==============================
Liberty Series Release Notes
==============================
.. release-notes::
:branch: origin/stable/liberty

View File

@ -1,113 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: Glance_store Release Notes 0.13.1\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2016-07-01 12:05+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-05 01:54+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "0.11.0"
msgstr "0.11.0"
msgid "0.12.0"
msgstr "0.12.0"
msgid ""
"Allow glance_store to refresh token when upload or download data to Swift "
"store. glance_store identifies if token is going to expire soon when "
"executing request to Swift and refresh the token. For multi-tenant swift "
"store glance_store uses trusts, for single-tenant swift store glance_store "
"uses credentials from swift store configurations. Please also note that this "
"feature is enabled if and only if Keystone V3 API is available and enabled."
msgstr ""
"Allow glance_store to refresh token when upload or download data to Swift "
"store. glance_store identifies if token is going to expire soon when "
"executing request to Swift and refresh the token. For multi-tenant swift "
"store glance_store uses trusts, for single-tenant swift store glance_store "
"uses credentials from swift store configurations. Please also note that this "
"feature is enabled if and only if Keystone V3 API is available and enabled."
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid "Deprecation Notes"
msgstr "Deprecation Notes"
msgid ""
"For years, `/var/lib/glance/images` has been presented as the default dir "
"for the filesystem store. It was not part of the default value until now. "
"New deployments and ppl overriding config files should watch for this."
msgstr ""
"For years, `/var/lib/glance/images` has been presented as the default dir "
"for the filesystem store. It was not part of the default value until now. "
"New deployments and people overriding config files should watch for this."
msgid "Glance_store Release Notes"
msgstr "Glance_store Release Notes"
msgid ""
"Implemented image uploading, downloading and deletion for cinder store. It "
"also supports new settings to put image volumes into a specific project to "
"hide them from users and to control them based on ACL of the images. Note "
"that cinder store is currently considered experimental, so current deployers "
"should be aware that the use of it in production right now may be risky."
msgstr ""
"Implemented image uploading, downloading and deletion for Cinder store. It "
"also supports new settings to put image volumes into a specific project to "
"hide them from users and to control them based on ACL of the images. Note "
"that Cinder store is currently considered experimental, so current deployers "
"should be aware that the use of it in production right now may be risky."
msgid "Liberty Series Release Notes"
msgstr "Liberty Series Release Notes"
msgid "Mitaka Series Release Notes"
msgstr "Mitaka Series Release Notes"
msgid "New Features"
msgstr "New Features"
msgid "Other Notes"
msgstr "Other Notes"
msgid ""
"Prevent Unauthorized errors during uploading or donwloading data to Swift "
"store."
msgstr ""
"Prevent Unauthorised errors during uploading or downloading data to Swift "
"store."
msgid ""
"Previously the VMWare Datastore was using HTTPS Connections from httplib "
"which do not verify the connection. By switching to using requests library "
"the VMware storage backend now verifies HTTPS connection to vCenter server "
"and thus addresses the vulnerabilities described in OSSN-0033."
msgstr ""
"Previously the VMware Datastore was using HTTPS Connections from httplib "
"which do not verify the connection. By switching to using requests library "
"the VMware storage backend now verifies HTTPS connection to vCenter server "
"and thus addresses the vulnerabilities described in OSSN-0033."
msgid "Security Issues"
msgstr "Security Issues"
msgid "Start using reno to manage release notes."
msgstr "Start using reno to manage release notes."
msgid ""
"The gridfs driver has been removed from the tree. The environments using "
"this driver that were not migrated will stop working after the upgrade."
msgstr ""
"The gridfs driver has been removed from the tree. The environments using "
"this driver that were not migrated will stop working after the upgrade."
msgid "glance_store._drivers.gridfs"
msgstr "glance_store._drivers.gridfs"

View File

@ -1,80 +0,0 @@
# zzxwill <zzxwill@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: Glance_store Release Notes 0.20.1\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2017-03-22 21:38+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-08-23 02:05+0000\n"
"Last-Translator: zzxwill <zzxwill@gmail.com>\n"
"Language-Team: Chinese (China)\n"
"Language: zh-CN\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=1; plural=0\n"
msgid "0.11.0"
msgstr "0.11.0"
msgid "0.12.0"
msgstr "0.12.0"
msgid "0.16.0"
msgstr "0.16.0"
msgid "0.17.0"
msgstr "0.17.0"
msgid "Current Series Release Notes"
msgstr "当前版本发布说明"
msgid "Deprecation Notes"
msgstr "弃用说明"
msgid "Glance_store Release Notes"
msgstr "Glance_store发布说明"
msgid "Liberty Series Release Notes"
msgstr "Liberty版本发布说明"
msgid "Mitaka Series Release Notes"
msgstr "Mitaka 版本发布说明"
msgid "New Features"
msgstr "新特性"
msgid "Other Notes"
msgstr "其他说明"
msgid "Security Issues"
msgstr "安全问题"
msgid "Start using reno to manage release notes."
msgstr "开始使用reno管理发布说明。"
msgid ""
"The following list of exceptions have been deprecated since 0.10.0 release "
"-- ``Conflict``, ``ForbiddenPublicImage`` ``ProtectedImageDelete``, "
"``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, "
"``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, "
"``ImageDataNotFound``, ``InvalidParameterValue``, "
"``InvalidImageStatusTransition``. This release removes these exceptions so "
"any remnant consumption of the same must be avoided/removed."
msgstr ""
"以下的异常列表自0.10.0版本后已经弃用了 ——``Conflict``, "
"``ForbiddenPublicImage`` ``ProtectedImageDelete``, "
"``BadDriverConfiguration``, ``InvalidRedirect``, ``WorkerCreationFailure``, "
"``SchemaLoadError``, ``InvalidObject``, ``UnsupportedHeaderFeature``, "
"``ImageDataNotFound``, ``InvalidParameterValue``, "
"``InvalidImageStatusTransition``。该版本移除了这些异常,所以任何遗留的相同的"
"使用方式必须避免或去掉。"
msgid "Upgrade Notes"
msgstr "升级说明"
msgid "glance_store._drivers.gridfs"
msgstr "glance_store._drivers.gridfs"
msgid "glance_store._drivers.s3 removed from tree."
msgstr "glance_store._drivers.s3从树上移除了。"

View File

@ -1,6 +0,0 @@
===================================
Mitaka Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/mitaka

View File

@ -1,6 +0,0 @@
===================================
Newton Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/newton

View File

@ -1,6 +0,0 @@
===================================
Ocata Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/ocata

View File

@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================
.. release-notes::

View File

@ -1,18 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
oslo.i18n!=3.15.2,>=2.1.0 # Apache-2.0
oslo.serialization!=2.19.1,>=1.10.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
stevedore>=1.20.0 # Apache-2.0
enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
six>=1.9.0 # MIT
jsonschema!=2.5.0,<3.0.0,>=2.0.0 # MIT
keystoneauth1>=3.0.1 # Apache-2.0
python-keystoneclient>=3.8.0 # Apache-2.0
requests>=2.14.2 # Apache-2.0

View File

@ -1,234 +0,0 @@
#!/bin/bash
set -eu
function usage {
echo "Usage: $0 [OPTION]..."
echo "Run test suite(s)"
echo ""
echo " -V, --virtual-env Always use virtualenv. Install automatically if not present"
echo " -N, --no-virtual-env Don't use virtualenv. Run tests in local environment"
echo " -s, --no-site-packages Isolate the virtualenv from the global Python environment"
echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added."
echo " -u, --update Update the virtual environment with any newer package versions"
echo " -p, --pep8 Just run PEP8 and HACKING compliance check"
echo " -P, --no-pep8 Don't run static code checks"
echo " -c, --coverage Generate coverage report"
echo " -d, --debug Run tests with testtools instead of testr. This allows you to use the debugger."
echo " -h, --help Print this usage message"
echo " --virtual-env-path <path> Location of the virtualenv directory"
echo " Default: \$(pwd)"
echo " --virtual-env-name <name> Name of the virtualenv directory"
echo " Default: .venv"
echo " --tools-path <dir> Location of the tools directory"
echo " Default: \$(pwd)"
echo " --concurrency <concurrency> How many processes to use when running the tests. A value of 0 autodetects concurrency from your CPU count"
echo " Default: 0"
echo ""
echo "Note: with no options specified, the script will try to run the tests in a virtual environment,"
echo " If no virtualenv is found, the script will ask if you would like to create one. If you "
echo " prefer to run tests NOT in a virtual environment, simply pass the -N option."
exit
}
function process_options {
i=1
while [ $i -le $# ]; do
case "${!i}" in
-h|--help) usage;;
-V|--virtual-env) always_venv=1; never_venv=0;;
-N|--no-virtual-env) always_venv=0; never_venv=1;;
-s|--no-site-packages) no_site_packages=1;;
-f|--force) force=1;;
-u|--update) update=1;;
-p|--pep8) just_pep8=1;;
-P|--no-pep8) no_pep8=1;;
-c|--coverage) coverage=1;;
-d|--debug) debug=1;;
--virtual-env-path)
(( i++ ))
venv_path=${!i}
;;
--virtual-env-name)
(( i++ ))
venv_dir=${!i}
;;
--tools-path)
(( i++ ))
tools_path=${!i}
;;
--concurrency)
(( i++ ))
concurrency=${!i}
;;
-*) testropts="$testropts ${!i}";;
*) testrargs="$testrargs ${!i}"
esac
(( i++ ))
done
}
tool_path=${tools_path:-$(pwd)}
venv_path=${venv_path:-$(pwd)}
venv_dir=${venv_name:-.venv}
with_venv=tools/with_venv.sh
always_venv=0
never_venv=0
force=0
no_site_packages=0
installvenvopts=
testrargs=
testropts=
wrapper=""
just_pep8=0
no_pep8=0
coverage=0
debug=0
update=0
concurrency=0
LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_ALL=C
process_options $@
# Make our paths available to other scripts we call
export venv_path
export venv_dir
export venv_name
export tools_dir
export venv=${venv_path}/${venv_dir}
if [ $no_site_packages -eq 1 ]; then
installvenvopts="--no-site-packages"
fi
function run_tests {
# Cleanup *pyc
${wrapper} find . -type f -name "*.pyc" -delete
if [ $debug -eq 1 ]; then
if [ "$testropts" = "" ] && [ "$testrargs" = "" ]; then
# Default to running all tests if specific test is not
# provided.
testrargs="discover ./tests"
fi
${wrapper} python -m testtools.run $testropts $testrargs
# Short circuit because all of the testr and coverage stuff
# below does not make sense when running testtools.run for
# debugging purposes.
return $?
fi
if [ $coverage -eq 1 ]; then
TESTRTESTS="$TESTRTESTS --coverage"
else
TESTRTESTS="$TESTRTESTS"
fi
# Just run the test suites in current environment
set +e
testrargs=`echo "$testrargs" | sed -e's/^\s*\(.*\)\s*$/\1/'`
TESTRTESTS="$TESTRTESTS --testr-args='--subunit --concurrency $concurrency $testropts $testrargs'"
if [ setup.cfg -nt glance_store.egg-info/entry_points.txt ]
then
${wrapper} python setup.py egg_info
fi
echo "Running \`${wrapper} $TESTRTESTS\`"
if ${wrapper} which subunit-2to1 2>&1 > /dev/null
then
# subunit-2to1 is present, testr subunit stream should be in version 2
# format. Convert to version one before colorizing.
bash -c "${wrapper} $TESTRTESTS | ${wrapper} subunit-2to1 | ${wrapper} tools/colorizer.py"
else
bash -c "${wrapper} $TESTRTESTS | ${wrapper} tools/colorizer.py"
fi
RESULT=$?
set -e
copy_subunit_log
if [ $coverage -eq 1 ]; then
echo "Generating HTML coverage report in covhtml/"
# Don't compute coverage for common code, which is tested elsewhere
${wrapper} coverage combine
${wrapper} coverage html --include='glance_store/*' -d covhtml -i
${wrapper} coverage report --include='glance_store/*' -i
fi
return $RESULT
}
function copy_subunit_log {
LOGNAME=`cat .testrepository/next-stream`
LOGNAME=$(($LOGNAME - 1))
LOGNAME=".testrepository/${LOGNAME}"
cp $LOGNAME subunit.log
}
function run_pep8 {
echo "Running flake8 ..."
if [ $never_venv -eq 1 ]; then
echo "**WARNING**:"
echo "Running flake8 without virtual env may miss OpenStack HACKING detection"
fi
bash -c "${wrapper} flake8"
echo "Testing translation files ..."
bash -c "${wrapper} find glance_store -type f -regex '.*\.pot?' -print0|${wrapper} xargs --null -n 1 ${wrapper} msgfmt --check-format -o /dev/null"
}
TESTRTESTS="python setup.py testr"
if [ $never_venv -eq 0 ]
then
# Remove the virtual environment if --force used
if [ $force -eq 1 ]; then
echo "Cleaning virtualenv..."
rm -rf ${venv}
fi
if [ $update -eq 1 ]; then
echo "Updating virtualenv..."
python tools/install_venv.py $installvenvopts
fi
if [ -e ${venv} ]; then
wrapper="${with_venv}"
else
if [ $always_venv -eq 1 ]; then
# Automatically install the virtualenv
python tools/install_venv.py $installvenvopts
wrapper="${with_venv}"
else
echo -e "No virtual environment found...create one? (Y/n) \c"
read use_ve
if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
# Install the virtualenv and run the test suite in it
python tools/install_venv.py $installvenvopts
wrapper=${with_venv}
fi
fi
fi
fi
# Delete old coverage data from previous runs
if [ $coverage -eq 1 ]; then
${wrapper} coverage erase
fi
if [ $just_pep8 -eq 1 ]; then
run_pep8
exit
fi
run_tests
# NOTE(sirp): we only want to run pep8 when we're running the full-test suite,
# not when we're running tests individually. To handle this, we need to
# distinguish between options (testropts), which begin with a '-', and
# arguments (testrargs).
if [ -z "$testrargs" ]; then
if [ $no_pep8 -eq 0 ]; then
run_pep8
fi
fi

View File

@ -1,97 +0,0 @@
[metadata]
name = glance_store
summary = OpenStack Image Service Store Library
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://docs.openstack.org/developer/glance_store
classifier =
Development Status :: 5 - Production/Stable
Environment :: OpenStack
Intended Audience :: Developers
Intended Audience :: Information Technology
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.5
[files]
packages =
glance_store
[entry_points]
glance_store.drivers =
file = glance_store._drivers.filesystem:Store
http = glance_store._drivers.http:Store
swift = glance_store._drivers.swift:Store
rbd = glance_store._drivers.rbd:Store
sheepdog = glance_store._drivers.sheepdog:Store
cinder = glance_store._drivers.cinder:Store
vmware = glance_store._drivers.vmware_datastore:Store
# TESTS ONLY
no_conf = glance_store.tests.fakes:UnconfigurableStore
# Backwards compatibility
glance.store.filesystem.Store = glance_store._drivers.filesystem:Store
glance.store.http.Store = glance_store._drivers.http:Store
glance.store.swift.Store = glance_store._drivers.swift:Store
glance.store.rbd.Store = glance_store._drivers.rbd:Store
glance.store.sheepdog.Store = glance_store._drivers.sheepdog:Store
glance.store.cinder.Store = glance_store._drivers.cinder:Store
glance.store.vmware_datastore.Store = glance_store._drivers.vmware_datastore:Store
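# For illustration: a loader can resolve these drivers by entry-point name,
# e.g. looking up 'file' in the 'glance_store.drivers' namespace yields
# glance_store._drivers.filesystem:Store.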
oslo.config.opts =
glance.store = glance_store.backend:_list_opts
console_scripts =
glance-rootwrap = oslo_rootwrap.cmd:main
[extras]
# Dependencies for each of the optional stores
vmware =
oslo.vmware>=2.17.0 # Apache-2.0
swift =
httplib2>=0.7.5 # MIT
python-swiftclient>=3.2.0 # Apache-2.0
cinder =
python-cinderclient>=3.0.0 # Apache-2.0
os-brick>=1.15.1 # Apache-2.0
oslo.rootwrap>=5.0.0 # Apache-2.0
oslo.privsep!=1.17.0,>=1.9.0 # Apache-2.0
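# Example (illustrative): 'pip install glance_store[swift,cinder]' pulls in
# only the optional dependencies for those two stores.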
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
warning-is-error = 1
[pbr]
autodoc_index_modules = True
api_doc_dir = reference/api
autodoc_exclude_modules =
glance_store.tests.*
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = glance_store/locale
domain = glance_store
[update_catalog]
domain = glance_store
output_dir = glance_store/locale
input_file = glance_store/locale/glance_store.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = glance_store/locale/glance_store.pot
[wheel]
universal = 1

29
setup.py
View File

@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In Python < 2.7.4, lazily importing the `pbr` package can break
# setuptools if other modules have registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)

25
test-requirements.txt
View File

@ -1,25 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
# Metrics and style
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
# Packaging
mock>=2.0 # BSD
# Unit testing
coverage!=4.4,>=4.0 # Apache-2.0
fixtures>=3.0.0 # Apache-2.0/BSD
python-subunit>=0.0.18 # Apache-2.0/BSD
requests-mock>=1.1 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT
oslotest>=1.10.0 # Apache-2.0
os-testr>=0.8.0 # Apache-2.0
bandit>=1.1.0 # Apache-2.0
# this is required for the docs build jobs
sphinx>=1.6.2 # BSD
openstackdocstheme>=1.11.0 # Apache-2.0
reno!=2.3.1,>=1.8.0 # Apache-2.0

336
tools/colorizer.py
View File

@ -1,336 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013, Nebula, Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Colorizer Code is borrowed from Twisted:
# Copyright (c) 2001-2010 Twisted Matrix Laboratories.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""Display a subunit stream through a colorized unittest test runner."""
import heapq
import six
import subunit
import sys
import unittest
import testtools
class _AnsiColorizer(object):
"""
A colorizer is an object that loosely wraps around a stream, allowing
callers to write text to the stream in a particular color.
Colorizer classes must implement C{supported()} and C{write(text, color)}.
"""
_colors = dict(black=30, red=31, green=32, yellow=33,
blue=34, magenta=35, cyan=36, white=37)
def __init__(self, stream):
self.stream = stream
@staticmethod
def supported(stream=sys.stdout):
"""
A method that returns True if the current platform supports
coloring terminal output using this method. Returns False otherwise.
"""
if not stream.isatty():
return False # auto color only on TTYs
try:
import curses
except ImportError:
return False
else:
try:
try:
return curses.tigetnum("colors") > 2
except curses.error:
curses.setupterm()
return curses.tigetnum("colors") > 2
except Exception:
# guess false in case of error
return False
def write(self, text, color):
"""
Write the given text to the stream in the given color.
@param text: Text to be written to the stream.
@param color: A string label for a color. e.g. 'red', 'white'.
"""
color = self._colors[color]
self.stream.write('\x1b[%s;1m%s\x1b[0m' % (color, text))
class _Win32Colorizer(object):
"""
See _AnsiColorizer docstring.
"""
def __init__(self, stream):
import win32console
red, green, blue, bold = (win32console.FOREGROUND_RED,
win32console.FOREGROUND_GREEN,
win32console.FOREGROUND_BLUE,
win32console.FOREGROUND_INTENSITY)
self.stream = stream
self.screenBuffer = win32console.GetStdHandle(
win32console.STD_OUT_HANDLE)
self._colors = {
'normal': red | green | blue,
'red': red | bold,
'green': green | bold,
'blue': blue | bold,
'yellow': red | green | bold,
'magenta': red | blue | bold,
'cyan': green | blue | bold,
'white': red | green | blue | bold
}
@staticmethod
def supported(stream=sys.stdout):
try:
import win32console
screenBuffer = win32console.GetStdHandle(
win32console.STD_OUT_HANDLE)
except ImportError:
return False
import pywintypes
try:
screenBuffer.SetConsoleTextAttribute(
win32console.FOREGROUND_RED |
win32console.FOREGROUND_GREEN |
win32console.FOREGROUND_BLUE)
except pywintypes.error:
return False
else:
return True
def write(self, text, color):
color = self._colors[color]
self.screenBuffer.SetConsoleTextAttribute(color)
self.stream.write(text)
self.screenBuffer.SetConsoleTextAttribute(self._colors['normal'])
class _NullColorizer(object):
"""
See _AnsiColorizer docstring.
"""
def __init__(self, stream):
self.stream = stream
@staticmethod
def supported(stream=sys.stdout):
return True
def write(self, text, color):
self.stream.write(text)
def get_elapsed_time_color(elapsed_time):
if elapsed_time > 1.0:
return 'red'
elif elapsed_time > 0.25:
return 'yellow'
else:
return 'green'
class SubunitTestResult(testtools.TestResult):
def __init__(self, stream, descriptions, verbosity):
super(SubunitTestResult, self).__init__()
self.stream = stream
self.showAll = verbosity > 1
self.num_slow_tests = 10
self.slow_tests = [] # this is a fixed-size heap
self.colorizer = None
# NOTE(vish): reset stdout for the terminal check
stdout = sys.stdout
sys.stdout = sys.__stdout__
for colorizer in [_Win32Colorizer, _AnsiColorizer, _NullColorizer]:
if colorizer.supported():
self.colorizer = colorizer(self.stream)
break
sys.stdout = stdout
self.start_time = None
self.last_time = {}
self.results = {}
self.last_written = None
def _writeElapsedTime(self, elapsed):
color = get_elapsed_time_color(elapsed)
self.colorizer.write(" %.2f" % elapsed, color)
def _addResult(self, test, *args):
try:
name = test.id()
except AttributeError:
name = 'Unknown.unknown'
test_class, test_name = name.rsplit('.', 1)
elapsed = (self._now() - self.start_time).total_seconds()
item = (elapsed, test_class, test_name)
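# Keep only the num_slow_tests largest elapsed times: once the heap is
# full, heappushpop evicts the current minimum, so the heap always holds
# the slowest tests seen so far.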
if len(self.slow_tests) >= self.num_slow_tests:
heapq.heappushpop(self.slow_tests, item)
else:
heapq.heappush(self.slow_tests, item)
self.results.setdefault(test_class, [])
self.results[test_class].append((test_name, elapsed) + args)
self.last_time[test_class] = self._now()
self.writeTests()
def _writeResult(self, test_name, elapsed, long_result, color,
short_result, success):
if self.showAll:
self.stream.write(' %s' % str(test_name).ljust(66))
self.colorizer.write(long_result, color)
if success:
self._writeElapsedTime(elapsed)
self.stream.writeln()
else:
self.colorizer.write(short_result, color)
def addSuccess(self, test):
super(SubunitTestResult, self).addSuccess(test)
self._addResult(test, 'OK', 'green', '.', True)
def addFailure(self, test, err):
if test.id() == 'process-returncode':
return
super(SubunitTestResult, self).addFailure(test, err)
self._addResult(test, 'FAIL', 'red', 'F', False)
def addError(self, test, err):
super(SubunitTestResult, self).addError(test, err)
self._addResult(test, 'ERROR', 'red', 'E', False)
def addSkip(self, test, reason=None, details=None):
super(SubunitTestResult, self).addSkip(test, reason, details)
self._addResult(test, 'SKIP', 'blue', 'S', True)
def startTest(self, test):
self.start_time = self._now()
super(SubunitTestResult, self).startTest(test)
def writeTestCase(self, cls):
if not self.results.get(cls):
return
if cls != self.last_written:
self.colorizer.write(cls, 'white')
self.stream.writeln()
for result in self.results[cls]:
self._writeResult(*result)
del self.results[cls]
self.stream.flush()
self.last_written = cls
def writeTests(self):
time = self.last_time.get(self.last_written, self._now())
if not self.last_written or (self._now() - time).total_seconds() > 2.0:
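# Flush the oldest test classes until every class still buffered has
# produced a result within the last two seconds.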
diff = 3.0
while diff > 2.0:
classes = self.results.keys()
oldest = min(classes, key=lambda x: self.last_time[x])
diff = (self._now() - self.last_time[oldest]).total_seconds()
self.writeTestCase(oldest)
else:
self.writeTestCase(self.last_written)
def done(self):
self.stopTestRun()
def stopTestRun(self):
for cls in list(six.iterkeys(self.results)):
self.writeTestCase(cls)
self.stream.writeln()
self.writeSlowTests()
def writeSlowTests(self):
# Pare out 'fast' tests
slow_tests = [item for item in self.slow_tests
if get_elapsed_time_color(item[0]) != 'green']
if slow_tests:
slow_total_time = sum(item[0] for item in slow_tests)
slow = ("Slowest %i tests took %.2f secs:"
% (len(slow_tests), slow_total_time))
self.colorizer.write(slow, 'yellow')
self.stream.writeln()
last_cls = None
# sort by name
for elapsed, cls, name in sorted(slow_tests,
key=lambda x: x[1] + x[2]):
if cls != last_cls:
self.colorizer.write(cls, 'white')
self.stream.writeln()
last_cls = cls
self.stream.write(' %s' % str(name).ljust(68))
self._writeElapsedTime(elapsed)
self.stream.writeln()
def printErrors(self):
if self.showAll:
self.stream.writeln()
self.printErrorList('ERROR', self.errors)
self.printErrorList('FAIL', self.failures)
def printErrorList(self, flavor, errors):
for test, err in errors:
self.colorizer.write("=" * 70, 'red')
self.stream.writeln()
self.colorizer.write(flavor, 'red')
self.stream.writeln(": %s" % test.id())
self.colorizer.write("-" * 70, 'red')
self.stream.writeln()
self.stream.writeln("%s" % err)
test = subunit.ProtocolTestCase(sys.stdin, passthrough=None)
if sys.version_info[0:2] <= (2, 6):
runner = unittest.TextTestRunner(verbosity=2)
else:
runner = unittest.TextTestRunner(
verbosity=2, resultclass=SubunitTestResult)
if runner.run(test).wasSuccessful():
exit_code = 0
else:
exit_code = 1
sys.exit(exit_code)

73
tools/install_venv.py
View File

@ -1,73 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Copyright 2010 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Installation script for glance_store's development virtualenv
"""
from __future__ import print_function
import os
import sys
import install_venv_common as install_venv # noqa
def print_help():
help = """
glance_store development environment setup is complete.
glance_store development uses virtualenv to track and manage Python
dependencies while in development and testing.
To activate the glance_store virtualenv for the extent of your current shell
session you can run:
$ source .venv/bin/activate
Or, if you prefer, you can run commands in the virtualenv on a case by case
basis by running:
$ tools/with_venv.sh <your command>
Also, 'make test' will automatically use the virtualenv.
"""
print(help)
def main(argv):
root = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
venv = os.path.join(root, '.venv')
pip_requires = os.path.join(root, 'requirements.txt')
test_requires = os.path.join(root, 'test-requirements.txt')
py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1])
project = 'glance_store'
install = install_venv.InstallVenv(root, venv, pip_requires, test_requires,
py_version, project)
options = install.parse_args(argv)
install.check_python_version()
install.check_dependencies()
install.create_virtualenv(no_site_packages=options.no_site_packages)
install.install_dependencies()
install.run_command([os.path.join(venv, 'bin/python'),
'setup.py', 'develop'])
print_help()
if __name__ == '__main__':
main(sys.argv)

172
tools/install_venv_common.py
View File

@ -1,172 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Provides methods needed by installation script for OpenStack development
virtual environments.
Since this script is used to bootstrap a virtualenv from the system's Python
environment, it should be kept strictly compatible with Python 2.6.
Synced in from openstack-common
"""
from __future__ import print_function
import optparse
import os
import subprocess
import sys
class InstallVenv(object):
def __init__(self, root, venv, requirements,
test_requirements, py_version,
project):
self.root = root
self.venv = venv
self.requirements = requirements
self.test_requirements = test_requirements
self.py_version = py_version
self.project = project
def die(self, message, *args):
print(message % args, file=sys.stderr)
sys.exit(1)
def check_python_version(self):
if sys.version_info < (2, 6):
self.die("Need Python Version >= 2.6")
def run_command_with_code(self, cmd, redirect_output=True,
check_exit_code=True):
"""Runs a command in an out-of-process shell.
Returns the output of that command. Working directory is self.root.
"""
if redirect_output:
stdout = subprocess.PIPE
else:
stdout = None
proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout)
output = proc.communicate()[0]
if check_exit_code and proc.returncode != 0:
self.die('Command "%s" failed.\n%s', ' '.join(cmd), output)
return (output, proc.returncode)
def run_command(self, cmd, redirect_output=True, check_exit_code=True):
return self.run_command_with_code(cmd, redirect_output,
check_exit_code)[0]
def get_distro(self):
if (os.path.exists('/etc/fedora-release') or
os.path.exists('/etc/redhat-release')):
return Fedora(
self.root, self.venv, self.requirements,
self.test_requirements, self.py_version, self.project)
else:
return Distro(
self.root, self.venv, self.requirements,
self.test_requirements, self.py_version, self.project)
def check_dependencies(self):
self.get_distro().install_virtualenv()
def create_virtualenv(self, no_site_packages=True):
"""Creates the virtual environment and installs PIP.
Creates the virtual environment and installs PIP only into the
virtual environment.
"""
if not os.path.isdir(self.venv):
print('Creating venv...', end=' ')
if no_site_packages:
self.run_command(['virtualenv', '-q', '--no-site-packages',
self.venv])
else:
self.run_command(['virtualenv', '-q', self.venv])
print('done.')
else:
print("venv already exists...")
pass
def pip_install(self, *args):
self.run_command(['tools/with_venv.sh',
'pip', 'install', '--upgrade'] + list(args),
redirect_output=False)
def install_dependencies(self):
print('Installing dependencies with pip (this can take a while)...')
# First things first, make sure our venv has the latest pip and
# setuptools and pbr
self.pip_install('pip>=1.4')
self.pip_install('setuptools')
self.pip_install('pbr')
self.pip_install('-r', self.requirements, '-r', self.test_requirements)
def parse_args(self, argv):
"""Parses command-line arguments."""
parser = optparse.OptionParser()
parser.add_option('-n', '--no-site-packages',
action='store_true',
help="Do not inherit packages from global Python "
"install")
return parser.parse_args(argv[1:])[0]
class Distro(InstallVenv):
def check_cmd(self, cmd):
return bool(self.run_command(['which', cmd],
check_exit_code=False).strip())
def install_virtualenv(self):
if self.check_cmd('virtualenv'):
return
if self.check_cmd('easy_install'):
print('Installing virtualenv via easy_install...', end=' ')
if self.run_command(['easy_install', 'virtualenv']):
print('Succeeded')
return
else:
print('Failed')
self.die('ERROR: virtualenv not found.\n\n%s development'
' requires virtualenv, please install it using your'
' favorite package management tool' % self.project)
class Fedora(Distro):
"""This covers all Fedora-based distributions.
Includes: Fedora, RHEL, CentOS, Scientific Linux
"""
def check_pkg(self, pkg):
return self.run_command_with_code(['rpm', '-q', pkg],
check_exit_code=False)[1] == 0
def install_virtualenv(self):
if self.check_cmd('virtualenv'):
return
if not self.check_pkg('python-virtualenv'):
self.die("Please install 'python-virtualenv'.")
super(Fedora, self).install_virtualenv()

55
tools/tox_install.sh
View File

@ -1,55 +0,0 @@
#!/usr/bin/env bash
# The library constraints file contains a version pin for this library that
# conflicts with installing it from source. Replace the pin in the
# constraints file before applying it for a from-source installation.
ZUUL_CLONER=/usr/zuul-env/bin/zuul-cloner
BRANCH_NAME=master
LIB_NAME=glance_store
requirements_installed=$(echo "import openstack_requirements" | python 2>/dev/null ; echo $?)
set -e
CONSTRAINTS_FILE=$1
shift
install_cmd="pip install"
mydir=$(mktemp -dt "$LIB_NAME-tox_install-XXXXXXX")
trap "rm -rf $mydir" EXIT
localfile=$mydir/upper-constraints.txt
if [[ $CONSTRAINTS_FILE != http* ]]; then
CONSTRAINTS_FILE=file://$CONSTRAINTS_FILE
fi
curl $CONSTRAINTS_FILE -k -o $localfile
install_cmd="$install_cmd -c$localfile"
if [ $requirements_installed -eq 0 ]; then
echo "ALREADY INSTALLED" > /tmp/tox_install.txt
echo "Requirements already installed; using existing package"
elif [ -x "$ZUUL_CLONER" ]; then
echo "ZUUL CLONER" > /tmp/tox_install.txt
pushd $mydir
$ZUUL_CLONER --cache-dir \
/opt/git \
--branch $BRANCH_NAME \
git://git.openstack.org \
openstack/requirements
cd openstack/requirements
$install_cmd -e .
popd
else
echo "PIP HARDCODE" > /tmp/tox_install.txt
if [ -z "$REQUIREMENTS_PIP_LOCATION" ]; then
REQUIREMENTS_PIP_LOCATION="git+https://git.openstack.org/openstack/requirements@$BRANCH_NAME#egg=requirements"
fi
$install_cmd -U -e ${REQUIREMENTS_PIP_LOCATION}
fi
# This is the main purpose of the script: allow local installation of the
# current repo. The library is listed in the constraints file, so any
# install would otherwise be pinned; unconstrain it first.
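# e.g. (illustrative): a pin like 'glance_store===0.22.0' in $localfile is
# rewritten to '-e file://$PWD#egg=glance_store' so the local checkout can
# be installed.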
edit-constraints $localfile -- $LIB_NAME "-e file://$PWD#egg=$LIB_NAME"
$install_cmd -U "$@"
exit $?

7
tools/with_venv.sh
View File

@ -1,7 +0,0 @@
#!/bin/bash
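# Usage (illustrative): tools/with_venv.sh <command>, e.g.
#   tools/with_venv.sh flake8
# runs the command with the project's virtualenv activated.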
TOOLS_PATH=${TOOLS_PATH:-$(dirname $0)}
VENV_PATH=${VENV_PATH:-${TOOLS_PATH}}
VENV_DIR=${VENV_NAME:-/../.venv}
TOOLS=${TOOLS_PATH}
VENV=${VENV:-${VENV_PATH}/${VENV_DIR}}
source ${VENV}/bin/activate && "$@"

64
tox.ini
View File

@ -1,64 +0,0 @@
[tox]
minversion = 1.6
envlist = py35,py27,pep8
skipsdist = True
[testenv]
setenv = VIRTUAL_ENV={envdir}
usedevelop = True
install_command = {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} --allow-all-external --allow-insecure netaddr -U {opts} {packages}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
.[vmware,swift,cinder]
passenv = OS_TEST_*
commands = ostestr --slowest {posargs}
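# Illustrative: UPPER_CONSTRAINTS_FILE may also point at a local copy, e.g.
#   UPPER_CONSTRAINTS_FILE=/tmp/upper-constraints.txt tox -e py27
# (tools/tox_install.sh prefixes plain paths with file:// automatically).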
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/.doctrees -b html releasenotes/source releasenotes/build/html
[testenv:pep8]
commands =
flake8 {posargs}
# Run security linter
# The following bandit tests are being skipped:
# B101 - Use of assert detected.
# B110 - Try, Except, Pass detected.
# B303 - Use of insecure MD2, MD4, or MD5 hash function.
bandit -r glance_store -x tests --skip B101,B110,B303
[testenv:bandit]
# NOTE(browne): This is required for the integration test job of the bandit
# project. Please do not remove.
# The following bandit tests are being skipped:
# B101 - Use of assert detected.
# B110 - Try, Except, Pass detected.
# B303 - Use of insecure MD2, MD4, or MD5 hash function.
commands = bandit -r glance_store -x tests --skip B101,B110,B303
[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands = python setup.py testr --coverage --testr-args='^(?!.*test.*coverage).*$'
[testenv:venv]
commands = {posargs}
[testenv:functional-swift]
sitepackages = True
setenv = OS_TEST_PATH=./glance_store/tests/functional/swift
commands = python setup.py testr --slowest --testr-args='glance_store.tests.functional.swift'
[testenv:functional-filesystem]
sitepackages = True
setenv = OS_TEST_PATH=./glance_store/tests/functional/filesystem
commands = python setup.py testr --slowest --testr-args='glance_store.tests.functional.filesystem'
[flake8]
# TODO(dmllr): Analyze or fix the warnings blacklisted below
# H301 one import per line
# H404 multi line docstring should start with a summary
# H405 multi line docstring summary not separated with an empty line
ignore = H301,H404,H405
exclude = .venv,.git,.tox,dist,doc,etc,*glance_store/locale*,*lib/python*,*egg,build