Stop cinderlib development

Remove files from master, as development will no longer occur there.
The stable branches continue to be supported while they are in
Maintained status.

Update the README to indicate this change.

Depends-On: Ib186ac5830e5920e264d79be946995e63e960426
Depends-On: I081cd363117671eaab6a3193094d5872f9820354
Depends-On: If2b9a82cddb20543b176ee22765049db257c89b9
Depends-On: I1143e5e5ccf8103e386fe1ce614a554e7f152d9a
Change-Id: I4722b869033ad1bd357e36c4a258b6d3ea61f5d6
Author: Brian Rosmaita 2023-12-10 12:57:04 -05:00
parent ee1d86c058
commit f165c6ff5e
125 changed files with 13 additions and 12255 deletions

.gitignore
@@ -1,73 +0,0 @@
# Byte-compiled / optimized / DLL files
.*
!.gitignore
!.testr.conf
!.stestr.conf
!.zuul.yaml
!.travis.yml
.*.sw?
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other info into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
cover/
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
# PyBuilder
target/
# pyenv python configuration file
.python-version
# Temp directory, for example for the LVM file, our custom config, etc.
temp/
cinder-lioadm
local-upper-constraints.txt

.stestr.conf
@@ -1,3 +0,0 @@
[DEFAULT]
test_path=${OS_TEST_PATH:-./cinderlib/tests/unit}
top_dir=./

.zuul.yaml
@@ -1,142 +0,0 @@
- project:
vars:
ensure_tox_version: '<4'
queue: integrated
templates:
- publish-openstack-docs-pti
- release-notes-jobs-python3
check:
jobs:
- openstack-tox-pep8
- cinderlib-tox-py38
- cinderlib-tox-py39
- cinderlib-tox-py310
# TODO: make this voting when cinderlib opens for 2024.1 development
- cinderlib-tox-py311:
voting: false
- cinderlib-lvm-functional
- cinderlib-ceph-functional
# NOTE: when cinderlib opens for 2024.1 development, use the parent
# job instead
- cinderlib-os-brick-src-tempest-lvm-lio-barbican-2023.2
gate:
jobs:
- openstack-tox-pep8
- cinderlib-tox-py38
- cinderlib-tox-py39
- cinderlib-tox-py310
- cinderlib-lvm-functional
- cinderlib-ceph-functional
# NOTE: when cinderlib opens for 2024.1 development, use the parent
# job instead
- cinderlib-os-brick-src-tempest-lvm-lio-barbican-2023.2
post:
jobs:
- publish-openstack-python-branch-tarball
- job:
name: cinderlib-tox-py38
parent: openstack-tox-py38
required-projects:
- name: openstack/os-brick
override-checkout: stable/2023.2
- name: openstack/cinder
override-checkout: stable/2023.2
- name: openstack/requirements
override-checkout: stable/2023.2
- job:
name: cinderlib-tox-py39
parent: openstack-tox-py39
required-projects:
- name: openstack/os-brick
override-checkout: stable/2023.2
- name: openstack/cinder
override-checkout: stable/2023.2
- name: openstack/requirements
override-checkout: stable/2023.2
- job:
name: cinderlib-tox-py310
parent: openstack-tox-py310
required-projects:
- name: openstack/os-brick
override-checkout: stable/2023.2
- name: openstack/cinder
override-checkout: stable/2023.2
- name: openstack/requirements
override-checkout: stable/2023.2
- job:
name: cinderlib-tox-py311
parent: openstack-tox-py311
required-projects:
- name: openstack/os-brick
override-checkout: stable/2023.2
- name: openstack/cinder
override-checkout: stable/2023.2
- name: openstack/requirements
override-checkout: stable/2023.2
- job:
name: cinderlib-functional
parent: openstack-tox-functional-with-sudo
required-projects:
- name: openstack/os-brick
override-checkout: stable/2023.2
- name: openstack/cinder
override-checkout: stable/2023.2
- name: openstack/requirements
override-checkout: stable/2023.2
pre-run: playbooks/required-projects-bindeps.yaml
irrelevant-files:
- ^.*\.rst$
- ^doc/.*$
- ^releasenotes/.*$
- job:
name: cinderlib-lvm-functional
parent: cinderlib-functional
pre-run: playbooks/setup-lvm.yaml
nodeset: centos-9-stream
vars:
tox_environment:
# Workaround for https://github.com/pypa/pip/issues/6264
PIP_OPTIONS: "--no-use-pep517"
CL_FTEST_MEMORY_PERSISTENCE: "false"
# These come from great-great-grandparent tox job
NOSE_WITH_HTML_OUTPUT: 1
NOSE_HTML_OUT_FILE: nose_results.html
NOSE_WITH_XUNIT: 1
# The Ceph job tests cinderlib without unnecessary libraries
- job:
name: cinderlib-ceph-functional
parent: cinderlib-functional
pre-run: playbooks/setup-ceph.yaml
# TODO: move back to centos as soon as Ceph packages are available
nodeset: ubuntu-focal
vars:
tox_environment:
CL_FTEST_CFG: "{{ ansible_user_dir }}/{{ zuul.projects['opendev.org/openstack/cinderlib'].src_dir }}/cinderlib/tests/functional/ceph.yaml"
# These come from great-great-grandparent tox job
NOSE_WITH_HTML_OUTPUT: 1
NOSE_HTML_OUT_FILE: nose_results.html
NOSE_WITH_XUNIT: 1
- job:
name: cinderlib-os-brick-src-tempest-lvm-lio-barbican-2023.2
parent: os-brick-src-tempest-lvm-lio-barbican
description: |
Use this job during the phase when cinderlib master is still
the development branch of the cinder previous release. When
cinderlib master and cinder master are the development branches
for the *same* release, you should use the parent job directly
in the check and gate, above.
override-checkout: stable/2023.2
# NOTE: while the cinderlib stable/2023.2 branch does not exist,
# zuul will fall back to using cinderlib master, which is the
# behavior we want.

CONTRIBUTING.rst
@@ -1,19 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/cinderlib
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Launchpad:
https://bugs.launchpad.net/cinderlib
For more specific information about contributing to this repository, see the
cinderlib contributor guide:
https://docs.openstack.org/cinderlib/latest/contributor/contributing.html

@@ -1,49 +0,0 @@
The Cinder Library, also known as cinderlib, is a Python library that leverages
the Cinder project to provide an object oriented abstraction around Cinder's
storage drivers to allow their usage directly without running any of the Cinder
services or surrounding services, such as KeyStone, MySQL or RabbitMQ.
* Free software: Apache Software License 2.0
* Documentation: https://docs.openstack.org/cinderlib/latest/
The library is intended for developers who only need the basic CRUD
functionality of the drivers and don't care for all the additional features
Cinder provides such as quotas, replication, multi-tenancy, migrations,
retyping, scheduling, backups, authorization, authentication, REST API, etc.
The library was originally created as an external project, so it didn't have
the broad range of backend testing Cinder does, and only a limited number of
drivers were validated at the time. Drivers should work out of the box, and
we'll keep a list of drivers that have added the cinderlib functional tests to
the driver gates confirming they work and ensuring they will keep working.
Features
--------
* Use a Cinder driver without running a DBMS, Message broker, or Cinder
service.
* Using multiple simultaneous drivers on the same application.
* Basic operations support:
- Create volume
- Delete volume
- Extend volume
- Clone volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Connect volume
- Disconnect volume
- Local attach
- Local detach
- Validate connector
- Extra Specs for specific backend functionality.
- Backend QoS
- Multi-pool support
* Metadata persistence plugins:
- Stateless: Caller stores JSON serialization.
- Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
- Custom plugin: Caller provides module to store Metadata and cinderlib calls
it when necessary.
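
A minimal usage sketch (assuming a host with an LVM volume group named
cinder-volumes; the driver option values shown are illustrative, not a
tested configuration)::

    import cinderlib as cl

    # Initialize the library; with no arguments we get the defaults
    # (no metadata persistence plugin, sudo as the root helper).
    cl.setup()

    # Instantiate a backend. The keyword arguments are the regular Cinder
    # configuration options for the chosen volume_driver.
    lvm = cl.Backend(
        volume_backend_name='lvm_iscsi',
        volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
        volume_group='cinder-volumes',
        target_protocol='iscsi',
        target_helper='lioadm',
    )

    # Basic operations: create a 1GB volume, snapshot it, then clean up.
    vol = lvm.create_volume(size=1)
    snap = vol.create_snapshot()
    snap.delete()
    vol.delete()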

HACKING.rst
@@ -1,53 +0,0 @@
Cinderlib Style Commandments
============================
- Step 1: Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Step 2: Read on
Cinder Specific Commandments
----------------------------
- [N314] Check for vi editor configuration in source files.
- [N322] Ensure default arguments are not mutable.
- [N323] Add check for explicit import of _() to ensure proper translation.
- [N325] str() and unicode() cannot be used on an exception. Remove or use six.text_type().
- [N336] Must use a dict comprehension instead of a dict constructor with a sequence of key-value pairs.
- [C301] timeutils.utcnow() from oslo_utils should be used instead of datetime.now().
- [C302] six.text_type should be used instead of unicode.
- [C303] Ensure that there are no 'print()' statements in code that is being committed.
- [C304] Enforce no use of LOG.audit messages. LOG.info should be used instead.
- [C305] Prevent use of deprecated contextlib.nested.
- [C306] timeutils.strtime() must not be used (deprecated).
- [C307] LOG.warn is deprecated. Enforce use of LOG.warning.
- [C308] timeutils.isotime() must not be used (deprecated).
- [C309] Unit tests should not perform logging.
- [C310] Check for improper use of logging format arguments.
- [C311] Check for proper naming and usage in option registration.
- [C312] Validate that logs are not translated.
- [C313] Check that assertTrue(value) is used and not assertEqual(True, value).
General
-------
- Use 'raise' instead of 'raise e' to preserve original traceback or exception
  being reraised::

    except Exception as e:
        ...
        raise e  # BAD

    except Exception:
        ...
        raise  # OKAY
Creating Unit Tests
-------------------
For every new feature, unit tests should be created that both test and
(implicitly) document the usage of said feature. If submitting a patch for a
bug that had no unit test, a new passing unit test should be added. If a
submitted bug fix does have a unit test, be sure to add a new one that fails
without the patch and passes with the patch.
For more information on creating unit tests and utilizing the testing
infrastructure in OpenStack Cinder, please see
https://docs.openstack.org/cinder/latest/contributor/testing.html
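
A hedged sketch of the expected shape of such a test (the function under test
here is a stand-in, not cinderlib code; real unit tests live under
cinderlib/tests/unit)::

    import unittest

    def squared(x):
        """Stand-in for the code under test (illustrative only)."""
        return x * x

    class TestSquared(unittest.TestCase):
        """Written to fail without the bug fix and pass with it."""

        def test_negative_input(self):
            # A regression test: capture the exact input that triggered
            # the bug, so the test fails without the patch.
            self.assertEqual(squared(-3), 9)

    if __name__ == '__main__':
        unittest.main()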

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README.rst
@@ -1,78 +1,19 @@
Cinder Library
==============

.. image:: https://img.shields.io/pypi/v/cinderlib.svg
   :target: https://pypi.python.org/pypi/cinderlib

.. image:: https://img.shields.io/pypi/pyversions/cinderlib.svg
   :target: https://pypi.python.org/pypi/cinderlib

.. image:: https://img.shields.io/:license-apache-blue.svg
   :target: http://www.apache.org/licenses/LICENSE-2.0

This project is no longer being developed. Previous releases will
continue to be supported under the schedule outlined in the
`OpenStack Stable Branches Policy
<https://docs.openstack.org/project-team-guide/stable-branches.html>`_.

While stable branches exist, you will be able to see them here,
but they will be deleted as they reach End of Life.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
Introduction
------------
The Cinder Library, also known as cinderlib, is a Python library that leverages
the Cinder project to provide an object oriented abstraction around Cinder's
storage drivers to allow their usage directly without running any of the Cinder
services or surrounding services, such as KeyStone, MySQL or RabbitMQ.
* Free software: Apache Software License 2.0
* Documentation: https://docs.openstack.org/cinderlib/latest/
The library is intended for developers who only need the basic CRUD
functionality of the drivers and don't care for all the additional features
Cinder provides such as quotas, replication, multi-tenancy, migrations,
retyping, scheduling, backups, authorization, authentication, REST API, etc.
The library was originally created as an external project, so it didn't have
the broad range of backend testing Cinder does, and only a limited number of
drivers were validated at the time. Drivers should work out of the box, and
we'll keep a list of drivers that have added the cinderlib functional tests to
the driver gates confirming they work and ensuring they will keep working.
Features
--------
* Use a Cinder driver without running a DBMS, Message broker, or Cinder
service.
* Using multiple simultaneous drivers on the same application.
* Basic operations support:
- Create volume
- Delete volume
- Extend volume
- Clone volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Connect volume
- Disconnect volume
- Local attach
- Local detach
- Validate connector
- Extra Specs for specific backend functionality.
- Backend QoS
- Multi-pool support
* Metadata persistence plugins:
- Stateless: Caller stores JSON serialization.
- Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
- Custom plugin: Caller provides module to store Metadata and cinderlib calls
it when necessary.
Demo
----
.. raw:: html

   <a href="https://asciinema.org/a/TcTR7Lu7jI0pEsd9ThEn01l7n?autoplay=1"
      target="_blank"><img
      src="https://asciinema.org/a/TcTR7Lu7jI0pEsd9ThEn01l7n.png"/></a>
.. _GIGO: https://en.wikipedia.org/wiki/Garbage_in,_garbage_out
.. _official project documentation: https://readthedocs.org/projects/cinderlib/badge/?version=latest
.. _OpenStack's Cinder volume driver configuration documentation: https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-drivers.html
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

babel.cfg
@@ -1,2 +0,0 @@
[python: **.py]

bindep.txt
@@ -1,44 +0,0 @@
# This is a cross-platform list tracking distribution packages needed for
# install and tests;
# see https://docs.openstack.org/infra/bindep/ for additional information.
build-essential [platform:dpkg test]
gcc [platform:rpm test]
python3 [platform:redhat test]
python3-devel [platform:redhat test]
# gettext and graphviz are needed by doc builds only. For transition,
# have them in both doc and test.
# TODO(jaegerandi): Remove test once infra scripts are updated.
gettext [!platform:suse doc test]
gettext-runtime [platform:suse doc test]
graphviz [doc test]
# for pdf-docs
fonts-liberation [doc platform:dpkg]
latexmk [doc platform:dpkg]
librsvg2-bin [doc platform:dpkg]
sg3-utils [platform:dpkg]
texlive-latex-base [doc platform:dpkg]
texlive-latex-extra [doc platform:dpkg]
texlive-xetex [doc platform:dpkg]
texlive-fonts-recommended [doc platform:dpkg]
xindy [doc platform:dpkg]
latexmk [doc platform:rpm]
librsvg2-tools [doc platform:rpm]
python3-sphinxcontrib-svg2pdfconverter-common [doc platform:rpm]
sg3_utils [platform:rpm]
texlive [doc platform:rpm]
texlive-capt-of [doc platform:rpm]
texlive-fncychap [doc platform:rpm]
texlive-framed [doc platform:rpm]
texlive-needspace [doc platform:rpm]
texlive-pdftex [doc platform:rpm]
texlive-polyglossia [doc platform:rpm]
texlive-tabulary [doc platform:rpm]
texlive-titlesec [doc platform:rpm]
texlive-upquote [doc platform:rpm]
texlive-wrapfig [doc platform:rpm]
texlive-xetex [doc platform:rpm]
texlive-xindy [doc platform:rpm]

cinderlib/__init__.py
@@ -1,53 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try:
# For python 3.8 and later
import importlib.metadata as importlib_metadata
except ImportError:
# For everyone else
import importlib_metadata
from os_brick.initiator import connector
from cinderlib import _fake_packages # noqa F401
from cinderlib import cinderlib
from cinderlib import objects
from cinderlib import serialization
try:
__version__ = importlib_metadata.version('cinderlib')
except importlib_metadata.PackageNotFoundError:
__version__ = '0.0.0'
DEFAULT_PROJECT_ID = objects.DEFAULT_PROJECT_ID
DEFAULT_USER_ID = objects.DEFAULT_USER_ID
Volume = objects.Volume
Snapshot = objects.Snapshot
Connection = objects.Connection
KeyValue = objects.KeyValue
load = serialization.load
json = serialization.json
jsons = serialization.jsons
dump = serialization.dump
dumps = serialization.dumps
setup = cinderlib.setup
Backend = cinderlib.Backend
# This gets reassigned on initialization by nos_brick.init
get_connector_properties = connector.get_connector_properties
list_supported_drivers = cinderlib.Backend.list_supported_drivers
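
The serialization names exported above support the stateless persistence mode
described in the README, where the caller keeps the JSON itself. A hedged
sketch of that round trip (assuming a backend set up as lvm, as in the README
example):

    import cinderlib as cl

    vol = lvm.create_volume(size=1)

    # Stateless persistence: the caller keeps the JSON string...
    data = vol.jsons

    # ...and later rebuilds the cinderlib object from it. save=True would
    # additionally store it through the configured persistence plugin.
    vol = cl.load(data, save=False)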

cinderlib/_fake_packages.py
@@ -1,169 +0,0 @@
# Copyright (c) 2019, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Fake unnecessary packages
There are many packages that are automatically imported when loading cinder
modules, and are used for normal Cinder operation, but they are not necessary
for cinderlib's execution. One example of this happening is when cinderlib
loads a module to get configuration options but won't execute any of the code
present in that module.
This module fakes these packages providing the following benefits:
- Faster load times
- Reduced memory footprint
- Distributions can create a cinderlib package with fewer dependencies.
"""
try:
# Only present and needed in Python >= 3.4
from importlib import machinery
except ImportError:
pass
import logging
import sys
import types
from oslo_config import cfg
__all__ = ['faker']
PACKAGES = [
'glanceclient', 'novaclient', 'swiftclient', 'barbicanclient', 'cursive',
'keystoneauth1', 'keystonemiddleware', 'keystoneclient', 'castellan',
'oslo_reports', 'oslo_policy', 'oslo_messaging', 'osprofiler', 'paste',
'oslo_middleware', 'webob', 'pyparsing', 'routes', 'jsonschema', 'os_win',
'oauth2client', 'oslo_upgradecheck', 'googleapiclient', 'pastedeploy',
]
_DECORATOR_CLASSES = (types.FunctionType, types.MethodType)
LOG = logging.getLogger(__name__)
class _FakeObject(object):
"""Generic fake object: Iterable, Class, decorator, etc."""
def __init__(self, *args, **kwargs):
self.__key_value__ = {}
def __len__(self):
return len(self.__key_value__)
def __contains__(self, key):
return key in self.__key_value__
def __iter__(self):
return iter(self.__key_value__)
def __mro_entries__(self, bases):
return (self.__class__,)
def __setitem__(self, key, value):
self.__key_value__[key] = value
def _new_instance(self, class_name):
attrs = {'__module__': self.__module__ + '.' + self.__class__.__name__}
return type(class_name, (self.__class__,), attrs)()
# No need to define __class_getitem__, as __getitem__ has the priority
def __getitem__(self, key):
if key in self.__key_value__:
return self.__key_value__[key]
return self._new_instance(key)
def __getattr__(self, key):
return self._new_instance(key)
def __call__(self, *args, **kw):
# If we are a decorator return the method that we are decorating
if args and isinstance(args[0], _DECORATOR_CLASSES):
return args[0]
return self
def __repr__(self):
return self.__qualname__
class Faker(object):
"""Fake Finder and Loader for whole packages."""
def __init__(self, packages):
self.faked_modules = []
self.packages = packages
def _fake_module(self, name):
"""Dynamically create a module as close as possible to a real one."""
LOG.debug('Faking %s', name)
attributes = {
'__doc__': None,
'__name__': name,
'__file__': name,
'__loader__': self,
'__builtins__': __builtins__,
'__package__': name.rsplit('.', 1)[0] if '.' in name else None,
'__repr__': lambda self: self.__name__,
'__getattr__': lambda self, name: (
type(name, (_FakeObject,), {'__module__': self.__name__})()),
}
keys = ['__doc__', '__name__', '__file__', '__builtins__',
'__package__']
# Path only present at the package level
if '.' not in name:
attributes['__path__'] = [name]
keys.append('__path__')
# We only want to show some of our attributes
attributes.update(__dict__={k: attributes[k] for k in keys},
__dir__=lambda self: keys)
# Create the class and instantiate it
module_class = type(name, (types.ModuleType,), attributes)
self.faked_modules.append(name)
return module_class(name)
def find_module(self, fullname, path=None):
"""Find a module and return a Loader if it's one of ours or None."""
package = fullname.split('.')[0]
# If it's one of ours, then we are the loader
if package in self.packages:
return self
return None
def load_module(self, fullname):
"""Create a new Fake module if it's not already present."""
if fullname in sys.modules:
return sys.modules[fullname]
sys.modules[fullname] = self._fake_module(fullname)
return sys.modules[fullname]
def find_spec(self, fullname, path=None, target=None):
"""Return our spec it it's one of our packages or None."""
if self.find_module(fullname):
return machinery.ModuleSpec(fullname,
self,
is_package='.' not in fullname)
return None
def create_module(self, spec):
"""Fake a module."""
return self._fake_module(spec.name)
# cinder.quota_utils manually imports keystone_authtoken config group, so we
# create a fake one to avoid failure.
cfg.CONF.register_opts([cfg.StrOpt('fake')], group='keystone_authtoken')
# Create faker and add it to the list of Finders
faker = Faker(PACKAGES)
sys.meta_path.insert(0, faker)
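
To illustrate the mechanism, a hedged sketch of what the faker provides once
it is on sys.meta_path (the imported names below are fabricated on the fly by
the Faker, not real glanceclient code):

    import cinderlib._fake_packages  # noqa: importing registers the faker

    # The import machinery asks the Faker, which fabricates the modules.
    import glanceclient.v2.client

    # Attribute access and calls keep returning _FakeObject instances, so
    # module-level Cinder code that touches these packages loads fine.
    Client = glanceclient.v2.client.Client
    fake_instance = Client('http://example.com')  # callable, returns a fake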

cinderlib/bin/venv-privsep-helper
@@ -1,10 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copy of oslo.privsep's privsep-helper that uses the virtual env python
# instead of a hardcoded Python version
import re
import sys
from oslo_privsep.daemon import helper_main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(helper_main())

cinderlib/cinderlib.py
@@ -1,594 +0,0 @@
# Copyright (c) 2017, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import configparser
import glob
import json as json_lib
import logging
import multiprocessing
import os
import shutil
from cinder import coordination
from cinder.db import api as db_api
from cinder import objects as cinder_objects
# We need this here until we remove from cinder/volume/manager.py:
# VA_LIST = objects.VolumeAttachmentList
cinder_objects.register_all() # noqa
from cinder.interface import util as cinder_interface_util
import cinder.privsep
from cinder import utils
from cinder.volume import configuration
from cinder.volume import manager # noqa We need to import config options
import os_brick.privileged
from oslo_config import cfg
from oslo_log import log as oslo_logging
from oslo_privsep import priv_context
from oslo_utils import importutils
import urllib3
import cinderlib
from cinderlib import objects
from cinderlib import persistence
from cinderlib import serialization
from cinderlib import utils as cinderlib_utils
__all__ = ['setup', 'Backend']
LOG = logging.getLogger(__name__)
class Backend(object):
"""Representation of a Cinder Driver.
User facing attributes are:
- __init__
- json
- jsons
- load
- stats
- create_volume
- global_setup
- validate_connector
"""
backends = {}
global_initialization = False
# Some drivers try to access the DB directly for extra specs on creation.
# With this dictionary the DB class can get the necessary data
_volumes_inflight = {}
def __new__(cls, volume_backend_name, **driver_cfg):
# Prevent redefinition of an already initialized backend on the same
# persistence storage with a different configuration.
backend = Backend.backends.get(volume_backend_name)
if backend:
# If we are instantiating the same backend return the one we have
# saved (singleton pattern).
if driver_cfg == backend._original_driver_cfg:
return backend
raise ValueError('Backend named %s already exists with a different'
' configuration' % volume_backend_name)
return super(Backend, cls).__new__(cls)
def __init__(self, volume_backend_name, **driver_cfg):
if not self.global_initialization:
self.global_setup()
# Instance already initialized
if volume_backend_name in Backend.backends:
return
# Save the original config before we add the backend name and template
# the values.
self._original_driver_cfg = driver_cfg.copy()
driver_cfg['volume_backend_name'] = volume_backend_name
conf = self._get_backend_config(driver_cfg)
self._apply_backend_workarounds(conf)
self.driver = importutils.import_object(
conf.volume_driver,
configuration=conf,
db=self.persistence.db,
host='%s@%s' % (cfg.CONF.host, volume_backend_name),
cluster_name=None, # We don't use cfg.CONF.cluster for now
active_backend_id=None) # No failover for now
# do_setup and check_for_setup_error were merged into setup in Yoga.
# First try the old interface, and if it fails, try the new one.
try:
self.driver.do_setup(objects.CONTEXT)
self.driver.check_for_setup_error()
except AttributeError:
self.driver.setup(objects.CONTEXT)
self.driver.init_capabilities()
self.driver.set_throttle()
self.driver.set_initialized()
self._driver_cfg = driver_cfg
self._volumes = None
# Some drivers don't implement the caching correctly. Populate cache
# with data retrieved in init_capabilities.
stats = self.driver.capabilities.copy()
stats.pop('properties', None)
stats.pop('vendor_prefix', None)
self._stats = self._transform_legacy_stats(stats)
self._pool_names = tuple(pool['pool_name'] for pool in stats['pools'])
Backend.backends[volume_backend_name] = self
@property
def pool_names(self):
return self._pool_names
def __repr__(self):
return '<cinderlib.Backend %s>' % self.id
def __getattr__(self, name):
return getattr(self.driver, name)
@property
def id(self):
return self._driver_cfg['volume_backend_name']
@property
def volumes(self):
if self._volumes is None:
self._volumes = self.persistence.get_volumes(backend_name=self.id)
return self._volumes
def volumes_filtered(self, volume_id=None, volume_name=None):
return self.persistence.get_volumes(backend_name=self.id,
volume_id=volume_id,
volume_name=volume_name)
def _transform_legacy_stats(self, stats):
"""Convert legacy stats to new stats with pools key."""
# Fill pools for legacy driver reports
if stats and 'pools' not in stats:
pool = stats.copy()
pool['pool_name'] = self.id
for key in ('driver_version', 'shared_targets',
'sparse_copy_volume', 'storage_protocol',
'vendor_name', 'volume_backend_name'):
pool.pop(key, None)
stats['pools'] = [pool]
return stats
def stats(self, refresh=False):
# Some drivers don't implement the caching correctly, so we implement
# it ourselves.
if refresh:
stats = self.driver.get_volume_stats(refresh=refresh)
self._stats = self._transform_legacy_stats(stats)
return self._stats
def create_volume(self, size, name='', description='', bootable=False,
**kwargs):
vol = objects.Volume(self, size=size, name=name,
description=description, bootable=bootable,
**kwargs)
vol.create()
return vol
def _volume_removed(self, volume):
i, vol = cinderlib_utils.find_by_id(volume.id, self._volumes)
if vol:
del self._volumes[i]
@classmethod
def _start_creating_volume(cls, volume):
cls._volumes_inflight[volume.id] = volume
def _volume_created(self, volume):
if self._volumes is not None:
self._volumes.append(volume)
self._volumes_inflight.pop(volume.id, None)
def validate_connector(self, connector_dict):
"""Raise exception if missing info for volume's connect call."""
self.driver.validate_connector(connector_dict)
@classmethod
def set_persistence(cls, persistence_config):
if not hasattr(cls, 'project_id'):
raise Exception('set_persistence can only be called after '
'cinderlib has been configured')
cls.persistence = persistence.setup(persistence_config)
objects.setup(cls.persistence, Backend, cls.project_id, cls.user_id,
cls.non_uuid_ids)
for backend in cls.backends.values():
backend.driver.db = cls.persistence.db
# Replace the standard DB implementation instance with the one from
# the persistence plugin.
db_api.IMPL = cls.persistence.db
@classmethod
def _set_cinder_config(cls, host, locks_path, cinder_config_params):
"""Setup the parser with all the known Cinder configuration."""
cfg.CONF.set_default('state_path', os.getcwd())
cfg.CONF.set_default('lock_path', '$state_path', 'oslo_concurrency')
cfg.CONF.version = cinderlib.__version__
if locks_path:
cfg.CONF.oslo_concurrency.lock_path = locks_path
cfg.CONF.coordination.backend_url = 'file://' + locks_path
if host:
cfg.CONF.host = host
cls._validate_and_set_options(cinder_config_params)
# Replace command line arg parser so we ignore caller's args
cfg._CachedArgumentParser.parse_args = lambda *a, **kw: None
@classmethod
def _validate_and_set_options(cls, kvs, group=None):
"""Validate options and substitute references."""
# Dynamically loading the driver triggers adding the specific
# configuration options to the backend_defaults section
if kvs.get('volume_driver'):
driver_ns = kvs['volume_driver'].rsplit('.', 1)[0]
__import__(driver_ns)
group = group or 'backend_defaults'
for k, v in kvs.items():
try:
# set_override does the validation
cfg.CONF.set_override(k, v, group)
except cfg.NoSuchOptError:
# RBD keyring may be removed from the Cinder RBD driver, but
# the functionality will remain for cinderlib usage only, so we
# do the validation manually in that case.
# NOTE: Templating won't work on the rbd_keyring_conf, but it's
# unlikely to be needed.
if k == 'rbd_keyring_conf':
if v and not isinstance(v, str):
raise ValueError('%s must be a string' % k)
else:
# Don't fail on unknown variables, behave like cinder
LOG.warning('Unknown config option %s', k)
oslo_group = getattr(cfg.CONF, str(group), cfg.CONF)
# Now that we have validated/templated everything set updated values
for k, v in kvs.items():
kvs[k] = getattr(oslo_group, k, v)
# For global configuration we leave the overrides, but for drivers we
# don't, to prevent cross-driver config pollination. The cfg will be
# set as an attribute of the configuration that's passed to the driver.
if group:
for k in kvs.keys():
try:
cfg.CONF.clear_override(k, group, clear_cache=True)
except cfg.NoSuchOptError:
pass
def _get_backend_config(self, driver_cfg):
# Create the group for the backend
backend_name = driver_cfg['volume_backend_name']
cfg.CONF.register_group(cfg.OptGroup(backend_name))
# Validate and set config options
self._validate_and_set_options(driver_cfg)
backend_group = getattr(cfg.CONF, backend_name)
for key, value in driver_cfg.items():
setattr(backend_group, key, value)
# Return the Configuration that will be passed to the driver
config = configuration.Configuration([], config_group=backend_name)
return config
@classmethod
def global_setup(cls, file_locks_path=None, root_helper='sudo',
suppress_requests_ssl_warnings=True, disable_logs=True,
non_uuid_ids=False, output_all_backend_info=False,
project_id=None, user_id=None, persistence_config=None,
fail_on_missing_backend=True, host=None,
**cinder_config_params):
# Global setup can only be set once
if cls.global_initialization:
raise Exception('Already setup')
cls.im_root = os.getuid() == 0
cls.fail_on_missing_backend = fail_on_missing_backend
cls.project_id = project_id
cls.user_id = user_id
cls.non_uuid_ids = non_uuid_ids
cls.set_persistence(persistence_config)
cls._set_cinder_config(host, file_locks_path, cinder_config_params)
serialization.setup(cls)
cls._set_logging(disable_logs)
cls._set_priv_helper(root_helper)
coordination.COORDINATOR.start()
if suppress_requests_ssl_warnings:
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
urllib3.disable_warnings(
urllib3.exceptions.InsecurePlatformWarning)
cls.global_initialization = True
cls.output_all_backend_info = output_all_backend_info
def _apply_backend_workarounds(self, config):
"""Apply workarounds for drivers that do bad stuff."""
if 'netapp' in config.volume_driver:
# Workaround NetApp's weird replication stuff that makes it reload
# config sections in get_backend_configuration. OK since we don't
# support replication.
cfg.CONF.list_all_sections = lambda: config.volume_backend_name
@classmethod
def _set_logging(cls, disable_logs):
if disable_logs:
logging.Logger.disabled = property(lambda s: True,
lambda s, x: None)
return
oslo_logging.setup(cfg.CONF, 'cinder')
logging.captureWarnings(True)
@classmethod
def _set_priv_helper(cls, root_helper):
# If we are using a virtual environment then the rootwrap config files
# Should be within the environment and not under /etc/cinder/
venv = os.environ.get('VIRTUAL_ENV')
if (venv and not cfg.CONF.rootwrap_config.startswith(venv) and
not os.path.exists(cfg.CONF.rootwrap_config)):
# We need to remove the absolute path (initial '/') to generate the
# config path under the virtualenv
# for the join to work.
wrap_path = cfg.CONF.rootwrap_config[1:]
venv_wrap_file = os.path.join(venv, wrap_path)
venv_wrap_dir = os.path.dirname(venv_wrap_file)
# In virtual environments our rootwrap config file is no longer
# '/etc/cinder/rootwrap.conf'. We have 2 possible roots: either the
# virtualenv's directory, or where our sources are if we have
# installed cinder as editable.
# For editable we need to copy the files into the virtualenv if we
# haven't copied them before.
if not utils.__file__.startswith(venv):
# If we haven't copied the files yet
if not os.path.exists(venv_wrap_file):
editable_link = glob.glob(os.path.join(
venv, 'lib/python*/site-packages/cinder.egg-link'))
with open(editable_link[0], 'r') as f:
cinder_source_path = f.read().split('\n')[0]
cinder_source_etc = os.path.join(cinder_source_path,
'etc/cinder')
shutil.copytree(cinder_source_etc, venv_wrap_dir)
# For venvs we need to update configured filters_path and exec_dirs
parser = configparser.ConfigParser()
parser.read(venv_wrap_file)
# Change contents if we haven't done it already
if not parser['DEFAULT']['filters_path'].startswith(venv_wrap_dir):
parser['DEFAULT']['filters_path'] = os.path.join(venv_wrap_dir,
'rootwrap.d')
parser['DEFAULT']['exec_dirs'] = (
os.path.join(venv, 'bin,') +
parser['DEFAULT']['exec_dirs'])
with open(venv_wrap_file, 'w') as f:
parser.write(f)
# Don't use set_override because it doesn't work as it should
cfg.CONF.rootwrap_config = venv_wrap_file
# The default root helper in Cinder and privsep is sudo, so
# nothing to do in those cases.
if root_helper != 'sudo':
# Get the current helper (usually 'sudo cinder-rootwrap
# <CONF.rootwrap_config>') and replace the sudo part
original_helper = utils.get_root_helper()
# If we haven't already set the helper
if root_helper not in original_helper:
new_helper = original_helper.replace('sudo', root_helper)
utils.get_root_helper = lambda: new_helper
# Initialize privsep's context to not use 'sudo'
priv_context.init(root_helper=[root_helper])
# When using privsep from the system we need to replace the
# privsep-helper with our own to use the virtual env libraries.
if venv and not priv_context.__file__.startswith(venv):
# Use importlib.resources to support PEP 302-based import hooks. We
# can only rely on stdlib importlib.resources on Python >= 3.10: the
# module was added in 3.7 and files() in 3.9, but namespace package
# support only arrived in 3.10.
import sys
if sys.version_info[:2] >= (3, 10):
from importlib.resources import files
else:
from importlib_resources import files
privhelper = files('cinderlib.bin').joinpath('venv-privsep-helper')
cmd = f'{root_helper} {privhelper}'
# Change default of the option instead of the value of the
# different contexts
for opt in priv_context.OPTS:
if opt.name == 'helper_command':
opt.default = cmd
break
# Don't use server/client mode when running as root
client_mode = not cls.im_root
cinder.privsep.sys_admin_pctxt.set_client_mode(client_mode)
os_brick.privileged.default.set_client_mode(client_mode)
@property
def config(self):
if self.output_all_backend_info:
return self._driver_cfg
return {'volume_backend_name': self._driver_cfg['volume_backend_name']}
def _serialize(self, property_name):
result = [getattr(volume, property_name) for volume in self.volumes]
# We only need to output the full backend configuration once
if self.output_all_backend_info:
backend = {'volume_backend_name': self.id}
for volume in result:
volume['backend'] = backend
return {'class': type(self).__name__,
'backend': self.config,
'volumes': result}
@property
def json(self):
return self._serialize('json')
@property
def dump(self):
return self._serialize('dump')
@property
def jsons(self):
return json_lib.dumps(self.json)
@property
def dumps(self):
return json_lib.dumps(self.dump)
@classmethod
def load(cls, json_src, save=False):
backend = Backend.load_backend(json_src['backend'])
volumes = json_src.get('volumes')
if volumes:
backend._volumes = [objects.Volume.load(v, save) for v in volumes]
return backend
@classmethod
def load_backend(cls, backend_data):
backend_name = backend_data['volume_backend_name']
if backend_name in cls.backends:
return cls.backends[backend_name]
if len(backend_data) > 1:
return cls(**backend_data)
if cls.fail_on_missing_backend:
raise Exception('Backend not present in system or json.')
return backend_name
def refresh(self):
if self._volumes is not None:
self._volumes = None
self.volumes
@staticmethod
def list_supported_drivers(output_version=1):
"""Returns dictionary with driver classes names as keys.
The output of the method changes from version to version, so we can
pass the output_version parameter to specify which version we are
expecting.
Version 1: Original output intended for human consumption, where all
dictionary values are strings.
Version 2: Improved version intended for automated consumption.
- type is now a dictionary with detailed information
- Values retain their types, so we'll no longer get 'None'
or 'False'.
"""
def get_vars(obj):
return {k: v for k, v in vars(obj).items()
if not k.startswith('_')}
def get_strs(obj):
return {k: str(v) for k, v in vars(obj).items()
if not k.startswith('_')}
def convert_oslo_config(oslo_option, output_version):
if output_version != 2:
return get_strs(oslo_option)
res = get_vars(oslo_option)
type_class = res['type']
res['type'] = get_vars(oslo_option.type)
res['type']['type_class'] = type_class
return res
def fix_cinderlib_options(driver_dict, output_version):
# The rbd_keyring_conf option is deprecated and will be removed for
# Cinder, because it's a security vulnerability there (OSSN-0085),
# but it isn't for cinderlib, since the user of the library already
# has access to all the credentials, and cinderlib needs it to work
# with RBD, so we need to make sure that the config option is
# there whether it's reported as deprecated or removed from Cinder.
RBD_KEYRING_CONF = cfg.StrOpt('rbd_keyring_conf',
default='',
help='Path to the ceph keyring file')
if driver_dict['class_name'] != 'RBDDriver':
return
rbd_opt = convert_oslo_config(RBD_KEYRING_CONF, output_version)
for opt in driver_dict['driver_options']:
if opt['dest'] == 'rbd_keyring_conf':
opt.clear()
opt.update(rbd_opt)
break
else:
driver_dict['driver_options'].append(rbd_opt)
def list_drivers(queue, output_version):
cwd = os.getcwd()
# Go to the parent directory where Cinder is installed
os.chdir(utils.__file__.rsplit(os.sep, 2)[0])
try:
drivers = cinder_interface_util.get_volume_drivers()
mapping = {d.class_name: vars(d) for d in drivers}
for driver in mapping.values():
driver.pop('cls', None)
if 'driver_options' in driver:
driver['driver_options'] = [
convert_oslo_config(opt, output_version)
for opt in driver['driver_options']
]
fix_cinderlib_options(driver, output_version)
finally:
os.chdir(cwd)
queue.put(mapping)
if not (1 <= output_version <= 2):
raise ValueError('Acceptable versions are 1 and 2')
# Use a different process to avoid having all driver classes loaded in
# memory during our execution.
queue = multiprocessing.Queue()
p = multiprocessing.Process(target=list_drivers,
args=(queue, output_version))
p.start()
result = queue.get()
p.join()
return result
setup = Backend.global_setup
# Used by serialization.load
objects.Backend = Backend
# Needed if we use serialization.load before initializing cinderlib
objects.Object.backend_class = Backend
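
A hedged sketch of consuming the versioned list_supported_drivers output
described above (key names follow the code in this module):

    import cinderlib as cl

    # Version 2 output keeps native Python types and expands each option's
    # type into a detailed dictionary.
    drivers = cl.list_supported_drivers(output_version=2)
    for class_name, info in drivers.items():
        option_names = [opt['dest'] for opt in info.get('driver_options', [])]
        print(class_name, len(option_names))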

@@ -1,65 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2017, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Generate Python code to initialize cinderlib based on Cinder config file
This tool generates Python code to instantiate backends using a cinder.conf
file.
It supports multiple backends as defined in enabled_backends.
This program uses the oslo.config module to load configuration options instead
of using configparser directly because drivers will need variables to have the
right type (string, list, integer...), and the types are defined in the code
using oslo.config.
cinder-cfg-to-python cinder.conf cinderlib-conf.py
If no output is provided it will use stdout, and if we also don't provide an
input file, it will default to /etc/cinder/cinder.conf.
"""
import sys
from cinderlib.cmd import cinder_to_yaml
def _to_str(value):
if isinstance(value, str):
return '"' + value + '"'
return value
def convert(source, dest):
config = cinder_to_yaml.convert(source)
result = ['import cinderlib as cl']
for backend in config['backends']:
name = backend['volume_backend_name']
name = name.replace(' ', '_').replace('-', '_')
cfg = ', '.join('%s=%s' % (k, _to_str(v)) for k, v in backend.items())
result.append('%s = cl.Backend(%s)' % (name, cfg))
with open(dest, 'w') as f:
f.write('\n\n'.join(result) + '\n')
def main():
source = '/etc/cinder/cinder.conf' if len(sys.argv) < 2 else sys.argv[1]
dest = '/dev/stdout' if len(sys.argv) < 3 else sys.argv[2]
convert(source, dest)
if __name__ == '__main__':
main()
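
For illustration, with a cinder.conf defining a single LVM backend the
generated script would look roughly like this (option names and values depend
entirely on the input file; these are illustrative):

    import cinderlib as cl

    lvm = cl.Backend(volume_backend_name="lvm", volume_driver="cinder.volume.drivers.lvm.LVMVolumeDriver", volume_group="cinder-volumes")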

cinderlib/cmd/cinder_to_yaml.py
@@ -1,70 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os import path
import yaml
import configparser
from cinder.cmd import volume
volume.objects.register_all() # noqa
from cinder.volume import configuration as config
from cinder.volume import manager
def convert(cinder_source, yaml_dest=None):
result_cfgs = []
if not path.exists(cinder_source):
raise Exception("Cinder config file %s doesn't exist" % cinder_source)
# Manually parse the Cinder configuration file so we know which options are
# set.
parser = configparser.ConfigParser()
parser.read(cinder_source)
enabled_backends = parser.get('DEFAULT', 'enabled_backends')
backends = [name.strip() for name in enabled_backends.split(',') if name]
volume.CONF(('--config-file', cinder_source), project='cinder')
for backend in backends:
options_present = parser.options(backend)
# Dynamically loading the driver triggers adding the specific
# configuration options to the backend_defaults section
cfg = config.Configuration(manager.volume_backend_opts,
config_group=backend)
driver_ns = cfg.volume_driver.rsplit('.', 1)[0]
__import__(driver_ns)
# Use the backend_defaults section to extract the configuration for
# options that are present in the backend section and add them to
# the backend section.
opts = volume.CONF._groups['backend_defaults']._opts
known_present_options = [opt for opt in options_present if opt in opts]
volume_opts = [opts[option]['opt'] for option in known_present_options]
cfg.append_config_values(volume_opts)
# Now retrieve the options that are set in the configuration file.
result_cfgs.append({option: cfg.safe_get(option)
for option in known_present_options})
result = {'backends': result_cfgs}
if yaml_dest:
# Write the YAML to the destination
with open(yaml_dest, 'w') as f:
yaml.dump(result, f)
return result
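
A hedged usage sketch (assuming each backend section in the input file sets
volume_backend_name):

    from cinderlib.cmd import cinder_to_yaml

    # Parse /etc/cinder/cinder.conf; passing a second argument would also
    # write the result out as YAML.
    config = cinder_to_yaml.convert('/etc/cinder/cinder.conf')
    for backend in config['backends']:
        print(backend['volume_backend_name'])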

cinderlib/exception.py
@@ -1,37 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cinder import exception
NotFound = exception.NotFound
VolumeNotFound = exception.VolumeNotFound
SnapshotNotFound = exception.SnapshotNotFound
ConnectionNotFound = exception.VolumeAttachmentNotFound
InvalidVolume = exception.InvalidVolume
class InvalidPersistence(Exception):
__msg = 'Invalid persistence storage: %s.'
def __init__(self, name):
super(InvalidPersistence, self).__init__(self.__msg % name)
class NotLocal(Exception):
__msg = "Volume %s doesn't seem to be attached locally."
def __init__(self, name):
super(NotLocal, self).__init__(self.__msg % name)
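
A hedged sketch of how callers typically used these exceptions (vol stands in
for an attached cinderlib Volume object):

    from cinderlib import exception

    try:
        vol.detach()
    except exception.NotLocal:
        # The volume is not attached on this host, nothing to detach.
        pass
    except exception.InvalidVolume:
        # The volume is in a state that does not allow the operation.
        pass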

cinderlib/objects.py
@@ -1,996 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json as json_lib
import sys
import uuid
from cinder import context
from cinder import exception as cinder_exception
from cinder import objects as cinder_objs
from cinder.objects import base as cinder_base_ovo
from cinder.volume import volume_utils as volume_utils
from os_brick import exception as brick_exception
from os_brick import initiator as brick_initiator
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import timeutils
from cinderlib import exception
from cinderlib import utils
LOG = logging.getLogger(__name__)
DEFAULT_PROJECT_ID = 'cinderlib'
DEFAULT_USER_ID = 'cinderlib'
BACKEND_NAME_SNAPSHOT_FIELD = 'progress'
CONNECTIONS_OVO_FIELD = 'volume_attachment'
GB = 1024 ** 3
# This cannot go in the setup method because cinderlib object classes need
# the OVOs registered at definition time to set OVO_CLASS
cinder_objs.register_all()
class KeyValue(object):
def __init__(self, key=None, value=None):
self.key = key
self.value = value
def __eq__(self, other):
return (self.key, self.value) == (other.key, other.value)
class Object(object):
"""Base class for our resource representation objects."""
SIMPLE_JSON_IGNORE = tuple()
DEFAULT_FIELDS_VALUES = {}
LAZY_PROPERTIES = tuple()
backend_class = None
CONTEXT = context.RequestContext(user_id=DEFAULT_USER_ID,
project_id=DEFAULT_PROJECT_ID,
is_admin=True,
overwrite=False)
def _get_backend(self, backend_name_or_obj):
if isinstance(backend_name_or_obj, str):
try:
return self.backend_class.backends[backend_name_or_obj]
except KeyError:
if self.backend_class.fail_on_missing_backend:
raise
return backend_name_or_obj
def __init__(self, backend, **fields_data):
self.backend = self._get_backend(backend)
__ovo = fields_data.get('__ovo')
if __ovo:
self._ovo = __ovo
else:
self._ovo = self._create_ovo(**fields_data)
# Store a reference to the cinderlib obj in the OVO for serialization
self._ovo._cl_obj_ = self
@classmethod
def setup(cls, persistence_driver, backend_class, project_id, user_id,
non_uuid_ids):
cls.persistence = persistence_driver
cls.backend_class = backend_class
# Set the global context if we aren't using the default
project_id = project_id or DEFAULT_PROJECT_ID
user_id = user_id or DEFAULT_USER_ID
if (project_id != cls.CONTEXT.project_id or
user_id != cls.CONTEXT.user_id):
cls.CONTEXT.user_id = user_id
cls.CONTEXT.project_id = project_id
Volume.DEFAULT_FIELDS_VALUES['user_id'] = user_id
Volume.DEFAULT_FIELDS_VALUES['project_id'] = project_id
# Configure OVOs to support non_uuid_ids
if non_uuid_ids:
for ovo_name in cinder_base_ovo.CinderObjectRegistry.obj_classes():
ovo_cls = getattr(cinder_objs, ovo_name)
if 'id' in ovo_cls.fields:
ovo_cls.fields['id'] = cinder_base_ovo.fields.StringField()
def _to_primitive(self):
"""Return custom cinderlib data for serialization."""
return None
def _create_ovo(self, **fields_data):
# The base are the default values we define on our own classes
fields_values = self.DEFAULT_FIELDS_VALUES.copy()
# Apply the values defined by the caller
fields_values.update(fields_data)
# We support manually setting the id, so set only if not already set
# or if set to None
if not fields_values.get('id'):
fields_values['id'] = self.new_uuid()
# Set non set field values based on OVO's default value and on whether
# it is nullable or not.
for field_name, field in self.OVO_CLASS.fields.items():
if field.default != cinder_base_ovo.fields.UnspecifiedDefault:
fields_values.setdefault(field_name, field.default)
elif field.nullable:
fields_values.setdefault(field_name, None)
if ('created_at' in self.OVO_CLASS.fields and
not fields_values.get('created_at')):
fields_values['created_at'] = timeutils.utcnow()
return self.OVO_CLASS(context=self.CONTEXT, **fields_values)
@property
def json(self):
return self.to_json(simplified=False)
def to_json(self, simplified=True):
visited = set()
if simplified:
for field in self.SIMPLE_JSON_IGNORE:
if self._ovo.obj_attr_is_set(field):
visited.add(id(getattr(self._ovo, field)))
ovo = self._ovo.obj_to_primitive(visited=visited)
return {'class': type(self).__name__,
# If no driver loaded, just return the name of the backend
'backend': getattr(self.backend, 'config',
{'volume_backend_name': self.backend}),
'ovo': ovo}
@property
def jsons(self):
return self.to_jsons(simplified=False)
def to_jsons(self, simplified=True):
json_data = self.to_json(simplified)
return json_lib.dumps(json_data, separators=(',', ':'))
def _only_ovo_data(self, ovo):
if isinstance(ovo, dict):
if 'versioned_object.data' in ovo:
value = ovo['versioned_object.data']
if list(value.keys()) == ['objects']:
return self._only_ovo_data(value['objects'])
key = ovo['versioned_object.name'].lower()
return {key: self._only_ovo_data(value)}
for key in ovo.keys():
ovo[key] = self._only_ovo_data(ovo[key])
if isinstance(ovo, list) and ovo:
return [self._only_ovo_data(e) for e in ovo]
return ovo
def to_dict(self):
json_ovo = self.json
return self._only_ovo_data(json_ovo['ovo'])
@property
def dump(self):
# Make sure we load lazy loading properties
for lazy_property in self.LAZY_PROPERTIES:
getattr(self, lazy_property)
return self.json
@property
def dumps(self):
return json_lib.dumps(self.dump, separators=(',', ':'))
def __repr__(self):
backend = self.backend
if isinstance(self.backend, self.backend_class):
backend = backend.id
return ('<cinderlib.%s object %s on backend %s>' %
(type(self).__name__, self.id, backend))
@classmethod
def load(cls, json_src, save=False):
backend = cls.backend_class.load_backend(json_src['backend'])
ovo = cinder_base_ovo.CinderObject.obj_from_primitive(json_src['ovo'],
cls.CONTEXT)
return cls._load(backend, ovo, save=save)
@staticmethod
def new_uuid():
return str(uuid.uuid4())
def __getattr__(self, name):
if name == '_ovo':
raise AttributeError('Attribute _ovo is not yet set')
return getattr(self._ovo, name)
def _raise_with_resource(self):
exc_info = sys.exc_info()
exc_info[1].resource = self
if exc_info[1].__traceback__ is not exc_info[2]:
raise exc_info[1].with_traceback(exc_info[2])
raise exc_info[1]
class NamedObject(Object):
def __init__(self, backend, **fields_data):
if 'description' in fields_data:
fields_data['display_description'] = fields_data.pop('description')
if 'name' in fields_data:
fields_data['display_name'] = fields_data.pop('name')
super(NamedObject, self).__init__(backend, **fields_data)
@property
def name(self):
return self._ovo.display_name
@property
def description(self):
return self._ovo.display_description
@property
def name_in_storage(self):
return self._ovo.name
class LazyVolumeAttr(object):
LAZY_PROPERTIES = ('volume',)
_volume = None
def __init__(self, volume):
if volume:
self._volume = volume
# Ensure circular reference is set
self._ovo.volume = volume._ovo
self._ovo.volume_id = volume._ovo.id
elif self._ovo.obj_attr_is_set('volume'):
self._volume = Volume._load(self.backend, self._ovo.volume)
@property
def volume(self):
# Lazy loading
if self._volume is None:
self._volume = Volume.get_by_id(self.volume_id)
self._ovo.volume = self._volume._ovo
return self._volume
@volume.setter
def volume(self, value):
self._volume = value
self._ovo.volume = value._ovo
def refresh(self):
last_self = self.get_by_id(self.id)
if self._volume is not None:
last_self.volume  # trigger lazy loading before we copy the attributes
vars(self).clear()
vars(self).update(vars(last_self))
class Volume(NamedObject):
OVO_CLASS = cinder_objs.Volume
SIMPLE_JSON_IGNORE = ('snapshots', 'volume_attachment')
DEFAULT_FIELDS_VALUES = {
'size': 1,
'user_id': Object.CONTEXT.user_id,
'project_id': Object.CONTEXT.project_id,
'status': 'creating',
'attach_status': 'detached',
'metadata': {},
'admin_metadata': {},
'glance_metadata': {},
}
LAZY_PROPERTIES = ('snapshots', 'connections')
_ignore_keys = ('id', CONNECTIONS_OVO_FIELD, 'snapshots', 'volume_type')
def __init__(self, backend_or_vol, pool_name=None, **kwargs):
# Accept backend name for convenience
if isinstance(backend_or_vol, str):
backend_name = backend_or_vol
backend_or_vol = self._get_backend(backend_or_vol)
elif isinstance(backend_or_vol, self.backend_class):
backend_name = backend_or_vol.id
elif isinstance(backend_or_vol, Volume):
backend_str, pool = backend_or_vol._ovo.host.split('#')
backend_name = backend_str.split('@')[-1]
pool_name = pool_name or pool
for key in backend_or_vol._ovo.fields:
if (backend_or_vol._ovo.obj_attr_is_set(key) and
key not in self._ignore_keys):
kwargs.setdefault(key, getattr(backend_or_vol._ovo, key))
if backend_or_vol.volume_type:
kwargs.setdefault('extra_specs',
backend_or_vol.volume_type.extra_specs)
if backend_or_vol.volume_type.qos_specs:
kwargs.setdefault(
'qos_specs',
backend_or_vol.volume_type.qos_specs.specs)
backend_or_vol = backend_or_vol.backend
if '__ovo' not in kwargs:
kwargs[CONNECTIONS_OVO_FIELD] = (
cinder_objs.VolumeAttachmentList(context=self.CONTEXT))
kwargs['snapshots'] = (
cinder_objs.SnapshotList(context=self.CONTEXT))
self._snapshots = []
self._connections = []
qos_specs = kwargs.pop('qos_specs', None)
extra_specs = kwargs.pop('extra_specs', {})
super(Volume, self).__init__(backend_or_vol, **kwargs)
self._populate_data()
self.local_attach = None
# If we overwrote the host, then we ignore pool_name and don't set a
# default value or copy the one from the source either.
if 'host' not in kwargs and '__ovo' not in kwargs:
# TODO(geguileo): Add pool support
pool_name = pool_name or backend_or_vol.pool_names[0]
self._ovo.host = ('%s@%s#%s' %
(cfg.CONF.host, backend_name, pool_name))
if qos_specs or extra_specs:
if qos_specs:
qos_specs = cinder_objs.QualityOfServiceSpecs(
id=self.id, name=self.id,
consumer='back-end', specs=qos_specs)
qos_specs_id = self.id
else:
qos_specs = qos_specs_id = None
self._ovo.volume_type = cinder_objs.VolumeType(
context=self.CONTEXT,
is_public=True,
id=self.id,
name=self.id,
qos_specs_id=qos_specs_id,
extra_specs=extra_specs,
qos_specs=qos_specs)
self._ovo.volume_type_id = self.id
@property
def snapshots(self):
# Lazy loading
if self._snapshots is None:
self._snapshots = self.persistence.get_snapshots(volume_id=self.id)
for snap in self._snapshots:
snap.volume = self
ovos = [snap._ovo for snap in self._snapshots]
self._ovo.snapshots = cinder_objs.SnapshotList(objects=ovos)
self._ovo.obj_reset_changes(('snapshots',))
return self._snapshots
@property
def connections(self):
# Lazy loading
if self._connections is None:
# Check if the driver has already lazy loaded it using OVOs
if self._ovo.obj_attr_is_set(CONNECTIONS_OVO_FIELD):
conns = [Connection(None, volume=self, __ovo=ovo)
for ovo
in getattr(self._ovo, CONNECTIONS_OVO_FIELD).objects]
# Retrieve data from persistence storage
else:
conns = self.persistence.get_connections(volume_id=self.id)
for conn in conns:
conn.volume = self
ovos = [conn._ovo for conn in conns]
setattr(self._ovo, CONNECTIONS_OVO_FIELD,
cinder_objs.VolumeAttachmentList(objects=ovos))
self._ovo.obj_reset_changes((CONNECTIONS_OVO_FIELD,))
self._connections = conns
return self._connections
@classmethod
def get_by_id(cls, volume_id):
result = cls.persistence.get_volumes(volume_id=volume_id)
if not result:
raise exception.VolumeNotFound(volume_id=volume_id)
return result[0]
@classmethod
def get_by_name(cls, volume_name):
return cls.persistence.get_volumes(volume_name=volume_name)
def _populate_data(self):
if self._ovo.obj_attr_is_set('snapshots'):
self._snapshots = []
for snap_ovo in self._ovo.snapshots:
# Set circular reference
snap_ovo.volume = self._ovo
Snapshot._load(self.backend, snap_ovo, self)
else:
self._snapshots = None
if self._ovo.obj_attr_is_set(CONNECTIONS_OVO_FIELD):
self._connections = []
for conn_ovo in getattr(self._ovo, CONNECTIONS_OVO_FIELD):
# Set circular reference
conn_ovo.volume = self._ovo
Connection._load(self.backend, conn_ovo, self)
else:
self._connections = None
@classmethod
def _load(cls, backend, ovo, save=None):
vol = cls(backend, __ovo=ovo)
if save:
vol.save()
if vol._snapshots:
for s in vol._snapshots:
s.obj_reset_changes()
s.save()
if vol._connections:
for c in vol._connections:
c.obj_reset_changes()
c.save()
return vol
def create(self):
self.backend._start_creating_volume(self)
try:
model_update = self.backend.driver.create_volume(self._ovo)
self._ovo.status = 'available'
if model_update:
self._ovo.update(model_update)
self.backend._volume_created(self)
except Exception:
self._ovo.status = 'error'
self._raise_with_resource()
finally:
self.save()
def _snapshot_removed(self, snapshot):
# The snapshot instance in memory could be out of sync and not be
# identical, so check by ID.
i, snap = utils.find_by_id(snapshot.id, self._snapshots)
if snap:
del self._snapshots[i]
i, ovo = utils.find_by_id(snapshot.id, self._ovo.snapshots.objects)
if ovo:
del self._ovo.snapshots.objects[i]
def _connection_removed(self, connection):
# The connection instance in memory could be out of sync and not be
# identical, so check by ID.
i, conn = utils.find_by_id(connection.id, self._connections)
if conn:
del self._connections[i]
ovo_conns = getattr(self._ovo, CONNECTIONS_OVO_FIELD).objects
i, ovo_conn = utils.find_by_id(connection.id, ovo_conns)
if ovo_conn:
del ovo_conns[i]
def delete(self):
if self.snapshots:
msg = 'Cannot delete volume %s with snapshots' % self.id
raise exception.InvalidVolume(reason=msg)
try:
self.backend.driver.delete_volume(self._ovo)
self.persistence.delete_volume(self)
self.backend._volume_removed(self)
self._ovo.status = 'deleted'
except Exception:
self._ovo.status = 'error_deleting'
self.save()
self._raise_with_resource()
def extend(self, size):
volume = self._ovo
volume.previous_status = volume.status
volume.status = 'extending'
try:
self.backend.driver.extend_volume(volume, size)
volume.size = size
volume.status = volume.previous_status
volume.previous_status = None
except Exception:
volume.status = 'error'
self._raise_with_resource()
finally:
self.save()
if volume.status == 'in-use' and self.local_attach:
return self.local_attach.extend()
# Must return size in bytes
return size * GB
def clone(self, **new_vol_attrs):
new_vol_attrs['source_volid'] = self.id
new_vol = Volume(self, **new_vol_attrs)
self.backend._start_creating_volume(new_vol)
try:
model_update = self.backend.driver.create_cloned_volume(
new_vol._ovo, self._ovo)
new_vol._ovo.status = 'available'
if model_update:
new_vol._ovo.update(model_update)
self.backend._volume_created(new_vol)
except Exception:
new_vol._ovo.status = 'error'
new_vol._raise_with_resource()
finally:
new_vol.save()
return new_vol
def create_snapshot(self, name='', description='', **kwargs):
snap = Snapshot(self, name=name, description=description, **kwargs)
try:
snap.create()
finally:
if self._snapshots is not None:
self._snapshots.append(snap)
self._ovo.snapshots.objects.append(snap._ovo)
return snap
def attach(self):
connector_dict = volume_utils.brick_get_connector_properties(
self.backend.configuration.use_multipath_for_image_xfer,
self.backend.configuration.enforce_multipath_for_image_xfer)
conn = self.connect(connector_dict)
try:
conn.attach()
except Exception:
self.disconnect(conn)
raise
return conn
def detach(self, force=False, ignore_errors=False):
if not self.local_attach:
raise exception.NotLocal(self.id)
exc = brick_exception.ExceptionChainer()
conn = self.local_attach
try:
conn.detach(force, ignore_errors, exc)
except Exception:
if not force:
raise
with exc.context(force, 'Unable to disconnect'):
conn.disconnect(force)
if exc and not ignore_errors:
raise exc
def connect(self, connector_dict, **ovo_fields):
model_update = self.backend.driver.create_export(self.CONTEXT,
self._ovo,
connector_dict)
if model_update:
self._ovo.update(model_update)
self.save()
try:
conn = Connection.connect(self, connector_dict, **ovo_fields)
if self._connections is not None:
self._connections.append(conn)
ovo_conns = getattr(self._ovo, CONNECTIONS_OVO_FIELD).objects
ovo_conns.append(conn._ovo)
self._ovo.status = 'in-use'
self.save()
except Exception:
self._remove_export()
self._raise_with_resource()
return conn
def _disconnect(self, connection):
self._remove_export()
self._connection_removed(connection)
if not self.connections:
self._ovo.status = 'available'
self.save()
def disconnect(self, connection, force=False):
connection._disconnect(force)
self._disconnect(connection)
def cleanup(self):
for attach in self.connections:
attach.detach()
self._remove_export()
def _remove_export(self):
self.backend.driver.remove_export(self._context, self._ovo)
def refresh(self):
last_self = self.get_by_id(self.id)
if self._snapshots is not None:
last_self.snapshots  # trigger lazy loading
if self._connections is not None:
last_self.connections  # trigger lazy loading
last_self.local_attach = self.local_attach
vars(self).clear()
vars(self).update(vars(last_self))
def save(self):
self.persistence.set_volume(self)
class Connection(Object, LazyVolumeAttr):
"""Cinderlib Connection info that maps to VolumeAttachment.
On Pike we don't have the connector field on the VolumeAttachment ORM
instance so we use the connection_info to store everything.
We'll have a dictionary:
{'conn': connection info,
'connector': connector dictionary,
'device': result of connect_volume}
"""
OVO_CLASS = cinder_objs.VolumeAttachment
SIMPLE_JSON_IGNORE = ('volume',)
@classmethod
def connect(cls, volume, connector, **kwargs):
conn_info = volume.backend.driver.initialize_connection(
volume._ovo, connector)
conn = cls(volume.backend,
connector=connector,
volume=volume,
status='attached',
connection_info={'conn': conn_info},
**kwargs)
conn.connector_info = connector
conn.save()
return conn
@staticmethod
def _is_multipathed_conn(kwargs):
# Priority:
# - kwargs['use_multipath']
# - Multipath in connector_dict in kwargs or _ovo
# - Detect from connection_info data from OVO in kwargs
if 'use_multipath' in kwargs:
return kwargs['use_multipath']
connector = kwargs.get('connector') or {}
conn_info = kwargs.get('connection_info') or {}
if '__ovo' in kwargs:
ovo = kwargs['__ovo']
conn_info = conn_info or ovo.connection_info or {}
connector = connector or conn_info.get('connector') or {}
if 'multipath' in connector:
return connector['multipath']
# If multipathed not defined autodetect based on connection info
conn_info = conn_info.get('conn', {}).get('data', {})
iscsi_mp = 'target_iqns' in conn_info and 'target_portals' in conn_info
fc_mp = not isinstance(conn_info.get('target_wwn', ''), str)
return iscsi_mp or fc_mp
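# Illustrative results of the priority above (all dictionaries invented):
#
#   _is_multipathed_conn({'use_multipath': True})             # -> True
#   _is_multipathed_conn({'use_multipath': False,
#                         'connector': {'multipath': True}})  # -> False
#   # With neither hint, iSCSI connections advertising 'target_iqns' and
#   # 'target_portals' in connection_info['conn']['data'] are detected as
#   # multipathed, as is FC with a list of target_wwn values.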
def __init__(self, *args, **kwargs):
self.use_multipath = self._is_multipathed_conn(kwargs)
scan_attempts = brick_initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT
self.scan_attempts = kwargs.pop('device_scan_attempts', scan_attempts)
volume = kwargs.pop('volume', None)
self._connector = None
super(Connection, self).__init__(*args, **kwargs)
LazyVolumeAttr.__init__(self, volume)
# Attributes could be coming from __ovo, so we need to do this after
# all the initialization.
data = self.conn_info.get('data', {})
if not (self._ovo.obj_attr_is_set('attach_mode') and self.attach_mode):
self._ovo.attach_mode = data.get('access_mode', 'rw')
if data:
data['access_mode'] = self.attach_mode
@property
def conn_info(self):
conn_info = self._ovo.connection_info
if conn_info:
return conn_info.get('conn')
return {}
@conn_info.setter
def conn_info(self, value):
if not value:
self._ovo.connection_info = None
return
if self._ovo.connection_info is None:
self._ovo.connection_info = {}
# access_mode in the connection_info is set on __init__; here we ensure
# it's also set whenever we change the connection_info outside __init__.
if 'data' in value:
mode = value['data'].setdefault('access_mode', self.attach_mode)
# Keep attach_mode in sync.
self._ovo.attach_mode = mode
self._ovo.connection_info['conn'] = value
@property
def protocol(self):
return self.conn_info.get('driver_volume_type')
@property
def connector_info(self):
if self.connection_info:
return self.connection_info.get('connector')
return None
@connector_info.setter
def connector_info(self, value):
if self._ovo.connection_info is None:
self._ovo.connection_info = {}
self.connection_info['connector'] = value
# Since we are changing the dictionary the OVO won't detect the change
self._changed_fields.add('connection_info')
@property
def device(self):
if self.connection_info:
return self.connection_info.get('device')
return None
@device.setter
def device(self, value):
if value:
self.connection_info['device'] = value
else:
self.connection_info.pop('device', None)
# Since we are changing the dictionary the OVO won't detect the change
self._changed_fields.add('connection_info')
@property
def path(self):
device = self.device
if not device:
return None
return device['path']
@property
def connector(self):
if not self._connector:
if not self.conn_info:
return None
self._connector = volume_utils.brick_get_connector(
self.protocol,
use_multipath=self.use_multipath,
device_scan_attempts=self.scan_attempts,
# NOTE(geguileo): afaik only remotefs uses the connection info
conn=self.conn_info,
do_local_attach=True)
return self._connector
@property
def attached(self):
return bool(self.device)
@property
def connected(self):
return bool(self.conn_info)
@classmethod
def _load(cls, backend, ovo, volume=None, save=False):
# We let the __init__ method set the _volume if it exists
conn = cls(backend, __ovo=ovo, volume=volume)
if save:
conn.save()
# Restore circular reference only if we have all the elements
if conn._volume:
utils.add_by_id(conn, conn._volume._connections)
connections = getattr(conn._volume._ovo,
CONNECTIONS_OVO_FIELD).objects
utils.add_by_id(conn._ovo, connections)
return conn
def _disconnect(self, force=False):
self.backend.driver.terminate_connection(self.volume._ovo,
self.connector_info,
force=force)
self.conn_info = None
self._ovo.status = 'detached'
self.persistence.delete_connection(self)
def disconnect(self, force=False):
self._disconnect(force)
self.volume._disconnect(self)
def device_attached(self, device):
self.device = device
self.save()
def attach(self):
device = self.connector.connect_volume(self.conn_info['data'])
self.device_attached(device)
try:
if self.connector.check_valid_device(self.path):
error_msg = None
else:
error_msg = ('Unable to access the backend storage via path '
'%s.' % self.path)
except Exception:
error_msg = ('Could not validate device %s. There may be missing '
'packages on your host.' % self.path)
LOG.exception(error_msg)
if error_msg:
# Prepare exception while we still have the value of the path
exc = cinder_exception.DeviceUnavailable(
path=self.path, attach_info=self._ovo.connection_info,
reason=error_msg)
self.detach(force=True, ignore_errors=True)
raise exc
if self._volume:
self.volume.local_attach = self
def detach(self, force=False, ignore_errors=False, exc=None):
if not exc:
exc = brick_exception.ExceptionChainer()
with exc.context(force, 'Disconnect failed'):
self.connector.disconnect_volume(self.conn_info['data'],
self.device,
force=force,
ignore_errors=ignore_errors)
if not exc or ignore_errors:
if self._volume:
self.volume.local_attach = None
self.device = None
self.save()
self._connector = None
if exc and not ignore_errors:
raise exc
@classmethod
def get_by_id(cls, connection_id):
result = cls.persistence.get_connections(connection_id=connection_id)
if not result:
msg = 'id=%s' % connection_id
raise exception.ConnectionNotFound(filter=msg)
return result[0]
@property
def backend(self):
if self._backend is None and hasattr(self, '_volume'):
self._backend = self.volume.backend
return self._backend
@backend.setter
def backend(self, value):
self._backend = value
def save(self):
self.persistence.set_connection(self)
def extend(self):
return self.connector.extend_volume(self.conn_info['data'])
class Snapshot(NamedObject, LazyVolumeAttr):
OVO_CLASS = cinder_objs.Snapshot
SIMPLE_JSON_IGNORE = ('volume',)
DEFAULT_FIELDS_VALUES = {
'status': 'creating',
'metadata': {},
}
def __init__(self, volume, **kwargs):
param_backend = self._get_backend(kwargs.pop('backend', None))
if '__ovo' in kwargs:
backend = kwargs['__ovo'][BACKEND_NAME_SNAPSHOT_FIELD]
else:
kwargs.setdefault('user_id', volume.user_id)
kwargs.setdefault('project_id', volume.project_id)
kwargs['volume_id'] = volume.id
kwargs['volume_size'] = volume.size
kwargs['volume_type_id'] = volume.volume_type_id
kwargs['volume'] = volume._ovo
if volume:
backend = volume.backend.id
kwargs[BACKEND_NAME_SNAPSHOT_FIELD] = backend
else:
backend = param_backend and param_backend.id
if not (backend or param_backend):
raise ValueError('Backend not provided')
if backend and param_backend and param_backend.id != backend:
raise ValueError("Multiple backends provided and they don't match")
super(Snapshot, self).__init__(backend=param_backend or backend,
**kwargs)
LazyVolumeAttr.__init__(self, volume)
@classmethod
def _load(cls, backend, ovo, volume=None, save=False):
# We let the __init__ method set the _volume if it exists
snap = cls(volume, backend=backend, __ovo=ovo)
if save:
snap.save()
# Restore circular reference only if we have all the elements
if snap._volume:
utils.add_by_id(snap, snap._volume._snapshots)
utils.add_by_id(snap._ovo, snap._volume._ovo.snapshots.objects)
return snap
def create(self):
try:
model_update = self.backend.driver.create_snapshot(self._ovo)
self._ovo.status = 'available'
if model_update:
self._ovo.update(model_update)
except Exception:
self._ovo.status = 'error'
self._raise_with_resource()
finally:
self.save()
def delete(self):
try:
self.backend.driver.delete_snapshot(self._ovo)
self.persistence.delete_snapshot(self)
self._ovo.status = 'deleted'
except Exception:
self._ovo.status = 'error_deleting'
self.save()
self._raise_with_resource()
if self._volume is not None:
self._volume._snapshot_removed(self)
def create_volume(self, **new_vol_params):
new_vol_params.setdefault('size', self.volume_size)
new_vol_params['snapshot_id'] = self.id
new_vol = Volume(self.volume, **new_vol_params)
self.backend._start_creating_volume(new_vol)
try:
model_update = self.backend.driver.create_volume_from_snapshot(
new_vol._ovo, self._ovo)
new_vol._ovo.status = 'available'
if model_update:
new_vol._ovo.update(model_update)
self.backend._volume_created(new_vol)
except Exception:
new_vol._ovo.status = 'error'
new_vol._raise_with_resource()
finally:
new_vol.save()
return new_vol
@classmethod
def get_by_id(cls, snapshot_id):
result = cls.persistence.get_snapshots(snapshot_id=snapshot_id)
if not result:
raise exception.SnapshotNotFound(snapshot_id=snapshot_id)
return result[0]
@classmethod
def get_by_name(cls, snapshot_name):
return cls.persistence.get_snapshots(snapshot_name=snapshot_name)
def save(self):
self.persistence.set_snapshot(self)
setup = Object.setup
CONTEXT = Object.CONTEXT
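# Hedged end-to-end sketch of these objects. Backend is cinderlib's backend
# class (not shown in this file) and the LVM configuration values are
# illustrative; the Volume/Snapshot/Connection methods used below all appear
# in the classes above:
#
#   import cinderlib
#   cinderlib.setup()  # defaults, memory persistence
#   lvm = cinderlib.Backend(
#       volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
#       volume_backend_name='lvm')
#   vol = lvm.create_volume(size=1)   # Volume(...).create() under the hood
#   snap = vol.create_snapshot()
#   conn = vol.attach()               # connect + local attach
#   print(conn.path)                  # local device path
#   vol.detach()
#   snap.delete()
#   vol.delete()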


@@ -1,65 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import inspect
from stevedore import driver
from cinderlib import exception
from cinderlib.persistence import base
DEFAULT_STORAGE = 'memory'
def setup(config):
"""Setup persistence to be used in cinderlib.
By default memory persistence will be used, but other mechanisms are
available, as are several ways to plug in custom ones:
- Persistence plugins: Plugin mechanism uses Python entrypoints under
namespace cinderlib.persistence.storage, and cinderlib comes with 3
different mechanisms, "memory", "dbms", and "memory_dbms". To use any of
these one must pass the string name in the storage parameter and any
other configuration as keyword arguments.
- Passing a class that inherits from PersistenceDriverBase as storage
parameter and initialization parameters as keyword arguments.
- Passing an instance that inherits from PersistenceDriverBase as storage
parameter.
"""
if config is None:
config = {}
else:
config = config.copy()
# Default configuration is using memory storage
storage = config.pop('storage', None) or DEFAULT_STORAGE
if isinstance(storage, base.PersistenceDriverBase):
return storage
if inspect.isclass(storage) and issubclass(storage,
base.PersistenceDriverBase):
return storage(**config)
if not isinstance(storage, str):
raise exception.InvalidPersistence(storage)
persistence_driver = driver.DriverManager(
namespace='cinderlib.persistence.storage',
name=storage,
invoke_on_load=True,
invoke_kwds=config,
)
return persistence_driver.driver
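# Hedged examples of the three mechanisms described above (the connection
# string and the MyPersistence class are illustrative):
#
#   setup({'storage': 'memory'})   # entrypoint plugin, the default
#   setup({'storage': 'dbms', 'connection': 'sqlite:///cl.sqlite'})
#   setup({'storage': MyPersistence, 'arg': 1})   # class + init kwargs
#   setup({'storage': MyPersistence('custom')})   # ready-made instance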


@@ -1,259 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE(geguileo): Probably a good idea not to depend on cinder.cmd.volume
# having all the other imports as they could change.
from cinder import objects
from cinder.objects import base as cinder_base_ovo
from oslo_utils import timeutils
from oslo_versionedobjects import fields
import cinderlib
from cinderlib import serialization
class PersistenceDriverBase(object):
"""Provide Metadata Persistency for our resources.
This class will be used to store new resources as they are created,
updated, and removed, as well as provide a mechanism for users to retrieve
volumes, snapshots, and connections.
"""
def __init__(self, **kwargs):
pass
@property
def db(self):
raise NotImplementedError()
def get_volumes(self, volume_id=None, volume_name=None, backend_name=None):
raise NotImplementedError()
def get_snapshots(self, snapshot_id=None, snapshot_name=None,
volume_id=None):
raise NotImplementedError()
def get_connections(self, connection_id=None, volume_id=None):
raise NotImplementedError()
def get_key_values(self, key):
raise NotImplementedError()
def set_volume(self, volume):
self.reset_change_tracker(volume)
if volume.volume_type:
volume.volume_type.obj_reset_changes()
if volume.volume_type.qos_specs_id:
volume.volume_type.qos_specs.obj_reset_changes()
def set_snapshot(self, snapshot):
self.reset_change_tracker(snapshot)
def set_connection(self, connection):
self.reset_change_tracker(connection)
def set_key_value(self, key_value):
pass
def delete_volume(self, volume):
self._set_deleted(volume)
self.reset_change_tracker(volume)
def delete_snapshot(self, snapshot):
self._set_deleted(snapshot)
self.reset_change_tracker(snapshot)
def delete_connection(self, connection):
self._set_deleted(connection)
self.reset_change_tracker(connection)
def delete_key_value(self, key):
pass
def _set_deleted(self, resource):
resource._ovo.deleted = True
resource._ovo.deleted_at = timeutils.utcnow()
if hasattr(resource._ovo, 'status'):
resource._ovo.status = 'deleted'
def reset_change_tracker(self, resource, fields=None):
if isinstance(fields, str):
fields = (fields,)
resource._ovo.obj_reset_changes(fields)
def get_changed_fields(self, resource):
# NOTE(geguileo): We don't use cinder_obj_get_changes to prevent recursion
# into child OVOs, which we are not interested in and which could result in
# circular references.
result = {key: getattr(resource._ovo, key)
for key in resource._changed_fields
if not isinstance(resource.fields[key], fields.ObjectField)}
if getattr(resource._ovo, 'volume_type', None):
if ('qos_specs' in resource.volume_type._changed_fields and
resource.volume_type.qos_specs):
result['qos_specs'] = resource._ovo.volume_type.qos_specs.specs
if ('extra_specs' in resource.volume_type._changed_fields and
resource.volume_type.extra_specs):
result['extra_specs'] = resource._ovo.volume_type.extra_specs
return result
def get_fields(self, resource):
result = {
key: getattr(resource._ovo, key)
for key in resource.fields
if (resource._ovo.obj_attr_is_set(key) and
key not in getattr(resource, 'obj_extra_fields', []) and not
isinstance(resource.fields[key], fields.ObjectField))
}
if getattr(resource._ovo, 'volume_type_id', None):
result['extra_specs'] = resource._ovo.volume_type.extra_specs
if resource._ovo.volume_type.qos_specs_id:
result['qos_specs'] = resource._ovo.volume_type.qos_specs.specs
return result
class DB(object):
"""Replacement for DB access methods.
This will serve as replacement for methods used by:
- Drivers
- OVOs' get_by_id and save methods
- DB implementation
Data will be retrieved using the persistence driver we setup.
"""
GET_METHODS_PER_DB_MODEL = {
objects.Volume.model: 'volume_get',
objects.VolumeType.model: 'volume_type_get',
objects.Snapshot.model: 'snapshot_get',
objects.QualityOfServiceSpecs.model: 'qos_specs_get',
}
def __init__(self, persistence_driver):
self.persistence = persistence_driver
# Replace get_by_id OVO methods with something that will return
# expected data
objects.Volume.get_by_id = self.volume_get
objects.Snapshot.get_by_id = self.snapshot_get
objects.VolumeAttachmentList.get_all_by_volume_id = \
self.__connections_get
# Disable saving in OVOs
for ovo_name in cinder_base_ovo.CinderObjectRegistry.obj_classes():
ovo_cls = getattr(objects, ovo_name)
ovo_cls.save = lambda *args, **kwargs: None
def __connections_get(self, context, volume_id):
# Used by drivers to lazy load volume_attachment
connections = self.persistence.get_connections(volume_id=volume_id)
ovos = [conn._ovo for conn in connections]
result = objects.VolumeAttachmentList(objects=ovos)
return result
def __volume_get(self, volume_id, as_ovo=True):
in_memory = volume_id in cinderlib.Backend._volumes_inflight
if in_memory:
vol = cinderlib.Backend._volumes_inflight[volume_id]
else:
vol = self.persistence.get_volumes(volume_id)[0]
vol_result = vol._ovo if as_ovo else vol
return in_memory, vol_result
def volume_get(self, context, volume_id, *args, **kwargs):
return self.__volume_get(volume_id)[1]
def snapshot_get(self, context, snapshot_id, *args, **kwargs):
return self.persistence.get_snapshots(snapshot_id)[0]._ovo
def volume_type_get(self, context, id, inactive=False,
expected_fields=None):
if id in cinderlib.Backend._volumes_inflight:
vol = cinderlib.Backend._volumes_inflight[id]
else:
vol = self.persistence.get_volumes(id)[0]
if not vol._ovo.volume_type_id:
return None
return vol_type_to_dict(vol._ovo.volume_type)
# Our volume type name is the same as the id and the volume name
def _volume_type_get_by_name(self, context, name, session=None):
return self.volume_type_get(context, name)
def qos_specs_get(self, context, qos_specs_id, inactive=False):
if qos_specs_id in cinderlib.Backend._volumes_inflight:
vol = cinderlib.Backend._volumes_inflight[qos_specs_id]
else:
vol = self.persistence.get_volumes(qos_specs_id)[0]
if not vol._ovo.volume_type_id:
return None
return vol_type_to_dict(vol._ovo.volume_type)['qos_specs']
@classmethod
def image_volume_cache_get_by_volume_id(cls, context, volume_id):
return None
def get_by_id(self, context, model, id, *args, **kwargs):
method = getattr(self, self.GET_METHODS_PER_DB_MODEL[model])
return method(context, id)
def volume_get_all_by_host(self, context, host, filters=None):
backend_name = host.split('#')[0].split('@')[1]
result = self.persistence.get_volumes(backend_name=backend_name)
return [vol._ovo for vol in result]
def _volume_admin_metadata_get(self, context, volume_id, session=None):
vol = self.volume_get(context, volume_id)
return vol.admin_metadata
def _volume_admin_metadata_update(self, context, volume_id, metadata,
delete, session=None, add=True,
update=True):
vol_in_memory, vol = self.__volume_get(volume_id, as_ovo=False)
changed = False
if delete:
remove = set(vol.admin_metadata.keys()).difference(metadata.keys())
changed = bool(remove)
for k in remove:
del vol.admin_metadata[k]
for k, v in metadata.items():
is_in = k in vol.admin_metadata
if (not is_in and add) or (is_in and update):
vol.admin_metadata[k] = v
changed = True
if changed and not vol_in_memory:
vol._changed_fields.add('admin_metadata')
self.persistence.set_volume(vol)
def volume_admin_metadata_delete(self, context, volume_id, key):
vol_in_memory, vol = self.__volume_get(volume_id, as_ovo=False)
if key in vol.admin_metadata:
del vol.admin_metadata[key]
if not vol_in_memory:
vol._changed_fields.add('admin_metadata')
self.persistence.set_volume(vol)
def vol_type_to_dict(volume_type):
res = serialization.obj_to_primitive(volume_type)
res = res['versioned_object.data']
if res.get('qos_specs'):
res['qos_specs'] = res['qos_specs']['versioned_object.data']
return res
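# Minimal sketch of a custom plugin built on PersistenceDriverBase; the
# DictPersistence class is invented and a real driver must also implement the
# remaining get_*/delete_* methods (compare with the memory plugin later in
# this change):
#
#   class DictPersistence(PersistenceDriverBase):
#       def __init__(self, **kwargs):
#           self._volumes = {}
#           self._db = DB(self)  # fake DB API for the cinder drivers
#           super(DictPersistence, self).__init__(**kwargs)
#
#       @property
#       def db(self):
#           return self._db
#
#       def set_volume(self, volume):
#           self._volumes[volume.id] = volume
#           super(DictPersistence, self).set_volume(volume)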


@@ -1,417 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import alembic.script.revision
import alembic.util.exc
from cinder.db import api as db_api
from cinder.db import migration
from cinder.db.sqlalchemy import api as sqla_api
from cinder.db.sqlalchemy import models
from cinder import exception as cinder_exception
from cinder import objects as cinder_objs
from oslo_config import cfg
from oslo_db import exception
from oslo_db.sqlalchemy import models as oslo_db_models
from oslo_log import log
import sqlalchemy as sa
from cinderlib import objects
from cinderlib.persistence import base as persistence_base
LOG = log.getLogger(__name__)
def db_writer(func):
"""Decorator to start a DB writing transaction.
With the new oslo.db transaction sessions, everything needs to use the
enginefacade sessions via either the function decorator or the context
manager approach: https://docs.openstack.org/oslo.db/ocata/usage.html
This plugin cannot use the decorator form because its functions don't
receive a Context object that the decorator can find and use, so we use
this decorator instead.
Cinder DB API methods already have a decorator, so methods calling them
don't require this decorator, but methods that directly call the DB using
sqlalchemy or using the model_query method do.
Using this decorator at this level also allows us to enclose everything in
a single transaction, and it doesn't have any problems with the existing
Cinder decorators.
"""
def wrapper(*args, **kwargs):
with sqla_api.main_context_manager.writer.using(objects.CONTEXT):
return func(*args, **kwargs)
return wrapper
class KeyValue(models.BASE, oslo_db_models.ModelBase, objects.KeyValue):
__tablename__ = 'cinderlib_persistence_key_value'
key = sa.Column(sa.String(255), primary_key=True)
value = sa.Column(sa.Text)
class DBPersistence(persistence_base.PersistenceDriverBase):
GET_METHODS_PER_DB_MODEL = {
cinder_objs.VolumeType.model: 'volume_type_get',
cinder_objs.QualityOfServiceSpecs.model: 'qos_specs_get',
}
def __init__(self, connection, sqlite_synchronous=True,
soft_deletes=False):
self.soft_deletes = soft_deletes
cfg.CONF.set_override('connection', connection, 'database')
cfg.CONF.set_override('sqlite_synchronous',
sqlite_synchronous,
'database')
# Suppress logging for alembic
alembic_logger = logging.getLogger('alembic.runtime.migration')
alembic_logger.setLevel(logging.WARNING)
self._clear_facade()
self.db_instance = db_api.oslo_db_api.DBAPI.from_config(
conf=cfg.CONF, backend_mapping=db_api._BACKEND_MAPPING,
lazy=True)
# We need to wrap some get methods that get called before the volume is
# actually created.
self.original_vol_type_get = self.db_instance.volume_type_get
self.db_instance.volume_type_get = self.vol_type_get
self.original_qos_specs_get = self.db_instance.qos_specs_get
self.db_instance.qos_specs_get = self.qos_specs_get
self.original_get_by_id = self.db_instance.get_by_id
self.db_instance.get_by_id = self.get_by_id
try:
migration.db_sync()
except alembic.util.exc.CommandError as exc:
# We can be running 2 Cinder versions at the same time on the same
# DB while we upgrade, so we must ignore the fact that the DB is
# now on a newer version.
if not isinstance(
exc.__cause__, alembic.script.revision.ResolutionError,
):
raise
self._create_key_value_table()
# NOTE: At this point the persistence isn't ready, so we need to use
# db_instance instead of the sqlalchemy API or the DB API.
orm_obj = self.db_instance.volume_type_get_by_name(objects.CONTEXT,
'__DEFAULT__')
cls = cinder_objs.VolumeType
expected_attrs = cls._get_expected_attrs(objects.CONTEXT)
self.DEFAULT_TYPE = cls._from_db_object(
objects.CONTEXT, cls(objects.CONTEXT), orm_obj,
expected_attrs=expected_attrs)
super(DBPersistence, self).__init__()
def vol_type_get(self, context, id, inactive=False,
expected_fields=None):
if id not in objects.Backend._volumes_inflight:
return self.original_vol_type_get(context, id, inactive)
vol = objects.Backend._volumes_inflight[id]._ovo
if not vol.volume_type_id:
return None
return persistence_base.vol_type_to_dict(vol.volume_type)
def qos_specs_get(self, context, qos_specs_id, inactive=False):
if qos_specs_id not in objects.Backend._volumes_inflight:
return self.original_qos_specs_get(context, qos_specs_id, inactive)
vol = objects.Backend._volumes_inflight[qos_specs_id]._ovo
if not vol.volume_type_id:
return None
return persistence_base.vol_type_to_dict(vol.volume_type)['qos_specs']
def get_by_id(self, context, model, id, *args, **kwargs):
if model not in self.GET_METHODS_PER_DB_MODEL:
return self.original_get_by_id(context, model, id, *args, **kwargs)
method = getattr(self, self.GET_METHODS_PER_DB_MODEL[model])
return method(context, id)
def _clear_facade(self):
# This is for Pike
if hasattr(sqla_api, '_FACADE'):
sqla_api._FACADE = None
# This is for Queens or later
elif hasattr(sqla_api, 'main_context_manager'):
sqla_api.main_context_manager.configure(**dict(cfg.CONF.database))
def _create_key_value_table(self):
models.BASE.metadata.create_all(sqla_api.get_engine(),
tables=[KeyValue.__table__])
@property
def db(self):
return self.db_instance
@staticmethod
def _build_filter(**kwargs):
return {key: value for key, value in kwargs.items() if value}
def get_volumes(self, volume_id=None, volume_name=None, backend_name=None):
# Use the % wildcard to ignore the host name on the backend_name search
host = '%@' + backend_name if backend_name else None
filters = self._build_filter(id=volume_id, display_name=volume_name,
host=host)
LOG.debug('get_volumes for %s', filters)
ovos = cinder_objs.VolumeList.get_all(objects.CONTEXT, filters=filters)
result = []
for ovo in ovos:
backend = ovo.host.split('@')[-1].split('#')[0]
# Trigger lazy loading of specs
if ovo.volume_type_id:
ovo.volume_type.extra_specs
ovo.volume_type.qos_specs
result.append(objects.Volume(backend, __ovo=ovo))
return result
def get_snapshots(self, snapshot_id=None, snapshot_name=None,
volume_id=None):
filters = self._build_filter(id=snapshot_id, volume_id=volume_id,
display_name=snapshot_name)
LOG.debug('get_snapshots for %s', filters)
ovos = cinder_objs.SnapshotList.get_all(objects.CONTEXT,
filters=filters)
result = [objects.Snapshot(None, __ovo=ovo) for ovo in ovos.objects]
return result
def get_connections(self, connection_id=None, volume_id=None):
filters = self._build_filter(id=connection_id, volume_id=volume_id)
LOG.debug('get_connections for %s', filters)
ovos = cinder_objs.VolumeAttachmentList.get_all(objects.CONTEXT,
filters)
# Leverage lazy loading of the volume and backend in Connection
result = [objects.Connection(None, volume=None, __ovo=ovo)
for ovo in ovos.objects]
return result
def _get_kv(self, session, key=None):
query = session.query(KeyValue)
if key is not None:
query = query.filter_by(key=key)
res = query.all()
# If we want to use the result as an ORM
if session:
return res
return [objects.KeyValue(r.key, r.value) for r in res]
def get_key_values(self, key=None):
with sqla_api.main_context_manager.reader.using(objects.CONTEXT) as s:
return self._get_kv(s, key)
@db_writer
def set_volume(self, volume):
changed = self.get_changed_fields(volume)
if not changed:
changed = self.get_fields(volume)
extra_specs = changed.pop('extra_specs', None)
qos_specs = changed.pop('qos_specs', None)
# Since OVOs are not tracking QoS or Extra specs dictionary changes,
# we only support setting QoS or Extra specs on creation or adding them
# later.
vol_type_id = changed.get('volume_type_id')
if vol_type_id == self.DEFAULT_TYPE.id:
if extra_specs or qos_specs:
raise cinder_exception.VolumeTypeUpdateFailed(
id=self.DEFAULT_TYPE.name)
elif vol_type_id:
vol_type_fields = {'id': volume.volume_type_id,
'name': volume.volume_type_id,
'extra_specs': extra_specs,
'is_public': True}
if qos_specs:
res = self.db.qos_specs_create(objects.CONTEXT,
{'name': volume.volume_type_id,
'consumer': 'back-end',
'specs': qos_specs})
# Cinder is automatically generating an ID, replace it
query = sqla_api.model_query(objects.CONTEXT,
models.QualityOfServiceSpecs)
query.filter_by(id=res['id']).update(
{'id': volume.volume_type.qos_specs_id})
self.db.volume_type_create(objects.CONTEXT, vol_type_fields)
else:
if extra_specs is not None:
self.db.volume_type_extra_specs_update_or_create(
objects.CONTEXT, volume.volume_type_id, extra_specs)
self.db.qos_specs_update(objects.CONTEXT,
volume.volume_type.qos_specs_id,
{'name': volume.volume_type_id,
'consumer': 'back-end',
'specs': qos_specs})
else:
volume._ovo.volume_type = self.DEFAULT_TYPE
volume._ovo.volume_type_id = self.DEFAULT_TYPE.id
changed['volume_type_id'] = self.DEFAULT_TYPE.id
# Create the volume
if 'id' in changed:
LOG.debug('set_volume creating %s', changed)
try:
self.db.volume_create(objects.CONTEXT, changed)
changed = None
except exception.DBDuplicateEntry:
del changed['id']
if changed:
LOG.debug('set_volume updating %s', changed)
self.db.volume_update(objects.CONTEXT, volume.id, changed)
super(DBPersistence, self).set_volume(volume)
@db_writer
def set_snapshot(self, snapshot):
changed = self.get_changed_fields(snapshot)
if not changed:
changed = self.get_fields(snapshot)
# Create
if 'id' in changed:
LOG.debug('set_snapshot creating %s', changed)
try:
self.db.snapshot_create(objects.CONTEXT, changed)
changed = None
except exception.DBDuplicateEntry:
del changed['id']
if changed:
LOG.debug('set_snapshot updating %s', changed)
self.db.snapshot_update(objects.CONTEXT, snapshot.id, changed)
super(DBPersistence, self).set_snapshot(snapshot)
@db_writer
def set_connection(self, connection):
changed = self.get_changed_fields(connection)
if not changed:
changed = self.get_fields(connection)
if 'connection_info' in changed:
connection._convert_connection_info_to_db_format(changed)
if 'connector' in changed:
connection._convert_connector_to_db_format(changed)
# Create
if 'id' in changed:
LOG.debug('set_connection creating %s', changed)
try:
sqla_api.volume_attach(objects.CONTEXT, changed)
changed = None
except exception.DBDuplicateEntry:
del changed['id']
if changed:
LOG.debug('set_connection updating %s', changed)
self.db.volume_attachment_update(objects.CONTEXT, connection.id,
changed)
super(DBPersistence, self).set_connection(connection)
@db_writer
def set_key_value(self, key_value):
session = objects.CONTEXT.session
kv = self._get_kv(session, key_value.key)
kv = kv[0] if kv else KeyValue(key=key_value.key)
kv.value = key_value.value
session.add(kv)
@db_writer
def delete_volume(self, volume):
delete_type = (volume.volume_type_id != self.DEFAULT_TYPE.id
and volume.volume_type_id)
if self.soft_deletes:
LOG.debug('soft deleting volume %s', volume.id)
self.db.volume_destroy(objects.CONTEXT, volume.id)
if delete_type:
LOG.debug('soft deleting volume type %s',
volume.volume_type_id)
self.db.volume_destroy(objects.CONTEXT, volume.volume_type_id)
if volume.volume_type.qos_specs_id:
self.db.qos_specs_delete(objects.CONTEXT,
volume.volume_type.qos_specs_id)
else:
LOG.debug('hard deleting volume %s', volume.id)
for model in (models.VolumeMetadata, models.VolumeAdminMetadata):
query = sqla_api.model_query(objects.CONTEXT, model)
query.filter_by(volume_id=volume.id).delete()
query = sqla_api.model_query(objects.CONTEXT, models.Volume)
query.filter_by(id=volume.id).delete()
if delete_type:
LOG.debug('hard deleting volume type %s',
volume.volume_type_id)
query = sqla_api.model_query(objects.CONTEXT,
models.VolumeTypeExtraSpecs)
query.filter_by(volume_type_id=volume.volume_type_id).delete()
query = sqla_api.model_query(objects.CONTEXT,
models.VolumeType)
query.filter_by(id=volume.volume_type_id).delete()
query = sqla_api.model_query(objects.CONTEXT,
models.QualityOfServiceSpecs)
qos_id = volume.volume_type.qos_specs_id
if qos_id:
query.filter(sqla_api.or_(
models.QualityOfServiceSpecs.id == qos_id,
models.QualityOfServiceSpecs.specs_id == qos_id
)).delete()
super(DBPersistence, self).delete_volume(volume)
@db_writer
def delete_snapshot(self, snapshot):
if self.soft_deletes:
LOG.debug('soft deleting snapshot %s', snapshot.id)
self.db.snapshot_destroy(objects.CONTEXT, snapshot.id)
else:
LOG.debug('hard deleting snapshot %s', snapshot.id)
query = sqla_api.model_query(objects.CONTEXT, models.Snapshot)
query.filter_by(id=snapshot.id).delete()
super(DBPersistence, self).delete_snapshot(snapshot)
@db_writer
def delete_connection(self, connection):
if self.soft_deletes:
LOG.debug('soft deleting connection %s', connection.id)
self.db.attachment_destroy(objects.CONTEXT, connection.id)
else:
LOG.debug('hard deleting connection %s', connection.id)
query = sqla_api.model_query(objects.CONTEXT,
models.VolumeAttachment)
query.filter_by(id=connection.id).delete()
super(DBPersistence, self).delete_connection(connection)
@db_writer
def delete_key_value(self, key_value):
session = objects.CONTEXT.session
query = session.query(KeyValue)
query.filter_by(key=key_value.key).delete()
class MemoryDBPersistence(DBPersistence):
def __init__(self):
super(MemoryDBPersistence, self).__init__(connection='sqlite://')
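# Hedged configuration sketch for the classes above. 'dbms' is assumed to be
# the entrypoint name registered for DBPersistence, and the sqlite path is
# illustrative:
#
#   import cinderlib
#   cinderlib.setup(persistence_config={'storage': 'dbms',
#                                       'connection': 'sqlite:///cl.sqlite',
#                                       'soft_deletes': False})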


@@ -1,113 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cinderlib.persistence import base as persistence_base
class MemoryPersistence(persistence_base.PersistenceDriverBase):
volumes = {}
snapshots = {}
connections = {}
key_values = {}
def __init__(self):
# Create fake DB for drivers
self.fake_db = persistence_base.DB(self)
super(MemoryPersistence, self).__init__()
@property
def db(self):
return self.fake_db
@staticmethod
def _get_field(res, field):
res = getattr(res, field)
if field == 'host':
res = res.split('@')[1].split('#')[0]
return res
def _filter_by(self, values, field, value):
if not value:
return values
return [res for res in values if self._get_field(res, field) == value]
def get_volumes(self, volume_id=None, volume_name=None, backend_name=None):
try:
res = ([self.volumes[volume_id]] if volume_id
else self.volumes.values())
except KeyError:
return []
res = self._filter_by(res, 'display_name', volume_name)
res = self._filter_by(res, 'host', backend_name)
return res
def get_snapshots(self, snapshot_id=None, snapshot_name=None,
volume_id=None):
try:
result = ([self.snapshots[snapshot_id]] if snapshot_id
else self.snapshots.values())
except KeyError:
return []
result = self._filter_by(result, 'volume_id', volume_id)
result = self._filter_by(result, 'display_name', snapshot_name)
return result
def get_connections(self, connection_id=None, volume_id=None):
try:
result = ([self.connections[connection_id]] if connection_id
else self.connections.values())
except KeyError:
return []
result = self._filter_by(result, 'volume_id', volume_id)
return result
def get_key_values(self, key=None):
try:
result = ([self.key_values[key]] if key
else list(self.key_values.values()))
except KeyError:
return []
return result
def set_volume(self, volume):
self.volumes[volume.id] = volume
super(MemoryPersistence, self).set_volume(volume)
def set_snapshot(self, snapshot):
self.snapshots[snapshot.id] = snapshot
super(MemoryPersistence, self).set_snapshot(snapshot)
def set_connection(self, connection):
self.connections[connection.id] = connection
super(MemoryPersistence, self).set_connection(connection)
def set_key_value(self, key_value):
self.key_values[key_value.key] = key_value
def delete_volume(self, volume):
self.volumes.pop(volume.id, None)
super(MemoryPersistence, self).delete_volume(volume)
def delete_snapshot(self, snapshot):
self.snapshots.pop(snapshot.id, None)
super(MemoryPersistence, self).delete_snapshot(snapshot)
def delete_connection(self, connection):
self.connections.pop(connection.id, None)
super(MemoryPersistence, self).delete_connection(connection)
def delete_key_value(self, key_value):
self.key_values.pop(key_value.key, None)
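# Usage note (hedged): this class backs the default 'memory' storage, so a
# bare setup keeps all metadata in process memory only:
#
#   import cinderlib
#   cinderlib.setup()  # same as persistence_config={'storage': 'memory'}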


@@ -1,204 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Oslo Versioned Objects helper file.
These methods help with the serialization of cinderlib objects that use the
OVO serialization mechanism: we remove circular references when doing the
JSON serialization of objects (for example, a Volume OVO has a 'snapshots'
field holding Snapshot OVOs that each have a 'volume' back reference), and we
piggyback on the OVO serialization mechanism to add/get additional data we
want.
"""
import functools
import json as json_lib
from cinder.objects import base as cinder_base_ovo
from oslo_versionedobjects import base as base_ovo
from oslo_versionedobjects import fields as ovo_fields
from cinderlib import objects
# Variable used to avoid circular references
BACKEND_CLASS = None
def setup(backend_class):
global BACKEND_CLASS
BACKEND_CLASS = backend_class
# Use custom dehydration methods that prevent maximum recursion errors
# due to circular references:
# ie: snapshot -> volume -> snapshots -> snapshot
base_ovo.VersionedObject.obj_to_primitive = obj_to_primitive
cinder_base_ovo.CinderObject.obj_from_primitive = classmethod(
obj_from_primitive)
fields = base_ovo.obj_fields
fields.Object.to_primitive = staticmethod(field_ovo_to_primitive)
fields.Field.to_primitive = field_to_primitive
fields.List.to_primitive = iterable_to_primitive
fields.Set.to_primitive = iterable_to_primitive
fields.Dict.to_primitive = dict_to_primitive
fields.DateTime.to_primitive = staticmethod(datetime_to_primitive)
wrap_to_primitive(fields.FieldType)
wrap_to_primitive(fields.IPAddress)
def wrap_to_primitive(cls):
method = getattr(cls, 'to_primitive')
@functools.wraps(method)
def to_primitive(obj, attr, value, visited=None):
return method(obj, attr, value)
setattr(cls, 'to_primitive', staticmethod(to_primitive))
def _set_visited(element, visited):
# visited keeps track of elements visited to prevent loops
if visited is None:
visited = set()
# We only track complex objects that can form loops; simple objects, such as
# booleans, are ignored because the same instance may legitimately be reused
# for multiple fields and we still want to serialize every occurrence.
if isinstance(element,
(ovo_fields.ObjectField, cinder_base_ovo.CinderObject)):
visited.add(id(element))
return visited
def obj_to_primitive(self, target_version=None,
version_manifest=None, visited=None):
# No target_version, version_manifest, or changes support
visited = _set_visited(self, visited)
primitive = {}
for name, field in self.fields.items():
if self.obj_attr_is_set(name):
value = getattr(self, name)
# Skip cycles
if id(value) in visited:
continue
primitive[name] = field.to_primitive(self, name, value,
visited)
obj_name = self.obj_name()
obj = {
self._obj_primitive_key('name'): obj_name,
self._obj_primitive_key('namespace'): self.OBJ_PROJECT_NAMESPACE,
self._obj_primitive_key('version'): self.VERSION,
self._obj_primitive_key('data'): primitive
}
# Piggyback to store our own data
cl_obj = getattr(self, '_cl_obj', None)
clib_data = cl_obj and cl_obj._to_primitive()
if clib_data:
obj['cinderlib.data'] = clib_data
return obj
def obj_from_primitive(
cls, primitive, context=None,
original_method=cinder_base_ovo.CinderObject.obj_from_primitive):
result = original_method(primitive, context)
result.cinderlib_data = primitive.get('cinderlib.data')
return result
def field_ovo_to_primitive(obj, attr, value, visited=None):
return value.obj_to_primitive(visited=visited)
def field_to_primitive(self, obj, attr, value, visited=None):
if value is None:
return None
return self._type.to_primitive(obj, attr, value, visited)
def iterable_to_primitive(self, obj, attr, value, visited=None):
visited = _set_visited(self, visited)
result = []
for elem in value:
if id(elem) in visited:
continue
_set_visited(elem, visited)
r = self._element_type.to_primitive(obj, attr, elem, visited)
result.append(r)
return result
def dict_to_primitive(self, obj, attr, value, visited=None):
visited = _set_visited(self, visited)
primitive = {}
for key, elem in value.items():
if id(elem) in visited:
continue
_set_visited(elem, visited)
primitive[key] = self._element_type.to_primitive(
obj, '%s["%s"]' % (attr, key), elem, visited)
return primitive
def datetime_to_primitive(obj, attr, value, visited=None):
"""Stringify time in ISO 8601 with subsecond format.
This is the same code as the one used by the OVO DateTime to_primitive
but adding the subsecond resolution with the '.%f' part in strftime call.
This is backward compatible with cinderlib using code that didn't generate
subsecond resolution, because the from_primitive code of the OVO field uses
oslo_utils.timeutils.parse_isotime which in the end uses
iso8601.parse_date, and since the subsecond format is also ISO8601 it is
properly parsed.
"""
st = value.strftime('%Y-%m-%dT%H:%M:%S.%f')
tz = value.tzinfo.tzname(None) if value.tzinfo else 'UTC'
# Need to handle either iso8601 or python UTC format
st += ('Z' if tz in ['UTC', 'UTC+00:00'] else tz)
return st
def load(json_src, save=False):
"""Load any json serialized cinderlib object."""
if isinstance(json_src, str):
json_src = json_lib.loads(json_src)
if isinstance(json_src, list):
return [getattr(objects, obj['class']).load(obj, save)
for obj in json_src]
return getattr(objects, json_src['class']).load(json_src, save)
def json():
"""Convert to Json everything we have in this system."""
return [backend.json for backend in BACKEND_CLASS.backends.values()]
def jsons():
"""Convert to a Json string everything we have in this system."""
return json_lib.dumps(json(), separators=(',', ':'))
def dump():
"""Convert to Json everything we have in this system."""
return [backend.dump for backend in BACKEND_CLASS.backends.values()]
def dumps():
"""Convert to a Json string everything we have in this system."""
return json_lib.dumps(dump(), separators=(',', ':'))
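# Minimal usage sketch (hypothetical setup; backend parameters are
# illustrative only):
#     import cinderlib
#     cinderlib.setup(persistence_config={'storage': 'memory'})
#     lvm = cinderlib.Backend(volume_backend_name='lvm', ...)
#     data = jsons()         # serialize every known backend and its resources
#     backends = load(data)  # later, recreate the objects from that JSON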


@ -1,259 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import os
import subprocess
import tempfile
import unittest
from oslo_utils import strutils
import yaml
import cinderlib
from cinderlib.cmd import cinder_to_yaml
def set_backend(func, new_name, backend_name):
@functools.wraps(func)
def wrapper(self, *args, **kwargs):
self.backend = cinderlib.Backend.backends[backend_name]
return func(self, *args, **kwargs)
wrapper.__name__ = new_name
wrapper.__wrapped__ = func
return wrapper
def test_all_backends(cls):
"""Decorator to run tests in a class for all available backends."""
config = BaseFunctTestCase.ensure_config_loaded()
# Prevent dictionary changed size during iteration on Python 3
for fname, func in dict(vars(cls)).items():
if fname.startswith('test_'):
for backend in config['backends']:
bname = backend['volume_backend_name']
test_name = '%s_on_%s' % (fname, bname)
setattr(cls, test_name, set_backend(func, test_name, bname))
delattr(cls, fname)
return cls
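# Illustrative expansion, assuming backends named 'lvm' and 'ceph' in the
# config: a test_stats method in the decorated class is replaced by
# test_stats_on_lvm and test_stats_on_ceph, each bound to its own backend.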
def get_bool_env(param_string, default=False):
param = os.environ.get(param_string, default)
return strutils.bool_from_string(param, strict=True)
class BaseFunctTestCase(unittest.TestCase):
FNULL = open(os.devnull, 'w')
CONFIG_FILE = os.environ.get('CL_FTEST_CFG', '/etc/cinder/cinder.conf')
PRECISION = os.environ.get('CL_FTEST_PRECISION', 0)
LOGGING_ENABLED = get_bool_env('CL_FTEST_LOGGING', False)
DEBUG_ENABLED = get_bool_env('CL_FTEST_DEBUG', False)
ROOT_HELPER = os.environ.get('CL_FTEST_ROOT_HELPER', 'sudo')
MEMORY_PERSISTENCE = get_bool_env('CL_FTEST_MEMORY_PERSISTENCE', True)
DEFAULT_POOL = os.environ.get('CL_FTEST_POOL_NAME', None)
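    # Illustrative invocation (hypothetical file names): run the functional
    # tests against a custom config with debug logging enabled:
    #     CL_FTEST_CFG=lvm.yaml CL_FTEST_DEBUG=true python -m unittest discover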
tests_config = None
@classmethod
def ensure_config_loaded(cls):
if not cls.tests_config:
# If it's a .conf type of configuration file convert it to dict
if cls.CONFIG_FILE.endswith('.conf'):
cls.tests_config = cinder_to_yaml.convert(cls.CONFIG_FILE)
else:
with open(cls.CONFIG_FILE, 'r') as f:
cls.tests_config = yaml.safe_load(f)
cls.tests_config.setdefault('logs', cls.LOGGING_ENABLED)
cls.tests_config.setdefault('size_precision', cls.PRECISION)
cls.tests_config.setdefault('debug', cls.DEBUG_ENABLED)
backend = cls.tests_config['backends'][0]
if backend['volume_driver'].endswith('.RBDDriver'):
print('Cinderlib tests use config: %s' % cls.tests_config)
            with open(backend['rbd_ceph_conf'], 'r') as f:
                ceph_conf = f.read()
print('Contents of ceph.conf are: %s' % ceph_conf)
return cls.tests_config
@classmethod
def setUpClass(cls):
config = cls.ensure_config_loaded()
# Use memory_db persistence instead of memory to ensure migrations work
cinderlib.setup(root_helper=cls.ROOT_HELPER,
disable_logs=not config['logs'],
debug=config['debug'],
persistence_config={'storage': 'memory_db'})
if cls.MEMORY_PERSISTENCE:
# Now replace it with the memory plugin for the tests to ensure the
# Cinder driver is compatible with the persistence plugin
# mechanism, as the DB plugin could hide issues.
cinderlib.Backend.global_initialization = False
cinderlib.setup(root_helper=cls.ROOT_HELPER,
disable_logs=not config['logs'],
debug=config['debug'],
persistence_config={'storage': 'memory'})
# Initialize backends
cls.backends = [cinderlib.Backend(**cfg) for cfg in
config['backends']]
# Lazy load backend's _volumes variable using the volumes property so
# new volumes are added to this list on successful creation.
for backend in cls.backends:
backend.volumes
# Set current backend, by default is the first
cls.backend = cls.backends[0]
cls.size_precision = config['size_precision']
@classmethod
def tearDownClass(cls):
errors = []
# Do the cleanup of the resources the tests haven't cleaned up already
for backend in cls.backends:
# For each of the volumes that haven't been deleted delete the
# snapshots that are still there and then the volume.
# NOTE(geguileo): Don't use volumes and snapshots iterables since
# they are modified when deleting.
# NOTE(geguileo): Cleanup in reverse because RBD driver cannot
# delete a snapshot that has a volume created from it.
for vol in list(backend.volumes)[::-1]:
for snap in list(vol.snapshots):
try:
snap.delete()
except Exception as exc:
errors.append('Error deleting snapshot %s from volume '
'%s: %s' % (snap.id, vol.id, exc))
# Detach if locally attached
if vol.local_attach:
try:
vol.detach()
except Exception as exc:
errors.append('Error detaching %s for volume %s: %s' %
(vol.local_attach.path, vol.id, exc))
# Disconnect any existing connections
for conn in vol.connections:
try:
conn.disconnect()
except Exception as exc:
errors.append('Error disconnecting volume %s: %s' %
(vol.id, exc))
try:
vol.delete()
except Exception as exc:
errors.append('Error deleting volume %s: %s' %
(vol.id, exc))
if errors:
raise Exception('Errors on test cleanup: %s' % '\n\t'.join(errors))
def _root_execute(self, *args, **kwargs):
cmd = [self.ROOT_HELPER]
cmd.extend(args)
cmd.extend("%s=%s" % (k, v) for k, v in kwargs.items())
return subprocess.check_output(cmd, stderr=self.FNULL)
def _create_vol(self, backend=None, **kwargs):
if not backend:
backend = self.backend
vol_size = kwargs.setdefault('size', 1)
name = kwargs.setdefault('name', backend.id)
kwargs.setdefault('pool_name', self.DEFAULT_POOL)
vol = backend.create_volume(**kwargs)
self.assertEqual('available', vol.status)
self.assertEqual(vol_size, vol.size)
self.assertEqual(name, vol.display_name)
self.assertIn(vol, backend.volumes)
return vol
def _create_snap(self, vol, **kwargs):
name = kwargs.setdefault('name', vol.id)
        snap = vol.create_snapshot(**kwargs)
self.assertEqual('available', snap.status)
self.assertEqual(vol.size, snap.volume_size)
self.assertEqual(name, snap.display_name)
self.assertIn(snap, vol.snapshots)
return snap
def _get_vol_size(self, vol, do_detach=True):
if not vol.local_attach:
vol.attach()
try:
while True:
try:
result = self._root_execute('lsblk', '-o', 'SIZE',
'-b', vol.local_attach.path)
size_bytes = result.split()[1]
return float(size_bytes) / 1024.0 / 1024.0 / 1024.0
                # NOTE(geguileo): We can't catch subprocess.CalledProcessError
                # because somehow we get an instance of a different
                # subprocess.CalledProcessError class than the one we import.
except Exception as exc:
# If the volume is not yet available
if getattr(exc, 'returncode', 0) != 32:
raise
finally:
if do_detach:
vol.detach()
def _write_data(self, vol, data=None, do_detach=True):
if not data:
data = b'0123456789' * 100
if not vol.local_attach:
vol.attach()
        # TODO(geguileo): This will not work on Windows, for that we need to
# pass delete=False and do the manual deletion ourselves.
try:
with tempfile.NamedTemporaryFile() as f:
f.write(data)
f.flush()
self._root_execute('dd', 'if=' + f.name,
of=vol.local_attach.path)
finally:
if do_detach:
vol.detach()
return data
def _read_data(self, vol, length, do_detach=True):
if not vol.local_attach:
vol.attach()
try:
stdout = self._root_execute('dd', 'if=' + vol.local_attach.path,
count=1, ibs=length)
finally:
if do_detach:
vol.detach()
return stdout
def _pools_info(self, stats):
return stats.get('pools', [stats])
def assertSize(self, expected_size, actual_size):
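        # With size_precision=N this behaves like assertAlmostEqual with N
        # decimal places, passing when round(expected - actual, N) == 0
        # (e.g. N=2 tolerates differences of roughly up to 0.005 GiB).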
if self.size_precision:
self.assertAlmostEqual(expected_size, actual_size,
self.size_precision)
else:
self.assertEqual(expected_size, actual_size)


@ -1,12 +0,0 @@
# Logs are way too verbose; set this to false to disable them
logs: true
debug: true
# We only define one backend
backends:
- volume_backend_name: ceph
volume_driver: cinder.volume.drivers.rbd.RBDDriver
rbd_user: admin
rbd_pool: rbd
rbd_ceph_conf: /etc/ceph/ceph.conf
rbd_keyring_conf: /etc/ceph/ceph.client.admin.keyring


@ -1,15 +0,0 @@
# For Fedora, CentOS, RHEL we require the targetcli package.
# For Ubuntu we require lio-utils or changing the target_helper option below
#
# Logs are way too verbose; set this to false to disable them
logs: true
debug: true
# We only define one backend
backends:
- volume_backend_name: lvm
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
target_protocol: iscsi
target_helper: lioadm
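# On systems without LIO you could switch to another helper, e.g. this
# hypothetical alternative:
#    target_helper: tgtadm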


@ -1,275 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import random
import ddt
import cinderlib
from cinderlib.tests.functional import base_tests
@ddt.ddt
class BaseFunctTestCase(base_tests.unittest.TestCase):
@ddt.data([], [1], [2])
def test_list_supported_drivers(self, args):
is_v2 = args == [2]
expected_type = dict if is_v2 else str
expected_keys = {'version', 'class_name', 'supported', 'ci_wiki_name',
'driver_options', 'class_fqn', 'desc'}
drivers = cinderlib.Backend.list_supported_drivers(*args)
self.assertNotEqual(0, len(drivers))
for name, driver_info in drivers.items():
self.assertEqual(expected_keys, set(driver_info.keys()))
# Ensure that the RBDDriver has the rbd_keyring_conf option and
# it's not deprecated
if name == 'RBDDriver':
keyring_conf = [conf for conf in driver_info['driver_options']
if conf['dest'] == 'rbd_keyring_conf']
self.assertEqual(1, len(keyring_conf))
expected_value = False if is_v2 else 'False'
self.assertEqual(expected_value,
keyring_conf[0]['deprecated_for_removal'])
for option in driver_info['driver_options']:
self.assertIsInstance(option['type'], expected_type)
if is_v2:
self.assertIn('type_class', option['type'])
else:
for v in option.values():
self.assertIsInstance(v, str)
@base_tests.test_all_backends
class BackendFunctBasic(base_tests.BaseFunctTestCase):
def test_stats(self):
stats = self.backend.stats()
self.assertIn('vendor_name', stats)
self.assertIn('volume_backend_name', stats)
pools_info = self._pools_info(stats)
for pool_info in pools_info:
self.assertIn('free_capacity_gb', pool_info)
self.assertIn('total_capacity_gb', pool_info)
def _volumes_in_pools(self, pools_info):
if not any('total_volumes' in p for p in pools_info):
return None
return sum(p.get('total_volumes', 0) for p in pools_info)
def test_stats_with_creation(self):
initial_stats = self.backend.stats(refresh=True)
initial_pools_info = self._pools_info(initial_stats)
initial_volumes = self._volumes_in_pools(initial_pools_info)
initial_size = sum(p.get('allocated_capacity_gb',
p.get('provisioned_capacity_gb', 0))
for p in initial_pools_info)
size = random.randint(1, 5)
vol = self._create_vol(self.backend, size=size)
# Check that without refresh we get the same data
duplicate_stats = self.backend.stats(refresh=False)
self.assertEqual(initial_stats, duplicate_stats)
new_stats = self.backend.stats(refresh=True)
new_pools_info = self._pools_info(new_stats)
new_volumes = self._volumes_in_pools(new_pools_info)
new_size = sum(p.get('allocated_capacity_gb',
p.get('provisioned_capacity_gb', vol.size))
for p in new_pools_info)
        # We could be sharing the pool with other CI jobs or with parallel
        # executions of this same one, so we cannot check that we have exactly
        # 1 more volume and 1 more GB used; we just check that the values have
        # changed. This could still fail if another job deletes 1 volume of
        # the same size at the same time, which is why we randomize the size:
        # it reduces the risk of the volumes having the same size.
# If the backend is reporting the number of volumes, check them
if initial_volumes is not None:
self.assertNotEqual(initial_volumes, new_volumes)
self.assertNotEqual(initial_size, new_size)
def test_create_volume(self):
vol = self._create_vol(self.backend)
vol_size = self._get_vol_size(vol)
self.assertSize(vol.size, vol_size)
# We are not testing delete, so leave the deletion to the tearDown
def test_create_delete_volume(self):
vol = self._create_vol(self.backend)
vol.delete()
self.assertEqual('deleted', vol.status)
self.assertTrue(vol.deleted)
self.assertNotIn(vol, self.backend.volumes)
# Confirm idempotency of the operation by deleting it again
vol._ovo.status = 'error'
vol._ovo.deleted = False
vol.delete()
self.assertEqual('deleted', vol.status)
self.assertTrue(vol.deleted)
def test_create_snapshot(self):
vol = self._create_vol(self.backend)
self._create_snap(vol)
# We are not testing delete, so leave the deletion to the tearDown
def test_create_delete_snapshot(self):
vol = self._create_vol(self.backend)
snap = self._create_snap(vol)
snap.delete()
self.assertEqual('deleted', snap.status)
self.assertTrue(snap.deleted)
self.assertNotIn(snap, vol.snapshots)
# Confirm idempotency of the operation by deleting it again
snap._ovo.status = 'error'
snap._ovo.deleted = False
snap.delete()
self.assertEqual('deleted', snap.status)
self.assertTrue(snap.deleted)
def test_attach_volume(self):
vol = self._create_vol(self.backend)
attach = vol.attach()
path = attach.path
self.assertIs(attach, vol.local_attach)
self.assertIn(attach, vol.connections)
self.assertTrue(os.path.exists(path))
# We are not testing detach, so leave it to the tearDown
def test_attach_detach_volume(self):
vol = self._create_vol(self.backend)
attach = vol.attach()
self.assertIs(attach, vol.local_attach)
self.assertIn(attach, vol.connections)
vol.detach()
self.assertIsNone(vol.local_attach)
self.assertNotIn(attach, vol.connections)
def test_attach_detach_volume_via_attachment(self):
vol = self._create_vol(self.backend)
attach = vol.attach()
self.assertTrue(attach.attached)
path = attach.path
self.assertTrue(os.path.exists(path))
attach.detach()
self.assertFalse(attach.attached)
self.assertIsNone(vol.local_attach)
# We haven't disconnected the volume, just detached it
self.assertIn(attach, vol.connections)
attach.disconnect()
self.assertNotIn(attach, vol.connections)
def test_disk_io(self):
vol = self._create_vol(self.backend)
data = self._write_data(vol)
read_data = self._read_data(vol, len(data))
self.assertEqual(data, read_data)
def test_extend(self):
vol = self._create_vol(self.backend)
original_size = vol.size
result_original_size = self._get_vol_size(vol)
self.assertSize(original_size, result_original_size)
new_size = vol.size + 1
# Retrieve the volume from the persistence storage to ensure lazy
# loading works. Prevent regression after fixing bug #1852629
vol_from_db = self.backend.persistence.get_volumes(vol.id)[0]
vol_from_db.extend(new_size)
self.assertEqual(new_size, vol.size)
result_new_size = self._get_vol_size(vol)
self.assertSize(new_size, result_new_size)
def test_extend_attached(self):
vol = self._create_vol(self.backend)
original_size = vol.size
# Attach, get size, and leave volume attached
result_original_size = self._get_vol_size(vol, do_detach=False)
self.assertSize(original_size, result_original_size)
new_size = vol.size + 1
# Extending the volume should also extend the local view of the volume
reported_size = vol.extend(new_size)
# The instance size must have been updated
self.assertEqual(new_size, vol.size)
self.assertEqual(new_size, vol._ovo.size)
# Returned size must match the requested one
self.assertEqual(new_size * (1024 ** 3), reported_size)
# Get size of attached volume on the host and detach it
result_new_size = self._get_vol_size(vol)
self.assertSize(new_size, result_new_size)
def test_clone(self):
vol = self._create_vol(self.backend)
original_size = self._get_vol_size(vol, do_detach=False)
data = self._write_data(vol)
new_vol = vol.clone()
self.assertEqual(vol.size, new_vol.size)
self.assertEqual(vol.id, new_vol.source_volid)
cloned_size = self._get_vol_size(new_vol, do_detach=False)
read_data = self._read_data(new_vol, len(data))
self.assertEqual(original_size, cloned_size)
self.assertEqual(data, read_data)
def test_create_volume_from_snapshot(self):
# Create a volume and write some data
vol = self._create_vol(self.backend)
original_size = self._get_vol_size(vol, do_detach=False)
data = self._write_data(vol)
# Take a snapshot
snap = vol.create_snapshot()
self.assertEqual(vol.size, snap.volume_size)
# Change the data in the volume
reversed_data = data[::-1]
self._write_data(vol, data=reversed_data)
# Create a new volume from the snapshot with the original data
new_vol = snap.create_volume()
self.assertEqual(vol.size, new_vol.size)
created_size = self._get_vol_size(new_vol, do_detach=False)
read_data = self._read_data(new_vol, len(data))
self.assertEqual(original_size, created_size)
self.assertEqual(data, read_data)


@ -1,48 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import unittest
from unittest import mock
import cinderlib
from cinderlib.tests.unit import utils
cinderlib.setup(persistence_config={'storage': utils.get_mock_persistence()})
class BaseTest(unittest.TestCase):
PERSISTENCE_CFG = None
def setUp(self):
if not self.PERSISTENCE_CFG:
cfg = {'storage': utils.get_mock_persistence()}
cinderlib.Backend.set_persistence(cfg)
self.backend_name = 'fake_backend'
self.backend = utils.FakeBackend(volume_backend_name=self.backend_name)
self.persistence = self.backend.persistence
cinderlib.Backend._volumes_inflight = {}
def tearDown(self):
# Clear all existing backends
cinderlib.Backend.backends = {}
def patch(self, path, *args, **kwargs):
"""Use python mock to mock a path with automatic cleanup."""
patcher = mock.patch(path, *args, **kwargs)
result = patcher.start()
self.addCleanup(patcher.stop)
return result
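    # Illustrative use of the helper above inside a test method:
    #     mock_create = self.patch('cinderlib.objects.Volume.create')
    # The patch is stopped automatically through addCleanup on teardown.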


@ -1,291 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
import ddt
from cinderlib import exception
from cinderlib import objects
from cinderlib.tests.unit import base
@ddt.ddt
class TestConnection(base.BaseTest):
def setUp(self):
self.original_is_multipathed = objects.Connection._is_multipathed_conn
self.mock_is_mp = self.patch(
'cinderlib.objects.Connection._is_multipathed_conn')
self.mock_default = self.patch(
'os_brick.initiator.DEVICE_SCAN_ATTEMPTS_DEFAULT')
super(TestConnection, self).setUp()
self.vol = objects.Volume(self.backend_name, size=10)
self.kwargs = {'k1': 'v1', 'k2': 'v2',
'connection_info': {'conn': {'data': {'t': 0}}}}
self.conn = objects.Connection(self.backend, volume=self.vol,
**self.kwargs)
self.conn._ovo.connection_info = {
'connector': {'multipath': mock.sentinel.mp_ovo_connector}}
def test_init(self):
self.mock_is_mp.assert_called_once_with(self.kwargs)
self.assertEqual(self.conn.use_multipath, self.mock_is_mp.return_value)
self.assertEqual(self.conn.scan_attempts, self.mock_default)
self.assertEqual(self.conn.attach_mode, 'rw')
self.assertIsNone(self.conn._connector)
self.assertEqual(self.vol, self.conn._volume)
self.assertEqual(self.vol._ovo, self.conn._ovo.volume)
self.assertEqual(self.vol._ovo.id, self.conn._ovo.volume_id)
def test__is_multipathed_conn_kwargs(self):
res = self.original_is_multipathed(dict(
use_multipath=mock.sentinel.mp_kwargs,
connector={'multipath': mock.sentinel.mp_connector},
__ovo=self.conn._ovo))
self.assertEqual(mock.sentinel.mp_kwargs, res)
def test__is_multipathed_conn_connector_kwarg(self):
res = self.original_is_multipathed(dict(
connector={'multipath': mock.sentinel.mp_connector},
__ovo=self.conn._ovo))
self.assertEqual(mock.sentinel.mp_connector, res)
def test__is_multipathed_conn_connector_ovo(self):
res = self.original_is_multipathed(dict(connector={},
__ovo=self.conn._ovo))
self.assertEqual(mock.sentinel.mp_ovo_connector, res)
def test__is_multipathed_conn_connection_info_iscsi_true(self):
res = self.original_is_multipathed(dict(
connection_info={'conn': {'data': {'target_iqns': '',
'target_portals': ''}}}))
self.assertTrue(res)
def test__is_multipathed_conn_connection_info_iscsi_false(self):
res = self.original_is_multipathed(dict(
connection_info={'conn': {'data': {'target_iqns': ''}}}))
self.assertFalse(res)
def test__is_multipathed_conn_connection_info_fc_true(self):
res = self.original_is_multipathed(dict(
connection_info={'conn': {'data': {'target_wwn': []}}}))
self.assertTrue(res)
def test__is_multipathed_conn_connection_info_fc_false(self):
res = self.original_is_multipathed(dict(
connection_info={'conn': {'data': {'target_wwn': ''}}}))
self.assertFalse(res)
def test_init_no_backend(self):
self.assertRaises(TypeError, objects.Connection)
def test_init_preference_attach_mode(self):
kwargs = {'attach_mode': 'ro',
'connection_info': {'conn': {'data': {'access_mode': 'rw'}}}}
conn = objects.Connection(self.backend, **kwargs)
self.assertEqual(conn.conn_info['data']['access_mode'], 'ro')
def test_init_no_volume(self):
self.mock_is_mp.reset_mock()
kwargs = {'attach_mode': 'ro',
'connection_info': {'conn': {'data': {'t': 0}}}}
conn = objects.Connection(self.backend, **kwargs)
self.mock_is_mp.assert_called_once_with(kwargs)
self.assertEqual(conn.use_multipath, self.mock_is_mp.return_value)
self.assertEqual(conn.scan_attempts, self.mock_default)
self.assertEqual(conn.attach_mode, 'ro')
self.assertEqual({'data': {'access_mode': 'ro', 't': 0}},
conn.conn_info)
self.assertIsNone(conn._connector)
def test_connect(self):
init_conn = self.backend.driver.initialize_connection
init_conn.return_value = {'data': {}}
connector = {'my_c': 'v'}
conn = self.conn.connect(self.vol, connector)
init_conn.assert_called_once_with(self.vol, connector)
self.assertIsInstance(conn, objects.Connection)
self.assertEqual('attached', conn.status)
self.assertEqual(init_conn.return_value, conn.connection_info['conn'])
self.assertEqual(connector, conn.connector_info)
self.persistence.set_connection.assert_called_once_with(conn)
@mock.patch('cinderlib.objects.Volume._disconnect')
@mock.patch('cinderlib.objects.Connection._disconnect')
def test_disconnect(self, mock_disc, mock_vol_disc):
self.conn.disconnect(force=mock.sentinel.force)
mock_disc.assert_called_once_with(mock.sentinel.force)
mock_vol_disc.assert_called_once_with(self.conn)
def test__disconnect(self):
conn_info = self.conn.connector_info
self.conn._disconnect(mock.sentinel.force)
self.backend.driver.terminate_connection.assert_called_once_with(
self.vol._ovo, conn_info, force=mock.sentinel.force)
self.assertEqual({}, self.conn.conn_info)
self.assertEqual('detached', self.conn.status)
self.persistence.delete_connection.assert_called_once_with(self.conn)
@mock.patch('cinderlib.objects.Connection.conn_info', {'data': 'mydata'})
@mock.patch('cinderlib.objects.Connection.path')
@mock.patch('cinderlib.objects.Connection.device_attached')
def test_attach(self, mock_attached, mock_path):
with mock.patch('cinderlib.objects.Connection.connector') as mock_conn:
self.conn.attach()
mock_conn.connect_volume.assert_called_once_with('mydata')
mock_attached.assert_called_once_with(
mock_conn.connect_volume.return_value)
mock_conn.check_valid_device.assert_called_once_with(mock_path)
self.assertEqual(self.conn, self.vol.local_attach)
@mock.patch('cinderlib.objects.Connection.conn_info', {'data': 'mydata'})
@mock.patch('cinderlib.objects.Connection.device')
def test_detach(self, mock_device):
self.vol.local_attach = mock.Mock()
with mock.patch('cinderlib.objects.Connection.connector') as mock_conn:
self.conn.detach(mock.sentinel.force, mock.sentinel.ignore)
mock_conn.disconnect_volume.assert_called_once_with(
'mydata',
mock_device,
force=mock.sentinel.force,
ignore_errors=mock.sentinel.ignore)
self.assertIsNone(self.vol.local_attach)
self.assertIsNone(self.conn.device)
self.assertIsNone(self.conn._connector)
self.persistence.set_connection.assert_called_once_with(self.conn)
def test_get_by_id(self):
self.persistence.get_connections.return_value = [mock.sentinel.conn]
res = objects.Connection.get_by_id(mock.sentinel.conn_id)
self.assertEqual(mock.sentinel.conn, res)
self.persistence.get_connections.assert_called_once_with(
connection_id=mock.sentinel.conn_id)
def test_get_by_id_not_found(self):
self.persistence.get_connections.return_value = None
self.assertRaises(exception.ConnectionNotFound,
objects.Connection.get_by_id,
mock.sentinel.conn_id)
self.persistence.get_connections.assert_called_once_with(
connection_id=mock.sentinel.conn_id)
def test_device_attached(self):
self.conn.device_attached(mock.sentinel.device)
self.assertEqual(mock.sentinel.device,
self.conn.connection_info['device'])
self.persistence.set_connection.assert_called_once_with(self.conn)
def test_conn_info_setter_changes_attach_mode(self):
self.assertEqual('rw', self.conn._ovo.attach_mode)
self.conn.conn_info = {'data': {'target_lun': 0, 'access_mode': 'ro'}}
self.assertEqual({'data': {'target_lun': 0, 'access_mode': 'ro'}},
self.conn._ovo.connection_info['conn'])
self.assertEqual('ro', self.conn._ovo.attach_mode)
def test_conn_info_setter_uses_attach_mode(self):
self.assertEqual('rw', self.conn._ovo.attach_mode)
self.conn._ovo.attach_mode = 'ro'
self.conn.conn_info = {'data': {'target_lun': 0}}
self.assertEqual({'data': {'target_lun': 0, 'access_mode': 'ro'}},
self.conn.conn_info)
self.assertEqual('ro', self.conn._ovo.attach_mode)
def test_conn_info_setter_clear(self):
self.conn.conn_info = {'data': {}}
self.conn.conn_info = {}
self.assertIsNone(self.conn._ovo.connection_info)
def test_conn_info_getter(self):
value = {'data': {'access_mode': 'ro'}}
self.conn.conn_info = value
self.assertEqual(value, self.conn.conn_info)
def test_conn_info_getter_none(self):
self.conn.conn_info = None
self.assertEqual({}, self.conn.conn_info)
def test_protocol(self):
self.conn.conn_info = {'driver_volume_type': mock.sentinel.iscsi}
self.assertEqual(mock.sentinel.iscsi, self.conn.protocol)
def test_connector_info_setter(self):
self.conn.connector_info = mock.sentinel.connector
self.assertEqual(mock.sentinel.connector,
self.conn._ovo.connection_info['connector'])
self.assertIn('connection_info', self.conn._ovo._changed_fields)
def test_connector_info_getter(self):
self.conn.connector_info = mock.sentinel.connector
self.assertEqual(mock.sentinel.connector, self.conn.connector_info)
def test_connector_info_getter_empty(self):
self.conn._ovo.connection_info = None
self.assertIsNone(self.conn.connector_info)
def test_device_setter(self):
self.conn.device = mock.sentinel.device
self.assertEqual(mock.sentinel.device,
self.conn._ovo.connection_info['device'])
self.assertIn('connection_info', self.conn._ovo._changed_fields)
def test_device_setter_none(self):
self.conn.device = mock.sentinel.device
self.conn.device = None
self.assertNotIn('device', self.conn._ovo.connection_info)
self.assertIn('connection_info', self.conn._ovo._changed_fields)
def test_device_getter(self):
self.conn.device = mock.sentinel.device
self.assertEqual(mock.sentinel.device, self.conn.device)
def test_path(self):
self.conn.device = {'path': mock.sentinel.path}
self.assertEqual(mock.sentinel.path, self.conn.path)
@mock.patch('cinderlib.objects.Connection.conn_info')
@mock.patch('cinderlib.objects.Connection.protocol')
@mock.patch('cinder.volume.volume_utils.brick_get_connector')
def test_connector_getter(self, mock_connector, mock_proto, mock_info):
res = self.conn.connector
self.assertEqual(mock_connector.return_value, res)
mock_connector.assert_called_once_with(
mock_proto,
use_multipath=self.mock_is_mp.return_value,
device_scan_attempts=self.mock_default,
conn=mock_info,
do_local_attach=True)
# Make sure we cache the value
res = self.conn.connector
self.assertEqual(1, mock_connector.call_count)
@ddt.data(True, False)
    def test_attached(self, value):
with mock.patch('cinderlib.objects.Connection.device', value):
self.assertEqual(value, self.conn.attached)
@ddt.data(True, False)
def test_connected(self, value):
with mock.patch('cinderlib.objects.Connection.conn_info', value):
self.assertEqual(value, self.conn.connected)
def test_extend(self):
self.conn._ovo.connection_info['conn'] = {'data': mock.sentinel.data}
with mock.patch('cinderlib.objects.Connection.connector') as mock_conn:
res = self.conn.extend()
mock_conn.extend_volume.assert_called_once_with(mock.sentinel.data)
self.assertEqual(mock_conn.extend_volume.return_value, res)


@ -1,152 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from cinderlib import exception
from cinderlib import objects
from cinderlib.tests.unit import base
class TestSnapshot(base.BaseTest):
def setUp(self):
super(TestSnapshot, self).setUp()
self.vol = objects.Volume(self.backend_name, size=10,
extra_specs={'e': 'v'},
qos_specs={'q': 'qv'})
self.snap = objects.Snapshot(self.vol,
name='my_snap', description='my_desc')
self.vol._snapshots.append(self.snap)
self.vol._ovo.snapshots.objects.append(self.snap._ovo)
def test_init_from_volume(self):
self.assertIsNotNone(self.snap.id)
self.assertEqual(self.backend, self.snap.backend)
self.assertEqual('my_snap', self.snap.name)
self.assertEqual('my_snap', self.snap.display_name)
self.assertEqual('my_desc', self.snap.description)
self.assertEqual(self.vol.user_id, self.snap.user_id)
self.assertEqual(self.vol.project_id, self.snap.project_id)
self.assertEqual(self.vol.id, self.snap.volume_id)
self.assertEqual(self.vol.size, self.snap.volume_size)
self.assertEqual(self.vol._ovo, self.snap._ovo.volume)
self.assertEqual(self.vol.volume_type_id, self.snap.volume_type_id)
self.assertEqual(self.vol, self.snap.volume)
def test_init_from_ovo(self):
snap2 = objects.Snapshot(None, __ovo=self.snap._ovo)
self.assertEqual(self.snap.backend, snap2.backend)
self.assertEqual(self.snap._ovo, snap2._ovo)
self.assertEqual(self.vol, self.snap.volume)
def test_create(self):
update_vol = {'provider_id': 'provider_id'}
self.backend.driver.create_snapshot.return_value = update_vol
self.snap.create()
self.assertEqual('available', self.snap.status)
self.assertEqual('provider_id', self.snap.provider_id)
self.backend.driver.create_snapshot.assert_called_once_with(
self.snap._ovo)
self.persistence.set_snapshot.assert_called_once_with(self.snap)
def test_create_error(self):
self.backend.driver.create_snapshot.side_effect = exception.NotFound
with self.assertRaises(exception.NotFound) as assert_context:
self.snap.create()
self.assertEqual(self.snap, assert_context.exception.resource)
self.backend.driver.create_snapshot.assert_called_once_with(
self.snap._ovo)
self.assertEqual('error', self.snap.status)
self.persistence.set_snapshot.assert_called_once_with(self.snap)
def test_delete(self):
with mock.patch.object(
self.vol, '_snapshot_removed',
wraps=self.vol._snapshot_removed) as snap_removed_mock:
self.snap.delete()
snap_removed_mock.assert_called_once_with(self.snap)
self.backend.driver.delete_snapshot.assert_called_once_with(
self.snap._ovo)
self.persistence.delete_snapshot.assert_called_once_with(self.snap)
self.assertEqual([], self.vol.snapshots)
self.assertEqual([], self.vol._ovo.snapshots.objects)
self.assertEqual('deleted', self.snap._ovo.status)
@mock.patch('cinderlib.objects.Volume._snapshot_removed')
def test_delete_error(self, snap_removed_mock):
self.backend.driver.delete_snapshot.side_effect = exception.NotFound
with self.assertRaises(exception.NotFound) as assert_context:
self.snap.delete()
self.assertEqual(self.snap, assert_context.exception.resource)
self.backend.driver.delete_snapshot.assert_called_once_with(
self.snap._ovo)
snap_removed_mock.assert_not_called()
self.persistence.delete_snapshot.assert_not_called()
self.assertEqual([self.snap], self.vol.snapshots)
self.assertEqual([self.snap._ovo], self.vol._ovo.snapshots.objects)
self.assertEqual('error_deleting', self.snap._ovo.status)
def test_create_volume(self):
create_mock = self.backend.driver.create_volume_from_snapshot
create_mock.return_value = None
vol2 = self.snap.create_volume(name='new_name', description='new_desc')
create_mock.assert_called_once_with(vol2._ovo, self.snap._ovo)
self.assertEqual('available', vol2.status)
self.assertEqual(1, len(self.backend._volumes))
self.assertEqual(vol2, self.backend._volumes[0])
self.persistence.set_volume.assert_called_once_with(vol2)
self.assertEqual(self.vol.id, self.vol.volume_type_id)
self.assertNotEqual(self.vol.id, vol2.id)
self.assertEqual(vol2.id, vol2.volume_type_id)
self.assertEqual(self.vol.volume_type.extra_specs,
vol2.volume_type.extra_specs)
self.assertEqual(self.vol.volume_type.qos_specs.specs,
vol2.volume_type.qos_specs.specs)
def test_create_volume_error(self):
create_mock = self.backend.driver.create_volume_from_snapshot
create_mock.side_effect = exception.NotFound
with self.assertRaises(exception.NotFound) as assert_context:
self.snap.create_volume()
self.assertEqual(1, len(self.backend._volumes_inflight))
vol2 = list(self.backend._volumes_inflight.values())[0]
self.assertEqual(vol2, assert_context.exception.resource)
create_mock.assert_called_once_with(vol2, self.snap._ovo)
self.assertEqual('error', vol2.status)
self.persistence.set_volume.assert_called_once_with(mock.ANY)
def test_get_by_id(self):
mock_get_snaps = self.persistence.get_snapshots
mock_get_snaps.return_value = [mock.sentinel.snap]
res = objects.Snapshot.get_by_id(mock.sentinel.snap_id)
mock_get_snaps.assert_called_once_with(
snapshot_id=mock.sentinel.snap_id)
self.assertEqual(mock.sentinel.snap, res)
def test_get_by_id_not_found(self):
mock_get_snaps = self.persistence.get_snapshots
mock_get_snaps.return_value = None
self.assertRaises(exception.SnapshotNotFound,
objects.Snapshot.get_by_id, mock.sentinel.snap_id)
mock_get_snaps.assert_called_once_with(
snapshot_id=mock.sentinel.snap_id)
def test_get_by_name(self):
res = objects.Snapshot.get_by_name(mock.sentinel.name)
mock_get_snaps = self.persistence.get_snapshots
mock_get_snaps.assert_called_once_with(
snapshot_name=mock.sentinel.name)
self.assertEqual(mock_get_snaps.return_value, res)


@ -1,549 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from cinder import objects as cinder_ovos
from cinderlib import exception
from cinderlib import objects
from cinderlib.tests.unit import base
class TestVolume(base.BaseTest):
def test_init_from_args_backend_name(self):
vol = objects.Volume(self.backend_name,
name='vol_name', description='vol_desc', size=10)
self.assertEqual(self.backend, vol.backend)
self.assertEqual('vol_name', vol.name)
self.assertEqual('vol_name', vol.display_name)
self.assertEqual('vol_desc', vol.description)
self.assertEqual(10, vol.size)
self.assertIsNotNone(vol.id)
def test_init_from_args_backend(self):
vol = objects.Volume(self.backend,
name='vol_name', description='vol_desc', size=10)
self.assertEqual(self.backend, vol.backend)
self.assertEqual('vol_name', vol.name)
self.assertEqual('vol_name', vol.display_name)
self.assertEqual('vol_desc', vol.description)
self.assertEqual(10, vol.size)
self.assertIsNotNone(vol.id)
def test_init_from_volume(self):
vol = objects.Volume(self.backend,
name='vol_name', description='vol_desc', size=10)
vol2 = objects.Volume(vol, name='new_name', size=11)
self.assertEqual(self.backend, vol2.backend)
self.assertEqual('new_name', vol2.name)
self.assertEqual('new_name', vol2.display_name)
self.assertEqual(vol.description, vol2.description)
self.assertEqual(11, vol2.size)
self.assertIsNotNone(vol2.id)
self.assertNotEqual(vol.id, vol2.id)
def test_init_from_ovo(self):
vol = objects.Volume(self.backend, size=10)
vol2 = objects.Volume(self.backend, __ovo=vol._ovo)
self.assertEqual(vol._ovo, vol2._ovo)
def test_snapshots_lazy_loading(self):
vol = objects.Volume(self.backend, size=10)
vol._snapshots = None
snaps = [objects.Snapshot(vol, name='my_snap')]
# Persistence retrieves Snapshots without the Volume, just volume_id
snaps[0]._ovo.volume = None
mock_get_snaps = self.persistence.get_snapshots
mock_get_snaps.return_value = snaps
result = vol.snapshots
mock_get_snaps.assert_called_once_with(volume_id=vol.id)
self.assertEqual(snaps, result)
self.assertEqual(snaps, vol._snapshots)
self.assertEqual(1, len(vol._ovo.snapshots))
self.assertEqual(vol._ovo.snapshots[0], result[0]._ovo)
# There is no second call when we reference it again
mock_get_snaps.reset_mock()
result = vol.snapshots
self.assertEqual(snaps, result)
        mock_get_snaps.assert_not_called()
def test_connections_lazy_loading(self):
vol = objects.Volume(self.backend, size=10)
vol._connections = None
delattr(vol._ovo, '_obj_volume_attachment')
conns = [objects.Connection(self.backend, connector={'k': 'v'},
volume_id=vol.id, status='attached',
attach_mode='rw',
connection_info={'conn': {}},
name='my_snap')]
mock_get_conns = self.persistence.get_connections
mock_get_conns.return_value = conns
result = vol.connections
mock_get_conns.assert_called_once_with(volume_id=vol.id)
self.assertEqual(conns, result)
self.assertEqual(conns, vol._connections)
self.assertEqual(1, len(vol._ovo.volume_attachment))
self.assertEqual(vol._ovo.volume_attachment[0], result[0]._ovo)
# There is no second call when we reference it again
mock_get_conns.reset_mock()
result = vol.connections
self.assertEqual(conns, result)
        mock_get_conns.assert_not_called()
@mock.patch('cinder.objects.volume_attachment.VolumeAttachmentList.'
'get_all_by_volume_id')
def test_connections_lazy_loading_from_ovo(self, get_all_mock):
"""Test we don't reload connections if data is in OVO."""
vol = objects.Volume(self.backend, size=10)
vol._connections = None
delattr(vol._ovo, '_obj_volume_attachment')
conns = [objects.Connection(self.backend, connector={'k': 'v'},
volume_id=vol.id, status='attached',
attach_mode='rw',
connection_info={'conn': {}},
name='my_snap')]
ovo_conns = [conn._ovo for conn in conns]
ovo_attach_list = cinder_ovos.VolumeAttachmentList(objects=ovo_conns)
get_all_mock.return_value = ovo_attach_list
mock_get_conns = self.persistence.get_connections
ovo_result = vol._ovo.volume_attachment
        mock_get_conns.assert_not_called()
self.assertEqual(ovo_attach_list, ovo_result)
# Cinderlib object doesn't have the connections yet
self.assertIsNone(vol._connections)
self.assertEqual(1, len(vol._ovo.volume_attachment))
self.assertEqual(vol._ovo.volume_attachment[0], ovo_result[0])
# There is no second call when we access the cinderlib object, as the
# data is retrieved from the OVO that already has it
result = vol.connections
        mock_get_conns.assert_not_called()
# Confirm we used the OVO
self.assertIs(ovo_conns[0], result[0]._ovo)
def test_get_by_id(self):
mock_get_vols = self.persistence.get_volumes
mock_get_vols.return_value = [mock.sentinel.vol]
res = objects.Volume.get_by_id(mock.sentinel.vol_id)
mock_get_vols.assert_called_once_with(volume_id=mock.sentinel.vol_id)
self.assertEqual(mock.sentinel.vol, res)
def test_get_by_id_not_found(self):
mock_get_vols = self.persistence.get_volumes
mock_get_vols.return_value = None
self.assertRaises(exception.VolumeNotFound,
objects.Volume.get_by_id, mock.sentinel.vol_id)
mock_get_vols.assert_called_once_with(volume_id=mock.sentinel.vol_id)
def test_get_by_name(self):
res = objects.Volume.get_by_name(mock.sentinel.name)
mock_get_vols = self.persistence.get_volumes
mock_get_vols.assert_called_once_with(volume_name=mock.sentinel.name)
self.assertEqual(mock_get_vols.return_value, res)
def test_create(self):
self.backend.driver.create_volume.return_value = None
vol = self.backend.create_volume(10, name='vol_name',
description='des')
self.backend.driver.create_volume.assert_called_once_with(vol._ovo)
self.assertEqual('available', vol.status)
self.persistence.set_volume.assert_called_once_with(vol)
def test_create_error(self):
self.backend.driver.create_volume.side_effect = exception.NotFound
with self.assertRaises(exception.NotFound) as assert_context:
self.backend.create_volume(10, name='vol_name', description='des')
vol = assert_context.exception.resource
self.assertIsInstance(vol, objects.Volume)
self.assertEqual(10, vol.size)
self.assertEqual('vol_name', vol.name)
self.assertEqual('des', vol.description)
def test_delete(self):
vol = objects.Volume(self.backend_name, size=10)
vol.delete()
self.backend.driver.delete_volume.assert_called_once_with(vol._ovo)
self.persistence.delete_volume.assert_called_once_with(vol)
self.assertEqual('deleted', vol._ovo.status)
def test_delete_error_with_snaps(self):
vol = objects.Volume(self.backend_name, size=10, status='available')
snap = objects.Snapshot(vol)
vol._snapshots.append(snap)
self.assertRaises(exception.InvalidVolume, vol.delete)
self.assertEqual('available', vol._ovo.status)
def test_delete_error(self):
vol = objects.Volume(self.backend_name,
name='vol_name', description='vol_desc', size=10)
self.backend.driver.delete_volume.side_effect = exception.NotFound
with self.assertRaises(exception.NotFound) as assert_context:
vol.delete()
self.assertEqual(vol, assert_context.exception.resource)
self.backend.driver.delete_volume.assert_called_once_with(vol._ovo)
self.assertEqual('error_deleting', vol._ovo.status)
def test_extend(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
res = vol.extend(11)
        self.assertEqual(11 * (1024 ** 3), res)  # size is in bytes, not GiB
self.backend.driver.extend_volume.assert_called_once_with(vol._ovo, 11)
self.persistence.set_volume.assert_called_once_with(vol)
self.assertEqual('available', vol.status)
self.assertEqual(11, vol.size)
def test_extend_attached(self):
vol = objects.Volume(self.backend_name, status='in-use', size=10)
vol.local_attach = mock.Mock()
res = vol.extend(11)
self.assertEqual(vol.local_attach.extend.return_value, res)
self.backend.driver.extend_volume.assert_called_once_with(vol._ovo, 11)
vol.local_attach.extend.assert_called_once_with()
self.persistence.set_volume.assert_called_once_with(vol)
self.assertEqual('in-use', vol.status)
self.assertEqual(11, vol.size)
def test_extend_error(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
self.backend.driver.extend_volume.side_effect = exception.NotFound
with self.assertRaises(exception.NotFound) as assert_context:
vol.extend(11)
self.assertEqual(vol, assert_context.exception.resource)
self.backend.driver.extend_volume.assert_called_once_with(vol._ovo, 11)
self.persistence.set_volume.assert_called_once_with(vol)
self.assertEqual('error', vol.status)
self.assertEqual(10, vol.size)
def test_clone(self):
vol = objects.Volume(self.backend_name, status='available', size=10,
extra_specs={'e': 'v'}, qos_specs={'q': 'qv'})
mock_clone = self.backend.driver.create_cloned_volume
mock_clone.return_value = None
self.assertEqual(0, len(self.backend.volumes))
res = vol.clone(size=11)
mock_clone.assert_called_once_with(res._ovo, vol._ovo)
self.persistence.set_volume.assert_called_once_with(res)
self.assertEqual('available', res._ovo.status)
self.assertEqual(11, res.size)
self.assertEqual(vol.id, vol.volume_type_id)
self.assertNotEqual(vol.id, res.id)
self.assertEqual(res.id, res.volume_type_id)
self.assertEqual(vol.volume_type.extra_specs,
res.volume_type.extra_specs)
self.assertEqual(vol.volume_type.qos_specs.specs,
res.volume_type.qos_specs.specs)
self.assertEqual(vol.id, res.source_volid)
self.assertEqual(1, len(self.backend.volumes))
self.assertIsInstance(self.backend.volumes[0], objects.Volume)
def test_clone_error(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_clone = self.backend.driver.create_cloned_volume
mock_clone.side_effect = exception.NotFound
with self.assertRaises(exception.NotFound) as assert_context:
vol.clone(size=11)
# Cloning volume is still in flight
self.assertEqual(1, len(self.backend._volumes_inflight))
new_vol = list(self.backend._volumes_inflight.values())[0]
self.assertEqual(new_vol, assert_context.exception.resource)
mock_clone.assert_called_once_with(new_vol, vol._ovo)
self.persistence.set_volume.assert_called_once_with(new_vol)
self.assertEqual('error', new_vol._ovo.status)
self.assertEqual(11, new_vol.size)
def test_create_snapshot(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_create = self.backend.driver.create_snapshot
mock_create.return_value = None
snap = vol.create_snapshot()
self.assertEqual([snap], vol.snapshots)
self.assertEqual([snap._ovo], vol._ovo.snapshots.objects)
mock_create.assert_called_once_with(snap._ovo)
self.assertEqual('available', snap.status)
self.assertEqual(10, snap.volume_size)
self.persistence.set_snapshot.assert_called_once_with(snap)
def test_create_snapshot_error(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_create = self.backend.driver.create_snapshot
mock_create.side_effect = exception.NotFound
self.assertRaises(exception.NotFound, vol.create_snapshot)
self.assertEqual(1, len(vol.snapshots))
snap = vol.snapshots[0]
self.persistence.set_snapshot.assert_called_once_with(snap)
self.assertEqual('error', snap.status)
mock_create.assert_called_once_with(snap._ovo)
@mock.patch('cinder.volume.volume_utils.brick_get_connector_properties')
@mock.patch('cinderlib.objects.Volume.connect')
def test_attach(self, mock_connect, mock_conn_props):
vol = objects.Volume(self.backend_name, status='available', size=10)
res = vol.attach()
mock_conn_props.assert_called_once_with(
self.backend.configuration.use_multipath_for_image_xfer,
self.backend.configuration.enforce_multipath_for_image_xfer)
mock_connect.assert_called_once_with(mock_conn_props.return_value)
mock_connect.return_value.attach.assert_called_once_with()
self.assertEqual(mock_connect.return_value, res)
@mock.patch('cinder.volume.volume_utils.brick_get_connector_properties')
@mock.patch('cinderlib.objects.Volume.connect')
def test_attach_error_connect(self, mock_connect, mock_conn_props):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_connect.side_effect = exception.NotFound
self.assertRaises(exception.NotFound, vol.attach)
mock_conn_props.assert_called_once_with(
self.backend.configuration.use_multipath_for_image_xfer,
self.backend.configuration.enforce_multipath_for_image_xfer)
mock_connect.assert_called_once_with(mock_conn_props.return_value)
mock_connect.return_value.attach.assert_not_called()
@mock.patch('cinderlib.objects.Volume.disconnect')
@mock.patch('cinder.volume.volume_utils.brick_get_connector_properties')
@mock.patch('cinderlib.objects.Volume.connect')
def test_attach_error_attach(self, mock_connect, mock_conn_props,
mock_disconnect):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_attach = mock_connect.return_value.attach
mock_attach.side_effect = exception.NotFound
self.assertRaises(exception.NotFound, vol.attach)
mock_conn_props.assert_called_once_with(
self.backend.configuration.use_multipath_for_image_xfer,
self.backend.configuration.enforce_multipath_for_image_xfer)
mock_connect.assert_called_once_with(mock_conn_props.return_value)
mock_disconnect.assert_called_once_with(mock_connect.return_value)
def test_detach_not_local(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
self.assertRaises(exception.NotLocal, vol.detach)
def test_detach(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_conn = mock.Mock()
vol.local_attach = mock_conn
vol.detach(mock.sentinel.force, mock.sentinel.ignore_errors)
mock_conn.detach.assert_called_once_with(mock.sentinel.force,
mock.sentinel.ignore_errors,
mock.ANY)
mock_conn.disconnect.assert_called_once_with(mock.sentinel.force)
def test_detach_error_detach(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_conn = mock.Mock()
mock_conn.detach.side_effect = exception.NotFound
vol.local_attach = mock_conn
self.assertRaises(exception.NotFound,
vol.detach,
False, mock.sentinel.ignore_errors)
mock_conn.detach.assert_called_once_with(False,
mock.sentinel.ignore_errors,
mock.ANY)
mock_conn.disconnect.assert_not_called()
def test_detach_error_disconnect(self):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_conn = mock.Mock()
mock_conn.disconnect.side_effect = exception.NotFound
vol.local_attach = mock_conn
self.assertRaises(objects.brick_exception.ExceptionChainer,
vol.detach,
mock.sentinel.force, False)
mock_conn.detach.assert_called_once_with(mock.sentinel.force,
False,
mock.ANY)
mock_conn.disconnect.assert_called_once_with(mock.sentinel.force)
@mock.patch('cinderlib.objects.Connection.connect')
def test_connect(self, mock_connect):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_connect.return_value._ovo = objects.cinder_objs.VolumeAttachment()
mock_export = self.backend.driver.create_export
mock_export.return_value = None
res = vol.connect(mock.sentinel.conn_dict)
mock_connect.assert_called_once_with(vol, mock.sentinel.conn_dict)
self.assertEqual([res], vol.connections)
self.assertEqual([res._ovo], vol._ovo.volume_attachment.objects)
self.assertEqual('in-use', vol.status)
self.persistence.set_volume.assert_called_once_with(vol)
@mock.patch('cinderlib.objects.Volume._remove_export')
@mock.patch('cinderlib.objects.Connection.connect')
def test_connect_error(self, mock_connect, mock_remove_export):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_export = self.backend.driver.create_export
mock_export.return_value = None
mock_connect.side_effect = exception.NotFound
self.assertRaises(exception.NotFound,
vol.connect, mock.sentinel.conn_dict)
mock_connect.assert_called_once_with(vol, mock.sentinel.conn_dict)
self.assertEqual('available', vol.status)
self.persistence.set_volume.assert_not_called()
mock_remove_export.assert_called_once_with()
@mock.patch('cinderlib.objects.Volume._disconnect')
def test_disconnect(self, mock_disconnect):
vol = objects.Volume(self.backend_name, status='available', size=10)
mock_conn = mock.Mock()
vol.disconnect(mock_conn, mock.sentinel.force)
mock_conn._disconnect.assert_called_once_with(mock.sentinel.force)
mock_disconnect.assert_called_once_with(mock_conn)
@mock.patch('cinderlib.objects.Volume._connection_removed')
@mock.patch('cinderlib.objects.Volume._remove_export')
def test__disconnect(self, mock_remove_export, mock_conn_removed):
vol = objects.Volume(self.backend_name, status='in-use', size=10)
vol._disconnect(mock.sentinel.connection)
mock_remove_export.assert_called_once_with()
mock_conn_removed.assert_called_once_with(mock.sentinel.connection)
self.assertEqual('available', vol.status)
self.persistence.set_volume.assert_called_once_with(vol)
def test__remove_export(self):
vol = objects.Volume(self.backend_name, status='in-use', size=10)
vol._remove_export()
self.backend.driver.remove_export.assert_called_once_with(vol._context,
vol._ovo)
@mock.patch('cinderlib.objects.Volume._remove_export')
def test_cleanup(self, mock_remove_export):
vol = objects.Volume(self.backend_name, status='in-use', size=10)
connections = [mock.Mock(), mock.Mock()]
vol._connections = connections
vol.cleanup()
mock_remove_export.assert_called_once_with()
for c in connections:
c.detach.assert_called_once_with()
def test__snapshot_removed_not_loaded(self):
vol = objects.Volume(self.backend,
name='vol_name', description='vol_desc', size=10)
vol._snapshots = None
snap = objects.Snapshot(vol)
# Just check it doesn't break
vol._snapshot_removed(snap)
def test__snapshot_removed_not_present(self):
vol = objects.Volume(self.backend,
name='vol_name', description='vol_desc', size=10)
snap = objects.Snapshot(vol)
snap2 = objects.Snapshot(vol)
vol._snapshots = [snap2]
vol._ovo.snapshots.objects = [snap2._ovo]
# Just check it doesn't break or remove any other snaps
vol._snapshot_removed(snap)
self.assertEqual([snap2], vol._snapshots)
self.assertEqual([snap2._ovo], vol._ovo.snapshots.objects)
def test__snapshot_removed(self):
vol = objects.Volume(self.backend,
name='vol_name', description='vol_desc', size=10)
snap = objects.Snapshot(vol)
snap2 = objects.Snapshot(vol)
snap_other_instance = objects.Snapshot(vol, id=snap.id,
description='d')
snap_other_instance2 = objects.Snapshot(vol, id=snap.id,
description='e')
vol._snapshots = [snap2, snap_other_instance]
vol._ovo.snapshots.objects = [snap2._ovo, snap_other_instance2._ovo]
# Just check it doesn't break or remove any other snaps
vol._snapshot_removed(snap)
self.assertEqual([snap2], vol._snapshots)
self.assertEqual([snap2._ovo], vol._ovo.snapshots.objects)
def test__connection_removed_not_loaded(self):
vol = objects.Volume(self.backend,
name='vol_name', description='vol_desc', size=10)
vol._connections = None
conn = objects.Connection(self.backend, connection_info={'conn': {}})
# Just check it doesn't break
vol._connection_removed(conn)
def test__connection_removed_not_present(self):
vol = objects.Volume(self.backend,
name='vol_name', description='vol_desc', size=10)
conn = objects.Connection(self.backend, connection_info={'conn': {}})
conn2 = objects.Connection(self.backend, connection_info={'conn': {}})
vol._connections = [conn2]
vol._ovo.volume_attachment.objects = [conn2._ovo]
# Just check it doesn't break or remove any other connections
vol._connection_removed(conn)
self.assertEqual([conn2], vol._connections)
self.assertEqual([conn2._ovo], vol._ovo.volume_attachment.objects)
def test__connection_removed(self):
vol = objects.Volume(self.backend, size=10)
conn = objects.Connection(self.backend, connection_info={'conn': {}})
conn2 = objects.Connection(self.backend, connection_info={'conn': {}})
conn_other_instance = objects.Connection(self.backend, id=conn.id,
connection_info={'conn': {}})
conn_other_instance2 = objects.Connection(self.backend, id=conn.id,
connection_info={'conn': {}})
vol._connections = [conn2, conn_other_instance]
vol._ovo.volume_attachment.objects = [conn2._ovo,
conn_other_instance2._ovo]
# Just check it doesn't break or remove any other connections
vol._connection_removed(conn)
self.assertEqual([conn2], vol._connections)
self.assertEqual([conn2._ovo], vol._ovo.volume_attachment.objects)
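The tests above pin down the public attach/detach surface of a cinderlib Volume. A minimal usage sketch of the same flow (hypothetical names: assumes an initialized Backend instance 'lvm' and a connector properties dict 'connector' obtained elsewhere):

    import cinderlib

    vol = cinderlib.Volume(lvm, size=10, name='disk')
    vol.create()                   # provision the volume on the backend
    conn = vol.connect(connector)  # create export and map it; status -> 'in-use'
    vol.disconnect(conn)           # unmap; status returns to 'available'
    vol.cleanup()                  # detach leftovers and remove the export
    vol.delete()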

View File

@ -1,561 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from oslo_config import cfg
import cinderlib
from cinderlib.persistence import base as persistence_base
from cinderlib.tests.unit.persistence import helper
from cinderlib.tests.unit import utils
class BasePersistenceTest(helper.TestHelper):
def setUp(self):
super(BasePersistenceTest, self).setUp()
def assertListEqualObj(self, expected, actual):
exp = [self._convert_to_dict(e) for e in expected]
act = [self._convert_to_dict(a) for a in actual]
self.assertListEqual(exp, act)
def assertEqualObj(self, expected, actual):
exp = self._convert_to_dict(expected)
act = self._convert_to_dict(actual)
self.assertDictEqual(exp, act)
def test_db(self):
raise NotImplementedError('Test class must implement this method')
def test_set_volume(self):
raise NotImplementedError('Test class must implement this method')
def test_get_volumes_all(self):
vols = self.create_n_volumes(2)
res = self.persistence.get_volumes()
self.assertListEqualObj(vols, self.sorted(res))
def test_get_volumes_by_id(self):
vols = self.create_n_volumes(2)
res = self.persistence.get_volumes(volume_id=vols[1].id)
# Use res instead of res[0] in case res is an empty list
self.assertListEqualObj([vols[1]], res)
def test_get_volumes_by_id_not_found(self):
self.create_n_volumes(2)
res = self.persistence.get_volumes(volume_id='fake-uuid')
self.assertListEqualObj([], res)
def test_get_volumes_by_name_single(self):
vols = self.create_n_volumes(2)
res = self.persistence.get_volumes(volume_name=vols[1].name)
self.assertListEqualObj([vols[1]], res)
def test_get_volumes_by_name_multiple(self):
volume_name = 'disk'
vols = self.create_volumes([{'size': 1, 'name': volume_name},
{'size': 2, 'name': volume_name}])
res = self.persistence.get_volumes(volume_name=volume_name)
self.assertListEqualObj(vols, self.sorted(res))
def test_get_volumes_by_name_not_found(self):
self.create_n_volumes(2)
res = self.persistence.get_volumes(volume_name='disk3')
self.assertListEqualObj([], res)
def test_get_volumes_by_backend(self):
vols = self.create_n_volumes(2)
backend2 = utils.FakeBackend(volume_backend_name='fake2')
vol = self.create_volumes([{'backend_or_vol': backend2, 'size': 3}])
res = self.persistence.get_volumes(backend_name=self.backend.id)
self.assertListEqualObj(vols, self.sorted(res))
res = self.persistence.get_volumes(backend_name=backend2.id)
self.assertListEqualObj(vol, res)
def test_get_volumes_by_backend_not_found(self):
self.create_n_volumes(2)
res = self.persistence.get_volumes(backend_name='fake2')
self.assertListEqualObj([], res)
def test_get_volumes_by_multiple(self):
volume_name = 'disk'
vols = self.create_volumes([{'size': 1, 'name': volume_name},
{'size': 2, 'name': volume_name}])
res = self.persistence.get_volumes(backend_name=self.backend.id,
volume_name=volume_name,
volume_id=vols[0].id)
self.assertListEqualObj([vols[0]], res)
def test_get_volumes_by_multiple_not_found(self):
vols = self.create_n_volumes(2)
res = self.persistence.get_volumes(backend_name=self.backend.id,
volume_name=vols[1].name,
volume_id=vols[0].id)
self.assertListEqualObj([], res)
def _check_volume_type(self, extra_specs, qos_specs, vol):
self.assertEqual(vol.id, vol.volume_type.id)
self.assertEqual(vol.id, vol.volume_type.name)
self.assertTrue(vol.volume_type.is_public)
self.assertEqual(extra_specs, vol.volume_type.extra_specs)
if qos_specs:
self.assertEqual(vol.id, vol.volume_type.qos_specs_id)
self.assertEqual(vol.id, vol.volume_type.qos_specs.id)
self.assertEqual(vol.id, vol.volume_type.qos_specs.name)
self.assertEqual('back-end', vol.volume_type.qos_specs.consumer)
self.assertEqual(qos_specs, vol.volume_type.qos_specs.specs)
else:
self.assertIsNone(vol.volume_type.qos_specs_id)
def test_get_volumes_extra_specs(self):
extra_specs = [{'k1': 'v1', 'k2': 'v2'},
{'kk1': 'vv1', 'kk2': 'vv2', 'kk3': 'vv3'}]
vols = self.create_volumes(
[{'size': 1, 'extra_specs': extra_specs[0]},
{'size': 2, 'extra_specs': extra_specs[1]}],
sort=False)
# Check the volume type and the extra specs on created volumes
for i in range(len(vols)):
self._check_volume_type(extra_specs[i], None, vols[i])
# Check that we get what we stored
res = self.persistence.get_volumes(backend_name=self.backend.id)
vols = self.sorted(vols)
self.assertListEqualObj(vols, self.sorted(res))
for i in range(len(vols)):
self._check_volume_type(vols[i].volume_type.extra_specs, {},
vols[i])
def test_get_volumes_qos_specs(self):
qos_specs = [{'q1': 'r1', 'q2': 'r2'},
{'qq1': 'rr1', 'qq2': 'rr2', 'qq3': 'rr3'}]
vols = self.create_volumes(
[{'size': 1, 'qos_specs': qos_specs[0]},
{'size': 2, 'qos_specs': qos_specs[1]}],
sort=False)
# Check the volume type and the extra specs on created volumes
for i in range(len(vols)):
self._check_volume_type({}, qos_specs[i], vols[i])
# Check that we get what we stored
res = self.persistence.get_volumes(backend_name=self.backend.id)
vols = self.sorted(vols)
res = self.sorted(res)
self.assertListEqualObj(vols, res)
for i in range(len(vols)):
self._check_volume_type({}, vols[i].volume_type.qos_specs.specs,
vols[i])
def test_get_volumes_extra_and_qos_specs(self):
qos_specs = [{'q1': 'r1', 'q2': 'r2'},
{'qq1': 'rr1', 'qq2': 'rr2', 'qq3': 'rr3'}]
extra_specs = [{'k1': 'v1', 'k2': 'v2'},
{'kk1': 'vv1', 'kk2': 'vv2', 'kk3': 'vv3'}]
vols = self.create_volumes(
[{'size': 1, 'qos_specs': qos_specs[0],
'extra_specs': extra_specs[0]},
{'size': 2, 'qos_specs': qos_specs[1],
'extra_specs': extra_specs[1]}],
sort=False)
# Check the volume type and the extra specs on created volumes
for i in range(len(vols)):
self._check_volume_type(extra_specs[i], qos_specs[i], vols[i])
# Check that we get what we stored
res = self.persistence.get_volumes(backend_name=self.backend.id)
vols = self.sorted(vols)
self.assertListEqualObj(vols, self.sorted(res))
for i in range(len(vols)):
self._check_volume_type(vols[i].volume_type.extra_specs,
vols[i].volume_type.qos_specs.specs,
vols[i])
def test_delete_volume(self):
vols = self.create_n_volumes(2)
self.persistence.delete_volume(vols[0])
res = self.persistence.get_volumes()
self.assertListEqualObj([vols[1]], res)
def test_delete_volume_not_found(self):
vols = self.create_n_volumes(2)
fake_vol = cinderlib.Volume(backend_or_vol=self.backend)
self.persistence.delete_volume(fake_vol)
res = self.persistence.get_volumes()
self.assertListEqualObj(vols, self.sorted(res))
def test_set_snapshot(self):
raise NotImplementedError('Test class must implement this method')
def test_get_snapshots_all(self):
snaps = self.create_snapshots()
res = self.persistence.get_snapshots()
self.assertListEqualObj(snaps, self.sorted(res))
def test_get_snapshots_by_id(self):
snaps = self.create_snapshots()
res = self.persistence.get_snapshots(snapshot_id=snaps[1].id)
self.assertListEqualObj([snaps[1]], res)
def test_get_snapshots_by_id_not_found(self):
self.create_snapshots()
res = self.persistence.get_snapshots(snapshot_id='fake-uuid')
self.assertListEqualObj([], res)
def test_get_snapshots_by_name_single(self):
snaps = self.create_snapshots()
res = self.persistence.get_snapshots(snapshot_name=snaps[1].name)
self.assertListEqualObj([snaps[1]], res)
def test_get_snapshots_by_name_multiple(self):
snap_name = 'snap'
vol = self.create_volumes([{'size': 1}])[0]
snaps = [cinderlib.Snapshot(vol, name=snap_name) for i in range(2)]
[self.persistence.set_snapshot(snap) for snap in snaps]
res = self.persistence.get_snapshots(snapshot_name=snap_name)
self.assertListEqualObj(self.sorted(snaps), self.sorted(res))
def test_get_snapshots_by_name_not_found(self):
self.create_snapshots()
res = self.persistence.get_snapshots(snapshot_name='snap3')
self.assertListEqualObj([], res)
def test_get_snapshots_by_volume(self):
snaps = self.create_snapshots()
vol = snaps[0].volume
expected_snaps = [snaps[0], cinderlib.Snapshot(vol)]
self.persistence.set_snapshot(expected_snaps[1])
res = self.persistence.get_snapshots(volume_id=vol.id)
self.assertListEqualObj(self.sorted(expected_snaps), self.sorted(res))
def test_get_snapshots_by_volume_not_found(self):
self.create_snapshots()
res = self.persistence.get_snapshots(volume_id='fake_uuid')
self.assertListEqualObj([], res)
def test_get_snapshots_by_multiple(self):
snap_name = 'snap'
vol = self.create_volumes([{'size': 1}])[0]
snaps = [cinderlib.Snapshot(vol, name=snap_name) for i in range(2)]
[self.persistence.set_snapshot(snap) for snap in snaps]
res = self.persistence.get_snapshots(volume_id=vol.id,
snapshot_name=snap_name,
snapshot_id=snaps[0].id)
self.assertListEqualObj([snaps[0]], self.sorted(res))
def test_get_snapshots_by_multiple_not_found(self):
snaps = self.create_snapshots()
res = self.persistence.get_snapshots(snapshot_name=snaps[1].name,
volume_id=snaps[0].volume.id)
self.assertListEqualObj([], res)
def test_delete_snapshot(self):
snaps = self.create_snapshots()
self.persistence.delete_snapshot(snaps[0])
res = self.persistence.get_snapshots()
self.assertListEqualObj([snaps[1]], res)
def test_delete_snapshot_not_found(self):
snaps = self.create_snapshots()
fake_snap = cinderlib.Snapshot(snaps[0].volume)
self.persistence.delete_snapshot(fake_snap)
res = self.persistence.get_snapshots()
self.assertListEqualObj(snaps, self.sorted(res))
def test_set_connection(self):
raise NotImplementedError('Test class must implement this method')
def test_get_connections_all(self):
conns = self.create_connections()
res = self.persistence.get_connections()
self.assertListEqual(conns, self.sorted(res))
def test_get_connections_by_id(self):
conns = self.create_connections()
res = self.persistence.get_connections(connection_id=conns[1].id)
self.assertListEqualObj([conns[1]], res)
def test_get_connections_by_id_not_found(self):
self.create_connections()
res = self.persistence.get_connections(connection_id='fake-uuid')
self.assertListEqualObj([], res)
def test_get_connections_by_volume(self):
conns = self.create_connections()
vol = conns[0].volume
expected_conns = [conns[0], cinderlib.Connection(
self.backend, volume=vol, connection_info={'conn': {'data': {}}})]
self.persistence.set_connection(expected_conns[1])
res = self.persistence.get_connections(volume_id=vol.id)
self.assertListEqualObj(self.sorted(expected_conns), self.sorted(res))
def test_get_connections_by_volume_not_found(self):
self.create_connections()
res = self.persistence.get_connections(volume_id='fake_uuid')
self.assertListEqualObj([], res)
def test_get_connections_by_multiple(self):
vol = self.create_volumes([{'size': 1}])[0]
conns = [cinderlib.Connection(self.backend, volume=vol,
connection_info={'conn': {'data': {}}})
for i in range(2)]
[self.persistence.set_connection(conn) for conn in conns]
res = self.persistence.get_connections(volume_id=vol.id,
connection_id=conns[0].id)
self.assertListEqualObj([conns[0]], self.sorted(res))
def test_get_connections_by_multiple_not_found(self):
conns = self.create_connections()
res = self.persistence.get_connections(volume_id=conns[0].volume.id,
connection_id=conns[1].id)
self.assertListEqualObj([], res)
def test_delete_connection(self):
conns = self.create_connections()
self.persistence.delete_connection(conns[1])
res = self.persistence.get_connections()
self.assertListEqualObj([conns[0]], res)
def test_delete_connection_not_found(self):
conns = self.create_connections()
fake_conn = cinderlib.Connection(
self.backend,
volume=conns[0].volume,
connection_info={'conn': {'data': {}}})
self.persistence.delete_connection(fake_conn)
res = self.persistence.get_connections()
self.assertListEqualObj(conns, self.sorted(res))
def test_set_key_values(self):
raise NotImplementedError('Test class must implement this method')
def assertKVsEqual(self, expected, actual):
if len(expected) == len(actual):
for (key, value), act in zip(expected, actual):
self.assertEqual(key, act.key)
self.assertEqual(value, act.value)
return
self.fail('%s is not equal to %s' % (expected, actual))
def test_get_key_values_all(self):
kvs = self.create_key_values()
res = self.persistence.get_key_values()
self.assertListEqual(kvs, self.sorted(res, 'key'))
def test_get_key_values_by_key(self):
kvs = self.create_key_values()
res = self.persistence.get_key_values(key=kvs[1].key)
self.assertListEqual([kvs[1]], res)
def test_get_key_values_by_key_not_found(self):
self.create_key_values()
res = self.persistence.get_key_values(key='fake-uuid')
self.assertListEqual([], res)
def test_delete_key_value(self):
kvs = self.create_key_values()
self.persistence.delete_key_value(kvs[1])
res = self.persistence.get_key_values()
self.assertListEqual([kvs[0]], res)
def test_delete_key_not_found(self):
kvs = self.create_key_values()
fake_key = cinderlib.KeyValue('fake-key')
self.persistence.delete_key_value(fake_key)
res = self.persistence.get_key_values()
self.assertListEqual(kvs, self.sorted(res, 'key'))
@mock.patch('cinderlib.persistence.base.DB.volume_type_get')
def test__volume_type_get_by_name(self, get_mock):
# Only test when using our fake DB class. We cannot use
# unittest.skipUnless because persistence is configured in setUpClass,
# which is called after the decorator.
if not isinstance(cinderlib.objects.Backend.persistence.db,
persistence_base.DB):
return
# Volume type id and name are the same, so get-by-name must behave
# exactly like get-by-id
res = self.persistence.db._volume_type_get_by_name(self.context,
mock.sentinel.name)
self.assertEqual(get_mock.return_value, res)
get_mock.assert_called_once_with(self.context, mock.sentinel.name)
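# NOTE: a minimal standalone sketch of the run-time skip pattern used by
# the fake-DB-only tests in this class (hypothetical TestCase; the guard
# cannot be a @unittest.skipUnless decorator because decorator arguments
# are evaluated when the class is defined, before setUpClass has
# configured the persistence):
#
#     class Example(unittest.TestCase):
#         @classmethod
#         def setUpClass(cls):
#             cls.uses_fake_db = True
#
#         def test_fake_db_only(self):
#             if not self.uses_fake_db:
#                 return  # too late for skipUnless, so bail out here
#             ...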
def test_volume_type_get_by_id(self):
extra_specs = [{'k1': 'v1', 'k2': 'v2'},
{'kk1': 'vv1', 'kk2': 'vv2', 'kk3': 'vv3'}]
vols = self.create_volumes(
[{'size': 1, 'extra_specs': extra_specs[0]},
{'size': 2, 'extra_specs': extra_specs[1]}],
sort=False)
res = self.persistence.db.volume_type_get(self.context, vols[0].id)
self.assertEqual(vols[0].id, res['id'])
self.assertEqual(vols[0].id, res['name'])
self.assertEqual(extra_specs[0], res['extra_specs'])
def test_volume_get_all_by_host(self):
# Only test when using our fake DB class. We cannot use
# unittest.skipUnless because persistence is configured in setUpClass,
# which is called after the decorator.
if not isinstance(cinderlib.objects.Backend.persistence.db,
persistence_base.DB):
return
persistence_db = self.persistence.db
host = '%s@%s' % (cfg.CONF.host, self.backend.id)
vols = [v._ovo for v in self.create_n_volumes(2)]
backend2 = utils.FakeBackend(volume_backend_name='fake2')
vol = self.create_volumes([{'backend_or_vol': backend2, 'size': 3}])
# We should be able to get it using the host@backend
res = persistence_db.volume_get_all_by_host(self.context, host)
self.assertListEqualObj(vols, self.sorted(res))
# Confirm it also works when we pass a host that includes the pool
res = persistence_db.volume_get_all_by_host(self.context, vols[0].host)
self.assertListEqualObj(vols, self.sorted(res))
# Check we also get the other backend's volume
host = '%s@%s' % (cfg.CONF.host, backend2.id)
res = persistence_db.volume_get_all_by_host(self.context, host)
self.assertListEqualObj(vol[0]._ovo, res[0])
def test__volume_admin_metadata_get(self):
# Only test when using our fake DB class. We cannot use
# unittest.skipUnless because persistence is configured in setUpClass,
# which is called after the decorator.
if not isinstance(cinderlib.objects.Backend.persistence.db,
persistence_base.DB):
return
admin_metadata = {'k': 'v'}
vols = self.create_volumes([{'size': 1,
'admin_metadata': admin_metadata}])
result = self.persistence.db._volume_admin_metadata_get(self.context,
vols[0].id)
self.assertDictEqual(admin_metadata, result)
def test__volume_admin_metadata_update(self):
# Only test when using our fake DB class. We cannot use
# unittest.skipUnless because persistence is configured in setUpClass,
# which is called after the decorator.
if not isinstance(cinderlib.objects.Backend.persistence.db,
persistence_base.DB):
return
create_admin_metadata = {'k': 'v', 'k2': 'v2'}
admin_metadata = {'k2': 'v2.1', 'k3': 'v3'}
vols = self.create_volumes([{'size': 1,
'admin_metadata': create_admin_metadata}])
self.persistence.db._volume_admin_metadata_update(self.context,
vols[0].id,
admin_metadata,
delete=True,
add=True,
update=True)
result = self.persistence.db._volume_admin_metadata_get(self.context,
vols[0].id)
self.assertDictEqual({'k2': 'v2.1', 'k3': 'v3'}, result)
def test__volume_admin_metadata_update_do_nothing(self):
# Only test when using our fake DB class. We cannot use
# unittest.skipUnless because persistence is configured in setUpClass,
# which is called after the decorator.
if not isinstance(cinderlib.objects.Backend.persistence.db,
persistence_base.DB):
return
create_admin_metadata = {'k': 'v', 'k2': 'v2'}
admin_metadata = {'k2': 'v2.1', 'k3': 'v3'}
vols = self.create_volumes([{'size': 1,
'admin_metadata': create_admin_metadata}])
# Setting delete, add, and update to False means we don't do anything
self.persistence.db._volume_admin_metadata_update(self.context,
vols[0].id,
admin_metadata,
delete=False,
add=False,
update=False)
result = self.persistence.db._volume_admin_metadata_get(self.context,
vols[0].id)
self.assertDictEqual(create_admin_metadata, result)
def test_volume_admin_metadata_delete(self):
# Only test when using our fake DB class. We cannot use
# unittest.skipUnless because persistence is configured in setUpClass,
# which is called after the decorator.
if not isinstance(cinderlib.objects.Backend.persistence.db,
persistence_base.DB):
return
admin_metadata = {'k': 'v', 'k2': 'v2'}
vols = self.create_volumes([{'size': 1,
'admin_metadata': admin_metadata}])
self.persistence.db.volume_admin_metadata_delete(self.context,
vols[0].id,
'k2')
result = self.persistence.db._volume_admin_metadata_get(self.context,
vols[0].id)
self.assertDictEqual({'k': 'v'}, result)
@mock.patch('cinderlib.objects.Volume.get_by_id')
@mock.patch('cinderlib.objects.Volume.snapshots',
new_callable=mock.PropertyMock)
@mock.patch('cinderlib.objects.Volume.connections',
new_callable=mock.PropertyMock)
def test_volume_refresh(self, get_conns_mock, get_snaps_mock, get_mock):
vol = self.create_n_volumes(1)[0]
vol_id = vol.id
# This is to simulate a situation where the persistence does lazy loading
vol._snapshots = vol._connections = None
get_mock.return_value = cinderlib.Volume(vol)
vol.refresh()
get_mock.assert_called_once_with(vol_id)
get_conns_mock.assert_not_called()
get_snaps_mock.assert_not_called()
self.assertIsNone(vol.local_attach)
@mock.patch('cinderlib.objects.Volume.get_by_id')
@mock.patch('cinderlib.objects.Volume.snapshots',
new_callable=mock.PropertyMock)
@mock.patch('cinderlib.objects.Volume.connections',
new_callable=mock.PropertyMock)
def test_volume_refresh_with_conn_and_snaps(self, get_conns_mock,
get_snaps_mock, get_mock):
vol = self.create_n_volumes(1)[0]
vol_id = vol.id
vol.local_attach = mock.sentinel.local_attach
get_mock.return_value = cinderlib.Volume(vol)
vol.refresh()
get_mock.assert_called_once_with(vol_id)
get_conns_mock.assert_called_once_with()
get_snaps_mock.assert_called_once_with()
self.assertIs(mock.sentinel.local_attach, vol.local_attach)
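BasePersistenceTest deliberately leaves test_db, test_set_volume, test_set_snapshot, test_set_connection and test_set_key_values abstract; each persistence plugin supplies its own. A minimal sketch of such a subclass (hypothetical, modelled on the memory plugin tests further down):

    class ExamplePersistenceTest(BasePersistenceTest):
        PERSISTENCE_CFG = {'storage': 'memory'}

        def test_db(self):
            self.assertIsInstance(self.persistence.db, persistence_base.DB)

        def test_set_volume(self):
            vol = cinderlib.Volume(self.backend, size=1, name='disk')
            self.persistence.set_volume(vol)
            self.assertListEqualObj([vol], self.persistence.get_volumes())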

View File

@ -1,125 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cinder.cmd import volume as volume_cmd
from cinder.db.sqlalchemy import api
from cinder.db.sqlalchemy import models
from cinder import objects
from cinder.objects import base as cinder_base_ovo
from oslo_versionedobjects import fields
import cinderlib
from cinderlib.tests.unit import base
class TestHelper(base.BaseTest):
@classmethod
def setUpClass(cls):
# Save OVO methods that some persistence plugins mess up
cls.ovo_methods = {}
for ovo_name in cinder_base_ovo.CinderObjectRegistry.obj_classes():
ovo_cls = getattr(objects, ovo_name)
cls.ovo_methods[ovo_name] = {
'save': getattr(ovo_cls, 'save', None),
'get_by_id': getattr(ovo_cls, 'get_by_id', None),
}
cls.original_impl = volume_cmd.session.IMPL
cinderlib.Backend.global_initialization = False
cinderlib.setup(persistence_config=cls.PERSISTENCE_CFG)
@classmethod
def tearDownClass(cls):
volume_cmd.session.IMPL = cls.original_impl
cinderlib.Backend.global_initialization = False
# Cannot just replace the context manager itself because it is already
# decorating cinder DB methods and those would continue accessing the
# old database, so we replace the existing CM's internal transaction
# factory, effectively "resetting" the context manager.
cm = api.main_context_manager
if cm.is_started:
cm._root_factory = api.enginefacade._TransactionFactory()
for ovo_name, methods in cls.ovo_methods.items():
ovo_cls = getattr(objects, ovo_name)
for method_name, method in methods.items():
if method:
setattr(ovo_cls, method_name, method)
def setUp(self):
super(TestHelper, self).setUp()
self.context = cinderlib.objects.CONTEXT
def sorted(self, resources, key='id'):
return sorted(resources, key=lambda x: getattr(x, key))
def create_n_volumes(self, n):
return self.create_volumes([{'size': i, 'name': 'disk%s' % i}
for i in range(1, n + 1)])
def create_volumes(self, data, sort=True):
vols = []
for d in data:
d.setdefault('backend_or_vol', self.backend)
vol = cinderlib.Volume(**d)
vols.append(vol)
self.persistence.set_volume(vol)
if sort:
return self.sorted(vols)
return vols
def create_snapshots(self):
vols = self.create_n_volumes(2)
snaps = []
for i, vol in enumerate(vols):
snap = cinderlib.Snapshot(vol, name='snaps%s' % (i + i))
snaps.append(snap)
self.persistence.set_snapshot(snap)
return self.sorted(snaps)
def create_connections(self):
vols = self.create_n_volumes(2)
conns = []
for i, vol in enumerate(vols):
conn = cinderlib.Connection(self.backend, volume=vol,
connection_info={'conn': {'data': {}}})
conns.append(conn)
self.persistence.set_connection(conn)
return self.sorted(conns)
def create_key_values(self):
kvs = []
for i in range(2):
kv = cinderlib.KeyValue(key='key%i' % i, value='value%i' % i)
kvs.append(kv)
self.persistence.set_key_value(kv)
return kvs
def _convert_to_dict(self, obj):
if isinstance(obj, models.BASE):
return dict(obj)
if not isinstance(obj, cinderlib.objects.Object):
return obj
res = dict(obj._ovo)
for key, value in obj._ovo.fields.items():
if isinstance(value, fields.ObjectField):
res.pop(key, None)
res.pop('glance_metadata', None)
res.pop('metadata', None)
return res
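Outside the test suite this bootstrap is a single call to cinderlib's public setup; a minimal sketch with the in-memory persistence plugin (KeyValue is used here because it needs no configured storage backend):

    import cinderlib

    cinderlib.setup(persistence_config={'storage': 'memory'})
    kv = cinderlib.KeyValue(key='key0', value='value0')
    cinderlib.Backend.persistence.set_key_value(kv)
    assert cinderlib.Backend.persistence.get_key_values(key='key0') == [kv]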

View File

@ -1,43 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import cinderlib
from cinderlib.tests.unit.persistence import helper
class TestBasePersistence(helper.TestHelper):
PERSISTENCE_CFG = {'storage': 'memory'}
def tearDown(self):
self.persistence.volumes.clear()
self.persistence.snapshots.clear()
self.persistence.connections.clear()
self.persistence.key_values.clear()
super(TestBasePersistence, self).tearDown()
def test_get_changed_fields_volume(self):
vol = cinderlib.Volume(self.backend, size=1, extra_specs={'k': 'v'})
self.persistence.set_volume(vol)
vol._ovo.display_name = "abcde"
result = self.persistence.get_changed_fields(vol)
self.assertEqual(result, {'display_name': vol._ovo.display_name})
def test_get_changed_fields_snapshot(self):
vol = cinderlib.Volume(self.backend, size=1, extra_specs={'k': 'v'})
snap = cinderlib.Snapshot(vol)
self.persistence.set_snapshot(snap)
snap._ovo.display_name = "abcde"
result = self.persistence.get_changed_fields(snap)
self.assertEqual(result, {'display_name': snap._ovo.display_name})
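The dirty-field contract these two tests pin down, as a short sketch (hypothetical handles: 'backend' is an initialized Backend and 'persistence' is cinderlib.Backend.persistence; only fields modified after the last save are reported):

    vol = cinderlib.Volume(backend, size=1)
    persistence.set_volume(vol)
    vol._ovo.display_name = 'new-name'
    changed = persistence.get_changed_fields(vol)
    assert changed == {'display_name': 'new-name'}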

View File

@ -1,173 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tempfile
from unittest import mock
import alembic.script.revision
import alembic.util.exc
from cinder.db.sqlalchemy import api as sqla_api
from cinder.db.sqlalchemy import models as sqla_models
from cinder import objects as cinder_ovos
from oslo_db import api as oslo_db_api
import cinderlib
from cinderlib.persistence import dbms
from cinderlib.tests.unit.persistence import base
class TestDBPersistence(base.BasePersistenceTest):
CONNECTION = 'sqlite:///' + tempfile.NamedTemporaryFile().name
PERSISTENCE_CFG = {'storage': 'db',
'connection': CONNECTION}
def tearDown(self):
super(TestDBPersistence, self).tearDown()
with sqla_api.main_context_manager.writer.using(self.context):
sqla_api.model_query(self.context, sqla_models.Snapshot).delete()
sqla_api.model_query(self.context,
sqla_models.VolumeAttachment).delete()
sqla_api.model_query(self.context, sqla_models.Volume).delete()
self.context.session.query(dbms.KeyValue).delete()
def test_db(self):
self.assertIsInstance(self.persistence.db,
oslo_db_api.DBAPI)
def test_set_volume(self):
res = sqla_api.volume_get_all(self.context)
self.assertListEqual([], res)
vol = cinderlib.Volume(self.backend, size=1, name='disk')
expected = {'availability_zone': vol.availability_zone,
'size': vol.size, 'name': vol.name}
self.persistence.set_volume(vol)
db_vol = sqla_api.volume_get(self.context, vol.id)
actual = {'availability_zone': db_vol.availability_zone,
'size': db_vol.size, 'name': db_vol.display_name}
self.assertDictEqual(expected, actual)
def test_set_snapshot(self):
vol = cinderlib.Volume(self.backend, size=1, name='disk')
# This will assign a volume type, which is necessary for the snapshot
vol.save()
snap = cinderlib.Snapshot(vol, name='disk')
self.assertEqual(0, len(sqla_api.snapshot_get_all(self.context)))
self.persistence.set_snapshot(snap)
db_entries = sqla_api.snapshot_get_all(self.context)
self.assertEqual(1, len(db_entries))
ovo_snap = cinder_ovos.Snapshot(self.context)
ovo_snap._from_db_object(ovo_snap._context, ovo_snap, db_entries[0])
cl_snap = cinderlib.Snapshot(vol, __ovo=ovo_snap)
self.assertEqualObj(snap, cl_snap)
def test_set_connection(self):
vol = cinderlib.Volume(self.backend, size=1, name='disk')
conn = cinderlib.Connection(self.backend, volume=vol, connector={},
connection_info={'conn': {'data': {}}})
self.assertEqual(0,
len(sqla_api.volume_attachment_get_all(self.context)))
self.persistence.set_connection(conn)
db_entries = sqla_api.volume_attachment_get_all(self.context)
self.assertEqual(1, len(db_entries))
ovo_conn = cinder_ovos.VolumeAttachment(self.context)
ovo_conn._from_db_object(ovo_conn._context, ovo_conn, db_entries[0])
cl_conn = cinderlib.Connection(vol.backend, volume=vol, __ovo=ovo_conn)
self.assertEqualObj(conn, cl_conn)
def test_set_key_values(self):
with sqla_api.main_context_manager.reader.using(self.context):
res = self.context.session.query(dbms.KeyValue).all()
self.assertListEqual([], res)
expected = [dbms.KeyValue(key='key', value='value')]
self.persistence.set_key_value(expected[0])
with sqla_api.main_context_manager.reader.using(self.context):
actual = self.context.session.query(dbms.KeyValue).all()
self.assertListEqualObj(expected, actual)
def test_create_volume_with_default_volume_type(self):
vol = cinderlib.Volume(self.backend, size=1, name='disk')
self.persistence.set_volume(vol)
self.assertEqual(self.persistence.DEFAULT_TYPE.id, vol.volume_type_id)
self.assertIs(self.persistence.DEFAULT_TYPE, vol.volume_type)
res = sqla_api.volume_type_get(self.context, vol.volume_type_id)
self.assertIsNotNone(res)
self.assertEqual('__DEFAULT__', res['name'])
def test_default_volume_type(self):
self.assertIsInstance(self.persistence.DEFAULT_TYPE,
cinder_ovos.VolumeType)
self.assertEqual('__DEFAULT__', self.persistence.DEFAULT_TYPE.name)
def test_delete_volume_with_metadata(self):
vols = self.create_volumes([{'size': i, 'name': 'disk%s' % i,
'metadata': {'k': 'v', 'k2': 'v2'},
'admin_metadata': {'k': '1'}}
for i in range(1, 3)])
self.persistence.delete_volume(vols[0])
res = self.persistence.get_volumes()
self.assertListEqualObj([vols[1]], res)
for model in (dbms.models.VolumeMetadata,
dbms.models.VolumeAdminMetadata):
with sqla_api.main_context_manager.reader.using(self.context):
query = dbms.sqla_api.model_query(self.context, model)
res = query.filter_by(volume_id=vols[0].id).all()
self.assertEqual([], res)
class TestDBPersistenceNewerSchema(base.helper.TestHelper):
"""Test DBMS plugin can start when the DB has a newer schema."""
CONNECTION = 'sqlite:///' + tempfile.NamedTemporaryFile().name
PERSISTENCE_CFG = {'storage': 'db',
'connection': CONNECTION}
@classmethod
def setUpClass(cls):
pass
def _raise_exc(self):
inner_exc = alembic.script.revision.ResolutionError('foo', 'rev')
outer_exc = alembic.util.exc.CommandError('bar')
self.original_db_sync()
raise outer_exc from inner_exc
def test_newer_db_schema(self):
self.original_db_sync = dbms.migration.db_sync
with mock.patch.object(dbms.migration, 'db_sync',
side_effect=self._raise_exc) as db_sync_mock:
super(TestDBPersistenceNewerSchema, self).setUpClass()
db_sync_mock.assert_called_once()
self.assertIsInstance(cinderlib.Backend.persistence,
dbms.DBPersistence)
class TestMemoryDBPersistence(TestDBPersistence):
PERSISTENCE_CFG = {'storage': 'memory_db'}
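For reference, the user-facing configuration of the DBMS plugin exercised above is a one-liner at setup time (the sqlite URL is an arbitrary example; 'connection' takes a database URL as understood by oslo.db):

    import cinderlib

    cinderlib.setup(persistence_config={'storage': 'db',
                                        'connection': 'sqlite:///cl.sqlite'})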

View File

@ -1,92 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from cinder import objects as ovos
import cinderlib
from cinderlib import objects
from cinderlib.tests.unit.persistence import base
class TestMemoryPersistence(base.BasePersistenceTest):
PERSISTENCE_CFG = {'storage': 'memory'}
def tearDown(self):
# Since this plugin uses class attributes we have to clear them
self.persistence.volumes.clear()
self.persistence.snapshots.clear()
self.persistence.connections.clear()
self.persistence.key_values.clear()
super(TestMemoryPersistence, self).tearDown()
def test_db(self):
self.assertIsInstance(self.persistence.db,
cinderlib.persistence.base.DB)
self.assertEqual(self.persistence.db._DB__connections_get,
ovos.VolumeAttachmentList.get_all_by_volume_id)
def test___connections_get(self):
"""Check we can get volume_attachment from OVO."""
vol = objects.Volume(self.backend, size=10)
vol._connections = None
delattr(vol._ovo, '_obj_volume_attachment')
conns = [objects.Connection(self.backend, connector={'k': 'v'},
volume_id=vol.id, status='attached',
attach_mode='rw',
connection_info={'conn': {}})]
with mock.patch.object(self.persistence, 'get_connections') \
as get_conns_mock:
get_conns_mock.return_value = conns
res = vol._ovo.volume_attachment
self.assertIsInstance(res, ovos.VolumeAttachmentList)
self.assertEqual(1, len(res))
self.assertEqual(conns[0]._ovo, res.objects[0])
get_conns_mock.assert_called_once_with(volume_id=vol.id)
def test_set_volume(self):
vol = cinderlib.Volume(self.backend, size=1, name='disk')
self.assertDictEqual({}, self.persistence.volumes)
self.persistence.set_volume(vol)
self.assertDictEqual({vol.id: vol}, self.persistence.volumes)
def test_set_snapshot(self):
vol = cinderlib.Volume(self.backend, size=1, name='disk')
snap = cinderlib.Snapshot(vol, name='disk')
self.assertDictEqual({}, self.persistence.snapshots)
self.persistence.set_snapshot(snap)
self.assertDictEqual({snap.id: snap}, self.persistence.snapshots)
def test_set_connection(self):
vol = cinderlib.Volume(self.backend, size=1, name='disk')
conn = cinderlib.Connection(self.backend, volume=vol, connector={},
connection_info={'conn': {'data': {}}})
self.assertDictEqual({}, self.persistence.connections)
self.persistence.set_connection(conn)
self.assertDictEqual({conn.id: conn}, self.persistence.connections)
def test_set_key_values(self):
self.assertDictEqual({}, self.persistence.key_values)
expected = [cinderlib.KeyValue('key', 'value')]
self.persistence.set_key_value(expected[0])
self.assertIn('key', self.persistence.key_values)
self.assertEqual(expected, list(self.persistence.key_values.values()))
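Because the memory plugin keeps everything in plain class-level dicts (the reason tearDown above must clear them), persisted state can be inspected directly; a short sketch (assumes an initialized Backend instance 'backend'):

    vol = cinderlib.Volume(backend, size=1, name='disk')
    cinderlib.Backend.persistence.set_volume(vol)
    assert cinderlib.Backend.persistence.volumes == {vol.id: vol}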

View File

@ -1,716 +0,0 @@
# Copyright (c) 2017, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import configparser
import os
from unittest import mock
from cinder import utils
import ddt
from oslo_config import cfg
from oslo_privsep import priv_context
import cinderlib
from cinderlib import objects
from cinderlib.tests.unit import base
@ddt.ddt
class TestCinderlib(base.BaseTest):
@ddt.data([], [1], [2])
def test_list_supported_drivers(self, args):
is_v2 = args == [2]
expected_type = dict if is_v2 else str
expected_keys = {'version', 'class_name', 'supported', 'ci_wiki_name',
'driver_options', 'class_fqn', 'desc'}
drivers = cinderlib.Backend.list_supported_drivers(*args)
self.assertNotEqual(0, len(drivers))
for name, driver_info in drivers.items():
self.assertEqual(expected_keys, set(driver_info.keys()))
# Ensure that the RBDDriver has the rbd_keyring_conf option and
# it's not deprecated
if name == 'RBDDriver':
keyring_conf = [conf for conf in driver_info['driver_options']
if conf['dest'] == 'rbd_keyring_conf']
self.assertEqual(1, len(keyring_conf))
expected_value = False if is_v2 else 'False'
self.assertEqual(expected_value,
keyring_conf[0]['deprecated_for_removal'])
for option in driver_info['driver_options']:
self.assertIsInstance(option['type'], expected_type)
if is_v2:
self.assertIn('type_class', option['type'])
else:
for v in option.values():
self.assertIsInstance(v, str)
def test_lib_assignations(self):
self.assertEqual(cinderlib.setup, cinderlib.Backend.global_setup)
self.assertEqual(cinderlib.Backend, cinderlib.objects.Backend)
self.assertEqual(cinderlib.Backend,
cinderlib.objects.Object.backend_class)
@mock.patch('cinderlib.Backend._apply_backend_workarounds')
@mock.patch('oslo_utils.importutils.import_object')
@mock.patch('cinderlib.Backend._get_backend_config')
@mock.patch('cinderlib.Backend.global_setup')
def test_init(self, mock_global_setup, mock_config, mock_import,
mock_workarounds):
cfg.CONF.set_override('host', 'host')
driver_cfg = {'k': 'v', 'k2': 'v2', 'volume_backend_name': 'Test'}
cinderlib.Backend.global_initialization = False
driver = mock_import.return_value
driver.capabilities = {'pools': [{'pool_name': 'default'}]}
backend = objects.Backend(**driver_cfg)
mock_global_setup.assert_called_once_with()
self.assertIn('Test', objects.Backend.backends)
self.assertEqual(backend, objects.Backend.backends['Test'])
mock_config.assert_called_once_with(driver_cfg)
conf = mock_config.return_value
mock_import.assert_called_once_with(conf.volume_driver,
configuration=conf,
db=self.persistence.db,
host='host@Test',
cluster_name=None,
active_backend_id=None)
self.assertEqual(backend.driver, driver)
driver.do_setup.assert_called_once_with(objects.CONTEXT)
driver.check_for_setup_error.assert_called_once_with()
driver.init_capabilities.assert_called_once_with()
driver.set_throttle.assert_called_once_with()
driver.set_initialized.assert_called_once_with()
self.assertEqual(driver_cfg, backend._driver_cfg)
self.assertIsNone(backend._volumes)
driver.get_volume_stats.assert_not_called()
self.assertEqual(('default',), backend.pool_names)
mock_workarounds.assert_called_once_with(mock_config.return_value)
@mock.patch('cinderlib.Backend._apply_backend_workarounds')
@mock.patch('oslo_utils.importutils.import_object')
@mock.patch('cinderlib.Backend._get_backend_config')
@mock.patch('cinderlib.Backend.global_setup')
def test_init_setup(self, mock_global_setup, mock_config, mock_import,
mock_workarounds):
"""Test initialization with the new 'setup' driver method."""
cfg.CONF.set_override('host', 'host')
driver_cfg = {'k': 'v', 'k2': 'v2', 'volume_backend_name': 'Test'}
cinderlib.Backend.global_initialization = False
driver = mock_import.return_value
driver.do_setup.side_effect = AttributeError
driver.capabilities = {'pools': [{'pool_name': 'default'}]}
backend = objects.Backend(**driver_cfg)
mock_global_setup.assert_called_once_with()
self.assertIn('Test', objects.Backend.backends)
self.assertEqual(backend, objects.Backend.backends['Test'])
mock_config.assert_called_once_with(driver_cfg)
conf = mock_config.return_value
mock_import.assert_called_once_with(conf.volume_driver,
configuration=conf,
db=self.persistence.db,
host='host@Test',
cluster_name=None,
active_backend_id=None)
self.assertEqual(backend.driver, driver)
driver.do_setup.assert_called_once_with(objects.CONTEXT)
driver.check_for_setup_error.assert_not_called()
driver.setup.assert_called_once_with(objects.CONTEXT)
driver.init_capabilities.assert_called_once_with()
driver.set_throttle.assert_called_once_with()
driver.set_initialized.assert_called_once_with()
self.assertEqual(driver_cfg, backend._driver_cfg)
self.assertIsNone(backend._volumes)
driver.get_volume_stats.assert_not_called()
self.assertEqual(('default',), backend.pool_names)
mock_workarounds.assert_called_once_with(mock_config.return_value)
@mock.patch.object(objects.Backend, 'global_initialization', True)
@mock.patch.object(objects.Backend, '_apply_backend_workarounds')
@mock.patch('oslo_utils.importutils.import_object')
@mock.patch.object(objects.Backend, '_get_backend_config')
def test_init_call_twice(self, mock_config, mock_import, mock_workarounds):
cinderlib.Backend.global_initialization = False
driver_cfg = {'k': 'v', 'k2': 'v2', 'volume_backend_name': 'Test'}
driver = mock_import.return_value
driver.capabilities = {'pools': [{'pool_name': 'default'}]}
backend = objects.Backend(**driver_cfg)
self.assertEqual(1, mock_config.call_count)
self.assertEqual(1, mock_import.call_count)
self.assertEqual(1, mock_workarounds.call_count)
# When initializing a Backend with the same configuration, the Backend
# class must behave as a singleton and we won't initialize it again
backend_second = objects.Backend(**driver_cfg)
self.assertIs(backend, backend_second)
self.assertEqual(1, mock_config.call_count)
self.assertEqual(1, mock_import.call_count)
self.assertEqual(1, mock_workarounds.call_count)
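# NOTE: a minimal sketch of the singleton contract verified here and in the
# next test (hypothetical 'opts' dict; re-creating a Backend with the same
# volume_backend_name and identical options returns the same object, while
# changing the options raises ValueError):
#
#     b1 = cinderlib.Backend(volume_backend_name='Test', **opts)
#     b2 = cinderlib.Backend(volume_backend_name='Test', **opts)
#     assert b1 is b2
#     cinderlib.Backend(volume_backend_name='Test', extra='x')  # ValueError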
@mock.patch.object(objects.Backend, 'global_initialization', True)
@mock.patch.object(objects.Backend, '_apply_backend_workarounds')
@mock.patch('oslo_utils.importutils.import_object')
@mock.patch.object(objects.Backend, '_get_backend_config')
def test_init_call_twice_different_config(self, mock_config, mock_import,
mock_workarounds):
cinderlib.Backend.global_initialization = False
driver_cfg = {'k': 'v', 'k2': 'v2', 'volume_backend_name': 'Test'}
driver = mock_import.return_value
driver.capabilities = {'pools': [{'pool_name': 'default'}]}
objects.Backend(**driver_cfg)
self.assertEqual(1, mock_config.call_count)
self.assertEqual(1, mock_import.call_count)
self.assertEqual(1, mock_workarounds.call_count)
# It should fail if we reuse the backend name but change the config
self.assertRaises(ValueError, objects.Backend, k3='v3', **driver_cfg)
self.assertEqual(1, mock_config.call_count)
self.assertEqual(1, mock_import.call_count)
self.assertEqual(1, mock_workarounds.call_count)
@mock.patch('cinderlib.Backend._validate_and_set_options')
@mock.patch.object(cfg, 'CONF')
def test__set_cinder_config(self, conf_mock, validate_mock):
objects.Backend._set_cinder_config('host', 'locks_path',
mock.sentinel.cfg)
self.assertEqual(2, conf_mock.set_default.call_count)
conf_mock.set_default.assert_has_calls(
[mock.call('state_path', os.getcwd()),
mock.call('lock_path', '$state_path', 'oslo_concurrency')])
self.assertEqual(cinderlib.__version__, cfg.CONF.version)
self.assertEqual('locks_path', cfg.CONF.oslo_concurrency.lock_path)
self.assertEqual('file://locks_path',
cfg.CONF.coordination.backend_url)
self.assertEqual('host', cfg.CONF.host)
validate_mock.assert_called_once_with(mock.sentinel.cfg)
self.assertIsNone(cfg._CachedArgumentParser().parse_args())
@mock.patch('cinderlib.Backend._set_priv_helper')
@mock.patch('cinderlib.Backend._set_cinder_config')
@mock.patch('urllib3.disable_warnings')
@mock.patch('cinder.coordination.COORDINATOR')
@mock.patch('cinderlib.Backend._set_logging')
@mock.patch('cinderlib.cinderlib.serialization')
@mock.patch('cinderlib.Backend.set_persistence')
def test_global_setup(self, mock_set_pers, mock_serial, mock_log,
mock_coord, mock_disable_warn, mock_set_config,
mock_priv_helper):
cls = objects.Backend
cls.global_initialization = False
cinder_cfg = {'k': 'v', 'k2': 'v2'}
# Save the current class configuration
saved_cfg = vars(cls).copy()
try:
cls.global_setup(mock.sentinel.locks_path,
mock.sentinel.root_helper,
mock.sentinel.ssl_warnings,
mock.sentinel.disable_logs,
mock.sentinel.non_uuid_ids,
mock.sentinel.backend_info,
mock.sentinel.project_id,
mock.sentinel.user_id,
mock.sentinel.pers_cfg,
mock.sentinel.fail_missing_backend,
mock.sentinel.host,
**cinder_cfg)
mock_set_config.assert_called_once_with(mock.sentinel.host,
mock.sentinel.locks_path,
cinder_cfg)
self.assertEqual(mock.sentinel.fail_missing_backend,
cls.fail_on_missing_backend)
self.assertEqual(mock.sentinel.project_id, cls.project_id)
self.assertEqual(mock.sentinel.user_id, cls.user_id)
self.assertEqual(mock.sentinel.non_uuid_ids, cls.non_uuid_ids)
mock_set_pers.assert_called_once_with(mock.sentinel.pers_cfg)
mock_serial.setup.assert_called_once_with(cls)
mock_log.assert_called_once_with(mock.sentinel.disable_logs)
mock_coord.start.assert_called_once_with()
mock_priv_helper.assert_called_once_with(mock.sentinel.root_helper)
self.assertEqual(2, mock_disable_warn.call_count)
self.assertTrue(cls.global_initialization)
self.assertEqual(mock.sentinel.backend_info,
cls.output_all_backend_info)
finally:
# Restore the class configuration
for k, v in saved_cfg.items():
if not k.startswith('__'):
setattr(cls, k, v)
@mock.patch('cinderlib.cinderlib.LOG.warning')
def test__validate_and_set_options(self, warning_mock):
self.addCleanup(cfg.CONF.clear_override, 'osapi_volume_extension')
self.addCleanup(cfg.CONF.clear_override, 'debug')
# Validate default group config with Boolean and MultiStrOpt
self.backend._validate_and_set_options(
{'debug': True,
'osapi_volume_extension': ['a', 'b', 'c'],
})
# Global value overrides are left in place
self.assertIs(True, cfg.CONF.debug)
self.assertEqual(['a', 'b', 'c'], cfg.CONF.osapi_volume_extension)
cinder_cfg = {
'volume_driver': 'cinder.volume.drivers.lvm.LVMVolumeDriver',
'volume_group': 'lvm-volumes',
'target_secondary_ip_addresses': ['w.x.y.z', 'a.b.c.d'],
'target_port': 12345,
}
expected_cfg = cinder_cfg.copy()
# Test driver options with String, ListOpt, PortOpt
self.backend._validate_and_set_options(cinder_cfg)
# Non global value overrides have been cleaned up
self.assertEqual('cinder-volumes',
cfg.CONF.backend_defaults.volume_group)
self.assertEqual(
[], cfg.CONF.backend_defaults.target_secondary_ip_addresses)
self.assertEqual(3260, cfg.CONF.backend_defaults.target_port)
self.assertEqual(expected_cfg, cinder_cfg)
warning_mock.assert_not_called()
@mock.patch('cinderlib.cinderlib.LOG.warning')
def test__validate_and_set_options_rbd(self, warning_mock):
original_override = cfg.CONF.set_override
original_getattr = cfg.ConfigOpts.GroupAttr.__getattr__
def my_override(option, value, *args):
original_override(option, value, *args)
# Simulate that the rbd_keyring_conf config option doesn't exist
if option == 'rbd_keyring_conf':
raise cfg.NoSuchOptError('rbd_keyring_conf')
def my_getattr(self, name):
res = original_getattr(self, name)
# Simulate that the rbd_keyring_conf config option doesn't exist
if name == 'rbd_keyring_conf':
raise AttributeError()
return res
self.patch('oslo_config.cfg.ConfigOpts.GroupAttr.__getattr__',
my_getattr)
self.patch('oslo_config.cfg.CONF.set_override',
side_effect=my_override)
cinder_cfg = {'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver',
'rbd_keyring_conf': '/etc/ceph/ceph.client.adm.keyring',
'rbd_user': 'adm',
'rbd_pool': 'volumes'}
expected_cfg = cinder_cfg.copy()
# Test driver options (all String type here)
self.backend._validate_and_set_options(cinder_cfg)
self.assertEqual(expected_cfg, cinder_cfg)
# Non global value overrides have been cleaned up
self.assertEqual(None, cfg.CONF.backend_defaults.rbd_user)
self.assertEqual('rbd', cfg.CONF.backend_defaults.rbd_pool)
warning_mock.assert_not_called()
@ddt.data(
('debug', 'sure', None),
('target_port', 'abc', 'cinder.volume.drivers.lvm.LVMVolumeDriver'))
@ddt.unpack
def test__validate_and_set_options_failures(self, option, value,
driver):
self.assertRaises(
ValueError,
self.backend._validate_and_set_options,
{'volume_driver': driver,
option: value})
@mock.patch('cinderlib.cinderlib.LOG.warning')
def test__validate_and_set_options_unknown(self, warning_mock):
self.backend._validate_and_set_options(
{'volume_driver': 'cinder.volume.drivers.lvm.LVMVolumeDriver',
'vmware_cluster_name': 'name'})
self.assertEqual(1, warning_mock.call_count)
def test_validate_and_set_options_templates(self):
self.addCleanup(cfg.CONF.clear_override, 'my_ip')
cfg.CONF.set_override('my_ip', '127.0.0.1')
config_options = dict(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_backend_name='lvm_iscsi',
volume_group='my-${backend_defaults.volume_backend_name}-vg',
target_ip_address='$my_ip',
)
expected = dict(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_backend_name='lvm_iscsi',
volume_group='my-lvm_iscsi-vg',
target_ip_address='127.0.0.1',
)
self.backend._validate_and_set_options(config_options)
self.assertDictEqual(expected, config_options)
# Non global value overrides have been cleaned up
self.assertEqual('cinder-volumes',
cfg.CONF.backend_defaults.volume_group)
@mock.patch('cinderlib.cinderlib.Backend._validate_and_set_options')
def test__get_backend_config(self, mock_validate):
def my_validate(*args):
# Simulate the cache clear happening in _validate_and_set_options
cfg.CONF.clear_override('my_ip')
mock_validate.side_effect = my_validate
config_options = dict(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_backend_name='lvm_iscsi',
volume_group='volumes',
)
res = self.backend._get_backend_config(config_options)
mock_validate.assert_called_once_with(config_options)
self.assertEqual('lvm_iscsi', res.config_group)
for opt in config_options.keys():
self.assertEqual(config_options[opt], getattr(res, opt))
def test_pool_names(self):
pool_names = [mock.sentinel._pool_names]
self.backend._pool_names = pool_names
self.assertEqual(pool_names, self.backend.pool_names)
def test_volumes(self):
self.backend._volumes = None
res = self.backend.volumes
self.assertEqual(self.persistence.get_volumes.return_value, res)
self.assertEqual(self.persistence.get_volumes.return_value,
self.backend._volumes)
self.persistence.get_volumes.assert_called_once_with(
backend_name=self.backend.id)
def test_id(self):
self.assertEqual(self.backend._driver_cfg['volume_backend_name'],
self.backend.id)
def test_volumes_filtered(self):
res = self.backend.volumes_filtered(mock.sentinel.vol_id,
mock.sentinel.vol_name)
self.assertEqual(self.persistence.get_volumes.return_value, res)
self.assertEqual([], self.backend._volumes)
self.persistence.get_volumes.assert_called_once_with(
backend_name=self.backend.id,
volume_id=mock.sentinel.vol_id,
volume_name=mock.sentinel.vol_name)
def test_stats(self):
expect = {'pools': [mock.sentinel.data]}
with mock.patch.object(self.backend.driver, 'get_volume_stats',
return_value=expect) as mock_stat:
res = self.backend.stats(mock.sentinel.refresh)
self.assertEqual(expect, res)
mock_stat.assert_called_once_with(refresh=mock.sentinel.refresh)
def test_stats_single(self):
stat_value = {'driver_version': 'v1', 'key': 'value'}
expect = {'driver_version': 'v1', 'key': 'value',
'pools': [{'key': 'value', 'pool_name': self.backend_name}]}
with mock.patch.object(self.backend.driver, 'get_volume_stats',
return_value=stat_value) as mock_stat:
res = self.backend.stats(mock.sentinel.refresh)
self.assertEqual(expect, res)
mock_stat.assert_called_once_with(refresh=mock.sentinel.refresh)
@mock.patch('cinderlib.objects.Volume')
def test_create_volume(self, mock_vol):
kwargs = {'k': 'v', 'k2': 'v2'}
res = self.backend.create_volume(mock.sentinel.size,
mock.sentinel.name,
mock.sentinel.desc,
mock.sentinel.boot,
**kwargs)
self.assertEqual(mock_vol.return_value, res)
mock_vol.assert_called_once_with(self.backend, size=mock.sentinel.size,
name=mock.sentinel.name,
description=mock.sentinel.desc,
bootable=mock.sentinel.boot,
**kwargs)
mock_vol.return_value.create.assert_called_once_with()
def test__volume_removed_no_list(self):
vol = cinderlib.objects.Volume(self.backend, size=10)
self.backend._volume_removed(vol)
def test__volume_removed(self):
vol = cinderlib.objects.Volume(self.backend, size=10)
vol2 = cinderlib.objects.Volume(self.backend, id=vol.id, size=10)
self.backend._volumes.append(vol)
self.backend._volume_removed(vol2)
self.assertEqual([], self.backend.volumes)
def test__volume_created(self):
vol = cinderlib.objects.Volume(self.backend, size=10)
self.backend._volume_created(vol)
self.assertEqual([vol], self.backend.volumes)
def test__volume_created_is_none(self):
vol = cinderlib.objects.Volume(self.backend, size=10)
self.backend._volume_created(vol)
self.assertEqual([vol], self.backend.volumes)
def test_validate_connector(self):
self.backend.validate_connector(mock.sentinel.connector)
self.backend.driver.validate_connector.assert_called_once_with(
mock.sentinel.connector)
@mock.patch('cinderlib.objects.setup')
@mock.patch('cinderlib.persistence.setup')
def test_set_persistence(self, mock_pers_setup, mock_obj_setup):
cinderlib.Backend.global_initialization = True
cinderlib.Backend.set_persistence(mock.sentinel.pers_cfg)
mock_pers_setup.assert_called_once_with(mock.sentinel.pers_cfg)
self.assertEqual(mock_pers_setup.return_value,
cinderlib.Backend.persistence)
mock_obj_setup.assert_called_once_with(mock_pers_setup.return_value,
cinderlib.Backend,
self.backend.project_id,
self.backend.user_id,
self.backend.non_uuid_ids)
self.assertEqual(mock_pers_setup.return_value.db,
self.backend.driver.db)
def test_config(self):
self.backend.output_all_backend_info = False
res = self.backend.config
self.assertEqual({'volume_backend_name': self.backend.id}, res)
def test_config_full(self):
self.backend.output_all_backend_info = True
with mock.patch.object(self.backend, '_driver_cfg') as mock_driver:
res = self.backend.config
self.assertEqual(mock_driver, res)
def test_refresh(self):
self.backend.refresh()
self.persistence.get_volumes.assert_called_once_with(
backend_name=self.backend.id)
def test_refresh_no_call(self):
self.backend._volumes = None
self.backend.refresh()
self.persistence.get_volumes.assert_not_called()
@staticmethod
def odict(*args):
res = collections.OrderedDict()
for i in range(0, len(args), 2):
res[args[i]] = args[i + 1]
return res
@mock.patch('cinderlib.cinderlib.cfg.CONF')
def test__apply_backend_workarounds(self, mock_conf):
cfg = mock.Mock(volume_driver='cinder.volume.drivers.netapp...')
self.backend._apply_backend_workarounds(cfg)
self.assertEqual(cfg.volume_backend_name,
mock_conf.list_all_sections())
@mock.patch('cinderlib.cinderlib.cfg.CONF')
def test__apply_backend_workarounds_do_nothing(self, mock_conf):
cfg = mock.Mock(volume_driver='cinder.volume.drivers.lvm...')
self.backend._apply_backend_workarounds(cfg)
self.assertEqual(mock_conf.list_all_sections.return_value,
mock_conf.list_all_sections())
def _check_privsep_root_helper_opt(self, is_changed):
for opt in priv_context.OPTS:
if opt.name == 'helper_command':
break
helper_path = os.path.join(os.path.dirname(cinderlib.__file__),
'bin/venv-privsep-helper')
self.assertIs(is_changed,
f'mysudo {helper_path}' == opt.default)
@mock.patch.dict(os.environ, {}, clear=True)
@mock.patch('os.path.exists')
@mock.patch('configparser.ConfigParser')
@mock.patch('oslo_privsep.priv_context.init')
def test__set_priv_helper_no_venv_sudo(self, mock_ctxt_init, mock_parser,
mock_exists):
original_helper_func = utils.get_root_helper
original_rootwrap_config = cfg.CONF.rootwrap_config
rootwrap_config = '/etc/cinder/rootwrap.conf'
# Not using set_override because it's not working as it should
cfg.CONF.rootwrap_config = rootwrap_config
try:
self.backend._set_priv_helper('sudo')
mock_exists.assert_not_called()
mock_parser.assert_not_called()
mock_ctxt_init.assert_not_called()
self.assertIs(original_helper_func, utils.get_root_helper)
self.assertIs(rootwrap_config, cfg.CONF.rootwrap_config)
self._check_privsep_root_helper_opt(is_changed=False)
finally:
cfg.CONF.rootwrap_config = original_rootwrap_config
@mock.patch('configparser.ConfigParser.read', mock.Mock())
@mock.patch('configparser.ConfigParser.write', mock.Mock())
@mock.patch('cinderlib.cinderlib.utils.__file__',
'/.venv/lib/python3.7/site-packages/cinder')
@mock.patch('cinderlib.cinderlib.os.environ', {'VIRTUAL_ENV': '/.venv'})
@mock.patch('cinderlib.cinderlib.open')
@mock.patch('os.path.exists', return_value=False)
@mock.patch('oslo_privsep.priv_context.init')
def test__set_priv_helper_venv_no_sudo(self, mock_ctxt_init, mock_exists,
mock_open):
file_contents = {'DEFAULT': {'filters_path': '/etc/cinder/rootwrap.d',
'exec_dirs': '/dir1,/dir2'}}
parser = configparser.ConfigParser()
venv_wrap_cfg = '/.venv/etc/cinder/rootwrap.conf'
original_helper_func = utils.get_root_helper
original_rootwrap_config = cfg.CONF.rootwrap_config
# Not using set_override because it's not working as it should
default_wrap_cfg = '/etc/cinder/rootwrap.conf'
cfg.CONF.rootwrap_config = default_wrap_cfg
try:
with mock.patch('cinder.utils.get_root_helper',
return_value='sudo wrapper') as mock_helper, \
mock.patch.dict(parser, file_contents, clear=True), \
mock.patch('configparser.ConfigParser') as mock_parser:
mock_parser.return_value = parser
self.backend._set_priv_helper('mysudo')
mock_exists.assert_called_once_with(default_wrap_cfg)
mock_parser.assert_called_once_with()
parser.read.assert_called_once_with(venv_wrap_cfg)
self.assertEqual('/.venv/etc/cinder/rootwrap.d',
parser['DEFAULT']['filters_path'])
self.assertEqual('/.venv/bin,/dir1,/dir2',
parser['DEFAULT']['exec_dirs'])
mock_open.assert_called_once_with(venv_wrap_cfg, 'w')
parser.write.assert_called_once_with(
mock_open.return_value.__enter__.return_value)
self.assertEqual('mysudo wrapper', utils.get_root_helper())
mock_helper.assert_called_once_with()
mock_ctxt_init.assert_called_once_with(root_helper=['mysudo'])
self.assertIs(original_helper_func, utils.get_root_helper)
self.assertEqual(venv_wrap_cfg, cfg.CONF.rootwrap_config)
self._check_privsep_root_helper_opt(is_changed=True)
finally:
cfg.CONF.rootwrap_config = original_rootwrap_config
utils.get_root_helper = original_helper_func
@mock.patch('configparser.ConfigParser.read', mock.Mock())
@mock.patch('configparser.ConfigParser.write', mock.Mock())
@mock.patch('cinderlib.cinderlib.utils.__file__', '/opt/stack/cinder')
@mock.patch('cinderlib.cinderlib.os.environ', {'VIRTUAL_ENV': '/.venv'})
@mock.patch('shutil.copytree')
@mock.patch('glob.glob',)
@mock.patch('cinderlib.cinderlib.open')
@mock.patch('os.path.exists', return_value=False)
@mock.patch('oslo_privsep.priv_context.init')
def test__set_priv_helper_venv_editable_no_sudo(self, mock_ctxt_init,
mock_exists, mock_open,
mock_glob, mock_copy):
link_file = '/.venv/lib/python3.7/site-packages/cinder.egg-link'
cinder_source_path = '/opt/stack/cinder'
link_file_contents = cinder_source_path + '\n.'
mock_glob.return_value = [link_file]
open_fd = mock_open.return_value.__enter__.return_value
open_fd.read.return_value = link_file_contents
file_contents = {'DEFAULT': {'filters_path': '/etc/cinder/rootwrap.d',
'exec_dirs': '/dir1,/dir2'}}
parser = configparser.ConfigParser()
venv_wrap_cfg = '/.venv/etc/cinder/rootwrap.conf'
original_helper_func = utils.get_root_helper
original_rootwrap_config = cfg.CONF.rootwrap_config
# Not using set_override because it's not working as it should
default_wrap_cfg = '/etc/cinder/rootwrap.conf'
cfg.CONF.rootwrap_config = default_wrap_cfg
try:
with mock.patch('cinder.utils.get_root_helper',
return_value='sudo wrapper') as mock_helper, \
mock.patch.dict(parser, file_contents, clear=True), \
mock.patch('configparser.ConfigParser') as mock_parser:
mock_parser.return_value = parser
self.backend._set_priv_helper('mysudo')
mock_glob.assert_called_once_with(
'/.venv/lib/python*/site-packages/cinder.egg-link')
self.assertEqual(2, mock_exists.call_count)
mock_exists.assert_has_calls([mock.call(default_wrap_cfg),
mock.call(venv_wrap_cfg)])
self.assertEqual(2, mock_open.call_count)
mock_open.assert_any_call(link_file, 'r')
mock_copy.assert_called_once_with(
cinder_source_path + '/etc/cinder', '/.venv/etc/cinder')
mock_parser.assert_called_once_with()
parser.read.assert_called_once_with(venv_wrap_cfg)
self.assertEqual('/.venv/etc/cinder/rootwrap.d',
parser['DEFAULT']['filters_path'])
self.assertEqual('/.venv/bin,/dir1,/dir2',
parser['DEFAULT']['exec_dirs'])
mock_open.assert_any_call(venv_wrap_cfg, 'w')
parser.write.assert_called_once_with(open_fd)
self.assertEqual('mysudo wrapper', utils.get_root_helper())
mock_helper.assert_called_once_with()
mock_ctxt_init.assert_called_once_with(root_helper=['mysudo'])
self.assertIs(original_helper_func, utils.get_root_helper)
self.assertEqual(venv_wrap_cfg, cfg.CONF.rootwrap_config)
self._check_privsep_root_helper_opt(is_changed=True)
finally:
cfg.CONF.rootwrap_config = original_rootwrap_config
utils.get_root_helper = original_helper_func

View File

@ -1,85 +0,0 @@
# Copyright (c) 2021, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
from cinderlib import objects
from cinderlib.tests.unit import base
class TestSerialization(base.BaseTest):
def test_vol_to_and_from(self):
vol = objects.Volume(self.backend, size=10)
snap = objects.Snapshot(vol, name='disk')
# Associate the snapshot with the volume
vol._snapshots = None
with mock.patch.object(vol.persistence, 'get_snapshots',
return_value=[snap]):
vol.snapshots
self.assertEqual(1, len(vol.snapshots))
json_data = vol.json
# Confirm vol.json property is equivalent to the non simplified version
self.assertEqual(json_data, vol.to_json(simplified=False))
vol2 = objects.Volume.load(json_data)
# Check snapshots are recovered as well
self.assertEqual(1, len(vol2.snapshots))
self.assertEqual(vol.json, vol2.json)
def test_snap_to_and_from(self):
vol = objects.Volume(self.backend, size=10)
snap = objects.Snapshot(vol, name='disk')
json_data = snap.json
# Confirm vol.json property is equivalent to the non simplified version
self.assertEqual(json_data, snap.to_json(simplified=False))
snap2 = objects.Snapshot.load(json_data)
self.assertEqual(snap.json, snap2.json)
def test_conn_to_and_from(self):
vol = objects.Volume(self.backend, size=1, name='disk')
conn = objects.Connection(self.backend, volume=vol, connector={},
connection_info={'conn': {'data': {}}})
json_data = conn.json
# Confirm vol.json property is equivalent to the non simplified version
self.assertEqual(json_data, conn.to_json(simplified=False))
conn2 = objects.Connection.load(json_data)
self.assertEqual(conn.json, conn2.json)
def test_datetime_subsecond(self):
"""Test microsecond serialization of DateTime fields."""
microsecond = 123456
vol = objects.Volume(self.backend, size=1, name='disk')
vol._ovo.created_at = vol.created_at.replace(microsecond=microsecond)
created_at = vol.created_at
json_data = vol.json
vol2 = objects.Volume.load(json_data)
self.assertEqual(created_at, vol2.created_at)
self.assertEqual(microsecond, vol2.created_at.microsecond)
def test_datetime_non_subsecond(self):
"""Test rehydration of DateTime field without microsecond."""
vol = objects.Volume(self.backend, size=1, name='disk')
vol._ovo.created_at = vol.created_at.replace(microsecond=123456)
with mock.patch.object(vol._ovo.fields['created_at'], 'to_primitive',
return_value='2021-06-28T17:14:59Z'):
json_data = vol.json
vol2 = objects.Volume.load(json_data)
self.assertEqual(0, vol2.created_at.microsecond)

View File

@ -1,34 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
import cinderlib
from cinderlib.persistence import base
def get_mock_persistence():
return mock.MagicMock(spec=base.PersistenceDriverBase)
class FakeBackend(cinderlib.Backend):
def __init__(self, *args, **kwargs):
driver_name = kwargs.get('volume_backend_name', 'fake')
cinderlib.Backend.backends[driver_name] = self
self._driver_cfg = {'volume_backend_name': driver_name}
self.driver = mock.Mock()
self.driver.persistence = cinderlib.Backend.persistence
self._pool_names = (driver_name,)
self._volumes = []

View File

@ -1,31 +0,0 @@
# Copyright (c) 2019, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def find_by_id(resource_id, elements):
if elements:
for i, element in enumerate(elements):
if resource_id == element.id:
return i, element
return None, None
def add_by_id(resource, elements):
if elements is not None:
i, element = find_by_id(resource.id, elements)
if element:
elements[i] = resource
else:
elements.append(resource)

View File

@ -1,37 +0,0 @@
Cinderlib DevStack Plugin
=========================
This directory contains the cinderlib DevStack plugin.
To configure cinderlib with DevStack, you will need to enable this plugin by
adding one line to the [[local|localrc]] section of your local.conf file.
To enable the plugin, add a line of the form::
enable_plugin cinderlib <GITURL> [GITREF]
where::
<GITURL> is the URL of a cinderlib repository
[GITREF] is an optional git ref (branch/ref/tag). The default is master.
For example::
enable_plugin cinderlib https://opendev.org/openstack/cinderlib
Another example using Train's stable branch::
enable_plugin cinderlib https://opendev.org/openstack/cinderlib stable/train
The cinderlib DevStack plugin will install cinderlib from Git by default, but
it can be installed from PyPi using the ``CINDERLIB_FROM_GIT`` configuration
option::
CINDERLIB_FROM_GIT=False
The plugin will also generate the code equivalent to the deployed Cinder's
configuration in ``$CINDERLIB_SAMPLE_DIR/cinderlib.py`` which defaults to the
same directory where the Cinder configuration is saved.
For more information, see the `DevStack plugin documentation
<https://docs.openstack.org/devstack/latest/plugins.html>`_.

View File

@ -1,7 +0,0 @@
ALL_LIBS+=" cinderlib"
CINDERLIB_FROM_GIT=$(trueorfalse True CINDERLIB_FROM_GIT)
if [[ "$CINDERLIB_FROM_GIT" == "True" ]]; then
PROJECTS="openstack/cinderlib $PROJECTS"
LIBS_FROM_GIT="cinderlib,$LIBS_FROM_GIT"
fi

View File

@ -1,40 +0,0 @@
#!/bin/bash
# plugin.sh - DevStack plugin.sh dispatch script for cinderlib
_XTRACE_CINDERLIB=$(set +o | grep xtrace)
function install_cinderlib {
if use_library_from_git "cinderlib"; then
git_clone_by_name "cinderlib"
setup_dev_lib "cinderlib"
else
pip_install cinderlib
fi
}
stable_compare="stable/[a-r]"
# Cinderlib only makes sense if Cinder is enabled and we are in stein or later
if [[ ! "${GITBRANCH["cinderlib"]}" =~ $stable_compare ]] && is_service_enabled cinder; then
if [[ "$1" == "stack" && "$2" == "install" ]]; then
# Perform installation of service source
echo_summary "Installing cinderlib"
install_cinderlib
# Plugins such as Ceph configure themselves at post-config, so we have to
# configure ourselves at the next stage, "extra"
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
# Generate the cinderlib configuration
echo_summary "Generating cinderlib initialization example python code"
sudo cinder-cfg-to-cinderlib-code $CINDER_CONF $CINDERLIB_SAMPLE
fi
if [[ "$1" == "clean" || "$1" == "unstack" ]]; then
echo_summary "Removing cinderlib and its code example from cinder.conf"
sudo rm -f $CINDERLIB_SAMPLE
pip_uninstall cinderlib
fi
fi
# Restore xtrace
$_XTRACE_CINDERLIB

View File

@ -1,9 +0,0 @@
# Defaults
# --------
# Set up default directories
CINDERLIB_SAMPLE_DIR=${CINDERLIB_CONF_DIR:-/etc/cinder}
CINDERLIB_SAMPLE=$CINDERLIB_SAMPLE_DIR/cinderlib.py
CINDERLIB_FROM_GIT=$(trueorfalse True CINDERLIB_FROM_GIT)
define_plugin cinderlib

3
doc/.gitignore vendored
View File

@ -1,3 +0,0 @@
build/*
source/api/*
.autogenerated

View File

@ -1,7 +0,0 @@
openstackdocstheme>=2.2.1 # Apache-2.0
reno>=3.1.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD
os-api-ref>=1.4.0 # Apache-2.0
sphinxcontrib-apidoc>=0.2.0 # BSD
sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD

View File

@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/cinderlib.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/cinderlib.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/cinderlib"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/cinderlib"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

View File

@ -1 +0,0 @@

View File

@ -1,300 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# cinderlib documentation build configuration file, created by
# sphinx-quickstart on Tue Jul 9 22:26:36 2013.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
project_root = os.path.abspath('../../')
sys.path.insert(0, project_root)
# # Get the project root dir, which is the parent dir of this
# import pdb; pdb.set_trace()
# cwd = os.getcwd()
# project_root = os.path.dirname(cwd)
#
# # Insert the project root dir as the first element in the PYTHONPATH.
# # This lets us ensure that the source package is imported, and that its
# # version is used.
# sys.path.insert(0, project_root)
# -- General configuration ---------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.6.5'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinxcontrib.apidoc',
'openstackdocstheme',
'sphinxcontrib.rsvgconverter']
# sphinxcontrib.apidoc options
apidoc_module_dir = '../../cinderlib'
apidoc_output_dir = 'api'
apidoc_excluded_paths = [
'tests/*',
'tests',
'persistence/dbms.py',
'persistence/memory.py',
]
apidoc_separate_modules = True
apidoc_toc_file = False
autodoc_mock_imports = ['cinder', 'os_brick', 'oslo_utils',
'oslo_versionedobjects', 'oslo_concurrency',
'oslo_log', 'stevedore', 'oslo_db', 'oslo_config',
'oslo_privsep', 'cinder.db.sqlalchemy']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []
# General information about the project.
copyright = "2017, Cinder Developers"
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/cinderlib'
openstackdocs_pdf_link = True
openstackdocs_bug_project = 'cinderlib'
openstackdocs_bug_tag = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to
# some non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['cinderlib.']
# If true, keep warnings as "system message" paragraphs in the built
# documents.
#keep_warnings = False
# -- Options for HTML output -------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a
# theme further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as
# html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the
# top of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon
# of the docs. This file should be a Windows icon file (.ico) being
# 16x16 or 32x32 pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets)
# here, relative to this directory. They are copied after the builtin
# static files, so a file named "default.css" will overwrite the builtin
# "default.css".
html_static_path = ['_static']
# Add any paths that contain "extra" files, such as .htaccess.
html_extra_path = ['_extra']
# If not '', a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names
# to template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer.
# Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer.
# Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages
# will contain a <link> tag referring to it. The value of this option
# must be the base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'cinderlibdoc'
# -- Options for LaTeX output ------------------------------------------
latex_elements = {
'makeindex': '',
'printindex': '',
'preamble': r'\setcounter{tocdepth}{3}',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'doc-cinderlib.tex',
'Cinder Library Documentation',
'Cinder Contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at
# the top of the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings
# are parts, not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
latex_domain_indices = False
# Disable usage of xindy https://bugzilla.redhat.com/show_bug.cgi?id=1643664
latex_use_xindy = False
latex_additional_files = ['cinderlib.sty']
# -- Options for manual page output ------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'cinderlib',
'Cinder Library Documentation',
['Cinder Contributors'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ----------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'cinderlib',
'Cinder Library Documentation',
'Cinder Contributors',
'cinderlib',
'Direct usage of Cinder Block Storage drivers without the services.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False

View File

@ -1,315 +0,0 @@
============================
So You Want to Contribute...
============================
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the
accounts you need, the basics of interacting with our Gerrit review system, how
we communicate as a community, etc.
The cinderlib library is maintained by the OpenStack Cinder project. To
understand our development process and how you can contribute to it, please
look at the Cinder project's general contributor's page:
http://docs.openstack.org/cinder/latest/contributor/contributing.html
Some cinderlib specific information is below.
cinderlib release model
-----------------------
The OpenStack release model for cinderlib is `cycle-with-intermediary
<https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary>`__.
This means that there can be multiple full releases of cinderlib from master
during a development cycle. The deliverable type of cinderlib is 'trailing'
which means that the final release of cinderlib for a development cycle must
occur within 3 months after the official OpenStack coordinated release.
At the time of the final release, the stable branch is cut, and cinderlib
releases from that branch follow the normal OpenStack stable release policy.
The primary thing to keep in mind here is that there is a period at the
beginning of each OpenStack development cycle (for example, Zed) when the
master branch in cinder and os-brick is open for Zed development, but
cinderlib's master branch is still being used for Yoga development.
cinderlib development model
---------------------------
Because cinderlib depends on cinder and os-brick, its ``tox.ini`` file is set
up to use cinder and os-brick from source (not from released versions)
so that changes in cinder and os-brick are immediately available for testing
cinderlib.
We follow this practice both for cinderlib master and for the cinderlib stable
branches.
cinderlib tox and zuul configuration maintenance
------------------------------------------------
As mentioned above, cinderlib's release schedule is offset from the OpenStack
coordinated release schedule by about 3 months. Thus, once cinder and os-brick
have had their final release for a cycle, their master branches become the
development branch for the *next* cycle, whereas cinderlib's master branch is
still the development branch for the *previous* cycle.
This has an impact on both ``tox.ini``, which controls your local development
testing environment, and ``.zuul.yaml``, which controls cinderlib's CI
environment. These files require manual maintenance at two points during
each OpenStack development cycle:
#. When the cinder (not cinderlib) master branch opens for n+1 cycle
development. This happens when the first release candidate for release
n is made and the stable branch for release n is created. At this time,
cinderlib master is still being used for release n development, so cinderlib
master is out of phase with cinder/os-brick master branch, and we must make
adjustments to cinderlib master's ``tox.ini`` and ``.zuul.yaml`` files.
#. When the cinderlib release n is made, cinderlib master opens for release
n+1 development. Thus, cinderlib's master branch is back in phase with
cinder/os-brick master branch, and we must make adjustments to cinderlib
master's ``tox.ini`` and ``.zuul.yaml`` files.
Although cinderlib's ``requirements.txt`` file is not used by tox (and hence
not by Zuul, either), we must maintain it for people who install cinderlib via
pypi. Thus it must be checked for correctness before cinderlib is released.
Throughout this section, we'll be talking about release 'n' and release
'n+1'. The example we'll use is 'n' is yoga and 'n+1' is zed.
cinderlib tox.ini maintenance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The items are listed below in the order you'll find them in ``tox.ini``.
[testenv]setenv
```````````````
The environment variable ``CINDERLIB_RELEASE`` must be set to the name of
the release that this is the development branch for.
* What is this used by? It's used by ``tools/special_install.sh`` to figure
out what the appropriate upper-constraints file is.
* When should it be changed? The requirements team has been setting up the
redirect for https://releases.openstack.org/constraints/upper/{release}
at the beginning of each OpenStack development cycle (that is, when master
is Zed development, for example, the url
https://releases.openstack.org/constraints/upper/zed
redirects to the ``upper-constraints.txt`` in requirements master). Thus,
you should only have to change the value of ``CINDERLIB_RELEASE`` in
cinderlib master at the time it opens for release 'n+1'.
[testenv]deps
`````````````
* While both the cinder and cinderlib master branches are the development
branches for the 'n' release cycle (yoga, for example), the base testenv
in ``tox.ini`` in master should look like this:
.. code-block::
# Use cinder and os-brick from the appropriate development branch instead of
# from PyPi. Defining the egg name we won't overwrite the package installed
# by Zuul on jobs supporting cross-project dependencies (include Cinder in
# required-projects). This allows us to also run local tests against the
# latest cinder/brick code instead of released code.
# NOTE: Functional tests may fail if host is missing bindeps from deps projects
deps= -r{toxinidir}/test-requirements.txt
git+https://opendev.org/openstack/os-brick
git+https://opendev.org/openstack/cinder
* When the coordinated release for cycle 'n' has occurred, cinderlib's
``tox.ini`` in master must be modified so that cinderlib is being tested
against cinder and os-brick from the stable branches for the 'n' release (in
this example, stable/yoga):
.. code-block::
deps = -r{toxinidir}/test-requirements.txt
git+https://opendev.org/openstack/os-brick@stable/yoga
git+https://opendev.org/openstack/cinder@stable/yoga
* After the 'n' release of cinderlib occurs (and the stable/n branch is cut),
all of cinder, os-brick, and cinderlib master branches are all 'n+1' cycle
development branches, so:
* The base testenv in ``tox.ini`` in master must be modified to use cinder
and os-brick from master for testing, reverting the first code block change
above.
[testenv:py{3,36,38}]install_command
````````````````````````````````````
Note: the actual list of versions may be different from what's listed in the
documentation heading above.
This testenv inherits from the base testenv and is the parent for all the
unit tests. At the time cinderlib master opens for release 'n+1' development,
check that all supported python versions for the release are listed between
the braces (that is, ``{`` and ``}``).
* The tox term for this is "Generative section names". See the `tox docs
<https://tox-gaborbernat.readthedocs.io/en/latest/config.html#generative-envlist>`_
for more information and the proper syntax.
* The list of supported python runtimes can be found in the `OpenStack
governance documentation
<https://governance.openstack.org/tc/reference/runtimes/>`_.
* If the supported python runtimes have changed from the previous release,
you may also need to update the ``python_requires`` and the "Programming
Language" classifiers in cinderlib's ``setup.cfg`` file.
[testenv:docs]install_command
`````````````````````````````
* The ``docs`` testenv sets a default value for ``TOX_CONSTRAINTS_FILE`` as
part of the ``install_command``. This only needs to be changed at the time
cinderlib master opens for release 'n+1'. See the discussion above about
setting the value for ``CINDERLIB_RELEASE``; the same considerations apply
here.
The ``[testenv:docs]install_command`` is referred to by the other
documentation-like testenvs, so you should only have to change the value
of ``TOX_CONSTRAINTS_FILE`` in one place. (But do a scan of ``tox.ini``
to be sure, and if you find another, please update this page.)
cinderlib .zuul.yaml maintenance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A few things to note about the cinderlib ``.zuul.yaml`` file.
* The OpenStack QA team defines "templates" that can be used for testing.
A template defines a set of jobs that are run in the check and the gate,
and the QA team takes the responsibility to make sure that the template
for a release includes all the appropriate tests.
We don't use the 'openstack-python3-{release}-jobs' template; instead, we
directly configure the jobs that are listed in the template. The reason for
this is that during cinderlib's trailing development phase (when cinderlib
master is the development branch for release 'n' while cinder and os-brick
master is the development branch for release 'n+1', we need to make sure that
zuul installs the correct cinder and os-brick branch to test against. We
can do this by specifying an 'override-checkout' for cinder, os-brick, and
requirements in the job definitions.
We need to do this even though the zuul jobs will ultimately call cinderlib's
tox.ini, where we have already configured the correct branches to use.
That's because Zuul doesn't simply call tox; it does a bunch of setup work
to download packages and configure the environment, and if we don't
specifically tell Zuul what branches to use, when we run a job on a cinderlib
master patch, Zuul figures that all components are supposed to be installed
from their master branch -- including openstack requirements, which specifies
the upper-constraints for the release.
* The QA testing templates are defined here:
https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/project-templates.yaml
The ``openstack-zuul-jobs`` repo is not branched, so that file will contain
the testing templates for all stable branches for which OpenStack CI is
still supported.
After the cinderlib 'n' release, you will open cinderlib for 'n+1'
development. For example, after the yoga release, you will open cinderlib
for zed development. For the reasons outlined above, we won't use the
zed template directly, but you need to look at it to see what jobs it
includes, and make sure that cinderlib's ``.zuul.yaml`` uses equivalent jobs
in each of the check, gate, and post pipelines.
* What's meant by "equivalent jobs" is best explained by an example.
The ``openstack-python3-zed-jobs`` template contains (among other things)
an ``openstack-tox-py39`` job. We don't use that job directly, but
instead have an ``cinderlib-tox-py39`` job defined in the cinderlib
``.zuul.yaml`` that has ``openstack-tox-py39`` as a parent. (If the
equivalent job you need doesn't exist, you must create it, using the
other jobs as examples.)
We need these cinderlib-specific jobs for running unit tests in the
CI because the tests run using the development versions of cinder and
os-brick, not released versions, so we need to tell Zuul that it needs
to have the code repositories for cinder and os-brick available. (We
also tell it to have the requirements repo available; it will be needed
during cinderlib's cycle-trailing development phase.)
With that background, here are the ``.zuul.yaml`` maintenance tasks.
* When the coordinated release for cycle 'n' has occurred, the jobs in
cinderlib's ``.zuul.yaml`` in master must be updated to use the 'n'
stable branch for each of its sibling projects. Letting 'n' be the
Yoga relase, what this means is that the jobs will change from looking
like this:
.. code-block::
- job:
name: cinderlib-tox-py39
parent: openstack-tox-py39
required-projects:
- name: openstack/os-brick
- name: openstack/cinder
- name: openstack/requirements
to looking like this:
.. code-block::
- job:
name: cinderlib-tox-py39
parent: openstack-tox-py39
required-projects:
- name: openstack/os-brick
override-checkout: stable/yoga
- name: openstack/cinder
override-checkout: stable/yoga
- name: openstack/requirements
override-checkout: stable/yoga
Additionally, instead of running the
``os-brick-src-tempest-lvm-lio-barbican`` job (which is defined in
the os-brick repository), we will need to run a special version of
that job will be defined in cinderlib's ``.zuul.yaml``. This job
should already be defined in the file, and will be named
``cinderlib-os-brick-src-tempest-lvm-lio-barbican-{release}``.
Verify that the job has the correct branch specified for
``override-checkout``, and then configure the ``check`` and ``gate``
sections to run this job.
* After the 'n' release of cinderlib, when cinderlib master has become
the 'n+1' development branch and is once again in sync with the master
branches of cinder and os-brick:
* remove the ``override-checkout`` specification from the
``cinderlib-tox-*`` job definitions
* take a look at the 'n+1' release testing template (as discussed
above) and make sure that cinderlib is running the correct jobs
for the cycle
* run ``os-brick-src-tempest-lvm-lio-barbican`` in the check and
gate
* update the definition for the
'cinderlib-os-brick-src-tempest-lvm-lio-barbican-{release}'
job so that it will be ready when you need it later in the cycle.
cinderlib requirements.txt maintenance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* When the coordinated release for cycle 'n' has occurred, cinderlib's
``requirements.txt`` in master must be updated to use only 'n' deliverables
(in this example, yoga):
.. code-block::
# restrict cinder to the yoga release only
cinder>=20.0.0.0,<21.0.0 # Apache-2.0
# brick upper bound is controlled by yoga/upper-constraints
os-brick>=5.2.0 # Apache-2.0
* After the 'n' release of cinderlib, when cinderlib master has become
the 'n+1' development branch, ``requirements.txt`` can again be updated:
* Remove the upper bound from cinder.
* The release team likes to push an early release of os-brick from master
early in the development cycle. Check to see if that has happened
already, and if so, update the minimum version of os-brick to the latest
release and make appropriate adjustments to the comments in the file.

View File

@ -1,104 +0,0 @@
Welcome to Cinder Library's documentation!
==========================================
.. image:: https://img.shields.io/pypi/v/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/pypi/pyversions/cinderlib.svg
:target: https://pypi.python.org/pypi/cinderlib
.. image:: https://img.shields.io/:license-apache-blue.svg
:target: http://www.apache.org/licenses/LICENSE-2.0
|
The Cinder Library, also known as cinderlib, is a Python library that leverages
the Cinder project to provide an object oriented abstraction around Cinder's
storage drivers to allow their usage directly without running any of the Cinder
services or surrounding services, such as KeyStone, MySQL or RabbitMQ.
The library is intended for developers who only need the basic CRUD
functionality of the drivers and don't care for all the additional features
Cinder provides such as quotas, replication, multi-tenancy, migrations,
retyping, scheduling, backups, authorization, authentication, REST API, etc.
The library was originally created as an external project, so it didn't have
the broad range of backend testing Cinder does, and only a limited number of
drivers were validated at the time. Drivers should work out of the box, and
we'll keep a list of drivers that have added the cinderlib functional tests to
the driver gates confirming they work and ensuring they will keep working.
Features
--------
* Use a Cinder driver without running a DBMS, Message broker, or Cinder
service.
* Using multiple simultaneous drivers on the same application.
* Basic operations support:
- Create volume
- Delete volume
- Extend volume
- Clone volume
- Create snapshot
- Delete snapshot
- Create volume from snapshot
- Connect volume
- Disconnect volume
- Local attach
- Local detach
- Validate connector
- Extra Specs for specific backend functionality.
- Backend QoS
- Multi-pool support
* Metadata persistence plugins:
- Stateless: Caller stores JSON serialization.
- Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
- Custom plugin: Caller provides module to store Metadata and cinderlib calls
it when necessary.
Example
-------
The following code extract is a simple example to illustrate how cinderlib
works. The code will use the LVM backend to create a volume, attach it to the
local host via iSCSI, and finally snapshot it:
.. code-block:: python
import cinderlib as cl
# Initialize the LVM driver
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
# Create a 1GB volume
vol = lvm.create_volume(1, name='lvm-vol')
# Export, initialize, and do a local attach of the volume
attach = vol.attach()
print('Volume %s attached to %s' % (vol.id, attach.path))
# Snapshot it
snap = vol.create_snapshot('lvm-snap')
Table of Contents
-----------------
.. toctree::
:maxdepth: 2
installation
usage
validated
validating
limitations
contributor/contributing

View File

@ -1,165 +0,0 @@
.. highlight:: shell
============
Installation
============
The Cinder Library is an interfacing library that doesn't have any storage
driver code, so it expects Cinder drivers to be installed in the system to run
properly.
We can use the latest stable release or the latest code from master branch.
Stable release
--------------
Drivers
_______
For Red Hat distributions the recommendation is to use RPMs to install the
Cinder drivers instead of using `pip`. If we don't have access to the
`Red Hat OpenStack Platform packages
<https://www.redhat.com/en/technologies/linux-platforms/openstack-platform>`_
we can use the `RDO community packages <https://www.rdoproject.org/>`_.
On CentOS, the Extras repository provides the RPM that enables the OpenStack
repository. Extras is enabled by default on CentOS 7, so you can simply install
the RPM to set up the OpenStack repository:
.. code-block:: console
# yum install -y centos-release-openstack-rocky
# yum install -y openstack-cinder
On RHEL and Fedora, you'll need to download and install the RDO repository RPM
to set up the OpenStack repository:
.. code-block:: console
# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum install -y openstack-cinder
We can also install directly from source on the system or a virtual environment:
.. code-block:: console
$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install git+git://github.com/openstack/cinder.git@stable/rocky
Library
_______
To install Cinder Library we'll use PyPI, so we'll make sure to have the `pip`_
command available:
.. code-block:: console
# yum install -y python-pip
# pip install cinderlib
This is the preferred method to install Cinder Library, as it will always
install the most recent stable release.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
Latest code
-----------
Drivers
_______
If we don't have a packaged version or if we want to use a virtual environment
we can install the drivers from source:
.. code-block:: console
$ virtualenv cinder
$ source cinder/bin/activate
$ pip install git+git://github.com/openstack/cinder.git
Library
_______
The sources for Cinder Library can be downloaded from the `Github repo`_ to use
the latest version of the library.
You can either clone the public repository:
.. code-block:: console
$ git clone git://github.com/akrog/cinderlib
Or download the `tarball`_:
.. code-block:: console
$ curl -OL https://github.com/akrog/cinderlib/tarball/master
Once you have a copy of the source, you can install it with:
.. code-block:: console
$ virtualenv cinder
$ python setup.py install
Dependencies
------------
*Cinderlib* has less functionality than Cinder, which results in fewer required
libraries.
When installing from PyPi or source, we'll get all the dependencies regardless
of whether they are needed by *cinderlib* or not, since the Cinder Python
package specifies all the dependencies. Installing from packages may result in
fewer dependencies, but this will depend on the distribution package itself.
To increase loading speed, and reduce memory footprint and dependencies,
*cinderlib* fakes all unnecessary packages at runtime if they have not already
been loaded.
This can be convenient when creating containers, as one can remove unnecessary
packages on the same layer *cinderlib* gets installed to get a smaller
containers.
If our application uses any of the packages *cinderlib* fakes, we just have to
import them before importing *cinderlib*. This way *cinderlib* will not fake
them.
The list of top level packages unnecessary for *cinderlib* are:
- castellan
- cursive
- googleapiclient
- jsonschema
- keystoneauth1
- keystonemiddleware
- oauth2client
- os-win
- oslo.messaging
- oslo.middleware
- oslo.policy
- oslo.reports
- oslo.upgradecheck
- osprofiler
- paste
- pastedeploy
- pyparsing
- python-barbicanclient
- python-glanceclient
- python-novaclient
- python-swiftclient
- python-keystoneclient
- routes
- webob
.. _Github repo: https://github.com/openstack/cinderlib
.. _tarball: https://github.com/openstack/cinderlib/tarball/master

View File

@ -1,49 +0,0 @@
Limitations
-----------
Cinderlib works around a number of issues that were preventing the usage of the
drivers by other Python applications, some of these are:
- *Oslo config* configuration loading.
- Cinder-volume dynamic configuration loading.
- Privileged helper service.
- DLM configuration.
- Disabling of cinder logging.
- Direct DB access within drivers.
- *Oslo Versioned Objects* DB access methods such as `refresh` and `save`.
- Circular references in *Oslo Versioned Objects* for serialization.
- Using multiple drivers in the same process.
Being in its early development stages, the library is in no way close to the
robustness or feature richness that the Cinder project provides. Some of the
more noticeable limitations one should be aware of are:
- Most methods don't perform argument validation so it's a classic GIGO_
library.
- The logic has been kept to a minimum and higher functioning logic is expected
to be handled by the caller: Quotas, tenant control, migration, etc.
- Limited test coverage.
- Only a subset of Cinder available operations are supported by the library.
Besides *cinderlib's* own limitations the library also inherits some from
*Cinder's* code and will be bound by the same restrictions and behaviors of the
drivers as if they were running under the standard *Cinder* services. The most
notorious ones are:
- Dependency on the *eventlet* library.
- Behavior inconsistency on some operations across drivers. For example you
can find drivers where cloning is a cheap operation performed by the storage
array whereas other will actually create a new volume, attach the source and
new volume and perform a full copy of the data.
- External dependencies must be handled manually. So users will have to take
care of any library, package, or CLI tool that is required by the driver.
- Relies on command execution via *sudo* for attach/detach operations as well
as some CLI tools.
.. _GIGO: https://en.wikipedia.org/wiki/Garbage_in,_garbage_out

View File

@ -1,242 +0,0 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\cinderlib.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\cinderlib.ghc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end

View File

@ -1,379 +0,0 @@
========
Backends
========
The *Backend* class provides the abstraction to access a storage array with a
specific configuration, which usually constrains our ability to operate on the
backend to a single pool.
.. note::
While some drivers have been manually validated, most have not, so there's a
good chance that using an untested driver will show unexpected behavior.
If you are testing *cinderlib* with an unverified backend you should use an
exclusive pool for the validation, so you don't have to be as careful when
creating resources, since you know that everything within that pool is
related to *cinderlib* and can be deleted using the vendor's management
tool.
If you try the library with another storage array I would love to hear
about your results, the library version, and the configuration used (masked
IPs, passwords, and users).
Initialization
--------------
Before we can access a storage array we have to initialize the *Backend*,
which has only one defined parameter; all other parameters are free-form and
not defined in the method prototype:
.. code-block:: python
class Backend(object):
    def __init__(self, volume_backend_name, **driver_cfg):
There are two arguments that we'll always have to pass on initialization: one
is the `volume_backend_name`, the unique identifier that *cinderlib* will use
to identify this specific driver initialization (so we need to make sure not
to repeat the name), and the other is the `volume_driver`, which refers to the
Python namespace that points to the *Cinder* driver.
All other *Backend* configuration options are free-form keyword arguments.
Each driver and storage array requires different information to operate, some
require credentials to be passed as parameters, while others use a file, and
some require the control address as well as the data addresses. This behavior
is inherited from the *Cinder* project.
To find out which configuration options are available and which ones are
compulsory, the best approach is to check the vendor's documentation or the
`OpenStack's Cinder volume driver configuration documentation`_.
*Cinderlib* supports references in the configuration values using the forms:
- ``$[<config_group>.]<config_option>``
- ``${[<config_group>.]<config_option>}``
Where ``config_group`` is ``backend_defaults`` for the driver configuration
options.
.. attention::
The ``rbd_keyring_file`` configuration parameter does not accept
templating.
Examples:
- ``target_ip_address='$my_ip'``
- ``volume_group='my-${backend_defaults.volume_backend_name}-vg'``
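As a hedged sketch of how these references can be used, here is a hypothetical
LVM configuration where ``$my_ip`` resolves to *Cinder*'s ``my_ip`` option
(assumed here to have been set as a keyword argument on ``cinderlib.setup``):
.. code-block:: python
import cinderlib
# Assumption: 'my_ip' was set via cinderlib.setup(my_ip='192.168.1.22')
lvm = cinderlib.Backend(
    volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
    volume_group='cinder-volumes',
    target_protocol='iscsi',
    target_helper='lioadm',
    target_ip_address='$my_ip',
    volume_backend_name='lvm_templated',
)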
.. attention::
Some drivers have external dependencies which we must satisfy before
initializing the driver, or it may fail either on initialization or when
running specific operations. For example, Kaminario requires the *krest*
Python library, and Pure requires the *purestorage* Python library.
Python library dependencies are usually documented in the
`driver-requirements.txt file
<https://opendev.org/openstack/cinder/src/branch/master/driver-requirements.txt>`_;
as for the required CLI tools, we'll have to check the vendor's
documentation.
Cinder only supports using one driver at a time, as each process only handles
one backend, but *cinderlib* has overcome this limitation and supports having
multiple *Backends* simultaneously.
Let's now look at initialization examples for some storage backends:
LVM
---
.. code-block:: python
import cinderlib
lvm = cinderlib.Backend(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi',
)
XtremIO
-------
.. code-block:: python
import cinderlib
xtremio = cinderlib.Backend(
volume_driver='cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver',
san_ip='10.10.10.1',
xtremio_cluster_name='xtremio_cluster',
san_login='xtremio_user',
san_password='xtremio_password',
volume_backend_name='xtremio',
)
Kaminario
---------
.. code-block:: python
import cinderlib as cl
kaminario = cl.Backend(
volume_driver='cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver',
san_ip='10.10.10.2',
san_login='kaminario_user',
san_password='kaminario_password',
volume_backend_name='kaminario_iscsi',
)
For other backend configuration examples please refer to the :doc:`../validated` page.
Available Backends
------------------
The usual procedure is to initialize a *Backend* and store it in a variable so
we can use it to manage our storage backend, but there are cases where we may
have lost the reference, or we are in a place in our code where we don't have
access to the original variable.
For these situations we can use *cinderlib's* tracking of *Backends* through
the `backends` class dictionary where all created *Backends* are stored using
the `volume_backend_name` as the key.
.. code-block:: python
for backend in cinderlib.Backend.backends.values():
    initialized_msg = '' if backend.initialized else 'not '
    print('Backend %s is %sinitialized with configuration: %s' %
          (backend.id, initialized_msg, backend.config))
Installed Drivers
-----------------
Available drivers for *cinderlib* depend on the Cinder version installed, so
there is a method called `list_supported_drivers` that lists information about
the drivers included with the Cinder release installed on the system.
The method accepts an ``output_version`` parameter where we can specify the
desired output format:
- ``1`` for human usage (default value).
- ``2`` for automation tools.
The main differences are the values of the driver options and how the expected
type of these options is described.
.. code-block:: python
import cinderlib
drivers = cinderlib.list_supported_drivers()
And what we'll get is a dictionary with the class name of the driver, a
description, the version of the driver, etc.
Here's the entry for the LVM driver:
.. code-block:: python
{'LVMVolumeDriver':
{'ci_wiki_name': 'Cinder_Jenkins',
'class_fqn': 'cinder.volume.drivers.lvm.LVMVolumeDriver',
'class_name': 'LVMVolumeDriver',
'desc': 'Executes commands relating to Volumes.',
'supported': True,
'version': '3.0.0',
'driver_options': [
{'advanced': 'False',
'default': '64',
'deprecated_for_removal': 'False',
'deprecated_opts': '[]',
'deprecated_reason': 'None',
'deprecated_since': 'None',
'dest': 'spdk_max_queue_depth',
'help': 'Queue depth for rdma transport.',
'metavar': 'None',
'mutable': 'False',
'name': 'spdk_max_queue_depth',
'positional': 'False',
'required': 'False',
'sample_default': 'None',
'secret': 'False',
'short': 'None',
'type': 'Integer(min=1, max=128)'},
],
}
},
The equivalent for the LVM driver for automation would be:
.. code-block::
import cinderlib
drivers = cinderlib.list_supported_drivers(2)
{'LVMVolumeDriver':
{'ci_wiki_name': 'Cinder_Jenkins',
'class_fqn': 'cinder.volume.drivers.lvm.LVMVolumeDriver',
'class_name': 'LVMVolumeDriver',
'desc': 'Executes commands relating to Volumes.',
'supported': True,
'version': '3.0.0',
'driver_options': [
{'advanced': False,
'default': 64,
'deprecated_for_removal': False,
'deprecated_opts': [],
'deprecated_reason': None,
'deprecated_since': None,
'dest': 'spdk_max_queue_depth',
'help': 'Queue depth for rdma transport.',
'metavar': None,
'mutable': False,
'name': 'spdk_max_queue_depth',
'positional': False,
'required': False,
'sample_default': None,
'secret': False,
'short': None,
'type': {'choices': None,
'max': 128,
'min': 1,
'num_type': <class 'int'>,
'type_class': Integer(min=1, max=128),
'type_name': 'integer value'}}
],
}
},
Stats
-----
In *Cinder* all cinder-volume services periodically report the stats of their
backend to the cinder-scheduler services so they can do informed placing
decisions on operations such as volume creation and volume migration.
Some of the keys provided in the stats dictionary include:
- `driver_version`
- `free_capacity_gb`
- `storage_protocol`
- `total_capacity_gb`
- `vendor_name`
- `volume_backend_name`
Additional information can be found in the `Volume Stats section
<https://docs.openstack.org/cinder/queens/contributor/drivers.html#volume-stats>`_
within the Developer's Documentation.
Gathering stats is a costly operation for many storage backends, so by default
the stats method will return cached values instead of collecting them again.
If the latest data is required, we should pass `refresh=True` in the `stats`
method call.
Here's an example of the output from the LVM *Backend* with refresh:
.. code-block:: python
>>> from pprint import pprint
>>> pprint(lvm.stats(refresh=True))
{'driver_version': '3.0.0',
'pools': [{'QoS_support': False,
'filter_function': None,
'free_capacity_gb': 20.9,
'goodness_function': None,
'location_info': 'LVMVolumeDriver:router:cinder-volumes:thin:0',
'max_over_subscription_ratio': 20.0,
'multiattach': False,
'pool_name': 'LVM',
'provisioned_capacity_gb': 0.0,
'reserved_percentage': 0,
'thick_provisioning_support': False,
'thin_provisioning_support': True,
'total_capacity_gb': '20.90',
'total_volumes': 1}],
'sparse_copy_volume': True,
'storage_protocol': 'iSCSI',
'vendor_name': 'Open Source',
'volume_backend_name': 'LVM'}
Available volumes
-----------------
The *Backend* class keeps track of all the *Backend* instances in the
`backends` class attribute, and each *Backend* instance has a `volumes`
property that will return a `list` of all the existing volumes in the specific
backend. Deleted volumes will no longer be present.
So assuming that we have an `lvm` variable holding an initialized *Backend*
instance where we have created volumes we could list them with:
.. code-block:: python
for vol in lvm.volumes:
    print('Volume %s has %s GB' % (vol.id, vol.size))
Attribute `volumes` is a lazy loadable property that will only update its value
on the first access. More information about lazy loadable properties can be
found in the :doc:`tracking` section. For more information on data loading
please refer to the :doc:`metadata` section.
.. note::
The `volumes` property does not query the storage array for a list of
existing volumes. It queries the metadata storage to see which volumes
have been created using *cinderlib* and returns that list. This means that
we won't be able to manage pre-existing resources from the backend, and we
won't notice when a resource is removed directly on the backend.
Attributes
----------
The *Backend* class has no attributes of interest besides the `backends`
mentioned above and the `id`, `config`, and JSON related properties we'll see
later in the :doc:`serialization` section.
The `id` property refers to the `volume_backend_name`, which is also the key
used in the `backends` class attribute.
The `config` property will return a dictionary with only the volume backend's
name by default to limit unintended exposure of backend credentials on
serialization. If we want it to return all the configuration options we need
to pass `output_all_backend_info=True` on *cinderlib* initialization.
If we try to access any non-existent attribute in the *Backend*, *cinderlib*
will understand we are trying to access a *Cinder* driver attribute and will
try to retrieve it from the driver's instance. This is the case with the
`initialized` property we accessed in the backends listing example.
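A minimal sketch of these properties, assuming `lvm` holds the initialized
*Backend* from the earlier examples:
.. code-block:: python
print(lvm.id)           # 'lvm_iscsi', i.e. the volume_backend_name
print(lvm.config)       # only the backend name, unless output_all_backend_info=True
print(lvm.initialized)  # not a Backend attribute; forwarded to the driver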
Other methods
-------------
All other methods available in the *Backend* class will be explained in their
relevant sections:
- `load` and `load_backend` will be explained together with `json`, `jsons`,
`dump`, `dumps` properties and `to_dict` method in the :doc:`serialization`
section.
- `create_volume` method will be covered in the :doc:`volumes` section.
- `validate_connector` will be explained in the :doc:`connections` section.
- `global_setup` has been covered in the :doc:`initialization` section.
- `pool_names`: tuple with all the pools available in the driver. Non pool
aware drivers will have only one pool and use the backend's name as the pool
name. Pool aware drivers may report multiple values, which can be passed to
the `create_volume` method in the `pool_name` parameter, as sketched below.
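A short sketch, assuming `backend` is an initialized multi-pool *Backend*:
.. code-block:: python
print(backend.pool_names)
# Create the volume on a specific pool instead of the first/default one
vol = backend.create_volume(size=1, pool_name=backend.pool_names[0])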
.. _OpenStack's Cinder volume driver configuration documentation: https://docs.openstack.org/cinder/latest/configuration/block-storage/volume-drivers.html

View File

@ -1,331 +0,0 @@
===========
Connections
===========
When talking about attaching a *Cinder* volume there are three steps that must
happen before the volume is available in the host:
1. Retrieve connection information from the host where the volume is going to
be attached. Here we would be getting the iSCSI initiator name, IP, and
similar information.
2. Use the connection information from step 1 to make the volume accessible to
the host in the storage backend, returning the volume connection information.
This step entails exporting the volume and initializing the connection.
3. Attach the volume to the host using the data retrieved in step 2.
If we are running *cinderlib* and doing the attach in the same host, then all
steps will be done in the same host. But in many cases you may want to manage
the storage backend in one host and attach a volume in another. In such cases,
steps 1 and 3 will happen in the host that needs the attach and step 2 on the
node running *cinderlib*.
Projects in *OpenStack* use the *OS-Brick* library to manage the attaching and
detaching processes. Same thing happens in *cinderlib*. The only difference
is that there are some connection types that are handled by the hypervisors in
*OpenStack*, so we need some alternative code in *cinderlib* to manage them.
*Connection* objects' most interesting attributes are:
- `connected`: Boolean that reflects if the connection is complete.
- `volume`: The *Volume* to which this instance holds the connection
information.
- `protocol`: String with the connection protocol for this volume, e.g.
`iscsi`, `rbd`.
- `connector_info`: Dictionary with the connection information from the host
that is attaching, such as its hostname, IP address, initiator name, etc.
- `conn_info`: Dictionary with the connection information the host requires to
do the attachment, such as IP address, target name, credentials, etc.
- `device`: If we have done a local attachment this will hold a dictionary with
all the attachment information, such as the `path`, the `type`, the
`scsi_wwn`, etc.
- `path`: String with the path of the system device that has been created when
the volume was attached.
Local attach
------------
Once we have created a volume with *cinderlib* doing a local attachment is
really simple, we just have to call the `attach` method from the *Volume* and
we'll get the *Connection* information from the attached volume, and once we
are done we call the `detach` method on the *Volume*.
.. code-block:: python
vol = lvm.create_volume(size=1)
attach = vol.attach()
with open(attach.path, 'w') as f:
    f.write('*' * 100)
vol.detach()
This `attach` method will take care of everything, from gathering our local
connection information, to exporting the volume, initializing the connection,
and finally doing the local attachment of the volume to our host.
The `detach` operation works in a similar way, but performs the opposite steps
in reverse order. It will detach the volume from our host, terminate the
connection, and, if there are no more connections to the volume, it will also
remove the volume's export.
.. attention::
The *Connection* instance returned by the *Volume* `attach` method also has
a `detach` method, but this one behaves differently than the one we've seen
in the *Volume*, as it will just perform the local detach step, not the
terminate connection or remove export steps.
Remote connection
-----------------
For a remote connection, where you don't have the driver configuration or
access to the management storage network, attaching and detaching volumes is a
little more inconvenient, and how you do it will depend on whether you have
access to the metadata persistence storage or not.
In any case the general attach flow looks something like this:
- The consumer gets the connector information from its host.
- The controller receives the connector information from the consumer.
- The controller exports and maps the volume using the connector information
and gets the connection information needed to attach the volume on the
consumer.
- The consumer gets the connection information.
- The consumer attaches the volume using the connection information.
With access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case things are easier, as you can use the persistence storage to pass
information between the consumer and the controller node.
Assuming you have the following variables:
- `persistence_config` configuration of your metadata persistence storage.
- `node_id` unique string identifier for your consumer nodes that doesn't
change between reboots.
- `cinderlib_driver_configuration` is a dictionary with the Cinder backend
configuration needed by cinderlib to connect to the storage.
- `volume_id` ID of the volume we want to attach.
The consumer node must store its connector properties on start using the
key-value storage provided by the persistence plugin:
.. code-block:: python
import json
import socket
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
    storage_nw_ip = socket.gethostbyname(socket.gethostname())
    connector_dict = cl.get_connector_properties('sudo', storage_nw_ip,
                                                 True, False)
    value = json.dumps(connector_dict, separators=(',', ':'))
    kv = cl.KeyValue(node_id, value)
    cl.Backend.persistence.set_key_value(kv)
Then when we want to attach a volume to `node_id` the controller can retrieve
this information using the persistence plugin and export and map the volume for
the specific host.
.. code-block:: python
import json
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)
kv = cl.Backend.persistence.get_key_values(node_id)
if not kv:
    raise Exception('Unknown node')
connector_info = json.loads(kv[0].value)
vol = storage.Volume.get_by_id(volume_id)
vol.connect(connector_info, attached_host=node_id)
Once the volume has been exported and mapped, the connection information is
automatically stored by the persistence plugin and the consumer host can attach
the volume:
.. code-block:: python
vol = storage.Volume.get_by_id(volume_id)
connection = vol.connections[0]
connection.attach()
print('Volume %s attached to %s' % (vol.id, connection.path))
When attaching the volume the metadata plugin will store changes to the
Connection instance that are needed for the detaching.
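Detaching from the consumer node follows the same pattern in reverse; a hedged
sketch, reusing the `storage` and `volume_id` variables from above:
.. code-block:: python
vol = storage.Volume.get_by_id(volume_id)
conn = vol.connections[0]
conn.detach()  # local detach only; terminating the connection and removing
               # the export remain the controller node's job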
No access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is more inconvenient, as you'll have to handle the data exchange manually
as well as the *OS-Brick* library calls to do the attach/detach.
First we need to get the connector information on the host that is going to do
the attach:
.. code-block:: python
from os_brick.initiator import connector
connector_dict = connector.get_connector_properties('sudo', storage_nw_ip,
True, False)
Now we need to pass this connector information dictionary to the controller
node. This part will depend on your specific application/system.
In the controller node, once we have the contents of the `connector_dict`
variable we can export and map the volume and get the info needed by the
consumer:
.. code-block:: python
import cinderlib as cl
cl.setup(persistence_config=persistence_config)
storage = cl.Backend(**cinderlib_driver_configuration)
vol = storage.Volume.get_by_id(volume_id)
conn = vol.connect(connector_dict, attached_host=node_id)
connection_info = conn.connection_info
We have to pass the contents of `connection_info` information to the consumer
node, and that node will use it to attach the volume:
.. code-block:: python
from os_brick.initiator import connector
connector_dict = connection_info['connector']
conn_info = connection_info['conn']
protocol = conn_info['driver_volume_type']
conn = connector.InitiatorConnector.factory(
    protocol, 'sudo', use_multipath=True,
    device_scan_attempts=3, conn=connector_dict)
device = conn.connect_volume(conn_info['data'])
print('Volume attached to %s' % device.get('path'))
At this point we have the `device` variable that needs to be stored for the
disconnection, so we have to either store it on the consumer node, or pass it
to the controller node so we can save it with the connector info.
Here's an example on how to save it on the controller node:
.. code-block:: python
conn = vol.connections[0]
conn.device = device
conn.save()
.. warning:: At the time of this writing this mechanism doesn't support RBD
connections, as this support is added by cinderlib itself.
Multipath
---------
If we want to use multipathing for local attachments we must let the *Backend*
know when instantiating the driver by passing
`use_multipath_for_image_xfer=True`:
.. code-block:: python
import cinderlib
lvm = cinderlib.Backend(
volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi',
use_multipath_for_image_xfer=True,
)
Extend
------
The `Connection` object has an `extend` method that will refresh the host's
view of an attached volume to reflect the latest size of the volume and return
the new size in bytes.
There is no need to manually call this method for volumes that are locally
attached to the node that calls the `Volume`'s `extend` method, since that call
takes care of it.
When extending volumes that are attached to nodes other than the one calling
the `Volume`'s `extend` method we will need to either detach and re-attach the
volume on the host following the mechanisms explained above, or refresh the
current view of the volume.
How we refresh the host's view of an attached volume will depend on how we are
attaching the volumes.
With access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case things are easier, just like it was on the
`Remote connection`_.
Assuming we have a `volume_id` variable with the volume, and `storage` has the
`Backend` instance, all we need to do is:
.. code-block:: python
vol = storage.Volume.get_by_id(volume_id)
vol.connections[0].extend()
No access to the metadata persistence storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is more inconvenient, as you'll have to handle the data exchange manually
as well as the *OS-Brick* library calls to do the extend.
We'll need the connector information on the host where the volume is attached.
Assuming the dictionary is available in `connection_info`, the code would look
like this:
.. code-block:: python
from os_brick.initiator import connector
connector_dict = connection_info['connector']
protocol = connection_info['conn']['driver_volume_type']
conn = connector.InitiatorConnector.factory(
    protocol, 'sudo', use_multipath=True,
    device_scan_attempts=3, conn=connector_dict)
conn.extend_volume(connection_info['conn']['data'])
Multi attach
------------
Multi attach support has been added to *Cinder* in the Queens cycle, and it's
not currently supported by *cinderlib*.
Other methods
-------------
All other methods available in the *Connection* class will be explained in
their relevant sections:
- `load` will be explained together with `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the connection from the metadata storage and reload
any lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.

View File

@ -1,197 +0,0 @@
==============
Initialization
==============
Cinderlib itself doesn't require initialization, as it tries to provide
sensible defaults, but in some cases we may want to modify those defaults to
fit a specific desired behavior, and the library provides a mechanism to
support this.
Library initialization should be done before making any other library call,
including *Backend* initialization and loading serialized data; if we try to
do it after other calls the library will raise an `Exception`.
The provided *setup* method is `cinderlib.Backend.global_setup`, but for
convenience the library provides a reference to this class method in
`cinderlib.setup`.
The method definition is as follows:
.. code-block:: python
@classmethod
def global_setup(cls, file_locks_path=None, root_helper='sudo',
suppress_requests_ssl_warnings=True, disable_logs=True,
non_uuid_ids=False, output_all_backend_info=False,
project_id=None, user_id=None, persistence_config=None,
fail_on_missing_backend=True, host=None,
**cinder_config_params):
The meanings of the library's configuration options are:
file_locks_path
---------------
Cinder is a complex system that can support Active-Active deployments, and each
driver and storage backend has different restrictions, so in order to
facilitate mutual exclusion it provides 3 different types of locks depending
on the scope the driver requires:
- Between threads of the same process.
- Between different processes on the same host.
- In all the OpenStack deployment.
Cinderlib doesn't currently support the third type of locks, but that should
not be an inconvenience for most cinderlib usage.
Cinder uses file locks for the between-process locking, and cinderlib uses
that same kind of locking in place of the third type of locks, which is also
what Cinder uses when not deployed in an Active-Active fashion.
Parameter defaults to `None`, which will use the path indicated by the
`state_path` configuration option. It defaults to the current directory.
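A minimal sketch, assuming we want the locks on a persistent shared path (the
directory below is just an illustrative location):
.. code-block:: python
import cinderlib as cl
# Assumption: /var/lib/cinderlib/locks exists and is writable
cl.setup(file_locks_path='/var/lib/cinderlib/locks')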
root_helper
-----------
There are some operations in *Cinder* drivers that require `sudo` privileges,
this could be because they are running Python code that requires it or because
they are running a command with `sudo`.
Attaching and detaching operations with *cinderlib* will also require `sudo`
privileges.
This configuration option allows us to define a custom root helper, or to
disable all `sudo` operations by passing an empty string, when we know we
don't require them and we are running the process as a user without
passwordless `sudo`.
Defaults to `sudo`.
suppress_requests_ssl_warnings
------------------------------
Controls the suppression of the *requests* library SSL certificate warnings.
Defaults to `True`.
non_uuid_ids
------------
As mentioned in the :doc:`volumes` section, we can provide resource IDs
manually at creation time, and some drivers even support non-UUID identifiers,
but since that's not a given, validation will reject any non-UUID value.
This configuration option allows us to disable the validation of the IDs, at
the user's risk.
Defaults to `False`.
output_all_backend_info
-----------------------
Whether to include the *Backend* configuration when serializing objects.
Detailed information can be found in the :doc:`serialization` section.
Defaults to `False`.
disable_logs
------------
*Cinder* drivers are meant to be run within a full blown service, so they can
be quite verbose in terms of logging; that's why *cinderlib* disables logging
by default.
Defaults to `True`.
project_id
----------
*Cinder* is a multi-tenant service, and when resources are created they belong
to a specific tenant/project. With this parameter we can define, using a
string, an identifier for our project that will be assigned to the resources we
create.
Defaults to `cinderlib`.
user_id
-------
Within each project/tenant the *Cinder* project supports multiple users, so
when it creates a resource a reference to the user that created it is stored
in the resource. Using this parameter we can define, using a string, an
identifier for the user of cinderlib to be recorded in the resources.
Defaults to `cinderlib`.
persistence_config
------------------
*Cinderlib* operation requires data persistence, which is achieved with a
metadata persistence plugin mechanism.
The project includes 2 types of plugins providing 3 different persistence
solutions and more can be used via Python modules and passing custom plugins in
this parameter.
Users of the *cinderlib* library must decide which plugin best fits their needs
and pass the appropriate configuration in a dictionary as the
`persistence_config` parameter.
The parameter is optional, and defaults to the `memory` plugin, but if it's
passed it must always include the `storage` key specifying the plugin to be
used. All other key-value pairs must be valid parameters for the specific
plugin.
Value for the `storage` key can be a string identifying a plugin registered
using Python entrypoints, an instance of a class inheriting from
`PersistenceDriverBase`, or a `PersistenceDriverBase` class.
Information regarding available plugins, their description and parameters, and
different ways to initialize the persistence can be found in the
:doc:`metadata` section.
fail_on_missing_backend
-----------------------
To facilitate operations on resources, *Cinderlib* stores a reference to the
instance of the *backend* in most of the in-memory objects.
When deserializing or retrieving objects from the metadata persistence storage
*cinderlib* tries to properly set this *backend* instance based on the
*backends* currently in memory.
Trying to load an object without having instantiated the *backend* will result
in an error, unless we define `fail_on_missing_backend` to `False` on
initialization.
This is useful if we are sharing the metadata persistence storage and we want
to load a volume that is already connected to do just the attachment.
host
----
Host configuration option used for all volumes created by this cinderlib
execution.
In cinderlib, volumes are selected based on the backend name alone, not on the
host@backend combination like Cinder does. Therefore backend names must be
unique across all cinderlib applications that are using the same persistence
storage backend.
A second application running cinderlib with a different host value will have
access to the same resources if it uses the same backend name.
Defaults to the host's hostname.
Other keyword arguments
-----------------------
Any other keyword argument passed to the initialization method will be
considered a *Cinder* configuration option in the `[DEFAULT]` section.
This can be useful to set additional logging configuration like the debug log
level, the `state_path` used by default in many options, or other options like
the `ssh_hosts_key_file` required by drivers that use SSH.
For a list of the possible configuration options one should look into the
*Cinder* project's documentation.
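A hedged sketch of passing *Cinder* `[DEFAULT]` options on setup (``debug``
and ``state_path`` are standard *Cinder*/oslo options; the values here are
just illustrative):
.. code-block:: python
import cinderlib as cl
cl.setup(disable_logs=False,               # cinderlib's own option
         debug=True,                       # Cinder [DEFAULT] option
         state_path='/var/lib/cinderlib')  # Cinder [DEFAULT] option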

View File

@ -1,268 +0,0 @@
====================
Metadata Persistence
====================
*Cinder* drivers are not stateless, and the interface between the *Cinder* core
code and the drivers allows them to return data that can be stored in the
database. Some drivers, that have not been updated, are even accessing the
database directly.
Because *cinderlib* uses the *Cinder* drivers as they are, it cannot be
stateless either.
Originally *cinderlib* stored all the required metadata in RAM, and passed the
responsibility of persisting this information to the user of the library.
Library users would create or modify resources using *cinderlib*, and then
serialize the resources and manage the storage of this information themselves.
This allowed referencing those resources after exiting the application and in
case of a crash.
This solution would result in code duplication across projects, as many library
users would end up using the same storage types for the serialized data.
That's when the metadata persistence plugin was introduced in the code.
With the metadata plugin mechanism we can have plugins for different storages
and they can be shared between different projects.
*Cinderlib* includes 2 types of plugins providing 3 different persistence
solutions:
- Memory (the default)
- Database
- Database in memory
Using the memory mechanisms, users can still rely on the JSON serialization
mechanism to store the metadata themselves, or they can use a custom metadata
plugin to store the data wherever they want.
Persistence mechanism must be configured before initializing any *Backend*
using the `persistence_config` parameter in the `setup` or `global_setup`
methods.
.. note:: When deserializing data using the `load` method on memory based
storage, the data will not be available through the *Backend* unless we pass
`save=True` on the `load` call.
Memory plugin
-------------
The memory plugin is the fastest one, but it has its drawbacks. It doesn't
provide persistence across application restarts and it's more likely to have
issues than the database plugin.
Even though it's more likely to present issues with some untested drivers, it
is still the default plugin, because it exposes the raw plugin mechanism and
will surface any incompatibility issues that *Cinder* drivers may have with
external plugins.
This plugin is identified with the name `memory`, and here we can see a simple
example of how to save everything to a file:
.. code-block:: python
import cinderlib as cl
cl.setup(persistence_config={'storage': 'memory'})
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
vol = lvm.create_volume(1)
with open('lvm.txt', 'w') as f:
    f.write(lvm.dumps)
And how to load it back:
.. code-block:: python
import cinderlib as cl
cl.setup(persistence_config={'storage': 'memory'})
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
with open('lvm.txt', 'r') as f:
    data = f.read()
backends = cl.load(data, save=True)
print(backends[0].volumes)
Database plugin
---------------
This metadata plugin is the most likely to be compatible with any *Cinder*
driver, as it's built on top of *Cinder's* actual database layer.
This plugin includes 2 storage options: memory and real database. They are
identified with the storage identifiers `memory_db` and `db` respectively.
The memory option will store the data as an in-memory SQLite database. This
option helps debugging issues on untested drivers. If a driver works with the
memory database plugin but doesn't with the `memory` one, then the issue is
most likely caused by the driver accessing the database. Accessing the
database could happen directly, by importing the database layer, or
indirectly, by using versioned objects.
The memory database doesn't require any additional configuration, but when
using a real database we must pass the connection information using `SQLAlchemy
database URLs format`_ as the value of the `connection` key.
.. code-block:: python
import cinderlib as cl
persistence_config = {'storage': 'db', 'connection': 'sqlite:///cl.sqlite'}
cl.setup(persistence_config=persistence_config)
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
vol = lvm.create_volume(1)
Using it later is exactly the same:
.. code-block:: python
import cinderlib as cl
persistence_config = {'storage': 'db', 'connection': 'sqlite:///cl.sqlite'}
cl.setup(persistence_config=persistence_config)
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
print(lvm.volumes)
Custom plugins
--------------
The plugin mechanism uses Python entrypoints to identify plugins present in the
system. So any module exposing the `cinderlib.persistence.storage` entrypoint
will be recognized as a *cinderlib* metadata persistence plugin.
As an example, the definition in `setup.py` of the entrypoints for the plugins
included in *cinderlib* is:
.. code-block:: python
entry_points={
'cinderlib.persistence.storage': [
'memory = cinderlib.persistence.memory:MemoryPersistence',
'db = cinderlib.persistence.dbms:DBPersistence',
'memory_db = cinderlib.persistence.dbms:MemoryDBPersistence',
],
},
But there may be cases where we don't want to create entry points available
system wide, and we want an application-only plugin mechanism. For this
purpose *cinderlib* supports passing a plugin instance or class as the value
of the `storage` key in the `persistence_config` parameter.
The instance and class must inherit from the `PersistenceDriverBase` in
`cinderlib/persistence/base.py` and implement all the following methods:
- `db`
- `get_volumes`
- `get_snapshots`
- `get_connections`
- `get_key_values`
- `set_volume`
- `set_snapshot`
- `set_connection`
- `set_key_value`
- `delete_volume`
- `delete_snapshot`
- `delete_connection`
- `delete_key_value`
The `__init__` method is usually needed as well; it will receive as keyword
arguments the parameters provided in the `persistence_config`. The `storage`
key-value pair is not included as part of the keyword parameters.
The invocation with a class plugin would look something like this:
.. code-block:: python
import cinderlib as cl
from cinderlib.persistence import base
class MyPlugin(base.PersistenceDriverBase):
    def __init__(self, location, user, password):
        ...
persistence_config = {'storage': MyPlugin, 'location': '127.0.0.1',
'user': 'admin', 'password': 'nomoresecrets'}
cl.setup(persistence_config=persistence_config)
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
Migrating storage
-----------------
Metadata is crucial for the proper operation of *cinderlib*, as the *Cinder*
drivers cannot retrieve this information from the storage backend.
There may be cases where we want to stop using a metadata plugin and start
using another one, but we have metadata on the old plugin, so we need to
migrate this information from one backend to another.
To achieve a metadata migration we can use methods `refresh`, `dump`, `load`,
and `set_persistence`.
An example code of how to migrate from SQLite to MySQL could look like this:
.. code-block:: python
import cinderlib as cl
# Setup the source persistence plugin
persistence_config = {'storage': 'db',
'connection': 'sqlite:///cinderlib.sqlite'}
cl.setup(persistence_config=persistence_config)
# Setup backends we want to migrate
lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
volume_group='cinder-volumes',
target_protocol='iscsi',
target_helper='lioadm',
volume_backend_name='lvm_iscsi')
# Get all the data into memory
data = cl.dump()
# Setup new persistence plugin
new_config = {
'storage': 'db',
'connection': 'mysql+pymysql://user:password@IP/cinder?charset=utf8'
}
cl.Backend.set_persistence(new_config)
# Load and save the data into the new plugin
backends = cl.load(data, save=True)
.. _SQLAlchemy database URLs format: http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls

View File

@ -1,210 +0,0 @@
=============
Serialization
=============
A *Cinder* driver is stateless on itself, but it still requires the right data
to work, and that's why the cinder-volume service takes care of storing the
state in the DB. This means that *cinderlib* will have to simulate the DB for
the drivers, as some operations actually return additional data that needs to
be kept and provided in any future operation.
Originally *cinderlib* stored all the required metadata in RAM, and passed the
responsibility of persisting this information to the user of the library.
Library users would create or modify resources using *cinderlib*, and then
would have to serialize the resources and manage the storage of this
information. This allowed referencing those resources after exiting the
application and in case of a crash.
Now we support :doc:`metadata` plugins, but there are still cases where we'll
want to serialize the data:
- When logging or debugging resources.
- When using a metadata plugin that stores the data in memory.
- Over the wire transmission of the connection information to attach a volume
on a remote node.
We have multiple methods to satisfy these needs, to serialize the data (`json`,
`jsons`, `dump`, `dumps`), to deserialize it (`load`), and to convert to a user
friendly object (`to_dict`).
To JSON
-------
We can get a JSON representation of any *cinderlib* object - *Backend*,
*Volume*, *Snapshot*, and *Connection* - using their following properties:
- `json`: Returns a JSON representation of the current object information as a
Python dictionary. Lazy loadable objects that have not been loaded will not
be present in the resulting dictionary.
- `jsons`: Returns a string with the JSON representation. It's the equivalent
of converting to a string the dictionary from the `json` property.
- `dump`: Identical to the `json` property with the exception that it ensures
all lazy loadable attributes have been loaded. If an attribute had already
been loaded its contents will not be refreshed.
- `dumps`: Returns a string with the JSON representation of the fully loaded
object. It's the equivalent of converting to a string the dictionary from
the `dump` property.
Besides these resource specific properties, we also have their equivalent
methods at the library level that will operate on all the *Backends* present in
the application.
.. attention:: On the objects, these are properties (`volume.dumps`), but on
the library, these are methods (`cinderlib.dumps()`).
.. note::
We don't have to worry about circular references, such as a *Volume* with a
*Snapshot* that has a reference to its source *Volume*, since *cinderlib*
is prepared to handle them.
To demonstrate the serialization in *cinderlib* we can look at an easy way to
save all the *Backends'* resources information from an application that uses
*cinderlib* with the metadata stored in memory:
.. code-block:: python
with open('cinderlib.txt', 'w') as f:
    f.write(cinderlib.dumps())
In a similar way we can also store a single *Backend* or a single *Volume*:
.. code-block:: python
vol = lvm.create_volume(size=1)
with open('lvm.txt', 'w') as f:
    f.write(lvm.dumps)
with open('vol.txt', 'w') as f:
    f.write(vol.dumps)
We must remember that `dump` and `dumps` trigger the loading of properties
that are not already loaded. Any lazy loadable property that was already
loaded will not be updated. A good way to ensure we are using the latest data
is to trigger a `refresh` on the backends before doing the `dump` or `dumps`.
.. code-block:: python
for backend in cinderlib.Backend.backends.values():
    backend.refresh()
with open('cinderlib.txt', 'w') as f:
    f.write(cinderlib.dumps())
When serializing *cinderlib* resources we'll get all the data currently
present. This means that when serializing a volume that is attached and has
snapshots we'll get them all serialized.
There are some cases where we don't want this, such as when implementing a
persistence metadata plugin. We should use the `to_json` and `to_jsons`
methods for such cases, as they will return a simplified serialization of the
resource containing only the data from the resource itself.
From JSON
---------
Just like we had the `json`, `jsons`, `dump`, and `dumps` methods in all the
*cinderlib* objects to serialize data, we also have the `load` method to
deserialize this data back and recreate a *cinderlib* internal representation
from JSON, be it stored in a Python string or a Python dictionary.
The `load` method is present in the *Backend*, *Volume*, *Snapshot*, and
*Connection* classes as well as in the library itself. The resource specific
`load` class method is the exact counterpart of the serialization methods, and
it will deserialize the specific resource from the class it's being called
from.
The library's `load` method is capable of loading anything we have serialized.
Not only can it load the full list of *Backends* with their resources, but it
can also load individual resources. This makes it the recommended way to
deserialize any data in *cinderlib*. By default, serialization and the
metadata storage are disconnected, so loading serialized data will not ensure
that the data is present in the persistence storage. We can ensure that
deserialized data is present in the persistence storage passing `save=True` to
the loading method.
Considering the files we created in the earlier examples we can easily load our
whole configuration with:
.. code-block:: python
# We must have initialized the Backends before reaching this point
with open('cinderlib.txt', 'r') as f:
    data = f.read()
backends = cinderlib.load(data, save=True)
And for a specific backend or an individual volume:
.. code-block:: python
# We must have initialized the Backends before reaching this point
with open('lvm.txt', 'r') as f:
    data = f.read()
lvm = cinderlib.load(data, save=True)
with open('vol.txt', 'r') as f:
    data = f.read()
vol = cinderlib.load(data)
This is the preferred way to deserialize objects, but we could also use the
specific object's `load` method.
.. code-block:: python
# We must have initialized the Backends before reaching this point
with open('lvm.txt', 'r') as f:
    data = f.read()
lvm = cinderlib.Backend.load(data)
with open('vol.txt', 'r') as f:
    data = f.read()
vol = cinderlib.Volume.load(data)
To dict
-------
The serialization properties and methods presented earlier are meant to store
all the data and allow reuse of that data when using drivers of different
releases. So they will include all the information required to be backward
compatible when moving from release N *Cinder* drivers to release N+1
drivers.
There will be times when we'll just want to have a nice dictionary
representation of a resource, be it to log it, to display it while debugging,
or to send it from our controller application to the node where we are going
to do the attachment. For these specific cases all resources, except the
*Backend*, have a `to_dict` method (not a property this time) that will only
return the relevant data from the resources.
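A quick sketch of using it, assuming `vol` is an existing *Volume* instance
(the exact keys present in the result depend on the resource type):
.. code-block:: python
vol_info = vol.to_dict()
print('Volume %s is %s' % (vol_info['id'], vol_info['status']))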
Backend configuration
---------------------
When *cinderlib* serializes any object it also stores the *Backend* this object
belongs to. For security reasons it only stores the identifier of the backend
by default, which is the `volume_backend_name`. Since we are only storing a
reference to the *Backend*, this means that when we are going through the
deserialization process the *Backend* the object belonged to must already be
present in *cinderlib*.
This should be OK for most *cinderlib* usages, since it's common practice to
store the storage backend connection information (credentials, addresses, etc.)
in a different location than the data; but there may be situations (for example
while testing) where we'll want to store everything in the same file, not only
the *cinderlib* representation of all the storage resources but also the
*Backend* configuration required to access the storage array.
To enable the serialization of the whole driver configuration we have to
specify `output_all_backend_info=True` on the *cinderlib* initialization
resulting in a self contained file with all the information required to manage
the resources.
This means that with this configuration option we won't need to configure the
*Backends* prior to loading the serialized JSON data; we can just load the
data and *cinderlib* will automatically set up the *Backends*.

View File

@ -1,69 +0,0 @@
=========
Snapshots
=========
The *Snapshot* class provides the abstraction layer required to perform all
operations on an existing snapshot, which means that the snapshot creation
operation must be invoked from another class's instance, since the new
snapshot we want to create doesn't exist yet and we cannot use the *Snapshot*
class to manage it.
Create
------
Once we have a *Volume* instance we are ready to create snapshots from it, and
we can do it for attached as well as detached volumes.
.. note::
Some drivers, like NFS, require assistance from the Compute service for
attached volumes, so there is currently no way of doing this with
*cinderlib*.
Creating a snapshot can only be performed by the `create_snapshot` method from
our *Volume* instance, and once we have created a snapshot it will be tracked
in the *Volume* instance's `snapshots` set.
Here is a simple example that creates a snapshot and uses the `snapshots` set
to verify that both the value returned by the call and the entry added to the
`snapshots` attribute reference the same object, and that the `volume`
attribute of the *Snapshot* references the source volume.
.. code-block:: python
vol = lvm.create_volume(size=1)
snap = vol.create_snapshot()
assert snap is list(vol.snapshots)[0]
assert vol is snap.volume
Delete
------
Once we have created a *Snapshot* we can use its `delete` method to permanently
remove it from the storage backend.
Deleting a snapshot will remove its reference from the source *Volume*'s
`snapshots` set.
.. code-block:: python
vol = lvm.create_volume(size=1)
snap = vol.create_snapshot()
assert 1 == len(vol.snapshots)
snap.delete()
assert 0 == len(vol.snapshots)
Other methods
-------------
All other methods available in the *Snapshot* class will be explained in their
relevant sections:
- `load` will be explained together with `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the snapshot from the metadata storage and reload any
lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.
- `create_volume` method has been covered in the :doc:`volumes` section.

View File

@ -1,66 +0,0 @@
Resource tracking
-----------------
*Cinderlib* users will surely have their own variables to keep track of the
*Backends*, *Volumes*, *Snapshots*, and *Connections*, but there may be cases
where this is not enough, be it because we are in a place in our code where we
don't have access to the original variables, because we want to iterate over
all instances, or because we are running some manual tests and have lost the
reference to a resource.
For these cases we can use *cinderlib's* various tracking systems to access
the resources. These tracking systems are also used by *cinderlib* in the
serialization process. They all used to be in memory, but some now reside in
the metadata persistence storage.
*Cinderlib* keeps track of all:
- Initialized *Backends*.
- Existing volumes in a *Backend*.
- Connections to a volume.
- Local attachment to a volume.
- Snapshots for a given volume.
Initialized *Backends* are stored in a dictionary in `Backend.backends` using
the `volume_backend_name` as the key.
Existing volumes in a *Backend* are stored in the persistence storage, and can
be lazy loaded using the *Backend* instance's `volumes` property.
Existing *Snapshots* for a *Volume* are stored in the persistence storage, and
can be lazy loaded using the *Volume* instance's `snapshots` property.
Connections to a *Volume* are stored in the persistence storage, and can be
lazy loaded using the *Volume* instance's `connections` property.
.. note:: Lazy loadable properties will only load the value the first time we
access them. Successive accesses will just return the cached value. To
retrieve latest values for them as well as for the instance we can use the
`refresh` method.
The local attachment *Connection* of a volume is stored in the *Volume*
instance's `local_attach` attribute and is stored in memory, so unloading the
library will lose this information.
We can easily use all these properties to display the status of all the
resources we've created:
.. code-block:: python
# If the volumes lazy loadable property was already loaded, refresh it
lvm_backend.refresh()
for vol in lvm_backend.volumes:
    print('Volume %s is currently %s' % (vol.id, vol.status))
    # Refresh volume's snapshots and connections if previously lazy loaded
    vol.refresh()
    for snap in vol.snapshots:
        print('Snapshot %s for volume %s is currently %s' %
              (snap.id, snap.volume.id, snap.status))
    for conn in vol.connections:
        print('Connection from %s with ip %s to volume %s is %s' %
              (conn.connector_info['host'], conn.connector_info['ip'],
               conn.volume.id, conn.status))

View File

@ -1,262 +0,0 @@
=======
Volumes
=======
"The *Volume* class provides the abstraction layer required to perform all
operations on an existing volume. Volume creation operations are carried out
at the *Backend* level.
Create
------
The base resource in storage is the volume, and to create one *cinderlib*
provides three different mechanisms, each one with a different method that
will be called on the source of the new volume.
So we have:
- Empty volumes that have no resource source and will have to be created
directly on the *Backend* via the `create_volume` method.
- Cloned volumes that will be created from a source *Volume* using its `clone`
method.
- Volumes from a snapshot, where the creation is initiated by the
`create_volume` method from the *Snapshot* instance.
.. note::
*Cinder* NFS backends will create an image and not a directory to store
files, which falls in line with *Cinder* being a Block Storage provider and
not a filesystem provider like *Manila* is.
So assuming that we have an `lvm` variable holding an initialized *Backend*
instance we could create a new 1GB volume quite easily:
.. code-block:: python
from pprint import pprint
print('Stats before creating the volume are:')
pprint(lvm.stats())
vol = lvm.create_volume(1)
print('Stats after creating the volume are:')
pprint(lvm.stats())
Now, if we have a volume that already contains data and we want to create a new
volume that starts with the same contents we can use the source volume as the
cloning source:
.. code-block:: python
cloned_vol = vol.clone()
Some drivers support cloning to a bigger volume, so we can specify the new
size in the call and the driver will take care of extending the volume after
cloning it; this is usually tightly linked to the driver's support for the
`extend` operation.
Cloning to a greater size would look like this:
.. code-block:: python
new_size = vol.size + 1
cloned_bigger_volume = vol.clone(size=new_size)
.. note::
Cloning efficiency is directly linked to the storage backend in use, so it
will not have the same performance in all backends. While some backends
like the Ceph/RBD will be extremely efficient others may range from slow to
being actually implemented as a `dd` operation performed by the driver
attaching source and destination volumes.
We can also create a new volume from a *Snapshot* instance using its
`create_volume` method:
.. code-block:: python
vol = snap.create_volume()
.. note::
Just like with the cloning functionality, not all storage backends can
efficiently handle creating a volume from a snapshot.
On volume creation we can pass additional parameters like a `name` or a
`description`, but these will be irrelevant for the actual volume creation and
will only be useful to us to easily identify our volumes or to store additional
information.
Available fields with their types can be found in `Cinder's Volume OVO
definition
<https://github.com/openstack/cinder/blob/stable/queens/cinder/objects/volume.py#L71-L131>`_,
but most of them are only relevant within the full *Cinder* service.
We can access these fields as if they were part of the *cinderlib* *Volume*
instance, since the class will try to retrieve any non *cinderlib* attribute
from *Cinder*'s internal OVO representation.
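A small hedged sketch of this attribute passthrough, assuming `lvm` is an
initialized *Backend* (the `name` and `description` parameters are the ones
described in the field list below):
.. code-block:: python
vol = lvm.create_volume(1, name='data-disk', description='example volume')
# 'status' and 'display_name' live in the underlying Cinder OVO
print(vol.id, vol.status, vol.display_name)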
Some of the fields we could be interested in are:
- `id`: UUID-4 unique identifier for the volume.
- `user_id`: String identifier, in *Cinder* it's a UUID, but we can choose
here.
- `project_id`: String identifier, in *Cinder* it's a UUID, but we can choose
here.
- `snapshot_id`: ID of the source snapshot used to create the volume. This
will be filled by *cinderlib*.
- `host`: Used to store the backend name information together with the host
name where cinderlib is running. This information is stored as a string in
the form of *host@backend#pool*. This is an optional parameter, and passing
it to `create_volume` will override the default value, allowing our caller to
request a specific pool on multi-pool backends, though we recommend using
the `pool_name` parameter instead. Issues will arise if the parameter doesn't
contain the correct information.
- `pool_name`: Pool name to use when creating the volume. Default is to use
the first or only pool. To know possible values for a backend use the
`pool_names` property on the *Backend* instance.
- `size`: Volume size in GiB.
- `availability_zone`: In case we want to define AZs.
- `status`: This represents the status of the volume, and the most important
statuses are `available`, `error`, `deleted`, `in-use`, `creating`.
- `attach_status`: This can be `attached` or `detached`.
- `scheduled_at`: Date-time when the volume was scheduled to be created.
Currently not being used by *cinderlib*.
- `launched_at`: Date-time when the volume creation was completed. Currently
not being used by *cinderlib*.
- `deleted`: Boolean value indicating whether the volume has already been
deleted. It will be filled by *cinderlib*.
- `terminated_at`: When the volume delete was sent to the backend.
- `deleted_at`: When the volume delete was completed.
- `display_name`: Name identifier, this is passed as `name` to all *cinderlib*
volume creation methods.
- `display_description`: Long description of the volume, this is passed as
`description` to all *cinderlib* volume creation methods.
- `source_volid`: ID of the source volume used to create this volume. This
will be filled by *cinderlib*.
- `bootable`: Not relevant for *cinderlib*, but maybe useful for the
*cinderlib* user.
- `extra_specs`: Extra volume configuration used by some drivers to specify
additional information, such as compression, deduplication, etc. Key-Value
pairs are driver specific.
- `qos_specs`: Backend QoS configuration. Dictionary with driver specific
key-value pairs that are enforced by the backend.
.. note::
*Cinderlib* automatically generates a UUID for the `id` if one is not
provided at volume creation time, but the caller can actually provide a
specific `id`.
By default the `id` is limited to a valid UUID, and this is the only kind of
ID that is guaranteed to work on all drivers. For drivers that support non
UUID IDs we can instruct *cinderlib* to modify *Cinder*'s behavior and
allow them. This is done at *cinderlib* initialization time by passing
`non_uuid_ids=True`.
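As a minimal sketch, assuming the LVM backend used in the earlier examples and
a driver that accepts non UUID IDs (the custom id below is hypothetical):

.. code-block:: python

    import cinderlib as cl

    # Allow non UUID IDs at initialization time.
    cl.setup(non_uuid_ids=True)
    lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
                     volume_group='cinder-volumes',
                     volume_backend_name='lvm')
    # Only works on drivers that support non UUID IDs.
    vol = lvm.create_volume(1, id='my-favorite-volume')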
.. note::
*Cinderlib* does not do scheduling on driver pools, so setting the
`extra_specs` for a volume on drivers that expect the scheduler to select
a specific pool using them will not have the same behavior as in *Cinder*.
In that case the caller of *cinderlib* is expected to go through the stats,
find the pool that matches the criteria, and pass it to the *Backend*'s
`create_volume` method via the `pool_name` parameter.
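A minimal sketch of that caller-side pool selection, assuming the pools report
a `thin_provisioning_support` capability in their stats:

.. code-block:: python

    # Pick the first pool that matches our criteria and request the volume
    # on it explicitly, since cinderlib won't schedule it for us.
    stats = lvm.stats(refresh=True)
    pool = next(p['pool_name'] for p in stats['pools']
                if p.get('thin_provisioning_support'))
    vol = lvm.create_volume(1, pool_name=pool)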
Delete
------
Once we have created a *Volume* we can use its `delete` method to permanently
remove it from the storage backend.
In *Cinder* there are safeguards to prevent a delete operation from completing
if the volume has snapshots (unless the delete request comes with the `cascade`
option set to true), but here in *cinderlib* we don't, so it's the caller's
responsibility to delete the snapshots first.
Deleting a volume that still has snapshots doesn't have a defined behavior for
*Cinder* drivers, since it's never meant to happen, so some storage backends
delete the snapshots, others leave them as they were, and others will fail the
request.
Example of creating and deleting a volume:
.. code-block:: python

    vol = lvm.create_volume(size=1)
    vol.delete()
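Since *cinderlib* leaves snapshot cleanup to the caller, a minimal sketch of a
caller-side cascade delete would be:

.. code-block:: python

    # Delete the snapshots ourselves before deleting the volume; list() makes
    # a copy so we don't mutate the collection while iterating it.
    for snap in list(vol.snapshots):
        snap.delete()
    vol.delete()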
.. attention::
When deleting a volume that was the source of a cloning operation, some
backends cannot delete it (since it has copy-on-write clones) and will just
keep it as a silent volume that will be deleted when its snapshots and
clones are deleted.
Extend
------
Many storage backends and *Cinder* drivers support extending a volume to have
more space and you can do this via the `extend` method present in your *Volume*
instance.
If the *Cinder* driver doesn't implement the extend operation it will raise a
`NotImplementedError`.
The only parameter received by the `extend` method is the new size, which must
always be greater than the current value; *cinderlib* is not validating this
at the moment, so it is the caller's responsibility.
The call will return the new size of the volume in bytes.
Example of creating, extending, and deleting a volume:
.. code-block:: python

    vol = lvm.create_volume(size=1)
    print('Vol %s has %s GiB' % (vol.id, vol.size))
    new_size = vol.extend(2)
    print('Extended vol %s has %s GiB' % (vol.id, vol.size))
    print('Detected new size is %s bytes' % new_size)
    vol.delete()
A call to `extend` on a locally attached volume will automatically update the
host's view of the volume to reflect the new size. For volumes that are not
locally attached please refer to the `extend <connections.html#extend>`_
section of the connections documentation.
Other methods
-------------
All other methods available in the *Volume* class will be explained in their
relevant sections:
- `load` will be explained together with `json`, `jsons`, `dump`, and `dumps`
properties, and the `to_dict` method in the :doc:`serialization` section.
- `refresh` will reload the volume from the metadata storage and reload any
lazy loadable property that has already been loaded. Covered in the
:doc:`serialization` and :doc:`tracking` sections.
- `create_snapshot` method will be covered in the :doc:`snapshots` section
together with the `snapshots` attribute.
- `attach`, `detach`, `connect`, and `disconnect` methods will be explained in
the :doc:`connections` section.

View File

@ -1,67 +0,0 @@
=====
Usage
=====
Thanks to its fully object-oriented abstraction, where operations are methods
on the resources themselves instead of classic method invocations receiving
the resources to work on, *cinderlib* makes it easy to hit the ground running
when managing storage resources.
Once the *Cinder* and *cinderlib* packages are installed we just have to import
the library to start using it:
.. code-block:: python

    import cinderlib
.. note::
Installing the *Cinder* package does not require starting any of its
services (volume, scheduler, api) or auxiliary services (KeyStone, MySQL,
RabbitMQ, etc.).
The usage documentation is not too long, and we recommend reading all of it
before using the library, to be sure we have at least a high level view of
the different aspects of managing our storage with *cinderlib*.
Before going into too much detail there are some aspects we need to clarify to
make sure our terminology is in sync and we understand where each piece fits.
In *cinderlib* we have *Backends*, which refer to a storage array's specific
connection configuration, so a *Backend* usually doesn't refer to the whole
storage array. With a backend we'll usually have access to the configured pool.
Resources managed by *cinderlib* are *Volumes* and *Snapshots*, and a *Volume*
can be created from a *Backend*, another *Volume*, or from a *Snapshot*, and a
*Snapshot* can only be created from a *Volume*.
Once we have a volume we can create *Connections* so it can be accessed from
other hosts, or we can do a local *Attachment* of the volume, which will
retrieve the required local connection information of this host, create a
*Connection* on the storage to this host, and then do the local attachment.
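A minimal end-to-end sketch of these concepts, using the LVM driver from the
other sections (the `path` attribute of the returned attachment is the local
block device):

.. code-block:: python

    import cinderlib as cl

    lvm = cl.Backend(volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
                     volume_group='cinder-volumes',
                     target_helper='lioadm',
                     volume_backend_name='lvm')

    vol = lvm.create_volume(size=1)   # a Volume from a Backend
    snap = vol.create_snapshot()      # a Snapshot from a Volume
    clone = snap.create_volume()      # a Volume from a Snapshot

    attachment = vol.attach()         # local Attachment on this host
    print(attachment.path)            # block device path on this host
    vol.detach()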
Given that *Cinder* drivers are not stateless, *cinderlib* cannot be either.
That's why there is a metadata persistence plugin mechanism to provide
different ways to store resource states. Currently we have memory and database
plugins. Users can store the data wherever they want using the JSON
serialization mechanism or with a custom metadata plugin.
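As a sketch, selecting the database persistence plugin at setup time could
look like this (the plugin name and connection string follow the patterns
described in the metadata section):

.. code-block:: python

    import cinderlib as cl

    # Store resource metadata in a local SQLite file instead of memory.
    cl.setup(persistence_config={'storage': 'db',
                                 'connection': 'sqlite:///cinderlib.sqlite'})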
Each of the different topics are treated in detail on their specific sections:
.. toctree::
   :maxdepth: 1

   topics/initialization
   topics/backends
   topics/volumes
   topics/snapshots
   topics/connections
   topics/serialization
   topics/tracking
   topics/metadata
Auto-generated documentation is also available:
.. toctree::
   :maxdepth: 2

   api/cinderlib

View File

@ -1,285 +0,0 @@
=================
Validated drivers
=================
We are in the process of validating the *cinderlib* support of more *Cinder*
drivers and adding more automated testing of drivers on *Cinder*'s gate.
For now we have 2 backends, LVM and Ceph, that are tested on every *Cinder* and
*cinderlib* patch that is submitted and merged.
We have also been able to manually test multiple backends ourselves and
received reports of other backends that have been successfully tested.
In this document we present the list of all these drivers, and for each one we
include the storage array that was used, the configuration (with masked
sensitive data), any necessary external requirements (such as packages or
libraries), whether it is being automatically tested on the OpenStack gates
or not, and any additional notes.
Currently the following backends have been verified:
- `LVM`_ with LIO
- `Ceph`_
- Dell EMC `XtremIO`_
- Dell EMC `VMAX`_
- `Kaminario`_ K2
- NetApp `SolidFire`_
- HPE `3PAR`_
- `Synology`_
- `QNAP`_
LVM
---
- *Storage*: LVM with LIO
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: On *cinderlib* and *Cinder* jobs.
*Configuration*:
.. code-block:: YAML

    backends:
      - volume_backend_name: lvm
        volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group: cinder-volumes
        target_protocol: iscsi
        target_helper: lioadm
Ceph
----
- *Storage*: Ceph/RBD
- *Versions*: Luminous v12.2.5
- *Connection type*: RBD
- *Requirements*:
- ``ceph-common`` package
- ``ceph.conf`` file
- Ceph keyring file
- *Automated testing*: On *cinderlib* and *Cinder* jobs.
- *Notes*:
- If we don't define the ``keyring`` configuration parameter (it must use an
absolute path) in the file referenced by ``rbd_ceph_conf`` to point to our
``rbd_keyring_conf`` file, we'll need the ``rbd_keyring_conf`` file to be in
``/etc/ceph/``.
- ``rbd_keyring_conf`` must always be present and must follow the naming
convention of ``$cluster.client.$rbd_user.keyring``.
- The current driver cannot delete a snapshot if there's a dependent volume
(i.e. a volume created from that snapshot exists).
*Configuration*:
.. code-block:: YAML

    backends:
      - volume_backend_name: ceph
        volume_driver: cinder.volume.drivers.rbd.RBDDriver
        rbd_user: cinder
        rbd_pool: volumes
        rbd_ceph_conf: tmp/ceph.conf
        rbd_keyring_conf: /etc/ceph/ceph.client.cinder.keyring
XtremIO
-------
- *Storage*: Dell EMC XtremIO
- *Versions*: v4.0.15-20_hotfix_3
- *Connection type*: iSCSI, FC
- *Requirements*: None
- *Automated testing*: No
*Configuration* for iSCSI:
.. code-block:: YAML

    backends:
      - volume_backend_name: xtremio
        volume_driver: cinder.volume.drivers.dell_emc.xtremio.XtremIOISCSIDriver
        xtremio_cluster_name: CLUSTER_NAME
        use_multipath_for_image_xfer: true
        san_ip: w.x.y.z
        san_login: user
        san_password: toomanysecrets
*Configuration* for FC:
.. code-block:: YAML

    backends:
      - volume_backend_name: xtremio
        volume_driver: cinder.volume.drivers.dell_emc.xtremio.XtremIOFCDriver
        xtremio_cluster_name: CLUSTER_NAME
        use_multipath_for_image_xfer: true
        san_ip: w.x.y.z
        san_login: user
        san_password: toomanysecrets
Kaminario
---------
- *Storage*: Kaminario K2
- *Versions*: VisionOS v6.0.72.10
- *Connection type*: iSCSI
- *Requirements*:
- ``krest`` Python package from PyPi
- *Automated testing*: No
*Configuration*:
.. code-block:: YAML

    backends:
      - volume_backend_name: kaminario
        volume_driver: cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
        san_ip: w.x.y.z
        san_login: user
        san_password: toomanysecrets
        use_multipath_for_image_xfer: true
SolidFire
---------
- *Storage*: NetApp SolidFire
- *Versions*: Unknown
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: No
*Configuration*:
.. code-block:: YAML

    backends:
      - volume_backend_name: solidfire
        volume_driver: cinder.volume.drivers.solidfire.SolidFireDriver
        san_ip: w.x.y.z
        san_login: admin
        san_password: toomanysecrets
        sf_allow_template_caching: false
        image_volume_cache_enabled: True
        volume_clear: zero
VMAX
----
- *Storage*: Dell EMC VMAX
- *Versions*: Unknown
- *Connection type*: iSCSI
- *Automated testing*: No
.. code-block:: YAML

    size_precision: 2
    backends:
      - image_volume_cache_enabled: True
        volume_clear: zero
        volume_backend_name: VMAX_ISCSI_DIAMOND
        volume_driver: cinder.volume.drivers.dell_emc.vmax.iscsi.VMAXISCSIDriver
        san_ip: w.x.y.z
        san_rest_port: 8443
        san_login: user
        san_password: toomanysecrets
        vmax_srp: SRP_1
        vmax_array: 000197800128
        vmax_port_groups: [os-iscsi-pg]
3PAR
----
- *Storage*: HPE 3PAR 8200
- *Versions*: 3.3.1.410 (MU2)+P32,P34,P37,P40,P41,P45
- *Connection type*: iSCSI
- *Requirements*:
- ``python-3parclient>=4.1.0`` Python package from PyPi
- *Automated testing*: No
- *Notes*:
- Features work as expected, but due to a `bug in the 3PAR driver
<https://bugs.launchpad.net/cinder/+bug/1824371>`_ the stats test
(``test_stats_with_creation_on_3par``) fails.
*Configuration*:
.. code-block:: YAML

    backends:
      - volume_backend_name: 3par
        hpe3par_api_url: https://w.x.y.z:8080/api/v1
        hpe3par_username: user
        hpe3par_password: toomanysecrets
        hpe3par_cpg: [CPG_name]
        san_ip: w.x.y.z
        san_login: user
        san_password: toomanysecrets
        volume_driver: cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
        hpe3par_iscsi_ips: [w.x.y2.z2,w.x.y2.z3,w.x.y2.z4,w.x.y2.z4]
        hpe3par_debug: false
        hpe3par_iscsi_chap_enabled: false
        hpe3par_snapshot_retention: 0
        hpe3par_snapshot_expiration: 1
        use_multipath_for_image_xfer: true
Synology
--------
- *Storage*: Synology DS916+
- *Versions*: DSM 6.2.1-23824 Update 6
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: No
*Configuration*:
.. code-block:: YAML

    backends:
      - volume_backend_name: synology
        volume_driver: cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
        iscsi_protocol: iscsi
        target_ip_address: synology.example.com
        synology_admin_port: 5001
        synology_username: admin
        synology_password: toomanysecrets
        synology_pool_name: volume1
        driver_use_ssl: true
QNAP
----
- *Storage*: QNAP TS-831X
- *Versions*: 4.3.5.0728
- *Connection type*: iSCSI
- *Requirements*: None
- *Automated testing*: No
*Configuration*:
.. code-block:: YAML

    backends:
      - volume_backend_name: qnap
        volume_driver: cinder.volume.drivers.qnap.QnapISCSIDriver
        use_multipath_for_image_xfer: true
        qnap_management_url: https://w.x.y.z:443
        iscsi_ip_address: w.x.y.z
        qnap_storage_protocol: iscsi
        qnap_poolname: Storage Pool 1
        san_login: admin
        san_password: toomanysecrets

View File

@ -1,440 +0,0 @@
===================
Validating a driver
===================
This is a guide for *Cinder* driver maintainers to validate that their drivers
are fully supported by *cinderlib* and therefore by projects like Ember-CSI_
and oVirt_ that rely on it for storage backend management.
Validation steps include initial manual validation as well as automatic testing
at the gate as part of *Cinder*'s 3rd party CI jobs.
With DevStack
-------------
There are many ways we can install *cinderlib* for the initial validation
phase, such as using pip with the master repositories or PyPi, or using
packaged versions of the project, but the official recommendation is to use
DevStack_.
We believe that, as a *Cinder* driver maintainer, you are already familiar
with DevStack_ and know how to configure and use it to work with your storage
backend, so this will most likely be the easiest way for you to do an initial
validation of the driver.
*Cinderlib* has a `DevStack plugin`_ that automatically installs the library
during the stacking process when running the ``./stack.sh`` script, so we will
be adding this plugin to our ``local.conf`` file.
To use *cinderlib*'s master code we will add the line ``enable_plugin cinderlib
https://git.openstack.org/openstack/cinderlib`` after the ``[[local|localrc]]``
header in our normal ``local.conf`` file that already configures our
backend. The result will look like this::

    [[local|localrc]]
    enable_plugin cinderlib https://opendev.org/openstack/cinderlib
After adding this we can proceed to run the ``stack.sh`` script.
Once the script has finished executing we will have *cinderlib* installed from
Git in our system and we will also have sample Python code of how to use our
backend in *cinderlib* using the same backend configuration that exists in our
``cinder.conf``. The sample Python code is generated in file ``cinderlib.py``
in the same directory as our ``cinder.conf`` file.
The tool generating the ``cinderlib.py`` file supports ``cinder.conf`` files
with multiple backends, so there's no need to make any additional changes to
your ``local.conf`` if you usually deploy DevStack_ with multiple backends.
The generation of the sample code runs at the very end of the stacking process
(the ``extra`` stage), so we can use other DevStack storage plugins, such as
the Ceph plugin, and the sample code will still be properly generated.
For the LVM default backend the contents of the ``cinderlib.py`` file are:
.. code-block:: shell

    $ cat /etc/cinder/cinderlib.py
    import cinderlib as cl

    lvmdriver_1 = cl.Backend(volume_clear="zero", lvm_type="auto",
                             volume_backend_name="lvmdriver-1",
                             target_helper="lioadm",
                             volume_driver="cinder.volume.drivers.lvm.LVMVolumeDriver",
                             image_volume_cache_enabled=True,
                             volume_group="stack-volumes-lvmdriver-1")
To confirm that this automatically generated configuration is correct we can
do:
.. code-block:: shell

    $ cd /etc/cinder
    $ mv cinderlib.py example.py
    $ python
    [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from pprint import pprint as pp
    >>> import example
    >>> pp(example.lvmdriver_1.stats())
    {'driver_version': '3.0.0',
     'pools': [{'QoS_support': False,
                'backend_state': 'up',
                'filter_function': None,
                'free_capacity_gb': 4.75,
                'goodness_function': None,
                'location_info': 'LVMVolumeDriver:localhost.localdomain:stack-volumes-lvmdriver-1:thin:0',
                'max_over_subscription_ratio': '20.0',
                'multiattach': True,
                'pool_name': 'lvmdriver-1',
                'provisioned_capacity_gb': 0.0,
                'reserved_percentage': 0,
                'thick_provisioning_support': False,
                'thin_provisioning_support': True,
                'total_capacity_gb': 4.75,
                'total_volumes': 1}],
     'shared_targets': False,
     'sparse_copy_volume': True,
     'storage_protocol': 'iSCSI',
     'vendor_name': 'Open Source',
     'volume_backend_name': 'lvmdriver-1'}
    >>>
Here the name of the variable is `lvmdriver_1`, but in your case the name will
be different, as it uses the ``volume_backend_name`` from the driver section
in the ``cinder.conf`` file. One way to see the backends that have been
initialized by importing the example code is to look into the
`example.cl.Backend.backends` dictionary.
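For instance, a small sketch listing the initialized backends and the pools
they expose:

.. code-block:: python

    import example  # the generated file we renamed earlier
    import cinderlib as cl

    # Backend.backends maps each volume_backend_name to its Backend instance.
    for name, backend in cl.Backend.backends.items():
        print(name, backend.pool_names)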
Some people deploy DevStack_ with the default backend and then manually modify
the ``cinder.conf`` file afterwards and restart the *Cinder* services to use
their configuration. This is fine as well, as you can easily recreate the
Python code to include your backend using the `cinder-cfg-to-cinderlib-code`
tool that's installed with *cinderlib*.
Generating the example code manually can be done like this::

    $ cinder-cfg-to-cinderlib-code /etc/cinder/cinder.conf example.py
Now that we know that *cinderlib* can access our backend we will proceed to run
*cinderlib*'s functional tests to confirm that all the operations work as
expected.
The functional tests use the contents of the existing
``/etc/cinder/cinder.conf`` file to get the backend configuration. The
functional test runner also supports ``cinder.conf`` files with multiple
backends. Test methods have meaningful names ending in the backend name as per
the ``volume_backend_name`` values in the configuration file.
The functional tests are quite fast, as they usually take about 1 minute to
run:
.. code-block:: shell

    $ python -m unittest discover -v cinderlib.tests.functional
    test_attach_detach_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_attach_detach_volume_via_attachment_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_attach_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_clone_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_create_delete_snapshot_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_create_delete_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_create_snapshot_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_create_volume_from_snapshot_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_create_volume_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_disk_io_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_extend_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_stats_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok
    test_stats_with_creation_on_lvmdriver-1 (cinderlib.tests.functional.test_basic.BackendFunctBasic) ... ok

    ----------------------------------------------------------------------
    Ran 13 tests in 54.179s

    OK
There are a couple of interesting options we can use when running the
functional tests, set via environmental variables:
- ``CL_FTEST_LOGGING``: If set it will enable the *Cinder* code to log to
stdout during the testing. Undefined by default, which means no output.
- ``CL_FTEST_PRECISION``: Integer value describing how much precision we must
use when comparing volume sizes. Due to cylinder sizes some storage arrays
don't abide 100% by the requested size of the volume. With this option we
can define how many decimals will be correct when testing sizes. A value of
2 means that the backend could create a 1.0015869140625GB volume when we
request a 1GB volume and the tests wouldn't fail. Default is zero, which
means the size must match exactly or the test will fail.
- ``CL_FTEST_CFG``: Location of the configuration file. Defaults to
``/etc/cinder/cinder.conf``.
- ``CL_FTEST_POOL_NAME``: If our backend has multi-pool support and we have
configured multiple pools we can use this parameter to define which pool to
use for the functional tests. If not defined it will use the first reported
pool.
If we encounter problems while running the functional tests, but the *Cinder*
service is running just fine, we can go to the #openstack-cinder IRC channel on
OFTC, or send an email to the `openstack-discuss mailing list`_ starting
the subject with *[cinderlib]*.
Cinder 3rd party CI
-------------------
Once we have been able to successfully run the functional tests, it's time to
make the CI jobs run them on every patch submitted to *Cinder* to ensure the
driver remains compatible.
There are multiple ways we can accomplish this:
1. Create a 3rd party CI job listening to *cinderlib* patches
2. Create an additional 3rd party CI job in *Cinder*, similar to the one we
already have.
3. Reuse our existing 3rd party CI job, making it also run the *cinderlib*
functional tests.
Options #1 and #2 require more work, as we have to create new jobs, but they
make it easier to know that our driver is compatible with *cinderlib*. Option
#3 is the opposite: it is easy to set up, but it doesn't make it so obvious
that our driver is supported by *cinderlib*.
Configuration
^^^^^^^^^^^^^
When reusing existing 3rd party CI jobs, the normal setup will generate a valid
configuration file in ``/etc/cinder/cinder.conf`` and the *cinderlib*
functional tests will use it by default, so we don't have to do anything; but
when running a custom CI job we will have to write the configuration ourselves.
We don't have to do this dynamically, though: we can write it once and use it
in all the *cinderlib* jobs.
To get our backend configuration file for the functional tests we can:
- Use the ``cinder.conf`` file from one of your `DevStack`_ deployments.
- Manually create a minimal ``cinder.conf`` file.
- Create a custom YAML file.
We can create the minimal ``cinder.conf`` file using one generated by
`DevStack`_. Having a minimal configuration has the advantage of being easy to
read.
For an LVM backend it could look like this::

    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    volume_clear = none
    target_helper = lioadm
    volume_group = cinder-volumes
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_backend_name = lvm
Besides the *INI* style configuration files, we can also use YAML configuration
files for the functional tests.
The YAML file has 3 key-value pairs that are of interest to us. Only one of
them is mandatory, the other 2 are optional.
- ``logs``: Boolean value defining whether we want the *Cinder* code to log to
stdout during the testing. Defaults to ``false``. Takes precedence over the
environmental variable ``CL_FTEST_LOGGING``.
- ``size_precision``: Integer value describing how much precision we must use
when comparing volume sizes. Due to cylinder sizes some storage arrays don't
abide 100% by the requested size of the volume. With this option we can
define how many decimals will be correct when testing sizes. A value of 2
means that the backend could create a 1.0015869140625GB volume when we
request a 1GB volume and the tests wouldn't fail. Default is zero, which for
us means that the size must match exactly or the test will fail. Takes
precedence over the environmental variable ``CL_FTEST_PRECISION``.
- ``backends``: This is a list of dictionaries, each with the configuration
parameters that are set in the backend section of the ``cinder.conf`` file in
*Cinder*. This is a mandatory field.
The same configuration we presented for the LVM backend as a minimal
``cinder.conf`` file would look like this in the YAML format:
.. code-block:: yaml

    logs: false
    venv_sudo: false
    backends:
      - volume_backend_name: lvm
        volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group: cinder-volumes
        target_helper: lioadm
        volume_clear: none
To pass the location of the configuration file to the functional test runner we
must use the ``CL_FTEST_CFG`` environmental variable to point to the location
of our file. If we are using a ``cinder.conf`` file and we save it in
``/etc/cinder``, then we don't need to pass it to the test runner, since that's
the default location.
Use independent job
^^^^^^^^^^^^^^^^^^^
Creating new jobs is mostly identical to `what you already did for the Cinder
job <https://docs.openstack.org/infra/system-config/third_party.html>`_ with
the difference that here we don't need to do a full DevStack_ installation, as
it would take too long. We only need the *cinderlib*, *Cinder*, and *OS-Brick*
projects from master and then run *cinderlib*'s functional tests.
As an example, here's the Ceph job in the *cinderlib* project, which takes
approximately 8 minutes to run at the gate. In the ``pre-run`` phase it starts
a Ceph demo container to run a Ceph toy cluster as the backend. Then it
provides a custom configuration YAML file with the backend configuration::

    - job:
        name: cinderlib-ceph-functional
        parent: openstack-tox-functional-with-sudo
        required-projects:
          - openstack/os-brick
          - openstack/cinder
        pre-run: playbooks/setup-ceph.yaml
        nodeset: ubuntu-bionic
        vars:
          tox_environment:
            CL_FTEST_CFG: "cinderlib/tests/functional/ceph.yaml"
            CL_FTEST_ROOT_HELPER: sudo
            # These come from great-great-grandparent tox job
            NOSE_WITH_HTML_OUTPUT: 1
            NOSE_HTML_OUT_FILE: nose_results.html
            NOSE_WITH_XUNIT: 1
For jobs in the *cinderlib* project you can use the
``openstack-tox-functional-with-sudo`` parent, but for jobs in the *Cinder*
project you'll have to run the tests yourself, either by calling tox or by
using the same command we used during our manual testing: ``python -m unittest
discover -v cinderlib.tests.functional``.
Use existing job
^^^^^^^^^^^^^^^^
The easiest way to run the *cinderlib* functional tests is to reuse an
existing *Cinder* CI job, since we don't need to set up anything. We just need
to modify our job to run an additional command at the end.
Running the *cinderlib* functional tests after tempest will only add about 1
minute to the job's current runtime.
You will need to add ``openstack/cinderlib`` to the ``required-projects``
configuration of the Zuul job. This will ensure not only that *cinderlib* is
installed, but also that it is using the right patch when a patch has
cross-repository dependencies.
For example, the LVM lio job called ``cinder-tempest-dsvm-lvm-lio-barbican``
has the following required projects::

    required-projects:
      - openstack-infra/devstack-gate
      - openstack/barbican
      - openstack/cinderlib
      - openstack/python-barbicanclient
      - openstack/tempest
      - openstack/os-brick
To facilitate running the *cinderlib* functional tests in existing CI jobs the
*Cinder* project includes 2 playbooks:
- ``playbooks/tempest-and-cinderlib-run.yaml``
- ``playbooks/cinderlib-run.yaml``
These 2 playbooks support the ``cinderlib_ignore_errors`` boolean variable to
allow CI jobs to run the functional tests and ignore the results, so that
*cinderlib* failures won't block patches. You can think of it as running the
*cinderlib* tests as non voting. We don't recommend setting it, as it would
defeat the purpose of running the jobs at the gate; the *cinderlib* tests are
very consistent and reliable and don't raise false failures.
Which of these 2 playbooks to use depends on how we are defining our CI job.
For example, the LVM job uses the ``cinderlib-run.yaml`` playbook in its
`run.yaml file
<http://git.openstack.org/cgit/openstack/cinder/tree/playbooks/legacy/cinder-tempest-dsvm-lvm-lio-barbican/run.yaml>`_,
and the Ceph job uses the ``tempest-and-cinderlib-run.yaml`` as its `run job
command <http://git.openstack.org/cgit/openstack/cinder/tree/.zuul.yaml>`_.
If you are running tempest tests using a custom script you can also add the
running of the *cinderlib* tests at the end.
Notes
-----
Additional features
^^^^^^^^^^^^^^^^^^^
The validation process we've discussed tests the basic functionality, but some
*Cinder* drivers have additional functionality, such as backend QoS, multi-pool
support, and support for extra specs parameters that modify advanced volume
characteristics (such as compression, deduplication, and thin/thick
provisioning) on a per-volume basis.
*Cinderlib* supports these features, but since they are driver specific, there
is no automated testing in *cinderlib*'s functional tests; we can test them
manually ourselves using the ``extra_specs``, ``qos_specs``, and ``pool_name``
parameters of the ``create_volume`` and ``clone`` methods.
We can see the list of available pools in multi-pool drivers on the
``pool_names`` property in the Backend instance.
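A minimal manual-testing sketch, where ``backend`` is an initialized
*cinderlib* *Backend* and the specific keys and values are illustrative and
driver specific:

.. code-block:: python

    print(backend.pool_names)  # pools available on multi-pool drivers

    vol = backend.create_volume(
        1,
        pool_name=backend.pool_names[0],
        extra_specs={'compression': 'true'},  # driver specific pairs
        qos_specs={'maxIOPS': '1000'},        # driver specific pairs
    )
    clone = vol.clone(size=2, extra_specs={'compression': 'true'})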
Configuration options
^^^^^^^^^^^^^^^^^^^^^
One of the difficulties in the *Cinder* project is determining which options
are valid for a specific driver on a specific release. This is usually handled
by users checking the *OpenStack* or vendor documentation, which makes it
impossible to automate.
There was a recent addition to the *Cinder* driver interface that allowed
drivers to report exactly which configuration options were relevant for them
via the ``get_driver_options`` method.
In the initial patch some basic values were added to the drivers, but we urge
all driver maintainers to have a careful look at the values currently being
returned and make sure they are returning all relevant options, because this
will not only be useful for some *Cinder* installers, but also for projects
using *cinderlib*, as they will be able to automatically build GUIs to
configure backends and to validate provided parameters. Having incorrect or
missing values there will result in undesired behavior in those systems.
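As a sketch of how consumers discover this information, *cinderlib* exposes a
``list_supported_drivers`` helper (mentioned in the release notes); the exact
layout of the returned data sketched here is an assumption:

.. code-block:: python

    import cinderlib as cl

    # Maps driver class names to dictionaries describing each driver,
    # including the configuration options they report.
    drivers = cl.list_supported_drivers()
    print(sorted(drivers))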
Reporting results
-----------------
Once you have completed the process described in this guide you will have a
*Cinder* driver that is supported not only in *OpenStack*, but also by
*cinderlib* and its related projects, and it is time to make it visible.
For this you just need to submit a patch to the *cinderlib* project modifying
the ``doc/source/validated.rst`` file with the information from your backend.
The information that must be added to the documentation is:
- *Storage*: The make and model of the hardware used.
- *Versions*: Firmware versions used for the manual testing.
- *Connection type*: iSCSI, FC, RBD, etc. Can add multiple types on the same
line.
- *Requirements*: Required packages, Python libraries, configuration files,
etc. for the driver to work.
- *Automated testing*: Accepted values are:
- No
- On *cinderlib* jobs.
- On *cinder* jobs.
- On *cinderlib* and *Cinder* jobs.
- *Notes*: Any additional information relevant for *cinderlib* usage.
- *Configuration*: The contents of the YAML file or the driver section in the
``cinder.conf``, with masked sensitive data.
.. _Ember-CSI: https://ember-csi.io
.. _oVirt: https://ovirt.org
.. _DevStack: https://docs.openstack.org/devstack
.. _DevStack plugin: http://git.openstack.org/cgit/openstack/cinderlib/tree/devstack
.. _openstack-discuss mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss

View File

@ -1,4 +0,0 @@
- hosts: all
  become: True
  roles:
    - run-cinderlib-tests

View File

@ -1,27 +0,0 @@
# Copyright (c) 2020, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
#
---
#------------------------------------------------------------------------------
# Install OS-Brick bindeps for jobs that use them in required-projects.
# We don't install Cinder bindeps because for now we don't need them, as they
# include things like mariadb, postgresql, and mysql-server.
#------------------------------------------------------------------------------
- hosts: all
  roles:
    - role: bindep
      vars:
        bindep_dir: "{{ zuul.projects['opendev.org/openstack/os-brick'].src_dir }}"

View File

@ -1,95 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
#
---
#------------------------------------------------------------------------------
# Setup an Ceph cluster that will be used by cinderlib's functional tests
#------------------------------------------------------------------------------
- hosts: all
  vars:
    ansible_become: yes
  tasks:
    # Leave pyparsing, as it's needed by tox through the packaging library.
    - name: Remove Python packages unnecessary for cinderlib
      pip:
        name: ['glanceclient', 'novaclient', 'swiftclient', 'barbicanclient',
               'cursive', 'keystoneauth1', 'keystonemiddleware', 'webob',
               'keystoneclient', 'castellan', 'oslo_reports', 'oslo_policy',
               'oslo_messaging', 'osprofiler', 'oauth2client', 'paste',
               'oslo_middleware', 'routes', 'jsonschema', 'os-win',
               'oslo_upgradecheck', 'googleapiclient', 'pastedeploy']
        state: absent
    - name: Install ceph requirements
      package:
        name:
          - ceph-common
          - python3-rados
          - python3-rbd
        state: present
    - name: Install Docker
      package:
        name: 'docker.io'
        state: present
    - name: Start Docker
      service:
        name: docker
        state: started
    - name: Start Ceph demo
      command: |
        docker run -d
        --name ceph-demo
        -e MON_IP=127.0.0.1
        -e CEPH_PUBLIC_NETWORK=127.0.0.1/0
        -e DEMO_DAEMONS="osd mds"
        --net=host
        --volume /etc/ceph:/etc/ceph
        --privileged
        ceph/daemon:latest-pacific
        demo
    - name: Wait for ceph.conf
      # Start of the file
      wait_for:
        path: /etc/ceph/ceph.conf
        search_regex: '\[global\]'
    # End of the file
    - wait_for:
        path: /etc/ceph/ceph.conf
        search_regex: '^osd data ='
    - name: Set ceph features in config
      lineinfile:
        path: /etc/ceph/ceph.conf
        insertafter: '\[global\]'
        firstmatch: yes
        line: 'rbd default features = 3'
        state: present
    - name: Set ceph keyring mode
      file:
        path: /etc/ceph/ceph.client.admin.keyring
        mode: 0644
    - name: Create rbd pool
      command: |
        docker exec ceph-demo ceph osd pool create rbd 8
    - shell: cat /etc/ceph/ceph.conf
      register: ceph_conf

View File

@ -1,72 +0,0 @@
# Copyright (c) 2018, Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
#
---
#------------------------------------------------------------------------------
# Setup an LVM VG that will be used by cinderlib's functional tests
#------------------------------------------------------------------------------
- hosts: all
  vars:
    cldir: .
    vg: cinder-volumes
    ansible_become: yes
  tasks:
    - name: Install LVM package
      package:
        name: lvm2
        state: present
    - name: Create LVM backing file
      command: "truncate -s 10G {{vg}}"
      args:
        creates: "{{cldir}}/{{vg}}"
    - name: Check if VG already exists
      shell: "losetup -l | awk '/{{vg}}/ {print $1}'"
      changed_when: false
      register: existing_loop_device
    - name: "Create loopback device {{vg}}"
      command: "losetup --show -f {{cldir}}/{{vg}}"
      register: new_loop_device
      when: existing_loop_device.stdout == ''
    # Workaround because Ansible destroys registers when skipped
    - set_fact: loop_device="{{ new_loop_device.stdout if new_loop_device.changed else existing_loop_device.stdout }}"
    - name: "Create VG {{vg}}"
      shell: "vgcreate {{vg}} {{loop_device}} && touch {{cldir}}/lvm.vgcreate"
      args:
        creates: "{{cldir}}/lvm.vgcreate"
    - command: "vgscan --cache"
      changed_when: false
    - name: Install iSCSI packages
      package:
        name:
          - iscsi-initiator-utils
          - targetcli
        state: present
    - name: Create initiator name
      shell: echo InitiatorName=`iscsi-iname` > /etc/iscsi/initiatorname.iscsi
      args:
        creates: /etc/iscsi/initiatorname.iscsi
    - name: Start iSCSI initiator
      service:
        name: iscsid
        state: started

View File

@ -1,5 +0,0 @@
---
fixes:
- |
Bug #1821898: Improve compatibility with Cinder drivers that access the DB
directly. This allows cinderlib to support IBM SVC.

View File

@ -1,5 +0,0 @@
---
features:
- |
Support references in driver configuration values passed to
``cinderlib.Backend``, such as ``target_ip_address='$my_ip'``.

View File

@ -1,4 +0,0 @@
---
fixes:
- |
Bug #1819706: Support setting attach_mode on attach and connect calls.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Prevent the creation of multiple backends with the same name during the
same code run.
(Bug #1886164).

View File

@ -1,46 +0,0 @@
---
prelude: >
The Cinder Library, also known as cinderlib, is a Python library that
leverages the Cinder project to provide an object oriented abstraction
around Cinder's storage drivers to allow their usage directly without
running any of the Cinder services or surrounding services, such as
KeyStone, MySQL or RabbitMQ.
This is the Tech Preview release of the library, and is intended for
developers who only need the basic CRUD functionality of the drivers and
don't care for all the additional features Cinder provides such as quotas,
replication, multi-tenancy, migrations, retyping, scheduling, backups,
authorization, authentication, REST API, etc.
features:
- Use a Cinder driver without running a DBMS, message broker, or Cinder
service.
- Use multiple simultaneous drivers in the same application.
- |
Basic operations support.
* Create volume
* Delete volume
* Extend volume
* Clone volume
* Create snapshot
* Delete snapshot
* Create volume from snapshot
* Connect volume
* Disconnect volume
* Local attach
* Local detach
* Validate connector
* Extra Specs for specific backend functionality.
* Backend QoS
* Multi-pool support
- |
Metadata persistence plugins.
* Stateless: Caller stores JSON serialization.
* Database: Metadata is stored in a database: MySQL, PostgreSQL, SQLite...
* Custom plugin: Caller provides a module to store the metadata and
cinderlib calls it.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
`Bug #1933964 <https://bugs.launchpad.net/cinder/+bug/1933964>`_: Fixed
date time fields losing subsecond resolution on serialization.

View File

@ -1,6 +0,0 @@
---
upgrade:
- |
Python 2.7 support has been dropped. OpenStack Train ships the last release
of cinderlib with py2.7 support (1.x). The minimum version of Python now
supported by cinderlib is Python 3.6.

View File

@ -1,8 +0,0 @@
---
features:
- |
Enhance volume extend functionality in cinderlib by supporting the refresh
of the host's view of an attached volume that has been extended in the
backend to reflect the new size. A call to volume.extend will
automatically extend the view if the volume is locally attached, and
connection.extend will do the same when run on a non-controller host.

View File

@ -1,12 +0,0 @@
---
features:
- |
Fake unused packages: Many packages that are automatically imported when
loading cinder modules are only used for normal Cinder operation and are
not necessary for cinderlib's execution, for example when loading a Cinder
module to get configuration options without executing the code present
in the module.
We now fake these unnecessary packages, providing faster load times,
reduced footprint, and the possibility for distributions to create a
cinderlib package or containers with up to 40% fewer dependencies.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Bug #1849339: Cloning doesn't store the source volume id.
- |
Bug #1849828: In-use volume clone status is in-use instead of available.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Bug #1852629: Extending an LVM volume raised an exception, even though the
volume was extended. For in-use volumes, the node that had the volume
attached wouldn't see the new size.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
Bug #1868148: Volume deletion no longer fails due to DB constraints when
using the DBMS metadata persistence plugin with a MySQL backend.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
Bug #1868153: Volume refresh will no longer forget if the volume is locally
attached.

View File

@ -1,7 +0,0 @@
---
fixes:
- |
`Bug #1979534 <https://bugs.launchpad.net/cinderlib/+bug/1979534>`_: Fix
issues running privileged commands within virtual environments that use
system site-packages when the privsep package is installed system-wide or
when privileged commands use user site-packages.

View File

@ -1,6 +0,0 @@
---
features:
- |
Improve ``list_supported_drivers`` method to facilitate usage by automation
tools. Method now accepts ``output_version`` parameter with value ``2`` to
get Python objects instead of having all values converted to strings.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Bug #1868145: Support rolling upgrades with the DBMS persistence plugin.
An N release version can now run even if an N+1 release has already been
run once.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
`Bug #1883720 <https://bugs.launchpad.net/cinderlib/+bug/1883720>`_: Added
privsep support, increasing cinderlib's compatibility with Cinder drivers.

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Fix cases where our detection of whether we are running containerized was
not reliable for the RBD driver.
(Bug #1885302).

View File

@ -1,7 +0,0 @@
---
fixes:
- |
Fix issue where disconnecting an RBD volume inside a container when the
host had the ceph-common package installed could lead to failure due to a
"No such file or directory" error.
(Bug #1885293).

View File

@ -1,6 +0,0 @@
---
fixes:
- |
Fix an issue in the RBD driver where no root helper was specified when
running as a non-root user inside a container.
(Bug #1885291).

View File

@ -1,4 +0,0 @@
---
fixes:
- |
Bug #1836724: Fix create snapshot from a volume with volume type.

View File

@ -1,5 +0,0 @@
---
fixes:
- |
Bug #1854188: Work with complex configuration options like ListOpt,
DictOpt, and MultiOpt with dict items.

View File

@ -1,7 +0,0 @@
---
prelude: >
Welcome to the Zed release of cinderlib.
upgrade:
- |
This release drops support for Python 3.6.

View File

@ -1,6 +0,0 @@
===========================
2023.1 Series Release Notes
===========================
.. release-notes::
   :branch: stable/2023.1

Some files were not shown because too many files have changed in this diff.