Retire repo

This repo was created by accident; use deb-python-oslo.cache instead.

Needed-By: I1ac1a06931c8b6dd7c2e73620a0302c29e605f03
Change-Id: I81894aea69b9d09b0977039623c26781093a397a
This commit is contained in:
parent 3009e5f7be
commit 541332c90f
.coveragerc
@@ -1,7 +0,0 @@
[run]
branch = True
source = cache
omit = cache/tests/*,cache/openstack/*

[report]
ignore_errors = True
.gitignore
@@ -1,55 +0,0 @@
*.py[cod]

# C extensions
*.so

# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64

# Installer logs
pip-log.txt

# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository

# Translations
*.mo

# Mr Developer
.mr.developer.cfg
.project
.pydevproject

# Complexity
output/*.html
output/*/index.html

# Sphinx
doc/build
doc/source/api

# pbr generates these
AUTHORS
ChangeLog

# Editors
*~
.*.swp

# reno build
releasenotes/build
.gitreview
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/oslo.cache.git
.mailmap
@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
.testr.conf
@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
             ${PYTHON:-python} -m subunit.run discover -t ./ ./oslo_cache $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
CONTRIBUTING.rst
@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:

   http://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/oslo.cache
HACKING.rst
@@ -1,4 +0,0 @@
oslo.cache Style Commandments
======================================================

Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/
LICENSE
@@ -1,176 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.
README.rst
@@ -1,24 +0,0 @@
==========
oslo.cache
==========

.. image:: https://img.shields.io/pypi/v/oslo.cache.svg
    :target: https://pypi.python.org/pypi/oslo.cache/
    :alt: Latest Version

.. image:: https://img.shields.io/pypi/dm/oslo.cache.svg
    :target: https://pypi.python.org/pypi/oslo.cache/
    :alt: Downloads

`oslo.cache` aims to provide a generic caching mechanism for OpenStack projects
by wrapping the `dogpile.cache
<http://dogpilecache.readthedocs.org/en/latest/>`_ library. The dogpile.cache
library provides support for memoization, key value storage and interfaces to
common caching backends such as `Memcached <http://www.memcached.org/>`_.

* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/oslo.cache
* Source: http://git.openstack.org/cgit/openstack/oslo.cache
* Bugs: http://bugs.launchpad.net/oslo.cache
@@ -0,0 +1,13 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

Use instead the project deb-python-oslo.cache at
http://git.openstack.org/cgit/openstack/deb-python-oslo.cache .

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
doc/source/conf.py
@@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import sys

sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'oslosphinx',
    'oslo_config.sphinxext',
]

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable

# The suffix of source filenames.
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'oslo.cache'
copyright = u'2014, OpenStack Foundation'

# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['oslo_cache.']

# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']

# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]

intersphinx_mapping = {
    'python': ('https://docs.python.org/', None),
    'osloconfig': ('http://docs.openstack.org/developer/oslo.config/', None),
    'dogpilecache': ('https://dogpilecache.readthedocs.io/en/latest/', None),
}
doc/source/contributing.rst
@@ -1,5 +0,0 @@
==============
 Contributing
==============

.. include:: ../../CONTRIBUTING.rst
doc/source/history.rst
@@ -1 +0,0 @@
.. include:: ../../ChangeLog
doc/source/index.rst
@@ -1,26 +0,0 @@
============
 oslo.cache
============

Cache storage for OpenStack projects.

Contents
========

.. toctree::
   :maxdepth: 2

   installation
   api/modules
   usage
   opts
   contributing
   history

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
doc/source/installation.rst
@@ -1,7 +0,0 @@
==============
 Installation
==============

At the command line::

    $ pip install oslo.cache
doc/source/opts.rst
@@ -1,8 +0,0 @@
=======================
 Configuration Options
=======================

oslo.cache uses oslo.config to define and manage configuration options
to allow the deployer to control how an application uses this library.

.. show-options:: oslo.cache
doc/source/usage.rst
@@ -1,7 +0,0 @@
=======
 Usage
=======

To use oslo.cache in a project::

    import oslo_cache
oslo_cache/__init__.py
@@ -1,23 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from oslo_cache.core import *  # noqa


__all__ = [
    'configure',
    'configure_cache_region',
    'create_region',
    'get_memoization_decorator',
    'NO_VALUE',
]
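The package's public API above exports `get_memoization_decorator` and the `NO_VALUE` sentinel. As a rough illustration of the idea behind that decorator (not the library's actual implementation, which delegates to dogpile.cache regions), here is a hypothetical sketch using a plain dict as the "region":

```python
# Hypothetical, minimal sketch of the memoization idea behind
# oslo_cache.get_memoization_decorator: cache a function's return value
# keyed by its arguments, using NO_VALUE as the "cache miss" sentinel.
# A plain dict stands in for the dogpile.cache region used by the library.

import functools

NO_VALUE = object()  # sentinel for a cache miss, mirroring oslo_cache.NO_VALUE


def get_memoization_decorator(region):
    """Return a decorator that caches results of calls in ``region`` (a dict here)."""
    def memoize(func):
        @functools.wraps(func)
        def wrapper(*args):
            key = (func.__name__,) + args
            value = region.get(key, NO_VALUE)
            if value is NO_VALUE:
                value = func(*args)
                region[key] = value
            return value
        return wrapper
    return memoize


region = {}
calls = []


@get_memoization_decorator(region)
def lookup(name):
    calls.append(name)  # record real invocations to show caching works
    return name.upper()
```

The real decorator is configured from oslo.config options and a dogpile region rather than a bare dict; this sketch only shows the cache-miss/sentinel pattern.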
oslo_cache/_i18n.py
@@ -1,35 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""oslo.i18n integration module.

See http://docs.openstack.org/developer/oslo.i18n/usage.html

"""

import oslo_i18n


_translators = oslo_i18n.TranslatorFactory(domain='oslo_cache')

# The primary translation function using the well-known name "_"
_ = _translators.primary

# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
oslo_cache/_memcache_pool.py
@@ -1,259 +0,0 @@
# Copyright 2014 Mirantis Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Thread-safe connection pool for python-memcached."""

# NOTE(yorik-sar): this file is copied between keystone and keystonemiddleware
# and should be kept in sync until we can use external library for this.

import collections
import contextlib
import itertools
import logging
import threading
import time

import memcache
from oslo_log import log
from six.moves import queue
from six.moves import zip

from oslo_cache._i18n import _
from oslo_cache import exception


LOG = log.getLogger(__name__)


class _MemcacheClient(memcache.Client):
    """Thread global memcache client

    As client is inherited from threading.local we have to restore object
    methods overloaded by threading.local so we can reuse clients in
    different threads
    """
    __delattr__ = object.__delattr__
    __getattribute__ = object.__getattribute__
    __new__ = object.__new__
    __setattr__ = object.__setattr__

    def __del__(self):
        pass
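The `_MemcacheClient` trick above works because `threading.local` gives each thread its own attribute namespace; restoring `object`'s method slots makes one instance's attributes visible to every thread. A self-contained sketch of the same trick on a bare `threading.local` subclass (no memcache dependency; CPython behavior assumed):

```python
# Self-contained sketch of the _MemcacheClient trick: restoring object's
# attribute-access methods on a threading.local subclass makes a single
# instance's attributes shared across threads again.

import threading


class SharedAcrossThreads(threading.local):
    __delattr__ = object.__delattr__
    __getattribute__ = object.__getattribute__
    __new__ = object.__new__
    __setattr__ = object.__setattr__


shared = SharedAcrossThreads()
shared.value = 'set in main thread'

seen = []
worker = threading.Thread(
    target=lambda: seen.append(getattr(shared, 'value', None)))
worker.start()
worker.join()
# seen now holds 'set in main thread': the attribute crossed threads.

# Contrast with an ordinary threading.local, where each thread sees its
# own (initially empty) namespace:
plain = threading.local()
plain.value = 'set in main thread'
plain_seen = []
worker2 = threading.Thread(
    target=lambda: plain_seen.append(getattr(plain, 'value', None)))
worker2.start()
worker2.join()
# plain_seen holds None: the worker thread never saw the main thread's value.
```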


_PoolItem = collections.namedtuple('_PoolItem', ['ttl', 'connection'])


class ConnectionPool(queue.Queue):
    """Base connection pool class

    This class implements the basic connection pool logic as an abstract base
    class.
    """
    def __init__(self, maxsize, unused_timeout, conn_get_timeout=None):
        """Initialize the connection pool.

        :param maxsize: maximum number of client connections for the pool
        :type maxsize: int
        :param unused_timeout: idle time to live for unused clients (in
                               seconds). If a client connection object has been
                               in the pool and idle for longer than the
                               unused_timeout, it will be reaped. This is to
                               ensure resources are released as utilization
                               goes down.
        :type unused_timeout: int
        :param conn_get_timeout: maximum time in seconds to wait for a
                                 connection. If set to `None` timeout is
                                 indefinite.
        :type conn_get_timeout: int
        """
        # super() cannot be used here because Queue in stdlib is an
        # old-style class
        queue.Queue.__init__(self, maxsize)
        self._unused_timeout = unused_timeout
        self._connection_get_timeout = conn_get_timeout
        self._acquired = 0

    def _create_connection(self):
        """Returns a connection instance.

        This is called when the pool needs another instance created.

        :returns: a new connection instance

        """
        raise NotImplementedError

    def _destroy_connection(self, conn):
        """Destroy and cleanup a connection instance.

        This is called when the pool wishes to get rid of an existing
        connection. This is the opportunity for a subclass to free up
        resources and cleanup after itself.

        :param conn: the connection object to destroy

        """
        raise NotImplementedError

    def _do_log(self, level, msg, *args, **kwargs):
        if LOG.isEnabledFor(level):
            thread_id = threading.current_thread().ident
            args = (id(self), thread_id) + args
            prefix = 'Memcached pool %s, thread %s: '
            LOG.log(level, prefix + msg, *args, **kwargs)

    def _debug_logger(self, msg, *args, **kwargs):
        self._do_log(logging.DEBUG, msg, *args, **kwargs)

    def _trace_logger(self, msg, *args, **kwargs):
        self._do_log(log.TRACE, msg, *args, **kwargs)

    @contextlib.contextmanager
    def acquire(self):
        self._trace_logger('Acquiring connection')
        try:
            conn = self.get(timeout=self._connection_get_timeout)
        except queue.Empty:
            raise exception.QueueEmpty(
                _('Unable to get a connection from pool id %(id)s after '
                  '%(seconds)s seconds.') %
                {'id': id(self), 'seconds': self._connection_get_timeout})
        self._trace_logger('Acquired connection %s', id(conn))
        try:
            yield conn
        finally:
            self._trace_logger('Releasing connection %s', id(conn))
            self._drop_expired_connections()
            try:
                # super() cannot be used here because Queue in stdlib is an
                # old-style class
                queue.Queue.put(self, conn, block=False)
            except queue.Full:
                self._trace_logger('Reaping exceeding connection %s', id(conn))
                self._destroy_connection(conn)
    def _qsize(self):
        if self.maxsize:
            return self.maxsize - self._acquired
        else:
            # A value indicating there is always a free connection
            # if maxsize is None or 0
            return 1

    # NOTE(dstanek): stdlib and eventlet Queue implementations
    # have different names for the qsize method. This ensures
    # that we override both of them.
    if not hasattr(queue.Queue, '_qsize'):
        qsize = _qsize

    def _get(self):
        try:
            conn = self.queue.pop().connection
        except IndexError:
            conn = self._create_connection()
        self._acquired += 1
        return conn

    def _drop_expired_connections(self):
        """Drop all expired connections from the left end of the queue."""
        now = time.time()
        try:
            while self.queue[0].ttl < now:
                conn = self.queue.popleft().connection
                self._trace_logger('Reaping connection %s', id(conn))
                self._destroy_connection(conn)
        except IndexError:
            # NOTE(amakarov): This is an expected exception, so there's no
            # need to react. We have to handle exceptions instead of
            # checking queue length, as IndexError is a result of a race
            # condition as well as of mere queue depletion.
            pass

    def _put(self, conn):
        self.queue.append(_PoolItem(
            ttl=time.time() + self._unused_timeout,
            connection=conn,
        ))
        self._acquired -= 1
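The TTL bookkeeping in `_put()` and `_drop_expired_connections()` above relies on the pool's underlying deque being append-right/pop-left: each returned connection is stamped with an absolute expiry time, so the soonest-to-expire items are always at the left end. A hypothetical standalone sketch of just that mechanism, with strings standing in for connections:

```python
# Hypothetical, standalone sketch of the TTL reaping above: items carry an
# absolute expiry (now + unused_timeout), and because returns append on the
# right, the oldest items sit at the left end, ready to be reaped.

import collections

_Item = collections.namedtuple('_Item', ['ttl', 'connection'])

UNUSED_TIMEOUT = 60  # seconds; illustrative value


def put(pool, conn, now):
    """Return a connection to the pool, stamped with its expiry time."""
    pool.append(_Item(ttl=now + UNUSED_TIMEOUT, connection=conn))


def drop_expired(pool, now, destroyed):
    """Pop expired items off the left end until a live one is found."""
    try:
        while pool[0].ttl < now:
            destroyed.append(pool.popleft().connection)
    except IndexError:
        # Pool fully drained; expected, nothing to do (mirrors the
        # IndexError handling in _drop_expired_connections above).
        pass


pool = collections.deque()
destroyed = []
put(pool, 'conn-a', now=0)    # expires at t=60
put(pool, 'conn-b', now=30)   # expires at t=90
drop_expired(pool, now=70, destroyed=destroyed)  # only conn-a has expired
```

The real class plugs this into `queue.Queue`'s `_get`/`_put` hooks so the reaping happens under the queue's own lock.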


class MemcacheClientPool(ConnectionPool):
    def __init__(self, urls, arguments, **kwargs):
        # super() cannot be used here because Queue in stdlib is an
        # old-style class
        ConnectionPool.__init__(self, **kwargs)
        self.urls = urls
        self._arguments = arguments
        # NOTE(morganfainberg): The host objects expect an int for the
        # deaduntil value. Initialize this at 0 for each host with 0 indicating
        # the host is not dead.
        self._hosts_deaduntil = [0] * len(urls)

    def _create_connection(self):
        return _MemcacheClient(self.urls, **self._arguments)

    def _destroy_connection(self, conn):
        conn.disconnect_all()

    def _get(self):
        # super() cannot be used here because Queue in stdlib is an
        # old-style class
        conn = ConnectionPool._get(self)
        try:
            # Propagate host state known to us to this client's list
            now = time.time()
            for deaduntil, host in zip(self._hosts_deaduntil, conn.servers):
                if deaduntil > now and host.deaduntil <= now:
                    host.mark_dead('propagating death mark from the pool')
                    host.deaduntil = deaduntil
        except Exception:
            # We need to be sure that connection doesn't leak from the pool.
            # This code runs before we enter context manager's try-finally
            # block, so we need to explicitly release it here.
            # super() cannot be used here because Queue in stdlib is an
            # old-style class
            ConnectionPool._put(self, conn)
            raise
        return conn

    def _put(self, conn):
        try:
            # If this client found that one of the hosts is dead, mark it as
            # such in our internal list
            now = time.time()
            for i, host in zip(itertools.count(), conn.servers):
                deaduntil = self._hosts_deaduntil[i]
                # Do nothing if we already know this host is dead
                if deaduntil <= now:
                    if host.deaduntil > now:
                        self._hosts_deaduntil[i] = host.deaduntil
                        self._debug_logger(
                            'Marked host %s dead until %s',
                            self.urls[i], host.deaduntil)
                    else:
                        self._hosts_deaduntil[i] = 0
            # If all hosts are dead we should forget that they're dead. This
            # way we won't get completely shut off until dead_retry seconds
            # pass, but will be checking servers as frequent as we can (over
            # way smaller socket_timeout)
            if all(deaduntil > now for deaduntil in self._hosts_deaduntil):
                self._debug_logger('All hosts are dead. Marking them as live.')
                self._hosts_deaduntil[:] = [0] * len(self._hosts_deaduntil)
        finally:
            # super() cannot be used here because Queue in stdlib is an
            # old-style class
            ConnectionPool._put(self, conn)
|
|
||||||
finally:
|
|
||||||
# super() cannot be used here because Queue in stdlib is an
|
|
||||||
# old-style class
|
|
||||||
ConnectionPool._put(self, conn)
|
|
|
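The dead-host bookkeeping in `MemcacheClientPool._put` is easier to see in isolation. Below is a dependency-free sketch of the same "if every host is marked dead, forget all the marks" rule; the function name and the list-based inputs are illustrative stand-ins, not oslo.cache API:

```python
import time


def update_deaduntil(hosts_deaduntil, host_marks, now=None):
    """Merge per-client death marks into the pool's list; if every host
    ends up marked dead, reset all marks so clients keep retrying
    instead of being completely shut off for dead_retry seconds."""
    now = time.time() if now is None else now
    for i, mark in enumerate(host_marks):
        if hosts_deaduntil[i] <= now:  # not already known to be dead
            hosts_deaduntil[i] = mark if mark > now else 0
    if all(mark > now for mark in hosts_deaduntil):
        hosts_deaduntil[:] = [0] * len(hosts_deaduntil)
    return hosts_deaduntil


# One of two hosts dead: the mark is kept.
print(update_deaduntil([0, 0], [0, 2000.0], now=1000.0))       # [0, 2000.0]
# All hosts dead: the marks are dropped.
print(update_deaduntil([0, 0], [2000.0, 2000.0], now=1000.0))  # [0, 0]
```

The reset keeps the pool probing real servers at the (much smaller) socket timeout rather than waiting out `dead_retry` with no backend at all.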
@ -1,124 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_config import cfg


_DEFAULT_BACKEND = 'dogpile.cache.null'

FILE_OPTIONS = {
    'cache': [
        cfg.StrOpt('config_prefix', default='cache.oslo',
                   help='Prefix for building the configuration dictionary '
                        'for the cache region. This should not need to be '
                        'changed unless there is another dogpile.cache '
                        'region with the same configuration name.'),
        cfg.IntOpt('expiration_time', default=600,
                   help='Default TTL, in seconds, for any cached item in '
                        'the dogpile.cache region. This applies to any '
                        'cached method that doesn\'t have an explicit '
                        'cache expiration time defined for it.'),
        # NOTE(morganfainberg): the dogpile.cache.memory backend is
        # acceptable in devstack and other such single-process/thread
        # deployments. Running dogpile.cache.memory in any other
        # configuration has the same pitfalls as the KVS token backend. It
        # is recommended that either Redis or Memcached be used as the
        # dogpile backend for real workloads. To prevent issues with the
        # memory cache ending up in "production" unintentionally, we
        # register a no-op as the keystone default caching backend.
        cfg.StrOpt('backend', default=_DEFAULT_BACKEND,
                   help='Dogpile.cache backend module. It is recommended '
                        'that Memcache with pooling '
                        '(oslo_cache.memcache_pool) or Redis '
                        '(dogpile.cache.redis) be used in production '
                        'deployments. Small workloads (single process) '
                        'like devstack can use the dogpile.cache.memory '
                        'backend.'),
        cfg.MultiStrOpt('backend_argument', default=[], secret=True,
                        help='Arguments supplied to the backend module. '
                             'Specify this option once per argument to be '
                             'passed to the dogpile.cache backend. Example '
                             'format: "<argname>:<value>".'),
        cfg.ListOpt('proxies', default=[],
                    help='Proxy classes to import that will affect the way '
                         'the dogpile.cache backend functions. See the '
                         'dogpile.cache documentation on '
                         'changing-backend-behavior.'),
        cfg.BoolOpt('enabled', default=False,
                    help='Global toggle for caching.'),
        cfg.BoolOpt('debug_cache_backend', default=False,
                    help='Extra debugging from the cache backend (cache '
                         'keys, get/set/delete/etc calls). This is only '
                         'really useful if you need to see the specific '
                         'cache-backend get/set/delete calls with the '
                         'keys/values. Typically this should be left set '
                         'to false.'),
        cfg.ListOpt('memcache_servers', default=['localhost:11211'],
                    help='Memcache servers in the format of "host:port". '
                         '(dogpile.cache.memcache and '
                         'oslo_cache.memcache_pool backends only).'),
        cfg.IntOpt('memcache_dead_retry',
                   default=5 * 60,
                   help='Number of seconds memcached server is considered '
                        'dead before it is tried again. '
                        '(dogpile.cache.memcache and '
                        'oslo_cache.memcache_pool backends only).'),
        cfg.IntOpt('memcache_socket_timeout',
                   default=3,
                   help='Timeout in seconds for every call to a server. '
                        '(dogpile.cache.memcache and '
                        'oslo_cache.memcache_pool backends only).'),
        cfg.IntOpt('memcache_pool_maxsize',
                   default=10,
                   help='Max total number of open connections to every '
                        'memcached server. (oslo_cache.memcache_pool '
                        'backend only).'),
        cfg.IntOpt('memcache_pool_unused_timeout',
                   default=60,
                   help='Number of seconds a connection to memcached is '
                        'held unused in the pool before it is closed. '
                        '(oslo_cache.memcache_pool backend only).'),
        cfg.IntOpt('memcache_pool_connection_get_timeout',
                   default=10,
                   help='Number of seconds that an operation will wait to '
                        'get a memcache client connection.'),
    ],
}


def configure(conf):
    for section in FILE_OPTIONS:
        for option in FILE_OPTIONS[section]:
            conf.register_opt(option, group=section)


def list_opts():
    """Return a list of oslo_config options.

    The returned list includes all oslo_config options which are registered
    as the "FILE_OPTIONS".

    Each object in the list is a two element tuple. The first element of
    each tuple is the name of the group under which the list of options in
    the second element will be registered. A group name of None corresponds
    to the [DEFAULT] group in config files.

    This function is also discoverable via the 'oslo_config.opts' entry
    point under the 'oslo_cache.config.opts' namespace.

    The purpose of this is to allow tools like the Oslo sample config file
    generator to discover the options exposed to users by this library.

    :returns: a list of (group_name, opts) tuples
    """
    return list(FILE_OPTIONS.items())
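The `backend_argument` option documents a repeated `"<argname>:<value>"` format. A minimal sketch of turning those strings into a keyword dictionary; the helper name is hypothetical and not part of oslo.cache, and splitting only at the first colon is an assumption made so values such as `host:port` server strings survive intact:

```python
def parse_backend_arguments(pairs):
    """Split each '<argname>:<value>' string at the first colon only,
    so the value part may itself contain colons."""
    arguments = {}
    for pair in pairs:
        name, sep, value = pair.partition(':')
        if not sep:
            raise ValueError('expected "<argname>:<value>", got %r' % pair)
        arguments[name.strip()] = value.strip()
    return arguments


print(parse_backend_arguments(['url:127.0.0.1:11211', 'pool_maxsize:10']))
# → {'url': '127.0.0.1:11211', 'pool_maxsize': '10'}
```

Note the values stay strings here; the real backends coerce types as needed.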
@ -1,106 +0,0 @@
# Copyright 2015 Mirantis Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""dogpile.cache backend that uses a dictionary for storage"""

from dogpile.cache import api
from oslo_cache import core
from oslo_utils import timeutils

__all__ = [
    'DictCacheBackend'
]

_NO_VALUE = core.NO_VALUE


class DictCacheBackend(api.CacheBackend):
    """A DictCacheBackend based on dictionary.

    Arguments accepted in the arguments dictionary:

    :param expiration_time: interval in seconds to indicate the maximum
        time-to-live value for each key in DictCacheBackend. The default
        expiration_time value is 0, which means that all keys have an
        infinite time-to-live value.
    :type expiration_time: real
    """

    def __init__(self, arguments):
        self.expiration_time = arguments.get('expiration_time', 0)
        self.cache = {}

    def get(self, key):
        """Retrieves the value for a key.

        :param key: dictionary key
        :returns: value for a key or :data:`oslo_cache.core.NO_VALUE`
            for nonexistent or expired keys.
        """
        (value, timeout) = self.cache.get(key, (_NO_VALUE, 0))
        if self.expiration_time > 0 and timeutils.utcnow_ts() >= timeout:
            self.cache.pop(key, None)
            return _NO_VALUE

        return value

    def get_multi(self, keys):
        """Retrieves the value for a list of keys."""
        return [self.get(key) for key in keys]

    def set(self, key, value):
        """Sets the value for a key.

        Expunges expired keys during each set.

        :param key: dictionary key
        :param value: value associated with the key
        """
        self.set_multi({key: value})

    def set_multi(self, mapping):
        """Set multiple values in the cache.

        Expunges expired keys during each set.

        :param mapping: dictionary with key/value pairs
        """
        self._clear()
        timeout = 0
        if self.expiration_time > 0:
            timeout = timeutils.utcnow_ts() + self.expiration_time
        for key, value in mapping.items():
            self.cache[key] = (value, timeout)

    def delete(self, key):
        """Deletes the value associated with the key if it exists.

        :param key: dictionary key
        """
        self.cache.pop(key, None)

    def delete_multi(self, keys):
        """Deletes the value associated with each key in list if it exists.

        :param keys: list of dictionary keys
        """
        for key in keys:
            self.cache.pop(key, None)

    def _clear(self):
        """Expunges expired keys."""
        now = timeutils.utcnow_ts()
        for k in list(self.cache):
            (_value, timeout) = self.cache[k]
            if timeout > 0 and now >= timeout:
                del self.cache[k]
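The store-`(value, absolute-expiry)`-tuples pattern above can be exercised without the oslo and dogpile dependencies. A minimal self-contained sketch of the same expiry logic; the `MiniDictCache` class and `NO_VALUE` sentinel here are illustrative stand-ins, not the shipped backend:

```python
import time

NO_VALUE = object()  # sentinel returned for missing or expired keys


class MiniDictCache:
    """Tiny TTL cache: each entry is (value, absolute expiry timestamp)."""

    def __init__(self, expiration_time=0):
        self.expiration_time = expiration_time
        self.cache = {}

    def set(self, key, value):
        timeout = 0
        if self.expiration_time > 0:
            timeout = time.time() + self.expiration_time
        self.cache[key] = (value, timeout)

    def get(self, key):
        value, timeout = self.cache.get(key, (NO_VALUE, 0))
        # A missing key shares the expired-key path: its timeout of 0 is
        # always in the past when expiration is enabled.
        if self.expiration_time > 0 and time.time() >= timeout:
            self.cache.pop(key, None)
            return NO_VALUE
        return value


cache = MiniDictCache(expiration_time=60)
cache.set('token', 'abc123')
print(cache.get('token'))                # abc123
print(cache.get('missing') is NO_VALUE)  # True
```

A sentinel object rather than `None` lets callers cache `None` itself as a legitimate value, which is why dogpile's API uses `NO_VALUE` instead of `None`.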
@ -1,63 +0,0 @@
# Copyright 2014 Mirantis Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""dogpile.cache backend that uses a Memcached connection pool"""

import functools
import logging

from dogpile.cache.backends import memcached as memcached_backend

from oslo_cache import _memcache_pool


LOG = logging.getLogger(__name__)


# Helper to ease backend refactoring
class ClientProxy(object):
    def __init__(self, client_pool):
        self.client_pool = client_pool

    def _run_method(self, __name, *args, **kwargs):
        with self.client_pool.acquire() as client:
            return getattr(client, __name)(*args, **kwargs)

    def __getattr__(self, name):
        return functools.partial(self._run_method, name)


class PooledMemcachedBackend(memcached_backend.MemcachedBackend):
    """Memcached backend that does connection pooling."""

    # Composed from GenericMemcachedBackend's and MemcacheArgs's __init__
    def __init__(self, arguments):
        super(PooledMemcachedBackend, self).__init__(arguments)
        self.client_pool = _memcache_pool.MemcacheClientPool(
            self.url,
            arguments={
                'dead_retry': arguments.get('dead_retry', 5 * 60),
                'socket_timeout': arguments.get('socket_timeout', 3),
            },
            maxsize=arguments.get('pool_maxsize', 10),
            unused_timeout=arguments.get('pool_unused_timeout', 60),
            conn_get_timeout=arguments.get('pool_connection_get_timeout', 10),
        )

    # Since all methods in the backend just call one of the methods of the
    # client, this lets us avoid the need to hack it too much
    @property
    def client(self):
        return ClientProxy(self.client_pool)
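The `ClientProxy` trick, resolving any attribute into a method call that checks out a pooled client for just that one call, works with any context-manager pool. A dependency-free sketch of the pattern; `FakePool` and `Client` are hypothetical stand-ins for the memcache pool and client:

```python
import contextlib
import functools


class FakePool:
    """Stand-in pool whose acquire() yields a single shared client."""

    def __init__(self, client):
        self._client = client

    @contextlib.contextmanager
    def acquire(self):
        yield self._client


class Proxy:
    def __init__(self, pool):
        self.pool = pool

    def _run_method(self, __name, *args, **kwargs):
        # Hold a pooled client only for the duration of one method call.
        with self.pool.acquire() as client:
            return getattr(client, __name)(*args, **kwargs)

    def __getattr__(self, name):
        # Only called for attributes not found normally, i.e. the
        # client method names being proxied.
        return functools.partial(self._run_method, name)


class Client:
    def get(self, key):
        return 'value-for-%s' % key


proxy = Proxy(FakePool(Client()))
print(proxy.get('token'))  # value-for-token
```

Because `__getattr__` fires only for unknown attributes, the proxy's own `pool` and `_run_method` resolve normally while every client method name falls through to the pooled dispatch.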
@ -1,579 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import abc
import datetime

from dogpile.cache import api
from dogpile import util as dp_util
from oslo_cache import core
from oslo_log import log
from oslo_utils import importutils
from oslo_utils import timeutils
import six

from oslo_cache._i18n import _
from oslo_cache._i18n import _LW
from oslo_cache import exception


__all__ = [
    'MongoCacheBackend'
]

_NO_VALUE = core.NO_VALUE
LOG = log.getLogger(__name__)


class MongoCacheBackend(api.CacheBackend):
    """A MongoDB based caching backend implementing dogpile backend APIs.

    Arguments accepted in the arguments dictionary:

    :param db_hosts: string (required), hostname or IP address of the
        MongoDB server instance. This can be a single MongoDB connection
        URI, or a list of MongoDB connection URIs.

    :param db_name: string (required), the name of the database to be used.

    :param cache_collection: string (required), the name of the collection
        to store cached data.
        *Note:* A different collection name can be provided if there is a
        need to create a separate container (i.e. collection) for cache
        data. So region configuration is done per collection.

    The following are optional parameters for MongoDB backend
    configuration:

    :param username: string, the name of the user to authenticate.

    :param password: string, the password of the user to authenticate.

    :param max_pool_size: integer, the maximum number of connections that
        the pool will open simultaneously. By default the pool size is 10.

    :param w: integer, write acknowledgement for the MongoDB client.

        If not provided, then no default is set on MongoDB and write
        acknowledgement behavior occurs as per the MongoDB default. This
        parameter name is the same as the one used in the MongoDB docs.
        This value is specified at collection level, so it applies to
        `cache_collection` db write operations.

        If this is a replica set, write operations will block until they
        have been replicated to the specified number or tagged set of
        servers. Setting w=0 disables write acknowledgement and all other
        write concern options.

    :param read_preference: string, the read preference mode for the
        MongoDB client. The expected value is ``primary``,
        ``primaryPreferred``, ``secondary``, ``secondaryPreferred``, or
        ``nearest``. This read_preference is specified at collection
        level, so it applies to `cache_collection` db read operations.

    :param use_replica: boolean, flag to indicate if a replica client is
        to be used. Default is `False`. A `replicaset_name` value is
        required if `True`.

    :param replicaset_name: string, name of the replica set.
        Becomes required if `use_replica` is `True`.

    :param son_manipulator: string, name of the class (with module name)
        which implements a MongoDB SONManipulator.
        The default manipulator used is :class:`.BaseTransform`.

        This manipulator is added per database. In multiple cache
        configurations, the manipulator name should be the same if the
        same database name ``db_name`` is used in those configurations.

        A SONManipulator is used to manipulate custom data types as they
        are saved or retrieved from MongoDB. A custom implementation is
        only needed if the cached data is a custom class that needs
        transformations when saving or reading from the db. If the dogpile
        cached value contains built-in data types, then the BaseTransform
        class is sufficient as it already handles the dogpile CachedValue
        class transformation.

    :param mongo_ttl_seconds: integer, interval in seconds to indicate the
        maximum time-to-live value.
        If the value is greater than 0, then it is assumed that
        cache_collection needs to be of TTL type (has an index on the
        'doc_date' field). By default, the value is -1 and TTL is
        disabled.
        Reference: <http://docs.mongodb.org/manual/tutorial/expire-data/>

        .. NOTE::

            This parameter is different from Dogpile's own
            expiration_time, which is the number of seconds after which
            Dogpile will consider the value to be expired. When Dogpile
            considers a value to be expired, it continues to use the value
            until generation of a new value is complete, when using
            CacheRegion.get_or_create(). Therefore, if you are setting
            `mongo_ttl_seconds`, you will want to make sure it is greater
            than expiration_time by at least enough seconds for new values
            to be generated, else the value would not be available during
            a regeneration, forcing all threads to wait for a regeneration
            each time a value expires.

    :param ssl: boolean, if True, create the connection to the server
        using SSL. Default is `False`. Client SSL connection parameters
        depend on the server side SSL setup. For further reference on SSL
        configuration:
        <http://docs.mongodb.org/manual/tutorial/configure-ssl/>

    :param ssl_keyfile: string, the private key file used to identify the
        local connection against mongod. If included with the certfile,
        then only the `ssl_certfile` is needed. Used only when `ssl` is
        `True`.

    :param ssl_certfile: string, the certificate file used to identify the
        local connection against mongod. Used only when `ssl` is `True`.

    :param ssl_ca_certs: string, the ca_certs file contains a set of
        concatenated 'certification authority' certificates, which are
        used to validate certificates passed from the other end of the
        connection. Used only when `ssl` is `True`.

    :param ssl_cert_reqs: string, the parameter cert_reqs specifies
        whether a certificate is required from the other side of the
        connection, and whether it will be validated if provided. It must
        be one of the three values ``ssl.CERT_NONE`` (certificates
        ignored), ``ssl.CERT_OPTIONAL`` (not required, but validated if
        provided), or ``ssl.CERT_REQUIRED`` (required and validated). If
        the value of this parameter is not ``ssl.CERT_NONE``, then the
        ssl_ca_certs parameter must point to a file of CA certificates.
        Used only when `ssl` is `True`.

    The rest of the arguments are passed to mongo calls for read, write
    and remove, so related options can be specified to pass to these
    operations.

    Further details of the various supported arguments can be found at
    <http://api.mongodb.org/python/current/api/pymongo/>

    """
    def __init__(self, arguments):
        self.api = MongoApi(arguments)

    @dp_util.memoized_property
    def client(self):
        """Initializes MongoDB connection and collection defaults.

        This initialization is done only once and performed as part of
        lazy inclusion of the MongoDB dependency, i.e. imports are added
        only if the related backend is used.

        :return: :class:`.MongoApi` instance
        """
        self.api.get_cache_collection()
        return self.api

    def get(self, key):
        """Retrieves the value for a key.

        :param key: key to be retrieved.
        :returns: value for a key or :data:`oslo_cache.core.NO_VALUE`
            for nonexistent or expired keys.
        """
        value = self.client.get(key)
        if value is None:
            return _NO_VALUE
        else:
            return value

    def get_multi(self, keys):
        """Return multiple values from the cache, based on the given keys.

        :param keys: sequence of keys to be retrieved.
        :returns: returns values (or :data:`oslo_cache.core.NO_VALUE`)
            as a list matching the keys given.
        """
        values = self.client.get_multi(keys)
        return [
            _NO_VALUE if key not in values
            else values[key] for key in keys
        ]

    def set(self, key, value):
        self.client.set(key, value)

    def set_multi(self, mapping):
        self.client.set_multi(mapping)

    def delete(self, key):
        self.client.delete(key)

    def delete_multi(self, keys):
        self.client.delete_multi(keys)


class MongoApi(object):
    """Class handling MongoDB specific functionality.

    This class uses PyMongo APIs internally to create a database
    connection with the configured pool size, ensures a unique index on
    the key, does database authentication, and ensures a TTL collection
    index if so configured.
    This class also serves as a handle to the cache collection for dogpile
    cache APIs.

    In a single deployment, multiple cache configurations can be defined.
    In that case of multiple cache collections usage, the db client
    connection pool is shared when cache collections are within the same
    database.
    """

    # class level attributes for re-use of db client connection and
    # collection
    _DB = {}  # dict of db_name: db connection reference
    _MONGO_COLLS = {}  # dict of cache_collection : db collection reference
    def __init__(self, arguments):
        self._init_args(arguments)
        self._data_manipulator = None

    def _init_args(self, arguments):
        """Helper logic for collecting and parsing MongoDB specific
        arguments.

        The arguments passed in are separated out into connection specific
        settings, and the rest of the arguments are passed to the
        create/update/delete db operations.
        """
        self.conn_kwargs = {}  # connection specific arguments

        self.hosts = arguments.pop('db_hosts', None)
        if self.hosts is None:
            msg = _('db_hosts value is required')
            raise exception.ConfigurationError(msg)

        self.db_name = arguments.pop('db_name', None)
        if self.db_name is None:
            msg = _('database db_name is required')
            raise exception.ConfigurationError(msg)

        self.cache_collection = arguments.pop('cache_collection', None)
        if self.cache_collection is None:
            msg = _('cache_collection name is required')
            raise exception.ConfigurationError(msg)

        self.username = arguments.pop('username', None)
        self.password = arguments.pop('password', None)
        self.max_pool_size = arguments.pop('max_pool_size', 10)

        self.w = arguments.pop('w', -1)
        try:
            self.w = int(self.w)
        except ValueError:
            msg = _('integer value expected for w (write concern attribute)')
            raise exception.ConfigurationError(msg)

        self.read_preference = arguments.pop('read_preference', None)

        self.use_replica = arguments.pop('use_replica', False)
        if self.use_replica:
            if arguments.get('replicaset_name') is None:
                msg = _('replicaset_name required when use_replica is True')
                raise exception.ConfigurationError(msg)
            self.replicaset_name = arguments.get('replicaset_name')

        self.son_manipulator = arguments.pop('son_manipulator', None)

        # set if the mongo collection needs to be of TTL type.
        # This needs to be the max TTL for any cache entry.
        # By default, -1 means don't use a TTL collection.
        # With ttl set, the related index is created and the doc_date
        # field carries the needed expiration interval.
        self.ttl_seconds = arguments.pop('mongo_ttl_seconds', -1)
        try:
            self.ttl_seconds = int(self.ttl_seconds)
        except ValueError:
            msg = _('integer value expected for mongo_ttl_seconds')
            raise exception.ConfigurationError(msg)

        self.conn_kwargs['ssl'] = arguments.pop('ssl', False)
        if self.conn_kwargs['ssl']:
            ssl_keyfile = arguments.pop('ssl_keyfile', None)
            ssl_certfile = arguments.pop('ssl_certfile', None)
            ssl_ca_certs = arguments.pop('ssl_ca_certs', None)
            ssl_cert_reqs = arguments.pop('ssl_cert_reqs', None)
            if ssl_keyfile:
                self.conn_kwargs['ssl_keyfile'] = ssl_keyfile
            if ssl_certfile:
                self.conn_kwargs['ssl_certfile'] = ssl_certfile
            if ssl_ca_certs:
                self.conn_kwargs['ssl_ca_certs'] = ssl_ca_certs
            if ssl_cert_reqs:
                self.conn_kwargs['ssl_cert_reqs'] = (
                    self._ssl_cert_req_type(ssl_cert_reqs))

        # the rest of the arguments are passed to mongo crud calls
        self.meth_kwargs = arguments
    def _ssl_cert_req_type(self, req_type):
        try:
            import ssl
        except ImportError:
            raise exception.ConfigurationError(_('no ssl support available'))
        req_type = req_type.upper()
        try:
            return {
                'NONE': ssl.CERT_NONE,
                'OPTIONAL': ssl.CERT_OPTIONAL,
                'REQUIRED': ssl.CERT_REQUIRED
            }[req_type]
        except KeyError:
            msg = _('Invalid ssl_cert_reqs value of %s, must be one of '
                    '"NONE", "OPTIONAL", "REQUIRED"') % req_type
            raise exception.ConfigurationError(msg)
    def _get_db(self):
        # defer imports until the backend is used
        global pymongo
        import pymongo
        if self.use_replica:
            connection = pymongo.MongoReplicaSetClient(
                host=self.hosts, replicaSet=self.replicaset_name,
                max_pool_size=self.max_pool_size, **self.conn_kwargs)
        else:  # used for a standalone node or mongos in a sharded setup
            connection = pymongo.MongoClient(
                host=self.hosts, max_pool_size=self.max_pool_size,
                **self.conn_kwargs)

        database = getattr(connection, self.db_name)

        self._assign_data_mainpulator()
        database.add_son_manipulator(self._data_manipulator)
        if self.username and self.password:
            database.authenticate(self.username, self.password)
        return database

    def _assign_data_mainpulator(self):
        if self._data_manipulator is None:
            if self.son_manipulator:
                self._data_manipulator = importutils.import_object(
                    self.son_manipulator)
            else:
                self._data_manipulator = BaseTransform()

    def _get_doc_date(self):
        if self.ttl_seconds > 0:
            expire_delta = datetime.timedelta(seconds=self.ttl_seconds)
            doc_date = timeutils.utcnow() + expire_delta
        else:
            doc_date = timeutils.utcnow()
        return doc_date

    def get_cache_collection(self):
        if self.cache_collection not in self._MONGO_COLLS:
            global pymongo
            import pymongo
            # re-use the db client connection if already defined as part
            # of an earlier dogpile cache configuration
            if self.db_name not in self._DB:
                self._DB[self.db_name] = self._get_db()
            coll = getattr(self._DB[self.db_name], self.cache_collection)

            self._assign_data_mainpulator()
            if self.read_preference:
                # pymongo 3.0 renamed mongos_enum to
                # read_pref_mode_from_name
                f = getattr(pymongo.read_preferences,
                            'read_pref_mode_from_name', None)
                if not f:
                    f = pymongo.read_preferences.mongos_enum
                self.read_preference = f(self.read_preference)
                coll.read_preference = self.read_preference
            if self.w > -1:
                coll.write_concern['w'] = self.w
            if self.ttl_seconds > 0:
                kwargs = {'expireAfterSeconds': self.ttl_seconds}
                coll.ensure_index('doc_date', cache_for=5, **kwargs)
            else:
                self._validate_ttl_index(coll, self.cache_collection,
                                         self.ttl_seconds)
            self._MONGO_COLLS[self.cache_collection] = coll

        return self._MONGO_COLLS[self.cache_collection]

    def _get_cache_entry(self, key, value, meta, doc_date):
        """MongoDB cache data representation.

        The cache key is stored as the ``_id`` field, as MongoDB by
        default creates a unique index on this field. So there is no need
        to create a separate field and index for storing the cache key.
        Cache data has an additional ``doc_date`` field for MongoDB TTL
        collection support.
        """
        return dict(_id=key, value=value, meta=meta, doc_date=doc_date)

    def _validate_ttl_index(self, collection, coll_name, ttl_seconds):
        """Checks if an existing TTL index is removed on a collection.

        This logs a warning when an existing collection has a TTL index
        defined and the new cache configuration tries to disable the index
        with ``mongo_ttl_seconds < 0``. In that case, the existing index
        needs
|
|
||||||
to be addressed first to make new configuration effective.
|
|
||||||
Refer to MongoDB documentation around TTL index for further details.
|
|
||||||
"""
|
|
||||||
indexes = collection.index_information()
|
|
||||||
for indx_name, index_data in six.iteritems(indexes):
|
|
||||||
if all(k in index_data for k in ('key', 'expireAfterSeconds')):
|
|
||||||
existing_value = index_data['expireAfterSeconds']
|
|
||||||
fld_present = 'doc_date' in index_data['key'][0]
|
|
||||||
if fld_present and existing_value > -1 and ttl_seconds < 1:
|
|
||||||
msg = _LW('TTL index already exists on db collection '
|
|
||||||
'<%(c_name)s>, remove index <%(indx_name)s> '
|
|
||||||
'first to make updated mongo_ttl_seconds value '
|
|
||||||
'to be effective')
|
|
||||||
LOG.warning(msg, {'c_name': coll_name,
|
|
||||||
'indx_name': indx_name})
|
|
||||||
|
|
||||||
def get(self, key):
|
|
||||||
criteria = {'_id': key}
|
|
||||||
result = self.get_cache_collection().find_one(spec_or_id=criteria,
|
|
||||||
**self.meth_kwargs)
|
|
||||||
if result:
|
|
||||||
return result['value']
|
|
||||||
else:
|
|
||||||
return None
|
|
||||||
|
|
||||||
def get_multi(self, keys):
|
|
||||||
db_results = self._get_results_as_dict(keys)
|
|
||||||
return {doc['_id']: doc['value'] for doc in six.itervalues(db_results)}
|
|
||||||
|
|
||||||
def _get_results_as_dict(self, keys):
|
|
||||||
criteria = {'_id': {'$in': keys}}
|
|
||||||
db_results = self.get_cache_collection().find(spec=criteria,
|
|
||||||
**self.meth_kwargs)
|
|
||||||
return {doc['_id']: doc for doc in db_results}
|
|
||||||
|
|
||||||
def set(self, key, value):
|
|
||||||
doc_date = self._get_doc_date()
|
|
||||||
ref = self._get_cache_entry(key, value.payload, value.metadata,
|
|
||||||
doc_date)
|
|
||||||
spec = {'_id': key}
|
|
||||||
# find and modify does not have manipulator support
|
|
||||||
# so need to do conversion as part of input document
|
|
||||||
ref = self._data_manipulator.transform_incoming(ref, self)
|
|
||||||
self.get_cache_collection().find_and_modify(spec, ref, upsert=True,
|
|
||||||
**self.meth_kwargs)
|
|
||||||
|
|
||||||
def set_multi(self, mapping):
|
|
||||||
"""Insert multiple documents specified as key, value pairs.
|
|
||||||
|
|
||||||
In this case, multiple documents can be added via insert provided they
|
|
||||||
do not exist.
|
|
||||||
Update of multiple existing documents is done one by one
|
|
||||||
"""
|
|
||||||
doc_date = self._get_doc_date()
|
|
||||||
insert_refs = []
|
|
||||||
update_refs = []
|
|
||||||
existing_docs = self._get_results_as_dict(list(mapping.keys()))
|
|
||||||
for key, value in mapping.items():
|
|
||||||
ref = self._get_cache_entry(key, value.payload, value.metadata,
|
|
||||||
doc_date)
|
|
||||||
if key in existing_docs:
|
|
||||||
ref['_id'] = existing_docs[key]['_id']
|
|
||||||
update_refs.append(ref)
|
|
||||||
else:
|
|
||||||
insert_refs.append(ref)
|
|
||||||
if insert_refs:
|
|
||||||
self.get_cache_collection().insert(insert_refs, manipulate=True,
|
|
||||||
**self.meth_kwargs)
|
|
||||||
for upd_doc in update_refs:
|
|
||||||
self.get_cache_collection().save(upd_doc, manipulate=True,
|
|
||||||
**self.meth_kwargs)
|
|
||||||
|
|
||||||
def delete(self, key):
|
|
||||||
criteria = {'_id': key}
|
|
||||||
self.get_cache_collection().remove(spec_or_id=criteria,
|
|
||||||
**self.meth_kwargs)
|
|
||||||
|
|
||||||
def delete_multi(self, keys):
|
|
||||||
criteria = {'_id': {'$in': keys}}
|
|
||||||
self.get_cache_collection().remove(spec_or_id=criteria,
|
|
||||||
**self.meth_kwargs)
|
|
||||||
|
|
||||||
|
|
||||||
@six.add_metaclass(abc.ABCMeta)
|
|
||||||
class AbstractManipulator(object):
|
|
||||||
"""Abstract class with methods which need to be implemented for custom
|
|
||||||
manipulation.
|
|
||||||
|
|
||||||
Adding this as a base class for :class:`.BaseTransform` instead of adding
|
|
||||||
import dependency of pymongo specific class i.e.
|
|
||||||
`pymongo.son_manipulator.SONManipulator` and using that as base class.
|
|
||||||
This is done to avoid pymongo dependency if MongoDB backend is not used.
|
|
||||||
"""
|
|
||||||
@abc.abstractmethod
|
|
||||||
def transform_incoming(self, son, collection):
|
|
||||||
"""Used while saving data to MongoDB.
|
|
||||||
|
|
||||||
:param son: the SON object to be inserted into the database
|
|
||||||
:param collection: the collection the object is being inserted into
|
|
||||||
|
|
||||||
:returns: transformed SON object
|
|
||||||
|
|
||||||
"""
|
|
||||||
raise NotImplementedError() # pragma: no cover
|
|
||||||
|
|
||||||
@abc.abstractmethod
|
|
||||||
def transform_outgoing(self, son, collection):
|
|
||||||
"""Used while reading data from MongoDB.
|
|
||||||
|
|
||||||
:param son: the SON object being retrieved from the database
|
|
||||||
:param collection: the collection this object was stored in
|
|
||||||
|
|
||||||
:returns: transformed SON object
|
|
||||||
"""
|
|
||||||
raise NotImplementedError() # pragma: no cover
|
|
||||||
|
|
||||||
def will_copy(self):
|
|
||||||
"""Will this SON manipulator make a copy of the incoming document?
|
|
||||||
|
|
||||||
Derived classes that do need to make a copy should override this
|
|
||||||
method, returning `True` instead of `False`.
|
|
||||||
|
|
||||||
:returns: boolean
|
|
||||||
"""
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
class BaseTransform(AbstractManipulator):
|
|
||||||
"""Base transformation class to store and read dogpile cached data
|
|
||||||
from MongoDB.
|
|
||||||
|
|
||||||
This is needed as dogpile internally stores data as a custom class
|
|
||||||
i.e. dogpile.cache.api.CachedValue
|
|
||||||
|
|
||||||
Note: Custom manipulator needs to always override ``transform_incoming``
|
|
||||||
and ``transform_outgoing`` methods. MongoDB manipulator logic specifically
|
|
||||||
checks that overridden method in instance and its super are different.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def transform_incoming(self, son, collection):
|
|
||||||
"""Used while saving data to MongoDB."""
|
|
||||||
for (key, value) in list(son.items()):
|
|
||||||
if isinstance(value, api.CachedValue):
|
|
||||||
son[key] = value.payload # key is 'value' field here
|
|
||||||
son['meta'] = value.metadata
|
|
||||||
elif isinstance(value, dict): # Make sure we recurse into sub-docs
|
|
||||||
son[key] = self.transform_incoming(value, collection)
|
|
||||||
return son
|
|
||||||
|
|
||||||
def transform_outgoing(self, son, collection):
|
|
||||||
"""Used while reading data from MongoDB."""
|
|
||||||
metadata = None
|
|
||||||
# make sure its top level dictionary with all expected fields names
|
|
||||||
# present
|
|
||||||
if isinstance(son, dict) and all(k in son for k in
|
|
||||||
('_id', 'value', 'meta', 'doc_date')):
|
|
||||||
payload = son.pop('value', None)
|
|
||||||
metadata = son.pop('meta', None)
|
|
||||||
for (key, value) in list(son.items()):
|
|
||||||
if isinstance(value, dict):
|
|
||||||
son[key] = self.transform_outgoing(value, collection)
|
|
||||||
if metadata is not None:
|
|
||||||
son['value'] = api.CachedValue(payload, metadata)
|
|
||||||
return son
|
|
|
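The `transform_incoming`/`transform_outgoing` pair above maps dogpile's cached values into the `{_id, value, meta, doc_date}` document shape that `_get_cache_entry` builds. A minimal, pure-Python sketch of that round trip, using a namedtuple stand-in for `dogpile.cache.api.CachedValue` (a simplification, not the real class):

```python
import collections
import datetime

# Stand-in for dogpile.cache.api.CachedValue (illustrative only).
CachedValue = collections.namedtuple('CachedValue', ['payload', 'metadata'])


def to_mongo_doc(key, cached, ttl_seconds=0):
    """Mirror _get_cache_entry: the cache key becomes the unique _id field,
    and doc_date is pushed into the future when a TTL is configured."""
    doc_date = datetime.datetime.utcnow()
    if ttl_seconds > 0:
        doc_date += datetime.timedelta(seconds=ttl_seconds)
    return {'_id': key, 'value': cached.payload,
            'meta': cached.metadata, 'doc_date': doc_date}


def from_mongo_doc(doc):
    """Mirror transform_outgoing: rebuild the cached value from the doc."""
    return CachedValue(payload=doc['value'], metadata=doc['meta'])


entry = to_mongo_doc('token-123', CachedValue('payload-bytes', {'v': 1}))
restored = from_mongo_doc(entry)
```

Because `_id` carries the cache key, MongoDB's default unique index on `_id` gives key lookups for free; only the `doc_date` TTL index has to be created explicitly.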
@@ -1,372 +0,0 @@
# Copyright 2013 Metacloud
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Caching Layer Implementation.

To use this library, you must call :func:`configure`.

Inside your application code, decorate the methods whose results you want
cached with a memoization decorator created with
:func:`get_memoization_decorator`. This function takes a group name from the
config. Register [`group`] ``caching`` and [`group`] ``cache_time`` options
for the groups that your decorators use so that caching can be configured.

This library's configuration options must be registered in your application's
:class:`oslo_config.cfg.ConfigOpts` instance. Do this by passing the ConfigOpts
instance to :func:`configure`.

The library has a special public value for nonexistent or expired keys called
:data:`NO_VALUE`. To use this value, import it from oslo_cache.core::

    from oslo_cache import core
    NO_VALUE = core.NO_VALUE
"""

import dogpile.cache
from dogpile.cache import api
from dogpile.cache import proxy
from dogpile.cache import util
from oslo_log import log
from oslo_utils import importutils

from oslo_cache._i18n import _
from oslo_cache._i18n import _LE
from oslo_cache import _opts
from oslo_cache import exception


__all__ = [
    'configure',
    'configure_cache_region',
    'create_region',
    'get_memoization_decorator',
    'NO_VALUE',
]

NO_VALUE = api.NO_VALUE
"""Value returned for nonexistent or expired keys."""

_LOG = log.getLogger(__name__)


class _DebugProxy(proxy.ProxyBackend):
    """Extra Logging ProxyBackend."""
    # NOTE(morganfainberg): Pass all key/values through repr to ensure we have
    # a clean description of the information. Without use of repr, it might
    # be possible to run into encode/decode error(s). For logging/debugging
    # purposes encode/decode is irrelevant and we should be looking at the
    # data exactly as it stands.

    def get(self, key):
        value = self.proxied.get(key)
        _LOG.debug('CACHE_GET: Key: "%(key)r" Value: "%(value)r"',
                   {'key': key, 'value': value})
        return value

    def get_multi(self, keys):
        values = self.proxied.get_multi(keys)
        _LOG.debug('CACHE_GET_MULTI: "%(keys)r" Values: "%(values)r"',
                   {'keys': keys, 'values': values})
        return values

    def set(self, key, value):
        _LOG.debug('CACHE_SET: Key: "%(key)r" Value: "%(value)r"',
                   {'key': key, 'value': value})
        return self.proxied.set(key, value)

    def set_multi(self, keys):
        _LOG.debug('CACHE_SET_MULTI: "%r"', keys)
        self.proxied.set_multi(keys)

    def delete(self, key):
        self.proxied.delete(key)
        _LOG.debug('CACHE_DELETE: "%r"', key)

    def delete_multi(self, keys):
        _LOG.debug('CACHE_DELETE_MULTI: "%r"', keys)
        self.proxied.delete_multi(keys)


def _build_cache_config(conf):
    """Build the cache region dictionary configuration.

    :returns: dict
    """
    prefix = conf.cache.config_prefix
    conf_dict = {}
    conf_dict['%s.backend' % prefix] = _opts._DEFAULT_BACKEND
    if conf.cache.enabled is True:
        conf_dict['%s.backend' % prefix] = conf.cache.backend
    conf_dict['%s.expiration_time' % prefix] = conf.cache.expiration_time
    for argument in conf.cache.backend_argument:
        try:
            (argname, argvalue) = argument.split(':', 1)
        except ValueError:
            msg = _LE('Unable to build cache config-key. Expected format '
                      '"<argname>:<value>". Skipping unknown format: %s')
            _LOG.error(msg, argument)
            continue

        arg_key = '.'.join([prefix, 'arguments', argname])
        conf_dict[arg_key] = argvalue

    _LOG.debug('Oslo Cache Config: %s', conf_dict)
    # NOTE(yorik-sar): these arguments will be used for memcache-related
    # backends. Use setdefault for url to support old-style setting through
    # backend_argument=url:127.0.0.1:11211
    conf_dict.setdefault('%s.arguments.url' % prefix,
                         conf.cache.memcache_servers)
    for arg in ('dead_retry', 'socket_timeout', 'pool_maxsize',
                'pool_unused_timeout', 'pool_connection_get_timeout'):
        value = getattr(conf.cache, 'memcache_' + arg)
        conf_dict['%s.arguments.%s' % (prefix, arg)] = value

    return conf_dict


def _sha1_mangle_key(key):
    """Wrapper for dogpile's sha1_mangle_key.

    dogpile's sha1_mangle_key function expects an encoded string, so we
    should take steps to properly handle multiple inputs before passing
    the key through.
    """
    try:
        key = key.encode('utf-8', errors='xmlcharrefreplace')
    except (UnicodeError, AttributeError):
        # NOTE(stevemar): if encoding fails just continue anyway.
        pass
    return util.sha1_mangle_key(key)


def create_region():
    """Create a region.

    This is just dogpile.cache.make_region, but the key generator has a
    different to_str mechanism.

    .. note::

        You must call :func:`configure_cache_region` with this region before
        a memoized method is called.

    :returns: The new region.
    :rtype: :class:`dogpile.cache.region.CacheRegion`

    """

    return dogpile.cache.make_region(
        function_key_generator=_function_key_generator)


def configure_cache_region(conf, region):
    """Configure a cache region.

    If the cache region is already configured, this function does nothing.
    Otherwise, the region is configured.

    :param conf: config object, must have had :func:`configure` called on it.
    :type conf: oslo_config.cfg.ConfigOpts
    :param region: Cache region to configure (see :func:`create_region`).
    :type region: dogpile.cache.region.CacheRegion
    :raises oslo_cache.exception.ConfigurationError: If the region parameter is
        not a dogpile.cache.CacheRegion.
    :returns: The region.
    :rtype: :class:`dogpile.cache.region.CacheRegion`
    """
    if not isinstance(region, dogpile.cache.CacheRegion):
        raise exception.ConfigurationError(
            _('region not type dogpile.cache.CacheRegion'))

    if not region.is_configured:
        # NOTE(morganfainberg): this is how you tell if a region is configured.
        # There is a request logged with dogpile.cache upstream to make this
        # easier / less ugly.

        config_dict = _build_cache_config(conf)
        region.configure_from_config(config_dict,
                                     '%s.' % conf.cache.config_prefix)

        if conf.cache.debug_cache_backend:
            region.wrap(_DebugProxy)

        # NOTE(morganfainberg): if the backend requests the use of a
        # key_mangler, we should respect that key_mangler function. If a
        # key_mangler is not defined by the backend, use the sha1_mangle_key
        # mangler provided by dogpile.cache. This ensures we always use a fixed
        # size cache-key.
        if region.key_mangler is None:
            region.key_mangler = _sha1_mangle_key

        for class_path in conf.cache.proxies:
            # NOTE(morganfainberg): if we have any proxy wrappers, we should
            # ensure they are added to the cache region's backend. Since
            # configure_from_config doesn't handle the wrap argument, we need
            # to manually add the Proxies. For information on how the
            # ProxyBackends work, see the dogpile.cache documents on
            # "changing-backend-behavior"
            cls = importutils.import_class(class_path)
            _LOG.debug("Adding cache-proxy '%s' to backend.", class_path)
            region.wrap(cls)

    return region


def _get_should_cache_fn(conf, group):
    """Build a function that returns a config group's caching status.

    For any given object that has caching capabilities, a boolean config
    option for that object's group should exist and default to ``True``. This
    function will use that value to tell the caching decorator whether caching
    for that object is enabled. To properly use this with the decorator, pass
    this function the configuration group and assign the result to a variable.
    Pass the new variable to the caching decorator as the named argument
    ``should_cache_fn``.

    :param conf: config object, must have had :func:`configure` called on it.
    :type conf: oslo_config.cfg.ConfigOpts
    :param group: name of the configuration group to examine
    :type group: string
    :returns: function reference
    """
    def should_cache(value):
        if not conf.cache.enabled:
            return False
        conf_group = getattr(conf, group)
        return getattr(conf_group, 'caching', True)
    return should_cache


def _get_expiration_time_fn(conf, group):
    """Build a function that returns a config group's expiration time.

    For any given object that has caching capabilities, an int config option
    called ``cache_time`` for that driver's group should exist and typically
    default to ``None``. This function will use that value to tell the caching
    decorator the TTL override for caching the resulting objects. If the
    value of the config option is ``None``, the default value provided in the
    ``[cache] expiration_time`` option will be used by the decorator. The
    default may be set to something other than ``None`` in cases where the
    caching TTL should not be tied to the global default(s).

    To properly use this with the decorator, pass this function the
    configuration group and assign the result to a variable. Pass the new
    variable to the caching decorator as the named argument
    ``expiration_time``.

    :param conf: config object, must have had :func:`configure` called on it.
    :type conf: oslo_config.cfg.ConfigOpts
    :param group: name of the configuration group to examine
    :type group: string
    :rtype: function reference
    """
    def get_expiration_time():
        conf_group = getattr(conf, group)
        return getattr(conf_group, 'cache_time', None)
    return get_expiration_time


def _key_generate_to_str(s):
    # NOTE(morganfainberg): Since we need to stringify all arguments, attempt
    # to stringify and handle the Unicode error explicitly as needed.
    try:
        return str(s)
    except UnicodeEncodeError:
        return s.encode('utf-8')


def _function_key_generator(namespace, fn, to_str=_key_generate_to_str):
    # NOTE(morganfainberg): This wraps dogpile.cache's default
    # function_key_generator to change the default to_str mechanism.
    return util.function_key_generator(namespace, fn, to_str=to_str)


def get_memoization_decorator(conf, region, group, expiration_group=None):
    """Build a function based on the `cache_on_arguments` decorator.

    The memoization decorator that gets created by this function is a
    :meth:`dogpile.cache.region.CacheRegion.cache_on_arguments` decorator,
    where

    * The ``should_cache_fn`` is set to a function that returns True if both
      the ``[cache] enabled`` option is true and [`group`] ``caching`` is
      True.

    * The ``expiration_time`` is set from the
      [`expiration_group`] ``cache_time`` option if ``expiration_group``
      is passed in and the value is set, or [`group`] ``cache_time`` if
      ``expiration_group`` is not passed in and the value is set, or
      ``[cache] expiration_time`` otherwise.

    Example usage::

        import oslo_cache.core

        MEMOIZE = oslo_cache.core.get_memoization_decorator(conf,
                                                            group='group1')

        @MEMOIZE
        def function(arg1, arg2):
            ...


        ALTERNATE_MEMOIZE = oslo_cache.core.get_memoization_decorator(
            conf, group='group2', expiration_group='group3')

        @ALTERNATE_MEMOIZE
        def function2(arg1, arg2):
            ...

    :param conf: config object, must have had :func:`configure` called on it.
    :type conf: oslo_config.cfg.ConfigOpts
    :param region: region as created by :func:`create_region`.
    :type region: dogpile.cache.region.CacheRegion
    :param group: name of the configuration group to examine
    :type group: string
    :param expiration_group: name of the configuration group to examine
                             for the expiration option. This will fall back to
                             using ``group`` if the value is unspecified or
                             ``None``
    :type expiration_group: string
    :rtype: function reference
    """
    if expiration_group is None:
        expiration_group = group
    should_cache = _get_should_cache_fn(conf, group)
    expiration_time = _get_expiration_time_fn(conf, expiration_group)

    memoize = region.cache_on_arguments(should_cache_fn=should_cache,
                                        expiration_time=expiration_time)

    # Make sure the actual "should_cache" and "expiration_time" methods are
    # available. This is potentially interesting/useful to pre-seed cache
    # values.
    memoize.should_cache = should_cache
    memoize.get_expiration_time = expiration_time

    return memoize


def configure(conf):
    """Configure the library.

    Register the required oslo.cache config options into an oslo.config CONF
    object.

    This must be called before :py:func:`configure_cache_region`.

    :param conf: The configuration object.
    :type conf: oslo_config.cfg.ConfigOpts
    """
    _opts.configure(conf)
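The `_build_cache_config` helper above flattens oslo.config options into the dotted-key dictionary dogpile expects, splitting each `backend_argument` entry on the first `:` only so values such as `url:127.0.0.1:11211` keep their port. A self-contained sketch of that key layout, with plain arguments standing in for the oslo.config object (names here are illustrative, not the library's API):

```python
def build_cache_config(prefix, backend, expiration_time, backend_arguments):
    """Sketch of _build_cache_config's key layout. The real function reads
    these values from an oslo_config ConfigOpts object; here they are
    passed in directly for illustration."""
    conf_dict = {
        '%s.backend' % prefix: backend,
        '%s.expiration_time' % prefix: expiration_time,
    }
    for argument in backend_arguments:
        try:
            # split on the first ':' only, so values may contain colons
            argname, argvalue = argument.split(':', 1)
        except ValueError:
            continue  # skip entries not in "<argname>:<value>" form
        conf_dict['%s.arguments.%s' % (prefix, argname)] = argvalue
    return conf_dict


cfg = build_cache_config('cache.oslo', 'dogpile.cache.memory', 600,
                         ['url:127.0.0.1:11211', 'badentry'])
```

Note how the malformed `'badentry'` item is skipped rather than aborting configuration, matching the log-and-continue behavior above.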
@@ -1,21 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


class ConfigurationError(Exception):
    """Raised when the cache isn't configured correctly."""


class QueueEmpty(Exception):
    """Raised when a connection cannot be acquired."""
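These two exception classes are the library's entire error surface: `ConfigurationError` is what `configure_cache_region` in core.py raises when handed something that is not a `dogpile.cache.CacheRegion`. A minimal sketch of that type check, using local stand-in classes rather than the installed libraries:

```python
class ConfigurationError(Exception):
    """Stand-in for oslo_cache.exception.ConfigurationError."""


class CacheRegion(object):
    """Stand-in for dogpile.cache.CacheRegion (assumed for illustration)."""


def check_region_type(region):
    # Mirrors the guard at the top of configure_cache_region().
    if not isinstance(region, CacheRegion):
        raise ConfigurationError('region not type dogpile.cache.CacheRegion')
    return region


region = check_region_type(CacheRegion())
try:
    check_region_type(object())
    rejected = False
except ConfigurationError:
    rejected = True
```

Failing fast here, before any backend is touched, keeps misconfiguration errors close to their cause instead of surfacing later as odd cache misses.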
@@ -1,54 +0,0 @@
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.9.1.dev1\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-12 08:30+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 05:50+0000\n"
"Last-Translator: Alex Eng <loones1595@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"Ungültiger Wert %s für ssl_cert_reqs, muss lauten \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr ""
"Verbindung konnte von Pool-ID %(id)s nach %(seconds)s nicht abgerufen werden."

msgid "cache_collection name is required"
msgstr "Ein Name für cache_collection ist erforderlich"

msgid "database db_name is required"
msgstr "Die Datenbank db_name ist erforderlich"

msgid "db_hosts value is required"
msgstr "Ein Wert für db_hosts ist erforderlich"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "Ganzzahlwert für mongo_ttl_seconds erwartet"

msgid "integer value expected for w (write concern attribute)"
msgstr "Ganzzahlwert für Attribut 'w' ('write concern'-Attribut) erwartet"

msgid "no ssl support available"
msgstr "Keine SSL-Unterstützung verfügbar"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "Region weist nicht den Typ 'dogpile.cache.CacheRegion' auf"

msgid "replicaset_name required when use_replica is True"
msgstr "replicaset_name erforderlich, wenn use_replica 'True' ist"
@@ -1,53 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-28 05:53+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."

msgid "cache_collection name is required"
msgstr "cache_collection name is required"

msgid "database db_name is required"
msgstr "database db_name is required"

msgid "db_hosts value is required"
msgstr "db_hosts value is required"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "integer value expected for mongo_ttl_seconds"

msgid "integer value expected for w (write concern attribute)"
msgstr "integer value expected for w (write concern attribute)"

msgid "no ssl support available"
msgstr "no SSL support available"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "region not type dogpile.cache.CacheRegion"

msgid "replicaset_name required when use_replica is True"
msgstr "replicaset_name required when use_replica is True"
@@ -1,57 +0,0 @@
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:19+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Spanish\n"
"Language: es\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"Valor ssl_cert_reqs no válido de %s, debe ser uno de \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr ""
"No se puede obtener una conexión del ID de agrupación %(id)s después de "
"%(seconds)s segundos."

msgid "cache_collection name is required"
msgstr "el nombre de cache_collection es necesario"

msgid "database db_name is required"
msgstr "base de datos db_name es necesario"

msgid "db_hosts value is required"
msgstr "El valor db_hosts es necesario"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "se esperaba un valor entero para mongo_ttl_seconds"

msgid "integer value expected for w (write concern attribute)"
msgstr "se esperaba un valor entero para w (atributo en cuestión write)"

msgid "no ssl support available"
msgstr "Soporte SSL no disponible"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "región no tipo dogpile.cache.CacheRegion"

msgid "replicaset_name required when use_replica is True"
msgstr "se necesita replicaset_name cuando use_replica es True (verdadero)"
@@ -1,57 +0,0 @@
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:20+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: French\n"
"Language: fr\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"Valeur de ssl_cert_reqs non valide (%s), doit être l'une des valeurs "
"suivantes: \"NONE\", \"OPTIONAL\", \"REQUIRED\""

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr ""
"Impossible d'établir une connexion à partir de l'ID de pool %(id)s après "
"%(seconds)s secondes."

msgid "cache_collection name is required"
msgstr "Nom cache_collection est requis"

msgid "database db_name is required"
msgstr "db_name database est requis"

msgid "db_hosts value is required"
msgstr "Valeur db_hosts est requis"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "valeur entière attendue pour mongo_ttl_seconds"

msgid "integer value expected for w (write concern attribute)"
msgstr "valeur entière attendue pour w (attribut d'écriture)"

msgid "no ssl support available"
msgstr "pas de support du ssl"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "la région n'est pas de type dogpile.cache.CacheRegion"

msgid "replicaset_name required when use_replica is True"
msgstr "replicaset_name requis si use_replica a la valeur True"
@@ -1,56 +0,0 @@
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:21+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Italian\n"
"Language: it\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"Valore ssl_cert_reqs di %s non valido; deve essere uno tra \"NONE\", "
"\"OPTIONAL\", \"REQUIRED\""

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr ""
"Impossibile ottenere una connessione dall'ID pool %(id)s dopo %(seconds)s "
"secondi."

msgid "cache_collection name is required"
msgstr "Il nome cache_collection è obbligatorio"

msgid "database db_name is required"
msgstr "Il database db_name è obbligatorio"

msgid "db_hosts value is required"
msgstr "Il valore db_hosts è obbligatorio"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "valore intero previsto per mongo_ttl_seconds"

msgid "integer value expected for w (write concern attribute)"
msgstr "valore intero previsto per w (attributo di scrittura)"

msgid "no ssl support available"
msgstr "nessun supporto ssl disponibile"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "regione non tipo dogpile.cache.CacheRegion"

msgid "replicaset_name required when use_replica is True"
msgstr "replicaset_name è obbligatorio quando use_replica è True"
@@ -1,54 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:22+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Korean (South Korea)\n"
"Language: ko-KR\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=1; plural=0\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"%s의 ssl_cert_reqs 값이 올바르지 않음, \"NONE\", \"OPTIONAL\", \"REQUIRED\" "
"중 하나여야 함 "

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr "풀 id %(id)s에서 %(seconds)s분 후에 연결할 수 없습니다."

msgid "cache_collection name is required"
msgstr "cache_collection 이름이 필요함"

msgid "database db_name is required"
msgstr "database db_name이 필요함"

msgid "db_hosts value is required"
msgstr "db_hosts 값이 필요함"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "mongo_ttl_seconds 에 대해 정수 값이 예상됨 "

msgid "integer value expected for w (write concern attribute)"
msgstr "w(write concern 속성)에 대해 정수 값이 예상됨"

msgid "no ssl support available"
msgstr "사용 가능한 ssl 지원이 없음"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "리젼이 dogpile.cache.CacheRegion 유형이 아님 "

msgid "replicaset_name required when use_replica is True"
msgstr "use_replica가 True인 경우 replicaset_name이 필요함 "
@@ -1,57 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:23+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Portuguese (Brazil)\n"
"Language: pt-BR\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"valor ssl_cert_reqs inválido de %s, deve ser um de \"NONE\", \"OPTIMAL\", "
"\"REQUIRED\""

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr ""
"Não é possível obter uma conexão do ID do conjunto %(id)s após %(seconds)s "
"segundos."

msgid "cache_collection name is required"
msgstr "nome cache_collection é necessário"

msgid "database db_name is required"
msgstr "banco de dados db_name é necessário"

msgid "db_hosts value is required"
msgstr "valor db_hosts é necessário"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "valor de número inteiro esperado para mongo_ttl_seconds"

msgid "integer value expected for w (write concern attribute)"
msgstr "valor inteiro esperado para w (atributo relativo a gravação)"

msgid "no ssl support available"
msgstr "suporte ssl não disponível"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "região não é do tipo dogpile.cache.CacheRegion"

msgid "replicaset_name required when use_replica is True"
msgstr "replicaset_name necessário quando use_replica for True"
@@ -1,58 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:24+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Russian\n"
"Language: ru\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n"
"%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"Недопустимое значение ssl_cert_reqs, %s, необходимо указать одно из "
"значений: \"NONE\", \"OPTIONAL\", \"REQUIRED\""

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr ""
"Не удалось получить соединение из пула с ИД %(id)s за %(seconds)s секунд."

msgid "cache_collection name is required"
msgstr "имя cache_collection является обязательным"

msgid "database db_name is required"
msgstr "db_name базы данных является обязательным"

msgid "db_hosts value is required"
msgstr "Значение db_hosts является обязательным"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "для атрибута mongo_ttl_seconds ожидается целочисленное значение"

msgid "integer value expected for w (write concern attribute)"
msgstr "для w (атрибут участия в записи) ожидается целочисленное значение"

msgid "no ssl support available"
msgstr "отсутствует поддержка ssl"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "регион не относится к типу dogpile.cache.CacheRegion"

msgid "replicaset_name required when use_replica is True"
msgstr ""
"replicaset_name является обязательным, если для use_replica задано значение "
"True"
@@ -1,54 +0,0 @@
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:25+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Turkish (Turkey)\n"
"Language: tr-TR\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n>1)\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"%s değerinde geçersiz ssl_cert_reqs, \"HİÇBİRİ\", \"İSTEĞE BAĞLI\", "
"\"GEREKLİ\" den biri olmalı"

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr "%(seconds)s saniye sonra havuz %(id)s'den bağlantı alınamadı."

msgid "cache_collection name is required"
msgstr "cache_collection ismi gerekli"

msgid "database db_name is required"
msgstr "veri tabanı db_name gerekli"

msgid "db_hosts value is required"
msgstr "db_hosts değeri gerekli"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "mongo_ttl_seconds için tam sayı değer bekleniyor"

msgid "integer value expected for w (write concern attribute)"
msgstr "w için tam sayı değer bekleniyor (yazma ilgisi özniteliği)"

msgid "no ssl support available"
msgstr "ssl desteği yok"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "bölge dogpile.cache.CacheRegion türünde değil"

msgid "replicaset_name required when use_replica is True"
msgstr "use_replica True olduğunda replicaset_name gereklidir"
@@ -1,53 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:27+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Chinese (China)\n"
"Language: zh-CN\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=1; plural=0\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"ssl_cert_reqs 值 %s 无效,必须是下列其中一项:“NONE”、“OPTIONAL”和“REQUIRED”"

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr "在 %(seconds)s 秒之后,无法根据池标识 %(id)s 获取连接。"

msgid "cache_collection name is required"
msgstr "需要 cache_collection 名称"

msgid "database db_name is required"
msgstr "需要数据库 db_name"

msgid "db_hosts value is required"
msgstr "需要 db_hosts 值"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "mongo_ttl_seconds 需要整数值"

msgid "integer value expected for w (write concern attribute)"
msgstr "w(写相关属性)需要整数值"

msgid "no ssl support available"
msgstr "未提供 ssl 支持"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "区域的类型不是 dogpile.cache.CacheRegion"

msgid "replicaset_name required when use_replica is True"
msgstr "当 use_replica 为 True 时,需要 replicaset_name"
@@ -1,54 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# Alex Eng <loones1595@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache 1.10.1.dev2\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-02 08:26+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Chinese (Taiwan)\n"
"Language: zh-TW\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=1; plural=0\n"

#, python-format
msgid ""
"Invalid ssl_cert_reqs value of %s, must be one of \"NONE\", \"OPTIONAL\", "
"\"REQUIRED\""
msgstr ""
"%s 的 ssl_cert_reqs 值無效,必須是 \"NONE\"、\"OPTIONAL\" 及 \"REQUIRED\" 的"
"其中之一"

#, python-format
msgid ""
"Unable to get a connection from pool id %(id)s after %(seconds)s seconds."
msgstr "在 %(seconds)s 秒之後,無法從儲存區 ID %(id)s 取得連線。"

msgid "cache_collection name is required"
msgstr "需要 cache_collection 名稱"

msgid "database db_name is required"
msgstr "需要資料庫 db_name"

msgid "db_hosts value is required"
msgstr "需要 db_hosts 值"

msgid "integer value expected for mongo_ttl_seconds"
msgstr "mongo_ttl_seconds 預期整數值"

msgid "integer value expected for w (write concern attribute)"
msgstr "w(WriteConcern 屬性)預期整數值"

msgid "no ssl support available"
msgstr "無法使用 SSL 支援"

msgid "region not type dogpile.cache.CacheRegion"
msgstr "區域不是 dogpile.cache.CacheRegion 類型"

msgid "replicaset_name required when use_replica is True"
msgstr "use_replica 為 True 時需要 replicaset_name"
@@ -1,70 +0,0 @@
# Copyright 2013 Metacloud
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Items useful for external testing."""


import copy

from dogpile.cache import proxy

from oslo_cache import core as cache


__all__ = [
    'CacheIsolatingProxy',
]


NO_VALUE = cache.NO_VALUE


def _copy_value(value):
    if value is not NO_VALUE:
        value = copy.deepcopy(value)
    return value


# NOTE(morganfainberg): WARNING - It is not recommended to use the Memory
# backend for dogpile.cache in a real deployment under any circumstances. The
# backend does no cleanup of expired values and therefore will leak memory. The
# backend is not implemented in a way to share data across processes (e.g.
# Keystone in HTTPD). This proxy is a hack to get around the lack of isolation
# of values in memory. Currently it blindly stores and retrieves the values
# from the cache, and modifications to dicts/lists/etc returned can result in
# changes to the cached values. In short, do not use the dogpile.cache.memory
# backend unless you are running tests or expecting odd/strange results.
class CacheIsolatingProxy(proxy.ProxyBackend):
    """Proxy that forces a memory copy of stored values.

    The default in-memory cache-region does not perform a copy on values it
    is meant to cache. Therefore if the value is modified after set or after
    get, the cached value also is modified. This proxy does a copy as the last
    thing before storing data.

    In your application's tests, you'll want to set this as a proxy for the
    in-memory cache, like this::

        self.config_fixture.config(
            group='cache',
            backend='dogpile.cache.memory',
            enabled=True,
            proxies=['oslo_cache.testing.CacheIsolatingProxy'])

    """
    def get(self, key):
        return _copy_value(self.proxied.get(key))

    def set(self, key, value):
        self.proxied.set(key, _copy_value(value))
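The `_copy_value` helper above is the whole isolation trick: values are deep-copied on both `set` and `get`, so neither the caller's object nor a fetched result can mutate what the backend holds. A minimal standalone sketch of the same idea, using a plain dict in place of a dogpile backend (the `DictBackend` class and its names are illustrative, not part of oslo.cache):

```python
import copy

# Stand-in sentinel, mirroring the role of oslo_cache.core.NO_VALUE.
NO_VALUE = object()


def _copy_value(value):
    # Deep-copy real values; pass the "missing" sentinel through untouched.
    if value is not NO_VALUE:
        value = copy.deepcopy(value)
    return value


class DictBackend(object):
    """Toy backend: a dict with copy-on-set and copy-on-get."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = _copy_value(value)

    def get(self, key):
        return _copy_value(self._store.get(key, NO_VALUE))


backend = DictBackend()
original = {'a': [1, 2]}
backend.set('k', original)
original['a'].append(3)      # mutating the caller's object...
fetched = backend.get('k')
fetched['a'].append(4)       # ...or a fetched copy...
assert backend.get('k') == {'a': [1, 2]}   # ...never touches the cached value
assert backend.get('missing') is NO_VALUE  # sentinel is returned as-is
```

Without the copies, all three dicts above would alias the same object, which is exactly the memory-backend behaviour the proxy's warning comment describes.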
@@ -1,18 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_cache import core

from oslo_config import cfg


core.configure(cfg.CONF)
@ -1,326 +0,0 @@
|
||||||
# -*- coding: utf-8 -*-
|
|
||||||
# Copyright 2013 Metacloud
|
|
||||||
#
|
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
|
||||||
# not use this file except in compliance with the License. You may obtain
|
|
||||||
# a copy of the License at
|
|
||||||
#
|
|
||||||
# http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
#
|
|
||||||
# Unless required by applicable law or agreed to in writing, software
|
|
||||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
|
||||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
|
||||||
# License for the specific language governing permissions and limitations
|
|
||||||
# under the License.
|
|
||||||
|
|
||||||
import copy
|
|
||||||
import time
|
|
||||||
import uuid
|
|
||||||
|
|
||||||
from dogpile.cache import proxy
|
|
||||||
import mock
|
|
||||||
from oslo_config import cfg
|
|
||||||
from oslo_config import fixture as config_fixture
|
|
||||||
from oslotest import base
|
|
||||||
|
|
||||||
from oslo_cache import _opts
|
|
||||||
from oslo_cache import core as cache
|
|
||||||
from oslo_cache import exception
|
|
||||||
|
|
||||||
|
|
||||||
NO_VALUE = cache.NO_VALUE
|
|
||||||
TEST_GROUP = uuid.uuid4().hex
|
|
||||||
TEST_GROUP2 = uuid.uuid4().hex
|
|
||||||
|
|
||||||
|
|
||||||
class BaseTestCase(base.BaseTestCase):
|
|
||||||
def setUp(self):
|
|
||||||
super(BaseTestCase, self).setUp()
|
|
||||||
self.config_fixture = self.useFixture(config_fixture.Config())
|
|
||||||
self.config_fixture.config(
|
|
||||||
# TODO(morganfainberg): Make Cache Testing a separate test case
|
|
||||||
# in tempest, and move it out of the base unit tests.
|
|
||||||
group='cache',
|
|
||||||
backend='dogpile.cache.memory',
|
|
||||||
enabled=True,
|
|
||||||
proxies=['oslo_cache.testing.CacheIsolatingProxy'])
|
|
||||||
|
|
||||||
|
|
||||||
def _copy_value(value):
|
|
||||||
if value is not NO_VALUE:
|
|
||||||
value = copy.deepcopy(value)
|
|
||||||
return value
|
|
||||||
|
|
||||||
|
|
||||||
class TestProxy(proxy.ProxyBackend):
|
|
||||||
def get(self, key):
|
|
||||||
value = _copy_value(self.proxied.get(key))
|
|
||||||
if value is not NO_VALUE:
|
|
||||||
if isinstance(value[0], TestProxyValue):
|
|
||||||
value[0].cached = True
|
|
||||||
return value
|
|
||||||
|
|
||||||
|
|
||||||
class TestProxyValue(object):
|
|
||||||
def __init__(self, value):
|
|
||||||
self.value = value
|
|
||||||
self.cached = False
|
|
||||||
|
|
||||||
|
|
||||||
class CacheRegionTest(BaseTestCase):
|
|
||||||
|
|
||||||
def setUp(self):
|
|
||||||
super(CacheRegionTest, self).setUp()
|
|
||||||
self.region = cache.create_region()
|
|
||||||
cache.configure_cache_region(self.config_fixture.conf, self.region)
|
|
||||||
self.region.wrap(TestProxy)
|
|
||||||
self.test_value = TestProxyValue('Decorator Test')
|
|
||||||
|
|
||||||
def _add_test_caching_option(self):
|
|
||||||
self.config_fixture.register_opt(
|
|
||||||
cfg.BoolOpt('caching', default=True), group='cache')
|
|
||||||
|
|
||||||
def _add_dummy_config_group(self):
|
|
||||||
self.config_fixture.register_opt(
|
|
||||||
cfg.IntOpt('cache_time', default=None), group=TEST_GROUP)
|
|
||||||
self.config_fixture.register_opt(
|
|
||||||
cfg.IntOpt('cache_time', default=None), group=TEST_GROUP2)
|
|
||||||
|
|
||||||
def _get_cacheable_function(self):
|
|
||||||
memoize = cache.get_memoization_decorator(
|
|
||||||
self.config_fixture.conf, self.region, group='cache')
|
|
||||||
|
|
||||||
@memoize
|
|
||||||
def cacheable_function(value):
|
|
||||||
return value
|
|
||||||
|
|
||||||
return cacheable_function
|
|
||||||
|
|
||||||
def test_region_built_with_proxy_direct_cache_test(self):
|
|
||||||
# Verify cache regions are properly built with proxies.
|
|
||||||
test_value = TestProxyValue('Direct Cache Test')
|
|
||||||
self.region.set('cache_test', test_value)
|
|
||||||
cached_value = self.region.get('cache_test')
|
|
||||||
self.assertTrue(cached_value.cached)
|
|
||||||
|
|
||||||
def test_cache_region_no_error_multiple_config(self):
|
|
||||||
# Verify configuring the CacheRegion again doesn't error.
|
|
||||||
cache.configure_cache_region(self.config_fixture.conf, self.region)
|
|
||||||
cache.configure_cache_region(self.config_fixture.conf, self.region)
|
|
||||||
|
|
||||||
def _get_cache_fallthrough_fn(self, cache_time):
|
|
||||||
memoize = cache.get_memoization_decorator(
|
|
||||||
self.config_fixture.conf,
|
|
||||||
self.region,
|
|
||||||
group='cache',
|
|
||||||
expiration_group=TEST_GROUP2)
|
|
||||||
|
|
||||||
class _test_obj(object):
|
|
||||||
def __init__(self, value):
|
|
||||||
self.test_value = value
|
|
||||||
|
|
||||||
@memoize
|
|
||||||
def get_test_value(self):
|
|
||||||
return self.test_value
|
|
||||||
|
|
||||||
def _do_test(value):
|
|
||||||
|
|
||||||
test_obj = _test_obj(value)
|
|
||||||
|
|
||||||
# Ensure the value has been cached
|
|
||||||
test_obj.get_test_value()
|
|
||||||
# Get the now cached value
|
|
||||||
cached_value = test_obj.get_test_value()
|
|
||||||
self.assertTrue(cached_value.cached)
|
|
||||||
self.assertEqual(value.value, cached_value.value)
|
|
||||||
self.assertEqual(cached_value.value, test_obj.test_value.value)
|
|
||||||
# Change the underlying value on the test object.
|
|
||||||
test_obj.test_value = TestProxyValue(uuid.uuid4().hex)
|
|
||||||
self.assertEqual(cached_value.value,
|
|
||||||
test_obj.get_test_value().value)
|
|
||||||
# override the system time to ensure the non-cached new value
|
|
||||||
# is returned
|
|
||||||
new_time = time.time() + (cache_time * 2)
|
|
||||||
with mock.patch.object(time, 'time',
|
|
||||||
return_value=new_time):
|
|
||||||
overriden_cache_value = test_obj.get_test_value()
|
|
||||||
self.assertNotEqual(cached_value.value,
|
|
||||||
overriden_cache_value.value)
|
|
||||||
self.assertEqual(test_obj.test_value.value,
|
|
||||||
overriden_cache_value.value)
|
|
||||||
|
|
||||||
return _do_test
|
|
||||||
|
|
    def test_cache_no_fallthrough_expiration_time_fn(self):
        self._add_dummy_config_group()
        # Since we do not re-configure the cache region, for ease of testing
        # this value is set the same as the expiration_time default in the
        # [cache] group
        cache_time = 600
        expiration_time = cache._get_expiration_time_fn(
            self.config_fixture.conf, TEST_GROUP)
        do_test = self._get_cache_fallthrough_fn(cache_time)
        # Run the test with the dummy group cache_time value
        self.config_fixture.config(cache_time=cache_time,
                                   group=TEST_GROUP)
        test_value = TestProxyValue(uuid.uuid4().hex)
        self.assertEqual(cache_time, expiration_time())
        do_test(value=test_value)

    def test_cache_fallthrough_expiration_time_fn(self):
        self._add_dummy_config_group()
        # Since we do not re-configure the cache region, for ease of testing
        # this value is set the same as the expiration_time default in the
        # [cache] group
        cache_time = 599
        expiration_time = cache._get_expiration_time_fn(
            self.config_fixture.conf, TEST_GROUP)
        do_test = self._get_cache_fallthrough_fn(cache_time)
        # Run the test with the dummy group cache_time value set to None and
        # the global value set.
        self.config_fixture.config(cache_time=None, group=TEST_GROUP)
        test_value = TestProxyValue(uuid.uuid4().hex)
        self.assertIsNone(expiration_time())
        do_test(value=test_value)

    def test_should_cache_fn_global_cache_enabled(self):
        # Verify should_cache_fn generates a sane function for the subsystem
        # and functions as expected with caching globally enabled.
        cacheable_function = self._get_cacheable_function()

        self.config_fixture.config(group='cache', enabled=True)
        cacheable_function(self.test_value)
        cached_value = cacheable_function(self.test_value)
        self.assertTrue(cached_value.cached)

    def test_should_cache_fn_global_cache_disabled(self):
        # Verify should_cache_fn generates a sane function for the subsystem
        # and functions as expected with caching globally disabled.
        cacheable_function = self._get_cacheable_function()

        self.config_fixture.config(group='cache', enabled=False)
        cacheable_function(self.test_value)
        cached_value = cacheable_function(self.test_value)
        self.assertFalse(cached_value.cached)

    def test_should_cache_fn_global_cache_disabled_group_cache_enabled(self):
        # Verify should_cache_fn generates a sane function for the subsystem
        # and functions as expected with caching globally disabled and the
        # specific group caching enabled.
        cacheable_function = self._get_cacheable_function()

        self._add_test_caching_option()
        self.config_fixture.config(group='cache', enabled=False)
        self.config_fixture.config(group='cache', caching=True)

        cacheable_function(self.test_value)
        cached_value = cacheable_function(self.test_value)
        self.assertFalse(cached_value.cached)

    def test_should_cache_fn_global_cache_enabled_group_cache_disabled(self):
        # Verify should_cache_fn generates a sane function for the subsystem
        # and functions as expected with caching globally enabled and the
        # specific group caching disabled.
        cacheable_function = self._get_cacheable_function()

        self._add_test_caching_option()
        self.config_fixture.config(group='cache', enabled=True)
        self.config_fixture.config(group='cache', caching=False)

        cacheable_function(self.test_value)
        cached_value = cacheable_function(self.test_value)
        self.assertFalse(cached_value.cached)

    def test_should_cache_fn_global_cache_enabled_group_cache_enabled(self):
        # Verify should_cache_fn generates a sane function for the subsystem
        # and functions as expected with caching globally enabled and the
        # specific group caching enabled.
        cacheable_function = self._get_cacheable_function()

        self._add_test_caching_option()
        self.config_fixture.config(group='cache', enabled=True)
        self.config_fixture.config(group='cache', caching=True)

        cacheable_function(self.test_value)
        cached_value = cacheable_function(self.test_value)
        self.assertTrue(cached_value.cached)

    def test_cache_dictionary_config_builder(self):
        """Validate we build a sane dogpile.cache dictionary config."""
        self.config_fixture.config(group='cache',
                                   config_prefix='test_prefix',
                                   backend='some_test_backend',
                                   expiration_time=86400,
                                   backend_argument=['arg1:test',
                                                     'arg2:test:test',
                                                     'arg3.invalid'])

        config_dict = cache._build_cache_config(self.config_fixture.conf)
        self.assertEqual(
            self.config_fixture.conf.cache.backend,
            config_dict['test_prefix.backend'])
        self.assertEqual(
            self.config_fixture.conf.cache.expiration_time,
            config_dict['test_prefix.expiration_time'])
        self.assertEqual('test', config_dict['test_prefix.arguments.arg1'])
        self.assertEqual('test:test',
                         config_dict['test_prefix.arguments.arg2'])
        self.assertNotIn('test_prefix.arguments.arg3', config_dict)

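The assertions above pin down how `backend_argument` entries are flattened into the dogpile config dictionary: each item is split on the first `:` only, and items with no separator are dropped. A standalone sketch of that parsing rule (`parse_backend_arguments` is illustrative, not the actual oslo.cache helper):

```python
def parse_backend_arguments(backend_argument, prefix='test_prefix'):
    """Split each 'name:value' item on the first colon; skip malformed items."""
    parsed = {}
    for item in backend_argument:
        if ':' not in item:
            continue  # no separator: the item is ignored
        name, value = item.split(':', 1)
        parsed['%s.arguments.%s' % (prefix, name)] = value
    return parsed


args = parse_backend_arguments(['arg1:test', 'arg2:test:test', 'arg3.invalid'])
# 'arg2:test:test' keeps everything after the first colon as its value,
# so a value may itself contain colons; 'arg3.invalid' is silently skipped.
assert args == {'test_prefix.arguments.arg1': 'test',
                'test_prefix.arguments.arg2': 'test:test'}
```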
    def test_cache_dictionary_config_builder_global_disabled(self):
        """Validate the backend is reset to default if caching is disabled."""
        self.config_fixture.config(group='cache',
                                   enabled=False,
                                   config_prefix='test_prefix',
                                   backend='some_test_backend')

        self.assertFalse(self.config_fixture.conf.cache.enabled)
        config_dict = cache._build_cache_config(self.config_fixture.conf)
        self.assertEqual(
            _opts._DEFAULT_BACKEND,
            config_dict['test_prefix.backend'])

    def test_cache_debug_proxy(self):
        single_value = 'Test Value'
        single_key = 'testkey'
        multi_values = {'key1': 1, 'key2': 2, 'key3': 3}

        self.region.set(single_key, single_value)
        self.assertEqual(single_value, self.region.get(single_key))

        self.region.delete(single_key)
        self.assertEqual(NO_VALUE, self.region.get(single_key))

        self.region.set_multi(multi_values)
        cached_values = self.region.get_multi(multi_values.keys())
        for value in multi_values.values():
            self.assertIn(value, cached_values)
        self.assertEqual(len(multi_values.values()), len(cached_values))

        self.region.delete_multi(multi_values.keys())
        for value in self.region.get_multi(multi_values.keys()):
            self.assertEqual(NO_VALUE, value)

    def test_configure_non_region_object_raises_error(self):
        self.assertRaises(exception.ConfigurationError,
                          cache.configure_cache_region,
                          self.config_fixture.conf,
                          "bogus")


class UTF8KeyManglerTests(BaseTestCase):

    def test_key_is_utf8_encoded(self):
        key = u'fäké1'
        encoded = cache._sha1_mangle_key(key)
        self.assertIsNotNone(encoded)

    def test_key_is_bytestring(self):
        key = b'\xcf\x84o\xcf\x81\xce\xbdo\xcf\x82'
        encoded = cache._sha1_mangle_key(key)
        self.assertIsNotNone(encoded)

    def test_key_is_string(self):
        key = 'fake'
        encoded = cache._sha1_mangle_key(key)
        self.assertIsNotNone(encoded)
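The UTF8KeyMangler tests above only assert that text, byte, and plain-string keys can all be mangled. A sketch of the behavior being checked, assuming (as the test names suggest) that the mangler UTF-8 encodes text keys before SHA-1 hashing so unicode and pre-encoded byte keys hash identically (`sha1_mangle_key` here is illustrative, not the oslo.cache function):

```python
import hashlib


def sha1_mangle_key(key):
    """Hash a cache key, UTF-8 encoding text keys so that bytes and
    unicode inputs are both accepted."""
    if isinstance(key, str):
        key = key.encode('utf-8')
    return hashlib.sha1(key).hexdigest()


# A unicode key and its UTF-8 encoding mangle to the same digest.
assert sha1_mangle_key(u'fäké1') == sha1_mangle_key(u'fäké1'.encode('utf-8'))
# The result is a fixed-width hex digest, safe for backends such as
# memcached that reject long or non-ASCII keys.
assert len(sha1_mangle_key('fake')) == 40
```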
@@ -1,727 +0,0 @@
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import copy
import functools
import uuid

from dogpile.cache import region as dp_region
import six
from six.moves import range

from oslo_cache.backends import mongo
from oslo_cache import core
from oslo_cache import exception
from oslo_cache.tests import test_cache

# Mock database structure sample where 'ks_cache' is the database and
# 'cache' is the collection. Dogpile CachedValue data is divided in two
# fields: `value` (CachedValue.payload) and `meta` (CachedValue.metadata).
ks_cache = {
    "cache": [
        {
            "value": {
                "serviceType": "identity",
                "allVersionsUrl": "https://dummyUrl",
                "dateLastModified": "ISODate(2014-02-08T18:39:13.237Z)",
                "serviceName": "Identity",
                "enabled": "True"
            },
            "meta": {
                "v": 1,
                "ct": 1392371422.015121
            },
            "doc_date": "ISODate('2014-02-14T09:50:22.015Z')",
            "_id": "8251dc95f63842719c077072f1047ddf"
        },
        {
            "value": "dummyValueX",
            "meta": {
                "v": 1,
                "ct": 1392371422.014058
            },
            "doc_date": "ISODate('2014-02-14T09:50:22.014Z')",
            "_id": "66730b9534d146f0804d23729ad35436"
        }
    ]
}


COLLECTIONS = {}
SON_MANIPULATOR = None
NO_VALUE = core.NO_VALUE


class MockCursor(object):

    def __init__(self, collection, dataset_factory):
        super(MockCursor, self).__init__()
        self.collection = collection
        self._factory = dataset_factory
        self._dataset = self._factory()
        self._limit = None
        self._skip = None

    def __iter__(self):
        return self

    def __next__(self):
        if self._skip:
            for _ in range(self._skip):
                next(self._dataset)
            self._skip = None
        if self._limit is not None and self._limit <= 0:
            raise StopIteration()
        if self._limit is not None:
            self._limit -= 1
        return next(self._dataset)

    next = __next__

    def __getitem__(self, index):
        arr = [x for x in self._dataset]
        self._dataset = iter(arr)
        return arr[index]


class MockCollection(object):

    def __init__(self, db, name):
        super(MockCollection, self).__init__()
        self.name = name
        self._collection_database = db
        self._documents = {}
        self.write_concern = {}

    def __getattr__(self, name):
        if name == 'database':
            return self._collection_database

    def ensure_index(self, key_or_list, *args, **kwargs):
        pass

    def index_information(self):
        return {}

    def find_one(self, spec_or_id=None, *args, **kwargs):
        if spec_or_id is None:
            spec_or_id = {}
        if not isinstance(spec_or_id, collections.Mapping):
            spec_or_id = {'_id': spec_or_id}

        try:
            return next(self.find(spec_or_id, *args, **kwargs))
        except StopIteration:
            return None

    def find(self, spec=None, *args, **kwargs):
        return MockCursor(self, functools.partial(self._get_dataset, spec))

    def _get_dataset(self, spec):
        dataset = (self._copy_doc(document, dict) for document in
                   self._iter_documents(spec))
        return dataset

    def _iter_documents(self, spec=None):
        return (SON_MANIPULATOR.transform_outgoing(document, self) for
                document in six.itervalues(self._documents)
                if self._apply_filter(document, spec))

    def _apply_filter(self, document, query):
        # Every key in the query must match; an empty query matches all.
        for key, search in six.iteritems(query):
            doc_val = document.get(key)
            if isinstance(search, dict):
                op_dict = {'$in': lambda dv, sv: dv in sv}
                is_match = all(
                    op_str in op_dict and op_dict[op_str](doc_val, search_val)
                    for op_str, search_val in six.iteritems(search)
                )
            else:
                is_match = doc_val == search
            if not is_match:
                return False

        return True

    def _copy_doc(self, obj, container):
        if isinstance(obj, list):
            new = []
            for item in obj:
                new.append(self._copy_doc(item, container))
            return new
        if isinstance(obj, dict):
            new = container()
            for key, value in list(obj.items()):
                new[key] = self._copy_doc(value, container)
            return new
        else:
            return copy.copy(obj)

    def insert(self, data, manipulate=True, **kwargs):
        if isinstance(data, list):
            return [self._insert(element) for element in data]
        return self._insert(data)

    def save(self, data, manipulate=True, **kwargs):
        return self._insert(data)

    def _insert(self, data):
        if '_id' not in data:
            data['_id'] = uuid.uuid4().hex
        object_id = data['_id']
        self._documents[object_id] = self._internalize_dict(data)
        return object_id

    def find_and_modify(self, spec, document, upsert=False, **kwargs):
        self.update(spec, document, upsert, **kwargs)

    def update(self, spec, document, upsert=False, **kwargs):

        existing_docs = [doc for doc in six.itervalues(self._documents)
                         if self._apply_filter(doc, spec)]
        if existing_docs:
            existing_doc = existing_docs[0]  # should find only 1 match
            _id = existing_doc['_id']
            existing_doc.clear()
            existing_doc['_id'] = _id
            existing_doc.update(self._internalize_dict(document))
        elif upsert:
            existing_doc = self._documents[self._insert(document)]

    def _internalize_dict(self, d):
        return {k: copy.deepcopy(v) for k, v in six.iteritems(d)}

    def remove(self, spec_or_id=None, search_filter=None):
        """Remove objects matching spec_or_id from the collection."""
        if spec_or_id is None:
            spec_or_id = search_filter if search_filter else {}
        if not isinstance(spec_or_id, dict):
            spec_or_id = {'_id': spec_or_id}
        to_delete = list(self.find(spec=spec_or_id))
        for doc in to_delete:
            doc_id = doc['_id']
            del self._documents[doc_id]

        return {
            "connectionId": uuid.uuid4().hex,
            "n": len(to_delete),
            "ok": 1.0,
            "err": None,
        }


class MockMongoDB(object):
    def __init__(self, dbname):
        self._dbname = dbname

    def authenticate(self, username, password):
        pass

    def add_son_manipulator(self, manipulator):
        global SON_MANIPULATOR
        SON_MANIPULATOR = manipulator

    def __getattr__(self, name):
        if name == 'authenticate':
            return self.authenticate
        elif name == 'name':
            return self._dbname
        elif name == 'add_son_manipulator':
            return self.add_son_manipulator
        else:
            return get_collection(self._dbname, name)

    def __getitem__(self, name):
        return get_collection(self._dbname, name)


class MockMongoClient(object):
    def __init__(self, *args, **kwargs):
        pass

    def __getattr__(self, dbname):
        return MockMongoDB(dbname)


def get_collection(db_name, collection_name):
    mongo_collection = MockCollection(MockMongoDB(db_name), collection_name)
    return mongo_collection

def pymongo_override():
    global pymongo
    import pymongo
    if pymongo.MongoClient is not MockMongoClient:
        pymongo.MongoClient = MockMongoClient
    if pymongo.MongoReplicaSetClient is not MockMongoClient:
        pymongo.MongoReplicaSetClient = MockMongoClient


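`pymongo_override` swaps the client classes at module scope for the whole test run. For comparison, the same substitution can be done as a scoped monkeypatch with `unittest.mock`, which restores the original attribute automatically; this sketch uses a stand-in module so it does not require pymongo:

```python
import types
from unittest import mock


class FakeClient(object):
    """Stand-in client that records its constructor arguments."""
    def __init__(self, *args, **kwargs):
        self.args = args


# Build a throwaway module to patch, in place of the real pymongo.
fake_pymongo = types.ModuleType('fake_pymongo')
fake_pymongo.MongoClient = object

with mock.patch.object(fake_pymongo, 'MongoClient', FakeClient):
    # Inside the context, constructing a client yields the fake.
    client = fake_pymongo.MongoClient('localhost', 27017)
    assert isinstance(client, FakeClient)

# On exit, mock.patch restores the original attribute.
assert fake_pymongo.MongoClient is object
```

The module-level assignment in `pymongo_override` trades that automatic cleanup for simplicity, which is acceptable here because every test in the run needs the mock client.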
class MyTransformer(mongo.BaseTransform):
|
|
||||||
"""Added here just to check manipulator logic is used correctly."""
|
|
||||||
|
|
||||||
def transform_incoming(self, son, collection):
|
|
||||||
return super(MyTransformer, self).transform_incoming(son, collection)
|
|
||||||
|
|
||||||
def transform_outgoing(self, son, collection):
|
|
||||||
return super(MyTransformer, self).transform_outgoing(son, collection)
|
|
||||||
|
|
||||||
|
|
||||||
class MongoCache(test_cache.BaseTestCase):
|
|
||||||
def setUp(self):
|
|
||||||
super(MongoCache, self).setUp()
|
|
||||||
global COLLECTIONS
|
|
||||||
COLLECTIONS = {}
|
|
||||||
mongo.MongoApi._DB = {}
|
|
||||||
mongo.MongoApi._MONGO_COLLS = {}
|
|
||||||
pymongo_override()
|
|
||||||
# using typical configuration
|
|
||||||
self.arguments = {
|
|
||||||
'db_hosts': 'localhost:27017',
|
|
||||||
'db_name': 'ks_cache',
|
|
||||||
'cache_collection': 'cache',
|
|
||||||
'username': 'test_user',
|
|
||||||
'password': 'test_password'
|
|
||||||
}
|
|
||||||
|
|
||||||
def test_missing_db_hosts(self):
|
|
||||||
self.arguments.pop('db_hosts')
|
|
||||||
region = dp_region.make_region()
|
|
||||||
self.assertRaises(exception.ConfigurationError, region.configure,
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
|
|
||||||
def test_missing_db_name(self):
|
|
||||||
self.arguments.pop('db_name')
|
|
||||||
region = dp_region.make_region()
|
|
||||||
self.assertRaises(exception.ConfigurationError, region.configure,
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
|
|
||||||
def test_missing_cache_collection_name(self):
|
|
||||||
self.arguments.pop('cache_collection')
|
|
||||||
region = dp_region.make_region()
|
|
||||||
self.assertRaises(exception.ConfigurationError, region.configure,
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
|
|
||||||
def test_incorrect_write_concern(self):
|
|
||||||
self.arguments['w'] = 'one value'
|
|
||||||
region = dp_region.make_region()
|
|
||||||
self.assertRaises(exception.ConfigurationError, region.configure,
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
|
|
||||||
def test_correct_write_concern(self):
|
|
||||||
self.arguments['w'] = 1
|
|
||||||
region = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, "dummyValue10")
|
|
||||||
# There is no proxy so can access MongoCacheBackend directly
|
|
||||||
self.assertEqual(1, region.backend.api.w)
|
|
||||||
|
|
||||||
def test_incorrect_read_preference(self):
|
|
||||||
self.arguments['read_preference'] = 'inValidValue'
|
|
||||||
region = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
# As per delayed loading of pymongo, read_preference value should
|
|
||||||
# still be string and NOT enum
|
|
||||||
self.assertEqual('inValidValue', region.backend.api.read_preference)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
self.assertRaises(ValueError, region.set,
|
|
||||||
random_key, "dummyValue10")
|
|
||||||
|
|
||||||
def test_correct_read_preference(self):
|
|
||||||
self.arguments['read_preference'] = 'secondaryPreferred'
|
|
||||||
region = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
# As per delayed loading of pymongo, read_preference value should
|
|
||||||
# still be string and NOT enum
|
|
||||||
self.assertEqual('secondaryPreferred',
|
|
||||||
region.backend.api.read_preference)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, "dummyValue10")
|
|
||||||
|
|
||||||
# Now as pymongo is loaded so expected read_preference value is enum.
|
|
||||||
# There is no proxy so can access MongoCacheBackend directly
|
|
||||||
self.assertEqual(3, region.backend.api.read_preference)
|
|
||||||
|
|
||||||
def test_missing_replica_set_name(self):
|
|
||||||
self.arguments['use_replica'] = True
|
|
||||||
region = dp_region.make_region()
|
|
||||||
self.assertRaises(exception.ConfigurationError, region.configure,
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
|
|
||||||
def test_provided_replica_set_name(self):
|
|
||||||
self.arguments['use_replica'] = True
|
|
||||||
self.arguments['replicaset_name'] = 'my_replica'
|
|
||||||
dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
self.assertTrue(True) # reached here means no initialization error
|
|
||||||
|
|
||||||
def test_incorrect_mongo_ttl_seconds(self):
|
|
||||||
self.arguments['mongo_ttl_seconds'] = 'sixty'
|
|
||||||
region = dp_region.make_region()
|
|
||||||
self.assertRaises(exception.ConfigurationError, region.configure,
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
|
|
||||||
def test_cache_configuration_values_assertion(self):
|
|
||||||
self.arguments['use_replica'] = True
|
|
||||||
self.arguments['replicaset_name'] = 'my_replica'
|
|
||||||
self.arguments['mongo_ttl_seconds'] = 60
|
|
||||||
self.arguments['ssl'] = False
|
|
||||||
region = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
# There is no proxy so can access MongoCacheBackend directly
|
|
||||||
self.assertEqual('localhost:27017', region.backend.api.hosts)
|
|
||||||
self.assertEqual('ks_cache', region.backend.api.db_name)
|
|
||||||
self.assertEqual('cache', region.backend.api.cache_collection)
|
|
||||||
self.assertEqual('test_user', region.backend.api.username)
|
|
||||||
self.assertEqual('test_password', region.backend.api.password)
|
|
||||||
self.assertEqual(True, region.backend.api.use_replica)
|
|
||||||
self.assertEqual('my_replica', region.backend.api.replicaset_name)
|
|
||||||
self.assertEqual(False, region.backend.api.conn_kwargs['ssl'])
|
|
||||||
self.assertEqual(60, region.backend.api.ttl_seconds)
|
|
||||||
|
|
||||||
def test_multiple_region_cache_configuration(self):
|
|
||||||
arguments1 = copy.copy(self.arguments)
|
|
||||||
arguments1['cache_collection'] = 'cache_region1'
|
|
||||||
|
|
||||||
region1 = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=arguments1)
|
|
||||||
# There is no proxy so can access MongoCacheBackend directly
|
|
||||||
self.assertEqual('localhost:27017', region1.backend.api.hosts)
|
|
||||||
self.assertEqual('ks_cache', region1.backend.api.db_name)
|
|
||||||
self.assertEqual('cache_region1', region1.backend.api.cache_collection)
|
|
||||||
self.assertEqual('test_user', region1.backend.api.username)
|
|
||||||
self.assertEqual('test_password', region1.backend.api.password)
|
|
||||||
# Should be None because of delayed initialization
|
|
||||||
self.assertIsNone(region1.backend.api._data_manipulator)
|
|
||||||
|
|
||||||
random_key1 = uuid.uuid4().hex
|
|
||||||
region1.set(random_key1, "dummyValue10")
|
|
||||||
self.assertEqual("dummyValue10", region1.get(random_key1))
|
|
||||||
# Now should have initialized
|
|
||||||
self.assertIsInstance(region1.backend.api._data_manipulator,
|
|
||||||
mongo.BaseTransform)
|
|
||||||
|
|
||||||
class_name = '%s.%s' % (MyTransformer.__module__, "MyTransformer")
|
|
||||||
|
|
||||||
arguments2 = copy.copy(self.arguments)
|
|
||||||
arguments2['cache_collection'] = 'cache_region2'
|
|
||||||
arguments2['son_manipulator'] = class_name
|
|
||||||
|
|
||||||
region2 = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=arguments2)
|
|
||||||
# There is no proxy so can access MongoCacheBackend directly
|
|
||||||
self.assertEqual('localhost:27017', region2.backend.api.hosts)
|
|
||||||
self.assertEqual('ks_cache', region2.backend.api.db_name)
|
|
||||||
self.assertEqual('cache_region2', region2.backend.api.cache_collection)
|
|
||||||
|
|
||||||
# Should be None because of delayed initialization
|
|
||||||
self.assertIsNone(region2.backend.api._data_manipulator)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region2.set(random_key, "dummyValue20")
|
|
||||||
self.assertEqual("dummyValue20", region2.get(random_key))
|
|
||||||
# Now should have initialized
|
|
||||||
self.assertIsInstance(region2.backend.api._data_manipulator,
|
|
||||||
MyTransformer)
|
|
||||||
|
|
||||||
region1.set(random_key1, "dummyValue22")
|
|
||||||
self.assertEqual("dummyValue22", region1.get(random_key1))
|
|
||||||
|
|
||||||
def test_typical_configuration(self):
|
|
||||||
|
|
||||||
dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
self.assertTrue(True) # reached here means no initialization error
|
|
||||||
|
|
||||||
def test_backend_get_missing_data(self):
|
|
||||||
|
|
||||||
region = dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
# should return NO_VALUE as key does not exist in cache
|
|
||||||
self.assertEqual(NO_VALUE, region.get(random_key))
|
|
||||||
|
|
||||||
def test_backend_set_data(self):
|
|
||||||
|
|
||||||
region = dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, "dummyValue")
|
|
||||||
self.assertEqual("dummyValue", region.get(random_key))
|
|
||||||
|
|
||||||
def test_backend_set_data_with_string_as_valid_ttl(self):
|
|
||||||
|
|
||||||
self.arguments['mongo_ttl_seconds'] = '3600'
|
|
||||||
region = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
self.assertEqual(3600, region.backend.api.ttl_seconds)
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, "dummyValue")
|
|
||||||
self.assertEqual("dummyValue", region.get(random_key))
|
|
||||||
|
|
||||||
def test_backend_set_data_with_int_as_valid_ttl(self):
|
|
||||||
|
|
||||||
self.arguments['mongo_ttl_seconds'] = 1800
|
|
||||||
region = dp_region.make_region().configure('oslo_cache.mongo',
|
|
||||||
arguments=self.arguments)
|
|
||||||
self.assertEqual(1800, region.backend.api.ttl_seconds)
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, "dummyValue")
|
|
||||||
self.assertEqual("dummyValue", region.get(random_key))
|
|
||||||
|
|
||||||
def test_backend_set_none_as_data(self):
|
|
||||||
|
|
||||||
region = dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, None)
|
|
||||||
self.assertIsNone(region.get(random_key))
|
|
||||||
|
|
||||||
def test_backend_set_blank_as_data(self):
|
|
||||||
|
|
||||||
region = dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, "")
|
|
||||||
self.assertEqual("", region.get(random_key))
|
|
||||||
|
|
||||||
def test_backend_set_same_key_multiple_times(self):
|
|
||||||
|
|
||||||
region = dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
region.set(random_key, "dummyValue")
|
|
||||||
self.assertEqual("dummyValue", region.get(random_key))
|
|
||||||
|
|
||||||
dict_value = {'key1': 'value1'}
|
|
||||||
region.set(random_key, dict_value)
|
|
||||||
self.assertEqual(dict_value, region.get(random_key))
|
|
||||||
|
|
||||||
region.set(random_key, "dummyValue2")
|
|
||||||
self.assertEqual("dummyValue2", region.get(random_key))
|
|
||||||
|
|
||||||
def test_backend_multi_set_data(self):
|
|
||||||
|
|
||||||
region = dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
random_key1 = uuid.uuid4().hex
|
|
||||||
random_key2 = uuid.uuid4().hex
|
|
||||||
random_key3 = uuid.uuid4().hex
|
|
||||||
mapping = {random_key1: 'dummyValue1',
|
|
||||||
random_key2: 'dummyValue2',
|
|
||||||
random_key3: 'dummyValue3'}
|
|
||||||
region.set_multi(mapping)
|
|
||||||
# should return NO_VALUE as key does not exist in cache
|
|
||||||
self.assertEqual(NO_VALUE, region.get(random_key))
|
|
||||||
self.assertFalse(region.get(random_key))
|
|
||||||
self.assertEqual("dummyValue1", region.get(random_key1))
|
|
||||||
self.assertEqual("dummyValue2", region.get(random_key2))
|
|
||||||
self.assertEqual("dummyValue3", region.get(random_key3))
|
|
||||||
|
|
||||||
def test_backend_multi_get_data(self):
|
|
||||||
|
|
||||||
region = dp_region.make_region().configure(
|
|
||||||
'oslo_cache.mongo',
|
|
||||||
arguments=self.arguments
|
|
||||||
)
|
|
||||||
random_key = uuid.uuid4().hex
|
|
||||||
random_key1 = uuid.uuid4().hex
|
|
||||||
random_key2 = uuid.uuid4().hex
|
|
||||||
random_key3 = uuid.uuid4().hex
|
|
||||||
mapping = {random_key1: 'dummyValue1',
|
|
||||||
random_key2: '',
|
|
||||||
random_key3: 'dummyValue3'}
|
|
||||||
region.set_multi(mapping)
|
|
||||||
|
|
||||||
keys = [random_key, random_key1, random_key2, random_key3]
|
|
||||||
results = region.get_multi(keys)
|
|
||||||
# should return NO_VALUE as key does not exist in cache
|
|
||||||
self.assertEqual(NO_VALUE, results[0])
|
|
||||||
self.assertEqual("dummyValue1", results[1])
|
|
||||||
self.assertEqual("", results[2])
|
|
||||||
self.assertEqual("dummyValue3", results[3])
|
|
||||||
|
|
||||||
    def test_backend_multi_set_should_update_existing(self):
        region = dp_region.make_region().configure(
            'oslo_cache.mongo',
            arguments=self.arguments
        )
        random_key = uuid.uuid4().hex
        random_key1 = uuid.uuid4().hex
        random_key2 = uuid.uuid4().hex
        random_key3 = uuid.uuid4().hex
        mapping = {random_key1: 'dummyValue1',
                   random_key2: 'dummyValue2',
                   random_key3: 'dummyValue3'}
        region.set_multi(mapping)
        # should return NO_VALUE as key does not exist in cache
        self.assertEqual(NO_VALUE, region.get(random_key))
        self.assertEqual("dummyValue1", region.get(random_key1))
        self.assertEqual("dummyValue2", region.get(random_key2))
        self.assertEqual("dummyValue3", region.get(random_key3))

        mapping = {random_key1: 'dummyValue4',
                   random_key2: 'dummyValue5'}
        region.set_multi(mapping)
        self.assertEqual(NO_VALUE, region.get(random_key))
        self.assertEqual("dummyValue4", region.get(random_key1))
        self.assertEqual("dummyValue5", region.get(random_key2))
        self.assertEqual("dummyValue3", region.get(random_key3))

    def test_backend_multi_set_get_with_blanks_none(self):
        region = dp_region.make_region().configure(
            'oslo_cache.mongo',
            arguments=self.arguments
        )
        random_key = uuid.uuid4().hex
        random_key1 = uuid.uuid4().hex
        random_key2 = uuid.uuid4().hex
        random_key3 = uuid.uuid4().hex
        random_key4 = uuid.uuid4().hex
        mapping = {random_key1: 'dummyValue1',
                   random_key2: None,
                   random_key3: '',
                   random_key4: 'dummyValue4'}
        region.set_multi(mapping)
        # should return NO_VALUE as key does not exist in cache
        self.assertEqual(NO_VALUE, region.get(random_key))
        self.assertEqual("dummyValue1", region.get(random_key1))
        self.assertIsNone(region.get(random_key2))
        self.assertEqual("", region.get(random_key3))
        self.assertEqual("dummyValue4", region.get(random_key4))

        keys = [random_key, random_key1, random_key2, random_key3, random_key4]
        results = region.get_multi(keys)

        # should return NO_VALUE as key does not exist in cache
        self.assertEqual(NO_VALUE, results[0])
        self.assertEqual("dummyValue1", results[1])
        self.assertIsNone(results[2])
        self.assertEqual("", results[3])
        self.assertEqual("dummyValue4", results[4])

        mapping = {random_key1: 'dummyValue5',
                   random_key2: 'dummyValue6'}
        region.set_multi(mapping)
        self.assertEqual(NO_VALUE, region.get(random_key))
        self.assertEqual("dummyValue5", region.get(random_key1))
        self.assertEqual("dummyValue6", region.get(random_key2))
        self.assertEqual("", region.get(random_key3))

    def test_backend_delete_data(self):
        region = dp_region.make_region().configure(
            'oslo_cache.mongo',
            arguments=self.arguments
        )

        random_key = uuid.uuid4().hex
        region.set(random_key, "dummyValue")
        self.assertEqual("dummyValue", region.get(random_key))

        region.delete(random_key)
        # should return NO_VALUE as key no longer exists in cache
        self.assertEqual(NO_VALUE, region.get(random_key))

    def test_backend_multi_delete_data(self):
        region = dp_region.make_region().configure(
            'oslo_cache.mongo',
            arguments=self.arguments
        )
        random_key = uuid.uuid4().hex
        random_key1 = uuid.uuid4().hex
        random_key2 = uuid.uuid4().hex
        random_key3 = uuid.uuid4().hex
        mapping = {random_key1: 'dummyValue1',
                   random_key2: 'dummyValue2',
                   random_key3: 'dummyValue3'}
        region.set_multi(mapping)
        # should return NO_VALUE as key does not exist in cache
        self.assertEqual(NO_VALUE, region.get(random_key))
        self.assertEqual("dummyValue1", region.get(random_key1))
        self.assertEqual("dummyValue2", region.get(random_key2))
        self.assertEqual("dummyValue3", region.get(random_key3))
        self.assertEqual(NO_VALUE, region.get("InvalidKey"))

        keys = mapping.keys()

        region.delete_multi(keys)

        self.assertEqual(NO_VALUE, region.get("InvalidKey"))
        # should return NO_VALUE as keys no longer exist in cache
        self.assertEqual(NO_VALUE, region.get(random_key1))
        self.assertEqual(NO_VALUE, region.get(random_key2))
        self.assertEqual(NO_VALUE, region.get(random_key3))

    def test_additional_crud_method_arguments_support(self):
        """Additional arguments should work across find/insert/update."""

        self.arguments['wtimeout'] = 30000
        self.arguments['j'] = True
        self.arguments['continue_on_error'] = True
        self.arguments['secondary_acceptable_latency_ms'] = 60
        region = dp_region.make_region().configure(
            'oslo_cache.mongo',
            arguments=self.arguments
        )

        # There is no proxy so can access MongoCacheBackend directly
        api_methargs = region.backend.api.meth_kwargs
        self.assertEqual(30000, api_methargs['wtimeout'])
        self.assertEqual(True, api_methargs['j'])
        self.assertEqual(True, api_methargs['continue_on_error'])
        self.assertEqual(60, api_methargs['secondary_acceptable_latency_ms'])

        random_key = uuid.uuid4().hex
        region.set(random_key, "dummyValue1")
        self.assertEqual("dummyValue1", region.get(random_key))

        region.set(random_key, "dummyValue2")
        self.assertEqual("dummyValue2", region.get(random_key))

        random_key = uuid.uuid4().hex
        region.set(random_key, "dummyValue3")
        self.assertEqual("dummyValue3", region.get(random_key))
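The multi-key tests above pin down a contract worth making explicit: `get_multi` returns results in the same order as the requested keys, a missing key yields the `NO_VALUE` sentinel rather than `None`, and both empty strings and `None` are legitimate cached values. A minimal in-memory stand-in illustrating that contract (`DictStore` and this `NO_VALUE` are hypothetical names for illustration, not the oslo.cache API):

```python
# Illustrative sketch of the multi-key cache contract exercised by the
# tests above; not the MongoCacheBackend implementation.
NO_VALUE = object()  # sentinel distinct from every stored value, incl. None


class DictStore:
    def __init__(self):
        self._data = {}

    def set_multi(self, mapping):
        self._data.update(mapping)

    def get_multi(self, keys):
        # results come back in the same order as the requested keys
        return [self._data.get(k, NO_VALUE) for k in keys]


store = DictStore()
store.set_multi({'a': 'dummyValue1', 'b': '', 'c': None})
results = store.get_multi(['missing', 'a', 'b', 'c'])
assert results[0] is NO_VALUE   # absent key -> sentinel, not None
assert results[1] == 'dummyValue1'
assert results[2] == ''         # empty string is a real cached value
assert results[3] is None       # None is a real cached value too
```

The sentinel-object pattern is what lets callers distinguish "nothing cached" from "None was cached", which is exactly what the `assertIsNone` versus `assertEqual(NO_VALUE, ...)` assertions above verify.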
@@ -1,147 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import threading
import time

import mock
import six
from six.moves import queue
import testtools
from testtools import matchers

from oslo_cache import _memcache_pool
from oslo_cache import exception
from oslo_cache.tests import test_cache


class _TestConnectionPool(_memcache_pool.ConnectionPool):
    destroyed_value = 'destroyed'

    def _create_connection(self):
        return mock.MagicMock()

    def _destroy_connection(self, conn):
        conn(self.destroyed_value)


class TestConnectionPool(test_cache.BaseTestCase):
    def setUp(self):
        super(TestConnectionPool, self).setUp()
        self.unused_timeout = 10
        self.maxsize = 2
        self.connection_pool = _TestConnectionPool(
            maxsize=self.maxsize,
            unused_timeout=self.unused_timeout)
        self.addCleanup(self.cleanup_instance('connection_pool'))

    def cleanup_instance(self, *names):
        """Create a function suitable for use with self.addCleanup.

        :returns: a callable that uses a closure to delete instance attributes
        """

        def cleanup():
            for name in names:
                if hasattr(self, name):
                    delattr(self, name)
        return cleanup

    def test_get_context_manager(self):
        self.assertThat(self.connection_pool.queue, matchers.HasLength(0))
        with self.connection_pool.acquire() as conn:
            self.assertEqual(1, self.connection_pool._acquired)
        self.assertEqual(0, self.connection_pool._acquired)
        self.assertThat(self.connection_pool.queue, matchers.HasLength(1))
        self.assertEqual(conn, self.connection_pool.queue[0].connection)

    def test_cleanup_pool(self):
        self.test_get_context_manager()
        newtime = time.time() + self.unused_timeout * 2
        non_expired_connection = _memcache_pool._PoolItem(
            ttl=(newtime * 2),
            connection=mock.MagicMock())
        self.connection_pool.queue.append(non_expired_connection)
        self.assertThat(self.connection_pool.queue, matchers.HasLength(2))
        with mock.patch.object(time, 'time', return_value=newtime):
            conn = self.connection_pool.queue[0].connection
            with self.connection_pool.acquire():
                pass
            conn.assert_has_calls(
                [mock.call(self.connection_pool.destroyed_value)])
        self.assertThat(self.connection_pool.queue, matchers.HasLength(1))
        self.assertEqual(0, non_expired_connection.connection.call_count)

    def test_acquire_conn_exception_returns_acquired_count(self):
        class TestException(Exception):
            pass

        with mock.patch.object(_TestConnectionPool, '_create_connection',
                               side_effect=TestException):
            with testtools.ExpectedException(TestException):
                with self.connection_pool.acquire():
                    pass
            self.assertThat(self.connection_pool.queue,
                            matchers.HasLength(0))
            self.assertEqual(0, self.connection_pool._acquired)

    def test_connection_pool_limits_maximum_connections(self):
        # NOTE(morganfainberg): To ensure we don't lockup tests until the
        # job limit, explicitly call .get_nowait() and .put_nowait() in this
        # case.
        conn1 = self.connection_pool.get_nowait()
        conn2 = self.connection_pool.get_nowait()

        # Use a nowait version to raise an Empty exception indicating we would
        # not get another connection until one is placed back into the queue.
        self.assertRaises(queue.Empty, self.connection_pool.get_nowait)

        # Place the connections back into the pool.
        self.connection_pool.put_nowait(conn1)
        self.connection_pool.put_nowait(conn2)

        # Make sure we can get a connection out of the pool again.
        self.connection_pool.get_nowait()

    def test_connection_pool_maximum_connection_get_timeout(self):
        connection_pool = _TestConnectionPool(
            maxsize=1,
            unused_timeout=self.unused_timeout,
            conn_get_timeout=0)

        def _acquire_connection():
            with connection_pool.acquire():
                pass

        # Make sure we've consumed the only available connection from the pool
        conn = connection_pool.get_nowait()

        self.assertRaises(exception.QueueEmpty, _acquire_connection)

        # Put the connection back and ensure we can acquire the connection
        # after it is available.
        connection_pool.put_nowait(conn)
        _acquire_connection()


class TestMemcacheClientOverrides(test_cache.BaseTestCase):

    def test_client_stripped_of_threading_local(self):
        """threading.local overrides are restored for _MemcacheClient"""
        client_class = _memcache_pool._MemcacheClient
        # get the genuine thread._local from MRO
        thread_local = client_class.__mro__[2]
        self.assertTrue(thread_local is threading.local)
        for field in six.iterkeys(thread_local.__dict__):
            if field not in ('__dict__', '__weakref__'):
                self.assertNotEqual(id(getattr(thread_local, field, None)),
                                    id(getattr(client_class, field, None)))
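The pool tests above encode the backend's core behaviour: at most `maxsize` connections exist at once, idle connections that outlive `unused_timeout` are destroyed on the next acquire, and `acquire()` is a context manager that returns the connection to the queue on exit. A rough stdlib sketch of that shape (`TinyPool` is a hypothetical name; this is not the `_memcache_pool.ConnectionPool` source):

```python
# Illustrative connection pool: maxsize cap, idle-timeout cleanup, and a
# context-manager acquire, mirroring what the tests above exercise.
import collections
import contextlib
import time


class TinyPool:
    def __init__(self, maxsize, unused_timeout, create, destroy):
        self.maxsize = maxsize
        self.unused_timeout = unused_timeout
        self._create = create
        self._destroy = destroy
        self._queue = collections.deque()  # (ttl, connection) pairs
        self._acquired = 0

    @contextlib.contextmanager
    def acquire(self):
        # drop idle connections whose ttl has passed
        now = time.time()
        while self._queue and self._queue[0][0] < now:
            _, stale = self._queue.popleft()
            self._destroy(stale)
        if self._queue:
            _, conn = self._queue.popleft()
        elif self._acquired < self.maxsize:
            conn = self._create()
        else:
            raise RuntimeError('pool exhausted')
        self._acquired += 1
        try:
            yield conn
        finally:
            # return the connection with a fresh idle deadline
            self._acquired -= 1
            self._queue.append((time.time() + self.unused_timeout, conn))


made, dead = [], []
pool = TinyPool(2, 10, lambda: made.append(1) or object(), dead.append)
with pool.acquire():
    assert pool._acquired == 1
assert pool._acquired == 0 and len(pool._queue) == 1

# force the pooled connection to look expired; the next acquire destroys it
pool._queue[0] = (time.time() - 1, pool._queue[0][1])
with pool.acquire():
    pass
assert len(dead) == 1 and len(made) == 2
```

The accounting mirrors `test_get_context_manager` and `test_cleanup_pool`: `_acquired` tracks live checkouts, while the queue holds idle connections tagged with their expiry time.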
@@ -1,115 +0,0 @@
# Copyright 2015 Mirantis Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from dogpile.cache import region as dp_region

from oslo_cache import core
from oslo_cache.tests import test_cache
from oslo_config import fixture as config_fixture
from oslo_utils import fixture as time_fixture


NO_VALUE = core.NO_VALUE
KEY = 'test_key'
VALUE = 'test_value'


class CacheDictBackendTest(test_cache.BaseTestCase):

    def setUp(self):
        super(CacheDictBackendTest, self).setUp()
        self.config_fixture = self.useFixture(config_fixture.Config())
        self.config_fixture.config(group='cache', backend='oslo_cache.dict')
        self.time_fixture = self.useFixture(time_fixture.TimeFixture())
        self.region = dp_region.make_region()
        self.region.configure(
            'oslo_cache.dict', arguments={'expiration_time': 0.5})

    def test_dict_backend(self):
        self.assertIs(NO_VALUE, self.region.get(KEY))

        self.region.set(KEY, VALUE)
        self.assertEqual(VALUE, self.region.get(KEY))

        self.region.delete(KEY)
        self.assertIs(NO_VALUE, self.region.get(KEY))

    def test_dict_backend_expiration_time(self):
        self.region.set(KEY, VALUE)
        self.assertEqual(VALUE, self.region.get(KEY))

        self.time_fixture.advance_time_seconds(1)
        self.assertIs(NO_VALUE, self.region.get(KEY))

    def test_dict_backend_clear_cache(self):
        self.region.set(KEY, VALUE)

        self.time_fixture.advance_time_seconds(1)

        self.assertEqual(1, len(self.region.backend.cache))
        self.region.backend._clear()
        self.assertEqual(0, len(self.region.backend.cache))

    def test_dict_backend_zero_expiration_time(self):
        self.region = dp_region.make_region()
        self.region.configure(
            'oslo_cache.dict', arguments={'expiration_time': 0})

        self.region.set(KEY, VALUE)
        self.time_fixture.advance_time_seconds(1)

        self.assertEqual(VALUE, self.region.get(KEY))
        self.assertEqual(1, len(self.region.backend.cache))

        self.region.backend._clear()

        self.assertEqual(VALUE, self.region.get(KEY))
        self.assertEqual(1, len(self.region.backend.cache))

    def test_dict_backend_multi_keys(self):
        self.region.set('key1', 'value1')
        self.region.set('key2', 'value2')
        self.time_fixture.advance_time_seconds(1)
        self.region.set('key3', 'value3')

        self.assertEqual(1, len(self.region.backend.cache))
        self.assertIs(NO_VALUE, self.region.get('key1'))
        self.assertIs(NO_VALUE, self.region.get('key2'))
        self.assertEqual('value3', self.region.get('key3'))

    def test_dict_backend_multi_keys_in_one_call(self):
        single_value = 'Test Value'
        single_key = 'testkey'
        multi_values = {'key1': 1, 'key2': 2, 'key3': 3}

        self.region.set(single_key, single_value)
        self.assertEqual(single_value, self.region.get(single_key))

        self.region.delete(single_key)
        self.assertEqual(NO_VALUE, self.region.get(single_key))

        self.region.set_multi(multi_values)
        cached_values = self.region.get_multi(multi_values.keys())
        for value in multi_values.values():
            self.assertIn(value, cached_values)
        self.assertEqual(len(multi_values.values()), len(cached_values))

        self.region.delete_multi(multi_values.keys())
        for value in self.region.get_multi(multi_values.keys()):
            self.assertEqual(NO_VALUE, value)

    def test_dict_backend_rewrite_value(self):
        self.region.set(KEY, 'value1')
        self.region.set(KEY, 'value2')
        self.assertEqual('value2', self.region.get(KEY))
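The dict-backend tests above hinge on the `expiration_time` semantics: entries expire lazily (a `get` past the deadline returns `NO_VALUE`), stale entries are only pruned from the underlying dict when a write happens (hence `test_dict_backend_multi_keys` ends with one live entry), and `expiration_time=0` disables expiry entirely. A hedged sketch of those rules (`TinyDictBackend` is an illustrative name, not the `DictCacheBackend` source):

```python
# Illustrative dict cache with lazy expiry, pruning-on-write, and
# "0 means never expire", matching the behaviour the tests above check.
import time

NO_VALUE = object()  # sentinel for "nothing cached"


class TinyDictBackend:
    def __init__(self, expiration_time=0):
        self.expiration_time = expiration_time
        self.cache = {}  # key -> (expiry timestamp or None, value)

    def _prune(self):
        if self.expiration_time > 0:
            now = time.time()
            self.cache = {k: v for k, v in self.cache.items()
                          if v[0] is None or v[0] >= now}

    def set(self, key, value):
        self._prune()  # writing evicts anything already expired
        expiry = (time.time() + self.expiration_time
                  if self.expiration_time > 0 else None)
        self.cache[key] = (expiry, value)

    def get(self, key):
        expiry, value = self.cache.get(key, (None, NO_VALUE))
        if expiry is not None and expiry < time.time():
            return NO_VALUE  # expired lazily on read
        return value


backend = TinyDictBackend(expiration_time=0.05)
backend.set('k', 'v')
assert backend.get('k') == 'v'
time.sleep(0.1)
assert backend.get('k') is NO_VALUE   # past the deadline
backend.set('k2', 'v2')               # the write prunes the stale entry
assert list(backend.cache) == ['k2']
```

Pruning only on write is why `test_dict_backend_clear_cache` can still observe a stale entry in `backend.cache` after advancing time: reads report expiry but leave the dict untouched.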
@@ -1,18 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


import pbr.version

version_info = pbr.version.VersionInfo('oslo_cache')
@@ -1,3 +0,0 @@
---
other:
  - Switch to reno for managing release notes.
@@ -1,273 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'oslosphinx',
    'reno.sphinxext',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'oslo.cache Release Notes'
copyright = u'2016, oslo.cache Developers'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from oslo_cache.version import version_info as oslo_cache_version
# The full version, including alpha/beta/rc tags.
release = oslo_cache_version.version_string_with_vcs()
# The short X.Y version.
version = oslo_cache_version.canonical_version_string()

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []

# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []

# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False


# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_domain_indices = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'oslo.cacheReleaseNotesDoc'


# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'oslo.cacheReleaseNotes.tex',
     u'oslo.cache Release Notes Documentation',
     u'oslo.cache Developers', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# If true, show page references after internal links.
# latex_show_pagerefs = False

# If true, show URL addresses after external links.
# latex_show_urls = False

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_domain_indices = True


# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'oslo.cacheReleaseNotes',
     u'oslo.cache Release Notes Documentation',
     [u'oslo.cache Developers'], 1)
]

# If true, show URL addresses after external links.
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    ('index', 'oslo.cacheReleaseNotes',
     u'oslo.cache Release Notes Documentation',
     u'oslo.cache Developers', 'oslo.cacheReleaseNotes',
     'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
# texinfo_appendices = []

# If false, no module index is generated.
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
@@ -1,8 +0,0 @@
=============================
 oslo.cache Release Notes
=============================

.. toctree::
   :maxdepth: 1

   unreleased
@@ -1,30 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.cache Release Notes 1.10.1\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2016-07-11 22:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-28 05:54+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"

msgid "1.9.0"
msgstr "1.9.0"

msgid "Other Notes"
msgstr "Other Notes"

msgid "Switch to reno for managing release notes."
msgstr "Switch to reno for managing release notes."

msgid "Unreleased Release Notes"
msgstr "Unreleased Release Notes"

msgid "oslo.cache Release Notes"
msgstr "oslo.cache Release Notes"
@@ -1,5 +0,0 @@
==========================
Unreleased Release Notes
==========================

.. release-notes::
@@ -1,10 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.

dogpile.cache>=0.6.1 # BSD
six>=1.9.0 # MIT
oslo.config>=3.12.0 # Apache-2.0
oslo.i18n>=2.1.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
setup.cfg
@@ -1,68 +0,0 @@
[metadata]
name = oslo.cache
summary = Cache storage for OpenStack projects.
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://launchpad.net/oslo
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.4
    Programming Language :: Python :: 3.5

[files]
packages =
    oslo_cache

[entry_points]
oslo.config.opts =
    oslo.cache = oslo_cache._opts:list_opts

dogpile.cache =
    oslo_cache.mongo = oslo_cache.backends.mongo:MongoCacheBackend
    oslo_cache.memcache_pool = oslo_cache.backends.memcache_pool:PooledMemcachedBackend
    oslo_cache.dict = oslo_cache.backends.dictionary:DictCacheBackend

[extras]
dogpile =
    python-memcached>=1.56 # PSF
mongo =
    pymongo!=3.1,>=3.0.2 # Apache-2.0

[pbr]
warnerrors = true
autodoc_tree_index_modules = True

[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1

[upload_sphinx]
upload-dir = doc/build/html

[compile_catalog]
directory = oslo_cache/locale
domain = oslo_cache

[update_catalog]
domain = oslo_cache
output_dir = oslo_cache/locale
input_file = oslo_cache/locale/oslo_cache.pot

[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = oslo_cache/locale/oslo_cache.pot
|
|
||||||
|
|
||||||
[wheel]
|
|
||||||
universal = true
|
|
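Each line under the `[entry_points]` groups above follows the setuptools `name = module:attr` form; dogpile.cache resolves a backend string such as `oslo_cache.mongo` to a class through this registration. An illustrative sketch (not part of the repo; the function name is hypothetical) of how one line decomposes:

```python
def parse_entry_point(spec):
    """Split a setuptools entry-point line 'name = module:attr'
    into its three parts. Illustrative helper only."""
    name, _, target = (s.strip() for s in spec.partition('='))
    module, _, attr = (s.strip() for s in target.partition(':'))
    return name, module, attr

# e.g. the mongo backend registration from setup.cfg:
print(parse_entry_point(
    'oslo_cache.mongo = oslo_cache.backends.mongo:MongoCacheBackend'))
# ('oslo_cache.mongo', 'oslo_cache.backends.mongo', 'MongoCacheBackend')
```

At runtime, real code would look the name up via `pkg_resources`/`importlib.metadata` rather than parsing setup.cfg directly.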
29 setup.py
@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=1.8'],
    pbr=True)
@ -1,9 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.0
mock>=2.0 # BSD
oslotest>=1.10.0 # Apache-2.0
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
reno>=1.8.0 # Apache2
43 tox.ini
@ -1,43 +0,0 @@
[tox]
minversion = 1.6
envlist = py35,py34,py27,pypy,pep8

[testenv]
deps = .[dogpile]
       .[mongo]
       -r{toxinidir}/test-requirements.txt
commands =
    find . -type f -name "*.pyc" -delete
    python setup.py testr --slowest --testr-args='{posargs}'

[testenv:pep8]
commands = flake8

[testenv:venv]
commands = {posargs}

[testenv:docs]
commands = python setup.py build_sphinx

[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'

[flake8]
show-source = True
ignore = H405
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build

[hacking]
import_exceptions =

[testenv:pip-missing-reqs]
# do not install test-requirements as that will pollute the virtualenv for
# determining missing packages
# this also means that pip-missing-reqs must be installed separately, outside
# of the requirements.txt files
deps = pip_missing_reqs
commands = pip-missing-reqs -d --ignore-module=oslo_cache* --ignore-file=oslo_cache/tests/* --ignore-file=tests/ oslo_cache

[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html