Retire repo

This repo was created by accident; use deb-python-oslo.service
instead.

Needed-By: I1ac1a06931c8b6dd7c2e73620a0302c29e605f03
Change-Id: I81894aea69b9d09b0977039623c26781093a397a
Author: Andreas Jaeger
Date: 2017-04-17 19:40:01 +02:00
Parent: bde278f0b1
Commit: 998fb7401c
61 changed files with 13 additions and 6869 deletions

@@ -1,8 +0,0 @@
[run]
branch = True
source = oslo_service
omit = oslo_service/tests/*,oslo_service/openstack/*
[report]
ignore_errors = True
precision = 2

.gitignore
@@ -1,52 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
cover
.tox
nosetests.xml
.testrepository
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/oslo.service.git

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ ./oslo_service $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack, you
must follow the steps on this page:

   http://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack should be
submitted for review via the Gerrit tool, following the workflow
documented at:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/oslo.service

@@ -1,4 +0,0 @@
oslo.service Style Commandments
======================================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,22 +0,0 @@
========================================================
oslo.service -- Library for running OpenStack services
========================================================
.. image:: https://img.shields.io/pypi/v/oslo.service.svg
   :target: https://pypi.python.org/pypi/oslo.service/
   :alt: Latest Version

.. image:: https://img.shields.io/pypi/dm/oslo.service.svg
   :target: https://pypi.python.org/pypi/oslo.service/
   :alt: Downloads
oslo.service provides a framework for defining new long-running
services using the patterns established by other OpenStack
applications. It also includes utilities long-running applications
might need for working with SSL or WSGI, performing periodic
operations, interacting with systemd, etc.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/oslo.service
* Source: http://git.openstack.org/cgit/openstack/oslo.service
* Bugs: http://bugs.launchpad.net/oslo.service

README.txt
@@ -0,0 +1,13 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

Use the project deb-python-oslo.service at
http://git.openstack.org/cgit/openstack/deb-python-oslo.service instead.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,2 +0,0 @@
[python: **.py]

@@ -1,344 +0,0 @@
# Generated using bandit_conf_generator
profiles:
gate:
include:
- any_other_function_with_shell_equals_true
- assert_used
- blacklist_calls
- blacklist_import_func
- blacklist_imports
- exec_used
- execute_with_run_as_root_equals_true
- hardcoded_bind_all_interfaces
- hardcoded_password_string
- hardcoded_password_funcarg
- hardcoded_password_default
- hardcoded_sql_expressions
- hardcoded_tmp_directory
- jinja2_autoescape_false
- linux_commands_wildcard_injection
- paramiko_calls
- password_config_option_not_marked_secret
- request_with_no_cert_validation
- set_bad_file_permissions
- subprocess_popen_with_shell_equals_true
- subprocess_without_shell_equals_true
- start_process_with_a_shell
- start_process_with_no_shell
- start_process_with_partial_path
- ssl_with_bad_defaults
- ssl_with_bad_version
- ssl_with_no_version
- try_except_pass
- use_of_mako_templates
- weak_cryptographic_key
exclude_dirs:
- /tests/
shell_injection:
no_shell:
- os.execl
- os.execle
- os.execlp
- os.execlpe
- os.execv
- os.execve
- os.execvp
- os.execvpe
- os.spawnl
- os.spawnle
- os.spawnlp
- os.spawnlpe
- os.spawnv
- os.spawnve
- os.spawnvp
- os.spawnvpe
- os.startfile
shell:
- os.system
- os.popen
- os.popen2
- os.popen3
- os.popen4
- popen2.popen2
- popen2.popen3
- popen2.popen4
- popen2.Popen3
- popen2.Popen4
- commands.getoutput
- commands.getstatusoutput
subprocess:
- subprocess.Popen
- subprocess.call
- subprocess.check_call
- subprocess.check_output
- utils.execute
- utils.execute_with_timeout
ssl_with_bad_version:
bad_protocol_versions:
- PROTOCOL_SSLv2
- SSLv2_METHOD
- SSLv23_METHOD
- PROTOCOL_SSLv3
- PROTOCOL_TLSv1
- SSLv3_METHOD
- TLSv1_METHOD
try_except_pass:
check_typed_exception: true
plugin_name_pattern: '*.py'
blacklist_calls:
bad_name_sets:
- pickle:
message: 'Pickle library appears to be in use, possible security issue.
'
qualnames:
- pickle.loads
- pickle.load
- pickle.Unpickler
- cPickle.loads
- cPickle.load
- cPickle.Unpickler
- marshal:
message: 'Deserialization with the marshal module is possibly dangerous.
'
qualnames:
- marshal.load
- marshal.loads
- md5:
message: Use of insecure MD2, MD4, or MD5 hash function.
qualnames:
- hashlib.md5
- Crypto.Hash.MD2.new
- Crypto.Hash.MD4.new
- Crypto.Hash.MD5.new
- cryptography.hazmat.primitives.hashes.MD5
- ciphers:
level: HIGH
message: 'Use of insecure cipher {func}. Replace with a known secure cipher
such as AES.
'
qualnames:
- Crypto.Cipher.ARC2.new
- Crypto.Cipher.ARC4.new
- Crypto.Cipher.Blowfish.new
- Crypto.Cipher.DES.new
- Crypto.Cipher.XOR.new
- cryptography.hazmat.primitives.ciphers.algorithms.ARC4
- cryptography.hazmat.primitives.ciphers.algorithms.Blowfish
- cryptography.hazmat.primitives.ciphers.algorithms.IDEA
- cipher_modes:
message: Use of insecure cipher mode {func}.
qualnames:
- cryptography.hazmat.primitives.ciphers.modes.ECB
- mktemp_q:
message: Use of insecure and deprecated function (mktemp).
qualnames:
- tempfile.mktemp
- eval:
message: 'Use of possibly insecure function - consider using safer ast.literal_eval.
'
qualnames:
- eval
- mark_safe:
message: 'Use of mark_safe() may expose cross-site scripting vulnerabilities
and should be reviewed.
'
names:
- mark_safe
- httpsconnection:
message: 'Use of HTTPSConnection does not provide security, see https://wiki.openstack.org/wiki/OSSN/OSSN-0033
'
qualnames:
- httplib.HTTPSConnection
- http.client.HTTPSConnection
- six.moves.http_client.HTTPSConnection
- yaml_load:
message: 'Use of unsafe yaml load. Allows instantiation of arbitrary objects.
Consider yaml.safe_load().
'
qualnames:
- yaml.load
- urllib_urlopen:
message: 'Audit url open for permitted schemes. Allowing use of file:/ or custom
schemes is often unexpected.
'
qualnames:
- urllib.urlopen
- urllib.request.urlopen
- urllib.urlretrieve
- urllib.request.urlretrieve
- urllib.URLopener
- urllib.request.URLopener
- urllib.FancyURLopener
- urllib.request.FancyURLopener
- urllib2.urlopen
- urllib2.Request
- six.moves.urllib.request.urlopen
- six.moves.urllib.request.urlretrieve
- six.moves.urllib.request.URLopener
- six.moves.urllib.request.FancyURLopener
- telnetlib:
level: HIGH
message: 'Telnet-related functions are being called. Telnet is considered insecure.
Use SSH or some other encrypted protocol.
'
qualnames:
- telnetlib.*
- xml_bad_cElementTree:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- xml.etree.cElementTree.parse
- xml.etree.cElementTree.iterparse
- xml.etree.cElementTree.fromstring
- xml.etree.cElementTree.XMLParser
- xml_bad_ElementTree:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- xml.etree.ElementTree.parse
- xml.etree.ElementTree.iterparse
- xml.etree.ElementTree.fromstring
- xml.etree.ElementTree.XMLParser
- xml_bad_expatreader:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- xml.sax.expatreader.create_parser
- xml_bad_expatbuilder:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- xml.dom.expatbuilder.parse
- xml.dom.expatbuilder.parseString
- xml_bad_sax:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- xml.sax.parse
- xml.sax.parseString
- xml.sax.make_parser
- xml_bad_minidom:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- xml.dom.minidom.parse
- xml.dom.minidom.parseString
- xml_bad_pulldom:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- xml.dom.pulldom.parse
- xml.dom.pulldom.parseString
- xml_bad_etree:
message: 'Using {func} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {func} with its defusedxml equivalent function.
'
qualnames:
- lxml.etree.parse
- lxml.etree.fromstring
- lxml.etree.RestrictedElement
- lxml.etree.GlobalParserTLS
- lxml.etree.getDefaultParser
- lxml.etree.check_docinfo
hardcoded_tmp_directory:
tmp_dirs:
- /tmp
- /var/tmp
- /dev/shm
blacklist_imports:
bad_import_sets:
- telnet:
imports:
- telnetlib
level: HIGH
message: 'A telnet-related module is being imported. Telnet is considered insecure.
Use SSH or some other encrypted protocol.
'
- info_libs:
imports:
- pickle
- cPickle
- subprocess
- Crypto
level: LOW
message: 'Consider possible security implications associated with {module} module.
'
- xml_libs:
imports:
- xml.etree.cElementTree
- xml.etree.ElementTree
- xml.sax.expatreader
- xml.sax
- xml.dom.expatbuilder
- xml.dom.minidom
- xml.dom.pulldom
- lxml.etree
- lxml
level: LOW
message: 'Using {module} to parse untrusted XML data is known to be vulnerable
to XML attacks. Replace {module} with the equivalent defusedxml package.
'
- xml_libs_high:
imports:
- xmlrpclib
level: HIGH
message: 'Using {module} to parse untrusted XML data is known to be vulnerable
to XML attacks. Use defused.xmlrpc.monkey_patch() function to monkey-patch
xmlrpclib and mitigate XML vulnerabilities.
'
include:
- '*.py'
- '*.pyw'
password_config_option_not_marked_secret:
function_names:
- oslo.config.cfg.StrOpt
- oslo_config.cfg.StrOpt
hardcoded_password:
word_list: '%(site_data_dir)s/wordlist/default-passwords'
execute_with_run_as_root_equals_true:
function_names:
- ceilometer.utils.execute
- cinder.utils.execute
- neutron.agent.linux.utils.execute
- nova.utils.execute
- nova.utils.trycmd

@@ -1,7 +0,0 @@
==================
eventlet_backdoor
==================
.. automodule:: oslo_service.eventlet_backdoor
   :members:
   :show-inheritance:

@@ -1,8 +0,0 @@
=============
loopingcall
=============
.. automodule:: oslo_service.loopingcall
   :members:
   :undoc-members:
   :show-inheritance:

@@ -1,8 +0,0 @@
==============
periodic_task
==============
.. automodule:: oslo_service.periodic_task
   :members:
   :undoc-members:
   :show-inheritance:

@@ -1,7 +0,0 @@
=========
service
=========
.. automodule:: oslo_service.service
   :members:
   :show-inheritance:

@@ -1,8 +0,0 @@
==========
sslutils
==========
.. automodule:: oslo_service.sslutils
   :members:
   :undoc-members:
   :show-inheritance:

@@ -1,8 +0,0 @@
=========
systemd
=========
.. automodule:: oslo_service.systemd
   :members:
   :undoc-members:
   :show-inheritance:

@@ -1,8 +0,0 @@
=============
threadgroup
=============
.. automodule:: oslo_service.threadgroup
   :members:
   :undoc-members:
   :show-inheritance:

@@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'oslosphinx',
    'oslo_config.sphinxext',
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'oslo.service'
copyright = u'2014, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@@ -1,5 +0,0 @@
==============
Contributing
==============
.. include:: ../../CONTRIBUTING.rst

@@ -1 +0,0 @@
.. include:: ../../ChangeLog

@@ -1,48 +0,0 @@
========================================================
oslo.service -- Library for running OpenStack services
========================================================
oslo.service provides a framework for defining new long-running
services using the patterns established by other OpenStack
applications. It also includes utilities long-running applications
might need for working with SSL or WSGI, performing periodic
operations, interacting with systemd, etc.
.. toctree::
   :maxdepth: 2

   installation
   usage
   opts
   contributing
API Documentation
=================
.. toctree::
   :maxdepth: 2

   api/eventlet_backdoor
   api/loopingcall
   api/periodic_task
   api/service
   api/sslutils
   api/systemd
   api/threadgroup
Release Notes
=============
.. toctree::
   :maxdepth: 1

   history
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _oslo: https://wiki.openstack.org/wiki/Oslo

@@ -1,7 +0,0 @@
==============
Installation
==============
At the command line::

    $ pip install oslo.service

@@ -1,36 +0,0 @@
=======================
Configuration Options
=======================
oslo.service uses oslo.config to define and manage configuration options
to allow the deployer to control how an application uses this library.
periodic_task
=============
These options apply to services using the periodic task features of
oslo.service.
.. show-options:: oslo.service.periodic_task
service
=======
These options apply to services using the basic service framework.
.. show-options:: oslo.service.service
sslutils
========
These options apply to services using the SSL utilities module.
.. show-options:: oslo.service.sslutils
wsgi
====
These options apply to services using the WSGI (Web Service Gateway
Interface) module.
.. show-options:: oslo.service.wsgi

@@ -1,181 +0,0 @@
=======
Usage
=======
To use oslo.service in a project::

    import oslo_service
Migrating to oslo.service
=========================
The ``oslo.service`` library no longer assumes a global configuration
object is available. Instead, the following functions and classes have
been changed to expect the consuming application to pass in an
``oslo.config`` configuration object:
* :func:`~oslo_service.eventlet_backdoor.initialize_if_enabled`
* :py:class:`oslo_service.periodic_task.PeriodicTasks`
* :func:`~oslo_service.service.launch`
* :py:class:`oslo_service.service.ProcessLauncher`
* :py:class:`oslo_service.service.ServiceLauncher`
* :func:`~oslo_service.sslutils.is_enabled`
* :func:`~oslo_service.sslutils.wrap`
When using service from oslo-incubator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::

    from foo.openstack.common import service

    launcher = service.launch(service, workers=2)
When using oslo.service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::

    from oslo_config import cfg
    from oslo_service import service

    CONF = cfg.CONF
    launcher = service.launch(CONF, service, workers=2)
Using oslo.service with oslo-config-generator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``oslo.service`` library provides several entry points for
generating configuration files.
* :func:`oslo.service.service <oslo_service.service.list_opts>`

  The options from the service and eventlet_backdoor modules for
  the [DEFAULT] section.

* :func:`oslo.service.periodic_task <oslo_service.periodic_task.list_opts>`

  The options from the periodic_task module for the [DEFAULT] section.

* :func:`oslo.service.sslutils <oslo_service.sslutils.list_opts>`

  The options from the sslutils module for the [ssl] section.

* :func:`oslo.service.wsgi <oslo_service.wsgi.list_opts>`

  The options from the wsgi module for the [DEFAULT] section.
**ATTENTION:** The library doesn't provide an oslo.service entry point.
.. code-block:: bash

    $ oslo-config-generator --namespace oslo.service.service \
      --namespace oslo.service.periodic_task \
      --namespace oslo.service.sslutils
Launching and controlling services
==================================
oslo_service.service module provides tools for launching OpenStack
services and controlling their lifecycles.
A service is an instance of any class that
subclasses :py:class:`oslo_service.service.ServiceBase`.
:py:class:`ServiceBase <oslo_service.service.ServiceBase>` is an
abstract class that defines an interface every
service should implement. :py:class:`oslo_service.service.Service` can
serve as a base for constructing new services.
Launchers
~~~~~~~~~
The oslo_service.service module provides two launchers for running services:

* :py:class:`oslo_service.service.ServiceLauncher` - used for
  running one or more services in a parent process.
* :py:class:`oslo_service.service.ProcessLauncher` - forks a given
  number of workers in which the service(s) are then started.
It is possible to initialize whatever launcher is needed and then
launch a service using it.
::

    from oslo_config import cfg
    from oslo_service import service

    CONF = cfg.CONF

    service_launcher = service.ServiceLauncher(CONF)
    service_launcher.launch_service(service.Service())

    process_launcher = service.ProcessLauncher(CONF, wait_interval=1.0)
    process_launcher.launch_service(service.Service(), workers=2)
Or one can simply call :func:`oslo_service.service.launch`, which
automatically picks an appropriate launcher based on the number of
workers passed to it: ServiceLauncher when workers is 1 or None,
ProcessLauncher otherwise.
::

    from oslo_config import cfg
    from oslo_service import service

    CONF = cfg.CONF
    launcher = service.launch(CONF, service.Service(), workers=3)
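The selection rule can be sketched in a few lines (a simplified illustration of the behaviour described above, not oslo.service's actual code; the function name is invented here):

```python
def pick_launcher(workers):
    # ServiceLauncher runs the service(s) in the parent process;
    # ProcessLauncher forks the requested number of worker processes.
    if workers is None or workers == 1:
        return "ServiceLauncher"
    return "ProcessLauncher"


print(pick_launcher(None))  # ServiceLauncher
print(pick_launcher(3))     # ProcessLauncher
```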
*NOTE:* It is highly recommended to use no more than one instance of
the ServiceLauncher and ProcessLauncher classes per process.
Signal handling
~~~~~~~~~~~~~~~
oslo_service.service provides handlers for the SIGTERM, SIGINT and
SIGHUP signals.

SIGTERM is used for graceful termination of services. This allows a
server to wait for all clients to close their connections while
rejecting new incoming requests. The config option
graceful_shutdown_timeout specifies how many seconds the server should
continue to run after receiving SIGTERM, handling the existing
connections. Setting graceful_shutdown_timeout to zero means the
server will wait indefinitely until all remaining requests have been
fully served.
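The graceful-termination pattern described above can be sketched with stdlib primitives (an illustrative sketch only; the class and method names are invented here and are not oslo.service's API):

```python
import signal
import threading


class GracefulServer:
    """Illustrative only: on SIGTERM, stop accepting new work and give
    in-flight work up to ``timeout`` seconds to finish."""

    def __init__(self, timeout=60):
        self.timeout = timeout
        self._stopping = threading.Event()
        # Register the handler; oslo.service installs its own handlers
        # in a similar spirit.
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        # Mark the server as shutting down; existing work continues.
        self._stopping.set()

    def accepting(self):
        # New incoming requests are rejected once shutdown has begun.
        return not self._stopping.is_set()

    def drain(self, worker):
        # Let an in-flight worker thread finish, bounded by the timeout.
        worker.join(self.timeout)


srv = GracefulServer(timeout=5)
assert srv.accepting()
srv._on_sigterm(signal.SIGTERM, None)  # simulate receiving SIGTERM
assert not srv.accepting()
```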
To force instantaneous termination, send SIGINT.
On receiving SIGHUP, configuration files are reloaded and the service
is reset and started again. All child workers are then gracefully
stopped using SIGTERM, and workers with the new configuration are
spawned. Thus, SIGHUP can be used to change config options on the fly.

*NOTE:* SIGHUP is not supported on Windows.

*NOTE:* The config option graceful_shutdown_timeout is not supported
on Windows.

Below is an example of a service with a reset method that allows
reloading logging options by sending a SIGHUP.
::

    from oslo_config import cfg
    from oslo_log import log as logging
    from oslo_service import service

    CONF = cfg.CONF
    LOG = logging.getLogger(__name__)


    class FooService(service.ServiceBase):

        def start(self):
            pass

        def wait(self):
            pass

        def stop(self):
            pass

        def reset(self):
            logging.setup(cfg.CONF, 'foo')

@@ -1,41 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import eventlet.patcher
import monotonic
from oslo_log import log as logging
time = eventlet.patcher.original('time')
LOG = logging.getLogger(__name__)
if hasattr(time, 'monotonic'):
# Use builtin monotonic clock, Python 3.3+
_monotonic = time.monotonic
else:
_monotonic = monotonic.monotonic
def service_hub():
# NOTE(dims): Add a custom impl for EVENTLET_HUB, so we can
# override the clock used in the eventlet hubs. The default
# uses time.time() and we need to use a monotonic timer
# to ensure that things like loopingcall work properly.
hub = eventlet.hubs.get_default_hub().Hub()
hub.clock = _monotonic
return hub
os.environ['EVENTLET_HUB'] = 'oslo_service:service_hub'

@ -1,45 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""oslo.i18n integration module.
See http://docs.openstack.org/developer/oslo.i18n/usage.html .
"""
import oslo_i18n
DOMAIN = "oslo_service"
_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)
# The primary translation function using the well-known name "_"
_ = _translators.primary
# The contextual translation function using the name "_C"
_C = _translators.contextual_form
# The plural translation function using the name "_P"
_P = _translators.plural_form
# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical
def get_available_languages():
return oslo_i18n.get_available_languages(DOMAIN)

@ -1,118 +0,0 @@
# Copyright 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
help_for_backdoor_port = (
"Acceptable values are 0, <port>, and <start>:<end>, where 0 results "
"in listening on a random tcp port number; <port> results in listening "
"on the specified port number (and not enabling backdoor if that port "
"is in use); and <start>:<end> results in listening on the smallest "
"unused port number within the specified range of port numbers. The "
"chosen port is displayed in the service's log file.")
eventlet_backdoor_opts = [
cfg.StrOpt('backdoor_port',
help="Enable eventlet backdoor. %s" % help_for_backdoor_port),
cfg.StrOpt('backdoor_socket',
help="Enable eventlet backdoor, using the provided path"
" as a unix socket that can receive connections. This"
" option is mutually exclusive with 'backdoor_port' in"
" that only one should be provided. If both are provided"
" then the existence of this option overrides the usage of"
" that option.")
]
periodic_opts = [
cfg.BoolOpt('run_external_periodic_tasks',
default=True,
help='Some periodic tasks can be run in a separate process. '
'Should we run them here?'),
]
service_opts = [
cfg.BoolOpt('log_options',
default=True,
help='Enables or disables logging values of all registered '
'options when starting a service (at DEBUG level).'),
cfg.IntOpt('graceful_shutdown_timeout',
default=60,
help='Specify a timeout after which a gracefully shutdown '
'server will exit. Zero value means endless wait.'),
]
wsgi_opts = [
cfg.StrOpt('api_paste_config',
default="api-paste.ini",
help='File name for the paste.deploy config for api service'),
cfg.StrOpt('wsgi_log_format',
default='%(client_ip)s "%(request_line)s" status: '
'%(status_code)s len: %(body_length)s time:'
' %(wall_seconds).7f',
help='A python format string that is used as the template to '
'generate log lines. The following values can be'
'formatted into it: client_ip, date_time, request_line, '
'status_code, body_length, wall_seconds.'),
cfg.IntOpt('tcp_keepidle',
default=600,
help="Sets the value of TCP_KEEPIDLE in seconds for each "
"server socket. Not supported on OS X."),
cfg.IntOpt('wsgi_default_pool_size',
default=100,
help="Size of the pool of greenthreads used by wsgi"),
cfg.IntOpt('max_header_line',
default=16384,
help="Maximum line size of message headers to be accepted. "
"max_header_line may need to be increased when using "
"large tokens (typically those generated when keystone "
"is configured to use PKI tokens with big service "
"catalogs)."),
cfg.BoolOpt('wsgi_keep_alive',
default=True,
help="If False, closes the client socket connection "
"explicitly."),
cfg.IntOpt('client_socket_timeout', default=900,
help="Timeout for client connections' socket operations. "
"If an incoming connection is idle for this number of "
"seconds it will be closed. A value of '0' means "
"wait forever."),
]
ssl_opts = [
cfg.StrOpt('ca_file',
help="CA certificate file to use to verify "
"connecting clients.",
deprecated_group='DEFAULT',
deprecated_name='ssl_ca_file'),
cfg.StrOpt('cert_file',
help="Certificate file to use when starting "
"the server securely.",
deprecated_group='DEFAULT',
deprecated_name='ssl_cert_file'),
cfg.StrOpt('key_file',
help="Private key file to use when starting "
"the server securely.",
deprecated_group='DEFAULT',
deprecated_name='ssl_key_file'),
cfg.StrOpt('version',
help='SSL version to use (valid only if SSL enabled). '
'Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, '
'TLSv1_1, and TLSv1_2 may be available on some '
'distributions.'
),
cfg.StrOpt('ciphers',
help='Sets the list of available ciphers. value should be a '
'string in the OpenSSL cipher list format.'
),
]

@ -1,231 +0,0 @@
# Copyright (c) 2012 OpenStack Foundation.
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import errno
import gc
import logging
import os
import pprint
import socket
import sys
import traceback
import eventlet.backdoor
import greenlet
from oslo_service._i18n import _LI, _
from oslo_service import _options
LOG = logging.getLogger(__name__)
class EventletBackdoorConfigValueError(Exception):
def __init__(self, port_range, help_msg, ex):
msg = (_('Invalid backdoor_port configuration %(range)s: %(ex)s. '
'%(help)s') %
{'range': port_range, 'ex': ex, 'help': help_msg})
super(EventletBackdoorConfigValueError, self).__init__(msg)
self.port_range = port_range
def _dont_use_this():
print("Don't use this, just disconnect instead")
def _dump_frame(f, frame_chapter):
co = f.f_code
print(" %s Frame: %s" % (frame_chapter, co.co_name))
print(" File: %s" % (co.co_filename))
print(" Captured at line number: %s" % (f.f_lineno))
co_locals = set(co.co_varnames)
if len(co_locals):
not_set = co_locals.copy()
set_locals = {}
for var_name in f.f_locals.keys():
if var_name in co_locals:
set_locals[var_name] = f.f_locals[var_name]
not_set.discard(var_name)
if set_locals:
print(" %s set local variables:" % (len(set_locals)))
for var_name in sorted(set_locals.keys()):
print(" %s => %r" % (var_name, f.f_locals[var_name]))
else:
print(" 0 set local variables.")
if not_set:
print(" %s not set local variables:" % (len(not_set)))
for var_name in sorted(not_set):
print(" %s" % (var_name))
else:
print(" 0 not set local variables.")
else:
print(" 0 Local variables.")
def _detailed_dump_frames(f, thread_index):
i = 0
while f is not None:
_dump_frame(f, "%s.%s" % (thread_index, i + 1))
f = f.f_back
i += 1
def _find_objects(t):
return [o for o in gc.get_objects() if isinstance(o, t)]
def _print_greenthreads(simple=True):
for i, gt in enumerate(_find_objects(greenlet.greenlet)):
print(i, gt)
if simple:
traceback.print_stack(gt.gr_frame)
else:
_detailed_dump_frames(gt.gr_frame, i)
print()
def _print_nativethreads():
for threadId, stack in sys._current_frames().items():
print(threadId)
traceback.print_stack(stack)
print()
def _parse_port_range(port_range):
if ':' not in port_range:
start, end = port_range, port_range
else:
start, end = port_range.split(':', 1)
try:
start, end = int(start), int(end)
if end < start:
raise ValueError
return start, end
except ValueError as ex:
raise EventletBackdoorConfigValueError(
port_range, ex, _options.help_for_backdoor_port)
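The accepted syntax is easiest to see with a standalone
re-implementation of the parsing rule. This is illustrative only; the
real code above raises EventletBackdoorConfigValueError rather than
letting ValueError escape:

```python
# "0" or "<port>" denotes a single-port range; "<start>:<end>" is an
# inclusive range with end >= start.
def parse_port_range(port_range):
    if ':' not in port_range:
        start, end = port_range, port_range
    else:
        start, end = port_range.split(':', 1)
    start, end = int(start), int(end)
    if end < start:
        raise ValueError("end of range is below its start")
    return start, end

assert parse_port_range("0") == (0, 0)            # random free port
assert parse_port_range("4444") == (4444, 4444)   # one specific port
assert parse_port_range("8000:8010") == (8000, 8010)
```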
def _listen(host, start_port, end_port, listen_func):
try_port = start_port
while True:
try:
return listen_func((host, try_port))
except socket.error as exc:
if (exc.errno != errno.EADDRINUSE or
try_port >= end_port):
raise
try_port += 1
def _try_open_unix_domain_socket(socket_path):
try:
return eventlet.listen(socket_path, socket.AF_UNIX)
except socket.error as e:
if e.errno != errno.EADDRINUSE:
# NOTE(harlowja): Some other non-address in use error
# occurred, since we aren't handling those, re-raise
# and give up...
raise
else:
# Attempt to remove the file before opening it again.
try:
os.unlink(socket_path)
except OSError as e:
if e.errno != errno.ENOENT:
# NOTE(harlowja): File existed, but we couldn't
# delete it, give up...
raise
return eventlet.listen(socket_path, socket.AF_UNIX)
def _initialize_if_enabled(conf):
conf.register_opts(_options.eventlet_backdoor_opts)
backdoor_locals = {
'exit': _dont_use_this, # So we don't exit the entire process
'quit': _dont_use_this, # So we don't exit the entire process
'fo': _find_objects,
'pgt': _print_greenthreads,
'pnt': _print_nativethreads,
}
if conf.backdoor_port is None and conf.backdoor_socket is None:
return None
if conf.backdoor_socket is None:
start_port, end_port = _parse_port_range(str(conf.backdoor_port))
sock = _listen('localhost', start_port, end_port, eventlet.listen)
# In the case of backdoor port being zero, a port number is assigned by
# listen(). In any case, pull the port number out here.
where_running = sock.getsockname()[1]
else:
sock = _try_open_unix_domain_socket(conf.backdoor_socket)
where_running = conf.backdoor_socket
# NOTE(johannes): The standard sys.displayhook will print the value of
# the last expression and set it to __builtin__._, which overwrites
# the __builtin__._ that gettext sets. Let's switch to using pprint
# since it won't interact poorly with gettext, and it's easier to
# read the output too.
def displayhook(val):
if val is not None:
pprint.pprint(val)
sys.displayhook = displayhook
LOG.info(
_LI('Eventlet backdoor listening on %(where_running)s for'
' process %(pid)d'),
{'where_running': where_running, 'pid': os.getpid()}
)
thread = eventlet.spawn(eventlet.backdoor.backdoor_server, sock,
locals=backdoor_locals)
return (where_running, thread)
def initialize_if_enabled(conf):
where_running_thread = _initialize_if_enabled(conf)
if not where_running_thread:
return None
else:
where_running, _thread = where_running_thread
return where_running
def _main():
import eventlet
eventlet.monkey_patch(all=True)
from oslo_config import cfg
logging.basicConfig(level=logging.DEBUG)
conf = cfg.ConfigOpts()
conf.register_cli_opts(_options.eventlet_backdoor_opts)
conf(sys.argv[1:])
where_running_thread = _initialize_if_enabled(conf)
if not where_running_thread:
raise RuntimeError(_("Did not create backdoor at requested location"))
else:
_where_running, thread = where_running_thread
thread.wait()
if __name__ == '__main__':
# simple CLI for testing
_main()

@ -1,50 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.service 1.11.1.dev6\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-04 05:23+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 10:53+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(kind)s %(func_name)r failed"
msgstr "%(kind)s %(func_name)r failed"
#, python-format
msgid "Could not bind to %(host)s:%(port)s"
msgstr "Could not bind to %(host)s:%(port)s"
#, python-format
msgid "Couldn't lookup app: %s"
msgstr "Couldn't lookup app: %s"
msgid "Error canceling thread."
msgstr "Error cancelling thread."
#, python-format
msgid "Error during %(full_task_name)s"
msgstr "Error during %(full_task_name)s"
msgid "Error starting thread."
msgstr "Error starting thread."
msgid "Error stopping thread."
msgstr "Error stopping thread."
msgid "Error waiting on thread."
msgstr "Error waiting on thread."
msgid "Error waiting on timer."
msgstr "Error waiting on timer."
msgid "Unhandled exception"
msgstr "Unhandled exception"

@ -1,85 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.service 1.11.1.dev6\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-04 05:23+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 10:54+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(name)s listening on %(host)s:%(port)s"
msgstr "%(name)s listening on %(host)s:%(port)s"
#, python-format
msgid "%(name)s listening on %(socket_file)s:"
msgstr "%(name)s listening on %(socket_file)s:"
#, python-format
msgid "Caught %s, exiting"
msgstr "Caught %s, exiting"
#, python-format
msgid "Caught %s, stopping children"
msgstr "Caught %s, stopping children"
msgid "Caught SIGINT signal, instantaneous exiting"
msgstr "Caught SIGINT signal, instantaneous exiting"
#, python-format
msgid "Child %(pid)d killed by signal %(sig)d"
msgstr "Child %(pid)d killed by signal %(sig)d"
#, python-format
msgid "Child %(pid)s exited with status %(code)d"
msgstr "Child %(pid)s exited with status %(code)d"
#, python-format
msgid "Child caught %s, exiting"
msgstr "Child caught %s, exiting"
#, python-format
msgid "Eventlet backdoor listening on %(where_running)s for process %(pid)d"
msgstr "Eventlet backdoor listening on %(where_running)s for process %(pid)d"
msgid "Forking too fast, sleeping"
msgstr "Forking too fast, sleeping"
msgid "Graceful shutdown timeout exceeded, instantaneous exiting"
msgstr "Graceful shutdown timeout exceeded, instantaneous exiting"
msgid "Parent process has died unexpectedly, exiting"
msgstr "Parent process has died unexpectedly, exiting"
#, python-format
msgid "Skipping periodic task %(task)s because it is disabled"
msgstr "Skipping periodic task %(task)s because it is disabled"
#, python-format
msgid "Skipping periodic task %(task)s because its interval is negative"
msgstr "Skipping periodic task %(task)s because its interval is negative"
#, python-format
msgid "Starting %d workers"
msgstr "Starting %d workers"
msgid "Stopping WSGI server."
msgstr "Stopping WSGI server."
msgid "WSGI server has stopped."
msgstr "WSGI server has stopped."
msgid "Wait called after thread killed. Cleaning up."
msgstr "Wait called after thread killed. Cleaning up."
#, python-format
msgid "Waiting on %d children to exit"
msgstr "Waiting on %d children to exit"

@ -1,23 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.service 1.11.1.dev6\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-04 05:23+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 10:51+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "Function %(func_name)r run outlasted interval by %(delay).2f sec"
msgstr "Function %(func_name)r run outlasted interval by %(delay).2f sec"
#, python-format
msgid "pid %d not in child list"
msgstr "pid %d not in child list"

@ -1,124 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: oslo.service 1.11.1.dev6\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-06-04 05:23+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 10:50+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid ""
"A dynamic backoff interval looping call can only run one function at a time"
msgstr ""
"A dynamic backoff interval looping call can only run one function at a time"
msgid "A dynamic interval looping call can only run one function at a time"
msgstr "A dynamic interval looping call can only run one function at a time"
msgid ""
"A dynamic interval looping call should supply either an interval or "
"periodic_interval_max"
msgstr ""
"A dynamic interval looping call should supply either an interval or "
"periodic_interval_max"
msgid "A fixed interval looping call can only run one function at a time"
msgstr "A fixed interval looping call can only run one function at a time"
msgid "A looping call can only run one function at a time"
msgstr "A looping call can only run one function at a time"
#, python-format
msgid "Could not find config at %(path)s"
msgstr "Could not find config at %(path)s"
#, python-format
msgid "Could not load paste app '%(name)s' from %(path)s"
msgstr "Could not load paste app '%(name)s' from %(path)s"
msgid "Did not create backdoor at requested location"
msgstr "Did not create backdoor at requested location"
msgid "Dynamic backoff interval looping call"
msgstr "Dynamic backoff interval looping call"
msgid "Dynamic interval looping call"
msgstr "Dynamic interval looping call"
msgid "Fixed interval looping call"
msgstr "Fixed interval looping call"
#, python-format
msgid "Invalid SSL version : %s"
msgstr "Invalid SSL version : %s"
#, python-format
msgid "Invalid backdoor_port configuration %(range)s: %(ex)s. %(help)s"
msgstr "Invalid backdoor_port configuration %(range)s: %(ex)s. %(help)s"
#, python-format
msgid ""
"Invalid input received: Unexpected argument for periodic task creation: "
"%(arg)s."
msgstr ""
"Invalid input received: Unexpected argument for periodic task creation: "
"%(arg)s."
#, python-format
msgid "Invalid restart_method: %s"
msgstr "Invalid restart_method: %s"
msgid "Launcher asked to start multiple workers"
msgstr "Launcher asked to start multiple workers"
#, python-format
msgid "Looping call timed out after %.02f seconds"
msgstr "Looping call timed out after %.02f seconds"
msgid "Number of workers should be positive!"
msgstr "Number of workers should be positive!"
#, python-format
msgid "Service %(service)s must an instance of %(base)s!"
msgstr "Service %(service)s must an instance of %(base)s!"
msgid "The backlog must be more than 0"
msgstr "The backlog must be more than 0"
#, python-format
msgid "Unable to find ca_file : %s"
msgstr "Unable to find ca_file : %s"
#, python-format
msgid "Unable to find cert_file : %s"
msgstr "Unable to find cert_file : %s"
#, python-format
msgid "Unable to find key_file : %s"
msgstr "Unable to find key_file : %s"
#, python-format
msgid "Unexpected argument for periodic task creation: %(arg)s."
msgstr "Unexpected argument for periodic task creation: %(arg)s."
msgid "Unknown looping call"
msgstr "Unknown looping call"
#, python-format
msgid "Unsupported socket family: %s"
msgstr "Unsupported socket family: %s"
msgid ""
"When running server in SSL mode, you must specify both a cert_file and "
"key_file option value in your configuration file"
msgstr ""
"When running server in SSL mode, you must specify both a cert_file and "
"key_file option value in your configuration file"

@ -1,387 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
import sys
from eventlet import event
from eventlet import greenthread
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import reflection
from oslo_utils import timeutils
import six
from oslo_service._i18n import _LE, _LW, _
LOG = logging.getLogger(__name__)
class LoopingCallDone(Exception):
"""Exception to break out and stop a LoopingCallBase.
The poll-function passed to LoopingCallBase can raise this exception to
break out of the loop normally. This is somewhat analogous to
StopIteration.
An optional return-value can be included as the argument to the exception;
this return-value will be returned by LoopingCallBase.wait()
"""
def __init__(self, retvalue=True):
""":param retvalue: Value that LoopingCallBase.wait() should return."""
self.retvalue = retvalue
class LoopingCallTimeOut(Exception):
"""Exception for a timed out LoopingCall.
The LoopingCall will raise this exception when a timeout is provided
and it is exceeded.
"""
pass
def _safe_wrapper(f, kind, func_name):
"""Wrapper that calls into wrapped function and logs errors as needed."""
def func(*args, **kwargs):
try:
return f(*args, **kwargs)
except LoopingCallDone:
raise # let the outer handler process this
except Exception:
LOG.error(_LE('%(kind)s %(func_name)r failed'),
{'kind': kind, 'func_name': func_name},
exc_info=True)
return 0
return func
class LoopingCallBase(object):
_KIND = _("Unknown looping call")
_RUN_ONLY_ONE_MESSAGE = _("A looping call can only run one function"
" at a time")
def __init__(self, f=None, *args, **kw):
self.args = args
self.kw = kw
self.f = f
self._running = False
self._thread = None
self.done = None
def stop(self):
self._running = False
def wait(self):
return self.done.wait()
def _on_done(self, gt, *args, **kwargs):
self._thread = None
self._running = False
def _start(self, idle_for, initial_delay=None, stop_on_exception=True):
"""Start the looping
:param idle_for: Callable that takes two positional arguments, returns
how long to idle for. The first positional argument is
the last result from the function being looped and the
second positional argument is the time it took to
calculate that result.
:param initial_delay: How long to delay before starting the looping.
Value is in seconds.
:param stop_on_exception: Whether to stop if an exception occurs.
:returns: eventlet event instance
"""
if self._thread is not None:
raise RuntimeError(self._RUN_ONLY_ONE_MESSAGE)
self._running = True
self.done = event.Event()
self._thread = greenthread.spawn(
self._run_loop, idle_for,
initial_delay=initial_delay, stop_on_exception=stop_on_exception)
self._thread.link(self._on_done)
return self.done
def _run_loop(self, idle_for_func,
initial_delay=None, stop_on_exception=True):
kind = self._KIND
func_name = reflection.get_callable_name(self.f)
func = self.f if stop_on_exception else _safe_wrapper(self.f, kind,
func_name)
if initial_delay:
greenthread.sleep(initial_delay)
try:
watch = timeutils.StopWatch()
while self._running:
watch.restart()
result = func(*self.args, **self.kw)
watch.stop()
if not self._running:
break
idle = idle_for_func(result, watch.elapsed())
LOG.trace('%(kind)s %(func_name)r sleeping '
'for %(idle).02f seconds',
{'func_name': func_name, 'idle': idle,
'kind': kind})
greenthread.sleep(idle)
except LoopingCallDone as e:
self.done.send(e.retvalue)
except Exception:
exc_info = sys.exc_info()
try:
LOG.error(_LE('%(kind)s %(func_name)r failed'),
{'kind': kind, 'func_name': func_name},
exc_info=exc_info)
self.done.send_exception(*exc_info)
finally:
del exc_info
return
else:
self.done.send(True)
class FixedIntervalLoopingCall(LoopingCallBase):
"""A fixed interval looping call."""
_RUN_ONLY_ONE_MESSAGE = _("A fixed interval looping call can only run"
" one function at a time")
_KIND = _('Fixed interval looping call')
def start(self, interval, initial_delay=None, stop_on_exception=True):
def _idle_for(result, elapsed):
delay = round(elapsed - interval, 2)
if delay > 0:
func_name = reflection.get_callable_name(self.f)
LOG.warning(_LW('Function %(func_name)r run outlasted '
'interval by %(delay).2f sec'),
{'func_name': func_name, 'delay': delay})
return -delay if delay < 0 else 0
return self._start(_idle_for, initial_delay=initial_delay,
stop_on_exception=stop_on_exception)
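The _idle_for computation above reduces to a small pure function
(standalone illustration, extracted here for clarity):

```python
# Sleep for whatever is left of the interval; if the call overran the
# interval, the real code logs a warning (omitted here) and does not
# sleep at all.
def fixed_interval_idle_for(interval, elapsed):
    delay = round(elapsed - interval, 2)
    return -delay if delay < 0 else 0

assert fixed_interval_idle_for(5.0, 2.0) == 3.0  # 3 seconds left to sleep
assert fixed_interval_idle_for(5.0, 7.5) == 0    # overran: run again at once
```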
class DynamicLoopingCall(LoopingCallBase):
"""A looping call which sleeps until the next known event.
The function called should return how long to sleep for before being
called again.
"""
_RUN_ONLY_ONE_MESSAGE = _("A dynamic interval looping call can only run"
" one function at a time")
_TASK_MISSING_SLEEP_VALUE_MESSAGE = _(
"A dynamic interval looping call should supply either an"
" interval or periodic_interval_max"
)
_KIND = _('Dynamic interval looping call')
def start(self, initial_delay=None, periodic_interval_max=None,
stop_on_exception=True):
def _idle_for(suggested_delay, elapsed):
delay = suggested_delay
if delay is None:
if periodic_interval_max is not None:
delay = periodic_interval_max
else:
# Note(suro-patz): An application used to receive a
# TypeError thrown from eventlet layer, before
# this RuntimeError was introduced.
raise RuntimeError(
self._TASK_MISSING_SLEEP_VALUE_MESSAGE)
else:
if periodic_interval_max is not None:
delay = min(delay, periodic_interval_max)
return delay
return self._start(_idle_for, initial_delay=initial_delay,
stop_on_exception=stop_on_exception)
class BackOffLoopingCall(LoopingCallBase):
"""Run a method in a loop with backoff on error.
The passed in function should return True (no error, return to
initial_interval),
False (error, start backing off), or raise LoopingCallDone(retvalue=None)
(quit looping, return retvalue if set).
When there is an error, the call will backoff on each failure. The
backoff will be equal to double the previous base interval times some
jitter. If a backoff would put it over the timeout, it halts immediately,
so the call will never take more than timeout, but may and likely will
take less time.
When the function return value is True or False, the interval will be
multiplied by a random jitter. If min_jitter or max_jitter is None,
there will be no jitter (jitter=1). If min_jitter is below 0.5, the code
may not backoff and may increase its retry rate.
If func constantly returns True, this function will not return.
To run a func and wait for a call to finish (by raising a LoopingCallDone):
timer = BackOffLoopingCall(func)
response = timer.start().wait()
:param initial_delay: delay before first running of function
:param starting_interval: initial interval in seconds between calls to
function. When an error occurs and then a
success, the interval is returned to
starting_interval
:param timeout: time in seconds before a LoopingCallTimeout is raised.
The call will never take longer than timeout, but may quit
before timeout.
:param max_interval: The maximum interval between calls during errors
:param jitter: Used to vary when calls are actually run to avoid group of
calls all coming at the exact same time. Uses
random.gauss(jitter, 0.1), with jitter as the mean for the
distribution. If set below .5, it can cause the calls to
come more rapidly after each failure.
:raises: LoopingCallTimeout if time spent doing error retries would exceed
timeout.
"""
_RNG = random.SystemRandom()
_KIND = _('Dynamic backoff interval looping call')
_RUN_ONLY_ONE_MESSAGE = _("A dynamic backoff interval looping call can"
" only run one function at a time")
def __init__(self, f=None, *args, **kw):
super(BackOffLoopingCall, self).__init__(f=f, *args, **kw)
self._error_time = 0
self._interval = 1
def start(self, initial_delay=None, starting_interval=1, timeout=300,
max_interval=300, jitter=0.75):
if self._thread is not None:
raise RuntimeError(self._RUN_ONLY_ONE_MESSAGE)
# Reset any prior state.
self._error_time = 0
self._interval = starting_interval
def _idle_for(success, _elapsed):
random_jitter = self._RNG.gauss(jitter, 0.1)
if success:
# Reset error state now that it didn't error...
self._interval = starting_interval
self._error_time = 0
return self._interval * random_jitter
else:
# Perform backoff
self._interval = idle = min(
self._interval * 2 * random_jitter, max_interval)
# Don't go over timeout, end early if necessary. If
# timeout is 0, keep going.
if timeout > 0 and self._error_time + idle > timeout:
raise LoopingCallTimeOut(
_('Looping call timed out after %.02f seconds')
% self._error_time)
self._error_time += idle
return idle
return self._start(_idle_for, initial_delay=initial_delay)
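With the jitter factor pinned to 1.0, the backoff sequence produced by
_idle_for above is a simple capped doubling. This sketch is
illustrative; the real code multiplies by random.gauss(jitter, 0.1) and
also tracks the accumulated error time against the timeout:

```python
def next_backoff(current_interval, max_interval, jitter_factor=1.0):
    # Double the interval on each consecutive failure, capped at
    # max_interval; the real implementation additionally raises
    # LoopingCallTimeOut once the accumulated time would exceed timeout.
    return min(current_interval * 2 * jitter_factor, max_interval)

interval, intervals = 1, []
for _ in range(5):
    interval = next_backoff(interval, max_interval=10)
    intervals.append(interval)

assert intervals == [2, 4, 8, 10, 10]  # doubles until capped
```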
class RetryDecorator(object):
"""Decorator for retrying a function upon suggested exceptions.
The decorated function is retried for the given number of times, and the
sleep time between the retries is incremented until max sleep time is
reached. If the max retry count is set to -1, then the decorated function
is invoked indefinitely until an exception is thrown, and the caught
exception is not in the list of suggested exceptions.
"""
def __init__(self, max_retry_count=-1, inc_sleep_time=10,
max_sleep_time=60, exceptions=()):
"""Configure the retry object using the input params.
:param max_retry_count: maximum number of times the given function must
be retried when one of the input 'exceptions'
is caught. When set to -1, it will be retried
indefinitely until an exception is thrown
and the caught exception is not in param
exceptions.
:param inc_sleep_time: incremental time in seconds for sleep time
between retries
:param max_sleep_time: max sleep time in seconds beyond which the sleep
time will not be incremented using param
inc_sleep_time. On reaching this threshold,
max_sleep_time will be used as the sleep time.
:param exceptions: suggested exceptions for which the function must be
retried, if no exceptions are provided (the default)
then all exceptions will be reraised, and no
retrying will be triggered.
"""
self._max_retry_count = max_retry_count
self._inc_sleep_time = inc_sleep_time
self._max_sleep_time = max_sleep_time
self._exceptions = exceptions
self._retry_count = 0
self._sleep_time = 0
def __call__(self, f):
func_name = reflection.get_callable_name(f)
def _func(*args, **kwargs):
result = None
try:
if self._retry_count:
LOG.debug("Invoking %(func_name)s; retry count is "
"%(retry_count)d.",
{'func_name': func_name,
'retry_count': self._retry_count})
result = f(*args, **kwargs)
except self._exceptions:
with excutils.save_and_reraise_exception() as ctxt:
LOG.debug("Exception which is in the suggested list of "
"exceptions occurred while invoking function:"
" %s.",
func_name)
if (self._max_retry_count != -1 and
self._retry_count >= self._max_retry_count):
LOG.debug("Cannot retry %(func_name)s upon "
"suggested exception "
"since retry count (%(retry_count)d) "
"reached max retry count "
"(%(max_retry_count)d).",
{'retry_count': self._retry_count,
'max_retry_count': self._max_retry_count,
'func_name': func_name})
else:
ctxt.reraise = False
self._retry_count += 1
self._sleep_time += self._inc_sleep_time
return self._sleep_time
raise LoopingCallDone(result)
@six.wraps(f)
def func(*args, **kwargs):
loop = DynamicLoopingCall(_func, *args, **kwargs)
evt = loop.start(periodic_interval_max=self._max_sleep_time)
LOG.debug("Waiting for function %s to return.", func_name)
return evt.wait()
return func
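The retry semantics of RetryDecorator (retry on listed exceptions, grow the sleep incrementally up to a cap, give up after max_retry_count unless it is -1) can be mimicked without eventlet. This is a blocking sketch, not the class above; the sleep function is injectable so the behavior can be observed without real delays:

```python
import time


def retry(max_retry_count=-1, inc_sleep_time=10, max_sleep_time=60,
          exceptions=(), sleep=time.sleep):
    """Retry the wrapped function on the given exceptions, sleeping an
    incrementally longer time (capped at max_sleep_time) between tries."""
    def decorator(f):
        def wrapper(*args, **kwargs):
            retry_count = 0
            sleep_time = 0
            while True:
                try:
                    return f(*args, **kwargs)
                except exceptions:
                    if (max_retry_count != -1 and
                            retry_count >= max_retry_count):
                        raise
                    retry_count += 1
                    sleep_time = min(sleep_time + inc_sleep_time,
                                     max_sleep_time)
                    sleep(sleep_time)
        return wrapper
    return decorator
```

As in the original, an empty exceptions tuple (the default) catches nothing, so every exception is reraised and no retrying happens.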

#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import logging
import random
import time
from monotonic import monotonic as now # noqa
from oslo_utils import reflection
import six
from oslo_service._i18n import _, _LE, _LI
from oslo_service import _options
LOG = logging.getLogger(__name__)
DEFAULT_INTERVAL = 60.0
def list_opts():
"""Entry point for oslo-config-generator."""
return [(None, copy.deepcopy(_options.periodic_opts))]
class InvalidPeriodicTaskArg(Exception):
message = _("Unexpected argument for periodic task creation: %(arg)s.")
def periodic_task(*args, **kwargs):
"""Decorator to indicate that a method is a periodic task.
This decorator can be used in two ways:
1. Without arguments '@periodic_task', this will be run on the default
interval of 60 seconds.
2. With arguments:
@periodic_task(spacing=N [, run_immediately=[True|False]]
[, name=[None|"string"]])
this will be run on approximately every N seconds. If this number is
negative the periodic task will be disabled. If the run_immediately
argument is provided and has a value of 'True', the first run of the
task will be shortly after task scheduler starts. If
run_immediately is omitted or set to 'False', the first time the
task runs will be approximately N seconds after the task scheduler
starts. If name is not provided, the function's __name__ is used.
"""
def decorator(f):
# Test for old style invocation
if 'ticks_between_runs' in kwargs:
raise InvalidPeriodicTaskArg(arg='ticks_between_runs')
# Control if run at all
f._periodic_task = True
f._periodic_external_ok = kwargs.pop('external_process_ok', False)
f._periodic_enabled = kwargs.pop('enabled', True)
f._periodic_name = kwargs.pop('name', f.__name__)
# Control frequency
f._periodic_spacing = kwargs.pop('spacing', 0)
f._periodic_immediate = kwargs.pop('run_immediately', False)
if f._periodic_immediate:
f._periodic_last_run = None
else:
f._periodic_last_run = now()
return f
# NOTE(sirp): The `if` is necessary to allow the decorator to be used with
# and without parenthesis.
#
# In the 'with-parenthesis' case (with kwargs present), this function needs
# to return a decorator function since the interpreter will invoke it like:
#
# periodic_task(*args, **kwargs)(f)
#
# In the 'without-parenthesis' case, the original function will be passed
# in as the first argument, like:
#
# periodic_task(f)
if kwargs:
return decorator
else:
return decorator(args[0])
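The with/without-parentheses dispatch described in the NOTE above can be reduced to a minimal stand-alone decorator (names hypothetical): when keyword arguments are present the call returns a decorator, otherwise the function itself arrived as the first positional argument and is decorated directly.

```python
def tagged(*args, **kwargs):
    """Decorator usable both as @tagged and @tagged(name=...)."""
    def decorator(f):
        # Attach metadata, defaulting the name to the function's own.
        f._tag_name = kwargs.pop('name', f.__name__)
        return f
    if kwargs:
        # Called with arguments: the interpreter will invoke the result
        # as tagged(**kwargs)(f).
        return decorator
    # Called bare: args[0] is the function being decorated.
    return decorator(args[0])
```

Like periodic_task, this sketch does not support the empty-parentheses form `@tagged()`, since that call carries neither kwargs nor a function.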
class _PeriodicTasksMeta(type):
def _add_periodic_task(cls, task):
"""Add a periodic task to the list of periodic tasks.
The task should already be decorated by @periodic_task.
:return: whether task was actually enabled
"""
name = task._periodic_name
if task._periodic_spacing < 0:
LOG.info(_LI('Skipping periodic task %(task)s because '
'its interval is negative'),
{'task': name})
return False
if not task._periodic_enabled:
LOG.info(_LI('Skipping periodic task %(task)s because '
'it is disabled'),
{'task': name})
return False
# A periodic spacing of zero indicates that this task should
# be run on the default interval to avoid running too
# frequently.
if task._periodic_spacing == 0:
task._periodic_spacing = DEFAULT_INTERVAL
cls._periodic_tasks.append((name, task))
cls._periodic_spacing[name] = task._periodic_spacing
return True
def __init__(cls, names, bases, dict_):
"""Metaclass that allows us to collect decorated periodic tasks."""
super(_PeriodicTasksMeta, cls).__init__(names, bases, dict_)
# NOTE(sirp): if the attribute is not present then we must be the base
# class, so, go ahead and initialize it. If the attribute is present,
# then we're a subclass so make a copy of it so we don't step on our
# parent's toes.
try:
cls._periodic_tasks = cls._periodic_tasks[:]
except AttributeError:
cls._periodic_tasks = []
try:
cls._periodic_spacing = cls._periodic_spacing.copy()
except AttributeError:
cls._periodic_spacing = {}
for value in cls.__dict__.values():
if getattr(value, '_periodic_task', False):
cls._add_periodic_task(value)
def _nearest_boundary(last_run, spacing):
"""Find the nearest boundary in the past.
The boundary is a multiple of the spacing with the last run as an offset.
E.g. if the last run was 10 and spacing was 7, the new last run could be: 17, 24,
31, 38...
0% to 5% of the spacing value will be added to this value to ensure tasks
do not synchronize. This jitter is rounded down to the nearest second,
which means that spacings smaller than 20 seconds will not have jitter.
"""
current_time = now()
if last_run is None:
return current_time
delta = current_time - last_run
offset = delta % spacing
# Add up to 5% jitter
jitter = int(spacing * (random.random() / 20))
return current_time - offset + jitter
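The boundary arithmetic above is easier to check with a pure version that takes the clock and the random draw as parameters (illustrative names; the real function reads `now()` and `random.random()` itself):

```python
def nearest_boundary(last_run, spacing, current_time, rand=0.0):
    """Snap back to the most recent spacing boundary after last_run,
    plus 0-5% whole-second jitter (rand is a value in [0, 1))."""
    if last_run is None:
        return current_time
    # How far past the last boundary the current time is.
    offset = (current_time - last_run) % spacing
    # int() truncation: spacings under 20s always yield zero jitter.
    jitter = int(spacing * (rand / 20))
    return current_time - offset + jitter
```

For the docstring's example (last run 10, spacing 7) a current time of 30 snaps back to the boundary at 24.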
@six.add_metaclass(_PeriodicTasksMeta)
class PeriodicTasks(object):
def __init__(self, conf):
super(PeriodicTasks, self).__init__()
self.conf = conf
self.conf.register_opts(_options.periodic_opts)
self._periodic_last_run = {}
for name, task in self._periodic_tasks:
self._periodic_last_run[name] = task._periodic_last_run
def add_periodic_task(self, task):
"""Add a periodic task to the list of periodic tasks.
The task should already be decorated by @periodic_task.
"""
if self.__class__._add_periodic_task(task):
self._periodic_last_run[task._periodic_name] = (
task._periodic_last_run)
def run_periodic_tasks(self, context, raise_on_error=False):
"""Tasks to be run at a periodic interval."""
idle_for = DEFAULT_INTERVAL
for task_name, task in self._periodic_tasks:
if (task._periodic_external_ok and not
self.conf.run_external_periodic_tasks):
continue
cls_name = reflection.get_class_name(self, fully_qualified=False)
full_task_name = '.'.join([cls_name, task_name])
spacing = self._periodic_spacing[task_name]
last_run = self._periodic_last_run[task_name]
# Check if due, if not skip
idle_for = min(idle_for, spacing)
if last_run is not None:
delta = last_run + spacing - now()
if delta > 0:
idle_for = min(idle_for, delta)
continue
LOG.debug("Running periodic task %(full_task_name)s",
{"full_task_name": full_task_name})
self._periodic_last_run[task_name] = _nearest_boundary(
last_run, spacing)
try:
task(self, context)
except Exception:
if raise_on_error:
raise
LOG.exception(_LE("Error during %(full_task_name)s"),
{"full_task_name": full_task_name})
time.sleep(0)
return idle_for
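The per-task scheduling decision inside run_periodic_tasks (run the task if it is due, otherwise report how long until it is) can be sketched as a pure function; this is an illustration of the due-check, not part of the class above:

```python
def next_action(last_run, spacing, current_time):
    """Return ('run', 0) when a task is due, else ('idle', seconds_left).

    A task that has never run (last_run is None) is due immediately.
    """
    if last_run is None:
        return ('run', 0)
    delta = last_run + spacing - current_time
    if delta > 0:
        return ('idle', delta)
    return ('run', 0)
```

run_periodic_tasks then takes the minimum of all the 'idle' deltas (and each task's spacing) as its overall idle_for return value.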

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Generic Node base class for all workers that run on hosts."""
import abc
import collections
import copy
import errno
import io
import logging
import os
import random
import signal
import six
import sys
import time
import eventlet
from eventlet import event
from oslo_concurrency import lockutils
from oslo_service import eventlet_backdoor
from oslo_service._i18n import _LE, _LI, _LW, _
from oslo_service import _options
from oslo_service import systemd
from oslo_service import threadgroup
LOG = logging.getLogger(__name__)
_LAUNCHER_RESTART_METHODS = ['reload', 'mutate']
def list_opts():
"""Entry point for oslo-config-generator."""
return [(None, copy.deepcopy(_options.eventlet_backdoor_opts +
_options.service_opts))]
def _is_daemon():
# The process group for a foreground process will match the
# process group of the controlling terminal. If those values do
# not match, or ioctl() fails on the stdout file handle, we assume
# the process is running in the background as a daemon.
# http://www.gnu.org/software/bash/manual/bashref.html#Job-Control-Basics
try:
is_daemon = os.getpgrp() != os.tcgetpgrp(sys.stdout.fileno())
except io.UnsupportedOperation:
# Could not get the fileno for stdout, so we must be a daemon.
is_daemon = True
except OSError as err:
if err.errno == errno.ENOTTY:
# Assume we are a daemon because there is no terminal.
is_daemon = True
else:
raise
return is_daemon
def _is_sighup_and_daemon(signo):
if not (SignalHandler().is_signal_supported('SIGHUP') and
signo == signal.SIGHUP):
# Avoid checking if we are a daemon, because the signal isn't
# SIGHUP.
return False
return _is_daemon()
def _check_service_base(service):
if not isinstance(service, ServiceBase):
raise TypeError(_("Service %(service)s must be an instance of %(base)s!")
% {'service': service, 'base': ServiceBase})
@six.add_metaclass(abc.ABCMeta)
class ServiceBase(object):
"""Base class for all services."""
@abc.abstractmethod
def start(self):
"""Start service."""
@abc.abstractmethod
def stop(self):
"""Stop service."""
@abc.abstractmethod
def wait(self):
"""Wait for service to complete."""
@abc.abstractmethod
def reset(self):
"""Reset service.
Called in case service running in daemon mode receives SIGHUP.
"""
class Singleton(type):
_instances = {}
_semaphores = lockutils.Semaphores()
def __call__(cls, *args, **kwargs):
with lockutils.lock('singleton_lock', semaphores=cls._semaphores):
if cls not in cls._instances:
cls._instances[cls] = super(Singleton, cls).__call__(
*args, **kwargs)
return cls._instances[cls]
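The Singleton metaclass above relies on oslo.concurrency's lockutils; the same pattern can be shown with only the standard library (sketch with a hypothetical name, using threading.Lock in place of the named semaphore):

```python
import threading


class SingletonMeta(type):
    """Thread-safe singleton metaclass: the first call to a class
    constructs the instance, every later call returns the same object."""
    _instances = {}
    _lock = threading.Lock()

    def __call__(cls, *args, **kwargs):
        with SingletonMeta._lock:
            if cls not in SingletonMeta._instances:
                SingletonMeta._instances[cls] = super(
                    SingletonMeta, cls).__call__(*args, **kwargs)
        return SingletonMeta._instances[cls]


class Config(metaclass=SingletonMeta):
    pass
```

This is why SignalHandler below can be instantiated freely anywhere in the process: every `SignalHandler()` call yields the one shared handler registry.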
@six.add_metaclass(Singleton)
class SignalHandler(object):
def __init__(self, *args, **kwargs):
super(SignalHandler, self).__init__(*args, **kwargs)
# Map all signal names to signal integer values and create a
# reverse mapping (for easier + quick lookup).
self._ignore_signals = ('SIG_DFL', 'SIG_IGN')
self._signals_by_name = dict((name, getattr(signal, name))
for name in dir(signal)
if name.startswith("SIG")
and name not in self._ignore_signals)
self.signals_to_name = dict(
(sigval, name)
for (name, sigval) in self._signals_by_name.items())
self._signal_handlers = collections.defaultdict(set)
self.clear()
def clear(self):
for sig in self._signal_handlers:
signal.signal(sig, signal.SIG_DFL)
self._signal_handlers.clear()
def add_handlers(self, signals, handler):
for sig in signals:
self.add_handler(sig, handler)
def add_handler(self, sig, handler):
if not self.is_signal_supported(sig):
return
signo = self._signals_by_name[sig]
self._signal_handlers[signo].add(handler)
signal.signal(signo, self._handle_signal)
def _handle_signal(self, signo, frame):
# This method can be called anytime, even between two Python
# instructions. It's scheduled by the C signal handler of Python using
# Py_AddPendingCall().
#
# We only do one thing: schedule a call to _handle_signal_cb() later.
# eventlet.spawn() is not signal-safe: _handle_signal() can be called
# during a call to eventlet.spawn(). This case is supported, it is
# ok to schedule multiple calls to _handle_signal() with the same
# signal number.
#
# The call to _handle_signal_cb() is delayed to avoid reentrant calls to
# _handle_signal_cb(). It avoids race conditions like a reentrant call to
# clear(): clear() is not reentrant (bug #1538204).
eventlet.spawn(self._handle_signal_cb, signo, frame)
def _handle_signal_cb(self, signo, frame):
for handler in self._signal_handlers[signo]:
handler(signo, frame)
def is_signal_supported(self, sig_name):
return sig_name in self._signals_by_name
class Launcher(object):
"""Launch one or more services and wait for them to complete."""
def __init__(self, conf, restart_method='reload'):
"""Initialize the service launcher.
:param restart_method: If 'reload', calls reload_config_files on
SIGHUP. If 'mutate', calls mutate_config_files on SIGHUP. Other
values produce a ValueError.
:returns: None
"""
self.conf = conf
conf.register_opts(_options.service_opts)
self.services = Services()
self.backdoor_port = (
eventlet_backdoor.initialize_if_enabled(self.conf))
self.restart_method = restart_method
if restart_method not in _LAUNCHER_RESTART_METHODS:
raise ValueError(_("Invalid restart_method: %s") % restart_method)
def launch_service(self, service, workers=1):
"""Load and start the given service.
:param service: The service you would like to start, must be an
instance of :class:`oslo_service.service.ServiceBase`
:param workers: This param makes this method compatible with
ProcessLauncher.launch_service. It must be None, 1 or
omitted.
:returns: None
"""
if workers is not None and workers != 1:
raise ValueError(_("Launcher asked to start multiple workers"))
_check_service_base(service)
service.backdoor_port = self.backdoor_port
self.services.add(service)
def stop(self):
"""Stop all services which are currently running.
:returns: None
"""
self.services.stop()
def wait(self):
"""Wait until all services have been stopped, and then return.
:returns: None
"""
self.services.wait()
def restart(self):
"""Reload config files and restart service.
:returns: The return value from reload_config_files or
mutate_config_files, according to the restart_method.
"""
if self.restart_method == 'reload':
self.conf.reload_config_files()
elif self.restart_method == 'mutate':
self.conf.mutate_config_files()
self.services.restart()
class SignalExit(SystemExit):
def __init__(self, signo, exccode=1):
super(SignalExit, self).__init__(exccode)
self.signo = signo
class ServiceLauncher(Launcher):
"""Runs one or more services in a parent process."""
def __init__(self, conf, restart_method='reload'):
"""Constructor.
:param conf: an instance of ConfigOpts
:param restart_method: passed to super
"""
super(ServiceLauncher, self).__init__(
conf, restart_method=restart_method)
self.signal_handler = SignalHandler()
def _graceful_shutdown(self, *args):
self.signal_handler.clear()
if (self.conf.graceful_shutdown_timeout and
self.signal_handler.is_signal_supported('SIGALRM')):
signal.alarm(self.conf.graceful_shutdown_timeout)
self.stop()
def _reload_service(self, *args):
self.signal_handler.clear()
raise SignalExit(signal.SIGHUP)
def _fast_exit(self, *args):
LOG.info(_LI('Caught SIGINT signal, instantaneous exiting'))
os._exit(1)
def _on_timeout_exit(self, *args):
LOG.info(_LI('Graceful shutdown timeout exceeded, '
'instantaneous exiting'))
os._exit(1)
def handle_signal(self):
"""Set self._handle_signal as a signal handler."""
self.signal_handler.add_handler('SIGTERM', self._graceful_shutdown)
self.signal_handler.add_handler('SIGINT', self._fast_exit)
self.signal_handler.add_handler('SIGHUP', self._reload_service)
self.signal_handler.add_handler('SIGALRM', self._on_timeout_exit)
def _wait_for_exit_or_signal(self):
status = None
signo = 0
if self.conf.log_options:
LOG.debug('Full set of CONF:')
self.conf.log_opt_values(LOG, logging.DEBUG)
try:
super(ServiceLauncher, self).wait()
except SignalExit as exc:
signame = self.signal_handler.signals_to_name[exc.signo]
LOG.info(_LI('Caught %s, exiting'), signame)
status = exc.code
signo = exc.signo
except SystemExit as exc:
self.stop()
status = exc.code
except Exception:
self.stop()
return status, signo
def wait(self):
"""Wait for a service to terminate and restart it on SIGHUP.
:returns: termination status
"""
systemd.notify_once()
self.signal_handler.clear()
while True:
self.handle_signal()
status, signo = self._wait_for_exit_or_signal()
if not _is_sighup_and_daemon(signo):
break
self.restart()
super(ServiceLauncher, self).wait()
return status
class ServiceWrapper(object):
def __init__(self, service, workers):
self.service = service
self.workers = workers
self.children = set()
self.forktimes = []
class ProcessLauncher(object):
"""Launch a service with a given number of workers."""
def __init__(self, conf, wait_interval=0.01, restart_method='reload'):
"""Constructor.
:param conf: an instance of ConfigOpts
:param wait_interval: The interval to sleep for between checks
of child process exit.
:param restart_method: If 'reload', calls reload_config_files on
SIGHUP. If 'mutate', calls mutate_config_files on SIGHUP. Other
values produce a ValueError.
"""
self.conf = conf
conf.register_opts(_options.service_opts)
self.children = {}
self.sigcaught = None
self.running = True
self.wait_interval = wait_interval
self.launcher = None
rfd, self.writepipe = os.pipe()
self.readpipe = eventlet.greenio.GreenPipe(rfd, 'r')
self.signal_handler = SignalHandler()
self.handle_signal()
self.restart_method = restart_method
if restart_method not in _LAUNCHER_RESTART_METHODS:
raise ValueError(_("Invalid restart_method: %s") % restart_method)
def handle_signal(self):
"""Add instance's signal handlers to class handlers."""
self.signal_handler.add_handlers(('SIGTERM', 'SIGHUP'),
self._handle_signal)
self.signal_handler.add_handler('SIGINT', self._fast_exit)
self.signal_handler.add_handler('SIGALRM', self._on_alarm_exit)
def _handle_signal(self, signo, frame):
"""Set signal handlers.
:param signo: signal number
:param frame: current stack frame
"""
self.sigcaught = signo
self.running = False
# Allow the process to be killed again and die from natural causes
self.signal_handler.clear()
def _fast_exit(self, signo, frame):
LOG.info(_LI('Caught SIGINT signal, instantaneous exiting'))
os._exit(1)
def _on_alarm_exit(self, signo, frame):
LOG.info(_LI('Graceful shutdown timeout exceeded, '
'instantaneous exiting'))
os._exit(1)
def _pipe_watcher(self):
# This will block until the write end is closed when the parent
# dies unexpectedly
self.readpipe.read(1)
LOG.info(_LI('Parent process has died unexpectedly, exiting'))
if self.launcher:
self.launcher.stop()
sys.exit(1)
def _child_process_handle_signal(self):
# Setup child signal handlers differently
def _sigterm(*args):
self.signal_handler.clear()
self.launcher.stop()
def _sighup(*args):
self.signal_handler.clear()
raise SignalExit(signal.SIGHUP)
self.signal_handler.clear()
# Parent signals with SIGTERM when it wants us to go away.
self.signal_handler.add_handler('SIGTERM', _sigterm)
self.signal_handler.add_handler('SIGHUP', _sighup)
self.signal_handler.add_handler('SIGINT', self._fast_exit)
def _child_wait_for_exit_or_signal(self, launcher):
status = 0
signo = 0
# NOTE(johannes): All exceptions are caught to ensure this
# doesn't fallback into the loop spawning children. It would
# be bad for a child to spawn more children.
try:
launcher.wait()
except SignalExit as exc:
signame = self.signal_handler.signals_to_name[exc.signo]
LOG.info(_LI('Child caught %s, exiting'), signame)
status = exc.code
signo = exc.signo
except SystemExit as exc:
status = exc.code
except BaseException:
LOG.exception(_LE('Unhandled exception'))
status = 2
return status, signo
def _child_process(self, service):
self._child_process_handle_signal()
# Reopen the eventlet hub to make sure we don't share an epoll
# fd with parent and/or siblings, which would be bad
eventlet.hubs.use_hub()
# Close write to ensure only parent has it open
os.close(self.writepipe)
# Create greenthread to watch for parent to close pipe
eventlet.spawn_n(self._pipe_watcher)
# Reseed random number generator
random.seed()
launcher = Launcher(self.conf, restart_method=self.restart_method)
launcher.launch_service(service)
return launcher
def _start_child(self, wrap):
if len(wrap.forktimes) > wrap.workers:
# Limit ourselves to one process a second (over the period of
# number of workers * 1 second). This will allow workers to
# start up quickly but ensure we don't keep forking children
# that die instantly.
if time.time() - wrap.forktimes[0] < wrap.workers:
LOG.info(_LI('Forking too fast, sleeping'))
time.sleep(1)
wrap.forktimes.pop(0)
wrap.forktimes.append(time.time())
pid = os.fork()
if pid == 0:
self.launcher = self._child_process(wrap.service)
while True:
self._child_process_handle_signal()
status, signo = self._child_wait_for_exit_or_signal(
self.launcher)
if not _is_sighup_and_daemon(signo):
self.launcher.wait()
break
self.launcher.restart()
os._exit(status)
LOG.debug('Started child %d', pid)
wrap.children.add(pid)
self.children[pid] = wrap
return pid
def launch_service(self, service, workers=1):
"""Launch a service with a given number of workers.
:param service: a service to launch, must be an instance of
:class:`oslo_service.service.ServiceBase`
:param workers: a number of processes in which a service
will be running
"""
_check_service_base(service)
wrap = ServiceWrapper(service, workers)
LOG.info(_LI('Starting %d workers'), wrap.workers)
while self.running and len(wrap.children) < wrap.workers:
self._start_child(wrap)
def _wait_child(self):
try:
# Don't block if no child processes have exited
pid, status = os.waitpid(0, os.WNOHANG)
if not pid:
return None
except OSError as exc:
if exc.errno not in (errno.EINTR, errno.ECHILD):
raise
return None
if os.WIFSIGNALED(status):
sig = os.WTERMSIG(status)
LOG.info(_LI('Child %(pid)d killed by signal %(sig)d'),
dict(pid=pid, sig=sig))
else:
code = os.WEXITSTATUS(status)
LOG.info(_LI('Child %(pid)s exited with status %(code)d'),
dict(pid=pid, code=code))
if pid not in self.children:
LOG.warning(_LW('pid %d not in child list'), pid)
return None
wrap = self.children.pop(pid)
wrap.children.remove(pid)
return wrap
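The status decoding in _wait_child follows the traditional POSIX wait-status layout (exit code in the high byte, terminating signal in the low bits). A small illustrative helper, using the same os macros:

```python
import os


def describe_wait_status(status):
    """Describe a raw os.waitpid() status the way _wait_child logs it."""
    if os.WIFSIGNALED(status):
        return 'killed by signal %d' % os.WTERMSIG(status)
    return 'exited with status %d' % os.WEXITSTATUS(status)
```

The raw encodings in the usage below assume the conventional Unix layout; on POSIX platforms `3 << 8` is a clean exit with code 3, while a bare low value like 9 means death by SIGKILL.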
def _respawn_children(self):
while self.running:
wrap = self._wait_child()
if not wrap:
# Yield to other threads if no children have exited
# Sleep for a short time to avoid excessive CPU usage
# (see bug #1095346)
eventlet.greenthread.sleep(self.wait_interval)
continue
while self.running and len(wrap.children) < wrap.workers:
self._start_child(wrap)
def wait(self):
"""Loop waiting on children to die and respawning as necessary."""
systemd.notify_once()
if self.conf.log_options:
LOG.debug('Full set of CONF:')
self.conf.log_opt_values(LOG, logging.DEBUG)
try:
while True:
self.handle_signal()
self._respawn_children()
# No signal means that stop was called. Don't clean up here.
if not self.sigcaught:
return
signame = self.signal_handler.signals_to_name[self.sigcaught]
LOG.info(_LI('Caught %s, stopping children'), signame)
if not _is_sighup_and_daemon(self.sigcaught):
break
if self.restart_method == 'reload':
self.conf.reload_config_files()
elif self.restart_method == 'mutate':
self.conf.mutate_config_files()
for service in set(
[wrap.service for wrap in self.children.values()]):
service.reset()
for pid in self.children:
os.kill(pid, signal.SIGTERM)
self.running = True
self.sigcaught = None
except eventlet.greenlet.GreenletExit:
LOG.info(_LI("Wait called after thread killed. Cleaning up."))
# if we are here it means that we are trying to do graceful shutdown.
# add alarm watching that graceful_shutdown_timeout is not exceeded
if (self.conf.graceful_shutdown_timeout and
self.signal_handler.is_signal_supported('SIGALRM')):
signal.alarm(self.conf.graceful_shutdown_timeout)
self.stop()
def stop(self):
"""Terminate child processes and wait on each."""
self.running = False
LOG.debug("Stop services.")
for service in set(
[wrap.service for wrap in self.children.values()]):
service.stop()
LOG.debug("Killing children.")
for pid in self.children:
try:
os.kill(pid, signal.SIGTERM)
except OSError as exc:
if exc.errno != errno.ESRCH:
raise
# Wait for children to die
if self.children:
LOG.info(_LI('Waiting on %d children to exit'), len(self.children))
while self.children:
self._wait_child()
class Service(ServiceBase):
"""Service object for binaries running on hosts."""
def __init__(self, threads=1000):
self.tg = threadgroup.ThreadGroup(threads)
def reset(self):
"""Reset a service in case it received a SIGHUP."""
def start(self):
"""Start a service."""
def stop(self, graceful=False):
"""Stop a service.
:param graceful: indicates whether to wait for all threads to finish
or terminate them instantly
"""
self.tg.stop(graceful)
def wait(self):
"""Wait for a service to shut down."""
self.tg.wait()
class Services(object):
def __init__(self):
self.services = []
self.tg = threadgroup.ThreadGroup()
self.done = event.Event()
def add(self, service):
"""Add a service to a list and create a thread to run it.
:param service: service to run
"""
self.services.append(service)
self.tg.add_thread(self.run_service, service, self.done)
def stop(self):
"""Wait for graceful shutdown of services and kill the threads."""
for service in self.services:
service.stop()
# Each service has performed cleanup, now signal that the run_service
# wrapper threads can now die:
if not self.done.ready():
self.done.send()
# reap threads:
self.tg.stop()
def wait(self):
"""Wait for services to shut down."""
for service in self.services:
service.wait()
self.tg.wait()
def restart(self):
"""Reset services and start them in new threads."""
self.stop()
self.done = event.Event()
for restart_service in self.services:
restart_service.reset()
self.tg.add_thread(self.run_service, restart_service, self.done)
@staticmethod
def run_service(service, done):
"""Service start wrapper.
:param service: service to run
:param done: event to wait on until a shutdown is triggered
:returns: None
"""
try:
service.start()
except Exception:
LOG.exception(_LE('Error starting thread.'))
raise SystemExit(1)
else:
done.wait()
def launch(conf, service, workers=1, restart_method='reload'):
"""Launch a service with a given number of workers.
:param conf: an instance of ConfigOpts
:param service: a service to launch, must be an instance of
:class:`oslo_service.service.ServiceBase`
:param workers: a number of processes in which a service will be running
:param restart_method: Passed to the constructed launcher. If 'reload', the
launcher will call reload_config_files on SIGHUP. If 'mutate', it will
call mutate_config_files on SIGHUP. Other values produce a ValueError.
:returns: instance of a launcher that was used to launch the service
"""
if workers is not None and workers <= 0:
raise ValueError(_("Number of workers should be positive!"))
if workers is None or workers == 1:
launcher = ServiceLauncher(conf, restart_method=restart_method)
else:
launcher = ProcessLauncher(conf, restart_method=restart_method)
launcher.launch_service(service, workers=workers)
return launcher
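The launcher-selection rule in launch() is worth isolating: a None or single worker count gets a ServiceLauncher (everything in the parent process), anything greater a ProcessLauncher, and non-positive counts are rejected. A pure sketch of just that dispatch (returning names rather than constructing launchers):

```python
def pick_launcher(workers):
    """Mirror launch()'s dispatch on the workers argument."""
    if workers is not None and workers <= 0:
        raise ValueError("Number of workers should be positive!")
    if workers is None or workers == 1:
        return 'ServiceLauncher'
    return 'ProcessLauncher'
```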

# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import os
import ssl
from oslo_service._i18n import _
from oslo_service import _options
config_section = 'ssl'
_SSL_PROTOCOLS = {
"tlsv1": ssl.PROTOCOL_TLSv1,
"sslv23": ssl.PROTOCOL_SSLv23
}
_OPTIONAL_PROTOCOLS = {
'sslv2': 'PROTOCOL_SSLv2',
'sslv3': 'PROTOCOL_SSLv3',
'tlsv1_1': 'PROTOCOL_TLSv1_1',
'tlsv1_2': 'PROTOCOL_TLSv1_2',
}
for protocol in _OPTIONAL_PROTOCOLS:
try:
_SSL_PROTOCOLS[protocol] = getattr(ssl,
_OPTIONAL_PROTOCOLS[protocol])
except AttributeError: # nosec
pass
def list_opts():
"""Entry point for oslo-config-generator."""
return [(config_section, copy.deepcopy(_options.ssl_opts))]
def register_opts(conf):
"""Registers sslutils config options."""
return conf.register_opts(_options.ssl_opts, config_section)
def is_enabled(conf):
conf.register_opts(_options.ssl_opts, config_section)
cert_file = conf.ssl.cert_file
key_file = conf.ssl.key_file
ca_file = conf.ssl.ca_file
use_ssl = cert_file or key_file
if cert_file and not os.path.exists(cert_file):
raise RuntimeError(_("Unable to find cert_file : %s") % cert_file)
if ca_file and not os.path.exists(ca_file):
raise RuntimeError(_("Unable to find ca_file : %s") % ca_file)
if key_file and not os.path.exists(key_file):
raise RuntimeError(_("Unable to find key_file : %s") % key_file)
if use_ssl and (not cert_file or not key_file):
raise RuntimeError(_("When running server in SSL mode, you must "
"specify both a cert_file and key_file "
"option value in your configuration file"))
return use_ssl
def wrap(conf, sock):
conf.register_opts(_options.ssl_opts, config_section)
ssl_kwargs = {
'server_side': True,
'certfile': conf.ssl.cert_file,
'keyfile': conf.ssl.key_file,
'cert_reqs': ssl.CERT_NONE,
}
if conf.ssl.ca_file:
ssl_kwargs['ca_certs'] = conf.ssl.ca_file
ssl_kwargs['cert_reqs'] = ssl.CERT_REQUIRED
if conf.ssl.version:
key = conf.ssl.version.lower()
try:
ssl_kwargs['ssl_version'] = _SSL_PROTOCOLS[key]
except KeyError:
raise RuntimeError(
_("Invalid SSL version : %s") % conf.ssl.version)
if conf.ssl.ciphers:
ssl_kwargs['ciphers'] = conf.ssl.ciphers
# NOTE(eezhova): SSL/TLS protocol version is injected in ssl_kwargs above,
# so skipping bandit check
return ssl.wrap_socket(sock, **ssl_kwargs) # nosec
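The enable rule in is_enabled() above reduces to: SSL is on if either file option is set, but then both must be present. A minimal stand-alone sketch of just that rule (hypothetical name, file-existence checks omitted):

```python
def ssl_mode(cert_file, key_file):
    """Return True when SSL should be enabled; raise if the
    configuration is half-complete."""
    use_ssl = bool(cert_file or key_file)
    if use_ssl and not (cert_file and key_file):
        raise RuntimeError("When running server in SSL mode, you must "
                           "specify both a cert_file and key_file")
    return use_ssl
```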

# Copyright 2012-2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Helper module for systemd service readiness notification.
"""
import contextlib
import logging
import os
import socket
import sys
LOG = logging.getLogger(__name__)
def _abstractify(socket_name):
if socket_name.startswith('@'):
# abstract namespace socket
socket_name = '\0%s' % socket_name[1:]
return socket_name
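systemd spells Linux abstract-namespace socket addresses with a leading '@', which the socket API represents as a leading NUL byte; _abstractify performs that translation. A stand-alone copy for illustration:

```python
def abstractify(socket_name):
    """Translate systemd's '@name' spelling of an abstract-namespace
    socket into the NUL-prefixed form the socket API expects."""
    if socket_name.startswith('@'):
        return '\0' + socket_name[1:]
    return socket_name
```

Ordinary filesystem socket paths pass through unchanged.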
def _sd_notify(unset_env, msg):
notify_socket = os.getenv('NOTIFY_SOCKET')
if notify_socket:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
with contextlib.closing(sock):
try:
sock.connect(_abstractify(notify_socket))
sock.sendall(msg)
if unset_env:
del os.environ['NOTIFY_SOCKET']
except EnvironmentError:
LOG.debug("Systemd notification failed", exc_info=True)
def notify():
"""Send notification to Systemd that service is ready.
For details see
http://www.freedesktop.org/software/systemd/man/sd_notify.html
"""
_sd_notify(False, b'READY=1')
def notify_once():
"""Send notification once to Systemd that service is ready.
Systemd sets NOTIFY_SOCKET environment variable with the name of the
socket listening for notifications from services.
This method removes the NOTIFY_SOCKET environment variable to ensure
notification is sent only once.
"""
_sd_notify(True, b'READY=1')
def onready(notify_socket, timeout):
"""Wait for systemd style notification on the socket.
:param notify_socket: local socket address
:type notify_socket: string
:param timeout: socket timeout
:type timeout: float
:returns: 0 service ready
1 service not ready
2 timeout occurred
"""
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.settimeout(timeout)
sock.bind(_abstractify(notify_socket))
with contextlib.closing(sock):
try:
msg = sock.recv(512)
except socket.timeout:
return 2
if b'READY=1' == msg:
return 0
else:
return 1
if __name__ == '__main__':
# simple CLI for testing
if len(sys.argv) == 1:
notify()
elif len(sys.argv) >= 2:
timeout = float(sys.argv[1])
notify_socket = os.getenv('NOTIFY_SOCKET')
if notify_socket:
retval = onready(notify_socket, timeout)
sys.exit(retval)
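The notification protocol is just a datagram on a Unix socket. Here is a stdlib-only round trip, using a filesystem socket in a temporary directory in place of the abstract-namespace socket systemd normally provides; `sd_notify` is a hypothetical re-implementation of `_sd_notify` for illustration:

```python
import contextlib
import os
import socket
import tempfile


def sd_notify(notify_socket, msg=b'READY=1'):
    # Same send path as _sd_notify above, minus the NOTIFY_SOCKET lookup.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    with contextlib.closing(sock):
        sock.connect(notify_socket)
        sock.sendall(msg)


# Play the systemd side: bind a datagram socket and wait for READY=1,
# much like onready() does.
path = os.path.join(tempfile.mkdtemp(), 'notify.sock')
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(path)
server.settimeout(5.0)
sd_notify(path)
received = server.recv(512)
server.close()
```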


@ -1,27 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import eventlet
if os.name == 'nt':
# eventlet monkey patching the os and thread modules causes
# subprocess.Popen to fail on Windows when using pipes due
# to missing non-blocking IO support.
#
# bug report on eventlet:
# https://bitbucket.org/eventlet/eventlet/issue/132/
# eventletmonkey_patch-breaks
eventlet.monkey_patch(os=False, thread=False)
else:
eventlet.monkey_patch()
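The decision the shim above makes can be isolated without importing eventlet; `patch_kwargs` is a hypothetical helper returning the keyword arguments that would be passed to `eventlet.monkey_patch()` on the current platform:

```python
import os


def patch_kwargs():
    # On Windows ('nt'), patching os and thread breaks subprocess pipes,
    # so those modules are excluded; elsewhere everything is patched.
    if os.name == 'nt':
        return {'os': False, 'thread': False}
    return {}
```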


@ -1,73 +0,0 @@
# Copyright 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
from oslo_config import fixture as config
from oslotest import base as test_base
from oslo_service import _options
from oslo_service import sslutils
class ServiceBaseTestCase(test_base.BaseTestCase):
def setUp(self):
super(ServiceBaseTestCase, self).setUp()
self.conf_fixture = self.useFixture(config.Config())
self.conf_fixture.register_opts(_options.eventlet_backdoor_opts)
self.conf_fixture.register_opts(_options.service_opts)
self.conf_fixture.register_opts(_options.ssl_opts,
sslutils.config_section)
self.conf_fixture.register_opts(_options.periodic_opts)
self.conf_fixture.register_opts(_options.wsgi_opts)
self.conf = self.conf_fixture.conf
self.config = self.conf_fixture.config
self.conf(args=[], default_config_files=[])
def get_new_temp_dir(self):
"""Create a new temporary directory.
:returns: fixtures.TempDir
"""
return self.useFixture(fixtures.TempDir())
def get_default_temp_dir(self):
"""Create a default temporary directory.
Returns the same directory during the whole test case.
:returns: fixtures.TempDir
"""
if not hasattr(self, '_temp_dir'):
self._temp_dir = self.get_new_temp_dir()
return self._temp_dir
def get_temp_file_path(self, filename, root=None):
"""Returns an absolute path for a temporary file.
If root is None, the file is created in default temporary directory. It
also creates the directory if it's not initialized yet.
If root is not None, the file is created inside the directory passed as
root= argument.
:param filename: filename
:type filename: string
:param root: temporary directory to create a new file in
:type root: fixtures.TempDir
:returns: absolute file path string
"""
root = root or self.get_default_temp_dir()
return root.join(filename)
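The caching behaviour of get_default_temp_dir() can be shown without the fixtures library; `TempPathHelper` below is a hypothetical stdlib-only analogue of the helpers above:

```python
import os
import tempfile


class TempPathHelper:
    """Stdlib-only sketch of the test base's temp-file helpers."""

    def get_new_temp_dir(self):
        return tempfile.mkdtemp()

    def get_default_temp_dir(self):
        # Cache the directory so repeated calls share it, as the test
        # base does with its fixtures.TempDir.
        if not hasattr(self, '_temp_dir'):
            self._temp_dir = self.get_new_temp_dir()
        return self._temp_dir

    def get_temp_file_path(self, filename, root=None):
        root = root or self.get_default_temp_dir()
        return os.path.join(root, filename)
```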


@ -1,145 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# An eventlet server that runs a service.py pool.
# Listens on a random port; the port number is reported to the caller.
import socket
import sys
import eventlet.wsgi
import greenlet
from oslo_config import cfg
from oslo_service import service
POOL_SIZE = 1
class Server(service.ServiceBase):
"""Server class to manage multiple WSGI sockets and applications."""
def __init__(self, application, host=None, port=None, keepalive=False,
keepidle=None):
self.application = application
self.host = host or '0.0.0.0'
self.port = port or 0
# Pool for a green thread in which wsgi server will be running
self.pool = eventlet.GreenPool(POOL_SIZE)
self.socket_info = {}
self.greenthread = None
self.keepalive = keepalive
self.keepidle = keepidle
self.socket = None
def listen(self, key=None, backlog=128):
"""Create and start listening on socket.
Call before forking worker processes.
Raises Exception if this has already been called.
"""
# TODO(dims): eventlet's green dns/socket module does not actually
# support IPv6 in getaddrinfo(). We need to get around this in the
# future or monitor upstream for a fix.
# Please refer below link
# (https://bitbucket.org/eventlet/eventlet/
# src/e0f578180d7d82d2ed3d8a96d520103503c524ec/eventlet/support/
# greendns.py?at=0.12#cl-163)
info = socket.getaddrinfo(self.host,
self.port,
socket.AF_UNSPEC,
socket.SOCK_STREAM)[0]
self.socket = eventlet.listen(info[-1], family=info[0],
backlog=backlog)
def start(self, key=None, backlog=128):
"""Run a WSGI server with the given application."""
if self.socket is None:
self.listen(key=key, backlog=backlog)
dup_socket = self.socket.dup()
if key:
self.socket_info[key] = self.socket.getsockname()
# Optionally enable keepalive on the wsgi socket.
if self.keepalive:
dup_socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if self.keepidle is not None:
dup_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE,
self.keepidle)
self.greenthread = self.pool.spawn(self._run,
self.application,
dup_socket)
def stop(self):
if self.greenthread is not None:
self.greenthread.kill()
def wait(self):
"""Wait until all servers have completed running."""
try:
self.pool.waitall()
except KeyboardInterrupt:
pass
except greenlet.GreenletExit:
pass
def reset(self):
"""Required by the service interface.
The service interface is used by the launcher when receiving a
SIGHUP. The service interface is defined in
oslo_service.Service.
Test server does not need to do anything here.
"""
pass
def _run(self, application, socket):
"""Start a WSGI server with a new green thread pool."""
try:
eventlet.wsgi.server(socket, application, debug=False)
except greenlet.GreenletExit:
# Wait until all servers have completed running
pass
def run(port_queue, workers=3):
eventlet.patcher.monkey_patch()
def hi_app(environ, start_response):
start_response('200 OK', [('Content-Type', 'application/json')])
yield 'hi'
server = Server(hi_app)
server.listen()
launcher = service.launch(cfg.CONF, server, workers)
port = server.socket.getsockname()[1]
port_queue.put(port)
sys.stdout.flush()
launcher.wait()
if __name__ == '__main__':
# run() requires a queue to report the chosen port on
import multiprocessing
run(multiprocessing.Queue())
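The listen-on-port-0-and-report-it pattern used by run() does not need eventlet; here is a stdlib sketch with wsgiref and a thread (the names `hi_app` aside, everything here is illustrative rather than part of the module):

```python
import queue
import threading
import urllib.request
from wsgiref.simple_server import make_server


def hi_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hi']


def serve_one(port_queue):
    # Port 0 asks the OS for any free port, as Server(port=0) does above.
    server = make_server('127.0.0.1', 0, hi_app)
    port_queue.put(server.server_port)
    server.handle_request()  # serve a single request, then return
    server.server_close()


q = queue.Queue()
t = threading.Thread(target=serve_one, args=(q,))
t.start()
port = q.get(timeout=5)
body = urllib.request.urlopen('http://127.0.0.1:%d/' % port).read()
t.join(timeout=5)
```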


@ -1,40 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIHADCCBOigAwIBAgIJAOjPGLL9VDhjMA0GCSqGSIb3DQEBDQUAMIGwMQswCQYD
VQQGEwJVUzEOMAwGA1UECBMFVGV4YXMxDzANBgNVBAcTBkF1c3RpbjEdMBsGA1UE
ChMUT3BlblN0YWNrIEZvdW5kYXRpb24xHTAbBgNVBAsTFE9wZW5TdGFjayBEZXZl
bG9wZXJzMRAwDgYDVQQDEwdUZXN0IENBMTAwLgYJKoZIhvcNAQkBFiFvcGVuc3Rh
Y2stZGV2QGxpc3RzLm9wZW5zdGFjay5vcmcwHhcNMTUwMTA4MDIyOTEzWhcNMjUw
MTA4MDIyOTEzWjCBsDELMAkGA1UEBhMCVVMxDjAMBgNVBAgTBVRleGFzMQ8wDQYD
VQQHEwZBdXN0aW4xHTAbBgNVBAoTFE9wZW5TdGFjayBGb3VuZGF0aW9uMR0wGwYD
VQQLExRPcGVuU3RhY2sgRGV2ZWxvcGVyczEQMA4GA1UEAxMHVGVzdCBDQTEwMC4G
CSqGSIb3DQEJARYhb3BlbnN0YWNrLWRldkBsaXN0cy5vcGVuc3RhY2sub3JnMIIC
IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAwILIMebpHYK1E1zhyi6713GG
TQ9DFeLOE1T25+XTJqAkO7efQzZfB8QwCXy/8bmbhmKgQQ7APuuDci8SKCkYeWCx
qJRGmg0tZVlj5gCfrV2u+olwS+XyaOGCFkYScs6D34BaE2rGD2GDryoSPc2feAt6
X4+ZkDPZnvaHQP6j9Ofq/4WmsECEas0IO5X8SDF8afA47U9ZXFkcgQK6HCHDcokL
aaZxEyZFSaPex6ZAESNthkGOxEThRPxAkJhqYCeMl3Hff98XEUcFNzuAOmcnQJJg
RemwJO2hS5KS3Y3p9/nBRlh3tSAG1nbY5kXSpyaq296D9x/esnXlt+9JUmn1rKyv
maFBC/SbzyyQoO3MT5r8rKte0bulLw1bZOZNlhxSv2KCg5RD6vlNrnpsZszw4nj2
8fBroeFp0JMeT8jcqGs3qdm8sXLcBgiTalLYtiCNV9wZjOduQotuFN6mDwZvfa6h
zZjcBNfqeLyTEnFb5k6pIla0wydWx/jvBAzoxOkEcVjak747A+p/rriD5hVUBH0B
uNaWcEgKe9jcHnLvU8hUxFtgPxUHOOR+eMa+FS3ApKf9sJ/zVUq0uxyA9hUnsvnq
v/CywLSvaNKBiKQTL0QLEXnw6EQb7g/XuwC5mmt+l30wGh9M1U/QMaU/+YzT4sVL
TXIHJ7ExRTbEecbNbjsCAwEAAaOCARkwggEVMB0GA1UdDgQWBBQTWz2WEB0sJg9c
xfM5JeJMIAJq0jCB5QYDVR0jBIHdMIHagBQTWz2WEB0sJg9cxfM5JeJMIAJq0qGB
tqSBszCBsDELMAkGA1UEBhMCVVMxDjAMBgNVBAgTBVRleGFzMQ8wDQYDVQQHEwZB
dXN0aW4xHTAbBgNVBAoTFE9wZW5TdGFjayBGb3VuZGF0aW9uMR0wGwYDVQQLExRP
cGVuU3RhY2sgRGV2ZWxvcGVyczEQMA4GA1UEAxMHVGVzdCBDQTEwMC4GCSqGSIb3
DQEJARYhb3BlbnN0YWNrLWRldkBsaXN0cy5vcGVuc3RhY2sub3JnggkA6M8Ysv1U
OGMwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQ0FAAOCAgEAIfAD6uVorT5WomG1
2DWRm3kuwa+EDimgVF6VRvxCzyHx7e/6KJQj149KpMQ6e0ZPjqQw+pZ+jJSgq6TP
MEjCHgIDwdKhi9LmQWIlo8xdzgfZW2VQkVLvwkqAnWWhCy9oGc/Ypk8pjiZfCx+/
DSJBbFnopI9f8epAKMq7N3jJyEMoTctzmI0KckrZnJ1Gq4MZpoxGmkJiGhWoUk8p
r8apXZ6B1DzO1XxpGw2BIcrUC3bQS/vPrg5/XbyaAu2BSgu6iF7ULqkBsEd0yK/L
i2gO9eTacaX3zJBQOlMJFsIAgIiVw6Rq6BuhU9zxDoopY4feta/NDOpk1OjY3MV7
4rcLTU6XYaItMDRe+dmjBOK+xspsaCU4kHEkA7mHL5YZhEEWLHj6QY8tAiIQMVQZ
RuTpQIbNkjLW8Ls+CbwL2LkUFB19rKu9tFpzEJ1IIeFmt5HZsL5ri6W2qkSPIbIe
Qq15kl/a45jgBbgn2VNA5ecjW20hhXyaS9AKWXK+AeFBaFIFDUrB2UP4YSDbJWUJ
0LKe+QuumXdl+iRdkgb1Tll7qme8gXAeyzVGHK2AsaBg+gkEeSyVLRKIixceyy+3
6yqlKJhk2qeV3ceOfVm9ZdvRlzWyVctaTcGIpDFqf4y8YyVhL1e2KGKcmYtbLq+m
rtku4CM3HldxcM4wqSB1VcaTX8o=
-----END CERTIFICATE-----


@ -1,51 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIJJwIBAAKCAgEAwILIMebpHYK1E1zhyi6713GGTQ9DFeLOE1T25+XTJqAkO7ef
QzZfB8QwCXy/8bmbhmKgQQ7APuuDci8SKCkYeWCxqJRGmg0tZVlj5gCfrV2u+olw
S+XyaOGCFkYScs6D34BaE2rGD2GDryoSPc2feAt6X4+ZkDPZnvaHQP6j9Ofq/4Wm
sECEas0IO5X8SDF8afA47U9ZXFkcgQK6HCHDcokLaaZxEyZFSaPex6ZAESNthkGO
xEThRPxAkJhqYCeMl3Hff98XEUcFNzuAOmcnQJJgRemwJO2hS5KS3Y3p9/nBRlh3
tSAG1nbY5kXSpyaq296D9x/esnXlt+9JUmn1rKyvmaFBC/SbzyyQoO3MT5r8rKte
0bulLw1bZOZNlhxSv2KCg5RD6vlNrnpsZszw4nj28fBroeFp0JMeT8jcqGs3qdm8
sXLcBgiTalLYtiCNV9wZjOduQotuFN6mDwZvfa6hzZjcBNfqeLyTEnFb5k6pIla0
wydWx/jvBAzoxOkEcVjak747A+p/rriD5hVUBH0BuNaWcEgKe9jcHnLvU8hUxFtg
PxUHOOR+eMa+FS3ApKf9sJ/zVUq0uxyA9hUnsvnqv/CywLSvaNKBiKQTL0QLEXnw
6EQb7g/XuwC5mmt+l30wGh9M1U/QMaU/+YzT4sVLTXIHJ7ExRTbEecbNbjsCAwEA
AQKCAgA0ySd/l2NANkDUaFl5CMt0zaoXoyGv9Jqw7lEtUPVO2AZXYYgH8/amuIK7
dztiWpRsisqKTDMmjYljW8jMvkf5sCvGn7GkOAzEh3g+7tjZvqBmDh1+kjSf0YXL
+bbBSCMcu6L3RAW+3ewvsYeC7sjVL8CER2nCApWfYtW/WpM2agkju0/zcB1e841Y
WU3ttbP5kGbrmyBTlBOexFKnuBJRa4Z3l63VpF7HTGmfsNRMXrx/XaZ55rEmK0zA
2SoB55ZDSHQSKee3UxP5CxWj7fjzWa+QO/2Sgp4BjNU8btdCqXb3hPZ98aQuVjQv
H+Ic9xtOYnso3dJAeNdeUfx23psAHhUqYruD+xrjwTJV5viGO05AHjp/i4dKjOaD
CMFKP/AGUcGAsL/Mjq5oMbWovbqhGaaOw4I0Xl/JuB0XQXWwr5D2cLUjMaCS9bLq
WV8lfEitoCVihAi21s8MIyQWHvl4m4d/aD5KNh0MJYo3vYCrs6A256dhbmlEmGBr
DY1++4yxz4YkY07jYbQYkDlCtwu51g+YE8lKAE9+Mz+PDgbRB7dgw7K3Q9SsXp1P
ui7/vnrgqppnYm4aaHvXEZ1qwwt2hpoumhQo/k1xrSzVKQ83vjzjXoDc9o84Vsv2
dmcLGKPpu+cm2ks8q6x2EI09dfkJjb/7N9SpU0AOjU7CgDye0QKCAQEA5/mosLuC
vXwh5FkJuV/wpipwqkS4vu+KNQiN83wdz+Yxw6siAz6/SIjr0sRmROop6CNCaBNq
887+mgm62rEe5eU4vHRlBOlYQD0qa+il09uwYPU0JunSOabxUCBhSuW/LXZyq7rA
ywGB7OVSTWwgb6Y0X1pUcOXK5qYaWJUdUEi2oVrU160phbDAcZNH+vAyl+IRJmVJ
LP7f1QwVrnIvIBgpIvPLRigagn84ecXPITClq4KjGNy2Qq/iarEwY7llFG10xHmK
xbzQ8v5XfPZ4Swmp+35kwNhfp6HRVWV3RftX4ftFArcFGYEIActItIz10rbLJ+42
fc8oZKq/MB9NlwKCAQEA1HLOuODXrFsKtLaQQzupPLpdyfYWR7A6tbghH5paKkIg
A+BSO/b91xOVx0jN2lkxe0Ns1QCpHZU8BXZ9MFCaZgr75z0+vhIRjrMTXXirlray
1mptar018j79sDJLFBF8VQFfi7Edd3OwB2dbdDFJhzNUbNJIVkVo+bXYfuWGlotG
EVWxX/CnPgnKknl6vX/8YSg6qJCwcUTmQRoqermd02VtrMrGgytcOG6QdKYTT/ct
b3zDNXdeLOJKyLZS1eW4V2Pcl4Njbaxq/U7KYkjWWZzVVsiCjWA8H0RXGf+Uk9Gu
cUg5hm5zxXcOGdI6yRVxHEU7CKc25Ks5xw4xPkhA/QKCAQBd7yC6ABQe+qcWul9P
q2PdRY49xHozBvimJQKmN/oyd3prS18IhV4b1yX3QQRQn6m8kJqRXluOwqEiaxI5
AEQMv9dLqK5HYN4VlS8aZyjPM0Sm3mPx5fj0038f/RyooYPauv4QQB1VlxSvguTi
6QfxbhIDEqbi2Ipi/5vnhupJ2kfp6sgJVdtcgYhL9WHOYXl7O1XKgHUzPToSIUSe
USp4CpCN0L7dd9vUQAP0e382Z2aOnuXAaY98TZCXt4xqtWYS8Ye5D6Z8D8tkuk1f
Esb/S7iDWFkgJf4F+Wa099NmiTK7FW6KfOYZv8AoSdL1GadpXg/B6ZozM7Gdoe6t
Y9+dAoIBABH2Rv4gnHuJEwWmbdoRYESvKSDbOpUDFGOq1roaTcdG4fgR7kH9pwaZ
NE+uGyF76xAV6ky0CphitrlrhDgiiHtaMGQjrHtbgbqD7342pqNOfR5dzzR4HOiH
ZOGRzwE6XT2+qPphljE0SczGc1gGlsXklB3DRbRtl+uM8WoBM/jke58ZlK6c5Tb8
kvEBblw5Rvhb82GvIgvhnGoisTbBHNPzvmseldwfPWPUDUifhgB70I6diM+rcP3w
gAwqRiSpkIVq/wqcZDqwmjcigz/+EolvFiaJO2iCm3K1T3v2PPSmhM41Ig/4pLcs
UrfiK3A27OJMBCq+IIkC5RasX4N5jm0CggEAXT9oyIO+a7ggpfijuba0xuhFwf+r
NY49hx3YshWXX5T3LfKZpTh+A1vjGcj57MZacRcTkFQgHVcyu+haA9lI4vsFMesU
9GqenrJNvxsV4i3avIxGjjx7d0Ok/7UuawTDuRea8m13se/oJOl5ftQK+ZoVqtO8
SzeNNpakiuCxmIEqaD8HUwWvgfA6n0HPJNc0vFAqu6Y5oOr8GDHd5JoKA8Sb15N9
AdFqwCbW9SqUVsvHDuiOKXy8lCr3OiuyjgBfbIyuuWbaU0PqIiKW++lTluXkl7Uz
vUawgfgX85sY6A35g1O/ydEQw2+h2tzDvQdhhyTYpMZjZwzIIPjCQMgHPA==
-----END RSA PRIVATE KEY-----


@ -1,41 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIHHjCCBQagAwIBAgIBATANBgkqhkiG9w0BAQ0FADCBsDELMAkGA1UEBhMCVVMx
DjAMBgNVBAgTBVRleGFzMQ8wDQYDVQQHEwZBdXN0aW4xHTAbBgNVBAoTFE9wZW5T
dGFjayBGb3VuZGF0aW9uMR0wGwYDVQQLExRPcGVuU3RhY2sgRGV2ZWxvcGVyczEQ
MA4GA1UEAxMHVGVzdCBDQTEwMC4GCSqGSIb3DQEJARYhb3BlbnN0YWNrLWRldkBs
aXN0cy5vcGVuc3RhY2sub3JnMB4XDTE1MDEwODAyNTQzNVoXDTI1MDEwODAyNTQz
NVoweDELMAkGA1UEBhMCVVMxDjAMBgNVBAgTBVRleGFzMQ8wDQYDVQQHEwZBdXN0
aW4xHTAbBgNVBAoTFE9wZW5TdGFjayBGb3VuZGF0aW9uMR0wGwYDVQQLExRPcGVu
U3RhY2sgRGV2ZWxvcGVyczEKMAgGA1UEAxQBKjCCAiIwDQYJKoZIhvcNAQEBBQAD
ggIPADCCAgoCggIBANBJtvyhMKBn397hE7x9Ce/Ny+4ENQfr9VrHuvGNCR3W/uUb
QafdNdZCYNAGPrq2T3CEYK0IJxZjr2HuTcSK9StBMFauTeIPqVUVkO3Tjq1Rkv+L
np/e6DhHkjCU6Eq/jIw3ic0QoxLygTybGxXgJgVoBzGsJufzOQ14tfkzGeGyE3L5
z5DpCNQqWLWF7soMx3kM5hBm+LWeoiBPjmsEXQY+UYiDlSLW/6I855X/wwDW5+Ot
P6/1lWUfcyAyIqj3t0pmxZeY7xQnabWjhXT2dTK+dlwRjb77w665AgeF1R5lpTvU
yT1aQwgH1kd9GeQbkBDwWSVLH9boPPgdMLtX2ipUgQAAEhIOUWXOYZVHVNXhV6Cr
jAgvfdF39c9hmuXovPP24ikW4L+d5RPE7Vq9KJ4Uzijw9Ghu4lQQCRZ8SCNZIYJn
Tz53+6fs93WwnnEPto9tFRKeNWt3jx/wjluDFhhBTZO4snNIq9xnCYSEQAIsRBVW
Ahv7LqWLigUy7a9HMIyi3tQEZN9NCDy4BNuJDu33XWLLVMwNrIiR5mdCUFoRKt/E
+YPj7bNlzZMTSGLoBFPM71Lnfym9HazHDE1KxvT4gzYMubK4Y07meybiL4QNvU08
ITgFU6DAGob+y/GHqw+bmez5y0F/6FlyV+SiSrbVEEtzp9Ewyrxb85OJFK0tAgMB
AAGjggF4MIIBdDBLBgNVHREERDBCgglsb2NhbGhvc3SCDWlwNi1sb2NhbGhvc3SC
CTEyNy4wLjAuMYIDOjoxhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMB0GA1UdDgQW
BBSjWxD0qedj9eeGUWyGphy5PU67dDCB5QYDVR0jBIHdMIHagBQTWz2WEB0sJg9c
xfM5JeJMIAJq0qGBtqSBszCBsDELMAkGA1UEBhMCVVMxDjAMBgNVBAgTBVRleGFz
MQ8wDQYDVQQHEwZBdXN0aW4xHTAbBgNVBAoTFE9wZW5TdGFjayBGb3VuZGF0aW9u
MR0wGwYDVQQLExRPcGVuU3RhY2sgRGV2ZWxvcGVyczEQMA4GA1UEAxMHVGVzdCBD
QTEwMC4GCSqGSIb3DQEJARYhb3BlbnN0YWNrLWRldkBsaXN0cy5vcGVuc3RhY2su
b3JnggkA6M8Ysv1UOGMwCQYDVR0TBAIwADATBgNVHSUEDDAKBggrBgEFBQcDATAN
BgkqhkiG9w0BAQ0FAAOCAgEAIGx/acXQEiGYFBJUduE6/Y6LBuHEVMcj0yfbLzja
Eb35xKWHuX7tgQPwXy6UGlYM8oKIptIp/9eEuYXte6u5ncvD7e/JldCUVd0fW8hm
fBOhfqVstcTmlfZ6WqTJD6Bp/FjUH+8qf8E+lsjNy7i0EsmcQOeQm4mkocHG1AA4
MEeuDg33lV6XCjW450BoZ/FTfwZSuTlGgFlEzUUrAe/ETdajF9G9aJ+0OvXzE1tU
pvbvkU8eg4pLXxrzboOhyQMEmCikdkMYjo/0ZQrXrrJ1W8mCinkJdz6CToc7nUkU
F8tdAY0rKMEM8SYHngMJU2943lpGbQhE5B4oms8I+SMTyCVz2Vu5I43Px68Y0GUN
Bn5qu0w2Vj8eradoPF8pEAIVICIvlbiRepPbNZ7FieSsY2TEfLtxBd2DLE1YWeE5
p/RDBxqcDrGQuSg6gFSoLEhYgQcGnYgD75EIE8f/LrHFOAeSYEOhibFbK5G8p/2h
EHcKZ9lvTgqwHn0FiTqZ3LWxVFsZiTsiyXErpJ2Nu2WTzo0k1xJMUpJqHuUZraei
N5fA5YuDp2ShXRoZyVieRvp0TCmm6sHL8Pn0K8weJchYrvV1yvPKeuISN/fVCQev
88yih5Rh5R2szwoY3uVImpd99bMm0e1bXrQug43ZUz9rC4ABN6+lZvuorDWRVI7U
I1M=
-----END CERTIFICATE-----


@ -1,51 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEA0Em2/KEwoGff3uETvH0J783L7gQ1B+v1Wse68Y0JHdb+5RtB
p9011kJg0AY+urZPcIRgrQgnFmOvYe5NxIr1K0EwVq5N4g+pVRWQ7dOOrVGS/4ue
n97oOEeSMJToSr+MjDeJzRCjEvKBPJsbFeAmBWgHMawm5/M5DXi1+TMZ4bITcvnP
kOkI1CpYtYXuygzHeQzmEGb4tZ6iIE+OawRdBj5RiIOVItb/ojznlf/DANbn460/
r/WVZR9zIDIiqPe3SmbFl5jvFCdptaOFdPZ1Mr52XBGNvvvDrrkCB4XVHmWlO9TJ
PVpDCAfWR30Z5BuQEPBZJUsf1ug8+B0wu1faKlSBAAASEg5RZc5hlUdU1eFXoKuM
CC990Xf1z2Ga5ei88/biKRbgv53lE8TtWr0onhTOKPD0aG7iVBAJFnxII1khgmdP
Pnf7p+z3dbCecQ+2j20VEp41a3ePH/COW4MWGEFNk7iyc0ir3GcJhIRAAixEFVYC
G/supYuKBTLtr0cwjKLe1ARk300IPLgE24kO7fddYstUzA2siJHmZ0JQWhEq38T5
g+Pts2XNkxNIYugEU8zvUud/Kb0drMcMTUrG9PiDNgy5srhjTuZ7JuIvhA29TTwh
OAVToMAahv7L8YerD5uZ7PnLQX/oWXJX5KJKttUQS3On0TDKvFvzk4kUrS0CAwEA
AQKCAgAkdpMrPMi3fBfL+9kpqTYhHgTyYRgrj9o/DzIh8U/EQowS7aebzHUNUkeC
g2Vd6GaVywblo8S7/a2JVl+U5cKv1NSyiAcoaRd6xrC9gci7fMlgJUAauroqiBUG
njrgQxJGxb5BAQWbXorTYk/mj3v4fFKuFnYlKwY03on020ZPpY4UFbmJo9Ig2lz3
QkAgbQZKocBw5KXrnZ7CS0siXvwuCKDbZjWoiLzt2P2t2712myizSfQZSMPjlRLh
cwVwURVsV/uFY4ePHqs52iuV40N3I7KywXvwEEEciFTbnklF7gN0Kvcj33ZWpJCV
qUfsEAsze/APQEyNodBymyGZ2nJdn9PqaQYnVhE9xpjiXejQHZsuMnrA3jYr8Mtx
j0EZiX4ICI4Njt9oI/EtWhQtcDt86hTEtBlyFRU6jhW8O5Ai7hzxCYgUJ7onWVOE
PtCC9FoOwumXWgdZNz/hMqQSn91O8trferccdUGIfx8N/G4QkyzOLI0Hc6Mubby7
+GGRwVXnLsIGxpFc+VBHY/J6offCkXx3MPbfn57x0LGZu1GtHoep391yLUrBs9jx
nJrUI9OuwaeOG0iesTuGT+PbZWxDrJEtA7DRM1FBMNMvn5BTTg7yx8EqUM35hnFf
5J1XEf0DW5nUPH1Qadgi1LZjCAhiD5OuNooFsTmN7dSdleF+PQKCAQEA7jq7drTu
O1ePCO+dQeECauy1qv9SO2LIHfLZ/L4OwcEtEnE8xBbvrZfUqkbUITCS6rR8UITp
6ru0MyhUEsRsk4FHIJV2P1pB2Zy+8tV4Dm3aHh4bCoECqAPHMgXUkP+9kIOn2QsE
uRXnsEiQAl0SxSTcduy5F+WIWLVl4A72ry3cSvrEGwMEz0sjaEMmCZ2B8X8EJt64
uWUSHDaAMSg80bADy3p+OhmWMGZTDl/KRCz9pJLyICMxsotfbvE0BadAZr+UowSe
ldqKlgRYlYL3pAhwjeMO/QxmMfRxjvG09romqe0Bcs8BDNII/ShAjjHQUwxcEszQ
P14g8QwmTQVm5wKCAQEA39M3GveyIhX6vmyR4DUlxE5+yloTACdlCZu6wvFlRka8
3FEw8DWKVfnmYYFt/RPukYeBRmXwqLciGSly7PnaBXeNFqNXiykKETzS2UISZoqT
Dur06GmcI+Lk1my9v5gLB1LT/D8XWjwmjA5hNO1J1UYmp+X4dgaYxWzOKBsTTJ8j
SVaEaxBUwLHy58ehoQm+G5+QqL5yU/n1hPwXx1XYvd33OscSGQRbALrH2ZxsqxMZ
yvNa2NYt3TnihXcF36Df5861DTNI7NDqpY72C4U8RwaqgTdDkD+t8zrk/r3LUa5d
NGkGQF+59spBcb64IPZ4DuJ9//GaEsyj0jPF/FTMywKCAQEA1DiB83eumjKf+yfq
AVv/GV2RYKleigSvnO5QfrSY1MXP7xPtPAnqrcwJ6T57jq2E04zBCcG92BwqpUAR
1T4iMy0BPeenlTxEWSUnfY/pCYGWwymykSLoSOBEvS0wdZM9PdXq2pDUPkVjRkj9
8P0U0YbK1y5+nOkfE1dVT8pEuz2xdyH5PM7to/SdsC3RXtNvhMDP5AiYqp99CKEM
hb4AoBOa7dNLS1qrzqX4618uApnJwqgdBcAUb6d09pHs8/RQjLeyI57j3z72Ijnw
6A/pp7jU+7EAEzDOgUXvO5Xazch61PmLRsldeBxLYapQB9wcZz8lbqICCdFCqzlV
jVt4lQKCAQA9CYxtfj7FrNjENTdSvSufbQiGhinIUPXsuNslbk7/6yp1qm5+Exu2
dn+s927XJShZ52oJmKMYX1idJACDP1+FPiTrl3+4I2jranrVZH9AF2ojF0/SUXqT
Drz4/I6CQSRAywWkNFBZ+y1H5GP92vfXgVnpT32CMipXLGTL6xZIPt2QkldqGvoB
0oU7T+Vz1QRS5CC+47Cp1fBuY5DYe0CwBmf1T3RP/jAS8tytK0s3G+5cuiB8IWxA
eBid7OddJLHqtSQKhYHNkutqWqIeYicd92Nn+XojTDpTqivojDl1/ObN9BYQWAqO
knlmW2w7EPuMk5doxKoPll7WY+gJ99YhAoIBAHf5HYRh4ZuYkx+R1ow8/Ahp7N4u
BGFRNnCpMG358Zws95wvBg5dkW8VU0M3256M0kFkw2AOyyyNsHqIhMNakzHesGo/
TWhqCh23p1xBLY5p14K8K6iOc1Jfa1LqGsL2TZ06TeNNyONMGqq0yOyD62CdLRDj
0ACL/z2j494LmfqhV45hYuqjQbrLizjrr6ln75g2WJ32U+zwl7KUHnBL7IEwb4Be
KOl1bfVwZAs0GtHuaiScBYRLUaSC/Qq7YPjTh1nmg48DQC/HUCNGMqhoZ950kp9k
76HX+MpwUi5y49moFmn/3qDvefGFpX1td8vYMokx+eyKTXGFtxBUwPnMUSQ=
-----END RSA PRIVATE KEY-----

View File

@ -1,147 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit Tests for eventlet backdoor
"""
import errno
import os
import socket
import eventlet
import mock
from oslo_service import eventlet_backdoor
from oslo_service.tests import base
class BackdoorSocketPathTest(base.ServiceBaseTestCase):
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_path(self, listen_mock, spawn_mock):
self.config(backdoor_socket="/tmp/my_special_socket")
listen_mock.side_effect = mock.MagicMock()
path = eventlet_backdoor.initialize_if_enabled(self.conf)
self.assertEqual("/tmp/my_special_socket", path)
@mock.patch.object(os, 'unlink')
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_path_already_exists(self, listen_mock,
spawn_mock, unlink_mock):
self.config(backdoor_socket="/tmp/my_special_socket")
sock = mock.MagicMock()
listen_mock.side_effect = [socket.error(errno.EADDRINUSE, ''), sock]
path = eventlet_backdoor.initialize_if_enabled(self.conf)
self.assertEqual("/tmp/my_special_socket", path)
unlink_mock.assert_called_with("/tmp/my_special_socket")
@mock.patch.object(os, 'unlink')
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_path_already_exists_and_gone(self, listen_mock,
spawn_mock, unlink_mock):
self.config(backdoor_socket="/tmp/my_special_socket")
sock = mock.MagicMock()
listen_mock.side_effect = [socket.error(errno.EADDRINUSE, ''), sock]
unlink_mock.side_effect = OSError(errno.ENOENT, '')
path = eventlet_backdoor.initialize_if_enabled(self.conf)
self.assertEqual("/tmp/my_special_socket", path)
unlink_mock.assert_called_with("/tmp/my_special_socket")
@mock.patch.object(os, 'unlink')
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_path_already_exists_and_not_gone(self, listen_mock,
spawn_mock,
unlink_mock):
self.config(backdoor_socket="/tmp/my_special_socket")
listen_mock.side_effect = socket.error(errno.EADDRINUSE, '')
unlink_mock.side_effect = OSError(errno.EPERM, '')
self.assertRaises(OSError, eventlet_backdoor.initialize_if_enabled,
self.conf)
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_path_no_perms(self, listen_mock, spawn_mock):
self.config(backdoor_socket="/tmp/my_special_socket")
listen_mock.side_effect = socket.error(errno.EPERM, '')
self.assertRaises(socket.error,
eventlet_backdoor.initialize_if_enabled,
self.conf)
class BackdoorPortTest(base.ServiceBaseTestCase):
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_port(self, listen_mock, spawn_mock):
self.config(backdoor_port=1234)
sock = mock.MagicMock()
sock.getsockname.return_value = ('127.0.0.1', 1234)
listen_mock.return_value = sock
port = eventlet_backdoor.initialize_if_enabled(self.conf)
self.assertEqual(1234, port)
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_port_inuse(self, listen_mock, spawn_mock):
self.config(backdoor_port=2345)
listen_mock.side_effect = socket.error(errno.EADDRINUSE, '')
self.assertRaises(socket.error,
eventlet_backdoor.initialize_if_enabled, self.conf)
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_port_range(self, listen_mock, spawn_mock):
self.config(backdoor_port='8800:8899')
sock = mock.MagicMock()
sock.getsockname.return_value = ('127.0.0.1', 8800)
listen_mock.return_value = sock
port = eventlet_backdoor.initialize_if_enabled(self.conf)
self.assertEqual(8800, port)
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_port_range_one_inuse(self, listen_mock, spawn_mock):
self.config(backdoor_port='8800:8900')
sock = mock.MagicMock()
sock.getsockname.return_value = ('127.0.0.1', 8801)
listen_mock.side_effect = [socket.error(errno.EADDRINUSE, ''), sock]
port = eventlet_backdoor.initialize_if_enabled(self.conf)
self.assertEqual(8801, port)
@mock.patch.object(eventlet, 'spawn')
@mock.patch.object(eventlet, 'listen')
def test_backdoor_port_range_all_inuse(self, listen_mock, spawn_mock):
self.config(backdoor_port='8800:8899')
side_effects = []
for i in range(8800, 8900):
side_effects.append(socket.error(errno.EADDRINUSE, ''))
listen_mock.side_effect = side_effects
self.assertRaises(socket.error,
eventlet_backdoor.initialize_if_enabled, self.conf)
def test_backdoor_port_reverse_range(self):
self.config(backdoor_port='8888:7777')
self.assertRaises(eventlet_backdoor.EventletBackdoorConfigValueError,
eventlet_backdoor.initialize_if_enabled, self.conf)
def test_backdoor_port_bad(self):
self.config(backdoor_port='abc')
self.assertRaises(eventlet_backdoor.EventletBackdoorConfigValueError,
eventlet_backdoor.initialize_if_enabled, self.conf)
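The range handling those tests exercise amounts to parsing and validating a 'start:end' string. `parse_port_range` below is a hypothetical sketch of the checks that make EventletBackdoorConfigValueError fire for 'abc' and '8888:7777'; it raises plain ValueError in place of the module's exception:

```python
def parse_port_range(value):
    """Validate a backdoor_port value: either 'port' or 'start:end'."""
    try:
        if ':' in value:
            start, end = map(int, value.split(':', 1))
        else:
            start = end = int(value)
    except ValueError:
        raise ValueError("Invalid backdoor_port: %s" % value)
    if end < start:
        raise ValueError("Invalid port range: %s" % value)
    return start, end
```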


@ -1,457 +0,0 @@
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import eventlet
from eventlet.green import threading as greenthreading
import mock
from oslotest import base as test_base
import oslo_service
from oslo_service import loopingcall
threading = eventlet.patcher.original('threading')
time = eventlet.patcher.original('time')
class LoopingCallTestCase(test_base.BaseTestCase):
def setUp(self):
super(LoopingCallTestCase, self).setUp()
self.num_runs = 0
def test_return_true(self):
def _raise_it():
raise loopingcall.LoopingCallDone(True)
timer = loopingcall.FixedIntervalLoopingCall(_raise_it)
self.assertTrue(timer.start(interval=0.5).wait())
def test_monotonic_timer(self):
def _raise_it():
clock = eventlet.hubs.get_hub().clock
ok = (clock == oslo_service._monotonic)
raise loopingcall.LoopingCallDone(ok)
timer = loopingcall.FixedIntervalLoopingCall(_raise_it)
self.assertTrue(timer.start(interval=0.5).wait())
def test_eventlet_clock(self):
# Make sure that by default the oslo_service.service_hub() kicks in,
# test in the main thread
hub = eventlet.hubs.get_hub()
self.assertEqual(oslo_service._monotonic,
hub.clock)
def test_eventlet_use_hub_override(self):
ns = {}
def task():
try:
self._test_eventlet_use_hub_override()
except Exception as exc:
ns['result'] = exc
else:
ns['result'] = 'ok'
# test overriding the hub in a new thread so as not to modify the hub
# of the main thread
thread = threading.Thread(target=task)
thread.start()
thread.join()
self.assertEqual('ok', ns['result'])
def _test_eventlet_use_hub_override(self):
# Make sure that by default the
# oslo_service.service_hub() kicks in
old_clock = eventlet.hubs.get_hub().clock
self.assertEqual(oslo_service._monotonic,
old_clock)
# eventlet will use time.monotonic() by default, the same clock as
# oslo_service.service_hub():
# https://github.com/eventlet/eventlet/pull/303
if not hasattr(time, 'monotonic'):
# If anyone wants to override it
try:
eventlet.hubs.use_hub('poll')
except Exception:
eventlet.hubs.use_hub('selects')
# then we get a new clock and the override works fine too!
clock = eventlet.hubs.get_hub().clock
self.assertNotEqual(old_clock, clock)
def test_return_false(self):
def _raise_it():
raise loopingcall.LoopingCallDone(False)
timer = loopingcall.FixedIntervalLoopingCall(_raise_it)
self.assertFalse(timer.start(interval=0.5).wait())
def test_terminate_on_exception(self):
def _raise_it():
raise RuntimeError()
timer = loopingcall.FixedIntervalLoopingCall(_raise_it)
self.assertRaises(RuntimeError, timer.start(interval=0.5).wait)
def _raise_and_then_done(self):
if self.num_runs == 0:
raise loopingcall.LoopingCallDone(False)
else:
self.num_runs = self.num_runs - 1
raise RuntimeError()
def test_do_not_stop_on_exception(self):
self.num_runs = 2
timer = loopingcall.FixedIntervalLoopingCall(self._raise_and_then_done)
res = timer.start(interval=0.5, stop_on_exception=False).wait()
self.assertFalse(res)
def _wait_for_zero(self):
"""Called at an interval until num_runs == 0."""
if self.num_runs == 0:
raise loopingcall.LoopingCallDone(False)
else:
self.num_runs = self.num_runs - 1
def test_no_double_start(self):
wait_ev = greenthreading.Event()
def _run_forever_until_set():
if wait_ev.is_set():
raise loopingcall.LoopingCallDone(True)
timer = loopingcall.FixedIntervalLoopingCall(_run_forever_until_set)
timer.start(interval=0.01)
self.assertRaises(RuntimeError, timer.start, interval=0.01)
wait_ev.set()
timer.wait()
def test_repeat(self):
self.num_runs = 2
timer = loopingcall.FixedIntervalLoopingCall(self._wait_for_zero)
self.assertFalse(timer.start(interval=0.5).wait())
def assertAlmostEqual(self, expected, actual, precision=7, message=None):
self.assertEqual(0, round(actual - expected, precision), message)
@mock.patch('eventlet.greenthread.sleep')
@mock.patch('oslo_utils.timeutils.now')
def test_interval_adjustment(self, time_mock, sleep_mock):
"""Ensure the interval is adjusted to account for task duration."""
self.num_runs = 3
now = 1234567890
second = 1
smidgen = 0.01
time_mock.side_effect = [now, # restart
now + second - smidgen, # end
now, # restart
now + second + second, # end
now, # restart
now + second + smidgen, # end
now] # restart
timer = loopingcall.FixedIntervalLoopingCall(self._wait_for_zero)
timer.start(interval=1.01).wait()
expected_calls = [0.02, 0.00, 0.00]
for i, call in enumerate(sleep_mock.call_args_list):
expected = expected_calls[i]
args, kwargs = call
actual = args[0]
message = ('Call #%d, expected: %s, actual: %s' %
(i, expected, actual))
self.assertAlmostEqual(expected, actual, message=message)
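The arithmetic behind test_interval_adjustment: the looping call sleeps for whatever is left of the fixed interval, clamped at zero when the task overruns. `next_delay` is a hypothetical distillation of that rule, not a function from the module:

```python
def next_delay(interval, elapsed):
    # 1.01 s interval with a 0.99 s task -> 0.02 s sleep;
    # any overrun -> no sleep at all.
    delay = interval - elapsed
    return delay if delay > 0 else 0
```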
class DynamicLoopingCallTestCase(test_base.BaseTestCase):
def setUp(self):
super(DynamicLoopingCallTestCase, self).setUp()
self.num_runs = 0
def test_return_true(self):
def _raise_it():
raise loopingcall.LoopingCallDone(True)
timer = loopingcall.DynamicLoopingCall(_raise_it)
self.assertTrue(timer.start().wait())
def test_monotonic_timer(self):
def _raise_it():
clock = eventlet.hubs.get_hub().clock
ok = (clock == oslo_service._monotonic)
raise loopingcall.LoopingCallDone(ok)
timer = loopingcall.DynamicLoopingCall(_raise_it)
self.assertTrue(timer.start().wait())
def test_no_double_start(self):
wait_ev = greenthreading.Event()
def _run_forever_until_set():
if wait_ev.is_set():
raise loopingcall.LoopingCallDone(True)
else:
return 0.01
timer = loopingcall.DynamicLoopingCall(_run_forever_until_set)
timer.start()
self.assertRaises(RuntimeError, timer.start)
wait_ev.set()
timer.wait()
def test_return_false(self):
def _raise_it():
raise loopingcall.LoopingCallDone(False)
timer = loopingcall.DynamicLoopingCall(_raise_it)
self.assertFalse(timer.start().wait())
def test_terminate_on_exception(self):
def _raise_it():
raise RuntimeError()
timer = loopingcall.DynamicLoopingCall(_raise_it)
self.assertRaises(RuntimeError, timer.start().wait)
def _raise_and_then_done(self):
if self.num_runs == 0:
raise loopingcall.LoopingCallDone(False)
else:
self.num_runs = self.num_runs - 1
raise RuntimeError()
def test_do_not_stop_on_exception(self):
self.num_runs = 2
timer = loopingcall.DynamicLoopingCall(self._raise_and_then_done)
timer.start(stop_on_exception=False).wait()
def _wait_for_zero(self):
"""Called at an interval until num_runs == 0."""
if self.num_runs == 0:
raise loopingcall.LoopingCallDone(False)
else:
self.num_runs = self.num_runs - 1
sleep_for = self.num_runs * 10 + 1 # dynamic duration
return sleep_for
def test_repeat(self):
self.num_runs = 2
timer = loopingcall.DynamicLoopingCall(self._wait_for_zero)
self.assertFalse(timer.start().wait())
def _timeout_task_without_any_return(self):
pass
def test_timeout_task_without_return_and_max_periodic(self):
timer = loopingcall.DynamicLoopingCall(
self._timeout_task_without_any_return
)
self.assertRaises(RuntimeError, timer.start().wait)
def _timeout_task_without_return_but_with_done(self):
if self.num_runs == 0:
raise loopingcall.LoopingCallDone(False)
else:
self.num_runs = self.num_runs - 1
@mock.patch('eventlet.greenthread.sleep')
def test_timeout_task_without_return(self, sleep_mock):
self.num_runs = 1
timer = loopingcall.DynamicLoopingCall(
self._timeout_task_without_return_but_with_done
)
timer.start(periodic_interval_max=5).wait()
sleep_mock.assert_has_calls([mock.call(5)])
@mock.patch('eventlet.greenthread.sleep')
def test_interval_adjustment(self, sleep_mock):
self.num_runs = 2
timer = loopingcall.DynamicLoopingCall(self._wait_for_zero)
timer.start(periodic_interval_max=5).wait()
sleep_mock.assert_has_calls([mock.call(5), mock.call(1)])
@mock.patch('eventlet.greenthread.sleep')
def test_initial_delay(self, sleep_mock):
self.num_runs = 1
timer = loopingcall.DynamicLoopingCall(self._wait_for_zero)
timer.start(initial_delay=3).wait()
sleep_mock.assert_has_calls([mock.call(3), mock.call(1)])


class TestBackOffLoopingCall(test_base.BaseTestCase):
@mock.patch('random.SystemRandom.gauss')
@mock.patch('eventlet.greenthread.sleep')
def test_exponential_backoff(self, sleep_mock, random_mock):
def false():
return False
random_mock.return_value = .8
self.assertRaises(loopingcall.LoopingCallTimeOut,
loopingcall.BackOffLoopingCall(false).start()
.wait)
expected_times = [mock.call(1.6000000000000001),
mock.call(2.5600000000000005),
mock.call(4.096000000000001),
mock.call(6.5536000000000021),
mock.call(10.485760000000004),
mock.call(16.777216000000006),
mock.call(26.843545600000013),
mock.call(42.949672960000022),
mock.call(68.719476736000033),
mock.call(109.95116277760006)]
self.assertEqual(expected_times, sleep_mock.call_args_list)
@mock.patch('random.SystemRandom.gauss')
@mock.patch('eventlet.greenthread.sleep')
def test_no_backoff(self, sleep_mock, random_mock):
random_mock.return_value = 1
func = mock.Mock()
func.side_effect = [True, True, True, loopingcall.LoopingCallDone(
retvalue='return value')]
retvalue = loopingcall.BackOffLoopingCall(func).start().wait()
expected_times = [mock.call(1), mock.call(1), mock.call(1)]
self.assertEqual(expected_times, sleep_mock.call_args_list)
        self.assertEqual('return value', retvalue)
@mock.patch('random.SystemRandom.gauss')
@mock.patch('eventlet.greenthread.sleep')
def test_no_sleep(self, sleep_mock, random_mock):
# Any call that executes properly the first time shouldn't sleep
random_mock.return_value = 1
func = mock.Mock()
func.side_effect = loopingcall.LoopingCallDone(retvalue='return value')
retvalue = loopingcall.BackOffLoopingCall(func).start().wait()
self.assertFalse(sleep_mock.called)
        self.assertEqual('return value', retvalue)
@mock.patch('random.SystemRandom.gauss')
@mock.patch('eventlet.greenthread.sleep')
def test_max_interval(self, sleep_mock, random_mock):
def false():
return False
random_mock.return_value = .8
self.assertRaises(loopingcall.LoopingCallTimeOut,
loopingcall.BackOffLoopingCall(false).start(
max_interval=60)
.wait)
expected_times = [mock.call(1.6000000000000001),
mock.call(2.5600000000000005),
mock.call(4.096000000000001),
mock.call(6.5536000000000021),
mock.call(10.485760000000004),
mock.call(16.777216000000006),
mock.call(26.843545600000013),
mock.call(42.949672960000022),
mock.call(60),
mock.call(60),
mock.call(60)]
self.assertEqual(expected_times, sleep_mock.call_args_list)


class AnException(Exception):
    pass


class UnknownException(Exception):
    pass


class RetryDecoratorTest(test_base.BaseTestCase):
"""Tests for retry decorator class."""
def test_retry(self):
result = "RESULT"
@loopingcall.RetryDecorator()
def func(*args, **kwargs):
return result
self.assertEqual(result, func())
def func2(*args, **kwargs):
return result
retry = loopingcall.RetryDecorator()
self.assertEqual(result, retry(func2)())
        self.assertEqual(0, retry._retry_count)
def test_retry_with_expected_exceptions(self):
result = "RESULT"
responses = [AnException(None),
AnException(None),
result]
def func(*args, **kwargs):
response = responses.pop(0)
if isinstance(response, Exception):
raise response
return response
sleep_time_incr = 0.01
retry_count = 2
retry = loopingcall.RetryDecorator(10, sleep_time_incr, 10,
(AnException,))
self.assertEqual(result, retry(func)())
        self.assertEqual(retry_count, retry._retry_count)
self.assertEqual(retry_count * sleep_time_incr, retry._sleep_time)
def test_retry_with_max_retries(self):
responses = [AnException(None),
AnException(None),
AnException(None)]
def func(*args, **kwargs):
response = responses.pop(0)
if isinstance(response, Exception):
raise response
return response
retry = loopingcall.RetryDecorator(2, 0, 0,
(AnException,))
self.assertRaises(AnException, retry(func))
        self.assertEqual(2, retry._retry_count)
def test_retry_with_unexpected_exception(self):
def func(*args, **kwargs):
raise UnknownException(None)
retry = loopingcall.RetryDecorator()
self.assertRaises(UnknownException, retry(func))
        self.assertEqual(0, retry._retry_count)

@@ -1,391 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit Tests for periodic_task decorator and PeriodicTasks class.
"""
import mock
from testtools import matchers
from oslo_service import periodic_task
from oslo_service.tests import base


class AnException(Exception):
    pass


class PeriodicTasksTestCase(base.ServiceBaseTestCase):
"""Test cases for PeriodicTasks."""
@mock.patch('oslo_service.periodic_task.now')
def test_called_thrice(self, mock_now):
time = 340
mock_now.return_value = time
# Class inside test def to mock 'now' in
# the periodic task decorator
class AService(periodic_task.PeriodicTasks):
def __init__(self, conf):
super(AService, self).__init__(conf)
self.called = {'doit': 0, 'urg': 0, 'ticks': 0, 'tocks': 0}
@periodic_task.periodic_task
def doit(self, context):
self.called['doit'] += 1
@periodic_task.periodic_task
def crashit(self, context):
self.called['urg'] += 1
raise AnException('urg')
@periodic_task.periodic_task(
spacing=10 + periodic_task.DEFAULT_INTERVAL,
run_immediately=True)
def doit_with_ticks(self, context):
self.called['ticks'] += 1
@periodic_task.periodic_task(
spacing=10 + periodic_task.DEFAULT_INTERVAL)
def doit_with_tocks(self, context):
self.called['tocks'] += 1
external_called = {'ext1': 0, 'ext2': 0}
@periodic_task.periodic_task
def ext1(self, context):
external_called['ext1'] += 1
@periodic_task.periodic_task(
spacing=10 + periodic_task.DEFAULT_INTERVAL)
def ext2(self, context):
external_called['ext2'] += 1
serv = AService(self.conf)
serv.add_periodic_task(ext1)
serv.add_periodic_task(ext2)
serv.run_periodic_tasks(None)
# Time: 340
self.assertEqual(0, serv.called['doit'])
self.assertEqual(0, serv.called['urg'])
# New last run will be 350
self.assertEqual(1, serv.called['ticks'])
self.assertEqual(0, serv.called['tocks'])
self.assertEqual(0, external_called['ext1'])
self.assertEqual(0, external_called['ext2'])
time = time + periodic_task.DEFAULT_INTERVAL
mock_now.return_value = time
serv.run_periodic_tasks(None)
        # Time: 400
        # New last run: 420
self.assertEqual(1, serv.called['doit'])
self.assertEqual(1, serv.called['urg'])
# Closest multiple of 70 is 420
self.assertEqual(1, serv.called['ticks'])
self.assertEqual(0, serv.called['tocks'])
self.assertEqual(1, external_called['ext1'])
self.assertEqual(0, external_called['ext2'])
time = time + periodic_task.DEFAULT_INTERVAL / 2
mock_now.return_value = time
serv.run_periodic_tasks(None)
self.assertEqual(1, serv.called['doit'])
self.assertEqual(1, serv.called['urg'])
self.assertEqual(2, serv.called['ticks'])
self.assertEqual(1, serv.called['tocks'])
self.assertEqual(1, external_called['ext1'])
self.assertEqual(1, external_called['ext2'])
time = time + periodic_task.DEFAULT_INTERVAL
mock_now.return_value = time
serv.run_periodic_tasks(None)
self.assertEqual(2, serv.called['doit'])
self.assertEqual(2, serv.called['urg'])
self.assertEqual(3, serv.called['ticks'])
self.assertEqual(2, serv.called['tocks'])
self.assertEqual(2, external_called['ext1'])
self.assertEqual(2, external_called['ext2'])
@mock.patch('oslo_service.periodic_task.now')
def test_called_correct(self, mock_now):
time = 360444
mock_now.return_value = time
test_spacing = 9
# Class inside test def to mock 'now' in
# the periodic task decorator
class AService(periodic_task.PeriodicTasks):
def __init__(self, conf):
super(AService, self).__init__(conf)
self.called = {'ticks': 0}
@periodic_task.periodic_task(spacing=test_spacing)
def tick(self, context):
self.called['ticks'] += 1
serv = AService(self.conf)
for i in range(200):
serv.run_periodic_tasks(None)
self.assertEqual(int(i / test_spacing), serv.called['ticks'])
time += 1
mock_now.return_value = time
@mock.patch('oslo_service.periodic_task.now')
def test_raises(self, mock_now):
time = 230000
mock_now.return_value = time
class AService(periodic_task.PeriodicTasks):
def __init__(self, conf):
super(AService, self).__init__(conf)
self.called = {'urg': 0, }
@periodic_task.periodic_task
def crashit(self, context):
self.called['urg'] += 1
raise AnException('urg')
serv = AService(self.conf)
now = serv._periodic_last_run['crashit']
mock_now.return_value = now + periodic_task.DEFAULT_INTERVAL
self.assertRaises(AnException,
serv.run_periodic_tasks,
None, raise_on_error=True)
def test_name(self):
class AService(periodic_task.PeriodicTasks):
def __init__(self, conf):
super(AService, self).__init__(conf)
@periodic_task.periodic_task(name='better-name')
def tick(self, context):
pass
@periodic_task.periodic_task
def tack(self, context):
pass
@periodic_task.periodic_task(name='another-name')
def foo(self, context):
pass
serv = AService(self.conf)
serv.add_periodic_task(foo)
self.assertIn('better-name', serv._periodic_last_run)
self.assertIn('another-name', serv._periodic_last_run)
self.assertIn('tack', serv._periodic_last_run)


class ManagerMetaTestCase(base.ServiceBaseTestCase):
"""Tests for the meta class which manages creation of periodic tasks."""
def test_meta(self):
class Manager(periodic_task.PeriodicTasks):
@periodic_task.periodic_task
def foo(self):
return 'foo'
@periodic_task.periodic_task(spacing=4)
def bar(self):
return 'bar'
@periodic_task.periodic_task(enabled=False)
def baz(self):
return 'baz'
m = Manager(self.conf)
self.assertThat(m._periodic_tasks, matchers.HasLength(2))
self.assertEqual(periodic_task.DEFAULT_INTERVAL,
m._periodic_spacing['foo'])
self.assertEqual(4, m._periodic_spacing['bar'])
self.assertThat(
m._periodic_spacing, matchers.Not(matchers.Contains('baz')))
@periodic_task.periodic_task
def external():
return 42
m.add_periodic_task(external)
self.assertThat(m._periodic_tasks, matchers.HasLength(3))
self.assertEqual(periodic_task.DEFAULT_INTERVAL,
m._periodic_spacing['external'])


class ManagerTestCase(base.ServiceBaseTestCase):
"""Tests the periodic tasks portion of the manager class."""
def setUp(self):
super(ManagerTestCase, self).setUp()
def test_periodic_tasks_with_idle(self):
class Manager(periodic_task.PeriodicTasks):
@periodic_task.periodic_task(spacing=200)
def bar(self):
return 'bar'
m = Manager(self.conf)
self.assertThat(m._periodic_tasks, matchers.HasLength(1))
self.assertEqual(200, m._periodic_spacing['bar'])
# Now a single pass of the periodic tasks
idle = m.run_periodic_tasks(None)
self.assertAlmostEqual(60, idle, 1)
def test_periodic_tasks_constant(self):
class Manager(periodic_task.PeriodicTasks):
@periodic_task.periodic_task(spacing=0)
def bar(self):
return 'bar'
m = Manager(self.conf)
idle = m.run_periodic_tasks(None)
self.assertAlmostEqual(60, idle, 1)
@mock.patch('oslo_service.periodic_task.now')
def test_periodic_tasks_idle_calculation(self, mock_now):
fake_time = 32503680000.0
mock_now.return_value = fake_time
class Manager(periodic_task.PeriodicTasks):
@periodic_task.periodic_task(spacing=10)
def bar(self, context):
return 'bar'
m = Manager(self.conf)
# Ensure initial values are correct
self.assertEqual(1, len(m._periodic_tasks))
task_name, task = m._periodic_tasks[0]
# Test task values
self.assertEqual('bar', task_name)
self.assertEqual(10, task._periodic_spacing)
self.assertTrue(task._periodic_enabled)
self.assertFalse(task._periodic_external_ok)
self.assertFalse(task._periodic_immediate)
self.assertAlmostEqual(32503680000.0,
task._periodic_last_run)
# Test the manager's representation of those values
self.assertEqual(10, m._periodic_spacing[task_name])
self.assertAlmostEqual(32503680000.0,
m._periodic_last_run[task_name])
mock_now.return_value = fake_time + 5
idle = m.run_periodic_tasks(None)
self.assertAlmostEqual(5, idle, 1)
self.assertAlmostEqual(32503680000.0,
m._periodic_last_run[task_name])
mock_now.return_value = fake_time + 10
idle = m.run_periodic_tasks(None)
self.assertAlmostEqual(10, idle, 1)
self.assertAlmostEqual(32503680010.0,
m._periodic_last_run[task_name])
@mock.patch('oslo_service.periodic_task.now')
def test_periodic_tasks_immediate_runs_now(self, mock_now):
fake_time = 32503680000.0
mock_now.return_value = fake_time
class Manager(periodic_task.PeriodicTasks):
@periodic_task.periodic_task(spacing=10, run_immediately=True)
def bar(self, context):
return 'bar'
m = Manager(self.conf)
# Ensure initial values are correct
self.assertEqual(1, len(m._periodic_tasks))
task_name, task = m._periodic_tasks[0]
# Test task values
self.assertEqual('bar', task_name)
self.assertEqual(10, task._periodic_spacing)
self.assertTrue(task._periodic_enabled)
self.assertFalse(task._periodic_external_ok)
self.assertTrue(task._periodic_immediate)
self.assertIsNone(task._periodic_last_run)
# Test the manager's representation of those values
self.assertEqual(10, m._periodic_spacing[task_name])
self.assertIsNone(m._periodic_last_run[task_name])
idle = m.run_periodic_tasks(None)
self.assertAlmostEqual(32503680000.0,
m._periodic_last_run[task_name])
self.assertAlmostEqual(10, idle, 1)
mock_now.return_value = fake_time + 5
idle = m.run_periodic_tasks(None)
self.assertAlmostEqual(5, idle, 1)
def test_periodic_tasks_disabled(self):
class Manager(periodic_task.PeriodicTasks):
@periodic_task.periodic_task(spacing=-1)
def bar(self):
return 'bar'
m = Manager(self.conf)
idle = m.run_periodic_tasks(None)
self.assertAlmostEqual(60, idle, 1)
def test_external_running_here(self):
self.config(run_external_periodic_tasks=True)
class Manager(periodic_task.PeriodicTasks):
@periodic_task.periodic_task(spacing=200, external_process_ok=True)
def bar(self):
return 'bar'
m = Manager(self.conf)
self.assertThat(m._periodic_tasks, matchers.HasLength(1))
@mock.patch('oslo_service.periodic_task.now')
@mock.patch('random.random')
def test_nearest_boundary(self, mock_random, mock_now):
mock_now.return_value = 19
mock_random.return_value = 0
self.assertEqual(17, periodic_task._nearest_boundary(10, 7))
mock_now.return_value = 28
self.assertEqual(27, periodic_task._nearest_boundary(13, 7))
mock_now.return_value = 1841
self.assertEqual(1837, periodic_task._nearest_boundary(781, 88))
mock_now.return_value = 1835
self.assertEqual(mock_now.return_value,
periodic_task._nearest_boundary(None, 88))
# Add 5% jitter
mock_random.return_value = 1.0
mock_now.return_value = 1300
self.assertEqual(1200 + 10, periodic_task._nearest_boundary(1000, 200))
# Add 2.5% jitter
mock_random.return_value = 0.5
mock_now.return_value = 1300
self.assertEqual(1200 + 5, periodic_task._nearest_boundary(1000, 200))

@@ -1,647 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit Tests for service class
"""
from __future__ import print_function
import logging
import multiprocessing
import os
import signal
import socket
import time
import traceback
import eventlet
from eventlet import event
import mock
from oslotest import base as test_base
from oslo_service import service
from oslo_service.tests import base
from oslo_service.tests import eventlet_service
LOG = logging.getLogger(__name__)


class ExtendedService(service.Service):
def test_method(self):
return 'service'


class ServiceManagerTestCase(test_base.BaseTestCase):
"""Test cases for Services."""
def test_override_manager_method(self):
serv = ExtendedService()
serv.start()
self.assertEqual('service', serv.test_method())


class ServiceWithTimer(service.Service):
def __init__(self, ready_event=None):
super(ServiceWithTimer, self).__init__()
self.ready_event = ready_event
def start(self):
super(ServiceWithTimer, self).start()
self.timer_fired = 0
self.tg.add_timer(1, self.timer_expired)
def wait(self):
if self.ready_event:
self.ready_event.set()
super(ServiceWithTimer, self).wait()
def timer_expired(self):
self.timer_fired = self.timer_fired + 1


class ServiceCrashOnStart(ServiceWithTimer):
def start(self):
super(ServiceCrashOnStart, self).start()
raise ValueError


class ServiceTestBase(base.ServiceBaseTestCase):
"""A base class for ServiceLauncherTest and ServiceRestartTest."""
def _spawn_service(self,
workers=1,
service_maker=None,
launcher_maker=None):
self.workers = workers
pid = os.fork()
if pid == 0:
os.setsid()
# NOTE(johannes): We can't let the child processes exit back
# into the unit test framework since then we'll have multiple
# processes running the same tests (and possibly forking more
# processes that end up in the same situation). So we need
# to catch all exceptions and make sure nothing leaks out, in
# particular SystemExit, which is raised by sys.exit(). We use
# os._exit() which doesn't have this problem.
status = 0
try:
serv = service_maker() if service_maker else ServiceWithTimer()
if launcher_maker:
launcher = launcher_maker()
launcher.launch_service(serv, workers=workers)
else:
launcher = service.launch(self.conf, serv, workers=workers)
status = launcher.wait()
except SystemExit as exc:
status = exc.code
except BaseException:
# We need to be defensive here too
try:
traceback.print_exc()
except BaseException:
print("Couldn't print traceback")
status = 2
# Really exit
os._exit(status or 0)
return pid
def _wait(self, cond, timeout):
start = time.time()
while not cond():
if time.time() - start > timeout:
break
time.sleep(.1)
def setUp(self):
super(ServiceTestBase, self).setUp()
# NOTE(markmc): ConfigOpts.log_opt_values() uses CONF.config-file
self.conf(args=[], default_config_files=[])
self.addCleanup(self.conf.reset)
self.addCleanup(self._reap_pid)
self.pid = 0
def _reap_pid(self):
if self.pid:
# Make sure all processes are stopped
os.kill(self.pid, signal.SIGTERM)
# Make sure we reap our test process
self._reap_test()
def _reap_test(self):
pid, status = os.waitpid(self.pid, 0)
self.pid = None
return status


class ServiceLauncherTest(ServiceTestBase):
"""Originally from nova/tests/integrated/test_multiprocess_api.py."""
def _spawn(self):
self.pid = self._spawn_service(workers=2)
# Wait at most 10 seconds to spawn workers
cond = lambda: self.workers == len(self._get_workers())
timeout = 10
self._wait(cond, timeout)
workers = self._get_workers()
self.assertEqual(len(workers), self.workers)
return workers
def _get_workers(self):
f = os.popen('ps ax -o pid,ppid,command')
# Skip ps header
f.readline()
processes = [tuple(int(p) for p in l.strip().split()[:2])
for l in f]
return [p for p, pp in processes if pp == self.pid]
def test_killed_worker_recover(self):
start_workers = self._spawn()
# kill one worker and check if new worker can come up
        LOG.info('pid of first child is %s', start_workers[0])
os.kill(start_workers[0], signal.SIGTERM)
# Wait at most 5 seconds to respawn a worker
cond = lambda: start_workers != self._get_workers()
timeout = 5
self._wait(cond, timeout)
# Make sure worker pids don't match
end_workers = self._get_workers()
        LOG.info('workers: %r', end_workers)
self.assertNotEqual(start_workers, end_workers)
def _terminate_with_signal(self, sig):
self._spawn()
os.kill(self.pid, sig)
# Wait at most 5 seconds to kill all workers
cond = lambda: not self._get_workers()
timeout = 5
self._wait(cond, timeout)
workers = self._get_workers()
        LOG.info('workers: %r', workers)
        self.assertFalse(workers, 'OS processes still running: %r' % workers)
def test_terminate_sigkill(self):
self._terminate_with_signal(signal.SIGKILL)
status = self._reap_test()
self.assertTrue(os.WIFSIGNALED(status))
self.assertEqual(signal.SIGKILL, os.WTERMSIG(status))
def test_terminate_sigterm(self):
self._terminate_with_signal(signal.SIGTERM)
status = self._reap_test()
self.assertTrue(os.WIFEXITED(status))
self.assertEqual(0, os.WEXITSTATUS(status))
def test_crashed_service(self):
service_maker = lambda: ServiceCrashOnStart()
self.pid = self._spawn_service(service_maker=service_maker)
status = self._reap_test()
self.assertTrue(os.WIFEXITED(status))
self.assertEqual(1, os.WEXITSTATUS(status))
def test_child_signal_sighup(self):
start_workers = self._spawn()
os.kill(start_workers[0], signal.SIGHUP)
# Wait at most 5 seconds to respawn a worker
cond = lambda: start_workers == self._get_workers()
timeout = 5
self._wait(cond, timeout)
# Make sure worker pids match
end_workers = self._get_workers()
        LOG.info('workers: %r', end_workers)
self.assertEqual(start_workers, end_workers)
def test_parent_signal_sighup(self):
start_workers = self._spawn()
os.kill(self.pid, signal.SIGHUP)
def cond():
workers = self._get_workers()
return (len(workers) == len(start_workers) and
not set(start_workers).intersection(workers))
        # Wait at most 10 seconds to respawn the workers
        timeout = 10
self._wait(cond, timeout)
self.assertTrue(cond())


class ServiceRestartTest(ServiceTestBase):
def _spawn(self):
ready_event = multiprocessing.Event()
service_maker = lambda: ServiceWithTimer(ready_event=ready_event)
self.pid = self._spawn_service(service_maker=service_maker)
return ready_event
def test_service_restart(self):
ready = self._spawn()
timeout = 5
ready.wait(timeout)
self.assertTrue(ready.is_set(), 'Service never became ready')
ready.clear()
os.kill(self.pid, signal.SIGHUP)
ready.wait(timeout)
        self.assertTrue(ready.is_set(),
                        'Service never came back after SIGHUP')
def test_terminate_sigterm(self):
ready = self._spawn()
timeout = 5
ready.wait(timeout)
self.assertTrue(ready.is_set(), 'Service never became ready')
os.kill(self.pid, signal.SIGTERM)
status = self._reap_test()
self.assertTrue(os.WIFEXITED(status))
self.assertEqual(0, os.WEXITSTATUS(status))
def test_mutate_hook_service_launcher(self):
"""Test mutate_config_files is called by ServiceLauncher on SIGHUP.
Not using _spawn_service because ServiceLauncher doesn't fork and it's
simplest to stay all in one process.
"""
mutate = multiprocessing.Event()
self.conf.register_mutate_hook(lambda c, f: mutate.set())
launcher = service.launch(
self.conf, ServiceWithTimer(), restart_method='mutate')
self.assertFalse(mutate.is_set(), "Hook was called too early")
launcher.restart()
self.assertTrue(mutate.is_set(), "Hook wasn't called")
def test_mutate_hook_process_launcher(self):
"""Test mutate_config_files is called by ProcessLauncher on SIGHUP.
Forks happen in _spawn_service and ProcessLauncher. So we get three
tiers of processes, the top tier being the test process. self.pid
refers to the middle tier, which represents our application. Both
service_maker and launcher_maker execute in the middle tier. The bottom
tier is the workers.
The behavior we want is that when the application (middle tier)
receives a SIGHUP, it catches that, calls mutate_config_files and
relaunches all the workers. This causes them to inherit the mutated
config.
"""
mutate = multiprocessing.Event()
ready = multiprocessing.Event()
def service_maker():
self.conf.register_mutate_hook(lambda c, f: mutate.set())
return ServiceWithTimer(ready)
def launcher_maker():
return service.ProcessLauncher(self.conf, restart_method='mutate')
self.pid = self._spawn_service(1, service_maker, launcher_maker)
timeout = 5
ready.wait(timeout)
self.assertTrue(ready.is_set(), 'Service never became ready')
ready.clear()
self.assertFalse(mutate.is_set(), "Hook was called too early")
os.kill(self.pid, signal.SIGHUP)
ready.wait(timeout)
        self.assertTrue(ready.is_set(),
                        'Service never came back after SIGHUP')
self.assertTrue(mutate.is_set(), "Hook wasn't called")


class _Service(service.Service):
def __init__(self):
super(_Service, self).__init__()
self.init = event.Event()
self.cleaned_up = False
def start(self):
self.init.send()
def stop(self):
self.cleaned_up = True
super(_Service, self).stop()


class LauncherTest(base.ServiceBaseTestCase):
def test_graceful_shutdown(self):
# test that services are given a chance to clean up:
svc = _Service()
launcher = service.launch(self.conf, svc)
# wait on 'init' so we know the service had time to start:
svc.init.wait()
launcher.stop()
self.assertTrue(svc.cleaned_up)
# make sure stop can be called more than once. (i.e. play nice with
# unit test fixtures in nova bug #1199315)
launcher.stop()
@mock.patch('oslo_service.service.ServiceLauncher.launch_service')
def _test_launch_single(self, workers, mock_launch):
svc = service.Service()
service.launch(self.conf, svc, workers=workers)
mock_launch.assert_called_with(svc, workers=workers)
def test_launch_none(self):
self._test_launch_single(None)
def test_launch_one_worker(self):
self._test_launch_single(1)
def test_launch_invalid_workers_number(self):
svc = service.Service()
for num_workers in [0, -1]:
self.assertRaises(ValueError, service.launch, self.conf,
svc, num_workers)
@mock.patch('signal.alarm')
@mock.patch('oslo_service.service.ProcessLauncher.launch_service')
def test_multiple_worker(self, mock_launch, alarm_mock):
svc = service.Service()
service.launch(self.conf, svc, workers=3)
mock_launch.assert_called_with(svc, workers=3)
def test_launch_wrong_service_base_class(self):
# check that services that do not subclass service.ServiceBase
# can not be launched.
svc = mock.Mock()
self.assertRaises(TypeError, service.launch, self.conf, svc)
@mock.patch('signal.alarm')
@mock.patch("oslo_service.service.Services.add")
@mock.patch("oslo_service.eventlet_backdoor.initialize_if_enabled")
def test_check_service_base(self, initialize_if_enabled_mock,
services_mock,
alarm_mock):
initialize_if_enabled_mock.return_value = None
launcher = service.Launcher(self.conf)
serv = _Service()
launcher.launch_service(serv)
@mock.patch('signal.alarm')
@mock.patch("oslo_service.service.Services.add")
@mock.patch("oslo_service.eventlet_backdoor.initialize_if_enabled")
def test_check_service_base_fails(self, initialize_if_enabled_mock,
services_mock,
alarm_mock):
initialize_if_enabled_mock.return_value = None
launcher = service.Launcher(self.conf)
class FooService(object):
def __init__(self):
pass
serv = FooService()
self.assertRaises(TypeError, launcher.launch_service, serv)


class ProcessLauncherTest(base.ServiceBaseTestCase):
@mock.patch('signal.alarm')
@mock.patch("signal.signal")
def test_stop(self, signal_mock, alarm_mock):
signal_mock.SIGTERM = 15
launcher = service.ProcessLauncher(self.conf)
self.assertTrue(launcher.running)
pid_nums = [22, 222]
fakeServiceWrapper = service.ServiceWrapper(service.Service(), 1)
launcher.children = {pid_nums[0]: fakeServiceWrapper,
pid_nums[1]: fakeServiceWrapper}
with mock.patch('oslo_service.service.os.kill') as mock_kill:
with mock.patch.object(launcher, '_wait_child') as _wait_child:
def fake_wait_child():
pid = pid_nums.pop()
return launcher.children.pop(pid)
_wait_child.side_effect = fake_wait_child
with mock.patch('oslo_service.service.Service.stop') as \
mock_service_stop:
mock_service_stop.side_effect = lambda: None
launcher.stop()
self.assertFalse(launcher.running)
self.assertFalse(launcher.children)
self.assertEqual([mock.call(222, signal_mock.SIGTERM),
mock.call(22, signal_mock.SIGTERM)],
mock_kill.mock_calls)
mock_service_stop.assert_called_once_with()
def test__handle_signal(self):
signal_handler = service.SignalHandler()
signal_handler.clear()
self.assertEqual(0,
len(signal_handler._signal_handlers[signal.SIGTERM]))
call_1, call_2 = mock.Mock(), mock.Mock()
signal_handler.add_handler('SIGTERM', call_1)
signal_handler.add_handler('SIGTERM', call_2)
self.assertEqual(2,
len(signal_handler._signal_handlers[signal.SIGTERM]))
signal_handler._handle_signal(signal.SIGTERM, 'test')
# execute pending eventlet callbacks
time.sleep(0)
for m in signal_handler._signal_handlers[signal.SIGTERM]:
m.assert_called_once_with(signal.SIGTERM, 'test')
signal_handler.clear()
@mock.patch('signal.alarm')
@mock.patch("os.kill")
@mock.patch("oslo_service.service.ProcessLauncher.stop")
@mock.patch("oslo_service.service.ProcessLauncher._respawn_children")
@mock.patch("oslo_service.service.ProcessLauncher.handle_signal")
@mock.patch("oslo_config.cfg.CONF.log_opt_values")
@mock.patch("oslo_service.systemd.notify_once")
@mock.patch("oslo_config.cfg.CONF.reload_config_files")
@mock.patch("oslo_service.service._is_sighup_and_daemon")
def test_parent_process_reload_config(self,
is_sighup_and_daemon_mock,
reload_config_files_mock,
notify_once_mock,
log_opt_values_mock,
handle_signal_mock,
respawn_children_mock,
stop_mock,
kill_mock,
alarm_mock):
is_sighup_and_daemon_mock.return_value = True
respawn_children_mock.side_effect = [None,
eventlet.greenlet.GreenletExit()]
launcher = service.ProcessLauncher(self.conf)
launcher.sigcaught = 1
launcher.children = {}
wrap_mock = mock.Mock()
launcher.children[222] = wrap_mock
launcher.wait()
reload_config_files_mock.assert_called_once_with()
wrap_mock.service.reset.assert_called_once_with()
@mock.patch("oslo_service.service.ProcessLauncher._start_child")
@mock.patch("oslo_service.service.ProcessLauncher.handle_signal")
@mock.patch("eventlet.greenio.GreenPipe")
@mock.patch("os.pipe")
def test_check_service_base(self, pipe_mock, green_pipe_mock,
handle_signal_mock, start_child_mock):
pipe_mock.return_value = [None, None]
launcher = service.ProcessLauncher(self.conf)
serv = _Service()
launcher.launch_service(serv, workers=0)
@mock.patch("oslo_service.service.ProcessLauncher._start_child")
@mock.patch("oslo_service.service.ProcessLauncher.handle_signal")
@mock.patch("eventlet.greenio.GreenPipe")
@mock.patch("os.pipe")
def test_check_service_base_fails(self, pipe_mock, green_pipe_mock,
handle_signal_mock, start_child_mock):
pipe_mock.return_value = [None, None]
launcher = service.ProcessLauncher(self.conf)
class FooService(object):
def __init__(self):
pass
serv = FooService()
self.assertRaises(TypeError, launcher.launch_service, serv, 0)


class GracefulShutdownTestService(service.Service):
def __init__(self):
super(GracefulShutdownTestService, self).__init__()
self.finished_task = event.Event()
def start(self, sleep_amount):
def sleep_and_send(finish_event):
time.sleep(sleep_amount)
finish_event.send()
self.tg.add_thread(sleep_and_send, self.finished_task)
def exercise_graceful_test_service(sleep_amount, time_to_wait, graceful):
svc = GracefulShutdownTestService()
svc.start(sleep_amount)
svc.stop(graceful)
def wait_for_task(svc):
svc.finished_task.wait()
return eventlet.timeout.with_timeout(time_to_wait, wait_for_task,
svc=svc, timeout_value="Timeout!")
class ServiceTest(test_base.BaseTestCase):
def test_graceful_stop(self):
# Here we wait long enough for the task to gracefully finish.
self.assertIsNone(exercise_graceful_test_service(1, 2, True))
def test_ungraceful_stop(self):
# Here we stop ungracefully, and will never see the task finish.
self.assertEqual("Timeout!",
exercise_graceful_test_service(1, 2, False))
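The graceful/ungraceful semantics these two tests pin down can be sketched without eventlet. `TinyGroup` below is a hypothetical stand-in using stdlib threading: `graceful=True` joins the workers so their work completes, while `graceful=False` abandons them, so the finish marker is never observed:

```python
import threading
import time

class TinyGroup:
    """Hypothetical stand-in for a thread group with graceful stop."""

    def __init__(self):
        self.threads = []
        self.done = []

    def add(self, fn, *args):
        t = threading.Thread(target=fn, args=args, daemon=True)
        self.threads.append(t)
        t.start()

    def stop(self, graceful=False):
        # graceful=True: wait for workers; graceful=False: abandon them.
        if graceful:
            for t in self.threads:
                t.join()

def run_tiny(sleep_amount, graceful):
    g = TinyGroup()
    g.add(lambda: (time.sleep(sleep_amount), g.done.append(True)))
    g.stop(graceful)
    return bool(g.done)
```

`run_tiny(0.1, True)` sees the task finish; `run_tiny(0.2, False)` returns before it does, mirroring the "Timeout!" branch above.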
class EventletServerProcessLauncherTest(base.ServiceBaseTestCase):
def setUp(self):
super(EventletServerProcessLauncherTest, self).setUp()
self.conf(args=[], default_config_files=[])
self.addCleanup(self.conf.reset)
self.workers = 3
def run_server(self):
queue = multiprocessing.Queue()
proc = multiprocessing.Process(target=eventlet_service.run,
args=(queue,),
kwargs={'workers': self.workers})
proc.start()
port = queue.get()
conn = socket.create_connection(('127.0.0.1', port))
# NOTE(blk-u): The sleep shouldn't be necessary. There must be a bug in
# the server implementation where it takes some time to set up the
# server or signal handlers.
time.sleep(1)
return (proc, conn)
def test_shuts_down_on_sigint_when_client_connected(self):
proc, conn = self.run_server()
# check that server is live
self.assertTrue(proc.is_alive())
# send SIGINT to the server and wait for it to exit while client still
# connected.
os.kill(proc.pid, signal.SIGINT)
proc.join()
conn.close()
def test_graceful_shuts_down_on_sigterm_when_client_connected(self):
self.config(graceful_shutdown_timeout=7)
proc, conn = self.run_server()
# send SIGTERM to the server and wait for it to exit while client still
# connected.
os.kill(proc.pid, signal.SIGTERM)
# server with graceful shutdown must wait forewer if
# option graceful_shutdown_timeout is not specified.
# we can not wait forever ... so 3 seconds are enough
time.sleep(3)
self.assertTrue(proc.is_alive())
conn.close()
proc.join()
def test_graceful_stop_with_exceeded_graceful_shutdown_timeout(self):
        # Server must exit if graceful_shutdown_timeout is exceeded
graceful_shutdown_timeout = 4
self.config(graceful_shutdown_timeout=graceful_shutdown_timeout)
proc, conn = self.run_server()
time_before = time.time()
os.kill(proc.pid, signal.SIGTERM)
self.assertTrue(proc.is_alive())
proc.join()
self.assertFalse(proc.is_alive())
time_after = time.time()
self.assertTrue(time_after - time_before > graceful_shutdown_timeout)
class EventletServerServiceLauncherTest(EventletServerProcessLauncherTest):
def setUp(self):
super(EventletServerServiceLauncherTest, self).setUp()
self.workers = 1


@ -1,134 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os
import ssl
from oslo_config import cfg
from oslo_service import sslutils
from oslo_service.tests import base
CONF = cfg.CONF
SSL_CERT_DIR = os.path.normpath(os.path.join(
os.path.dirname(os.path.abspath(__file__)),
'ssl_cert'))
class SslutilsTestCase(base.ServiceBaseTestCase):
"""Test cases for sslutils."""
def setUp(self):
super(SslutilsTestCase, self).setUp()
self.cert_file_name = os.path.join(SSL_CERT_DIR, 'certificate.crt')
self.key_file_name = os.path.join(SSL_CERT_DIR, 'privatekey.key')
self.ca_file_name = os.path.join(SSL_CERT_DIR, 'ca.crt')
@mock.patch("%s.RuntimeError" % RuntimeError.__module__)
@mock.patch("os.path.exists")
def test_is_enabled(self, exists_mock, runtime_error_mock):
exists_mock.return_value = True
self.conf.set_default("cert_file", self.cert_file_name,
group=sslutils.config_section)
self.conf.set_default("key_file", self.key_file_name,
group=sslutils.config_section)
self.conf.set_default("ca_file", self.ca_file_name,
group=sslutils.config_section)
sslutils.is_enabled(self.conf)
self.assertFalse(runtime_error_mock.called)
@mock.patch("os.path.exists")
def test_is_enabled_no_ssl_cert_file_fails(self, exists_mock):
exists_mock.side_effect = [False]
self.conf.set_default("cert_file", "/no/such/file",
group=sslutils.config_section)
self.assertRaises(RuntimeError, sslutils.is_enabled, self.conf)
@mock.patch("os.path.exists")
def test_is_enabled_no_ssl_key_file_fails(self, exists_mock):
exists_mock.side_effect = [True, False]
self.conf.set_default("cert_file", self.cert_file_name,
group=sslutils.config_section)
self.conf.set_default("key_file", "/no/such/file",
group=sslutils.config_section)
self.assertRaises(RuntimeError, sslutils.is_enabled, self.conf)
@mock.patch("os.path.exists")
def test_is_enabled_no_ssl_ca_file_fails(self, exists_mock):
exists_mock.side_effect = [True, True, False]
self.conf.set_default("cert_file", self.cert_file_name,
group=sslutils.config_section)
self.conf.set_default("key_file", self.key_file_name,
group=sslutils.config_section)
self.conf.set_default("ca_file", "/no/such/file",
group=sslutils.config_section)
self.assertRaises(RuntimeError, sslutils.is_enabled, self.conf)
@mock.patch("ssl.wrap_socket")
@mock.patch("os.path.exists")
def _test_wrap(self, exists_mock, wrap_socket_mock, **kwargs):
exists_mock.return_value = True
sock = mock.Mock()
self.conf.set_default("cert_file", self.cert_file_name,
group=sslutils.config_section)
self.conf.set_default("key_file", self.key_file_name,
group=sslutils.config_section)
ssl_kwargs = {'server_side': True,
'certfile': self.conf.ssl.cert_file,
'keyfile': self.conf.ssl.key_file,
'cert_reqs': ssl.CERT_NONE,
}
if kwargs:
ssl_kwargs.update(**kwargs)
sslutils.wrap(self.conf, sock)
wrap_socket_mock.assert_called_once_with(sock, **ssl_kwargs)
def test_wrap(self):
self._test_wrap()
def test_wrap_ca_file(self):
self.conf.set_default("ca_file", self.ca_file_name,
group=sslutils.config_section)
ssl_kwargs = {'ca_certs': self.conf.ssl.ca_file,
'cert_reqs': ssl.CERT_REQUIRED
}
self._test_wrap(**ssl_kwargs)
def test_wrap_ciphers(self):
self.conf.set_default("ca_file", self.ca_file_name,
group=sslutils.config_section)
ciphers = (
'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+'
'AES:ECDH+HIGH:DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:'
'RSA+HIGH:RSA+3DES:!aNULL:!eNULL:!MD5:!DSS:!RC4'
)
self.conf.set_default("ciphers", ciphers,
group=sslutils.config_section)
ssl_kwargs = {'ca_certs': self.conf.ssl.ca_file,
'cert_reqs': ssl.CERT_REQUIRED,
'ciphers': ciphers}
self._test_wrap(**ssl_kwargs)
def test_wrap_ssl_version(self):
self.conf.set_default("ca_file", self.ca_file_name,
group=sslutils.config_section)
self.conf.set_default("version", "tlsv1",
group=sslutils.config_section)
ssl_kwargs = {'ca_certs': self.conf.ssl.ca_file,
'cert_reqs': ssl.CERT_REQUIRED,
'ssl_version': ssl.PROTOCOL_TLSv1}
self._test_wrap(**ssl_kwargs)


@ -1,78 +0,0 @@
# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import socket
import mock
from oslotest import base as test_base
from oslo_service import systemd
class SystemdTestCase(test_base.BaseTestCase):
"""Test case for Systemd service readiness."""
def test__abstractify(self):
sock_name = '@fake_socket'
res = systemd._abstractify(sock_name)
self.assertEqual('\0{0}'.format(sock_name[1:]), res)
@mock.patch.object(os, 'getenv', return_value='@fake_socket')
def _test__sd_notify(self, getenv_mock, unset_env=False):
self.ready = False
self.closed = False
class FakeSocket(object):
def __init__(self, family, type):
pass
def connect(fs, socket):
pass
def close(fs):
self.closed = True
def sendall(fs, data):
if data == b'READY=1':
self.ready = True
with mock.patch.object(socket, 'socket', new=FakeSocket):
if unset_env:
systemd.notify_once()
else:
systemd.notify()
self.assertTrue(self.ready)
self.assertTrue(self.closed)
def test_notify(self):
self._test__sd_notify()
def test_notify_once(self):
os.environ['NOTIFY_SOCKET'] = '@fake_socket'
self._test__sd_notify(unset_env=True)
self.assertRaises(KeyError, os.environ.__getitem__, 'NOTIFY_SOCKET')
@mock.patch("socket.socket")
def test_onready(self, sock_mock):
recv_results = [b'READY=1', '', socket.timeout]
expected_results = [0, 1, 2]
for recv, expected in zip(recv_results, expected_results):
if recv == socket.timeout:
sock_mock.return_value.recv.side_effect = recv
else:
sock_mock.return_value.recv.return_value = recv
actual = systemd.onready('@fake_socket', 1)
self.assertEqual(expected, actual)
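The notify protocol these tests fake can be sketched as a plain datagram send. `sd_notify` below is an illustrative helper, not the `oslo_service.systemd` API; the `@`-to-NUL mapping mirrors what `_abstractify` does for Linux abstract-namespace sockets:

```python
import os
import socket

def sd_notify(payload=b'READY=1'):
    """Minimal sketch of the sd_notify protocol tested above.

    Sends the payload as a datagram to the socket named by NOTIFY_SOCKET.
    A leading '@' marks a Linux abstract-namespace socket, mapped to a
    leading NUL byte. Illustrative helper, not the systemd module's API.
    """
    addr = os.environ.get('NOTIFY_SOCKET')
    if not addr:
        return False
    if addr.startswith('@'):
        addr = '\0' + addr[1:]
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.connect(addr)
        sock.sendall(payload)
    finally:
        sock.close()
    return True
```

This is why the tests above can substitute a `FakeSocket`: the whole handshake is one `connect` plus one `sendall` of `READY=1`.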


@ -1,163 +0,0 @@
# Copyright (c) 2012 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Unit Tests for thread groups
"""
import time
from eventlet import event
from oslotest import base as test_base
from oslo_service import threadgroup
class ThreadGroupTestCase(test_base.BaseTestCase):
"""Test cases for thread group."""
def setUp(self):
super(ThreadGroupTestCase, self).setUp()
self.tg = threadgroup.ThreadGroup()
self.addCleanup(self.tg.stop)
def test_add_dynamic_timer(self):
def foo(*args, **kwargs):
pass
initial_delay = 1
periodic_interval_max = 2
self.tg.add_dynamic_timer(foo, initial_delay, periodic_interval_max,
'arg', kwarg='kwarg')
self.assertEqual(1, len(self.tg.timers))
timer = self.tg.timers[0]
self.assertTrue(timer._running)
self.assertEqual(('arg',), timer.args)
self.assertEqual({'kwarg': 'kwarg'}, timer.kw)
def test_stop_current_thread(self):
stop_event = event.Event()
quit_event = event.Event()
def stop_self(*args, **kwargs):
if args[0] == 1:
time.sleep(1)
self.tg.stop()
stop_event.send('stop_event')
quit_event.wait()
for i in range(0, 4):
self.tg.add_thread(stop_self, i, kwargs='kwargs')
stop_event.wait()
self.assertEqual(1, len(self.tg.threads))
quit_event.send('quit_event')
def test_stop_immediately(self):
def foo(*args, **kwargs):
time.sleep(1)
start_time = time.time()
self.tg.add_thread(foo, 'arg', kwarg='kwarg')
time.sleep(0)
self.tg.stop()
end_time = time.time()
self.assertEqual(0, len(self.tg.threads))
self.assertTrue(end_time - start_time < 1)
def test_stop_gracefully(self):
def foo(*args, **kwargs):
time.sleep(1)
start_time = time.time()
self.tg.add_thread(foo, 'arg', kwarg='kwarg')
self.tg.stop(True)
end_time = time.time()
self.assertEqual(0, len(self.tg.threads))
self.assertTrue(end_time - start_time >= 1)
def test_cancel_early(self):
def foo(*args, **kwargs):
time.sleep(1)
self.tg.add_thread(foo, 'arg', kwarg='kwarg')
self.tg.cancel()
self.assertEqual(0, len(self.tg.threads))
def test_cancel_late(self):
def foo(*args, **kwargs):
time.sleep(0.3)
self.tg.add_thread(foo, 'arg', kwarg='kwarg')
time.sleep(0)
self.tg.cancel()
self.assertEqual(1, len(self.tg.threads))
def test_cancel_timeout(self):
def foo(*args, **kwargs):
time.sleep(0.3)
self.tg.add_thread(foo, 'arg', kwarg='kwarg')
time.sleep(0)
self.tg.cancel(timeout=0.2, wait_time=0.1)
self.assertEqual(0, len(self.tg.threads))
def test_stop_timers(self):
def foo(*args, **kwargs):
pass
self.tg.add_timer('1234', foo)
self.assertEqual(1, len(self.tg.timers))
self.tg.stop_timers()
self.assertEqual(0, len(self.tg.timers))
def test_add_and_remove_timer(self):
def foo(*args, **kwargs):
pass
timer = self.tg.add_timer('1234', foo)
self.assertEqual(1, len(self.tg.timers))
timer.stop()
self.assertEqual(1, len(self.tg.timers))
self.tg.timer_done(timer)
self.assertEqual(0, len(self.tg.timers))
def test_add_and_remove_dynamic_timer(self):
def foo(*args, **kwargs):
pass
initial_delay = 1
periodic_interval_max = 2
timer = self.tg.add_dynamic_timer(foo, initial_delay,
periodic_interval_max)
self.assertEqual(1, len(self.tg.timers))
self.assertTrue(timer._running)
timer.stop()
self.assertEqual(1, len(self.tg.timers))
self.tg.timer_done(timer)
self.assertEqual(0, len(self.tg.timers))


@ -1,380 +0,0 @@
# Copyright 2011 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Unit tests for `wsgi`."""
import os
import platform
import six
import socket
import tempfile
import testtools
import eventlet
import eventlet.wsgi
import mock
import requests
import webob
from oslo_config import cfg
from oslo_service import _options
from oslo_service import sslutils
from oslo_service.tests import base
from oslo_service import wsgi
from oslo_utils import netutils
from oslotest import moxstubout
SSL_CERT_DIR = os.path.normpath(os.path.join(
os.path.dirname(os.path.abspath(__file__)),
'ssl_cert'))
CONF = cfg.CONF
class WsgiTestCase(base.ServiceBaseTestCase):
"""Base class for WSGI tests."""
def setUp(self):
super(WsgiTestCase, self).setUp()
self.conf_fixture.register_opts(_options.wsgi_opts)
self.conf(args=[], default_config_files=[])
class TestLoaderNothingExists(WsgiTestCase):
"""Loader tests where os.path.exists always returns False."""
def setUp(self):
super(TestLoaderNothingExists, self).setUp()
mox_fixture = self.useFixture(moxstubout.MoxStubout())
self.stubs = mox_fixture.stubs
self.stubs.Set(os.path, 'exists', lambda _: False)
def test_relpath_config_not_found(self):
self.config(api_paste_config='api-paste.ini')
self.assertRaises(
wsgi.ConfigNotFound,
wsgi.Loader,
self.conf
)
    def test_abspath_config_not_found(self):
self.config(api_paste_config='/etc/openstack-srv/api-paste.ini')
self.assertRaises(
wsgi.ConfigNotFound,
wsgi.Loader,
self.conf
)
class TestLoaderNormalFilesystem(WsgiTestCase):
"""Loader tests with normal filesystem (unmodified os.path module)."""
_paste_config = """
[app:test_app]
use = egg:Paste#static
document_root = /tmp
"""
def setUp(self):
super(TestLoaderNormalFilesystem, self).setUp()
self.paste_config = tempfile.NamedTemporaryFile(mode="w+t")
self.paste_config.write(self._paste_config.lstrip())
self.paste_config.seek(0)
self.paste_config.flush()
self.config(api_paste_config=self.paste_config.name)
self.loader = wsgi.Loader(CONF)
def test_config_found(self):
self.assertEqual(self.paste_config.name, self.loader.config_path)
def test_app_not_found(self):
self.assertRaises(
wsgi.PasteAppNotFound,
self.loader.load_app,
"nonexistent app",
)
def test_app_found(self):
url_parser = self.loader.load_app("test_app")
self.assertEqual("/tmp", url_parser.directory)
def tearDown(self):
self.paste_config.close()
super(TestLoaderNormalFilesystem, self).tearDown()
class TestWSGIServer(WsgiTestCase):
"""WSGI server tests."""
def setUp(self):
super(TestWSGIServer, self).setUp()
def test_no_app(self):
server = wsgi.Server(self.conf, "test_app", None)
self.assertEqual("test_app", server.name)
def test_custom_max_header_line(self):
self.config(max_header_line=4096) # Default value is 16384
wsgi.Server(self.conf, "test_custom_max_header_line", None)
self.assertEqual(eventlet.wsgi.MAX_HEADER_LINE,
self.conf.max_header_line)
def test_start_random_port(self):
server = wsgi.Server(self.conf, "test_random_port", None,
host="127.0.0.1", port=0)
server.start()
self.assertNotEqual(0, server.port)
server.stop()
server.wait()
@testtools.skipIf(not netutils.is_ipv6_enabled(), "no ipv6 support")
def test_start_random_port_with_ipv6(self):
server = wsgi.Server(self.conf, "test_random_port", None,
host="::1", port=0)
server.start()
self.assertEqual("::1", server.host)
self.assertNotEqual(0, server.port)
server.stop()
server.wait()
@testtools.skipIf(platform.mac_ver()[0] != '',
'SO_REUSEADDR behaves differently '
'on OSX, see bug 1436895')
def test_socket_options_for_simple_server(self):
        # test that normal socket options are set properly
self.config(tcp_keepidle=500)
server = wsgi.Server(self.conf, "test_socket_options", None,
host="127.0.0.1", port=0)
server.start()
sock = server.socket
self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET,
socket.SO_REUSEADDR))
self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET,
socket.SO_KEEPALIVE))
if hasattr(socket, 'TCP_KEEPIDLE'):
self.assertEqual(self.conf.tcp_keepidle,
sock.getsockopt(socket.IPPROTO_TCP,
socket.TCP_KEEPIDLE))
self.assertFalse(server._server.dead)
server.stop()
server.wait()
self.assertTrue(server._server.dead)
@testtools.skipIf(not hasattr(socket, "AF_UNIX"),
'UNIX sockets not supported')
def test_server_with_unix_socket(self):
socket_file = self.get_temp_file_path('sock')
socket_mode = 0o644
server = wsgi.Server(self.conf, "test_socket_options", None,
socket_family=socket.AF_UNIX,
socket_mode=socket_mode,
socket_file=socket_file)
self.assertEqual(socket_file, server.socket.getsockname())
self.assertEqual(socket_mode,
os.stat(socket_file).st_mode & 0o777)
server.start()
self.assertFalse(server._server.dead)
server.stop()
server.wait()
self.assertTrue(server._server.dead)
def test_server_pool_waitall(self):
        # test that the pool's waitall() method is called while stopping
        # the server
server = wsgi.Server(self.conf, "test_server", None, host="127.0.0.1")
server.start()
with mock.patch.object(server._pool,
'waitall') as mock_waitall:
server.stop()
server.wait()
mock_waitall.assert_called_once_with()
def test_uri_length_limit(self):
eventlet.monkey_patch(os=False, thread=False)
server = wsgi.Server(self.conf, "test_uri_length_limit", None,
host="127.0.0.1", max_url_len=16384, port=33337)
server.start()
self.assertFalse(server._server.dead)
uri = "http://127.0.0.1:%d/%s" % (server.port, 10000 * 'x')
resp = requests.get(uri, proxies={"http": ""})
eventlet.sleep(0)
self.assertNotEqual(requests.codes.REQUEST_URI_TOO_LARGE,
resp.status_code)
uri = "http://127.0.0.1:%d/%s" % (server.port, 20000 * 'x')
resp = requests.get(uri, proxies={"http": ""})
eventlet.sleep(0)
self.assertEqual(requests.codes.REQUEST_URI_TOO_LARGE,
resp.status_code)
server.stop()
server.wait()
def test_reset_pool_size_to_default(self):
server = wsgi.Server(self.conf, "test_resize", None,
host="127.0.0.1", max_url_len=16384)
server.start()
# Stopping the server, which in turn sets pool size to 0
server.stop()
self.assertEqual(0, server._pool.size)
# Resetting pool size to default
server.reset()
server.start()
self.assertEqual(CONF.wsgi_default_pool_size, server._pool.size)
def test_client_socket_timeout(self):
self.config(client_socket_timeout=5)
# mocking eventlet spawn method to check it is called with
# configured 'client_socket_timeout' value.
with mock.patch.object(eventlet,
'spawn') as mock_spawn:
server = wsgi.Server(self.conf, "test_app", None,
host="127.0.0.1", port=0)
server.start()
_, kwargs = mock_spawn.call_args
self.assertEqual(self.conf.client_socket_timeout,
kwargs['socket_timeout'])
server.stop()
def test_wsgi_keep_alive(self):
self.config(wsgi_keep_alive=False)
# mocking eventlet spawn method to check it is called with
# configured 'wsgi_keep_alive' value.
with mock.patch.object(eventlet,
'spawn') as mock_spawn:
server = wsgi.Server(self.conf, "test_app", None,
host="127.0.0.1", port=0)
server.start()
_, kwargs = mock_spawn.call_args
self.assertEqual(self.conf.wsgi_keep_alive,
kwargs['keepalive'])
server.stop()
class TestWSGIServerWithSSL(WsgiTestCase):
"""WSGI server with SSL tests."""
def setUp(self):
super(TestWSGIServerWithSSL, self).setUp()
self.conf_fixture.register_opts(_options.ssl_opts,
sslutils.config_section)
cert_file_name = os.path.join(SSL_CERT_DIR, 'certificate.crt')
key_file_name = os.path.join(SSL_CERT_DIR, 'privatekey.key')
eventlet.monkey_patch(os=False, thread=False)
self.config(cert_file=cert_file_name,
key_file=key_file_name,
group=sslutils.config_section)
@testtools.skipIf(six.PY3, "bug/1482633: test hangs on Python 3")
def test_ssl_server(self):
def test_app(env, start_response):
start_response('200 OK', {})
return ['PONG']
fake_ssl_server = wsgi.Server(self.conf, "fake_ssl", test_app,
host="127.0.0.1", port=0, use_ssl=True)
fake_ssl_server.start()
self.assertNotEqual(0, fake_ssl_server.port)
response = requests.post(
'https://127.0.0.1:%s/' % fake_ssl_server.port,
verify=os.path.join(SSL_CERT_DIR, 'ca.crt'), data='PING')
self.assertEqual('PONG', response.text)
fake_ssl_server.stop()
fake_ssl_server.wait()
@testtools.skipIf(six.PY3, "bug/1482633: test hangs on Python 3")
def test_two_servers(self):
def test_app(env, start_response):
start_response('200 OK', {})
return ['PONG']
fake_ssl_server = wsgi.Server(self.conf, "fake_ssl", test_app,
host="127.0.0.1", port=0, use_ssl=True)
fake_ssl_server.start()
self.assertNotEqual(0, fake_ssl_server.port)
fake_server = wsgi.Server(self.conf, "fake", test_app,
host="127.0.0.1", port=0)
fake_server.start()
self.assertNotEqual(0, fake_server.port)
response = requests.post(
'https://127.0.0.1:%s/' % fake_ssl_server.port,
verify=os.path.join(SSL_CERT_DIR, 'ca.crt'), data='PING')
self.assertEqual('PONG', response.text)
response = requests.post(
'http://127.0.0.1:%s/' % fake_server.port, data='PING')
self.assertEqual('PONG', response.text)
fake_ssl_server.stop()
fake_ssl_server.wait()
fake_server.stop()
fake_server.wait()
@testtools.skipIf(platform.mac_ver()[0] != '',
'SO_REUSEADDR behaves differently '
'on OSX, see bug 1436895')
@testtools.skipIf(six.PY3, "bug/1482633: test hangs on Python 3")
def test_socket_options_for_ssl_server(self):
        # test that normal socket options are set properly
self.config(tcp_keepidle=500)
server = wsgi.Server(self.conf, "test_socket_options", None,
host="127.0.0.1", port=0, use_ssl=True)
server.start()
sock = server.socket
self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET,
socket.SO_REUSEADDR))
self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET,
socket.SO_KEEPALIVE))
if hasattr(socket, 'TCP_KEEPIDLE'):
self.assertEqual(CONF.tcp_keepidle,
sock.getsockopt(socket.IPPROTO_TCP,
socket.TCP_KEEPIDLE))
server.stop()
server.wait()
@testtools.skipIf(not netutils.is_ipv6_enabled(), "no ipv6 support")
@testtools.skipIf(six.PY3, "bug/1482633: test hangs on Python 3")
def test_app_using_ipv6_and_ssl(self):
greetings = 'Hello, World!!!'
@webob.dec.wsgify
def hello_world(req):
return greetings
server = wsgi.Server(self.conf, "fake_ssl",
hello_world,
host="::1",
port=0,
use_ssl=True)
server.start()
response = requests.get('https://[::1]:%d/' % server.port,
verify=os.path.join(SSL_CERT_DIR, 'ca.crt'))
self.assertEqual(greetings, response.text)
server.stop()
server.wait()


@ -1,188 +0,0 @@
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import threading
import eventlet
from eventlet import greenpool
from oslo_service._i18n import _LE
from oslo_service import loopingcall
from oslo_utils import timeutils
LOG = logging.getLogger(__name__)
def _on_thread_done(_greenthread, group, thread):
"""Callback function to be passed to GreenThread.link() when we spawn().
Calls the :class:`ThreadGroup` to notify it to remove this thread from
the associated group.
"""
group.thread_done(thread)
class Thread(object):
"""Wrapper around a greenthread.
Holds a reference to the :class:`ThreadGroup`. The Thread will notify
    the :class:`ThreadGroup` when it has finished so it can be removed from
the threads list.
"""
def __init__(self, thread, group):
self.thread = thread
self.thread.link(_on_thread_done, group, self)
self._ident = id(thread)
@property
def ident(self):
return self._ident
def stop(self):
self.thread.kill()
def wait(self):
return self.thread.wait()
def link(self, func, *args, **kwargs):
self.thread.link(func, *args, **kwargs)
def cancel(self, *throw_args):
self.thread.cancel(*throw_args)
class ThreadGroup(object):
"""The point of the ThreadGroup class is to:
* keep track of timers and greenthreads (making it easier to stop them
when need be).
* provide an easy API to add timers.
"""
def __init__(self, thread_pool_size=10):
self.pool = greenpool.GreenPool(thread_pool_size)
self.threads = []
self.timers = []
def add_dynamic_timer(self, callback, initial_delay=None,
periodic_interval_max=None, *args, **kwargs):
timer = loopingcall.DynamicLoopingCall(callback, *args, **kwargs)
timer.start(initial_delay=initial_delay,
periodic_interval_max=periodic_interval_max)
self.timers.append(timer)
return timer
def add_timer(self, interval, callback, initial_delay=None,
*args, **kwargs):
pulse = loopingcall.FixedIntervalLoopingCall(callback, *args, **kwargs)
pulse.start(interval=interval,
initial_delay=initial_delay)
self.timers.append(pulse)
return pulse
def add_thread(self, callback, *args, **kwargs):
gt = self.pool.spawn(callback, *args, **kwargs)
th = Thread(gt, self)
self.threads.append(th)
return th
def thread_done(self, thread):
self.threads.remove(thread)
def timer_done(self, timer):
self.timers.remove(timer)
def _perform_action_on_threads(self, action_func, on_error_func):
current = threading.current_thread()
# Iterate over a copy of self.threads so thread_done doesn't
# modify the list while we're iterating
for x in self.threads[:]:
if x.ident == current.ident:
# Don't perform actions on the current thread.
continue
try:
action_func(x)
except eventlet.greenlet.GreenletExit: # nosec
# greenlet exited successfully
pass
except Exception:
on_error_func(x)
def _stop_threads(self):
self._perform_action_on_threads(
lambda x: x.stop(),
lambda x: LOG.exception(_LE('Error stopping thread.')))
def stop_timers(self):
for timer in self.timers:
timer.stop()
self.timers = []
def stop(self, graceful=False):
        """Stop all timers and, depending on the mode, the threads.

        * If graceful=True, wait for all threads to finish and never
          kill them.
        * If graceful=False, kill the threads immediately.
        """
self.stop_timers()
if graceful:
# In case of graceful=True, wait for all threads to be
# finished, never kill threads
self.wait()
else:
# In case of graceful=False(Default), kill threads
# immediately
self._stop_threads()
def wait(self):
for x in self.timers:
try:
x.wait()
except eventlet.greenlet.GreenletExit: # nosec
# greenlet exited successfully
pass
except Exception:
LOG.exception(_LE('Error waiting on timer.'))
self._perform_action_on_threads(
lambda x: x.wait(),
lambda x: LOG.exception(_LE('Error waiting on thread.')))
def _any_threads_alive(self):
current = threading.current_thread()
for x in self.threads[:]:
if x.ident == current.ident:
# Don't check current thread.
continue
if not x.thread.dead:
return True
return False
def cancel(self, *throw_args, **kwargs):
self._perform_action_on_threads(
lambda x: x.cancel(*throw_args),
lambda x: LOG.exception(_LE('Error canceling thread.')))
timeout = kwargs.get('timeout', None)
if timeout is None:
return
wait_time = kwargs.get('wait_time', 1)
watch = timeutils.StopWatch(duration=timeout)
watch.start()
while self._any_threads_alive():
if not watch.expired():
eventlet.sleep(wait_time)
continue
LOG.debug("Cancel timeout reached, stopping threads.")
self.stop()
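`_perform_action_on_threads` iterates over `self.threads[:]` because each action can remove the finished thread from the live list via `thread_done`. A minimal sketch of why the snapshot matters (hypothetical helper, same pattern as above):

```python
def act_on_all(items, action):
    """Iterate over a snapshot so the action may remove items safely.

    Mutating the live list during iteration would silently skip the
    neighbour of every removed element.
    """
    for item in items[:]:
        action(item)

# With the snapshot, every element is visited even though the action
# removes each one as it goes:
data = [1, 2, 3, 4]
seen = []
act_on_all(data, lambda x: (seen.append(x), data.remove(x)))
```

Iterating the live list instead would visit only 1 and 3: removing 1 shifts 2 into the slot the iterator has already passed.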


@ -1,356 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utility methods for working with WSGI servers."""
from __future__ import print_function
import copy
import os
import socket
import eventlet
import eventlet.wsgi
import greenlet
from paste import deploy
import routes.middleware
import webob.dec
import webob.exc
from oslo_log import log as logging
from oslo_service import _options
from oslo_service import service
from oslo_service import sslutils
from oslo_service._i18n import _, _LE, _LI
LOG = logging.getLogger(__name__)
def list_opts():
"""Entry point for oslo-config-generator."""
return [(None, copy.deepcopy(_options.wsgi_opts))]
def register_opts(conf):
"""Registers WSGI config options."""
return conf.register_opts(_options.wsgi_opts)
class InvalidInput(Exception):
message = _("Invalid input received: "
"Unexpected argument for periodic task creation: %(arg)s.")
class Server(service.ServiceBase):
"""Server class to manage a WSGI server, serving a WSGI application."""
# TODO(eezhova): Consider changing the default host value to prevent
# possible binding to all interfaces. The most appropriate value seems
# to be 127.0.0.1, but it has to be verified that the change wouldn't
# break any consuming project.
def __init__(self, conf, name, app, host='0.0.0.0', port=0, # nosec
pool_size=None, protocol=eventlet.wsgi.HttpProtocol,
backlog=128, use_ssl=False, max_url_len=None,
logger_name='eventlet.wsgi.server',
socket_family=None, socket_file=None, socket_mode=None):
"""Initialize, but do not start, a WSGI server.
:param conf: Instance of ConfigOpts.
:param name: Pretty name for logging.
:param app: The WSGI application to serve.
:param host: IP address to serve the application.
        :param port: Port number to serve the application.
:param pool_size: Maximum number of eventlets to spawn concurrently.
:param protocol: Protocol class.
:param backlog: Maximum number of queued connections.
:param use_ssl: Wraps the socket in an SSL context if True.
:param max_url_len: Maximum length of permitted URLs.
:param logger_name: The name for the logger.
:param socket_family: Socket family.
:param socket_file: location of UNIX socket.
:param socket_mode: UNIX socket mode.
:returns: None
:raises: InvalidInput
:raises: EnvironmentError
"""
self.conf = conf
self.conf.register_opts(_options.wsgi_opts)
self.default_pool_size = self.conf.wsgi_default_pool_size
# Allow operators to customize http requests max header line size.
eventlet.wsgi.MAX_HEADER_LINE = conf.max_header_line
self.name = name
self.app = app
self._server = None
self._protocol = protocol
self.pool_size = pool_size or self.default_pool_size
self._pool = eventlet.GreenPool(self.pool_size)
self._logger = logging.getLogger(logger_name)
self._use_ssl = use_ssl
self._max_url_len = max_url_len
self.client_socket_timeout = conf.client_socket_timeout or None
if backlog < 1:
raise InvalidInput(reason=_('The backlog must be more than 0'))
if not socket_family or socket_family in [socket.AF_INET,
socket.AF_INET6]:
self.socket = self._get_socket(host, port, backlog)
elif hasattr(socket, "AF_UNIX") and socket_family == socket.AF_UNIX:
self.socket = self._get_unix_socket(socket_file, socket_mode,
backlog)
        else:
            raise ValueError(_("Unsupported socket family: %s")
                             % socket_family)
(self.host, self.port) = self.socket.getsockname()[0:2]
if self._use_ssl:
sslutils.is_enabled(conf)
def _get_socket(self, host, port, backlog):
bind_addr = (host, port)
# TODO(dims): eventlet's green dns/socket module does not actually
# support IPv6 in getaddrinfo(). We need to get around this in the
# future or monitor upstream for a fix
try:
info = socket.getaddrinfo(bind_addr[0],
bind_addr[1],
socket.AF_UNSPEC,
socket.SOCK_STREAM)[0]
family = info[0]
bind_addr = info[-1]
except Exception:
family = socket.AF_INET
try:
sock = eventlet.listen(bind_addr, family, backlog=backlog)
except EnvironmentError:
LOG.error(_LE("Could not bind to %(host)s:%(port)s"),
{'host': host, 'port': port})
raise
sock = self._set_socket_opts(sock)
LOG.info(_LI("%(name)s listening on %(host)s:%(port)s"),
{'name': self.name, 'host': host, 'port': port})
return sock
def _get_unix_socket(self, socket_file, socket_mode, backlog):
sock = eventlet.listen(socket_file, family=socket.AF_UNIX,
backlog=backlog)
if socket_mode is not None:
os.chmod(socket_file, socket_mode)
        LOG.info(_LI("%(name)s listening on %(socket_file)s"),
                 {'name': self.name, 'socket_file': socket_file})
return sock
def start(self):
"""Start serving a WSGI application.
:returns: None
"""
        # The server socket object will be closed after the server exits,
        # but the underlying file descriptor will remain open and would
        # raise a "bad file descriptor" error, so duplicate the socket
        # object to keep the file descriptor usable.
self.dup_socket = self.socket.dup()
if self._use_ssl:
self.dup_socket = sslutils.wrap(self.conf, self.dup_socket)
wsgi_kwargs = {
'func': eventlet.wsgi.server,
'sock': self.dup_socket,
'site': self.app,
'protocol': self._protocol,
'custom_pool': self._pool,
'log': self._logger,
'log_format': self.conf.wsgi_log_format,
'debug': False,
'keepalive': self.conf.wsgi_keep_alive,
'socket_timeout': self.client_socket_timeout
}
if self._max_url_len:
wsgi_kwargs['url_length_limit'] = self._max_url_len
self._server = eventlet.spawn(**wsgi_kwargs)
def _set_socket_opts(self, _socket):
_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# sockets can hang around forever without keepalive
_socket.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# This option isn't available in the OS X version of eventlet
if hasattr(socket, 'TCP_KEEPIDLE'):
_socket.setsockopt(socket.IPPROTO_TCP,
socket.TCP_KEEPIDLE,
self.conf.tcp_keepidle)
return _socket
def reset(self):
"""Reset server greenpool size to default.
:returns: None
"""
self._pool.resize(self.pool_size)
    def stop(self):
        """Stop the eventlet server; new connections are no longer accepted.

        :returns: None
        """
LOG.info(_LI("Stopping WSGI server."))
if self._server is not None:
# let eventlet close socket
self._pool.resize(0)
self._server.kill()
    def wait(self):
        """Block until the server has stopped.

        Waits for the server's eventlet to finish, then returns.

        :returns: None
        """
try:
if self._server is not None:
num = self._pool.running()
                LOG.debug("Waiting for WSGI server to finish %d requests.",
                          num)
self._pool.waitall()
except greenlet.GreenletExit:
LOG.info(_LI("WSGI server has stopped."))
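The keepalive handling in `_set_socket_opts` can be exercised without eventlet. The sketch below uses only the standard `socket` module; the default `keepidle` value is a hypothetical stand-in for `conf.tcp_keepidle`, not an oslo.service default:

```python
import socket


def set_keepalive_opts(sock, keepidle=600):
    """Apply the same options Server._set_socket_opts sets (a sketch)."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Without keepalive, dead peers can leave connections open forever.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # TCP_KEEPIDLE is not available on every platform (e.g. OS X),
    # so guard with hasattr() exactly as the code above does.
    if hasattr(socket, 'TCP_KEEPIDLE'):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, keepidle)
    return sock
```

The `hasattr` guard is the important part: setting `TCP_KEEPIDLE` unconditionally would raise `AttributeError` on platforms whose `socket` module does not expose it.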
class Request(webob.Request):
pass
class Router(object):
"""WSGI middleware that maps incoming requests to WSGI apps."""
def __init__(self, mapper):
"""Create a router for the given routes.Mapper.
Each route in `mapper` must specify a 'controller', which is a
WSGI app to call. You'll probably want to specify an 'action' as
well and have your controller be an object that can route
the request to the action-specific method.
Examples:
mapper = routes.Mapper()
sc = ServerController()
# Explicit mapping of one route to a controller+action
mapper.connect(None, '/svrlist', controller=sc, action='list')
# Actions are all implicitly defined
mapper.resource('server', 'servers', controller=sc)
# Pointing to an arbitrary WSGI app. You can specify the
# {path_info:.*} parameter so the target app can be handed just that
# section of the URL.
mapper.connect(None, '/v1.0/{path_info:.*}', controller=BlogApp())
"""
self.map = mapper
self._router = routes.middleware.RoutesMiddleware(self._dispatch,
self.map)
@webob.dec.wsgify(RequestClass=Request)
def __call__(self, req):
"""Route the incoming request to a controller based on self.map.
If no match, return a 404.
"""
return self._router
@staticmethod
@webob.dec.wsgify(RequestClass=Request)
def _dispatch(req):
"""Dispatch the request to the appropriate controller.
Called by self._router after matching the incoming request to a route
and putting the information into req.environ. Either returns 404
or the routed WSGI app's response.
"""
match = req.environ['wsgiorg.routing_args'][1]
if not match:
return webob.exc.HTTPNotFound()
app = match['controller']
return app
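The dispatch pattern used by `Router` and `_dispatch` reduces to a small amount of plain WSGI once the `routes` and `webob` helpers are stripped away. This is an illustrative sketch, not the oslo.service API; `make_router` and `hello_app` are invented names:

```python
def make_router(routes_table):
    """Map exact paths to WSGI sub-apps, falling back to 404 --
    the same shape as Router._dispatch above (a sketch)."""
    def router(environ, start_response):
        app = routes_table.get(environ.get('PATH_INFO', ''))
        if app is None:
            start_response('404 Not Found',
                           [('Content-Type', 'text/plain')])
            return [b'Not Found']
        # Hand the request off to the matched WSGI application.
        return app(environ, start_response)
    return router


def hello_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']


router = make_router({'/hello': hello_app})
```

The resulting `router` is itself a WSGI app and could be served with `wsgiref.simple_server.make_server('', 8000, router)`.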
class ConfigNotFound(Exception):
def __init__(self, path):
msg = _('Could not find config at %(path)s') % {'path': path}
super(ConfigNotFound, self).__init__(msg)
class PasteAppNotFound(Exception):
def __init__(self, name, path):
msg = (_("Could not load paste app '%(name)s' from %(path)s") %
{'name': name, 'path': path})
super(PasteAppNotFound, self).__init__(msg)
class Loader(object):
"""Used to load WSGI applications from paste configurations."""
def __init__(self, conf):
"""Initialize the loader, and attempt to find the config.
:param conf: Application config
:returns: None
"""
conf.register_opts(_options.wsgi_opts)
self.config_path = None
config_path = conf.api_paste_config
if not os.path.isabs(config_path):
self.config_path = conf.find_file(config_path)
elif os.path.exists(config_path):
self.config_path = config_path
if not self.config_path:
raise ConfigNotFound(path=config_path)
def load_app(self, name):
"""Return the paste URLMap wrapped WSGI application.
:param name: Name of the application to load.
:returns: Paste URLMap object wrapping the requested application.
:raises: PasteAppNotFound
"""
try:
LOG.debug("Loading app %(name)s from %(path)s",
{'name': name, 'path': self.config_path})
return deploy.loadapp("config:%s" % self.config_path, name=name)
except LookupError:
LOG.exception(_LE("Couldn't lookup app: %s"), name)
raise PasteAppNotFound(name=name, path=self.config_path)

@ -1,18 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
WebOb>=1.2.3 # MIT
eventlet!=0.18.3,>=0.18.2 # MIT
greenlet>=0.3.2 # MIT
monotonic>=0.6 # Apache-2.0
oslo.utils>=3.16.0 # Apache-2.0
oslo.concurrency>=3.8.0 # Apache-2.0
oslo.config>=3.14.0 # Apache-2.0
oslo.log>=1.14.0 # Apache-2.0
six>=1.9.0 # MIT
oslo.i18n>=2.1.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
Routes!=2.0,!=2.1,!=2.3.0,>=1.12.3;python_version=='2.7' # MIT
Routes!=2.0,!=2.3.0,>=1.12.3;python_version!='2.7' # MIT
Paste # MIT

@ -1,59 +0,0 @@
[metadata]
name = oslo.service
summary = oslo.service library
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://wiki.openstack.org/wiki/Oslo#oslo.service
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.3
Programming Language :: Python :: 3.4
[files]
packages =
oslo_service
[pbr]
warnerrors = true
[entry_points]
oslo.config.opts =
oslo.service.periodic_task = oslo_service.periodic_task:list_opts
oslo.service.service = oslo_service.service:list_opts
oslo.service.sslutils = oslo_service.sslutils:list_opts
oslo.service.wsgi = oslo_service.wsgi:list_opts
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = oslo_service/locale
domain = oslo_service
[update_catalog]
domain = oslo_service
output_dir = oslo_service/locale
input_file = oslo_service/locale/oslo_service.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = oslo_service/locale/oslo_service.pot
[wheel]
universal = true

@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=1.8'],
pbr=True)

@ -1,18 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
fixtures>=3.0.0 # Apache-2.0/BSD
hacking<0.11,>=0.10.0
mock>=2.0 # BSD
oslotest>=1.10.0 # Apache-2.0
# These are needed for docs generation/testing
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
doc8 # Apache-2.0
coverage>=3.6 # Apache-2.0
# Bandit security code scanner
bandit>=1.0.1 # Apache-2.0

tox.ini
@ -1,53 +0,0 @@
[tox]
minversion = 1.6
envlist = py34,py27,pypy,pep8,bandit
[testenv]
deps = -r{toxinidir}/test-requirements.txt
whitelist_externals = find
commands =
find . -type f -name "*.pyc" -delete
python setup.py testr --slowest --testr-args='{posargs}'
[testenv:pep8]
commands = flake8
[testenv:py27]
commands =
find . -type f -name "*.pyc" -delete
python setup.py testr --slowest --testr-args='{posargs}'
doc8 --ignore-path "doc/source/history.rst" doc/source
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:cover]
commands = python setup.py test --coverage --coverage-package-name=oslo_service --testr-args='{posargs}'
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build
[hacking]
import_exceptions = oslo_service._i18n
[testenv:pip-missing-reqs]
# do not install test-requirements as that will pollute the virtualenv for
# determining missing packages
# this also means that pip-missing-reqs must be installed separately, outside
# of the requirements.txt files
deps = pip_missing_reqs
commands = pip-missing-reqs -d --ignore-module=oslo_service* --ignore-module=pkg_resources --ignore-file=oslo_service/tests/* oslo_service
[testenv:debug]
commands = oslo_debug_helper -t oslo_service/tests {posargs}
[testenv:bandit]
deps = -r{toxinidir}/test-requirements.txt
commands = bandit -c bandit.yaml -r oslo_service -n5 -p gate