Retire repo

This repo was created by accident; use deb-python-oslo.messaging
instead.

Needed-By: I1ac1a06931c8b6dd7c2e73620a0302c29e605f03
Change-Id: I81894aea69b9d09b0977039623c26781093a397a
Andreas Jaeger 2017-04-17 19:37:54 +02:00
parent a0336c8aa1
commit dc609fde79
209 changed files with 13 additions and 29819 deletions

.coveragerc

@@ -1,7 +0,0 @@
[run]
branch = True
source = oslo_messaging
omit = oslo_messaging/tests/*,oslo_messaging/openstack/*
[report]
ignore_errors = True

.gitignore

@@ -1,17 +0,0 @@
AUTHORS
ChangeLog
*~
*.swp
*.pyc
*.log
.tox
.coverage
*.egg-info/
.eggs
*.egg
build/
doc/build/
doc/source/api/
dist/
.testrepository/
releasenotes/build

.gitreview

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/oslo.messaging.git

.testr.conf

@@ -1,4 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

CONTRIBUTING.rst

@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/oslo.messaging

LICENSE

@@ -1,204 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
--- License for python-keystoneclient versions prior to 2.1 ---
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of this project nor the names of its contributors may
be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

README.rst

@@ -1,18 +0,0 @@
Oslo Messaging Library
======================
.. image:: https://img.shields.io/pypi/v/oslo.messaging.svg
:target: https://pypi.python.org/pypi/oslo.messaging/
:alt: Latest Version
.. image:: https://img.shields.io/pypi/dm/oslo.messaging.svg
:target: https://pypi.python.org/pypi/oslo.messaging/
:alt: Downloads
The Oslo messaging API supports RPC and notifications over a number of
different messaging transports.
* License: Apache License, Version 2.0
* Documentation: http://docs.openstack.org/developer/oslo.messaging
* Source: http://git.openstack.org/cgit/openstack/oslo.messaging
* Bugs: http://bugs.launchpad.net/oslo.messaging

README.txt

@@ -0,0 +1,13 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
Instead, use the project deb-python-oslo.messaging at
http://git.openstack.org/cgit/openstack/deb-python-oslo.messaging .
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.

babel.cfg

@@ -1 +0,0 @@
[python: **.py]

doc/source/AMQP1.0.rst

@@ -1,193 +0,0 @@
-------------------------
AMQP 1.0 Protocol Support
-------------------------
.. currentmodule:: oslo_messaging
============
Introduction
============
This release of oslo.messaging includes an experimental driver that
provides support for version 1.0 of the Advanced Message Queuing
Protocol (AMQP 1.0, ISO/IEC 19464).
The current implementation of this driver is considered
*experimental*. It is not recommended that this driver be used in
production systems. Rather, this driver is being provided as a
*technical preview*, in hopes that it will encourage further testing
by the AMQP 1.0 community.
More detail regarding the driver's implementation is available from the `specification`_.
.. _specification: https://git.openstack.org/cgit/openstack/oslo-specs/tree/specs/juno/amqp10-driver-implementation.rst
=============
Prerequisites
=============
This driver uses the Apache QPID `Proton`_ AMQP 1.0 protocol engine.
This engine consists of a platform-specific library and a Python
binding. The driver does not directly interface with the engine API,
as the API is a very low-level interface to the AMQP protocol.
Instead, the driver uses the pure python `Pyngus`_ client API, which
is layered on top of the protocol engine.
.. _Proton: http://qpid.apache.org/proton/index.html
.. _Pyngus: https://github.com/kgiusti/pyngus
In order to run the driver, the Proton Python bindings, the Proton
library, the Proton header files, and Pyngus must be installed.
Pyngus is available via `Pypi`__.
.. __: https://pypi.python.org/pypi/pyngus
While the Proton Python binding is available via `Pypi`__, it
includes a C extension that requires the Proton library and header
files to be pre-installed in order for the binding to install properly.
If the target platform's distribution provides a pre-packaged version
of the Proton Python binding (see packages_ below), it is recommended
to use these pre-built packages instead of pulling them from Pypi.
.. __: https://pypi.python.org/pypi/python-qpid-proton
The driver also requires a *broker* that supports version 1.0 of the
AMQP protocol.
The driver has only been tested using `qpidd`_ in a `patched
devstack`_ environment. The version of qpidd **must** be at least
0.26. qpidd also uses the Proton engine for its AMQP 1.0 support, so
the Proton library must be installed on the system hosting the qpidd
daemon.
.. _qpidd: http://qpid.apache.org/components/cpp-broker/index.html
.. _patched devstack: https://review.openstack.org/#/c/109118/
At present, RabbitMQ does not work with this driver. This driver
makes use of the *dynamic* flag on the link Source to automatically
provision a node at the peer. RabbitMQ's AMQP 1.0 implementation has
yet to implement this feature.
See the `specification`_ for additional information regarding testing
done on the driver.
=============
Configuration
=============
driver
------
It is recommended to start with the default configuration options
supported by the driver. The remaining configuration steps described
below assume that none of the driver's options have been manually
overridden.
**Note Well:** The driver currently does **not** support the generic
*amqp* options used by the existing drivers, such as
*amqp_durable_queues* or *amqp_auto_delete*. Support for these is
TBD.
qpidd
-----
First, verify that the Proton library has been installed and is
imported by the qpidd broker. This can be checked by running::
$ qpidd --help
and looking for the AMQP 1.0 options in the help text. If no AMQP 1.0
options are listed, verify that the Proton libraries are installed and
that the version of qpidd is greater than or equal to 0.26.
Second, configure the address patterns used by the driver. This is
done by adding the following to /etc/qpid/qpidd.conf::
queue-patterns=exclusive
queue-patterns=unicast
topic-patterns=broadcast
These patterns, *exclusive*, *unicast*, and *broadcast*, are the
default values used by the driver. These can be overridden via the
driver configuration options if desired. If manually overridden,
update the qpidd.conf values to match.
services
--------
The new driver is selected by specifying **amqp** as the transport
name. For example::
from oslo import messaging
from oslo.config import cfg
amqp_transport = messaging.get_transport(cfg.CONF,
"amqp://me:passwd@host:5672")
The new driver can be loaded and used by existing applications by
specifying *amqp* as the RPC backend in the service's configuration
file. For example, in nova.conf::
rpc_backend = amqp
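As a quick sanity check, here is a minimal, hedged sketch of issuing an
RPC call over such a transport (the ``test`` topic and ``ping`` method
are illustrative, not part of the driver)::

    from oslo import messaging
    from oslo.config import cfg

    # Build a transport that uses the AMQP 1.0 driver.
    transport = messaging.get_transport(cfg.CONF,
                                        "amqp://me:passwd@host:5672")
    target = messaging.Target(topic='test')
    client = messaging.RPCClient(transport, target)
    # 'ping' must be implemented by an endpoint on some listening server.
    result = client.call({}, 'ping', arg='hello')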
.. _packages:
======================
Platforms and Packages
======================
Pyngus is available via Pypi.
Pre-built packages for the Proton library and qpidd are available for
some popular distributions:
RHEL and Fedora
---------------
Packages exist in EPEL for RHEL/Centos 7, and Fedora 19+.
Unfortunately, RHEL/Centos 6 base packages include a very old version
of qpidd that does not support AMQP 1.0. EPEL's policy does not allow
a newer version of qpidd for RHEL/Centos 6.
The following packages must be installed on the system running the
qpidd daemon:
- qpid-cpp-server (version 0.26+)
- qpid-proton-c
The following packages must be installed on the systems running the
services that use the new driver:
- Proton libraries: qpid-proton-c-devel
- Proton python bindings: python-qpid-proton
- pyngus (via Pypi)
Debian and Ubuntu
-----------------
Packages for the Proton library, headers, and Python bindings are
available in the Debian/Testing repository. Proton packages are not
yet available in the Ubuntu repository. The version of qpidd on both
platforms is too old and does not support AMQP 1.0.
Until the proper package versions arrive, the latest packages can be
pulled from the `Apache Qpid PPA`_ on Launchpad::
sudo add-apt-repository ppa:qpid/released
.. _Apache Qpid PPA: https://launchpad.net/~qpid/+archive/ubuntu/released
The following packages must be installed on the system running the
qpidd daemon:
- qpidd (version 0.26+)
- libqpid-proton2
The following packages must be installed on the systems running the
services that use the new driver:
- Proton libraries: libqpid-proton2-dev
- Proton python bindings: python-qpid-proton
- pyngus (via Pypi)

doc/source/FAQ.rst

@@ -1,42 +0,0 @@
============================
Frequently Asked Questions
============================
I don't need notifications on the message bus. How do I disable them?
=====================================================================
Notification messages can be disabled using the ``noop`` notify
driver. Set ``driver = noop`` in your configuration file under the
[oslo_messaging_notifications] section.
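For example, a minimal sketch (assuming the 1.x notification API; the
publisher id is illustrative)::

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    # Selecting the noop driver explicitly drops all notifications.
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='demo.host1',
                                       driver='noop')
    notifier.info({}, 'demo.started', {})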
Why does the notification publisher create queues, too? Shouldn't the subscriber do that?
=========================================================================================
The notification messages are meant to be used for integration with
external services, including services that are not part of
OpenStack. To ensure that the subscriber does not miss any messages if
it starts after the publisher, ``oslo.messaging`` ensures that
subscriber queues exist when notifications are sent.
How do I change the queue names where notifications are published?
==================================================================
Notifications are published to the configured exchange using a topic
built from a base value specified in the configuration file and the
notification "level". The default topic is ``notifications``, so an
info-level notification is published to the topic
``notifications.info``. A subscriber queue of the same name is created
automatically for each of these topics. To change the queue names,
change the notification topic using the ``topics``
configuration option in ``[oslo_messaging_notifications]``. The option
accepts a list of values, so it is possible to publish to multiple topics.
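The same effect can be had in code; a hedged sketch (assuming the 1.x
Notifier API, with illustrative topic names) that publishes each
notification to two topics::

    from oslo_config import cfg
    import oslo_messaging

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    # An info notification goes to both 'notifications.info'
    # and 'alt_notifications.info'.
    notifier = oslo_messaging.Notifier(transport,
                                       publisher_id='compute.host1',
                                       driver='messagingv2',
                                       topics=['notifications',
                                               'alt_notifications'])
    notifier.info({}, 'demo.event', {'detail': 'payload'})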
What are the other choices of notification drivers available?
=============================================================
- messaging: Send notifications using the 1.0 message format.
- messagingv2: Send notifications using the 2.0 message format (with a message envelope).
- routing: Configurable routing notifier (by priority or event_type).
- log: Publish notifications via the Python logging infrastructure.
- test: Store notifications in memory for test verification.
- noop: Disable sending notifications entirely.

doc/source/conf.py

@@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
import os
import subprocess
import sys
import warnings
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'oslosphinx',
'stevedore.sphinxext',
'oslo_config.sphinxext',
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# Add any paths that contain templates here, relative to this directory.
# templates_path = []
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'oslo.messaging'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
"-n1"]
try:
html_last_updated_fmt = subprocess.Popen(
git_cmd, stdout=subprocess.PIPE).communicate()[0]
except Exception:
warnings.warn('Cannot get last updated time from git repository. '
'Not setting "html_last_updated_fmt".')
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
'%s Documentation' % project,
'OpenStack Foundation', 'manual'),
]

doc/source/conffixture.rst

@@ -1,9 +0,0 @@
----------------------
Testing Configurations
----------------------
.. currentmodule:: oslo_messaging.conffixture
.. autoclass:: ConfFixture
:members:
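A hedged usage sketch (assuming a testtools-based test case, the
fixture's ``transport_driver`` attribute, and the in-memory 'fake'
driver)::

    from oslo_config import cfg
    import oslo_messaging.conffixture
    import testtools

    class TestExample(testtools.TestCase):
        def setUp(self):
            super(TestExample, self).setUp()
            # Route all messaging through the fake in-memory driver.
            self.messaging_conf = self.useFixture(
                oslo_messaging.conffixture.ConfFixture(cfg.CONF))
            self.messaging_conf.transport_driver = 'fake'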

doc/source/contributing.rst

@@ -1,5 +0,0 @@
==============
Contributing
==============
.. include:: ../../CONTRIBUTING.rst

doc/source/drivers.rst

@@ -1,6 +0,0 @@
===================
Available Drivers
===================
.. list-plugins:: oslo.messaging.drivers
:detailed:

doc/source/exceptions.rst

@@ -1,18 +0,0 @@
----------
Exceptions
----------
.. currentmodule:: oslo_messaging
.. autoexception:: ClientSendError
.. autoexception:: DriverLoadFailure
.. autoexception:: ExecutorLoadFailure
.. autoexception:: InvalidTransportURL
.. autoexception:: MessagingException
.. autoexception:: MessagingTimeout
.. autoexception:: MessagingServerError
.. autoexception:: NoSuchMethod
.. autoexception:: RPCDispatcherError
.. autoexception:: RPCVersionCapError
.. autoexception:: ServerListenError
.. autoexception:: UnsupportedVersion

doc/source/executors.rst

@@ -1,13 +0,0 @@
=========
Executors
=========
Executors provide the way an incoming message is dispatched so that the
message can be used for meaningful work. Different types of executors are
supported, each with its own set of restrictions and capabilities.
Available Executors
===================
.. list-plugins:: oslo.messaging.executors
:detailed:
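For illustration, a hedged sketch (assuming the 1.x RPC API; the topic
and endpoint are illustrative) of selecting an executor when creating an
RPC server::

    from oslo_config import cfg
    import oslo_messaging

    class DemoEndpoint(object):
        def ping(self, ctxt):
            return 'pong'

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='demo', server='host1')
    # 'blocking' dispatches on the caller's thread, while the other
    # executors dispatch asynchronously.
    server = oslo_messaging.get_rpc_server(transport, target,
                                           [DemoEndpoint()],
                                           executor='blocking')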

doc/source/history.rst

@@ -1 +0,0 @@
.. include:: ../../ChangeLog

doc/source/index.rst

@@ -1,46 +0,0 @@
oslo.messaging
==============
The Oslo messaging API supports RPC and notifications over a number of
different messaging transports.
Contents
========
.. toctree::
:maxdepth: 1
transport
executors
target
server
rpcclient
notifier
notification_driver
notification_listener
serializer
exceptions
opts
conffixture
drivers
supported-messaging-drivers
AMQP1.0
zmq_driver
FAQ
contributing
Release Notes
=============
.. toctree::
:maxdepth: 1
history
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

doc/source/notification_driver.rst

@@ -1,15 +0,0 @@
-------------------
Notification Driver
-------------------
.. automodule:: oslo_messaging.notify.messaging
.. autoclass:: MessagingDriver
.. autoclass:: MessagingV2Driver
.. currentmodule:: oslo_messaging.notify.notifier
.. autoclass:: Driver
:members:
:noindex:

doc/source/notification_listener.rst

@@ -1,16 +0,0 @@
---------------------
Notification Listener
---------------------
.. automodule:: oslo_messaging.notify.listener
.. currentmodule:: oslo_messaging
.. autofunction:: get_notification_listener
.. autoclass:: MessageHandlingServer
:members:
:noindex:
.. autofunction:: get_local_context
:noindex:

doc/source/notifier.rst

@@ -1,20 +0,0 @@
==========
Notifier
==========
.. currentmodule:: oslo_messaging
.. autoclass:: Notifier
:members:
.. autoclass:: LoggingNotificationHandler
:members:
.. autoclass:: LoggingErrorNotificationHandler
:members:
Available Notifier Drivers
==========================
.. list-plugins:: oslo.messaging.notify.drivers
:detailed:

doc/source/opts.rst

@@ -1,16 +0,0 @@
=======================
Configuration Options
=======================
oslo.messaging uses oslo.config to define and manage configuration
options to allow the deployer to control how an application uses the
underlying messaging system.
.. show-options:: oslo.messaging
API
===
.. currentmodule:: oslo_messaging.opts
.. autofunction:: list_opts
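A hedged sketch of walking the exported options programmatically (the
printing is illustrative)::

    from oslo_messaging import opts

    # list_opts() returns (group, options) pairs in the form consumed
    # by the oslo-config-generator tooling.
    for group, options in opts.list_opts():
        print(group, [opt.name for opt in options])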

doc/source/pika_driver.rst

@@ -1,156 +0,0 @@
------------------------------
Pika Driver Deployment Guide
------------------------------
.. currentmodule:: oslo_messaging
============
Introduction
============
Pika is a pure-Python implementation of the AMQP 0-9-1 protocol, including
RabbitMQ's extensions. It is very actively supported and recommended by the
RabbitMQ developers.
========
Abstract
========
PikaDriver is one of the oslo.messaging backend drivers. It supports the RPC
and Notify patterns. Currently it can be used as the only oslo.messaging
driver across the OpenStack cluster. This document provides deployment
information for this driver in oslo_messaging.
The driver is able to work with a single RabbitMQ server instance or with a
RabbitMQ cluster.
=============
Configuration
=============
Enabling (mandatory)
--------------------
To enable the driver, in the section [DEFAULT] of the conf file,
the 'transport_url' parameter should be set to
`pika://user:pass@host1:port[,hostN:portN]`
[DEFAULT]
transport_url = pika://guest:guest@localhost:5672
Connection options (optional)
-----------------------------
In section [oslo_messaging_pika]:
#. channel_max: Maximum number of channels to allow,
#. frame_max (default=pika's default value): The maximum byte size for
an AMQP frame,
#. heartbeat_interval (default=1): How often to send heartbeats for
consumers' connections, in seconds. If 0, heartbeats are disabled,
#. ssl (default=False): Enable SSL if True,
#. ssl_options (default=None): Arguments passed to ssl.wrap_socket,
#. socket_timeout (default=0.25): Timeout for opening a new connection's
socket,
#. tcp_user_timeout (default=0.25): Set TCP_USER_TIMEOUT in seconds for a
connection's socket,
#. host_connection_reconnect_delay (default=0.25): Delay before reconnecting
to a host after a connection error
Connection pool options (optional)
----------------------------------
In section [oslo_messaging_pika]:
#. pool_max_size (default=10): Maximum number of connections to keep queued,
#. pool_max_overflow (default=0): Maximum number of connections to create above
`pool_max_size`,
#. pool_timeout (default=30): Default number of seconds to wait for a
connection to become available,
#. pool_recycle (default=600): Lifetime of a connection (since creation) in
seconds or None for no recycling. Expired connections are closed on acquire,
#. pool_stale (default=60): Threshold in seconds at which inactive (since
release) connections are considered stale, or None for no staleness.
Stale connections are closed on acquire.
RPC related options (optional)
------------------------------
In section [oslo_messaging_pika]:
#. rpc_queue_expiration (default=60): Time to live, in seconds, for RPC queues
without consumers,
#. default_rpc_exchange (default="${control_exchange}_rpc"): Exchange name for
sending RPC messages,
#. rpc_reply_exchange (default="${control_exchange}_rpc_reply"): Exchange
name for receiving RPC replies,
#. rpc_listener_prefetch_count (default=100): Maximum number of unacknowledged
messages which RabbitMQ can send to the RPC listener,
#. rpc_reply_listener_prefetch_count (default=100): Maximum number of
unacknowledged messages which RabbitMQ can send to the RPC reply listener,
#. rpc_reply_retry_attempts (default=-1): Reconnection retry count in case of
connectivity problems while sending a reply. -1 means infinite retry during
rpc_timeout,
#. rpc_reply_retry_delay (default=0.25): Reconnection retry delay in case of
connectivity problems while sending a reply,
#. default_rpc_retry_attempts (default=-1): Reconnection retry count in case of
connectivity problems while sending an RPC message; -1 means infinite retry.
If the actual number of retry attempts is not 0, the RPC request could be
processed more than once,
#. rpc_retry_delay (default=0.25): Reconnection retry delay in case of
connectivity problems while sending an RPC message
$control_exchange in this code is the value of the [DEFAULT].control_exchange
option, which is "openstack" by default.
Notification related options (optional)
---------------------------------------
In section [oslo_messaging_pika]:
#. notification_persistence (default=False): Persist notification messages,
#. default_notification_exchange (default="${control_exchange}_notification"):
Exchange name for sending notifications,
#. notification_listener_prefetch_count (default=100): Maximum number of
unacknowledged messages which RabbitMQ can send to the notification listener,
#. default_notification_retry_attempts (default=-1): Reconnection retry count
in case of connectivity problems while sending a notification; -1 means
infinite retry,
#. notification_retry_delay (default=0.25): Reconnection retry delay in case of
connectivity problems while sending a notification message
$control_exchange in this code is the value of the [DEFAULT].control_exchange
option, which is "openstack" by default.
DevStack Support
----------------
The Pika driver is supported by DevStack. To enable it, edit the [localrc]
section of local.conf and add the following:
enable_plugin pika https://git.openstack.org/openstack/devstack-plugin-pika

doc/source/rpcclient.rst

@@ -1,10 +0,0 @@
----------
RPC Client
----------
.. currentmodule:: oslo_messaging
.. autoclass:: RPCClient
:members:
.. autoexception:: RemoteError

doc/source/serializer.rst

@@ -1,10 +0,0 @@
----------
Serializer
----------
.. currentmodule:: oslo_messaging
.. autoclass:: Serializer
:members:
.. autoclass:: NoOpSerializer

doc/source/server.rst

@@ -1,20 +0,0 @@
------
Server
------
.. automodule:: oslo_messaging.rpc.server
.. currentmodule:: oslo_messaging
.. autofunction:: get_rpc_server
.. autoclass:: RPCDispatcher
.. autoclass:: MessageHandlingServer
:members:
.. autofunction:: expected_exceptions
.. autoexception:: ExpectedException
.. autofunction:: get_local_context

doc/source/supported-messaging-drivers.rst

@@ -1,60 +0,0 @@
=============================
Supported Messaging Drivers
=============================
RabbitMQ may not be sufficient for the entire community as the community
grows. Pluggability is still something we should maintain, but we should
have a very high standard for drivers that are shipped and documented
as being supported.
This document defines a very clear policy as to the requirements
for drivers to be carried in oslo.messaging and thus supported by the
OpenStack community as a whole. We will deprecate any drivers that do not
meet the requirements, and announce said deprecations in any appropriate
channels to give users time to signal their needs. Deprecation will last
for two release cycles before removing the code. We will also review and
update documentation to annotate which drivers are supported and which
are deprecated given these policies.
Policy
------
Testing
~~~~~~~
* Must have unit and/or functional test coverage of at least 60% as
reported by coverage report. Unit tests must be run for all versions
of Python that oslo.messaging currently gates on.
* Must have integration testing including at least 3 popular oslo.messaging
dependents, preferably at the minimum a devstack-gate job with Nova,
Cinder, and Neutron.
* All testing above must be voting in the gate of oslo.messaging.
Documentation
~~~~~~~~~~~~~
* Must have a reasonable amount of documentation including documentation
in the official OpenStack deployment guide.
Support
~~~~~~~
* Must have at least two individuals from the community committed to
triaging and fixing bugs, and responding to test failures in a timely
manner.
Prospective Drivers
~~~~~~~~~~~~~~~~~~~
* Drivers that intend to meet the requirements above, but that do not yet
meet them will be given one full release cycle, or 6 months, whichever
is longer, to comply before being marked for deprecation. Their use,
however, will not be supported by the community. This will prevent a
chicken and egg problem for new drivers.
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

doc/source/target.rst

@@ -1,54 +0,0 @@
------
Target
------
.. currentmodule:: oslo_messaging
.. autoclass:: Target
===============
Target Versions
===============
Target version numbers take the form Major.Minor. For a given message with
version X.Y, the server must be marked as able to handle messages of version
A.B, where A == X and B >= Y.
The Major version number should be incremented for an almost completely new
API. The Minor version number would be incremented for backwards compatible
changes to an existing API. A backwards compatible change could be something
like adding a new method, adding an argument to an existing method (but not
requiring it), or changing the type for an existing argument (but still
handling the old type as well).
If no version is specified, it defaults to '1.0'.
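For example, a short sketch of marking a server target with a version
(topic and server names are illustrative)::

    import oslo_messaging

    # A server marked 1.1 can handle requests for versions 1.0 and 1.1,
    # but not 2.0.
    target = oslo_messaging.Target(topic='compute', server='host1',
                                   version='1.1')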
In the case of RPC, if you wish to allow your server interfaces to evolve
such that clients do not need to be updated in lockstep with the server, you
should take care to implement the server changes in a backwards compatible
manner and have the clients specify which interface version they require for
each method.
Adding a new method to an endpoint is a backwards compatible change and the
version attribute of the endpoint's target should be bumped from X.Y to X.Y+1.
On the client side, the new RPC invocation should have a specific version
specified to indicate the minimum API version that must be implemented for the
method to be supported. For example::
def get_host_uptime(self, ctxt, host):
cctxt = self.client.prepare(server=host, version='1.1')
return cctxt.call(ctxt, 'get_host_uptime')
In this case, version '1.1' is the first version that supported the
get_host_uptime() method.
Adding a new parameter to an RPC method can be made backwards compatible. The
endpoint version on the server side should be bumped. The implementation of
the method must not expect the parameter to be present::
def some_remote_method(self, arg1, arg2, newarg=None):
# The code needs to deal with newarg=None for cases
# where an older client sends a message without it.
pass
On the client side, the same changes should be made as in example 1. The
minimum version that supports the new parameter should be specified.
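A hedged sketch of that client-side change (the version number is
illustrative)::

    def some_remote_method_call(self, ctxt, arg1, arg2, newarg=None):
        # 1.2 is assumed to be the version that added 'newarg'.
        cctxt = self.client.prepare(version='1.2')
        return cctxt.call(ctxt, 'some_remote_method',
                          arg1=arg1, arg2=arg2, newarg=newarg)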

doc/source/transport.rst

@@ -1,28 +0,0 @@
---------
Transport
---------
.. currentmodule:: oslo_messaging
.. autofunction:: get_transport
.. autoclass:: Transport
.. autoclass:: TransportURL
:members:
.. autoclass:: TransportHost
.. autofunction:: set_transport_defaults
Forking Processes and oslo.messaging Transport objects
------------------------------------------------------
oslo.messaging can't ensure that forking a process that shares the same
transport object is safe for the library consumer, because it relies on
different third-party libraries that don't ensure that. In certain
cases, with some drivers, it does work:
* rabbit: works only if no connection has already been established.
* amqp1: works
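A hedged sketch of the safe pattern, creating the transport only after
the fork so the child never shares the parent's connection state::

    import os

    from oslo_config import cfg
    import oslo_messaging

    pid = os.fork()
    if pid == 0:
        # Child process: build its own transport rather than
        # reusing one created by the parent.
        transport = oslo_messaging.get_transport(cfg.CONF)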

doc/source/zmq_driver.rst

@@ -1,266 +0,0 @@
------------------------------
ZeroMQ Driver Deployment Guide
------------------------------
.. currentmodule:: oslo_messaging
============
Introduction
============
0MQ (also known as ZeroMQ or zmq) is an embeddable networking library
that acts like a concurrency framework. It gives you sockets
that carry atomic messages across various transports
like in-process, inter-process, TCP, and multicast. You can connect
sockets N-to-N with patterns like fan-out, pub-sub, task distribution,
and request-reply. It's fast enough to be the fabric for clustered
products. Its asynchronous I/O model gives you scalable multi-core
applications, built as asynchronous message-processing tasks. It has
a score of language APIs and runs on most operating systems.
Originally the zero in 0MQ was meant as "zero broker" and (as close to)
"zero latency" (as possible). Since then, it has come to encompass
different goals: zero administration, zero cost, and zero waste.
More generally, "zero" refers to the culture of minimalism that permeates
the project.
More detail regarding the ZeroMQ library is available from the `specification`_.
.. _specification: http://zguide.zeromq.org/page:all
========
Abstract
========
Currently, ZeroMQ is one of the RPC backend drivers in oslo.messaging. ZeroMQ
can be the only RPC driver across the OpenStack cluster.
This document provides deployment information for this driver in oslo_messaging.
Unlike AMQP-based drivers such as RabbitMQ, ZeroMQ doesn't have
any central brokers in oslo.messaging; instead, each host (running OpenStack
services) is both a ZeroMQ client and a server. As a result, each host needs
to listen on a certain TCP port for incoming connections and directly connect
to other hosts simultaneously.
Another option is to use a router proxy. It is not a broker because it
doesn't assume any message ownership, persistence, or replication; it only
redirects messages to endpoints, taking routing information from the
message envelope.
Topics are used to identify the destination for a ZeroMQ RPC call. There are
two types of topics: bare topics and directed topics. Bare topics look like
'compute', while directed topics look like 'compute.machine1'.
========
Scenario
========
Assume the following system layout as a goal.
::
+--------+
| Client |
+----+---+
|
-----+---------+-----------------------+---------------------
| |
+--------+------------+ +-------+----------------+
| Controller Node | | Compute Node |
| Nova | | Neutron |
| Keystone | | Nova |
| Glance | | nova-compute |
| Neutron | | Ceilometer |
| Cinder | | |
| Ceilometer | +------------------------+
| zmq-proxy |
| Redis |
| Horizon |
+---------------------+
=============
Configuration
=============
Enabling (mandatory)
--------------------
To enable the driver, the 'transport_url' option must be set to 'zmq://'
in the [DEFAULT] section of the conf file, and the 'rpc_zmq_host' flag
must be set to the hostname of the current node. ::
[DEFAULT]
transport_url = "zmq://"
[oslo_messaging_zmq]
rpc_zmq_host = {hostname}
Match Making (mandatory)
------------------------
The ZeroMQ driver implements a matching capability to discover hosts available
for communication when sending to a bare topic. This allows broker-less
communications.
The MatchMaker is pluggable, and two different MatchMaker classes are
provided.
DummyMatchMaker: the default matchmaker driver for the all-in-one scenario
(messages are sent to itself).
RedisMatchMaker: loads the hash table from a remote Redis server, supports
dynamic host/topic registrations, host expiration, and hooks for consuming
applications to acknowledge or neg-acknowledge topic.host service availability.
For the ZeroMQ driver, Redis is also configured in transport_url. To use
Redis, specify the URL as follows::
[DEFAULT]
transport_url = "zmq+redis://127.0.0.1:6379"
In order to clean up expired records from Redis storage (e.g. when a target
listener goes down), a TTL may be applied to keys. Configure the
'zmq_target_expire' option, which is 120 (seconds) by default. The option is
not specific to Redis, so it is defined in the [oslo_messaging_zmq] section.
If the option value is <= 0, keys don't expire and live forever in the
storage.
MatchMaker Data Source (mandatory)
----------------------------------
The MatchMaker data source is stored in files or in the Redis server
discussed in the previous section. How the database is populated is the key
issue for making the ZeroMQ driver work.
If deploying the RedisMatchMaker, a Redis server is required. For each (K, V)
pair stored in Redis, the key is a base topic and the corresponding values
are arrays of hostnames to send to.
HA for Redis database
---------------------
Single-node Redis works fine for testing, but production deployments want
some availability guarantees. A zmq deployment should continue working
without the Redis database anyway, because services do not need Redis once
their connections are established. But if you would like to restart some
services, run more workers, or add more hardware nodes to the deployment,
you will need the node discovery mechanism to work, and that requires Redis.
To provide database recovery when a Redis node goes down, for example, we
use the Sentinel solution and a Redis master-slave-slave configuration (if
we have 3 controllers and run Redis on each of them).
To deploy Redis with HA, follow the `sentinel-install`_ instructions. On the
messaging driver's side you will need to set up the following
configuration ::
[DEFAULT]
transport_url = "zmq+redis://host1:26379,host2:26379,host3:26379"
Restrict the number of TCP sockets on controller
------------------------------------------------
The most heavily used RPC pattern (CALL) may consume too many TCP sockets on
the controller node in a directly connected configuration. To solve the
issue, a ROUTER proxy may be used.
In order to configure the driver to use the ROUTER proxy, set the
'use_router_proxy' option to true in the [oslo_messaging_zmq] section (false
is set by default).
For example::
use_router_proxy = true
At least 3 proxies should be running on controllers or on standalone
nodes. The parameters for the oslo-messaging-zmq-proxy script should be::
oslo-messaging-zmq-proxy
--config-file /etc/oslo/zeromq.conf
--log-file /var/log/oslo/zmq-router-proxy.log
Fanout-based patterns like CAST+Fanout and notifications always use a proxy
as they act over PUB/SUB; the 'use_pub_sub' option defaults to true. In that
case a publisher proxy should be running. The proxy actually does both:
routing to a DEALER endpoint for direct messages and publishing to all
subscribers over a zmq.PUB socket.
If not using PUB/SUB (use_pub_sub = false), then fanout will be emulated over
direct DEALER/ROUTER unicast, which is possible but less efficient and
therefore not recommended. In the case of direct DEALER/ROUTER unicast, a
proxy is not needed.
This option can be set in the [oslo_messaging_zmq] section.
For example::
use_pub_sub = true
When using a proxy, all publishers (clients) talk to servers over
the proxy, connecting to it via TCP.
You can specify ZeroMQ options in /etc/oslo/zeromq.conf if necessary.
Listening Address (optional)
----------------------------
All services bind to an IP address or Ethernet adapter. By default, all services
bind to '*', effectively binding to 0.0.0.0. This may be changed with the option
'rpc_zmq_bind_address' which accepts a wildcard, IP address, or Ethernet adapter.
This configuration can be set in [oslo_messaging_zmq] section.
For example::
rpc_zmq_bind_address = *
Currently the zmq driver uses a dynamic port binding mechanism, which means
that each listener allocates a randomly numbered port. The port range is
controlled by the two options 'rpc_zmq_min_port' and 'rpc_zmq_max_port'.
Change them to restrict the current service's port binding range.
'rpc_zmq_bind_port_retries' controls the number of retries before a 'ports
range exceeded' failure.
For example::
rpc_zmq_min_port = 9050
rpc_zmq_max_port = 10050
rpc_zmq_bind_port_retries = 100
DevStack Support
----------------
The ZeroMQ driver is supported by DevStack. The configuration is as follows::
ENABLED_SERVICES+=,-rabbit,zeromq
ZEROMQ_MATCHMAKER=redis
In the [localrc] section of local.conf you need to enable the zmq plugin,
which lives in the `devstack-plugin-zmq`_ repository.
For example::
enable_plugin zmq https://github.com/openstack/devstack-plugin-zmq.git
Example of local.conf::
[[local|localrc]]
DATABASE_PASSWORD=password
ADMIN_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
enable_plugin zmq https://github.com/openstack/devstack-plugin-zmq.git
OSLOMSG_REPO=https://review.openstack.org/openstack/oslo.messaging
OSLOMSG_BRANCH=master
ZEROMQ_MATCHMAKER=redis
LIBS_FROM_GIT=oslo.messaging
ENABLE_DEBUG_LOG_LEVEL=True
.. _devstack-plugin-zmq: https://github.com/openstack/devstack-plugin-zmq.git
.. _sentinel-install: http://redis.io/topics/sentinel

etc/routing_notifier.yaml.sample

@@ -1,29 +0,0 @@
# Setting a priority AND an event means both have to be satisfied.
#
# However, defining different sets for the same driver allows you
# to do OR operations.
#
# See how this logic is modelled below:
#
# if (priority in info, warn or error) or
# (event == compute.scheduler.run_instance)
# send to messaging driver ...
#
# if priority == 'poll' and
# event == 'bandwidth.*'
# send to poll driver
group_1:
messaging:
accepted_priorities: ['info', 'warn', 'error']
poll:
accepted_priorities: ['poll']
accepted_events: ['bandwidth.*']
log:
accepted_events: ['compute.instance.exists']
group_2:
messaging:
accepted_events: ['compute.scheduler.run_instance.*']

oslo_messaging/__init__.py

@@ -1,22 +0,0 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from .exceptions import *
from .notify import *
from .rpc import *
from .serializer import *
from .server import *
from .target import *
from .transport import *

oslo_messaging/_cmd/zmq-proxy.py

@@ -1,94 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import logging
from oslo_config import cfg
from oslo_messaging._drivers.zmq_driver.proxy import zmq_proxy
from oslo_messaging._drivers.zmq_driver.proxy import zmq_queue_proxy
from oslo_messaging._drivers.zmq_driver import zmq_options
CONF = cfg.CONF
zmq_options.register_opts(CONF)
opt_group = cfg.OptGroup(name='zmq_proxy_opts',
title='ZeroMQ proxy options')
CONF.register_opts(zmq_proxy.zmq_proxy_opts, group=opt_group)
USAGE = """ Usage: ./zmq-proxy.py [-h] [] ...
Usage example:
python oslo_messaging/_cmd/zmq-proxy.py"""
def main():
parser = argparse.ArgumentParser(
description='ZeroMQ proxy service',
usage=USAGE
)
parser.add_argument('--config-file', dest='config_file', type=str,
help='Path to configuration file')
parser.add_argument('--host', dest='host', type=str,
help='Host FQDN for current proxy')
parser.add_argument('--frontend-port', dest='frontend_port', type=int,
help='Front-end ROUTER port number')
parser.add_argument('--backend-port', dest='backend_port', type=int,
help='Back-end ROUTER port number')
parser.add_argument('--publisher-port', dest='publisher_port', type=int,
help='Back-end PUBLISHER port number')
parser.add_argument('-d', '--debug', dest='debug', type=bool,
default=False,
help="Turn on DEBUG logging level instead of INFO")
args = parser.parse_args()
if args.config_file:
cfg.CONF(["--config-file", args.config_file])
log_level = logging.INFO
if args.debug:
log_level = logging.DEBUG
logging.basicConfig(level=log_level,
format='%(asctime)s %(name)s '
'%(levelname)-8s %(message)s')
if args.host:
CONF.zmq_proxy_opts.host = args.host
if args.frontend_port:
CONF.set_override('frontend_port', args.frontend_port,
group='zmq_proxy_opts')
if args.backend_port:
CONF.set_override('backend_port', args.backend_port,
group='zmq_proxy_opts')
if args.publisher_port:
CONF.set_override('publisher_port', args.publisher_port,
group='zmq_proxy_opts')
reactor = zmq_proxy.ZmqProxy(CONF, zmq_queue_proxy.UniversalQueueProxy)
try:
while True:
reactor.run()
except (KeyboardInterrupt, SystemExit):
reactor.close()
if __name__ == "__main__":
main()

oslo_messaging/_drivers/amqp.py

@@ -1,137 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
# Copyright 2011 - 2012, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Shared code between AMQP based openstack.common.rpc implementations.
The code in this module is shared between the rpc implementations based on
AMQP. Specifically, this includes impl_kombu. impl_carrot also
uses AMQP, but is deprecated and predates this code.
"""
import collections
import uuid
from oslo_config import cfg
import six
from oslo_messaging._drivers import common as rpc_common
deprecated_durable_opts = [
cfg.DeprecatedOpt('amqp_durable_queues',
group='DEFAULT'),
cfg.DeprecatedOpt('rabbit_durable_queues',
group='DEFAULT')
]
amqp_opts = [
cfg.BoolOpt('amqp_durable_queues',
default=False,
deprecated_opts=deprecated_durable_opts,
help='Use durable queues in AMQP.'),
cfg.BoolOpt('amqp_auto_delete',
default=False,
deprecated_group='DEFAULT',
help='Auto-delete queues in AMQP.'),
]
UNIQUE_ID = '_unique_id'
class RpcContext(rpc_common.CommonRpcContext):
"""Context that supports replying to a rpc.call."""
def __init__(self, **kwargs):
self.msg_id = kwargs.pop('msg_id', None)
self.reply_q = kwargs.pop('reply_q', None)
super(RpcContext, self).__init__(**kwargs)
def deepcopy(self):
values = self.to_dict()
values['conf'] = self.conf
values['msg_id'] = self.msg_id
values['reply_q'] = self.reply_q
return self.__class__(**values)
def unpack_context(msg):
"""Unpack context from msg."""
context_dict = {}
for key in list(msg.keys()):
key = six.text_type(key)
if key.startswith('_context_'):
value = msg.pop(key)
context_dict[key[9:]] = value
context_dict['msg_id'] = msg.pop('_msg_id', None)
context_dict['reply_q'] = msg.pop('_reply_q', None)
return RpcContext.from_dict(context_dict)
def pack_context(msg, context):
"""Pack context into msg.
Values for message keys need to be less than 255 chars, so we pull
context out into a bunch of separate keys. If we want to support
more arguments in rabbit messages, we may want to do the same
for args at some point.
"""
if isinstance(context, dict):
context_d = six.iteritems(context)
else:
context_d = six.iteritems(context.to_dict())
msg.update(('_context_%s' % key, value)
for (key, value) in context_d)
class _MsgIdCache(object):
"""This class checks any duplicate messages."""
# NOTE: This value could be made a configuration item, but it is not
# necessary to change it in most cases, so leave it static for now.
DUP_MSG_CHECK_SIZE = 16
def __init__(self, **kwargs):
self.prev_msgids = collections.deque([],
maxlen=self.DUP_MSG_CHECK_SIZE)
def check_duplicate_message(self, message_data):
"""AMQP consumers may read same message twice when exceptions occur
before ack is returned. This method prevents doing it.
"""
try:
msg_id = message_data.pop(UNIQUE_ID)
except KeyError:
return
if msg_id in self.prev_msgids:
raise rpc_common.DuplicateMessageError(msg_id=msg_id)
return msg_id
def add(self, msg_id):
if msg_id and msg_id not in self.prev_msgids:
self.prev_msgids.append(msg_id)
def _add_unique_id(msg):
"""Add unique_id for checking duplicate messages."""
unique_id = uuid.uuid4().hex
msg.update({UNIQUE_ID: unique_id})
class AMQPDestinationNotFound(Exception):
pass

oslo_messaging/_drivers/amqp1_driver/controller.py

@@ -1,748 +0,0 @@
# Copyright 2014, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Controller that manages the interface between the driver and the messaging
service.
This module defines a Controller class that is responsible for performing
messaging-related operations (Tasks) requested by the driver, and for managing
the connection to the messaging service. The Controller creates a background
thread which performs all messaging operations and socket I/O. The
Controller's messaging logic is executed in the background thread via lambda
functions scheduled by the Controller.
"""
import abc
import logging
import random
import threading
import uuid
from oslo_config import cfg
import proton
import pyngus
from six import moves
from oslo_messaging._drivers.amqp1_driver import eventloop
from oslo_messaging._drivers.amqp1_driver import opts
from oslo_messaging._i18n import _LE, _LI, _LW
from oslo_messaging import exceptions
from oslo_messaging import transport
LOG = logging.getLogger(__name__)
class Task(object):
"""Perform a messaging operation via the Controller."""
@abc.abstractmethod
def execute(self, controller):
"""This method will be run on the eventloop thread."""
class Sender(pyngus.SenderEventHandler):
"""A single outgoing link to a given address"""
def __init__(self, address):
self._address = address
self._link = None
def attach(self, connection):
# open a link to the destination
sname = "Producer-%s:src=%s:tgt=%s" % (uuid.uuid4().hex,
self._address,
self._address)
self._link = connection.create_sender(name=sname,
source_address=self._address,
target_address=self._address)
self._link.open()
def detach(self):
# close the link
if self._link:
self._link.close()
def destroy(self):
# drop reference to link. The link will be freed when the
# connection is destroyed
self._link = None
def send(self, message, callback):
# send message out the link, invoke callback when acked
self._link.send(message, delivery_callback=callback)
def sender_remote_closed(self, sender_link, pn_condition):
LOG.debug("sender_remote_closed condition=%s", pn_condition)
sender_link.close()
def sender_failed(self, sender_link, error):
"""Protocol error occurred."""
LOG.error(_LE("Outgoing link to %(addr) failed. error=%(error)"),
{"addr": self._address, "error": error})
class Replies(pyngus.ReceiverEventHandler):
"""This is the receiving link for all reply messages. Messages are routed
to the proper Listener's incoming queue using the correlation-id header in
the message.
"""
def __init__(self, connection, on_ready):
self._correlation = {} # map of correlation-id to response queue
self._ready = False
self._on_ready = on_ready
rname = "Consumer-%s:src=[dynamic]:tgt=replies" % uuid.uuid4().hex
self._receiver = connection.create_receiver("replies",
event_handler=self,
name=rname)
# capacity determines the maximum number of reply messages this link
# can receive. As messages are received and credit is consumed, this
# driver will 'top up' the credit back to max capacity. This number
# should be large enough to avoid needlessly flow-controlling the
# replies.
self.capacity = 100 # TODO(kgiusti) guesstimate - make configurable
self._credit = 0
self._receiver.open()
def detach(self):
# close the link
self._receiver.close()
def destroy(self):
# drop reference to link. Link will be freed when the connection is
# released.
self._receiver = None
def ready(self):
return self._ready
def prepare_for_response(self, request, reply_queue):
"""Apply a unique message identifier to this request message. This will
be used to identify messages sent in reply. The identifier is placed
in the 'id' field of the request message. It is expected that the
identifier will appear in the 'correlation-id' field of the
corresponding response message.
"""
request.id = uuid.uuid4().hex
# reply is placed on reply_queue
self._correlation[request.id] = reply_queue
request.reply_to = self._receiver.source_address
LOG.debug("Reply for msg id=%(id)s expected on link %(reply_to)s",
{'id': request.id, 'reply_to': request.reply_to})
return request.id
def cancel_response(self, msg_id):
"""Abort waiting for a response message. This can be used if the
request fails and no reply is expected.
"""
if msg_id in self._correlation:
del self._correlation[msg_id]
# Pyngus ReceiverLink event callbacks:
def receiver_active(self, receiver_link):
"""This is a Pyngus callback, invoked by Pyngus when the receiver_link
has transitioned to the open state and is able to receive incoming
messages.
"""
self._ready = True
self._update_credit()
self._on_ready()
LOG.debug("Replies expected on link %s",
self._receiver.source_address)
def receiver_remote_closed(self, receiver, pn_condition):
"""This is a Pyngus callback, invoked by Pyngus when the peer of this
receiver link has initiated closing the connection.
"""
# TODO(kgiusti) Log for now, possibly implement a recovery strategy if
# necessary.
if pn_condition:
LOG.error(_LE("Reply subscription closed by peer: %s"),
pn_condition)
receiver.close()
def receiver_failed(self, receiver_link, error):
"""Protocol error occurred."""
LOG.error(_LE("Link to reply queue %(addr) failed. error=%(error)"),
{"addr": self._address, "error": error})
def message_received(self, receiver, message, handle):
"""This is a Pyngus callback, invoked by Pyngus when a new message
arrives on this receiver link from the peer.
"""
self._credit = self._credit - 1
self._update_credit()
key = message.correlation_id
if key in self._correlation:
LOG.debug("Received response for msg id=%s", key)
result = {"status": "OK",
"response": message}
self._correlation[key].put(result)
# cleanup (only need one response per request)
del self._correlation[key]
receiver.message_accepted(handle)
else:
LOG.warning(_LW("Can't find receiver for response msg id=%s, "
"dropping!"), key)
receiver.message_modified(handle, True, True, None)
def _update_credit(self):
# ensure we have enough credit
if self._credit < self.capacity / 2:
self._receiver.add_capacity(self.capacity - self._credit)
self._credit = self.capacity
class Server(pyngus.ReceiverEventHandler):
"""A group of links that receive messages from a set of addresses derived
from a given target. Messages arriving on the links are placed on the
'incoming' queue.
"""
def __init__(self, addresses, incoming, subscription_id):
self._incoming = incoming
self._addresses = addresses
self._capacity = 500 # credit per link
self._receivers = []
self._id = subscription_id
def attach(self, connection):
"""Create receiver links over the given connection for all the
configured addresses.
"""
for a in self._addresses:
props = {"snd-settle-mode": "settled"}
rname = "Consumer-%s:src=%s:tgt=%s" % (uuid.uuid4().hex, a, a)
r = connection.create_receiver(source_address=a,
target_address=a,
event_handler=self,
name=rname,
properties=props)
# TODO(kgiusti) Hardcoding credit here is sub-optimal. A better
# approach would monitor for a back-up of inbound messages to be
# processed by the consuming application and backpressure the
# sender based on configured thresholds.
r.add_capacity(self._capacity)
r.open()
self._receivers.append(r)
def detach(self):
# close the links
for receiver in self._receivers:
receiver.close()
def reset(self):
# destroy the links, but keep the addresses around since we may be
# failing over. Since links are destroyed, this cannot be called from
# any of the following ReceiverLink callbacks.
for r in self._receivers:
r.destroy()
self._receivers = []
# Pyngus ReceiverLink event callbacks:
def receiver_remote_closed(self, receiver, pn_condition):
"""This is a Pyngus callback, invoked by Pyngus when the peer of this
receiver link has initiated closing the connection.
"""
if pn_condition:
vals = {
"addr": receiver.source_address or receiver.target_address,
"err_msg": pn_condition
}
LOG.error(_LE("Server subscription %(addr)s closed "
"by peer: %(err_msg)s"), vals)
receiver.close()
def receiver_failed(self, receiver_link, error):
"""Protocol error occurred."""
LOG.error(_LE("Listener link queue %(addr) failed. error=%(error)"),
{"addr": self._address, "error": error})
def message_received(self, receiver, message, handle):
"""This is a Pyngus callback, invoked by Pyngus when a new message
arrives on this receiver link from the peer.
"""
if receiver.capacity < self._capacity / 2:
receiver.add_capacity(self._capacity - receiver.capacity)
self._incoming.put(message)
LOG.debug("message received: %s", message)
receiver.message_accepted(handle)
class Hosts(object):
"""An order list of TransportHost addresses. Connection failover
progresses from one host to the next. username and password come from the
configuration and are used only if no username/password was given in the
URL.
"""
def __init__(self, entries=None, default_username=None,
default_password=None):
if entries:
self._entries = entries[:]
else:
self._entries = [transport.TransportHost(hostname="localhost",
port=5672)]
for entry in self._entries:
entry.port = entry.port or 5672
entry.username = entry.username or default_username
entry.password = entry.password or default_password
self._current = random.randint(0, len(self._entries) - 1)
@property
def current(self):
return self._entries[self._current]
def next(self):
if len(self._entries) > 1:
self._current = (self._current + 1) % len(self._entries)
return self.current
def __repr__(self):
return '<Hosts ' + str(self) + '>'
def __str__(self):
return ", ".join(["%r" % th for th in self._entries])
class Controller(pyngus.ConnectionEventHandler):
"""Controls the connection to the AMQP messaging service. This object is
the 'brains' of the driver. It maintains the logic for addressing, sending
and receiving messages, and managing the connection. All messaging and I/O
work is done on the Eventloop thread, allowing the driver to run
asynchronously from the messaging clients.
"""
def __init__(self, hosts, default_exchange, config):
self.processor = None
self._socket_connection = None
# queue of Task() objects to execute on the eventloop once the
# connection is ready:
self._tasks = moves.queue.Queue(maxsize=500)
# limit the number of Tasks to execute per call to _process_tasks().
# This allows the eventloop main thread to return to servicing socket
# I/O in a timely manner
self._max_task_batch = 50
# cache of sending links indexed by address:
self._senders = {}
# Servers indexed by target. Each entry is a map indexed by the
# specific ProtonListener's identifier:
self._servers = {}
opt_group = cfg.OptGroup(name='oslo_messaging_amqp',
title='AMQP 1.0 driver options')
config.register_group(opt_group)
config.register_opts(opts.amqp1_opts, group=opt_group)
self.server_request_prefix = \
config.oslo_messaging_amqp.server_request_prefix
self.broadcast_prefix = config.oslo_messaging_amqp.broadcast_prefix
self.group_request_prefix = \
config.oslo_messaging_amqp.group_request_prefix
self._container_name = config.oslo_messaging_amqp.container_name
self.idle_timeout = config.oslo_messaging_amqp.idle_timeout
self.trace_protocol = config.oslo_messaging_amqp.trace
self.ssl_ca_file = config.oslo_messaging_amqp.ssl_ca_file
self.ssl_cert_file = config.oslo_messaging_amqp.ssl_cert_file
self.ssl_key_file = config.oslo_messaging_amqp.ssl_key_file
self.ssl_key_password = config.oslo_messaging_amqp.ssl_key_password
self.ssl_allow_insecure = \
config.oslo_messaging_amqp.allow_insecure_clients
self.sasl_mechanisms = config.oslo_messaging_amqp.sasl_mechanisms
self.sasl_config_dir = config.oslo_messaging_amqp.sasl_config_dir
self.sasl_config_name = config.oslo_messaging_amqp.sasl_config_name
self.hosts = Hosts(hosts, config.oslo_messaging_amqp.username,
config.oslo_messaging_amqp.password)
self.separator = "."
self.fanout_qualifier = "all"
self.default_exchange = default_exchange
# can't handle a request until the replies link is active, as
# we need the peer-assigned address, so we need to delay any
# processing of the task queue until this is done
self._replies = None
# Set True when the driver is shutting down
self._closing = False
# only schedule one outstanding reconnect attempt at a time
self._reconnecting = False
self._delay = 0 # seconds between retries
# prevent queuing up multiple requests to run _process_tasks()
self._process_tasks_scheduled = False
self._process_tasks_lock = threading.Lock()
def connect(self):
"""Connect to the messaging service."""
self.processor = eventloop.Thread(self._container_name)
self.processor.wakeup(lambda: self._do_connect())
def add_task(self, task):
"""Add a Task for execution on processor thread."""
self._tasks.put(task)
self._schedule_task_processing()
def shutdown(self, timeout=None):
"""Shutdown the messaging service."""
LOG.info(_LI("Shutting down the AMQP 1.0 connection"))
if self.processor:
self.processor.wakeup(lambda: self._start_shutdown())
LOG.debug("Waiting for eventloop to exit")
self.processor.join(timeout)
self._hard_reset()
self.processor.destroy()
self.processor = None
LOG.debug("Eventloop exited, driver shut down")
# The remaining methods are reserved to run from the eventloop thread only!
# They must not be invoked directly!
# methods executed by Tasks created by the driver:
def request(self, target, request, result_queue, reply_expected=False):
"""Send a request message to the given target and arrange for a
result to be put on the result_queue. If reply_expected, the result
will include the reply message (if successful).
"""
address = self._resolve(target)
LOG.debug("Sending request for %(target)s to %(address)s",
{'target': target, 'address': address})
if reply_expected:
msg_id = self._replies.prepare_for_response(request, result_queue)
def _callback(link, handle, state, info):
if state == pyngus.SenderLink.ACCEPTED: # message received
if not reply_expected:
# can wake up the sender now
result = {"status": "OK"}
result_queue.put(result)
else:
# we will wake up the sender when the reply message is
# received. See Replies.message_received()
pass
else: # send failed/rejected/etc
msg = "Message send failed: remote disposition: %s, info: %s"
exc = exceptions.MessageDeliveryFailure(msg % (state, info))
result = {"status": "ERROR", "error": exc}
if reply_expected:
# no response will be received, so cancel the correlation
self._replies.cancel_response(msg_id)
result_queue.put(result)
self._send(address, request, _callback)
def response(self, address, response):
"""Send a response message to the client listening on 'address'.
To prevent a misbehaving client from blocking a server indefinitely,
the message is sent asynchronously.
"""
LOG.debug("Sending response to %s", address)
self._send(address, response)
def subscribe(self, target, in_queue, subscription_id):
"""Subscribe to messages sent to 'target', place received messages on
'in_queue'.
"""
addresses = [
self._server_address(target),
self._broadcast_address(target),
self._group_request_address(target)
]
self._subscribe(target, addresses, in_queue, subscription_id)
def subscribe_notifications(self, target, in_queue, subscription_id):
"""Subscribe for notifications on 'target', place received messages on
'in_queue'.
"""
addresses = [self._group_request_address(target)]
self._subscribe(target, addresses, in_queue, subscription_id)
def _subscribe(self, target, addresses, in_queue, subscription_id):
LOG.debug("Subscribing to %(target)s (%(addresses)s)",
{'target': target, 'addresses': addresses})
server = Server(addresses, in_queue, subscription_id)
servers = self._servers.get(target)
if servers is None:
servers = {}
self._servers[target] = servers
servers[subscription_id] = server
server.attach(self._socket_connection.connection)
def _resolve(self, target):
"""Return a link address for a given target."""
if target.fanout:
return self._broadcast_address(target)
elif target.server:
return self._server_address(target)
else:
return self._group_request_address(target)
def _sender(self, address):
# if we already have a sender for that address, use it
# else establish the sender and cache it
sender = self._senders.get(address)
if sender is None:
sender = Sender(address)
sender.attach(self._socket_connection.connection)
self._senders[address] = sender
return sender
def _send(self, addr, message, callback=None, handle=None):
"""Send the message out the link addressed by 'addr'. If a
callback is given it will be invoked when the send has
completed (whether successfully or in error).
"""
address = str(addr)
message.address = address
self._sender(address).send(message, callback)
def _server_address(self, target):
return self._concatenate([self.server_request_prefix,
target.exchange or self.default_exchange,
target.topic, target.server])
def _broadcast_address(self, target):
return self._concatenate([self.broadcast_prefix,
target.exchange or self.default_exchange,
target.topic, self.fanout_qualifier])
def _group_request_address(self, target):
return self._concatenate([self.group_request_prefix,
target.exchange or self.default_exchange,
target.topic])
def _concatenate(self, items):
return self.separator.join(filter(bool, items))
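# An illustrative sketch (not part of the original module): the address
# strings the helpers above produce, using the default prefixes from
# opts.py ('exclusive', 'broadcast', 'unicast'), a hypothetical
# 'myexchange'/'mytopic' target, and the '.' separator:
#
#   server request: 'exclusive.myexchange.mytopic.myserver'
#   fanout:         'broadcast.myexchange.mytopic.all'
#   group request:  'unicast.myexchange.mytopic'
def _example_group_request_address():
    return ".".join(filter(bool, ["unicast", "myexchange", "mytopic"]))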
# commands executed on the processor (eventloop) via 'wakeup()':
def _do_connect(self):
"""Establish connection and reply subscription on processor thread."""
host = self.hosts.current
conn_props = {'hostname': host.hostname}
if self.idle_timeout:
conn_props["idle-time-out"] = float(self.idle_timeout)
if self.trace_protocol:
conn_props["x-trace-protocol"] = self.trace_protocol
if self.ssl_ca_file:
conn_props["x-ssl-ca-file"] = self.ssl_ca_file
if self.ssl_cert_file:
# assume this connection is for a server. If client authentication
# support is developed, we'll need an explicit flag (server or
# client)
conn_props["x-ssl-server"] = True
conn_props["x-ssl-identity"] = (self.ssl_cert_file,
self.ssl_key_file,
self.ssl_key_password)
conn_props["x-ssl-allow-cleartext"] = self.ssl_allow_insecure
# SASL configuration:
if self.sasl_mechanisms:
conn_props["x-sasl-mechs"] = self.sasl_mechanisms
if self.sasl_config_dir:
conn_props["x-sasl-config-dir"] = self.sasl_config_dir
if self.sasl_config_name:
conn_props["x-sasl-config-name"] = self.sasl_config_name
self._socket_connection = self.processor.connect(host,
handler=self,
properties=conn_props)
LOG.debug("Connection initiated")
def _process_tasks(self):
"""Execute Task objects in the context of the processor thread."""
with self._process_tasks_lock:
self._process_tasks_scheduled = False
count = 0
while (not self._tasks.empty() and
count < self._max_task_batch and
self._can_process_tasks):
try:
self._tasks.get(False).execute(self)
except Exception as e:
LOG.exception(_LE("Error processing task: %s"), e)
count += 1
# if we hit _max_task_batch, resume task processing later:
if not self._tasks.empty() and self._can_process_tasks:
self._schedule_task_processing()
def _schedule_task_processing(self):
"""_process_tasks() helper: prevent queuing up multiple requests for
task processing. This method is called both by the application thread
and the processing thread.
"""
if self.processor:
with self._process_tasks_lock:
already_scheduled = self._process_tasks_scheduled
self._process_tasks_scheduled = True
if not already_scheduled:
self.processor.wakeup(lambda: self._process_tasks())
@property
def _can_process_tasks(self):
"""_process_tasks helper(): indicates that the driver is ready to
process Tasks. In order to process messaging-related tasks, the reply
queue link must be active.
"""
return (not self._closing and
self._replies and self._replies.ready())
def _start_shutdown(self):
"""Called when the application is closing the transport.
Attempt to cleanly flush/close all links.
"""
self._closing = True
if (self._socket_connection
and self._socket_connection.connection
and self._socket_connection.connection.active):
# try a clean shutdown
for sender in self._senders.values():
sender.detach()
for servers in self._servers.values():
for server in servers.values():
server.detach()
self._replies.detach()
self._socket_connection.connection.close()
else:
# don't wait for a close from the remote, may never happen
self.processor.shutdown()
# reply link active callback:
def _reply_link_ready(self):
"""Invoked when the Replies reply link has become active. At this
point, we are ready to send/receive messages (via Task processing).
"""
LOG.info(_LI("Messaging is active (%(hostname)s:%(port)s)"),
{'hostname': self.hosts.current.hostname,
'port': self.hosts.current.port})
self._schedule_task_processing()
# callback from eventloop on socket error
def socket_error(self, error):
"""Called by eventloop when a socket error occurs."""
LOG.error(_LE("Socket failure: %s"), error)
self._handle_connection_loss()
# Pyngus connection event callbacks (and their helpers), all invoked from
# the eventloop thread:
def connection_failed(self, connection, error):
"""This is a Pyngus callback, invoked by Pyngus when a non-recoverable
error occurs on the connection.
"""
if connection is not self._socket_connection.connection:
# pyngus bug: ignore failure callback on destroyed connections
return
LOG.debug("AMQP Connection failure: %s", error)
self._handle_connection_loss()
def connection_active(self, connection):
"""This is a Pyngus callback, invoked by Pyngus when the connection to
the peer is up. At this point, the driver will activate all subscriber
links (server) and the reply link.
"""
LOG.debug("Connection active (%(hostname)s:%(port)s), subscribing...",
{'hostname': self.hosts.current.hostname,
'port': self.hosts.current.port})
for servers in self._servers.values():
for server in servers.values():
server.attach(self._socket_connection.connection)
self._replies = Replies(self._socket_connection.connection,
lambda: self._reply_link_ready())
self._delay = 0
def connection_closed(self, connection):
"""This is a Pyngus callback, invoked by Pyngus when the connection has
cleanly closed. This occurs after the driver closes the connection
locally, and the peer has acknowledged the close. At this point, the
shutdown of the driver's connection is complete.
"""
LOG.debug("AMQP connection closed.")
# if the driver isn't being shutdown, failover and reconnect
self._handle_connection_loss()
def connection_remote_closed(self, connection, reason):
"""This is a Pyngus callback, invoked by Pyngus when the peer has
requested that the connection be closed.
"""
# The messaging service/broker is trying to shut down the
# connection. Acknowledge the close, and try to reconnect/failover
# later once the connection has closed (connection_closed is called).
if reason:
LOG.info(_LI("Connection closed by peer: %s"), reason)
self._socket_connection.connection.close()
def sasl_done(self, connection, pn_sasl, outcome):
"""This is a Pyngus callback invoked when the SASL handshake
has completed. The outcome of the handshake is passed in the outcome
argument.
"""
if outcome == proton.SASL.OK:
return
LOG.error(_LE("AUTHENTICATION FAILURE: Cannot connect to "
"%(hostname)s:%(port)s as user %(username)s"),
{'hostname': self.hosts.current.hostname,
'port': self.hosts.current.port,
'username': self.hosts.current.username})
# connection failure will be handled later
def _handle_connection_loss(self):
"""The connection to the messaging service has been lost. Try to
reestablish the connection/failover if not shutting down the driver.
"""
if self._closing:
# we're in the middle of shutting down the driver anyways,
# just consider it done:
self.processor.shutdown()
else:
# for some reason, we've lost the connection to the messaging
# service. Try to re-establish the connection:
if not self._reconnecting:
self._reconnecting = True
LOG.info(_LI("delaying reconnect attempt for %d seconds"),
self._delay)
self.processor.schedule(lambda: self._do_reconnect(),
self._delay)
self._delay = (1 if self._delay == 0
else min(self._delay * 2, 60))
def _do_reconnect(self):
"""Invoked on connection/socket failure, failover and re-connect to the
messaging service.
"""
if not self._closing:
self._hard_reset()
self._reconnecting = False
host = self.hosts.next()
LOG.info(_LI("Reconnecting to: %(hostname)s:%(port)s"),
{'hostname': host.hostname, 'port': host.port})
self._socket_connection.connect(host)
def _hard_reset(self):
"""Reset the controller to its pre-connection state"""
# note well: since this method destroys the connection, it cannot be
# invoked directly from a pyngus callback. Use processor.schedule() to
# run this method on the main loop instead.
for sender in self._senders.values():
sender.destroy()
self._senders.clear()
for servers in self._servers.values():
for server in servers.values():
# discard links, but keep servers around to re-attach if
# failing over
server.reset()
if self._replies:
self._replies.destroy()
self._replies = None
if self._socket_connection:
self._socket_connection.reset()


@ -1,111 +0,0 @@
# Copyright 2014, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import threading
import time
from oslo_messaging._drivers.amqp1_driver import controller
from oslo_messaging._i18n import _LW
from oslo_messaging import exceptions
from six import moves
LOG = logging.getLogger(__name__)
class SendTask(controller.Task):
"""A task that sends a message to a target, and optionally waits for a
reply message. The caller may block until the remote confirms receipt or
the reply message has arrived.
"""
def __init__(self, target, request, wait_for_reply, deadline):
super(SendTask, self).__init__()
self._target = target
self._request = request
self._deadline = deadline
self._wait_for_reply = wait_for_reply
self._results_queue = moves.queue.Queue()
def wait(self, timeout):
"""Wait for the send to complete, and, optionally, a reply message from
the remote. Will raise MessagingTimeout if the send does not complete
or no reply is received within timeout seconds. If the request has
failed for any other reason, a MessagingException is raised.
"""
try:
result = self._results_queue.get(timeout=timeout)
except moves.queue.Empty:
if self._wait_for_reply:
reason = "Timed out waiting for a reply."
else:
reason = "Timed out waiting for send to complete."
raise exceptions.MessagingTimeout(reason)
if result["status"] == "OK":
return result.get("response", None)
raise result["error"]
def execute(self, controller):
"""Runs on eventloop thread - sends request."""
if not self._deadline or self._deadline > time.time():
controller.request(self._target, self._request,
self._results_queue, self._wait_for_reply)
else:
LOG.warning(_LW("Send request to %s aborted: TTL expired."),
self._target)
class ListenTask(controller.Task):
"""A task that creates a subscription to the given target. Messages
arriving from the target are given to the listener.
"""
def __init__(self, target, listener, notifications=False):
"""Create a subscription to the target."""
super(ListenTask, self).__init__()
self._target = target
self._listener = listener
self._notifications = notifications
def execute(self, controller):
"""Run on the eventloop thread - subscribes to target. Inbound messages
are queued to the listener's incoming queue.
"""
if self._notifications:
controller.subscribe_notifications(self._target,
self._listener.incoming,
self._listener.id)
else:
controller.subscribe(self._target,
self._listener.incoming,
self._listener.id)
class ReplyTask(controller.Task):
"""A task that sends 'response' message to 'address'.
"""
def __init__(self, address, response):
super(ReplyTask, self).__init__()
self._address = address
self._response = response
self._wakeup = threading.Event()
def wait(self):
"""Wait for the controller to send the message.
"""
self._wakeup.wait()
def execute(self, controller):
"""Run on the eventloop thread - send the response message."""
controller.response(self._address, self._response)
self._wakeup.set()
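# An illustrative sketch (not part of the original module): the
# wait()/execute() handshake of ReplyTask, driven by a stand-in
# controller (hypothetical, for illustration only).
def _example_reply_task():
    class _FakeController(object):
        def response(self, address, response):
            pass  # a real Controller would send on the reply link
    task = ReplyTask('reply-address', 'pong')
    task.execute(_FakeController())
    task.wait()  # returns immediately; execute() set the event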


@ -1,345 +0,0 @@
# Copyright 2014, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
A thread that performs all messaging I/O and protocol event handling.
This module provides a background thread that handles messaging operations
scheduled via the Controller, and performs blocking socket I/O and timer
processing. This thread is designed to be as simple as possible - all the
protocol specific intelligence is provided by the Controller and executed on
the background thread via callables.
"""
import errno
import heapq
import logging
import os
import select
import socket
import sys
import threading
import time
import uuid
import pyngus
from six import moves
from oslo_messaging._i18n import _LE, _LI, _LW
LOG = logging.getLogger(__name__)
class _SocketConnection(object):
"""Associates a pyngus Connection with a python network socket,
and handles all connection-related I/O and timer events.
"""
def __init__(self, name, container, properties, handler):
self.name = name
self.socket = None
self._properties = properties or {}
self._properties["properties"] = self._get_name_and_pid()
# The handler is a pyngus ConnectionEventHandler, which is invoked by
# pyngus on connection-related events (active, closed, error, etc).
# Currently it is the Controller object.
self._handler = handler
self._container = container
self.connection = None
def _get_name_and_pid(self):
# helps identify the process that is using the connection
return {u'process': os.path.basename(sys.argv[0]), u'pid': os.getpid()}
def fileno(self):
"""Allows use of a _SocketConnection in a select() call.
"""
return self.socket.fileno()
def read(self):
"""Called when socket is read-ready."""
while True:
try:
rc = pyngus.read_socket_input(self.connection, self.socket)
self.connection.process(time.time())
return rc
except (socket.timeout, socket.error) as e:
# pyngus handles EAGAIN/EWOULDBLOCK and EINTR
self.connection.close_input()
self.connection.close_output()
self._handler.socket_error(str(e))
return pyngus.Connection.EOS
def write(self):
"""Called when socket is write-ready."""
while True:
try:
rc = pyngus.write_socket_output(self.connection, self.socket)
self.connection.process(time.time())
return rc
except (socket.timeout, socket.error) as e:
# pyngus handles EAGAIN/EWOULDBLOCK and EINTR
self.connection.close_output()
self.connection.close_input()
self._handler.socket_error(str(e))
return pyngus.Connection.EOS
def connect(self, host):
"""Connect to host and start the AMQP protocol."""
addr = socket.getaddrinfo(host.hostname, host.port,
socket.AF_INET, socket.SOCK_STREAM)
if not addr:
key = "%s:%i" % (host.hostname, host.port)
error = "Invalid peer address '%s'" % key
LOG.error(_LE("Invalid peer address '%s'"), key)
self._handler.socket_error(error)
return
my_socket = socket.socket(addr[0][0], addr[0][1], addr[0][2])
my_socket.setblocking(0) # 0=non-blocking
my_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
try:
my_socket.connect(addr[0][4])
except socket.error as e:
if e.errno != errno.EINPROGRESS:
error = "Socket connect failure '%s'" % str(e)
LOG.error(_LE("Socket connect failure '%s'"), str(e))
self._handler.socket_error(error)
return
self.socket = my_socket
props = self._properties.copy()
if pyngus.VERSION >= (2, 0, 0):
# configure client authentication
#
props['x-server'] = False
if host.username:
props['x-username'] = host.username
props['x-password'] = host.password or ""
c = self._container.create_connection(self.name, self._handler, props)
c.user_context = self
self.connection = c
if pyngus.VERSION < (2, 0, 0):
# older versions of pyngus require manual SASL configuration:
# determine the proper SASL mechanism: PLAIN if a username/password
# is present, else ANONYMOUS
pn_sasl = self.connection.pn_sasl
if host.username:
password = host.password if host.password else ""
pn_sasl.plain(host.username, password)
else:
pn_sasl.mechanisms("ANONYMOUS")
# TODO(kgiusti): server if accepting inbound connections
pn_sasl.client()
self.connection.open()
def reset(self, name=None):
"""Clean up the current state, expect 'connect()' to be recalled
later.
"""
# note well: since destroy() is called on the connection, do not invoke
# this method from a pyngus callback!
if self.connection:
self.connection.destroy()
self.connection = None
self.close()
if name:
self.name = name
def close(self):
if self.socket:
self.socket.close()
self.socket = None
class Schedule(object):
"""A list of callables (requests). Each callable may have a delay (in
milliseconds) which causes the callable to be scheduled to run after the
delay passes.
"""
def __init__(self):
self._entries = []
def schedule(self, request, delay):
"""Request a callable be executed after delay."""
entry = (time.time() + delay, request)
heapq.heappush(self._entries, entry)
def get_delay(self, max_delay=None):
"""Get the delay in milliseconds until the next callable needs to be
run, or 'max_delay' if no outstanding callables or the delay to the
next callable is > 'max_delay'.
"""
due = self._entries[0][0] if self._entries else None
if due is None:
return max_delay
now = time.time()
if due < now:
return 0
else:
return min(due - now, max_delay) if max_delay else due - now
def process(self):
"""Invoke all expired callables."""
while self._entries and self._entries[0][0] < time.time():
heapq.heappop(self._entries)[1]()
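# An illustrative sketch (not part of the original module): running a
# deferred callable the way the I/O loop does, by polling get_delay()
# and calling process() once the entry expires.
def _example_schedule():
    fired = []
    sched = Schedule()
    sched.schedule(lambda: fired.append('ping'), 0.1)
    while not fired:
        delay = sched.get_delay(1.0)
        if delay:
            time.sleep(delay)
        sched.process()
    return fired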
class Requests(object):
"""A queue of callables to execute from the eventloop thread's main
loop.
"""
def __init__(self):
self._requests = moves.queue.Queue(maxsize=10)
self._wakeup_pipe = os.pipe()
def wakeup(self, request=None):
"""Enqueue a callable to be executed by the eventloop, and force the
eventloop thread to wake up from select().
"""
if request:
self._requests.put(request)
os.write(self._wakeup_pipe[1], b'!')
def fileno(self):
"""Allows this request queue to be used by select()."""
return self._wakeup_pipe[0]
def read(self):
"""Invoked by the eventloop thread, execute each queued callable."""
os.read(self._wakeup_pipe[0], 512)
# first, pop all of the currently queued requests
requests = []
while not self._requests.empty():
requests.append(self._requests.get())
# then process them; this allows callables to re-register themselves to
# be run on the next iteration of the I/O loop
for r in requests:
r()
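# An illustrative sketch (not part of the original module): waking a
# blocking select() from another thread via the pipe inside Requests.
def _example_requests_wakeup():
    done = threading.Event()
    reqs = Requests()
    reqs.wakeup(done.set)  # queue a callable and write to the pipe
    readable, _, _ = select.select([reqs], [], [], 1.0)
    if readable:
        reqs.read()  # drains the pipe and executes done.set()
    return done.is_set()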
class Thread(threading.Thread):
"""Manages socket I/O and executes callables queued up by external
threads.
"""
def __init__(self, container_name=None):
super(Thread, self).__init__()
# callables from other threads:
self._requests = Requests()
# delayed callables (only used on this thread for now):
self._schedule = Schedule()
# Configure a container
if container_name is None:
container_name = "Container-" + uuid.uuid4().hex
self._container = pyngus.Container(container_name)
self.name = "Thread for Proton container: %s" % self._container.name
self._shutdown = False
self.daemon = True
self.start()
def wakeup(self, request=None):
"""Wake up the eventloop thread, Optionally providing a callable to run
when the eventloop wakes up. Thread safe.
"""
self._requests.wakeup(request)
def shutdown(self):
"""Shutdown the eventloop thread. Thread safe.
"""
LOG.debug("eventloop shutdown requested")
self._shutdown = True
self.wakeup()
def destroy(self):
# release the container. This can only be called after the eventloop
# thread has exited
self._container.destroy()
self._container = None
# the following methods are not thread safe - they must be run from the
# eventloop thread
def schedule(self, request, delay):
"""Invoke request after delay seconds."""
self._schedule.schedule(request, delay)
def connect(self, host, handler, properties=None, name=None):
"""Get a _SocketConnection to a peer represented by url."""
key = name or "%s:%i" % (host.hostname, host.port)
# return pre-existing
conn = self._container.get_connection(key)
if conn:
return conn.user_context
# create a new connection - this will be stored in the
# container, using the specified name as the lookup key, or if
# no name was provided, the host:port combination
sc = _SocketConnection(key, self._container,
properties, handler=handler)
sc.connect(host)
return sc
def run(self):
"""Run the proton event/timer loop."""
LOG.debug("Starting Proton thread, container=%s",
self._container.name)
while not self._shutdown:
readers, writers, timers = self._container.need_processing()
readfds = [c.user_context for c in readers]
# additionally, always check for readability of the pipe that
# other threads use to wake up this processing thread
readfds.append(self._requests)
writefds = [c.user_context for c in writers]
timeout = None
if timers:
deadline = timers[0].deadline # 0 == next expiring timer
now = time.time()
timeout = 0 if deadline <= now else deadline - now
# adjust timeout for any deferred requests
timeout = self._schedule.get_delay(timeout)
try:
results = select.select(readfds, writefds, [], timeout)
except select.error as serror:
if serror[0] == errno.EINTR:
LOG.warning(_LW("ignoring interrupt from select(): %s"),
str(serror))
continue
raise # assuming fatal...
readable, writable, ignore = results
for r in readable:
r.read()
for t in timers:
if t.deadline > time.time():
break
t.process(time.time())
for w in writable:
w.write()
self._schedule.process() # run any deferred requests
LOG.info(_LI("eventloop thread exiting, container=%s"),
self._container.name)


@ -1,98 +0,0 @@
# Copyright 2014, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
amqp1_opts = [
cfg.StrOpt('server_request_prefix',
default='exclusive',
deprecated_group='amqp1',
help="address prefix used when sending to a specific server"),
cfg.StrOpt('broadcast_prefix',
default='broadcast',
deprecated_group='amqp1',
help="address prefix used when broadcasting to all servers"),
cfg.StrOpt('group_request_prefix',
default='unicast',
deprecated_group='amqp1',
help="address prefix when sending to any server in group"),
cfg.StrOpt('container_name',
deprecated_group='amqp1',
help='Name for the AMQP container'),
cfg.IntOpt('idle_timeout',
default=0, # disabled
deprecated_group='amqp1',
help='Timeout for inactive connections (in seconds)'),
cfg.BoolOpt('trace',
default=False,
deprecated_group='amqp1',
help='Debug: dump AMQP frames to stdout'),
cfg.StrOpt('ssl_ca_file',
default='',
deprecated_group='amqp1',
help="CA certificate PEM file to verify server certificate"),
cfg.StrOpt('ssl_cert_file',
default='',
deprecated_group='amqp1',
help='Identifying certificate PEM file to present to clients'),
cfg.StrOpt('ssl_key_file',
default='',
deprecated_group='amqp1',
help='Private key PEM file used to sign cert_file certificate'),
cfg.StrOpt('ssl_key_password',
deprecated_group='amqp1',
secret=True,
help='Password for decrypting ssl_key_file (if encrypted)'),
cfg.BoolOpt('allow_insecure_clients',
default=False,
deprecated_group='amqp1',
help='Accept clients using either SSL or plain TCP'),
cfg.StrOpt('sasl_mechanisms',
default='',
deprecated_group='amqp1',
help='Space separated list of acceptable SASL mechanisms'),
cfg.StrOpt('sasl_config_dir',
default='',
deprecated_group='amqp1',
help='Path to directory that contains the SASL configuration'),
cfg.StrOpt('sasl_config_name',
default='',
deprecated_group='amqp1',
help='Name of configuration file (without .conf suffix)'),
cfg.StrOpt('username',
default='',
deprecated_group='amqp1',
help='User name for message broker authentication'),
cfg.StrOpt('password',
default='',
deprecated_group='amqp1',
secret=True,
help='Password for message broker authentication')
]
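# An illustrative sketch (not part of the original module), mirroring
# what controller.Controller.__init__ does: register these options on a
# fresh ConfigOpts and read a default back.
def _example_register_opts():
    conf = cfg.ConfigOpts()
    group = cfg.OptGroup(name='oslo_messaging_amqp',
                         title='AMQP 1.0 driver options')
    conf.register_group(group)
    conf.register_opts(amqp1_opts, group=group)
    return conf.oslo_messaging_amqp.server_request_prefix  # 'exclusive'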


@ -1,511 +0,0 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__all__ = ['AMQPDriverBase']
import logging
import threading
import time
import uuid
import cachetools
from oslo_utils import timeutils
from six import moves
import oslo_messaging
from oslo_messaging._drivers import amqp as rpc_amqp
from oslo_messaging._drivers import base
from oslo_messaging._drivers import common as rpc_common
from oslo_messaging._i18n import _
from oslo_messaging._i18n import _LE
from oslo_messaging._i18n import _LI
from oslo_messaging._i18n import _LW
LOG = logging.getLogger(__name__)
class AMQPIncomingMessage(base.RpcIncomingMessage):
def __init__(self, listener, ctxt, message, unique_id, msg_id, reply_q,
obsolete_reply_queues):
super(AMQPIncomingMessage, self).__init__(ctxt, message)
self.listener = listener
self.unique_id = unique_id
self.msg_id = msg_id
self.reply_q = reply_q
self._obsolete_reply_queues = obsolete_reply_queues
self.stopwatch = timeutils.StopWatch()
self.stopwatch.start()
def _send_reply(self, conn, reply=None, failure=None):
if not self._obsolete_reply_queues.reply_q_valid(self.reply_q,
self.msg_id):
return
if failure:
failure = rpc_common.serialize_remote_exception(failure)
# NOTE(sileht): ending can be removed in N*, see Listener.wait()
# for more detail.
msg = {'result': reply, 'failure': failure, 'ending': True,
'_msg_id': self.msg_id}
rpc_amqp._add_unique_id(msg)
unique_id = msg[rpc_amqp.UNIQUE_ID]
LOG.debug("sending reply msg_id: %(msg_id)s "
"reply queue: %(reply_q)s "
"time elapsed: %(elapsed)ss", {
'msg_id': self.msg_id,
'unique_id': unique_id,
'reply_q': self.reply_q,
'elapsed': self.stopwatch.elapsed()})
conn.direct_send(self.reply_q, rpc_common.serialize_msg(msg))
def reply(self, reply=None, failure=None):
if not self.msg_id:
# NOTE(Alexei_987) not sending a reply if msg_id is empty,
# because the caller side does not expect one
return
# NOTE(sileht): return without holding a connection if possible
if not self._obsolete_reply_queues.reply_q_valid(self.reply_q,
self.msg_id):
return
# NOTE(sileht): we read the configuration value from the driver
# to be able to backport this change to previous versions that
# still have the qpid driver
duration = self.listener.driver.missing_destination_retry_timeout
timer = rpc_common.DecayingTimer(duration=duration)
timer.start()
while True:
try:
with self.listener.driver._get_connection(
rpc_common.PURPOSE_SEND) as conn:
self._send_reply(conn, reply, failure)
return
except rpc_amqp.AMQPDestinationNotFound:
if timer.check_return() > 0:
LOG.debug(("The reply %(msg_id)s cannot be sent "
"%(reply_q)s reply queue don't exist, "
"retrying..."), {
'msg_id': self.msg_id,
'reply_q': self.reply_q})
time.sleep(0.25)
else:
self._obsolete_reply_queues.add(self.reply_q, self.msg_id)
LOG.info(_LI("The reply %(msg_id)s cannot be sent "
"%(reply_q)s reply queue don't exist after "
"%(duration)s sec abandoning..."), {
'msg_id': self.msg_id,
'reply_q': self.reply_q,
'duration': duration})
return
def acknowledge(self):
self.message.acknowledge()
self.listener.msg_id_cache.add(self.unique_id)
def requeue(self):
# NOTE(sileht): If the connection is lost between receiving the
# message and requeueing it, this requeue call fails, but because
# the message was neither acknowledged nor added to the
# msg_id_cache, it will be reconsumed anyway. The only difference
# is that the message stays at the beginning of the queue instead
# of moving to the end.
self.message.requeue()
class ObsoleteReplyQueuesCache(object):
"""Cache of reply queue id that doesn't exists anymore.
NOTE(sileht): In case of a broker restart/failover
a reply queue can be unreachable for short period
the IncomingMessage.send_reply will block for 60 seconds
in this case or until rabbit recovers.
But in case of the reply queue is unreachable because the
rpc client is really gone, we can have a ton of reply to send
waiting 60 seconds.
This leads to a starvation of connection of the pool
The rpc server take to much time to send reply, other rpc client will
raise TimeoutError because their don't receive their replies in time.
This object cache stores already known gone client to not wait 60 seconds
and hold a connection of the pool.
Keeping 200 last gone rpc client for 1 minute is enough
and doesn't hold to much memory.
"""
SIZE = 200
TTL = 60
def __init__(self):
self._lock = threading.RLock()
self._cache = cachetools.TTLCache(self.SIZE, self.TTL)
def reply_q_valid(self, reply_q, msg_id):
if reply_q in self._cache:
self._no_reply_log(reply_q, msg_id)
return False
return True
def add(self, reply_q, msg_id):
with self._lock:
self._cache.update({reply_q: msg_id})
self._no_reply_log(reply_q, msg_id)
def _no_reply_log(self, reply_q, msg_id):
LOG.warning(_LW("%(reply_queue)s doesn't exists, drop reply to "
"%(msg_id)s"), {'reply_queue': reply_q,
'msg_id': msg_id})
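# An illustrative sketch (not part of the original module): skipping
# replies to a client whose reply queue is already known to be gone.
def _example_obsolete_cache():
    cache = ObsoleteReplyQueuesCache()
    cache.add('reply_dead', 'msg-1')  # remembered as gone for TTL secs
    return (cache.reply_q_valid('reply_dead', 'msg-2'),   # False
            cache.reply_q_valid('reply_alive', 'msg-3'))  # True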
class AMQPListener(base.PollStyleListener):
def __init__(self, driver, conn):
super(AMQPListener, self).__init__(driver.prefetch_size)
self.driver = driver
self.conn = conn
self.msg_id_cache = rpc_amqp._MsgIdCache()
self.incoming = []
self._stopped = threading.Event()
self._obsolete_reply_queues = ObsoleteReplyQueuesCache()
def __call__(self, message):
ctxt = rpc_amqp.unpack_context(message)
unique_id = self.msg_id_cache.check_duplicate_message(message)
if ctxt.msg_id:
LOG.debug("received message msg_id: %(msg_id)s reply to "
"%(queue)s", {'queue': ctxt.reply_q,
'msg_id': ctxt.msg_id})
else:
LOG.debug("received message with unique_id: %s", unique_id)
self.incoming.append(AMQPIncomingMessage(self,
ctxt.to_dict(),
message,
unique_id,
ctxt.msg_id,
ctxt.reply_q,
self._obsolete_reply_queues))
@base.batch_poll_helper
def poll(self, timeout=None):
while not self._stopped.is_set():
if self.incoming:
return self.incoming.pop(0)
try:
self.conn.consume(timeout=timeout)
except rpc_common.Timeout:
return None
def stop(self):
self._stopped.set()
self.conn.stop_consuming()
def cleanup(self):
# Closes listener connection
self.conn.close()
class ReplyWaiters(object):
WAKE_UP = object()
def __init__(self):
self._queues = {}
self._wrn_threshold = 10
def get(self, msg_id, timeout):
try:
return self._queues[msg_id].get(block=True, timeout=timeout)
except moves.queue.Empty:
raise oslo_messaging.MessagingTimeout(
'Timed out waiting for a reply '
'to message ID %s' % msg_id)
def put(self, msg_id, message_data):
queue = self._queues.get(msg_id)
if not queue:
LOG.info(_LI('No calling threads waiting for msg_id: %s'), msg_id)
LOG.debug(' queues: %(queues)s, message: %(message)s',
{'queues': len(self._queues), 'message': message_data})
else:
queue.put(message_data)
def add(self, msg_id):
self._queues[msg_id] = moves.queue.Queue()
if len(self._queues) > self._wrn_threshold:
LOG.warning(_LW('Number of call queues is greater than warning '
'threshold: %(old_threshold)s. There could be a '
'leak. Increasing threshold to: %(threshold)s'),
{'old_threshold': self._wrn_threshold,
'threshold': self._wrn_threshold * 2})
self._wrn_threshold *= 2
def remove(self, msg_id):
del self._queues[msg_id]
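# An illustrative sketch (not part of the original module): the per-call
# lifecycle of a reply queue inside ReplyWaiters.
def _example_reply_waiters():
    waiters = ReplyWaiters()
    waiters.add('msg-1')                     # listen() registers the call
    waiters.put('msg-1', {'result': 'pong', 'ending': True})
    reply = waiters.get('msg-1', timeout=1)  # raises MessagingTimeout if empty
    waiters.remove('msg-1')                  # unlisten() cleans up
    return reply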
class ReplyWaiter(object):
def __init__(self, reply_q, conn, allowed_remote_exmods):
self.conn = conn
self.allowed_remote_exmods = allowed_remote_exmods
self.msg_id_cache = rpc_amqp._MsgIdCache()
self.waiters = ReplyWaiters()
self.conn.declare_direct_consumer(reply_q, self)
self._thread_exit_event = threading.Event()
self._thread = threading.Thread(target=self.poll)
self._thread.daemon = True
self._thread.start()
def stop(self):
if self._thread:
self._thread_exit_event.set()
self.conn.stop_consuming()
self._thread.join()
self._thread = None
def poll(self):
while not self._thread_exit_event.is_set():
try:
self.conn.consume()
except Exception:
LOG.exception(_LE("Failed to process incoming message, "
"retrying..."))
def __call__(self, message):
message.acknowledge()
incoming_msg_id = message.pop('_msg_id', None)
if message.get('ending'):
LOG.debug("received reply msg_id: %s", incoming_msg_id)
self.waiters.put(incoming_msg_id, message)
def listen(self, msg_id):
self.waiters.add(msg_id)
def unlisten(self, msg_id):
self.waiters.remove(msg_id)
@staticmethod
def _raise_timeout_exception(msg_id):
raise oslo_messaging.MessagingTimeout(
_('Timed out waiting for a reply to message ID %s.') % msg_id)
def _process_reply(self, data):
self.msg_id_cache.check_duplicate_message(data)
if data['failure']:
failure = data['failure']
result = rpc_common.deserialize_remote_exception(
failure, self.allowed_remote_exmods)
else:
result = data.get('result', None)
ending = data.get('ending', False)
return result, ending
def wait(self, msg_id, timeout):
# NOTE(sileht): for each msg_id we may receive two amqp messages:
# the first one carries the payload, and a second one signals that
# the sender has finished sending the payload
# NOTE(viktors): We are going to remove this behavior in the N
# release, but we need to keep backward compatibility, so we should
# support both cases for now.
timer = rpc_common.DecayingTimer(duration=timeout)
timer.start()
final_reply = None
ending = False
while not ending:
timeout = timer.check_return(self._raise_timeout_exception, msg_id)
try:
message = self.waiters.get(msg_id, timeout=timeout)
except moves.queue.Empty:
self._raise_timeout_exception(msg_id)
reply, ending = self._process_reply(message)
if reply is not None:
# NOTE(viktors): This can be either first _send_reply() with an
# empty `result` field or a second _send_reply() with
# ending=True and no `result` field.
final_reply = reply
return final_reply
class AMQPDriverBase(base.BaseDriver):
missing_destination_retry_timeout = 0
def __init__(self, conf, url, connection_pool,
default_exchange=None, allowed_remote_exmods=None):
super(AMQPDriverBase, self).__init__(conf, url, default_exchange,
allowed_remote_exmods)
self._default_exchange = default_exchange
self._connection_pool = connection_pool
self._reply_q_lock = threading.Lock()
self._reply_q = None
self._reply_q_conn = None
self._waiter = None
def _get_exchange(self, target):
return target.exchange or self._default_exchange
def _get_connection(self, purpose=rpc_common.PURPOSE_SEND):
return rpc_common.ConnectionContext(self._connection_pool,
purpose=purpose)
def _get_reply_q(self):
with self._reply_q_lock:
if self._reply_q is not None:
return self._reply_q
reply_q = 'reply_' + uuid.uuid4().hex
conn = self._get_connection(rpc_common.PURPOSE_LISTEN)
self._waiter = ReplyWaiter(reply_q, conn,
self._allowed_remote_exmods)
self._reply_q = reply_q
self._reply_q_conn = conn
return self._reply_q
def _send(self, target, ctxt, message,
wait_for_reply=None, timeout=None,
envelope=True, notify=False, retry=None):
# FIXME(markmc): remove this temporary hack
class Context(object):
def __init__(self, d):
self.d = d
def to_dict(self):
return self.d
context = Context(ctxt)
msg = message
if wait_for_reply:
msg_id = uuid.uuid4().hex
msg.update({'_msg_id': msg_id})
msg.update({'_reply_q': self._get_reply_q()})
rpc_amqp._add_unique_id(msg)
unique_id = msg[rpc_amqp.UNIQUE_ID]
rpc_amqp.pack_context(msg, context)
if envelope:
msg = rpc_common.serialize_msg(msg)
if wait_for_reply:
self._waiter.listen(msg_id)
log_msg = "CALL msg_id: %s " % msg_id
else:
log_msg = "CAST unique_id: %s " % unique_id
try:
with self._get_connection(rpc_common.PURPOSE_SEND) as conn:
if notify:
exchange = self._get_exchange(target)
log_msg += "NOTIFY exchange '%(exchange)s'" \
" topic '%(topic)s'" % {
'exchange': exchange,
'topic': target.topic}
LOG.debug(log_msg)
conn.notify_send(exchange, target.topic, msg, retry=retry)
elif target.fanout:
log_msg += "FANOUT topic '%(topic)s'" % {
'topic': target.topic}
LOG.debug(log_msg)
conn.fanout_send(target.topic, msg, retry=retry)
else:
topic = target.topic
exchange = self._get_exchange(target)
if target.server:
topic = '%s.%s' % (target.topic, target.server)
log_msg += "exchange '%(exchange)s'" \
" topic '%(topic)s'" % {
'exchange': exchange,
'topic': topic}
LOG.debug(log_msg)
conn.topic_send(exchange_name=exchange, topic=topic,
msg=msg, timeout=timeout, retry=retry)
if wait_for_reply:
result = self._waiter.wait(msg_id, timeout)
if isinstance(result, Exception):
raise result
return result
finally:
if wait_for_reply:
self._waiter.unlisten(msg_id)
def send(self, target, ctxt, message, wait_for_reply=None, timeout=None,
retry=None):
return self._send(target, ctxt, message, wait_for_reply, timeout,
retry=retry)
def send_notification(self, target, ctxt, message, version, retry=None):
return self._send(target, ctxt, message,
envelope=(version == 2.0), notify=True, retry=retry)
def listen(self, target, batch_size, batch_timeout):
conn = self._get_connection(rpc_common.PURPOSE_LISTEN)
listener = AMQPListener(self, conn)
conn.declare_topic_consumer(exchange_name=self._get_exchange(target),
topic=target.topic,
callback=listener)
conn.declare_topic_consumer(exchange_name=self._get_exchange(target),
topic='%s.%s' % (target.topic,
target.server),
callback=listener)
conn.declare_fanout_consumer(target.topic, listener)
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def listen_for_notifications(self, targets_and_priorities, pool,
batch_size, batch_timeout):
conn = self._get_connection(rpc_common.PURPOSE_LISTEN)
listener = AMQPListener(self, conn)
for target, priority in targets_and_priorities:
conn.declare_topic_consumer(
exchange_name=self._get_exchange(target),
topic='%s.%s' % (target.topic, priority),
callback=listener, queue_name=pool)
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def cleanup(self):
if self._connection_pool:
self._connection_pool.empty()
self._connection_pool = None
with self._reply_q_lock:
if self._reply_q is not None:
self._waiter.stop()
self._reply_q_conn.close()
self._reply_q_conn = None
self._reply_q = None
self._waiter = None


@ -1,274 +0,0 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import threading
from oslo_config import cfg
from oslo_utils import excutils
from oslo_utils import timeutils
import six
from oslo_messaging import exceptions
base_opts = [
cfg.IntOpt('rpc_conn_pool_size', default=30,
deprecated_group='DEFAULT',
help='Size of RPC connection pool.'),
cfg.IntOpt('conn_pool_min_size', default=2,
help='The pool size limit for connections expiration policy'),
cfg.IntOpt('conn_pool_ttl', default=1200,
help='The time-to-live in sec of idle connections in the pool')
]
def batch_poll_helper(func):
"""Decorator to poll messages in batch
This decorator helps driver that polls message one by one,
to returns a list of message.
"""
def wrapper(in_self, timeout=None, batch_size=1, batch_timeout=None):
incomings = []
driver_prefetch = in_self.prefetch_size
if driver_prefetch > 0:
batch_size = min(batch_size, driver_prefetch)
with timeutils.StopWatch(timeout) as timeout_watch:
# poll first message
msg = func(in_self, timeout=timeout_watch.leftover(True))
if msg is not None:
incomings.append(msg)
if batch_size == 1 or msg is None:
return incomings
# update batch_timeout according to timeout for whole operation
timeout_left = timeout_watch.leftover(True)
if timeout_left is not None and (
batch_timeout is None or timeout_left < batch_timeout):
batch_timeout = timeout_left
with timeutils.StopWatch(batch_timeout) as batch_timeout_watch:
# poll the remaining batch messages
while len(incomings) < batch_size and msg is not None:
msg = func(in_self, timeout=batch_timeout_watch.leftover(True))
if msg is not None:
incomings.append(msg)
return incomings
return wrapper
class TransportDriverError(exceptions.MessagingException):
"""Base class for transport driver specific exceptions."""
@six.add_metaclass(abc.ABCMeta)
class IncomingMessage(object):
def __init__(self, ctxt, message):
self.ctxt = ctxt
self.message = message
def acknowledge(self):
"""Acknowledge the message."""
@abc.abstractmethod
def requeue(self):
"""Requeue the message."""
@six.add_metaclass(abc.ABCMeta)
class RpcIncomingMessage(IncomingMessage):
@abc.abstractmethod
def reply(self, reply=None, failure=None):
"""Send a reply or failure back to the client."""
@six.add_metaclass(abc.ABCMeta)
class PollStyleListener(object):
def __init__(self, prefetch_size=-1):
self.prefetch_size = prefetch_size
@abc.abstractmethod
def poll(self, timeout=None, batch_size=1, batch_timeout=None):
"""Blocking until 'batch_size' message is pending and return
[IncomingMessage].
Waits for first message. Then waits for next batch_size-1 messages
during batch window defined by batch_timeout
This method block current thread until message comes, stop() is
executed by another thread or timemout is elapsed.
"""
def stop(self):
"""Stop listener.
Stop the listener message polling
"""
pass
def cleanup(self):
"""Cleanup listener.
Close connection (socket) used by listener if any.
As this is listener specific method, overwrite it in to derived class
if cleanup of listener required.
"""
pass
@six.add_metaclass(abc.ABCMeta)
class Listener(object):
def __init__(self, batch_size, batch_timeout,
prefetch_size=-1):
"""Init Listener
:param batch_size: desired number of messages passed to
single on_incoming_callback notification
:param batch_timeout: defines how long should we wait for batch_size
messages if we already have some messages waiting for processing
:param prefetch_size: defines how many massages we want to prefetch
from backend (depend on driver type) by single request
"""
self.on_incoming_callback = None
self.batch_timeout = batch_timeout
self.prefetch_size = prefetch_size
if prefetch_size > 0:
batch_size = min(batch_size, prefetch_size)
self.batch_size = batch_size
def start(self, on_incoming_callback):
"""Start listener.
Start the listener message polling
:param on_incoming_callback: callback function to be executed when
listener received messages. Messages should be processed and
acked/nacked by callback
"""
self.on_incoming_callback = on_incoming_callback
def stop(self):
"""Stop listener.
Stop the listener message polling
"""
self.on_incoming_callback = None
@abc.abstractmethod
def cleanup(self):
"""Cleanup listener.
Close connection (socket) used by listener if any.
As this is listener specific method, overwrite it in to derived class
if cleanup of listener required.
"""
class PollStyleListenerAdapter(Listener):
def __init__(self, poll_style_listener, batch_size, batch_timeout):
super(PollStyleListenerAdapter, self).__init__(
batch_size, batch_timeout, poll_style_listener.prefetch_size
)
self._poll_style_listener = poll_style_listener
self._listen_thread = threading.Thread(target=self._runner)
self._listen_thread.daemon = True
self._started = False
def start(self, on_incoming_callback):
"""Start listener.
Start the listener message polling
:param on_incoming_callback: callback function to be executed when
listener received messages. Messages should be processed and
acked/nacked by callback
"""
super(PollStyleListenerAdapter, self).start(on_incoming_callback)
self._started = True
self._listen_thread.start()
@excutils.forever_retry_uncaught_exceptions
def _runner(self):
while self._started:
incoming = self._poll_style_listener.poll(
batch_size=self.batch_size, batch_timeout=self.batch_timeout)
if incoming:
self.on_incoming_callback(incoming)
# listener is stopped but we need to process all already consumed
# messages
while True:
incoming = self._poll_style_listener.poll(
batch_size=self.batch_size, batch_timeout=self.batch_timeout)
if not incoming:
return
self.on_incoming_callback(incoming)
def stop(self):
"""Stop listener.
Stop the listener message polling
"""
self._started = False
self._poll_style_listener.stop()
self._listen_thread.join()
super(PollStyleListenerAdapter, self).stop()
def cleanup(self):
"""Cleanup listener.
Close connection (socket) used by listener if any.
As this is listener specific method, overwrite it in to derived class
if cleanup of listener required.
"""
self._poll_style_listener.cleanup()
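# A minimal wiring sketch (illustrative only; 'poll_style_listener' and
# 'process_batch' are hypothetical): drivers wrap a poll-style listener
# in PollStyleListenerAdapter so callers get the callback-based Listener
# interface.
#
#     adapter = PollStyleListenerAdapter(poll_style_listener,
#                                        batch_size=10, batch_timeout=0.1)
#     adapter.start(on_incoming_callback=process_batch)
#     ...
#     adapter.stop()      # joins the polling thread
#     adapter.cleanup()   # delegates to poll_style_listener.cleanup()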
@six.add_metaclass(abc.ABCMeta)
class BaseDriver(object):
prefetch_size = 0
def __init__(self, conf, url,
default_exchange=None, allowed_remote_exmods=None):
self.conf = conf
self._url = url
self._default_exchange = default_exchange
self._allowed_remote_exmods = allowed_remote_exmods or []
def require_features(self, requeue=False):
if requeue:
raise NotImplementedError('Message requeueing not supported by '
'this transport driver')
@abc.abstractmethod
def send(self, target, ctxt, message,
wait_for_reply=None, timeout=None, envelope=False):
"""Send a message to the given target."""
@abc.abstractmethod
def send_notification(self, target, ctxt, message, version):
"""Send a notification message to the given target."""
@abc.abstractmethod
def listen(self, target, batch_size, batch_timeout):
"""Construct a Listener for the given target."""
@abc.abstractmethod
def listen_for_notifications(self, targets_and_priorities, pool,
batch_size, batch_timeout):
"""Construct a notification Listener for the given list of
tuple of (target, priority).
"""
@abc.abstractmethod
def cleanup(self):
"""Release all resources."""

View File

@ -1,509 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
# Copyright 2011 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import copy
import logging
import sys
import traceback
from oslo_serialization import jsonutils
from oslo_utils import timeutils
import six
import oslo_messaging
from oslo_messaging._i18n import _
from oslo_messaging._i18n import _LE
from oslo_messaging import _utils as utils
LOG = logging.getLogger(__name__)
_EXCEPTIONS_MODULE = 'exceptions' if six.PY2 else 'builtins'
'''RPC Envelope Version.
This version number applies to the top level structure of messages sent out.
It does *not* apply to the message payload, which must be versioned
independently. For example, when using rpc APIs, a version number is applied
for changes to the API being exposed over rpc. This version number is handled
in the rpc proxy and dispatcher modules.
This version number applies to the message envelope that is used in the
serialization done inside the rpc layer. See serialize_msg() and
deserialize_msg().
The current message format (version 2.0) is very simple. It is:
{
'oslo.version': <RPC Envelope Version as a String>,
'oslo.message': <Application Message Payload, JSON encoded>
}
Message format version '1.0' is just considered to be the messages we sent
without a message envelope.
So, the current message envelope just includes the envelope version. It may
eventually contain additional information, such as a signature for the message
payload.
We will JSON encode the application message payload. The message envelope,
which includes the JSON encoded application message body, will be passed down
to the messaging libraries as a dict.
'''
_RPC_ENVELOPE_VERSION = '2.0'
_VERSION_KEY = 'oslo.version'
_MESSAGE_KEY = 'oslo.message'
_REMOTE_POSTFIX = '_Remote'
class RPCException(Exception):
msg_fmt = _("An unknown RPC related exception occurred.")
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if not message:
try:
message = self.msg_fmt % kwargs
except Exception:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception(_LE('Exception in string format operation, '
'kwargs are:'))
for name, value in six.iteritems(kwargs):
LOG.error("%s: %s", name, value)
# at least get the core message out if something happened
message = self.msg_fmt
super(RPCException, self).__init__(message)
class Timeout(RPCException):
"""Signifies that a timeout has occurred.
This exception is raised if the rpc_response_timeout is reached while
waiting for a response from the remote side.
"""
msg_fmt = _('Timeout while waiting on RPC response - '
'topic: "%(topic)s", RPC method: "%(method)s" '
'info: "%(info)s"')
def __init__(self, info=None, topic=None, method=None):
"""Initiates Timeout object.
:param info: Extra info to convey to the user
:param topic: The topic that the rpc call was sent to
:param method: The name of the rpc method being
called
"""
self.info = info
self.topic = topic
self.method = method
super(Timeout, self).__init__(
None,
info=info or _('<unknown>'),
topic=topic or _('<unknown>'),
method=method or _('<unknown>'))
class DuplicateMessageError(RPCException):
msg_fmt = _("Found duplicate message(%(msg_id)s). Skipping it.")
class InvalidRPCConnectionReuse(RPCException):
msg_fmt = _("Invalid reuse of an RPC connection.")
class UnsupportedRpcVersion(RPCException):
msg_fmt = _("Specified RPC version, %(version)s, not supported by "
"this endpoint.")
class UnsupportedRpcEnvelopeVersion(RPCException):
msg_fmt = _("Specified RPC envelope version, %(version)s, "
"not supported by this endpoint.")
class RpcVersionCapError(RPCException):
msg_fmt = _("Specified RPC version cap, %(version_cap)s, is too low")
class Connection(object):
"""A connection, returned by rpc.create_connection().
This class represents a connection to the message bus used for rpc.
An instance of this class should never be created by users of the rpc API.
Use rpc.create_connection() instead.
"""
def close(self):
"""Close the connection.
This method must be called when the connection will no longer be used.
It will ensure that any resources associated with the connection, such
as a network connection, and cleaned up.
"""
raise NotImplementedError()
def serialize_remote_exception(failure_info):
"""Prepares exception data to be sent over rpc.
Failure_info should be a sys.exc_info() tuple.
"""
tb = traceback.format_exception(*failure_info)
failure = failure_info[1]
kwargs = {}
if hasattr(failure, 'kwargs'):
kwargs = failure.kwargs
# NOTE(matiu): With cells, it's possible to re-raise remote, remote
# exceptions. Let's turn it back into the original exception type.
cls_name = six.text_type(failure.__class__.__name__)
mod_name = six.text_type(failure.__class__.__module__)
if (cls_name.endswith(_REMOTE_POSTFIX) and
mod_name.endswith(_REMOTE_POSTFIX)):
cls_name = cls_name[:-len(_REMOTE_POSTFIX)]
mod_name = mod_name[:-len(_REMOTE_POSTFIX)]
data = {
'class': cls_name,
'module': mod_name,
'message': six.text_type(failure),
'tb': tb,
'args': failure.args,
'kwargs': kwargs
}
json_data = jsonutils.dumps(data)
return json_data
def deserialize_remote_exception(data, allowed_remote_exmods):
failure = jsonutils.loads(six.text_type(data))
trace = failure.get('tb', [])
message = failure.get('message', "") + "\n" + "\n".join(trace)
name = failure.get('class')
module = failure.get('module')
# NOTE(ameade): We DO NOT want to allow just any module to be imported, in
# order to prevent arbitrary code execution.
if module != _EXCEPTIONS_MODULE and module not in allowed_remote_exmods:
return oslo_messaging.RemoteError(name, failure.get('message'), trace)
try:
__import__(module)
mod = sys.modules[module]
klass = getattr(mod, name)
if not issubclass(klass, Exception):
raise TypeError("Can only deserialize Exceptions")
failure = klass(*failure.get('args', []), **failure.get('kwargs', {}))
except (AttributeError, TypeError, ImportError):
return oslo_messaging.RemoteError(name, failure.get('message'), trace)
ex_type = type(failure)
str_override = lambda self: message
new_ex_type = type(ex_type.__name__ + _REMOTE_POSTFIX, (ex_type,),
{'__str__': str_override, '__unicode__': str_override})
new_ex_type.__module__ = '%s%s' % (module, _REMOTE_POSTFIX)
try:
# NOTE(ameade): Dynamically create a new exception type and swap it in
# as the new type for the exception. This only works on user defined
# Exceptions and not core Python exceptions. This is important because
# we cannot necessarily change an exception message so we must override
# the __str__ method.
failure.__class__ = new_ex_type
except TypeError:
# NOTE(ameade): If a core exception then just add the traceback to the
# first exception argument.
failure.args = (message,) + failure.args[1:]
return failure
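# A minimal round-trip sketch (illustrative only, not part of the
# original module): serialize a failure on the server side and rebuild
# it on the client side. Builtin exceptions are always allowed by
# deserialize_remote_exception().
def _example_remote_exception_round_trip():
    try:
        raise ValueError('boom')
    except ValueError:
        data = serialize_remote_exception(sys.exc_info())
    exc = deserialize_remote_exception(data, allowed_remote_exmods=[])
    # 'exc' is a ValueError whose message now embeds the remote traceback
    return exc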
class CommonRpcContext(object):
def __init__(self, **kwargs):
self.values = kwargs
def __getattr__(self, key):
try:
return self.values[key]
except KeyError:
raise AttributeError(key)
def to_dict(self):
return copy.deepcopy(self.values)
@classmethod
def from_dict(cls, values):
return cls(**values)
def deepcopy(self):
return self.from_dict(self.to_dict())
def update_store(self):
# local.store.context = self
pass
class ClientException(Exception):
"""Encapsulates actual exception expected to be hit by a RPC proxy object.
Merely instantiating it records the current exception information, which
will be passed back to the RPC client without exceptional logging.
"""
def __init__(self):
self._exc_info = sys.exc_info()
def serialize_msg(raw_msg):
# NOTE(russellb) See the docstring for _RPC_ENVELOPE_VERSION for more
# information about this format.
msg = {_VERSION_KEY: _RPC_ENVELOPE_VERSION,
_MESSAGE_KEY: jsonutils.dumps(raw_msg)}
return msg
def deserialize_msg(msg):
# NOTE(russellb): Hang on to your hats, this road is about to
# get a little bumpy.
#
# Robustness Principle:
# "Be strict in what you send, liberal in what you accept."
#
# At this point we have to do a bit of guessing about what it
# is we just received. Here is the set of possibilities:
#
# 1) We received a dict. This could be 2 things:
#
# a) Inspect it to see if it looks like a standard message envelope.
# If so, great!
#
# b) If it doesn't look like a standard message envelope, it could either
# be a notification, or a message from before we added a message
# envelope (referred to as version 1.0).
# Just return the message as-is.
#
# 2) It's any other non-dict type. Just return it and hope for the best.
# This case covers return values from rpc.call() from before message
# envelopes were used. (messages to call a method were always a dict)
if not isinstance(msg, dict):
# See #2 above.
return msg
base_envelope_keys = (_VERSION_KEY, _MESSAGE_KEY)
if not all(map(lambda key: key in msg, base_envelope_keys)):
# See #1.b above.
return msg
# At this point we think we have the message envelope
# format we were expecting. (#1.a above)
if not utils.version_is_compatible(_RPC_ENVELOPE_VERSION,
msg[_VERSION_KEY]):
raise UnsupportedRpcEnvelopeVersion(version=msg[_VERSION_KEY])
raw_msg = jsonutils.loads(msg[_MESSAGE_KEY])
return raw_msg
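# A minimal round-trip sketch (illustrative only, not part of the
# original module) for the 2.0 envelope documented above.
def _example_envelope_round_trip():
    env = serialize_msg({'method': 'ping', 'args': {}})
    assert env[_VERSION_KEY] == _RPC_ENVELOPE_VERSION
    # the payload travels as a JSON string under 'oslo.message'
    return deserialize_msg(env)  # -> {'method': 'ping', 'args': {}}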
class DecayingTimer(object):
def __init__(self, duration=None):
self._watch = timeutils.StopWatch(duration=duration)
def start(self):
self._watch.start()
def check_return(self, timeout_callback=None, *args, **kwargs):
maximum = kwargs.pop('maximum', None)
left = self._watch.leftover(return_none=True)
if left is None:
return maximum
if left <= 0 and timeout_callback is not None:
timeout_callback(*args, **kwargs)
return left if maximum is None else min(left, maximum)
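# A minimal usage sketch (illustrative only; 'poll' is a hypothetical
# callable): cap each per-iteration timeout by the time left on an
# overall deadline, raising via a callback once the deadline expires.
def _example_decaying_timer(poll, overall_timeout=30.0):
    def _raise_timeout():
        raise Timeout()

    timer = DecayingTimer(duration=overall_timeout)
    timer.start()
    while True:
        msg = poll(timeout=timer.check_return(_raise_timeout, maximum=1.0))
        if msg is not None:
            return msg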
# NOTE(sileht): Even if rabbit has only one Connection class,
# a connection can be used for two purposes:
# * wait for and receive amqp messages (only reads on the socket)
# * send messages to the broker (only writes on the socket)
# The code inside a connection class is not concurrency safe.
# Using one Connection instance for both purposes will result in
# eventlet complaining about multiple greenthreads reading/writing the
# same fd concurrently, because 'send' and 'listen' run in different
# greenthreads.
# So, a connection cannot be shared between threads/greenthreads, and
# these two variables define the purpose of a connection so that
# drivers can add special handling if needed (like heartbeat).
# amqp drivers create 3 kinds of connections:
# * driver.listen*(): each call creates a new 'PURPOSE_LISTEN' connection
# * driver.send*(): a pool of 'PURPOSE_SEND' connections is used
# * the driver internally has another 'PURPOSE_LISTEN' connection
#   dedicated to waiting for replies to rpc calls
PURPOSE_LISTEN = 'listen'
PURPOSE_SEND = 'send'
class ConnectionContext(Connection):
"""The class that is actually returned to the create_connection() caller.
This is essentially a wrapper around Connection that supports 'with'.
It can also return a new Connection, or one from a pool.
It also catches when an instance of this class is about to be deleted,
so that we can return Connections to the pool on exceptions and so forth
without making the caller responsible for catching them. If possible,
it makes sure to return the connection to the pool.
"""
def __init__(self, connection_pool, purpose):
"""Create a new connection, or get one from the pool."""
self.connection = None
self.connection_pool = connection_pool
pooled = purpose == PURPOSE_SEND
if pooled:
self.connection = connection_pool.get()
else:
# a non-pooled connection is requested, so create a new connection
self.connection = connection_pool.create(purpose)
self.pooled = pooled
self.connection.pooled = pooled
def __enter__(self):
"""When with ConnectionContext() is used, return self."""
return self
def _done(self):
"""If the connection came from a pool, clean it up and put it back.
If it did not come from a pool, close it.
"""
if self.connection:
if self.pooled:
# Reset the connection so it's ready for the next caller
# to grab from the pool
try:
self.connection.reset()
except Exception:
LOG.exception(_LE("Fail to reset the connection, drop it"))
try:
self.connection.close()
except Exception:
pass
self.connection = self.connection_pool.create()
finally:
self.connection_pool.put(self.connection)
else:
try:
self.connection.close()
except Exception:
pass
self.connection = None
def __exit__(self, exc_type, exc_value, tb):
"""End of 'with' statement. We're done here."""
self._done()
def __del__(self):
"""Caller is done with this connection. Make sure we cleaned up."""
self._done()
def close(self):
"""Caller is done with this connection."""
self._done()
def __getattr__(self, key):
"""Proxy all other calls to the Connection instance."""
if self.connection:
return getattr(self.connection, key)
else:
raise InvalidRPCConnectionReuse()
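# A minimal usage sketch (illustrative only; 'pool' stands for a
# driver's connection pool and notify_send() for any driver method):
# attribute access is proxied to the underlying Connection, and leaving
# the 'with' block returns the connection to the pool.
#
#     with ConnectionContext(pool, PURPOSE_SEND) as conn:
#         conn.notify_send(exchange, topic, msg)  # proxied method call
#     # on exit the pooled connection is reset and put back in the pool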
class ConfigOptsProxy(collections.Mapping):
"""Proxy for oslo_config.cfg.ConfigOpts.
Values from the query part of the transport url (if they are both present
and valid) override corresponding values from the configuration.
"""
def __init__(self, conf, url):
self._conf = conf
self._url = url
def __getattr__(self, name):
value = getattr(self._conf, name)
if isinstance(value, self._conf.GroupAttr):
return self.GroupAttrProxy(self._conf, name, value, self._url)
return value
def __getitem__(self, name):
return self.__getattr__(name)
def __contains__(self, name):
return name in self._conf
def __iter__(self):
return iter(self._conf)
def __len__(self):
return len(self._conf)
class GroupAttrProxy(collections.Mapping):
"""Internal helper proxy for oslo_config.cfg.ConfigOpts.GroupAttr."""
_VOID_MARKER = object()
def __init__(self, conf, group_name, group, url):
self._conf = conf
self._group_name = group_name
self._group = group
self._url = url
def __getattr__(self, opt_name):
# Make sure that the group has this specific option
opt_value_conf = getattr(self._group, opt_name)
# If the option is also present in the url and has a valid
# (i.e. convertible) value type, then try to override it
opt_value_url = self._url.query.get(opt_name, self._VOID_MARKER)
if opt_value_url is self._VOID_MARKER:
return opt_value_conf
opt_info = self._conf._get_opt_info(opt_name, self._group_name)
return opt_info['opt'].type(opt_value_url)
def __getitem__(self, opt_name):
return self.__getattr__(opt_name)
def __contains__(self, opt_name):
return opt_name in self._group
def __iter__(self):
return iter(self._group)
def __len__(self):
return len(self._group)
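# A minimal override sketch (illustrative only; the group and option
# names are hypothetical): a query parameter in the transport URL
# shadows the configured value for that option.
#
#     url = oslo_messaging.TransportURL.parse(
#         conf, 'rabbit://host:5672/?rpc_conn_pool_size=10')
#     conf_proxy = ConfigOptsProxy(conf, url)
#     # conf_proxy.oslo_messaging_rabbit.rpc_conn_pool_size -> 10,
#     # regardless of what the configuration file says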

View File

@ -1,299 +0,0 @@
# Copyright 2014, Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Driver for the 'amqp' transport.
This module provides a transport driver that speaks version 1.0 of the AMQP
messaging protocol. The driver sends messages and creates subscriptions via
'tasks' that are performed on its behalf via the controller module.
"""
import collections
import logging
import os
import threading
import time
import uuid
from oslo_serialization import jsonutils
from oslo_utils import importutils
from oslo_utils import timeutils
from oslo_messaging._drivers import base
from oslo_messaging._drivers import common
from oslo_messaging._i18n import _LI, _LW
from oslo_messaging import target as messaging_target
proton = importutils.try_import('proton')
controller = importutils.try_import(
'oslo_messaging._drivers.amqp1_driver.controller'
)
drivertasks = importutils.try_import(
'oslo_messaging._drivers.amqp1_driver.drivertasks'
)
LOG = logging.getLogger(__name__)
def marshal_response(reply=None, failure=None):
# TODO(grs): do replies have a context?
# NOTE(flaper87): Set inferred to True since rabbitmq-amqp-1.0 doesn't
# have support for vbin8.
msg = proton.Message(inferred=True)
if failure:
failure = common.serialize_remote_exception(failure)
data = {"failure": failure}
else:
data = {"response": reply}
msg.body = jsonutils.dumps(data)
return msg
def unmarshal_response(message, allowed):
# TODO(kgiusti) This may fail to unpack and raise an exception. Need to
# communicate this to the caller!
data = jsonutils.loads(message.body)
failure = data.get('failure')
if failure is not None:
raise common.deserialize_remote_exception(failure, allowed)
return data.get("response")
def marshal_request(request, context, envelope):
# NOTE(flaper87): Set inferred to True since rabbitmq-amqp-1.0 doesn't
# have support for vbin8.
msg = proton.Message(inferred=True)
if envelope:
request = common.serialize_msg(request)
data = {
"request": request,
"context": context
}
msg.body = jsonutils.dumps(data)
return msg
def unmarshal_request(message):
data = jsonutils.loads(message.body)
msg = common.deserialize_msg(data.get("request"))
return (msg, data.get("context"))
class ProtonIncomingMessage(base.RpcIncomingMessage):
def __init__(self, listener, ctxt, request, message):
super(ProtonIncomingMessage, self).__init__(ctxt, request)
self.listener = listener
self._reply_to = message.reply_to
self._correlation_id = message.id
def reply(self, reply=None, failure=None):
"""Schedule a ReplyTask to send the reply."""
if self._reply_to:
response = marshal_response(reply=reply, failure=failure)
response.correlation_id = self._correlation_id
LOG.debug("Replying to %s", self._correlation_id)
task = drivertasks.ReplyTask(self._reply_to, response)
self.listener.driver._ctrl.add_task(task)
else:
LOG.debug("Ignoring reply as no reply address available")
def acknowledge(self):
pass
def requeue(self):
pass
class Queue(object):
def __init__(self):
self._queue = collections.deque()
self._lock = threading.Lock()
self._pop_wake_condition = threading.Condition(self._lock)
self._started = True
def put(self, item):
with self._lock:
self._queue.appendleft(item)
self._pop_wake_condition.notify()
def pop(self, timeout):
with timeutils.StopWatch(timeout) as stop_watcher:
with self._lock:
while len(self._queue) == 0:
if stop_watcher.expired() or not self._started:
return None
self._pop_wake_condition.wait(
stop_watcher.leftover(return_none=True)
)
return self._queue.pop()
def stop(self):
with self._lock:
self._started = False
self._pop_wake_condition.notify_all()
class ProtonListener(base.PollStyleListener):
def __init__(self, driver):
super(ProtonListener, self).__init__(driver.prefetch_size)
self.driver = driver
self.incoming = Queue()
self.id = uuid.uuid4().hex
def stop(self):
self.incoming.stop()
@base.batch_poll_helper
def poll(self, timeout=None):
message = self.incoming.pop(timeout)
if message is None:
return None
request, ctxt = unmarshal_request(message)
LOG.debug("Returning incoming message")
return ProtonIncomingMessage(self, ctxt, request, message)
class ProtonDriver(base.BaseDriver):
"""AMQP 1.0 Driver
See :doc:`AMQP1.0` for details.
"""
def __init__(self, conf, url,
default_exchange=None, allowed_remote_exmods=[]):
# TODO(kgiusti) Remove once driver fully stabilizes:
LOG.warning(_LW("Support for the 'amqp' transport is EXPERIMENTAL."))
if proton is None or hasattr(controller, "fake_controller"):
raise NotImplementedError("Proton AMQP C libraries not installed")
super(ProtonDriver, self).__init__(conf, url, default_exchange,
allowed_remote_exmods)
# TODO(grs): handle authentication etc
self._hosts = url.hosts
self._conf = conf
self._default_exchange = default_exchange
# lazy connection setup - don't create the controller until
# after the first messaging request:
self._ctrl = None
self._pid = None
self._lock = threading.Lock()
def _ensure_connect_called(func):
"""Causes a new controller to be created when the messaging service is
first used by the current process. It is safe to push tasks to it
whether connected or not, but those tasks won't be processed until
connection completes.
"""
def wrap(self, *args, **kws):
with self._lock:
# check to see if a fork was done after the Controller and its
# I/O thread was spawned. old_pid will be None the first time
# this is called which will cause the Controller to be created.
old_pid = self._pid
self._pid = os.getpid()
if old_pid != self._pid:
if self._ctrl is not None:
# fork was called after the Controller was created, and
# we are now executing as the child process. Do not
# touch the existing Controller - it is owned by the
# parent. Best we can do here is simply drop it and
# hope we get lucky.
LOG.warning(_LW("Process forked after connection "
"established!"))
self._ctrl = None
# Create a Controller that connects to the messaging
# service:
self._ctrl = controller.Controller(self._hosts,
self._default_exchange,
self._conf)
self._ctrl.connect()
return func(self, *args, **kws)
return wrap
@_ensure_connect_called
def send(self, target, ctxt, message,
wait_for_reply=None, timeout=None, envelope=False,
retry=None):
"""Send a message to the given target."""
# TODO(kgiusti) need to add support for retry
if retry is not None:
raise NotImplementedError('"retry" not implemented by '
'this transport driver')
request = marshal_request(message, ctxt, envelope)
expire = 0
if timeout:
expire = time.time() + timeout # when the caller times out
# amqp uses millisecond time values, timeout is seconds
request.ttl = int(timeout * 1000)
request.expiry_time = int(expire * 1000)
LOG.debug("Send to %s", target)
task = drivertasks.SendTask(target, request, wait_for_reply, expire)
self._ctrl.add_task(task)
# wait for the eventloop to process the command. If the command is
# an RPC call, retrieve the reply message
if wait_for_reply:
reply = task.wait(timeout)
if reply:
# TODO(kgiusti) how to handle failure to un-marshal?
# Must log, and determine best way to communicate this failure
# back up to the caller
reply = unmarshal_response(reply, self._allowed_remote_exmods)
LOG.debug("Send to %s returning", target)
return reply
@_ensure_connect_called
def send_notification(self, target, ctxt, message, version,
retry=None):
"""Send a notification message to the given target."""
# TODO(kgiusti) need to add support for retry
if retry is not None:
raise NotImplementedError('"retry" not implemented by '
'this transport driver')
return self.send(target, ctxt, message, envelope=(version == 2.0))
@_ensure_connect_called
def listen(self, target, batch_size, batch_timeout):
"""Construct a Listener for the given target."""
LOG.debug("Listen to %s", target)
listener = ProtonListener(self)
self._ctrl.add_task(drivertasks.ListenTask(target, listener))
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
@_ensure_connect_called
def listen_for_notifications(self, targets_and_priorities, pool,
batch_size, batch_timeout):
LOG.debug("Listen for notifications %s", targets_and_priorities)
if pool:
raise NotImplementedError('"pool" not implemented by '
'this transport driver')
listener = ProtonListener(self)
for target, priority in targets_and_priorities:
topic = '%s.%s' % (target.topic, priority)
t = messaging_target.Target(topic=topic)
self._ctrl.add_task(drivertasks.ListenTask(t, listener, True))
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def cleanup(self):
"""Release all resources."""
if self._ctrl:
self._ctrl.shutdown()
self._ctrl = None
LOG.info(_LI("AMQP 1.0 messaging driver shutdown"))

View File

@ -1,251 +0,0 @@
# Copyright 2011 OpenStack Foundation
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import json
import threading
import time
from six import moves
import oslo_messaging
from oslo_messaging._drivers import base
class FakeIncomingMessage(base.RpcIncomingMessage):
def __init__(self, ctxt, message, reply_q, requeue):
super(FakeIncomingMessage, self).__init__(ctxt, message)
self.requeue_callback = requeue
self._reply_q = reply_q
def reply(self, reply=None, failure=None):
if self._reply_q:
failure = failure[1] if failure else None
self._reply_q.put((reply, failure))
def requeue(self):
self.requeue_callback()
class FakeListener(base.PollStyleListener):
def __init__(self, exchange_manager, targets, pool=None):
super(FakeListener, self).__init__()
self._exchange_manager = exchange_manager
self._targets = targets
self._pool = pool
self._stopped = threading.Event()
# NOTE(sileht): Ensure that all needed queues exist even if the listener
# has not been polled yet
for target in self._targets:
exchange = self._exchange_manager.get_exchange(target.exchange)
exchange.ensure_queue(target, pool)
@base.batch_poll_helper
def poll(self, timeout=None):
if timeout is not None:
deadline = time.time() + timeout
else:
deadline = None
while not self._stopped.is_set():
for target in self._targets:
exchange = self._exchange_manager.get_exchange(target.exchange)
(ctxt, message, reply_q, requeue) = exchange.poll(target,
self._pool)
if message is not None:
message = FakeIncomingMessage(ctxt, message, reply_q,
requeue)
return message
if deadline is not None:
pause = deadline - time.time()
if pause < 0:
break
pause = min(pause, 0.050)
else:
pause = 0.050
time.sleep(pause)
return None
def stop(self):
self._stopped.set()
class FakeExchange(object):
def __init__(self, name):
self.name = name
self._queues_lock = threading.RLock()
self._topic_queues = {}
self._server_queues = {}
def ensure_queue(self, target, pool):
with self._queues_lock:
if target.server:
self._get_server_queue(target.topic, target.server)
else:
self._get_topic_queue(target.topic, pool)
def _get_topic_queue(self, topic, pool=None):
if pool and (topic, pool) not in self._topic_queues:
# NOTE(sileht): if the pool name is set, we need to
# copy all the already delivered messages from the
# default queue to this queue
self._topic_queues[(topic, pool)] = copy.deepcopy(
self._get_topic_queue(topic))
return self._topic_queues.setdefault((topic, pool), [])
def _get_server_queue(self, topic, server):
return self._server_queues.setdefault((topic, server), [])
def deliver_message(self, topic, ctxt, message,
server=None, fanout=False, reply_q=None):
with self._queues_lock:
if fanout:
queues = [q for t, q in self._server_queues.items()
if t[0] == topic]
elif server is not None:
queues = [self._get_server_queue(topic, server)]
else:
# NOTE(sileht): ensure at least the queue without
# pool name exists
self._get_topic_queue(topic)
queues = [q for t, q in self._topic_queues.items()
if t[0] == topic]
def requeue():
self.deliver_message(topic, ctxt, message, server=server,
fanout=fanout, reply_q=reply_q)
for queue in queues:
queue.append((ctxt, message, reply_q, requeue))
def poll(self, target, pool):
with self._queues_lock:
if target.server:
queue = self._get_server_queue(target.topic, target.server)
else:
queue = self._get_topic_queue(target.topic, pool)
return queue.pop(0) if queue else (None, None, None, None)
class FakeExchangeManager(object):
def __init__(self, default_exchange):
self._default_exchange = default_exchange
self._exchanges_lock = threading.Lock()
self._exchanges = {}
def get_exchange(self, name):
if name is None:
name = self._default_exchange
with self._exchanges_lock:
return self._exchanges.setdefault(name, FakeExchange(name))
class FakeDriver(base.BaseDriver):
"""Fake driver used for testing.
This driver passes messages in memory, and should only be used for
unit tests.
"""
def __init__(self, conf, url, default_exchange=None,
allowed_remote_exmods=None):
super(FakeDriver, self).__init__(conf, url, default_exchange,
allowed_remote_exmods)
self._exchange_manager = FakeExchangeManager(default_exchange)
def require_features(self, requeue=True):
pass
@staticmethod
def _check_serialize(message):
"""Make sure a message intended for rpc can be serialized.
We specifically want to use json, not our own jsonutils because
jsonutils has some extra logic to automatically convert objects to
primitive types so that they can be serialized. We want to catch all
cases where non-primitive types make it into this code and treat it as
an error.
"""
json.dumps(message)
def _send(self, target, ctxt, message, wait_for_reply=None, timeout=None):
self._check_serialize(message)
exchange = self._exchange_manager.get_exchange(target.exchange)
reply_q = None
if wait_for_reply:
reply_q = moves.queue.Queue()
exchange.deliver_message(target.topic, ctxt, message,
server=target.server,
fanout=target.fanout,
reply_q=reply_q)
if wait_for_reply:
try:
reply, failure = reply_q.get(timeout=timeout)
if failure:
raise failure
else:
return reply
except moves.queue.Empty:
raise oslo_messaging.MessagingTimeout(
'No reply on topic %s' % target.topic)
return None
def send(self, target, ctxt, message, wait_for_reply=None, timeout=None,
retry=None):
# NOTE(sileht): retry doesn't need to be implemented, the fake
# transport always works
return self._send(target, ctxt, message, wait_for_reply, timeout)
def send_notification(self, target, ctxt, message, version, retry=None):
# NOTE(sileht): retry doesn't need to be implemented, the fake
# transport always works
self._send(target, ctxt, message)
def listen(self, target, batch_size, batch_timeout):
exchange = target.exchange or self._default_exchange
listener = FakeListener(self._exchange_manager,
[oslo_messaging.Target(
topic=target.topic,
server=target.server,
exchange=exchange),
oslo_messaging.Target(
topic=target.topic,
exchange=exchange)])
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def listen_for_notifications(self, targets_and_priorities, pool,
batch_size, batch_timeout):
targets = [
oslo_messaging.Target(
topic='%s.%s' % (target.topic, priority),
exchange=target.exchange)
for target, priority in targets_and_priorities]
listener = FakeListener(self._exchange_manager, targets, pool)
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def cleanup(self):
pass
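# A minimal test-usage sketch (illustrative only): the fake driver is
# selected with the 'fake' transport scheme and passes messages entirely
# in process memory, e.g.
#
#     transport = oslo_messaging.get_transport(cfg.CONF, url='fake://')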

View File

@ -1,378 +0,0 @@
# Copyright (C) 2015 Cisco Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading
from oslo_messaging._drivers import base
from oslo_messaging._drivers import common as driver_common
from oslo_messaging._drivers import pool as driver_pool
from oslo_messaging._i18n import _LE
from oslo_messaging._i18n import _LW
from oslo_serialization import jsonutils
import kafka
from kafka.common import KafkaError
from oslo_config import cfg
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
PURPOSE_SEND = 'send'
PURPOSE_LISTEN = 'listen'
kafka_opts = [
cfg.StrOpt('kafka_default_host', default='localhost',
deprecated_for_removal=True,
deprecated_reason="Replaced by [DEFAULT]/transport_url",
help='Default Kafka broker Host'),
cfg.PortOpt('kafka_default_port', default=9092,
deprecated_for_removal=True,
deprecated_reason="Replaced by [DEFAULT]/transport_url",
help='Default Kafka broker Port'),
cfg.IntOpt('kafka_max_fetch_bytes', default=1024 * 1024,
help='Max fetch bytes of Kafka consumer'),
cfg.IntOpt('kafka_consumer_timeout', default=1.0,
help='Default timeout(s) for Kafka consumers'),
cfg.IntOpt('pool_size', default=10,
help='Pool Size for Kafka Consumers'),
cfg.IntOpt('conn_pool_min_size', default=2,
help='The pool size limit for connections expiration policy'),
cfg.IntOpt('conn_pool_ttl', default=1200,
help='The time-to-live in sec of idle connections in the pool')
]
CONF = cfg.CONF
def pack_context_with_message(ctxt, msg):
"""Pack context into msg."""
if isinstance(ctxt, dict):
context_d = ctxt
else:
context_d = ctxt.to_dict()
return {'message': msg, 'context': context_d}
def target_to_topic(target, priority=None):
"""Convert target into topic string
:param target: Message destination target
:type target: oslo_messaging.Target
:param priority: Notification priority
:type priority: string
"""
if not priority:
return target.topic
return target.topic + '.' + priority
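# Examples (illustrative only):
#     target_to_topic(Target(topic='notifications'))          -> 'notifications'
#     target_to_topic(Target(topic='notifications'), 'info')  -> 'notifications.info'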
class Connection(object):
def __init__(self, conf, url, purpose):
driver_conf = conf.oslo_messaging_kafka
self.conf = conf
self.kafka_client = None
self.producer = None
self.consumer = None
self.fetch_messages_max_bytes = driver_conf.kafka_max_fetch_bytes
self.consumer_timeout = float(driver_conf.kafka_consumer_timeout)
self.url = url
self._parse_url()
# TODO(Support for manual/auto_commit functionality)
# When auto_commit is False, the consumer can manually notify
# the completion of the subscription.
# Currently we don't support the non-auto-commit option
self.auto_commit = True
self._consume_loop_stopped = False
def _parse_url(self):
driver_conf = self.conf.oslo_messaging_kafka
self.hostaddrs = []
for host in self.url.hosts:
if host.hostname:
self.hostaddrs.append("%s:%s" % (
host.hostname,
host.port or driver_conf.kafka_default_port))
if not self.hostaddrs:
self.hostaddrs.append("%s:%s" % (driver_conf.kafka_default_host,
driver_conf.kafka_default_port))
def notify_send(self, topic, ctxt, msg, retry):
"""Send messages to Kafka broker.
:param topic: String of the topic
:param ctxt: context for the messages
:param msg: messages for publishing
:param retry: the number of retries
"""
message = pack_context_with_message(ctxt, msg)
self._ensure_connection()
self._send_and_retry(message, topic, retry)
def _send_and_retry(self, message, topic, retry):
current_retry = 0
if not isinstance(message, str):
message = jsonutils.dumps(message)
while message is not None:
try:
self._send(message, topic)
message = None
except Exception:
LOG.warning(_LW("Failed to publish a message of topic %s"),
topic)
current_retry += 1
if retry is not None and current_retry >= retry:
LOG.exception(_LE("Failed to retry to send data "
"with max retry times"))
message = None
def _send(self, message, topic):
self.producer.send_messages(topic, message)
def consume(self, timeout=None):
"""Receive up to 'max_fetch_messages' messages.
:param timeout: poll timeout in seconds
"""
duration = (self.consumer_timeout if timeout is None else timeout)
timer = driver_common.DecayingTimer(duration=duration)
timer.start()
def _raise_timeout():
LOG.debug('Timed out waiting for Kafka response')
raise driver_common.Timeout()
poll_timeout = (self.consumer_timeout if timeout is None
else min(timeout, self.consumer_timeout))
while True:
if self._consume_loop_stopped:
return
try:
next_timeout = poll_timeout * 1000.0
# TODO(use configure() method instead)
# Currently KafkaConsumer does not support updating
# only the fetch_max_wait_ms parameter
self.consumer._config['fetch_max_wait_ms'] = next_timeout
messages = list(self.consumer.fetch_messages())
except Exception as e:
LOG.exception(_LE("Failed to consume messages: %s"), e)
messages = None
if not messages:
poll_timeout = timer.check_return(
_raise_timeout, maximum=self.consumer_timeout)
continue
return messages
def stop_consuming(self):
self._consume_loop_stopped = True
def reset(self):
"""Reset a connection so it can be used again."""
if self.consumer:
self.consumer.close()
self.consumer = None
def close(self):
if self.kafka_client:
self.kafka_client.close()
self.kafka_client = None
if self.producer:
self.producer.stop()
self.consumer = None
def commit(self):
"""Commit is used by subscribers belonging to the same group.
After subscribing messages, commit is called to prevent
the other subscribers which belong to the same group
from re-subscribing the same messages.
Currently self.auto_commit option is always True,
so we don't need to call this function.
"""
self.consumer.commit()
def _ensure_connection(self):
if self.kafka_client:
return
try:
self.kafka_client = kafka.KafkaClient(
self.hostaddrs)
self.producer = kafka.SimpleProducer(self.kafka_client)
except KafkaError as e:
LOG.exception(_LE("Kafka Connection is not available: %s"), e)
self.kafka_client = None
def declare_topic_consumer(self, topics, group=None):
self._ensure_connection()
for topic in topics:
self.kafka_client.ensure_topic_exists(topic)
self.consumer = kafka.KafkaConsumer(
*topics, group_id=group,
bootstrap_servers=self.hostaddrs,
fetch_message_max_bytes=self.fetch_messages_max_bytes)
self._consume_loop_stopped = False
class OsloKafkaMessage(base.RpcIncomingMessage):
def __init__(self, ctxt, message):
super(OsloKafkaMessage, self).__init__(ctxt, message)
def requeue(self):
LOG.warning(_LW("requeue is not supported"))
def reply(self, reply=None, failure=None):
LOG.warning(_LW("reply is not supported"))
class KafkaListener(base.PollStyleListener):
def __init__(self, conn):
super(KafkaListener, self).__init__()
self._stopped = threading.Event()
self.conn = conn
self.incoming_queue = []
@base.batch_poll_helper
def poll(self, timeout=None):
while not self._stopped.is_set():
if self.incoming_queue:
return self.incoming_queue.pop(0)
try:
messages = self.conn.consume(timeout=timeout)
for msg in messages:
message = msg.value
LOG.debug('poll got message : %s', message)
message = jsonutils.loads(message)
self.incoming_queue.append(OsloKafkaMessage(
ctxt=message['context'], message=message['message']))
except driver_common.Timeout:
return None
def stop(self):
self._stopped.set()
self.conn.stop_consuming()
def cleanup(self):
self.conn.close()
def commit(self):
# TODO(Support for manual/auto commit functionality)
# It would be better to allow users to commit manually and to support
# the self.auto_commit = False option. For now, this commit function
# is meaningless since the user cannot call it and the
# auto_commit option is always True.
self.conn.commit()
class KafkaDriver(base.BaseDriver):
"""Note: Current implementation of this driver is experimental.
We will have functional and/or integrated testing enabled for this driver.
"""
def __init__(self, conf, url, default_exchange=None,
allowed_remote_exmods=None):
opt_group = cfg.OptGroup(name='oslo_messaging_kafka',
title='Kafka driver options')
conf.register_group(opt_group)
conf.register_opts(kafka_opts, group=opt_group)
super(KafkaDriver, self).__init__(
conf, url, default_exchange, allowed_remote_exmods)
# the pool configuration properties
max_size = self.conf.oslo_messaging_kafka.pool_size
min_size = self.conf.oslo_messaging_kafka.conn_pool_min_size
ttl = self.conf.oslo_messaging_kafka.conn_pool_ttl
self.connection_pool = driver_pool.ConnectionPool(
self.conf, max_size, min_size, ttl,
self._url, Connection)
self.listeners = []
def cleanup(self):
for c in self.listeners:
c.close()
self.listeners = []
def send(self, target, ctxt, message, wait_for_reply=None, timeout=None,
retry=None):
raise NotImplementedError(
'The RPC implementation for Kafka is not implemented')
def send_notification(self, target, ctxt, message, version, retry=None):
"""Send notification to Kafka brokers
:param target: Message destination target
:type target: oslo_messaging.Target
:param ctxt: Message context
:type ctxt: dict
:param message: Message payload to pass
:type message: dict
:param version: Messaging API version (currently not used)
:type version: str
:param retry: an optional default kafka consumer retries configuration
None means to retry forever
0 means no retry
N means N retries
:type retry: int
"""
with self._get_connection(purpose=PURPOSE_SEND) as conn:
conn.notify_send(target_to_topic(target), ctxt, message, retry)
def listen(self, target, batch_size, batch_timeout):
raise NotImplementedError(
'The RPC implementation for Kafka is not implemented')
def listen_for_notifications(self, targets_and_priorities, pool,
batch_size, batch_timeout):
"""Listen to a specified list of targets on Kafka brokers
:param targets_and_priorities: List of pairs (target, priority)
priority is not used for kafka driver
target.exchange_target.topic is used as
a kafka topic
:type targets_and_priorities: list
:param pool: consumer group of Kafka consumers
:type pool: string
"""
conn = self._get_connection(purpose=PURPOSE_LISTEN)
topics = set()
for target, priority in targets_and_priorities:
topics.add(target_to_topic(target, priority))
conn.declare_topic_consumer(topics, pool)
listener = KafkaListener(conn)
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def _get_connection(self, purpose):
return driver_common.ConnectionContext(self.connection_pool, purpose)
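# A minimal usage sketch (illustrative only; the broker address is
# hypothetical): only the notification side is implemented, selected via
# the 'kafka' transport scheme.
#
#     transport = oslo_messaging.get_notification_transport(
#         cfg.CONF, url='kafka://broker-1:9092/')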

View File

@ -1,334 +0,0 @@
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import timeutils
import pika_pool
import retrying
from oslo_messaging._drivers import base
from oslo_messaging._drivers.pika_driver import (pika_connection_factory as
pika_drv_conn_factory)
from oslo_messaging._drivers.pika_driver import pika_commons as pika_drv_cmns
from oslo_messaging._drivers.pika_driver import pika_engine as pika_drv_engine
from oslo_messaging._drivers.pika_driver import pika_exceptions as pika_drv_exc
from oslo_messaging._drivers.pika_driver import pika_listener as pika_drv_lstnr
from oslo_messaging._drivers.pika_driver import pika_message as pika_drv_msg
from oslo_messaging._drivers.pika_driver import pika_poller as pika_drv_poller
from oslo_messaging import exceptions
LOG = logging.getLogger(__name__)
pika_pool_opts = [
cfg.IntOpt('pool_max_size', default=30,
help="Maximum number of connections to keep queued."),
cfg.IntOpt('pool_max_overflow', default=0,
help="Maximum number of connections to create above "
"`pool_max_size`."),
cfg.IntOpt('pool_timeout', default=30,
help="Default number of seconds to wait for a connection to "
"become available"),
cfg.IntOpt('pool_recycle', default=600,
help="Lifetime of a connection (since creation) in seconds "
"or None for no recycling. Expired connections are "
"closed on acquire."),
cfg.IntOpt('pool_stale', default=60,
help="Threshold at which inactive (since release) connections "
"are considered stale in seconds or None for no "
"staleness. Stale connections are closed on acquire.")
]
notification_opts = [
cfg.BoolOpt('notification_persistence', default=False,
help="Persist notification messages."),
cfg.StrOpt('default_notification_exchange',
default="${control_exchange}_notification",
help="Exchange name for sending notifications"),
cfg.IntOpt(
'notification_listener_prefetch_count', default=100,
help="Max number of unacknowledged messages which RabbitMQ can send "
"to the notification listener."
),
cfg.IntOpt(
'default_notification_retry_attempts', default=-1,
help="Reconnecting retry count in case of connectivity problem during "
"sending notification, -1 means infinite retry."
),
cfg.FloatOpt(
'notification_retry_delay', default=0.25,
help="Reconnecting retry delay in case of connectivity problem during "
"sending notification message"
)
]
rpc_opts = [
cfg.IntOpt('rpc_queue_expiration', default=60,
help="Time to live for rpc queues without consumers in "
"seconds."),
cfg.StrOpt('default_rpc_exchange', default="${control_exchange}_rpc",
help="Exchange name for sending RPC messages"),
cfg.StrOpt('rpc_reply_exchange', default="${control_exchange}_rpc_reply",
help="Exchange name for receiving RPC replies"),
cfg.IntOpt(
'rpc_listener_prefetch_count', default=100,
help="Max number of unacknowledged messages which RabbitMQ can send "
"to the rpc listener."
),
cfg.IntOpt(
'rpc_reply_listener_prefetch_count', default=100,
help="Max number of unacknowledged messages which RabbitMQ can send "
"to the rpc reply listener."
),
cfg.IntOpt(
'rpc_reply_retry_attempts', default=-1,
help="Reconnecting retry count in case of connectivity problem during "
"sending reply. -1 means infinite retry during rpc_timeout"
),
cfg.FloatOpt(
'rpc_reply_retry_delay', default=0.25,
help="Reconnecting retry delay in case of connectivity problem during "
"sending reply."
),
cfg.IntOpt(
'default_rpc_retry_attempts', default=-1,
help="Reconnecting retry count in case of connectivity problem during "
"sending RPC message, -1 means infinite retry. If the actual "
"number of retry attempts is not 0, the rpc request could be "
"processed more than one time"
),
cfg.FloatOpt(
'rpc_retry_delay', default=0.25,
help="Reconnecting retry delay in case of connectivity problem during "
"sending RPC message"
)
]
class PikaDriver(base.BaseDriver):
def __init__(self, conf, url, default_exchange=None,
allowed_remote_exmods=None):
opt_group = cfg.OptGroup(name='oslo_messaging_pika',
title='Pika driver options')
conf.register_group(opt_group)
conf.register_opts(pika_drv_conn_factory.pika_opts, group=opt_group)
conf.register_opts(pika_pool_opts, group=opt_group)
conf.register_opts(rpc_opts, group=opt_group)
conf.register_opts(notification_opts, group=opt_group)
self._pika_engine = pika_drv_engine.PikaEngine(
conf, url, default_exchange, allowed_remote_exmods
)
self._reply_listener = pika_drv_lstnr.RpcReplyPikaListener(
self._pika_engine
)
super(PikaDriver, self).__init__(conf, url, default_exchange,
allowed_remote_exmods)
def require_features(self, requeue=False):
pass
def _declare_rpc_exchange(self, exchange, stopwatch):
timeout = stopwatch.leftover(return_none=True)
with (self._pika_engine.connection_without_confirmation_pool
.acquire(timeout=timeout)) as conn:
try:
self._pika_engine.declare_exchange_by_channel(
conn.channel,
self._pika_engine.get_rpc_exchange_name(
exchange
), "direct", False
)
except pika_pool.Timeout as e:
raise exceptions.MessagingTimeout(
"Timeout for the current operation has expired. {}.".format(
str(e)
)
)
def send(self, target, ctxt, message, wait_for_reply=None, timeout=None,
retry=None):
with timeutils.StopWatch(duration=timeout) as stopwatch:
if retry is None:
retry = self._pika_engine.default_rpc_retry_attempts
exchange = self._pika_engine.get_rpc_exchange_name(
target.exchange
)
def on_exception(ex):
if isinstance(ex, pika_drv_exc.ExchangeNotFoundException):
# we want to create the exchange because if we send to an
# exchange which does not exist, we get a ChannelClosed
# exception and need to reconnect
try:
self._declare_rpc_exchange(exchange, stopwatch)
except pika_drv_exc.ConnectionException as e:
LOG.warning("Problem during declaring exchange. %s", e)
return True
elif isinstance(ex, (pika_drv_exc.ConnectionException,
exceptions.MessageDeliveryFailure)):
LOG.warning("Problem during message sending. %s", ex)
return True
else:
return False
retrier = (
None if retry == 0 else
retrying.retry(
stop_max_attempt_number=(None if retry == -1 else retry),
retry_on_exception=on_exception,
wait_fixed=self._pika_engine.rpc_retry_delay * 1000,
)
)
if target.fanout:
return self.cast_all_workers(
exchange, target.topic, ctxt, message, stopwatch, retrier
)
routing_key = self._pika_engine.get_rpc_queue_name(
target.topic, target.server, retrier is None
)
msg = pika_drv_msg.RpcPikaOutgoingMessage(self._pika_engine,
message, ctxt)
try:
reply = msg.send(
exchange=exchange,
routing_key=routing_key,
reply_listener=(
self._reply_listener if wait_for_reply else None
),
stopwatch=stopwatch,
retrier=retrier
)
except pika_drv_exc.ExchangeNotFoundException as ex:
try:
self._declare_rpc_exchange(exchange, stopwatch)
except pika_drv_exc.ConnectionException as e:
LOG.warning("Problem during declaring exchange. %s", e)
raise ex
if reply is not None:
if reply.failure is not None:
raise reply.failure
return reply.result
def cast_all_workers(self, exchange, topic, ctxt, message, stopwatch,
retrier=None):
msg = pika_drv_msg.PikaOutgoingMessage(self._pika_engine, message,
ctxt)
try:
msg.send(
exchange=exchange,
routing_key=self._pika_engine.get_rpc_queue_name(
topic, "all_workers", retrier is None
),
mandatory=False,
stopwatch=stopwatch,
retrier=retrier
)
except pika_drv_exc.ExchangeNotFoundException:
try:
self._declare_rpc_exchange(exchange, stopwatch)
except pika_drv_exc.ConnectionException as e:
LOG.warning("Problem during declaring exchange. %s", e)
def _declare_notification_queue_binding(
self, target, stopwatch=pika_drv_cmns.INFINITE_STOP_WATCH):
if stopwatch.expired():
raise exceptions.MessagingTimeout(
"Timeout for the current operation has expired."
)
try:
timeout = stopwatch.leftover(return_none=True)
with (self._pika_engine.connection_without_confirmation_pool
.acquire)(timeout=timeout) as conn:
self._pika_engine.declare_queue_binding_by_channel(
conn.channel,
exchange=(
target.exchange or
self._pika_engine.default_notification_exchange
),
queue=target.topic,
routing_key=target.topic,
exchange_type='direct',
queue_expiration=None,
durable=self._pika_engine.notification_persistence,
)
except pika_pool.Timeout as e:
raise exceptions.MessagingTimeout(
"Timeout for the current operation has expired. {}.".format(str(e))
)
def send_notification(self, target, ctxt, message, version, retry=None):
if retry is None:
retry = self._pika_engine.default_notification_retry_attempts
def on_exception(ex):
if isinstance(ex, (pika_drv_exc.ExchangeNotFoundException,
pika_drv_exc.RoutingException)):
LOG.warning("Problem during sending notification. %s", ex)
try:
self._declare_notification_queue_binding(target)
except pika_drv_exc.ConnectionException as e:
LOG.warning("Problem during declaring notification queue "
"binding. %s", e)
return True
elif isinstance(ex, (pika_drv_exc.ConnectionException,
pika_drv_exc.MessageRejectedException)):
LOG.warning("Problem during sending notification. %s", ex)
return True
else:
return False
retrier = retrying.retry(
stop_max_attempt_number=(None if retry == -1 else retry),
retry_on_exception=on_exception,
wait_fixed=self._pika_engine.notification_retry_delay * 1000,
)
msg = pika_drv_msg.PikaOutgoingMessage(self._pika_engine, message,
ctxt)
return msg.send(
exchange=(
target.exchange or
self._pika_engine.default_notification_exchange
),
routing_key=target.topic,
confirm=True,
mandatory=True,
persistent=self._pika_engine.notification_persistence,
retrier=retrier
)
def listen(self, target, batch_size, batch_timeout):
return pika_drv_poller.RpcServicePikaPoller(
self._pika_engine, target, batch_size, batch_timeout,
self._pika_engine.rpc_listener_prefetch_count
)
def listen_for_notifications(self, targets_and_priorities, pool,
batch_size, batch_timeout):
return pika_drv_poller.NotificationPikaPoller(
self._pika_engine, targets_and_priorities, batch_size,
batch_timeout,
self._pika_engine.notification_listener_prefetch_count, pool
)
def cleanup(self):
self._reply_listener.cleanup()
self._pika_engine.cleanup()
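# A minimal usage sketch (illustrative only; host and credentials are
# hypothetical): the driver is selected via the 'pika' transport scheme
# and talks to RabbitMQ through the pika client library.
#
#     transport = oslo_messaging.get_transport(
#         cfg.CONF, url='pika://guest:guest@localhost:5672/')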

File diff suppressed because it is too large

View File

@ -1,217 +0,0 @@
# Copyright 2011 Cloudscaling Group, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import threading
from stevedore import driver
from oslo_messaging._drivers import base
from oslo_messaging._drivers import common as rpc_common
from oslo_messaging._drivers.zmq_driver.client import zmq_client
from oslo_messaging._drivers.zmq_driver.server import zmq_server
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_options
from oslo_messaging._i18n import _LE
RPCException = rpc_common.RPCException
LOG = logging.getLogger(__name__)
class LazyDriverItem(object):
def __init__(self, item_cls, *args, **kwargs):
self._lock = threading.Lock()
self.item = None
self.item_class = item_cls
self.args = args
self.kwargs = kwargs
self.process_id = os.getpid()
def get(self):
# NOTE(ozamiatin): Lazy initialization.
# All initialization is moved closer to the point of use.
# The cleaner design would be to initialize in the driver's
# __init__, but 'fork', which is used extensively by services,
# breaks things.
if self.item is not None and os.getpid() == self.process_id:
return self.item
with self._lock:
if self.item is None or os.getpid() != self.process_id:
self.process_id = os.getpid()
self.item = self.item_class(*self.args, **self.kwargs)
return self.item
def cleanup(self):
if self.item:
self.item.cleanup()
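# NOTE: an illustrative sketch of the lazy, fork-aware pattern above
# ('Worker' is a hypothetical class used only for this example):
#
#     item = LazyDriverItem(Worker, conf)
#     worker = item.get()    # instance is created here, on first use
#     # after os.fork() in a child process, get() notices the new pid
#     # and builds a fresh instance instead of reusing the parent's one
#     worker = item.get()
#     item.cleanup()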
class ZmqDriver(base.BaseDriver):
"""ZeroMQ Driver implementation.
Provides implementation of RPC and Notifier APIs by means
of ZeroMQ library.
See :doc:`zmq_driver` for details.
"""
def __init__(self, conf, url, default_exchange=None,
allowed_remote_exmods=None):
"""Construct ZeroMQ driver.
Initialize driver options.
Construct matchmaker - pluggable interface to targets management
Name Service
Construct client and server controllers
:param conf: oslo messaging configuration object
:type conf: oslo_config.CONF
:param url: transport URL
:type url: TransportUrl
:param default_exchange: Not used in zmq implementation
:type default_exchange: None
:param allowed_remote_exmods: remote exception passing options
:type allowed_remote_exmods: list
"""
zmq = zmq_async.import_zmq()
if zmq is None:
raise ImportError(_LE("ZeroMQ is not available!"))
zmq_options.register_opts(conf)
self.conf = conf
self.allowed_remote_exmods = allowed_remote_exmods
self.matchmaker = driver.DriverManager(
'oslo.messaging.zmq.matchmaker',
self.get_matchmaker_backend(url),
).driver(self.conf, url=url)
client_cls = zmq_client.ZmqClientProxy
if conf.oslo_messaging_zmq.use_pub_sub and not \
conf.oslo_messaging_zmq.use_router_proxy:
client_cls = zmq_client.ZmqClientMixDirectPubSub
elif not conf.oslo_messaging_zmq.use_pub_sub and not \
conf.oslo_messaging_zmq.use_router_proxy:
client_cls = zmq_client.ZmqClientDirect
self.client = LazyDriverItem(
client_cls, self.conf, self.matchmaker,
self.allowed_remote_exmods)
self.notifier = LazyDriverItem(
client_cls, self.conf, self.matchmaker,
self.allowed_remote_exmods)
super(ZmqDriver, self).__init__(conf, url, default_exchange,
allowed_remote_exmods)
def get_matchmaker_backend(self, url):
zmq_transport, p, matchmaker_backend = url.transport.partition('+')
assert zmq_transport == 'zmq', "Needs to be zmq for this transport!"
if not matchmaker_backend:
return self.conf.oslo_messaging_zmq.rpc_zmq_matchmaker
elif matchmaker_backend not in zmq_options.MATCHMAKER_BACKENDS:
raise rpc_common.RPCException(
_LE("Incorrect matchmaker backend name %(backend_name)s!"
"Available names are: %(available_names)s") %
{"backend_name": matchmaker_backend,
"available_names": zmq_options.MATCHMAKER_BACKENDS})
return matchmaker_backend
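# NOTE: for example, a transport URL of the form 'zmq+redis://...'
# selects the 'redis' matchmaker backend here, while a plain
# 'zmq://...' URL falls back to the rpc_zmq_matchmaker config option.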
def send(self, target, ctxt, message, wait_for_reply=None, timeout=None,
retry=None):
"""Send RPC message to server
:param target: Message destination target
:type target: oslo_messaging.Target
:param ctxt: Message context
:type ctxt: dict
:param message: Message payload to pass
:type message: dict
:param wait_for_reply: Waiting for reply flag
:type wait_for_reply: bool
:param timeout: Reply waiting timeout in seconds
:type timeout: int
:param retry: an optional default connection retries configuration
None or -1 means to retry forever
0 means no retry
N means N retries
:type retry: int
"""
client = self.client.get()
if wait_for_reply:
return client.send_call(target, ctxt, message, timeout, retry)
elif target.fanout:
client.send_fanout(target, ctxt, message, retry)
else:
client.send_cast(target, ctxt, message, retry)
def send_notification(self, target, ctxt, message, version, retry=None):
"""Send notification to server
:param target: Message destination target
:type target: oslo_messaging.Target
:param ctxt: Message context
:type ctxt: dict
:param message: Message payload to pass
:type message: dict
:param version: Messaging API version
:type version: str
:param retry: an optional default connection retries configuration
None or -1 means to retry forever
0 means no retry
N means N retries
:type retry: int
"""
client = self.notifier.get()
client.send_notify(target, ctxt, message, version, retry)
def listen(self, target, batch_size, batch_timeout):
"""Listen to a specified target on a server side
:param target: Message destination target
:type target: oslo_messaging.Target
"""
listener = zmq_server.ZmqServer(self, self.conf, self.matchmaker,
target)
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def listen_for_notifications(self, targets_and_priorities, pool,
batch_size, batch_timeout):
"""Listen to a specified list of targets on a server side
:param targets_and_priorities: List of pairs (target, priority)
:type targets_and_priorities: list
:param pool: Not used for zmq implementation
:type pool: object
"""
listener = zmq_server.ZmqNotificationServer(
self, self.conf, self.matchmaker, targets_and_priorities)
return base.PollStyleListenerAdapter(listener, batch_size,
batch_timeout)
def cleanup(self):
"""Cleanup all driver's connections finally
"""
self.client.cleanup()
self.notifier.cleanup()


@@ -1,33 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import select
import socket
from oslo_utils import timeutils
from pika import exceptions as pika_exceptions
import six
PIKA_CONNECTIVITY_ERRORS = (
pika_exceptions.AMQPConnectionError,
pika_exceptions.ConnectionClosed,
pika_exceptions.ChannelClosed,
socket.timeout,
select.error
)
EXCEPTIONS_MODULE = 'exceptions' if six.PY2 else 'builtins'
INFINITE_STOP_WATCH = timeutils.StopWatch(duration=None).start()
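# NOTE: a StopWatch created with duration=None never expires, so
# INFINITE_STOP_WATCH expresses "no timeout" wherever a stopwatch is
# required; for example (illustrative):
#
#     assert not INFINITE_STOP_WATCH.expired()
#     assert INFINITE_STOP_WATCH.leftover(return_none=True) is None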


@@ -1,542 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import logging
import os
import threading
import futurist
from pika.adapters import select_connection
from pika import exceptions as pika_exceptions
from pika import spec as pika_spec
from oslo_utils import eventletutils
current_thread = eventletutils.fetch_current_thread_functor()
LOG = logging.getLogger(__name__)
class ThreadSafePikaConnection(object):
def __init__(self, parameters=None,
_impl_class=select_connection.SelectConnection):
self.params = parameters
self._connection_lock = threading.Lock()
self._evt_closed = threading.Event()
self._task_queue = collections.deque()
self._pending_connection_futures = set()
create_connection_future = self._register_pending_future()
def on_open_error(conn, err):
create_connection_future.set_exception(
pika_exceptions.AMQPConnectionError(err)
)
self._impl = _impl_class(
parameters=parameters,
on_open_callback=create_connection_future.set_result,
on_open_error_callback=on_open_error,
on_close_callback=self._on_connection_close,
stop_ioloop_on_close=False,
)
self._interrupt_pipein, self._interrupt_pipeout = os.pipe()
self._impl.ioloop.add_handler(self._interrupt_pipein,
self._impl.ioloop.read_interrupt,
select_connection.READ)
self._thread = threading.Thread(target=self._process_io)
self._thread.daemon = True
self._thread_id = None
self._thread.start()
create_connection_future.result()
def _check_called_not_from_event_loop(self):
if current_thread() == self._thread_id:
raise RuntimeError("This call is not allowed from ioloop thread")
def _execute_task(self, func, *args, **kwargs):
if current_thread() == self._thread_id:
return func(*args, **kwargs)
future = futurist.Future()
self._task_queue.append((func, args, kwargs, future))
if self._evt_closed.is_set():
self._notify_all_futures_connection_close()
elif self._interrupt_pipeout is not None:
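# writing a single byte to the interrupt pipe wakes up the
# ioloop thread (see _process_io) so it picks the queued task
# up immediately instead of waiting for the next poll timeout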
os.write(self._interrupt_pipeout, b'X')
return future.result()
def _register_pending_future(self):
future = futurist.Future()
self._pending_connection_futures.add(future)
def on_done_callback(fut):
try:
self._pending_connection_futures.remove(fut)
except KeyError:
pass
future.add_done_callback(on_done_callback)
if self._evt_closed.is_set():
self._notify_all_futures_connection_close()
return future
def _notify_all_futures_connection_close(self):
while self._task_queue:
try:
method_res_future = self._task_queue.pop()[3]
except IndexError:  # deque.pop() raises IndexError when empty
break
else:
method_res_future.set_exception(
pika_exceptions.ConnectionClosed()
)
while self._pending_connection_futures:
try:
pending_connection_future = (
self._pending_connection_futures.pop()
)
except KeyError:
break
else:
pending_connection_future.set_exception(
pika_exceptions.ConnectionClosed()
)
def _on_connection_close(self, conn, reply_code, reply_text):
self._evt_closed.set()
self._notify_all_futures_connection_close()
if self._interrupt_pipeout:
os.close(self._interrupt_pipeout)
os.close(self._interrupt_pipein)
def add_on_close_callback(self, callback):
return self._execute_task(self._impl.add_on_close_callback, callback)
def _do_process_io(self):
while self._task_queue:
func, args, kwargs, future = self._task_queue.pop()
try:
res = func(*args, **kwargs)
except BaseException as e:
LOG.exception(e)
future.set_exception(e)
else:
future.set_result(res)
self._impl.ioloop.poll()
self._impl.ioloop.process_timeouts()
def _process_io(self):
self._thread_id = current_thread()
while not self._evt_closed.is_set():
try:
self._do_process_io()
except BaseException:
LOG.exception("Error during processing connection's IO")
def close(self, *args, **kwargs):
self._check_called_not_from_event_loop()
res = self._execute_task(self._impl.close, *args, **kwargs)
self._evt_closed.wait()
self._thread.join()
return res
def channel(self, channel_number=None):
self._check_called_not_from_event_loop()
channel_opened_future = self._register_pending_future()
impl_channel = self._execute_task(
self._impl.channel,
on_open_callback=channel_opened_future.set_result,
channel_number=channel_number
)
# Create our proxy channel
channel = ThreadSafePikaChannel(impl_channel, self)
# Link implementation channel with our proxy channel
impl_channel._set_cookie(channel)
channel_opened_future.result()
return channel
def add_timeout(self, timeout, callback):
return self._execute_task(self._impl.add_timeout, timeout, callback)
def remove_timeout(self, timeout_id):
return self._execute_task(self._impl.remove_timeout, timeout_id)
@property
def is_closed(self):
return self._impl.is_closed
@property
def is_closing(self):
return self._impl.is_closing
@property
def is_open(self):
return self._impl.is_open
class ThreadSafePikaChannel(object): # pylint: disable=R0904,R0902
def __init__(self, channel_impl, connection):
self._impl = channel_impl
self._connection = connection
self._delivery_confirmation = False
self._message_returned = False
self._current_future = None
self._evt_closed = threading.Event()
self.add_on_close_callback(self._on_channel_close)
def _execute_task(self, func, *args, **kwargs):
return self._connection._execute_task(func, *args, **kwargs)
def _on_channel_close(self, channel, reply_code, reply_text):
self._evt_closed.set()
if self._current_future:
self._current_future.set_exception(
pika_exceptions.ChannelClosed(reply_code, reply_text))
def _on_message_confirmation(self, frame):
self._current_future.set_result(frame)
def add_on_close_callback(self, callback):
self._execute_task(self._impl.add_on_close_callback, callback)
def add_on_cancel_callback(self, callback):
self._execute_task(self._impl.add_on_cancel_callback, callback)
def __int__(self):
return self.channel_number
@property
def channel_number(self):
return self._impl.channel_number
@property
def is_closed(self):
return self._impl.is_closed
@property
def is_closing(self):
return self._impl.is_closing
@property
def is_open(self):
return self._impl.is_open
def close(self, reply_code=0, reply_text="Normal Shutdown"):
self._impl.close(reply_code=reply_code, reply_text=reply_text)
self._evt_closed.wait()
def _check_called_not_from_event_loop(self):
self._connection._check_called_not_from_event_loop()
def flow(self, active):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(
self._impl.flow, callback=self._current_future.set_result,
active=active
)
return self._current_future.result()
def basic_consume(self, # pylint: disable=R0913
consumer_callback,
queue,
no_ack=False,
exclusive=False,
consumer_tag=None,
arguments=None):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(
self._impl.add_callback, self._current_future.set_result,
replies=[pika_spec.Basic.ConsumeOk], one_shot=True
)
tag = self._execute_task(
self._impl.basic_consume,
consumer_callback=consumer_callback,
queue=queue,
no_ack=no_ack,
exclusive=exclusive,
consumer_tag=consumer_tag,
arguments=arguments
)
self._current_future.result()
return tag
def basic_cancel(self, consumer_tag):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(
self._impl.basic_cancel,
callback=self._current_future.set_result,
consumer_tag=consumer_tag,
nowait=False)
self._current_future.result()
def basic_ack(self, delivery_tag=0, multiple=False):
return self._execute_task(
self._impl.basic_ack, delivery_tag=delivery_tag, multiple=multiple)
def basic_nack(self, delivery_tag=None, multiple=False, requeue=True):
return self._execute_task(
self._impl.basic_nack, delivery_tag=delivery_tag,
multiple=multiple, requeue=requeue
)
def publish(self, exchange, routing_key, body, # pylint: disable=R0913
properties=None, mandatory=False, immediate=False):
if self._delivery_confirmation:
self._check_called_not_from_event_loop()
# In publisher-acknowledgments mode
self._message_returned = False
self._current_future = futurist.Future()
self._execute_task(self._impl.basic_publish,
exchange=exchange,
routing_key=routing_key,
body=body,
properties=properties,
mandatory=mandatory,
immediate=immediate)
conf_method = self._current_future.result().method
if isinstance(conf_method, pika_spec.Basic.Nack):
raise pika_exceptions.NackError((None,))
else:
assert isinstance(conf_method, pika_spec.Basic.Ack), (
conf_method)
if self._message_returned:
raise pika_exceptions.UnroutableError((None,))
else:
# In non-publisher-acknowledgments mode
self._execute_task(self._impl.basic_publish,
exchange=exchange,
routing_key=routing_key,
body=body,
properties=properties,
mandatory=mandatory,
immediate=immediate)
def basic_qos(self, prefetch_size=0, prefetch_count=0, all_channels=False):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.basic_qos,
callback=self._current_future.set_result,
prefetch_size=prefetch_size,
prefetch_count=prefetch_count,
all_channels=all_channels)
self._current_future.result()
def basic_recover(self, requeue=False):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(
self._impl.basic_recover,
callback=lambda: self._current_future.set_result(None),
requeue=requeue
)
self._current_future.result()
def basic_reject(self, delivery_tag=None, requeue=True):
self._execute_task(self._impl.basic_reject,
delivery_tag=delivery_tag,
requeue=requeue)
def _on_message_returned(self, *args, **kwargs):
self._message_returned = True
def confirm_delivery(self):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.add_callback,
callback=self._current_future.set_result,
replies=[pika_spec.Confirm.SelectOk],
one_shot=True)
self._execute_task(self._impl.confirm_delivery,
callback=self._on_message_confirmation,
nowait=False)
self._current_future.result()
self._delivery_confirmation = True
self._execute_task(self._impl.add_on_return_callback,
self._on_message_returned)
def exchange_declare(self, exchange=None, # pylint: disable=R0913
exchange_type='direct', passive=False, durable=False,
auto_delete=False, internal=False,
arguments=None, **kwargs):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.exchange_declare,
callback=self._current_future.set_result,
exchange=exchange,
exchange_type=exchange_type,
passive=passive,
durable=durable,
auto_delete=auto_delete,
internal=internal,
nowait=False,
arguments=arguments,
type=kwargs["type"] if kwargs else None)
return self._current_future.result()
def exchange_delete(self, exchange=None, if_unused=False):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.exchange_delete,
callback=self._current_future.set_result,
exchange=exchange,
if_unused=if_unused,
nowait=False)
return self._current_future.result()
def exchange_bind(self, destination=None, source=None, routing_key='',
arguments=None):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.exchange_bind,
callback=self._current_future.set_result,
destination=destination,
source=source,
routing_key=routing_key,
nowait=False,
arguments=arguments)
return self._current_future.result()
def exchange_unbind(self, destination=None, source=None, routing_key='',
arguments=None):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.exchange_unbind,
callback=self._current_future.set_result,
destination=destination,
source=source,
routing_key=routing_key,
nowait=False,
arguments=arguments)
return self._current_future.result()
def queue_declare(self, queue='', passive=False, durable=False,
exclusive=False, auto_delete=False,
arguments=None):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.queue_declare,
callback=self._current_future.set_result,
queue=queue,
passive=passive,
durable=durable,
exclusive=exclusive,
auto_delete=auto_delete,
nowait=False,
arguments=arguments)
return self._current_future.result()
def queue_delete(self, queue='', if_unused=False, if_empty=False):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.queue_delete,
callback=self._current_future.set_result,
queue=queue,
if_unused=if_unused,
if_empty=if_empty,
nowait=False)
return self._current_future.result()
def queue_purge(self, queue=''):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.queue_purge,
callback=self._current_future.set_result,
queue=queue,
nowait=False)
return self._current_future.result()
def queue_bind(self, queue, exchange, routing_key=None,
arguments=None):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.queue_bind,
callback=self._current_future.set_result,
queue=queue,
exchange=exchange,
routing_key=routing_key,
nowait=False,
arguments=arguments)
return self._current_future.result()
def queue_unbind(self, queue='', exchange=None, routing_key=None,
arguments=None):
self._check_called_not_from_event_loop()
self._current_future = futurist.Future()
self._execute_task(self._impl.queue_unbind,
callback=self._current_future.set_result,
queue=queue,
exchange=exchange,
routing_key=routing_key,
arguments=arguments)
return self._current_future.result()
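# NOTE: an illustrative sketch of using the thread-safe wrappers above
# (the connection parameters are placeholders):
#
#     import pika
#
#     params = pika.ConnectionParameters(host='localhost')
#     conn = ThreadSafePikaConnection(parameters=params)
#     channel = conn.channel()
#     channel.confirm_delivery()    # switch to publisher-ack mode
#     channel.publish(exchange='', routing_key='some_queue',
#                     body=b'hello')    # blocks until broker ack/nack
#     channel.close()
#     conn.close()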


@@ -1,307 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import random
import socket
import threading
import time
from oslo_config import cfg
import pika
from pika import credentials as pika_credentials
from oslo_messaging._drivers.pika_driver import pika_commons as pika_drv_cmns
from oslo_messaging._drivers.pika_driver import pika_connection
from oslo_messaging._drivers.pika_driver import pika_exceptions as pika_drv_exc
LOG = logging.getLogger(__name__)
# constant for setting tcp_user_timeout socket option
# (it should be defined in 'select' module of standard library in future)
TCP_USER_TIMEOUT = 18
# constants for creating connection statistics
HOST_CONNECTION_LAST_TRY_TIME = "last_try_time"
HOST_CONNECTION_LAST_SUCCESS_TRY_TIME = "last_success_try_time"
pika_opts = [
cfg.IntOpt('channel_max',
help='Maximum number of channels to allow'),
cfg.IntOpt('frame_max',
help='The maximum byte size for an AMQP frame'),
cfg.IntOpt('heartbeat_interval', default=3,
help="How often to send heartbeats for consumer's connections"),
cfg.BoolOpt('ssl',
help='Enable SSL'),
cfg.DictOpt('ssl_options',
help='Arguments passed to ssl.wrap_socket'),
cfg.FloatOpt('socket_timeout', default=0.25,
help="Set socket timeout in seconds for connection's socket"),
cfg.FloatOpt('tcp_user_timeout', default=0.25,
help="Set TCP_USER_TIMEOUT in seconds for connection's "
"socket"),
cfg.FloatOpt('host_connection_reconnect_delay', default=0.25,
help="Set delay for reconnection to some host which has "
"connection error"),
cfg.StrOpt('connection_factory', default="single",
choices=["new", "single", "read_write"],
help='Connection factory implementation')
]
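# NOTE: these options are registered under the [oslo_messaging_pika]
# section (they are read via conf.oslo_messaging_pika.* below); an
# illustrative configuration file snippet:
#
#     [oslo_messaging_pika]
#     heartbeat_interval = 3
#     socket_timeout = 0.25
#     connection_factory = read_write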
class PikaConnectionFactory(object):
def __init__(self, url, conf):
self._url = url
self._conf = conf
self._connection_lock = threading.RLock()
if not url.hosts:
raise ValueError("You should provide at least one RabbitMQ host")
# initializing connection parameters for configured RabbitMQ hosts
self._common_pika_params = {
'virtual_host': url.virtual_host,
'channel_max': conf.oslo_messaging_pika.channel_max,
'frame_max': conf.oslo_messaging_pika.frame_max,
'ssl': conf.oslo_messaging_pika.ssl,
'ssl_options': conf.oslo_messaging_pika.ssl_options,
'socket_timeout': conf.oslo_messaging_pika.socket_timeout
}
self._host_list = url.hosts
self._heartbeat_interval = conf.oslo_messaging_pika.heartbeat_interval
self._host_connection_reconnect_delay = (
conf.oslo_messaging_pika.host_connection_reconnect_delay
)
self._tcp_user_timeout = conf.oslo_messaging_pika.tcp_user_timeout
self._connection_host_status = {}
self._cur_connection_host_num = random.randint(
0, len(url.hosts) - 1
)
def cleanup(self):
pass
def create_connection(self, for_listening=False):
"""Create and return connection to any available host.
:return: created connection
:raise: ConnectionException if all hosts are not reachable
"""
with self._connection_lock:
host_count = len(self._host_list)
connection_attempts = host_count
while connection_attempts > 0:
self._cur_connection_host_num += 1
self._cur_connection_host_num %= host_count
try:
return self._create_host_connection(
self._cur_connection_host_num, for_listening
)
except pika_drv_cmns.PIKA_CONNECTIVITY_ERRORS as e:
LOG.warning("Can't establish connection to host. %s", e)
except pika_drv_exc.HostConnectionNotAllowedException as e:
LOG.warning("Connection to host is not allowed. %s", e)
connection_attempts -= 1
raise pika_drv_exc.EstablishConnectionException(
"Can not establish connection to any configured RabbitMQ "
"host: " + str(self._host_list)
)
def _set_tcp_user_timeout(self, s):
if not self._tcp_user_timeout:
return
try:
s.setsockopt(
socket.IPPROTO_TCP, TCP_USER_TIMEOUT,
int(self._tcp_user_timeout * 1000)
)
except socket.error:
LOG.warning(
"Whoops, this kernel doesn't seem to support TCP_USER_TIMEOUT."
)
def _create_host_connection(self, host_index, for_listening):
"""Create new connection to host #host_index
:param host_index: Integer, number of host for connection establishing
:param for_listening: Boolean, creates connection for listening
if True
:return: New connection
"""
host = self._host_list[host_index]
cur_time = time.time()
host_connection_status = self._connection_host_status.get(host)
if host_connection_status is None:
host_connection_status = {
HOST_CONNECTION_LAST_SUCCESS_TRY_TIME: 0,
HOST_CONNECTION_LAST_TRY_TIME: 0
}
self._connection_host_status[host] = host_connection_status
last_success_time = host_connection_status[
HOST_CONNECTION_LAST_SUCCESS_TRY_TIME
]
last_time = host_connection_status[
HOST_CONNECTION_LAST_TRY_TIME
]
# raise HostConnectionNotAllowedException if a connection attempt
# to this host failed within the last
# 'host_connection_reconnect_delay' seconds
if (last_time != last_success_time and
cur_time - last_time <
self._host_connection_reconnect_delay):
raise pika_drv_exc.HostConnectionNotAllowedException(
"Connection to host #{} is not allowed now because of "
"previous failure".format(host_index)
)
try:
connection = self._do_create_host_connection(
host, for_listening
)
self._connection_host_status[host][
HOST_CONNECTION_LAST_SUCCESS_TRY_TIME
] = cur_time
return connection
finally:
self._connection_host_status[host][
HOST_CONNECTION_LAST_TRY_TIME
] = cur_time
def _do_create_host_connection(self, host, for_listening):
connection_params = pika.ConnectionParameters(
host=host.hostname,
port=host.port,
credentials=pika_credentials.PlainCredentials(
host.username, host.password
),
heartbeat_interval=(
self._heartbeat_interval if for_listening else None
),
**self._common_pika_params
)
if for_listening:
connection = pika_connection.ThreadSafePikaConnection(
parameters=connection_params
)
else:
connection = pika.BlockingConnection(
parameters=connection_params
)
connection.params = connection_params
self._set_tcp_user_timeout(connection._impl.socket)
return connection
class NotClosableConnection(object):
def __init__(self, connection):
self._connection = connection
def __getattr__(self, item):
return getattr(self._connection, item)
def close(self):
pass
class SinglePikaConnectionFactory(PikaConnectionFactory):
def __init__(self, url, conf):
super(SinglePikaConnectionFactory, self).__init__(url, conf)
self._connection = None
def create_connection(self, for_listening=False):
with self._connection_lock:
if self._connection is None or not self._connection.is_open:
self._connection = (
super(SinglePikaConnectionFactory, self).create_connection(
True
)
)
return NotClosableConnection(self._connection)
def cleanup(self):
with self._connection_lock:
if self._connection is not None and self._connection.is_open:
try:
self._connection.close()
except Exception:
LOG.warning(
"Unexpected exception during connection closing",
exc_info=True
)
self._connection = None
class ReadWritePikaConnectionFactory(PikaConnectionFactory):
def __init__(self, url, conf):
super(ReadWritePikaConnectionFactory, self).__init__(url, conf)
self._read_connection = None
self._write_connection = None
def create_connection(self, for_listening=False):
with self._connection_lock:
if for_listening:
if (self._read_connection is None or
not self._read_connection.is_open):
self._read_connection = super(
ReadWritePikaConnectionFactory, self
).create_connection(True)
return NotClosableConnection(self._read_connection)
else:
if (self._write_connection is None or
not self._write_connection.is_open):
self._write_connection = super(
ReadWritePikaConnectionFactory, self
).create_connection(True)
return NotClosableConnection(self._write_connection)
def cleanup(self):
with self._connection_lock:
if (self._read_connection is not None and
self._read_connection.is_open):
try:
self._read_connection.close()
except Exception:
LOG.warning(
"Unexpected exception during connection closing",
exc_info=True
)
self._read_connection = None
if (self._write_connection is not None and
self._write_connection.is_open):
try:
self._write_connection.close()
except Exception:
LOG.warning(
"Unexpected exception during connection closing",
exc_info=True
)
self._write_connection = None


@@ -1,301 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import threading
import uuid
from oslo_utils import eventletutils
import pika_pool
from stevedore import driver
from oslo_messaging._drivers import common as drv_cmn
from oslo_messaging._drivers.pika_driver import pika_commons as pika_drv_cmns
from oslo_messaging._drivers.pika_driver import pika_exceptions as pika_drv_exc
LOG = logging.getLogger(__name__)
class _PooledConnectionWithConfirmations(pika_pool.Connection):
"""Derived from 'pika_pool.Connection' and extends its logic - adds
'confirm_delivery' call after channel creation to enable delivery
confirmation for channel
"""
@property
def channel(self):
if self.fairy.channel is None:
self.fairy.channel = self.fairy.cxn.channel()
self.fairy.channel.confirm_delivery()
return self.fairy.channel
class PikaEngine(object):
"""Used for shared functionality between other pika driver modules, like
connection factory, connection pools, processing and holding configuration,
etc.
"""
def __init__(self, conf, url, default_exchange=None,
allowed_remote_exmods=None):
conf = drv_cmn.ConfigOptsProxy(conf, url)
self.conf = conf
self.url = url
self._connection_factory_type = (
self.conf.oslo_messaging_pika.connection_factory
)
self._connection_factory = None
self._connection_without_confirmation_pool = None
self._connection_with_confirmation_pool = None
self._pid = None
self._init_lock = threading.Lock()
self.host_connection_reconnect_delay = (
conf.oslo_messaging_pika.host_connection_reconnect_delay
)
# processing rpc options
self.default_rpc_exchange = (
conf.oslo_messaging_pika.default_rpc_exchange
)
self.rpc_reply_exchange = (
conf.oslo_messaging_pika.rpc_reply_exchange
)
self.allowed_remote_exmods = [pika_drv_cmns.EXCEPTIONS_MODULE]
if allowed_remote_exmods:
self.allowed_remote_exmods.extend(allowed_remote_exmods)
self.rpc_listener_prefetch_count = (
conf.oslo_messaging_pika.rpc_listener_prefetch_count
)
self.default_rpc_retry_attempts = (
conf.oslo_messaging_pika.default_rpc_retry_attempts
)
self.rpc_retry_delay = (
conf.oslo_messaging_pika.rpc_retry_delay
)
if self.rpc_retry_delay < 0:
raise ValueError("rpc_retry_delay should be non-negative integer")
self.rpc_reply_listener_prefetch_count = (
conf.oslo_messaging_pika.rpc_listener_prefetch_count
)
self.rpc_reply_retry_attempts = (
conf.oslo_messaging_pika.rpc_reply_retry_attempts
)
self.rpc_reply_retry_delay = (
conf.oslo_messaging_pika.rpc_reply_retry_delay
)
if self.rpc_reply_retry_delay < 0:
raise ValueError("rpc_reply_retry_delay should be non-negative "
"integer")
self.rpc_queue_expiration = (
self.conf.oslo_messaging_pika.rpc_queue_expiration
)
# processing notification options
self.default_notification_exchange = (
conf.oslo_messaging_pika.default_notification_exchange
)
self.notification_persistence = (
conf.oslo_messaging_pika.notification_persistence
)
self.notification_listener_prefetch_count = (
conf.oslo_messaging_pika.notification_listener_prefetch_count
)
self.default_notification_retry_attempts = (
conf.oslo_messaging_pika.default_notification_retry_attempts
)
if self.default_notification_retry_attempts is None:
raise ValueError("default_notification_retry_attempts should be "
"an integer")
self.notification_retry_delay = (
conf.oslo_messaging_pika.notification_retry_delay
)
if (self.notification_retry_delay is None or
self.notification_retry_delay < 0):
raise ValueError("notification_retry_delay should be non-negative "
"integer")
def _init_if_needed(self):
cur_pid = os.getpid()
if self._pid == cur_pid:
return
with self._init_lock:
if self._pid == cur_pid:
return
if self._pid:
LOG.warning("New pid is detected. Old: %s, new: %s. "
"Cleaning up...", self._pid, cur_pid)
# Note(dukhlov): we need to force select poller usage when the
# 'thread' module is monkey patched because the current eventlet
# implementation does not support patching of poll/epoll/kqueue
if eventletutils.is_monkey_patched("thread"):
from pika.adapters import select_connection
select_connection.SELECT_TYPE = "select"
mgr = driver.DriverManager(
'oslo.messaging.pika.connection_factory',
self._connection_factory_type
)
self._connection_factory = mgr.driver(self.url, self.conf)
# initialize two connection pools: the first for connections
# without confirmations, the second with confirmations
self._connection_without_confirmation_pool = pika_pool.QueuedPool(
create=self.create_connection,
max_size=self.conf.oslo_messaging_pika.pool_max_size,
max_overflow=self.conf.oslo_messaging_pika.pool_max_overflow,
timeout=self.conf.oslo_messaging_pika.pool_timeout,
recycle=self.conf.oslo_messaging_pika.pool_recycle,
stale=self.conf.oslo_messaging_pika.pool_stale,
)
self._connection_with_confirmation_pool = pika_pool.QueuedPool(
create=self.create_connection,
max_size=self.conf.oslo_messaging_pika.pool_max_size,
max_overflow=self.conf.oslo_messaging_pika.pool_max_overflow,
timeout=self.conf.oslo_messaging_pika.pool_timeout,
recycle=self.conf.oslo_messaging_pika.pool_recycle,
stale=self.conf.oslo_messaging_pika.pool_stale,
)
self._connection_with_confirmation_pool.Connection = (
_PooledConnectionWithConfirmations
)
self._pid = cur_pid
def create_connection(self, for_listening=False):
self._init_if_needed()
return self._connection_factory.create_connection(for_listening)
@property
def connection_without_confirmation_pool(self):
self._init_if_needed()
return self._connection_without_confirmation_pool
@property
def connection_with_confirmation_pool(self):
self._init_if_needed()
return self._connection_with_confirmation_pool
def cleanup(self):
if self._connection_factory:
self._connection_factory.cleanup()
def declare_exchange_by_channel(self, channel, exchange, exchange_type,
durable):
"""Declare exchange using already created channel, if they don't exist
:param channel: Channel for communication with RabbitMQ
:param exchange: String, RabbitMQ exchange name
:param exchange_type: String ('direct', 'topic' or 'fanout')
exchange type for exchange to be declared
:param durable: Boolean, creates durable exchange if true
"""
try:
channel.exchange_declare(
exchange, exchange_type, auto_delete=True, durable=durable
)
except pika_drv_cmns.PIKA_CONNECTIVITY_ERRORS as e:
raise pika_drv_exc.ConnectionException(
"Connectivity problem detected during declaring exchange: "
"exchange:{}, exchange_type: {}, durable: {}. {}".format(
exchange, exchange_type, durable, str(e)
)
)
def declare_queue_binding_by_channel(self, channel, exchange, queue,
routing_key, exchange_type,
queue_expiration, durable):
"""Declare exchange, queue and bind them using already created
channel, if they don't exist
:param channel: Channel for communication with RabbitMQ
:param exchange: String, RabbitMQ exchange name
:param queue: String, RabbitMQ queue name
:param routing_key: String, RabbitMQ routing key for queue binding
:param exchange_type: String ('direct', 'topic' or 'fanout')
exchange type for exchange to be declared
:param queue_expiration: Integer, time in seconds for which the queue
will remain in RabbitMQ when no consumers are connected
:param durable: Boolean, creates durable exchange and queue if true
"""
try:
channel.exchange_declare(
exchange, exchange_type, auto_delete=True, durable=durable
)
arguments = {}
if queue_expiration > 0:
arguments['x-expires'] = queue_expiration * 1000
channel.queue_declare(queue, durable=durable, arguments=arguments)
channel.queue_bind(queue, exchange, routing_key)
except pika_drv_cmns.PIKA_CONNECTIVITY_ERRORS as e:
raise pika_drv_exc.ConnectionException(
"Connectivity problem detected during declaring queue "
"binding: exchange:{}, queue: {}, routing_key: {}, "
"exchange_type: {}, queue_expiration: {}, "
"durable: {}. {}".format(
exchange, queue, routing_key, exchange_type,
queue_expiration, durable, str(e)
)
)
def get_rpc_exchange_name(self, exchange):
"""Returns RabbitMQ exchange name for given rpc request
:param exchange: String, oslo.messaging target's exchange
:return: String, RabbitMQ exchange name
"""
return exchange or self.default_rpc_exchange
@staticmethod
def get_rpc_queue_name(topic, server, no_ack, worker=False):
"""Returns RabbitMQ queue name for given rpc request
:param topic: String, oslo.messaging target's topic
:param server: String, oslo.messaging target's server
:param no_ack: Boolean, use message delivery with acknowledgements or not
:param worker: Boolean, use queue by single worker only or not
:return: String, RabbitMQ queue name
"""
queue_parts = ["no_ack" if no_ack else "with_ack", topic]
if server is not None:
queue_parts.append(server)
if worker:
queue_parts.append("worker")
queue_parts.append(uuid.uuid4().hex)
queue = '.'.join(queue_parts)
return queue
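# NOTE: examples of the queue names produced above (illustrative):
#
#     get_rpc_queue_name('my_topic', None, no_ack=False)
#         -> 'with_ack.my_topic'
#     get_rpc_queue_name('my_topic', 'host-1', no_ack=True)
#         -> 'no_ack.my_topic.host-1'
#     get_rpc_queue_name('my_topic', 'host-1', no_ack=False, worker=True)
#         -> 'with_ack.my_topic.host-1.worker.<random hex>'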


@@ -1,68 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_messaging import exceptions
class ExchangeNotFoundException(exceptions.MessageDeliveryFailure):
"""Is raised if specified exchange is not found in RabbitMQ."""
pass
class MessageRejectedException(exceptions.MessageDeliveryFailure):
"""Is raised if message which you are trying to send was nacked by RabbitMQ
it may happen if RabbitMQ is not able to process message
"""
pass
class RoutingException(exceptions.MessageDeliveryFailure):
"""Is raised if message can not be delivered to any queue. Usually it means
that any queue is not binded to given exchange with given routing key.
Raised if 'mandatory' flag specified only
"""
pass
class ConnectionException(exceptions.MessagingException):
"""Is raised if some operation can not be performed due to connectivity
problem
"""
pass
class TimeoutConnectionException(ConnectionException):
"""Is raised if socket timeout was expired during network interaction"""
pass
class EstablishConnectionException(ConnectionException):
"""Is raised if we have some problem during establishing connection
procedure
"""
pass
class HostConnectionNotAllowedException(EstablishConnectionException):
"""Is raised in case of try to establish connection to temporary
not allowed host (because of reconnection policy for example)
"""
pass
class UnsupportedDriverVersion(exceptions.MessagingException):
"""Is raised when message is received but was sent by different,
not supported driver version
"""
pass
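# NOTE: the hierarchy above separates delivery failures from
# connectivity failures; an illustrative handling pattern ('send' is a
# hypothetical sending call):
#
#     try:
#         send(exchange, routing_key, message)
#     except ExchangeNotFoundException:
#         pass    # (re)declare the exchange, then retry the send
#     except ConnectionException:
#         pass    # reconnect, possibly to another host, then retry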


@@ -1,123 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading
import uuid
from concurrent import futures
from oslo_log import log as logging
from oslo_messaging._drivers.pika_driver import pika_poller as pika_drv_poller
LOG = logging.getLogger(__name__)
class RpcReplyPikaListener(object):
"""Provide functionality for listening RPC replies. Create and handle
reply poller and coroutine for performing polling job
"""
def __init__(self, pika_engine):
super(RpcReplyPikaListener, self).__init__()
self._pika_engine = pika_engine
# preparing poller for listening replies
self._reply_queue = None
self._reply_poller = None
self._reply_waiting_futures = {}
self._reply_consumer_initialized = False
self._reply_consumer_initialization_lock = threading.Lock()
self._shutdown = False
def get_reply_qname(self):
"""As result return reply queue name, shared for whole process,
but before this check is RPC listener initialized or not and perform
initialization if needed
:return: String, queue name which hould be used for reply sending
"""
if self._reply_consumer_initialized:
return self._reply_queue
with self._reply_consumer_initialization_lock:
if self._reply_consumer_initialized:
return self._reply_queue
# generate reply queue name if needed
if self._reply_queue is None:
self._reply_queue = "reply.{}.{}.{}".format(
self._pika_engine.conf.project,
self._pika_engine.conf.prog, uuid.uuid4().hex
)
# initialize reply poller if needed
if self._reply_poller is None:
self._reply_poller = pika_drv_poller.RpcReplyPikaPoller(
self._pika_engine, self._pika_engine.rpc_reply_exchange,
self._reply_queue, 1, None,
self._pika_engine.rpc_reply_listener_prefetch_count
)
self._reply_poller.start(self._on_incoming)
self._reply_consumer_initialized = True
return self._reply_queue
def _on_incoming(self, incoming):
"""Reply polling job. Poll replies in infinite loop and notify
registered features
"""
for message in incoming:
try:
message.acknowledge()
future = self._reply_waiting_futures.pop(
message.msg_id, None
)
if future is not None:
future.set_result(message)
except Exception:
LOG.exception("Unexpected exception during processing"
"reply message")
def register_reply_waiter(self, msg_id):
"""Register reply waiter. Should be called before message sending to
the server
:param msg_id: String, message_id of expected reply
:return future: Future, container for expected reply to be returned
over
"""
future = futures.Future()
self._reply_waiting_futures[msg_id] = future
return future
def unregister_reply_waiter(self, msg_id):
"""Unregister reply waiter. Should be called if client has not got
reply and doesn't want to continue waiting (if timeout_expired for
example)
:param msg_id:
"""
self._reply_waiting_futures.pop(msg_id, None)
def cleanup(self):
"""Stop replies consuming and cleanup resources"""
self._shutdown = True
if self._reply_poller:
self._reply_poller.stop()
self._reply_poller.cleanup()
self._reply_poller = None
self._reply_queue = None
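# NOTE: an illustrative flow for a caller of this listener (msg_id is
# taken from the outgoing RPC message):
#
#     reply_queue = listener.get_reply_qname()
#     future = listener.register_reply_waiter(msg_id)
#     # ... send the RPC request with reply_to=reply_queue ...
#     try:
#         reply = future.result(timeout=30)
#     finally:
#         listener.unregister_reply_waiter(msg_id)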


@@ -1,613 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import socket
import time
import traceback
import uuid
from concurrent import futures
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import importutils
from oslo_utils import timeutils
from pika import exceptions as pika_exceptions
from pika import spec as pika_spec
import pika_pool
import retrying
import six
import oslo_messaging
from oslo_messaging._drivers import base
from oslo_messaging._drivers.pika_driver import pika_commons as pika_drv_cmns
from oslo_messaging._drivers.pika_driver import pika_exceptions as pika_drv_exc
from oslo_messaging import _utils as utils
from oslo_messaging import exceptions
LOG = logging.getLogger(__name__)
_VERSION_HEADER = "version"
_VERSION = "1.0"
class RemoteExceptionMixin(object):
"""Used for constructing dynamic exception type during deserialization of
remote exception. It defines unified '__init__' method signature and
exception message format
"""
def __init__(self, module, clazz, message, trace):
"""Store serialized data
:param module: String, module name for importing original exception
class of serialized remote exception
:param clazz: String, original class name of serialized remote
exception
:param message: String, original message of serialized remote
exception
:param trace: String, original trace of serialized remote exception
"""
self.module = module
self.clazz = clazz
self.message = message
self.trace = trace
self._str_msgs = message + "\n" + "\n".join(trace)
def __str__(self):
return self._str_msgs
class PikaIncomingMessage(base.IncomingMessage):
"""Driver friendly adapter for received message. Extract message
information from RabbitMQ message and provide access to it
"""
def __init__(self, pika_engine, channel, method, properties, body):
"""Parse RabbitMQ message
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param channel: Channel, RabbitMQ channel which was used for
this message delivery, used for sending ack back.
If None - ack is not required
:param method: Method, RabbitMQ message method
:param properties: Properties, RabbitMQ message properties
:param body: Bytes, RabbitMQ message body
"""
headers = getattr(properties, "headers", {})
version = headers.get(_VERSION_HEADER, None)
if not utils.version_is_compatible(version, _VERSION):
raise pika_drv_exc.UnsupportedDriverVersion(
"Message's version: {} is not compatible with driver version: "
"{}".format(version, _VERSION))
self._pika_engine = pika_engine
self._channel = channel
self._delivery_tag = method.delivery_tag
self._version = version
self._content_type = properties.content_type
self._content_encoding = properties.content_encoding
self.unique_id = properties.message_id
self.expiration_time = (
None if properties.expiration is None else
time.time() + float(properties.expiration) / 1000
)
if self._content_type != "application/json":
raise NotImplementedError(
"Content-type['{}'] is not valid, "
"'application/json' only is supported.".format(
self._content_type
)
)
message_dict = jsonutils.loads(body, encoding=self._content_encoding)
context_dict = {}
for key in list(message_dict.keys()):
key = six.text_type(key)
if key.startswith('_$_'):
value = message_dict.pop(key)
context_dict[key[3:]] = value
super(PikaIncomingMessage, self).__init__(context_dict, message_dict)
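# NOTE: the '_$_' prefix is how the driver multiplexes the request
# context into the same JSON body as the user message; e.g. a body of
# {"_$_user": "admin", "method": "ping"} is split into the context
# {"user": "admin"} and the message {"method": "ping"}. The inverse
# packing is done in PikaOutgoingMessage._prepare_message_to_send().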
def need_ack(self):
return self._channel is not None
def acknowledge(self):
"""Ack the message. Should be called by message processing logic when
it considered as consumed (means that we don't need redelivery of this
message anymore)
"""
if self.need_ack():
self._channel.basic_ack(delivery_tag=self._delivery_tag)
def requeue(self):
"""Rollback the message. Should be called by message processing logic
when it can not process the message right now and should be redelivered
later if it is possible
"""
if self.need_ack():
return self._channel.basic_nack(delivery_tag=self._delivery_tag,
requeue=True)
class RpcPikaIncomingMessage(PikaIncomingMessage, base.RpcIncomingMessage):
"""PikaIncomingMessage implementation for RPC messages. It expects
extra RPC related fields in message body (msg_id and reply_q). Also 'reply'
method added to allow consumer to send RPC reply back to the RPC client
"""
def __init__(self, pika_engine, channel, method, properties, body):
"""Defines default values of msg_id and reply_q fields and just call
super.__init__ method
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param channel: Channel, RabbitMQ channel which was used for
this message delivery, used for sending ack back.
If None - ack is not required
:param method: Method, RabbitMQ message method
:param properties: Properties, RabbitMQ message properties
:param body: Bytes, RabbitMQ message body
"""
super(RpcPikaIncomingMessage, self).__init__(
pika_engine, channel, method, properties, body
)
self.reply_q = properties.reply_to
self.msg_id = properties.correlation_id
def reply(self, reply=None, failure=None):
"""Send back reply to the RPC client
:param reply: Dictionary, reply. In case of exception should be None
:param failure: Tuple, should be a sys.exc_info() tuple.
Should be None if RPC request was successfully processed.
:return RpcReplyPikaIncomingMessage, message with reply
"""
if self.reply_q is None:
return
reply_outgoing_message = RpcReplyPikaOutgoingMessage(
self._pika_engine, self.msg_id, reply=reply, failure_info=failure,
content_type=self._content_type,
content_encoding=self._content_encoding
)
def on_exception(ex):
if isinstance(ex, pika_drv_exc.ConnectionException):
LOG.warning(
"Connectivity related problem during reply sending. %s",
ex
)
return True
else:
return False
retrier = retrying.retry(
stop_max_attempt_number=(
None if self._pika_engine.rpc_reply_retry_attempts == -1
else self._pika_engine.rpc_reply_retry_attempts
),
retry_on_exception=on_exception,
wait_fixed=self._pika_engine.rpc_reply_retry_delay * 1000,
) if self._pika_engine.rpc_reply_retry_attempts else None
try:
timeout = (None if self.expiration_time is None else
max(self.expiration_time - time.time(), 0))
with timeutils.StopWatch(duration=timeout) as stopwatch:
reply_outgoing_message.send(
reply_q=self.reply_q,
stopwatch=stopwatch,
retrier=retrier
)
LOG.debug(
"Message [id:'%s'] replied to '%s'.", self.msg_id, self.reply_q
)
except Exception:
LOG.exception(
"Message [id:'%s'] wasn't replied to : %s", self.msg_id,
self.reply_q
)
class RpcReplyPikaIncomingMessage(PikaIncomingMessage):
"""PikaIncomingMessage implementation for RPC reply messages. It expects
extra RPC reply related fields in message body (result and failure).
"""
def __init__(self, pika_engine, channel, method, properties, body):
"""Defines default values of result and failure fields, call
super.__init__ method and then construct Exception object if failure is
not None
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param channel: Channel, RabbitMQ channel which was used for
this message delivery, used for sending ack back.
If None - ack is not required
:param method: Method, RabbitMQ message method
:param properties: Properties, RabbitMQ message properties
:param body: Bytes, RabbitMQ message body
"""
super(RpcReplyPikaIncomingMessage, self).__init__(
pika_engine, channel, method, properties, body
)
self.msg_id = properties.correlation_id
self.result = self.message.get("s", None)
self.failure = self.message.get("e", None)
if self.failure is not None:
trace = self.failure.get('t', [])
message = self.failure.get('s', "")
class_name = self.failure.get('c')
module_name = self.failure.get('m')
res_exc = None
if module_name in pika_engine.allowed_remote_exmods:
try:
module = importutils.import_module(module_name)
klass = getattr(module, class_name)
ex_type = type(
klass.__name__,
(RemoteExceptionMixin, klass),
{}
)
res_exc = ex_type(module_name, class_name, message, trace)
except ImportError as e:
LOG.warning(
"Can not deserialize remote exception [module:%s, "
"class:%s]. %s", module_name, class_name, e
)
# if we have not processed failure yet, use RemoteError class
if res_exc is None:
res_exc = oslo_messaging.RemoteError(
class_name, message, trace
)
self.failure = res_exc
class PikaOutgoingMessage(object):
"""Driver friendly adapter for sending message. Construct RabbitMQ message
and send it
"""
def __init__(self, pika_engine, message, context,
content_type="application/json", content_encoding="utf-8"):
"""Parse RabbitMQ message
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param message: Dictionary, user's message fields
:param context: Dictionary, request context's fields
:param content_type: String, content-type header, defines serialization
mechanism
:param content_encoding: String, defines encoding for text data
"""
self._pika_engine = pika_engine
self._content_type = content_type
self._content_encoding = content_encoding
if self._content_type != "application/json":
raise NotImplementedError(
"Content-type['{}'] is not valid, "
"'application/json' only is supported.".format(
self._content_type
)
)
self.message = message
self.context = context
self.unique_id = uuid.uuid4().hex
def _prepare_message_to_send(self):
"""Combine user's message fields an system fields (_unique_id,
context's data etc)
"""
msg = self.message.copy()
if self.context:
for key, value in six.iteritems(self.context):
key = six.text_type(key)
msg['_$_' + key] = value
props = pika_spec.BasicProperties(
content_encoding=self._content_encoding,
content_type=self._content_type,
headers={_VERSION_HEADER: _VERSION},
message_id=self.unique_id,
)
return msg, props
@staticmethod
def _publish(pool, exchange, routing_key, body, properties, mandatory,
stopwatch):
"""Execute pika publish method using connection from connection pool
Also this message catches all pika related exceptions and raise
oslo.messaging specific exceptions
:param pool: Pool, pika connection pool for connection choosing
:param exchange: String, RabbitMQ exchange name for message sending
:param routing_key: String, RabbitMQ routing key for message routing
:param body: Bytes, RabbitMQ message payload
:param properties: Properties, RabbitMQ message properties
:param mandatory: Boolean, RabbitMQ publish mandatory flag (raise
exception if it is not possible to deliver message to any queue)
:param stopwatch: StopWatch, stopwatch object for calculating
allowed timeouts
"""
if stopwatch.expired():
raise exceptions.MessagingTimeout(
"Timeout for current operation was expired."
)
try:
timeout = stopwatch.leftover(return_none=True)
with pool.acquire(timeout=timeout) as conn:
if timeout is not None:
properties.expiration = str(int(timeout * 1000))
conn.channel.publish(
exchange=exchange,
routing_key=routing_key,
body=body,
properties=properties,
mandatory=mandatory
)
except pika_exceptions.NackError as e:
raise pika_drv_exc.MessageRejectedException(
"Can not send message: [body: {}], properties: {}] to "
"target [exchange: {}, routing_key: {}]. {}".format(
body, properties, exchange, routing_key, str(e)
)
)
except pika_exceptions.UnroutableError as e:
raise pika_drv_exc.RoutingException(
"Can not deliver message:[body:{}, properties: {}] to any "
"queue using target: [exchange:{}, "
"routing_key:{}]. {}".format(
body, properties, exchange, routing_key, str(e)
)
)
except pika_pool.Timeout as e:
raise exceptions.MessagingTimeout(
"Timeout for current operation was expired. {}".format(str(e))
)
except pika_pool.Connection.connectivity_errors as e:
if (isinstance(e, pika_exceptions.ChannelClosed)
and e.args and e.args[0] == 404):
raise pika_drv_exc.ExchangeNotFoundException(
"Attempt to send message to not existing exchange "
"detected, message: [body:{}, properties: {}], target: "
"[exchange:{}, routing_key:{}]. {}".format(
body, properties, exchange, routing_key, str(e)
)
)
raise pika_drv_exc.ConnectionException(
"Connectivity problem detected during sending the message: "
"[body:{}, properties: {}] to target: [exchange:{}, "
"routing_key:{}]. {}".format(
body, properties, exchange, routing_key, str(e)
)
)
except socket.timeout:
raise pika_drv_exc.TimeoutConnectionException(
"Socket timeout exceeded."
)
def _do_send(self, exchange, routing_key, msg_dict, msg_props,
confirm=True, mandatory=True, persistent=False,
stopwatch=pika_drv_cmns.INFINITE_STOP_WATCH, retrier=None):
"""Send prepared message with configured retrying
:param exchange: String, RabbitMQ exchange name for message sending
:param routing_key: String, RabbitMQ routing key for message routing
:param msg_dict: Dictionary, message payload
:param msg_props: Properties, message properties
:param confirm: Boolean, enable publisher confirmation if True
:param mandatory: Boolean, RabbitMQ publish mandatory flag (raise an
exception if it is not possible to deliver the message to any queue)
:param persistent: Boolean, send persistent message if True, works only
for routing into durable queues
:param stopwatch: StopWatch, stopwatch object for calculating
allowed timeouts
:param retrier: retrying.Retrier, configured retrier object for sending
message; if None, no retrying is performed
"""
msg_props.delivery_mode = 2 if persistent else 1
pool = (self._pika_engine.connection_with_confirmation_pool
if confirm else
self._pika_engine.connection_without_confirmation_pool)
body = jsonutils.dump_as_bytes(msg_dict,
encoding=self._content_encoding)
LOG.debug(
"Sending message:[body:%s; properties: %s] to target: "
"[exchange:%s; routing_key:%s]", body, msg_props, exchange,
routing_key
)
publish = (self._publish if retrier is None else
retrier(self._publish))
return publish(pool, exchange, routing_key, body, msg_props,
mandatory, stopwatch)
def send(self, exchange, routing_key='', confirm=True, mandatory=True,
persistent=False, stopwatch=pika_drv_cmns.INFINITE_STOP_WATCH,
retrier=None):
"""Send message with configured retrying
:param exchange: String, RabbitMQ exchange name for message sending
:param routing_key: String, RabbitMQ routing key for message routing
:param confirm: Boolean, enable publisher confirmation if True
:param mandatory: Boolean, RabbitMQ publish mandatory flag (raise an
exception if it is not possible to deliver the message to any queue)
:param persistent: Boolean, send persistent message if True, works only
for routing into durable queues
:param stopwatch: StopWatch, stopwatch object for calculating
allowed timeouts
:param retrier: retrying.Retrier, configured retrier object for sending
message; if None, no retrying is performed
"""
msg_dict, msg_props = self._prepare_message_to_send()
return self._do_send(exchange, routing_key, msg_dict, msg_props,
confirm, mandatory, persistent,
stopwatch, retrier)
class RpcPikaOutgoingMessage(PikaOutgoingMessage):
"""PikaOutgoingMessage implementation for RPC messages. It adds
possibility to wait and receive RPC reply
"""
def __init__(self, pika_engine, message, context,
content_type="application/json", content_encoding="utf-8"):
super(RpcPikaOutgoingMessage, self).__init__(
pika_engine, message, context, content_type, content_encoding
)
self.msg_id = None
self.reply_q = None
def send(self, exchange, routing_key, reply_listener=None,
stopwatch=pika_drv_cmns.INFINITE_STOP_WATCH, retrier=None):
"""Send RPC message with configured retrying
:param exchange: String, RabbitMQ exchange name for message sending
:param routing_key: String, RabbitMQ routing key for message routing
:param reply_listener: RpcReplyPikaListener, listener for waiting for
the reply. If None, return immediately without waiting for a reply
:param stopwatch: StopWatch, stopwatch object for calculating
allowed timeouts
:param retrier: retrying.Retrier, configured retrier object for sending
message; if None, no retrying is performed
"""
msg_dict, msg_props = self._prepare_message_to_send()
if reply_listener:
self.msg_id = uuid.uuid4().hex
msg_props.correlation_id = self.msg_id
LOG.debug('MSG_ID is %s', self.msg_id)
self.reply_q = reply_listener.get_reply_qname()
msg_props.reply_to = self.reply_q
future = reply_listener.register_reply_waiter(msg_id=self.msg_id)
self._do_send(
exchange=exchange, routing_key=routing_key, msg_dict=msg_dict,
msg_props=msg_props, confirm=True, mandatory=True,
persistent=False, stopwatch=stopwatch, retrier=retrier
)
try:
return future.result(stopwatch.leftover(return_none=True))
except BaseException as e:
reply_listener.unregister_reply_waiter(self.msg_id)
if isinstance(e, futures.TimeoutError):
e = exceptions.MessagingTimeout()
raise e
else:
self._do_send(
exchange=exchange, routing_key=routing_key, msg_dict=msg_dict,
msg_props=msg_props, confirm=True, mandatory=True,
persistent=False, stopwatch=stopwatch, retrier=retrier
)
class RpcReplyPikaOutgoingMessage(PikaOutgoingMessage):
"""PikaOutgoingMessage implementation for RPC reply messages. It sets
correlation_id AMQP property to link this reply with response
"""
def __init__(self, pika_engine, msg_id, reply=None, failure_info=None,
content_type="application/json", content_encoding="utf-8"):
"""Initialize with reply information for sending
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param msg_id: String, msg_id of the RPC request that is waiting for
this reply
:param reply: Dictionary, the reply payload. Should be None if an
exception was raised
:param failure_info: Tuple, should be a sys.exc_info() tuple.
Should be None if the RPC request was processed successfully.
:param content_type: String, content-type header, defines serialization
mechanism
:param content_encoding: String, defines encoding for text data
"""
self.msg_id = msg_id
if failure_info is not None:
ex_class = failure_info[0]
ex = failure_info[1]
tb = traceback.format_exception(*failure_info)
if issubclass(ex_class, RemoteExceptionMixin):
failure_data = {
'c': ex.clazz,
'm': ex.module,
's': ex.message,
't': tb
}
else:
failure_data = {
'c': six.text_type(ex_class.__name__),
'm': six.text_type(ex_class.__module__),
's': six.text_type(ex),
't': tb
}
msg = {'e': failure_data}
else:
msg = {'s': reply}
super(RpcReplyPikaOutgoingMessage, self).__init__(
pika_engine, msg, None, content_type, content_encoding
)
def send(self, reply_q, stopwatch=pika_drv_cmns.INFINITE_STOP_WATCH,
retrier=None):
"""Send RPC message with configured retrying
:param reply_q: String, queue name for sending reply
:param stopwatch: StopWatch, stopwatch object for calculating
allowed timeouts
:param retrier: retrying.Retrier, configured retrier object for sending
message; if None, no retrying is performed
"""
msg_dict, msg_props = self._prepare_message_to_send()
msg_props.correlation_id = self.msg_id
self._do_send(
exchange=self._pika_engine.rpc_reply_exchange, routing_key=reply_q,
msg_dict=msg_dict, msg_props=msg_props, confirm=True,
mandatory=True, persistent=False, stopwatch=stopwatch,
retrier=retrier
)
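# Usage sketch (not part of the original file): how these classes are
# assumed to be wired together for an RPC round trip. `pika_engine`,
# `reply_listener`, `ctx` and the exchange/routing-key values are
# hypothetical stand-ins for objects the real driver supplies; the
# StopWatch comes from oslo_utils.timeutils.
#
#   request = RpcPikaOutgoingMessage(pika_engine, {'method': 'ping'}, ctx)
#   reply = request.send(exchange='rpc_exchange',
#                        routing_key='topic.server',
#                        reply_listener=reply_listener,
#                        stopwatch=timeutils.StopWatch(duration=30).start())
#
#   # Server side: the reply is routed back via reply_to / correlation_id:
#   RpcReplyPikaOutgoingMessage(pika_engine, request.msg_id,
#                               reply={'result': 'pong'}).send(reply_q)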


@ -1,538 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading
from oslo_log import log as logging
from oslo_service import loopingcall
from oslo_messaging._drivers import base
from oslo_messaging._drivers.pika_driver import pika_commons as pika_drv_cmns
from oslo_messaging._drivers.pika_driver import pika_exceptions as pika_drv_exc
from oslo_messaging._drivers.pika_driver import pika_message as pika_drv_msg
LOG = logging.getLogger(__name__)
class PikaPoller(base.Listener):
"""Provides user friendly functionality for RabbitMQ message consuming,
handles low level connectivity problems and restore connection if some
connectivity related problem detected
"""
def __init__(self, pika_engine, batch_size, batch_timeout, prefetch_count,
incoming_message_class):
"""Initialize required fields
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param batch_size: desired number of messages passed to
single on_incoming_callback call
:param batch_timeout: defines how long to wait for batch_size messages
if some messages are already waiting for processing
:param prefetch_count: Integer, maximum count of unacknowledged
messages which RabbitMQ broker sends to this consumer
:param incoming_message_class: PikaIncomingMessage, wrapper for
consumed RabbitMQ message
"""
super(PikaPoller, self).__init__(batch_size, batch_timeout,
prefetch_count)
self._pika_engine = pika_engine
self._incoming_message_class = incoming_message_class
self._connection = None
self._channel = None
self._recover_loopingcall = None
self._lock = threading.RLock()
self._cur_batch_buffer = None
self._cur_batch_timeout_id = None
self._started = False
self._closing_connection_by_poller = False
self._queues_to_consume = None
def _on_connection_close(self, connection, reply_code, reply_text):
self._deliver_cur_batch()
if self._closing_connection_by_poller:
return
with self._lock:
self._connection = None
self._start_recover_consuming_task()
def _on_channel_close(self, channel, reply_code, reply_text):
if self._cur_batch_buffer:
self._cur_batch_buffer = [
message for message in self._cur_batch_buffer
if not message.need_ack()
]
if self._closing_connection_by_poller:
return
with self._lock:
self._channel = None
self._start_recover_consuming_task()
def _on_consumer_cancel(self, method_frame):
with self._lock:
if self._queues_to_consume:
consumer_tag = method_frame.method.consumer_tag
for queue_info in self._queues_to_consume:
if queue_info["consumer_tag"] == consumer_tag:
queue_info["consumer_tag"] = None
self._start_recover_consuming_task()
def _on_message_no_ack_callback(self, unused, method, properties, body):
"""Is called by Pika when message was received from queue listened with
no_ack=True mode
"""
incoming_message = self._incoming_message_class(
self._pika_engine, None, method, properties, body
)
self._on_incoming_message(incoming_message)
def _on_message_with_ack_callback(self, unused, method, properties, body):
"""Is called by Pika when message was received from queue listened with
no_ack=False mode
"""
incoming_message = self._incoming_message_class(
self._pika_engine, self._channel, method, properties, body
)
self._on_incoming_message(incoming_message)
def _deliver_cur_batch(self):
if self._cur_batch_timeout_id is not None:
self._connection.remove_timeout(self._cur_batch_timeout_id)
self._cur_batch_timeout_id = None
if self._cur_batch_buffer:
buf_to_send = self._cur_batch_buffer
self._cur_batch_buffer = None
try:
self.on_incoming_callback(buf_to_send)
except Exception:
LOG.exception("Unexpected exception during incoming delivery")
def _on_incoming_message(self, incoming_message):
if self._cur_batch_buffer is None:
self._cur_batch_buffer = [incoming_message]
else:
self._cur_batch_buffer.append(incoming_message)
if len(self._cur_batch_buffer) >= self.batch_size:
self._deliver_cur_batch()
return
if self._cur_batch_timeout_id is None:
self._cur_batch_timeout_id = self._connection.add_timeout(
self.batch_timeout, self._deliver_cur_batch)
def _start_recover_consuming_task(self):
"""Start async job for checking connection to the broker."""
if self._recover_loopingcall is None and self._started:
self._recover_loopingcall = (
loopingcall.DynamicLoopingCall(
self._try_recover_consuming
)
)
LOG.info("Starting recover consuming job for listener: %s", self)
self._recover_loopingcall.start()
def _try_recover_consuming(self):
with self._lock:
try:
if self._started:
self._start_or_recover_consuming()
except pika_drv_exc.EstablishConnectionException as e:
LOG.warning(
"Problem during establishing connection for pika "
"poller %s", e, exc_info=True
)
return self._pika_engine.host_connection_reconnect_delay
except pika_drv_exc.ConnectionException as e:
LOG.warning(
"Connectivity exception during starting/recovering pika "
"poller %s", e, exc_info=True
)
except pika_drv_cmns.PIKA_CONNECTIVITY_ERRORS as e:
LOG.warning(
"Connectivity exception during starting/recovering pika "
"poller %s", e, exc_info=True
)
except BaseException:
# NOTE (dukhlov): BaseException is caught here because if this
# method raises such an exception, the LoopingCall stops
# executing. That should probably never happen and Exception
# should be enough, but in case of a programmer mistake it could
# occur, and the problem would be hard to catch if the background
# task stopped. It is better for the task to keep working and log
# this error repeatedly
LOG.exception("Unexpected exception during "
"starting/recovering pika poller")
else:
self._recover_loopingcall = None
LOG.info("Recover consuming job was finished for listener: %s",
self)
raise loopingcall.LoopingCallDone(True)
return 0
def _start_or_recover_consuming(self):
"""Performs reconnection to the broker. It is unsafe method for
internal use only
"""
if self._connection is None or not self._connection.is_open:
self._connection = self._pika_engine.create_connection(
for_listening=True
)
self._connection.add_on_close_callback(self._on_connection_close)
self._channel = None
if self._channel is None or not self._channel.is_open:
if self._queues_to_consume:
for queue_info in self._queues_to_consume:
queue_info["consumer_tag"] = None
self._channel = self._connection.channel()
self._channel.add_on_close_callback(self._on_channel_close)
self._channel.add_on_cancel_callback(self._on_consumer_cancel)
self._channel.basic_qos(prefetch_count=self.prefetch_size)
if self._queues_to_consume is None:
self._queues_to_consume = self._declare_queue_binding()
self._start_consuming()
def _declare_queue_binding(self):
"""Is called by recovering connection logic if target RabbitMQ
exchange and (or) queue do not exist. Should be overridden in child
classes
:return Dictionary, declared_queue_name -> no_ack_mode
"""
raise NotImplementedError(
"It is base class. Please declare exchanges and queues here"
)
def _start_consuming(self):
"""Is called by recovering connection logic for starting consumption
of configured RabbitMQ queues
"""
assert self._queues_to_consume is not None
try:
for queue_info in self._queues_to_consume:
if queue_info["consumer_tag"] is not None:
continue
no_ack = queue_info["no_ack"]
on_message_callback = (
self._on_message_no_ack_callback if no_ack
else self._on_message_with_ack_callback
)
queue_info["consumer_tag"] = self._channel.basic_consume(
on_message_callback, queue_info["queue_name"],
no_ack=no_ack
)
except Exception:
self._queues_to_consume = None
raise
def _stop_consuming(self):
"""Is called by poller's stop logic for stopping consumption
of configured RabbitMQ queues
"""
assert self._queues_to_consume is not None
for queue_info in self._queues_to_consume:
consumer_tag = queue_info["consumer_tag"]
if consumer_tag is not None:
self._channel.basic_cancel(consumer_tag)
queue_info["consumer_tag"] = None
def start(self, on_incoming_callback):
"""Starts poller. Should be called before polling to allow message
consuming
:param on_incoming_callback: callback function to be executed when
listener received messages. Messages should be processed and
acked/nacked by callback
"""
super(PikaPoller, self).start(on_incoming_callback)
with self._lock:
if self._started:
return
connected = False
try:
self._start_or_recover_consuming()
except pika_drv_exc.EstablishConnectionException as exc:
LOG.warning(
"Can not establish connection during pika poller's "
"start(). %s", exc, exc_info=True
)
except pika_drv_exc.ConnectionException as exc:
LOG.warning(
"Connectivity problem during pika poller's start(). %s",
exc, exc_info=True
)
except pika_drv_cmns.PIKA_CONNECTIVITY_ERRORS as exc:
LOG.warning(
"Connectivity problem during pika poller's start(). %s",
exc, exc_info=True
)
else:
connected = True
self._started = True
if not connected:
self._start_recover_consuming_task()
def stop(self):
"""Stops poller. Should be called when polling is not needed anymore to
stop new message consuming. After that it is necessary to poll already
prefetched messages
"""
super(PikaPoller, self).stop()
with self._lock:
if not self._started:
return
if self._recover_loopingcall is not None:
self._recover_loopingcall.stop()
self._recover_loopingcall = None
if (self._queues_to_consume and self._channel and
self._channel.is_open):
try:
self._stop_consuming()
except pika_drv_cmns.PIKA_CONNECTIVITY_ERRORS as exc:
LOG.warning(
"Connectivity problem detected during consumer "
"cancellation. %s", exc, exc_info=True
)
self._deliver_cur_batch()
self._started = False
def cleanup(self):
"""Cleanup allocated resources (channel, connection, etc)."""
with self._lock:
if self._connection and self._connection.is_open:
try:
self._closing_connection_by_poller = True
self._connection.close()
self._closing_connection_by_poller = False
except pika_drv_cmns.PIKA_CONNECTIVITY_ERRORS:
# expected errors
pass
except Exception:
LOG.exception("Unexpected error during closing connection")
finally:
self._channel = None
self._connection = None
class RpcServicePikaPoller(PikaPoller):
"""PikaPoller implementation for polling RPC messages. Overrides base
functionality according to RPC specific
"""
def __init__(self, pika_engine, target, batch_size, batch_timeout,
prefetch_count):
"""Adds target parameter for declaring RPC specific exchanges and
queues
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param target: Target, oslo.messaging Target object which defines RPC
endpoint
:param batch_size: desired number of messages passed to
single on_incoming_callback call
:param batch_timeout: defines how long to wait for batch_size messages
if some messages are already waiting for processing
:param prefetch_count: Integer, maximum count of unacknowledged
messages which RabbitMQ broker sends to this consumer
"""
self._target = target
super(RpcServicePikaPoller, self).__init__(
pika_engine, batch_size, batch_timeout, prefetch_count,
pika_drv_msg.RpcPikaIncomingMessage
)
def _declare_queue_binding(self):
"""Overrides base method and perform declaration of RabbitMQ exchanges
and queues which correspond to oslo.messaging RPC target
:return Dictionary, declared_queue_name -> no_ack_mode
"""
queue_expiration = self._pika_engine.rpc_queue_expiration
exchange = self._pika_engine.get_rpc_exchange_name(
self._target.exchange
)
queues_to_consume = []
for no_ack in [True, False]:
queue = self._pika_engine.get_rpc_queue_name(
self._target.topic, None, no_ack
)
self._pika_engine.declare_queue_binding_by_channel(
channel=self._channel, exchange=exchange, queue=queue,
routing_key=queue, exchange_type='direct', durable=False,
queue_expiration=queue_expiration
)
queues_to_consume.append(
{"queue_name": queue, "no_ack": no_ack, "consumer_tag": None}
)
if self._target.server:
server_queue = self._pika_engine.get_rpc_queue_name(
self._target.topic, self._target.server, no_ack
)
self._pika_engine.declare_queue_binding_by_channel(
channel=self._channel, exchange=exchange, durable=False,
queue=server_queue, routing_key=server_queue,
exchange_type='direct', queue_expiration=queue_expiration
)
queues_to_consume.append(
{"queue_name": server_queue, "no_ack": no_ack,
"consumer_tag": None}
)
worker_queue = self._pika_engine.get_rpc_queue_name(
self._target.topic, self._target.server, no_ack, True
)
all_workers_routing_key = self._pika_engine.get_rpc_queue_name(
self._target.topic, "all_workers", no_ack
)
self._pika_engine.declare_queue_binding_by_channel(
channel=self._channel, exchange=exchange, durable=False,
queue=worker_queue, routing_key=all_workers_routing_key,
exchange_type='direct', queue_expiration=queue_expiration
)
queues_to_consume.append(
{"queue_name": worker_queue, "no_ack": no_ack,
"consumer_tag": None}
)
return queues_to_consume
class RpcReplyPikaPoller(PikaPoller):
"""PikaPoller implementation for polling RPC reply messages. Overrides
base functionality according to RPC reply specific
"""
def __init__(self, pika_engine, exchange, queue, batch_size, batch_timeout,
prefetch_count):
"""Adds exchange and queue parameter for declaring exchange and queue
used for RPC reply delivery
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param exchange: String, exchange name used for RPC reply delivery
:param queue: String, queue name used for RPC reply delivery
:param batch_size: desired number of messages passed to
single on_incoming_callback call
:param batch_timeout: defines how long to wait for batch_size messages
if some messages are already waiting for processing
:param prefetch_count: Integer, maximum count of unacknowledged
messages which RabbitMQ broker sends to this consumer
"""
self._exchange = exchange
self._queue = queue
super(RpcReplyPikaPoller, self).__init__(
pika_engine, batch_size, batch_timeout, prefetch_count,
pika_drv_msg.RpcReplyPikaIncomingMessage
)
def _declare_queue_binding(self):
"""Overrides base method and perform declaration of RabbitMQ exchange
and queue used for RPC reply delivery
:return Dictionary, declared_queue_name -> no_ack_mode
"""
self._pika_engine.declare_queue_binding_by_channel(
channel=self._channel,
exchange=self._exchange, queue=self._queue,
routing_key=self._queue, exchange_type='direct',
queue_expiration=self._pika_engine.rpc_queue_expiration,
durable=False
)
return [{"queue_name": self._queue, "no_ack": False,
"consumer_tag": None}]
class NotificationPikaPoller(PikaPoller):
"""PikaPoller implementation for polling Notification messages. Overrides
base functionality according to Notification specific
"""
def __init__(self, pika_engine, targets_and_priorities,
batch_size, batch_timeout, prefetch_count, queue_name=None):
"""Adds targets_and_priorities and queue_name parameter
for declaring exchanges and queues used for notification delivery
:param pika_engine: PikaEngine, shared object with configuration and
shared driver functionality
:param targets_and_priorities: list of (target, priority), defines
default queue names for corresponding notification types
:param batch_size: desired number of messages passed to
single on_incoming_callback call
:param batch_timeout: defines how long to wait for batch_size messages
if some messages are already waiting for processing
:param prefetch_count: Integer, maximum count of unacknowledged
messages which RabbitMQ broker sends to this consumer
:param queue_name: String, alternative queue name used for this poller
instead of the default queue name
"""
self._targets_and_priorities = targets_and_priorities
self._queue_name = queue_name
super(NotificationPikaPoller, self).__init__(
pika_engine, batch_size, batch_timeout, prefetch_count,
pika_drv_msg.PikaIncomingMessage
)
def _declare_queue_binding(self):
"""Overrides base method and perform declaration of RabbitMQ exchanges
and queues used for notification delivery
:return Dictionary, declared_queue_name -> no_ack_mode
"""
queues_to_consume = []
for target, priority in self._targets_and_priorities:
routing_key = '%s.%s' % (target.topic, priority)
queue = self._queue_name or routing_key
self._pika_engine.declare_queue_binding_by_channel(
channel=self._channel,
exchange=(
target.exchange or
self._pika_engine.default_notification_exchange
),
queue=queue,
routing_key=routing_key,
exchange_type='direct',
queue_expiration=None,
durable=self._pika_engine.notification_persistence,
)
queues_to_consume.append(
{"queue_name": queue, "no_ack": False, "consumer_tag": None}
)
return queues_to_consume
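# Lifecycle sketch (not part of the original file): how a poller is
# assumed to be driven by the listener machinery; `pika_engine`, `target`
# and `process_and_ack` are hypothetical.
#
#   poller = RpcServicePikaPoller(pika_engine, target, batch_size=10,
#                                 batch_timeout=0.2, prefetch_count=20)
#   poller.start(on_incoming_callback=process_and_ack)  # begin consuming
#   ...
#   poller.stop()     # cancel consumers; prefetched messages are still
#                     # delivered through the callback
#   poller.cleanup()  # close the channel and connection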


@ -1,148 +0,0 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import collections
import sys
import threading
from oslo_log import log as logging
from oslo_utils import timeutils
import six
from oslo_messaging._drivers import common
LOG = logging.getLogger(__name__)
# TODO(harlowja): remove this when we no longer have to support 2.7
if sys.version_info[0:2] < (3, 2):
def wait_condition(cond):
# FIXME(markmc): timeout needed to allow keyboard interrupt
# http://bugs.python.org/issue8844
cond.wait(timeout=1)
else:
def wait_condition(cond):
cond.wait()
@six.add_metaclass(abc.ABCMeta)
class Pool(object):
"""A thread-safe object pool.
Modelled after the eventlet.pools.Pool interface, but designed to be safe
when using native threads without the GIL.
Resizing is not supported.
"""
def __init__(self, max_size=4, min_size=2, ttl=1200, on_expire=None):
super(Pool, self).__init__()
self._min_size = min_size
self._max_size = max_size
self._item_ttl = ttl
self._current_size = 0
self._cond = threading.Condition()
self._items = collections.deque()
self._on_expire = on_expire
def expire(self):
"""Remove expired items from left (the oldest item) to
right (the newest item).
"""
with self._cond:
while len(self._items) > self._min_size:
try:
ttl_watch, item = self._items.popleft()
if ttl_watch.expired():
self._on_expire and self._on_expire(item)
self._current_size -= 1
else:
self._items.appendleft((ttl_watch, item))
return
except IndexError:
break
def put(self, item):
"""Return an item to the pool."""
with self._cond:
ttl_watch = timeutils.StopWatch(duration=self._item_ttl)
ttl_watch.start()
self._items.append((ttl_watch, item))
self._cond.notify()
def get(self):
"""Return an item from the pool, when one is available.
This may cause the calling thread to block.
"""
with self._cond:
while True:
try:
ttl_watch, item = self._items.pop()
self.expire()
return item
except IndexError:
pass
if self._current_size < self._max_size:
self._current_size += 1
break
wait_condition(self._cond)
# We've grabbed a slot and dropped the lock, now do the creation
try:
return self.create()
except Exception:
with self._cond:
self._current_size -= 1
raise
def iter_free(self):
"""Iterate over free items."""
while True:
try:
_, item = self._items.pop()
yield item
except IndexError:
# the deque is empty: end the generator with return rather than
# raising StopIteration (which is an error under PEP 479)
return
@abc.abstractmethod
def create(self):
"""Construct a new item."""
class ConnectionPool(Pool):
"""Class that implements a Pool of Connections."""
def __init__(self, conf, max_size, min_size, ttl, url, connection_cls):
self.connection_cls = connection_cls
self.conf = conf
self.url = url
super(ConnectionPool, self).__init__(max_size, min_size, ttl,
self._on_expire)
def _on_expire(self, connection):
connection.close()
LOG.debug("Idle connection has expired and been closed."
" Pool size: %d" % len(self._items))
def create(self, purpose=common.PURPOSE_SEND):
LOG.debug('Pool creating new connection')
return self.connection_cls(self.conf, self.url, purpose)
def empty(self):
for item in self.iter_free():
item.close()
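# Example (not part of the original file): a minimal concrete Pool. The
# subclass and connection factory are illustrative; only the semantics
# shown (lazy creation, blocking at max_size, TTL-based expiry via
# on_expire) come from the classes above.
#
#   class SocketPool(Pool):
#       def create(self):
#           return socket.create_connection(('localhost', 5672))
#
#   pool = SocketPool(max_size=2, ttl=60,
#                     on_expire=lambda sock: sock.close())
#   conn = pool.get()   # creates lazily, or blocks when 2 are in use
#   pool.put(conn)      # returned items carry a TTL stopwatch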


@ -1,110 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from concurrent import futures
import logging
import retrying
import oslo_messaging
from oslo_messaging._drivers import common as rpc_common
from oslo_messaging._drivers.zmq_driver.client.publishers \
import zmq_publisher_base
from oslo_messaging._drivers.zmq_driver.client import zmq_sockets_manager
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._i18n import _LE
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class DealerPublisherBase(zmq_publisher_base.PublisherBase):
"""Abstract DEALER-publisher."""
def __init__(self, conf, matchmaker, sender, receiver):
sockets_manager = zmq_sockets_manager.SocketsManager(
conf, matchmaker, zmq.ROUTER, zmq.DEALER
)
super(DealerPublisherBase, self).__init__(sockets_manager, sender,
receiver)
@staticmethod
def _check_pattern(request, supported_pattern):
if request.msg_type != supported_pattern:
raise zmq_publisher_base.UnsupportedSendPattern(
zmq_names.message_type_str(request.msg_type)
)
@staticmethod
def _raise_timeout(request):
raise oslo_messaging.MessagingTimeout(
"Timeout %(tout)s seconds was reached for message %(msg_id)s" %
{"tout": request.timeout, "msg_id": request.message_id}
)
@abc.abstractmethod
def _connect_socket(self, request):
pass
def _recv_reply(self, request):
reply_future, = self.receiver.track_request(request)
try:
_, reply = reply_future.result(timeout=request.timeout)
except AssertionError:
LOG.error(_LE("Message format error in reply for %s"),
request.message_id)
return None
except futures.TimeoutError:
self._raise_timeout(request)
finally:
self.receiver.untrack_request(request)
if reply.failure:
raise rpc_common.deserialize_remote_exception(
reply.failure, request.allowed_remote_exmods
)
else:
return reply.reply_body
def send_call(self, request):
self._check_pattern(request, zmq_names.CALL_TYPE)
try:
socket = self._connect_socket(request)
except retrying.RetryError:
self._raise_timeout(request)
self.sender.send(socket, request)
self.receiver.register_socket(socket)
return self._recv_reply(request)
@abc.abstractmethod
def _send_non_blocking(self, request):
pass
def send_cast(self, request):
self._check_pattern(request, zmq_names.CAST_TYPE)
self._send_non_blocking(request)
def send_fanout(self, request):
self._check_pattern(request, zmq_names.CAST_FANOUT_TYPE)
self._send_non_blocking(request)
def send_notify(self, request):
self._check_pattern(request, zmq_names.NOTIFY_TYPE)
self._send_non_blocking(request)
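# Usage sketch (not part of the original file): a concrete subclass (for
# example DealerPublisherDirect in the next file) is assumed to be driven
# like this; `request` and `cast_request` are hypothetical request objects.
#
#   publisher = DealerPublisherDirect(conf, matchmaker)
#   reply_body = publisher.send_call(request)  # blocks for reply/timeout
#   publisher.send_cast(cast_request)          # fire-and-forget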


@ -1,53 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import retrying
from oslo_messaging._drivers.zmq_driver.client.publishers.dealer \
import zmq_dealer_publisher_base
from oslo_messaging._drivers.zmq_driver.client import zmq_receivers
from oslo_messaging._drivers.zmq_driver.client import zmq_senders
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class DealerPublisherDirect(zmq_dealer_publisher_base.DealerPublisherBase):
"""DEALER-publisher using direct connections."""
def __init__(self, conf, matchmaker):
sender = zmq_senders.RequestSenderDirect(conf)
receiver = zmq_receivers.ReplyReceiverDirect(conf)
super(DealerPublisherDirect, self).__init__(conf, matchmaker, sender,
receiver)
def _connect_socket(self, request):
return self.sockets_manager.get_socket(request.target)
def _send_non_blocking(self, request):
try:
socket = self._connect_socket(request)
except retrying.RetryError:
return
if request.msg_type in zmq_names.MULTISEND_TYPES:
for _ in range(socket.connections_count()):
self.sender.send(socket, request)
else:
self.sender.send(socket, request)


@ -1,87 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import retrying
from oslo_messaging._drivers.zmq_driver.client.publishers.dealer \
import zmq_dealer_publisher_base
from oslo_messaging._drivers.zmq_driver.client import zmq_receivers
from oslo_messaging._drivers.zmq_driver.client import zmq_routing_table
from oslo_messaging._drivers.zmq_driver.client import zmq_senders
from oslo_messaging._drivers.zmq_driver import zmq_address
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._drivers.zmq_driver import zmq_updater
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class DealerPublisherProxy(zmq_dealer_publisher_base.DealerPublisherBase):
"""DEALER-publisher via proxy."""
def __init__(self, conf, matchmaker):
sender = zmq_senders.RequestSenderProxy(conf)
receiver = zmq_receivers.ReplyReceiverProxy(conf)
super(DealerPublisherProxy, self).__init__(conf, matchmaker, sender,
receiver)
self.socket = self.sockets_manager.get_socket_to_publishers()
self.routing_table = zmq_routing_table.RoutingTable(self.conf,
self.matchmaker)
self.connection_updater = \
PublisherConnectionUpdater(self.conf, self.matchmaker, self.socket)
def _connect_socket(self, request):
return self.socket
def send_call(self, request):
try:
request.routing_key = \
self.routing_table.get_routable_host(request.target)
except retrying.RetryError:
self._raise_timeout(request)
return super(DealerPublisherProxy, self).send_call(request)
def _get_routing_keys(self, request):
try:
if request.msg_type in zmq_names.DIRECT_TYPES:
return [self.routing_table.get_routable_host(request.target)]
else:
return \
[zmq_address.target_to_subscribe_filter(request.target)] \
if self.conf.oslo_messaging_zmq.use_pub_sub else \
self.routing_table.get_all_hosts(request.target)
except retrying.RetryError:
return []
def _send_non_blocking(self, request):
for routing_key in self._get_routing_keys(request):
request.routing_key = routing_key
self.sender.send(self.socket, request)
def cleanup(self):
super(DealerPublisherProxy, self).cleanup()
self.connection_updater.stop()
self.socket.close()
class PublisherConnectionUpdater(zmq_updater.ConnectionUpdater):
def _update_connection(self):
publishers = self.matchmaker.get_publishers()
for pub_address, router_address in publishers:
self.socket.connect_to_host(router_address)


@ -1,94 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import logging
import six
from oslo_messaging._drivers import common as rpc_common
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._i18n import _LE
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class UnsupportedSendPattern(rpc_common.RPCException):
"""Exception to raise from publishers in case of unsupported
sending pattern called.
"""
def __init__(self, pattern_name):
"""Construct exception object
:param pattern_name: Message type name from zmq_names
:type pattern_name: str
"""
errmsg = _LE("Sending pattern %s is unsupported.") % pattern_name
super(UnsupportedSendPattern, self).__init__(errmsg)
@six.add_metaclass(abc.ABCMeta)
class PublisherBase(object):
"""Abstract publisher class
Each publisher in the zmq-driver client should implement
this interface to serve as a message publisher.
Publishers send request objects from zmq_request.
"""
def __init__(self, sockets_manager, sender, receiver):
"""Construct publisher
Accept sockets manager, sender and receiver objects.
:param sockets_manager: sockets manager object
:type sockets_manager: zmq_sockets_manager.SocketsManager
:param sender: request sender object
:type sender: zmq_senders.RequestSender
:param receiver: reply receiver object
:type receiver: zmq_receivers.ReplyReceiver
"""
self.sockets_manager = sockets_manager
self.conf = sockets_manager.conf
self.matchmaker = sockets_manager.matchmaker
self.sender = sender
self.receiver = receiver
@abc.abstractmethod
def send_call(self, request):
pass
@abc.abstractmethod
def send_cast(self, request):
pass
@abc.abstractmethod
def send_fanout(self, request):
pass
@abc.abstractmethod
def send_notify(self, request):
pass
def cleanup(self):
"""Cleanup publisher. Close allocated connections."""
self.receiver.stop()
self.sockets_manager.cleanup()


@ -1,106 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_messaging._drivers import common
from oslo_messaging._drivers.zmq_driver.client.publishers.dealer \
import zmq_dealer_publisher_direct
from oslo_messaging._drivers.zmq_driver.client.publishers.dealer \
import zmq_dealer_publisher_proxy
from oslo_messaging._drivers.zmq_driver.client import zmq_client_base
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
zmq = zmq_async.import_zmq()
class WrongClientException(common.RPCException):
"""Raised if client type doesn't match configuration"""
class ZmqClientMixDirectPubSub(zmq_client_base.ZmqClientBase):
"""Client for using with direct connections and fanout over proxy:
use_pub_sub = true
use_router_proxy = false
"""
def __init__(self, conf, matchmaker=None, allowed_remote_exmods=None):
if conf.oslo_messaging_zmq.use_router_proxy or not \
conf.oslo_messaging_zmq.use_pub_sub:
raise WrongClientException()
publisher_direct = \
zmq_dealer_publisher_direct.DealerPublisherDirect(conf, matchmaker)
publisher_proxy = \
zmq_dealer_publisher_proxy.DealerPublisherProxy(conf, matchmaker)
super(ZmqClientMixDirectPubSub, self).__init__(
conf, matchmaker, allowed_remote_exmods,
publishers={
zmq_names.CAST_FANOUT_TYPE: publisher_proxy,
zmq_names.NOTIFY_TYPE: publisher_proxy,
"default": publisher_direct
}
)
class ZmqClientDirect(zmq_client_base.ZmqClientBase):
"""This kind of client (publishers combination) is to be used for
direct connections only:
use_pub_sub = false
use_router_proxy = false
"""
def __init__(self, conf, matchmaker=None, allowed_remote_exmods=None):
if conf.oslo_messaging_zmq.use_pub_sub or \
conf.oslo_messaging_zmq.use_router_proxy:
raise WrongClientException()
publisher = \
zmq_dealer_publisher_direct.DealerPublisherDirect(conf, matchmaker)
super(ZmqClientDirect, self).__init__(
conf, matchmaker, allowed_remote_exmods,
publishers={"default": publisher}
)
class ZmqClientProxy(zmq_client_base.ZmqClientBase):
"""Client for using with proxy:
use_pub_sub = true
use_router_proxy = true
or
use_pub_sub = false
use_router_proxy = true
"""
def __init__(self, conf, matchmaker=None, allowed_remote_exmods=None):
if not conf.oslo_messaging_zmq.use_router_proxy:
raise WrongClientException()
publisher = \
zmq_dealer_publisher_proxy.DealerPublisherProxy(conf, matchmaker)
super(ZmqClientProxy, self).__init__(
conf, matchmaker, allowed_remote_exmods,
publishers={"default": publisher}
)


@ -1,71 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_messaging._drivers.zmq_driver.client import zmq_request
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
zmq = zmq_async.import_zmq()
class ZmqClientBase(object):
def __init__(self, conf, matchmaker=None, allowed_remote_exmods=None,
publishers=None):
self.conf = conf
self.matchmaker = matchmaker
self.allowed_remote_exmods = allowed_remote_exmods or []
self.publishers = publishers
self.call_publisher = publishers.get(zmq_names.CALL_TYPE,
publishers["default"])
self.cast_publisher = publishers.get(zmq_names.CAST_TYPE,
publishers["default"])
self.fanout_publisher = publishers.get(zmq_names.CAST_FANOUT_TYPE,
publishers["default"])
self.notify_publisher = publishers.get(zmq_names.NOTIFY_TYPE,
publishers["default"])
def send_call(self, target, context, message, timeout=None, retry=None):
request = zmq_request.CallRequest(
target, context=context, message=message, retry=retry,
timeout=timeout, allowed_remote_exmods=self.allowed_remote_exmods
)
return self.call_publisher.send_call(request)
def send_cast(self, target, context, message, retry=None):
request = zmq_request.CastRequest(
target, context=context, message=message, retry=retry
)
self.cast_publisher.send_cast(request)
def send_fanout(self, target, context, message, retry=None):
request = zmq_request.FanoutRequest(
target, context=context, message=message, retry=retry
)
self.fanout_publisher.send_fanout(request)
def send_notify(self, target, context, message, version, retry=None):
request = zmq_request.NotificationRequest(
target, context=context, message=message, retry=retry,
version=version
)
self.notify_publisher.send_notify(request)
def cleanup(self):
cleaned = set()
for publisher in self.publishers.values():
if publisher not in cleaned:
publisher.cleanup()
cleaned.add(publisher)
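# Routing sketch (not part of the original file): the publishers mapping
# routes each message type to a publisher, falling back to "default".
# The publisher objects here are hypothetical; ZmqClientMixDirectPubSub
# in the previous file uses the same idea to send fanout/notify through
# the proxy publisher.
#
#   client = ZmqClientBase(
#       conf, matchmaker,
#       publishers={"default": direct_publisher,
#                   zmq_names.NOTIFY_TYPE: proxy_publisher})
#   client.send_call(target, ctx, {'method': 'ping'}, timeout=30)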


@ -1,146 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import logging
import threading
import futurist
import six
from oslo_messaging._drivers.zmq_driver.client import zmq_response
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
@six.add_metaclass(abc.ABCMeta)
class ReceiverBase(object):
"""Base response receiving interface."""
def __init__(self, conf):
self.conf = conf
self._lock = threading.Lock()
self._requests = {}
self._poller = zmq_async.get_poller()
self._executor = zmq_async.get_executor(method=self._run_loop)
self._executor.execute()
@abc.abstractproperty
def message_types(self):
"""A list of supported incoming response types."""
def register_socket(self, socket):
"""Register a socket for receiving data."""
self._poller.register(socket, recv_method=self.recv_response)
@abc.abstractmethod
def recv_response(self, socket):
"""Receive a response and return a tuple of the form
(reply_id, message_type, message_id, response).
"""
def track_request(self, request):
"""Track a request via already registered sockets and return
a list of futures for monitoring all types of responses.
"""
futures = []
for message_type in self.message_types:
future = futurist.Future()
self._set_future(request.message_id, message_type, future)
futures.append(future)
return futures
def untrack_request(self, request):
"""Untrack a request and stop monitoring any responses."""
for message_type in self.message_types:
self._pop_future(request.message_id, message_type)
def stop(self):
self._poller.close()
self._executor.stop()
def _get_future(self, message_id, message_type):
with self._lock:
return self._requests.get((message_id, message_type))
def _set_future(self, message_id, message_type, future):
with self._lock:
self._requests[(message_id, message_type)] = future
def _pop_future(self, message_id, message_type):
with self._lock:
return self._requests.pop((message_id, message_type), None)
def _run_loop(self):
data, socket = self._poller.poll(
timeout=self.conf.oslo_messaging_zmq.rpc_poll_timeout)
if data is None:
return
reply_id, message_type, message_id, response = data
assert message_type in self.message_types, \
"%s is not supported!" % zmq_names.message_type_str(message_type)
future = self._get_future(message_id, message_type)
if future is not None:
LOG.debug("Received %(msg_type)s for %(msg_id)s",
{"msg_type": zmq_names.message_type_str(message_type),
"msg_id": message_id})
future.set_result((reply_id, response))
class AckReceiver(ReceiverBase):
message_types = (zmq_names.ACK_TYPE,)
class ReplyReceiver(ReceiverBase):
message_types = (zmq_names.REPLY_TYPE,)
class ReplyReceiverProxy(ReplyReceiver):
def recv_response(self, socket):
empty = socket.recv()
assert empty == b'', "Empty expected!"
reply_id = socket.recv()
assert reply_id is not None, "Reply ID expected!"
message_type = int(socket.recv())
assert message_type == zmq_names.REPLY_TYPE, "Reply expected!"
message_id = socket.recv()
raw_reply = socket.recv_loaded()
assert isinstance(raw_reply, dict), "Dict expected!"
reply = zmq_response.Response(**raw_reply)
LOG.debug("Received reply for %s", message_id)
return reply_id, message_type, message_id, reply
class ReplyReceiverDirect(ReplyReceiver):
def recv_response(self, socket):
empty = socket.recv()
assert empty == b'', "Empty expected!"
raw_reply = socket.recv_loaded()
assert isinstance(raw_reply, dict), "Dict expected!"
reply = zmq_response.Response(**raw_reply)
LOG.debug("Received reply for %s", reply.message_id)
return reply.reply_id, reply.msg_type, reply.message_id, reply
class AckAndReplyReceiver(ReceiverBase):
message_types = (zmq_names.ACK_TYPE, zmq_names.REPLY_TYPE)


@ -1,123 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import logging
import uuid
import six
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._i18n import _LE
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
@six.add_metaclass(abc.ABCMeta)
class Request(object):
"""Zmq request abstract class
Represents socket (publisher) independent data object to publish.
Request object should contain all needed information for a publisher
to publish it, for instance: message payload, target, timeout
and retries etc.
"""
def __init__(self, target, context=None, message=None, retry=None):
"""Construct request object
:param target: Message destination target
:type target: oslo_messaging.Target
:param context: Message context
:type context: dict
:param message: Message payload to pass
:type message: dict
:param retry: an optional default connection retries configuration
None or -1 means to retry forever
0 means no retry
N means N retries
:type retry: int
"""
if self.msg_type not in zmq_names.MESSAGE_TYPES:
raise RuntimeError("Unknown message type!")
self.target = target
self.context = context
self.message = message
self.retry = retry
if not isinstance(retry, int) and retry is not None:
raise ValueError(
"retry must be an integer, not {0}".format(type(retry)))
self.message_id = str(uuid.uuid1())
@abc.abstractproperty
def msg_type(self):
"""ZMQ message type"""
class RpcRequest(Request):
def __init__(self, *args, **kwargs):
message = kwargs.get("message")
if message['method'] is None:
errmsg = _LE("No method specified for RPC call")
LOG.error(_LE("No method specified for RPC call"))
raise KeyError(errmsg)
super(RpcRequest, self).__init__(*args, **kwargs)
class CallRequest(RpcRequest):
msg_type = zmq_names.CALL_TYPE
def __init__(self, *args, **kwargs):
self.allowed_remote_exmods = kwargs.pop("allowed_remote_exmods")
self.timeout = kwargs.pop("timeout")
if self.timeout is None:
raise ValueError("Timeout should be specified for a RPC call!")
elif not isinstance(self.timeout, int):
raise ValueError(
"timeout must be an integer, not {0}"
.format(type(self.timeout)))
super(CallRequest, self).__init__(*args, **kwargs)
class CastRequest(RpcRequest):
msg_type = zmq_names.CAST_TYPE
class FanoutRequest(RpcRequest):
msg_type = zmq_names.CAST_FANOUT_TYPE
class NotificationRequest(Request):
msg_type = zmq_names.NOTIFY_TYPE
def __init__(self, *args, **kwargs):
self.version = kwargs.pop("version")
super(NotificationRequest, self).__init__(*args, **kwargs)
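# Example (not part of the original file): constructing a call request as
# the client classes are assumed to do; all values are illustrative.
#
#   request = CallRequest(target, context={'user': 'admin'},
#                         message={'method': 'ping', 'args': {}},
#                         retry=None, timeout=30,
#                         allowed_remote_exmods=[])
#   request.msg_type     # zmq_names.CALL_TYPE
#   request.message_id   # uuid1-based id set in Request.__init__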


@ -1,57 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_messaging._drivers.zmq_driver import zmq_names
class Response(object):
def __init__(self, msg_type=None, message_id=None,
reply_id=None, reply_body=None, failure=None):
self._msg_type = msg_type
self._message_id = message_id
self._reply_id = reply_id
self._reply_body = reply_body
self._failure = failure
@property
def msg_type(self):
return self._msg_type
@property
def message_id(self):
return self._message_id
@property
def reply_id(self):
return self._reply_id
@property
def reply_body(self):
return self._reply_body
@property
def failure(self):
return self._failure
def to_dict(self):
return {zmq_names.FIELD_MSG_TYPE: self._msg_type,
zmq_names.FIELD_MSG_ID: self._message_id,
zmq_names.FIELD_REPLY_ID: self._reply_id,
zmq_names.FIELD_REPLY_BODY: self._reply_body,
zmq_names.FIELD_FAILURE: self._failure}
def __str__(self):
return str(self.to_dict())
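# Example (not part of the original file): building a reply and
# serializing it for the wire, as the senders are assumed to do with
# socket.send_dumped(reply.to_dict()).
#
#   resp = Response(msg_type=zmq_names.REPLY_TYPE, message_id='m-1',
#                   reply_id=b'peer-id', reply_body={'ok': True})
#   payload = resp.to_dict()  # keyed by the zmq_names.FIELD_* constants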


@ -1,66 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
zmq = zmq_async.import_zmq()
class RoutingTable(object):
"""This class implements local routing-table cache
taken from matchmaker. Its purpose is to give the next routable
host id (remote DEALER's id) by request for specific target in
round-robin fashion.
"""
def __init__(self, conf, matchmaker):
self.conf = conf
self.matchmaker = matchmaker
self.routing_table = {}
self.routable_hosts = {}
def get_all_hosts(self, target):
self._update_routing_table(target)
return list(self.routable_hosts.get(str(target)) or [])
def get_routable_host(self, target):
self._update_routing_table(target)
hosts_for_target = self.routable_hosts[str(target)]
host = hosts_for_target.pop(0)
if not hosts_for_target:
self._renew_routable_hosts(target)
return host
def _is_tm_expired(self, tm):
return 0 <= self.conf.oslo_messaging_zmq.zmq_target_expire \
<= time.time() - tm
def _update_routing_table(self, target):
routing_record = self.routing_table.get(str(target))
if routing_record is None:
self._fetch_hosts(target)
self._renew_routable_hosts(target)
elif self._is_tm_expired(routing_record[1]):
self._fetch_hosts(target)
def _fetch_hosts(self, target):
self.routing_table[str(target)] = (self.matchmaker.get_hosts(
target, zmq_names.socket_type_str(zmq.DEALER)), time.time())
def _renew_routable_hosts(self, target):
hosts, _ = self.routing_table[str(target)]
self.routable_hosts[str(target)] = list(hosts)
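# Behaviour sketch (not part of the original file): get_routable_host()
# pops hosts one at a time and refills the list from the cached record
# once it is drained, which yields round-robin selection until
# zmq_target_expire forces a fresh matchmaker lookup.
#
#   rt = RoutingTable(conf, matchmaker)    # matchmaker is hypothetical
#   first = rt.get_routable_host(target)
#   second = rt.get_routable_host(target)  # next host in the cached list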


@ -1,105 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import logging
import six
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
@six.add_metaclass(abc.ABCMeta)
class SenderBase(object):
"""Base request/ack/reply sending interface."""
def __init__(self, conf):
self.conf = conf
@abc.abstractmethod
def send(self, socket, message):
pass
class RequestSender(SenderBase):
pass
class ReplySender(SenderBase):
pass
class RequestSenderProxy(RequestSender):
def send(self, socket, request):
socket.send(b'', zmq.SNDMORE)
socket.send(six.b(str(request.msg_type)), zmq.SNDMORE)
socket.send(six.b(request.routing_key), zmq.SNDMORE)
socket.send(six.b(request.message_id), zmq.SNDMORE)
socket.send_dumped(request.context, zmq.SNDMORE)
socket.send_dumped(request.message)
LOG.debug("->[proxy:%(addr)s] Sending %(msg_type)s message "
"%(msg_id)s to target %(target)s",
{"addr": list(socket.connections),
"msg_type": zmq_names.message_type_str(request.msg_type),
"msg_id": request.message_id,
"target": request.target})
class ReplySenderProxy(ReplySender):
def send(self, socket, reply):
LOG.debug("Replying to %s", reply.message_id)
assert reply.msg_type == zmq_names.REPLY_TYPE, "Reply expected!"
socket.send(b'', zmq.SNDMORE)
socket.send(six.b(str(reply.msg_type)), zmq.SNDMORE)
socket.send(reply.reply_id, zmq.SNDMORE)
socket.send(reply.message_id, zmq.SNDMORE)
socket.send_dumped(reply.to_dict())
class RequestSenderDirect(RequestSender):
def send(self, socket, request):
socket.send(b'', zmq.SNDMORE)
socket.send(six.b(str(request.msg_type)), zmq.SNDMORE)
socket.send_string(request.message_id, zmq.SNDMORE)
socket.send_dumped(request.context, zmq.SNDMORE)
socket.send_dumped(request.message)
LOG.debug("Sending %(msg_type)s message %(msg_id)s to "
"target %(target)s",
{"msg_type": zmq_names.message_type_str(request.msg_type),
"msg_id": request.message_id,
"target": request.target})
class ReplySenderDirect(ReplySender):
def send(self, socket, reply):
LOG.debug("Replying to %s", reply.message_id)
assert reply.msg_type == zmq_names.REPLY_TYPE, "Reply expected!"
socket.send(reply.reply_id, zmq.SNDMORE)
socket.send(b'', zmq.SNDMORE)
socket.send_dumped(reply.to_dict())
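The proxy sender above frames each request as a fixed multipart sequence: empty delimiter, message type, routing key, message id, context, payload. A sketch of that wire format, assuming a FakeSocket that merely records frames instead of writing to ZeroMQ (json stands in for the configured serializer):
import json

class FakeSocket(object):
    def __init__(self):
        self.frames = []
    def send(self, data, flags=0):
        self.frames.append(data)
    def send_dumped(self, obj, flags=0):
        # the real socket serializes with the configured mechanism
        self.frames.append(json.dumps(obj).encode())

SNDMORE = 2  # numeric value of zmq.SNDMORE, shown for illustration
sock = FakeSocket()
sock.send(b'', SNDMORE)                       # empty delimiter
sock.send(b'1', SNDMORE)                      # msg_type, e.g. CALL_TYPE
sock.send(b'ROUTER_compute.node-1', SNDMORE)  # routing key
sock.send(b'msg-id-1', SNDMORE)               # message id
sock.send_dumped({'user': 'admin'}, SNDMORE)  # request context
sock.send_dumped({'method': 'ping'})          # request payload
print(sock.frames)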

View File

@ -1,97 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._drivers.zmq_driver import zmq_socket
zmq = zmq_async.import_zmq()
class SocketsManager(object):
def __init__(self, conf, matchmaker, listener_type, socket_type):
self.conf = conf
self.matchmaker = matchmaker
self.listener_type = listener_type
self.socket_type = socket_type
self.zmq_context = zmq.Context()
self.outbound_sockets = {}
self.socket_to_publishers = None
self.socket_to_routers = None
def get_hosts(self, target):
return self.matchmaker.get_hosts(
target, zmq_names.socket_type_str(self.listener_type))
@staticmethod
def _key_from_target(target):
return target.topic if target.fanout else str(target)
def _get_hosts_and_connect(self, socket, target):
hosts = self.get_hosts(target)
self._connect_to_hosts(socket, target, hosts)
def _track_socket(self, socket, target):
key = self._key_from_target(target)
self.outbound_sockets[key] = (socket, time.time())
def _connect_to_hosts(self, socket, target, hosts):
for host in hosts:
socket.connect_to_host(host)
self._track_socket(socket, target)
def _check_for_new_hosts(self, target):
key = self._key_from_target(target)
socket, tm = self.outbound_sockets[key]
if 0 <= self.conf.oslo_messaging_zmq.zmq_target_expire \
<= time.time() - tm:
self._get_hosts_and_connect(socket, target)
return socket
def get_socket(self, target):
key = self._key_from_target(target)
if key in self.outbound_sockets:
socket = self._check_for_new_hosts(target)
else:
socket = zmq_socket.ZmqSocket(self.conf, self.zmq_context,
self.socket_type, immediate=False)
self._get_hosts_and_connect(socket, target)
return socket
def get_socket_to_publishers(self):
if self.socket_to_publishers is not None:
return self.socket_to_publishers
self.socket_to_publishers = zmq_socket.ZmqSocket(
self.conf, self.zmq_context, self.socket_type)
publishers = self.matchmaker.get_publishers()
for pub_address, router_address in publishers:
self.socket_to_publishers.connect_to_host(router_address)
return self.socket_to_publishers
def get_socket_to_routers(self):
if self.socket_to_routers is not None:
return self.socket_to_routers
self.socket_to_routers = zmq_socket.ZmqSocket(
self.conf, self.zmq_context, self.socket_type)
routers = self.matchmaker.get_routers()
for router_address in routers:
self.socket_to_routers.connect_to_host(router_address)
return self.socket_to_routers
def cleanup(self):
for socket, tm in self.outbound_sockets.values():
socket.close()
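Sockets are cached per target key: fanout targets share one socket per topic, while direct targets get one per full target string. A tiny restatement of _key_from_target() with a minimal Target stand-in (illustrative only):
class Target(object):
    def __init__(self, topic, server=None, fanout=False):
        self.topic, self.server, self.fanout = topic, server, fanout
    def __str__(self):
        return "%s.%s" % (self.topic, self.server)

def key_from_target(target):
    # fanout sockets are shared per topic; direct ones per target
    return target.topic if target.fanout else str(target)

print(key_from_target(Target("compute", "node-1")))     # compute.node-1
print(key_from_target(Target("compute", fanout=True)))  # compute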

View File

@ -1,167 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import collections
import six
from oslo_messaging._drivers.zmq_driver import zmq_address
@six.add_metaclass(abc.ABCMeta)
class MatchMakerBase(object):
def __init__(self, conf, *args, **kwargs):
super(MatchMakerBase, self).__init__()
self.conf = conf
self.url = kwargs.get('url')
@abc.abstractmethod
def register_publisher(self, hostname):
"""Register publisher on nameserver.
This works for PUB-SUB only
:param hostname: host for the topic in "host:port" format
host for back-chatter in "host:port" format
:type hostname: tuple
"""
@abc.abstractmethod
def unregister_publisher(self, hostname):
"""Unregister publisher on nameserver.
This works for PUB-SUB only
:param hostname: host for the topic in "host:port" format
host for back-chatter in "host:port" format
:type hostname: tuple
"""
@abc.abstractmethod
def get_publishers(self):
"""Get all publisher-hosts from nameserver.
:returns: a list of tuples of strings "hostname:port" hosts
"""
@abc.abstractmethod
def register_router(self, hostname):
"""Register router on the nameserver.
This works for ROUTER proxy only
:param hostname: host for the topic in "host:port" format
:type hostname: string
"""
@abc.abstractmethod
def unregister_router(self, hostname):
"""Unregister router on the nameserver.
This works for ROUTER proxy only
:param hostname: host for the topic in "host:port" format
:type hostname: string
"""
@abc.abstractmethod
def get_routers(self):
"""Get all router-hosts from nameserver.
:returns: a list of strings "hostname:port" hosts
"""
@abc.abstractmethod
def register(self, target, hostname, listener_type, expire=-1):
"""Register target on nameserver.
        If a record already exists and has an expiration timeout, it
        will be updated. Existing records without a timeout will stay
        untouched.
:param target: the target for host
:type target: Target
:param hostname: host for the topic in "host:port" format
:type hostname: String
:param listener_type: Listener socket type ROUTER, SUB etc.
:type listener_type: String
:param expire: Record expiration timeout
:type expire: int
"""
@abc.abstractmethod
def unregister(self, target, hostname, listener_type):
"""Unregister target from nameserver.
:param target: the target for host
:type target: Target
:param hostname: host for the topic in "host:port" format
:type hostname: String
:param listener_type: Listener socket type ROUTER, SUB etc.
:type listener_type: String
"""
@abc.abstractmethod
def get_hosts(self, target, listener_type):
"""Get all hosts from nameserver by target.
        :param target: the target to resolve hosts for
:type target: Target
:returns: a list of "hostname:port" hosts
"""
class DummyMatchMaker(MatchMakerBase):
def __init__(self, conf, *args, **kwargs):
super(DummyMatchMaker, self).__init__(conf, *args, **kwargs)
self._cache = collections.defaultdict(list)
self._publishers = set()
self._routers = set()
def register_publisher(self, hostname):
if hostname not in self._publishers:
self._publishers.add(hostname)
def unregister_publisher(self, hostname):
if hostname in self._publishers:
self._publishers.remove(hostname)
def get_publishers(self):
return list(self._publishers)
def register_router(self, hostname):
if hostname not in self._routers:
self._routers.add(hostname)
def unregister_router(self, hostname):
if hostname in self._routers:
self._routers.remove(hostname)
def get_routers(self):
return list(self._routers)
def register(self, target, hostname, listener_type, expire=-1):
key = zmq_address.target_to_key(target, listener_type)
if hostname not in self._cache[key]:
self._cache[key].append(hostname)
def unregister(self, target, hostname, listener_type):
key = zmq_address.target_to_key(target, listener_type)
if hostname in self._cache[key]:
self._cache[key].remove(hostname)
def get_hosts(self, target, listener_type):
key = zmq_address.target_to_key(target, listener_type)
return self._cache[key]
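The contract behind register()/get_hosts() is a nameserver mapping from a target-derived key to a list of hosts. A minimal in-memory sketch, assuming the "<listener_type>_<topic>.<server>" key format that zmq_address.target_to_key() produces:
import collections

cache = collections.defaultdict(list)

def register(topic, server, listener_type, hostname):
    cache["%s_%s.%s" % (listener_type, topic, server)].append(hostname)

def get_hosts(topic, server, listener_type):
    return cache["%s_%s.%s" % (listener_type, topic, server)]

register("compute", "node-1", "ROUTER", "10.0.0.5:9501")
print(get_hosts("compute", "node-1", "ROUTER"))  # ['10.0.0.5:9501']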

View File

@ -1,204 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import inspect
import logging
from oslo_config import cfg
from oslo_utils import importutils
from oslo_messaging._drivers.zmq_driver.matchmaker import base
from oslo_messaging._drivers.zmq_driver import zmq_address
from retrying import retry
redis = importutils.try_import('redis')
redis_sentinel = importutils.try_import('redis.sentinel')
LOG = logging.getLogger(__name__)
matchmaker_redis_opts = [
cfg.StrOpt('host',
default='127.0.0.1',
deprecated_for_removal=True,
deprecated_reason="Replaced by [DEFAULT]/transport_url",
help='Host to locate redis.'),
cfg.PortOpt('port',
default=6379,
deprecated_for_removal=True,
deprecated_reason="Replaced by [DEFAULT]/transport_url",
help='Use this port to connect to redis host.'),
cfg.StrOpt('password',
default='',
secret=True,
deprecated_for_removal=True,
deprecated_reason="Replaced by [DEFAULT]/transport_url",
help='Password for Redis server (optional).'),
cfg.ListOpt('sentinel_hosts',
default=[],
deprecated_for_removal=True,
deprecated_reason="Replaced by [DEFAULT]/transport_url",
help='List of Redis Sentinel hosts (fault tolerance mode) e.g.\
[host:port, host1:port ... ]'),
cfg.StrOpt('sentinel_group_name',
default='oslo-messaging-zeromq',
help='Redis replica set name.'),
cfg.IntOpt('wait_timeout',
default=5000,
help='Time in ms to wait between connection attempts.'),
cfg.IntOpt('check_timeout',
default=60000,
help='Time in ms to wait before the transaction is killed.'),
cfg.IntOpt('socket_timeout',
default=10000,
help='Timeout in ms on blocking socket operations'),
]
_PUBLISHERS_KEY = "PUBLISHERS"
_ROUTERS_KEY = "ROUTERS"
_RETRY_METHODS = ("get_hosts", "get_publishers", "get_routers")
def retry_if_connection_error(ex):
return isinstance(ex, redis.ConnectionError)
def retry_if_empty(hosts):
return not hosts
def apply_retrying(obj, cfg):
for attr_name, attr in inspect.getmembers(obj):
if not (inspect.ismethod(attr) or inspect.isfunction(attr)):
continue
if attr_name in _RETRY_METHODS:
setattr(
obj,
attr_name,
retry(
wait_fixed=cfg.matchmaker_redis.wait_timeout,
stop_max_delay=cfg.matchmaker_redis.check_timeout,
retry_on_exception=retry_if_connection_error,
retry_on_result=retry_if_empty
)(attr))
class RedisMatchMaker(base.MatchMakerBase):
def __init__(self, conf, *args, **kwargs):
super(RedisMatchMaker, self).__init__(conf, *args, **kwargs)
self.conf.register_opts(matchmaker_redis_opts, "matchmaker_redis")
self.sentinel_hosts = self._extract_sentinel_options()
if not self.sentinel_hosts:
self.standalone_redis = self._extract_standalone_redis_options()
self._redis = redis.StrictRedis(
host=self.standalone_redis["host"],
port=self.standalone_redis["port"],
password=self.standalone_redis["password"]
)
else:
socket_timeout = self.conf.matchmaker_redis.socket_timeout / 1000.
sentinel = redis.sentinel.Sentinel(
sentinels=self.sentinel_hosts,
socket_timeout=socket_timeout
)
self._redis = sentinel.master_for(
self.conf.matchmaker_redis.sentinel_group_name,
socket_timeout=socket_timeout
)
apply_retrying(self, self.conf)
def _extract_sentinel_options(self):
if self.url and self.url.hosts:
if len(self.url.hosts) > 1:
return [(host.hostname, host.port) for host in self.url.hosts]
elif self.conf.matchmaker_redis.sentinel_hosts:
s = self.conf.matchmaker_redis.sentinel_hosts
return [tuple(i.split(":")) for i in s]
def _extract_standalone_redis_options(self):
if self.url and self.url.hosts:
redis_host = self.url.hosts[0]
return {"host": redis_host.hostname,
"port": redis_host.port,
"password": redis_host.password}
else:
return {"host": self.conf.matchmaker_redis.host,
"port": self.conf.matchmaker_redis.port,
"password": self.conf.matchmaker_redis.password}
def _add_key_with_expire(self, key, value, expire):
self._redis.sadd(key, value)
if expire > 0:
self._redis.expire(key, expire)
def register_publisher(self, hostname, expire=-1):
host_str = ",".join(hostname)
self._add_key_with_expire(_PUBLISHERS_KEY, host_str, expire)
def unregister_publisher(self, hostname):
host_str = ",".join(hostname)
self._redis.srem(_PUBLISHERS_KEY, host_str)
def get_publishers(self):
hosts = []
hosts.extend([tuple(host_str.split(","))
for host_str in
self._get_hosts_by_key(_PUBLISHERS_KEY)])
return hosts
def register_router(self, hostname, expire=-1):
self._add_key_with_expire(_ROUTERS_KEY, hostname, expire)
def unregister_router(self, hostname):
self._redis.srem(_ROUTERS_KEY, hostname)
def get_routers(self):
return self._get_hosts_by_key(_ROUTERS_KEY)
def _get_hosts_by_key(self, key):
return self._redis.smembers(key)
def register(self, target, hostname, listener_type, expire=-1):
if target.topic and target.server:
key = zmq_address.target_to_key(target, listener_type)
self._add_key_with_expire(key, hostname, expire)
if target.topic:
key = zmq_address.prefix_str(target.topic, listener_type)
self._add_key_with_expire(key, hostname, expire)
def unregister(self, target, hostname, listener_type):
if target.topic and target.server:
key = zmq_address.target_to_key(target, listener_type)
self._redis.srem(key, hostname)
if target.topic:
key = zmq_address.prefix_str(target.topic, listener_type)
self._redis.srem(key, hostname)
def get_hosts(self, target, listener_type):
LOG.debug("[Redis] get_hosts for target %s", target)
hosts = []
if target.topic and target.server:
key = zmq_address.target_to_key(target, listener_type)
hosts.extend(self._get_hosts_by_key(key))
if not hosts and target.topic:
key = zmq_address.prefix_str(target.topic, listener_type)
hosts.extend(self._get_hosts_by_key(key))
return hosts
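register() above writes each host under two Redis set keys: the exact target key and a topic-only fallback that get_hosts() consults when no exact match exists. An illustration of that key layout, using a plain dict of sets instead of a real Redis connection:
store = {}

def sadd(key, value):
    # stands in for redis.StrictRedis.sadd
    store.setdefault(key, set()).add(value)

# what register(Target(topic="compute", server="node-1"), host, "ROUTER")
# would produce, assuming zmq_address's key helpers:
sadd("ROUTER_compute.node-1", "10.0.0.5:9501")  # exact target key
sadd("ROUTER_compute", "10.0.0.5:9501")         # topic-only fallback key
print(sorted(store))  # ['ROUTER_compute', 'ROUTER_compute.node-1']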

View File

@ -1,80 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading
import eventlet
from oslo_messaging._drivers.zmq_driver import zmq_poller
class GreenPoller(zmq_poller.ZmqPoller):
def __init__(self):
self.incoming_queue = eventlet.queue.LightQueue()
self.green_pool = eventlet.GreenPool()
self.thread_by_socket = {}
def register(self, socket, recv_method=None):
if socket not in self.thread_by_socket:
self.thread_by_socket[socket] = self.green_pool.spawn(
self._socket_receive, socket, recv_method)
def _socket_receive(self, socket, recv_method=None):
while True:
if recv_method:
incoming = recv_method(socket)
else:
incoming = socket.recv_multipart()
self.incoming_queue.put((incoming, socket))
eventlet.sleep()
def poll(self, timeout=None):
try:
return self.incoming_queue.get(timeout=timeout)
except eventlet.queue.Empty:
return None, None
def close(self):
for thread in self.thread_by_socket.values():
thread.kill()
self.thread_by_socket = {}
class GreenExecutor(zmq_poller.Executor):
def __init__(self, method):
self._method = method
super(GreenExecutor, self).__init__(None)
self._done = threading.Event()
def _loop(self):
while not self._done.is_set():
self._method()
eventlet.sleep()
def execute(self):
self.thread = eventlet.spawn(self._loop)
def wait(self):
if self.thread is not None:
self.thread.wait()
def stop(self):
if self.thread is not None:
self.thread.kill()
def done(self):
self._done.set()

View File

@ -1,85 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import threading
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_poller
zmq = zmq_async.import_zmq()
LOG = logging.getLogger(__name__)
class ThreadingPoller(zmq_poller.ZmqPoller):
def __init__(self):
self.poller = zmq.Poller()
self.recv_methods = {}
def register(self, socket, recv_method=None):
if socket in self.recv_methods:
return
LOG.debug("Registering socket")
if recv_method is not None:
self.recv_methods[socket] = recv_method
self.poller.register(socket, zmq.POLLIN)
def poll(self, timeout=None):
if timeout is not None and timeout > 0:
timeout *= 1000 # convert seconds to milliseconds
sockets = {}
try:
sockets = dict(self.poller.poll(timeout=timeout))
except zmq.ZMQError as e:
LOG.debug("Polling terminated with error: %s", e)
if not sockets:
return None, None
for socket in sockets:
if socket in self.recv_methods:
return self.recv_methods[socket](socket), socket
else:
return socket.recv_multipart(), socket
def close(self):
pass # Nothing to do for threading poller
class ThreadingExecutor(zmq_poller.Executor):
def __init__(self, method):
self._method = method
super(ThreadingExecutor, self).__init__(
threading.Thread(target=self._loop))
self._stop = threading.Event()
def _loop(self):
while not self._stop.is_set():
self._method()
def execute(self):
self.thread.daemon = True
self.thread.start()
def stop(self):
self._stop.set()
def wait(self):
pass
def done(self):
self._stop.set()

View File

@ -1,98 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import socket
from stevedore import driver
from oslo_config import cfg
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._i18n import _LI
zmq = zmq_async.import_zmq()
LOG = logging.getLogger(__name__)
zmq_proxy_opts = [
cfg.StrOpt('host', default=socket.gethostname(),
           help='Hostname (FQDN) of the current proxy,'
                ' an ethernet interface, or IP address.'),
cfg.IntOpt('frontend_port', default=0,
help='Front-end ROUTER port number. Zero means random.'),
cfg.IntOpt('backend_port', default=0,
help='Back-end ROUTER port number. Zero means random.'),
cfg.IntOpt('publisher_port', default=0,
help='Publisher port number. Zero means random.'),
]
class ZmqProxy(object):
"""Wrapper class for Publishers and Routers proxies.
The main reason to have a proxy is high complexity of TCP sockets number
growth with direct connections (when services connect directly to
each other). The general complexity for ZeroMQ+Openstack deployment
with direct connections may be square(N) (where N is a number of nodes
in deployment). With proxy the complexity is reduced to k*N where
k is a number of services.
Currently there are 2 types of proxy, they are Publishers and Routers.
Publisher proxy serves for PUB-SUB pattern implementation where
Publisher is a server which performs broadcast to subscribers.
Router is used for direct message types in case of number of TCP socket
connections is critical for specific deployment. Generally 3 publishers
is enough for deployment.
Router is used for direct messages in order to reduce the number of
allocated TCP sockets in controller. The list of requirements to Router:
1. There may be any number of routers in the deployment. Routers are
registered in a name-server and client connects dynamically to all of
them performing load balancing.
2. Routers should be transparent for clients and servers. Which means
it doesn't change the way of messaging between client and the final
target by hiding the target from a client.
3. Router may be restarted or shut down at any time losing all messages
in its queue. Smart retrying (based on acknowledgements from server
side) and load balancing between other Router instances from the
client side should handle the situation.
4. Router takes all the routing information from message envelope and
doesn't perform Target-resolution in any way.
5. Routers don't talk to each other and no synchronization is needed.
6. Load balancing is performed by the client in a round-robin fashion.
Those requirements should limit the performance impact caused by using
of proxies making proxies as lightweight as possible.
"""
def __init__(self, conf, proxy_cls):
super(ZmqProxy, self).__init__()
self.conf = conf
self.matchmaker = driver.DriverManager(
'oslo.messaging.zmq.matchmaker',
self.conf.oslo_messaging_zmq.rpc_zmq_matchmaker,
).driver(self.conf)
self.context = zmq.Context()
self.proxy = proxy_cls(conf, self.context, self.matchmaker)
def run(self):
self.proxy.run()
def close(self):
LOG.info(_LI("Proxy shutting down ..."))
self.proxy.cleanup()
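The complexity claim in the docstring above is easy to check numerically; a back-of-the-envelope sketch with assumed values for N and k:
N = 100  # nodes in the deployment (assumed)
k = 3    # proxy services (assumed)

# full mesh: every pair of nodes may hold a direct TCP connection
print(N * (N - 1) // 2)  # 4950 connections, ~square(N) growth
# proxied: each node keeps connections only to the k proxies
print(k * N)             # 300 connections, linear growth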

View File

@ -1,74 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._drivers.zmq_driver import zmq_socket
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class PublisherProxy(object):
"""PUB/SUB based request publisher
    The publisher is intended to be used for the Fanout and Notify
    multi-sending patterns.
    It differs from direct publishers (DEALER- or PUSH-based) in the
    way it treats the matchmaker: here all publishers register in the
    matchmaker. Subscribers (server-side) take the list of publishers
    and connect to all of them, but subscribe only to a specific
    topic-filtering tag generated from the Target object.
"""
def __init__(self, conf, matchmaker):
super(PublisherProxy, self).__init__()
self.conf = conf
self.zmq_context = zmq.Context()
self.matchmaker = matchmaker
port = conf.zmq_proxy_opts.publisher_port
self.socket = zmq_socket.ZmqFixedPortSocket(
self.conf, self.zmq_context, zmq.PUB, conf.zmq_proxy_opts.host,
port) if port != 0 else \
zmq_socket.ZmqRandomPortSocket(
self.conf, self.zmq_context, zmq.PUB, conf.zmq_proxy_opts.host)
self.host = self.socket.connect_address
def send_request(self, multipart_message):
message_type = multipart_message.pop(0)
assert message_type in (zmq_names.CAST_FANOUT_TYPE,
zmq_names.NOTIFY_TYPE), "Fanout expected!"
topic_filter = multipart_message.pop(0)
reply_id = multipart_message.pop(0)
message_id = multipart_message.pop(0)
assert reply_id is not None, "Reply id expected!"
self.socket.send(topic_filter, zmq.SNDMORE)
self.socket.send(message_id, zmq.SNDMORE)
self.socket.send_multipart(multipart_message)
LOG.debug("Publishing message %(message_id)s on [%(topic)s]",
{"topic": topic_filter,
"message_id": message_id})
def cleanup(self):
self.socket.close()

View File

@ -1,152 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import six
from oslo_messaging._drivers.zmq_driver.proxy import zmq_publisher_proxy
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._drivers.zmq_driver import zmq_socket
from oslo_messaging._drivers.zmq_driver import zmq_updater
from oslo_messaging._i18n import _LI
zmq = zmq_async.import_zmq()
LOG = logging.getLogger(__name__)
class UniversalQueueProxy(object):
def __init__(self, conf, context, matchmaker):
self.conf = conf
self.context = context
super(UniversalQueueProxy, self).__init__()
self.matchmaker = matchmaker
self.poller = zmq_async.get_poller()
port = conf.zmq_proxy_opts.frontend_port
host = conf.zmq_proxy_opts.host
self.fe_router_socket = zmq_socket.ZmqFixedPortSocket(
conf, context, zmq.ROUTER, host,
conf.zmq_proxy_opts.frontend_port) if port != 0 else \
zmq_socket.ZmqRandomPortSocket(conf, context, zmq.ROUTER, host)
port = conf.zmq_proxy_opts.backend_port
self.be_router_socket = zmq_socket.ZmqFixedPortSocket(
conf, context, zmq.ROUTER, host,
conf.zmq_proxy_opts.backend_port) if port != 0 else \
zmq_socket.ZmqRandomPortSocket(conf, context, zmq.ROUTER, host)
self.poller.register(self.fe_router_socket.handle,
self._receive_in_request)
self.poller.register(self.be_router_socket.handle,
self._receive_in_request)
self.pub_publisher = zmq_publisher_proxy.PublisherProxy(
conf, matchmaker)
self._router_updater = RouterUpdater(
conf, matchmaker, self.pub_publisher.host,
self.fe_router_socket.connect_address,
self.be_router_socket.connect_address)
def run(self):
message, socket = self.poller.poll()
if message is None:
return
msg_type = message[0]
if self.conf.oslo_messaging_zmq.use_pub_sub and \
msg_type in (zmq_names.CAST_FANOUT_TYPE,
zmq_names.NOTIFY_TYPE):
self.pub_publisher.send_request(message)
else:
self._redirect_message(self.be_router_socket.handle
if socket is self.fe_router_socket.handle
else self.fe_router_socket.handle, message)
@staticmethod
def _receive_in_request(socket):
try:
reply_id = socket.recv()
assert reply_id is not None, "Valid id expected"
empty = socket.recv()
assert empty == b'', "Empty delimiter expected"
msg_type = int(socket.recv())
routing_key = socket.recv()
payload = socket.recv_multipart()
payload.insert(0, reply_id)
payload.insert(0, routing_key)
payload.insert(0, msg_type)
return payload
except (AssertionError, ValueError, zmq.ZMQError):
LOG.error("Received message with wrong format")
return None
@staticmethod
def _redirect_message(socket, multipart_message):
message_type = multipart_message.pop(0)
routing_key = multipart_message.pop(0)
reply_id = multipart_message.pop(0)
message_id = multipart_message[0]
socket.send(routing_key, zmq.SNDMORE)
socket.send(b'', zmq.SNDMORE)
socket.send(reply_id, zmq.SNDMORE)
socket.send(six.b(str(message_type)), zmq.SNDMORE)
LOG.debug("Dispatching %(msg_type)s message %(msg_id)s - to %(rkey)s" %
{"msg_type": zmq_names.message_type_str(message_type),
"msg_id": message_id,
"rkey": routing_key})
socket.send_multipart(multipart_message)
def cleanup(self):
self.fe_router_socket.close()
self.be_router_socket.close()
self.pub_publisher.cleanup()
self._router_updater.cleanup()
class RouterUpdater(zmq_updater.UpdaterBase):
"""This entity performs periodic async updates
from router proxy to the matchmaker.
"""
def __init__(self, conf, matchmaker, publisher_address, fe_router_address,
be_router_address):
self.publisher_address = publisher_address
self.fe_router_address = fe_router_address
self.be_router_address = be_router_address
super(RouterUpdater, self).__init__(conf, matchmaker,
self._update_records)
def _update_records(self):
self.matchmaker.register_publisher(
(self.publisher_address, self.fe_router_address),
expire=self.conf.oslo_messaging_zmq.zmq_target_expire)
LOG.info(_LI("[PUB:%(pub)s, ROUTER:%(router)s] Update PUB publisher"),
{"pub": self.publisher_address,
"router": self.fe_router_address})
self.matchmaker.register_router(
self.be_router_address,
expire=self.conf.oslo_messaging_zmq.zmq_target_expire)
LOG.info(_LI("[Backend ROUTER:%(router)s] Update ROUTER"),
{"router": self.be_router_address})
def cleanup(self):
super(RouterUpdater, self).cleanup()
self.matchmaker.unregister_publisher(
(self.publisher_address, self.fe_router_address))
self.matchmaker.unregister_router(
self.be_router_address)
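run() above is symmetric: a message arriving on the frontend ROUTER leaves through the backend ROUTER and vice versa, so one proxy carries both requests and replies. The redirect rule in isolation (a sketch; plain objects stand in for sockets):
def pick_out_socket(in_socket, fe, be):
    # traffic entering one side of the proxy always exits the other
    return be if in_socket is fe else fe

fe, be = object(), object()
assert pick_out_socket(fe, fe, be) is be  # request path
assert pick_out_socket(be, fe, be) is fe  # reply path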

View File

@ -1,128 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import logging
import six
from oslo_messaging._drivers import common as rpc_common
from oslo_messaging._drivers.zmq_driver import zmq_address
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._drivers.zmq_driver import zmq_socket
from oslo_messaging._drivers.zmq_driver import zmq_updater
from oslo_messaging._i18n import _LE
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
@six.add_metaclass(abc.ABCMeta)
class ConsumerBase(object):
def __init__(self, conf, poller, server):
self.conf = conf
self.poller = poller
self.server = server
self.sockets = []
self.context = zmq.Context()
def stop(self):
"""Stop consumer polling/updates"""
pass
@abc.abstractmethod
def receive_message(self, target):
"""Method for poller - receiving message routine"""
def cleanup(self):
for socket in self.sockets:
if not socket.handle.closed:
socket.close()
self.sockets = []
class SingleSocketConsumer(ConsumerBase):
def __init__(self, conf, poller, server, socket_type):
super(SingleSocketConsumer, self).__init__(conf, poller, server)
self.matchmaker = server.matchmaker
self.target = server.target
self.socket_type = socket_type
self.host = None
self.socket = self.subscribe_socket(socket_type)
self.target_updater = TargetUpdater(
conf, self.matchmaker, self.target, self.host, socket_type)
def stop(self):
self.target_updater.stop()
def subscribe_socket(self, socket_type):
try:
socket = zmq_socket.ZmqRandomPortSocket(
self.conf, self.context, socket_type)
self.sockets.append(socket)
LOG.debug("Run %(stype)s consumer on %(addr)s:%(port)d",
{"stype": zmq_names.socket_type_str(socket_type),
"addr": socket.bind_address,
"port": socket.port})
self.host = zmq_address.combine_address(
self.conf.oslo_messaging_zmq.rpc_zmq_host, socket.port)
self.poller.register(socket, self.receive_message)
return socket
except zmq.ZMQError as e:
            errmsg = _LE("Failed binding to port %(port)d: %(e)s") \
                % {"port": self.port, "e": e}
            LOG.error(_LE("Failed binding to port %(port)d: %(e)s"),
                      {"port": self.port, "e": e})
raise rpc_common.RPCException(errmsg)
@property
def address(self):
return self.socket.bind_address
@property
def port(self):
return self.socket.port
def cleanup(self):
self.target_updater.cleanup()
super(SingleSocketConsumer, self).cleanup()
class TargetUpdater(zmq_updater.UpdaterBase):
"""This entity performs periodic async updates
to the matchmaker.
"""
def __init__(self, conf, matchmaker, target, host, socket_type):
self.target = target
self.host = host
self.socket_type = socket_type
super(TargetUpdater, self).__init__(conf, matchmaker,
self._update_target)
def _update_target(self):
self.matchmaker.register(
self.target, self.host,
zmq_names.socket_type_str(self.socket_type),
expire=self.conf.oslo_messaging_zmq.zmq_target_expire)
def stop(self):
super(TargetUpdater, self).stop()
self.matchmaker.unregister(
self.target, self.host,
zmq_names.socket_type_str(self.socket_type))

View File

@ -1,93 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from oslo_messaging._drivers import common as rpc_common
from oslo_messaging._drivers.zmq_driver.client import zmq_senders
from oslo_messaging._drivers.zmq_driver.client import zmq_sockets_manager
from oslo_messaging._drivers.zmq_driver.server.consumers \
import zmq_consumer_base
from oslo_messaging._drivers.zmq_driver.server import zmq_incoming_message
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._drivers.zmq_driver import zmq_updater
from oslo_messaging._i18n import _LE, _LI
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class DealerConsumer(zmq_consumer_base.SingleSocketConsumer):
def __init__(self, conf, poller, server):
self.sender = zmq_senders.ReplySenderProxy(conf)
self.sockets_manager = zmq_sockets_manager.SocketsManager(
conf, server.matchmaker, zmq.ROUTER, zmq.DEALER)
self.host = None
super(DealerConsumer, self).__init__(conf, poller, server, zmq.DEALER)
self.connection_updater = ConsumerConnectionUpdater(
conf, self.matchmaker, self.socket)
LOG.info(_LI("[%s] Run DEALER consumer"), self.host)
def subscribe_socket(self, socket_type):
try:
socket = self.sockets_manager.get_socket_to_routers()
self.sockets.append(socket)
self.host = socket.handle.identity
self.poller.register(socket, self.receive_message)
return socket
except zmq.ZMQError as e:
LOG.error(_LE("Failed connecting to ROUTER socket %(e)s") % e)
raise rpc_common.RPCException(str(e))
def receive_message(self, socket):
try:
empty = socket.recv()
assert empty == b'', 'Bad format: empty delimiter expected'
reply_id = socket.recv()
message_type = int(socket.recv())
message_id = socket.recv()
context = socket.recv_loaded()
message = socket.recv_loaded()
LOG.debug("[%(host)s] Received %(msg_type)s message %(msg_id)s",
{"host": self.host,
"msg_type": zmq_names.message_type_str(message_type),
"msg_id": message_id})
if message_type == zmq_names.CALL_TYPE:
return zmq_incoming_message.ZmqIncomingMessage(
context, message, reply_id, message_id, socket, self.sender
)
elif message_type in zmq_names.NON_BLOCKING_TYPES:
return zmq_incoming_message.ZmqIncomingMessage(context,
message)
else:
LOG.error(_LE("Unknown message type: %s"),
zmq_names.message_type_str(message_type))
except (zmq.ZMQError, AssertionError, ValueError) as e:
LOG.error(_LE("Receiving message failure: %s"), str(e))
def cleanup(self):
LOG.info(_LI("[%s] Destroy DEALER consumer"), self.host)
self.connection_updater.cleanup()
super(DealerConsumer, self).cleanup()
class ConsumerConnectionUpdater(zmq_updater.ConnectionUpdater):
def _update_connection(self):
routers = self.matchmaker.get_routers()
for router_address in routers:
self.socket.connect_to_host(router_address)
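receive_message() above consumes a fixed frame sequence from the DEALER socket. A sketch of that order with a plain list standing in for the socket; values are illustrative only:
frames = [b'', b'reply-id-1', b'1', b'msg-id-1',
          b'{"user": "admin"}', b'{"method": "ping"}']
empty, reply_id = frames[0], frames[1]
assert empty == b'', 'Bad format: empty delimiter expected'
message_type = int(frames[2])            # e.g. CALL_TYPE == 1
message_id = frames[3]
context, message = frames[4], frames[5]  # recv_loaded() deserializes these
print(message_type, message_id, reply_id)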

View File

@ -1,71 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from oslo_messaging._drivers.zmq_driver.client import zmq_senders
from oslo_messaging._drivers.zmq_driver.server.consumers \
import zmq_consumer_base
from oslo_messaging._drivers.zmq_driver.server import zmq_incoming_message
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
from oslo_messaging._i18n import _LE, _LI
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class RouterConsumer(zmq_consumer_base.SingleSocketConsumer):
def __init__(self, conf, poller, server):
self.sender = zmq_senders.ReplySenderDirect(conf)
super(RouterConsumer, self).__init__(conf, poller, server, zmq.ROUTER)
LOG.info(_LI("[%s] Run ROUTER consumer"), self.host)
def _receive_request(self, socket):
reply_id = socket.recv()
empty = socket.recv()
assert empty == b'', 'Bad format: empty delimiter expected'
msg_type = int(socket.recv())
message_id = socket.recv_string()
context = socket.recv_loaded()
message = socket.recv_loaded()
return reply_id, msg_type, message_id, context, message
def receive_message(self, socket):
try:
reply_id, msg_type, message_id, context, message = \
self._receive_request(socket)
LOG.debug("[%(host)s] Received %(msg_type)s message %(msg_id)s",
{"host": self.host,
"msg_type": zmq_names.message_type_str(msg_type),
"msg_id": message_id})
if msg_type == zmq_names.CALL_TYPE:
return zmq_incoming_message.ZmqIncomingMessage(
context, message, reply_id, message_id, socket, self.sender
)
elif msg_type in zmq_names.NON_BLOCKING_TYPES:
return zmq_incoming_message.ZmqIncomingMessage(context,
message)
else:
LOG.error(_LE("Unknown message type: %s"),
zmq_names.message_type_str(msg_type))
except (zmq.ZMQError, AssertionError, ValueError) as e:
LOG.error(_LE("Receiving message failed: %s"), str(e))
def cleanup(self):
LOG.info(_LI("[%s] Destroy ROUTER consumer"), self.host)
super(RouterConsumer, self).cleanup()

View File

@ -1,82 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import six
from oslo_messaging._drivers.zmq_driver.server.consumers \
import zmq_consumer_base
from oslo_messaging._drivers.zmq_driver.server import zmq_incoming_message
from oslo_messaging._drivers.zmq_driver import zmq_address
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_socket
from oslo_messaging._i18n import _LE
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class SubConsumer(zmq_consumer_base.ConsumerBase):
def __init__(self, conf, poller, server):
super(SubConsumer, self).__init__(conf, poller, server)
self.matchmaker = server.matchmaker
self.target = server.target
self.socket = zmq_socket.ZmqSocket(self.conf, self.context, zmq.SUB)
self.sockets.append(self.socket)
self._subscribe_on_target(self.target)
self.on_publishers(self.matchmaker.get_publishers())
self.poller.register(self.socket, self.receive_message)
def on_publishers(self, publishers):
for host, sync in publishers:
self.socket.connect(zmq_address.get_tcp_direct_address(host))
LOG.debug("[%s] SUB consumer connected to publishers %s",
self.socket.handle.identity, publishers)
def _subscribe_on_target(self, target):
topic_filter = zmq_address.target_to_subscribe_filter(target)
if target.topic:
self.socket.setsockopt(zmq.SUBSCRIBE, six.b(target.topic))
if target.server:
self.socket.setsockopt(zmq.SUBSCRIBE, six.b(target.server))
if target.topic and target.server:
self.socket.setsockopt(zmq.SUBSCRIBE, topic_filter)
LOG.debug("[%(host)s] Subscribing to topic %(filter)s",
{"host": self.socket.handle.identity,
"filter": topic_filter})
@staticmethod
def _receive_request(socket):
topic_filter = socket.recv()
message_id = socket.recv()
context = socket.recv_loaded()
message = socket.recv_loaded()
LOG.debug("Received %(topic_filter)s topic message %(id)s",
{'id': message_id, 'topic_filter': topic_filter})
return context, message
def receive_message(self, socket):
try:
context, message = self._receive_request(socket)
if not message:
return None
return zmq_incoming_message.ZmqIncomingMessage(context, message)
except (zmq.ZMQError, AssertionError) as e:
LOG.error(_LE("Receiving message failed: %s"), str(e))
def cleanup(self):
super(SubConsumer, self).cleanup()
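ZeroMQ SUB filters are plain byte-prefix matches, which is why _subscribe_on_target() above may install up to three filters for one target. A sketch of the matching rule (illustrative subscriptions):
subscriptions = [b'compute', b'node-1', b'compute/node-1']

def delivered(topic_frame):
    # a SUB socket delivers a message if any installed filter is a
    # byte prefix of the first frame
    return any(topic_frame.startswith(s) for s in subscriptions)

assert delivered(b'compute/node-1')      # matches the exact target filter
assert delivered(b'compute/other')       # matches the bare-topic filter
assert not delivered(b'network/node-2')  # no filter matches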

View File

@ -1,61 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from oslo_messaging._drivers import base
from oslo_messaging._drivers import common as rpc_common
from oslo_messaging._drivers.zmq_driver.client import zmq_response
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._drivers.zmq_driver import zmq_names
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class ZmqIncomingMessage(base.RpcIncomingMessage):
def __init__(self, context, message, reply_id=None, message_id=None,
socket=None, sender=None):
if sender is not None:
assert socket is not None, "Valid socket expected!"
assert message_id is not None, "Valid message ID expected!"
assert reply_id is not None, "Valid reply ID expected!"
super(ZmqIncomingMessage, self).__init__(context, message)
self.reply_id = reply_id
self.message_id = message_id
self.socket = socket
self.sender = sender
def acknowledge(self):
"""Not sending acknowledge"""
def reply(self, reply=None, failure=None):
if self.sender is not None:
if failure is not None:
failure = rpc_common.serialize_remote_exception(failure)
reply = zmq_response.Response(msg_type=zmq_names.REPLY_TYPE,
message_id=self.message_id,
reply_id=self.reply_id,
reply_body=reply,
failure=failure)
self.sender.send(self.socket, reply)
def requeue(self):
"""Requeue is not supported"""

View File

@ -1,109 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import logging
from oslo_messaging._drivers import base
from oslo_messaging._drivers.zmq_driver.server.consumers\
import zmq_dealer_consumer
from oslo_messaging._drivers.zmq_driver.server.consumers\
import zmq_router_consumer
from oslo_messaging._drivers.zmq_driver.server.consumers\
import zmq_sub_consumer
from oslo_messaging._drivers.zmq_driver import zmq_async
from oslo_messaging._i18n import _LI
LOG = logging.getLogger(__name__)
zmq = zmq_async.import_zmq()
class ZmqServer(base.PollStyleListener):
def __init__(self, driver, conf, matchmaker, target, poller=None):
super(ZmqServer, self).__init__()
self.driver = driver
self.conf = conf
self.matchmaker = matchmaker
self.target = target
self.poller = poller or zmq_async.get_poller()
self.router_consumer = zmq_router_consumer.RouterConsumer(
conf, self.poller, self) \
if not conf.oslo_messaging_zmq.use_router_proxy else None
self.dealer_consumer = zmq_dealer_consumer.DealerConsumer(
conf, self.poller, self) \
if conf.oslo_messaging_zmq.use_router_proxy else None
self.sub_consumer = zmq_sub_consumer.SubConsumer(
conf, self.poller, self) \
if conf.oslo_messaging_zmq.use_pub_sub else None
self.consumers = []
if self.router_consumer is not None:
self.consumers.append(self.router_consumer)
if self.dealer_consumer is not None:
self.consumers.append(self.dealer_consumer)
if self.sub_consumer is not None:
self.consumers.append(self.sub_consumer)
@base.batch_poll_helper
def poll(self, timeout=None):
message, socket = self.poller.poll(
timeout or self.conf.oslo_messaging_zmq.rpc_poll_timeout)
return message
def stop(self):
self.poller.close()
LOG.info(_LI("Stop server %(target)s"), {'target': self.target})
for consumer in self.consumers:
consumer.stop()
def cleanup(self):
self.poller.close()
for consumer in self.consumers:
consumer.cleanup()
class ZmqNotificationServer(base.PollStyleListener):
def __init__(self, driver, conf, matchmaker, targets_and_priorities):
super(ZmqNotificationServer, self).__init__()
self.driver = driver
self.conf = conf
self.matchmaker = matchmaker
self.servers = []
self.poller = zmq_async.get_poller()
self._listen(targets_and_priorities)
def _listen(self, targets_and_priorities):
for target, priority in targets_and_priorities:
t = copy.deepcopy(target)
t.topic = target.topic + '.' + priority
self.servers.append(ZmqServer(
self.driver, self.conf, self.matchmaker, t, self.poller))
@base.batch_poll_helper
def poll(self, timeout=None):
message, socket = self.poller.poll(
timeout or self.conf.oslo_messaging_zmq.rpc_poll_timeout)
return message
def stop(self):
for server in self.servers:
server.stop()
def cleanup(self):
for server in self.servers:
server.cleanup()

View File

@ -1,59 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
def combine_address(host, port):
return "%s:%s" % (host, port)
def get_tcp_direct_address(host):
return "tcp://%s" % str(host)
def get_tcp_random_address(conf):
return "tcp://%s" % conf.oslo_messaging_zmq.rpc_zmq_bind_address
def get_broker_address(conf):
return "ipc://%s/zmq-broker" % conf.oslo_messaging_zmq.rpc_zmq_ipc_dir
def prefix_str(key, listener_type):
return listener_type + "_" + key
def target_to_key(target, listener_type):
def prefix(key):
return prefix_str(key, listener_type)
if target.topic and target.server:
attributes = ['topic', 'server']
key = ".".join(getattr(target, attr) for attr in attributes)
return prefix(key)
if target.topic:
return prefix(target.topic)
def target_to_subscribe_filter(target):
if target.topic and target.server:
attributes = ['topic', 'server']
key = "/".join(getattr(target, attr) for attr in attributes)
return six.b(key)
if target.topic:
return six.b(target.topic)
if target.server:
return six.b(target.server)
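A short usage sketch for the helpers above, assuming they are in scope (e.g. the module imported directly); Target is a minimal stand-in with only the attributes these functions read:
class Target(object):
    def __init__(self, topic=None, server=None):
        self.topic = topic
        self.server = server

t = Target(topic="compute", server="node-1")
print(combine_address("10.0.0.5", 9501))  # 10.0.0.5:9501
print(target_to_key(t, "ROUTER"))         # ROUTER_compute.node-1
print(target_to_subscribe_filter(t))      # b'compute/node-1'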

View File

@ -1,51 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import eventletutils
from oslo_utils import importutils
def import_zmq():
imported_zmq = importutils.try_import(
'eventlet.green.zmq' if eventletutils.is_monkey_patched('thread') else
'zmq', default=None
)
return imported_zmq
def get_poller():
if eventletutils.is_monkey_patched('thread'):
from oslo_messaging._drivers.zmq_driver.poller import green_poller
return green_poller.GreenPoller()
from oslo_messaging._drivers.zmq_driver.poller import threading_poller
return threading_poller.ThreadingPoller()
def get_executor(method):
if eventletutils.is_monkey_patched('thread'):
from oslo_messaging._drivers.zmq_driver.poller import green_poller
return green_poller.GreenExecutor(method)
from oslo_messaging._drivers.zmq_driver.poller import threading_poller
return threading_poller.ThreadingExecutor(method)
def get_queue():
if eventletutils.is_monkey_patched('thread'):
import eventlet
return eventlet.queue.Queue(), eventlet.queue.Empty
import six
return six.moves.queue.Queue(), six.moves.queue.Empty
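Every helper above branches on the same condition, so a process picks its whole concurrency stack (zmq module, poller, executor, queue) from whether eventlet has monkey-patched threading. A quick check sketch:
from oslo_utils import eventletutils

if eventletutils.is_monkey_patched('thread'):
    print("green mode: eventlet.green.zmq, GreenPoller, GreenExecutor")
else:
    print("native mode: zmq, ThreadingPoller, ThreadingExecutor")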

View File

@ -1,72 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_messaging._drivers.zmq_driver import zmq_async
zmq = zmq_async.import_zmq()
FIELD_MSG_TYPE = 'msg_type'
FIELD_MSG_ID = 'message_id'
FIELD_REPLY_ID = 'reply_id'
FIELD_REPLY_BODY = 'reply_body'
FIELD_FAILURE = 'failure'
IDX_REPLY_TYPE = 1
IDX_REPLY_BODY = 2
MULTIPART_IDX_ENVELOPE = 0
MULTIPART_IDX_BODY = 1
CALL_TYPE = 1
CAST_TYPE = 2
CAST_FANOUT_TYPE = 3
NOTIFY_TYPE = 4
REPLY_TYPE = 5
ACK_TYPE = 6
MESSAGE_TYPES = (CALL_TYPE,
CAST_TYPE,
CAST_FANOUT_TYPE,
NOTIFY_TYPE)
MULTISEND_TYPES = (CAST_FANOUT_TYPE, NOTIFY_TYPE)
DIRECT_TYPES = (CALL_TYPE, CAST_TYPE, REPLY_TYPE)
CAST_TYPES = (CAST_TYPE, CAST_FANOUT_TYPE)
NOTIFY_TYPES = (NOTIFY_TYPE,)
NON_BLOCKING_TYPES = CAST_TYPES + NOTIFY_TYPES
def socket_type_str(socket_type):
zmq_socket_str = {zmq.DEALER: "DEALER",
zmq.ROUTER: "ROUTER",
zmq.PUSH: "PUSH",
zmq.PULL: "PULL",
zmq.REQ: "REQ",
zmq.REP: "REP",
zmq.PUB: "PUB",
zmq.SUB: "SUB"}
return zmq_socket_str[socket_type]
def message_type_str(message_type):
msg_type_str = {CALL_TYPE: "CALL",
CAST_TYPE: "CAST",
CAST_FANOUT_TYPE: "CAST_FANOUT",
NOTIFY_TYPE: "NOTIFY",
REPLY_TYPE: "REPLY",
ACK_TYPE: "ACK"}
return msg_type_str[message_type]

View File

@ -1,122 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import socket
from oslo_config import cfg
from oslo_messaging._drivers import base
from oslo_messaging import server
MATCHMAKER_BACKENDS = ('redis', 'dummy')
MATCHMAKER_DEFAULT = 'redis'
zmq_opts = [
cfg.StrOpt('rpc_zmq_bind_address', default='*',
deprecated_group='DEFAULT',
help='ZeroMQ bind address. Should be a wildcard (*), '
'an ethernet interface, or IP. '
'The "host" option should point or resolve to this '
'address.'),
cfg.StrOpt('rpc_zmq_matchmaker', default=MATCHMAKER_DEFAULT,
choices=MATCHMAKER_BACKENDS,
deprecated_group='DEFAULT',
help='MatchMaker driver.'),
cfg.IntOpt('rpc_zmq_contexts', default=1,
deprecated_group='DEFAULT',
help='Number of ZeroMQ contexts, defaults to 1.'),
cfg.IntOpt('rpc_zmq_topic_backlog',
deprecated_group='DEFAULT',
help='Maximum number of ingress messages to locally buffer '
'per topic. Default is unlimited.'),
cfg.StrOpt('rpc_zmq_ipc_dir', default='/var/run/openstack',
deprecated_group='DEFAULT',
help='Directory for holding IPC sockets.'),
cfg.StrOpt('rpc_zmq_host', default=socket.gethostname(),
sample_default='localhost',
deprecated_group='DEFAULT',
help='Name of this node. Must be a valid hostname, FQDN, or '
'IP address. Must match "host" option, if running Nova.'),
cfg.IntOpt('rpc_cast_timeout', default=-1,
deprecated_group='DEFAULT',
help='Seconds to wait before a cast expires (TTL). '
'The default value of -1 specifies an infinite linger '
'period. The value of 0 specifies no linger period. '
'Pending messages shall be discarded immediately '
'when the socket is closed. Only supported by impl_zmq.'),
cfg.IntOpt('rpc_poll_timeout', default=1,
deprecated_group='DEFAULT',
help='The default number of seconds that poll should wait. '
'Poll raises timeout exception when timeout expired.'),
cfg.IntOpt('zmq_target_expire', default=300,
deprecated_group='DEFAULT',
               help='Expiration timeout in seconds of a name service record '
                    'about an existing target (< 0 means no timeout).'),
cfg.IntOpt('zmq_target_update', default=180,
deprecated_group='DEFAULT',
help='Update period in seconds of a name service record '
                    'about an existing target.'),
cfg.BoolOpt('use_pub_sub', default=True,
deprecated_group='DEFAULT',
help='Use PUB/SUB pattern for fanout methods. '
'PUB/SUB always uses proxy.'),
cfg.BoolOpt('use_router_proxy', default=True,
deprecated_group='DEFAULT',
help='Use ROUTER remote proxy.'),
cfg.PortOpt('rpc_zmq_min_port',
default=49153,
deprecated_group='DEFAULT',
help='Minimal port number for random ports range.'),
cfg.IntOpt('rpc_zmq_max_port',
min=1,
max=65536,
default=65536,
deprecated_group='DEFAULT',
help='Maximal port number for random ports range.'),
cfg.IntOpt('rpc_zmq_bind_port_retries',
default=100,
deprecated_group='DEFAULT',
help='Number of retries to find free port number before '
'fail with ZMQBindError.'),
cfg.StrOpt('rpc_zmq_serialization', default='json',
choices=('json', 'msgpack'),
deprecated_group='DEFAULT',
help='Default serialization mechanism for '
'serializing/deserializing outgoing/incoming messages')
]
def register_opts(conf):
opt_group = cfg.OptGroup(name='oslo_messaging_zmq',
title='ZeroMQ driver options')
conf.register_opts(zmq_opts, group=opt_group)
conf.register_opts(server._pool_opts)
conf.register_opts(base.base_opts)
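A hedged usage sketch for register_opts() above: once registered, the driver options are readable under the [oslo_messaging_zmq] group with their defaults.
from oslo_config import cfg

conf = cfg.ConfigOpts()
register_opts(conf)  # the function defined above
print(conf.oslo_messaging_zmq.rpc_zmq_matchmaker)  # 'redis' by default
print(conf.oslo_messaging_zmq.zmq_target_expire)   # 300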

Some files were not shown because too many files have changed in this diff