commit 47efb7b339
Merge tag '7.0.0_b3' into debian/newton

Tagger: OpenStack Release Bot <infra-root@openstack.org>
Date: Fri Sep 2 09:39:36 2016 +0000

Retag 7.0.0_b3 of ceilometer 7.0.0.0b3 development milestone

* New upstream release.
* Fixed (build-)depends for this release.
* Using OpenStack's Gerrit as VCS URLs.
* Points .gitreview to OpenStack packaging-deb's Gerrit.
* Fixed installation of files in /etc/ceilometer for this release.

Change-Id: I9003586700e4d30a4ee83df6578c83fe719ce49e
@@ -16,3 +16,6 @@ subunit.log

# Files created by releasenotes build
releasenotes/build

# Files created by api-ref build
api-ref/build
README.rst
@@ -1,11 +1,28 @@
-ceilometer
-==========
+Ceilometer
+==========

-Release notes can be read online at:
-    http://docs.openstack.org/developer/ceilometer/releasenotes/index.html
+Ceilometer is a data collection service that collects event and metering
+data by monitoring notifications sent from OpenStack services. It publishes
+collected data to various targets including data stores
+and message queues.

-Documentation for the project can be found at:
-    http://docs.openstack.org/developer/ceilometer/
+Ceilometer is distributed under the terms of the Apache
+License, Version 2.0. The full terms and conditions of this
+license are detailed in the LICENSE file.

-The project home is at:
-    http://launchpad.net/ceilometer
+For more information about Ceilometer APIs, see
+http://developer.openstack.org/api-ref-telemetry-v2.html
+
+Release notes are available at
+https://releases.openstack.org/teams/telemetry.html
+
+Developer documentation is available at
+http://docs.openstack.org/developer/ceilometer/
+
+For information on how to contribute to ceilometer, see the CONTRIBUTING.rst
+file.
+
+The project home is at http://launchpad.net/ceilometer
+
+To report any ceilometer related bugs, see http://bugs.launchpad.net/ceilometer/
@@ -0,0 +1,333 @@
.. -*- rst -*-

======
Alarms
======

Lists, creates, gets details for, updates, and deletes alarms.


Show alarm details
==================

.. rest_method:: GET /v2/alarms/{alarm_id}

Shows details for an alarm, by alarm ID.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - alarm_id: alarm_id


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - alarm_actions: alarm_actions
   - alarm_id: alarm_id
   - combination_rule: combination_rule
   - description: description
   - enabled: enabled
   - insufficient_data_actions: insufficient_data_actions
   - timestamp: timestamp
   - name: name
   - ok_actions: ok_actions
   - project_id: project_id
   - state_timestamp: state_timestamp
   - threshold_rule: threshold_rule
   - repeat_actions: repeat_actions
   - state: state
   - type: type
   - user_id: user_id

Response Example
----------------

.. literalinclude:: ../samples/alarm-show-response.json
   :language: javascript


Update alarm
============

.. rest_method:: PUT /v2/alarms/{alarm_id}

Updates an alarm.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - alarm_id: alarm_id
   - data: data


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - alarm_actions: alarm_actions
   - ok_actions: ok_actions
   - description: description
   - timestamp: timestamp
   - enabled: enabled
   - combination_rule: combination_rule
   - state_timestamp: state_timestamp
   - threshold_rule: threshold_rule
   - alarm_id: alarm_id
   - state: state
   - insufficient_data_actions: insufficient_data_actions
   - repeat_actions: repeat_actions
   - user_id: user_id
   - project_id: project_id
   - type: type
   - name: name

Response Example
----------------

.. literalinclude:: ../samples/alarm-show-response.json
   :language: javascript


Delete alarm
============

.. rest_method:: DELETE /v2/alarms/{alarm_id}

Deletes an alarm, by alarm ID.

Normal response codes: 204


Request
-------

.. rest_parameters:: parameters.yaml

   - alarm_id: alarm_id


Update alarm state
==================

.. rest_method:: PUT /v2/alarms/{alarm_id}/state

Sets the state of an alarm.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - alarm_id: alarm_id
   - state: state


Response Example
----------------

.. literalinclude::
   :language: javascript


Show alarm state
================

.. rest_method:: GET /v2/alarms/{alarm_id}/state

Shows the state for an alarm, by alarm ID.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - alarm_id: alarm_id


Response Example
----------------

.. literalinclude::
   :language: javascript


List alarms
===========

.. rest_method:: GET /v2/alarms

Lists alarms, based on a query.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - q: q


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - alarm_actions: alarm_actions
   - ok_actions: ok_actions
   - description: description
   - timestamp: timestamp
   - enabled: enabled
   - combination_rule: combination_rule
   - state_timestamp: state_timestamp
   - threshold_rule: threshold_rule
   - alarm_id: alarm_id
   - state: state
   - insufficient_data_actions: insufficient_data_actions
   - repeat_actions: repeat_actions
   - user_id: user_id
   - project_id: project_id
   - type: type
   - name: name

Response Example
----------------

.. literalinclude:: ../samples/alarms-list-response.json
   :language: javascript


Create alarm
============

.. rest_method:: POST /v2/alarms

Creates an alarm.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - data: data


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - alarm_actions: alarm_actions
   - ok_actions: ok_actions
   - description: description
   - timestamp: timestamp
   - enabled: enabled
   - combination_rule: combination_rule
   - state_timestamp: state_timestamp
   - threshold_rule: threshold_rule
   - alarm_id: alarm_id
   - state: state
   - insufficient_data_actions: insufficient_data_actions
   - repeat_actions: repeat_actions
   - user_id: user_id
   - project_id: project_id
   - type: type
   - name: name

Response Example
----------------

.. literalinclude:: ../samples/alarm-show-response.json
   :language: javascript


Show alarm history
==================

.. rest_method:: GET /v2/alarms/{alarm_id}/history

Assembles and shows the history for an alarm, by alarm ID.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - alarm_id: alarm_id
   - q: q


Response Example
----------------

.. literalinclude::
   :language: javascript
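The ``q`` parameter accepted by the list and history calls above is a list of simple ``field``/``op``/``value`` filter expressions, sent as repeated ``q.field``, ``q.op``, and ``q.value`` query parameters. A minimal sketch of assembling such a URL (the helper name and endpoint host are hypothetical, not part of the API reference):

```python
from urllib.parse import urlencode

def build_alarm_query(base_url, filters):
    # Hypothetical helper: each (field, op, value) filter becomes a
    # repeated q.field / q.op / q.value triple in the query string.
    params = []
    for field, op, value in filters:
        params += [("q.field", field), ("q.op", op), ("q.value", value)]
    return base_url + "/v2/alarms?" + urlencode(params)

# List alarms currently in the "alarm" state (host is illustrative).
url = build_alarm_query("http://ceilometer.example.com:8777",
                        [("state", "eq", "alarm")])
```

Multiple filters simply repeat the triple, so complex conjunctions stay flat in the query string.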
@@ -0,0 +1,92 @@
.. -*- rst -*-

============
Capabilities
============

Gets information for API and storage capabilities.

The Telemetry service enables you to store samples, events, and
alarm definitions in supported database back ends. The
``capabilities`` resource enables you to list the capabilities that
a database supports.

The ``capabilities`` resource returns a flattened dictionary of
capability properties, each with an associated boolean value. A
value of ``true`` indicates that the corresponding capability is
available in the back end.

You can optionally configure separate database back ends for
samples, events, and alarm definitions. The ``capabilities``
response shows a value of ``true`` to indicate that the definitions
database for samples, events, or alarms is ready to use in a
production environment.


List capabilities
=================

.. rest_method:: GET /v2/capabilities

A representation of the API and storage capabilities. Usually, the
storage driver imposes constraints.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - statistics:query:complex: statistics:query:complex
   - alarms:history:query:simple: alarms:history:query:simple
   - meters:query:metadata: meters:query:metadata
   - alarms:query:simple: alarms:query:simple
   - resources:query:simple: resources:query:simple
   - api: api
   - statistics:aggregation:selectable:quartile: statistics:aggregation:selectable:quartile
   - statistics:query:simple: statistics:query:simple
   - statistics:aggregation:selectable:count: statistics:aggregation:selectable:count
   - statistics:aggregation:selectable:min: statistics:aggregation:selectable:min
   - statistics:aggregation:selectable:sum: statistics:aggregation:selectable:sum
   - storage: storage
   - alarm_storage: alarm_storage
   - statistics:aggregation:selectable:avg: statistics:aggregation:selectable:avg
   - meters:query:complex: meters:query:complex
   - statistics:groupby: statistics:groupby
   - alarms:history:query:complex: alarms:history:query:complex
   - meters:query:simple: meters:query:simple
   - samples:query:metadata: samples:query:metadata
   - statistics:query:metadata: statistics:query:metadata
   - storage:production_ready: storage:production_ready
   - samples:query:simple: samples:query:simple
   - resources:query:metadata: resources:query:metadata
   - statistics:aggregation:selectable:max: statistics:aggregation:selectable:max
   - samples:query:complex: samples:query:complex
   - statistics:aggregation:standard: statistics:aggregation:standard
   - events:query:simple: events:query:simple
   - statistics:aggregation:selectable:stddev: statistics:aggregation:selectable:stddev
   - alarms:query:complex: alarms:query:complex
   - statistics:aggregation:selectable:cardinality: statistics:aggregation:selectable:cardinality
   - event_storage: event_storage
   - resources:query:complex: resources:query:complex

Response Example
----------------

.. literalinclude:: ../samples/capabilities-list-response.json
   :language: javascript
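Because the ``capabilities`` response is a flattened dictionary of boolean-valued properties, a client can reduce it to the list of supported feature names in a few lines. A sketch, using an illustrative (not real) response body:

```python
# Illustrative subset of a flattened capabilities dictionary; a real
# response contains the full property list shown above.
caps = {
    "statistics:query:simple": True,
    "statistics:query:complex": False,
    "alarms:query:simple": True,
    "storage:production_ready": True,
}

# Keep only the capability names the back end reports as available.
available = sorted(name for name, ok in caps.items() if ok)
```

A client can then gate optional features (for example, complex queries) on membership in `available`.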
@@ -0,0 +1,292 @@
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# nova documentation build configuration file, created by
# sphinx-quickstart on Sat May 1 15:17:47 2010.
#
# This file is execfile()d with the current directory set to
# its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import os
import subprocess
import sys
import warnings

# TODO(Graham Hayes): Remove the following block of code when os-api-ref is
# using openstackdocstheme

import os_api_ref

if getattr(os_api_ref, 'THEME', 'olsosphinx') == 'openstackdocstheme':
    # We are on the new version with openstackdocstheme support

    extensions = [
        'os_api_ref',
    ]

    import openstackdocstheme  # noqa

    html_theme = 'openstackdocs'
    html_theme_path = [openstackdocstheme.get_html_theme_path()]
    html_theme_options = {
        "sidebar_mode": "toc",
    }

else:
    # We are on the old version without openstackdocstheme support

    extensions = [
        'os_api_ref',
        'oslosphinx',
    ]

# End temporary block

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#
# source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Compute API Reference'
copyright = u'2010-present, OpenStack Foundation'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from ceilometer.version import version_info as ceilometer_version
# The full version, including alpha/beta/rc tags.
release = ceilometer_version.version_string_with_vcs()
# The short X.Y version.
version = ceilometer_version.canonical_version_string()

# Config logABug feature
giturl = (
    u'http://git.openstack.org/cgit/openstack/ceilometer/tree/api-ref/source')
# source tree
# html_context allows us to pass arbitrary values into the html template
html_context = {'bug_tag': 'api-ref',
                'giturl': giturl,
                'bug_project': 'ceilometer'}

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'

# The reST default role (used for this markup: `text`) to use
# for all documents.
# default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# -- Options for man page output ----------------------------------------------

# Grouping the document tree for man pages.
# List of tuples 'sourcefile', 'target', u'title', u'Authors name', 'manual'


# -- Options for HTML output --------------------------------------------------

# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'", "--date=local",
           "-n1"]
try:
    html_last_updated_fmt = subprocess.Popen(
        git_cmd, stdout=subprocess.PIPE).communicate()[0].decode()
except Exception:
    warnings.warn('Cannot get last updated time from git repository. '
                  'Not setting "html_last_updated_fmt".')

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_use_modindex = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = 'novadoc'


# -- Options for LaTeX output -------------------------------------------------

# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index', 'CeilometerReleaseNotes.tex',
     u'Ceilometer Release Notes Documentation',
     u'Ceilometer Developers', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False


# Additional stuff for the LaTeX preamble.
# latex_preamble = ''

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_use_modindex = True

# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'ceilometerreleasenotes',
     u'Ceilometer Release Notes Documentation', [u'Ceilometer Developers'], 1)
]

# If true, show URL addresses after external links.
# man_show_urls = False


# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    ('index', 'CeilometerReleaseNotes',
     u'Ceilometer Release Notes Documentation',
     u'Ceilometer Developers', 'CeilometerReleaseNotes',
     'One line description of project.',
     'Miscellaneous'),
]

# Documents to append as an appendix to all manuals.
# texinfo_appendices = []

# If false, no module index is generated.
# texinfo_domain_indices = True

# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'

# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
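The conf.py above shells out to git for ``html_last_updated_fmt`` and falls back with a warning when that fails. A sketch of the same guard written with ``subprocess.run`` (the function name and the ``None`` fallback are assumptions for illustration, not part of the file):

```python
import subprocess
import warnings

def last_updated(cmd):
    # Try to read the last commit date from git; return None when the
    # command is missing or exits nonzero (mirrors the try/except in
    # conf.py, which simply leaves html_last_updated_fmt unset).
    try:
        out = subprocess.run(cmd, stdout=subprocess.PIPE, check=True)
        return out.stdout.decode()
    except (OSError, subprocess.CalledProcessError):
        warnings.warn('Cannot get last updated time from git repository.')
        return None

# A deliberately missing command exercises the fallback path.
result = last_updated(["definitely-not-a-real-command"])
```

The broad `except Exception` in the original also swallows the case where git exists but the directory is not a repository; the two-exception tuple here is the narrower equivalent.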
@@ -0,0 +1,93 @@
.. -*- rst -*-

======
Events
======

Lists all events and shows details for an event.


Show event details
==================

.. rest_method:: GET /v2/events/{message_id}

Shows details for an event.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - message_id: message_id


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - traits: traits
   - raw: raw
   - generated: generated
   - event_type: event_type
   - message_id: message_id

Response Example
----------------

.. literalinclude:: ../samples/event-show-response.json
   :language: javascript


List events
===========

.. rest_method:: GET /v2/events

Lists all events.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - q: q
   - limit: limit


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - traits: traits
   - raw: raw
   - generated: generated
   - event_type: event_type
   - message_id: message_id

Response Example
----------------

.. literalinclude:: ../samples/events-list-response.json
   :language: javascript
@@ -0,0 +1,8 @@
=========================
 Ceilometer Release Notes
=========================

.. toctree::
   :maxdepth: 1
@@ -0,0 +1,386 @@
.. -*- rst -*-

======
Meters
======

Lists all meters, adds samples to meters, and lists samples for
meters. For list operations, if you do not explicitly set the
``limit`` query parameter, a default limit is applied. The default
limit is the ``default_api_return_limit`` configuration option
value.

Also, computes and lists statistics for samples in a time range.
You can use the ``aggregate`` query parameter in the ``statistics``
URI to explicitly select the ``stddev``, ``cardinality``, or any
other standard function. For example:

::

   GET /v2/meters/METER_NAME/statistics?aggregate.func=NAME&aggregate.param=VALUE

The ``aggregate.param`` parameter value is optional for all
functions except the ``cardinality`` function.

The API silently ignores any duplicate aggregate function and
parameter pairs.

The API accepts and storage drivers support duplicate functions
with different parameter values. In this example, the
``cardinality`` function is accepted twice with two different
parameter values:

::

   GET /v2/meters/METER_NAME/statistics?aggregate.func=cardinality&aggregate.param=resource_id&aggregate.func=cardinality&aggregate.param=project_id

**Examples:**

Use the ``stddev`` function to request the standard deviation of
CPU utilization:

::

   GET /v2/meters/cpu_util/statistics?aggregate.func=stddev

The response looks like this:

.. code-block:: json

   [
       {
           "aggregate": {
               "stddev": 0.6858829
           },
           "duration_start": "2014-01-30T11:13:23",
           "duration_end": "2014-01-31T16:07:13",
           "duration": 104030,
           "period": 0,
           "period_start": "2014-01-30T11:13:23",
           "period_end": "2014-01-31T16:07:13",
           "groupby": null,
           "unit": "%"
       }
   ]

Use the ``cardinality`` function with the project ID to return the
number of distinct tenants with images:

::

   GET /v2/meters/image/statistics?aggregate.func=cardinality&aggregate.param=project_id

The following, more complex, example determines:

- The number of distinct instances (``cardinality``)

- The total number of instance samples (``count``) for a tenant in
  15-minute intervals (``period`` and ``groupby`` options)

::

   GET /v2/meters/instance/statistics?aggregate.func=cardinality&aggregate.param=resource_id&aggregate.func=count&groupby=project_id&period=900

The response looks like this:

.. code-block:: json

   [
       {
           "count": 19,
           "aggregate": {
               "count": 19,
               "cardinality/resource_id": 3
           },
           "duration": 328.47803,
           "duration_start": "2014-01-31T10:00:41.823919",
           "duration_end": "2014-01-31T10:06:10.301948",
           "period": 900,
           "period_start": "2014-01-31T10:00:00",
           "period_end": "2014-01-31T10:15:00",
           "groupby": {
               "project_id": "061a5c91811e4044b7dc86c6136c4f99"
           },
           "unit": "instance"
       },
       {
           "count": 22,
           "aggregate": {
               "count": 22,
               "cardinality/resource_id": 4
           },
           "duration": 808.00385,
           "duration_start": "2014-01-31T10:15:15",
           "duration_end": "2014-01-31T10:28:43.003840",
           "period": 900,
           "period_start": "2014-01-31T10:15:00",
           "period_end": "2014-01-31T10:30:00",
           "groupby": {
               "project_id": "061a5c91811e4044b7dc86c6136c4f99"
           },
           "unit": "instance"
       },
       {
           "count": 2,
           "aggregate": {
               "count": 2,
               "cardinality/resource_id": 2
           },
           "duration": 0,
           "duration_start": "2014-01-31T10:35:15",
           "duration_end": "2014-01-31T10:35:15",
           "period": 900,
           "period_start": "2014-01-31T10:30:00",
           "period_end": "2014-01-31T10:45:00",
           "groupby": {
               "project_id": "061a5c91811e4044b7dc86c6136c4f99"
           },
           "unit": "instance"
       }
   ]
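Because duplicate ``aggregate.func``/``aggregate.param`` pairs are legal, such statistics URIs are easiest to assemble from an ordered list of key/value pairs rather than a dictionary. A sketch (illustrative, not an official client):

```python
from urllib.parse import urlencode

# Ordered pairs preserve the duplicate aggregate.func keys from the
# cardinality/count example above; a plain dict would collapse them.
pairs = [
    ("aggregate.func", "cardinality"),
    ("aggregate.param", "resource_id"),
    ("aggregate.func", "count"),
    ("groupby", "project_id"),
    ("period", "900"),
]
url = "/v2/meters/instance/statistics?" + urlencode(pairs)
```

`urlencode` accepts a sequence of two-item tuples precisely so repeated keys survive, which is why it fits this API's query convention.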
||||
Show meter statistics
=====================

.. rest_method:: GET /v2/meters/{meter_name}/statistics

Computes and lists statistics for samples in a time range.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - meter_name: meter_name
   - q: q
   - groupby: groupby
   - period: period
   - aggregate: aggregate
   - limit: limit


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - count: count
   - duration_start: duration_start
   - min: min
   - max: max
   - duration_end: duration_end
   - period: period
   - sum: sum
   - duration: duration
   - period_end: period_end
   - aggregate: aggregate
   - period_start: period_start
   - avg: avg
   - groupby: groupby
   - unit: unit


Response Example
----------------

.. literalinclude:: ../samples/statistics-list-response.json
   :language: javascript

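The ``groupby``, ``period``, and ``aggregate`` request parameters combine into a single query string. A minimal sketch of building such a statistics URL with the standard library (the ``controller:8777`` endpoint and the ``instance`` meter name are placeholder assumptions, not part of this reference):

```python
from urllib.parse import urlencode

# Placeholder deployment endpoint; the real request would also carry
# an X-Auth-Token header.
BASE = "http://controller:8777"

# Per-project instance statistics in 900-second buckets, with a
# cardinality aggregate computed over resource_id.
query = urlencode({
    "groupby": "project_id",
    "period": 900,
    "aggregate.func": "cardinality",
    "aggregate.param": "resource_id",
})
url = f"{BASE}/v2/meters/instance/statistics?{query}"
print(url)
```

The response is then a JSON list of per-period objects like the example shown above.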
List meters
===========

.. rest_method:: GET /v2/meters

Lists meters, based on the data recorded so far.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - q: q
   - limit: limit


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - name: name
   - resource_id: resource_id
   - source: source
   - meter_id: meter_id
   - project_id: project_id
   - type: type
   - unit: unit


Response Example
----------------

.. literalinclude:: ../samples/meters-list-response.json
   :language: javascript

List samples for meter
======================

.. rest_method:: GET /v2/meters/{meter_name}

Lists samples for a meter, by meter name.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - meter_name: meter_name
   - q: q
   - limit: limit


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - resource_id: resource_id
   - timestamp: timestamp
   - meter: meter
   - volume: volume
   - source: source
   - recorded_at: recorded_at
   - project_id: project_id
   - type: type
   - id: id
   - unit: unit
   - metadata: metadata


Response Example
----------------

.. literalinclude:: ../samples/samples-list-response.json
   :language: javascript

Add samples to meter
====================

.. rest_method:: POST /v2/meters/{meter_name}

Adds samples to a meter, by meter name.

If you attempt to add a sample that is not supported, this call
returns the ``409`` response code.

Normal response codes: 200
Error response codes: 409


Request
-------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - resource_id: resource_id
   - timestamp: timestamp
   - meter: meter
   - volume: volume
   - source: source
   - recorded_at: recorded_at
   - project_id: project_id
   - type: type
   - id: id
   - unit: unit
   - metadata: metadata
   - meter_name: meter_name
   - direct: direct
   - samples: samples

Request Example
---------------

.. literalinclude:: ../samples/sample-create-request.json
   :language: javascript


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - resource_id: resource_id
   - timestamp: timestamp
   - meter: meter
   - volume: volume
   - source: source
   - recorded_at: recorded_at
   - project_id: project_id
   - type: type
   - id: id
   - unit: unit
   - metadata: metadata


Response Example
----------------

.. literalinclude:: ../samples/sample-show-response.json
   :language: javascript

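As a sketch, the request body for this call is a JSON list of sample objects whose keys follow the request parameters listed above; every ID, timestamp, and value below is a placeholder, not data from this reference:

```python
import json

# Build the JSON body for POST /v2/meters/instance. Setting
# ``?direct=True`` on the request would write the samples straight
# to storage. All identifiers here are illustrative placeholders.
sample = {
    "meter": "instance",
    "type": "gauge",
    "unit": "instance",
    "volume": 1,
    "timestamp": "2014-01-31T10:00:41.823919",
    "project_id": "061a5c91811e4044b7dc86c6136c4f99",
    "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
    "metadata": {"name": "vm-1"},
}
# The API accepts a list, so multiple samples can be posted at once.
body = json.dumps([sample])
print(body)
```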
@@ -0,0 +1,734 @@
# variables in header
{}

# variables in path
alarm_id_1:
  description: |
    The UUID of the alarm.
  in: path
  required: false
  type: string
message_id_1:
  description: |
    The UUID of the message.
  in: path
  required: false
  type: string
meter_name:
  description: |
    The name of the meter.
  in: path
  required: false
  type: string
resource_id_2:
  description: |
    The UUID of the resource.
  in: path
  required: false
  type: string
sample_id:
  description: |
    The UUID of the sample.
  in: path
  required: false
  type: string

# variables in query
aggregate:
  description: |
    A list of selectable aggregation functions to apply.

    For example:

    ::

        GET /v2/meters/METER_NAME/statistics?aggregate.func=cardinality
        &
        aggregate.param=resource_id
        &
        aggregate.func=cardinality
        &
        aggregate.param=project_id
  in: query
  required: false
  type: object
data:
  description: |
    An alarm within the request body.
  in: query
  required: false
  type: string
direct:
  description: |
    Indicates whether the samples are POSTed
    directly to storage. Set ``?direct=True`` to POST the samples
    directly to storage.
  in: query
  required: false
  type: string
groupby:
  description: |
    Fields for group by aggregation.
  in: query
  required: false
  type: object
limit:
  description: |
    Limits the maximum number of samples that the response returns.

    For example:

    ::

        GET /v2/events?limit=1000
  in: query
  required: false
  type: integer
limit_1:
  description: |
    Requests a page size of items. Returns a number
    of items up to a limit value. Use the ``limit`` parameter to make
    an initial limited request and use the ID of the last-seen item
    from the response as the ``marker`` parameter value in a
    subsequent limited request.
  in: query
  required: false
  type: integer
meter_links:
  description: |
    Set ``?meter_links=1`` to return a self link and
    related meter links.
  in: query
  required: false
  type: integer
period:
  description: |
    The period, in seconds, for which you want
    statistics.
  in: query
  required: false
  type: integer
q:
  description: |
    Filters the response by one or more arguments.
    For example: ``?q.field=Foo&q.value=my_text``.
  in: query
  required: false
  type: array
q_1:
  description: |
    Filters the response by one or more event arguments.

    For example:

    ::

        GET /v2/events?q.field=Foo
        &
        q.value=my_text
  in: query
  required: false
  type: array
samples:
  description: |
    A list of samples.
  in: query
  required: false
  type: array
state_1:
  description: |
    The alarm state. A valid value is ``ok``,
    ``alarm``, or ``insufficient data``.
  in: query
  required: true
  type: string

# variables in body
alarm_actions:
  description: |
    The list of actions that the alarm performs.
  in: body
  required: true
  type: array
alarm_id:
  description: |
    The UUID of the alarm.
  in: body
  required: true
  type: string
alarm_storage:
  description: |
    Defines the capabilities for the storage that
    persists alarm definitions. A value of ``true`` indicates
    that the capability is available.
  in: body
  required: true
  type: object
alarms:history:query:complex:
  description: |
    If ``true``, the complex query capability for
    alarm history is available for the configured database back end.
  in: body
  required: true
  type: boolean
alarms:history:query:simple:
  description: |
    If ``true``, the simple query capability for
    alarm history is available for the configured database back end.
  in: body
  required: true
  type: boolean
alarms:query:complex:
  description: |
    If ``true``, the complex query capability for
    alarm definitions is available for the configured database back
    end.
  in: body
  required: true
  type: boolean
alarms:query:simple:
  description: |
    If ``true``, the simple query capability for
    alarm definitions is available for the configured database back
    end.
  in: body
  required: true
  type: boolean
api:
  description: |
    A set of key and value pairs that contain the API
    capabilities for the configured storage driver.
  in: body
  required: true
  type: object
avg:
  description: |
    The average of all volume values in the data.
  in: body
  required: true
  type: number
combination_rule:
  description: |
    The rules for the combination alarm type.
  in: body
  required: true
  type: string
count:
  description: |
    The number of samples seen.
  in: body
  required: true
  type: integer
description:
  description: |
    Describes the alarm.
  in: body
  required: true
  type: string
duration:
  description: |
    The number of seconds between the oldest and
    newest date and time stamp.
  in: body
  required: true
  type: number
duration_end:
  description: |
    The date and time in UTC format of the query end
    time.
  in: body
  required: true
  type: string
duration_start:
  description: |
    The date and time in UTC format of the query
    start time.
  in: body
  required: true
  type: string
enabled:
  description: |
    If ``true``, evaluation and actioning is enabled
    for the alarm.
  in: body
  required: true
  type: boolean
event_storage:
  description: |
    If ``true``, the capabilities for the storage
    that persists events are available.
  in: body
  required: true
  type: object
event_type:
  description: |
    The dotted string that represents the event.
  in: body
  required: true
  type: string
events:query:simple:
  description: |
    If ``true``, the simple query capability for
    events is available for the configured database back end.
  in: body
  required: true
  type: boolean
generated:
  description: |
    The date and time when the event occurred.
  in: body
  required: true
  type: string
id:
  description: |
    The UUID of the sample.
  in: body
  required: true
  type: string
insufficient_data_actions:
  description: |
    The list of actions that the alarm performs when
    the alarm state is ``insufficient_data``.
  in: body
  required: true
  type: array
links:
  description: |
    A list that contains a self link and associated
    meter links.
  in: body
  required: true
  type: array
max:
  description: |
    The maximum volume seen in the data.
  in: body
  required: true
  type: number
message_id:
  description: |
    The UUID of the message.
  in: body
  required: true
  type: string
metadata:
  description: |
    An arbitrary set of one or more metadata key and
    value pairs that are associated with the sample.
  in: body
  required: true
  type: object
metadata_1:
  description: |
    A set of one or more arbitrary metadata key and
    value pairs that are associated with the resource.
  in: body
  required: true
  type: object
meter:
  description: |
    The meter name.
  in: body
  required: true
  type: string
meter_id:
  description: |
    The UUID of the meter.
  in: body
  required: true
  type: string
meters:query:complex:
  description: |
    If ``true``, the complex query capability for
    meters is available for the configured database back end.
  in: body
  required: true
  type: boolean
meters:query:metadata:
  description: |
    If ``true``, the simple query capability for the
    metadata of meters is available for the configured database back
    end.
  in: body
  required: true
  type: boolean
meters:query:simple:
  description: |
    If ``true``, the simple query capability for
    meters is available for the configured database back end.
  in: body
  required: true
  type: boolean
min:
  description: |
    The minimum volume seen in the data.
  in: body
  required: true
  type: number
name:
  description: |
    The name of the alarm.
  in: body
  required: true
  type: string
name_1:
  description: |
    The meter name.
  in: body
  required: true
  type: string
ok_actions:
  description: |
    The list of actions that the alarm performs when
    the alarm state is ``ok``.
  in: body
  required: true
  type: array
period_end:
  description: |
    The period end date and time in UTC format.
  in: body
  required: true
  type: string
period_start:
  description: |
    The period start date and time in UTC format.
  in: body
  required: true
  type: string
project_id:
  description: |
    The UUID of the project or tenant that owns the
    resource.
  in: body
  required: true
  type: string
project_id_1:
  description: |
    The UUID of the project.
  in: body
  required: true
  type: string
project_id_2:
  description: |
    The UUID of the owning project or tenant.
  in: body
  required: true
  type: string
raw:
  description: |
    A dictionary object that stores event messages
    for future evaluation.
  in: body
  required: true
  type: object
recorded_at:
  description: |
    The date and time when the sample was recorded.
  in: body
  required: true
  type: string
repeat_actions:
  description: |
    If set to ``true``, the alarm notifications are
    repeated. Otherwise, this value is ``false``.
  in: body
  required: true
  type: boolean
resource_id:
  description: |
    The UUID of the resource for which the
    measurements are taken.
  in: body
  required: true
  type: string
resource_id_1:
  description: |
    The UUID of the resource.
  in: body
  required: true
  type: string
resources:query:complex:
  description: |
    If ``true``, the complex query capability for
    resources is available for the configured database back end.
  in: body
  required: true
  type: boolean
resources:query:metadata:
  description: |
    If ``true``, the simple query capability for the
    metadata of resources is available for the configured database
    back end.
  in: body
  required: true
  type: boolean
resources:query:simple:
  description: |
    If ``true``, the simple query capability for
    resources is available for the configured database back end.
  in: body
  required: true
  type: boolean
samples:query:complex:
  description: |
    If ``true``, the complex query capability for
    samples is available for the configured database back end.
  in: body
  required: true
  type: boolean
samples:query:metadata:
  description: |
    If ``true``, the simple query capability for the
    metadata of samples is available for the configured database back
    end.
  in: body
  required: true
  type: boolean
samples:query:simple:
  description: |
    If ``true``, the simple query capability for
    samples is available for the configured database back end.
  in: body
  required: true
  type: boolean
source:
  description: |
    The name of the source that identifies where the
    sample comes from.
  in: body
  required: true
  type: string
source_1:
  description: |
    The name of the source from which the meter came.
  in: body
  required: true
  type: string
source_2:
  description: |
    The name of the source from which the resource
    came.
  in: body
  required: true
  type: string
state:
  description: |
    The state of the alarm.
  in: body
  required: true
  type: string
state_timestamp:
  description: |
    The date and time of the alarm state.
  in: body
  required: true
  type: string
statistics:aggregation:selectable:avg:
  description: |
    If ``true``, the ``avg`` capability is available
    for the configured database back end. Use the ``avg`` capability
    to get average values for samples.
  in: body
  required: true
  type: boolean
statistics:aggregation:selectable:cardinality:
  description: |
    If ``true``, the ``cardinality`` capability is
    available for the configured database back end. Use the
    ``cardinality`` capability to get cardinality for samples.
  in: body
  required: true
  type: boolean
statistics:aggregation:selectable:count:
  description: |
    If ``true``, the ``count`` capability is
    available for the configured database back end. Use the ``count``
    capability to calculate the number of samples for a query.
  in: body
  required: true
  type: boolean
statistics:aggregation:selectable:max:
  description: |
    If ``true``, the ``max`` capability is available
    for the configured database back end. Use the ``max`` capability
    to calculate the maximum value for a query.
  in: body
  required: true
  type: boolean
statistics:aggregation:selectable:min:
  description: |
    If ``true``, the ``min`` capability is available
    for the configured database back end. Use the ``min`` capability
    to calculate the minimum value for a query.
  in: body
  required: true
  type: boolean
statistics:aggregation:selectable:quartile:
  description: |
    If ``true``, the ``quartile`` capability is
    available for the configured database back end. Use the
    ``quartile`` capability to calculate the quartile of sample
    volumes for a query.
  in: body
  required: true
  type: boolean
statistics:aggregation:selectable:stddev:
  description: |
    If ``true``, the ``stddev`` capability is
    available for the configured database back end. Use the ``stddev``
    capability to calculate the standard deviation of sample volumes
    for a query.
  in: body
  required: true
  type: boolean
statistics:aggregation:selectable:sum:
  description: |
    If ``true``, the ``sum`` capability is available
    for the configured database back end. Use the ``sum`` capability
    to calculate the sum of sample volumes for a query.
  in: body
  required: true
  type: boolean
statistics:aggregation:standard:
  description: |
    If ``true``, the ``standard`` set of aggregation
    capabilities is available for the configured database back end.
  in: body
  required: true
  type: boolean
statistics:groupby:
  description: |
    If ``true``, the ``groupby`` capability is
    available for calculating statistics for the configured database
    back end.
  in: body
  required: true
  type: boolean
statistics:query:complex:
  description: |
    If ``true``, the complex query capability for
    statistics is available for the configured database back end.
  in: body
  required: true
  type: boolean
statistics:query:metadata:
  description: |
    If ``true``, the simple query capability for the
    sample metadata that is used to calculate statistics is available
    for the configured database back end.
  in: body
  required: true
  type: boolean
statistics:query:simple:
  description: |
    If ``true``, the simple query capability for
    statistics is available for the configured database back end.
  in: body
  required: true
  type: boolean
storage:
  description: |
    If ``true``, the capabilities for the storage
    that persists samples are available.
  in: body
  required: true
  type: object
storage:production_ready:
  description: |
    If ``true``, the database back end is ready to
    use in a production environment.
  in: body
  required: true
  type: boolean
sum:
  description: |
    The total of all of the volume values seen in the
    data.
  in: body
  required: true
  type: number
threshold_rule:
  description: |
    The rules for the threshold alarm type.
  in: body
  required: true
  type: string
timestamp:
  description: |
    The date and time in UTC format when the
    measurement was made.
  in: body
  required: true
  type: string
timestamp_1:
  description: |
    The date and time of the alarm.
  in: body
  required: true
  type: string
traits:
  description: |
    A list of objects. Each object contains key and
    value pairs that describe the event.
  in: body
  required: true
  type: array
type:
  description: |
    The meter type.
  in: body
  required: true
  type: string
type_1:
  description: |
    The type of the alarm, which is either
    ``threshold`` or ``combination``.
  in: body
  required: true
  type: string
type_2:
  description: |
    The meter type. The type value is ``gauge``,
    ``delta``, or ``cumulative``.
  in: body
  required: true
  type: string
unit:
  description: |
    The unit of measure for the ``volume`` value.
  in: body
  required: true
  type: string
unit_1:
  description: |
    The unit of measure.
  in: body
  required: true
  type: string
unit_2:
  description: |
    The unit type of the data set.
  in: body
  required: true
  type: string
user_id:
  description: |
    The UUID of the user who either created or last
    updated the resource.
  in: body
  required: true
  type: string
user_id_1:
  description: |
    The UUID of the user.
  in: body
  required: true
  type: string
volume:
  description: |
    The actual measured value.
  in: body
  required: true
  type: number

@@ -0,0 +1,95 @@
.. -*- rst -*-

=========
Resources
=========

Lists all resources and gets information for a single resource.


List resources
==============

.. rest_method:: GET /v2/resources

Lists definitions for all resources.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - q: q
   - meter_links: meter_links


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - links: links
   - resource_id: resource_id
   - source: source
   - project_id: project_id
   - metadata: metadata


Response Example
----------------

.. literalinclude:: ../samples/resources-list-response.json
   :language: javascript


Show resource details
=====================

.. rest_method:: GET /v2/resources/{resource_id}

Shows details for a resource, by resource ID.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - resource_id: resource_id


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - links: links
   - resource_id: resource_id
   - source: source
   - project_id: project_id
   - metadata: metadata


Response Example
----------------

.. literalinclude:: ../samples/resource-show-response.json
   :language: javascript

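A sketch of building a filtered list-resources URL with the ``q`` and ``meter_links`` query parameters described above; the base endpoint and the project ID are placeholder assumptions:

```python
from urllib.parse import urlencode

# Placeholder endpoint; a real call would also send X-Auth-Token.
BASE = "http://controller:8777"

# ``q`` is expressed as parallel q.field / q.value arguments, as shown
# in the parameter reference; meter_links=1 asks for meter links too.
query = urlencode({
    "q.field": "project_id",
    "q.value": "061a5c91811e4044b7dc86c6136c4f99",
    "meter_links": 1,
})
url = f"{BASE}/v2/resources?{query}"
print(url)
```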
@@ -0,0 +1,111 @@
.. -*- rst -*-

=======
Samples
=======

Lists all samples and gets information for a sample.

For list operations, if you do not explicitly set the ``limit``
query parameter, a default limit is applied. The default limit is
the ``default_api_return_limit`` configuration option value.


Show sample details
===================

.. rest_method:: GET /v2/samples/{sample_id}

Shows details for a sample, by sample ID.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - sample_id: sample_id


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - resource_id: resource_id
   - timestamp: timestamp
   - meter: meter
   - volume: volume
   - source: source
   - recorded_at: recorded_at
   - project_id: project_id
   - type: type
   - id: id
   - unit: unit
   - metadata: metadata


Response Example
----------------

.. literalinclude:: ../samples/sample-show-response.json
   :language: javascript


List samples
============

.. rest_method:: GET /v2/samples

Lists all known samples, based on the data recorded so far.

Normal response codes: 200
Error response codes:


Request
-------

.. rest_parameters:: parameters.yaml

   - q: q
   - limit: limit


Response Parameters
-------------------

.. rest_parameters:: parameters.yaml

   - user_id: user_id
   - resource_id: resource_id
   - timestamp: timestamp
   - meter: meter
   - volume: volume
   - source: source
   - recorded_at: recorded_at
   - project_id: project_id
   - type: type
   - id: id
   - unit: unit
   - metadata: metadata


Response Example
----------------

.. literalinclude:: ../samples/samples-list-response.json
   :language: javascript

@@ -0,0 +1,24 @@
{
    "alarm_actions": [
        "http://site:8000/alarm"
    ],
    "alarm_id": null,
    "combination_rule": null,
    "description": "An alarm",
    "enabled": true,
    "insufficient_data_actions": [
        "http://site:8000/nodata"
    ],
    "name": "SwiftObjectAlarm",
    "ok_actions": [
        "http://site:8000/ok"
    ],
    "project_id": "c96c887c216949acbdfbd8b494863567",
    "repeat_actions": false,
    "state": "ok",
    "state_timestamp": "2013-11-21T12:33:08.486228",
    "threshold_rule": null,
    "timestamp": "2013-11-21T12:33:08.486221",
    "type": "threshold",
    "user_id": "c96c887c216949acbdfbd8b494863567"
}
@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
  <alarm_actions>
    <item>http://site:8000/alarm</item>
  </alarm_actions>
  <alarm_id nil="true" />
  <combination_rule nil="true" />
  <description>An alarm</description>
  <enabled>true</enabled>
  <insufficient_data_actions>
    <item>http://site:8000/nodata</item>
  </insufficient_data_actions>
  <name>SwiftObjectAlarm</name>
  <ok_actions>
    <item>http://site:8000/ok</item>
  </ok_actions>
  <project_id>c96c887c216949acbdfbd8b494863567</project_id>
  <repeat_actions>false</repeat_actions>
  <state>ok</state>
  <state_timestamp>2013-11-21T12:33:08.486228</state_timestamp>
  <threshold_rule nil="true" />
  <timestamp>2013-11-21T12:33:08.486221</timestamp>
  <type>threshold</type>
  <user_id>c96c887c216949acbdfbd8b494863567</user_id>
</value>
@@ -0,0 +1,26 @@
[
    {
        "alarm_actions": [
            "http://site:8000/alarm"
        ],
        "alarm_id": null,
        "combination_rule": null,
        "description": "An alarm",
        "enabled": true,
        "insufficient_data_actions": [
            "http://site:8000/nodata"
        ],
        "name": "SwiftObjectAlarm",
        "ok_actions": [
            "http://site:8000/ok"
        ],
        "project_id": "c96c887c216949acbdfbd8b494863567",
        "repeat_actions": false,
        "state": "ok",
        "state_timestamp": "2013-11-21T12:33:08.486228",
        "threshold_rule": null,
        "timestamp": "2013-11-21T12:33:08.486221",
        "type": "threshold",
        "user_id": "c96c887c216949acbdfbd8b494863567"
    }
]
@@ -0,0 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
  <value>
    <alarm_actions>
      <item>http://site:8000/alarm</item>
    </alarm_actions>
    <alarm_id nil="true" />
    <combination_rule nil="true" />
    <description>An alarm</description>
    <enabled>true</enabled>
    <insufficient_data_actions>
      <item>http://site:8000/nodata</item>
    </insufficient_data_actions>
    <name>SwiftObjectAlarm</name>
    <ok_actions>
      <item>http://site:8000/ok</item>
    </ok_actions>
    <project_id>c96c887c216949acbdfbd8b494863567</project_id>
    <repeat_actions>false</repeat_actions>
    <state>ok</state>
    <state_timestamp>2013-11-21T12:33:08.486228</state_timestamp>
    <threshold_rule nil="true" />
    <timestamp>2013-11-21T12:33:08.486221</timestamp>
    <type>threshold</type>
    <user_id>c96c887c216949acbdfbd8b494863567</user_id>
  </value>
</values>
@@ -0,0 +1,40 @@
{
  "alarm_storage": {
    "storage:production_ready": true
  },
  "api": {
    "alarms:history:query:complex": true,
    "alarms:history:query:simple": true,
    "alarms:query:complex": true,
    "alarms:query:simple": true,
    "events:query:simple": true,
    "meters:query:complex": false,
    "meters:query:metadata": true,
    "meters:query:simple": true,
    "resources:query:complex": false,
    "resources:query:metadata": true,
    "resources:query:simple": true,
    "samples:query:complex": true,
    "samples:query:metadata": true,
    "samples:query:simple": true,
    "statistics:aggregation:selectable:avg": true,
    "statistics:aggregation:selectable:cardinality": true,
    "statistics:aggregation:selectable:count": true,
    "statistics:aggregation:selectable:max": true,
    "statistics:aggregation:selectable:min": true,
    "statistics:aggregation:selectable:quartile": false,
    "statistics:aggregation:selectable:stddev": true,
    "statistics:aggregation:selectable:sum": true,
    "statistics:aggregation:standard": true,
    "statistics:groupby": true,
    "statistics:query:complex": false,
    "statistics:query:metadata": true,
    "statistics:query:simple": true
  },
  "event_storage": {
    "storage:production_ready": true
  },
  "storage": {
    "storage:production_ready": true
  }
}
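The capability flags above are plain booleans keyed by feature name, so a client can probe them before attempting a complex query. A minimal sketch of that check, using a trimmed copy of the payload shown above (the helper name `supports` is illustrative, not part of the API):

```python
import json

# A trimmed copy of the capabilities payload shown above.
payload = json.loads("""
{
  "api": {
    "statistics:query:complex": false,
    "statistics:groupby": true
  },
  "storage": {"storage:production_ready": true}
}
""")

def supports(caps, flag):
    """Return True only if the API section advertises the given flag."""
    return bool(caps.get("api", {}).get(flag, False))

print(supports(payload, "statistics:groupby"))        # True
print(supports(payload, "statistics:query:complex"))  # False
```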
@@ -0,0 +1,131 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
  <api>
    <item>
      <key>statistics:query:complex</key>
      <value>false</value>
    </item>
    <item>
      <key>alarms:history:query:simple</key>
      <value>true</value>
    </item>
    <item>
      <key>meters:query:metadata</key>
      <value>true</value>
    </item>
    <item>
      <key>alarms:query:simple</key>
      <value>true</value>
    </item>
    <item>
      <key>resources:query:simple</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:quartile</key>
      <value>false</value>
    </item>
    <item>
      <key>statistics:query:simple</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:count</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:min</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:sum</key>
      <value>true</value>
    </item>
    <item>
      <key>alarms:query:complex</key>
      <value>true</value>
    </item>
    <item>
      <key>meters:query:complex</key>
      <value>false</value>
    </item>
    <item>
      <key>statistics:groupby</key>
      <value>true</value>
    </item>
    <item>
      <key>alarms:history:query:complex</key>
      <value>true</value>
    </item>
    <item>
      <key>meters:query:simple</key>
      <value>true</value>
    </item>
    <item>
      <key>samples:query:metadata</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:query:metadata</key>
      <value>true</value>
    </item>
    <item>
      <key>samples:query:simple</key>
      <value>true</value>
    </item>
    <item>
      <key>resources:query:metadata</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:max</key>
      <value>true</value>
    </item>
    <item>
      <key>samples:query:complex</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:standard</key>
      <value>true</value>
    </item>
    <item>
      <key>events:query:simple</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:stddev</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:avg</key>
      <value>true</value>
    </item>
    <item>
      <key>statistics:aggregation:selectable:cardinality</key>
      <value>true</value>
    </item>
    <item>
      <key>resources:query:complex</key>
      <value>false</value>
    </item>
  </api>
  <storage>
    <item>
      <key>storage:production_ready</key>
      <value>true</value>
    </item>
  </storage>
  <alarm_storage>
    <item>
      <key>storage:production_ready</key>
      <value>true</value>
    </item>
  </alarm_storage>
  <event_storage>
    <item>
      <key>storage:production_ready</key>
      <value>true</value>
    </item>
  </event_storage>
</value>
@@ -0,0 +1,18 @@
{
  "raw": {},
  "traits": [
    {
      "type": "string",
      "name": "action",
      "value": "read"
    },
    {
      "type": "string",
      "name": "eventTime",
      "value": "2015-10-28T20:26:58.545477+0000"
    }
  ],
  "generated": "2015-10-28T20:26:58.546933",
  "message_id": "bae43de6-e9fa-44ad-8c15-40a852584444",
  "event_type": "http.request"
}

@@ -0,0 +1,20 @@
[
  {
    "raw": {},
    "traits": [
      {
        "type": "string",
        "name": "action",
        "value": "read"
      },
      {
        "type": "string",
        "name": "eventTime",
        "value": "2015-10-28T20:26:58.545477+0000"
      }
    ],
    "generated": "2015-10-28T20:26:58.546933",
    "message_id": "bae43de6-e9fa-44ad-8c15-40a852584444",
    "event_type": "http.request"
  }
]
@@ -0,0 +1,12 @@
[
  {
    "meter_id": "YmQ5NDMxYzEtOGQ2OS00YWQzLTgwM2EtOGQ0YTZiODlmZDM2K2luc3RhbmNl",
    "name": "instance",
    "project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
    "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
    "source": "openstack",
    "type": "gauge",
    "unit": "instance",
    "user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff"
  }
]

@@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
  <value>
    <name>instance</name>
    <type>gauge</type>
    <unit>instance</unit>
    <resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
    <project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
    <user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
    <source>openstack</source>
    <meter_id>YmQ5NDMxYzEtOGQ2OS00YWQzLTgwM2EtOGQ0YTZiODlmZDM2K2luc3RhbmNl</meter_id>
  </value>
</values>
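The opaque `meter_id` in the meter listing above appears to be the base64 encoding of `<resource_id>+<meter name>`, which is how the v2 API turns the pair into a single addressable identifier. A quick reconstruction from the sample's own fields:

```python
import base64

# Values taken from the meter sample above.
resource_id = "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36"
meter_name = "instance"

# Encode the resource id and meter name the way the sample's meter_id
# appears to have been produced: base64("<resource_id>+<name>").
meter_id = base64.b64encode(
    ("%s+%s" % (resource_id, meter_name)).encode()).decode()

print(meter_id)
# YmQ5NDMxYzEtOGQ2OS00YWQzLTgwM2EtOGQ0YTZiODlmZDM2K2luc3RhbmNl
```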
@@ -0,0 +1,20 @@
{
  "links": [
    {
      "href": "http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
      "rel": "self"
    },
    {
      "href": "http://localhost:8777/v2/meters/volume?q.field=resource_id&q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
      "rel": "volume"
    }
  ],
  "metadata": {
    "name1": "value1",
    "name2": "value2"
  },
  "project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
  "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
  "source": "openstack",
  "user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff"
}

@@ -0,0 +1,27 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
  <resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
  <project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
  <user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
  <metadata>
    <item>
      <key>name2</key>
      <value>value2</value>
    </item>
    <item>
      <key>name1</key>
      <value>value1</value>
    </item>
  </metadata>
  <links>
    <item>
      <href>http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
      <rel>self</rel>
    </item>
    <item>
      <href>http://localhost:8777/v2/meters/volume?q.field=resource_id&amp;q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
      <rel>volume</rel>
    </item>
  </links>
  <source>openstack</source>
</value>
@@ -0,0 +1,22 @@
[
  {
    "links": [
      {
        "href": "http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
        "rel": "self"
      },
      {
        "href": "http://localhost:8777/v2/meters/volume?q.field=resource_id&q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
        "rel": "volume"
      }
    ],
    "metadata": {
      "name1": "value1",
      "name2": "value2"
    },
    "project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
    "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
    "source": "openstack",
    "user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff"
  }
]

@@ -0,0 +1,29 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
  <value>
    <resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
    <project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
    <user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
    <metadata>
      <item>
        <key>name2</key>
        <value>value2</value>
      </item>
      <item>
        <key>name1</key>
        <value>value1</value>
      </item>
    </metadata>
    <links>
      <item>
        <href>http://localhost:8777/v2/resources/bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
        <rel>self</rel>
      </item>
      <item>
        <href>http://localhost:8777/v2/meters/volume?q.field=resource_id&amp;q.value=bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</href>
        <rel>volume</rel>
      </item>
    </links>
    <source>openstack</source>
  </value>
</values>
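The `rel="volume"` link above embeds a simple-query filter as `q.field`/`q.value` pairs. Such a link can be assembled with the standard library; the base URL here is just the one from the sample, not a fixed endpoint:

```python
from urllib.parse import urlencode

# Base URL and resource id taken from the resource sample above.
base = "http://localhost:8777/v2/meters/volume"
query = urlencode({
    "q.field": "resource_id",
    "q.value": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
})
link = "%s?%s" % (base, query)
print(link)
```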
@@ -0,0 +1,17 @@
{
  "id": "8db08c68-bc70-11e4-a8c4-fa163e1d1a9b",
  "metadata": {
    "name1": "value1",
    "name2": "value2"
  },
  "meter": "instance",
  "project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
  "recorded_at": "2015-02-24T22:00:32.747930",
  "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
  "source": "openstack",
  "timestamp": "2015-02-24T22:00:32.747930",
  "type": "gauge",
  "unit": "instance",
  "user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff",
  "volume": 1.0
}

@@ -0,0 +1,23 @@
<value>
  <id>8db08c68-bc70-11e4-a8c4-fa163e1d1a9b</id>
  <meter>instance</meter>
  <type>gauge</type>
  <unit>instance</unit>
  <volume>1.0</volume>
  <user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
  <project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
  <resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
  <source>openstack</source>
  <timestamp>2015-02-24T22:00:32.747930</timestamp>
  <recorded_at>2015-02-24T22:00:32.747930</recorded_at>
  <metadata>
    <item>
      <key>name2</key>
      <value>value2</value>
    </item>
    <item>
      <key>name1</key>
      <value>value1</value>
    </item>
  </metadata>
</value>

@@ -0,0 +1,17 @@
{
  "id": "9b23b398-6139-11e5-97e9-bc764e045bf6",
  "metadata": {
    "name1": "value1",
    "name2": "value2"
  },
  "meter": "instance",
  "project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
  "recorded_at": "2015-09-22T14:52:54.850725",
  "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
  "source": "openstack",
  "timestamp": "2015-09-22T14:52:54.850718",
  "type": "gauge",
  "unit": "instance",
  "user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff",
  "volume": 1
}

@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<value>
  <id>9b23b398-6139-11e5-97e9-bc764e045bf6</id>
  <meter>instance</meter>
  <type>gauge</type>
  <unit>instance</unit>
  <volume>1.0</volume>
  <user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
  <project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
  <resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
  <source>openstack</source>
  <timestamp>2015-09-22T14:52:54.850718</timestamp>
  <recorded_at>2015-09-22T14:52:54.850725</recorded_at>
  <metadata>
    <item>
      <key>name2</key>
      <value>value2</value>
    </item>
    <item>
      <key>name1</key>
      <value>value1</value>
    </item>
  </metadata>
</value>
@@ -0,0 +1,19 @@
[
  {
    "id": "9b23b398-6139-11e5-97e9-bc764e045bf6",
    "metadata": {
      "name1": "value1",
      "name2": "value2"
    },
    "meter": "instance",
    "project_id": "35b17138-b364-4e6a-a131-8f3099c5be68",
    "recorded_at": "2015-09-22T14:52:54.850725",
    "resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36",
    "source": "openstack",
    "timestamp": "2015-09-22T14:52:54.850718",
    "type": "gauge",
    "unit": "instance",
    "user_id": "efd87807-12d2-4b38-9c70-5f5c2ac427ff",
    "volume": 1
  }
]

@@ -0,0 +1,26 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
  <value>
    <id>9b23b398-6139-11e5-97e9-bc764e045bf6</id>
    <meter>instance</meter>
    <type>gauge</type>
    <unit>instance</unit>
    <volume>1.0</volume>
    <user_id>efd87807-12d2-4b38-9c70-5f5c2ac427ff</user_id>
    <project_id>35b17138-b364-4e6a-a131-8f3099c5be68</project_id>
    <resource_id>bd9431c1-8d69-4ad3-803a-8d4a6b89fd36</resource_id>
    <source>openstack</source>
    <timestamp>2015-09-22T14:52:54.850718</timestamp>
    <recorded_at>2015-09-22T14:52:54.850725</recorded_at>
    <metadata>
      <item>
        <key>name2</key>
        <value>value2</value>
      </item>
      <item>
        <key>name1</key>
        <value>value1</value>
      </item>
    </metadata>
  </value>
</values>
@@ -0,0 +1,16 @@
[
  {
    "avg": 4.5,
    "count": 10,
    "duration": 300,
    "duration_end": "2013-01-04T16:47:00",
    "duration_start": "2013-01-04T16:42:00",
    "max": 9,
    "min": 1,
    "period": 7200,
    "period_end": "2013-01-04T18:00:00",
    "period_start": "2013-01-04T16:00:00",
    "sum": 45,
    "unit": "GiB"
  }
]

@@ -0,0 +1,17 @@
<?xml version="1.0" encoding="UTF-8"?>
<values>
  <value>
    <avg>4.5</avg>
    <count>10</count>
    <duration>300.0</duration>
    <duration_end>2013-01-04T16:47:00</duration_end>
    <duration_start>2013-01-04T16:42:00</duration_start>
    <max>9.0</max>
    <min>1.0</min>
    <period>7200</period>
    <period_end>2013-01-04T18:00:00</period_end>
    <period_start>2013-01-04T16:00:00</period_start>
    <sum>45.0</sum>
    <unit>GiB</unit>
  </value>
</values>
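The statistics fields above are mutually consistent: `avg` is `sum / count`, and `duration` is the span in seconds between `duration_start` and `duration_end`. A quick check against the sample values:

```python
from datetime import datetime

# Values copied from the statistics sample above.
stats = {
    "avg": 4.5, "count": 10, "sum": 45,
    "duration": 300,
    "duration_start": "2013-01-04T16:42:00",
    "duration_end": "2013-01-04T16:47:00",
}

fmt = "%Y-%m-%dT%H:%M:%S"
span = (datetime.strptime(stats["duration_end"], fmt)
        - datetime.strptime(stats["duration_start"], fmt)).total_seconds()

# avg is derived from sum and count; duration matches the timestamps.
assert stats["avg"] == stats["sum"] / stats["count"]
assert span == stats["duration"] == 300
```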
@@ -21,7 +21,6 @@ import random

from concurrent import futures
from futurist import periodics
from keystoneauth1 import exceptions as ka_exceptions
from keystoneclient import exceptions as ks_exceptions
from oslo_config import cfg
from oslo_log import log
import oslo_messaging

@@ -233,7 +232,7 @@ class PollingTask(object):

class AgentManager(service_base.PipelineBasedService):

    def __init__(self, namespaces=None, pollster_list=None):
    def __init__(self, namespaces=None, pollster_list=None, worker_id=0):
        namespaces = namespaces or ['compute', 'central']
        pollster_list = pollster_list or []
        group_prefix = cfg.CONF.polling.partitioning_group_prefix

@@ -244,7 +243,7 @@ class AgentManager(service_base.PipelineBasedService):
        if pollster_list and cfg.CONF.coordination.backend_url:
            raise PollsterListForbidden()

        super(AgentManager, self).__init__()
        super(AgentManager, self).__init__(worker_id)

        def _match(pollster):
            """Find out if pollster name matches to one of the list."""

@@ -402,29 +401,28 @@ class AgentManager(service_base.PipelineBasedService):
                    else delay_polling_time)

            @periodics.periodic(spacing=interval, run_immediately=False)
            def task():
                self.interval_task(polling_task)
            def task(running_task):
                self.interval_task(running_task)

            utils.spawn_thread(utils.delayed, delay_time,
                               self.polling_periodics.add, task)
                               self.polling_periodics.add, task, polling_task)

        if data:
            # Don't start useless threads if no task will run
            utils.spawn_thread(self.polling_periodics.start, allow_empty=True)

    def start(self):
        super(AgentManager, self).start()
    def run(self):
        super(AgentManager, self).run()
        self.polling_manager = pipeline.setup_polling()
        self.join_partitioning_groups()
        self.start_polling_tasks()
        self.init_pipeline_refresh()

    def stop(self):
        if self.started:
            self.stop_pollsters_tasks()
            self.heartbeat_timer.stop()
            self.partition_coordinator.stop()
        super(AgentManager, self).stop()
    def terminate(self):
        self.stop_pollsters_tasks()
        self.heartbeat_timer.stop()
        self.partition_coordinator.stop()
        super(AgentManager, self).terminate()

    def interval_task(self, task):
        # NOTE(sileht): remove the previous keystone client

@@ -454,8 +452,7 @@ class AgentManager(service_base.PipelineBasedService):
        try:
            self._keystone = keystone_client.get_client()
            self._keystone_last_exception = None
        except (ka_exceptions.ClientException,
                ks_exceptions.ClientException) as e:
        except ka_exceptions.ClientException as e:
            self._keystone = None
            self._keystone_last_exception = e
        if self._keystone is not None:

@@ -505,8 +502,7 @@ class AgentManager(service_base.PipelineBasedService):
                    resources.extend(partitioned)
                    if discovery_cache is not None:
                        discovery_cache[url] = partitioned
                except (ka_exceptions.ClientException,
                        ks_exceptions.ClientException) as e:
                except ka_exceptions.ClientException as e:
                    LOG.error(_LE('Skipping %(name)s, keystone issue: '
                                  '%(exc)s'), {'name': name, 'exc': e})
                except Exception as err:
@@ -77,6 +77,19 @@ class NotificationBase(PluginBase):
        :param message: Message to process.
        """

    @staticmethod
    def _consume_and_drop(notifications):
        """RPC endpoint for useless notification level"""
        # NOTE(sileht): nothing special todo here, but because we listen
        # for the generic notification exchange we have to consume all its
        # queues

    audit = _consume_and_drop
    debug = _consume_and_drop
    warn = _consume_and_drop
    error = _consume_and_drop
    critical = _consume_and_drop

    def info(self, notifications):
        """RPC endpoint for notification messages at info level
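The `_consume_and_drop` hunk above relies on a simple Python idiom: bind one no-op handler to several attribute names so every notification priority has an endpoint method without duplicating code. The same aliasing works on any class; the class and payloads below are illustrative stand-ins, not the ceilometer objects:

```python
class LevelEndpoint(object):
    """Every priority gets an endpoint method; only 'info' keeps data."""

    def __init__(self):
        self.seen = []

    @staticmethod
    def _consume_and_drop(notifications):
        # Intentionally ignore the payload; the queue still gets drained.
        pass

    # One implementation bound under many endpoint names.
    audit = debug = warn = error = critical = _consume_and_drop

    def info(self, notifications):
        self.seen.extend(notifications)

ep = LevelEndpoint()
ep.debug(["noise"])    # consumed and dropped
ep.info(["sample-1"])  # kept
print(ep.seen)         # ['sample-1']
```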
@@ -60,31 +60,6 @@ class ProjectNotAuthorized(ClientSideError):
            status_code=401)


class AdvEnum(wtypes.wsproperty):
    """Handle default and mandatory for wtypes.Enum."""
    def __init__(self, name, *args, **kwargs):
        self._name = '_advenum_%s' % name
        self._default = kwargs.pop('default', None)
        mandatory = kwargs.pop('mandatory', False)
        enum = wtypes.Enum(*args, **kwargs)
        super(AdvEnum, self).__init__(datatype=enum, fget=self._get,
                                      fset=self._set, mandatory=mandatory)

    def _get(self, parent):
        if hasattr(parent, self._name):
            value = getattr(parent, self._name)
            return value or self._default
        return self._default

    def _set(self, parent, value):
        try:
            if self.datatype.validate(value):
                setattr(parent, self._name, value)
        except ValueError as e:
            raise wsme.exc.InvalidInput(self._name.replace('_advenum_', '', 1),
                                        value, e)


class Base(wtypes.DynamicBase):

    @classmethod
@@ -19,7 +19,7 @@
# under the License.

import datetime
import urllib
from six.moves import urllib

import pecan
from pecan import rest

@@ -124,7 +124,7 @@ class ResourcesController(rest.RestController):
        # In case we have special character in resource id, for example, swift
        # can generate samples with resource id like
        # 29f809d9-88bb-4c40-b1ba-a77a1fcf8ceb/glance
        resource_id = urllib.unquote(resource_id)
        resource_id = urllib.parse.unquote(resource_id)

        authorized_project = rbac.get_limited_to_project(pecan.request.headers)
        resources = list(pecan.request.storage_conn.get_resources(
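The hunk above swaps the Python 2-only `import urllib` for `from six.moves import urllib`, whose `urllib.parse.unquote` resolves to the standard `urllib.parse` module on Python 3. The motivating case is resource ids like `29f809d9-88bb-4c40-b1ba-a77a1fcf8ceb/glance`, which must travel percent-encoded in the URL path:

```python
# On Python 3, six.moves.urllib.parse maps to the stdlib urllib.parse.
from urllib.parse import quote, unquote

resource_id = "29f809d9-88bb-4c40-b1ba-a77a1fcf8ceb/glance"

# The client encodes the '/' so it survives URL routing...
encoded = quote(resource_id, safe="")
print(encoded)  # 29f809d9-88bb-4c40-b1ba-a77a1fcf8ceb%2Fglance

# ...and the controller decodes it back before querying storage.
assert unquote(encoded) == resource_id
```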
@@ -45,6 +45,13 @@ API_OPTS = [
               help=('The endpoint of Aodh to redirect alarms URLs '
                     'to Aodh API. Default autodetection by querying '
                     'keystone.')),
    cfg.BoolOpt('panko_is_enabled',
                help=('Set True to redirect events URLs to Panko. '
                      'Default autodetection by querying keystone.')),
    cfg.StrOpt('panko_url',
               help=('The endpoint of Panko to redirect events URLs '
                     'to Panko API. Default autodetection by querying '
                     'keystone.')),
]

cfg.CONF.register_opts(API_OPTS, group='api')

@@ -64,7 +71,7 @@ def aodh_abort():
        "disabled or unavailable."))


def aodh_redirect(url):
def _redirect(url):
    # NOTE(sileht): we use 307 and not 301 or 302 to allow
    # client to redirect POST/PUT/DELETE/...
    # FIXME(sileht): it would be better to use 308, but webob

@@ -75,14 +82,15 @@ def aodh_redirect(url):


class QueryController(object):
    def __init__(self, gnocchi_is_enabled=False, aodh_url=None):
    def __init__(self, gnocchi_is_enabled=False,
                 aodh_url=None):
        self.gnocchi_is_enabled = gnocchi_is_enabled
        self.aodh_url = aodh_url

    @pecan.expose()
    def _lookup(self, kind, *remainder):
        if kind == 'alarms' and self.aodh_url:
            aodh_redirect(self.aodh_url)
            _redirect(self.aodh_url)
        elif kind == 'alarms':
            aodh_abort()
        elif kind == 'samples' and self.gnocchi_is_enabled:

@@ -96,14 +104,14 @@ class QueryController(object):
class V2Controller(object):
    """Version 2 API controller root."""

    event_types = events.EventTypesController()
    events = events.EventsController()
    capabilities = capabilities.CapabilitiesController()

    def __init__(self):
        self._gnocchi_is_enabled = None
        self._aodh_is_enabled = None
        self._aodh_url = None
        self._panko_is_enabled = None
        self._panko_url = None

    @property
    def gnocchi_is_enabled(self):

@@ -137,13 +145,13 @@ class V2Controller(object):
            if cfg.CONF.api.aodh_is_enabled is False:
                self._aodh_url = ""
            elif cfg.CONF.api.aodh_url is not None:
                self._aodh_url = self._normalize_aodh_url(
                self._aodh_url = self._normalize_url(
                    cfg.CONF.api.aodh_url)
            else:
                try:
                    catalog = keystone_client.get_service_catalog(
                        keystone_client.get_client())
                    self._aodh_url = self._normalize_aodh_url(
                    self._aodh_url = self._normalize_url(
                        catalog.url_for(service_type='alarming'))
                except exceptions.EndpointNotFound:
                    self._aodh_url = ""

@@ -156,6 +164,32 @@ class V2Controller(object):
                        "to aodh endpoint."))
        return self._aodh_url

    @property
    def panko_url(self):
        if self._panko_url is None:
            if cfg.CONF.api.panko_is_enabled is False:
                self._panko_url = ""
            elif cfg.CONF.api.panko_url is not None:
                self._panko_url = self._normalize_url(
                    cfg.CONF.api.panko_url)
            else:
                try:
                    catalog = keystone_client.get_service_catalog(
                        keystone_client.get_client())
                    self._panko_url = self._normalize_url(
                        catalog.url_for(service_type='event'))
                except exceptions.EndpointNotFound:
                    self._panko_url = ""
                except exceptions.ClientException:
                    LOG.warning(
                        _LW("Can't connect to keystone, assuming Panko "
                            "is disabled and retry later."))
                else:
                    LOG.warning(_LW("ceilometer-api started with Panko "
                                    "enabled. Events URLs will be redirected "
                                    "to Panko endpoint."))
        return self._panko_url

    @pecan.expose()
    def _lookup(self, kind, *remainder):
        if (kind in ['meters', 'resources', 'samples']

@@ -181,12 +215,20 @@ class V2Controller(object):
        elif kind == 'alarms' and (not self.aodh_url):
            aodh_abort()
        elif kind == 'alarms' and self.aodh_url:
            aodh_redirect(self.aodh_url)
            _redirect(self.aodh_url)
        elif kind == 'events':
            if self.panko_url:
                return _redirect(self.panko_url)
            return events.EventsController(), remainder
        elif kind == 'event_types':
            if self.panko_url:
                return _redirect(self.panko_url)
            return events.EventTypesController(), remainder
        else:
            pecan.abort(404)

    @staticmethod
    def _normalize_aodh_url(url):
    def _normalize_url(url):
        if url.endswith("/"):
            return url[:-1]
        return url
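The renamed `_normalize_url` helper above strips exactly one trailing slash so redirect targets can be concatenated with request paths. A standalone copy of that logic (note it deliberately differs from `rstrip('/')`, which would eat repeated slashes; the URLs below are placeholders):

```python
def normalize_url(url):
    """Drop a single trailing slash, mirroring the helper in the diff."""
    if url.endswith("/"):
        return url[:-1]
    return url

print(normalize_url("http://aodh.example/"))   # http://aodh.example
print(normalize_url("http://aodh.example"))    # http://aodh.example
print(normalize_url("http://aodh.example//"))  # http://aodh.example/  (only one removed)
```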
@@ -14,8 +14,8 @@
# License for the specific language governing permissions and limitations
# under the License.

import cotyledon
from oslo_config import cfg
from oslo_service import service as os_service

from ceilometer import notification
from ceilometer import service

@@ -25,5 +25,8 @@ CONF = cfg.CONF

def main():
    service.prepare_service()
    os_service.launch(CONF, notification.NotificationService(),
                      workers=CONF.notification.workers).wait()

    sm = cotyledon.ServiceManager()
    sm.add(notification.NotificationService,
           workers=CONF.notification.workers)
    sm.run()
@@ -14,8 +14,8 @@
# License for the specific language governing permissions and limitations
# under the License.

import cotyledon
from oslo_config import cfg
from oslo_service import service as os_service

from ceilometer import collector
from ceilometer import service

@@ -25,5 +25,6 @@ CONF = cfg.CONF

def main():
    service.prepare_service()
    os_service.launch(CONF, collector.CollectorService(),
                      workers=CONF.collector.workers).wait()
    sm = cotyledon.ServiceManager()
    sm.add(collector.CollectorService, workers=CONF.collector.workers)
    sm.run()
@@ -14,9 +14,9 @@
# License for the specific language governing permissions and limitations
# under the License.

import cotyledon
from oslo_config import cfg
from oslo_log import log
from oslo_service import service as os_service

from ceilometer.agent import manager
from ceilometer.i18n import _LW

@@ -78,7 +78,14 @@ CLI_OPTS = [
CONF.register_cli_opts(CLI_OPTS)


def create_polling_service(worker_id):
    return manager.AgentManager(CONF.polling_namespaces,
                                CONF.pollster_list,
                                worker_id)


def main():
    service.prepare_service()
    os_service.launch(CONF, manager.AgentManager(CONF.polling_namespaces,
                                                 CONF.pollster_list)).wait()
    sm = cotyledon.ServiceManager()
    sm.add(create_polling_service)
    sm.run()
@@ -16,6 +16,7 @@
from itertools import chain
import socket

import cotyledon
import msgpack
from oslo_config import cfg
from oslo_log import log

@@ -27,7 +28,6 @@ from ceilometer import dispatcher
from ceilometer.i18n import _, _LE, _LW
from ceilometer import messaging
from ceilometer.publisher import utils as publisher_utils
from ceilometer import service_base
from ceilometer import utils

OPTS = [

@@ -59,17 +59,17 @@ cfg.CONF.import_opt('store_events', 'ceilometer.notification',
LOG = log.getLogger(__name__)


class CollectorService(service_base.ServiceBase):
class CollectorService(cotyledon.Service):
    """Listener for the collector service."""
    def start(self):
    def run(self):
        """Bind the UDP socket and handle incoming data."""
        super(CollectorService, self).run()
        # ensure dispatcher is configured before starting other services
        dispatcher_managers = dispatcher.load_dispatcher_manager()
        (self.meter_manager, self.event_manager) = dispatcher_managers
        self.sample_listener = None
        self.event_listener = None
        self.udp_thread = None
        super(CollectorService, self).start()

        if cfg.CONF.collector.udp_address:
            self.udp_thread = utils.spawn_thread(self.start_udp)

@@ -133,16 +133,15 @@ class CollectorService(service_base.ServiceBase):
            LOG.warning(_LW('sample signature invalid, '
                            'discarding: %s'), sample)

    def stop(self):
        if self.started:
            if self.sample_listener:
                utils.kill_listeners([self.sample_listener])
            if self.event_listener:
                utils.kill_listeners([self.event_listener])
            if self.udp_thread:
                self.udp_run = False
                self.udp_thread.join()
        super(CollectorService, self).stop()
    def terminate(self):
        if self.sample_listener:
            utils.kill_listeners([self.sample_listener])
        if self.event_listener:
            utils.kill_listeners([self.event_listener])
        if self.udp_thread:
            self.udp_run = False
            self.udp_thread.join()
        super(CollectorService, self).terminate()


class CollectorEndpoint(object):
@@ -18,6 +18,7 @@ from oslo_utils import timeutils
import six

from ceilometer.agent import plugin_base
from ceilometer.compute.pollsters import util
from ceilometer.compute.virt import inspector as virt_inspector


@@ -75,3 +76,18 @@ class BaseComputePollster(plugin_base.PollsterBase):
                                       current_time)
        self._last_poll_time = current_time
        return duration

    @staticmethod
    def _get_samples_per_devices(attribute, instance, _name, _type, _unit):
        samples = []
        for disk, value in six.iteritems(attribute):
            samples.append(util.make_sample_from_instance(
                instance,
                name=_name,
                type=_type,
                unit=_unit,
                volume=value,
                resource_id="%s-%s" % (instance.id, disk),
                additional_metadata={'disk_name': disk},
            ))
        return samples
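The refactor above hoists a shared `_get_samples_per_devices` helper into the base pollster: iterate a `{device: value}` mapping and emit one sample per disk, with a `<instance>-<device>` resource id and the device name in metadata. The shape of that transformation, with plain dicts standing in for the real `Sample` objects and instance:

```python
def samples_per_device(per_disk, instance_id, name, sample_type, unit):
    """One measurement record per device, keyed like the pollster's resources."""
    return [
        {
            "name": name,
            "type": sample_type,
            "unit": unit,
            "volume": value,
            # Each device becomes its own resource: "<instance>-<device>".
            "resource_id": "%s-%s" % (instance_id, disk),
            "metadata": {"disk_name": disk},
        }
        for disk, value in sorted(per_disk.items())
    ]

samples = samples_per_device({"vda": 1024, "vdb": 2048},
                             "inst-1", "disk.device.read.bytes",
                             "cumulative", "B")
print([s["resource_id"] for s in samples])  # ['inst-1-vda', 'inst-1-vdb']
```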
@@ -130,21 +130,11 @@ class _Base(pollsters.BaseComputePollster):
                  'device': c_data.per_disk_requests[_metadata].keys()},
         )]
 
-    @staticmethod
-    def _get_samples_per_device(c_data, _attr, instance, _name, _unit):
+    def _get_samples_per_device(self, c_data, _attr, instance, _name, _unit):
         """Return one or more Samples for meter 'disk.device.*'"""
-        samples = []
-        for disk, value in six.iteritems(c_data.per_disk_requests[_attr]):
-            samples.append(util.make_sample_from_instance(
-                instance,
-                name=_name,
-                type=sample.TYPE_CUMULATIVE,
-                unit=_unit,
-                volume=value,
-                resource_id="%s-%s" % (instance.id, disk),
-                additional_metadata={'disk_name': disk},
-            ))
-        return samples
+        return self._get_samples_per_devices(c_data.per_disk_requests[_attr],
+                                             instance, _name,
+                                             sample.TYPE_CUMULATIVE, _unit)
 
     def get_samples(self, manager, cache, resources):
         for instance in resources:
@@ -318,19 +308,9 @@ class _DiskRatesPollsterBase(pollsters.BaseComputePollster):
     def _get_samples_per_device(self, disk_rates_info, _attr, instance,
                                 _name, _unit):
         """Return one or more Samples for meter 'disk.device.*'."""
-        samples = []
-        for disk, value in six.iteritems(disk_rates_info.per_disk_rate[
-                _attr]):
-            samples.append(util.make_sample_from_instance(
-                instance,
-                name=_name,
-                type=sample.TYPE_GAUGE,
-                unit=_unit,
-                volume=value,
-                resource_id="%s-%s" % (instance.id, disk),
-                additional_metadata={'disk_name': disk},
-            ))
-        return samples
+        return self._get_samples_per_devices(
+            disk_rates_info.per_disk_rate[_attr],
+            instance, _name, sample.TYPE_GAUGE, _unit)
 
     def _get_sample_read_and_write(self, instance, _name, _unit, _element,
                                    _attr1, _attr2):
@@ -13,6 +13,9 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+import abc
+import collections
+
 from oslo_log import log
 
 import ceilometer
@@ -26,6 +29,10 @@ from ceilometer import sample
 LOG = log.getLogger(__name__)
 
 
+MemoryBandwidthData = collections.namedtuple('MemoryBandwidthData',
+                                             ['total', 'local'])
+
+
 class MemoryUsagePollster(pollsters.BaseComputePollster):
 
     def get_samples(self, manager, cache, resources):
@@ -117,3 +124,82 @@ class MemoryResidentPollster(pollsters.BaseComputePollster):
                 LOG.exception(_LE('Could not get Resident Memory Usage for '
                                   '%(id)s: %(e)s'), {'id': instance.id,
                                                      'e': err})
+
+
+class _MemoryBandwidthPollster(pollsters.BaseComputePollster):
+
+    CACHE_KEY_MEMORY_BANDWIDTH = 'memory-bandwidth'
+
+    def _populate_cache(self, inspector, cache, instance):
+        i_cache = cache.setdefault(self.CACHE_KEY_MEMORY_BANDWIDTH, {})
+        if instance.id not in i_cache:
+            memory_bandwidth = self.inspector.inspect_memory_bandwidth(
+                instance, self._inspection_duration)
+            i_cache[instance.id] = MemoryBandwidthData(
+                memory_bandwidth.total,
+                memory_bandwidth.local,
+            )
+        return i_cache[instance.id]
+
+    @abc.abstractmethod
+    def _get_samples(self, instance, c_data):
+        """Return one or more Samples."""
+
+    def _get_sample_total_and_local(self, instance, _name, _unit,
+                                    c_data, _element):
+        """Total / local Pollster and return one Sample"""
+        return [util.make_sample_from_instance(
+            instance,
+            name=_name,
+            type=sample.TYPE_GAUGE,
+            unit=_unit,
+            volume=getattr(c_data, _element),
+        )]
+
+    def get_samples(self, manager, cache, resources):
+        self._inspection_duration = self._record_poll_time()
+        for instance in resources:
+            try:
+                c_data = self._populate_cache(
+                    self.inspector,
+                    cache,
+                    instance,
+                )
+                for s in self._get_samples(instance, c_data):
+                    yield s
+            except virt_inspector.InstanceNotFoundException as err:
+                # Instance was deleted while getting samples. Ignore it.
+                LOG.debug('Exception while getting samples %s', err)
+            except virt_inspector.InstanceShutOffException as e:
+                LOG.debug('Instance %(instance_id)s was shut off while '
+                          'getting samples of %(pollster)s: %(exc)s',
+                          {'instance_id': instance.id,
+                           'pollster': self.__class__.__name__, 'exc': e})
+            except virt_inspector.NoDataException as e:
+                LOG.warning(_LW('Cannot inspect data of %(pollster)s for '
+                                '%(instance_id)s, non-fatal reason: %(exc)s'),
+                            {'pollster': self.__class__.__name__,
+                             'instance_id': instance.id, 'exc': e})
+                raise plugin_base.PollsterPermanentError(resources)
+            except ceilometer.NotImplementedError:
+                # Selected inspector does not implement this pollster.
+                LOG.debug('Obtaining memory bandwidth is not implemented'
+                          ' for %s', self.inspector.__class__.__name__)
+            except Exception as err:
+                LOG.exception(_LE('Could not get memory bandwidth for '
+                                  '%(id)s: %(e)s'), {'id': instance.id,
+                                                     'e': err})
+
+
+class MemoryBandwidthTotalPollster(_MemoryBandwidthPollster):
+
+    def _get_samples(self, instance, c_data):
+        return self._get_sample_total_and_local(
+            instance, 'memory.bandwidth.total', 'B/s', c_data, 'total')
+
+
+class MemoryBandwidthLocalPollster(_MemoryBandwidthPollster):
+
+    def _get_samples(self, instance, c_data):
+        return self._get_sample_total_and_local(
+            instance, 'memory.bandwidth.local', 'B/s', c_data, 'local')
@@ -134,6 +134,21 @@ class _RateBase(_Base):
         return info.tx_bytes_rate
 
 
+class _PacketsBase(_Base):
+
+    NET_USAGE_MESSAGE = ' '.join(["NETWORK USAGE:", "%s %s:",
+                                  "read-packets=%d",
+                                  "write-packets=%d"])
+
+    @staticmethod
+    def _get_rx_info(info):
+        return info.rx_packets
+
+    @staticmethod
+    def _get_tx_info(info):
+        return info.tx_packets
+
+
 class IncomingBytesPollster(_Base):
 
     def _get_sample(self, instance, vnic, info):
@@ -147,7 +162,7 @@ class IncomingBytesPollster(_Base):
         )
 
 
-class IncomingPacketsPollster(_Base):
+class IncomingPacketsPollster(_PacketsBase):
 
     def _get_sample(self, instance, vnic, info):
         return self.make_vnic_sample(
@@ -173,7 +188,7 @@ class OutgoingBytesPollster(_Base):
         )
 
 
-class OutgoingPacketsPollster(_Base):
+class OutgoingPacketsPollster(_PacketsBase):
 
     def _get_sample(self, instance, vnic, info):
         return self.make_vnic_sample(
@@ -80,6 +80,14 @@ MemoryResidentStats = collections.namedtuple('MemoryResidentStats',
                                              ['resident'])
 
 
+# Named tuple representing memory bandwidth statistics.
+#
+# total: total system bandwidth from one level of cache
+# local: bandwidth of memory traffic for a memory controller
+#
+MemoryBandwidthStats = collections.namedtuple('MemoryBandwidthStats',
+                                              ['total', 'local'])
+
 # Named tuple representing vNICs.
 #
 # name: the name of the vNIC
@@ -286,6 +294,16 @@ class Inspector(object):
         """
         raise ceilometer.NotImplementedError
 
+    def inspect_memory_bandwidth(self, instance, duration=None):
+        """Inspect the memory bandwidth statistics for an instance.
+
+        :param instance: the target instance
+        :param duration: the last 'n' seconds, over which the value should be
+                         inspected
+        :return:
+        """
+        raise ceilometer.NotImplementedError
+
     def inspect_disk_rates(self, instance, duration=None):
         """Inspect the disk statistics as rates for an instance.
 
@@ -22,7 +22,7 @@ import six
 
 from ceilometer.compute.pollsters import util
 from ceilometer.compute.virt import inspector as virt_inspector
-from ceilometer.i18n import _
+from ceilometer.i18n import _LW, _
 
 libvirt = None
 
@@ -231,21 +231,51 @@ class LibvirtInspector(virt_inspector.Inspector):
 
     def inspect_disk_info(self, instance):
         domain = self._get_domain_not_shut_off_or_raise(instance)
 
         tree = etree.fromstring(domain.XMLDesc(0))
-        for device in filter(
-                bool,
-                [target.get("dev")
-                 for target in tree.findall('devices/disk/target')]):
-            disk = virt_inspector.Disk(device=device)
-            block_info = domain.blockInfo(device)
-            info = virt_inspector.DiskInfo(capacity=block_info[0],
-                                           allocation=block_info[1],
-                                           physical=block_info[2])
-
-            yield (disk, info)
+        for disk in tree.findall('devices/disk'):
+            disk_type = disk.get('type')
+            if disk_type:
+                if disk_type == 'network':
+                    LOG.warning(
+                        _LW('Inspection disk usage of network disk '
+                            '%(instance_uuid)s unsupported by libvirt') % {
+                            'instance_uuid': instance.id})
+                    continue
+                target = disk.find('target')
+                device = target.get('dev')
+                if device:
+                    dsk = virt_inspector.Disk(device=device)
+                    block_info = domain.blockInfo(device)
+                    info = virt_inspector.DiskInfo(capacity=block_info[0],
+                                                   allocation=block_info[1],
+                                                   physical=block_info[2])
+                    yield (dsk, info)
 
     def inspect_memory_resident(self, instance, duration=None):
         domain = self._get_domain_not_shut_off_or_raise(instance)
         memory = domain.memoryStats()['rss'] / units.Ki
         return virt_inspector.MemoryResidentStats(resident=memory)
+
+    def inspect_memory_bandwidth(self, instance, duration=None):
+        domain = self._get_domain_not_shut_off_or_raise(instance)
+
+        try:
+            stats = self.connection.domainListGetStats(
+                [domain], libvirt.VIR_DOMAIN_STATS_PERF)
+            perf = stats[0][1]
+            return virt_inspector.MemoryBandwidthStats(total=perf["perf.mbmt"],
+                                                       local=perf["perf.mbml"])
+        except AttributeError as e:
+            msg = _('Perf is not supported by current version of libvirt, and '
+                    'failed to inspect memory bandwidth of %(instance_uuid)s, '
+                    'can not get info from libvirt: %(error)s') % {
+                'instance_uuid': instance.id, 'error': e}
+            raise virt_inspector.NoDataException(msg)
+        # domainListGetStats might launch an exception if the method or
+        # mbmt/mbml perf event is not supported by the underlying hypervisor
+        # being used by libvirt.
+        except libvirt.libvirtError as e:
+            msg = _('Failed to inspect memory bandwidth of %(instance_uuid)s, '
+                    'can not get info from libvirt: %(error)s') % {
+                'instance_uuid': instance.id, 'error': e}
+            raise virt_inspector.NoDataException(msg)
@@ -120,18 +120,15 @@ class XenapiInspector(virt_inspector.Inspector):
     def inspect_cpu_util(self, instance, duration=None):
         instance_name = util.instance_name(instance)
         vm_ref = self._lookup_by_name(instance_name)
-        metrics_ref = self._call_xenapi("VM.get_metrics", vm_ref)
-        metrics_rec = self._call_xenapi("VM_metrics.get_record",
-                                        metrics_ref)
-        vcpus_number = metrics_rec['VCPUs_number']
-        vcpus_utils = metrics_rec['VCPUs_utilisation']
-        if len(vcpus_utils) == 0:
-            msg = _("Could not get VM %s CPU Utilization") % instance_name
+        vcpus_number = int(self._call_xenapi("VM.get_VCPUs_max", vm_ref))
+        if vcpus_number <= 0:
+            msg = _("Could not get VM %s CPU number") % instance_name
             raise XenapiException(msg)
 
         utils = 0.0
-        for num in range(int(vcpus_number)):
-            utils += vcpus_utils.get(str(num))
+        for index in range(vcpus_number):
+            utils += float(self._call_xenapi("VM.query_data_source",
+                                             vm_ref,
+                                             "cpu%d" % index))
         utils = utils / int(vcpus_number) * 100
         return virt_inspector.CPUUtilStats(util=utils)
 
@@ -61,7 +61,7 @@ class ErrorJoiningPartitioningGroup(Exception):
 class MemberNotInGroupError(Exception):
     def __init__(self, group_id, members, my_id):
         super(MemberNotInGroupError, self).__init__(_LE(
-            'Group ID: %{group_id}s, Members: %{members}s, Me: %{me}s: '
+            'Group ID: %(group_id)s, Members: %(members)s, Me: %(me)s: '
             'Current agent is not part of group and cannot take tasks') %
             {'group_id': group_id, 'members': members, 'me': my_id})
 
@@ -210,7 +210,8 @@ class PartitionCoordinator(object):
             self.join_group(group_id)
         try:
             members = self._get_members(group_id)
-            LOG.debug('Members of group: %s, Me: %s', members, self._my_id)
+            LOG.debug('Members of group %s are: %s, Me: %s',
+                      group_id, members, self._my_id)
             if self._my_id not in members:
                 LOG.warning(_LW('Cannot extract tasks because agent failed to '
                                 'join group properly. Rejoining group.'))
@@ -218,10 +219,14 @@ class PartitionCoordinator(object):
                 members = self._get_members(group_id)
                 if self._my_id not in members:
                     raise MemberNotInGroupError(group_id, members, self._my_id)
+                LOG.debug('Members of group %s are: %s, Me: %s',
+                          group_id, members, self._my_id)
             hr = utils.HashRing(members)
             iterable = list(iterable)
             filtered = [v for v in iterable
                         if hr.get_node(str(v)) == self._my_id]
-            LOG.debug('My subset: %s', [str(f) for f in filtered])
+            LOG.debug('The universal set: %s, my subset: %s',
+                      [str(f) for f in iterable], [str(f) for f in filtered])
             return filtered
         except tooz.coordination.ToozError:
             LOG.exception(_LE('Error getting group membership info from '
@@ -24,8 +24,7 @@ from ceilometer import storage
 LOG = log.getLogger(__name__)
 
 
-class DatabaseDispatcher(dispatcher.MeterDispatcherBase,
-                         dispatcher.EventDispatcherBase):
+class DatabaseDispatcher(dispatcher.Base):
     """Dispatcher class for recording metering data into database.
 
     The dispatcher class which records each meter into a database configured
@@ -39,35 +38,17 @@ class DatabaseDispatcher(dispatcher.Base):
         event_dispatchers = database
     """
 
-    def __init__(self, conf):
-        super(DatabaseDispatcher, self).__init__(conf)
-
-        self._meter_conn = self._get_db_conn('metering', True)
-        self._event_conn = self._get_db_conn('event', True)
-
-    def _get_db_conn(self, purpose, ignore_exception=False):
-        try:
-            return storage.get_connection_from_config(self.conf, purpose)
-        except Exception as err:
-            params = {"purpose": purpose, "err": err}
-            LOG.exception(_LE("Failed to connect to db, purpose %(purpose)s "
-                              "re-try later: %(err)s") % params)
-            if not ignore_exception:
-                raise
-
     @property
-    def meter_conn(self):
-        if not self._meter_conn:
-            self._meter_conn = self._get_db_conn('metering')
+    def conn(self):
+        if not hasattr(self, "_conn"):
+            self._conn = storage.get_connection_from_config(
+                self.conf, self.CONNECTION_TYPE)
+        return self._conn
 
-        return self._meter_conn
 
-    @property
-    def event_conn(self):
-        if not self._event_conn:
-            self._event_conn = self._get_db_conn('event')
-
-        return self._event_conn
+class MeterDatabaseDispatcher(dispatcher.MeterDispatcherBase,
+                              DatabaseDispatcher):
+    CONNECTION_TYPE = 'metering'
 
     def record_metering_data(self, data):
         # We may have receive only one counter on the wire
@@ -91,12 +72,17 @@ class MeterDatabaseDispatcher(dispatcher.MeterDispatcherBase,
             ts = timeutils.parse_isotime(meter['timestamp'])
             meter['timestamp'] = timeutils.normalize_time(ts)
         try:
-            self.meter_conn.record_metering_data_batch(data)
+            self.conn.record_metering_data_batch(data)
         except Exception as err:
             LOG.error(_LE('Failed to record %(len)s: %(err)s.'),
                       {'len': len(data), 'err': err})
             raise
 
+
+class EventDatabaseDispatcher(dispatcher.EventDispatcherBase,
+                              DatabaseDispatcher):
+    CONNECTION_TYPE = 'event'
+
     def record_events(self, events):
         if not isinstance(events, list):
             events = [events]
@@ -119,4 +105,4 @@ class EventDatabaseDispatcher(dispatcher.EventDispatcherBase,
             except Exception:
                 LOG.exception(_LE("Error processing event and it will be "
                                   "dropped: %s"), ev)
-        self.event_conn.record_events(event_list)
+        self.conn.record_events(event_list)
@@ -28,6 +28,7 @@ from keystoneauth1 import session as ka_session
 from oslo_config import cfg
 from oslo_log import log
 from oslo_utils import fnmatch
+from oslo_utils import timeutils
 import requests
 import retrying
 import six
@@ -73,6 +74,9 @@ def cache_key_mangler(key):
     return uuid.uuid5(CACHE_NAMESPACE, key).hex
 
 
+EVENT_CREATE, EVENT_UPDATE, EVENT_DELETE = ("create", "update", "delete")
+
+
 class ResourcesDefinition(object):
 
     MANDATORY_FIELDS = {'resource_type': six.string_types,
@@ -105,13 +109,37 @@ class ResourcesDefinition(object):
         else:
             self.metrics[t] = dict(archive_policy_name=archive_policy)
 
-    def match(self, metric_name):
+    @staticmethod
+    def _ensure_list(value):
+        if isinstance(value, list):
+            return value
+        return [value]
+
+    def metric_match(self, metric_name):
         for t in self.cfg['metrics']:
             if fnmatch.fnmatch(metric_name, t):
                 return True
         return False
 
-    def attributes(self, sample):
+    @property
+    def support_events(self):
+        for e in ["event_create", "event_delete", "event_update"]:
+            if e in self.cfg:
+                return True
+        return False
+
+    def event_match(self, event_type):
+        for e in self._ensure_list(self.cfg.get('event_create', [])):
+            if fnmatch.match(event_type, e):
+                return EVENT_CREATE
+        for e in self._ensure_list(self.cfg.get('event_delete', [])):
+            if fnmatch.match(event_type, e):
+                return EVENT_DELETE
+        for e in self._ensure_list(self.cfg.get('event_update', [])):
+            if fnmatch.match(event_type, e):
+                return EVENT_UPDATE
+
+    def sample_attributes(self, sample):
         attrs = {}
         for name, definition in self._attributes.items():
             value = definition.parse(sample)
@@ -119,6 +147,15 @@ class ResourcesDefinition(object):
             attrs[name] = value
         return attrs
 
+    def event_attributes(self, event):
+        attrs = {}
+        traits = dict([(trait[0], trait[2]) for trait in event['traits']])
+        for attr, field in self.cfg.get('event_attributes', {}).items():
+            value = traits.get(field)
+            if value is not None:
+                attrs[attr] = value
+        return attrs
+
 
 def get_gnocchiclient(conf):
     requests_session = requests.session()
@@ -262,7 +299,7 @@ class GnocchiDispatcher(dispatcher.MeterDispatcherBase):
     def _is_swift_account_sample(self, sample):
         return bool([rd for rd in self.resources_definition
                      if rd.cfg['resource_type'] == 'swift_account'
-                     and rd.match(sample['counter_name'])])
+                     and rd.metric_match(sample['counter_name'])])
 
     def _is_gnocchi_activity(self, sample):
         return (self.filter_service_activity and self.gnocchi_project_id and (
@@ -273,11 +310,17 @@ class GnocchiDispatcher(dispatcher.MeterDispatcherBase):
             self._is_swift_account_sample(sample))
         ))
 
-    def _get_resource_definition(self, metric_name):
+    def _get_resource_definition_from_metric(self, metric_name):
         for rd in self.resources_definition:
-            if rd.match(metric_name):
+            if rd.metric_match(metric_name):
                 return rd
 
+    def _get_resource_definition_from_event(self, event_type):
+        for rd in self.resources_definition:
+            operation = rd.event_match(event_type)
+            if operation:
+                return rd, operation
+
     def record_metering_data(self, data):
         # We may have receive only one counter on the wire
         if not isinstance(data, list):
@@ -309,12 +352,12 @@ class GnocchiDispatcher(dispatcher.MeterDispatcherBase):
             # because batch_resources_metrics_measures exception
             # returns this id and not the ceilometer one
             gnocchi_id = gnocchi_utils.encode_resource_id(resource_id)
-            res_info = gnocchi_data[gnocchi_id] = {}
+            res_info = {}
             for metric_name, samples in metric_grouped_samples:
                 stats['metrics'] += 1
 
                 samples = list(samples)
-                rd = self._get_resource_definition(metric_name)
+                rd = self._get_resource_definition_from_metric(metric_name)
                 if rd is None:
                     LOG.warning(_LW("metric %s is not handled by Gnocchi") %
                                 metric_name)
@@ -332,7 +375,7 @@ class GnocchiDispatcher(dispatcher.MeterDispatcherBase):
 
                 for sample in samples:
                     res_info.setdefault("resource_extra", {}).update(
-                        rd.attributes(sample))
+                        rd.sample_attributes(sample))
                     m = measures.setdefault(gnocchi_id, {}).setdefault(
                         metric_name, [])
                     m.append({'timestamp': sample['timestamp'],
@@ -343,6 +386,8 @@ class GnocchiDispatcher(dispatcher.MeterDispatcherBase):
 
                 stats['measures'] += len(measures[gnocchi_id][metric_name])
                 res_info["resource"].update(res_info["resource_extra"])
+            if res_info:
+                gnocchi_data[gnocchi_id] = res_info
 
         try:
             self.batch_measures(measures, gnocchi_data, stats)
@@ -467,3 +512,35 @@ class GnocchiDispatcher(dispatcher.MeterDispatcherBase):
             return attribute_hash
         else:
             return None
+
+    def record_events(self, events):
+        for event in events:
+            rd = self._get_resource_definition_from_event(event['event_type'])
+            if not rd:
+                LOG.debug("No gnocchi definition for event type: %s",
+                          event['event_type'])
+                continue
+
+            rd, operation = rd
+            resource_type = rd.cfg['resource_type']
+            resource = rd.event_attributes(event)
+
+            if operation == EVENT_DELETE:
+                ended_at = timeutils.utcnow().isoformat()
+                resources_to_end = [resource]
+                extra_resources = cfg['event_associated_resources'].items()
+                for resource_type, filters in extra_resources:
+                    resources_to_end.extend(self._gnocchi.search_resource(
+                        resource_type, filters['query'] % resource['id']))
+                for resource in resources_to_end:
+                    try:
+                        self._gnocchi.update_resource(resource_type,
+                                                      resource['id'],
+                                                      {'ended_at': ended_at})
+                    except gnocchi_exc.NoSuchResource:
+                        LOG.debug(_("Delete event received on unexiting "
+                                    "resource (%s), ignore it.") %
+                                  resource['id'])
+                    except Exception:
+                        LOG.error(_LE("Fail to update the resource %s") %
+                                  resource, exc_info=True)
@@ -16,6 +16,7 @@ import json
 
 from oslo_config import cfg
 from oslo_log import log
+from oslo_utils import strutils
 import requests
 
 from ceilometer import dispatcher
@@ -37,6 +38,10 @@ http_dispatcher_opts = [
                default=5,
                help='The max time in seconds to wait for a request to '
                     'timeout.'),
+    cfg.StrOpt('verify_ssl',
+               help='The path to a server certificate or directory if the '
+                    'system CAs are not used or if a self-signed certificate '
+                    'is used. Set to False to ignore SSL cert verification.'),
 ]
 
 cfg.CONF.register_opts(http_dispatcher_opts, group="dispatcher_http")
@@ -59,6 +64,12 @@ class HttpDispatcher(dispatcher.MeterDispatcherBase,
         target = www.example.com
         event_target = www.example.com
         timeout = 2
+        # No SSL verification
+        #verify_ssl = False
+        # SSL verification with system-installed CAs
+        verify_ssl = True
+        # SSL verification with specific CA or directory of certs
+        #verify_ssl = /path/to/ca_certificate.crt
     """
 
     def __init__(self, conf):
@@ -68,6 +79,11 @@ class HttpDispatcher(dispatcher.MeterDispatcherBase,
         self.target = self.conf.dispatcher_http.target
         self.event_target = (self.conf.dispatcher_http.event_target or
                              self.target)
+        try:
+            self.verify_ssl = strutils.bool_from_string(
+                self.conf.dispatcher_http.verify_ssl, strict=True)
+        except ValueError:
+            self.verify_ssl = self.conf.dispatcher_http.verify_ssl or True
 
     def record_metering_data(self, data):
         if self.target == '':
@@ -91,28 +107,45 @@ class HttpDispatcher(dispatcher.MeterDispatcherBase,
                        'counter_volume': meter['counter_volume']})
             try:
                 # Every meter should be posted to the target
+                meter_json = json.dumps(meter)
+                LOG.trace('Meter Message: %s', meter_json)
                 res = requests.post(self.target,
-                                    data=json.dumps(meter),
+                                    data=meter_json,
                                     headers=self.headers,
+                                    verify=self.verify_ssl,
                                     timeout=self.timeout)
-                LOG.debug('Message posting finished with status code '
+                LOG.debug('Meter message posting finished with status code '
                           '%d.', res.status_code)
-            except Exception as err:
-                LOG.exception(_LE('Failed to record metering data: %s.'), err)
+                res.raise_for_status()
+            except requests.exceptions.HTTPError:
+                LOG.exception(_LE('Status Code: %(code)s. Failed to '
+                                  'dispatch meter: %(meter)s'),
+                              {'code': res.status_code, 'meter': meter})
 
     def record_events(self, events):
         if self.event_target == '':
             # if the event target was not set, do not do anything
             LOG.error(_LE('Dispatcher event target was not set, no event will '
                           'be posted. Set event_target in the ceilometer.conf '
                           'file.'))
             return
 
         if not isinstance(events, list):
            events = [events]
 
         for event in events:
             res = None
             try:
-                res = requests.post(self.event_target, data=event,
+                event_json = json.dumps(event)
+                LOG.trace('Event Message: %s', event_json)
+                res = requests.post(self.event_target,
+                                    data=event_json,
                                     headers=self.headers,
+                                    verify=self.verify_ssl,
                                     timeout=self.timeout)
                 LOG.debug('Event Message posting finished with status code '
                           '%d.', res.status_code)
+                res.raise_for_status()
-            except Exception:
-                error_code = res.status_code if res else 'unknown'
-                LOG.exception(_LE('Status Code: %{code}s. Failed to'
-                                  'dispatch event: %{event}s'),
-                              {'code': error_code, 'event': event})
+            except requests.exceptions.HTTPError:
+                LOG.exception(_LE('Status Code: %(code)s. Failed to '
+                                  'dispatch event: %(event)s'),
+                              {'code': res.status_code, 'event': event})
@@ -142,7 +142,20 @@ class Connection(base.Connection):
         path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                             '..', '..', 'storage', 'sqlalchemy',
                             'migrate_repo')
-        migration.db_sync(self._engine_facade.get_engine(), path)
+        engine = self._engine_facade.get_engine()
+
+        from migrate import exceptions as migrate_exc
+        from migrate.versioning import api
+        from migrate.versioning import repository
+
+        repo = repository.Repository(path)
+        try:
+            api.db_version(engine, repo)
+        except migrate_exc.DatabaseNotControlledError:
+            models.Base.metadata.create_all(engine)
+            api.version_control(engine, repo, repo.latest)
+        else:
+            migration.db_sync(engine, path)
 
     def clear(self):
         engine = self._engine_facade.get_engine()
@@ -32,10 +32,26 @@ OPTS = [
                help='SNMPd user name of all nodes running in the cloud.'),
     cfg.StrOpt('readonly_user_password',
                default='password',
-               help='SNMPd password of all the nodes running in the cloud.',
+               help='SNMPd v3 authentication password of all the nodes '
+                    'running in the cloud.',
                secret=True),
+    cfg.StrOpt('readonly_user_auth_proto',
+               choices=['md5', 'sha'],
+               help='SNMPd v3 authentication algorithm of all the nodes '
+                    'running in the cloud'),
+    cfg.StrOpt('readonly_user_priv_proto',
+               choices=['des', 'aes128', '3des', 'aes192', 'aes256'],
+               help='SNMPd v3 encryption algorithm of all the nodes '
+                    'running in the cloud'),
+    cfg.StrOpt('readonly_user_priv_password',
+               help='SNMPd v3 encryption password of all the nodes '
+                    'running in the cloud.',
+               secret=True),
+
+
 ]
-cfg.CONF.register_opts(OPTS, group='hardware')
+CONF = cfg.CONF
+CONF.register_opts(OPTS, group='hardware')
 
 
 class NodesDiscoveryTripleO(plugin_base.DiscoveryBase):
@@ -49,6 +65,31 @@ class NodesDiscoveryTripleO(plugin_base.DiscoveryBase):
     def _address(instance, field):
         return instance.addresses['ctlplane'][0].get(field)
 
+    @staticmethod
+    def _make_resource_url(ip):
+        params = [('readonly_user_auth_proto', 'auth_proto'),
+                  ('readonly_user_priv_proto', 'priv_proto'),
+                  ('readonly_user_priv_password', 'priv_password')]
+        hwconf = CONF.hardware
+        url = hwconf.url_scheme
+        username = hwconf.readonly_user_name
+        password = hwconf.readonly_user_password
+        if username:
+            url += username
+        if password:
+            url += ':' + password
+        if username or password:
+            url += '@'
+        url += ip
+
+        query = "&".join(
+            param + "=" + hwconf.get(conf) for (conf, param) in params
+            if hwconf.get(conf))
+        if query:
+            url += '?' + query
+
+        return url
+
     def discover(self, manager, param=None):
         """Discover resources to monitor.
 
@@ -75,11 +116,7 @@ class NodesDiscoveryTripleO(plugin_base.DiscoveryBase):
         for instance in self.instances.values():
             try:
                 ip_address = self._address(instance, 'addr')
-                final_address = (
-                    cfg.CONF.hardware.url_scheme +
-                    cfg.CONF.hardware.readonly_user_name + ':' +
-                    cfg.CONF.hardware.readonly_user_password + '@' +
-                    ip_address)
+                final_address = self._make_resource_url(ip_address)
 
                 resource = {
                     'resource_id': instance.id,
@@ -16,13 +16,19 @@
 """Inspector for collecting data over SNMP"""
 
 import copy
-from pysnmp.entity.rfc3413.oneliner import cmdgen
 
+from oslo_log import log
+from pysnmp.entity.rfc3413.oneliner import cmdgen
+from pysnmp.proto import rfc1905
 import six
+import six.moves.urllib.parse as urlparse
 
 from ceilometer.hardware.inspector import base
 
+
+LOG = log.getLogger(__name__)
+
 
 class SNMPException(Exception):
     pass
 
@@ -56,6 +62,22 @@ def parse_snmp_return(ret, is_bulk=False):
 EXACT = 'type_exact'
 PREFIX = 'type_prefix'
 
+_auth_proto_mapping = {
+    'md5': cmdgen.usmHMACMD5AuthProtocol,
+    'sha': cmdgen.usmHMACSHAAuthProtocol,
+}
+_priv_proto_mapping = {
+    'des': cmdgen.usmDESPrivProtocol,
+    'aes128': cmdgen.usmAesCfb128Protocol,
+    '3des': cmdgen.usm3DESEDEPrivProtocol,
+    'aes192': cmdgen.usmAesCfb192Protocol,
+    'aes256': cmdgen.usmAesCfb256Protocol,
+}
+_usm_proto_mapping = {
+    'auth_proto': ('authProtocol', _auth_proto_mapping),
+    'priv_proto': ('privProtocol', _priv_proto_mapping),
+}
+
 
 class SNMPInspector(base.Inspector):
     # Default port
 
@@ -170,18 +192,24 @@ class SNMPInspector(base.Inspector):
         return matched
 
     @staticmethod
-    def get_oid_value(oid_cache, oid_def, suffix=''):
+    def get_oid_value(oid_cache, oid_def, suffix='', host=None):
         oid, converter = oid_def
         value = oid_cache[oid + suffix]
         if converter:
-            value = converter(value)
+            try:
+                value = converter(value)
+            except ValueError:
+                if isinstance(value, rfc1905.NoSuchObject):
+                    LOG.debug("OID %s%s has no value" % (
+                        oid, " on %s" % host.hostname if host else ""))
+                    return None
         return value
 
     @classmethod
-    def construct_metadata(cls, oid_cache, meta_defs, suffix=''):
+    def construct_metadata(cls, oid_cache, meta_defs, suffix='', host=None):
         metadata = {}
         for key, oid_def in six.iteritems(meta_defs):
-            metadata[key] = cls.get_oid_value(oid_cache, oid_def, suffix)
+            metadata[key] = cls.get_oid_value(oid_cache, oid_def, suffix, host)
         return metadata
 
     @classmethod
 
@@ -226,11 +254,11 @@ class SNMPInspector(base.Inspector):
             suffix = oid[len(meter_def['metric_oid'][0]):]
             value = self.get_oid_value(oid_cache,
                                        meter_def['metric_oid'],
-                                       suffix)
+                                       suffix, host)
             # get the metadata for this sample value
             metadata = self.construct_metadata(oid_cache,
                                                meter_def['metadata'],
-                                               suffix)
+                                               suffix, host)
             extra_metadata = copy.deepcopy(input_extra_metadata) or {}
             # call post_op for special cases
             if meter_def['post_op']:
 
@@ -283,9 +311,24 @@ class SNMPInspector(base.Inspector):
 
     @staticmethod
     def _get_auth_strategy(host):
+        options = urlparse.parse_qs(host.query)
+        kwargs = {}
+
+        for key in _usm_proto_mapping:
+            opt = options.get(key, [None])[-1]
+            value = _usm_proto_mapping[key][1].get(opt)
+            if value:
+                kwargs[_usm_proto_mapping[key][0]] = value
+
+        priv_pass = options.get('priv_password', [None])[-1]
+        if priv_pass:
+            kwargs['privKey'] = priv_pass
         if host.password:
+            kwargs['authKey'] = host.password
+
+        if kwargs:
             auth_strategy = cmdgen.UsmUserData(host.username,
-                                               authKey=host.password)
+                                               **kwargs)
         else:
             auth_strategy = cmdgen.CommunityData(host.username or 'public')
         return auth_strategy
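The query-string handling added to `_get_auth_strategy` can be sketched without pysnmp installed. This is a minimal stand-in, assuming placeholder strings in place of the `cmdgen.usm*` protocol constants; only the option-parsing logic mirrors the patch:

```python
from urllib.parse import parse_qs

# Placeholder values standing in for pysnmp's protocol constants
# (cmdgen.usmHMACMD5AuthProtocol and friends in the real module).
AUTH_PROTOS = {'md5': 'usmHMACMD5AuthProtocol', 'sha': 'usmHMACSHAAuthProtocol'}
PRIV_PROTOS = {'des': 'usmDESPrivProtocol', 'aes128': 'usmAesCfb128Protocol'}
USM_MAPPING = {
    'auth_proto': ('authProtocol', AUTH_PROTOS),
    'priv_proto': ('privProtocol', PRIV_PROTOS),
}


def usm_kwargs(query):
    """Translate snmp:// URL query options into UsmUserData keyword args."""
    options = parse_qs(query)
    kwargs = {}
    for key, (kwarg, mapping) in USM_MAPPING.items():
        opt = options.get(key, [None])[-1]  # last occurrence of the option wins
        value = mapping.get(opt)
        if value:
            kwargs[kwarg] = value
    priv_pass = options.get('priv_password', [None])[-1]
    if priv_pass:
        kwargs['privKey'] = priv_pass
    return kwargs
```

Unknown protocol names simply fall through, so an unrecognized `auth_proto` leaves the kwargs empty and the inspector falls back to community-string auth.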
@@ -0,0 +1,43 @@
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import glanceclient
+from oslo_config import cfg
+
+from ceilometer.agent import plugin_base
+from ceilometer import keystone_client
+
+SERVICE_OPTS = [
+    cfg.StrOpt('glance',
+               default='image',
+               help='Glance service type.'),
+]
+
+cfg.CONF.register_opts(SERVICE_OPTS, group='service_types')
+cfg.CONF.import_group('service_credentials', 'ceilometer.keystone_client')
+
+
+class ImagesDiscovery(plugin_base.DiscoveryBase):
+    def __init__(self):
+        super(ImagesDiscovery, self).__init__()
+        conf = cfg.CONF.service_credentials
+        self.glance_client = glanceclient.Client(
+            version='2',
+            session=keystone_client.get_session(),
+            region_name=conf.region_name,
+            interface=conf.interface,
+            service_type=cfg.CONF.service_types.glance)
+
+    def discover(self, manager, param=None):
+        """Discover resources to monitor."""
+        return self.glance_client.images.list()
@@ -17,63 +17,14 @@
 
 from __future__ import absolute_import
 
-import glanceclient
 from oslo_config import cfg
 
 from ceilometer.agent import plugin_base
-from ceilometer import keystone_client
 from ceilometer import sample
 
 
-OPTS = [
-    cfg.IntOpt('glance_page_size',
-               default=0,
-               help="Number of items to request in "
-                    "each paginated Glance API request "
-                    "(parameter used by glanceclient). "
-                    "If this is less than or equal to 0, "
-                    "page size is not specified "
-                    "(default value in glanceclient is used)."),
-]
-
-SERVICE_OPTS = [
-    cfg.StrOpt('glance',
-               default='image',
-               help='Glance service type.'),
-]
-
-cfg.CONF.register_opts(OPTS)
-cfg.CONF.register_opts(SERVICE_OPTS, group='service_types')
-
-
 class _Base(plugin_base.PollsterBase):
 
     @property
     def default_discovery(self):
-        return 'endpoint:%s' % cfg.CONF.service_types.glance
-
-    @staticmethod
-    def get_glance_client(ksclient, endpoint):
-        # hard-code v1 glance API version selection while v2 API matures
-        return glanceclient.Client('1',
-                                   session=keystone_client.get_session(),
-                                   endpoint=endpoint,
-                                   auth=ksclient.session.auth)
-
-    def _get_images(self, ksclient, endpoint):
-        client = self.get_glance_client(ksclient, endpoint)
-        page_size = cfg.CONF.glance_page_size
-        kwargs = {}
-        if page_size > 0:
-            kwargs['page_size'] = page_size
-        return client.images.list(filters={"is_public": None}, **kwargs)
-
-    def _iter_images(self, ksclient, cache, endpoint):
-        """Iterate over all images."""
-        key = '%s-images' % endpoint
-        if key not in cache:
-            cache[key] = list(self._get_images(ksclient, endpoint))
-        return iter(cache[key])
+        return 'images'
 
     @staticmethod
     def extract_image_metadata(image):
 
@@ -81,49 +32,45 @@ class _Base(plugin_base.PollsterBase):
                 for k in
                 [
                     "status",
                     "is_public",
+                    "visibility",
                     "name",
                     "deleted",
                     "container_format",
                     "created_at",
                     "disk_format",
                     "updated_at",
                     "properties",
                     "min_disk",
                     "protected",
                     "checksum",
                     "deleted_at",
                     "min_ram",
-                    "size", ])
+                    "size",
+                    "tags",
+                    "virtual_size"])
 
 
 class ImagePollster(_Base):
     def get_samples(self, manager, cache, resources):
-        for endpoint in resources:
-            for image in self._iter_images(manager.keystone, cache, endpoint):
-                yield sample.Sample(
-                    name='image',
-                    type=sample.TYPE_GAUGE,
-                    unit='image',
-                    volume=1,
-                    user_id=None,
-                    project_id=image.owner,
-                    resource_id=image.id,
-                    resource_metadata=self.extract_image_metadata(image),
-                )
+        for image in resources:
+            yield sample.Sample(
+                name='image',
+                type=sample.TYPE_GAUGE,
+                unit='image',
+                volume=1,
+                user_id=None,
+                project_id=image.owner,
+                resource_id=image.id,
+                resource_metadata=self.extract_image_metadata(image),
+            )
 
 
 class ImageSizePollster(_Base):
     def get_samples(self, manager, cache, resources):
-        for endpoint in resources:
-            for image in self._iter_images(manager.keystone, cache, endpoint):
-                yield sample.Sample(
-                    name='image.size',
-                    type=sample.TYPE_GAUGE,
-                    unit='B',
-                    volume=image.size,
-                    user_id=None,
-                    project_id=image.owner,
-                    resource_id=image.id,
-                    resource_metadata=self.extract_image_metadata(image),
-                )
+        for image in resources:
+            yield sample.Sample(
+                name='image.size',
+                type=sample.TYPE_GAUGE,
+                unit='B',
+                volume=image.size,
+                user_id=None,
+                project_id=image.owner,
+                resource_id=image.id,
+                resource_metadata=self.extract_image_metadata(image),
+            )
@@ -33,12 +33,6 @@ class BaseServicesPollster(plugin_base.PollsterBase):
 
     FIELDS = []
 
-    @staticmethod
-    def _iter_cache(cache, meter_name, method):
-        if meter_name not in cache:
-            cache[meter_name] = list(method())
-        return iter(cache[meter_name])
-
     def extract_metadata(self, metric):
         return dict((k, metric[k]) for k in self.FIELDS)
@@ -392,19 +392,25 @@ class Client(object):
     @logged
     def list_listener(self):
         """This method is used to get the list of the listeners."""
-        resp = self.client.list_listeners()
-        resources = resp.get('listeners')
-        for listener in resources:
-            loadbalancer_id = listener.get('loadbalancers')[0].get('id')
-            status = self._get_listener_status(loadbalancer_id)
-            listener['operating_status'] = status[listener.get('id')]
+        resources = []
+        if self.lb_version == 'v2':
+            # list_listeners works only with lbaas v2 extension
+            resp = self.client.list_listeners()
+            resources = resp.get('listeners')
+            for listener in resources:
+                loadbalancer_id = listener.get('loadbalancers')[0].get('id')
+                status = self._get_listener_status(loadbalancer_id)
+                listener['operating_status'] = status[listener.get('id')]
         return resources
 
     @logged
     def list_loadbalancer(self):
         """This method is used to get the list of the loadbalancers."""
-        resp = self.client.list_loadbalancers()
-        resources = resp.get('loadbalancers')
+        resources = []
+        if self.lb_version == 'v2':
+            # list_loadbalancers works only with lbaas v2 extension
+            resp = self.client.list_loadbalancers()
+            resources = resp.get('loadbalancers')
         return resources
 
     @logged
@@ -146,8 +146,8 @@ class NotificationService(service_base.PipelineBasedService):
 
         return event_pipe_manager
 
-    def start(self):
-        super(NotificationService, self).start()
+    def run(self):
+        super(NotificationService, self).run()
         self.shutdown = False
         self.periodic = None
         self.partition_coordinator = None
 
@@ -311,19 +311,18 @@ class NotificationService(service_base.PipelineBasedService):
             batch_timeout=cfg.CONF.notification.batch_timeout)
         self.pipeline_listener.start()
 
-    def stop(self):
-        if self.started:
-            self.shutdown = True
-            if self.periodic:
-                self.periodic.stop()
-                self.periodic.wait()
-            if self.partition_coordinator:
-                self.partition_coordinator.stop()
-            with self.coord_lock:
-                if self.pipeline_listener:
-                    utils.kill_listeners([self.pipeline_listener])
-                utils.kill_listeners(self.listeners)
-        super(NotificationService, self).stop()
+    def terminate(self):
+        self.shutdown = True
+        if self.periodic:
+            self.periodic.stop()
+            self.periodic.wait()
+        if self.partition_coordinator:
+            self.partition_coordinator.stop()
+        with self.coord_lock:
+            if self.pipeline_listener:
+                utils.kill_listeners([self.pipeline_listener])
+            utils.kill_listeners(self.listeners)
+        super(NotificationService, self).terminate()
 
     def reload_pipeline(self):
         LOG.info(_LI("Reloading notification agent and listeners."))
@@ -40,7 +40,7 @@ SERVICE_OPTS = [
 cfg.CONF.register_opts(OPTS)
 cfg.CONF.register_opts(SERVICE_OPTS, group='service_types')
 cfg.CONF.import_opt('http_timeout', 'ceilometer.service')
-cfg.CONF.import_opt('glance', 'ceilometer.image.glance', 'service_types')
+cfg.CONF.import_opt('glance', 'ceilometer.image.discovery', 'service_types')
 cfg.CONF.import_group('service_credentials', 'ceilometer.keystone_client')
 
 LOG = log.getLogger(__name__)
@@ -34,7 +34,7 @@ import ceilometer.dispatcher.gnocchi
 import ceilometer.energy.kwapi
 import ceilometer.event.converter
 import ceilometer.hardware.discovery
-import ceilometer.image.glance
+import ceilometer.image.discovery
 import ceilometer.ipmi.notifications.ironic
 import ceilometer.ipmi.platform.intel_node_manager
 import ceilometer.ipmi.pollsters
 
@@ -67,7 +67,6 @@ def list_opts():
         ceilometer.compute.virt.inspector.OPTS,
         ceilometer.compute.virt.libvirt.inspector.OPTS,
         ceilometer.dispatcher.OPTS,
-        ceilometer.image.glance.OPTS,
         ceilometer.ipmi.notifications.ironic.OPTS,
         ceilometer.middleware.OPTS,
         ceilometer.network.notifications.OPTS,
 
@@ -113,7 +112,7 @@ def list_opts():
             loading.get_auth_plugin_conf_options('password'))),
         ('service_types',
          itertools.chain(ceilometer.energy.kwapi.SERVICE_OPTS,
-                         ceilometer.image.glance.SERVICE_OPTS,
+                         ceilometer.image.discovery.SERVICE_OPTS,
                          ceilometer.neutron_client.SERVICE_OPTS,
                          ceilometer.nova_client.SERVICE_OPTS,
                          ceilometer.objectstore.rgw.SERVICE_OPTS,
@@ -703,7 +703,7 @@ class PipelineManager(object):
 
             unique_names = set()
             sources = []
-            for s in cfg.get('sources', []):
+            for s in cfg.get('sources'):
                 name = s.get('name')
                 if name in unique_names:
                     raise PipelineException("Duplicated source names: %s" %
 
@@ -714,7 +714,7 @@ class PipelineManager(object):
             unique_names.clear()
 
             sinks = {}
-            for s in cfg.get('sinks', []):
+            for s in cfg.get('sinks'):
                 name = s.get('name')
                 if name in unique_names:
                     raise PipelineException("Duplicated sink names: %s" %
 
@@ -764,7 +764,7 @@ class PollingManager(object):
             LOG.info(_LI('detected decoupled pipeline config format'))
 
             unique_names = set()
-            for s in cfg.get('sources', []):
+            for s in cfg.get('sources'):
                 name = s.get('name')
                 if name in unique_names:
                     raise PipelineException("Duplicated source names: %s" %
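The pipeline hunks above drop the `[]` default from `cfg.get(...)`, so a configuration missing its `sources` or `sinks` section now fails loudly instead of being silently treated as empty. A minimal sketch of that validation logic (`validate_names` is a hypothetical helper, not ceilometer code):

```python
class PipelineException(Exception):
    pass


def validate_names(cfg, section):
    """Reject duplicate source/sink names, as PipelineManager does.

    Like the patched code, cfg.get(section) has no default: a config
    without the section raises (iterating None) rather than passing.
    """
    unique_names = set()
    for s in cfg.get(section):
        name = s.get('name')
        if name in unique_names:
            raise PipelineException(
                "Duplicated %s names: %s" % (section, name))
        unique_names.add(name)
    return unique_names
```

With the old `cfg.get('sources', [])`, a typo like `source:` in pipeline.yaml produced an empty, silently non-functional pipeline; the stricter form surfaces the mistake at load time.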
@@ -12,48 +12,85 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
 
 from oslo_config import cfg
-from oslo_utils import timeutils
+from oslo_log import log
+import six.moves.urllib.parse as urlparse
+from stevedore import driver
+import stevedore.exception
 
-from ceilometer.dispatcher import database
+from ceilometer.i18n import _LE
 from ceilometer import publisher
 from ceilometer.publisher import utils
 
+LOG = log.getLogger(__name__)
+
 
 class DirectPublisher(publisher.PublisherBase):
     """A publisher that allows saving directly from the pipeline.
 
-    Samples are saved to the currently configured database by hitching
-    a ride on the DatabaseDispatcher. This is useful where it is desirable
-    to limit the number of external services that are required.
-    """
+    Samples are saved to a configured dispatcher. This is useful
+    where it is desirable to limit the number of external services that
+    are required.
+
+    By default, the database dispatcher is used to select another one we
+    can use direct://?dispatcher=gnocchi, direct://?dispatcher=http,
+    direct://?dispatcher=log, ...
+    """
     def __init__(self, parsed_url):
         super(DirectPublisher, self).__init__(parsed_url)
-        dispatcher = database.DatabaseDispatcher(cfg.CONF)
-        self.meter_conn = dispatcher.meter_conn
-        self.event_conn = dispatcher.event_conn
+        options = urlparse.parse_qs(parsed_url.query)
+        self.dispatcher_name = options.get('dispatcher', ['database'])[-1]
+        self._sample_dispatcher = None
+        self._event_dispatcher = None
+
+        try:
+            self.sample_driver = driver.DriverManager(
+                'ceilometer.dispatcher.meter', self.dispatcher_name).driver
+        except stevedore.exception.NoMatches:
+            self.sample_driver = None
+
+        try:
+            self.event_driver = driver.DriverManager(
+                'ceilometer.dispatcher.event', self.dispatcher_name).driver
+        except stevedore.exception.NoMatches:
+            self.event_driver = None
+
+    def get_sample_dispatcher(self):
+        if not self._sample_dispatcher:
+            self._sample_dispatcher = self.sample_driver(cfg.CONF)
+        return self._sample_dispatcher
+
+    def get_event_dispatcher(self):
+        if not self._event_dispatcher:
+            if self.event_driver != self.sample_driver:
+                self._event_dispatcher = self.event_driver(cfg.CONF)
+            else:
+                self._event_dispatcher = self.get_sample_dispatcher()
+        return self._event_dispatcher
 
     def publish_samples(self, samples):
+        if not self.sample_driver:
+            LOG.error(_LE("Can't publish samples to a non-existing dispatcher "
+                          "'%s'"), self.dispatcher_name)
+            return
+
         if not isinstance(samples, list):
             samples = [samples]
 
         # Transform the Sample objects into a list of dicts
-        meters = [
+        self.get_sample_dispatcher().record_metering_data([
             utils.meter_message_from_counter(
                 sample, cfg.CONF.publisher.telemetry_secret)
             for sample in samples
-        ]
-
-        for meter in meters:
-            if meter.get('timestamp'):
-                ts = timeutils.parse_isotime(meter['timestamp'])
-                meter['timestamp'] = timeutils.normalize_time(ts)
-            self.meter_conn.record_metering_data(meter)
+        ])
 
     def publish_events(self, events):
+        if not self.event_driver:
+            LOG.error(_LE("Can't publish events to a non-existing dispatcher"
+                          "'%s'"), self.dispatcher_name)
+            return
+
         if not isinstance(events, list):
             events = [events]
 
-        self.event_conn.record_events(events)
+        self.get_event_dispatcher().record_events([
+            utils.message_from_event(
+                event, cfg.CONF.publisher.telemetry_secret)
+            for event in events])
@@ -192,9 +192,16 @@ class NotifierPublisher(MessagingPublisher):
     def __init__(self, parsed_url, default_topic):
         super(NotifierPublisher, self).__init__(parsed_url)
         options = urlparse.parse_qs(parsed_url.query)
-        topic = options.get('topic', [default_topic])
+        topic = options.pop('topic', [default_topic])
+        driver = options.pop('driver', ['rabbit'])[0]
+        url = None
+        if parsed_url.netloc != '':
+            url = urlparse.urlunsplit([driver, parsed_url.netloc,
+                                       parsed_url.path,
+                                       urlparse.urlencode(options, True),
+                                       parsed_url.fragment])
         self.notifier = oslo_messaging.Notifier(
-            messaging.get_transport(),
+            messaging.get_transport(url),
            driver=cfg.CONF.publisher_notifier.telemetry_driver,
            publisher_id='telemetry.publisher.%s' % cfg.CONF.host,
            topics=topic,
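The `NotifierPublisher` hunk rebuilds an oslo.messaging transport URL from the publisher URL, popping `topic` and `driver` out of the query string so only the remaining options travel with the transport. A sketch of that URL surgery using the stdlib `urllib.parse` in place of `six.moves` (`transport_url` is a hypothetical helper name):

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit


def transport_url(publisher_url, default_topic='metering'):
    """Derive (transport_url, topic) from a notifier:// publisher URL."""
    parsed = urlsplit(publisher_url)
    options = parse_qs(parsed.query)
    topic = options.pop('topic', [default_topic])
    driver = options.pop('driver', ['rabbit'])[0]
    url = None
    if parsed.netloc != '':
        # remaining query options (e.g. transport settings) stay on the URL
        url = urlunsplit([driver, parsed.netloc, parsed.path,
                          urlencode(options, True), parsed.fragment])
    return url, topic[0]
```

When the publisher URL has no netloc (plain `notifier://`), the transport URL stays `None` and oslo.messaging falls back to the globally configured transport, which preserves the pre-patch behavior.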
@@ -23,7 +23,7 @@ from oslo_log import log
 from oslo_utils import netutils
 
 import ceilometer
-from ceilometer.i18n import _
+from ceilometer.i18n import _, _LW
 from ceilometer import publisher
 from ceilometer.publisher import utils
 
 
@@ -38,9 +38,22 @@ class UDPPublisher(publisher.PublisherBase):
         self.host, self.port = netutils.parse_host_port(
             parsed_url.netloc,
             default_port=cfg.CONF.collector.udp_port)
-        if netutils.is_valid_ipv6(self.host):
-            addr_family = socket.AF_INET6
-        else:
+        addrinfo = None
+        try:
+            addrinfo = socket.getaddrinfo(self.host, None, socket.AF_INET6,
+                                          socket.SOCK_DGRAM)[0]
+        except socket.gaierror:
+            try:
+                addrinfo = socket.getaddrinfo(self.host, None, socket.AF_INET,
+                                              socket.SOCK_DGRAM)[0]
+            except socket.gaierror:
+                pass
+        if addrinfo:
+            addr_family = addrinfo[0]
+        else:
+            LOG.warning(_LW(
+                "Cannot resolve host %s, creating AF_INET socket..."),
+                self.host)
             addr_family = socket.AF_INET
         self.socket = socket.socket(addr_family,
                                     socket.SOCK_DGRAM)
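The UDP publisher change replaces the literal-address check (`is_valid_ipv6`) with real resolution, so hostnames as well as IP literals pick the right address family. The fallback order can be condensed into a small helper; this is a sketch mirroring the patched logic, not the ceilometer API:

```python
import socket


def resolve_family(host):
    """Pick a socket family for host, trying IPv6 first like UDPPublisher."""
    for family in (socket.AF_INET6, socket.AF_INET):
        try:
            # getaddrinfo handles hostnames and IP literals alike
            return socket.getaddrinfo(host, None, family,
                                      socket.SOCK_DGRAM)[0][0]
        except socket.gaierror:
            continue
    # unresolvable: default to AF_INET, as the patched publisher does
    return socket.AF_INET
```

The old code mis-handled a hostname with only AAAA records (it would create an `AF_INET` socket); resolving first makes the family follow what DNS actually returns.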
@@ -15,9 +15,9 @@
 
 import abc
 
+import cotyledon
 from oslo_config import cfg
 from oslo_log import log
-from oslo_service import service as os_service
 import six
 
 from ceilometer.i18n import _LE, _LI
 
@@ -27,18 +27,8 @@ from ceilometer import utils
 LOG = log.getLogger(__name__)
 
 
-class ServiceBase(os_service.Service):
-    def __init__(self):
-        self.started = False
-        super(ServiceBase, self).__init__()
-
-    def start(self):
-        self.started = True
-        super(ServiceBase, self).start()
-
-
 @six.add_metaclass(abc.ABCMeta)
-class PipelineBasedService(ServiceBase):
+class PipelineBasedService(cotyledon.Service):
     def clear_pipeline_validation_status(self):
         """Clears pipeline validation status flags."""
         self.pipeline_validated = False
 
@@ -65,11 +55,10 @@ class PipelineBasedService(ServiceBase):
             spacing=cfg.CONF.pipeline_polling_interval)
         utils.spawn_thread(self.refresh_pipeline_periodic.start)
 
-    def stop(self):
-        if self.started and self.refresh_pipeline_periodic:
+    def terminate(self):
+        if self.refresh_pipeline_periodic:
             self.refresh_pipeline_periodic.stop()
             self.refresh_pipeline_periodic.wait()
-        super(PipelineBasedService, self).stop()
 
     def get_pipeline_mtime(self, p_type=pipeline.SAMPLE_TYPE):
         return (self.event_pipeline_mtime if p_type == pipeline.EVENT_TYPE else
@@ -237,7 +237,20 @@ class Connection(base.Connection):
         from oslo_db.sqlalchemy import migration
         path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                             'sqlalchemy', 'migrate_repo')
-        migration.db_sync(self._engine_facade.get_engine(), path)
+        engine = self._engine_facade.get_engine()
+
+        from migrate import exceptions as migrate_exc
+        from migrate.versioning import api
+        from migrate.versioning import repository
+
+        repo = repository.Repository(path)
+        try:
+            api.db_version(engine, repo)
+        except migrate_exc.DatabaseNotControlledError:
+            models.Base.metadata.create_all(engine)
+            api.version_control(engine, repo, repo.latest)
+        else:
+            migration.db_sync(engine, path)
 
     def clear(self):
         engine = self._engine_facade.get_engine()
@@ -34,7 +34,7 @@ from ceilometer import utils
 class JSONEncodedDict(TypeDecorator):
     """Represents an immutable structure as a json-encoded string."""
 
-    impl = String
+    impl = Text
 
     @staticmethod
     def process_bind_param(value, dialect):
 
@@ -78,10 +78,12 @@ class PreciseTimestamp(TypeDecorator):
         return value
 
 
+_COMMON_TABLE_ARGS = {'mysql_charset': "utf8", 'mysql_engine': "InnoDB"}
+
+
 class CeilometerBase(object):
     """Base class for Ceilometer Models."""
-    __table_args__ = {'mysql_charset': "utf8",
-                      'mysql_engine': "InnoDB"}
+    __table_args__ = _COMMON_TABLE_ARGS
     __table_initialized__ = False
 
     def __setitem__(self, key, value):
 
@@ -105,6 +107,7 @@ class MetaText(Base):
     __tablename__ = 'metadata_text'
     __table_args__ = (
         Index('ix_meta_text_key', 'meta_key'),
+        _COMMON_TABLE_ARGS,
     )
     id = Column(Integer, ForeignKey('resource.internal_id'), primary_key=True)
     meta_key = Column(String(255), primary_key=True)
 
@@ -117,6 +120,7 @@ class MetaBool(Base):
     __tablename__ = 'metadata_bool'
     __table_args__ = (
         Index('ix_meta_bool_key', 'meta_key'),
+        _COMMON_TABLE_ARGS,
     )
     id = Column(Integer, ForeignKey('resource.internal_id'), primary_key=True)
     meta_key = Column(String(255), primary_key=True)
 
@@ -129,6 +133,7 @@ class MetaBigInt(Base):
     __tablename__ = 'metadata_int'
     __table_args__ = (
         Index('ix_meta_int_key', 'meta_key'),
+        _COMMON_TABLE_ARGS,
     )
     id = Column(Integer, ForeignKey('resource.internal_id'), primary_key=True)
     meta_key = Column(String(255), primary_key=True)
 
@@ -141,6 +146,7 @@ class MetaFloat(Base):
     __tablename__ = 'metadata_float'
     __table_args__ = (
         Index('ix_meta_float_key', 'meta_key'),
+        _COMMON_TABLE_ARGS,
     )
     id = Column(Integer, ForeignKey('resource.internal_id'), primary_key=True)
     meta_key = Column(String(255), primary_key=True)
 
@@ -154,6 +160,7 @@ class Meter(Base):
     __table_args__ = (
         UniqueConstraint('name', 'type', 'unit', name='def_unique'),
         Index('ix_meter_name', 'name'),
+        _COMMON_TABLE_ARGS,
     )
     id = Column(Integer, primary_key=True)
     name = Column(String(255), nullable=False)
 
@@ -175,6 +182,7 @@ class Resource(Base):
         # name='res_def_unique'),
         Index('ix_resource_resource_id', 'resource_id'),
         Index('ix_resource_metadata_hash', 'metadata_hash'),
+        _COMMON_TABLE_ARGS,
     )
 
     internal_id = Column(Integer, primary_key=True)
 
@@ -209,7 +217,8 @@ class Sample(Base):
         Index('ix_sample_timestamp', 'timestamp'),
         Index('ix_sample_resource_id', 'resource_id'),
         Index('ix_sample_meter_id', 'meter_id'),
-        Index('ix_sample_meter_id_resource_id', 'meter_id', 'resource_id')
+        Index('ix_sample_meter_id_resource_id', 'meter_id', 'resource_id'),
+        _COMMON_TABLE_ARGS,
     )
     id = Column(Integer, primary_key=True)
     meter_id = Column(Integer, ForeignKey('meter.id'))
 
@@ -260,7 +269,8 @@ class Event(Base):
     __table_args__ = (
         Index('ix_event_message_id', 'message_id'),
         Index('ix_event_type_id', 'event_type_id'),
-        Index('ix_event_generated', 'generated')
+        Index('ix_event_generated', 'generated'),
+        _COMMON_TABLE_ARGS,
     )
     id = Column(Integer, primary_key=True)
     message_id = Column(String(50), unique=True)
 
@@ -289,6 +299,7 @@ class TraitText(Base):
     __tablename__ = 'trait_text'
     __table_args__ = (
         Index('ix_trait_text_event_id_key', 'event_id', 'key'),
+        _COMMON_TABLE_ARGS,
     )
     event_id = Column(Integer, ForeignKey('event.id'), primary_key=True)
     key = Column(String(255), primary_key=True)
 
@@ -301,6 +312,7 @@ class TraitInt(Base):
     __tablename__ = 'trait_int'
     __table_args__ = (
         Index('ix_trait_int_event_id_key', 'event_id', 'key'),
+        _COMMON_TABLE_ARGS,
     )
     event_id = Column(Integer, ForeignKey('event.id'), primary_key=True)
     key = Column(String(255), primary_key=True)
 
@@ -313,6 +325,7 @@ class TraitFloat(Base):
     __tablename__ = 'trait_float'
     __table_args__ = (
         Index('ix_trait_float_event_id_key', 'event_id', 'key'),
+        _COMMON_TABLE_ARGS,
    )
     event_id = Column(Integer, ForeignKey('event.id'), primary_key=True)
     key = Column(String(255), primary_key=True)
 
@@ -325,6 +338,7 @@ class TraitDatetime(Base):
     __tablename__ = 'trait_datetime'
     __table_args__ = (
         Index('ix_trait_datetime_event_id_key', 'event_id', 'key'),
+        _COMMON_TABLE_ARGS,
     )
     event_id = Column(Integer, ForeignKey('event.id'), primary_key=True)
     key = Column(String(255), primary_key=True)
@@ -36,11 +36,16 @@ class TestAPIUpgradePath(v2.FunctionalTest):
         self.CONF.set_override('aodh_is_enabled', True, group='api')
         self.CONF.set_override('aodh_url', 'http://alarm-endpoint:8008/',
                                group='api')
+        self.CONF.set_override('panko_is_enabled', True, group='api')
+        self.CONF.set_override('panko_url', 'http://event-endpoint:8009/',
+                               group='api')
 
     def _setup_keystone_mock(self):
         self.CONF.set_override('gnocchi_is_enabled', None, group='api')
         self.CONF.set_override('aodh_is_enabled', None, group='api')
         self.CONF.set_override('aodh_url', None, group='api')
+        self.CONF.set_override('panko_is_enabled', None, group='api')
+        self.CONF.set_override('panko_url', None, group='api')
         self.CONF.set_override('meter_dispatchers', ['database'])
         self.ks = mock.Mock()
         self.catalog = (self.ks.session.auth.get_access.
 
@@ -55,6 +60,8 @@ class TestAPIUpgradePath(v2.FunctionalTest):
                 return 'http://gnocchi/'
             elif service_type == 'alarming':
                 return 'http://alarm-endpoint:8008/'
+            elif service_type == 'event':
+                return 'http://event-endpoint:8009/'
 
     def _do_test_gnocchi_enabled_without_database_backend(self):
         self.CONF.set_override('meter_dispatchers', 'gnocchi')
 
@@ -63,14 +70,6 @@ class TestAPIUpgradePath(v2.FunctionalTest):
                           status=410)
         self.assertIn(b'Gnocchi API', response.body)
 
-        headers_events = {"X-Roles": "admin",
-                          "X-User-Id": "user1",
-                          "X-Project-Id": "project1"}
-        for endpoint in ['events', 'event_types']:
-            self.app.get(self.PATH_PREFIX + '/' + endpoint,
-                         headers=headers_events,
-                         status=200)
-
         response = self.post_json('/query/samples',
                                   params={
                                       "filter": '{"=": {"type": "creation"}}',
 
@@ -125,13 +124,40 @@ class TestAPIUpgradePath(v2.FunctionalTest):
         self.assertEqual("http://alarm-endpoint:8008/v2/query/alarms",
                          response.headers['Location'])
 
+    def _do_test_event_redirect(self):
+        response = self.app.get(self.PATH_PREFIX + '/events',
+                                expect_errors=True)
+
+        self.assertEqual(307, response.status_code)
+        self.assertEqual("http://event-endpoint:8009/v2/events",
+                         response.headers['Location'])
+
+        response = self.app.get(self.PATH_PREFIX + '/events/uuid',
+                                expect_errors=True)
+
+        self.assertEqual(307, response.status_code)
+        self.assertEqual("http://event-endpoint:8009/v2/events/uuid",
+                         response.headers['Location'])
+
+        response = self.app.delete(self.PATH_PREFIX + '/events/uuid',
+                                   expect_errors=True)
+
+        self.assertEqual(307, response.status_code)
+        self.assertEqual("http://event-endpoint:8009/v2/events/uuid",
+                         response.headers['Location'])
+
+        response = self.app.get(self.PATH_PREFIX + '/event_types',
+                                expect_errors=True)
+
+        self.assertEqual(307, response.status_code)
+        self.assertEqual("http://event-endpoint:8009/v2/event_types",
+                         response.headers['Location'])
+
     def test_gnocchi_enabled_without_database_backend_keystone(self):
         self._setup_keystone_mock()
         self._do_test_gnocchi_enabled_without_database_backend()
-        self.catalog.url_for.assert_has_calls([
-            mock.call(service_type="alarming"),
-            mock.call(service_type="metric")],
-            any_order=True)
+        self.catalog.url_for.assert_has_calls(
+            [mock.call(service_type="metric")])
 
     def test_gnocchi_enabled_without_database_backend_configoptions(self):
         self._setup_osloconfig_options()
 
@@ -140,9 +166,19 @@ class TestAPIUpgradePath(v2.FunctionalTest):
     def test_alarm_redirect_keystone(self):
         self._setup_keystone_mock()
         self._do_test_alarm_redirect()
-        self.assertEqual([mock.call(service_type="alarming")],
-                         self.catalog.url_for.mock_calls)
+        self.catalog.url_for.assert_has_calls(
+            [mock.call(service_type="alarming")])
+
+    def test_event_redirect_keystone(self):
+        self._setup_keystone_mock()
+        self._do_test_event_redirect()
+        self.catalog.url_for.assert_has_calls(
+            [mock.call(service_type="event")])
 
     def test_alarm_redirect_configoptions(self):
         self._setup_osloconfig_options()
         self._do_test_alarm_redirect()
+
+    def test_event_redirect_configoptions(self):
+        self._setup_osloconfig_options()
+        self._do_test_event_redirect()
@@ -89,7 +89,7 @@ class TestEventDirectPublisher(tests_db.TestBase):
                      for i in range(0, 5)]
 
     def test_direct_publisher(self):
-        parsed_url = netutils.urlsplit('direct://')
+        parsed_url = netutils.urlsplit('direct://dispatcher=database')
         publisher = direct.DirectPublisher(parsed_url)
         publisher.publish_events(self.test_data)
 
@@ -76,7 +76,7 @@ class TestCollector(tests_base.BaseTestCase):
             ),
             'not-so-secret')
 
-        self.srv = collector.CollectorService()
+        self.srv = collector.CollectorService(0)
 
     def _setup_messaging(self, enabled=True):
         if enabled:
@@ -121,8 +121,8 @@ class TestCollector(tests_base.BaseTestCase):
 
         with mock.patch('socket.socket') as mock_socket:
             mock_socket.return_value = udp_socket
-            self.srv.start()
-            self.addCleanup(self.srv.stop)
+            self.srv.run()
+            self.addCleanup(self.srv.terminate)
             self.srv.udp_thread.join(5)
             self.assertFalse(self.srv.udp_thread.is_alive())
             mock_socket.assert_called_with(socket.AF_INET, socket.SOCK_DGRAM)
@@ -139,8 +139,8 @@ class TestCollector(tests_base.BaseTestCase):
 
         with mock.patch.object(socket, 'socket') as mock_socket:
             mock_socket.return_value = sock
-            self.srv.start()
-            self.addCleanup(self.srv.stop)
+            self.srv.run()
+            self.addCleanup(self.srv.terminate)
             self.srv.udp_thread.join(5)
             self.assertFalse(self.srv.udp_thread.is_alive())
             mock_socket.assert_called_with(socket.AF_INET6, socket.SOCK_DGRAM)
@@ -153,8 +153,8 @@ class TestCollector(tests_base.BaseTestCase):
 
         udp_socket = self._make_fake_socket(self.sample)
         with mock.patch('socket.socket', return_value=udp_socket):
-            self.srv.start()
-            self.addCleanup(self.srv.stop)
+            self.srv.run()
+            self.addCleanup(self.srv.terminate)
             self.srv.udp_thread.join(5)
             self.assertFalse(self.srv.udp_thread.is_alive())
 
@@ -172,8 +172,8 @@ class TestCollector(tests_base.BaseTestCase):
         udp_socket = self._make_fake_socket(self.sample)
         with mock.patch('socket.socket', return_value=udp_socket):
             with mock.patch('msgpack.loads', self._raise_error):
-                self.srv.start()
-                self.addCleanup(self.srv.stop)
+                self.srv.run()
+                self.addCleanup(self.srv.terminate)
                 self.srv.udp_thread.join(5)
                 self.assertFalse(self.srv.udp_thread.is_alive())
 
@@ -189,8 +189,8 @@ class TestCollector(tests_base.BaseTestCase):
         with mock.patch.object(oslo_messaging.MessageHandlingServer,
                                'start', side_effect=real_start) as rpc_start:
             with mock.patch('socket.socket', return_value=udp_socket):
-                self.srv.start()
-                self.addCleanup(self.srv.stop)
+                self.srv.run()
+                self.addCleanup(self.srv.terminate)
                 self.srv.udp_thread.join(5)
                 self.assertFalse(self.srv.udp_thread.is_alive())
                 self.assertEqual(0, rpc_start.call_count)
@@ -202,8 +202,8 @@ class TestCollector(tests_base.BaseTestCase):
         self.data_sent = []
         with mock.patch('socket.socket',
                         return_value=self._make_fake_socket(self.utf8_msg)):
-            self.srv.start()
-            self.addCleanup(self.srv.stop)
+            self.srv.run()
+            self.addCleanup(self.srv.terminate)
             self.srv.udp_thread.join(5)
             self.assertFalse(self.srv.udp_thread.is_alive())
             self.assertTrue(utils.verify_signature(
@@ -218,8 +218,8 @@ class TestCollector(tests_base.BaseTestCase):
-        mock_record.side_effect = Exception('boom')
+        mock_dispatcher.record_events.side_effect = Exception('boom')
 
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
         endp = getattr(self.srv, listener).dispatcher.endpoints[0]
         ret = endp.sample([{'ctxt': {}, 'publisher_id': 'pub_id',
                             'event_type': 'event', 'payload': {},

@@ -19,7 +19,6 @@ import shutil
 import mock
 from oslo_config import fixture as fixture_config
 import oslo_messaging
-import oslo_service.service
 from oslo_utils import fileutils
 from oslo_utils import timeutils
 import six
@@ -95,7 +94,7 @@ class TestNotification(tests_base.BaseTestCase):
         self.CONF.set_override("disable_non_metric_meters", False,
                                group="notification")
         self.setup_messaging(self.CONF)
-        self.srv = notification.NotificationService()
+        self.srv = notification.NotificationService(0)
 
     def fake_get_notifications_manager(self, pm):
         self.plugin = instance.Instance(pm)
@@ -115,8 +114,8 @@ class TestNotification(tests_base.BaseTestCase):
         with mock.patch.object(self.srv,
                                '_get_notifications_manager') as get_nm:
             get_nm.side_effect = self.fake_get_notifications_manager
-            self.srv.start()
-            self.addCleanup(self.srv.stop)
+            self.srv.run()
+            self.addCleanup(self.srv.terminate)
         self.fake_event_endpoint = fake_event_endpoint_class.return_value
 
     def test_start_multiple_listeners(self):
@@ -165,12 +164,13 @@ class TestNotification(tests_base.BaseTestCase):
         with mock.patch.object(self.srv,
                                '_get_notifications_manager') as get_nm:
             get_nm.side_effect = fake_get_notifications_manager_dup_targets
-            self.srv.start()
-            self.addCleanup(self.srv.stop)
+            self.srv.run()
+            self.addCleanup(self.srv.terminate)
             self.assertEqual(1, len(mock_listener.call_args_list))
             args, kwargs = mock_listener.call_args
             self.assertEqual(1, len(args[1]))
             self.assertIsInstance(args[1][0], oslo_messaging.Target)
+            self.assertEqual(1, len(self.srv.listeners))
 
 
 class BaseRealNotification(tests_base.BaseTestCase):
@@ -245,8 +245,8 @@ class BaseRealNotification(tests_base.BaseTestCase):
         self.publisher = test_publisher.TestPublisher("")
 
     def _check_notification_service(self):
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
 
         notifier = messaging.get_notifier(self.transport,
                                           "compute.vagrant-precise")
@@ -271,21 +271,21 @@ class TestRealNotificationReloadablePipeline(BaseRealNotification):
         self.CONF.set_override('refresh_pipeline_cfg', True)
         self.CONF.set_override('refresh_event_pipeline_cfg', True)
         self.CONF.set_override('pipeline_polling_interval', 1)
-        self.srv = notification.NotificationService()
+        self.srv = notification.NotificationService(0)
 
     @mock.patch('ceilometer.publisher.test.TestPublisher')
     def test_notification_pipeline_poller(self, fake_publisher_cls):
         fake_publisher_cls.return_value = self.publisher
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
         self.assertIsNotNone(self.srv.refresh_pipeline_periodic)
 
     def test_notification_reloaded_pipeline(self):
         pipeline_cfg_file = self.setup_pipeline(['instance'])
         self.CONF.set_override("pipeline_cfg_file", pipeline_cfg_file)
 
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
 
         pipeline = self.srv.pipe_manager
 
@@ -306,8 +306,8 @@ class TestRealNotificationReloadablePipeline(BaseRealNotification):
 
         self.CONF.set_override("store_events", True, group="notification")
 
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
 
         pipeline = self.srv.event_pipe_manager
 
@@ -327,7 +327,7 @@ class TestRealNotification(BaseRealNotification):
 
     def setUp(self):
         super(TestRealNotification, self).setUp()
-        self.srv = notification.NotificationService()
+        self.srv = notification.NotificationService(0)
 
     @mock.patch('ceilometer.publisher.test.TestPublisher')
     def test_notification_service(self, fake_publisher_cls):
@@ -337,8 +337,8 @@ class TestRealNotification(BaseRealNotification):
     @mock.patch('ceilometer.publisher.test.TestPublisher')
     def test_notification_service_error_topic(self, fake_publisher_cls):
         fake_publisher_cls.return_value = self.publisher
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
         notifier = messaging.get_notifier(self.transport,
                                           'compute.vagrant-precise')
         notifier.error({}, 'compute.instance.error',
@@ -359,14 +359,6 @@ class TestRealNotification(BaseRealNotification):
         self._check_notification_service()
         self.assertEqual('memory', self.publisher.samples[0].name)
 
-    @mock.patch.object(oslo_service.service.Service, 'stop')
-    def test_notification_service_start_abnormal(self, mocked):
-        try:
-            self.srv.stop()
-        except Exception:
-            pass
-        self.assertEqual(1, mocked.call_count)
-
 
 class TestRealNotificationHA(BaseRealNotification):
 
@@ -374,7 +366,7 @@ class TestRealNotificationHA(BaseRealNotification):
         super(TestRealNotificationHA, self).setUp()
         self.CONF.set_override('workload_partitioning', True,
                                group='notification')
-        self.srv = notification.NotificationService()
+        self.srv = notification.NotificationService(0)
 
     @mock.patch('ceilometer.publisher.test.TestPublisher')
     def test_notification_service(self, fake_publisher_cls):
@@ -389,8 +381,8 @@ class TestRealNotificationHA(BaseRealNotification):
             mock.MagicMock(),  # refresh pipeline listener
         ]
 
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
 
         def _check_listener_targets():
             args, kwargs = mock_listener.call_args
@@ -409,8 +401,8 @@ class TestRealNotificationHA(BaseRealNotification):
     def test_retain_common_targets_on_refresh(self, mock_listener):
         with mock.patch('ceilometer.coordination.PartitionCoordinator'
                         '.extract_my_subset', return_value=[1, 2]):
-            self.srv.start()
-            self.addCleanup(self.srv.stop)
+            self.srv.run()
+            self.addCleanup(self.srv.terminate)
         listened_before = [target.topic for target in
                            mock_listener.call_args[0][1]]
         self.assertEqual(4, len(listened_before))
@@ -426,8 +418,8 @@ class TestRealNotificationHA(BaseRealNotification):
 
     @mock.patch('oslo_messaging.get_batch_notification_listener')
     def test_notify_to_relevant_endpoint(self, mock_listener):
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
 
         targets = mock_listener.call_args[0][1]
         self.assertIsNotEmpty(targets)
@@ -449,8 +441,8 @@ class TestRealNotificationHA(BaseRealNotification):
 
     @mock.patch('oslo_messaging.Notifier.sample')
     def test_broadcast_to_relevant_pipes_only(self, mock_notifier):
-        self.srv.start()
-        self.addCleanup(self.srv.stop)
+        self.srv.run()
+        self.addCleanup(self.srv.terminate)
         for endpoint in self.srv.listeners[0].dispatcher.endpoints:
             if (hasattr(endpoint, 'filter_rule') and
                 not endpoint.filter_rule.match(None, None, 'nonmatching.end',
@@ -531,16 +523,16 @@ class TestRealNotificationMultipleAgents(tests_base.BaseTestCase):
     def _check_notifications(self, fake_publisher_cls):
         fake_publisher_cls.side_effect = [self.publisher, self.publisher2]
 
-        self.srv = notification.NotificationService()
-        self.srv2 = notification.NotificationService()
+        self.srv = notification.NotificationService(0)
+        self.srv2 = notification.NotificationService(0)
         with mock.patch('ceilometer.coordination.PartitionCoordinator'
                         '._get_members', return_value=['harry', 'lloyd']):
             with mock.patch('uuid.uuid4', return_value='harry'):
-                self.srv.start()
-                self.addCleanup(self.srv.stop)
+                self.srv.run()
+                self.addCleanup(self.srv.terminate)
             with mock.patch('uuid.uuid4', return_value='lloyd'):
-                self.srv2.start()
-                self.addCleanup(self.srv2.stop)
+                self.srv2.run()
+                self.addCleanup(self.srv2.terminate)
 
         notifier = messaging.get_notifier(self.transport,
                                           "compute.vagrant-precise")

@@ -119,7 +119,7 @@ class ServerUnreachable(TempestException):
 # of get_network_from_name and preprov_creds to tempest.lib, and it should
 # be migrated along with them
 class InvalidTestResource(TempestException):
-    message = "%(name) is not a valid %(type), or the name is ambiguous"
+    message = "%(name)s is not a valid %(type)s, or the name is ambiguous"
 
 
 class RFCViolation(RestClientException):

@@ -40,5 +40,8 @@ class CeilometerTempestPlugin(plugins.TempestPlugin):
                                   tempest_config.TelemetryGroup)
 
     def get_opt_lists(self):
-        return [(tempest_config.telemetry_group.name,
-                 tempest_config.TelemetryGroup)]
+        return [
+            (tempest_config.telemetry_group.name,
+             tempest_config.TelemetryGroup),
+            ('service_available', tempest_config.ServiceAvailableGroup)
+        ]

@@ -18,18 +18,18 @@ from six.moves.urllib import parse as urllib
 
 from tempest import config
 from tempest.lib.common import rest_client
-from tempest.lib.services.compute.flavors_client import FlavorsClient
-from tempest.lib.services.compute.floating_ips_client import FloatingIPsClient
-from tempest.lib.services.compute.networks_client import NetworksClient
-from tempest.lib.services.compute.servers_client import ServersClient
+from tempest.lib.services.compute import flavors_client as flavor_cli
+from tempest.lib.services.compute import floating_ips_client as floatingip_cli
+from tempest.lib.services.compute import networks_client as network_cli
+from tempest.lib.services.compute import servers_client as server_cli
 from tempest import manager
-from tempest.services.object_storage.container_client import ContainerClient
-from tempest.services.object_storage.object_client import ObjectClient
+from tempest.services.object_storage import container_client as container_cli
+from tempest.services.object_storage import object_client as obj_cli
 
-from ceilometer.tests.tempest.service.images.v1.images_client import \
-    ImagesClient
-from ceilometer.tests.tempest.service.images.v2.images_client import \
-    ImagesClient as ImagesClientV2
+from ceilometer.tests.tempest.service.images.v1 import images_client as \
+    img_cli_v1
+from ceilometer.tests.tempest.service.images.v2 import images_client as \
+    img_cli_v2
 
 
 CONF = config.CONF
@@ -156,38 +156,45 @@ class Manager(manager.Manager):
             getattr(self, 'set_%s' % client)()
 
     def set_servers_client(self):
-        self.servers_client = ServersClient(self.auth_provider,
-                                            **self.compute_params)
+        self.servers_client = server_cli.ServersClient(
+            self.auth_provider,
+            **self.compute_params)
 
     def set_compute_networks_client(self):
-        self.compute_networks_client = NetworksClient(self.auth_provider,
-                                                      **self.compute_params)
+        self.compute_networks_client = network_cli.NetworksClient(
+            self.auth_provider,
+            **self.compute_params)
 
     def set_compute_floating_ips_client(self):
-        self.compute_floating_ips_client = FloatingIPsClient(
+        self.compute_floating_ips_client = floatingip_cli.FloatingIPsClient(
             self.auth_provider,
             **self.compute_params)
 
     def set_flavors_client(self):
-        self.flavors_client = FlavorsClient(self.auth_provider,
-                                            **self.compute_params)
+        self.flavors_client = flavor_cli.FlavorsClient(
+            self.auth_provider,
+            **self.compute_params)
 
     def set_image_client(self):
-        self.image_client = ImagesClient(self.auth_provider,
-                                         **self.image_params)
+        self.image_client = img_cli_v1.ImagesClient(
+            self.auth_provider,
+            **self.image_params)
 
     def set_image_client_v2(self):
-        self.image_client_v2 = ImagesClientV2(self.auth_provider,
-                                              **self.image_params)
+        self.image_client_v2 = img_cli_v2.ImagesClient(
+            self.auth_provider,
+            **self.image_params)
 
     def set_telemetry_client(self):
         self.telemetry_client = TelemetryClient(self.auth_provider,
                                                 **self.telemetry_params)
 
     def set_container_client(self):
-        self.container_client = ContainerClient(self.auth_provider,
-                                                **self.object_storage_params)
+        self.container_client = container_cli.ContainerClient(
+            self.auth_provider,
+            **self.object_storage_params)
 
     def set_object_client(self):
-        self.object_client = ObjectClient(self.auth_provider,
-                                          **self.object_storage_params)
+        self.object_client = obj_cli.ObjectClient(
+            self.auth_provider,
+            **self.object_storage_params)

@@ -265,7 +265,7 @@ class BaseAgentManagerTestCase(base.BaseTestCase):
                 'name': 'test_pipeline',
                 'interval': 60,
                 'meters': ['test'],
-                'resources': ['test://'] if self.source_resources else [],
+                'resources': ['test://'],
                 'sinks': ['test_sink']}],
             'sinks': [{
                 'name': 'test_sink',
@@ -310,7 +310,7 @@ class BaseAgentManagerTestCase(base.BaseTestCase):
         mpc.is_active.return_value = False
         self.CONF.set_override('heartbeat', 1.0, group='coordination')
         self.mgr.partition_coordinator.heartbeat = mock.MagicMock()
-        self.mgr.start()
+        self.mgr.run()
         setup_polling.assert_called_once_with()
         mpc.start.assert_called_once_with()
         self.assertEqual(2, mpc.join_group.call_count)
@@ -325,7 +325,7 @@ class BaseAgentManagerTestCase(base.BaseTestCase):
             time.sleep(0.5)
         self.assertGreaterEqual(1, runs)
 
-        self.mgr.stop()
+        self.mgr.terminate()
         mpc.stop.assert_called_once_with()
 
     @mock.patch('ceilometer.pipeline.setup_polling')
@@ -338,9 +338,8 @@ class BaseAgentManagerTestCase(base.BaseTestCase):
 
         self.CONF.set_override('refresh_pipeline_cfg', True)
         self.CONF.set_override('pipeline_polling_interval', 5)
-        self.addCleanup(self.mgr.stop)
-        self.mgr.start()
-        self.addCleanup(self.mgr.stop)
+        self.mgr.run()
+        self.addCleanup(self.mgr.terminate)
         setup_polling.assert_called_once_with()
         mpc.start.assert_called_once_with()
         self.assertEqual(2, mpc.join_group.call_count)
@@ -384,7 +383,7 @@ class BaseAgentManagerTestCase(base.BaseTestCase):
             'name': 'test_pipeline_1',
             'interval': 10,
             'meters': ['test'],
-            'resources': ['test://'] if self.source_resources else [],
+            'resources': ['test://'],
             'sinks': ['test_sink']
         })
         self.setup_polling()
@@ -411,7 +410,7 @@ class BaseAgentManagerTestCase(base.BaseTestCase):
             'name': 'test_pipeline_1',
             'interval': 60,
             'meters': ['testanother'],
-            'resources': ['testanother://'] if self.source_resources else [],
+            'resources': ['testanother://'],
             'sinks': ['test_sink']
         })
         self.setup_polling()
@@ -432,8 +431,8 @@ class BaseAgentManagerTestCase(base.BaseTestCase):
         mgr = self.create_manager()
         mgr.extensions = self.mgr.extensions
         mgr.create_polling_task = mock.MagicMock()
-        mgr.start()
-        self.addCleanup(mgr.stop)
+        mgr.run()
+        self.addCleanup(mgr.terminate)
         mgr.create_polling_task.assert_called_once_with()
 
     def test_manager_exception_persistency(self):

@@ -31,7 +31,7 @@ class TestEndpointDiscovery(base.BaseTestCase):
         self.discovery = endpoint.EndpointDiscovery()
         self.manager = mock.MagicMock()
         self.CONF = self.useFixture(fixture_config.Config()).conf
-        self.CONF.set_override('interface', 'test-endpoint-type',
+        self.CONF.set_override('interface', 'publicURL',
                                group='service_credentials')
         self.CONF.set_override('region_name', 'test-region-name',
                                group='service_credentials')
@@ -41,14 +41,14 @@ class TestEndpointDiscovery(base.BaseTestCase):
     def test_keystone_called(self):
         self.discovery.discover(self.manager, param='test-service-type')
         expected = [mock.call(service_type='test-service-type',
-                              interface='test-endpoint-type',
+                              interface='publicURL',
                               region_name='test-region-name')]
         self.assertEqual(expected, self.catalog.get_urls.call_args_list)
 
     def test_keystone_called_no_service_type(self):
         self.discovery.discover(self.manager)
         expected = [mock.call(service_type=None,
-                              interface='test-endpoint-type',
+                              interface='publicURL',
                               region_name='test-region-name')]
         self.assertEqual(expected,
                          self.catalog.get_urls
@@ -87,11 +87,22 @@ class TestHardwareDiscovery(base.BaseTestCase):
         'flavor_id': 'flavor_id',
     }
 
+    expected_usm = {
+        'resource_id': 'resource_id',
+        'resource_url': ''.join(['snmp://ro_snmp_user:password@0.0.0.0',
+                                 '?priv_proto=aes192',
+                                 '&priv_password=priv_pass']),
+        'mac_addr': '01-23-45-67-89-ab',
+        'image_id': 'image_id',
+        'flavor_id': 'flavor_id',
+    }
+
     def setUp(self):
         super(TestHardwareDiscovery, self).setUp()
         self.discovery = hardware.NodesDiscoveryTripleO()
         self.discovery.nova_cli = mock.MagicMock()
         self.manager = mock.MagicMock()
+        self.CONF = self.useFixture(fixture_config.Config()).conf
 
     def test_hardware_discovery(self):
         self.discovery.nova_cli.instance_get_all.return_value = [
@@ -106,3 +117,13 @@ class TestHardwareDiscovery(base.BaseTestCase):
         self.discovery.nova_cli.instance_get_all.return_value = [instance]
         resources = self.discovery.discover(self.manager)
         self.assertEqual(0, len(resources))
+
+    def test_hardware_discovery_usm(self):
+        self.CONF.set_override('readonly_user_priv_proto', 'aes192',
+                               group='hardware')
+        self.CONF.set_override('readonly_user_priv_password', 'priv_pass',
+                               group='hardware')
+        self.discovery.nova_cli.instance_get_all.return_value = [
+            self.MockInstance()]
+        resources = self.discovery.discover(self.manager)
+        self.assertEqual(self.expected_usm, resources[0])

@@ -16,11 +16,10 @@
 
 import shutil
 
-from keystoneclient import exceptions as ks_exceptions
+from keystoneauth1 import exceptions as ka_exceptions
 import mock
 from novaclient import client as novaclient
 from oslo_config import fixture as fixture_config
-from oslo_service import service as os_service
 from oslo_utils import fileutils
 from oslotest import base
 from oslotest import mockpatch
@@ -270,7 +269,6 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
         self.notifier.sample.side_effect = self.fake_notifier_sample
         self.useFixture(mockpatch.Patch('oslo_messaging.Notifier',
                                         return_value=self.notifier))
-        self.source_resources = True
         super(TestRunTasks, self).setUp()
         self.useFixture(mockpatch.Patch(
             'keystoneclient.v2_0.client.Client',
@@ -304,13 +302,13 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
         """Test for bug 1316532."""
         self.useFixture(mockpatch.Patch(
             'keystoneclient.v2_0.client.Client',
-            side_effect=ks_exceptions.ClientException))
+            side_effect=ka_exceptions.ClientException))
         self.pipeline_cfg = {
             'sources': [{
                 'name': "test_keystone",
                 'interval': 10,
                 'meters': ['testkeystone'],
-                'resources': ['test://'] if self.source_resources else [],
+                'resources': ['test://'],
                 'sinks': ['test_sink']}],
             'sinks': [{
                 'name': 'test_sink',
@@ -379,7 +377,7 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
                 'name': source_name,
                 'interval': 10,
                 'meters': ['testpollingexception'],
-                'resources': ['test://'] if self.source_resources else [],
+                'resources': ['test://'],
                 'sinks': ['test_sink']}],
             'sinks': [{
                 'name': 'test_sink',
@@ -414,7 +412,7 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
     def _batching_samples(self, expected_samples, call_count):
         self.useFixture(mockpatch.PatchObject(manager.utils, 'delayed',
                                               side_effect=fakedelayed))
-        pipeline = yaml.dump({
+        pipeline_cfg = {
             'sources': [{
                 'name': 'test_pipeline',
                 'interval': 1,
@@ -425,18 +423,12 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
                 'name': 'test_sink',
                 'transformers': [],
                 'publishers': ["test"]}]
-        })
-
-        pipeline_cfg_file = self.setup_pipeline_file(pipeline)
-
-        self.CONF.set_override("pipeline_cfg_file", pipeline_cfg_file)
-
-        self.mgr.start()
-        self.addCleanup(self.mgr.stop)
-        # Manually executes callbacks
-        for cb, __, args, kwargs in self.mgr.polling_periodics._callables:
-            cb(*args, **kwargs)
+        }
+
+        self.mgr.polling_manager = pipeline.PollingManager(pipeline_cfg)
+        polling_task = list(self.mgr.setup_polling_tasks().values())[0]
+
+        self.mgr.interval_task(polling_task)
         samples = self.notified_samples
         self.assertEqual(expected_samples, len(samples))
         self.assertEqual(call_count, self.notifier.sample.call_count)
@@ -452,7 +444,7 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
                 'name': 'test_pipeline',
                 'interval': 1,
                 'meters': ['test'],
-                'resources': ['test://'] if self.source_resources else [],
+                'resources': ['test://'],
                 'sinks': ['test_sink']}],
             'sinks': [{
                 'name': 'test_sink',
@@ -463,9 +455,8 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
         pipeline_cfg_file = self.setup_pipeline_file(pipeline)
 
         self.CONF.set_override("pipeline_cfg_file", pipeline_cfg_file)
-        self.mgr.tg = os_service.threadgroup.ThreadGroup(1000)
-        self.mgr.start()
-        self.addCleanup(self.mgr.stop)
+        self.mgr.run()
+        self.addCleanup(self.mgr.terminate)
 
         # we only got the old name of meters
         for sample in self.notified_samples:
@@ -479,7 +470,7 @@ class TestRunTasks(agentbase.BaseAgentManagerTestCase):
                 'name': 'test_pipeline',
                 'interval': 1,
                 'meters': ['testanother'],
-                'resources': ['test://'] if self.source_resources else [],
+                'resources': ['test://'],
                 'sinks': ['test_sink']}],
            'sinks': [{
                 'name': 'test_sink',

@@ -1,33 +0,0 @@
-#
-# Copyright 2013 eNovance <licensing@enovance.com>
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#      http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-from oslotest import base
-import wsme
-
-from ceilometer.api.controllers.v2 import base as v2_base
-
-
-class TestWsmeCustomType(base.BaseTestCase):
-
-    def test_advenum_default(self):
-        class dummybase(wsme.types.Base):
-            ae = v2_base.AdvEnum("name", str, "one", "other", default="other")
-
-        obj = dummybase()
-        self.assertEqual("other", obj.ae)
-
-        obj = dummybase(ae="one")
-        self.assertEqual("one", obj.ae)
-
-        self.assertRaises(wsme.exc.InvalidInput, dummybase, ae="not exists")

@@ -132,3 +132,62 @@ class TestResidentMemoryPollster(base.TestPollsterBase):
         _verify_resident_memory_metering(1, 2.0, 0)
         _verify_resident_memory_metering(0, 0, 1)
         _verify_resident_memory_metering(0, 0, 0)
+
+
+class TestMemoryBandwidthPollster(base.TestPollsterBase):
+
+    def setUp(self):
+        super(TestMemoryBandwidthPollster, self).setUp()
+
+    @mock.patch('ceilometer.pipeline.setup_pipeline', mock.MagicMock())
+    def test_get_samples(self):
+        next_value = iter((
+            virt_inspector.MemoryBandwidthStats(total=1892352, local=1802240),
+            virt_inspector.MemoryBandwidthStats(total=1081344, local=90112),
+        ))
+
+        def inspect_memory_bandwidth(instance, duration):
+            return next(next_value)
+
+        self.inspector.inspect_memory_bandwidth = mock.Mock(
+            side_effect=inspect_memory_bandwidth)
+        mgr = manager.AgentManager()
+
+        def _check_memory_bandwidth_total(expected_usage):
+            pollster = memory.MemoryBandwidthTotalPollster()
+
+            samples = list(pollster.get_samples(mgr, {}, [self.instance]))
+            self.assertEqual(1, len(samples))
+            self.assertEqual(set(['memory.bandwidth.total']),
+                             set([s.name for s in samples]))
+            self.assertEqual(expected_usage, samples[0].volume)
+
+        def _check_memory_bandwidth_local(expected_usage):
+            pollster = memory.MemoryBandwidthLocalPollster()
+
+            samples = list(pollster.get_samples(mgr, {}, [self.instance]))
+            self.assertEqual(1, len(samples))
+            self.assertEqual(set(['memory.bandwidth.local']),
+                             set([s.name for s in samples]))
+            self.assertEqual(expected_usage, samples[0].volume)
+
+        _check_memory_bandwidth_total(1892352)
+        _check_memory_bandwidth_local(90112)
+
+    @mock.patch('ceilometer.pipeline.setup_pipeline', mock.MagicMock())
+    def test_get_samples_with_empty_stats(self):
+
+        def inspect_memory_bandwidth(instance, duration):
+            raise virt_inspector.NoDataException()
+
+        self.inspector.inspect_memory_bandwidth = mock.Mock(
+            side_effect=inspect_memory_bandwidth)
+
+        mgr = manager.AgentManager()
+        pollster = memory.MemoryBandwidthTotalPollster()
+
+        def all_samples():
+            return list(pollster.get_samples(mgr, {}, [self.instance]))
+
+        self.assertRaises(plugin_base.PollsterPermanentError,
+                          all_samples)

@@ -316,6 +316,38 @@ class TestLibvirtInspection(base.BaseTestCase):
         self.assertEqual(2, info0.allocation)
         self.assertEqual(3, info0.physical)
 
+    def test_inspect_disk_info_network_type(self):
+        dom_xml = """
+             <domain type='kvm'>
+                 <devices>
+                     <disk type='network' device='disk'>
+                         <driver name='qemu' type='qcow2' cache='none'/>
+                         <source file='/path/instance-00000001/disk'/>
+                         <target dev='vda' bus='virtio'/>
+                         <alias name='virtio-disk0'/>
+                         <address type='pci' domain='0x0000' bus='0x00'
+                                  slot='0x04' function='0x0'/>
+                     </disk>
+                 </devices>
+             </domain>
+        """
+
+        with contextlib.ExitStack() as stack:
+            stack.enter_context(mock.patch.object(self.inspector.connection,
+                                                  'lookupByUUIDString',
+                                                  return_value=self.domain))
+            stack.enter_context(mock.patch.object(self.domain, 'XMLDesc',
+                                                  return_value=dom_xml))
+            stack.enter_context(mock.patch.object(self.domain, 'blockInfo',
+                                                  return_value=(1, 2, 3,
+                                                                -1)))
+            stack.enter_context(mock.patch.object(self.domain, 'info',
+                                                  return_value=(0, 0, 0,
+                                                                2, 999999)))
+            disks = list(self.inspector.inspect_disk_info(self.instance))
+
+            self.assertEqual(0, len(disks))
+
     def test_inspect_memory_usage_with_domain_shutoff(self):
         connection = self.inspector.connection
         with mock.patch.object(connection, 'lookupByUUIDString',
@ -340,6 +372,20 @@ class TestLibvirtInspection(base.BaseTestCase):
|
|||
self.inspector.inspect_memory_usage,
|
||||
self.instance)
|
||||
|
||||
def test_inspect_memory_bandwidth(self):
|
||||
fake_stats = [({}, {'perf.mbmt': 1892352, 'perf.mbml': 1802240})]
|
||||
connection = self.inspector.connection
|
||||
with mock.patch.object(connection, 'lookupByUUIDString',
|
||||
return_value=self.domain):
|
||||
with mock.patch.object(self.domain, 'info',
|
||||
return_value=(0, 0, 51200,
|
||||
2, 999999)):
|
||||
with mock.patch.object(connection, 'domainListGetStats',
|
||||
return_value=fake_stats):
|
||||
mb = self.inspector.inspect_memory_bandwidth(self.instance)
|
||||
self.assertEqual(1892352, mb.total)
|
||||
self.assertEqual(1802240, mb.local)
|
||||
|
||||
|
||||
class TestLibvirtInspectionWithError(base.BaseTestCase):
|
||||
|
||||
|
|
|
@@ -63,18 +63,12 @@ class TestXenapiInspection(base.BaseTestCase):
        fake_stat = virt_inspector.CPUUtilStats(util=40)

        def fake_xenapi_request(method, args):
            metrics_rec = {
                'memory_actual': '536870912',
                'VCPUs_number': '1',
                'VCPUs_utilisation': {'0': 0.4, }
            }

            if method == 'VM.get_by_name_label':
                return ['vm_ref']
            elif method == 'VM.get_metrics':
                return 'metrics_ref'
            elif method == 'VM_metrics.get_record':
                return metrics_rec
            elif method == 'VM.get_VCPUs_max':
                return '1'
            elif method == 'VM.query_data_source':
                return 0.4
            else:
                return None
@@ -30,7 +30,8 @@ class TestDispatcherDB(base.BaseTestCase):
        super(TestDispatcherDB, self).setUp()
        self.CONF = self.useFixture(fixture_config.Config()).conf
        self.CONF.set_override('connection', 'sqlite://', group='database')
        self.dispatcher = database.DatabaseDispatcher(self.CONF)
        self.meter_dispatcher = database.MeterDatabaseDispatcher(self.CONF)
        self.event_dispatcher = database.EventDatabaseDispatcher(self.CONF)
        self.ctx = None

    def test_event_conn(self):

@@ -39,9 +40,9 @@ class TestDispatcherDB(base.BaseTestCase):
                                         [], {})
        event = utils.message_from_event(event,
                                         self.CONF.publisher.telemetry_secret)
        with mock.patch.object(self.dispatcher.event_conn,
        with mock.patch.object(self.event_dispatcher.conn,
                               'record_events') as record_events:
            self.dispatcher.record_events(event)
            self.event_dispatcher.record_events(event)
        self.assertEqual(1, len(record_events.call_args_list[0][0][0]))

    def test_valid_message(self):

@@ -53,9 +54,9 @@ class TestDispatcherDB(base.BaseTestCase):
            msg, self.CONF.publisher.telemetry_secret,
        )

        with mock.patch.object(self.dispatcher.meter_conn,
        with mock.patch.object(self.meter_dispatcher.conn,
                               'record_metering_data') as record_metering_data:
            self.dispatcher.record_metering_data(msg)
            self.meter_dispatcher.record_metering_data(msg)

        record_metering_data.assert_called_once_with(msg)

@@ -72,9 +73,9 @@ class TestDispatcherDB(base.BaseTestCase):
        expected = msg.copy()
        expected['timestamp'] = datetime.datetime(2012, 7, 2, 13, 53, 40)

        with mock.patch.object(self.dispatcher.meter_conn,
        with mock.patch.object(self.meter_dispatcher.conn,
                               'record_metering_data') as record_metering_data:
            self.dispatcher.record_metering_data(msg)
            self.meter_dispatcher.record_metering_data(msg)

        record_metering_data.assert_called_once_with(expected)

@@ -92,8 +93,8 @@ class TestDispatcherDB(base.BaseTestCase):
        expected['timestamp'] = datetime.datetime(2012, 9, 30, 23,
                                                  31, 50, 262000)

        with mock.patch.object(self.dispatcher.meter_conn,
        with mock.patch.object(self.meter_dispatcher.conn,
                               'record_metering_data') as record_metering_data:
            self.dispatcher.record_metering_data(msg)
            self.meter_dispatcher.record_metering_data(msg)

        record_metering_data.assert_called_once_with(expected)
@@ -19,16 +19,12 @@ from ceilometer import dispatcher
from ceilometer.tests import base


class FakeDispatcherSample(dispatcher.MeterDispatcherBase):
class FakeMeterDispatcher(dispatcher.MeterDispatcherBase):
    def record_metering_data(self, data):
        pass


class FakeDispatcher(dispatcher.MeterDispatcherBase,
                     dispatcher.EventDispatcherBase):
    def record_metering_data(self, data):
        pass

class FakeEventDispatcher(dispatcher.EventDispatcherBase):
    def record_events(self, events):
        pass

@@ -41,10 +37,13 @@ class TestDispatchManager(base.BaseTestCase):
            event_dispatchers=['database'])
        self.useFixture(mockpatch.Patch(
            'ceilometer.dispatcher.gnocchi.GnocchiDispatcher',
            new=FakeDispatcherSample))
            new=FakeMeterDispatcher))
        self.useFixture(mockpatch.Patch(
            'ceilometer.dispatcher.database.DatabaseDispatcher',
            new=FakeDispatcher))
            'ceilometer.dispatcher.database.MeterDatabaseDispatcher',
            new=FakeMeterDispatcher))
        self.useFixture(mockpatch.Patch(
            'ceilometer.dispatcher.database.EventDatabaseDispatcher',
            new=FakeEventDispatcher))

    def test_load(self):
        sample_mg, event_mg = dispatcher.load_dispatcher_manager()
@@ -168,6 +168,25 @@ class DispatcherTest(base.BaseTestCase):
    def test_activity_filter_nomatch(self):
        self._do_test_activity_filter(2)

    @mock.patch('ceilometer.dispatcher.gnocchi.GnocchiDispatcher'
                '.batch_measures')
    def test_unhandled_meter(self, fake_batch):
        samples = [{
            'counter_name': 'unknown.meter',
            'counter_unit': 'GB',
            'counter_type': 'gauge',
            'counter_volume': '2',
            'user_id': 'test_user',
            'project_id': 'test_project',
            'source': 'openstack',
            'timestamp': '2014-05-08 20:23:48.028195',
            'resource_id': 'randomid',
            'resource_metadata': {}
        }]
        d = gnocchi.GnocchiDispatcher(self.conf.conf)
        d.record_metering_data(samples)
        self.assertEqual(0, len(fake_batch.call_args[0][1]))


class MockResponse(mock.NonCallableMock):
    def __init__(self, code):
@@ -27,6 +27,7 @@ from ceilometer.publisher import utils


class TestDispatcherHttp(base.BaseTestCase):
    """Test sending meters with the http dispatcher"""

    def setUp(self):
        super(TestDispatcherHttp, self).setUp()

@@ -70,52 +71,123 @@ class TestDispatcherHttp(base.BaseTestCase):

        self.assertEqual(1, post.call_count)

    def test_http_dispatcher_with_ssl_default(self):
        self.CONF.dispatcher_http.target = 'https://example.com'
        self.CONF.dispatcher_http.verify_ssl = ''
        dispatcher = http.HttpDispatcher(self.CONF)

        self.assertEqual(True, dispatcher.verify_ssl)

        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_metering_data(self.msg)

        self.assertEqual(True, post.call_args[1]['verify'])

    def test_http_dispatcher_with_ssl_true(self):
        self.CONF.dispatcher_http.target = 'https://example.com'
        self.CONF.dispatcher_http.verify_ssl = 'true'
        dispatcher = http.HttpDispatcher(self.CONF)

        self.assertEqual(True, dispatcher.verify_ssl)

        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_metering_data(self.msg)

        self.assertEqual(True, post.call_args[1]['verify'])

    def test_http_dispatcher_with_ssl_false(self):
        self.CONF.dispatcher_http.target = 'https://example.com'
        self.CONF.dispatcher_http.verify_ssl = 'false'
        dispatcher = http.HttpDispatcher(self.CONF)

        self.assertEqual(False, dispatcher.verify_ssl)

        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_metering_data(self.msg)

        self.assertEqual(False, post.call_args[1]['verify'])

    def test_http_dispatcher_with_ssl_path(self):
        self.CONF.dispatcher_http.target = 'https://example.com'
        self.CONF.dispatcher_http.verify_ssl = '/path/to/cert.crt'
        dispatcher = http.HttpDispatcher(self.CONF)

        self.assertEqual('/path/to/cert.crt', dispatcher.verify_ssl)

        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_metering_data(self.msg)

        self.assertEqual('/path/to/cert.crt', post.call_args[1]['verify'])


class TestEventDispatcherHttp(base.BaseTestCase):

    """Test sending events with the http dispatcher"""
    def setUp(self):
        super(TestEventDispatcherHttp, self).setUp()
        self.CONF = self.useFixture(fixture_config.Config()).conf

        # repr(uuid.uuid4()) is used in test event creation to avoid an
        # exception being thrown when the uuid is serialized to JSON
        event = event_models.Event(repr(uuid.uuid4()), 'test',
                                   datetime.datetime(2012, 7, 2, 13, 53, 40),
                                   [], {})
        event = utils.message_from_event(event,
                                         self.CONF.publisher.telemetry_secret)
        self.event = event

    def test_http_dispatcher(self):
        self.CONF.dispatcher_http.event_target = 'fake'
        dispatcher = http.HttpDispatcher(self.CONF)

        event = event_models.Event(uuid.uuid4(), 'test',
                                   datetime.datetime(2012, 7, 2, 13, 53, 40),
                                   [], {})
        event = utils.message_from_event(event,
                                         self.CONF.publisher.telemetry_secret)

        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_events(event)
            dispatcher.record_events(self.event)

        self.assertEqual(1, post.call_count)

    def test_http_dispatcher_bad(self):
    def test_http_dispatcher_bad_server(self):
        self.CONF.dispatcher_http.event_target = 'fake'
        dispatcher = http.HttpDispatcher(self.CONF)

        with mock.patch.object(requests, 'post') as post:
            response = requests.Response()
            response.status_code = 500
            post.return_value = response
            with mock.patch('ceilometer.dispatcher.http.LOG',
                            mock.MagicMock()) as LOG:
                dispatcher.record_events(self.event)
        self.assertTrue(LOG.exception.called)

    def test_http_dispatcher_with_no_target(self):
        self.CONF.dispatcher_http.event_target = ''
        dispatcher = http.HttpDispatcher(self.CONF)

        event = event_models.Event(uuid.uuid4(), 'test',
                                   datetime.datetime(2012, 7, 2, 13, 53, 40),
                                   [], {})
        event = utils.message_from_event(event,
                                         self.CONF.publisher.telemetry_secret)
        with mock.patch('ceilometer.dispatcher.http.LOG',
                        mock.MagicMock()) as LOG:
            dispatcher.record_events(event)
        self.assertTrue(LOG.exception.called)
        # The target should be None
        self.assertEqual('', dispatcher.event_target)

        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_events(self.event)

        # Since the target is not set, no http post should occur, thus the
        # call_count should be zero.
        self.assertEqual(0, post.call_count)

    def test_http_dispatcher_share_target(self):
        self.CONF.dispatcher_http.target = 'fake'
        self.CONF.dispatcher_http.event_target = 'fake'
        dispatcher = http.HttpDispatcher(self.CONF)

        event = event_models.Event(uuid.uuid4(), 'test',
                                   datetime.datetime(2012, 7, 2, 13, 53, 40),
                                   [], {})
        event = utils.message_from_event(event,
                                         self.CONF.publisher.telemetry_secret)
        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_events(event)
            dispatcher.record_events(self.event)

        self.assertEqual('fake', post.call_args[0][0])

    def test_http_dispatcher_with_ssl_path(self):
        self.CONF.dispatcher_http.event_target = 'https://example.com'
        self.CONF.dispatcher_http.verify_ssl = '/path/to/cert.crt'
        dispatcher = http.HttpDispatcher(self.CONF)

        self.assertEqual('/path/to/cert.crt', dispatcher.verify_ssl)

        with mock.patch.object(requests, 'post') as post:
            dispatcher.record_events(self.event)

        self.assertEqual('/path/to/cert.crt', post.call_args[1]['verify'])
@@ -14,8 +14,10 @@
# under the License.
"""Tests for ceilometer/hardware/inspector/snmp/inspector.py
"""
import mock
from oslo_utils import netutils
from oslotest import mockpatch
from pysnmp.proto.rfc1905 import noSuchObject

from ceilometer.hardware.inspector import snmp
from ceilometer.tests import base as test_base

@@ -33,8 +35,14 @@ class FakeObjectName(object):

class FakeCommandGenerator(object):
    def getCmd(self, authData, transportTarget, *oids, **kwargs):
        varBinds = [(FakeObjectName(oid),
                     int(oid.split('.')[-1])) for oid in oids]
        emptyOID = '1.3.6.1.4.1.2021.4.14.0'
        varBinds = [
            (FakeObjectName(oid), int(oid.split('.')[-1]))
            for oid in oids
            if oid != emptyOID
        ]
        if emptyOID in oids:
            varBinds += [(FakeObjectName(emptyOID), noSuchObject)]
        return (None, None, 0, varBinds)

    def bulkCmd(authData, transportTarget, nonRepeaters, maxRepetitions,

@@ -64,6 +72,12 @@ class TestSNMPInspector(test_base.BaseTestCase):
            },
            'post_op': None,
        },
        'test_nosuch': {
            'matching_type': snmp.EXACT,
            'metric_oid': ('1.3.6.1.4.1.2021.4.14.0', int),
            'metadata': {},
            'post_op': None,
        },
    }

    def setUp(self):

@@ -98,6 +112,18 @@ class TestSNMPInspector(test_base.BaseTestCase):
            extra.update(project_id=2)
        return value

    def test_inspect_no_such_object(self):
        cache = {}
        try:
            # inspect_generic() is a generator, so we explicitly need to
            # iterate through it in order to trigger the exception.
            list(self.inspector.inspect_generic(self.host,
                                                cache,
                                                {},
                                                self.mapping['test_nosuch']))
        except ValueError:
            self.fail("got ValueError when interpreting NoSuchObject return")

    def test_inspect_generic_exact(self):
        self.inspector._fake_post_op = self._fake_post_op
        cache = {}

@@ -202,3 +228,20 @@ class TestSNMPInspector(test_base.BaseTestCase):
        name = rfc1902.ObjectName(oid)

        self.assertEqual(oid, str(name))

    @mock.patch.object(snmp.cmdgen, 'UsmUserData')
    def test_auth_strategy(self, mock_method):
        host = ''.join(['snmp://a:b@foo?auth_proto=sha',
                        '&priv_password=pass&priv_proto=aes256'])
        host = netutils.urlsplit(host)
        self.inspector._get_auth_strategy(host)
        mock_method.assert_called_with(
            'a', authKey='b',
            authProtocol=snmp.cmdgen.usmHMACSHAAuthProtocol,
            privProtocol=snmp.cmdgen.usmAesCfb256Protocol,
            privKey='pass')

        host2 = 'snmp://a:b@foo?&priv_password=pass'
        host2 = netutils.urlsplit(host2)
        self.inspector._get_auth_strategy(host2)
        mock_method.assert_called_with('a', authKey='b', privKey='pass')
@@ -14,214 +14,107 @@
# under the License.

import mock
from oslo_config import fixture as fixture_config
from oslotest import base
from oslotest import mockpatch

from ceilometer.agent import manager
from ceilometer.image import glance
import ceilometer.tests.base as base

IMAGE_LIST = [
    type('Image', (object,),
         {u'status': u'queued',
          u'name': "some name",
          u'deleted': False,
          u'container_format': None,
          u'created_at': u'2012-09-18T16:29:46',
          u'disk_format': None,
          u'updated_at': u'2012-09-18T16:29:46',
          u'properties': {},
          u'min_disk': 0,
          u'protected': False,
          u'id': u'1d21a8d0-25f4-4e0a-b4ec-85f40237676b',
          u'location': None,
          u'checksum': None,
          u'owner': u'4c8364fc20184ed7971b76602aa96184',
          u'is_public': True,
          u'deleted_at': None,
         {u'status': u'active',
          u'tags': [],
          u'kernel_id': u'fd24d91a-dfd5-4a3c-b990-d4563eb27396',
          u'container_format': u'ami',
          u'min_ram': 0,
          u'size': 2048}),
          u'ramdisk_id': u'd629522b-ebaa-4c92-9514-9e31fe760d18',
          u'updated_at': u'2016-06-20T13:34:41Z',
          u'visibility': u'public',
          u'owner': u'6824974c08974d4db864bbaa6bc08303',
          u'file': u'/v2/images/fda54a44-3f96-40bf-ab07-0a4ce9e1761d/file',
          u'min_disk': 0,
          u'virtual_size': None,
          u'id': u'fda54a44-3f96-40bf-ab07-0a4ce9e1761d',
          u'size': 25165824,
          u'name': u'cirros-0.3.4-x86_64-uec',
          u'checksum': u'eb9139e4942121f22bbc2afc0400b2a4',
          u'created_at': u'2016-06-20T13:34:40Z',
          u'disk_format': u'ami',
          u'protected': False,
          u'schema': u'/v2/schemas/image'}),
    type('Image', (object,),
         {u'status': u'active',
          u'name': "hello world",
          u'deleted': False,
          u'container_format': None,
          u'created_at': u'2012-09-18T16:27:41',
          u'disk_format': None,
          u'updated_at': u'2012-09-18T16:27:41',
          u'properties': {},
          u'min_disk': 0,
          u'protected': False,
          u'id': u'22be9f90-864d-494c-aa74-8035fd535989',
          u'location': None,
          u'checksum': None,
          u'owner': u'9e4f98287a0246daa42eaf4025db99d4',
          u'is_public': True,
          u'deleted_at': None,
          u'tags': [],
          u'container_format': u'ari',
          u'min_ram': 0,
          u'size': 0}),
          u'updated_at': u'2016-06-20T13:34:38Z',
          u'visibility': u'public',
          u'owner': u'6824974c08974d4db864bbaa6bc08303',
          u'file': u'/v2/images/d629522b-ebaa-4c92-9514-9e31fe760d18/file',
          u'min_disk': 0,
          u'virtual_size': None,
          u'id': u'd629522b-ebaa-4c92-9514-9e31fe760d18',
          u'size': 3740163,
          u'name': u'cirros-0.3.4-x86_64-uec-ramdisk',
          u'checksum': u'be575a2b939972276ef675752936977f',
          u'created_at': u'2016-06-20T13:34:37Z',
          u'disk_format': u'ari',
          u'protected': False,
          u'schema': u'/v2/schemas/image'}),
    type('Image', (object,),
         {u'status': u'queued',
          u'name': None,
          u'deleted': False,
          u'container_format': None,
          u'created_at': u'2012-09-18T16:23:27',
          u'disk_format': "raw",
          u'updated_at': u'2012-09-18T16:23:27',
          u'properties': {},
          u'min_disk': 0,
          u'protected': False,
          u'id': u'8d133f6c-38a8-403c-b02c-7071b69b432d',
          u'location': None,
          u'checksum': None,
          u'owner': u'5f8806a76aa34ee8b8fc8397bd154319',
          u'is_public': True,
          u'deleted_at': None,
         {u'status': u'active',
          u'tags': [],
          u'container_format': u'aki',
          u'min_ram': 0,
          u'size': 1024}),
    type('Image', (object,),
         {u'status': u'queued',
          u'name': "some name",
          u'deleted': False,
          u'container_format': None,
          u'created_at': u'2012-09-18T16:29:46',
          u'disk_format': None,
          u'updated_at': u'2012-09-18T16:29:46',
          u'properties': {},
          u'updated_at': u'2016-06-20T13:34:35Z',
          u'visibility': u'public',
          u'owner': u'6824974c08974d4db864bbaa6bc08303',
          u'file': u'/v2/images/fd24d91a-dfd5-4a3c-b990-d4563eb27396/file',
          u'min_disk': 0,
          u'virtual_size': None,
          u'id': u'fd24d91a-dfd5-4a3c-b990-d4563eb27396',
          u'size': 4979632,
          u'name': u'cirros-0.3.4-x86_64-uec-kernel',
          u'checksum': u'8a40c862b5735975d82605c1dd395796',
          u'created_at': u'2016-06-20T13:34:35Z',
          u'disk_format': u'aki',
          u'protected': False,
          u'id': u'e753b196-49b4-48e8-8ca5-09ebd9805f40',
          u'location': None,
          u'checksum': None,
          u'owner': u'4c8364fc20184ed7971b76602aa96184',
          u'is_public': True,
          u'deleted_at': None,
          u'min_ram': 0,
          u'size': 2048}),
          u'schema': u'/v2/schemas/image'}),
]

ENDPOINT = 'end://point'


class _BaseObject(object):
    pass


class FakeGlanceClient(object):
    class images(object):
        pass


class TestManager(manager.AgentManager):

    def __init__(self):
        super(TestManager, self).__init__()
        self._keystone = mock.Mock()
        access = self._keystone.session.auth.get_access.return_value
        access.service_catalog.get_endpoints = mock.Mock(
            return_value={'image': mock.ANY})


class TestImagePollsterPageSize(base.BaseTestCase):

    @staticmethod
    def fake_get_glance_client(ksclient, endpoint):
        glanceclient = FakeGlanceClient()
        glanceclient.images.list = mock.MagicMock(return_value=IMAGE_LIST)
        return glanceclient

    @mock.patch('ceilometer.pipeline.setup_pipeline', mock.MagicMock())
    def setUp(self):
        super(TestImagePollsterPageSize, self).setUp()
        self.manager = TestManager()
        self.useFixture(mockpatch.PatchObject(
            glance._Base, 'get_glance_client',
            side_effect=self.fake_get_glance_client))
        self.CONF = self.useFixture(fixture_config.Config()).conf
        self.manager = manager.AgentManager()
        self.pollster = glance.ImageSizePollster()

    def _do_test_iter_images(self, page_size=0, length=0):
        self.CONF.set_override("glance_page_size", page_size)
        images = list(glance.ImagePollster().
                      _iter_images(self.manager.keystone, {}, ENDPOINT))
        kwargs = {}
        if page_size > 0:
            kwargs['page_size'] = page_size
        FakeGlanceClient.images.list.assert_called_with(
            filters={'is_public': None}, **kwargs)
        self.assertEqual(length, len(images))

    def test_page_size(self):
        self._do_test_iter_images(100, 4)

    def test_page_size_default(self):
        self._do_test_iter_images(length=4)

    def test_page_size_negative_number(self):
        self._do_test_iter_images(-1, 4)
    def test_image_pollster(self):
        image_samples = list(
            self.pollster.get_samples(self.manager, {}, resources=IMAGE_LIST))
        self.assertEqual(3, len(image_samples))
        self.assertEqual('image.size', image_samples[0].name)
        self.assertEqual(25165824, image_samples[0].volume)
        self.assertEqual('6824974c08974d4db864bbaa6bc08303',
                         image_samples[0].project_id)
        self.assertEqual('fda54a44-3f96-40bf-ab07-0a4ce9e1761d',
                         image_samples[0].resource_id)


class TestImagePollster(base.BaseTestCase):

    @staticmethod
    def fake_get_glance_client(ksclient, endpoint):
        glanceclient = _BaseObject()
        setattr(glanceclient, "images", _BaseObject())
        setattr(glanceclient.images,
                "list", lambda *args, **kwargs: iter(IMAGE_LIST))
        return glanceclient

class TestImagePageSize(base.BaseTestCase):
    @mock.patch('ceilometer.pipeline.setup_pipeline', mock.MagicMock())
    def setUp(self):
        super(TestImagePollster, self).setUp()
        self.manager = TestManager()
        self.useFixture(mockpatch.PatchObject(
            glance._Base, 'get_glance_client',
            side_effect=self.fake_get_glance_client))
        super(TestImagePageSize, self).setUp()
        self.manager = manager.AgentManager()
        self.pollster = glance.ImagePollster()

    def test_default_discovery(self):
        pollster = glance.ImagePollster()
        self.assertEqual('endpoint:image', pollster.default_discovery)

    def test_iter_images(self):
        # Tests whether the iter_images method returns a unique image
        # list when there is nothing in the cache
        images = list(glance.ImagePollster().
                      _iter_images(self.manager.keystone, {}, ENDPOINT))
        self.assertEqual(len(set(image.id for image in images)), len(images))

    def test_iter_images_cached(self):
        # Tests whether the iter_images method returns the values from
        # the cache
        cache = {'%s-images' % ENDPOINT: []}
        images = list(glance.ImagePollster().
                      _iter_images(self.manager.keystone, cache,
                                   ENDPOINT))
        self.assertEqual([], images)

    def test_image(self):
        samples = list(glance.ImagePollster().get_samples(self.manager, {},
                                                          [ENDPOINT]))
        self.assertEqual(4, len(samples))
        for sample in samples:
            self.assertEqual(1, sample.volume)

    def test_image_size(self):
        samples = list(glance.ImageSizePollster().get_samples(self.manager,
                                                              {},
                                                              [ENDPOINT]))
        self.assertEqual(4, len(samples))
        for image in IMAGE_LIST:
            self.assertTrue(
                any(map(lambda sample: sample.volume == image.size,
                        samples)))

    def test_image_get_sample_names(self):
        samples = list(glance.ImagePollster().get_samples(self.manager, {},
                                                          [ENDPOINT]))
        self.assertEqual(set(['image']), set([s.name for s in samples]))

    def test_image_size_get_sample_names(self):
        samples = list(glance.ImageSizePollster().get_samples(self.manager,
                                                              {},
                                                              [ENDPOINT]))
        self.assertEqual(set(['image.size']), set([s.name for s in samples]))
    def test_image_pollster(self):
        image_samples = list(
            self.pollster.get_samples(self.manager, {}, resources=IMAGE_LIST))
        self.assertEqual(3, len(image_samples))
        self.assertEqual('image', image_samples[0].name)
        self.assertEqual(1, image_samples[0].volume)
        self.assertEqual('6824974c08974d4db864bbaa6bc08303',
                         image_samples[0].project_id)
        self.assertEqual('fda54a44-3f96-40bf-ab07-0a4ce9e1761d',
                         image_samples[0].resource_id)
@@ -117,6 +117,30 @@ class NotifierOnlyPublisherTest(BasePublisherTestCase):
                                        driver=mock.ANY, retry=mock.ANY,
                                        publisher_id=mock.ANY)

    @mock.patch('ceilometer.messaging.get_transport')
    def test_publish_other_host(self, cgt):
        msg_publisher.SampleNotifierPublisher(
            netutils.urlsplit('notifier://foo:foo@127.0.0.1:1234'))
        cgt.assert_called_with('rabbit://foo:foo@127.0.0.1:1234')

        msg_publisher.EventNotifierPublisher(
            netutils.urlsplit('notifier://foo:foo@127.0.0.1:1234'))
        cgt.assert_called_with('rabbit://foo:foo@127.0.0.1:1234')

    @mock.patch('ceilometer.messaging.get_transport')
    def test_publish_other_host_vhost_and_query(self, cgt):
        msg_publisher.SampleNotifierPublisher(
            netutils.urlsplit('notifier://foo:foo@127.0.0.1:1234/foo'
                              '?driver=amqp&amqp_auto_delete=true'))
        cgt.assert_called_with('amqp://foo:foo@127.0.0.1:1234/foo'
                               '?amqp_auto_delete=true')

        msg_publisher.EventNotifierPublisher(
            netutils.urlsplit('notifier://foo:foo@127.0.0.1:1234/foo'
                              '?driver=amqp&amqp_auto_delete=true'))
        cgt.assert_called_with('amqp://foo:foo@127.0.0.1:1234/foo'
                               '?amqp_auto_delete=true')


class TestPublisher(testscenarios.testcase.WithScenarios,
                    BasePublisherTestCase):
@@ -127,6 +127,28 @@ class TestUDPPublisher(base.BaseTestCase):
        self._check_udp_socket('udp://[::1]:4952',
                               socket.AF_INET6)

    def test_publisher_udp_socket_ipv4_hostname(self):
        host = "ipv4.google.com"
        try:
            socket.getaddrinfo(host, None,
                               socket.AF_INET,
                               socket.SOCK_DGRAM)
        except socket.gaierror:
            self.skipTest("cannot resolve not running test")
        url = "udp://"+host+":4952"
        self._check_udp_socket(url, socket.AF_INET)

    def test_publisher_udp_socket_ipv6_hostname(self):
        host = "ipv6.google.com"
        try:
            socket.getaddrinfo(host, None,
                               socket.AF_INET6,
                               socket.SOCK_DGRAM)
        except socket.gaierror:
            self.skipTest("cannot resolve not running test")
        url = "udp://"+host+":4952"
        self._check_udp_socket(url, socket.AF_INET6)

    def test_published(self):
        self.data_sent = []
        with mock.patch('socket.socket',
@@ -37,7 +37,7 @@ class TestEventDispatcherVerifier(base.BaseTestCase):
            'ceilometer.publisher.utils',
            'publisher')
        self.useFixture(mockpatch.Patch(
            'ceilometer.dispatcher.database.DatabaseDispatcher',
            'ceilometer.dispatcher.database.EventDatabaseDispatcher',
            new=FakeDispatcher))

    @mock.patch('ceilometer.publisher.utils.verify_signature')

@@ -46,7 +46,7 @@ class TestEventDispatcherVerifier(base.BaseTestCase):
            return ev.get('message_signature') != 'bad_signature'
        mocked_verify.side_effect = _fake_verify
        sample = {"payload": [{"message_signature": "bad_signature"}]}
        manager = dispatcher.load_dispatcher_manager()[0]
        manager = dispatcher.load_dispatcher_manager()[1]
        v = collector.EventEndpoint("secret", manager)
        v.sample([sample])
        self.assertEqual([], manager['database'].obj.events)
@@ -184,3 +184,9 @@ class TestNeutronClient(base.BaseTestCase):
        self.assertEqual(2, stats[0]['total_connections'])
        self.assertEqual(3, stats[0]['bytes_in'])
        self.assertEqual(4, stats[0]['bytes_out'])

    def test_v1_list_loadbalancer_returns_empty_list(self):
        self.assertEqual([], self.nc.list_loadbalancer())

    def test_v1_list_listener_returns_empty_list(self):
        self.assertEqual([], self.nc.list_listener())
@@ -40,10 +40,8 @@ class BaseConversionTransformer(transformer.TransformerBase):
                                     unit and scaling factor (a missing value
                                     connotes no change)
        """
        source = source or {}
        target = target or {}
        self.source = source
        self.target = target
        self.source = source or {}
        self.target = target or {}
        super(BaseConversionTransformer, self).__init__(**kwargs)

    def _map(self, s, attr):
@@ -1 +1 @@
debian/mans/ceilometer-api.8
debian/mans/ceilometer-api.8

@@ -1 +1 @@
debian/mans/ceilometer-collector.8
debian/mans/ceilometer-collector.8
@@ -1,11 +1,15 @@
/usr/bin/ceilometer-db-legacy-clean
/usr/bin/ceilometer-dbsync
/usr/bin/ceilometer-expirer
/usr/bin/ceilometer-polling
/usr/bin/ceilometer-rootwrap
/usr/bin/ceilometer-send-sample
/usr/bin/ceilometer-db-legacy-clean
etc/ceilometer/api_paste.ini /etc/ceilometer
etc/ceilometer/event_definitions.yaml /etc/ceilometer
etc/ceilometer/event_pipeline.yaml /etc/ceilometer
etc/ceilometer/examples/loadbalancer_v2_meter_definitions.yaml /etc/ceilometer/examples
etc/ceilometer/examples/osprofiler_event_definitions.yaml /etc/ceilometer/examples
etc/ceilometer/gnocchi_resources.yaml /etc/ceilometer
etc/ceilometer/pipeline.yaml /etc/ceilometer
etc/ceilometer/policy.json /etc/ceilometer
etc/ceilometer/rootwrap.conf /etc/ceilometer
@@ -1,3 +1,13 @@
+ceilometer (1:7.0.0~b3-1) experimental; urgency=medium
+
+  * New upstream release.
+  * Fixed (build-)depends for this release.
+  * Using OpenStack's Gerrit as VCS URLs.
+  * Points .gitreview to OpenStack packaging-deb's Gerrit.
+  * Fixed installation of files in /etc/ceilometer for this release.
+
+ -- Thomas Goirand <zigo@debian.org>  Thu, 15 Sep 2016 22:58:16 +0200
+
 ceilometer (1:7.0.0~b2-1) experimental; urgency=medium

   * Updated Danish translation of the debconf templates (Closes: #830640).
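The `1:7.0.0~b3-1` version in the new changelog entry is three parts: an epoch, an upstream version, and a Debian revision. The `~b3` suffix is there because `~` sorts before everything else in dpkg version comparison, so the beta package correctly precedes the eventual `7.0.0-1`. A rough sketch of the three-way split (the regex approximates the format from Debian Policy; it is not dpkg's parser):

```python
import re

def split_debian_version(v):
    """Split 'epoch:upstream-revision' into its three parts (approximate).

    The epoch and revision are optional; the revision is everything after
    the last hyphen, matching how dpkg divides the string.
    """
    m = re.fullmatch(r'(?:(\d+):)?(.+?)(?:-([^-]+))?', v)
    epoch, upstream, revision = m.groups()
    return int(epoch or 0), upstream, revision

print(split_debian_version('1:7.0.0~b3-1'))  # epoch 1, upstream 7.0.0~b3, revision 1
print(split_debian_version('7.0.0'))         # no epoch, no revision
```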
@@ -6,17 +6,18 @@ Uploaders: Thomas Goirand <zigo@debian.org>,
 Build-Depends: debhelper (>= 9),
                dh-python,
                dh-systemd,
-               openstack-pkg-tools (>= 40~),
+               openstack-pkg-tools (>= 52~),
                po-debconf,
                python-all,
                python-pbr (>= 1.8),
                python-setuptools,
                python-sphinx,
-Build-Depends-Indep: mongodb,
-                     git,
+Build-Depends-Indep: git,
+                     mongodb,
                      python-awsauth,
                      python-concurrent.futures,
                      python-contextlib2,
+                     python-cotyledon,
                      python-coverage,
                      python-dateutil (>= 2.4.2),
                      python-debtcollector (>= 1.2.0),
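Entries such as `openstack-pkg-tools (>= 52~)` in the control hunk above use Debian Policy's package-relationship syntax: a package name plus an optional versioned constraint. A quick sketch of pulling those apart (simplified; real parsers also handle alternatives with `|` and architecture qualifiers):

```python
import re

def parse_dep(entry):
    """Parse 'name (op version)' from one Build-Depends entry (simplified)."""
    m = re.fullmatch(r'([A-Za-z0-9.+-]+)\s*(?:\(\s*(<<|<=|=|>=|>>)\s*([^)]+?)\s*\))?',
                     entry.strip().rstrip(','))
    name, op, version = m.groups()
    return name, op, version

print(parse_dep('openstack-pkg-tools (>= 52~),'))  # versioned build dependency
print(parse_dep('po-debconf,'))                    # unversioned dependency
```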
@@ -42,6 +43,9 @@ Build-Depends-Indep: mongodb,
                      python-neutronclient (>= 1:4.2.0),
                      python-novaclient (>= 2:2.29.0),
                      python-openstackdocstheme (>= 1.0.3),
+                     python-os-api-ref (>= 0.1.0),
+                     python-os-testr (>= 0.4.1),
+                     python-os-win (>= 0.2.3),
                      python-oslo.cache (>= 1.5.0),
                      python-oslo.concurrency (>= 3.5.0),
                      python-oslo.config (>= 1:3.9.0),
@@ -54,13 +58,10 @@ Build-Depends-Indep: mongodb,
                      python-oslo.reports (>= 1.0.0),
                      python-oslo.rootwrap (>= 2.0.0),
                      python-oslo.serialization (>= 2.0.0),
-                     python-oslo.service (>= 1.0.0),
                      python-oslo.utils (>= 3.5.0),
                      python-oslo.vmware (>= 1.16.0),
                      python-oslosphinx (>= 2.5.0),
                      python-oslotest (>= 1.10.0),
-                     python-os-testr (>= 0.4.1),
-                     python-os-win (>= 0.2.3),
                      python-pastedeploy,
                      python-pecan (>= 1.0.0),
                      python-psycopg2 (>= 2.5),
@@ -70,8 +71,8 @@ Build-Depends-Indep: mongodb,
                      python-requests (>= 2.8.1),
                      python-retrying,
                      python-six (>= 1.9.0),
-                     python-sphinxcontrib.httpdomain,
                      python-sphinxcontrib-pecanwsme,
+                     python-sphinxcontrib.httpdomain,
                      python-sqlalchemy (>= 1.0.10),
                      python-stevedore (>= 1.9.0),
                      python-swiftclient (>= 1:2.2.0),
@@ -88,8 +89,8 @@ Build-Depends-Indep: mongodb,
                      tempest (>= 1:12.1.0),
                      testrepository,
 Standards-Version: 3.9.8
-Vcs-Browser: https://anonscm.debian.org/cgit/openstack/ceilometer.git/
-Vcs-Git: https://anonscm.debian.org/git/openstack/ceilometer.git
+Vcs-Browser: https://git.openstack.org/cgit/openstack/deb-ceilometer
+Vcs-Git: https://git.openstack.org/openstack/deb-ceilometer
 Homepage: http://wiki.openstack.org/Ceilometer

 Package: python-ceilometer
@@ -97,6 +98,7 @@ Section: python
 Architecture: all
 Depends: libjs-jquery,
          python-concurrent.futures,
+         python-cotyledon,
          python-dateutil (>= 2.4.2),
          python-debtcollector (>= 1.2.0),
          python-futurist (>= 0.11.0),
@@ -127,7 +129,6 @@ Depends: libjs-jquery,
          python-oslo.reports (>= 1.0.0),
          python-oslo.rootwrap (>= 2.0.0),
          python-oslo.serialization (>= 2.0.0),
-         python-oslo.service (>= 1.0.0),
          python-oslo.utils (>= 3.5.0),
          python-oslo.vmware (>= 1.16.0),
          python-pastedeploy,
Some files were not shown because too many files have changed in this diff.