Preparing infrastructure for events-api

This change sets up the tooling for events-api
against the latest requirements and guidelines
from OpenStack. Remove old code and add the basic
structure of the Python modules. Add basic
documentation and release notes. Prepare to use
oslo-config-generator. Add a basic devstack setup
to the project.

Story: 2001112
Task: 4798

Change-Id: I76d737bf9d1216b041bc1a518cc2098f28e7da7b
Tomasz Trębski 2017-06-30 08:30:50 +02:00 committed by Artur Basiak
parent dba3061494
commit 5321635049
100 changed files with 1977 additions and 12538 deletions


@@ -1,8 +1,8 @@
 [run]
 branch = True
-source = monasca
-omit = monasca/tests/*
+source = monasca_events_api
+omit = monasca_events_api/tests/*
 [report]
-ignore-errors = True
+ignore_errors = True

1
.gitignore

@@ -31,3 +31,4 @@ logs/
 log/
 *config*.yml
 db/config.yml
+.coverage.*


@@ -1,4 +1,4 @@
 [gerrit]
 host=review.openstack.org
 port=29418
-project=stackforge/monasca-api
+project=openstack/monasca-events-api


@@ -6,3 +6,4 @@ test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
 test_id_option=--load-list $IDFILE
 test_list_option=--list
+group_regex=monasca_events_api\.tests(?:\.|_)([^_]+)
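The `group_regex` line tells testr to schedule tests whose ids produce the same captured group on the same worker. A minimal Python sketch of how that grouping key is derived (the test ids below are hypothetical examples, not taken from this change):

```python
import re

# group_regex from .testr.conf: test ids whose captured group matches are
# scheduled on the same test-runner worker.
GROUP_REGEX = re.compile(r"monasca_events_api\.tests(?:\.|_)([^_]+)")

def group_key(test_id):
    """Return the scheduling key for a test id, or None if it doesn't match."""
    match = GROUP_REGEX.match(test_id)
    return match.group(1) if match else None

# Hypothetical test ids: both share the key "api.test", so they run together.
a = group_key("monasca_events_api.tests.api.test_versions.TestVersions.test_get")
b = group_key("monasca_events_api.tests.api.test_health.TestHealth.test_get")
```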

470
README.md

@@ -1,470 +0,0 @@
# Overview
`monasca-events-api` is a RESTful API server designed with a [layered architecture](http://en.wikipedia.org/wiki/Multilayered_architecture).
## Keystone Configuration
For secure operation of the Monasca Events API, the API must be configured to use Keystone in the configuration file under the middleware section. Monasca only works with a Keystone v3 server. The important parts of the configuration are explained below:
* serverVIP - This is the hostname or IP Address of the Keystone server
* serverPort - The port for the Keystone server
* useHttps - Whether to use https when making requests of the Keystone API
* truststore - If useHttps is true and the Keystone server is not using a certificate signed by a public CA recognized by Java, the CA certificate can be placed in a truststore so the Monasca API will trust it, otherwise it will reject the https connection. This must be a JKS truststore
* truststorePassword - The password for the above truststore
* connSSLClientAuth - If the Keystone server requires the SSL client used by the Monasca server to have a specific client certificate, this should be true, false otherwise
* keystore - The keystore holding the SSL Client certificate if connSSLClientAuth is true
* keystorePassword - The password for the keystore
* defaultAuthorizedRoles - An array of roles that authorize a user to access the complete Monasca API. User must have at least one of these roles. See below
* agentAuthorizedRoles - An array of roles that authorize only the posting of metrics. See Keystone Roles below
* adminAuthMethod - "password" if the Monasca API should use adminUser and adminPassword to log in to the Keystone server to check the user's token, "token" if the Monasca API should use adminToken
* adminUser - Admin user name
* adminPassword - Admin user password
* adminProjectId - Specify the project ID the api should use to request an admin token. Defaults to the admin user's default project. The adminProjectId option takes precedence over adminProjectName.
* adminProjectName - Specify the project name the api should use to request an admin token. Defaults to the admin user's default project. The adminProjectId option takes precedence over adminProjectName.
* adminToken - A valid admin user token if adminAuthMethod is token
* timeToCacheToken - How long the Monasca API should cache the user's token before checking it again
### Installation
To install the events api, git clone the source and run the
following command:

    sudo python setup.py install

If it installs successfully, you will need to make changes to the following
two files to reflect your system settings, especially where the kafka server is
located:

    /etc/monasca/events_api.ini
    /etc/monasca/events_api.conf

Once the configurations are modified to match your environment, you can start
up the server with one of the following commands.
Running the server in foreground mode:

    gunicorn -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/events_api.ini

Running the server as a daemon:

    gunicorn -k eventlet --worker-connections=2000 --backlog=1000 --paste /etc/monasca/events_api.ini -D

To check that the code follows the python coding style, run the following command
from the root directory of this project:

    tox -e pep8

To run all the unit test cases, run the following command from the root
directory of this project:

    tox -e py27 (or -e py26, -e py33)
# Monasca Events API
Stream Definition Methods
-------------------------
## POST /v2.0/stream-definitions
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
```
{
"fire_criteria": [
{"event_type": "compute.instance.create.start"},
{"event_type": "compute.instance.create.end"}
],
"description": "provisioning duration",
"name": "example",
"group_by": ["instance_id"],
"expiration": 3000,
"select": [{
"traits": {"tenant_id": "406904"},
"event_type": "compute.instance.create.*"
}],
"fire_actions": [action_id],
"expire_actions": [action_id]
}
```
### Request Example
```
POST /v2.0/stream-definitions HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
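The request body for this call can be assembled programmatically. A minimal Python sketch, using the example values above and a hypothetical action ID (in practice this must be the ID of an existing notification method):

```python
import json

# Hypothetical action ID; substitute a real notification-method ID.
action_id = "56330521-92da-4a84-8239-73d880b978fa"

stream_definition = {
    "fire_criteria": [
        {"event_type": "compute.instance.create.start"},
        {"event_type": "compute.instance.create.end"},
    ],
    "description": "provisioning duration",
    "name": "example",
    "group_by": ["instance_id"],
    "expiration": 3000,
    "select": [{
        "traits": {"tenant_id": "406904"},
        "event_type": "compute.instance.create.*",
    }],
    "fire_actions": [action_id],
    "expire_actions": [action_id],
}

# Serialize to the JSON body sent with POST /v2.0/stream-definitions.
body = json.dumps(stream_definition)
```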
## GET /v2.0/stream-definitions
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
None.
### Request Example
```
GET /v2.0/stream-definitions HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
### Response Body
Returns a JSON object with a 'links' array of links and an 'elements' array of stream definition objects with the following fields:
* id (string)
* name (string)
* fire_actions (string)
* description (string)
* expire_actions (string)
* created_at (datetime string)
* select
* traits
* tenant_id (string)
* event_type (string)
* group_by (string)
* expiration (int)
* links - links to stream-definition
* updated_at (datetime string)
* actions_enabled (bool)
* fire_criteria - JSON list of event fire criteria
### Response Body Example
```
{
"links": [
{
"rel": "self",
"href": "http://192.168.10.4:8072/v2.0/stream-definitions"
}
],
"elements": [
{
"id": "242dd5f4-2ef6-11e5-8945-0800273a0b5b",
"fire_actions": [
"56330521-92da-4a84-8239-73d880b978fa"
],
"description": "provisioning duration",
"expire_actions": [
"56330521-92da-4a84-8239-73d880b978fa"
],
"created_at": "2015-07-20T15:44:01",
"select": [
{
"traits": {
"tenant_id": "406904"
},
"event_type": "compute.instance.create.*"
}
],
"group_by": [
"instance_id"
],
"expiration": 3000,
"links": [
{
"rel": "self",
"href": "http://192.168.10.4:8072/v2.0/stream-definitions/242dd5f4-2ef6-11e5-8945-0800273a0b5b"
}
],
"updated_at": "2015-07-20T15:44:01",
"actions_enabled": true,
"name": "1437407040.8",
"fire_criteria": [
{
"event_type": "compute.instance.create.start"
},
{
"event_type": "compute.instance.create.end"
}
]
}
]
}
```
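A client can walk the `elements` array of a response like the one above. A short sketch with a trimmed sample payload:

```python
import json

# Trimmed sample of a GET /v2.0/stream-definitions response body.
response_body = json.loads("""
{
  "links": [
    {"rel": "self", "href": "http://192.168.10.4:8072/v2.0/stream-definitions"}
  ],
  "elements": [
    {"id": "242dd5f4-2ef6-11e5-8945-0800273a0b5b",
     "name": "1437407040.8",
     "actions_enabled": true,
     "expiration": 3000}
  ]
}
""")

# Collect the id of every defined stream.
stream_ids = [element["id"] for element in response_body["elements"]]
```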
## GET /v2.0/stream-definitions/{definition_id}
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
None.
### Request Example
```
GET /v2.0/stream-definitions/242dd5f4-2ef6-11e5-8945-0800273a0b5b HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
### Response Body
Returns a JSON object describing a single stream definition with the following fields:
* id (string)
* name (string)
* fire_actions (string)
* description (string)
* expire_actions (string)
* created_at (datetime string)
* select
* traits
* tenant_id (string)
* event_type (string)
* group_by (string)
* expiration (int)
* links - links to stream-definition
* updated_at (datetime string)
* actions_enabled (bool)
* fire_criteria - JSON list of event fire criteria
### Response Body Example
```
{
"id": "242dd5f4-2ef6-11e5-8945-0800273a0b5b",
"fire_actions": [
"56330521-92da-4a84-8239-73d880b978fa"
],
"description": "provisioning duration",
"expire_actions": [
"56330521-92da-4a84-8239-73d880b978fa"
],
"created_at": "2015-07-20T15:44:01",
"select": [
{
"traits": {
"tenant_id": "406904"
},
"event_type": "compute.instance.create.*"
}
],
"group_by": [
"instance_id"
],
"expiration": 3000,
"links": [
{
"rel": "self",
"href": "http://192.168.10.4:8072/v2.0/stream-definitions/242dd5f4-2ef6-11e5-8945-0800273a0b5b"
}
],
"updated_at": "2015-07-20T15:44:01",
"actions_enabled": true,
"name": "1437407040.8",
"fire_criteria": [
{
"event_type": "compute.instance.create.start"
},
{
"event_type": "compute.instance.create.end"
}
]
}
```
## DELETE /v2.0/stream-definitions/{definition_id}
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
None.
### Request Example
```
DELETE /v2.0/stream-definitions/242dd5f4-2ef6-11e5-8945-0800273a0b5b HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
### Response Body
None.
### Response Body Example
None.
## POST /v2.0/transforms/
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
```
{
"name": "example",
"description": "an example definition",
"specification": YAML_data
}
```
### Request Example
```
POST /v2.0/transforms/ HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
### Response Body
None.
### Response Body Example
None.
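The transform creation body above carries its specification as YAML. A Python sketch building such a body, with a hypothetical specification string (the real YAML schema is not shown in this document):

```python
import json

# Hypothetical transform specification; the API carries it as a YAML string.
yaml_spec = (
    "- event_type: compute.instance.create.*\n"
    "  traits:\n"
    "    tenant_id: {fields: payload.tenant_id}\n"
)

transform = {
    "name": "example",
    "description": "an example definition",
    "specification": yaml_spec,
}

# JSON body for POST /v2.0/transforms/.
body = json.dumps(transform)
```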
## GET /v2.0/transforms/
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
None.
### Request Example
```
GET /v2.0/transforms/ HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
### Response Body
Returns a JSON object with a 'links' array of links and an 'elements' array of transform definition objects with the following fields:
* id (string)
* name (string)
* description (string)
* enabled (bool)
* tenant_id (string)
* deleted_at (datetime)
* specification (string YAML data)
* created_at (datetime)
* updated_at (datetime)
### Response Body Example
```
{
"links": [
{
"rel": "self",
"href": "http://192.168.10.4:8072/v2.0/transforms"
}
],
"elements": [
{
"enabled": 0,
"id": "a794f22f-a231-47a0-8618-37f12b7a6f77",
"tenant_id": "d502aac2388b43f392c302b37a401ae5",
"deleted_at": null,
"specification": YAML_data,
"created_at": 1437407042,
"updated_at": 1437407042,
"description": "an example definition",
"name": "func test2"
}
]
}
```
## GET /v2.0/transforms/{transform_id}
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
None.
### Request Example
```
GET /v2.0/transforms/a794f22f-a231-47a0-8618-37f12b7a6f77 HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
### Response Body
Returns a JSON object describing a single transform definition with the following fields:
* id (string)
* name (string)
* description (string)
* enabled (bool)
* tenant_id (string)
* deleted_at (datetime)
* specification (string YAML data)
* links - links to transform definition
* created_at (datetime)
* updated_at (datetime)
### Response Body Example
```
{
"enabled": 0,
"id": "a794f22f-a231-47a0-8618-37f12b7a6f77",
"tenant_id": "d502aac2388b43f392c302b37a401ae5",
"created_at": 1437407042,
"specification": "YAML_data",
"links": [
{
"rel": "self",
"href": "http://192.168.10.4:8072/v2.0/transforms/a794f22f-a231-47a0-8618-37f12b7a6f77"
}
],
"deleted_at": null,
"updated_at": 1437407042,
"description": "an example definition",
"name": "func test2"
}
```
## DELETE /v2.0/transforms/{transform_id}
### Headers
* X-Auth-Token (string, required) - Keystone auth token
* Accept (string) - application/json
### Request Body
None.
### Request Example
```
DELETE /v2.0/transforms/a794f22f-a231-47a0-8618-37f12b7a6f77 HTTP/1.1
Host: 192.168.10.4:8072
X-Auth-Token: 2b8882ba2ec44295bf300aecb2caa4f7
Accept: application/json
Cache-Control: no-cache
```
### Response Body
None.
### Response Body Example
None.
# License
Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied.
See the License for the specific language governing permissions and
limitations under the License.

53
README.rst

@@ -0,0 +1,53 @@
========================
Team and repository tags
========================
OpenStack Monasca-Events-Api
============================
OpenStack Monasca-Events-Api provides a RESTful API to collect events from the OpenStack cloud.
OpenStack Monasca-Events-Api is distributed under the terms of the Apache License, Version 2.0.
The full terms and conditions of this license are detailed in the LICENSE file.
API
---
For more information about OpenStack APIs, SDKs and CLIs, please see:
* `OpenStack Application Development <https://www.openstack.org/appdev/>`_
* `OpenStack Developer Documentation <https://developer.openstack.org/>`_
Developers
----------
For information on how to contribute to Monasca-events-api, please see the
contents of CONTRIBUTING.rst.
Any new code must follow the development guidelines detailed
in the HACKING.rst file, and pass all unit tests as well as linters.
Further developer focused documentation is available at:
* `OpenStack Monasca-events-api <https://docs.openstack.org/developer/monasca-events-api/>`_
Operators
---------
To learn how to deploy and configure OpenStack Monasca-events-api, consult the
documentation available online at:
* `Installation <https://docs.openstack.org/monasca-events-api/latest/install/>`_
* `Configuration <https://docs.openstack.org/monasca-events-api/latest/configuration/>`_
Bug tracking
------------
In the unfortunate event that bugs are discovered, they should
be reported to the appropriate bug tracker. If you obtained
the software from a third-party operating system vendor, it is
often wise to use their bug tracker for reporting problems.
In all other cases, use the master OpenStack bug tracker,
available at:
* `Storyboard <https://storyboard.openstack.org/#!/project/866>`_

259
api-guide/source/conf.py

@@ -0,0 +1,259 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Monasca Events API documentation build configuration file
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.6'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'openstackdocstheme'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General details about project
repository_name = u'openstack/monasca-events-api'
project = u'Monasca Events API Guide'
bug_project = u'monasca-events-api'
bug_tag = u'api-guide'
copyright = u'2014, OpenStack Foundation'
from monasca_events_api.version import version_info
version = version_info.version_string()
release = version_info.release_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# A shorter title for the navigation bar. Default is the same as html_title.
html_short_title = 'API Guide'
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = []
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'monascaeventsapi-api-guide'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'MonascaEventsApiAPI.tex', u'Monasca Events API Documentation',
u'OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'monascaeventsapiapi', u'Monasca Events API Documentation',
[u'OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'MonascaEventsApiAPIGuide', u'Monasca Events API Guide',
u'OpenStack Foundation', 'APIGuide',
'This guide teaches OpenStack Monasca Events service users concepts about '
'managing keys in an OpenStack cloud with the Monasca Events API.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# -- Options for PDF output --------------------------------------------------
pdf_documents = [
('index', u'MonascaEventsApiAPIGuide', u'Monasca Events API Guide', u'OpenStack '
'contributors')
]


@@ -0,0 +1,35 @@
..
Copyright 2017 Fujitsu LIMITED
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
==================
Monasca Events API
==================
The monasca-events-api project has a RESTful HTTP service called the
Monasca Events API. Services (agents) can send events collected from
the OpenStack message bus to this API.
This guide covers the concepts in the Monasca Events API.
For a full reference listing, please see:
`Monasca Events API Reference <http://developer.openstack.org/api-ref/monasca/#monasca-events-api>`__.
We welcome feedback, comments and bug reports at
`storyboard/monasca <https://storyboard.openstack.org/#!/project_group/866>`__.
Contents
========
.. toctree::
:maxdepth: 2

258
api-ref/source/conf.py

@@ -0,0 +1,258 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Monasca Events API documentation build configuration file
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.6'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'os_api_ref',
'openstackdocstheme'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General details about project
repository_name = u'openstack/monasca-events-api'
project = u'Monasca Events Ref Guide'
bug_project = u'monasca-events-api'
bug_tag = u'api-ref'
copyright = u'2014, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from monasca_events_api.version import version_info
version = version_info.version_string()
release = version_info.release_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# To use the API Reference sidebar dropdown menu,
# uncomment the html_theme_options parameter. The theme
# variable, sidebar_dropdown, should be set to `api_ref`.
# Otherwise, the list of links for the User and Ops docs
# appear in the sidebar dropdown menu.
html_theme_options = {"sidebar_dropdown": "api_ref",
"sidebar_mode": "toc"}
# A shorter title for the navigation bar. Default is the same as html_title.
html_short_title = 'API Ref'
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = []
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'monascaeventsapi-api-ref'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'MonascaEventsApi.tex', u'Monasca Events API Documentation',
u'OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'monascaeventsapi', u'Monasca Events API Documentation',
[u'OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'MonascaEventsAPI', u'Monasca Events API Documentation',
u'OpenStack Foundation', 'MonascaEventsAPI', 'Monasca Events API',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']

api-ref/source/index.rst

@@ -0,0 +1,22 @@
:tocdepth: 2
..
Copyright 2014-2017 Fujitsu LIMITED
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===========================
Monasca Events Service APIs
===========================
.. rest_expand_all::


@@ -0,0 +1,7 @@
# config-generator
To generate the sample configuration file, execute:
```bash
tox -e genconfig
```


@@ -0,0 +1,6 @@
[DEFAULT]
output = etc/monasca/events-api.conf.sample
width = 80
format = ini
namespace = events.api
namespace = oslo.log
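
With these settings, `oslo-config-generator` walks the listed namespaces and writes a fully commented sample file to the configured output path. An illustrative excerpt of what the generated `events-api.conf.sample` may look like follows; the exact options are discovered from the `events.api` and `oslo.log` namespaces, so this is only a sketch of options known to come from `oslo.log`:

```ini
[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of
# the default INFO level. (boolean value)
#debug = false

# The name of a logging configuration file. (string value)
#log_config_append = <None>
```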


@@ -1,141 +0,0 @@
#!/opt/monasca/bin/python
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script is designed to demo the entire Monasca Events package.
# It posts a stream definition and a transform definition,
# and then generates events at a rate of 5 per minute.
import datetime
import json
import kafka
import sys
import time
import notigen
import requests
import yaml
from monascaclient import ksclient
events_url = "http://192.168.10.4:8082/v2.0"
def token():
keystone = {
'username': 'mini-mon',
'password': 'password',
'project': 'test',
'auth_url': 'http://192.168.10.5:35357/v3'
}
ks_client = ksclient.KSClient(**keystone)
return ks_client.token
headers = {
'X-Auth-User': 'mini-mon',
'X-Auth-Key': 'password',
'X-Auth-Token': token(),
'Accept': 'application/json',
'User-Agent': 'python-monascaclient',
'Content-Type': 'application/json'}
def stream_definition_post():
notif_resp = requests.get(
url="http://192.168.10.4:8080/v2.0/notification-methods",
headers=headers)
notif_dict = json.loads(notif_resp.text)
action_id = str(notif_dict['elements'][0]['id'])
body = {"fire_criteria": [{"event_type": "compute.instance.create.start"},
{"event_type": "compute.instance.create.end"}],
"description": "provisioning duration",
"name": "Example Stream Definition",
"group_by": ["instance_id"],
"expiration": 3000,
"select": [{"traits": {"tenant_id": "406904"},
"event_type": "compute.instance.create.*"}],
"fire_actions": [action_id],
"expire_actions": [action_id]}
response = requests.post(
url=events_url + "/stream-definitions",
data=json.dumps(body),
headers=headers)
def transform_definition_post():
# Load the example YAML transform specification and post it to the API
with open('files/transform_definition.yaml', 'r') as fh:
specification_data = yaml.safe_load(fh)
body = {
"name": 'Example Transform Definition',
"description": 'an example description',
"specification": str(specification_data)
}
response = requests.post(
url=events_url + "/transforms",
data=json.dumps(body),
headers=headers)
def event_generator():
# generate 5 events per minute
g = notigen.EventGenerator("files/event_templates", operations_per_hour=300)
now = datetime.datetime.utcnow()
start = now
nevents = 0
length = 0
while nevents < 300:
e = g.generate(now)
if e:
nevents += len(e)
key = time.time() * 1000
msg = e
if len(msg) > length:
length = len(msg)
print("Max notification size: {}".format(length))
response = requests.post(
url=events_url + "/events",
data=json.dumps(msg),
headers=headers)
now = datetime.datetime.utcnow()
time.sleep(0.01)
def main():
stream_definition_post()
transform_definition_post()
event_generator()
if __name__ == "__main__":
sys.exit(main())


@@ -1,974 +0,0 @@
[
{
"xuuid": 9,
"v4": 14,
"time_map": {
"[[[[DT_9]]]]": [
0,
3,
962765
],
"[[[[DT_5]]]]": [
-1,
86398,
713424
],
"[[[[DT_17]]]]": [
-199,
84770,
268272
],
"[[[[DT_23]]]]": [
0,
4,
536102
],
"[[[[DT_16]]]]": [
-163,
42547,
268272
],
"[[[[DT_20]]]]": [
-199,
85752,
268272
],
"[[[[DT_2]]]]": [
-199,
84770,
268272
],
"[[[[DT_15]]]]": [
-199,
84639,
268272
],
"[[[[DT_7]]]]": [
0,
0,
268272
],
"[[[[DT_21]]]]": [
-199,
85943,
268272
],
"[[[[DT_11]]]]": [
0,
4,
268272
],
"[[[[DT_3]]]]": [
-1,
86399,
268272
],
"[[[[DT_8]]]]": [
0,
1,
163638
],
"[[[[DT_14]]]]": [
-199,
84421,
268272
],
"[[[[DT_10]]]]": [
0,
3,
989039
],
"[[[[DT_4]]]]": [
0,
0,
0
],
"[[[[DT_22]]]]": [
-199,
85967,
268272
],
"[[[[DT_1]]]]": [
-199,
84421,
268272
],
"[[[[DT_13]]]]": [
-201,
56142,
268272
],
"[[[[DT_6]]]]": [
0,
0,
47717
],
"[[[[DT_12]]]]": [
0,
4,
463062
],
"[[[[DT_19]]]]": [
-199,
85725,
268272
],
"[[[[DT_18]]]]": [
0,
0,
268272
],
"[[[[DT_0]]]]": [
-1,
46116,
268272
]
},
"uuid": 18,
"v6": 2
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 600,
"instance_type_id": 15,
"bandwidth": {},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[XUUID_1]]]]",
"hostname": "[[[[hostname]]]]",
"state": "error",
"old_state": null,
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 640,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_1]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 61440,
"instance_type": "60 GB Performance",
"vcpus": 16,
"image_meta": {
"container_format": "ovf",
"min_ram": "1024",
"base_image_ref": "[[[[UUID_2]]]]",
"org.openstack__1__os_distro": "com.microsoft.server",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "False",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "windows",
"org.openstack__1__os_version": "2008.2"
},
"architecture": "x64",
"new_task_state": null,
"audit_period_ending": "[[[[DT_3]]]]",
"os_type": "windows",
"instance_flavor_id": "performance2-60"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_3]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"user-admin",
"bofh",
"glance",
"glance:admin"
],
"timestamp": "[[[[DT_4]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_2]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "deleting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 600,
"instance_type_id": 15,
"bandwidth": {},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[XUUID_1]]]]",
"hostname": "[[[[hostname]]]]",
"state": "error",
"old_state": "error",
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 640,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_3]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 61440,
"instance_type": "60 GB Performance",
"vcpus": 16,
"image_meta": {
"container_format": "ovf",
"min_ram": "1024",
"base_image_ref": "[[[[UUID_2]]]]",
"org.openstack__1__os_distro": "com.microsoft.server",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "False",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "windows",
"org.openstack__1__os_version": "2008.2"
},
"architecture": "x64",
"new_task_state": "deleting",
"audit_period_ending": "[[[[DT_3]]]]",
"os_type": "windows",
"instance_flavor_id": "performance2-60"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_4]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"user-admin",
"bofh",
"glance",
"glance:admin"
],
"timestamp": "[[[[DT_6]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_3]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "deleting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 600,
"instance_type_id": 15,
"bandwidth": {},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[XUUID_1]]]]",
"hostname": "[[[[hostname]]]]",
"state": "error",
"old_state": "error",
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 640,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_1]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 61440,
"instance_type": "60 GB Performance",
"vcpus": 16,
"image_meta": {
"container_format": "ovf",
"min_ram": "1024",
"base_image_ref": "[[[[UUID_2]]]]",
"org.openstack__1__os_distro": "com.microsoft.server",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "False",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "windows",
"org.openstack__1__os_version": "2008.2"
},
"architecture": "x64",
"new_task_state": "deleting",
"audit_period_ending": "[[[[DT_7]]]]",
"os_type": "windows",
"instance_flavor_id": "performance2-60"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_5]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"user-admin",
"bofh",
"glance",
"glance:admin",
"admin"
],
"timestamp": "[[[[DT_8]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_4]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.delete.start",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "deleting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 600,
"instance_type_id": 15,
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[XUUID_1]]]]",
"hostname": "[[[[hostname]]]]",
"state": "error",
"launched_at": "[[[[DT_2]]]]",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 640,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_4]]]]:9292/images/[[[[UUID_2]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"memory_mb": 61440,
"instance_type": "60 GB Performance",
"vcpus": 16,
"image_meta": {
"container_format": "ovf",
"min_ram": "1024",
"base_image_ref": "[[[[UUID_2]]]]",
"org.openstack__1__os_distro": "com.microsoft.server",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "False",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "windows",
"org.openstack__1__os_version": "2008.2"
},
"architecture": "x64",
"os_type": "windows",
"instance_flavor_id": "performance2-60"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_6]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"user-admin",
"bofh",
"glance",
"glance:admin",
"admin"
],
"timestamp": "[[[[DT_9]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_5]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.shutdown.start",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "deleting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 600,
"instance_type_id": 15,
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[XUUID_1]]]]",
"hostname": "[[[[hostname]]]]",
"state": "error",
"launched_at": "[[[[DT_2]]]]",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 640,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_4]]]]:9292/images/[[[[UUID_2]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"memory_mb": 61440,
"instance_type": "60 GB Performance",
"vcpus": 16,
"image_meta": {
"container_format": "ovf",
"min_ram": "1024",
"base_image_ref": "[[[[UUID_2]]]]",
"org.openstack__1__os_distro": "com.microsoft.server",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "False",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "windows",
"org.openstack__1__os_version": "2008.2"
},
"architecture": "x64",
"os_type": "windows",
"instance_flavor_id": "performance2-60"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_7]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"user-admin",
"bofh",
"glance",
"glance:admin",
"admin"
],
"timestamp": "[[[[DT_10]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_6]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "deleting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 600,
"instance_type_id": 15,
"bandwidth": {},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[XUUID_1]]]]",
"hostname": "[[[[hostname]]]]",
"state": "error",
"old_state": null,
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 640,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_4]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 61440,
"instance_type": "60 GB Performance",
"vcpus": 16,
"image_meta": {
"container_format": "ovf",
"min_ram": "1024",
"base_image_ref": "[[[[UUID_2]]]]",
"org.openstack__1__os_distro": "com.microsoft.server",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "False",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "windows",
"org.openstack__1__os_version": "2008.2"
},
"architecture": "x64",
"new_task_state": "deleting",
"audit_period_ending": "[[[[DT_11]]]]",
"os_type": "windows",
"instance_flavor_id": "performance2-60"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_8]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"user-admin",
"bofh",
"glance",
"glance:admin",
"admin"
],
"timestamp": "[[[[DT_12]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_7]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "terminate_instance",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"exception": {},
"args": {
"instance": {
"vm_state": "error",
"availability_zone": null,
"terminated_at": null,
"ephemeral_gb": 600,
"instance_type_id": 15,
"user_data": null,
"cleaned": false,
"vm_mode": "hvm",
"deleted_at": null,
"reservation_id": "[[[[reservation_id]]]]",
"id": 346688,
"security_groups": {
"objects": [
{
"deleted_at": null,
"user_id": "[[[[XUUID_1]]]]",
"description": "default",
"deleted": false,
"created_at": "[[[[DT_13]]]]",
"updated_at": null,
"project_id": "[[[[tenant_id]]]]",
"id": 187,
"name": "[[[[display_name]]]]"
}
]
},
"disable_terminate": false,
"root_device_name": "/dev/xvda",
"display_name": "[[[[display_name]]]]",
"uuid": "[[[[UUID_1]]]]",
"default_swap_device": null,
"info_cache": {
"instance_uuid": "[[[[UUID_1]]]]",
"deleted": true,
"created_at": "[[[[DT_14]]]]",
"updated_at": "[[[[DT_15]]]]",
"network_info": [
{
"ovs_interfaceid": null,
"network": {
"bridge": "publicnet",
"label": "public",
"meta": {
"original_id": "[[[[UUID_9]]]]",
"nvp_managed": false
},
"id": "[[[[UUID_10]]]]",
"subnets": [
{
"ips": [
{
"meta": {},
"type": "fixed",
"floating_ips": [],
"version": 6,
"address": "[[[[V6_1]]]]"
}
],
"version": 6,
"meta": {},
"dns": [
{
"meta": {},
"type": "dns",
"version": 4,
"address": "[[[[V4_5]]]]"
},
{
"meta": {},
"type": "dns",
"version": 4,
"address": "[[[[V4_6]]]]"
}
],
"routes": [],
"cidr": "2a00:1a48:7807:101::/64",
"gateway": {
"meta": {},
"type": "gateway",
"version": 6,
"address": "fe80::def"
}
},
{
"ips": [
{
"meta": {},
"type": "fixed",
"floating_ips": [],
"version": 4,
"address": "[[[[V4_0]]]]"
}
],
"version": 4,
"meta": {},
"dns": [
{
"meta": {},
"type": "dns",
"version": 4,
"address": "[[[[V4_5]]]]"
},
{
"meta": {},
"type": "dns",
"version": 4,
"address": "[[[[V4_6]]]]"
}
],
"routes": [],
"cidr": "[[[[V4_7]]]]/24",
"gateway": {
"meta": {},
"type": "gateway",
"version": 4,
"address": "[[[[V4_8]]]]"
}
}
]
},
"devname": "[[[[device_name]]]]",
"qbh_params": null,
"meta": {},
"address": "BC:76:4E:08:43:27",
"type": null,
"id": "[[[[UUID_11]]]]",
"qbg_params": null
},
{
"ovs_interfaceid": null,
"network": {
"bridge": "servicenet",
"label": "private",
"meta": {
"original_id": "[[[[UUID_12]]]]",
"nvp_managed": false
},
"id": "[[[[UUID_13]]]]",
"subnets": [
{
"ips": [
{
"meta": {},
"type": "fixed",
"floating_ips": [],
"version": 4,
"address": "[[[[V4_9]]]]"
}
],
"version": 4,
"meta": {},
"dns": [
{
"meta": {},
"type": "dns",
"version": 4,
"address": "[[[[V4_5]]]]"
},
{
"meta": {},
"type": "dns",
"version": 4,
"address": "[[[[V4_6]]]]"
}
],
"routes": [
{
"interface": null,
"cidr": "[[[[V4_10]]]]/12",
"meta": {},
"gateway": {
"meta": {},
"type": "gateway",
"version": 4,
"address": "[[[[V4_11]]]]"
}
},
{
"interface": null,
"cidr": "[[[[V4_12]]]]/12",
"meta": {},
"gateway": {
"meta": {},
"type": "gateway",
"version": 4,
"address": "[[[[V4_11]]]]"
}
}
],
"cidr": "[[[[V4_13]]]]/20",
"gateway": null
}
]
},
"devname": "[[[[device_name]]]]",
"qbh_params": null,
"meta": {},
"address": "BC:76:4E:08:92:48",
"type": null,
"id": "[[[[UUID_14]]]]",
"qbg_params": null
}
],
"deleted_at": "[[[[DT_16]]]]"
},
"hostname": "[[[[hostname]]]]",
"launched_on": "c-10-21-128-29",
"display_description": "[[[[display_name]]]]",
"key_data": null,
"deleted": false,
"config_drive": "",
"power_state": 0,
"default_ephemeral_device": null,
"progress": 0,
"project_id": "[[[[tenant_id]]]]",
"launched_at": "[[[[DT_17]]]]",
"scheduled_at": "[[[[DT_14]]]]",
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"key_name": null,
"updated_at": "[[[[DT_18]]]]",
"host": "[[[[host]]]]",
"root_gb": 40,
"user_id": "[[[[XUUID_1]]]]",
"system_metadata": {
"instance_type_id": "15",
"image_min_ram": "1024",
"instance_type_vcpu_weight": "10",
"image_cache_in_nova": "True",
"instance_type_ephemeral_gb": "600",
"image_org.openstack__1__os_version": "2008.2",
"image_org.openstack__1__os_distro": "com.microsoft.server",
"image_org.openstack__1__architecture": "x64",
"image_base_image_ref": "[[[[UUID_2]]]]",
"image_os_type": "windows",
"instance_type_root_gb": "40",
"instance_type_name": "60 GB Performance",
"image_image_type": "base",
"instance_type_rxtx_factor": "5000.0",
"image_auto_disk_config": "False",
"instance_type_vcpus": "16",
"image_disk_format": "vhd",
"instance_type_memory_mb": "61440",
"instance_type_swap": "0",
"image_min_disk": "40",
"instance_type_flavorid": "performance2-60",
"image_container_format": "ovf"
},
"task_state": "deleting",
"shutdown_terminate": false,
"cell_name": null,
"ephemeral_key_uuid": null,
"locked": false,
"name": "instance-[[[[UUID_1]]]]",
"created_at": "[[[[DT_14]]]]",
"locked_by": null,
"launch_index": 0,
"memory_mb": 61440,
"vcpus": 16,
"image_ref": "[[[[UUID_2]]]]",
"architecture": "x64",
"auto_disk_config": false,
"os_type": "windows",
"metadata": {}
},
"self": null,
"context": {
"project_name": "[[[[tenant_id]]]]",
"user_id": "[[[[user_id]]]]",
"roles": [
"user-admin",
"bofh",
"glance",
"glance:admin",
"admin"
],
"_read_deleted": "no",
"timestamp": "[[[[DT_5]]]]",
"auth_token": "[[[[XUUID_0]]]]",
"remote_address": "[[[[V4_2]]]]",
"quota_class": null,
"is_admin": true,
"glance_api_servers": null,
"request_id": "req-[[[[UUID_0]]]]",
"instance_lock_checked": false,
"project_id": "[[[[tenant_id]]]]",
"user_name": "[[[[user_id]]]]"
},
"bdms": [
{
"instance_uuid": "[[[[UUID_1]]]]",
"virtual_name": null,
"no_device": null,
"created_at": "[[[[DT_19]]]]",
"snapshot_id": null,
"updated_at": "[[[[DT_20]]]]",
"device_name": "/dev/xvdb",
"deleted": 0,
"volume_size": null,
"volume_id": "[[[[UUID_15]]]]",
"id": 13754,
"deleted_at": null,
"delete_on_termination": false
},
{
"instance_uuid": "[[[[UUID_1]]]]",
"virtual_name": null,
"no_device": null,
"created_at": "[[[[DT_21]]]]",
"snapshot_id": null,
"updated_at": "[[[[DT_22]]]]",
"device_name": "/dev/xvdc",
"deleted": 0,
"volume_size": null,
"volume_id": "[[[[UUID_16]]]]",
"id": 13755,
"deleted_at": null,
"delete_on_termination": false
}
],
"reservations": []
}
},
"priority": "ERROR",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_17]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"user-admin",
"bofh",
"glance",
"glance:admin",
"admin"
],
"timestamp": "[[[[DT_23]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_8]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
}
]


@@ -1,370 +0,0 @@
[
{
"xuuid": 4,
"v4": 2,
"time_map": {
"[[[[DT_9]]]]": [
0,
1,
19109
],
"[[[[DT_5]]]]": [
-1,
86392,
365732
],
"[[[[DT_7]]]]": [
0,
0,
667761
],
"[[[[DT_1]]]]": [
-321,
54605,
667761
],
"[[[[DT_3]]]]": [
-1,
86399,
667761
],
"[[[[DT_8]]]]": [
0,
0,
990505
],
"[[[[DT_4]]]]": [
0,
0,
0
],
"[[[[DT_6]]]]": [
0,
0,
175295
],
"[[[[DT_0]]]]": [
-1,
19714,
667761
],
"[[[[DT_2]]]]": [
-321,
54911,
667761
]
},
"uuid": 7,
"v6": 1
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": null,
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "powering-off",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 0,
"instance_type_id": 5,
"bandwidth": {
"public": {
"bw_in": 537783,
"bw_out": 19189871
}
},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"old_state": "active",
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 160,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_1]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 160,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 4096,
"instance_type": "4GB Standard Instance",
"vcpus": 2,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "rhel",
"org.openstack__1__os_distro": "com.redhat",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "True",
"min_disk": "160",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "6.3"
},
"architecture": null,
"new_task_state": "powering-off",
"audit_period_ending": "[[[[DT_3]]]]",
"os_type": "linux",
"instance_flavor_id": "5"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_3]]]]",
"_context_remote_address": null,
"_context_roles": [],
"timestamp": "[[[[DT_4]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_0]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.power_off.start",
"_context_auth_token": null,
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "powering-off",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 0,
"instance_type_id": 5,
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"launched_at": "[[[[DT_2]]]]",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 160,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_1]]]]:9292/images/[[[[UUID_2]]]]",
"root_gb": 160,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"memory_mb": 4096,
"instance_type": "4GB Standard Instance",
"vcpus": 2,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "rhel",
"org.openstack__1__os_distro": "com.redhat",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "True",
"min_disk": "160",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "6.3"
},
"architecture": null,
"os_type": "linux",
"instance_flavor_id": "5"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_4]]]]",
"_context_remote_address": null,
"_context_roles": [],
"timestamp": "[[[[DT_6]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_1]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": null,
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 0,
"instance_type_id": 5,
"bandwidth": {
"public": {
"bw_in": 537783,
"bw_out": 19189871
}
},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "stopped",
"old_state": "active",
"old_task_state": "powering-off",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 160,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_1]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 160,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 4096,
"instance_type": "4GB Standard Instance",
"vcpus": 2,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "rhel",
"org.openstack__1__os_distro": "com.redhat",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "True",
"min_disk": "160",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "6.3"
},
"architecture": null,
"new_task_state": null,
"audit_period_ending": "[[[[DT_7]]]]",
"os_type": "linux",
"instance_flavor_id": "5"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_5]]]]",
"_context_remote_address": null,
"_context_roles": [],
"timestamp": "[[[[DT_8]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_2]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.power_off.end",
"_context_auth_token": null,
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 0,
"instance_type_id": 5,
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "stopped",
"launched_at": "[[[[DT_2]]]]",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 160,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_1]]]]:9292/images/[[[[UUID_2]]]]",
"root_gb": 160,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"memory_mb": 4096,
"instance_type": "4GB Standard Instance",
"vcpus": 2,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "rhel",
"org.openstack__1__os_distro": "com.redhat",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "True",
"min_disk": "160",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "6.3"
},
"architecture": null,
"os_type": "linux",
"instance_flavor_id": "5"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_6]]]]",
"_context_remote_address": null,
"_context_roles": [],
"timestamp": "[[[[DT_9]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_3]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
}
]

@ -1,598 +0,0 @@
[
{
"xuuid": 7,
"v4": 5,
"time_map": {
"[[[[DT_9]]]]": [
0,
1,
667540
],
"[[[[DT_12]]]]": [
0,
104,
358269
],
"[[[[DT_5]]]]": [
-1,
86399,
584928
],
"[[[[DT_7]]]]": [
0,
0,
838660
],
"[[[[DT_1]]]]": [
-53,
31457,
654695
],
"[[[[DT_3]]]]": [
-1,
86399,
654695
],
"[[[[DT_8]]]]": [
0,
1,
257119
],
"[[[[DT_10]]]]": [
0,
103,
654695
],
"[[[[DT_4]]]]": [
0,
0,
0
],
"[[[[DT_6]]]]": [
0,
0,
654695
],
"[[[[DT_0]]]]": [
-1,
81879,
654695
],
"[[[[DT_2]]]]": [
-53,
31518,
654695
],
"[[[[DT_11]]]]": [
0,
104,
332227
]
},
"uuid": 9,
"v6": 1
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "rebooting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 80,
"instance_type_id": 12,
"bandwidth": {},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"old_state": "active",
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 120,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_1]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 8192,
"instance_type": "8 GB Performance",
"vcpus": 8,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"vm_mode": "hvm",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "fedora",
"org.openstack__1__os_distro": "org.fedoraproject",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "disabled",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "20"
},
"architecture": "x64",
"new_task_state": "rebooting",
"audit_period_ending": "[[[[DT_3]]]]",
"os_type": "linux",
"instance_flavor_id": "performance1-8"
},
"priority": "INFO",
"_context_is_admin": false,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_3]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"checkmate",
"object-store:default",
"compute:default",
"identity:user-admin"
],
"timestamp": "[[[[DT_4]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_1]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "rebooting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 80,
"instance_type_id": 12,
"bandwidth": {
"public": {
"bw_in": 1142550,
"bw_out": 4402404
},
"private": {
"bw_in": 29028,
"bw_out": 15580
}
},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"old_state": "active",
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 120,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_3]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 8192,
"instance_type": "8 GB Performance",
"vcpus": 8,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"vm_mode": "hvm",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "fedora",
"org.openstack__1__os_distro": "org.fedoraproject",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "disabled",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "20"
},
"architecture": "x64",
"new_task_state": "rebooting",
"audit_period_ending": "[[[[DT_6]]]]",
"os_type": "linux",
"instance_flavor_id": "performance1-8"
},
"priority": "INFO",
"_context_is_admin": false,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_4]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"checkmate",
"object-store:default",
"compute:default",
"identity:user-admin"
],
"timestamp": "[[[[DT_7]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_2]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.reboot.start",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "rebooting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 80,
"instance_type_id": 12,
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"launched_at": "[[[[DT_2]]]]",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 120,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_4]]]]:9292/images/[[[[UUID_2]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"memory_mb": 8192,
"instance_type": "8 GB Performance",
"vcpus": 8,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"vm_mode": "hvm",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "fedora",
"org.openstack__1__os_distro": "org.fedoraproject",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "disabled",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "20"
},
"architecture": "x64",
"os_type": "linux",
"instance_flavor_id": "performance1-8"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_5]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"checkmate",
"object-store:default",
"compute:default",
"identity:user-admin",
"admin"
],
"timestamp": "[[[[DT_8]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_3]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "rebooting",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 80,
"instance_type_id": 12,
"bandwidth": {
"public": {
"bw_in": 1142550,
"bw_out": 4402404
},
"private": {
"bw_in": 29028,
"bw_out": 15580
}
},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"old_state": null,
"old_task_state": null,
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 120,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_4]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 8192,
"instance_type": "8 GB Performance",
"vcpus": 8,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"vm_mode": "hvm",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "fedora",
"org.openstack__1__os_distro": "org.fedoraproject",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "disabled",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "20"
},
"architecture": "x64",
"new_task_state": "rebooting",
"audit_period_ending": "[[[[DT_6]]]]",
"os_type": "linux",
"instance_flavor_id": "performance1-8"
},
"priority": "INFO",
"_context_is_admin": false,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_6]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"checkmate",
"object-store:default",
"compute:default",
"identity:user-admin",
"admin"
],
"timestamp": "[[[[DT_9]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_4]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.update",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 80,
"instance_type_id": 12,
"bandwidth": {
"public": {
"bw_in": 1142550,
"bw_out": 4402404
},
"private": {
"bw_in": 29028,
"bw_out": 15580
}
},
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"old_state": "active",
"old_task_state": "rebooting",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 120,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_4]]]]:9292/images/[[[[UUID_2]]]]",
"audit_period_beginning": "[[[[DT_0]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"launched_at": "[[[[DT_2]]]]",
"memory_mb": 8192,
"instance_type": "8 GB Performance",
"vcpus": 8,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"vm_mode": "hvm",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "fedora",
"org.openstack__1__os_distro": "org.fedoraproject",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "disabled",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "20"
},
"architecture": "x64",
"new_task_state": null,
"audit_period_ending": "[[[[DT_10]]]]",
"os_type": "linux",
"instance_flavor_id": "performance1-8"
},
"priority": "INFO",
"_context_is_admin": false,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_7]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"checkmate",
"object-store:default",
"compute:default",
"identity:user-admin",
"admin"
],
"timestamp": "[[[[DT_11]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_5]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
},
{
"_context_request_id": "req-[[[[UUID_0]]]]",
"_context_quota_class": null,
"event_type": "compute.instance.reboot.end",
"_context_auth_token": "[[[[XUUID_0]]]]",
"_context_user_id": "[[[[user_id]]]]",
"payload": {
"state_description": "",
"availability_zone": null,
"terminated_at": "",
"ephemeral_gb": 80,
"instance_type_id": 12,
"deleted_at": "",
"reservation_id": "[[[[reservation_id]]]]",
"instance_id": "[[[[UUID_1]]]]",
"user_id": "[[[[user_id]]]]",
"hostname": "[[[[hostname]]]]",
"state": "active",
"launched_at": "[[[[DT_2]]]]",
"metadata": {},
"node": "[[[[node]]]]",
"ramdisk_id": "",
"access_ip_v6": "[[[[V6_0]]]]",
"disk_gb": 120,
"access_ip_v4": "[[[[V4_0]]]]",
"kernel_id": "",
"host": "[[[[host]]]]",
"display_name": "[[[[display_name]]]]",
"image_ref_url": "http://[[[[V4_4]]]]:9292/images/[[[[UUID_2]]]]",
"root_gb": 40,
"tenant_id": "[[[[tenant_id]]]]",
"created_at": "[[[[DT_1]]]]",
"memory_mb": 8192,
"instance_type": "8 GB Performance",
"vcpus": 8,
"image_meta": {
"container_format": "ovf",
"min_ram": "512",
"vm_mode": "hvm",
"base_image_ref": "[[[[UUID_2]]]]",
"os_distro": "fedora",
"org.openstack__1__os_distro": "org.fedoraproject",
"image_type": "base",
"disk_format": "vhd",
"org.openstack__1__architecture": "x64",
"auto_disk_config": "disabled",
"min_disk": "40",
"cache_in_nova": "True",
"os_type": "linux",
"org.openstack__1__os_version": "20"
},
"architecture": "x64",
"os_type": "linux",
"instance_flavor_id": "performance1-8"
},
"priority": "INFO",
"_context_is_admin": true,
"_context_user": "[[[[user_id]]]]",
"publisher_id": "[[[[publisher_id]]]]",
"message_id": "[[[[UUID_8]]]]",
"_context_remote_address": "[[[[V4_2]]]]",
"_context_roles": [
"checkmate",
"object-store:default",
"compute:default",
"identity:user-admin",
"admin"
],
"timestamp": "[[[[DT_12]]]]",
"_context_timestamp": "[[[[DT_5]]]]",
"_unique_id": "[[[[XUUID_6]]]]",
"_context_glance_api_servers": null,
"_context_project_name": "[[[[tenant_id]]]]",
"_context_read_deleted": "no",
"_context_tenant": "[[[[tenant_id]]]]",
"_context_instance_lock_checked": false,
"_context_project_id": "[[[[tenant_id]]]]",
"_context_user_name": "[[[[user_id]]]]"
}
]
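
The `[[[[DT_n]]]]` placeholders in these fixtures pair with the `time_map` block at the top of the file; each three-element value matches the `(days, seconds, microseconds)` normalization of a Python `datetime.timedelta` offset from a shared base timestamp. A minimal sketch of how such offsets could be resolved back into concrete timestamps (the base time below is an arbitrary assumption, not part of the fixture format):

```python
import datetime


def render_time_tokens(time_map, base):
    """Resolve [[[[DT_n]]]] tokens to ISO-8601 timestamps.

    Each time_map value is treated as the (days, seconds, microseconds)
    triple of a datetime.timedelta added to a shared base time.
    """
    return {
        token: (base + datetime.timedelta(days=d, seconds=s,
                                          microseconds=us)).isoformat()
        for token, (d, s, us) in time_map.items()
    }


# Hypothetical base time, chosen only for illustration.
base = datetime.datetime(2017, 6, 30, 8, 30, 50)
resolved = render_time_tokens(
    {"[[[[DT_4]]]]": [0, 0, 0], "[[[[DT_5]]]]": [-1, 86399, 584928]},
    base,
)
```

With the offsets above, `[[[[DT_5]]]]` resolves to roughly 0.4 seconds before the base time, which matches how negative timedeltas normalize.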

@ -1,125 +0,0 @@
{"units": "ms", "VM Create time": "9570.2584153"}
{"units": "ms", "VM Create time": "446.770851046"}
{"units": "ms", "VM Create time": "5991.62451155"}
{"units": "ms", "VM Create time": "846.389115802"}
{"units": "ms", "VM Create time": "3084.12085218"}
{"units": "ms", "VM Create time": "1196.22683522"}
{"units": "ms", "VM Create time": "1334.13175423"}
{"units": "ms", "VM Create time": "9172.41568685"}
{"units": "ms", "VM Create time": "7996.15042076"}
{"units": "ms", "VM Create time": "6488.59957496"}
{"units": "ms", "VM Create time": "465.603776426"}
{"units": "ms", "VM Create time": "634.792828064"}
{"units": "ms", "VM Create time": "2137.55724679"}
{"units": "ms", "VM Create time": "2254.05914295"}
{"units": "ms", "VM Create time": "7998.16112413"}
{"units": "ms", "VM Create time": "7221.06290044"}
{"units": "ms", "VM Create time": "9076.93147098"}
{"units": "ms", "VM Create time": "7531.36895997"}
{"units": "ms", "VM Create time": "2806.79144593"}
{"units": "ms", "VM Create time": "7127.11748165"}
{"units": "ms", "VM Create time": "1558.39088299"}
{"units": "ms", "VM Create time": "8088.94655858"}
{"units": "ms", "VM Create time": "2881.88489074"}
{"units": "ms", "VM Create time": "5335.79029757"}
{"units": "ms", "VM Create time": "5129.36875123"}
{"units": "ms", "VM Create time": "3965.2004613"}
{"units": "ms", "VM Create time": "6715.42062931"}
{"units": "ms", "VM Create time": "1786.16123109"}
{"units": "ms", "VM Create time": "691.167466556"}
{"units": "ms", "VM Create time": "6707.23425229"}
{"units": "ms", "VM Create time": "6673.11348566"}
{"units": "ms", "VM Create time": "2312.18226096"}
{"units": "ms", "VM Create time": "7011.43478573"}
{"units": "ms", "VM Create time": "1186.25413352"}
{"units": "ms", "VM Create time": "5242.99701072"}
{"units": "ms", "VM Create time": "2994.34714079"}
{"units": "ms", "VM Create time": "4674.54921382"}
{"units": "ms", "VM Create time": "4847.97199783"}
{"units": "ms", "VM Create time": "3944.87138962"}
{"units": "ms", "VM Create time": "3690.30042863"}
{"units": "ms", "VM Create time": "1658.47695197"}
{"units": "ms", "VM Create time": "7429.74317636"}
{"units": "ms", "VM Create time": "5377.1862296"}
{"units": "ms", "VM Create time": "5888.2469715"}
{"units": "ms", "VM Create time": "1834.29633821"}
{"units": "ms", "VM Create time": "2580.14580011"}
{"units": "ms", "VM Create time": "9178.4218462"}
{"units": "ms", "VM Create time": "6342.36399788"}
{"units": "ms", "VM Create time": "9439.4370021"}
{"units": "ms", "VM Create time": "9454.2850887"}
{"units": "ms", "VM Create time": "4213.16152967"}
{"units": "ms", "VM Create time": "8052.45402528"}
{"units": "ms", "VM Create time": "6996.11911189"}
{"units": "ms", "VM Create time": "1539.08936682"}
{"units": "ms", "VM Create time": "2288.26174641"}
{"units": "ms", "VM Create time": "3474.45333147"}
{"units": "ms", "VM Create time": "7965.2900647"}
{"units": "ms", "VM Create time": "5507.09427158"}
{"units": "ms", "VM Create time": "6577.37130373"}
{"units": "ms", "VM Create time": "1063.49389062"}
{"units": "ms", "VM Create time": "7912.37447715"}
{"units": "ms", "VM Create time": "4572.77130949"}
{"units": "ms", "VM Create time": "8468.58886871"}
{"units": "ms", "VM Create time": "6263.13412453"}
{"units": "ms", "VM Create time": "4062.29104093"}
{"units": "ms", "VM Create time": "6122.22376788"}
{"units": "ms", "VM Create time": "8893.74825227"}
{"units": "ms", "VM Create time": "1084.78759899"}
{"units": "ms", "VM Create time": "5966.45439945"}
{"units": "ms", "VM Create time": "2951.03694691"}
{"units": "ms", "VM Create time": "9181.81285027"}
{"units": "ms", "VM Create time": "3075.10492721"}
{"units": "ms", "VM Create time": "7769.44625139"}
{"units": "ms", "VM Create time": "6234.24905493"}
{"units": "ms", "VM Create time": "5604.81166279"}
{"units": "ms", "VM Create time": "758.634256483"}
{"units": "ms", "VM Create time": "1037.91905026"}
{"units": "ms", "VM Create time": "2173.02199252"}
{"units": "ms", "VM Create time": "6298.34091503"}
{"units": "ms", "VM Create time": "571.821588484"}
{"units": "ms", "VM Create time": "5582.14586742"}
{"units": "ms", "VM Create time": "2312.01345747"}
{"units": "ms", "VM Create time": "7888.2290117"}
{"units": "ms", "VM Create time": "7319.17524024"}
{"units": "ms", "VM Create time": "7931.72647678"}
{"units": "ms", "VM Create time": "1311.824863"}
{"units": "ms", "VM Create time": "8645.06837416"}
{"units": "ms", "VM Create time": "1574.12084831"}
{"units": "ms", "VM Create time": "4879.29850065"}
{"units": "ms", "VM Create time": "2519.33964549"}
{"units": "ms", "VM Create time": "2840.99167157"}
{"units": "ms", "VM Create time": "8655.3201027"}
{"units": "ms", "VM Create time": "5258.79519678"}
{"units": "ms", "VM Create time": "2854.24140494"}
{"units": "ms", "VM Create time": "2281.93030935"}
{"units": "ms", "VM Create time": "4143.40529721"}
{"units": "ms", "VM Create time": "4697.61869996"}
{"units": "ms", "VM Create time": "7996.03172193"}
{"units": "ms", "VM Create time": "8128.96168686"}
{"units": "ms", "VM Create time": "1170.41907428"}
{"units": "ms", "VM Create time": "3234.8676953"}
{"units": "ms", "VM Create time": "6654.80764124"}
{"units": "ms", "VM Create time": "5040.62706515"}
{"units": "ms", "VM Create time": "9637.02143233"}
{"units": "ms", "VM Create time": "199.802866542"}
{"units": "ms", "VM Create time": "6729.32393547"}
{"units": "ms", "VM Create time": "3278.56058368"}
{"units": "ms", "VM Create time": "5936.73321557"}
{"units": "ms", "VM Create time": "9237.85519529"}
{"units": "ms", "VM Create time": "9451.60339974"}
{"units": "ms", "VM Create time": "6524.69437235"}
{"units": "ms", "VM Create time": "5196.50451815"}
{"units": "ms", "VM Create time": "8835.67720578"}
{"units": "ms", "VM Create time": "8546.71357438"}
{"units": "ms", "VM Create time": "5593.67992448"}
{"units": "ms", "VM Create time": "7436.64709304"}
{"units": "ms", "VM Create time": "6975.60195013"}
{"units": "ms", "VM Create time": "7580.80759109"}
{"units": "ms", "VM Create time": "6017.48688954"}
{"units": "ms", "VM Create time": "2854.47447994"}
{"units": "ms", "VM Create time": "2844.39668817"}
{"units": "ms", "VM Create time": "4427.58528007"}
{"units": "ms", "VM Create time": "3481.6964079"}
{"units": "ms", "VM Create time": "3711.15872333"}
{"units": "ms", "VM Create time": "8741.91833442"}
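
The measurement file above is plain JSON-lines, one `{"units": "ms", "VM Create time": ...}` object per line, so it can be summarized in a few lines of Python; a small sketch:

```python
import json


def summarize(lines):
    """Parse JSON-lines VM-create measurements; return (count, min, max, mean)."""
    values = [float(json.loads(line)["VM Create time"])
              for line in lines if line.strip()]
    return len(values), min(values), max(values), sum(values) / len(values)


# First two measurements from the file above.
sample = [
    '{"units": "ms", "VM Create time": "9570.2584153"}',
    '{"units": "ms", "VM Create time": "446.770851046"}',
]
count, lo, hi, mean = summarize(sample)
```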

@ -1,11 +0,0 @@
<?php
// Append each POSTed event to data.txt, one line per event.
// FILE_APPEND | LOCK_EX keeps concurrent writers from clobbering
// each other, which the read-modify-write pattern could not.
$event = file_get_contents('php://input');
file_put_contents('data.txt', $event . "\n", FILE_APPEND | LOCK_EX);
?>

@ -1,70 +0,0 @@
<html>
<head>
<!-- D3.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.6/d3.min.js"></script>
<!-- jQuery -->
<script src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
<!-- Plotly.js -->
<script src="https://d14fo0winaifog.cloudfront.net/plotly-basic.js"></script>
<script>
$(document).ready(function(){
setInterval(function(){
$("#events").load('data.txt');
}, 3000);
});
</script>
<style type="text/css">#graph {
-webkit-filter: grayscale(100%);
}</style>
</head>
<body>
<div id="graph"></div>
<div id="events"></div>
<script>
setInterval(function(){
    var txtFile = new XMLHttpRequest();
    txtFile.open("GET", "data.txt", true);
    txtFile.onreadystatechange = function()
    {
        if (txtFile.readyState === 4 && txtFile.status === 200) { // response ready
            var x_axis = [];
            var y_axis = [];
            var lines = txtFile.responseText.split("\n");
            for (var i = 0; i < lines.length; i++) {
                if (!lines[i]) continue; // skip the trailing blank line
                y_axis.push(JSON.parse(lines[i])["VM Create time"]);
                x_axis.push(i + 1);
            }
            var trace1 = {
                x: x_axis,
                y: y_axis,
                type: "scatter"
            };
            var layout = {
                xaxis: {
                    title: "Sample Number"
                },
                yaxis: {
                    title: "VM Creation Time (ms)"
                },
                showlegend: false
            };
            // Redraw only once the data has actually arrived; plotting
            // before the async response returns draws an empty chart.
            Plotly.plot("graph", [trace1], layout);
        }
    };
    txtFile.send(null);
}, 3000);
</script>
</body>
</html>

@ -1,18 +0,0 @@
---
- event_type: compute.instance.*
traits: &instance_traits
tenant_id:
fields: payload.tenant_id
service:
fields: publisher_id
plugin: split
- event_type: compute.instance.exists
traits:
<<: *instance_traits
audit_period_beginning:
type: datetime
fields: payload.audit_period_beginning
audit_period_ending:
type: datetime
fields: payload.audit_period_ending
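
The event-definition YAML above maps notification fields to traits in the style of Ceilometer's event definitions; `plugin: split` presumably derives the service name from `publisher_id`. A rough, simplified Python sketch of that mapping (taking the dotted prefix of `publisher_id` as the service is an assumption, and the real machinery supports far more than this):

```python
def extract_traits(event):
    """Simplified sketch of the YAML trait mapping above.

    - tenant_id comes from payload.tenant_id
    - service comes from publisher_id via a 'split' step, assumed
      here to mean the part before the first '.'
    - compute.instance.exists additionally carries the audit period
    """
    traits = {
        "tenant_id": event["payload"]["tenant_id"],
        "service": event["publisher_id"].split(".", 1)[0],
    }
    if event["event_type"] == "compute.instance.exists":
        traits["audit_period_beginning"] = event["payload"]["audit_period_beginning"]
        traits["audit_period_ending"] = event["payload"]["audit_period_ending"]
    return traits


# Hypothetical notification shaped like the fixtures in this commit.
event = {
    "event_type": "compute.instance.exists",
    "publisher_id": "compute.node-1",
    "payload": {
        "tenant_id": "t-1",
        "audit_period_beginning": "2017-06-30T00:00:00",
        "audit_period_ending": "2017-07-01T00:00:00",
    },
}
traits = extract_traits(event)
```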

@ -1,51 +0,0 @@
import datetime
import json

import requests

import kafka.client
import kafka.consumer

# PHP collector that appends each measurement to data.txt.
address = "http://192.168.10.4:8765/events.php"

kc = kafka.client.KafkaClient("192.168.10.4:9092")
consumer = kafka.consumer.SimpleConsumer(kc,
                                         "Foo",
                                         "stream-notifications",
                                         auto_commit=True)

for raw_event in consumer:
    event = json.loads(raw_event.message.value)
    # Index the timestamps of this transaction by event type.
    times = {}
    for e in event['events']:
        times[e['event_type']] = e['timestamp']
    try:
        time_format = '%Y-%m-%dT%H:%M:%S.%f'
        start = datetime.datetime.strptime(
            times['compute.instance.create.start'], time_format)
        end = datetime.datetime.strptime(
            times['compute.instance.create.end'], time_format)
        # Report the create duration in milliseconds, matching the
        # 'units' field sent below.
        duration = (end - start).total_seconds() * 1000
    except (KeyError, ValueError):
        # Skip transactions without a complete start/end pair.
        continue
    body = {'VM Create time': '{}'.format(duration),
            'units': 'ms'}
    headers = {'content-type': 'application/json'}
    try:
        requests.post(url=address,
                      data=json.dumps(body),
                      headers=headers)
    except requests.RequestException:
        print("unable to post")

@ -1,2 +0,0 @@
#!/bin/bash
php -S 0.0.0.0:8765 -t files/server

88
devstack/Vagrantfile vendored Normal file
@ -0,0 +1,88 @@
require 'vagrant.rb'
Vagrant.configure(2) do |config|
config.cache.scope = :box if Vagrant.has_plugin?("vagrant-cachier")
config.timezone.value = :host if Vagrant.has_plugin?('vagrant-timezone')
if Vagrant.has_plugin?('vagrant-proxyconf')
config.proxy.http = ENV['http_proxy'] if ENV['http_proxy']
config.proxy.https = ENV['https_proxy'] if ENV['https_proxy']
if ENV['no_proxy']
local_no_proxy = ",192.168.10.6,10.0.2.15"
config.proxy.no_proxy = ENV['no_proxy'] + local_no_proxy
end
end
config.ssh.forward_agent = true
config.vm.hostname = "devstack"
config.vm.box = "bento/ubuntu-16.04"
config.vm.network "private_network", ip: "192.168.10.6"
config.vm.synced_folder "~/", "/vagrant_home"
config.vm.provider "virtualbox" do |vb|
vb.gui = false
vb.memory = "12800"
vb.cpus = 4
end
config.vm.provision "shell", privileged: false, inline: <<-SHELL
sudo apt-get -y install git
if [ $http_proxy ]; then
git config --global url.https://git.openstack.org/.insteadOf git://git.openstack.org/
sudo git config --global url.https://git.openstack.org/.insteadOf git://git.openstack.org/
protocol=`echo $http_proxy | awk -F: '{print $1}'`
host=`echo $http_proxy | awk -F/ '{print $3}' | awk -F: '{print $1}'`
port=`echo $http_proxy | awk -F/ '{print $3}' | awk -F: '{print $2}'`
echo "<settings>
<proxies>
<proxy>
<id>$host</id>
<active>true</active>
<protocol>$protocol</protocol>
<host>$host</host>
<port>$port</port>
</proxy>
</proxies>
</settings>" > ./maven_proxy_settings.xml
mkdir ~/.m2
cp ./maven_proxy_settings.xml ~/.m2/settings.xml
sudo mkdir /root/.m2
sudo cp ./maven_proxy_settings.xml /root/.m2/settings.xml
fi
git clone https://git.openstack.org/openstack-dev/devstack --branch master --depth 1
cd devstack
echo '[[local|localrc]]
GIT_DEPTH=1
DEST=/opt/stack
USE_VENV=False
SERVICE_HOST=192.168.10.6
HOST_IP=192.168.10.6
DATABASE_HOST=192.168.10.6
MYSQL_HOST=192.168.10.6
HOST_IP_IFACE=eth1
MYSQL_PASSWORD=secretmysql
DATABASE_PASSWORD=secretdatabase
RABBIT_PASSWORD=secretrabbit
ADMIN_PASSWORD=secretadmin
SERVICE_PASSWORD=secretservice
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs
LOG_COLOR=False
disable_all_services
enable_service zookeeper rabbit mysql key tempest horizon
' > local.conf
./stack.sh
SHELL
end

17
devstack/plugin.sh Normal file
@ -0,0 +1,17 @@
#!/bin/bash
#
# Copyright 2016 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

@ -0,0 +1,28 @@
#
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
# (C) Copyright 2016-2017 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Give all services some time to finish starting
sleep 6
function load_devstack_utilities {
source $BASE/new/devstack/stackrc
source $BASE/new/devstack/functions
source $BASE/new/devstack/openrc admin admin
# print OS_ variables
env | grep OS_
}

16
devstack/settings Normal file
@ -0,0 +1,16 @@
#
# Copyright 2017 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


@ -0,0 +1 @@
"{}"


@ -0,0 +1,114 @@
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of
# the default INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# The name of a logging configuration file. This file is appended to
# any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation.
# Note that when logging configuration files are used then all logging
# configuration is set in the configuration file and other logging
# configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default
# is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths.
# This option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is
# moved or removed this handler will open a new log file with
# specified path instantaneously. It makes sense only if log_file
# option is specified and Linux platform is used. This option is
# ignored if log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and
# will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false
# Enable journald for logging. If running in a systemd environment you
# may wish to enable journal support. Doing so will use the journal
# native protocol which includes structured metadata in addition to
# log messages. This option is ignored if log_config_append is set.
# (boolean value)
#use_journal = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if
# log_config_append is set. (boolean value)
#use_stderr = false
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined.
# (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the
# message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string
# value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is
# ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message.
# (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message.
# (string value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Interval, number of seconds, of log rate limiting. (integer value)
#rate_limit_interval = 0
# Maximum number of logged messages per rate_limit_interval. (integer
# value)
#rate_limit_burst = 0
# Log level name used by rate limiting: CRITICAL, ERROR, INFO,
# WARNING, DEBUG or empty string. Logs with level greater or equal to
# rate_limit_except_level are not filtered. An empty string means that
# all levels are filtered. (string value)
#rate_limit_except_level = CRITICAL
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


@ -0,0 +1,7 @@
monasca_events_api
==================
.. toctree::
   :maxdepth: 4

   monasca_events_api


@ -0,0 +1,8 @@
monasca\_events\_api\.conf package
==================================
.. automodule:: monasca_events_api.conf
   :members:
   :undoc-members:
   :show-inheritance:


@ -0,0 +1,35 @@
monasca\_events\_api package
============================
.. automodule:: monasca_events_api
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::

   monasca_events_api.conf

Submodules
----------

monasca\_events\_api\.config module
-----------------------------------

.. automodule:: monasca_events_api.config
   :members:
   :undoc-members:
   :show-inheritance:

monasca\_events\_api\.version module
------------------------------------

.. automodule:: monasca_events_api.version
   :members:
   :undoc-members:
   :show-inheritance:

doc/source/conf.py Normal file

@ -0,0 +1,282 @@
# -*- coding: utf-8 -*-
#
# monasca-events-api documentation build configuration file, created by
# sphinx-quickstart on Wed Nov 18 12:02:03 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
from monasca_events_api.version import version_info
sys.path = [
    os.path.abspath('../..'),
    os.path.abspath('../../bin')
] + sys.path
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.6'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.coverage',
    'sphinx.ext.ifconfig',
    'sphinx.ext.graphviz',
    'sphinx.ext.autodoc',
    'sphinx.ext.viewcode',
    'oslo_config.sphinxconfiggen',
    'oslo_config.sphinxext',
    'openstackdocstheme',
]
# General information about the project
repository_name = u'openstack/monasca-events-api'
project = u'Monasca Events Dev Docs'
version = version_info.version_string()
release = version_info.release_string()
bug_project = u'monasca-events-api'
bug_tag = u'doc'
copyright = u'2017-present, OpenStack Foundation'
author = u'OpenStack Foundation'
# sample config
config_generator_config_file = [
    ('config-generator/monasca-events-api.conf', '_static/events-api')
]
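The `config_generator_config_file` pairing above points `oslo_config.sphinxconfiggen` at a generator config that is not visible in this diff. A minimal sketch of what `config-generator/monasca-events-api.conf` typically contains is shown below; the output path and the exact namespace list are assumptions, not taken from this change:

```ini
[DEFAULT]
# Where oslo-config-generator writes the rendered sample configuration
output_file = etc/monasca/events-api.conf.sample
wrap_width = 79
# Option namespaces to include (assumed; adjust to the project's entry points)
namespace = monasca_events_api
namespace = oslo.log
```

With such a file in place, the sample config would be rendered via `oslo-config-generator --config-file config-generator/monasca-events-api.conf`, and the Sphinx extension would produce `_static/events-api.conf.sample` during the docs build.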
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
source_suffix = '.rst'
# The encoding of source files.
source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = [
    'common',
    'doc',
    'documentation',
    'etc',
    'java'
]
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['monasca_events_api.', 'monasca']
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# doc. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
html_use_index = True
# If false, no module index is generated.
html_use_modindex = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'monasca-events-apidoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',

    # Latex figure (float) alignment
    # 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'monasca-events-api.tex', u'monasca-events-api Documentation',
     u'OpenStack Foundation \\textless{}monasca@lists.launchpad.net\\textgreater{}',
     'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'monasca-events-api', u'monasca-events-api Documentation',
     [author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'monasca-events-api', u'monasca-events-api Documentation',
     author, 'monasca-events-api', 'Rest-API to collect events from your cloud.',
     'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}

doc/source/index.rst Normal file

@ -0,0 +1,48 @@
..
   monasca-events-api documentation master file
   Copyright 2017 FUJITSU LIMITED

   Licensed under the Apache License, Version 2.0 (the "License"); you may
   not use this file except in compliance with the License. You may obtain
   a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
   WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
   License for the specific language governing permissions and limitations
   under the License.
==============================================
Welcome to monasca-events-api's documentation!
==============================================
monasca-events-api is a RESTful API server acting as a gateway for
events collected from the monitored cloud.
The developer documentation provided here is continually kept up-to-date
based on the latest code, and may not represent the state of the project at
any specific prior release.
.. note:: This is documentation for developers; if you are looking for more
   general documentation, including API, install, operator, and user
   guides, see `docs.openstack.org`_

.. _`docs.openstack.org`: http://docs.openstack.org
.. toctree::
   :maxdepth: 2

   user/index
   admin/index
   install/index
   configuration/index
   cli/index
   contributor/index

.. toctree::
   :maxdepth: 1

   glossary


@ -1,92 +0,0 @@
[DEFAULT]
# Logging: make sure that the user under whom the server runs has permission
# to write to the directory.
log_file = events_api.log
log_dir = .
debug = True
# Identifies the region that the Monasca API is running in.
region = useast
# Dispatchers to be loaded to serve restful APIs
[dispatcher]
versions = monasca_events_api.v2.versions:Versions
stream_definitions = monasca_events_api.v2.stream_definitions:StreamDefinitions
events = monasca_events_api.v2.events:Events
transforms = monasca_events_api.v2.transforms:Transforms
[security]
# The roles that are allowed full access to the API.
default_authorized_roles = user, domainuser, domainadmin, monasca-user
# The roles that are allowed to only POST metrics to the API. This role would be used by the Monasca Agent.
agent_authorized_roles = monasca-agent
# The roles that are allowed to access the API on behalf of another tenant.
# For example, a service can POST metrics to another tenant if they are a member of the "delegate" role.
delegate_authorized_roles = admin
[messaging]
# The message queue driver to use
driver = monasca_events_api.common.messaging.kafka_publisher:KafkaPublisher
[repositories]
# The driver to use for the stream definitions repository
streams = monasca_events_api.common.repositories.mysql.streams_repository:StreamsRepository
# The driver to use for the events repository
events = monasca_events_api.common.repositories.mysql.events_repository:EventsRepository
# The driver to use for the transforms repository
transforms = monasca_events_api.common.repositories.mysql.transforms_repository:TransformsRepository
[dispatcher]
driver = v2_reference
[kafka]
# The endpoint to the kafka server
uri = 192.168.10.4:9092
# The topic that events will be published to
events_topic = transformed-events
# consumer group name
group = api
# how many times to try when error occurs
max_retry = 1
# wait time between tries when kafka goes down
wait_time = 1
# use synchronous or asynchronous connection to kafka
async = False
# send messages in bulk or send messages one by one.
compact = False
# How many partitions this connection should listen for messages on; this
# parameter is for reading from kafka. If the client should listen on
# multiple partitions, for example partitions 1 and 3, then the
# configuration should look like the following:
# partitions = 1
# partitions = 3
# The default is to listen on partition 0.
partitions = 0
[mysql]
database_name = mon
hostname = 192.168.10.4
username = monapi
password = password
[keystone_authtoken]
identity_uri = http://192.168.10.5:35357
auth_uri = http://192.168.10.5:5000
admin_password = admin
admin_user = admin
admin_tenant_name = admin
cafile =
certfile =
keyfile =
insecure = false


@ -1,22 +0,0 @@
[DEFAULT]
name = monasca_events_api
[pipeline:main]
# Add validator in the pipeline so the metrics messages can be validated.
pipeline = auth keystonecontext api
[app:api]
paste.app_factory = monasca_events_api.api.server:launch
[filter:auth]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
[filter:keystonecontext]
paste.filter_factory = monasca_events_api.middleware.keystone_context_filter:filter_factory
[server:main]
use = egg:gunicorn#main
host = 127.0.0.1
port = 8082
workers = 1
proc_name = monasca_events_api


@ -1,197 +0,0 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import requests
import yaml
from monascaclient import ksclient
# events_url = "http://127.0.0.1:8072"
events_url = "http://192.168.10.4:8072"
def token():
    keystone = {
        'username': 'mini-mon',
        'password': 'password',
        'project': 'test',
        'auth_url': 'http://192.168.10.5:35357/v3'
    }
    ks_client = ksclient.KSClient(**keystone)
    return ks_client.token


headers = {
    'X-Auth-User': 'mini-mon',
    'X-Auth-Key': 'password',
    'X-Auth-Token': token(),
    'Accept': 'application/json',
    'User-Agent': 'python-monascaclient',
    'Content-Type': 'application/json'}


def test_events_get():
    body = {}
    response = requests.get(url=events_url + "/v2.0/events",
                            data=json.dumps(body),
                            headers=headers)
    json_data = json.loads(response.text)
    event_id = json_data[0]['id']
    assert response.status_code == 200
    response = requests.get(
        url=events_url + "/v2.0/events/{}".format(event_id),
        data=json.dumps(body),
        headers=headers)
    json_data = json.loads(response.text)
    new_event_id = json_data[0]['id']
    assert response.status_code == 200
    assert event_id == new_event_id
    print("GET /events success")


def test_events_get_all():
    print("Test GET /events")
    body = {}
    response = requests.get(url=events_url + "/v2.0/events",
                            data=json.dumps(body),
                            headers=headers)
    assert response.status_code == 200
    print("GET /events success")


def test_stream_definition_post():
    print("Test POST /stream-definitions")
    body = {}
    notif_resp = requests.get(
        url="http://192.168.10.4:8070/v2.0/notification-methods",
        data=json.dumps(body), headers=headers)
    notif_dict = json.loads(notif_resp.text)
    action_id = str(notif_dict['elements'][0]['id'])

    body = {"fire_criteria": [{"event_type": "compute.instance.create.start"},
                              {"event_type": "compute.instance.create.end"}],
            "description": "provisioning duration",
            "name": "func_test_stream_def",
            "group_by": ["instance_id"],
            "expiration": 3000,
            "select": [{"traits": {"tenant_id": "406904"},
                        "event_type": "compute.instance.create.*"}],
            "fire_actions": [action_id],
            "expire_actions": [action_id]}
    response = requests.post(
        url=events_url + "/v2.0/stream-definitions",
        data=json.dumps(body),
        headers=headers)
    assert response.status_code == 201
    print("POST /stream-definitions success")


def test_stream_definition_get():
    print("Test GET /stream-definitions")
    body = {}
    response = requests.get(
        url=events_url + "/v2.0/stream-definitions/",
        data=json.dumps(body),
        headers=headers)
    assert response.status_code == 200
    print("GET /stream-definitions success")


def test_stream_definition_delete():
    print("Test DELETE /stream-definitions")
    body = {}
    stream_resp = requests.get(
        url=events_url + "/v2.0/stream-definitions/",
        data=json.dumps(body),
        headers=headers)
    stream_dict = json.loads(stream_resp.text)
    stream_id = str(stream_dict['elements'][0]['id'])
    response = requests.delete(
        url=events_url + "/v2.0/stream-definitions/{}".format(
            stream_id),
        data=json.dumps(body),
        headers=headers)
    assert response.status_code == 204
    print("DELETE /stream-definitions success")


def test_transforms():
    print("Test POST /transforms")
    # Open example yaml file and post to DB
    fh = open('transform_definitions.yaml', 'r')
    specification_data = yaml.load(fh)
    body = {
        "name": 'func test',
        "description": 'an example definition',
        "specification": str(specification_data)
    }
    response = requests.post(
        url=events_url + "/v2.0/transforms",
        data=json.dumps(body),
        headers=headers)
    assert response.status_code == 200
    print("POST /transforms success")

    print("Test GET /transforms")
    body = {}
    response = requests.get(
        url=events_url + "/v2.0/transforms",
        data=json.dumps(body),
        headers=headers)
    assert response.status_code == 200
    print("GET /transforms success")

    print("Test DELETE /transforms")
    body = {}
    response = requests.get(
        url=events_url + "/v2.0/transforms",
        data=json.dumps(body),
        headers=headers)
    transform_dict = json.loads(response.text)
    transform_dict_id = transform_dict['elements'][0]['id']
    response = requests.delete(
        url=events_url + "/v2.0/transforms/{}".format(transform_dict_id),
        data=json.dumps(body),
        headers=headers)
    assert response.status_code == 204
    print("DELETE /transforms success")


test_stream_definition_post()
test_stream_definition_get()
test_stream_definition_delete()
test_events_get_all()
test_transforms()


@ -1,63 +0,0 @@
---
- event_type: compute.instance.*
  traits: &instance_traits
    tenant_id:
      fields: payload.tenant_id
    user_id:
      fields: payload.user_id
    instance_id:
      fields: payload.instance_id
    host:
      fields: publisher_id
      plugin:
        name: split
        parameters:
          segment: 1
          max_split: 1
    service:
      fields: publisher_id
      plugin: split
    memory_mb:
      type: int
      fields: payload.memory_mb
    disk_gb:
      type: int
      fields: payload.disk_gb
    root_gb:
      type: int
      fields: payload.root_gb
    ephemeral_gb:
      type: int
      fields: payload.ephemeral_gb
    vcpus:
      type: int
      fields: payload.vcpus
    instance_type_id:
      type: int
      fields: payload.instance_type_id
    instance_type:
      fields: payload.instance_type
    state:
      fields: payload.state
    os_architecture:
      fields: payload.image_meta.'org.openstack__1__architecture'
    os_version:
      fields: payload.image_meta.'org.openstack__1__os_version'
    os_distro:
      fields: payload.image_meta.'org.openstack__1__os_distro'
    launched_at:
      type: datetime
      fields: payload.launched_at
    deleted_at:
      type: datetime
      fields: payload.deleted_at
- event_type: compute.instance.exists
  traits:
    <<: *instance_traits
    audit_period_beginning:
      type: datetime
      fields: payload.audit_period_beginning
    audit_period_ending:
      type: datetime
      fields: payload.audit_period_ending


@ -1,30 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
LOG = log.getLogger(__name__)
class EventsV2API(object):
    def __init__(self, global_conf):
        LOG.debug('initializing V2API!')
        self.global_conf = global_conf

    def on_post(self, req, res):
        res.status = '501 Not Implemented'

    def on_get(self, req, res, events_id):
        res.status = '501 Not Implemented'


@ -1,76 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from wsgiref import simple_server
import falcon
from oslo_config import cfg
from oslo_log import log
import paste.deploy
import simport
dispatcher_opts = [cfg.StrOpt('versions', default=None,
                              help='Versions endpoint'),
                   cfg.StrOpt('stream_definitions', default=None,
                              help='Stream definition endpoint'),
                   cfg.StrOpt('events', default=None,
                              help='Events endpoint'),
                   cfg.StrOpt('transforms', default=None,
                              help='Transforms endpoint')]

dispatcher_group = cfg.OptGroup(name='dispatcher', title='dispatcher')
cfg.CONF.register_group(dispatcher_group)
cfg.CONF.register_opts(dispatcher_opts, dispatcher_group)

LOG = log.getLogger(__name__)


def launch(conf, config_file="/etc/monasca/events_api.conf"):
    log.register_options(cfg.CONF)
    log.set_defaults()
    cfg.CONF(args=[],
             project='monasca_events_api',
             default_config_files=[config_file])
    log.setup(cfg.CONF, 'monasca_events_api')

    app = falcon.API()

    versions = simport.load(cfg.CONF.dispatcher.versions)()
    app.add_route("/", versions)
    app.add_route("/{version_id}", versions)

    events = simport.load(cfg.CONF.dispatcher.events)()
    app.add_route("/v2.0/events", events)
    app.add_route("/v2.0/events/{event_id}", events)

    streams = simport.load(cfg.CONF.dispatcher.stream_definitions)()
    app.add_route("/v2.0/stream-definitions/", streams)
    app.add_route("/v2.0/stream-definitions/{stream_id}", streams)

    transforms = simport.load(cfg.CONF.dispatcher.transforms)()
    app.add_route("/v2.0/transforms", transforms)
    app.add_route("/v2.0/transforms/{transform_id}", transforms)

    LOG.debug('Dispatcher drivers have been added to the routes!')
    return app


if __name__ == '__main__':
    wsgi_app = (
        paste.deploy.loadapp('config:etc/events_api.ini',
                             relative_to=os.getcwd()))
    httpd = simple_server.make_server('127.0.0.1', 8072, wsgi_app)
    httpd.serve_forever()


@ -1,37 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
LOG = log.getLogger(__name__)
class StreamDefinitionsV2API(object):
    def __init__(self, global_conf):
        LOG.debug('initializing StreamDefinitionsV2API!')
        self.global_conf = global_conf

    def on_post(self, req, res):
        res.status = '501 Not Implemented'

    def on_get(self, req, res, stream_id):
        res.status = '501 Not Implemented'

    def on_delete(self, req, res, stream_id):
        res.status = '501 Not Implemented'

    def on_patch(self, req, res, stream_id):
        res.status = '501 Not Implemented'


@ -1,33 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
LOG = log.getLogger(__name__)
class TransformsV2API(object):
    def __init__(self, global_conf):
        LOG.debug('initializing V2API!')
        self.global_conf = global_conf

    def on_post(self, req, res):
        res.status = '501 Not Implemented'

    def on_get(self, req, res):
        res.status = '501 Not Implemented'

    def on_delete(self, req, res, transform_id):
        res.status = '501 Not Implemented'


@ -1,26 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
LOG = log.getLogger(__name__)
class VersionsAPI(object):
    def __init__(self):
        super(VersionsAPI, self).__init__()
        LOG.info('Initializing Versions!')

    def on_get(self, req, res, id):
        res.status = '501 Not Implemented'


@ -1,17 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class MessageQueueException(Exception):
pass

View File

@ -1,125 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from kafka import client
from kafka import common
from kafka import producer
from oslo_config import cfg
from oslo_log import log
from monasca_events_api.common.messaging import exceptions
from monasca_events_api.common.messaging import publisher
LOG = log.getLogger(__name__)
class KafkaPublisher(publisher.Publisher):
def __init__(self, topic):
if not cfg.CONF.kafka.uri:
raise Exception('Kafka is not configured correctly! '
'Use configuration file to specify Kafka '
'uri, for example: '
'uri=192.168.1.191:9092')
self.uri = cfg.CONF.kafka.uri
self.topic = topic
self.group = cfg.CONF.kafka.group
self.wait_time = cfg.CONF.kafka.wait_time
self.async = cfg.CONF.kafka.async
self.ack_time = cfg.CONF.kafka.ack_time
self.max_retry = cfg.CONF.kafka.max_retry
self.auto_commit = cfg.CONF.kafka.auto_commit
self.compact = cfg.CONF.kafka.compact
self.partitions = cfg.CONF.kafka.partitions
self.drop_data = cfg.CONF.kafka.drop_data
self._client = None
self._producer = None
def _init_client(self, wait_time=None):
for i in range(self.max_retry):
try:
# if there is a client instance, but _init_client is called
# again, most likely the connection has gone stale, close that
# connection and reconnect.
if self._client:
self._client.close()
if not wait_time:
wait_time = self.wait_time
time.sleep(wait_time)
self._client = client.KafkaClient(self.uri)
# when a client is re-initialized, existing consumer should be
# reset as well.
self._producer = None
break
except common.KafkaUnavailableError:
LOG.error('Kafka server at %s is down.' % self.uri)
except common.LeaderNotAvailableError:
LOG.error('Kafka at %s has no leader available.' % self.uri)
except Exception:
LOG.error('Kafka at %s initialization failed.' % self.uri)
# Wait a bit and try again to get a client
time.sleep(self.wait_time)
def _init_producer(self):
try:
if not self._client:
self._init_client()
self._producer = producer.SimpleProducer(
self._client, async=self.async, ack_timeout=self.ack_time)
LOG.debug('Kafka SimpleProducer was created successfully.')
except Exception:
self._producer = None
LOG.exception('Kafka (%s) producer can not be created.' % self.uri)
def close(self):
if self._client:
self._producer = None
self._client.close()
def send_message(self, message):
try:
if not self._producer:
self._init_producer()
self._producer.send_messages(self.topic, message)
except (common.KafkaUnavailableError,
common.LeaderNotAvailableError):
self._client = None
LOG.exception('Error occurred while posting data to Kafka.')
raise exceptions.MessageQueueException()
except Exception:
LOG.exception('Unknown error.')
raise exceptions.MessageQueueException()
def send_message_batch(self, messages):
try:
if not self._producer:
self._init_producer()
self._producer.send_messages(self.topic, *messages)
except (common.KafkaUnavailableError,
common.LeaderNotAvailableError):
self._client = None
LOG.exception('Error occurred while posting data to Kafka.')
raise exceptions.MessageQueueException()
except Exception:
LOG.exception('Unknown error.')
raise exceptions.MessageQueueException()
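The publisher above lazily (re)creates its Kafka client and drops it on failure so the next send reconnects. A minimal pure-Python sketch of that pattern (hypothetical `connect` callable, no Kafka dependency):

```python
class LazyPublisher:
    """Sketch of KafkaPublisher's lazy-init pattern: the connection is
    built on first send and dropped on failure so the next attempt
    reconnects (hypothetical `connect` callable, not the Kafka code).
    """

    def __init__(self, connect, max_retry=3):
        self._connect = connect     # returns a send callable
        self._max_retry = max_retry
        self._send = None

    def send(self, message):
        for _ in range(self._max_retry):
            try:
                if self._send is None:
                    self._send = self._connect()
                return self._send(message)
            except IOError:
                self._send = None   # stale connection: rebuild next try
        raise RuntimeError('message queue unavailable')


sent = []
pub = LazyPublisher(lambda: sent.append)
pub.send('event-1')
print(sent)  # ['event-1']
```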

View File

@ -1,36 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from oslo_utils import timeutils
def transform(events, tenant_id, region):
event_template = {'event': {},
'meta': {'tenantId': tenant_id, 'region': region},
'creation_time': timeutils.utcnow_ts()}
if isinstance(events, list):
transformed_events = []
for event in events:
event['_tenant_id'] = tenant_id
event_template['event'] = event
transformed_events.append(json.dumps(event_template))
return transformed_events
    else:
        events['_tenant_id'] = tenant_id
        event_template['event'] = events
        return [json.dumps(event_template)]
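For illustration, a condensed re-implementation of the envelope built above for a single event (standard library only; `creation_time` shown as a Unix timestamp):

```python
import json
import time


def build_envelope(event, tenant_id, region):
    """Condensed version of transform() above for a single event."""
    event = dict(event, _tenant_id=tenant_id)   # tag the raw event
    return json.dumps({
        'event': event,
        'meta': {'tenantId': tenant_id, 'region': region},
        'creation_time': int(time.time()),
    })


envelope = json.loads(build_envelope({'type': 'compute.instance.create'},
                                     'abc123', 'region-one'))
print(envelope['meta'])  # {'tenantId': 'abc123', 'region': 'region-one'}
```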

View File

@ -1,22 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def transform(transform_id, tenant_id, event):
transformed_event = dict(
transform_definition=event,
tenant_id=tenant_id,
transform_id=transform_id
)
return transformed_event

View File

@ -1,28 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class Publisher(object):
@abc.abstractmethod
def send_message(self, message):
"""Sends the message using the message queue.
:param message: Message to send.
"""
return

View File

@ -1 +0,0 @@
PAGE_LIMIT = 50

View File

@ -1,28 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class EventsRepository(object):
@abc.abstractmethod
def list_event(self, tenant_id, event_id):
return
@abc.abstractmethod
def list_events(self, tenant_id, offset, limit):
return

View File

@ -1,29 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class RepositoryException(Exception):
pass
class DoesNotExistException(RepositoryException):
pass
class AlreadyExistsException(RepositoryException):
pass
class InvalidUpdateException(RepositoryException):
pass

View File

@ -1,94 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
from monasca_events_api.common.repositories import constants
from monasca_events_api.common.repositories import events_repository as er
from monasca_events_api.common.repositories.mysql import mysql_repository
LOG = log.getLogger(__name__)
class EventsRepository(mysql_repository.MySQLRepository,
er.EventsRepository):
def __init__(self):
super(EventsRepository, self).__init__()
self.database_name = "winchester"
self._base_query = """
select event.message_id,
event.generated,
event_type.desc,
trait.name,
trait.t_string,
trait.t_float,
trait.t_int,
trait.t_datetime
from event
inner join event_type on event.event_type_id=event_type.id
inner join trait on event.id=trait.event_id"""
@mysql_repository.mysql_try_catch_block
def list_event(self, tenant_id, event_id):
query = self._base_query + " where event.message_id=%s"
rows = self._execute_query(query, [event_id])
return rows
@mysql_repository.mysql_try_catch_block
def list_events(self, tenant_id, offset, limit):
where_clause = ""
order_by_clause = " order by event.generated asc"
event_ids = self._find_event_ids(offset, limit)
if event_ids:
ids = ",".join([str(event_id['id']) for event_id in event_ids])
where_clause = """
where trait.event_id
IN ({})""".format(ids)
query = self._base_query + where_clause + order_by_clause
rows = self._execute_query(query, [])
return rows
def _find_event_ids(self, offset, limit):
if not limit:
limit = constants.PAGE_LIMIT
parameters = []
if offset:
parameters.append(offset.encode('utf8'))
offset_clause = """
where generated > (select generated
from event
where message_id = %s)"""
else:
offset_clause = ""
parameters.append(int(limit))
limit_clause = " limit %s"
id_query = ('select id from event ' +
offset_clause +
' order by generated ' +
limit_clause)
return self._execute_query(id_query, parameters)
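`_find_event_ids` implements keyset pagination: the `offset` is a `message_id`, and the next page starts strictly after that row's `generated` value. The same logic over in-memory rows (a sketch, not the MySQL code):

```python
PAGE_LIMIT = 50


def find_event_ids(rows, offset=None, limit=None):
    """Keyset pagination over rows with 'id', 'message_id', 'generated'."""
    limit = int(limit) if limit else PAGE_LIMIT
    ordered = sorted(rows, key=lambda r: r['generated'])
    if offset:
        # Same semantics as the SQL subquery above: start strictly
        # after the 'generated' value of the row whose message_id
        # equals the offset.
        pivot = next(r['generated'] for r in ordered
                     if r['message_id'] == offset)
        ordered = [r for r in ordered if r['generated'] > pivot]
    return [r['id'] for r in ordered[:limit]]


rows = [{'id': i, 'message_id': 'm%d' % i, 'generated': i}
        for i in range(5)]
print(find_event_ids(rows, offset='m1', limit=2))  # [2, 3]
```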

View File

@ -1,81 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import MySQLdb as mdb
from oslo_config import cfg
from oslo_log import log
from monasca_events_api.common.repositories import exceptions
LOG = log.getLogger(__name__)
class MySQLRepository(object):
def __init__(self):
try:
super(MySQLRepository, self).__init__()
self.conf = cfg.CONF
self.database_name = self.conf.mysql.database_name
self.database_server = self.conf.mysql.hostname
self.database_uid = self.conf.mysql.username
self.database_pwd = self.conf.mysql.password
except Exception as ex:
LOG.exception(ex)
raise exceptions.RepositoryException(ex)
def _get_cnxn_cursor_tuple(self):
cnxn = mdb.connect(self.database_server, self.database_uid,
self.database_pwd, self.database_name,
use_unicode=True, charset='utf8')
cursor = cnxn.cursor(mdb.cursors.DictCursor)
return cnxn, cursor
def _execute_query(self, query, parms):
cnxn, cursor = self._get_cnxn_cursor_tuple()
with cnxn:
cursor.execute(query, parms)
return cursor.fetchall()
def mysql_try_catch_block(fun):
def try_it(*args, **kwargs):
try:
return fun(*args, **kwargs)
except exceptions.DoesNotExistException:
raise
except exceptions.InvalidUpdateException:
raise
except exceptions.AlreadyExistsException:
raise
except Exception as ex:
LOG.exception(ex)
raise exceptions.RepositoryException(ex)
return try_it
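The decorator's contract, that known repository exceptions propagate untouched while anything unexpected is wrapped in `RepositoryException`, can be sketched without a database:

```python
class RepositoryException(Exception):
    pass


class DoesNotExistException(RepositoryException):
    pass


def try_catch_block(fun):
    """Mirror of mysql_try_catch_block above: known repository
    exceptions propagate as-is; anything unexpected is wrapped."""
    def try_it(*args, **kwargs):
        try:
            return fun(*args, **kwargs)
        except DoesNotExistException:
            raise
        except Exception as ex:
            raise RepositoryException(ex)
    return try_it


@try_catch_block
def boom():
    raise ValueError('db went away')


try:
    boom()
except RepositoryException as ex:
    print(type(ex).__name__)  # RepositoryException
```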

View File

@ -1,345 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
import MySQLdb
from oslo_log import log
from oslo_utils import timeutils
from monasca_events_api.common.repositories import constants
from monasca_events_api.common.repositories import exceptions
from monasca_events_api.common.repositories.mysql import mysql_repository
from monasca_events_api.common.repositories import streams_repository as sdr
LOG = log.getLogger(__name__)
class StreamsRepository(mysql_repository.MySQLRepository,
sdr.StreamsRepository):
base_query = """
select sd.id, sd.tenant_id, sd.name, sd.description,
sd.select_by, sd.group_by, sd.fire_criteria, sd.expiration,
sd.actions_enabled, sd.created_at,
sd.updated_at, sd.deleted_at,
saf.fire_actions, sae.expire_actions
from stream_definition as sd
left join (select stream_definition_id,
group_concat(action_id) as fire_actions
from stream_actions
where action_type = 'FIRE'
group by stream_definition_id) as saf
on saf.stream_definition_id = sd.id
left join (select stream_definition_id,
group_concat(action_id) as expire_actions
from stream_actions
where action_type = 'EXPIRE'
group by stream_definition_id) as sae
on sae.stream_definition_id = sd.id
"""
def __init__(self):
super(StreamsRepository, self).__init__()
@mysql_repository.mysql_try_catch_block
def get_stream_definition(self, tenant_id, stream_definition_id):
parms = [tenant_id, stream_definition_id]
where_clause = """ where sd.tenant_id = %s
and sd.id = %s
and deleted_at is NULL """
query = StreamsRepository.base_query + where_clause
rows = self._execute_query(query, parms)
if rows:
return rows[0]
else:
raise exceptions.DoesNotExistException
@mysql_repository.mysql_try_catch_block
def get_stream_definitions(self, tenant_id, name, offset=None, limit=None):
parms = [tenant_id]
select_clause = StreamsRepository.base_query
where_clause = " where sd.tenant_id = %s and deleted_at is NULL "
if name:
where_clause += " and sd.name = %s "
parms.append(name.encode('utf8'))
if offset is not None:
order_by_clause = " order by sd.id, sd.created_at "
where_clause += " and sd.id > %s "
parms.append(offset.encode('utf8'))
limit_clause = " limit %s "
parms.append(constants.PAGE_LIMIT)
else:
order_by_clause = " order by sd.created_at "
limit_clause = ""
if limit:
limit_clause = " limit %s"
parms.append(int(limit))
query = select_clause + where_clause + order_by_clause + limit_clause
return self._execute_query(query, parms)
@mysql_repository.mysql_try_catch_block
def get_all_stream_definitions(self, offset=None, limit=None):
parms = []
select_clause = StreamsRepository.base_query
where_clause = " where deleted_at is NULL "
if offset is not None:
order_by_clause = " order by sd.id, sd.created_at "
where_clause += " and sd.id > %s "
parms.append(offset.encode('utf8'))
limit_clause = " limit %s "
if limit is not None:
parms.append(limit)
else:
parms.append(constants.PAGE_LIMIT)
else:
order_by_clause = " order by sd.created_at "
limit_clause = ""
query = select_clause + where_clause + order_by_clause + limit_clause
return self._execute_query(query, parms)
@mysql_repository.mysql_try_catch_block
def delete_stream_definition(self, tenant_id, stream_definition_id):
"""Delete the stream definition.
:param tenant_id:
:param stream_definition_id:
:returns True: -- if stream definition exists and was deleted.
    :returns False: -- if the stream definition does not exist.
:raises RepositoryException:
"""
cnxn, cursor = self._get_cnxn_cursor_tuple()
with cnxn:
cursor.execute("""delete from stream_definition
where tenant_id = %s and id = %s""",
[tenant_id, stream_definition_id])
if cursor.rowcount < 1:
return False
return True
@mysql_repository.mysql_try_catch_block
def create_stream_definition(self,
tenant_id,
name,
description,
select,
group_by,
fire_criteria,
expiration,
fire_actions,
expire_actions):
cnxn, cursor = self._get_cnxn_cursor_tuple()
with cnxn:
now = timeutils.utcnow()
stream_definition_id = str(uuid.uuid1())
try:
cursor.execute("""insert into stream_definition(
id,
tenant_id,
name,
description,
select_by,
group_by,
fire_criteria,
expiration,
created_at,
updated_at)
values (%s, %s, %s, %s, %s, %s, %s, %s, %s,
%s)""", (
stream_definition_id, tenant_id, name.encode('utf8'),
description.encode('utf8'), select.encode('utf8'),
group_by.encode('utf8'), fire_criteria.encode('utf8'),
expiration, now, now))
except MySQLdb.IntegrityError as e:
code, msg = e
if code == 1062:
raise exceptions.AlreadyExistsException(
'Stream Definition already '
'exists for tenant_id: {0} name: {1}'.format(
tenant_id, name.encode('utf8')))
else:
raise e
self._insert_into_stream_actions(cursor, stream_definition_id,
fire_actions, u"FIRE")
self._insert_into_stream_actions(cursor, stream_definition_id,
expire_actions,
u"EXPIRE")
return stream_definition_id
@mysql_repository.mysql_try_catch_block
def patch_stream_definition(self, tenant_id, stream_definition_id, name, description, select, group_by,
fire_criteria, expiration, fire_actions, expire_actions):
cnxn, cursor = self._get_cnxn_cursor_tuple()
with cnxn:
# Get the original alarm definition from the DB
parms = [tenant_id, stream_definition_id]
where_clause = """ where sd.tenant_id = %s
and sd.id = %s"""
query = StreamsRepository.base_query + where_clause
cursor.execute(query, parms)
if cursor.rowcount < 1:
raise exceptions.DoesNotExistException
original_definition = cursor.fetchall()[0]
# Update that stream definition in the database
patch_query = """
update stream_definition
set name = %s,
description = %s,
select_by = %s,
group_by = %s,
fire_criteria = %s,
expiration = %s,
updated_at = %s
where tenant_id = %s and id = %s"""
if name is None:
name = original_definition['name']
if description is None:
description = original_definition['description']
if select is None:
select = original_definition['select_by']
if select != original_definition['select_by']:
msg = "select_by must not change".encode('utf8')
raise exceptions.InvalidUpdateException(msg)
if group_by is None:
group_by = original_definition['group_by']
if group_by != original_definition['group_by']:
msg = "group_by must not change".encode('utf8')
raise exceptions.InvalidUpdateException(msg)
if fire_criteria is None:
fire_criteria = original_definition['fire_criteria']
if expiration is None:
expiration = original_definition['expiration']
now = timeutils.utcnow()
update_parms = [
name,
description,
select,
group_by,
fire_criteria,
expiration,
now,
tenant_id,
stream_definition_id]
cursor.execute(patch_query, update_parms)
# Update the fire and expire actions in the database if defined
if fire_actions is not None:
self._delete_stream_actions(cursor, stream_definition_id,
u'FIRE')
if expire_actions is not None:
self._delete_stream_actions(cursor, stream_definition_id,
u'EXPIRE')
self._insert_into_stream_actions(cursor, stream_definition_id,
fire_actions,
u"FIRE")
self._insert_into_stream_actions(cursor, stream_definition_id,
expire_actions,
u"EXPIRE")
# Get updated entry from mysql
cursor.execute(query, parms)
return cursor.fetchall()[0]
def _delete_stream_actions(self, cursor, stream_definition_id, action_type):
query = """
delete
from stream_actions
where stream_definition_id = %s and action_type = %s
"""
parms = [stream_definition_id, action_type.encode('utf8')]
cursor.execute(query, parms)
def _insert_into_stream_actions(self, cursor, stream_definition_id,
actions, action_type):
if actions is None:
return
for action in actions:
cursor.execute(
"select id,type from notification_method where id = %s",
(action.encode('utf8'),))
row = cursor.fetchone()
if not row:
raise exceptions.InvalidUpdateException(
"Non-existent notification id {} submitted for {} "
"notification action".format(action.encode('utf8'),
action_type.encode('utf8')))
else:
if row['type'] == 'PAGERDUTY':
raise exceptions.InvalidUpdateException(
"PAGERDUTY action not supported for "
"notification id {} submitted for {} "
"notification action".format(
action.encode('utf8'),
action_type.encode('utf8')))
cursor.execute("""insert into stream_actions(
stream_definition_id,
action_type,
action_id)
values(%s,%s,%s)""", (
stream_definition_id, action_type.encode('utf8'),
action.encode('utf8')))
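`patch_stream_definition` fills unset fields from the original row and rejects any change to `select_by` and `group_by`. A simplified sketch of that merge-and-validate step (hypothetical helper, not the repository code):

```python
IMMUTABLE = ('select_by', 'group_by')


def merge_patch(original, patch):
    """Fill unset patch fields from the original row and reject any
    change to immutable fields (sketch of the checks above)."""
    merged = {}
    for key, old in original.items():
        new = patch.get(key)
        if new is None:
            merged[key] = old
        elif key in IMMUTABLE and new != old:
            raise ValueError('%s must not change' % key)
        else:
            merged[key] = new
    return merged


orig = {'name': 'n1', 'select_by': 's', 'group_by': 'g'}
print(merge_patch(orig, {'name': 'n2'}))
# {'name': 'n2', 'select_by': 's', 'group_by': 'g'}
```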

View File

@ -1,94 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import MySQLdb
from oslo_log import log
from oslo_utils import timeutils
from monasca_events_api.common.repositories.mysql import mysql_repository
from monasca_events_api.common.repositories import constants
from monasca_events_api.common.repositories import exceptions
from monasca_events_api.common.repositories import transforms_repository
LOG = log.getLogger(__name__)
class TransformsRepository(mysql_repository.MySQLRepository,
transforms_repository.TransformsRepository):
def create_transforms(self, id, tenant_id, name, description,
specification, enabled):
cnxn, cursor = self._get_cnxn_cursor_tuple()
with cnxn:
now = timeutils.utcnow()
try:
cursor.execute("""insert into event_transform(
id,
tenant_id,
name,
description,
specification,
enabled,
created_at,
updated_at)
values (%s, %s, %s, %s, %s, %s, %s, %s)""",
(id, tenant_id, name, description,
specification, enabled, now, now))
except MySQLdb.IntegrityError as e:
code, msg = e
if code == 1062:
                    raise exceptions.AlreadyExistsException(
                        'Transform definition already '
                        'exists for tenant_id: {}'.format(tenant_id))
else:
raise e
def list_transforms(self, tenant_id, limit=None, offset=None):
base_query = """select * from event_transform where deleted_at IS NULL"""
tenant_id_clause = " and tenant_id = \"{}\"".format(tenant_id)
order_by_clause = " order by id"
offset_clause = ' '
if offset:
offset_clause = " and id > \"{}\"".format(offset)
if not limit:
limit = constants.PAGE_LIMIT
limit_clause = " limit {}".format(limit)
query = (base_query +
tenant_id_clause +
offset_clause +
order_by_clause +
limit_clause)
rows = self._execute_query(query, [])
return rows
def list_transform(self, tenant_id, transform_id):
base_query = """select * from event_transform where deleted_at IS NULL"""
tenant_id_clause = " and tenant_id = \"{}\"".format(tenant_id)
transform_id_clause = " and id = \"{}\"".format(transform_id)
query = (base_query +
tenant_id_clause +
transform_id_clause)
rows = self._execute_query(query, [])
return rows
def delete_transform(self, tenant_id, transform_id):
self._execute_query("""delete from event_transform
where tenant_id = %s and id = %s""",
[tenant_id, transform_id])

View File

@ -1,62 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class StreamsRepository(object):
def __init__(self):
super(StreamsRepository, self).__init__()
@abc.abstractmethod
def create_stream_definition(self,
tenant_id,
name,
description,
select,
group_by,
fire_criteria,
expiration,
fire_actions,
expire_actions):
pass
@abc.abstractmethod
def delete_stream_definition(self, tenant_id, stream_definition_id):
pass
@abc.abstractmethod
def get_stream_definition(self, tenant_id, stream_definition_id):
pass
@abc.abstractmethod
def get_stream_definitions(self, tenant_id, name, offset, limit):
pass
@abc.abstractmethod
def patch_stream_definition(self,
tenant_id,
stream_definition_id,
name,
description,
select,
group_by,
fire_criteria,
expiration,
fire_actions,
expire_actions):
pass

View File

@ -1,33 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class TransformsRepository(object):
@abc.abstractmethod
def create_transforms(self, id, tenant_id, name, description,
specification, enabled):
return
@abc.abstractmethod
def list_transforms(self, tenant_id):
return
@abc.abstractmethod
def delete_transform(self, tenant_id, transform_id):
return

View File

@ -0,0 +1,84 @@
# Copyright 2017 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import pkgutil
from oslo_config import cfg
from oslo_log import log
from oslo_utils import importutils
CONF = cfg.CONF
LOG = log.getLogger(__name__)
def load_conf_modules():
"""Load all modules that contain configuration.
Method iterates over modules of :py:module:`monasca_log_api.conf`
and imports only those that contain following methods:
- list_opts (required by oslo_config.genconfig)
- register_opts (required by :py:currentmodule:)
"""
imported_modules = []
for modname in _list_module_names():
mod = importutils.import_module('monasca_events_api.conf.' + modname)
required_funcs = ['register_opts', 'list_opts']
for func in required_funcs:
if not hasattr(mod, func):
msg = ("The module 'monasca_events_api.conf.%s' should have a"
" '%s' function which returns"
" the config options."
% (modname, func))
LOG.warning(msg)
else:
imported_modules.append(mod)
LOG.debug('Found %d modules that contain configuration',
len(imported_modules))
return imported_modules
def _list_module_names():
module_names = []
package_path = os.path.dirname(os.path.abspath(__file__))
for _, modname, ispkg in pkgutil.iter_modules(path=[package_path]):
if not (modname == "opts" and ispkg):
module_names.append(modname)
return module_names
def register_opts():
"""Register all conf modules opts.
This method allows different modules to register
opts according to their needs.
"""
for mod in load_conf_modules():
mod.register_opts(CONF)
def list_opts():
"""List all conf modules opts.
Goes through all conf modules and yields their opts
"""
for mod in load_conf_modules():
mod_opts = mod.list_opts()
yield mod_opts[0], mod_opts[1]
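A conf module is accepted only if it exposes both hooks. The duck-typing check that `load_conf_modules` performs can be illustrated without `oslo.config` (hypothetical module names):

```python
import types


def make_conf_module(name, with_hooks):
    """Build a stand-in conf module (hypothetical names)."""
    mod = types.ModuleType(name)
    if with_hooks:
        mod.register_opts = lambda conf: None
        mod.list_opts = lambda: ('events_api', [])
    return mod


def usable(mod):
    # Same duck-typing as load_conf_modules(): both hooks must exist.
    return all(hasattr(mod, f) for f in ('register_opts', 'list_opts'))


print(usable(make_conf_module('kafka', True)))    # True
print(usable(make_conf_module('broken', False)))  # False
```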

View File

@ -0,0 +1,53 @@
# Copyright 2017 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
from monasca_events_api import conf
from monasca_events_api import version
CONF = conf.CONF
LOG = log.getLogger(__name__)
_CONF_LOADED = False
def parse_args():
"""Parse configuration arguments.
Note:
        This method ensures that the configuration is loaded
        only once within a single Python interpreter.
"""
global _CONF_LOADED
if _CONF_LOADED:
        LOG.debug('Configuration has already been loaded')
return
log.set_defaults()
log.register_options(CONF)
CONF(prog='events-api',
project='monasca',
version=version.version_str,
         description='RESTful API to collect events from the cloud')
log.setup(CONF,
product_name='monasca-events-api',
version=version.version_str)
conf.register_opts()
_CONF_LOADED = True
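The `_CONF_LOADED` guard makes `parse_args` idempotent. A minimal sketch of the load-once pattern (hypothetical `loader` callable in place of the real oslo.config setup):

```python
_LOADED = False


def parse_once(loader):
    """Run loader() only on the first call; later calls are no-ops."""
    global _LOADED
    if _LOADED:
        return False
    loader()
    _LOADED = True
    return True


calls = []
parse_once(lambda: calls.append(1))
parse_once(lambda: calls.append(1))  # second call is skipped
print(len(calls))  # 1
```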

View File

@ -1,84 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""RequestContext: context for requests that persist through monasca_events_api."""
import uuid
from oslo_log import log
from oslo_utils import timeutils
LOG = log.getLogger(__name__)
class RequestContext(object):
"""Security context and request information.
Represents the user taking a given action within the system.
"""
def __init__(self, user_id, project_id, domain_id=None, domain_name=None,
roles=None, timestamp=None, request_id=None,
auth_token=None, user_name=None, project_name=None,
service_catalog=None, user_auth_plugin=None, **kwargs):
"""Creates the Keystone Context. Supports additional parameters:
:param user_auth_plugin:
The auth plugin for the current request's authentication data.
:param kwargs:
Extra arguments that might be present
"""
        if kwargs:
            LOG.warning(
                'Arguments dropped when creating context: %s', kwargs)
self._roles = roles or []
self.timestamp = timeutils.utcnow()
if not request_id:
request_id = self.generate_request_id()
self._request_id = request_id
self._auth_token = auth_token
self._service_catalog = service_catalog
self._domain_id = domain_id
self._domain_name = domain_name
self._user_id = user_id
self._user_name = user_name
self._project_id = project_id
self._project_name = project_name
self._user_auth_plugin = user_auth_plugin
def to_dict(self):
return {'user_id': self._user_id,
'project_id': self._project_id,
'domain_id': self._domain_id,
'domain_name': self._domain_name,
'roles': self._roles,
'timestamp': timeutils.strtime(self._timestamp),
'request_id': self._request_id,
'auth_token': self._auth_token,
'user_name': self._user_name,
'service_catalog': self._service_catalog,
'project_name': self._project_name,
'user': self._user_id}
def generate_request_id(self):
return b'req-' + str(uuid.uuid4()).encode('ascii')
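The request-id scheme above can be exercised on its own; a minimal, self-contained sketch (a hypothetical `generate_request_id` helper mirroring the method above, except that it returns text rather than bytes, matching what newer oslo.context does):

```python
import uuid

def generate_request_id():
    # 'req-' prefix followed by a random UUID4, as in RequestContext above
    return 'req-' + str(uuid.uuid4())

rid = generate_request_id()
print(rid.startswith('req-'), len(rid))  # → True 40
```

A UUID4 renders as 36 characters, so every generated id is 40 characters long and unique per request.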

View File

@ -1,110 +0,0 @@
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
from oslo_log import log
from oslo_middleware import request_id
from oslo_serialization import jsonutils
from monasca_events_api.middleware import context
LOG = log.getLogger(__name__)
def filter_factory(global_conf, **local_conf):
def validator_filter(app):
return KeystoneContextFilter(app, local_conf)
return validator_filter
class KeystoneContextFilter(object):
"""Make a request context from keystone headers."""
def __init__(self, app, conf):
self._app = app
self._conf = conf
def __call__(self, env, start_response):
LOG.debug("Creating Keystone Context Object.")
user_id = env.get('HTTP_X_USER_ID', env.get('HTTP_X_USER'))
if user_id is None:
msg = "Neither X_USER_ID nor X_USER found in request"
LOG.error(msg)
raise falcon.HTTPUnauthorized(title='Forbidden', description=msg)
roles = self._get_roles(env)
project_id = env.get('HTTP_X_PROJECT_ID')
project_name = env.get('HTTP_X_PROJECT_NAME')
domain_id = env.get('HTTP_X_DOMAIN_ID')
domain_name = env.get('HTTP_X_DOMAIN_NAME')
user_name = env.get('HTTP_X_USER_NAME')
req_id = env.get(request_id.ENV_REQUEST_ID)
# Get the auth token
auth_token = env.get('HTTP_X_AUTH_TOKEN',
env.get('HTTP_X_STORAGE_TOKEN'))
service_catalog = None
if env.get('HTTP_X_SERVICE_CATALOG') is not None:
try:
catalog_header = env.get('HTTP_X_SERVICE_CATALOG')
service_catalog = jsonutils.loads(catalog_header)
except ValueError:
msg = "Invalid service catalog json."
LOG.error(msg)
raise falcon.HTTPInternalServerError(msg)
# NOTE(jamielennox): This is a full auth plugin set by auth_token
# middleware in newer versions.
user_auth_plugin = env.get('keystone.token_auth')
# Build a context
ctx = context.RequestContext(user_id,
project_id,
user_name=user_name,
project_name=project_name,
domain_id=domain_id,
domain_name=domain_name,
roles=roles,
auth_token=auth_token,
service_catalog=service_catalog,
request_id=req_id,
user_auth_plugin=user_auth_plugin)
env['monasca_events_api.context'] = ctx
LOG.debug("Keystone Context successfully created.")
return self._app(env, start_response)
def _get_roles(self, env):
"""Get the list of roles."""
if 'HTTP_X_ROLES' in env:
roles = env.get('HTTP_X_ROLES', '')
else:
# Fallback to deprecated role header:
roles = env.get('HTTP_X_ROLE', '')
if roles:
LOG.warning(
'Sourcing roles from deprecated X-Role HTTP header')
return [r.strip() for r in roles.split(',')]
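The role-header fallback in `_get_roles` can be tried standalone; a minimal sketch (a hypothetical `parse_roles` helper assuming the same WSGI env keys: prefer `X-Roles`, fall back to the deprecated `X-Role`, split on commas):

```python
def parse_roles(env):
    # prefer the current X-Roles header; fall back to deprecated X-Role
    if 'HTTP_X_ROLES' in env:
        roles = env.get('HTTP_X_ROLES', '')
    else:
        roles = env.get('HTTP_X_ROLE', '')
    # split the comma-separated header value into stripped role names
    return [r.strip() for r in roles.split(',')] if roles else []

print(parse_roles({'HTTP_X_ROLES': 'user, monasca-user'}))
# → ['user', 'monasca-user']
```

Unlike the original, this sketch returns an empty list (rather than `['']`) when neither header is present.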

View File

@ -1,374 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
import json
from monasca_events_api.common.messaging import exceptions
from monasca_events_api.common.repositories.mysql.events_repository import EventsRepository
from monasca_events_api.v2.events import Events
import mock
from monasca_events_api.common.repositories.exceptions import RepositoryException
from oslo_utils import timeutils
import unittest
class EventsSubClass(Events):
def __init__(self):
self._default_authorized_roles = ['user', 'domainuser',
'domainadmin', 'monasca-user']
self._post_events_authorized_roles = [
'user',
'domainuser',
'domainadmin',
'monasca-user',
'monasca-agent']
self._events_repo = None
self._message_queue = None
self._region = 'useast'
def _event_transform(self, event, tenant_id, _region):
return dict(
event=1,
meta=dict(
tenantId='0ab1ac0a-2867-402d',
region='useast'),
creation_time=timeutils.utcnow_ts())
class Test_Events(unittest.TestCase):
def _generate_req(self):
"""Generate a mock HTTP request"""
req = mock.MagicMock()
req.get_param.return_value = None
req.headers = {
'X-Auth-User': 'mini-mon',
'X-Auth-Token': "ABCD",
'X-Auth-Key': 'password',
'X-TENANT-ID': '0ab1ac0a-2867-402d',
'X-ROLES': 'user, domainuser, domainadmin, monasca-user, monasca-agent',
'Accept': 'application/json',
'User-Agent': 'python-monascaclient',
'Content-Type': 'application/json'}
req.body = {}
req.content_type = 'application/json'
return req
@mock.patch('monasca_events_api.v2.events.Events._list_event')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_pass_singleevent(
self,
helper_tenant_id,
helpers_validate,
mysqlRepo,
listev):
"""GET Method success Single Event"""
helpers_validate.validate_authorization.return_value = True
returnEvent = [{"region": "useast", "tenantId": "0ab1ac0a-2867-402d",
"creation_time": "1434331190", "event": "1"}]
listev.return_value = returnEvent
mysqlRepo.connect.return_value = True
helper_tenant_id.get_tenant_id.return_value = '0ab1ac0a-2867-402d'
event_id = "1"
eventsObj = EventsSubClass()
eventsObj._events_repo = EventsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
eventsObj.on_get(self._generate_req(), res, event_id)
self.assertEqual(returnEvent, json.loads(res.body))
@mock.patch('monasca_events_api.v2.events.Events._list_events')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_pass_events(
self,
helper_tenant_id,
helpers_validate,
mysqlRepo,
listev):
"""GET Method success Multiple Events"""
helpers_validate.validate_authorization.return_value = True
returnEvent = [{"region": "useast", "tenantId": "0ab1ac0a-2867-402d",
"creation_time": "1434331190", "event": "1"},
{"region": "useast", "tenantId": "0ab1ac0a-2866-403d",
"creation_time": "1234567890", "event": "2"}]
listev.return_value = returnEvent
mysqlRepo.connect.return_value = True
helper_tenant_id.get_tenant_id.return_value = '0ab1ac0a-2867-402d'
eventsObj = EventsSubClass()
eventsObj._events_repo = EventsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
eventsObj.on_get(self._generate_req(), res)
self.assertEqual(returnEvent, json.loads(res.body))
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_with_eventid_dbdown(
self,
helper_tenant_id,
helpers_validate,
mysqlRepo):
"""GET method when DB Down with event_ID"""
mysqlRepo.connect.side_effect = RepositoryException(
"Database Connection Error")
helpers_validate.validate_authorization.return_value = True
helper_tenant_id.get_tenant_id.return_value = '0ab1ac0a-2867-402d'
event_id = "0ab1ac0a-2867-402d-83c7-d7087262470c"
eventsObj = EventsSubClass()
eventsObj._events_repo = EventsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
eventsObj.on_get(self._generate_req(), res, event_id)
self.assertFalse(
1,
msg="Database Down, GET should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_mysql_down(
self,
helper_tenant_id,
helpers_validate,
mysqlRepo):
"""GET METHOD without event ID DB DOWN"""
mysqlRepo.connect.side_effect = RepositoryException(
"Database Connection Error")
helpers_validate.return_value = True
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
eventsObj = EventsSubClass()
eventsObj._events_repo = EventsRepository()
res = mock.MagicMock(spec='status')
res.body = {}
try:
eventsObj.on_get(self._generate_req(), res, None)
self.assertFalse(
1,
msg="Database Down, GET should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.events.Events._validate_event')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_post_unauthorized(
self,
tenantid,
json,
http,
event,
validate,
kafka):
"""POST method unauthorized """
json.return_value = None
validate.side_effect = falcon.HTTPUnauthorized('Forbidden',
'Tenant ID is missing a'
'required role to '
'access this service')
http.return_value = self._generate_req()
tenantid.return_value = '0ab1ac0a-2867-402d'
event.return_value = True
eventsObj = EventsSubClass()
eventsObj._message_queue = kafka
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
eventsObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Unauthorized Access, should fail but passed")
except Exception as e:
self.assertRaises(falcon.HTTPUnauthorized)
self.assertEqual(e.status, '401 Unauthorized')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.events.Events._validate_event')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_post_bad_request(
self,
tenantid,
json,
readHttpRes,
event,
validate,
kafka):
"""POST method with bad request body"""
json.return_value = None
validate.return_value = True
readHttpRes.side_effect = falcon.HTTPBadRequest('Bad request',
'Request body is'
'not valid JSON')
tenantid.return_value = '0ab1ac0a-2867-402d'
event.return_value = True
eventsObj = EventsSubClass()
eventsObj._message_queue = kafka
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
eventsObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Get Method should fail but succeeded, bad request sent")
except Exception as e:
self.assertRaises(falcon.HTTPBadRequest)
self.assertEqual(e.status, '400 Bad Request')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.events.Events._validate_event')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_post_kafka_down(
self,
tenantid,
json,
readHttpRes,
event,
validate,
kafka):
"""POST method with Kafka Down"""
kafka.send_message_batch.side_effect = exceptions.MessageQueueException()
json.return_value = None
validate.return_value = True
readHttpRes.return_value = {
'event_type': 'compute.instance.create.start',
'timestamp': '2015-06-17T21:57:03.493436',
'message_id': '1f4609b5-f01d-11e4-81ac-20c9d0b84f8b'
}
tenantid.return_value = '0ab1ac0a-2867-402d'
event.return_value = True
eventsObj = EventsSubClass()
eventsObj._message_queue = kafka
res = mock.MagicMock()
res.body = {}
try:
eventsObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Kafka Server Down, Post should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_post_pass_validate_event(self, tenantid, json, readHttpRes, validate, kafka):
"""POST method passed due to validate event """
jsonObj = {
'event_type': 'compute.instance.create.start',
'timestamp': '2015-06-17T21:57:03.493436',
'message_id': '1f4609b5-f01d-11e4-81ac-20c9d0b84f8b'
}
json.return_value = True
validate.return_value = True
readHttpRes.return_value = jsonObj
tenantid.return_value = '0ab1ac0a-2867-402d'
eventsObj = EventsSubClass()
eventsObj._message_queue = kafka
res = mock.MagicMock()
res.body = {}
res.status = 0
eventsObj.on_post(self._generate_req(), res)
self.assertEqual(falcon.HTTP_204, res.status)
self.assertEqual({}, res.body)
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_post_fail_on_validate_event(self, tenantid, json, readHttpRes, validate, kafka):
"""POST method failed due to validate event"""
# _tenant_id is a reserved word that cannot be used
jsonObj = {
'event_type': 'compute.instance.create.start',
'timestamp': '2015-06-17T21:57:03.493436',
'message_id': '1f4609b5-f01d-11e4-81ac-20c9d0b84f8b',
'_tenant_id': '0ab1ac0a-2867-402d'
}
json.return_value = True
validate.return_value = True
readHttpRes.return_value = jsonObj
tenantid.return_value = '0ab1ac0a-2867-402d'
eventsObj = EventsSubClass()
eventsObj._message_queue = kafka
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
eventsObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Post Method should fail but succeeded, bad request sent")
except Exception as e:
self.assertRaises(falcon.HTTPBadRequest)
self.assertEqual(e.status, '400 Bad Request')
self.assertEqual({}, res.body)
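The try/except blocks in these tests call `self.assertRaises(falcon.HTTPInternalServerError)` with no callable, which creates an unused context manager and asserts nothing by itself; the test only passes because of the explicit `assertEqual` on `e.status`. A minimal sketch of the idiomatic pattern, using a hypothetical `FakeHTTPError` stand-in for falcon's status-carrying errors:

```python
import unittest

class FakeHTTPError(Exception):
    # stand-in for falcon HTTP errors, which expose a status string
    status = '500 Internal Server Error'

def failing_handler():
    raise FakeHTTPError()

class ExampleTest(unittest.TestCase):
    def test_raises(self):
        # assertRaises as a context manager fails the test when no
        # exception is raised and exposes the exception for inspection
        with self.assertRaises(FakeHTTPError) as ctx:
            failing_handler()
        self.assertEqual('500 Internal Server Error', ctx.exception.status)
```

Run with `python -m unittest`; the context-manager form removes the need for the `assertFalse(1, msg=...)` sentinel used above.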

View File

@ -1,454 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
import json
from monasca_events_api.common.repositories.mysql.streams_repository import StreamsRepository
from monasca_events_api.v2.stream_definitions import StreamDefinitions
import mock
from monasca_events_api.common.repositories.exceptions import AlreadyExistsException
from monasca_events_api.common.repositories.exceptions import RepositoryException
import unittest
class StreamDefinitionsSubClass(StreamDefinitions):
def __init__(self):
self._default_authorized_roles = ['user', 'domainuser',
'domainadmin', 'monasca-user']
self._post_events_authorized_roles = [
'user',
'domainuser',
'domainadmin',
'monasca-user',
'monasca-agent']
self._stream_definitions_repo = None
self.stream_definition_event_message_queue = None
self._region = 'useast'
class Test_StreamDefinitions(unittest.TestCase):
def _generate_req(self):
"""Generate a mock HTTP request"""
req = mock.MagicMock()
req.get_param.return_value = None
req.headers = {
'X-Auth-User': 'mini-mon',
'X-Auth-Token': 'ABCD',
'X-Auth-Key': 'password',
'X-TENANT-ID': '0ab1ac0a-2867-402d',
'X-ROLES': 'user, domainuser, domainadmin, monasca-user, monasca-agent',
'Accept': 'application/json',
'User-Agent': 'python-monascaclient',
'Content-Type': 'application/json'}
req.body = {}
req.uri = "/v2.0/stream-definitions/{stream_id}"
req.content_type = 'application/json'
return req
@mock.patch('monasca_events_api.v2.common.helpers.add_links_to_resource')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.normalize_offset')
@mock.patch('monasca_events_api.v2.common.helpers.get_query_name')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_stream_fail_db_down(
self,
tenant_id,
validate,
mysqlRepo,
qname,
normalize,
repo,
getlinks):
"""GET Method FAIL Single Stream"""
repo.connect.side_effect = RepositoryException(
"Database Connection Error")
validate.return_value = True
normalize.return_value = 5
qname.return_value = "Test"
tenant_id.return_value = '0ab1ac0a-2867-402d'
stream_id = "1"
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
streamsObj.on_get(self._generate_req(), res, stream_id)
self.assertFalse(
1,
msg="Database Down, GET should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch(
'monasca_events_api.v2.stream_definitions.StreamDefinitions._stream_definition_show')
@mock.patch('monasca_events_api.v2.common.helpers.add_links_to_resource')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_get_streamid_pass(
self,
validate,
tenant_id,
getlinks,
definitionshow,
mysqlRepo):
"""GET Method SUCCESS Single Stream"""
validate.return_value = True
tenant_id.return_value = '0ab1ac0a-2867-402d'
returnStream = [{"region": "useast", "tenantId": "0ab1ac0a-2867-402d",
"creation_time": "1434331190", "stream": "1"}]
definitionshow.return_value = returnStream
getlinks.return_value = "/v2.0/stream-definitions/{stream_id}"
mysqlRepo.connect.return_value = True
stream_id = "1"
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
streamsObj.on_get(self._generate_req(), res, stream_id)
self.assertEqual(returnStream, json.loads(res.body))
self.assertEqual(res.status, '200 OK')
@mock.patch('monasca_events_api.v2.common.helpers.normalize_offset')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_streams_fail_db_down(
self,
tenant_id,
validate,
mysqlRepo,
normalize):
"""GET Method FAILS Multiple Streams"""
mysqlRepo.connect.side_effect = RepositoryException(
"Database Connection Error")
validate.return_value = True
tenant_id.return_value = '0ab1ac0a-2867-402d'
normalize.return_value = 5
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
streamsObj.on_get(self._generate_req(), res, None)
self.assertFalse(
1,
msg="Database Down, GET should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch('monasca_events_api.v2.common.helpers.normalize_offset')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch(
'monasca_events_api.v2.stream_definitions.StreamDefinitions._stream_definition_list')
@mock.patch('monasca_events_api.v2.common.helpers.add_links_to_resource')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_get_streams_pass(
self,
validate,
tenant_id,
getlinks,
definitionlist,
mysqlRepo,
normalize):
"""GET Method SUCCESS Streams List"""
validate.return_value = True
tenant_id.return_value = '0ab1ac0a-2867-402d'
returnStreams = [{"region": "useast", "tenantId": "0ab1ac0a-2867-402d",
"creation_time": "1434331190", "stream": "1"},
{"region": "useast", "tenantId": "0ab1ac0a-2866-403d",
"creation_time": "1234567890", "stream": "2"}]
definitionlist.return_value = returnStreams
normalize.return_value = 5
getlinks.return_value = "/v2.0/stream-definitions/"
mysqlRepo.connect.return_value = True
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
streamsObj.on_get(self._generate_req(), res, None)
self.assertEqual(returnStreams, json.loads(res.body))
self.assertEqual(res.status, '200 OK')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_expire_actions')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_fire_actions')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_description')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_name')
@mock.patch(
'monasca_events_api.v2.stream_definitions.StreamDefinitions._validate_stream_definition')
@mock.patch('monasca_events_api.v2.common.helpers.read_json_msg_body')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_post_integrity_error(
self,
validate,
tenantid,
json,
readjson,
streamvalid,
getname,
desc,
fireactions,
expire,
repo):
"""POST method failed due to integrity error"""
validate.return_value = True
repo.connect.side_effect = AlreadyExistsException()
fireactions.return_value = "fire_actions"
getname.return_value = "Test"
expire.return_value = "expire_actions"
desc.return_value = "Stream_Description"
readjson.return_value = {
u'fire_criteria': [
{
u'event_type': u'compute.instance.create.start'},
{
u'event_type': u'compute.instance.create.end'}],
u'description': u'provisioning duration',
u'group_by': [u'instance_id'],
u'expiration': 90000,
u'select': [
{
u'event_type': u'compute.instance.create.*'}],
u'name': u'buzz'}
tenantid.return_value = '0ab1ac0a-2867-402d'
streamvalid.return_value = True
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
streamsObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="DB Integrity Error, should fail but passed")
except Exception as e:
self.assertRaises(falcon.HTTPConflict)
self.assertEqual(e.status, '409 Conflict')
@mock.patch('monasca_events_api.v2.common.helpers.add_links_to_resource')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch(
'monasca_events_api.v2.stream_definitions.StreamDefinitions._stream_definition_create')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_expire_actions')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_fire_actions')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_description')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_name')
@mock.patch(
'monasca_events_api.v2.stream_definitions.StreamDefinitions._validate_stream_definition')
@mock.patch('monasca_events_api.v2.common.helpers.read_json_msg_body')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_post_pass__validate_stream_definition(
self,
validate,
tenantid,
readjson,
streamvalid,
getname,
desc,
fireactions,
expire,
streamsrepo,
kafka,
addlink):
"""POST method successful"""
validate.return_value = True
fireactions.return_value = "fire_actions"
getname.return_value = "Test"
addlink.return_value = "/v2.0/stream-definitions/{stream_id}"
expire.return_value = "expire_actions"
desc.return_value = "Stream_Description"
responseObj = {u'fire_criteria': [{u'event_type': u'compute.instance.create.start'},
{u'event_type': u'compute.instance.create.end'}],
u'description': u'provisioning duration',
u'group_by': [u'instance_id'],
u'expiration': 90000,
u'select': [{u'event_type': u'compute.instance.create.*'}],
u'name': u'buzz'}
readjson.return_value = responseObj
streamsrepo.return_value = responseObj
tenantid.return_value = '0ab1ac0a-2867-402d'
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
streamsObj.stream_definition_event_message_queue = kafka
res = mock.MagicMock()
res.body = {}
res.status = 0
streamsObj.on_post(self._generate_req(), res)
self.assertEqual(falcon.HTTP_201, res.status)
self.assertEqual(responseObj, json.loads(res.body))
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
@mock.patch('monasca_events_api.v2.common.helpers.add_links_to_resource')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_expire_actions')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_fire_actions')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_description')
@mock.patch('monasca_events_api.v2.common.helpers.read_json_msg_body')
@mock.patch(
'monasca_events_api.v2.stream_definitions.get_query_stream_definition_name')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_post_fail__validate_stream_definition(
self,
tenantid,
getname,
readjson,
desc,
fireactions,
expire,
kafka,
addlink,
httpRes,
authorization):
"""POST method failed due to invalid body"""
fireactions.return_value = "fire_actions"
getname.return_value = "Test"
addlink.return_value = "/v2.0/stream-definitions/{stream_id}"
expire.return_value = "expire_actions"
desc.return_value = "Stream_Description"
# name removed from body
responseObj = {u'fire_criteria': [{u'event_type': u'compute.instance.create.start'},
{u'event_type': u'compute.instance.create.end'}],
u'description': u'provisioning duration',
u'group_by': [u'instance_id'],
u'expiration': 90000,
u'name': u'buzz'}
tenantid.return_value = '0ab1ac0a-2867-402d'
readjson.return_value = responseObj
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
streamsObj.stream_definition_event_message_queue = kafka
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
streamsObj.on_post(self._generate_req(), res)
self.assertFalse(1, msg="Bad Request Sent, should fail but passed")
except Exception as e:
self.assertRaises(falcon.HTTPBadRequest)
self.assertEqual(e.status, '400 Bad Request')
@mock.patch('monasca_events_api.v2.common.helpers.read_json_msg_body')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_post_badrequest(self, validate, readjson):
"""POST method Fail Due to bad request"""
validate.return_value = True
readjson.side_effect = falcon.HTTPBadRequest(
'Bad request',
'Request body is not valid JSON')
streamsObj = StreamDefinitionsSubClass()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
streamsObj.on_post(self._generate_req(), res)
self.assertFalse(1, msg="Bad Request Sent, should fail but passed")
except Exception as e:
self.assertRaises(falcon.HTTPBadRequest)
self.assertEqual(e.status, '400 Bad Request')
@mock.patch(
'monasca_events_api.common.repositories.mysql.streams_repository.StreamsRepository.delete_stream_definition')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_delete_fail(self, validate, tenantid, deleteStream):
"""DELETE method failed due to database down"""
validate.return_value = True
tenantid.return_value = '0ab1ac0a-2867-402d'
deleteStream.side_effect = RepositoryException(
"Database Connection Error")
stream_id = "1"
streamsObj = StreamDefinitionsSubClass()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
streamsObj.on_delete(self._generate_req(), res, stream_id)
self.assertFalse(1, msg="Database Down, should fail but passed")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch(
'monasca_events_api.v2.stream_definitions.StreamDefinitions._stream_definition_delete')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_delete_pass(self, validate, tenantid, mysql, deleteStream):
"""DELETE method successful """
validate.return_value = True
tenantid.return_value = '0ab1ac0a-2867-402d'
deleteStream.return_value = True
stream_id = "1"
streamsObj = StreamDefinitionsSubClass()
streamsObj._stream_definitions_repo = StreamsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
streamsObj.on_delete(self._generate_req(), res, stream_id)
self.assertEqual("204 No Content", res.status)

View File

@ -1,418 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
import json
from monasca_events_api.common.repositories.mysql.transforms_repository import TransformsRepository
from monasca_events_api.v2.transforms import Transforms
import mock
from monasca_events_api.common.repositories import exceptions as repository_exceptions
import unittest
class TransformsSubClass(Transforms):
def __init__(self):
self._default_authorized_roles = ['user', 'domainuser',
'domainadmin', 'monasca-user']
self._transforms_repo = None
self._region = 'useast'
self._message_queue = None
class Test_Transforms(unittest.TestCase):
def _generate_req(self):
"""Generate a mock HTTP request"""
req = mock.MagicMock()
req.get_param.return_value = None
req.headers = {
'X-Auth-User': 'mini-mon',
'X-Auth-Token': "ABCD",
'X-Auth-Key': 'password',
'X-TENANT-ID': '0ab1ac0a-2867-402d',
'X-ROLES': 'user, domainuser, domainadmin, monasca-user, monasca-agent',
'Accept': 'application/json',
'User-Agent': 'python-monascaclient',
'Content-Type': 'application/json'}
req.body = {}
req.content_type = 'application/json'
return req
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_fail_db_down(
self,
helper_tenant_id,
helpers_validate,
mysqlRepo):
"""GET Method fail due to db down"""
mysqlRepo.connect.side_effect = repository_exceptions.RepositoryException(
"Database Connection Error")
helpers_validate.return_value = True
mysqlRepo.connect.return_value = True
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_get(self._generate_req(), res)
self.assertFalse(
1,
msg="Database Down, GET should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_get_fail_validate_authorization(self, _validate_authorization):
"""GET Method fail due to validate authorization"""
_validate_authorization.side_effect = falcon.HTTPUnauthorized(
'Forbidden',
'Tenant does not have any roles')
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_get(self._generate_req(), res)
self.assertFalse(
1,
msg="Validate Authorization failed, GET should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPUnauthorized)
self.assertEqual(e.status, '401 Unauthorized')
@mock.patch('monasca_events_api.v2.transforms.Transforms._list_transforms')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_get_pass(
self,
helper_tenant_id,
helpers_validate,
list_transforms):
"""GET Method success Single Event"""
helpers_validate.return_value = True
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
returnTransform = [{"id": "1",
"name": "Trans1",
"description": "Desc1",
"specification": "AutoSpec1",
"enabled": "True"},
{"id": "2",
"name": "Trans2",
"description": "Desc2",
"specification": "AutoSpec2",
"enabled": "False"}]
list_transforms.return_value = returnTransform
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
transObj.on_get(self._generate_req(), res)
self.assertEqual(res.status, '200 OK')
self.assertEqual(returnTransform, json.loads(json.dumps(res.body)))
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_delete_fail(
self,
helper_tenant_id,
helpers_validate,
mysqlRepo):
"""DELETE Method fail due to db down"""
mysqlRepo.connect.side_effect = repository_exceptions.RepositoryException(
"Database Connection Error")
helpers_validate.return_value = True
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
transform_id = "0ab1ac0a"
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_delete(self._generate_req(), res, transform_id)
self.assertFalse(
1,
msg="Database Down, delete should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_delete_fail_validate_authorization(
self,
_validate_authorization):
"""Post Method fail due to validate authorization"""
_validate_authorization.side_effect = falcon.HTTPUnauthorized(
'Forbidden',
'Tenant does not have any roles')
transform_id = "0ab1ac0a"
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_delete(self._generate_req(), res, transform_id)
self.assertFalse(
1,
msg="Validate Authorization failed, delete should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPUnauthorized)
self.assertEqual(e.status, '401 Unauthorized')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._delete_transform')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
def test_on_delete_pass(
self,
helper_tenant_id,
helpers_validate,
deleteTransform,
kafka):
"""DELETE Method pass"""
helpers_validate.return_value = True
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
transform_id = "0ab1ac0a"
deleteTransform.return_value = True
transObj = TransformsSubClass()
transObj._message_queue = kafka
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
transObj.on_delete(self._generate_req(), res, transform_id)
self.assertEqual(res.status, '204 No Content')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._validate_transform')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._delete_transform')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
def test_on_post_fail_db_down(
self,
readhttp,
helper_tenant_id,
helpers_validate,
deleteTransform,
validjson,
validateTransform,
mysqlRepo):
"""Post Method fail due to db down"""
mysqlRepo.connect.side_effect = repository_exceptions.RepositoryException(
"Database Connection Error")
helpers_validate.return_value = True
validjson.return_value = True
validateTransform.return_value = True
readhttp.return_value = {
'name': 'Foo',
'description': 'transform def',
'specification': 'transform spec'}
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Database Down, POST should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPInternalServerError)
self.assertEqual(e.status, '500 Internal Server Error')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._validate_transform')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
def test_on_post_fail_validate_transform(
self,
readhttp,
helpers_validate,
validjson,
_validate_transform):
"""Post Method fail due to validate transform"""
helpers_validate.return_value = True
validjson.return_value = True
_validate_transform.side_effect = falcon.HTTPBadRequest(
'Bad request',
'Error')
readhttp.return_value = self._generate_req()
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Validate Trasnform failed, POST should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPBadRequest)
self.assertEqual(e.status, '400 Bad Request')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
def test_on_post_fail_validate_authorization(
self,
_validate_authorization):
"""Post Method fail due to validate authorization"""
_validate_authorization.side_effect = falcon.HTTPUnauthorized(
'Forbidden',
'Tenant does not have any roles')
transObj = TransformsSubClass()
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Validate Authorization failed, POST should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPUnauthorized)
self.assertEqual(e.status, '401 Unauthorized')
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._create_transform_response')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._validate_transform')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._delete_transform')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
def test_on_post_pass_valid_request(
self,
readhttp,
helper_tenant_id,
helpers_validate,
deleteTransform,
validjson,
validateTransform,
mysqlRepo,
createRes,
kafka):
"""Post Method pass due to valid request"""
helpers_validate.return_value = True
validjson.return_value = True
returnTransform = {'name': 'Trans1',
'description': 'Desc1',
'specification': 'AutoSpec1'
}
createRes.return_value = returnTransform
readhttp.return_value = returnTransform
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
transObj = TransformsSubClass()
transObj._message_queue = kafka
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
transObj.on_post(self._generate_req(), res)
        self.assertEqual(res.status, falcon.HTTP_200)
self.assertEqual(returnTransform, json.loads(json.dumps(res.body)))
@mock.patch(
'monasca_events_api.common.messaging.kafka_publisher.KafkaPublisher')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._create_transform_response')
@mock.patch(
'monasca_events_api.common.repositories.mysql.mysql_repository.mdb')
@mock.patch(
'monasca_events_api.v2.common.helpers.validate_json_content_type')
@mock.patch(
'monasca_events_api.v2.transforms.Transforms._delete_transform')
@mock.patch('monasca_events_api.v2.common.helpers.validate_authorization')
@mock.patch('monasca_events_api.v2.common.helpers.get_tenant_id')
@mock.patch('monasca_events_api.v2.common.helpers.read_http_resource')
    def test_on_post_fail_invalid_request(
self,
readhttp,
helper_tenant_id,
helpers_validate,
deleteTransform,
validjson,
mysqlRepo,
createRes,
kafka):
"""Post Method fails due to invalid request"""
helpers_validate.return_value = True
validjson.return_value = True
returnTransform = {
'description': 'Desc1',
'specification': 'AutoSpec1'
}
createRes.return_value = returnTransform
readhttp.return_value = returnTransform
helper_tenant_id.return_value = '0ab1ac0a-2867-402d'
transObj = TransformsSubClass()
transObj._message_queue = kafka
transObj._transforms_repo = TransformsRepository()
res = mock.MagicMock()
res.body = {}
res.status = 0
try:
transObj.on_post(self._generate_req(), res)
self.assertFalse(
1,
msg="Validate transform failed, POST should fail but succeeded")
except Exception as e:
self.assertRaises(falcon.HTTPBadRequest)
self.assertEqual(e.status, '400 Bad Request')
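Many of the deleted tests use a `try`/`except` block plus `assertFalse(1, msg=...)` to check that a handler raises. The idiomatic `unittest` equivalent is `assertRaises` as a context manager, which also exposes the caught exception for further assertions. A sketch under stated assumptions (`FakeHTTPError` and `on_post` are stand-ins for falcon's error types and the real handler):

```python
import unittest


class FakeHTTPError(Exception):
    """Stand-in for falcon.HTTPError; carries a status string."""
    def __init__(self, status):
        super(FakeHTTPError, self).__init__(status)
        self.status = status


def on_post(db_up):
    """Hypothetical handler that fails when the database is down."""
    if not db_up:
        raise FakeHTTPError('500 Internal Server Error')
    return '200 OK'


class TestAssertRaises(unittest.TestCase):
    def test_db_down(self):
        # assertRaises as a context manager replaces try/except/assertFalse.
        with self.assertRaises(FakeHTTPError) as ctx:
            on_post(db_up=False)
        self.assertEqual('500 Internal Server Error', ctx.exception.status)
```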


@@ -1,114 +0,0 @@
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_config import types
"""Configurations for reference implementation
I think that these configuration parameters should have been split into
small groups and be set into each implementation where they get used.
For example: kafka configuration should have been in the implementation
where kafka get used. It seems to me that the configuration for kafka gets
used in kafka_publisher, but the original settings were at the api/server.py
which I think is at the wrong place. I move these settings here for now, we
need to have a bit more re-engineering to get it right.
"""
global_opts = [cfg.StrOpt('region', help='Region that API is running in')]
cfg.CONF.register_opts(global_opts)
security_opts = [cfg.ListOpt('default_authorized_roles', default=['admin'],
help='Roles that are allowed full access to the '
'API'),
cfg.ListOpt('agent_authorized_roles', default=['agent'],
help='Roles that are only allowed to POST to '
'the API'),
cfg.ListOpt('delegate_authorized_roles', default=['admin'],
help='Roles that are allowed to POST metrics on '
'behalf of another tenant')]
security_group = cfg.OptGroup(name='security', title='security')
cfg.CONF.register_group(security_group)
cfg.CONF.register_opts(security_opts, security_group)
messaging_opts = [cfg.StrOpt('driver', default='kafka',
help='The message queue driver to use'),
cfg.StrOpt('events_message_format', default='reference',
help='The type of events message format to '
'publish to the message queue')]
messaging_group = cfg.OptGroup(name='messaging', title='messaging')
cfg.CONF.register_group(messaging_group)
cfg.CONF.register_opts(messaging_opts, messaging_group)
repositories_opts = [
cfg.StrOpt('streams',
default='monasca_events_api.common.repositories.streams_repository:StreamsRepository',
help='The repository driver to use for streams'),
cfg.StrOpt('events',
default='monasca_events_api.common.repositories.events_repository:EventsRepository',
help='The repository driver to use for events'),
cfg.StrOpt('transforms',
default='monasca_events_api.common.repositories.transforms_repository:TransformsRepository',
help='The repository driver to use for transforms')]
repositories_group = cfg.OptGroup(name='repositories', title='repositories')
cfg.CONF.register_group(repositories_group)
cfg.CONF.register_opts(repositories_opts, repositories_group)
kafka_opts = [cfg.StrOpt('uri', help='Address to kafka server. For example: '
'uri=192.168.1.191:9092'),
              cfg.StrOpt('events_topic', default='raw-events',
                         help='The topic that events will be published to.'),
cfg.StrOpt('group', default='api',
help='The group name that this service belongs to.'),
              cfg.IntOpt('wait_time', default=1,
                         help='The wait time when no messages on kafka '
                              'queue.'),
              cfg.IntOpt('ack_time', default=20,
                         help='The ack time back to kafka.'),
cfg.IntOpt('max_retry', default=3,
help='The number of retry when there is a '
'connection error.'),
              cfg.BoolOpt('auto_commit', default=False,
                          help='Whether to automatically commit when '
                               'consuming messages.'),
cfg.BoolOpt('async', default=True, help='The type of posting.'),
cfg.BoolOpt('compact', default=True, help=(
                  'Specify if the message received should be parsed. '
'If True, message will not be parsed, otherwise '
'messages will be parsed.')),
              cfg.MultiOpt('partitions', item_type=types.Integer(),
                           default=[0],
                           help='The kafka partitions this service '
                                'consumes from.'),
cfg.BoolOpt('drop_data', default=False, help=(
'Specify if received data should be simply dropped. '
'This parameter is only for testing purposes.')), ]
kafka_group = cfg.OptGroup(name='kafka', title='kafka')
cfg.CONF.register_group(kafka_group)
cfg.CONF.register_opts(kafka_opts, kafka_group)
mysql_opts = [cfg.StrOpt('database_name'), cfg.StrOpt('hostname'),
cfg.StrOpt('username'), cfg.StrOpt('password')]
mysql_group = cfg.OptGroup(name='mysql', title='mysql')
cfg.CONF.register_group(mysql_group)
cfg.CONF.register_opts(mysql_opts, mysql_group)
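Each `cfg.OptGroup` registered above maps to an INI section of the same name in the service's configuration file, with one `key = value` per option. A sketch of such a file, parsed here with stdlib `configparser` for illustration (the values are made up; the real service reads this through oslo.config, not configparser):

```python
import configparser

# Illustrative config fragment matching the [kafka] and [mysql]
# option groups registered above; all values are hypothetical.
SAMPLE = """
[kafka]
uri = 192.168.1.191:9092
events_topic = raw-events
group = api
wait_time = 1

[mysql]
database_name = mon
hostname = 127.0.0.1
username = monapi
password = password
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)
print(parser.get('kafka', 'events_topic'))   # raw-events
print(parser.getint('kafka', 'wait_time'))   # 1
```

With oslo.config, the same values would be reached through namespaced attributes such as `cfg.CONF.kafka.events_topic` once the file is loaded.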

View File

@@ -1,423 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import json
import urllib
import urlparse
import falcon
from oslo_log import log
import simplejson
from monasca_events_api.common.repositories import constants
LOG = log.getLogger(__name__)
def read_json_msg_body(req):
"""Read the json_msg from the http request body and return them as JSON.
:param req: HTTP request object.
:return: Returns the metrics as a JSON object.
:raises falcon.HTTPBadRequest:
"""
try:
msg = req.stream.read()
json_msg = json.loads(msg)
return json_msg
except ValueError as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request',
'Request body is not valid JSON')
def validate_json_content_type(req):
if req.content_type not in ['application/json']:
raise falcon.HTTPBadRequest('Bad request', 'Bad content type. Must be '
'application/json')
def is_in_role(req, authorized_roles):
"""Is one or more of the X-ROLES in the supplied authorized_roles.
:param req: HTTP request object. Must contain "X-ROLES" in the HTTP
request header.
:param authorized_roles: List of authorized roles to check against.
:return: Returns True if in the list of authorized roles, otherwise False.
"""
str_roles = req.get_header('X-ROLES')
if str_roles is None:
return False
roles = str_roles.lower().split(',')
for role in roles:
if role in authorized_roles:
return True
return False
def validate_authorization(req, authorized_roles):
"""Validates whether one or more X-ROLES in the HTTP header is authorized.
:param req: HTTP request object. Must contain "X-ROLES" in the HTTP
request header.
:param authorized_roles: List of authorized roles to check against.
:raises falcon.HTTPUnauthorized
"""
str_roles = req.get_header('X-ROLES')
if str_roles is None:
raise falcon.HTTPUnauthorized('Forbidden',
'Tenant does not have any roles')
roles = str_roles.lower().split(',')
for role in roles:
if role in authorized_roles:
return
raise falcon.HTTPUnauthorized('Forbidden',
'Tenant ID is missing a required role to '
'access this service')
def get_tenant_id(req):
"""Returns the tenant ID in the HTTP request header.
:param req: HTTP request object.
"""
return req.get_header('X-TENANT-ID')
def get_x_tenant_or_tenant_id(req, delegate_authorized_roles):
"""Evaluates whether the tenant ID or cross tenant ID should be returned.
:param req: HTTP request object.
:param delegate_authorized_roles: List of authorized roles that have
delegate privileges.
:returns: Returns the cross tenant or tenant ID.
"""
if is_in_role(req, delegate_authorized_roles):
params = falcon.uri.parse_query_string(req.query_string)
if 'tenant_id' in params:
tenant_id = params['tenant_id']
return tenant_id
return get_tenant_id(req)
def get_query_param(req, param_name, required=False, default_val=None):
try:
params = falcon.uri.parse_query_string(req.query_string)
if param_name in params:
param_val = params[param_name].decode('utf8')
return param_val
else:
if required:
raise Exception("Missing " + param_name)
else:
return default_val
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def normalize_offset(offset):
return u'' if offset == u'x' else offset
def get_query_name(req, name_required=False):
"""Returns the query param "name" if supplied.
:param req: HTTP request object.
"""
try:
params = falcon.uri.parse_query_string(req.query_string)
if 'name' in params:
name = params['name']
return name
else:
if name_required:
raise Exception("Missing name")
else:
return ''
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def get_query_dimensions(req):
"""Gets and parses the query param dimensions.
:param req: HTTP request object.
:return: Returns the dimensions as a JSON object
:raises falcon.HTTPBadRequest: If dimensions are malformed.
"""
try:
params = falcon.uri.parse_query_string(req.query_string)
dimensions = {}
if 'dimensions' in params:
dimensions_str = params['dimensions']
dimensions_str_array = dimensions_str.split(',')
for dimension in dimensions_str_array:
dimension_name_value = dimension.split(':')
if len(dimension_name_value) == 2:
dimensions[dimension_name_value[0]] = dimension_name_value[
1]
else:
raise Exception('Dimensions are malformed')
return dimensions
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def get_query_starttime_timestamp(req, required=True):
try:
params = falcon.uri.parse_query_string(req.query_string)
if 'start_time' in params:
return _convert_time_string(params['start_time'])
else:
if required:
raise Exception("Missing start time")
else:
return None
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def get_query_endtime_timestamp(req, required=True):
try:
params = falcon.uri.parse_query_string(req.query_string)
if 'end_time' in params:
return _convert_time_string(params['end_time'])
else:
if required:
raise Exception("Missing end time")
else:
return None
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def _convert_time_string(date_time_string):
dt = datetime.datetime.strptime(date_time_string, "%Y-%m-%dT%H:%M:%SZ")
timestamp = (dt - datetime.datetime(1970, 1, 1)).total_seconds()
return timestamp
def get_query_statistics(req):
try:
params = falcon.uri.parse_query_string(req.query_string)
if 'statistics' in params:
statistics = params['statistics'].split(',')
statistics = [statistic.lower() for statistic in statistics]
if not all(statistic in ['avg', 'min', 'max', 'count', 'sum'] for
statistic in statistics):
raise Exception("Invalid statistic")
return statistics
else:
raise Exception("Missing statistics")
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def get_query_period(req):
try:
params = falcon.uri.parse_query_string(req.query_string)
if 'period' in params:
return params['period']
else:
return None
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def paginate(resource, uri):
limit = constants.PAGE_LIMIT
parsed_uri = urlparse.urlparse(uri)
self_link = build_base_uri(parsed_uri)
if resource and len(resource) >= limit:
if 'timestamp' in resource[limit - 1]:
new_offset = resource[limit - 1]['timestamp']
if 'id' in resource[limit - 1]:
new_offset = resource[limit - 1]['id']
next_link = build_base_uri(parsed_uri)
new_query_params = [u'offset' + '=' + urllib.quote(
new_offset.encode('utf8'), safe='')]
if new_query_params:
next_link += '?' + '&'.join(new_query_params)
resource = {u'links': ([{u'rel': u'self',
u'href': self_link.decode('utf8')},
{u'rel': u'next',
u'href': next_link.decode('utf8')}]),
u'elements': resource[:limit]}
else:
resource = {u'links': ([{u'rel': u'self',
u'href': self_link.decode('utf8')}]),
u'elements': resource}
return resource
def paginate_measurement(measurement, uri, offset):
    if offset is not None:
if measurement['measurements']:
if len(measurement['measurements']) >= constants.PAGE_LIMIT:
new_offset = measurement['id']
parsed_uri = urlparse.urlparse(uri)
next_link = build_base_uri(parsed_uri)
new_query_params = [u'offset' + '=' + str(new_offset).decode(
'utf8')]
# Add the query parms back to the URL without the original
# offset and dimensions.
for query_param in parsed_uri.query.split('&'):
query_param_name, query_param_val = query_param.split('=')
if (query_param_name.lower() != 'offset' and
query_param_name.lower() != 'dimensions'):
new_query_params.append(query_param)
next_link += '?' + '&'.join(new_query_params)
# Add the dimensions for this particular measurement.
if measurement['dimensions']:
dims = []
for k, v in measurement['dimensions'].iteritems():
dims.append(k + ":" + v)
if dims:
                    next_link += '&dimensions=' + ','.join(dims)
measurement = {u'links': [{u'rel': u'self',
u'href': uri.decode('utf8')},
{u'rel': u'next', u'href':
next_link.decode('utf8')}],
u'elements': measurement}
else:
measurement = {
u'links': [
{u'rel': u'self',
u'href': uri.decode('utf8')}],
u'elements': measurement
}
return measurement
else:
return measurement
def build_base_uri(parsed_uri):
return parsed_uri.scheme + '://' + parsed_uri.netloc + parsed_uri.path
def get_link(uri, resource_id, rel='self'):
"""Returns a link dictionary containing href, and rel.
:param uri: the http request.uri.
:param resource_id: the id of the resource
"""
parsed_uri = urlparse.urlparse(uri)
href = build_base_uri(parsed_uri)
href += '/' + resource_id
if rel:
link_dict = dict(href=href, rel=rel)
else:
link_dict = dict(href=href)
return link_dict
def add_links_to_resource(resource, uri, rel='self'):
"""Adds links to the given resource dictionary.
:param resource: the resource dictionary you wish to add links.
:param uri: the http request.uri.
"""
resource['links'] = [get_link(uri, resource['id'], rel)]
return resource
def add_links_to_resource_list(resourcelist, uri):
"""Adds links to the given resource dictionary list.
:param resourcelist: the list of resources you wish to add links.
:param uri: the http request.uri.
"""
for resource in resourcelist:
add_links_to_resource(resource, uri)
return resourcelist
def read_http_resource(req):
"""Read from http request and return json.
:param req: the http request.
"""
try:
msg = req.stream.read()
json_msg = simplejson.loads(msg)
return json_msg
except ValueError as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest(
'Bad request',
'Request body is not valid JSON')
def raise_not_found_exception(resource_name, resource_id, tenant_id):
"""Provides exception for not found requests (update, delete, list).
:param resource_name: the name of the resource.
:param resource_id: id of the resource.
:param tenant_id: id of the tenant
"""
msg = 'No %s method exists for tenant_id = %s id = %s' % (
resource_name, tenant_id, resource_id)
raise falcon.HTTPError(
status='404 Not Found',
title='Not Found',
description=msg,
code=404)
def dumpit_utf8(thingy):
return json.dumps(thingy, ensure_ascii=False).encode('utf8')
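The `get_query_dimensions` helper above splits the `dimensions` query parameter on `,` and each entry on `:`, rejecting anything that is not exactly a `name:value` pair. A stdlib-only re-sketch of that parsing logic (`parse_dimensions` is a hypothetical name, and it raises `ValueError` where the helper raises `falcon.HTTPBadRequest`):

```python
def parse_dimensions(dimensions_str):
    """Parse 'name1:value1,name2:value2' into a dict.

    Mirrors the helper above; raises ValueError when malformed.
    """
    dimensions = {}
    if not dimensions_str:
        return dimensions
    for dimension in dimensions_str.split(','):
        parts = dimension.split(':')
        if len(parts) != 2:
            # The real helper wraps this in falcon.HTTPBadRequest.
            raise ValueError('Dimensions are malformed')
        dimensions[parts[0]] = parts[1]
    return dimensions


print(parse_dimensions('hostname:host1,service:monitoring'))
```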


@@ -1,50 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import falcon
from oslo_log import log
from monasca_events_api.common.repositories import exceptions
LOG = log.getLogger(__name__)
def resource_try_catch_block(fun):
def try_it(*args, **kwargs):
try:
return fun(*args, **kwargs)
except falcon.HTTPNotFound:
raise
except exceptions.DoesNotExistException:
raise falcon.HTTPNotFound
except falcon.HTTPBadRequest:
raise
except exceptions.AlreadyExistsException as ex:
raise falcon.HTTPConflict(ex.__class__.__name__, ex.message)
except exceptions.InvalidUpdateException as ex:
raise falcon.HTTPBadRequest(ex.__class__.__name__, ex.message)
except exceptions.RepositoryException as ex:
LOG.exception(ex)
msg = " ".join(map(str, ex.message.args))
raise falcon.HTTPInternalServerError('Service unavailable', msg)
except Exception as ex:
LOG.exception(ex)
            raise falcon.HTTPInternalServerError('Service unavailable', str(ex))
return try_it
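The decorator above centralizes error mapping: repository-layer exceptions are caught in one place and re-raised as the appropriate HTTP error. A minimal stdlib sketch of the same idea, with stand-in exception classes instead of falcon's (all names here are hypothetical):

```python
import functools


class RepositoryException(Exception):
    """Stand-in for the repository-layer exception."""


class HTTPInternalServerError(Exception):
    """Stand-in for falcon.HTTPInternalServerError."""
    def __init__(self, title, description):
        super().__init__(title)
        self.title = title
        self.description = description


def resource_try_catch_block(fun):
    """Map repository-layer errors to an HTTP-style error."""
    @functools.wraps(fun)
    def try_it(*args, **kwargs):
        try:
            return fun(*args, **kwargs)
        except RepositoryException as ex:
            raise HTTPInternalServerError('Service unavailable', str(ex))
    return try_it


@resource_try_catch_block
def on_get():
    # Simulate a failing repository call.
    raise RepositoryException('Database Connection Error')
```

Handlers decorated this way stay free of repetitive try/except blocks; only the decorator knows the repository-to-HTTP error mapping.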


@@ -1,49 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import iso8601
from oslo_log import log
import voluptuous
from monasca_events_api.v2.common.schemas import exceptions
LOG = log.getLogger(__name__)
def DateValidator():
return lambda v: iso8601.parse_date(v)
event_schema = {
voluptuous.Required('event_type'): voluptuous.All(
voluptuous.Any(str, unicode),
voluptuous.Length(max=255)),
voluptuous.Required('message_id'): voluptuous.All(
voluptuous.Any(str, unicode),
voluptuous.Length(max=50)),
voluptuous.Required('timestamp'): DateValidator()}
event_schema = voluptuous.Schema(event_schema,
required=True, extra=True)
request_body_schema = voluptuous.Schema(
voluptuous.Any(event_schema, [event_schema]))
def validate(body):
try:
request_body_schema(body)
except Exception as ex:
LOG.debug(ex)
raise exceptions.ValidationException(str(ex))
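The voluptuous schema above requires `event_type` (at most 255 characters), `message_id` (at most 50), and an ISO-8601 `timestamp`. A stdlib-only approximation of those checks (assumed simplification: `datetime.strptime` with a fixed `Z`-suffixed format, whereas `iso8601.parse_date` also accepts numeric UTC offsets):

```python
from datetime import datetime


class ValidationException(Exception):
    pass


def validate_event(body):
    """Approximate the voluptuous event schema with stdlib checks."""
    try:
        assert isinstance(body['event_type'], str)
        assert len(body['event_type']) <= 255
        assert isinstance(body['message_id'], str)
        assert len(body['message_id']) <= 50
        # Simplified: only the common 'Z'-suffixed ISO-8601 form.
        datetime.strptime(body['timestamp'], '%Y-%m-%dT%H:%M:%SZ')
    except (KeyError, AssertionError, ValueError) as ex:
        raise ValidationException(str(ex))


validate_event({'event_type': 'compute.instance.create.start',
                'message_id': '19701d4a',
                'timestamp': '2017-06-30T08:30:50Z'})
```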


@@ -1,17 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class ValidationException(Exception):
pass


@@ -1,53 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
import voluptuous
from monasca_events_api.v2.common.schemas import exceptions
LOG = log.getLogger(__name__)
MILLISEC_PER_DAY = 86400000
MILLISEC_PER_WEEK = MILLISEC_PER_DAY * 7
stream_definition_schema = {
voluptuous.Required('name'): voluptuous.All(voluptuous.Any(str, unicode),
voluptuous.Length(max=140)),
voluptuous.Required('select'): voluptuous.All(
voluptuous.Any(list)),
voluptuous.Required('group_by'): voluptuous.All(
voluptuous.Any(list)),
voluptuous.Required('fire_criteria'): voluptuous.All(
voluptuous.Any(list)),
voluptuous.Required('expiration'): voluptuous.All(
voluptuous.Any(int), voluptuous.Range(min=0, max=MILLISEC_PER_WEEK)),
voluptuous.Optional('fire_actions'): voluptuous.All(
voluptuous.Any([str], [unicode]), voluptuous.Length(max=400)),
voluptuous.Optional('expire_actions'): voluptuous.All(
voluptuous.Any([str], [unicode]), voluptuous.Length(max=400)),
voluptuous.Optional('actions_enabled'): bool}
request_body_schema = voluptuous.Schema(stream_definition_schema,
required=True, extra=True)
def validate(msg):
try:
request_body_schema(msg)
except Exception as ex:
LOG.debug(ex)
raise exceptions.ValidationException(str(ex))
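The voluptuous schema above is compact but dense; the following plain-Python sketch spells out the same constraints it enforces on a stream-definition body. The function name and the error messages are illustrative only, not part of the real validator:

```python
# Plain-Python sketch of the checks the voluptuous schema enforces; this is
# an illustration of the rules, not the actual validation code path.
MILLISEC_PER_DAY = 86400000
MILLISEC_PER_WEEK = MILLISEC_PER_DAY * 7


def check_stream_definition(body):
    """Return a list of problems found in a stream-definition body."""
    problems = []
    name = body.get('name')
    if not isinstance(name, str) or len(name) > 140:
        problems.append('name must be a string of at most 140 characters')
    for key in ('select', 'group_by', 'fire_criteria'):
        if not isinstance(body.get(key), list):
            problems.append('%s must be a list' % key)
    expiration = body.get('expiration')
    if not isinstance(expiration, int) or not 0 <= expiration <= MILLISEC_PER_WEEK:
        problems.append('expiration must be an integer between 0 and one week')
    return problems
```

Note that the real schema is built with `required=True, extra=True`, so unknown keys pass through while the required ones are enforced.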


@ -1,41 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
import voluptuous
from monasca_events_api.v2.common.schemas import exceptions
LOG = log.getLogger(__name__)
transform_schema = {
voluptuous.Required('name'): voluptuous.Schema(
voluptuous.All(voluptuous.Any(str, unicode),
voluptuous.Length(max=64))),
voluptuous.Required('description'): voluptuous.Schema(
voluptuous.All(voluptuous.Any(str, unicode),
voluptuous.Length(max=250))),
voluptuous.Required('specification'): voluptuous.Schema(
voluptuous.All(voluptuous.Any(str, unicode),
voluptuous.Length(max=64536))),
voluptuous.Optional('enabled'): bool}
request_body_schema = voluptuous.Schema(voluptuous.Any(transform_schema))
def validate(msg):
try:
request_body_schema(msg)
except Exception as ex:
LOG.debug(ex)
raise exceptions.ValidationException(str(ex))


@ -1,17 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def date_handler(obj):
return obj.isoformat() if hasattr(obj, 'isoformat') else obj
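This helper is intended as the `default` hook for `json.dumps`, so datetime objects serialize as ISO-8601 strings. A minimal usage sketch (the payload values are made up):

```python
import datetime
import json


def date_handler(obj):
    # json.dumps calls the default hook only for objects it cannot
    # serialize itself; datetimes expose isoformat(), so they become strings.
    return obj.isoformat() if hasattr(obj, 'isoformat') else obj


payload = {'generated': datetime.datetime(2017, 6, 30, 8, 30, 50)}
encoded = json.dumps(payload, default=date_handler)
# encoded == '{"generated": "2017-06-30T08:30:50"}'
```

One quirk worth knowing: for a non-serializable object without `isoformat()`, the hook returns the object unchanged and `json.dumps` still raises `TypeError`.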


@ -1,158 +0,0 @@
# Copyright 2014 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import re
import falcon
from oslo_config import cfg
from oslo_log import log
import simport
from monasca_events_api.api import events_api_v2
from monasca_events_api.common.messaging import exceptions \
as message_queue_exceptions
from monasca_events_api.common.messaging.message_formats import events \
as message_format_events
from monasca_events_api.v2.common import helpers
from monasca_events_api.v2.common import resource
from monasca_events_api.v2.common.schemas import (
events_request_body_schema as schemas_event)
from monasca_events_api.v2.common.schemas import (
exceptions as schemas_exceptions)
LOG = log.getLogger(__name__)
class Events(events_api_v2.EventsV2API):
def __init__(self):
self._region = cfg.CONF.region
self._default_authorized_roles = (
cfg.CONF.security.default_authorized_roles)
self._delegate_authorized_roles = (
cfg.CONF.security.delegate_authorized_roles)
self._post_events_authorized_roles = (
cfg.CONF.security.default_authorized_roles +
cfg.CONF.security.agent_authorized_roles)
self._message_queue = (
simport.load(cfg.CONF.messaging.driver)("raw-events"))
self._events_repo = (
simport.load(cfg.CONF.repositories.events)())
def on_get(self, req, res, event_id=None):
helpers.validate_authorization(req, self._default_authorized_roles)
tenant_id = helpers.get_tenant_id(req)
if event_id:
result = self._list_event(tenant_id, event_id)
helpers.add_links_to_resource(
result[0], re.sub('/' + event_id, '', req.uri))
res.body = helpers.dumpit_utf8(result)
res.status = falcon.HTTP_200
else:
offset = helpers.normalize_offset(helpers.get_query_param(
req,
'offset'))
limit = helpers.get_query_param(req, 'limit')
result = self._list_events(tenant_id, req.uri, offset, limit)
res.body = helpers.dumpit_utf8(result)
res.status = falcon.HTTP_200
def on_post(self, req, res):
helpers.validate_authorization(req, self._post_events_authorized_roles)
helpers.validate_json_content_type(req)
event = helpers.read_http_resource(req)
self._validate_event(event)
tenant_id = helpers.get_tenant_id(req)
transformed_event = message_format_events.transform(event, tenant_id,
self._region)
self._send_event(transformed_event)
res.status = falcon.HTTP_204
def _validate_event(self, event):
"""Validates the event
:param event: An event object.
:raises falcon.HTTPBadRequest
"""
if '_tenant_id' in event:
raise falcon.HTTPBadRequest(
'Bad request', 'Reserved word _tenant_id may not be used.')
try:
schemas_event.validate(event)
except schemas_exceptions.ValidationException as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def _send_event(self, events):
"""Send the event using the message queue.
        :param events: A list of event objects.
        :raises: falcon.HTTPInternalServerError
"""
try:
self._message_queue.send_message_batch(events)
except message_queue_exceptions.MessageQueueException as ex:
LOG.exception(ex)
raise falcon.HTTPInternalServerError('Service unavailable',
ex.message)
@resource.resource_try_catch_block
def _list_events(self, tenant_id, uri, offset, limit):
rows = self._events_repo.list_events(tenant_id, offset, limit)
return helpers.paginate(self._build_events(rows), uri)
@resource.resource_try_catch_block
def _list_event(self, tenant_id, event_id):
rows = self._events_repo.list_event(tenant_id, event_id)
return self._build_events(rows)
def _build_events(self, rows):
result = collections.OrderedDict()
for row in rows:
event_id, event_data = self._build_event_data(row)
if '_tenant_id' not in event_data:
if event_id['id'] in result:
result[event_id['id']]['data'].update(event_data)
else:
result[event_id['id']] = {
'id': event_id['id'],
'description': event_id['desc'],
'generated': event_id['generated'],
'data': event_data}
return result.values()
def _build_event_data(self, event_row):
event_data = {}
name = event_row['name']
if event_row['t_string']:
event_data[name] = event_row['t_string']
if event_row['t_int']:
event_data[name] = event_row['t_int']
if event_row['t_float']:
event_data[name] = event_row['t_float']
if event_row['t_datetime']:
event_data[name] = float(event_row['t_datetime'])
event_id = {'id': event_row['message_id'],
'desc': event_row['desc'],
'generated': float(event_row['generated'])}
return event_id, event_data
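`_build_events` folds multiple trait rows that share a `message_id` into a single event. A simplified, self-contained sketch of that merge, with hypothetical row fields in place of the repository's column names:

```python
import collections


def merge_rows(rows):
    # Each row carries one trait; rows with the same message_id accumulate
    # into a single event's data dict, preserving first-seen order.
    result = collections.OrderedDict()
    for row in rows:
        entry = result.setdefault(row['message_id'],
                                  {'id': row['message_id'], 'data': {}})
        entry['data'][row['name']] = row['value']
    return list(result.values())


rows = [
    {'message_id': 'e1', 'name': 'host', 'value': 'node-1'},
    {'message_id': 'e1', 'name': 'state', 'value': 'active'},
]
```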


@ -1,507 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import re
import falcon
from oslo_config import cfg
from oslo_log import log
import simport
from monasca_events_api.api import stream_definitions_api_v2
from monasca_events_api.common.messaging import exceptions \
as message_queue_exceptions
from monasca_events_api.common.repositories import exceptions
from monasca_events_api.v2.common import helpers
from monasca_events_api.v2.common import resource
from monasca_events_api.v2.common.schemas import \
(stream_definition_request_body_schema as schema_streams)
from monasca_events_api.v2.common.schemas import exceptions \
as schemas_exceptions
LOG = log.getLogger(__name__)
class StreamDefinitions(stream_definitions_api_v2.StreamDefinitionsV2API):
def __init__(self):
try:
self._region = cfg.CONF.region
self._default_authorized_roles = (
cfg.CONF.security.default_authorized_roles)
self._delegate_authorized_roles = (
cfg.CONF.security.delegate_authorized_roles)
self._post_authorized_roles = (
cfg.CONF.security.default_authorized_roles +
cfg.CONF.security.agent_authorized_roles)
self._stream_definitions_repo = (
simport.load(cfg.CONF.repositories.streams)())
self.stream_definition_event_message_queue = (
simport.load(cfg.CONF.messaging.driver)('stream-definitions'))
except Exception as ex:
LOG.exception(ex)
raise exceptions.RepositoryException(ex)
def on_post(self, req, res):
helpers.validate_authorization(req, self._default_authorized_roles)
stream_definition = helpers.read_json_msg_body(req)
self._validate_stream_definition(stream_definition)
tenant_id = helpers.get_tenant_id(req)
name = get_query_stream_definition_name(stream_definition)
description = get_query_stream_definition_description(
stream_definition)
select = stream_definition['select']
for s in select:
if 'traits' in s:
s['traits']['_tenant_id'] = tenant_id
else:
s['traits'] = {'_tenant_id': tenant_id}
group_by = stream_definition['group_by']
fire_criteria = stream_definition['fire_criteria']
expiration = stream_definition['expiration']
fire_actions = get_query_stream_definition_fire_actions(
stream_definition)
expire_actions = get_query_stream_definition_expire_actions(
stream_definition)
result = self._stream_definition_create(tenant_id, name, description,
select, group_by,
fire_criteria, expiration,
fire_actions, expire_actions)
helpers.add_links_to_resource(result, req.uri)
res.body = helpers.dumpit_utf8(result)
res.status = falcon.HTTP_201
def on_get(self, req, res, stream_id=None):
if stream_id:
helpers.validate_authorization(req, self._default_authorized_roles)
tenant_id = helpers.get_tenant_id(req)
result = self._stream_definition_show(tenant_id, stream_id)
helpers.add_links_to_resource(
result, re.sub('/' + stream_id, '', req.uri))
res.body = helpers.dumpit_utf8(result)
res.status = falcon.HTTP_200
else:
helpers.validate_authorization(req, self._default_authorized_roles)
tenant_id = helpers.get_tenant_id(req)
name = helpers.get_query_name(req)
offset = helpers.normalize_offset(
helpers.get_query_param(req, 'offset'))
limit = helpers.get_query_param(req, 'limit')
result = self._stream_definition_list(tenant_id, name,
req.uri, offset, limit)
res.body = helpers.dumpit_utf8(result)
res.status = falcon.HTTP_200
def on_patch(self, req, res, stream_id):
helpers.validate_authorization(req, self._default_authorized_roles)
stream_definition = helpers.read_json_msg_body(req)
tenant_id = helpers.get_tenant_id(req)
name = get_query_stream_definition_name(stream_definition, return_none=True)
description = get_query_stream_definition_description(
stream_definition, return_none=True)
select = get_query_stream_definition_select(stream_definition, return_none=True)
if select:
for s in select:
if 'traits' in s:
s['traits']['_tenant_id'] = tenant_id
else:
s['traits'] = {'_tenant_id': tenant_id}
group_by = get_query_stream_definition_group_by(stream_definition, return_none=True)
fire_criteria = get_query_stream_definition_fire_criteria(stream_definition, return_none=True)
expiration = get_query_stream_definition_expiration(stream_definition, return_none=True)
fire_actions = get_query_stream_definition_fire_actions(
stream_definition, return_none=True)
expire_actions = get_query_stream_definition_expire_actions(
stream_definition, return_none=True)
result = self._stream_definition_patch(tenant_id,
stream_id,
name,
description,
select,
group_by,
fire_criteria,
expiration,
fire_actions,
expire_actions)
helpers.add_links_to_resource(result, req.uri)
res.body = helpers.dumpit_utf8(result)
res.status = falcon.HTTP_201
def on_delete(self, req, res, stream_id):
helpers.validate_authorization(req, self._default_authorized_roles)
tenant_id = helpers.get_tenant_id(req)
self._stream_definition_delete(tenant_id, stream_id)
res.status = falcon.HTTP_204
@resource.resource_try_catch_block
def _stream_definition_delete(self, tenant_id, stream_id):
stream_definition_row = (
self._stream_definitions_repo.get_stream_definition(tenant_id,
stream_id))
if not self._stream_definitions_repo.delete_stream_definition(
tenant_id, stream_id):
raise falcon.HTTPNotFound
self._send_stream_definition_deleted_event(
stream_id, tenant_id, stream_definition_row['name'])
def _check_invalid_trait(self, stream_definition):
select = stream_definition['select']
for s in select:
if 'traits' in s and '_tenant_id' in s['traits']:
raise falcon.HTTPBadRequest(
'Bad request',
'_tenant_id is a reserved word and invalid trait.')
def _validate_stream_definition(self, stream_definition):
try:
schema_streams.validate(stream_definition)
except schemas_exceptions.ValidationException as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
self._check_invalid_trait(stream_definition)
@resource.resource_try_catch_block
def _stream_definition_create(self, tenant_id, name,
description, select, group_by,
fire_criteria, expiration,
fire_actions, expire_actions):
stream_definition_id = (
self._stream_definitions_repo.
create_stream_definition(tenant_id,
name,
description,
json.dumps(select),
json.dumps(group_by),
json.dumps(fire_criteria),
expiration,
fire_actions,
expire_actions))
self._send_stream_definition_created_event(tenant_id,
stream_definition_id,
name,
select,
group_by,
fire_criteria,
expiration)
result = (
{u'name': name,
u'id': stream_definition_id,
u'description': description,
u'select': select,
u'group_by': group_by,
u'fire_criteria': fire_criteria,
u'expiration': expiration,
u'fire_actions': fire_actions,
u'expire_actions': expire_actions,
u'actions_enabled': u'true'}
)
return result
@resource.resource_try_catch_block
def _stream_definition_patch(self, tenant_id, stream_definition_id, name,
description, select, group_by,
fire_criteria, expiration,
fire_actions, expire_actions):
stream_definition_row = (
self._stream_definitions_repo.patch_stream_definition(tenant_id,
stream_definition_id,
name,
description,
None if select is None else json.dumps(select),
None if group_by is None else json.dumps(group_by),
None if fire_criteria is None else json.dumps(
fire_criteria),
expiration,
fire_actions,
expire_actions))
self._send_stream_definition_updated_event(tenant_id,
stream_definition_id,
name,
select,
group_by,
fire_criteria,
expiration)
result = self._build_stream_definition_show_result(stream_definition_row)
return result
def send_event(self, message_queue, event_msg):
try:
message_queue.send_message(
helpers.dumpit_utf8(event_msg))
except message_queue_exceptions.MessageQueueException as ex:
LOG.exception(ex)
raise falcon.HTTPInternalServerError(
'Message queue service unavailable'.encode('utf8'),
ex.message.encode('utf8'))
@resource.resource_try_catch_block
def _stream_definition_show(self, tenant_id, stream_id):
stream_definition_row = (
self._stream_definitions_repo.get_stream_definition(tenant_id,
stream_id))
return self._build_stream_definition_show_result(stream_definition_row)
@resource.resource_try_catch_block
def _stream_definition_list(self, tenant_id, name, req_uri,
offset, limit):
stream_definition_rows = (
self._stream_definitions_repo.get_stream_definitions(
tenant_id, name, offset, limit))
result = []
for stream_definition_row in stream_definition_rows:
sd = self._build_stream_definition_show_result(
stream_definition_row)
helpers.add_links_to_resource(sd, req_uri)
result.append(sd)
result = helpers.paginate(result, req_uri)
return result
def _build_stream_definition_show_result(self, stream_definition_row):
fire_actions_list = get_comma_separated_str_as_list(
stream_definition_row['fire_actions'])
expire_actions_list = get_comma_separated_str_as_list(
stream_definition_row['expire_actions'])
selectlist = json.loads(stream_definition_row['select_by'])
for s in selectlist:
if '_tenant_id' in s['traits']:
del s['traits']['_tenant_id']
if not s['traits']:
del s['traits']
result = (
{u'name': stream_definition_row['name'],
u'id': stream_definition_row['id'],
u'description': stream_definition_row['description'],
u'select': selectlist,
u'group_by': json.loads(stream_definition_row['group_by']),
u'fire_criteria': json.loads(
stream_definition_row['fire_criteria']),
u'expiration': stream_definition_row['expiration'],
u'fire_actions': fire_actions_list,
u'expire_actions': expire_actions_list,
u'actions_enabled': stream_definition_row['actions_enabled'] == 1,
u'created_at': stream_definition_row['created_at'].isoformat(),
u'updated_at': stream_definition_row['updated_at'].isoformat()}
)
return result
def _send_stream_definition_deleted_event(self, stream_definition_id,
tenant_id, stream_name):
stream_definition_deleted_event_msg = {
u"stream-definition-deleted": {u'tenant_id': tenant_id,
u'stream_definition_id':
stream_definition_id,
u'name': stream_name}}
self.send_event(self.stream_definition_event_message_queue,
stream_definition_deleted_event_msg)
def _send_stream_definition_created_event(self, tenant_id,
stream_definition_id,
name,
select,
group_by,
fire_criteria,
expiration):
stream_definition_created_event_msg = {
u'stream-definition-created': {u'tenant_id': tenant_id,
u'stream_definition_id':
stream_definition_id,
u'name': name,
u'select': select,
u'group_by': group_by,
u'fire_criteria': fire_criteria,
u'expiration': expiration}
}
self.send_event(self.stream_definition_event_message_queue,
stream_definition_created_event_msg)
def _send_stream_definition_updated_event(self, tenant_id,
stream_definition_id,
name,
select,
group_by,
fire_criteria,
expiration):
stream_definition_created_event_msg = {
u'stream-definition-updated': {u'tenant_id': tenant_id,
u'stream_definition_id':
stream_definition_id,
u'name': name,
u'select': select,
u'group_by': group_by,
u'fire_criteria': fire_criteria,
u'expiration': expiration}
}
self.send_event(self.stream_definition_event_message_queue,
stream_definition_created_event_msg)
def get_query_stream_definition_name(stream_definition, return_none=False):
if 'name' in stream_definition:
return stream_definition['name']
else:
if return_none:
return None
else:
return ''
def get_query_stream_definition_description(stream_definition,
return_none=False):
if 'description' in stream_definition:
return stream_definition['description']
else:
if return_none:
return None
else:
return ''
def get_query_stream_definition_select(stream_definition,
return_none=False):
if 'select' in stream_definition:
return stream_definition['select']
else:
if return_none:
return None
else:
return ''
def get_query_stream_definition_group_by(stream_definition,
return_none=False):
if 'group_by' in stream_definition:
return stream_definition['group_by']
else:
if return_none:
return None
else:
return []
def get_query_stream_definition_fire_criteria(stream_definition,
return_none=False):
if 'fire_criteria' in stream_definition:
return stream_definition['fire_criteria']
else:
if return_none:
return None
else:
return ''
def get_query_stream_definition_expiration(stream_definition,
return_none=False):
if 'expiration' in stream_definition:
return stream_definition['expiration']
else:
if return_none:
return None
else:
return ''
def get_query_stream_definition_fire_actions(stream_definition,
return_none=False):
if 'fire_actions' in stream_definition:
return stream_definition['fire_actions']
else:
if return_none:
return None
else:
return []
def get_query_stream_definition_expire_actions(stream_definition,
return_none=False):
if 'expire_actions' in stream_definition:
return stream_definition['expire_actions']
else:
if return_none:
return None
else:
return []
def get_query_stream_definition_actions_enabled(stream_definition,
required=False,
return_none=False):
try:
if 'actions_enabled' in stream_definition:
return stream_definition['actions_enabled']
else:
if return_none:
return None
elif required:
raise Exception("Missing actions-enabled")
else:
return ''
except Exception as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def get_comma_separated_str_as_list(comma_separated_str):
if not comma_separated_str:
return []
else:
return comma_separated_str.decode('utf8').split(',')
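A Python 3 sketch of the helper above (the original first decodes a Python 2 byte string). The falsy-input guard matters because `''.split(',')` would return `['']` rather than an empty list:

```python
def comma_str_to_list(comma_separated_str):
    # ''.split(',') yields [''], so empty or None input is special-cased
    # to produce an empty action list.
    if not comma_separated_str:
        return []
    return comma_separated_str.split(',')
```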


@ -1,197 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
import datetime
import json
from time import mktime
import yaml
import falcon
from oslo_config import cfg
from oslo_log import log
from oslo_utils import uuidutils
import simport
from monasca_events_api.api import transforms_api_v2
from monasca_events_api.common.messaging import exceptions as message_queue_exceptions
from monasca_events_api.common.messaging.message_formats import (
transforms as message_formats_transforms)
from monasca_events_api.common.repositories import exceptions as repository_exceptions
from monasca_events_api.v2.common import helpers
from monasca_events_api.v2.common.schemas import (exceptions as schemas_exceptions)
from monasca_events_api.v2.common.schemas import (
transforms_request_body_schema as schemas_transforms)
LOG = log.getLogger(__name__)
class MyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, datetime.datetime):
return int(mktime(obj.timetuple()))
return json.JSONEncoder.default(self, obj)
class Transforms(transforms_api_v2.TransformsV2API):
def __init__(self):
self._region = cfg.CONF.region
self._default_authorized_roles = (
cfg.CONF.security.default_authorized_roles)
self._message_queue = (
simport.load(cfg.CONF.messaging.driver)("transform-definitions"))
self._transforms_repo = (
simport.load(cfg.CONF.repositories.transforms)())
def on_post(self, req, res):
helpers.validate_json_content_type(req)
helpers.validate_authorization(req, self._default_authorized_roles)
transform = helpers.read_http_resource(req)
self._validate_transform(transform)
transform_id = uuidutils.generate_uuid()
tenant_id = helpers.get_tenant_id(req)
self._create_transform(transform_id, tenant_id, transform)
transformed_event = message_formats_transforms.transform(
transform_id, tenant_id, transform)
self._send_event(transformed_event)
res.body = self._create_transform_response(transform_id, transform)
res.status = falcon.HTTP_200
def on_get(self, req, res, transform_id=None):
if transform_id:
helpers.validate_authorization(req, self._default_authorized_roles)
tenant_id = helpers.get_tenant_id(req)
result = self._list_transform(tenant_id, transform_id, req.uri)
helpers.add_links_to_resource(
result, re.sub('/' + transform_id, '', req.uri))
res.body = json.dumps(result, cls=MyEncoder)
res.status = falcon.HTTP_200
else:
helpers.validate_authorization(req, self._default_authorized_roles)
tenant_id = helpers.get_tenant_id(req)
limit = helpers.get_query_param(req, 'limit')
offset = helpers.normalize_offset(helpers.get_query_param(
req,
'offset'))
result = self._list_transforms(tenant_id, limit, offset, req.uri)
res.body = json.dumps(result, cls=MyEncoder)
res.status = falcon.HTTP_200
def on_delete(self, req, res, transform_id):
helpers.validate_authorization(req, self._default_authorized_roles)
tenant_id = helpers.get_tenant_id(req)
self._delete_transform(tenant_id, transform_id)
transformed_event = message_formats_transforms.transform(transform_id,
tenant_id,
[])
self._send_event(transformed_event)
res.status = falcon.HTTP_204
def _send_event(self, event):
"""Send the event using the message queue.
        :param event: An event object.
        :raises: falcon.HTTPInternalServerError
"""
try:
str_msg = json.dumps(event, cls=MyEncoder,
ensure_ascii=False).encode('utf8')
self._message_queue.send_message(str_msg)
except message_queue_exceptions.MessageQueueException as ex:
LOG.exception(ex)
raise falcon.HTTPInternalServerError('Service unavailable',
ex.message)
def _validate_transform(self, transform):
"""Validates the transform
        :param transform: A transform object.
:raises falcon.HTTPBadRequest
"""
try:
schemas_transforms.validate(transform)
except schemas_exceptions.ValidationException as ex:
LOG.debug(ex)
raise falcon.HTTPBadRequest('Bad request', ex.message)
def _create_transform(self, transform_id, tenant_id, transform):
"""Store the transform using the repository.
:param transform: A transform object.
:raises: falcon.HTTPServiceUnavailable
"""
try:
name = transform['name']
description = transform['description']
            specification = str(yaml.safe_load(transform['specification']))
if 'enabled' in transform:
enabled = transform['enabled']
else:
enabled = False
self._transforms_repo.create_transforms(transform_id, tenant_id,
name, description,
specification, enabled)
except repository_exceptions.RepositoryException as ex:
LOG.error(ex)
raise falcon.HTTPInternalServerError('Service unavailable',
ex.message)
def _create_transform_response(self, transform_id, transform):
name = transform['name']
description = transform['description']
specification = transform['specification']
if 'enabled' in transform:
enabled = transform['enabled']
else:
enabled = False
response = {'id': transform_id, 'name': name, 'description': description,
'specification': specification, 'enabled': enabled}
return json.dumps(response)
def _list_transforms(self, tenant_id, limit, offset, uri):
try:
transforms = self._transforms_repo.list_transforms(tenant_id,
limit, offset)
transforms = helpers.paginate(transforms, uri)
return transforms
except repository_exceptions.RepositoryException as ex:
LOG.error(ex)
raise falcon.HTTPInternalServerError('Service unavailable',
ex.message)
def _list_transform(self, tenant_id, transform_id, uri):
try:
transform = self._transforms_repo.list_transform(tenant_id,
transform_id)[0]
transform['specification'] = yaml.safe_dump(
transform['specification'])
return transform
except repository_exceptions.RepositoryException as ex:
LOG.error(ex)
raise falcon.HTTPInternalServerError('Service unavailable',
ex.message)
def _delete_transform(self, tenant_id, transform_id):
try:
self._transforms_repo.delete_transform(tenant_id, transform_id)
except repository_exceptions.DoesNotExistException:
raise falcon.HTTPNotFound()
except repository_exceptions.RepositoryException as ex:
LOG.error(ex)
raise falcon.HTTPInternalServerError('Service unavailable',
ex.message)
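`_create_transform_response` above assembles the POST response body by hand, defaulting `enabled` to `False` when the request omits it. A self-contained sketch of that shape (identifiers and field values here are hypothetical):

```python
import json


def create_transform_response(transform_id, transform):
    # 'enabled' defaults to False when absent from the request body.
    return json.dumps({'id': transform_id,
                       'name': transform['name'],
                       'description': transform['description'],
                       'specification': transform['specification'],
                       'enabled': transform.get('enabled', False)})


body = create_transform_response(
    'deadbeef', {'name': 'n', 'description': 'd', 'specification': 's'})
```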


@ -1,63 +0,0 @@
# Copyright 2015 Hewlett-Packard
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import falcon
from monasca_events_api.api import versions_api
from oslo_log import log
LOG = log.getLogger(__name__)
VERSIONS = {
'v2.0': {
'id': 'v2.0',
'links': [{
'rel': 'self',
'href': ''
}],
'status': 'CURRENT',
'updated': "2014-02-18T00:00:00Z"
}
}
class Versions(versions_api.VersionsAPI):
def __init__(self):
super(Versions, self).__init__()
def on_get(self, req, res, version_id=None):
result = {
'links': [{
'rel': 'self',
'href': req.uri.decode('utf8')
}],
'elements': []
}
if version_id is None:
for version in VERSIONS:
VERSIONS[version]['links'][0]['href'] = (
req.uri.decode('utf8') + version)
result['elements'].append(VERSIONS[version])
res.body = json.dumps(result)
res.status = falcon.HTTP_200
else:
if version_id in VERSIONS:
VERSIONS[version_id]['links'][0]['href'] = (
req.uri.decode('utf8'))
res.body = json.dumps(VERSIONS[version_id])
res.status = falcon.HTTP_200
else:
res.body = 'Invalid Version ID'
res.status = falcon.HTTP_400
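The listing branch of `on_get` rewrites each version's self link against the request URI before returning the catalog. A minimal sketch of that response assembly, with a hypothetical base URI:

```python
VERSIONS = {
    'v2.0': {'id': 'v2.0',
             'links': [{'rel': 'self', 'href': ''}],
             'status': 'CURRENT',
             'updated': '2014-02-18T00:00:00Z'},
}


def list_versions(base_uri):
    # Build each element as a copy so the module-level VERSIONS dict is
    # not mutated, unlike the handler above which writes href in place.
    elements = []
    for version_id, info in VERSIONS.items():
        entry = dict(info, links=[{'rel': 'self', 'href': base_uri + version_id}])
        elements.append(entry)
    return {'links': [{'rel': 'self', 'href': base_uri}], 'elements': elements}


result = list_versions('http://localhost:8082/')
```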


@ -1,4 +1,4 @@
# Copyright 2014 Hewlett-Packard
# Copyright 2017 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -12,6 +12,7 @@
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
class TransformationException(Exception):
pass
version_info = pbr.version.VersionInfo('monasca-events-api')
version_str = version_info.version_string()


@ -0,0 +1,6 @@
---
prelude: >
other:
- |
    Removed the old monasca-events-api codebase. It had not been maintained
    for a long time and had, for that reason, become obsolete.

releasenotes/source/conf.py Normal file

@ -0,0 +1,258 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.6'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'openstackdocstheme',
'reno.sphinxext'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
repository_name = u'openstack/monasca-events-api'
project = u'Monasca Events Release Notes'
bug_project = u'monasca-events-api'
bug_tag = u'releasenotes'
copyright = u'2014, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from monasca_events_api.version import version_info
version = version_info.canonical_version_string()
release = version_info.version_string_with_vcs()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = []
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'MonascaEventsApiReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'MonascaEventsApiReleaseNotes.tex',
u'MonascaEventsApi Release Notes Documentation', u'OpenStack Foundation',
'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'monascaeventsapireleasenotes', u'MonascaEventsApi Release Notes Documentation',
[u'OpenStack Foundation'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'MonascaEventsApiReleaseNotes', u'MonascaEventsApi Release Notes Documentation',
u'OpenStack Foundation', 'MonascaEventsApiReleaseNotes',
'MonascaEventsApi Release Notes Documentation.', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']


@ -0,0 +1,10 @@
==============================
MonascaEventsApi Release Notes
==============================
Contents:
.. toctree::
:maxdepth: 1
unreleased


@ -0,0 +1,5 @@
==============================
Current Series Release Notes
==============================
.. release-notes::


@ -0,0 +1,5 @@
============================
Current Series Release Notes
============================
.. release-notes::


@ -1,43 +1,16 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, please pay attention to order them correctly.
# process, which may cause wedges in the gate later.
# this is the base monasca-api requirements. To choose a particular
# implementation to install, run pip install -r xxx-requirements.txt.
#
# for example, to install monasca-api and v2 reference implementation, do the
# followings:
#
# pip install -r requirements.txt -r ref-impl-requirements.txt
#
# The above will install monasca-api base and reference implementation
# dependencies.
#
# To install monasca-api and elasticsearch implementation, do the following:
#
# pip install -r requirements.txt -r es-impl-requirements.txt
#
# The above command will install monasca-api base and elasticsearch
# implementation while leave other implementation dependencies alone.
falcon==0.2
gunicorn>=19.1.0
keystonemiddleware
oslo.config>=1.2.1
oslo.middleware
oslo.serialization
oslo.utils
oslo.log
pastedeploy>=1.3.3
pbr>=0.6,!=0.7,<1.0
python-dateutil>=1.5
six>=1.7.0
ujson>=1.33
pyparsing>=2.0.3
voluptuous>=0.8.7
MySQL-python>=1.2.3
eventlet
greenlet
simport>=0.0.dev0
kafka-python>=0.9.1,<0.9.3
requests>=1.1
pbr!=2.1.0,>=2.0.0 # Apache-2.0
Paste # MIT
falcon>=1.0.0 # Apache-2.0
keystonemiddleware>=4.12.0 # Apache-2.0
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
oslo.context>=2.14.0 # Apache-2.0
oslo.middleware>=3.27.0 # Apache-2.0
oslo.log>=3.22.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
oslo.utils>=3.20.0 # Apache-2.0
PasteDeploy>=1.5.0 # MIT
eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT


@ -1,10 +1,11 @@
[metadata]
name = monasca-events-api
summary = OpenStack Events Monitoring Service
summary = Monasca API for events
description-file =
README.md
home-page = https://launchpad.net/monasca
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = https://github.com/openstack/monasca-events-api
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
@ -14,6 +15,12 @@ classifier =
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.5
[global]
setup-hooks =
pbr.hooks.setup_hook
[files]
packages =
@ -21,12 +28,36 @@ packages =
data_files =
/etc/monasca =
etc/events_api.conf
etc/events_api.ini
etc/monasca/events-api-paste.ini
etc/monasca/events-api-logging.conf
[entry_points]
console_scripts =
monasca-events-api = monasca_events_api.api.server:launch
oslo.config.opts =
events.api = monasca_events_api.conf:list_opts
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[build_apiguide]
all_files = 1
build-dir = api-guide/build
source-dir = api-guide/source
[build_apiref]
all_files = 1
build-dir = api-ref/build
source-dir = api-ref/source
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
[wheel]
universal = 1
[pbr]
warnerrors = True
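
The `console_scripts` entry point above points at `monasca_events_api.api.server:launch`, i.e. something that builds and serves the WSGI application. The server module itself is not part of this change, so the following is only a minimal stdlib-only sketch of what such a launcher could look like; the port and the response body are assumptions:

```python
# Hypothetical sketch of a WSGI entry point like the
# `monasca_events_api.api.server:launch` console script declared in
# setup.cfg; the real implementation is not shown in this change.
from wsgiref.simple_server import make_server


def app(environ, start_response):
    # A trivial WSGI callable; the real server would assemble a Falcon
    # API and mount the events resources instead.
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [b'{"status": "ok"}']


def launch():
    # Serve the application on a local port (port number assumed).
    httpd = make_server('127.0.0.1', 8082, app)
    httpd.serve_forever()
```

In practice the paste files listed under `data_files` (`events-api-paste.ini`) would describe the middleware pipeline around such an application.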


@ -1,4 +1,3 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -15,9 +14,16 @@
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
from setuptools import setup
import setuptools
setup(
setup_requires=['pbr'],
pbr=True,
)
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)


@ -1,23 +1,24 @@
# Hacking already pins down pep8, pyflakes and flake8
hacking>=0.9.2,<0.10
Babel>=1.3
coverage>=3.6
discover
fixtures>=0.3.14
flake8==2.1.0
pep8<=1.5.6
httplib2>=0.7.5
mock>=1.0
mox>=0.5.3
nose
# Docs Requirements
oslosphinx
oslotest
python-subunit>=0.0.18
sphinx>=1.1.2,!=1.2.0,<1.3
sphinxcontrib-docbookrestapi
sphinxcontrib-httpdomain
sphinxcontrib-pecanwsme>=0.8
testrepository>=0.0.18
testscenarios>=0.4
testtools>=0.9.34
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
# Install bounded pep8/pyflakes first, then let flake8 install
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
flake8-docstrings==0.2.1.post1 # MIT
flake8-import-order==0.12 # LGPLv3
bandit>=1.1.0 # Apache-2.0
bashate>=0.2 # Apache-2.0
fixtures>=3.0.0 # Apache-2.0/BSD
coverage!=4.4,>=4.0 # Apache-2.0
mock>=2.0 # BSD
oslotest>=1.10.0 # Apache-2.0
os-testr>=0.8.0 # Apache-2.0
simplejson>=2.2.0 # MIT
# documentation
doc8 # Apache-2.0
sphinx>=1.6.2 # BSD
os-api-ref>=1.0.0 # Apache-2.0
reno!=2.3.1,>=1.8.0 # Apache-2.0
openstackdocstheme>=1.11.0 # Apache-2.0

tools/bashate.sh Normal file

@ -0,0 +1,6 @@
#!/usr/bin/env bash
# Ignore bashate's 'line too long' error (E006) and treat
# E005 and E042 as errors.
SH_FILES=$(find ./devstack -type d -name files -prune -o -type f -name '*.sh' -print)
bashate -v -iE006 -eE005,E042 ${SH_FILES:-''}

tools/tox_install.sh Executable file

@ -0,0 +1,30 @@
#!/usr/bin/env bash
# Client constraint file contains this client version pin that is in conflict
# with installing the client from source. We should remove the version pin in
# the constraints file before applying it for from-source installation.
CONSTRAINTS_FILE=$1
shift 1
set -e
# NOTE(tonyb): Place this in the tox environment's log dir so it will get
# published to logs.openstack.org for easy debugging.
localfile="$VIRTUAL_ENV/log/upper-constraints.txt"
if [[ $CONSTRAINTS_FILE != http* ]]; then
CONSTRAINTS_FILE=file://$CONSTRAINTS_FILE
fi
# NOTE(tonyb): need to add curl to bindep.txt if the project supports bindep
curl $CONSTRAINTS_FILE --insecure --progress-bar --output $localfile
pip install -c$localfile openstack-requirements
# This is the main purpose of the script: allow local installation of
# the current repo. It is listed in the constraints file, so any
# install would be constrained; we need to unconstrain it.
edit-constraints $localfile -- $CLIENT_NAME
pip install -c$localfile -U $*
exit $?

tox.ini

@ -1,32 +1,154 @@
[tox]
minversion = 1.6
envlist = py{27,35},pep8,cover
minversion = 2.7
skipsdist = True
envlist = py27,py33,pep8
[testenv]
setenv = VIRTUAL_ENV={envdir}
usedevelop = True
install_command = pip install -U {opts} {packages}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = nosetests
setenv = VIRTUAL_ENV={envdir}
OS_TEST_PATH=monasca_events_api/tests
CLIENT_NAME=monasca-events-api
passenv = *_proxy
*_PROXY
whitelist_externals = bash
find
rm
install_command = {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
deps = -r{toxinidir}/test-requirements.txt
commands =
find ./ -type f -name '*.pyc' -delete
rm -Rf .testrepository/times.dbm
[testenv:py27]
description = Runs unit tests under Python 2.7
basepython = python2.7
commands =
{[testenv]commands}
ostestr {posargs}
[testenv:py35]
description = Runs unit tests under Python 3.5
basepython = python3.5
commands =
{[testenv]commands}
ostestr {posargs}
[testenv:cover]
setenv = NOSE_WITH_COVERAGE=1
description = Calculates code coverage
basepython = python2.7
commands =
python setup.py testr --coverage \
--testr-args='^(?!.*test.*coverage).*$'
{[testenv]commands}
coverage erase
python setup.py test --coverage --testr-args='{posargs}' --coverage-package-name=monasca_events_api
coverage report
[testenv:debug]
description = Runs unit tests with debug mode enabled
commands =
{[testenv]commands}
oslo_debug_helper -t {toxinidir}/monasca_events_api/tests {posargs}
[testenv:bashate]
description = Validates (pep8-like) devstack plugin scripts
skip_install = True
usedevelop = False
commands = bash {toxinidir}/tools/bashate.sh
[testenv:bandit]
skip_install = True
usedevelop = False
commands = bandit -r monasca_events_api -n5 -x monasca_events_api/tests
[testenv:flake8]
skip_install = True
usedevelop = False
commands =
flake8 monasca_events_api
[testenv:pep8]
commands = flake8 monasca_events_api
description = Runs the set of linters against the codebase (flake8, bandit, bashate, checkniceness)
skip_install = True
usedevelop = False
commands =
{[testenv:flake8]commands}
{[testenv:bandit]commands}
{[testenv:bashate]commands}
{[testenv:checkniceness]commands}
[testenv:docs]
description = Builds api-ref, api-guide, releasenotes and devdocs
commands =
{[testenv:devdocs]commands}
{[testenv:api-guide]commands}
{[testenv:api-ref]commands}
{[testenv:releasenotes]commands}
[testenv:api-guide]
description = Called from CI scripts to test and publish the API Guide
commands =
rm -rf api-guide/build
{[testenv:checkjson]commands}
sphinx-build -W -b html -d api-guide/build/doctrees api-guide/source api-guide/build/html
[testenv:api-ref]
description = Called from CI scripts to test and publish the API Ref
commands =
rm -rf api-ref/build
{[testenv:checkjson]commands}
sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html
[testenv:releasenotes]
description = Called from CI script to test and publish the Release Notes
commands =
rm -rf releasenotes/build
sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:devdocs]
description = Builds developer documentation
commands =
rm -rf doc/build
{[testenv:codedocs]commands}
{[testenv:checkjson]commands}
python setup.py build_sphinx
[testenv:codedocs]
description = Generates codebase documentation
commands =
rm -rf doc/source/code
sphinx-apidoc -o doc/source/code -fPM {toxinidir}/monasca_events_api --ext-todo
[testenv:checkniceness]
description = Validates (pep8-like) documentation
skip_install = True
usedevelop = False
commands =
doc8 --file-encoding utf-8 {toxinidir}/doc
doc8 --file-encoding utf-8 {toxinidir}/api-ref
doc8 --file-encoding utf-8 {toxinidir}/api-guide
doc8 --file-encoding utf-8 {toxinidir}/releasenotes
[testenv:checkjson]
description = Validates all JSON samples inside the doc folder
deps =
whitelist_externals =
bash
python
commands =
bash -c "! find doc/ -type f -name *.json | xargs grep -U -n $'\r'"
bash -c '! find doc/ -type f -name *.json | xargs -t -n1 python -m json.tool 2>&1 > /dev/null | grep -B1 -v ^python'
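The two bash one-liners above flag JSON samples that contain carriage returns and samples that fail to parse. The same check can be sketched in pure stdlib Python — the function name and return shape here are illustrative, not part of the change:

```python
# Stdlib sketch of the checkjson validation above: walk a directory for
# *.json samples, reject files containing carriage returns, and reject
# files that do not parse as JSON.
import json
import pathlib


def check_json_samples(root):
    """Return a list of (path, problem) tuples for bad JSON samples."""
    problems = []
    for path in pathlib.Path(root).rglob('*.json'):
        raw = path.read_bytes()
        if b'\r' in raw:
            # Mirrors the first one-liner: grep -U -n $'\r'
            problems.append((str(path), 'contains carriage return'))
            continue
        try:
            # Mirrors the second one-liner: python -m json.tool
            json.loads(raw.decode('utf-8'))
        except ValueError as exc:
            problems.append((str(path), 'invalid JSON: %s' % exc))
    return problems
```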
[testenv:genconfig]
description = Generates a sample configuration file for monasca-events-api
commands = oslo-config-generator --config-file=config-generator/monasca-events-api.conf
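The `genconfig` environment drives oslo-config-generator with the config file referenced above; that file is not part of this diff. A plausible sketch of its contents, assuming the `events.api` namespace registered under `oslo.config.opts` in setup.cfg (the output path and extra namespaces are assumptions):

```ini
[DEFAULT]
output_file = etc/monasca/events-api.conf.sample
wrap_width = 80
namespace = events.api
namespace = oslo.log
namespace = oslo.middleware
```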
[testenv:venv]
commands = {posargs}
[flake8]
max-complexity = 50
max-line-length = 120
builtins = _
ignore = F821,H201,H302,H305,H307,H405,H904,H402
exclude=.venv,.git,.tox,dist,*openstack/common*,*egg,build
exclude = .git,.gitignore,.tox,dist,doc,api-ref,api-guide,releasenotes,documentation,*.egg,build
show-source = True
enable-extensions = H203,H106
ignore = D100,D104
import-order-style = pep8
[hacking]