Revert "Revert "Retire project""

This reverts commit bc81f1c699.

Change-Id: I7a7bc3deeadd094d7a42b47b16cde2e8a8805a24
Mehdi Abaakouk (sileht) 2017-06-05 17:04:30 +00:00
parent bc81f1c699
commit 9f720ecdd7
214 changed files with 10 additions and 30717 deletions

.gitignore

@ -1,13 +0,0 @@
.testrepository
*.pyc
.tox
*.egg-info
AUTHORS
ChangeLog
etc/gnocchi/gnocchi.conf
doc/build
doc/source/rest.rst
releasenotes/build
cover
.coverage
dist


@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/gnocchi.git


@ -1,5 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} ${PYTHON:-python} -m subunit.run discover -t . ${OS_TEST_PATH:-gnocchi/tests} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
group_regex=(gabbi\.suitemaker\.test_gabbi((_prefix_|_live_|_)([^_]+)))_


@ -1,44 +0,0 @@
language: python
sudo: required
services:
- docker
cache:
directories:
- ~/.cache/pip
env:
- TARGET: bashate
- TARGET: pep8
- TARGET: docs
- TARGET: docs-gnocchi.xyz
- TARGET: py27-mysql-ceph-upgrade-from-3.1
- TARGET: py35-postgresql-file-upgrade-from-3.1
- TARGET: py27-mysql
- TARGET: py35-mysql
- TARGET: py27-postgresql
- TARGET: py35-postgresql
before_script:
# Travis: we need to fetch all tags/branches for the documentation targets
- case $TARGET in
docs*)
git fetch origin $(git ls-remote -q | sed -n '/refs\/heads/s,.*refs/heads\(.*\),:remotes/origin\1,gp') ;
git fetch --tags ;
git fetch --unshallow ;
;;
esac
- docker build --tag gnocchi-ci --file=tools/travis-ci-setup.dockerfile .
script:
- docker run -v ~/.cache/pip:/home/tester/.cache/pip -v $(pwd):/home/tester/src gnocchi-ci tox -e ${TARGET}
notifications:
email: false
irc:
on_success: change
on_failure: always
channels:
- "irc.freenode.org#gnocchi"

LICENSE

@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@ -1 +0,0 @@
include etc/gnocchi/gnocchi.conf

README

@ -0,0 +1,10 @@
This project has been moved to https://github.com/gnocchixyz/gnocchi
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev or #gnocchi on
Freenode.


@ -1,14 +0,0 @@
===============================
Gnocchi - Metric as a Service
===============================
.. image:: doc/source/_static/gnocchi-logo.png
Gnocchi is a multi-tenant timeseries, metrics and resources database. It
provides an `HTTP REST`_ interface to create and manipulate the data. It is
designed to store metrics at a very large scale while providing access to
metrics and resources information and history.
You can read the full documentation online at http://gnocchi.xyz.
.. _`HTTP REST`: https://en.wikipedia.org/wiki/Representational_state_transfer


@ -1,10 +0,0 @@
libpq-dev [platform:dpkg]
postgresql [platform:dpkg]
mysql-client [platform:dpkg]
mysql-server [platform:dpkg]
build-essential [platform:dpkg]
libffi-dev [platform:dpkg]
librados-dev [platform:dpkg]
ceph [platform:dpkg]
redis-server [platform:dpkg]
liberasurecode-dev [platform:dpkg]


@ -1,15 +0,0 @@
============================
Enabling Gnocchi in DevStack
============================
1. Download DevStack::
git clone https://git.openstack.org/openstack-dev/devstack.git
cd devstack
2. Add this repo as an external repository in ``local.conf`` file::
[[local|localrc]]
enable_plugin gnocchi https://git.openstack.org/openstack/gnocchi
3. Run ``stack.sh``.
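As a hint, the storage back-end and deployment mode can also be chosen from
``local.conf``; the variables below come from ``devstack/settings`` in this
tree, and the values shown are only illustrative::

    [[local|localrc]]
    enable_plugin gnocchi https://git.openstack.org/openstack/gnocchi
    GNOCCHI_STORAGE_BACKEND=file
    GNOCCHI_DEPLOY=uwsgi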


@ -1,10 +0,0 @@
WSGIDaemonProcess gnocchi lang='en_US.UTF-8' locale='en_US.UTF-8' user=%USER% display-name=%{GROUP} processes=%APIWORKERS% threads=32 %VIRTUALENV%
WSGIProcessGroup gnocchi
WSGIScriptAlias %SCRIPT_NAME% %WSGI%
<Location %SCRIPT_NAME%>
WSGIProcessGroup gnocchi
WSGIApplicationGroup %{GLOBAL}
</Location>
WSGISocketPrefix /var/run/%APACHE_NAME%


@ -1,15 +0,0 @@
Listen %GNOCCHI_PORT%
<VirtualHost *:%GNOCCHI_PORT%>
WSGIDaemonProcess gnocchi lang='en_US.UTF-8' locale='en_US.UTF-8' user=%USER% display-name=%{GROUP} processes=%APIWORKERS% threads=32 %VIRTUALENV%
WSGIProcessGroup gnocchi
WSGIScriptAlias / %WSGI%
WSGIApplicationGroup %{GLOBAL}
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/%APACHE_NAME%/gnocchi.log
CustomLog /var/log/%APACHE_NAME%/gnocchi-access.log combined
</VirtualHost>
WSGISocketPrefix /var/run/%APACHE_NAME%


@ -1,59 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside gate_hook function in devstack gate.
STORAGE_DRIVER="$1"
SQL_DRIVER="$2"
ENABLED_SERVICES="key,gnocchi-api,gnocchi-metricd,tempest,"
# Use efficient wsgi web server
DEVSTACK_LOCAL_CONFIG+=$'\nexport GNOCCHI_DEPLOY=uwsgi'
DEVSTACK_LOCAL_CONFIG+=$'\nexport KEYSTONE_DEPLOY=uwsgi'
export DEVSTACK_GATE_INSTALL_TESTONLY=1
export DEVSTACK_GATE_NO_SERVICES=1
export DEVSTACK_GATE_TEMPEST=1
export DEVSTACK_GATE_TEMPEST_NOTESTS=1
export DEVSTACK_GATE_EXERCISES=0
export KEEP_LOCALRC=1
case $STORAGE_DRIVER in
file)
DEVSTACK_LOCAL_CONFIG+=$'\nexport GNOCCHI_STORAGE_BACKEND=file'
;;
swift)
ENABLED_SERVICES+="s-proxy,s-account,s-container,s-object,"
DEVSTACK_LOCAL_CONFIG+=$'\nexport GNOCCHI_STORAGE_BACKEND=swift'
# FIXME(sileht): use mod_wsgi as workaround for LP#1508424
DEVSTACK_GATE_TEMPEST+=$'\nexport SWIFT_USE_MOD_WSGI=True'
;;
ceph)
DEVSTACK_LOCAL_CONFIG+=$'\nexport GNOCCHI_STORAGE_BACKEND=ceph'
;;
esac
# default to mysql
case $SQL_DRIVER in
postgresql)
export DEVSTACK_GATE_POSTGRES=1
;;
esac
export ENABLED_SERVICES
export DEVSTACK_LOCAL_CONFIG
$BASE/new/devstack-gate/devstack-vm-gate.sh


@ -1,78 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside post_test_hook function in devstack gate.
source $BASE/new/devstack/openrc admin admin
set -e
function generate_testr_results {
if [ -f .testrepository/0 ]; then
sudo /usr/os-testr-env/bin/testr last --subunit > $WORKSPACE/testrepository.subunit
sudo mv $WORKSPACE/testrepository.subunit $BASE/logs/testrepository.subunit
sudo /usr/os-testr-env/bin/subunit2html $BASE/logs/testrepository.subunit $BASE/logs/testr_results.html
sudo gzip -9 $BASE/logs/testrepository.subunit
sudo gzip -9 $BASE/logs/testr_results.html
sudo chown jenkins:jenkins $BASE/logs/testrepository.subunit.gz $BASE/logs/testr_results.html.gz
sudo chmod a+r $BASE/logs/testrepository.subunit.gz $BASE/logs/testr_results.html.gz
fi
}
set -x
export GNOCCHI_DIR="$BASE/new/gnocchi"
sudo chown -R stack:stack $GNOCCHI_DIR
cd $GNOCCHI_DIR
openstack catalog list
export GNOCCHI_SERVICE_TOKEN=$(openstack token issue -c id -f value)
export GNOCCHI_ENDPOINT=$(openstack catalog show metric -c endpoints -f value | awk '/public/{print $2}')
export GNOCCHI_AUTHORIZATION="" # Temporarily set to transition to the new functional testing
curl -X GET ${GNOCCHI_ENDPOINT}/v1/archive_policy -H "Content-Type: application/json"
sudo gnocchi-upgrade
# Just ensure the tools still work
sudo -E -H -u stack $GNOCCHI_DIR/tools/measures_injector.py --metrics 1 --batch-of-measures 2 --measures-per-batch 2
# NOTE(sileht): on the swift job, permissions are wrong; I don't know why
sudo chown -R tempest:stack $BASE/new/tempest
sudo chown -R tempest:stack $BASE/data/tempest
# Run tests with tempest
cd $BASE/new/tempest
set +e
sudo -H -u tempest OS_TEST_TIMEOUT=$TEMPEST_OS_TEST_TIMEOUT tox -eall-plugin -- gnocchi --concurrency=$TEMPEST_CONCURRENCY
TEMPEST_EXIT_CODE=$?
set -e
if [[ $TEMPEST_EXIT_CODE != 0 ]]; then
# Collect and parse result
generate_testr_results
exit $TEMPEST_EXIT_CODE
fi
# Run tests with tox
cd $GNOCCHI_DIR
echo "Running gnocchi functional test suite"
set +e
sudo -E -H -u stack tox -epy27-gate
EXIT_CODE=$?
set -e
# Collect and parse result
generate_testr_results
exit $EXIT_CODE


@ -1,474 +0,0 @@
# Gnocchi devstack plugin
# Install and start **Gnocchi** service
# To enable Gnocchi service, add the following to localrc:
#
# enable_plugin gnocchi https://github.com/openstack/gnocchi master
#
# This will turn on both gnocchi-api and gnocchi-metricd services.
# If you don't want one of those (you do) you can use the
# disable_service command in local.conf.
# Dependencies:
#
# - ``functions``
# - ``DEST``, ``STACK_USER`` must be defined
# - ``APACHE_NAME`` for wsgi
# - ``SERVICE_{TENANT_NAME|PASSWORD}`` must be defined
# - ``SERVICE_HOST``
# - ``OS_AUTH_URL``, ``KEYSTONE_SERVICE_URI`` for auth in api
# stack.sh
# ---------
# - install_gnocchi
# - configure_gnocchi
# - init_gnocchi
# - start_gnocchi
# - stop_gnocchi
# - cleanup_gnocchi
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set -o xtrace
if [ -z "$GNOCCHI_DEPLOY" ]; then
# Default
GNOCCHI_DEPLOY=simple
# Fallback to common wsgi devstack configuration
if [ "$ENABLE_HTTPD_MOD_WSGI_SERVICES" == "True" ]; then
GNOCCHI_DEPLOY=mod_wsgi
# Deprecated config
elif [ -n "$GNOCCHI_USE_MOD_WSGI" ] ; then
echo_summary "GNOCCHI_USE_MOD_WSGI is deprecated, use GNOCCHI_DEPLOY instead"
if [ "$GNOCCHI_USE_MOD_WSGI" == True ]; then
GNOCCHI_DEPLOY=mod_wsgi
fi
fi
fi
# Functions
# ---------
# Test if any Gnocchi services are enabled
# is_gnocchi_enabled
function is_gnocchi_enabled {
[[ ,${ENABLED_SERVICES} =~ ,"gnocchi-" ]] && return 0
return 1
}
# Test if Ceph services are enabled
# _is_ceph_enabled
function _is_ceph_enabled {
type is_ceph_enabled_for_service >/dev/null 2>&1 && return 0
return 1
}
# create_gnocchi_accounts() - Set up common required gnocchi accounts
# Project User Roles
# -------------------------------------------------------------------------
# $SERVICE_TENANT_NAME gnocchi service
# gnocchi_swift gnocchi_swift ResellerAdmin (if Swift is enabled)
function create_gnocchi_accounts {
# Gnocchi
if [ "$GNOCCHI_USE_KEYSTONE" == "True" ] && is_service_enabled gnocchi-api ; then
# At this time, the /etc/openstack/clouds.yaml is available,
# we could leverage that by setting OS_CLOUD
OLD_OS_CLOUD=$OS_CLOUD
export OS_CLOUD='devstack-admin'
create_service_user "gnocchi"
local gnocchi_service=$(get_or_create_service "gnocchi" \
"metric" "OpenStack Metric Service")
get_or_create_endpoint $gnocchi_service \
"$REGION_NAME" \
"$(gnocchi_service_url)" \
"$(gnocchi_service_url)" \
"$(gnocchi_service_url)"
if is_service_enabled swift && [[ "$GNOCCHI_STORAGE_BACKEND" = 'swift' ]] ; then
get_or_create_project "gnocchi_swift" default
local gnocchi_swift_user=$(get_or_create_user "gnocchi_swift" \
"$SERVICE_PASSWORD" default "gnocchi_swift@example.com")
get_or_add_user_project_role "ResellerAdmin" $gnocchi_swift_user "gnocchi_swift"
fi
export OS_CLOUD=$OLD_OS_CLOUD
fi
}
# return the service url for gnocchi
function gnocchi_service_url {
if [[ -n $GNOCCHI_SERVICE_PORT ]]; then
echo "$GNOCCHI_SERVICE_PROTOCOL://$GNOCCHI_SERVICE_HOST:$GNOCCHI_SERVICE_PORT"
else
echo "$GNOCCHI_SERVICE_PROTOCOL://$GNOCCHI_SERVICE_HOST$GNOCCHI_SERVICE_PREFIX"
fi
}
# install redis
# NOTE(chdent): We shouldn't rely on ceilometer being present, so we cannot
# use its install_redis. There are enough packages now using redis
# that there should probably be something in devstack itself for
# installing it.
function _gnocchi_install_redis {
if is_ubuntu; then
install_package redis-server
restart_service redis-server
else
# This will fail (correctly) where a redis package is unavailable
install_package redis
restart_service redis
fi
pip_install_gr redis
}
function _gnocchi_install_grafana {
if is_ubuntu; then
local file=$(mktemp /tmp/grafanapkg-XXXXX)
wget -O "$file" "$GRAFANA_DEB_PKG"
sudo dpkg -i "$file"
rm $file
elif is_fedora; then
sudo yum install "$GRAFANA_RPM_PKG"
fi
if [ ! "$GRAFANA_PLUGIN_VERSION" ]; then
sudo grafana-cli plugins install sileht-gnocchi-datasource
elif [ "$GRAFANA_PLUGIN_VERSION" != "git" ]; then
tmpfile=/tmp/sileht-gnocchi-datasource-${GRAFANA_PLUGIN_VERSION}.tar.gz
wget https://github.com/sileht/grafana-gnocchi-datasource/releases/download/${GRAFANA_PLUGIN_VERSION}/sileht-gnocchi-datasource-${GRAFANA_PLUGIN_VERSION}.tar.gz -O $tmpfile
sudo -u grafana tar -xzf $tmpfile -C /var/lib/grafana/plugins
rm -f $tmpfile
else
git_clone ${GRAFANA_PLUGINS_REPO} ${GRAFANA_PLUGINS_DIR}
sudo ln -sf ${GRAFANA_PLUGINS_DIR}/dist /var/lib/grafana/plugins/grafana-gnocchi-datasource
# NOTE(sileht): This is long and has a chance to fail, thx nodejs/npm
(cd /var/lib/grafana/plugins/grafana-gnocchi-datasource && npm install && ./run-tests.sh) || true
fi
sudo service grafana-server restart
}
function _cleanup_gnocchi_apache_wsgi {
sudo rm -f $GNOCCHI_WSGI_DIR/*.wsgi
sudo rm -f $(apache_site_config_for gnocchi)
}
# _config_gnocchi_apache_wsgi() - Set WSGI config files of Gnocchi
function _config_gnocchi_apache_wsgi {
sudo mkdir -p $GNOCCHI_WSGI_DIR
local gnocchi_apache_conf=$(apache_site_config_for gnocchi)
local venv_path=""
local script_name=$GNOCCHI_SERVICE_PREFIX
if [[ ${USE_VENV} = True ]]; then
venv_path="python-path=${PROJECT_VENV["gnocchi"]}/lib/$(python_version)/site-packages"
fi
# copy wsgi file
sudo cp $GNOCCHI_DIR/gnocchi/rest/app.wsgi $GNOCCHI_WSGI_DIR/
# Only run the API on a custom PORT if it has been specifically
# asked for.
if [[ -n $GNOCCHI_SERVICE_PORT ]]; then
sudo cp $GNOCCHI_DIR/devstack/apache-ported-gnocchi.template $gnocchi_apache_conf
sudo sed -e "
s|%GNOCCHI_PORT%|$GNOCCHI_SERVICE_PORT|g;
" -i $gnocchi_apache_conf
else
sudo cp $GNOCCHI_DIR/devstack/apache-gnocchi.template $gnocchi_apache_conf
sudo sed -e "
s|%SCRIPT_NAME%|$script_name|g;
" -i $gnocchi_apache_conf
fi
sudo sed -e "
s|%APACHE_NAME%|$APACHE_NAME|g;
s|%WSGI%|$GNOCCHI_WSGI_DIR/app.wsgi|g;
s|%USER%|$STACK_USER|g
s|%APIWORKERS%|$API_WORKERS|g
s|%VIRTUALENV%|$venv_path|g
" -i $gnocchi_apache_conf
}
# cleanup_gnocchi() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_gnocchi {
if [ "$GNOCCHI_DEPLOY" == "mod_wsgi" ]; then
_cleanup_gnocchi_apache_wsgi
fi
}
# configure_gnocchi() - Set config files, create data dirs, etc
function configure_gnocchi {
[ ! -d $GNOCCHI_DATA_DIR ] && sudo mkdir -m 755 -p $GNOCCHI_DATA_DIR
sudo chown $STACK_USER $GNOCCHI_DATA_DIR
# Configure logging
iniset $GNOCCHI_CONF DEFAULT debug "$ENABLE_DEBUG_LOG_LEVEL"
iniset $GNOCCHI_CONF metricd metric_processing_delay "$GNOCCHI_METRICD_PROCESSING_DELAY"
# Set up logging
if [ "$SYSLOG" != "False" ]; then
iniset $GNOCCHI_CONF DEFAULT use_syslog "True"
fi
# Format logging
if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ] && [ "$GNOCCHI_DEPLOY" != "mod_wsgi" ]; then
setup_colorized_logging $GNOCCHI_CONF DEFAULT
fi
if [ -n "$GNOCCHI_COORDINATOR_URL" ]; then
iniset $GNOCCHI_CONF storage coordination_url "$GNOCCHI_COORDINATOR_URL"
fi
if is_service_enabled gnocchi-statsd ; then
iniset $GNOCCHI_CONF statsd resource_id $GNOCCHI_STATSD_RESOURCE_ID
iniset $GNOCCHI_CONF statsd project_id $GNOCCHI_STATSD_PROJECT_ID
iniset $GNOCCHI_CONF statsd user_id $GNOCCHI_STATSD_USER_ID
fi
# Configure the storage driver
if _is_ceph_enabled && [[ "$GNOCCHI_STORAGE_BACKEND" = 'ceph' ]] ; then
iniset $GNOCCHI_CONF storage driver ceph
iniset $GNOCCHI_CONF storage ceph_username ${GNOCCHI_CEPH_USER}
iniset $GNOCCHI_CONF storage ceph_secret $(awk '/key/{print $3}' ${CEPH_CONF_DIR}/ceph.client.${GNOCCHI_CEPH_USER}.keyring)
elif is_service_enabled swift && [[ "$GNOCCHI_STORAGE_BACKEND" = 'swift' ]] ; then
iniset $GNOCCHI_CONF storage driver swift
iniset $GNOCCHI_CONF storage swift_user gnocchi_swift
iniset $GNOCCHI_CONF storage swift_key $SERVICE_PASSWORD
iniset $GNOCCHI_CONF storage swift_project_name "gnocchi_swift"
iniset $GNOCCHI_CONF storage swift_auth_version 3
iniset $GNOCCHI_CONF storage swift_authurl $KEYSTONE_SERVICE_URI_V3
elif [[ "$GNOCCHI_STORAGE_BACKEND" = 'file' ]] ; then
iniset $GNOCCHI_CONF storage driver file
iniset $GNOCCHI_CONF storage file_basepath $GNOCCHI_DATA_DIR/
elif [[ "$GNOCCHI_STORAGE_BACKEND" = 'redis' ]] ; then
iniset $GNOCCHI_CONF storage driver redis
iniset $GNOCCHI_CONF storage redis_url $GNOCCHI_REDIS_URL
else
echo "ERROR: could not configure storage driver"
exit 1
fi
if [ "$GNOCCHI_USE_KEYSTONE" == "True" ] ; then
# Configure auth token middleware
configure_auth_token_middleware $GNOCCHI_CONF gnocchi $GNOCCHI_AUTH_CACHE_DIR
iniset $GNOCCHI_CONF api auth_mode keystone
if is_service_enabled gnocchi-grafana; then
iniset $GNOCCHI_CONF cors allowed_origin ${GRAFANA_URL}
fi
else
inidelete $GNOCCHI_CONF api auth_mode
fi
# Configure the indexer database
iniset $GNOCCHI_CONF indexer url `database_connection_url gnocchi`
if [ "$GNOCCHI_DEPLOY" == "mod_wsgi" ]; then
_config_gnocchi_apache_wsgi
elif [ "$GNOCCHI_DEPLOY" == "uwsgi" ]; then
# iniset creates these files when it's called if they don't exist.
GNOCCHI_UWSGI_FILE=$GNOCCHI_CONF_DIR/uwsgi.ini
rm -f "$GNOCCHI_UWSGI_FILE"
iniset "$GNOCCHI_UWSGI_FILE" uwsgi http $GNOCCHI_SERVICE_HOST:$GNOCCHI_SERVICE_PORT
iniset "$GNOCCHI_UWSGI_FILE" uwsgi wsgi-file "/usr/local/bin/gnocchi-api"
# This is running standalone
iniset "$GNOCCHI_UWSGI_FILE" uwsgi master true
# Set die-on-term & exit-on-reload so that uwsgi shuts down
iniset "$GNOCCHI_UWSGI_FILE" uwsgi die-on-term true
iniset "$GNOCCHI_UWSGI_FILE" uwsgi exit-on-reload true
iniset "$GNOCCHI_UWSGI_FILE" uwsgi threads 32
iniset "$GNOCCHI_UWSGI_FILE" uwsgi processes $API_WORKERS
iniset "$GNOCCHI_UWSGI_FILE" uwsgi enable-threads true
iniset "$GNOCCHI_UWSGI_FILE" uwsgi plugins python
# uwsgi recommends this to prevent thundering herd on accept.
iniset "$GNOCCHI_UWSGI_FILE" uwsgi thunder-lock true
# Override the header buffer size from the 4k default.
iniset "$GNOCCHI_UWSGI_FILE" uwsgi buffer-size 65535
# Make sure the client doesn't try to re-use the connection.
iniset "$GNOCCHI_UWSGI_FILE" uwsgi add-header "Connection: close"
# Don't share rados resources and python-requests globals between processes
iniset "$GNOCCHI_UWSGI_FILE" uwsgi lazy-apps true
fi
}
# configure_keystone_for_gnocchi() - Configure Keystone needs for Gnocchi
function configure_keystone_for_gnocchi {
if [ "$GNOCCHI_USE_KEYSTONE" == "True" ] ; then
if is_service_enabled gnocchi-grafana; then
# NOTE(sileht): the keystone configuration has to be set before uwsgi
# is started
iniset $KEYSTONE_CONF cors allowed_origin ${GRAFANA_URL}
fi
fi
}
# configure_ceph_gnocchi() - gnocchi config needs to come after gnocchi is set up
function configure_ceph_gnocchi {
# Configure gnocchi service options, ceph pool, ceph user and ceph key
sudo ceph -c ${CEPH_CONF_FILE} osd pool create ${GNOCCHI_CEPH_POOL} ${GNOCCHI_CEPH_POOL_PG} ${GNOCCHI_CEPH_POOL_PGP}
sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${GNOCCHI_CEPH_POOL} size ${CEPH_REPLICAS}
if [[ $CEPH_REPLICAS -ne 1 ]]; then
sudo ceph -c ${CEPH_CONF_FILE} osd pool set ${GNOCCHI_CEPH_POOL} crush_ruleset ${RULE_ID}
fi
sudo ceph -c ${CEPH_CONF_FILE} auth get-or-create client.${GNOCCHI_CEPH_USER} mon "allow r" osd "allow rwx pool=${GNOCCHI_CEPH_POOL}" | sudo tee ${CEPH_CONF_DIR}/ceph.client.${GNOCCHI_CEPH_USER}.keyring
sudo chown ${STACK_USER}:$(id -g -n $(whoami)) ${CEPH_CONF_DIR}/ceph.client.${GNOCCHI_CEPH_USER}.keyring
}
# init_gnocchi() - Initialize etc.
function init_gnocchi {
# Create cache dir
sudo mkdir -p $GNOCCHI_AUTH_CACHE_DIR
sudo chown $STACK_USER $GNOCCHI_AUTH_CACHE_DIR
rm -f $GNOCCHI_AUTH_CACHE_DIR/*
if is_service_enabled mysql postgresql; then
recreate_database gnocchi
fi
$GNOCCHI_BIN_DIR/gnocchi-upgrade
}
function preinstall_gnocchi {
if is_ubuntu; then
# libpq-dev is needed to build psycopg2
# uuid-runtime is needed to use the uuidgen command
install_package libpq-dev uuid-runtime
else
install_package postgresql-devel
fi
if [[ "$GNOCCHI_STORAGE_BACKEND" = 'ceph' ]] ; then
install_package cython
install_package librados-dev
fi
}
# install_gnocchi() - Collect source and prepare
function install_gnocchi {
if [[ "$GNOCCHI_STORAGE_BACKEND" = 'redis' ]] || [[ "${GNOCCHI_COORDINATOR_URL%%:*}" == "redis" ]]; then
_gnocchi_install_redis
fi
if [[ "$GNOCCHI_STORAGE_BACKEND" = 'ceph' ]] ; then
pip_install cradox
fi
if is_service_enabled gnocchi-grafana
then
_gnocchi_install_grafana
fi
[ "$GNOCCHI_USE_KEYSTONE" == "True" ] && EXTRA_FLAVOR=,keystone
# We don't use setup_package because we don't follow openstack/requirements
sudo -H pip install -e "$GNOCCHI_DIR"[test,$GNOCCHI_STORAGE_BACKEND,${DATABASE_TYPE}${EXTRA_FLAVOR}]
if [ "$GNOCCHI_DEPLOY" == "mod_wsgi" ]; then
install_apache_wsgi
elif [ "$GNOCCHI_DEPLOY" == "uwsgi" ]; then
pip_install uwsgi
fi
# Create configuration directory
[ ! -d $GNOCCHI_CONF_DIR ] && sudo mkdir -m 755 -p $GNOCCHI_CONF_DIR
sudo chown $STACK_USER $GNOCCHI_CONF_DIR
}
# start_gnocchi() - Start running processes, including screen
function start_gnocchi {
if [ "$GNOCCHI_DEPLOY" == "mod_wsgi" ]; then
enable_apache_site gnocchi
restart_apache_server
if [[ -n $GNOCCHI_SERVICE_PORT ]]; then
tail_log gnocchi /var/log/$APACHE_NAME/gnocchi.log
tail_log gnocchi-api /var/log/$APACHE_NAME/gnocchi-access.log
else
# NOTE(chdent): At the moment this is very noisy as it
# will tail the entire apache logs, not just the gnocchi
# parts. If you don't like this, either set USE_SCREEN=False
# or set GNOCCHI_SERVICE_PORT.
tail_log gnocchi /var/log/$APACHE_NAME/error[_\.]log
tail_log gnocchi-api /var/log/$APACHE_NAME/access[_\.]log
fi
elif [ "$GNOCCHI_DEPLOY" == "uwsgi" ]; then
run_process gnocchi-api "$GNOCCHI_BIN_DIR/uwsgi $GNOCCHI_UWSGI_FILE"
else
run_process gnocchi-api "$GNOCCHI_BIN_DIR/gnocchi-api --port $GNOCCHI_SERVICE_PORT"
fi
# only die on API if it was actually intended to be turned on
if is_service_enabled gnocchi-api; then
echo "Waiting for gnocchi-api to start..."
if ! timeout $SERVICE_TIMEOUT sh -c "while ! curl -v --max-time 5 --noproxy '*' -s $(gnocchi_service_url)/v1/resource/generic ; do sleep 1; done"; then
die $LINENO "gnocchi-api did not start"
fi
fi
# run metricd last so we are properly waiting for swift and friends
run_process gnocchi-metricd "$GNOCCHI_BIN_DIR/gnocchi-metricd -d --config-file $GNOCCHI_CONF"
run_process gnocchi-statsd "$GNOCCHI_BIN_DIR/gnocchi-statsd -d --config-file $GNOCCHI_CONF"
}
# stop_gnocchi() - Stop running processes
function stop_gnocchi {
if [ "$GNOCCHI_DEPLOY" == "mod_wsgi" ]; then
disable_apache_site gnocchi
restart_apache_server
fi
# Kill the gnocchi screen windows
for serv in gnocchi-api gnocchi-metricd gnocchi-statsd; do
stop_process $serv
done
}
if is_service_enabled gnocchi-api; then
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
echo_summary "Configuring system services for Gnocchi"
preinstall_gnocchi
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing Gnocchi"
stack_install_service gnocchi
configure_keystone_for_gnocchi
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring Gnocchi"
if _is_ceph_enabled && [[ "$GNOCCHI_STORAGE_BACKEND" = 'ceph' ]] ; then
echo_summary "Configuring Gnocchi for Ceph"
configure_ceph_gnocchi
fi
configure_gnocchi
create_gnocchi_accounts
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing Gnocchi"
init_gnocchi
start_gnocchi
fi
if [[ "$1" == "unstack" ]]; then
echo_summary "Stopping Gnocchi"
stop_gnocchi
fi
if [[ "$1" == "clean" ]]; then
cleanup_gnocchi
fi
fi
# Restore xtrace
$XTRACE
# Tell emacs to use shell-script-mode
## Local variables:
## mode: shell-script
## End:


@ -1,65 +0,0 @@
enable_service gnocchi-api
enable_service gnocchi-metricd
enable_service gnocchi-statsd
# Set up default directories
GNOCCHI_DIR=$DEST/gnocchi
GNOCCHI_CONF_DIR=/etc/gnocchi
GNOCCHI_CONF=$GNOCCHI_CONF_DIR/gnocchi.conf
GNOCCHI_LOG_DIR=/var/log/gnocchi
GNOCCHI_AUTH_CACHE_DIR=${GNOCCHI_AUTH_CACHE_DIR:-/var/cache/gnocchi}
GNOCCHI_WSGI_DIR=${GNOCCHI_WSGI_DIR:-/var/www/gnocchi}
GNOCCHI_DATA_DIR=${GNOCCHI_DATA_DIR:-${DATA_DIR}/gnocchi}
GNOCCHI_COORDINATOR_URL=${GNOCCHI_COORDINATOR_URL:-redis://localhost:6379}
GNOCCHI_METRICD_PROCESSING_DELAY=${GNOCCHI_METRICD_PROCESSING_DELAY:-5}
# GNOCCHI_DEPLOY defines how Gnocchi is deployed, allowed values:
# - mod_wsgi : Run Gnocchi under Apache HTTPd mod_wsgi
# - simple : Run gnocchi-api
# - uwsgi : Run Gnocchi under uwsgi
# - <empty>: Fallback to GNOCCHI_USE_MOD_WSGI or ENABLE_HTTPD_MOD_WSGI_SERVICES
GNOCCHI_DEPLOY=${GNOCCHI_DEPLOY}
# Toggle for deploying Gnocchi with/without Keystone
GNOCCHI_USE_KEYSTONE=$(trueorfalse True GNOCCHI_USE_KEYSTONE)
# Support potential entry-points console scripts and venvs
if [[ ${USE_VENV} = True ]]; then
PROJECT_VENV["gnocchi"]=${GNOCCHI_DIR}.venv
GNOCCHI_BIN_DIR=${PROJECT_VENV["gnocchi"]}/bin
else
GNOCCHI_BIN_DIR=$(get_python_exec_prefix)
fi
# Gnocchi connection info.
GNOCCHI_SERVICE_PROTOCOL=http
# NOTE(chdent): If you are not using mod wsgi you need to set port!
GNOCCHI_SERVICE_PORT=${GNOCCHI_SERVICE_PORT:-8041}
GNOCCHI_SERVICE_PREFIX=${GNOCCHI_SERVICE_PREFIX:-'/metric'}
GNOCCHI_SERVICE_HOST=${GNOCCHI_SERVICE_HOST:-${SERVICE_HOST}}
# Gnocchi statsd info
GNOCCHI_STATSD_RESOURCE_ID=${GNOCCHI_STATSD_RESOURCE_ID:-$(uuidgen)}
GNOCCHI_STATSD_USER_ID=${GNOCCHI_STATSD_USER_ID:-$(uuidgen)}
GNOCCHI_STATSD_PROJECT_ID=${GNOCCHI_STATSD_PROJECT_ID:-$(uuidgen)}
# Ceph gnocchi info
GNOCCHI_CEPH_USER=${GNOCCHI_CEPH_USER:-gnocchi}
GNOCCHI_CEPH_POOL=${GNOCCHI_CEPH_POOL:-gnocchi}
GNOCCHI_CEPH_POOL_PG=${GNOCCHI_CEPH_POOL_PG:-8}
GNOCCHI_CEPH_POOL_PGP=${GNOCCHI_CEPH_POOL_PGP:-8}
# Redis gnocchi info
GNOCCHI_REDIS_URL=${GNOCCHI_REDIS_URL:-redis://localhost:6379}
# Gnocchi backend
GNOCCHI_STORAGE_BACKEND=${GNOCCHI_STORAGE_BACKEND:-redis}
# Grafana settings
GRAFANA_RPM_PKG=${GRAFANA_RPM_PKG:-https://grafanarel.s3.amazonaws.com/builds/grafana-3.0.4-1464167696.x86_64.rpm}
GRAFANA_DEB_PKG=${GRAFANA_DEB_PKG:-https://grafanarel.s3.amazonaws.com/builds/grafana_3.0.4-1464167696_amd64.deb}
GRAFANA_PLUGIN_VERSION=${GRAFANA_PLUGIN_VERSION}
GRAFANA_PLUGINS_DIR=${GRAFANA_PLUGINS_DIR:-$DEST/grafana-gnocchi-datasource}
GRAFANA_PLUGINS_REPO=${GRAFANA_PLUGINS_REPO:-http://github.com/gnocchixyz/grafana-gnocchi-datasource.git}
GRAFANA_URL=${GRAFANA_URL:-http://$HOST_IP:3000}

Binary file not shown (removed, 12 KiB)

Binary file not shown (removed, 362 KiB)

Binary file not shown (removed, 91 KiB)

Binary file not shown (removed, 59 KiB)


@ -1,82 +0,0 @@
======================
Project Architecture
======================
Gnocchi consists of several services: an HTTP REST API (see :doc:`rest`), an
optional statsd-compatible daemon (see :doc:`statsd`), and an asynchronous
processing daemon (named `gnocchi-metricd`). Data is received via the HTTP REST
API or the statsd daemon. `gnocchi-metricd` performs operations (statistics
computing, metric cleanup, etc.) on the received data in the background.
Both the HTTP REST API and the asynchronous processing daemon are stateless and
are scalable. Additional workers can be added depending on load.
.. image:: architecture.png
:align: center
:width: 80%
:alt: Gnocchi architecture
Back-ends
---------
Gnocchi uses three different back-ends for storing data: one for storing new
incoming measures (the incoming driver), one for storing the time series (the
storage driver) and one for indexing the data (the index driver).
The *incoming* storage is responsible for storing new measures sent to metrics.
It is usually, and by default, the same driver as the *storage* one.
The *storage* is responsible for storing measures of created metrics. It
receives timestamps and values, and pre-computes aggregations according to the
defined archive policies.
The *indexer* is responsible for storing the index of all resources, archive
policies and metrics, along with their definitions, types and properties. The
indexer is also responsible for linking resources with metrics.
Available storage back-ends
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gnocchi currently offers different storage drivers:
* File (default)
* `Ceph`_ (preferred)
* `OpenStack Swift`_
* `S3`_
* `Redis`_
The drivers are based on an intermediate library, named *Carbonara*, which
handles the time series manipulation, since none of these storage technologies
handle time series natively.
The *Carbonara*-based drivers work well and are as scalable as their back-end
technology permits. Ceph and Swift are inherently more scalable than the file
driver.
Depending on the size of your architecture, using the file driver and storing
your data on a disk might be enough. If you need to scale the number of servers
with the file driver, you can export and share the data via NFS among all
Gnocchi processes. In any case, the S3, Ceph and Swift drivers are clearly more
scalable. Ceph also offers better consistency, and hence is the recommended
driver.
.. _OpenStack Swift: http://docs.openstack.org/developer/swift/
.. _Ceph: https://ceph.com
.. _`S3`: https://aws.amazon.com/s3/
.. _`Redis`: https://redis.io
Available index back-ends
~~~~~~~~~~~~~~~~~~~~~~~~~
Gnocchi currently offers different index drivers:
* `PostgreSQL`_ (preferred)
* `MySQL`_ (at least version 5.6.4)
Those drivers offer almost the same performance and features, though PostgreSQL
tends to be more performant and has some additional features (e.g. resource
duration computing).
.. _PostgreSQL: http://postgresql.org
.. _MySQL: http://mysql.org
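To make the separation concrete, a minimal ``gnocchi.conf`` sketch selecting the
recommended drivers could look like the following. The option names
(``storage.driver``, ``storage.ceph_username``, ``indexer.url``) are the ones
set by the devstack plugin in this repository; the values are placeholders, and
the incoming driver simply defaults to the storage one when left unset::

    [storage]
    driver = ceph
    ceph_username = gnocchi

    [indexer]
    url = postgresql://gnocchi:secret@localhost/gnocchi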


@ -1,13 +0,0 @@
========
Client
========
Gnocchi currently only provides a Python client and SDK which can be installed
using *pip*::
pip install gnocchiclient
This package provides the `gnocchi` command line tool that can be used to send
requests to Gnocchi. You can read the `full documentation online`_.
.. _full documentation online: http://gnocchi.xyz/gnocchiclient
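For instance, once the client is installed and pointed at your deployment, a few
typical invocations look like this (a sketch based on the gnocchiclient command
names; check ``gnocchi --help`` for the exact options of your version)::

    gnocchi archive-policy list
    gnocchi metric create --archive-policy-name low
    gnocchi metric list
    gnocchi measures show <metric-uuid>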


@ -1,14 +0,0 @@
==================
Collectd support
==================
`Collectd`_ can use Gnocchi to store its data through a plugin called
`collectd-gnocchi`. It can be installed with *pip*::
pip install collectd-gnocchi
`Sources and documentation`_ are also available.
.. _`Collectd`: https://www.collectd.org/
.. _`Sources and documentation`: https://github.com/gnocchixyz/collectd-gnocchi


@ -1,197 +0,0 @@
# -*- coding: utf-8 -*-
#
# Gnocchi documentation build configuration file
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import datetime
import os
import subprocess
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'gnocchi.gendoc',
'sphinxcontrib.httpdomain',
'sphinx.ext.autodoc',
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Gnocchi'
copyright = u'%s, OpenStack Foundation' % datetime.date.today().year
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = subprocess.Popen(['sh', '-c', 'cd ../..; python setup.py --version'],
stdout=subprocess.PIPE).stdout.read()
version = version.strip()
# The full version, including alpha/beta/rc tags.
release = version
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
import sphinx_rtd_theme
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = '_static/gnocchi-logo.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = '_static/gnocchi-icon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'gnocchidoc'
html_theme_options = {
'logo_only': True,
}
# Multiversion docs
scv_sort = ('semver',)
scv_show_banner = True
scv_banner_greatest_tag = True
scv_priority = 'branches'
scv_whitelist_branches = ('master', '^stable/(2\.1|2\.2|[3-9]\.)')
scv_whitelist_tags = ("^[2-9]\.",)
here = os.path.dirname(os.path.realpath(__file__))
html_static_path_abs = ",".join([os.path.join(here, p) for p in html_static_path])
# NOTE(sileht): Override some conf for old versions. Also, warnings as errors
# have been enabled in versions > 3.1, so we can remove all of this when we no
# longer publish versions <= 3.1.X
scv_overflow = ("-D", "html_theme=sphinx_rtd_theme",
"-D", "html_theme_options.logo_only=True",
"-D", "html_logo=gnocchi-logo.png",
"-D", "html_favicon=gnocchi-icon.ico",
"-D", "html_static_path=%s" % html_static_path_abs)


@ -1,33 +0,0 @@
========
Glossary
========
.. glossary::
Resource
An entity representing anything in your infrastructure that you will
associate metric(s) with. It is identified by a unique ID and can contain
attributes.
Metric
An entity storing measures, identified by a UUID. It can be attached to a
resource using a name. How a metric stores its measures is defined by the
archive policy it is associated to.
Measure
A datapoint tuple composed of a timestamp and a value.
Archive policy
A measure storage policy attached to a metric. It determines how long
measures will be kept in a metric and how they will be aggregated.
Granularity
The time between two measures in an aggregated timeseries of a metric.
Timeseries
A list of measures.
Aggregation method
Function used to aggregate multiple measures into one. For example, the
`min` aggregation method will aggregate the values of different measures
to the minimum value of all the measures in the time range.
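As a worked example of how these terms fit together: an archive policy might
keep one aggregated point every five minutes for one day, plus one point per
hour for thirty days. A sketch of such a definition, using the JSON attributes
accepted by the REST API (the policy name and duration strings here are only
illustrative)::

    {
        "name": "example",
        "definition": [
            {"granularity": "5 minutes", "timespan": "1 day"},
            {"granularity": "1 hour", "timespan": "30 days"}
        ]
    }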

Binary file not shown (removed, 81 KiB)


@ -1,52 +0,0 @@
=================
Grafana support
=================
`Grafana`_ has support for Gnocchi through a plugin. It can be installed with
grafana-cli::
sudo grafana-cli plugins install sileht-gnocchi-datasource
`Source`_ and `Documentation`_ are also available.
Grafana has 2 modes of operation: proxy or direct mode. In proxy mode, your
browser only communicates with Grafana, and Grafana communicates with Gnocchi.
In direct mode, your browser communicates with Grafana, Gnocchi, and possibly
Keystone.
Picking the right mode depends on whether your Gnocchi server is reachable by
your browser and/or by your Grafana server.
In order to use Gnocchi with Grafana in proxy mode, you just need to:
1. Install Grafana and its Gnocchi plugin
2. Configure a new datasource in Grafana with the Gnocchi URL.
If you are using the Keystone middleware for authentication, you can also
provide an authentication token.
In order to use Gnocchi with Grafana in direct mode, you need to do a few more
steps:
1. Configure the CORS middleware in `gnocchi.conf` to allow requests from
Grafana::
[cors]
allowed_origin = http://example.com/grafana
2. Configure the CORS middleware in Keystone to allow requests from Grafana too::
[cors]
allowed_origin = http://example.com/grafana
3. Configure a new datasource in Grafana with the Keystone URL, a user, a
project and a password. Your browser will query Keystone for a token, and
then query Gnocchi based on what Grafana needs.
.. image:: grafana-screenshot.png
:align: center
:alt: Grafana screenshot
.. _`Grafana`: http://grafana.org
.. _`Documentation`: https://grafana.net/plugins/sileht-gnocchi-datasource
.. _`Source`: https://github.com/gnocchixyz/grafana-gnocchi-datasource
.. _`CORS`: https://en.wikipedia.org/wiki/Cross-origin_resource_sharing


@ -1,70 +0,0 @@
==================================
Gnocchi Metric as a Service
==================================
.. include:: ../../README.rst
:start-line: 6
Key Features
------------
- HTTP REST interface
- Horizontal scalability
- Metric aggregation
- Measures batching support
- Archiving policy
- Metric value search
- Structured resources
- Resource history
- Queryable resource indexer
- Multi-tenant
- Grafana support
- Nagios/Icinga support
- Statsd protocol support
- Collectd plugin support
Community
---------
You can join Gnocchi's community via the following channels:
- Bug tracker: https://bugs.launchpad.net/gnocchi
- IRC: #gnocchi on `Freenode <https://freenode.net>`_
- Mailing list: `openstack-dev@lists.openstack.org
<http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`_ with
*[gnocchi]* in the `Subject` header.
Why Gnocchi?
------------
Gnocchi has been created to fulfill the need of a time series database usable
in the context of cloud computing: providing the ability to store large
quantities of metrics. It has been designed to handle large amounts of measures
being stored, while being performant, scalable and fault-tolerant. While doing
this, the goal was to make sure not to build any hard dependency on any complex
storage system.
The Gnocchi project was started in 2014 as a spin-off of the `OpenStack
Ceilometer`_ project to address the performance issues that Ceilometer
encountered while using standard databases as storage backends for metrics.
More information is available in `Julien's blog post on Gnocchi
<https://julien.danjou.info/blog/2014/openstack-ceilometer-the-gnocchi-experiment>`_.
.. _`OpenStack Ceilometer`: https://docs.openstack.org/developer/ceilometer/
Documentation
-------------
.. toctree::
:maxdepth: 1
architecture
install
running
client
rest
statsd
grafana
nagios
collectd
glossary
releasenotes/index.rst


@ -1,191 +0,0 @@
==============
Installation
==============
.. _installation:
Installation
============
To install Gnocchi using `pip`, just type::
pip install gnocchi
Depending on the drivers and features you want to use (see :doc:`architecture`
for which driver to pick), you need to install extra variants using, for
example::
pip install gnocchi[postgresql,ceph,keystone]
This would install PostgreSQL support for the indexer, Ceph support for
storage, and Keystone support for authentication and authorization.
The list of variants available is:
* keystone - provides Keystone authentication support
* mysql - provides MySQL indexer support
* postgresql - provides PostgreSQL indexer support
* swift - provides OpenStack Swift storage support
* s3 - provides Amazon S3 storage support
* ceph - provides the common part of the Ceph storage support
* ceph_recommended_lib - provides Ceph (>=0.80) storage support
* ceph_alternative_lib - provides Ceph (>=10.1.0) storage support
* file - provides file driver support
* redis - provides Redis storage support
* doc - provides documentation building support
* test - provides unit and functional tests support
To install Gnocchi from source, run the standard Python installation
procedure::
pip install -e .
Again, depending on the drivers and features you want to use, you need to
install extra variants using, for example::
pip install -e .[postgresql,ceph,ceph_recommended_lib]
Ceph requirements
-----------------
The ceph driver needs to have a Ceph user and a pool already created. They can
be created for example with:
::
ceph osd pool create metrics 8 8
ceph auth get-or-create client.gnocchi mon "allow r" osd "allow rwx pool=metrics"
Gnocchi leverages some librados features (omap, async, operation context) that
are available in the Python binding only since python-rados >= 10.1.0. To handle
this, Gnocchi uses the 'cradox' Python library, which has exactly the same API
but works with Ceph >= 0.80.0.
If Ceph and python-rados are >= 10.1.0, the cradox Python library becomes
optional but is still recommended.
Configuration
=============
Configuration file
-------------------
By default, gnocchi looks for its configuration file in the following places,
in order:
* ``~/.gnocchi/gnocchi.conf``
* ``~/gnocchi.conf``
* ``/etc/gnocchi/gnocchi.conf``
* ``/etc/gnocchi.conf``
* ``~/gnocchi/gnocchi.conf.d``
* ``~/gnocchi.conf.d``
* ``/etc/gnocchi/gnocchi.conf.d``
* ``/etc/gnocchi.conf.d``
No config file is provided with the source code; it will be created during the
installation. In the case where no configuration file was installed, one can
easily be created by running:
::
gnocchi-config-generator > /path/to/gnocchi.conf
Configure Gnocchi by editing the appropriate file.
The configuration file should be pretty explicit, but here are some of the base
options you want to change and configure:
+---------------------+---------------------------------------------------+
| Option name | Help |
+=====================+===================================================+
| storage.driver | The storage driver for metrics. |
+---------------------+---------------------------------------------------+
| indexer.url | URL to your indexer. |
+---------------------+---------------------------------------------------+
| storage.file_* | Configuration options to store files |
| | if you use the file storage driver. |
+---------------------+---------------------------------------------------+
| storage.swift_* | Configuration options to access Swift |
| | if you use the Swift storage driver. |
+---------------------+---------------------------------------------------+
| storage.ceph_* | Configuration options to access Ceph |
| | if you use the Ceph storage driver. |
+---------------------+---------------------------------------------------+
| storage.s3_* | Configuration options to access S3 |
| | if you use the S3 storage driver. |
+---------------------+---------------------------------------------------+
| storage.redis_* | Configuration options to access Redis |
| | if you use the Redis storage driver. |
+---------------------+---------------------------------------------------+
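For example, a small deployment using the file storage driver and a PostgreSQL
indexer could start from a configuration like the following sketch (the option
names match the table above and the devstack plugin in this tree; paths and
credentials are illustrative)::

    [storage]
    driver = file
    file_basepath = /var/lib/gnocchi

    [indexer]
    url = postgresql://gnocchi:secret@localhost/gnocchi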
Configuring authentication
-----------------------------
The API server supports different authentication methods: `basic` (the default),
which uses the standard HTTP `Authorization` header, or `keystone` to use
`OpenStack Keystone`_. If you successfully installed the `keystone` flavor
using `pip` (see :ref:`installation`), you can set `api.auth_mode` to
`keystone` to enable Keystone authentication.
.. _`Paste Deployment`: http://pythonpaste.org/deploy/
.. _`OpenStack Keystone`: http://launchpad.net/keystone
Initialization
==============
Once you have configured Gnocchi properly you need to initialize the indexer
and storage:
::
gnocchi-upgrade
Upgrading
=========
In order to upgrade from a previous version of Gnocchi, you need to make sure
that your indexer and storage are properly upgraded. Run the following:
1. Stop the old version of Gnocchi API server and `gnocchi-statsd` daemon
2. Stop the old version of `gnocchi-metricd` daemon
.. note::
Data in backlog is never migrated between versions. Ensure the backlog is
empty before any upgrade to ensure data is not lost.
3. Install the new version of Gnocchi
4. Run `gnocchi-upgrade`
This can take several hours depending on the size of your index and
storage.
5. Start the new Gnocchi API server, `gnocchi-metricd`
and `gnocchi-statsd` daemons
Installation Using Devstack
===========================
To enable Gnocchi in `devstack`_, add the following to local.conf:
::
enable_plugin gnocchi https://github.com/openstack/gnocchi master
To enable Grafana support in devstack, you can also enable `gnocchi-grafana`::
enable_service gnocchi-grafana
Then, you can start devstack:
::
./stack.sh
.. _devstack: http://devstack.org

View File

@ -1,19 +0,0 @@
=====================
Nagios/Icinga support
=====================
`Nagios`_ and `Icinga`_ has support for Gnocchi through a Gnocchi-nagios
service. It can be installed with pip::
pip install gnocchi-nagios
`Source`_ and `Documentation`_ are also available.
Gnocchi-nagios collects perfdata files generated by `Nagios`_ or `Icinga`_;
transforms them into Gnocchi resources, metrics and measures format; and
publishes them to the Gnocchi REST API.
.. _`Nagios`: https://www.nagios.org/
.. _`Icinga`: https://www.icinga.com/
.. _`Documentation`: http://gnocchi-nagios.readthedocs.io/en/latest/
.. _`Source`: https://github.com/sileht/gnocchi-nagios

View File

@ -1,6 +0,0 @@
===================================
2.1 Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/2.1

View File

@ -1,6 +0,0 @@
===================================
2.2 Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/2.2

View File

@ -1,6 +0,0 @@
===================================
3.0 Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/3.0

View File

@ -1,6 +0,0 @@
===================================
3.1 Series Release Notes
===================================
.. release-notes::
:branch: origin/stable/3.1

View File

@ -1,11 +0,0 @@
Release Notes
=============
.. toctree::
:maxdepth: 2
unreleased
3.1
3.0
2.2
2.1

View File

@ -1,5 +0,0 @@
============================
Current Series Release Notes
============================
.. release-notes::

View File

@ -1,586 +0,0 @@
================
REST API Usage
================
Authentication
==============
By default, the authentication is configured to the "basic" mode. You need to
provide an `Authorization` header in your HTTP requests with a valid username
(the password is not used). The "admin" username is granted all privileges,
whereas any other username is recognized as having standard permissions.
You can customize permissions by specifying a different `policy_file` than the
default one.
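As a quick illustration, a request using basic authentication could look like
the following (the endpoint is a placeholder, assuming the API listens on
localhost:8041 as in the uwsgi example later in this documentation):
::
    curl -u admin: http://localhost:8041/v1/metric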
If you set the `api.auth_mode` value to `keystone`, the OpenStack Keystone
middleware will be enabled for authentication. You then need to authenticate
against Keystone and provide an `X-Auth-Token` header with a valid token for
each request sent to Gnocchi's API.
Metrics
=======
Gnocchi provides an object type that is called *metric*. A metric designates
anything that can be measured: the CPU usage of a server, the temperature of a
room or the number of bytes sent by a network interface.
A metric only has a few properties: a UUID to identify it, a name, and the
archive policy that will be used to store and aggregate the measures.
To create a metric, the following API request should be used:
{{ scenarios['create-metric']['doc'] }}
Once created, you can retrieve the metric information:
{{ scenarios['get-metric']['doc'] }}
To retrieve the list of all the metrics created, use the following request:
{{ scenarios['list-metric']['doc'] }}
.. note::
Considering the large volume of metrics Gnocchi will store, query results are
limited to the `max_limit` value set in the configuration file. Returned results
are ordered by metrics' id values. To retrieve the next page of results, the
id of a metric should be given as `marker` for the beginning of the next page
of results.
Default ordering and limits as well as page start can be modified
using query parameters:
{{ scenarios['list-metric-pagination']['doc'] }}
It is possible to send measures to the metric:
{{ scenarios['post-measures']['doc'] }}
If there are no errors, Gnocchi does not return a response body, only a simple
status code. It is possible to provide any number of measures.
.. IMPORTANT::
While it is possible to send any number of (timestamp, value) pairs, you still
need to honor the constraints defined by the archive policy used by the metric,
such as the maximum timespan.
Once measures are sent, it is possible to retrieve them using *GET* on the same
endpoint:
{{ scenarios['get-measures']['doc'] }}
Depending on the driver, there may be some lag after POSTing measures before
they are processed and queryable. To ensure your query returns all measures
that have been POSTed, you can force any unprocessed measures to be handled:
{{ scenarios['get-measures-refresh']['doc'] }}
.. note::
Depending on the amount of data that is unprocessed, `refresh` may add
some overhead to your query.
The list of points returned is composed of tuples with (timestamp, granularity,
value) sorted by timestamp. The granularity is the timespan covered by
aggregation for this point.
It is possible to filter the measures over a time range by specifying the
*start* and/or *stop* parameters to the query with a timestamp. The timestamp
format can be either a floating point number (UNIX epoch) or an ISO 8601
formatted timestamp:
{{ scenarios['get-measures-from']['doc'] }}
By default, the aggregated values that are returned use the *mean* aggregation
method. It is possible to request any other method by specifying the
*aggregation* query parameter:
{{ scenarios['get-measures-max']['doc'] }}
The list of available aggregation methods is: *mean*, *sum*, *last*, *max*,
*min*, *std*, *median*, *first*, *count* and *Npct* (with 0 < N < 100).
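For instance, a 95th percentile aggregate could be requested as follows (the
metric id is a placeholder):
::
    GET /v1/metric/<metric-id>/measures?aggregation=95pct HTTP/1.1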
It's possible to provide the `granularity` argument to specify the granularity
to retrieve, rather than all the granularities available:
{{ scenarios['get-measures-granularity']['doc'] }}
In addition to granularities defined by the archive policy, measures can be
resampled to a new granularity.
{{ scenarios['get-measures-resample']['doc'] }}
.. note::
Depending on the aggregation method and frequency of measures, resampled
data may lack accuracy as it is working against previously aggregated data.
Measures batching
=================
It is also possible to send measures in batches, i.e. to send several measures
for different metrics in a single call:
{{ scenarios['post-measures-batch']['doc'] }}
Or using named metrics of resources:
{{ scenarios['post-measures-batch-named']['doc'] }}
If some named metrics specified in the batch request do not exist, Gnocchi can
try to create them as long as an archive policy rule matches:
{{ scenarios['post-measures-batch-named-create']['doc'] }}
Archive Policy
==============
When sending measures for a metric to Gnocchi, the values are dynamically
aggregated. That means that Gnocchi does not store all sent measures, but
aggregates them over a certain period of time. Gnocchi provides several
built-in aggregation methods (mean, min, max, sum…).
An archive policy is defined by a list of items in the `definition` field. Each
item is composed of the timespan and the level of precision that must be kept
when aggregating data, determined using at least 2 of the `points`,
`granularity` and `timespan` fields. For example, an item might be defined
as 12 points over 1 hour (one point every 5 minutes), or 1 point every 1 hour
over 1 day (24 points).
By default, new measures can only be processed if they have timestamps in the
future or within the last aggregation period. The size of the last aggregation
period is based on the largest granularity defined in the archive policy
definition.
To allow processing measures that are older than the period, the `back_window`
parameter can be used to set the number of coarsest periods to keep. That way
it is possible to process measures that are older than the last timestamp
period boundary.
For example, if an archive policy is defined with coarsest aggregation of 1
hour, and the last point processed has a timestamp of 14:34, it's possible to
process measures back to 14:00 with a `back_window` of 0. If the `back_window`
is set to 2, it will be possible to send measures with timestamp back to 12:00
(14:00 minus 2 times 1 hour).
The REST API allows you to create archive policies in this way:
{{ scenarios['create-archive-policy']['doc'] }}
By default, the aggregation methods computed and stored are the ones defined
with `default_aggregation_methods` in the configuration file. It is possible to
change the aggregation methods used in an archive policy by specifying the list
of aggregation methods to use in the `aggregation_methods` attribute of an
archive policy.
{{ scenarios['create-archive-policy-without-max']['doc'] }}
The list of aggregation methods can either be:
- a list of aggregation methods to use, e.g. `["mean", "max"]`
- a list of methods to remove (prefixed by `-`) and/or to add (prefixed by `+`)
to the default list (e.g. `["+mean", "-last"]`)
If `*` is included in the list, it's substituted by the list of all supported
aggregation methods.
Once the archive policy is created, the complete set of properties is computed
and returned, with the URL of the archive policy. This URL can be used to
retrieve the details of the archive policy later:
{{ scenarios['get-archive-policy']['doc'] }}
It is also possible to list archive policies:
{{ scenarios['list-archive-policy']['doc'] }}
Existing archive policies can be modified to retain more or less data depending
on requirements. If the policy coverage is expanded, measures are not
retroactively calculated as backfill to accommodate the new timespan:
{{ scenarios['update-archive-policy']['doc'] }}
.. note::
Granularities cannot be changed to a different rate. Also, granularities
cannot be added or dropped from a policy.
It is possible to delete an archive policy if it is not used by any metric:
{{ scenarios['delete-archive-policy']['doc'] }}
.. note::
An archive policy cannot be deleted until all metrics associated with it
are removed by a metricd daemon.
Archive Policy Rule
===================
Gnocchi provides the ability to define a mapping called `archive_policy_rule`.
An archive policy rule defines a mapping between a metric and an archive policy.
This gives users the ability to pre-define rules so an archive policy is assigned to
metrics based on a matched pattern.
An archive policy rule has a few properties: a name to identify it, the name
of the archive policy to apply, and a metric pattern to match metric names.
For example, an archive policy rule could map any volume metric matching the
pattern `volume.*` to a medium archive policy. When a metric is posted with a
name such as `volume.size`, it matches the pattern, the rule applies, and the
archive policy is set to medium. If multiple rules match, the longest matching
rule is taken. For example, if two rules exist which match `*` and `disk.*`, a
`disk.io.rate` metric would match the `disk.*` rule rather than the `*` rule.
To create a rule, the following API request should be used:
{{ scenarios['create-archive-policy-rule']['doc'] }}
The `metric_pattern` is used for pattern matching, for example:
- `*` matches anything
- `disk.*` matches disk.io
- `disk.io.*` matches disk.io.rate
Once created, you can retrieve the rule information:
{{ scenarios['get-archive-policy-rule']['doc'] }}
It is also possible to list archive policy rules. The result set is ordered by
the `metric_pattern`, in reverse alphabetical order:
{{ scenarios['list-archive-policy-rule']['doc'] }}
It is possible to delete an archive policy rule:
{{ scenarios['delete-archive-policy-rule']['doc'] }}
Resources
=========
Gnocchi provides the ability to store and index resources. Each resource has a
type. The basic type of resources is *generic*, but more specialized subtypes
also exist, especially to describe OpenStack resources.
The REST API allows you to manipulate resources. To create a generic resource:
{{ scenarios['create-resource-generic']['doc'] }}
The *id*, *user_id* and *project_id* attributes must be UUIDs. The timestamps
describing the lifespan of the resource are optional, and *started_at* is set
by default to the current timestamp.
It's possible to retrieve the resource by the URL provided in the `Location`
header.
More specialized resources can be created. For example, the *instance* is used
to describe an OpenStack instance as managed by Nova_.
{{ scenarios['create-resource-instance']['doc'] }}
All specialized types have their own optional and mandatory attributes,
but they all include attributes from the generic type as well.
It is possible to create metrics at the same time you create a resource to save
some requests:
{{ scenarios['create-resource-with-new-metrics']['doc'] }}
To retrieve a resource by its URL provided by the `Location` header at creation
time:
{{ scenarios['get-resource-generic']['doc'] }}
It's possible to modify a resource by re-uploading it partially with the
modified fields:
{{ scenarios['patch-resource']['doc'] }}
And to retrieve its modification history:
{{ scenarios['get-patched-instance-history']['doc'] }}
It is possible to delete a resource altogether:
{{ scenarios['delete-resource-generic']['doc'] }}
It is also possible to delete a batch of resources based on attribute values;
the number of deleted resources is returned.
To delete resources based on ids:
{{ scenarios['delete-resources-by-ids']['doc'] }}
or delete resources based on time:
{{ scenarios['delete-resources-by-time']['doc']}}
.. IMPORTANT::
When a resource is deleted, all its associated metrics are deleted at the
same time.
When a batch of resources is deleted, an attribute filter is required to
avoid deleting the entire database.
All resources can be listed, either by using the `generic` type that will list
all types of resources, or by filtering on their resource type:
{{ scenarios['list-resource-generic']['doc'] }}
No attributes specific to the resource type are retrieved when using the
`generic` endpoint. To retrieve the details, either list using the specific
resource type endpoint:
{{ scenarios['list-resource-instance']['doc'] }}
or using `details=true` in the query parameter:
{{ scenarios['list-resource-generic-details']['doc'] }}
.. note::
Similar to metric list, query results are limited to `max_limit` value set in
the configuration file.
Returned results represent a single page of data and are ordered by resources'
revision_start time and started_at values:
{{ scenarios['list-resource-generic-pagination']['doc'] }}
Each resource can be linked to any number of metrics. The `metrics` attribute
is a key/value field where the key is the name of the relationship and
the value is a metric:
{{ scenarios['create-resource-instance-with-metrics']['doc'] }}
It's also possible to create metrics dynamically while creating a resource:
{{ scenarios['create-resource-instance-with-dynamic-metrics']['doc'] }}
The metric associated with a resource can be accessed and manipulated using the
usual `/v1/metric` endpoint or using the named relationship with the resource:
{{ scenarios['get-resource-named-metrics-measures']['doc'] }}
The same endpoint can be used to append metrics to a resource:
{{ scenarios['append-metrics-to-resource']['doc'] }}
.. _Nova: http://launchpad.net/nova
Resource Types
==============
Gnocchi is able to manage resource types with custom attributes.
To create a new resource type:
{{ scenarios['create-resource-type']['doc'] }}
Then to retrieve its description:
{{ scenarios['get-resource-type']['doc'] }}
All resource types can be listed like this:
{{ scenarios['list-resource-type']['doc'] }}
It can also be deleted if no more resources are associated to it:
{{ scenarios['delete-resource-type']['doc'] }}
Attributes can be added or removed:
{{ scenarios['patch-resource-type']['doc'] }}
Creating a resource type means creating new tables on the indexer backend.
This is a heavy operation that will lock some tables for a short amount of time.
When the resource type is created, its initial `state` is `creating`. When the
new tables have been created, the state switches to `active` and the new
resource type is ready to be used. If something unexpected occurs during this
step, the state switches to `creation_error`.
The same behavior occurs when the resource type is deleted. The state first
switches to `deleting` and the resource type is no longer usable. Then the
tables are removed and finally the resource_type is really deleted from the
database. If something unexpected occurs, the state switches to
`deletion_error`.
Searching for resources
=======================
It's possible to search for resources using a query mechanism, using the
`POST` method and uploading a JSON formatted query.
When listing resources, it is possible to filter resources based on attributes
values:
{{ scenarios['search-resource-for-user']['doc'] }}
Or even:
{{ scenarios['search-resource-for-host-like']['doc'] }}
Complex operators such as `and` and `or` are also available:
{{ scenarios['search-resource-for-user-after-timestamp']['doc'] }}
Details about the resource can also be retrieved at the same time:
{{ scenarios['search-resource-for-user-details']['doc'] }}
It's possible to search for old revisions of resources in the same ways:
{{ scenarios['search-resource-history']['doc'] }}
It is also possible to send the *history* parameter in the *Accept* header:
{{ scenarios['search-resource-history-in-accept']['doc'] }}
The timerange of the history can be set, too:
{{ scenarios['search-resource-history-partial']['doc'] }}
The supported operators are: equal to (`=`, `==` or `eq`), less than (`<` or
`lt`), greater than (`>` or `gt`), less than or equal to (`<=`, `le` or `≤`),
greater than or equal to (`>=`, `ge` or `≥`), not equal to (`!=`, `ne` or `≠`),
value is in (`in`), value is like (`like`), or (`or` or `∨`), and (`and` or
`∧`) and negation (`not`).
The special attribute `lifespan` which is equivalent to `ended_at - started_at`
is also available in the filtering queries.
{{ scenarios['search-resource-lifespan']['doc'] }}
Searching for values in metrics
===============================
It is possible to search for values in metrics. For example, this will look for
all values that are greater than or equal to 50 if we add 23 to them and that
are not equal to 55. You have to specify the list of metrics to look into by
using the `metric_id` query parameter several times.
{{ scenarios['search-value-in-metric']['doc'] }}
And it is possible to search for values in metrics by using one or multiple
granularities:
{{ scenarios['search-value-in-metrics-by-granularity']['doc'] }}
You can specify a time range to look for by specifying the `start` and/or
`stop` query parameter, and the aggregation method to use by specifying the
`aggregation` query parameter.
The supported operators are: equal to (`=`, `==` or `eq`), less than (`<` or
`lt`), greater than (`>` or `gt`), less than or equal to (`<=`, `le` or `≤`),
greater than or equal to (`>=`, `ge` or `≥`), not equal to (`!=`, `ne` or `≠`),
addition (`+` or `add`), subtraction (`-` or `sub`), multiplication (`*`,
`mul` or `×`) and division (`/`, `div` or `÷`). These operators take either one
argument, in which case the other operand is the value of the measure, or two
arguments.
The operators or (`or` or `∨`), and (`and` or `∧`) and `not` are also
supported, and take a list of arguments as parameters.
Aggregation across metrics
==========================
Gnocchi allows on-the-fly aggregation of already aggregated metric data.
It can be done by providing the list of metrics to aggregate:
{{ scenarios['get-across-metrics-measures-by-metric-ids']['doc'] }}
.. Note::
This aggregation is done against the aggregates built and updated for
a metric when new measurements are posted to Gnocchi. Therefore, aggregating
this already aggregated data may not make sense for certain kinds of
aggregation methods (e.g. std).
By default, the measures are aggregated using the aggregation method provided,
e.g. you'll get a mean of means, or a max of maxes. You can specify what method
to use over the retrieved aggregation by using the `reaggregation` parameter:
{{ scenarios['get-across-metrics-measures-by-metric-ids-reaggregate']['doc'] }}
It's also possible to do that aggregation on metrics linked to resources. In
order to select these resources, the following endpoint accepts a query such as
the one described in `Searching for resources`_.
{{ scenarios['get-across-metrics-measures-by-attributes-lookup']['doc'] }}
It is possible to group the resource search results by any attribute of the
requested resource type, and then compute the aggregation:
{{ scenarios['get-across-metrics-measures-by-attributes-lookup-groupby']['doc'] }}
Similar to retrieving measures for a single metric, the `refresh` parameter
can be provided to force all POSTed measures to be processed across all
metrics before computing the result. The `resample` parameter may be used as
well.
.. note::
Resampling is done prior to any reaggregation if both parameters are
specified.
Also, aggregation across metrics behaves differently depending on whether
boundary values are set (`start` and `stop`) and whether `needed_overlap` is
set.
If boundaries are not set, Gnocchi computes the aggregation only with points
whose timestamps are present in all timeseries. When boundaries are set,
Gnocchi expects a certain percentage of timestamps to be common between
timeseries; this percentage is controlled by `needed_overlap` (which defaults
to 100%). If this percentage is not reached, an error is returned.
The ability to fill in points missing from a subset of timeseries is supported
by specifying a `fill` value. Valid fill values include any valid float or
`null` which will compute aggregation with only the points that exist. The
`fill` parameter will not backfill timestamps which contain no points in any
of the timeseries. Only timestamps which have datapoints in at least one of
the timeseries are returned.
.. note::
A granularity must be specified when using the `fill` parameter.
{{ scenarios['get-across-metrics-measures-by-metric-ids-fill']['doc'] }}
Capabilities
============
The list of aggregation methods that can be used in Gnocchi is extendable and
can differ between deployments. It is possible to get the list of supported
aggregation methods from the API server:
{{ scenarios['get-capabilities']['doc'] }}
Status
======
The overall status of the Gnocchi installation can be retrieved via an API call
reporting values such as the number of new measures to process for each metric:
{{ scenarios['get-status']['doc'] }}
Timestamp format
================
Timestamps used in Gnocchi are always returned using the ISO 8601 format.
Gnocchi is able to understand a few timestamp formats when querying or
creating resources, for example:
- "2014-01-01 12:12:34" or "2014-05-20T10:00:45.856219", ISO 8601 timestamps.
- "10 minutes", which means "10 minutes from now".
- "-2 days", which means "2 days ago".
- 1421767030, a Unix epoch based timestamp.
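For instance, a relative timestamp might be used when filtering measures (the
metric id is a placeholder, and the space in "-2 days" has to be URL-encoded by
your HTTP client):
::
    GET /v1/metric/<metric-id>/measures?start=-2%20days HTTP/1.1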

View File

@ -1,749 +0,0 @@
- name: create-archive-policy
request: |
POST /v1/archive_policy HTTP/1.1
Content-Type: application/json
{
"name": "short",
"back_window": 0,
"definition": [
{
"granularity": "1s",
"timespan": "1 hour"
},
{
"points": 48,
"timespan": "1 day"
}
]
}
- name: create-archive-policy-without-max
request: |
POST /v1/archive_policy HTTP/1.1
Content-Type: application/json
{
"name": "short-without-max",
"aggregation_methods": ["-max", "-min"],
"back_window": 0,
"definition": [
{
"granularity": "1s",
"timespan": "1 hour"
},
{
"points": 48,
"timespan": "1 day"
}
]
}
- name: get-archive-policy
request: GET /v1/archive_policy/{{ scenarios['create-archive-policy']['response'].json['name'] }} HTTP/1.1
- name: list-archive-policy
request: GET /v1/archive_policy HTTP/1.1
- name: update-archive-policy
request: |
PATCH /v1/archive_policy/{{ scenarios['create-archive-policy']['response'].json['name'] }} HTTP/1.1
Content-Type: application/json
{
"definition": [
{
"granularity": "1s",
"timespan": "1 hour"
},
{
"points": 48,
"timespan": "1 day"
}
]
}
- name: create-archive-policy-to-delete
request: |
POST /v1/archive_policy HTTP/1.1
Content-Type: application/json
{
"name": "some-archive-policy",
"back_window": 0,
"definition": [
{
"granularity": "1s",
"timespan": "1 hour"
},
{
"points": 48,
"timespan": "1 day"
}
]
}
- name: delete-archive-policy
request: DELETE /v1/archive_policy/{{ scenarios['create-archive-policy-to-delete']['response'].json['name'] }} HTTP/1.1
- name: create-metric
request: |
POST /v1/metric HTTP/1.1
Content-Type: application/json
{
"archive_policy_name": "high"
}
- name: create-metric-2
request: |
POST /v1/metric HTTP/1.1
Content-Type: application/json
{
"archive_policy_name": "low"
}
- name: create-archive-policy-rule
request: |
POST /v1/archive_policy_rule HTTP/1.1
Content-Type: application/json
{
"name": "test_rule",
"metric_pattern": "disk.io.*",
"archive_policy_name": "low"
}
- name: get-archive-policy-rule
request: GET /v1/archive_policy_rule/{{ scenarios['create-archive-policy-rule']['response'].json['name'] }} HTTP/1.1
- name: list-archive-policy-rule
request: GET /v1/archive_policy_rule HTTP/1.1
- name: create-archive-policy-rule-to-delete
request: |
POST /v1/archive_policy_rule HTTP/1.1
Content-Type: application/json
{
"name": "test_rule_delete",
"metric_pattern": "disk.io.*",
"archive_policy_name": "low"
}
- name: delete-archive-policy-rule
request: DELETE /v1/archive_policy_rule/{{ scenarios['create-archive-policy-rule-to-delete']['response'].json['name'] }} HTTP/1.1
- name: get-metric
request: GET /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }} HTTP/1.1
- name: list-metric
request: GET /v1/metric HTTP/1.1
- name: list-metric-pagination
request: GET /v1/metric?limit=100&sort=name:asc HTTP/1.1
- name: post-measures
request: |
POST /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }}/measures HTTP/1.1
Content-Type: application/json
[
{
"timestamp": "2014-10-06T14:33:57",
"value": 43.1
},
{
"timestamp": "2014-10-06T14:34:12",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:20",
"value": 2
}
]
- name: post-measures-batch
request: |
POST /v1/batch/metrics/measures HTTP/1.1
Content-Type: application/json
{
"{{ scenarios['create-metric']['response'].json['id'] }}":
[
{
"timestamp": "2014-10-06T14:34:12",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:20",
"value": 2
}
],
"{{ scenarios['create-metric-2']['response'].json['id'] }}":
[
{
"timestamp": "2014-10-06T16:12:12",
"value": 3
},
{
"timestamp": "2014-10-06T18:14:52",
"value": 4
}
]
}
- name: search-value-in-metric
request: |
POST /v1/search/metric?metric_id={{ scenarios['create-metric']['response'].json['id'] }} HTTP/1.1
Content-Type: application/json
{"and": [{">=": [{"+": 23}, 50]}, {"!=": 55}]}
- name: create-metric-a
request: |
POST /v1/metric HTTP/1.1
Content-Type: application/json
{
"archive_policy_name": "short"
}
- name: post-measures-for-granularity-search
request: |
POST /v1/metric/{{ scenarios['create-metric-a']['response'].json['id'] }}/measures HTTP/1.1
Content-Type: application/json
[
{
"timestamp": "2014-10-06T14:34:12",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:14",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:16",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:18",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:20",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:22",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:24",
"value": 12
}
]
- name: search-value-in-metrics-by-granularity
request: |
POST /v1/search/metric?metric_id={{ scenarios['create-metric-a']['response'].json['id'] }}&granularity=1second&granularity=1800s HTTP/1.1
Content-Type: application/json
{"=": 12}
- name: get-measures
request: GET /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }}/measures HTTP/1.1
- name: get-measures-from
request: GET /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }}/measures?start=2014-10-06T14:34 HTTP/1.1
- name: get-measures-max
request: GET /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }}/measures?aggregation=max HTTP/1.1
- name: get-measures-granularity
request: GET /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }}/measures?granularity=1 HTTP/1.1
- name: get-measures-refresh
request: GET /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }}/measures?refresh=true HTTP/1.1
- name: get-measures-resample
request: GET /v1/metric/{{ scenarios['create-metric']['response'].json['id'] }}/measures?resample=5&granularity=1 HTTP/1.1
- name: create-resource-generic
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "75C44741-CC60-4033-804E-2D3098C7D2E9",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}
- name: create-resource-with-new-metrics
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "AB68DA77-FA82-4E67-ABA9-270C5A98CBCB",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"metrics": {"temperature": {"archive_policy_name": "low"}}
}
- name: create-resource-type-instance
request: |
POST /v1/resource_type HTTP/1.1
Content-Type: application/json
{
"name": "instance",
"attributes": {
"display_name": {"type": "string", "required": true},
"flavor_id": {"type": "string", "required": true},
"image_ref": {"type": "string", "required": true},
"host": {"type": "string", "required": true},
"server_group": {"type": "string", "required": false}
}
}
- name: create-resource-instance
request: |
POST /v1/resource/instance HTTP/1.1
Content-Type: application/json
{
"id": "6868DA77-FA82-4E67-ABA9-270C5AE8CBCA",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"started_at": "2014-01-02 23:23:34",
"ended_at": "2014-01-04 10:00:12",
"flavor_id": "2",
"image_ref": "http://image",
"host": "compute1",
"display_name": "myvm",
"metrics": {}
}
- name: list-resource-generic
request: GET /v1/resource/generic HTTP/1.1
- name: list-resource-instance
request: GET /v1/resource/instance HTTP/1.1
- name: list-resource-generic-details
request: GET /v1/resource/generic?details=true HTTP/1.1
- name: list-resource-generic-pagination
request: GET /v1/resource/generic?limit=2&sort=id:asc HTTP/1.1
- name: search-resource-for-user
request: |
POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
{"=": {"user_id": "{{ scenarios['create-resource-instance']['response'].json['user_id'] }}"}}
- name: search-resource-for-host-like
request: |
POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
{"like": {"host": "compute%"}}
- name: search-resource-for-user-details
request: |
POST /v1/search/resource/generic?details=true HTTP/1.1
Content-Type: application/json
{"=": {"user_id": "{{ scenarios['create-resource-instance']['response'].json['user_id'] }}"}}
- name: search-resource-for-user-after-timestamp
request: |
POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
{"and": [
{"=": {"user_id": "{{ scenarios['create-resource-instance']['response'].json['user_id'] }}"}},
{">=": {"started_at": "2010-01-01"}}
]}
- name: search-resource-lifespan
request: |
POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
{">=": {"lifespan": "30 min"}}
- name: get-resource-generic
request: GET /v1/resource/generic/{{ scenarios['create-resource-generic']['response'].json['id'] }} HTTP/1.1
- name: get-instance
request: GET /v1/resource/instance/{{ scenarios['create-resource-instance']['response'].json['id'] }} HTTP/1.1
- name: create-resource-instance-bis
request: |
POST /v1/resource/instance HTTP/1.1
Content-Type: application/json
{
"id": "AB0B5802-E79B-4C84-8998-9237F60D9CAE",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"flavor_id": "2",
"image_ref": "http://image",
"host": "compute1",
"display_name": "myvm",
"metrics": {}
}
- name: patch-resource
request: |
PATCH /v1/resource/instance/{{ scenarios['create-resource-instance']['response'].json['id'] }} HTTP/1.1
Content-Type: application/json
{"host": "compute2"}
- name: get-patched-instance-history
request: GET /v1/resource/instance/{{ scenarios['create-resource-instance']['response'].json['id'] }}/history HTTP/1.1
- name: get-patched-instance
request: GET /v1/resource/instance/{{ scenarios['create-resource-instance']['response'].json['id'] }} HTTP/1.1
- name: create-resource-type
request: |
POST /v1/resource_type HTTP/1.1
Content-Type: application/json
{
"name": "my_custom_type",
"attributes": {
"myid": {"type": "uuid"},
"display_name": {"type": "string", "required": true},
"prefix": {"type": "string", "required": false, "max_length": 8, "min_length": 3},
"size": {"type": "number", "min": 5, "max": 32.8},
"enabled": {"type": "bool", "required": false}
}
}
- name: create-resource-type-2
request: |
POST /v1/resource_type HTTP/1.1
Content-Type: application/json
{"name": "my_other_type"}
- name: get-resource-type
request: GET /v1/resource_type/my_custom_type HTTP/1.1
- name: list-resource-type
request: GET /v1/resource_type HTTP/1.1
- name: patch-resource-type
request: |
PATCH /v1/resource_type/my_custom_type HTTP/1.1
Content-Type: application/json-patch+json
[
{
"op": "add",
"path": "/attributes/awesome-stuff",
"value": {"type": "bool", "required": false}
},
{
"op": "add",
"path": "/attributes/required-stuff",
"value": {"type": "bool", "required": true, "options": {"fill": true}}
},
{
"op": "remove",
"path": "/attributes/prefix"
}
]
- name: delete-resource-type
request: DELETE /v1/resource_type/my_custom_type HTTP/1.1
- name: search-resource-history
request: |
POST /v1/search/resource/instance?history=true HTTP/1.1
Content-Type: application/json
{"=": {"id": "{{ scenarios['create-resource-instance']['response'].json['id'] }}"}}
- name: search-resource-history-in-accept
request: |
POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
Accept: application/json; history=true
{"=": {"id": "{{ scenarios['create-resource-instance']['response'].json['id'] }}"}}
- name: search-resource-history-partial
request: |
POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
Accept: application/json; history=true
{"and": [
{"=": {"host": "compute1"}},
{">=": {"revision_start": "{{ scenarios['get-instance']['response'].json['revision_start'] }}"}},
{"or": [{"<=": {"revision_end": "{{ scenarios['get-patched-instance']['response'].json['revision_start'] }}"}},
{"=": {"revision_end": null}}]}
]}
- name: create-resource-instance-with-metrics
request: |
POST /v1/resource/instance HTTP/1.1
Content-Type: application/json
{
"id": "6F24EDD9-5A2F-4592-B708-FFBED821C5D2",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"flavor_id": "2",
"image_ref": "http://image",
"host": "compute1",
"display_name": "myvm2",
"server_group": "my_autoscaling_group",
"metrics": {"cpu.util": "{{ scenarios['create-metric']['response'].json['id'] }}"}
}
- name: create-resource-instance-with-dynamic-metrics
request: |
POST /v1/resource/instance HTTP/1.1
Content-Type: application/json
{
"id": "15e9c872-7ca9-11e4-a2da-2fb4032dfc09",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"flavor_id": "2",
"image_ref": "http://image",
"host": "compute2",
"display_name": "myvm3",
"server_group": "my_autoscaling_group",
"metrics": {"cpu.util": {"archive_policy_name": "{{ scenarios['create-archive-policy']['response'].json['name'] }}"}}
}
- name: post-measures-batch-named
request: |
POST /v1/batch/resources/metrics/measures HTTP/1.1
Content-Type: application/json
{
"{{ scenarios['create-resource-with-new-metrics']['response'].json['id'] }}": {
"temperature": [
{ "timestamp": "2014-10-06T14:34:12", "value": 17 },
{ "timestamp": "2014-10-06T14:34:20", "value": 18 }
]
},
"{{ scenarios['create-resource-instance-with-dynamic-metrics']['response'].json['id'] }}": {
"cpu.util": [
{ "timestamp": "2014-10-06T14:34:12", "value": 12 },
{ "timestamp": "2014-10-06T14:34:20", "value": 2 }
]
},
"{{ scenarios['create-resource-instance-with-metrics']['response'].json['id'] }}": {
"cpu.util": [
{ "timestamp": "2014-10-06T14:34:12", "value": 6 },
{ "timestamp": "2014-10-06T14:34:20", "value": 25 }
]
}
}
- name: post-measures-batch-named-create
request: |
POST /v1/batch/resources/metrics/measures?create_metrics=true HTTP/1.1
Content-Type: application/json
{
"{{ scenarios['create-resource-with-new-metrics']['response'].json['id'] }}": {
"disk.io.test": [
{ "timestamp": "2014-10-06T14:34:12", "value": 71 },
{ "timestamp": "2014-10-06T14:34:20", "value": 81 }
]
}
}
- name: delete-resource-generic
request: DELETE /v1/resource/generic/{{ scenarios['create-resource-generic']['response'].json['id'] }} HTTP/1.1
- name: create-resources-a
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "340102AA-AA19-BBE0-E1E2-2D3JDC7D289R",
"user_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ",
"project_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ"
}
- name: create-resources-b
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "340102AA-AAEF-AA90-E1E2-2D3JDC7D289R",
"user_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ",
"project_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ"
}
- name: create-resources-c
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "340102AA-AAEF-BCEF-E112-2D3JDC7D289R",
"user_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ",
"project_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ"
}
- name: create-resources-d
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "340102AA-AAEF-BCEF-E112-2D15DC7D289R",
"user_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ",
"project_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ"
}
- name: create-resources-e
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "340102AA-AAEF-BCEF-E112-2D3JDC30289R",
"user_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ",
"project_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ"
}
- name: create-resources-f
request: |
POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"id": "340102AA-AAEF-BCEF-E112-2D15349D109R",
"user_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ",
"project_id": "BD3A1E52-KKKC-2123-BGLH-WWUUD88CD7WZ"
}
- name: delete-resources-by-ids
request: |
DELETE /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
"in": {
"id": [
"{{ scenarios['create-resources-a']['response'].json['id'] }}",
"{{ scenarios['create-resources-b']['response'].json['id'] }}",
"{{ scenarios['create-resources-c']['response'].json['id'] }}"
]
}
}
- name: delete-resources-by-time
request: |
DELETE /v1/resource/generic HTTP/1.1
Content-Type: application/json
{
">=": {"started_at": "{{ scenarios['create-resources-f']['response'].json['started_at'] }}"}
}
- name: get-resource-named-metrics-measures
request: GET /v1/resource/generic/{{ scenarios['create-resource-instance-with-metrics']['response'].json['id'] }}/metric/cpu.util/measures?start=2014-10-06T14:34 HTTP/1.1
- name: post-resource-named-metrics-measures1
request: |
POST /v1/resource/generic/{{ scenarios['create-resource-instance-with-metrics']['response'].json['id'] }}/metric/cpu.util/measures HTTP/1.1
Content-Type: application/json
[
{
"timestamp": "2014-10-06T14:33:57",
"value": 3.5
},
{
"timestamp": "2014-10-06T14:34:12",
"value": 20
},
{
"timestamp": "2014-10-06T14:34:20",
"value": 9
}
]
- name: post-resource-named-metrics-measures2
request: |
POST /v1/resource/generic/{{ scenarios['create-resource-instance-with-dynamic-metrics']['response'].json['id'] }}/metric/cpu.util/measures HTTP/1.1
Content-Type: application/json
[
{
"timestamp": "2014-10-06T14:33:57",
"value": 25.1
},
{
"timestamp": "2014-10-06T14:34:12",
"value": 4.5
},
{
"timestamp": "2014-10-06T14:34:20",
"value": 14.2
}
]
- name: get-across-metrics-measures-by-attributes-lookup
request: |
POST /v1/aggregation/resource/instance/metric/cpu.util?start=2014-10-06T14:34&aggregation=mean HTTP/1.1
Content-Type: application/json
{"=": {"server_group": "my_autoscaling_group"}}
- name: get-across-metrics-measures-by-attributes-lookup-groupby
request: |
POST /v1/aggregation/resource/instance/metric/cpu.util?groupby=host&groupby=flavor_id HTTP/1.1
Content-Type: application/json
{"=": {"server_group": "my_autoscaling_group"}}
- name: get-across-metrics-measures-by-metric-ids
request: |
GET /v1/aggregation/metric?metric={{ scenarios['create-resource-instance-with-metrics']['response'].json['metrics']['cpu.util'] }}&metric={{ scenarios['create-resource-instance-with-dynamic-metrics']['response'].json['metrics']['cpu.util'] }}&start=2014-10-06T14:34&aggregation=mean HTTP/1.1
- name: get-across-metrics-measures-by-metric-ids-reaggregate
request: |
GET /v1/aggregation/metric?metric={{ scenarios['create-resource-instance-with-metrics']['response'].json['metrics']['cpu.util'] }}&metric={{ scenarios['create-resource-instance-with-dynamic-metrics']['response'].json['metrics']['cpu.util'] }}&aggregation=mean&reaggregation=min HTTP/1.1
- name: get-across-metrics-measures-by-metric-ids-fill
request: |
GET /v1/aggregation/metric?metric={{ scenarios['create-resource-instance-with-metrics']['response'].json['metrics']['cpu.util'] }}&metric={{ scenarios['create-resource-instance-with-dynamic-metrics']['response'].json['metrics']['cpu.util'] }}&fill=0&granularity=1 HTTP/1.1
- name: append-metrics-to-resource
request: |
POST /v1/resource/generic/{{ scenarios['create-resource-instance-with-metrics']['response'].json['id'] }}/metric HTTP/1.1
Content-Type: application/json
{"memory": {"archive_policy_name": "low"}}
- name: get-capabilities
request: GET /v1/capabilities HTTP/1.1
- name: get-status
request: GET /v1/status HTTP/1.1

View File

@ -1,246 +0,0 @@
===============
Running Gnocchi
===============
To run Gnocchi, simply run the HTTP server and metric daemon:
::
gnocchi-api
gnocchi-metricd
Running API As A WSGI Application
=================================
The Gnocchi API tier runs using WSGI. This means it can be run using `Apache
httpd`_ and `mod_wsgi`_, or another HTTP daemon such as `uwsgi`_. You should
configure the number of processes and threads according to the number of CPUs
you have, usually around 1.5 × the number of CPUs. If one server is not enough,
you can spawn any number of new API servers to scale Gnocchi out, even on
different machines.
The following uwsgi configuration file can be used::
[uwsgi]
http = localhost:8041
# Set the correct path depending on your installation
wsgi-file = /usr/local/bin/gnocchi-api
master = true
die-on-term = true
threads = 32
# Adjust based on the number of CPU
processes = 32
enable-threads = true
thunder-lock = true
plugins = python
buffer-size = 65535
lazy-apps = true
Once written to `/etc/gnocchi/uwsgi.ini`, it can be launched this way::
uwsgi /etc/gnocchi/uwsgi.ini
.. _Apache httpd: http://httpd.apache.org/
.. _mod_wsgi: https://modwsgi.readthedocs.org/
.. _uwsgi: https://uwsgi-docs.readthedocs.org/
How to define archive policies
==============================
In Gnocchi, the archive policy definitions are expressed in number of points.
If your archive policy defines a policy of 10 points with a granularity of 1
second, the time series archive will keep up to 10 points, each representing
an aggregation over 1 second. This means the time series will at maximum retain
10 seconds of data (sometimes a bit more) between the more recent point and the
oldest point. That does not mean it will be 10 consecutive seconds: there might
be a gap if data is fed irregularly.
There is no expiry of data relative to the current timestamp.
Therefore, both the archive policy and the granularity entirely depend on your
use case. Depending on the usage of your data, you can define several archiving
policies. A typical low-grained use case could be::
3600 points with a granularity of 1 second = 1 hour
1440 points with a granularity of 1 minute = 24 hours
720 points with a granularity of 1 hour = 30 days
365 points with a granularity of 1 day = 1 year
This would represent 6125 points × 9 bytes = 54 KiB per aggregation method. If
you use the 8 standard aggregation methods, your metric will take up to 8 × 54
KiB = 432 KiB of disk space.
Be aware that the more definitions you set in an archive policy, the more CPU
it will consume. Therefore, creating an archive policy with 2 definitions (e.g.
1 second granularity for 1 day and 1 minute granularity for 1 month) may
consume twice as much CPU as just one definition (e.g. just 1 second
granularity for 1 day).
Default archive policies
========================
By default, 3 archive policies are created when calling `gnocchi-upgrade`:
*low*, *medium* and *high*. The names describe both the storage space and the
CPU usage needs. They use `default_aggregation_methods`, which is by default
set to
*mean*, *min*, *max*, *sum*, *std*, *count*.
A fourth archive policy named `bool` is also provided by default and is
designed to store only boolean values (i.e. 0 and 1). It only stores one data
point for each second (using the `last` aggregation method), with a one year
retention period. The maximum optimistic storage size is estimated based on the
assumption that no other value than 0 and 1 are sent as measures. If other
values are sent, the maximum pessimistic storage size is taken into account.
- low
* 5 minutes granularity over 30 days
* aggregation methods used: `default_aggregation_methods`
* maximum estimated size per metric: 406 KiB
- medium
* 1 minute granularity over 7 days
* 1 hour granularity over 365 days
* aggregation methods used: `default_aggregation_methods`
* maximum estimated size per metric: 887 KiB
- high
* 1 second granularity over 1 hour
* 1 minute granularity over 1 week
* 1 hour granularity over 1 year
* aggregation methods used: `default_aggregation_methods`
* maximum estimated size per metric: 1 057 KiB
- bool
* 1 second granularity over 1 year
* aggregation methods used: *last*
* maximum optimistic size per metric: 1 539 KiB
* maximum pessimistic size per metric: 277 172 KiB
How to plan for Gnocchi's storage
=================================
Gnocchi uses a custom file format based on its library *Carbonara*. In Gnocchi,
a time series is a collection of points, where a point is a given measure, or
sample, in the lifespan of a time series. The storage format is compressed
using various techniques; therefore, the size of a time series can be
estimated based on its **worst** case scenario with the following formula::
number of points × 8 bytes = size in bytes
The number of points you want to keep is usually determined by the following
formula::
number of points = timespan ÷ granularity
For example, if you want to keep a year of data with a one minute resolution::
number of points = (365 days × 24 hours × 60 minutes) ÷ 1 minute
number of points = 525 600
Then::
size in bytes = 525 600 points × 8 bytes = 4 204 800 bytes = 4 106 KiB
This is just for a single aggregated time series. If your archive policy uses
the 6 default aggregation methods (mean, min, max, sum, std, count) with the
same "one year, one minute aggregations" resolution, the space used will go up
to a maximum of 6 × 4.1 MiB = 24.6 MiB.
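The same worst-case estimate can be scripted as a quick sanity check; this is
only a sketch of the formulas above, not part of Gnocchi:
::
    # Worst-case storage estimate for one metric, following the formulas above.
    def worst_case_size(timespan_seconds, granularity_seconds,
                        aggregation_methods=6, bytes_per_point=8):
        points = timespan_seconds // granularity_seconds
        return points * bytes_per_point * aggregation_methods
    # One year of data at one-minute granularity, 6 aggregation methods:
    print(worst_case_size(365 * 24 * 3600, 60))  # 25228800 bytes, about 24 MiB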
How many metricd workers do we need to run
==========================================
By default, the `gnocchi-metricd` daemon spans all your CPU power in order to
maximize CPU utilisation when computing metric aggregation. You can use the
`gnocchi status` command to query the HTTP API and get the cluster status for
metric processing. It will show you the number of metrics to process, known as
the processing backlog for `gnocchi-metricd`. As long as this backlog is not
continuously increasing, `gnocchi-metricd` is able to cope with the amount of
metrics being sent. If the number of measures to process keeps increasing, you
will need to (maybe temporarily) increase the number of `gnocchi-metricd`
daemons. You can run any number of metricd daemons on any number of servers.
How to scale measure processing
===============================
Measurement data pushed to Gnocchi is divided into sacks for better
distribution. Incoming metrics are pushed to specific sacks and each sack
is assigned to one or more `gnocchi-metricd` daemons for processing. The number
of sacks is controlled by the `sacks` option under the `[incoming]` section.
How many sacks do we need to create
-----------------------------------
The number of sacks should be set based on the number of active metrics the
system will capture. Additionally, the number of sacks should be higher than
the total number of active `gnocchi-metricd` workers.
In general, use the following equation to determine the appropriate `sacks`
value to set::
sacks value = number of **active** metrics / 300
If the estimated number of metrics is the absolute maximum, divide the value
by 500 instead. If the estimated number of active metrics is conservative and
expected to grow, divide the value by 100 instead to accommodate growth.
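As a rough illustration with made-up numbers::
    60 000 active metrics / 300 = 200 sacks
    60 000 active metrics / 100 = 600 sacks (if significant growth is expected)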
How do we change sack size
--------------------------
In the event your system grows to capture significantly more metrics than
originally anticipated, the number of sacks can be changed to maintain good
distribution. To avoid any loss of data when modifying the `sacks` option, it
should be changed in the following order::
1. Stop all input services (api, statsd)
2. Stop all metricd services once backlog is cleared
3. Run gnocchi-change-sack-size <number of sacks> to set new sack size. Note
that sack value can only be changed if the backlog is empty.
4. Restart all gnocchi services (api, statsd, metricd) with new configuration
Alternatively, to minimise API downtime::
1. Run gnocchi-upgrade but use a new incoming storage target such as a new
ceph pool, file path, etc... Additionally, set aggregate storage to a
new target as well.
2. Run gnocchi-change-sack-size <number of sacks> against new target
3. Stop all input services (api, statsd)
4. Restart all input services but target newly created incoming storage
5. When done clearing the backlog from the original incoming storage, switch
all metricd daemons to target the new incoming storage but maintain the
original aggregate storage.
How to monitor Gnocchi
======================
The `/v1/status` endpoint of the HTTP API returns various information, such as
the number of measures to process (measures backlog), which you can easily
monitor (see `How many metricd workers do we need to run`_). Making sure that
the HTTP server and `gnocchi-metricd` daemon are running and are not writing
anything alarming in their logs is a sign of good health of the overall system.
Total measures for backlog status may not accurately reflect the number of
points to be processed when measures are submitted via batch.
How to backup and restore Gnocchi
=================================
In order to be able to recover from an unfortunate event, you need to backup
both the index and the storage. That means creating a database dump (PostgreSQL
or MySQL) and doing snapshots or copy of your data storage (Ceph, S3, Swift or
your file system). The procedure to restore is no more complicated than initial
deployment: restore your index and storage backups, reinstall Gnocchi if
necessary, and restart it.

View File

@ -1,43 +0,0 @@
===================
Statsd Daemon Usage
===================
What Is It?
===========
`Statsd`_ is a network daemon that listens for statistics sent over the network
using TCP or UDP, and then sends aggregates to another backend.
Gnocchi provides a daemon that is compatible with the statsd protocol and can
listen to metrics sent over the network, named `gnocchi-statsd`.
.. _`Statsd`: https://github.com/etsy/statsd/
How It Works?
=============
In order to enable statsd support in Gnocchi, you need to configure the
`[statsd]` option group in the configuration file. You need to provide a
resource ID that will be used as the main generic resource where all the
metrics will be attached, a user and project id that will be associated with
the resource and metrics, and an archive policy name that will be used to
create the metrics.
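For illustration, the relevant configuration could look like the following; the
option names are indicative and should be checked against the output of
`gnocchi-config-generator`, and all UUIDs below are placeholders:
::
    [statsd]
    resource_id = 07f26121-5777-48ba-8a0b-d70468133dd9
    user_id = 9b1e5bc2-933e-4b47-b66b-0ef30b0d7b07
    project_id = 3a9b3e04-6ac5-4f1c-8bd0-8bb0a23a4e71
    archive_policy_name = low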
All the metrics will be created dynamically as the metrics are sent to
`gnocchi-statsd`, and attached with the provided name to the resource ID you
configured.
The `gnocchi-statsd` daemon may be scaled, but trade-offs have to be made due
to the nature of the statsd protocol. That means that if you use metrics of
type `counter`_ or sampling (`c` in the protocol), you should always send those
metrics to the same daemon, or not use them at all. The other supported types
(`timing`_ and `gauges`_) do not suffer from this limitation, but be aware that
you might have more measures than expected if you send the same metric to
different `gnocchi-statsd` servers, as neither their cache nor their flush
delay are synchronized.
.. _`counter`: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#counting
.. _`timing`: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#timing
.. _`gauges`: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#gauges
.. note ::
The statsd protocol support is incomplete: relative gauge values with +/-
and sets are not supported yet.

View File

View File

@ -1,50 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
from gnocchi import exceptions
class CustomAggFailure(Exception):
"""Error raised when custom aggregation functions fail for any reason."""
def __init__(self, msg):
self.msg = msg
super(CustomAggFailure, self).__init__(msg)
@six.add_metaclass(abc.ABCMeta)
class CustomAggregator(object):
@abc.abstractmethod
def compute(self, storage_obj, metric, start, stop, **param):
"""Returns list of (timestamp, window, aggregate value) tuples.
:param storage_obj: storage object for retrieving the data
:param metric: metric
:param start: start timestamp
:param stop: stop timestamp
:param **param: parameters are window and optionally center.
'window' is the granularity over which to compute the moving
aggregate.
'center=True' returns the aggregated data indexed by the central
time in the sampling window, 'False' (default) indexes aggregates
by the oldest time in the window. center is not supported for EWMA.
"""
raise exceptions.NotImplementedError

View File

@ -1,145 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014-2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import numpy
import pandas
import six
from gnocchi import aggregates
from gnocchi import utils
class MovingAverage(aggregates.CustomAggregator):
@staticmethod
def check_window_valid(window):
"""Takes in the window parameter string, reformats as a float."""
if window is None:
msg = 'Moving aggregate must have window specified.'
raise aggregates.CustomAggFailure(msg)
try:
return utils.to_timespan(six.text_type(window)).total_seconds()
except Exception:
raise aggregates.CustomAggFailure('Invalid value for window')
@staticmethod
def retrieve_data(storage_obj, metric, start, stop, window):
"""Retrieves finest-res data available from storage."""
all_data = storage_obj.get_measures(metric, start, stop)
try:
min_grain = min(set([row[1] for row in all_data if row[1] == 0
or window % row[1] == 0]))
except Exception:
msg = ("No data available that is either full-res or "
"of a granularity that factors into the window size "
"you specified.")
raise aggregates.CustomAggFailure(msg)
return min_grain, pandas.Series([r[2] for r in all_data
if r[1] == min_grain],
[r[0] for r in all_data
if r[1] == min_grain])
@staticmethod
def aggregate_data(data, func, window, min_grain, center=False,
min_size=1):
"""Calculates moving func of data with sampling width of window.
:param data: Series of timestamp, value pairs
:param func: the function to use when aggregating
:param window: (float) range of data to use in each aggregation.
:param min_grain: granularity of the data being passed in.
:param center: whether to index the aggregated values by the first
timestamp of the values picked up by the window or by the central
timestamp.
:param min_size: if the number of points in the window is less than
min_size, the aggregate is not computed and nan is returned for
that iteration.
"""
if center:
center = utils.strtobool(center)
def moving_window(x):
msec = datetime.timedelta(milliseconds=1)
zero = datetime.timedelta(seconds=0)
half_span = datetime.timedelta(seconds=window / 2)
start = utils.normalize_time(data.index[0])
stop = utils.normalize_time(
data.index[-1] + datetime.timedelta(seconds=min_grain))
# min_grain addition necessary since each bin of rolled-up data
# is indexed by leftmost timestamp of bin.
left = half_span if center else zero
right = 2 * half_span - left - msec
# msec subtraction is so we don't include right endpoint in slice.
x = utils.normalize_time(x)
if x - left >= start and x + right <= stop:
dslice = data[x - left: x + right]
if center and dslice.size % 2 == 0:
return func([func(data[x - msec - left: x - msec + right]),
func(data[x + msec - left: x + msec + right])
])
# NOTE(atmalagon): the msec shift here is so that we have two
# consecutive windows; one centered at time x - msec,
# and one centered at time x + msec. We then average the
# aggregates from the two windows; this result is centered
# at time x. Doing this double average is a way to return a
# centered average indexed by a timestamp that existed in
# the input data (which wouldn't be the case for an even number
# of points if we did only one centered average).
else:
return numpy.nan
if dslice.size < min_size:
return numpy.nan
return func(dslice)
try:
result = pandas.Series(data.index).apply(moving_window)
# change from integer index to timestamp index
result.index = data.index
return [(t, window, r) for t, r
in six.iteritems(result[~result.isnull()])]
except Exception as e:
raise aggregates.CustomAggFailure(str(e))
def compute(self, storage_obj, metric, start, stop, window=None,
center=False):
"""Returns list of (timestamp, window, aggregated value) tuples.
:param storage_obj: a call is placed to the storage object to retrieve
the stored data.
:param metric: the metric
:param start: start timestamp
:param stop: stop timestamp
:param window: format string specifying the size over which to
aggregate the retrieved data
:param center: how to index the aggregated data (central timestamp or
leftmost timestamp)
"""
window = self.check_window_valid(window)
min_grain, data = self.retrieve_data(storage_obj, metric, start,
stop, window)
return self.aggregate_data(data, numpy.mean, window, min_grain, center,
min_size=1)
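
# A self-contained usage sketch (not part of the original module, and reusing
# the datetime/numpy/pandas imports already present above): it feeds a
# synthetic pandas series straight into MovingAverage.aggregate_data(), which
# avoids needing a storage driver.  With 30-second spaced points and a
# 60-second window, each aggregate is the mean of two consecutive points.
if __name__ == '__main__':
    index = [datetime.datetime(2017, 1, 1)
             + datetime.timedelta(seconds=30 * i) for i in range(6)]
    data = pandas.Series([0.0, 2.0, 4.0, 6.0, 8.0, 10.0], index)
    points = MovingAverage.aggregate_data(
        data, numpy.mean, window=60.0, min_grain=30.0)
    for timestamp, window, value in points:
        print(timestamp, window, value)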

@ -1,247 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright (c) 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import datetime
import operator
from oslo_config import cfg
from oslo_config import types
import six
class ArchivePolicy(object):
DEFAULT_AGGREGATION_METHODS = ()
# TODO(eglynn): figure out how to accommodate multi-valued aggregation
# methods, where there is no longer just a single aggregate
# value to be stored per-period (e.g. ohlc)
VALID_AGGREGATION_METHODS = set(
('mean', 'sum', 'last', 'max', 'min',
'std', 'median', 'first', 'count')).union(
set((str(i) + 'pct' for i in six.moves.range(1, 100))))
# Set that contains all the above values + their minus equivalent (-mean)
# and the "*" entry.
VALID_AGGREGATION_METHODS_VALUES = VALID_AGGREGATION_METHODS.union(
set(('*',)),
set(map(lambda s: "-" + s,
VALID_AGGREGATION_METHODS)),
set(map(lambda s: "+" + s,
VALID_AGGREGATION_METHODS)))
def __init__(self, name, back_window, definition,
aggregation_methods=None):
self.name = name
self.back_window = back_window
self.definition = []
for d in definition:
if isinstance(d, ArchivePolicyItem):
self.definition.append(d)
elif isinstance(d, dict):
self.definition.append(ArchivePolicyItem(**d))
elif len(d) == 2:
self.definition.append(
ArchivePolicyItem(points=d[0], granularity=d[1]))
else:
raise ValueError(
"Unable to understand policy definition %s" % d)
duplicate_granularities = [
granularity
for granularity, count in collections.Counter(
d.granularity for d in self.definition).items()
if count > 1
]
if duplicate_granularities:
raise ValueError(
"More than one archive policy "
"uses granularity `%s'"
% duplicate_granularities[0]
)
if aggregation_methods is None:
self.aggregation_methods = self.DEFAULT_AGGREGATION_METHODS
else:
self.aggregation_methods = aggregation_methods
@property
def aggregation_methods(self):
if '*' in self._aggregation_methods:
agg_methods = self.VALID_AGGREGATION_METHODS.copy()
elif all(map(lambda s: s.startswith('-') or s.startswith('+'),
self._aggregation_methods)):
agg_methods = set(self.DEFAULT_AGGREGATION_METHODS)
else:
agg_methods = set(self._aggregation_methods)
for entry in self._aggregation_methods:
if entry:
if entry[0] == '-':
agg_methods -= set((entry[1:],))
elif entry[0] == '+':
agg_methods.add(entry[1:])
return agg_methods
@aggregation_methods.setter
def aggregation_methods(self, value):
value = set(value)
rest = value - self.VALID_AGGREGATION_METHODS_VALUES
if rest:
raise ValueError("Invalid value for aggregation_methods: %s" %
rest)
self._aggregation_methods = value
@classmethod
def from_dict(cls, d):
return cls(d['name'],
d['back_window'],
d['definition'],
d.get('aggregation_methods'))
def __eq__(self, other):
return (isinstance(other, ArchivePolicy)
and self.name == other.name
and self.back_window == other.back_window
and self.definition == other.definition
and self.aggregation_methods == other.aggregation_methods)
def jsonify(self):
return {
"name": self.name,
"back_window": self.back_window,
"definition": self.definition,
"aggregation_methods": self.aggregation_methods,
}
@property
def max_block_size(self):
# The biggest block size is the coarse grained archive definition
return sorted(self.definition,
key=operator.attrgetter("granularity"))[-1].granularity
OPTS = [
cfg.ListOpt(
'default_aggregation_methods',
item_type=types.String(
choices=ArchivePolicy.VALID_AGGREGATION_METHODS),
default=['mean', 'min', 'max', 'sum', 'std', 'count'],
help='Default aggregation methods to use in created archive policies'),
]
class ArchivePolicyItem(dict):
def __init__(self, granularity=None, points=None, timespan=None):
if (granularity is not None
and points is not None
and timespan is not None):
if timespan != granularity * points:
raise ValueError(
u"timespan ≠ granularity × points")
if granularity is not None and granularity <= 0:
raise ValueError("Granularity should be > 0")
if points is not None and points <= 0:
raise ValueError("Number of points should be > 0")
if granularity is None:
if points is None or timespan is None:
raise ValueError(
"At least two of granularity/points/timespan "
"must be provided")
granularity = round(timespan / float(points))
else:
granularity = float(granularity)
if points is None:
if timespan is None:
self['timespan'] = None
else:
points = int(timespan / granularity)
self['timespan'] = granularity * points
else:
points = int(points)
self['timespan'] = granularity * points
self['points'] = points
self['granularity'] = granularity
@property
def granularity(self):
return self['granularity']
@property
def points(self):
return self['points']
@property
def timespan(self):
return self['timespan']
def jsonify(self):
"""Return a dict representation with human readable values."""
return {
'timespan': six.text_type(
datetime.timedelta(seconds=self.timespan))
if self.timespan is not None
else None,
'granularity': six.text_type(
datetime.timedelta(seconds=self.granularity)),
'points': self.points,
}
DEFAULT_ARCHIVE_POLICIES = {
'bool': ArchivePolicy(
"bool", 3600, [
# 1 second resolution for 365 days
ArchivePolicyItem(granularity=1,
timespan=365 * 24 * 60 * 60),
],
aggregation_methods=("last",),
),
'low': ArchivePolicy(
"low", 0, [
# 5 minutes resolution for 30 days
ArchivePolicyItem(granularity=300,
timespan=30 * 24 * 60 * 60),
],
),
'medium': ArchivePolicy(
"medium", 0, [
# 1 minute resolution for 7 days
ArchivePolicyItem(granularity=60,
timespan=7 * 24 * 60 * 60),
# 1 hour resolution for 365 days
ArchivePolicyItem(granularity=3600,
timespan=365 * 24 * 60 * 60),
],
),
'high': ArchivePolicy(
"high", 0, [
# 1 second resolution for an hour
ArchivePolicyItem(granularity=1, points=3600),
# 1 minute resolution for a week
ArchivePolicyItem(granularity=60, points=60 * 24 * 7),
# 1 hour resolution for a year
ArchivePolicyItem(granularity=3600, points=365 * 24),
],
),
}
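
# An illustrative sketch, not part of the original module: the three
# ArchivePolicyItem fields are tied together by timespan = granularity *
# points, and whichever one is omitted is derived from the other two.
if __name__ == '__main__':
    # 1 minute granularity kept for 7 days: points is derived.
    week_of_minutes = ArchivePolicyItem(granularity=60,
                                        timespan=7 * 24 * 60 * 60)
    assert week_of_minutes.points == 7 * 24 * 60
    # 5 minute granularity and 8640 points: timespan is derived (30 days).
    month_of_5min = ArchivePolicyItem(granularity=300, points=8640)
    assert month_of_5min.timespan == 30 * 24 * 60 * 60
    print(ArchivePolicy("custom", 0, [week_of_minutes, month_of_5min],
                        aggregation_methods=("mean", "max")).jsonify())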

@ -1,980 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2016 Red Hat, Inc.
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Time series data manipulation, better with pancetta."""
import datetime
import functools
import logging
import math
import numbers
import random
import re
import struct
import time
import lz4.block
import numpy
import numpy.lib.recfunctions
import pandas
from scipy import ndimage
import six
# NOTE(sileht): pandas relies on time.strptime()
# and often triggers http://bugs.python.org/issue7980
# it's due to our heavy thread usage; this is the workaround
# to ensure the module is correctly loaded before we really use it.
time.strptime("2016-02-19", "%Y-%m-%d")
LOG = logging.getLogger(__name__)
class NoDeloreanAvailable(Exception):
"""Error raised when trying to insert a value that is too old."""
def __init__(self, first_timestamp, bad_timestamp):
self.first_timestamp = first_timestamp
self.bad_timestamp = bad_timestamp
super(NoDeloreanAvailable, self).__init__(
"%s is before %s" % (bad_timestamp, first_timestamp))
class BeforeEpochError(Exception):
"""Error raised when a timestamp before Epoch is used."""
def __init__(self, timestamp):
self.timestamp = timestamp
super(BeforeEpochError, self).__init__(
"%s is before Epoch" % timestamp)
class UnAggregableTimeseries(Exception):
"""Error raised when timeseries cannot be aggregated."""
def __init__(self, reason):
self.reason = reason
super(UnAggregableTimeseries, self).__init__(reason)
class UnknownAggregationMethod(Exception):
"""Error raised when the aggregation method is unknown."""
def __init__(self, agg):
self.aggregation_method = agg
super(UnknownAggregationMethod, self).__init__(
"Unknown aggregation method `%s'" % agg)
class InvalidData(ValueError):
"""Error raised when data are corrupted."""
def __init__(self):
super(InvalidData, self).__init__("Unable to unpack, invalid data")
def round_timestamp(ts, freq):
return pandas.Timestamp(
(pandas.Timestamp(ts).value // freq) * freq)
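# Illustrative note (not in the original source): freq is expressed in
# nanoseconds, so with a 5 minute granularity,
# round_timestamp("2017-06-05 17:04:30", 300 * 1e9) returns
# Timestamp('2017-06-05 17:00:00').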
class GroupedTimeSeries(object):
def __init__(self, ts, granularity):
# NOTE(sileht): The whole class assumes ts is ordered and doesn't have
# duplicate timestamps; it uses numpy.unique, which sorts the list, but
# we always assume the order to be the same as the input.
freq = granularity * 10e8
self._ts = ts
self.indexes = (numpy.array(ts.index, numpy.float) // freq) * freq
self.tstamps, self.counts = numpy.unique(self.indexes,
return_counts=True)
def mean(self):
return self._scipy_aggregate(ndimage.mean)
def sum(self):
return self._scipy_aggregate(ndimage.sum)
def min(self):
return self._scipy_aggregate(ndimage.minimum)
def max(self):
return self._scipy_aggregate(ndimage.maximum)
def median(self):
return self._scipy_aggregate(ndimage.median)
def std(self):
# NOTE(sileht): ndimage.standard_deviation is really more performant,
# but it uses ddof=0; to get the same result as pandas we have to use
# ddof=1. If one day scipy allows passing ddof, this should be changed.
return self._scipy_aggregate(ndimage.labeled_comprehension,
remove_unique=True,
func=functools.partial(numpy.std, ddof=1),
out_dtype='float64',
default=None)
def _count(self):
timestamps = self.tstamps.astype('datetime64[ns]', copy=False)
return (self.counts, timestamps)
def count(self):
return pandas.Series(*self._count())
def last(self):
counts, timestamps = self._count()
cumcounts = numpy.cumsum(counts) - 1
values = self._ts.values[cumcounts]
return pandas.Series(values, pandas.to_datetime(timestamps))
def first(self):
counts, timestamps = self._count()
counts = numpy.insert(counts[:-1], 0, 0)
cumcounts = numpy.cumsum(counts)
values = self._ts.values[cumcounts]
return pandas.Series(values, pandas.to_datetime(timestamps))
def quantile(self, q):
return self._scipy_aggregate(ndimage.labeled_comprehension,
func=functools.partial(
numpy.percentile,
q=q,
),
out_dtype='float64',
default=None)
def _scipy_aggregate(self, method, remove_unique=False, *args, **kwargs):
if remove_unique:
tstamps = self.tstamps[self.counts > 1]
else:
tstamps = self.tstamps
if len(tstamps) == 0:
return pandas.Series()
values = method(self._ts.values, self.indexes, tstamps,
*args, **kwargs)
timestamps = tstamps.astype('datetime64[ns]', copy=False)
return pandas.Series(values, pandas.to_datetime(timestamps))
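# Illustrative note (not in the original source): with granularity=300, the
# indexes above floor every timestamp to its 5 minute bucket, so points at
# 17:01 and 17:04 share the 17:00 label and mean() returns a single value
# for that bucket.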
class TimeSerie(object):
"""A representation of series of a timestamp with a value.
Duplicate timestamps are not allowed and will be filtered to use the
last in the group when the TimeSerie is created or extended.
"""
def __init__(self, ts=None):
if ts is None:
ts = pandas.Series()
self.ts = ts
@staticmethod
def clean_ts(ts):
if ts.index.has_duplicates:
ts = ts[~ts.index.duplicated(keep='last')]
if not ts.index.is_monotonic:
ts = ts.sort_index()
return ts
@classmethod
def from_data(cls, timestamps=None, values=None, clean=False):
ts = pandas.Series(values, timestamps)
if clean:
# For format v2
ts = cls.clean_ts(ts)
return cls(ts)
@classmethod
def from_tuples(cls, timestamps_values):
return cls.from_data(*zip(*timestamps_values))
def __eq__(self, other):
return (isinstance(other, TimeSerie)
and self.ts.all() == other.ts.all())
def __getitem__(self, key):
return self.ts[key]
def set_values(self, values):
t = pandas.Series(*reversed(list(zip(*values))))
self.ts = self.clean_ts(t).combine_first(self.ts)
def __len__(self):
return len(self.ts)
@staticmethod
def _timestamps_and_values_from_dict(values):
timestamps = numpy.array(list(values.keys()), dtype='datetime64[ns]')
timestamps = pandas.to_datetime(timestamps)
v = list(values.values())
if v:
return timestamps, v
return (), ()
@staticmethod
def _to_offset(value):
if isinstance(value, numbers.Real):
return pandas.tseries.offsets.Nano(value * 10e8)
return pandas.tseries.frequencies.to_offset(value)
@property
def first(self):
try:
return self.ts.index[0]
except IndexError:
return
@property
def last(self):
try:
return self.ts.index[-1]
except IndexError:
return
def group_serie(self, granularity, start=0):
# NOTE(jd) Our whole serialization system is based on Epoch, and we
# store unsigned integer, so we can't store anything before Epoch.
# Sorry!
if self.ts.index[0].value < 0:
raise BeforeEpochError(self.ts.index[0])
return GroupedTimeSeries(self.ts[start:], granularity)
@staticmethod
def _compress(payload):
# FIXME(jd) lz4 > 0.9.2 returns bytearray instead of bytes. But Cradox
# does not accept bytearray but only bytes, so make sure that we have a
# byte type returned.
return memoryview(lz4.block.compress(payload)).tobytes()
class BoundTimeSerie(TimeSerie):
def __init__(self, ts=None, block_size=None, back_window=0):
"""A time serie that is limited in size.
Used to represent the full-resolution buffer of incoming raw
datapoints associated with a metric.
The maximum size of this time serie is expressed in a number of block
size, called the back window.
When the timeserie is truncated, a whole block is removed.
You cannot set a value using a timestamp that is prior to the last
timestamp minus this number of blocks. By default, a back window of 0
does not allow you to go back in time prior to the current block being
used.
"""
super(BoundTimeSerie, self).__init__(ts)
self.block_size = self._to_offset(block_size)
self.back_window = back_window
self._truncate()
@classmethod
def from_data(cls, timestamps=None, values=None,
block_size=None, back_window=0):
return cls(pandas.Series(values, timestamps),
block_size=block_size, back_window=back_window)
def __eq__(self, other):
return (isinstance(other, BoundTimeSerie)
and super(BoundTimeSerie, self).__eq__(other)
and self.block_size == other.block_size
and self.back_window == other.back_window)
def set_values(self, values, before_truncate_callback=None,
ignore_too_old_timestamps=False):
# NOTE: values must be sorted when passed in.
if self.block_size is not None and not self.ts.empty:
first_block_timestamp = self.first_block_timestamp()
if ignore_too_old_timestamps:
for index, (timestamp, value) in enumerate(values):
if timestamp >= first_block_timestamp:
values = values[index:]
break
else:
values = []
else:
# Check that the smallest timestamp does not go too much back
# in time.
smallest_timestamp = values[0][0]
if smallest_timestamp < first_block_timestamp:
raise NoDeloreanAvailable(first_block_timestamp,
smallest_timestamp)
super(BoundTimeSerie, self).set_values(values)
if before_truncate_callback:
before_truncate_callback(self)
self._truncate()
_SERIALIZATION_TIMESTAMP_VALUE_LEN = struct.calcsize("<Qd")
_SERIALIZATION_TIMESTAMP_LEN = struct.calcsize("<Q")
@classmethod
def unserialize(cls, data, block_size, back_window):
uncompressed = lz4.block.decompress(data)
nb_points = (
len(uncompressed) // cls._SERIALIZATION_TIMESTAMP_VALUE_LEN
)
timestamps_raw = uncompressed[
:nb_points*cls._SERIALIZATION_TIMESTAMP_LEN]
timestamps = numpy.frombuffer(timestamps_raw, dtype='<Q')
timestamps = numpy.cumsum(timestamps)
timestamps = timestamps.astype(dtype='datetime64[ns]', copy=False)
values_raw = uncompressed[nb_points*cls._SERIALIZATION_TIMESTAMP_LEN:]
values = numpy.frombuffer(values_raw, dtype='<d')
return cls.from_data(
pandas.to_datetime(timestamps),
values,
block_size=block_size,
back_window=back_window)
def serialize(self):
# NOTE(jd) Use a double delta encoding for timestamps
timestamps = numpy.insert(numpy.diff(self.ts.index),
0, self.first.value)
timestamps = timestamps.astype('<Q', copy=False)
values = self.ts.values.astype('<d', copy=False)
payload = (timestamps.tobytes() + values.tobytes())
return self._compress(payload)
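# Illustrative note (not in the original source): the timestamps above are
# stored as the first timestamp followed by successive differences, e.g.
# points at [t0, t0+5s, t0+15s] are packed as [t0, 5s, 10s] in '<Q'
# nanoseconds, and unserialize() rebuilds the absolute values with
# numpy.cumsum().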
@classmethod
def benchmark(cls):
"""Run a speed benchmark!"""
points = SplitKey.POINTS_PER_SPLIT
serialize_times = 50
now = datetime.datetime(2015, 4, 3, 23, 11)
print(cls.__name__)
print("=" * len(cls.__name__))
for title, values in [
("Simple continuous range", six.moves.range(points)),
("All 0", [float(0)] * points),
("All 1", [float(1)] * points),
("0 and 1", [0, 1] * (points // 2)),
("1 and 0 random",
[random.randint(0, 1)
for x in six.moves.range(points)]),
("Small number random pos/neg",
[random.randint(-100000, 10000)
for x in six.moves.range(points)]),
("Small number random pos",
[random.randint(0, 20000) for x in six.moves.range(points)]),
("Small number random neg",
[random.randint(-20000, 0) for x in six.moves.range(points)]),
("Sin(x)", map(math.sin, six.moves.range(points))),
("random ", [random.random()
for x in six.moves.range(points)]),
]:
print(title)
pts = pandas.Series(values,
[now + datetime.timedelta(
seconds=i * random.randint(1, 10),
microseconds=random.randint(1, 999999))
for i in six.moves.range(points)])
pts = pts.sort_index()
ts = cls(ts=pts)
t0 = time.time()
for i in six.moves.range(serialize_times):
s = ts.serialize()
t1 = time.time()
print(" Serialization speed: %.2f MB/s"
% (((points * 2 * 8)
/ ((t1 - t0) / serialize_times)) / (1024.0 * 1024.0)))
print(" Bytes per point: %.2f" % (len(s) / float(points)))
t0 = time.time()
for i in six.moves.range(serialize_times):
cls.unserialize(s, 1, 1)
t1 = time.time()
print(" Unserialization speed: %.2f MB/s"
% (((points * 2 * 8)
/ ((t1 - t0) / serialize_times)) / (1024.0 * 1024.0)))
def first_block_timestamp(self):
"""Return the timestamp of the first block."""
rounded = round_timestamp(self.ts.index[-1],
self.block_size.delta.value)
return rounded - (self.block_size * self.back_window)
def _truncate(self):
"""Truncate the timeserie."""
if self.block_size is not None and not self.ts.empty:
# Change that to remove the number of blocks needed to have
# the size <= max_size. A block is a number of "seconds" (a
# timespan)
self.ts = self.ts[self.first_block_timestamp():]
@functools.total_ordering
class SplitKey(object):
"""A class representing a split key.
A split key is basically a timestamp that can be used to split
`AggregatedTimeSerie` objects into multiple parts. Each part will contain
`SplitKey.POINTS_PER_SPLIT` points. The split keys for a given granularity
are regularly spaced.
"""
POINTS_PER_SPLIT = 3600
def __init__(self, value, sampling):
if isinstance(value, SplitKey):
self.key = value.key
elif isinstance(value, pandas.Timestamp):
self.key = value.value / 10e8
else:
self.key = float(value)
self._carbonara_sampling = float(sampling)
@classmethod
def from_timestamp_and_sampling(cls, timestamp, sampling):
return cls(
round_timestamp(
timestamp, freq=sampling * cls.POINTS_PER_SPLIT * 10e8),
sampling)
def __next__(self):
"""Get the split key of the next split.
:return: A `SplitKey` object.
"""
return self.__class__(
self.key + self._carbonara_sampling * self.POINTS_PER_SPLIT,
self._carbonara_sampling)
next = __next__
def __iter__(self):
return self
def __hash__(self):
return hash(self.key)
def __lt__(self, other):
if isinstance(other, SplitKey):
return self.key < other.key
if isinstance(other, pandas.Timestamp):
return self.key * 10e8 < other.value
return self.key < other
def __eq__(self, other):
if isinstance(other, SplitKey):
return self.key == other.key
if isinstance(other, pandas.Timestamp):
return self.key * 10e8 == other.value
return self.key == other
def __str__(self):
return str(float(self))
def __float__(self):
return self.key
def as_datetime(self):
return pandas.Timestamp(self.key, unit='s')
def __repr__(self):
return "<%s: %s / %fs>" % (self.__class__.__name__,
repr(self.key),
self._carbonara_sampling)
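# Illustrative example (not in the original source): with a 60 second
# sampling, each split covers 60 * 3600 = 216000 seconds, so every timestamp
# inside the same 2.5 day slot maps to the same key.
#
#   >>> key = SplitKey.from_timestamp_and_sampling(
#   ...     pandas.Timestamp('2017-06-05 17:04:30'), 60)
#   >>> key.as_datetime()
#   Timestamp('2017-06-05 12:00:00')
#   >>> float(next(key)) - float(key)
#   216000.0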
class AggregatedTimeSerie(TimeSerie):
_AGG_METHOD_PCT_RE = re.compile(r"([1-9][0-9]?)pct")
PADDED_SERIAL_LEN = struct.calcsize("<?d")
COMPRESSED_SERIAL_LEN = struct.calcsize("<Hd")
COMPRESSED_TIMESPAMP_LEN = struct.calcsize("<H")
def __init__(self, sampling, aggregation_method, ts=None, max_size=None):
"""A time serie that is downsampled.
Used to represent the downsampled timeserie for a single
granularity/aggregation-function pair stored for a metric.
"""
super(AggregatedTimeSerie, self).__init__(ts)
self.sampling = self._to_offset(sampling).nanos / 10e8
self.max_size = max_size
self.aggregation_method = aggregation_method
self._truncate(quick=True)
def resample(self, sampling):
return AggregatedTimeSerie.from_grouped_serie(
self.group_serie(sampling), sampling, self.aggregation_method)
@classmethod
def from_data(cls, sampling, aggregation_method, timestamps=None,
values=None, max_size=None):
return cls(sampling=sampling,
aggregation_method=aggregation_method,
ts=pandas.Series(values, timestamps),
max_size=max_size)
@staticmethod
def _get_agg_method(aggregation_method):
q = None
m = AggregatedTimeSerie._AGG_METHOD_PCT_RE.match(aggregation_method)
if m:
q = float(m.group(1))
aggregation_method_func_name = 'quantile'
else:
if not hasattr(GroupedTimeSeries, aggregation_method):
raise UnknownAggregationMethod(aggregation_method)
aggregation_method_func_name = aggregation_method
return aggregation_method_func_name, q
def split(self):
# NOTE(sileht): We previously used groupby with
# SplitKey.from_timestamp_and_sampling, but
# this is slow because pandas does that generically on any kind of
# DataFrame; since we have ordered timestamps, we don't need
# to iterate over the whole series.
freq = self.sampling * SplitKey.POINTS_PER_SPLIT
ix = numpy.array(self.ts.index, numpy.float64) / 10e8
keys, counts = numpy.unique((ix // freq) * freq, return_counts=True)
start = 0
for key, count in six.moves.zip(keys, counts):
end = start + count
if key == -0.0:
key = abs(key)
yield (SplitKey(key, self.sampling),
AggregatedTimeSerie(self.sampling, self.aggregation_method,
self.ts[start:end]))
start = end
@classmethod
def from_timeseries(cls, timeseries, sampling, aggregation_method,
max_size=None):
ts = pandas.Series()
for t in timeseries:
ts = ts.combine_first(t.ts)
return cls(sampling=sampling,
aggregation_method=aggregation_method,
ts=ts, max_size=max_size)
@classmethod
def from_grouped_serie(cls, grouped_serie, sampling, aggregation_method,
max_size=None):
agg_name, q = cls._get_agg_method(aggregation_method)
return cls(sampling, aggregation_method,
ts=cls._resample_grouped(grouped_serie, agg_name,
q),
max_size=max_size)
def __eq__(self, other):
return (isinstance(other, AggregatedTimeSerie)
and super(AggregatedTimeSerie, self).__eq__(other)
and self.max_size == other.max_size
and self.sampling == other.sampling
and self.aggregation_method == other.aggregation_method)
def __repr__(self):
return "<%s 0x%x sampling=%fs max_size=%s agg_method=%s>" % (
self.__class__.__name__,
id(self),
self.sampling,
self.max_size,
self.aggregation_method,
)
@staticmethod
def is_compressed(serialized_data):
"""Check whatever the data was serialized with compression."""
return six.indexbytes(serialized_data, 0) == ord("c")
@classmethod
def unserialize(cls, data, start, agg_method, sampling):
x, y = [], []
start = float(start)
if data:
if cls.is_compressed(data):
# Compressed format
uncompressed = lz4.block.decompress(
memoryview(data)[1:].tobytes())
nb_points = len(uncompressed) // cls.COMPRESSED_SERIAL_LEN
timestamps_raw = uncompressed[
:nb_points*cls.COMPRESSED_TIMESPAMP_LEN]
try:
y = numpy.frombuffer(timestamps_raw, dtype='<H')
except ValueError:
raise InvalidData()
y = numpy.cumsum(y * sampling) + start
values_raw = uncompressed[
nb_points*cls.COMPRESSED_TIMESPAMP_LEN:]
x = numpy.frombuffer(values_raw, dtype='<d')
else:
# Padded format
try:
everything = numpy.frombuffer(data, dtype=[('b', '<?'),
('v', '<d')])
except ValueError:
raise InvalidData()
index = numpy.nonzero(everything['b'])[0]
y = index * sampling + start
x = everything['v'][index]
y = y.astype(numpy.float64, copy=False) * 10e8
y = y.astype('datetime64[ns]', copy=False)
y = pandas.to_datetime(y)
return cls.from_data(sampling, agg_method, y, x)
def get_split_key(self, timestamp=None):
"""Return the split key for a particular timestamp.
:param timestamp: If None, the first timestamp of the timeserie
is used.
:return: A SplitKey object.
"""
if timestamp is None:
timestamp = self.first
return SplitKey.from_timestamp_and_sampling(
timestamp, self.sampling)
def serialize(self, start, compressed=True):
"""Serialize an aggregated timeserie.
The serialization starts with a byte that indicates the serialization
format: 'c' for compressed format, '\x00' or '\x01' for uncompressed
format. Both formats can be unserialized using the `unserialize` method.
The offset returned indicates at which offset the data should be
written from. In the case of compressed data, this is always 0.
:param start: Timestamp to start serialization at.
:param compressed: Serialize in a compressed format.
:return: a tuple of (offset, data)
"""
if not self.ts.index.is_monotonic:
self.ts = self.ts.sort_index()
offset_div = self.sampling * 10e8
if isinstance(start, SplitKey):
start = start.as_datetime().value
else:
start = pandas.Timestamp(start).value
# calculate how many seconds from start the series runs until and
# initialize list to store alternating delimiter, float entries
if compressed:
# NOTE(jd) Use a double delta encoding for timestamps
timestamps = numpy.insert(
numpy.diff(self.ts.index) // offset_div,
0, int((self.first.value - start) // offset_div))
timestamps = timestamps.astype('<H', copy=False)
values = self.ts.values.astype('<d', copy=False)
payload = (timestamps.tobytes() + values.tobytes())
return None, b"c" + self._compress(payload)
# NOTE(gordc): this binary serializes series based on the split
# time. the format is 1B True/False flag which denotes whether
# subsequent 8B is a real float or zero padding. every 9B
# represents one second from start time. this is intended to be run
# on data already split. ie. False,0,True,0 serialization means
# start datapoint is padding, and 1s after start time, the
# aggregate value is 0. calculate how many seconds from start the
# series runs until and initialize list to store alternating
# delimiter, float entries
first = self.first.value # NOTE(jd) needed because faster
e_offset = int((self.last.value - first) // offset_div) + 1
locs = (numpy.cumsum(numpy.diff(self.ts.index)) // offset_div)
locs = numpy.insert(locs, 0, 0)
locs = locs.astype(numpy.int, copy=False)
# Fill everything with zero
serial_dtype = [('b', '<?'), ('v', '<d')]
serial = numpy.zeros((e_offset,), dtype=serial_dtype)
# Create a structured array with two dimensions
values = self.ts.values.astype(dtype='<d', copy=False)
ones = numpy.ones_like(values, dtype='<?')
values = numpy.core.records.fromarrays((ones, values),
dtype=serial_dtype)
serial[locs] = values
payload = serial.tobytes()
offset = int((first - start) // offset_div) * self.PADDED_SERIAL_LEN
return offset, payload
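# Illustrative note (not in the original source): with a 60 second sampling
# and points at start and start+120s (one empty slot in between), the padded
# branch above emits three 9-byte records [True, v0][False, 0.0][True, v1]
# and returns offset 0, while the compressed branch stores the delta-encoded
# slot counts [0, 2] as '<H' followed by the two '<d' values, prefixed with
# b"c".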
def _truncate(self, quick=False):
"""Truncate the timeserie."""
if self.max_size is not None:
# Remove empty points if any that could be added by aggregation
self.ts = (self.ts[-self.max_size:] if quick
else self.ts.dropna()[-self.max_size:])
@staticmethod
def _resample_grouped(grouped_serie, agg_name, q=None):
agg_func = getattr(grouped_serie, agg_name)
return agg_func(q) if agg_name == 'quantile' else agg_func()
def fetch(self, from_timestamp=None, to_timestamp=None):
"""Fetch aggregated time value.
Returns a sorted list of tuples (timestamp, granularity, value).
"""
# Round timestamp to our granularity so we're sure that if e.g. 17:02
# is requested and we have points for 17:00 and 17:05 in a 5min
# granularity, we do return the 17:00 point and not nothing
if from_timestamp is None:
from_ = None
else:
from_ = round_timestamp(from_timestamp, self.sampling * 10e8)
points = self[from_:to_timestamp]
try:
# Do not include stop timestamp
del points[to_timestamp]
except KeyError:
pass
return [(timestamp, self.sampling, value)
for timestamp, value
in six.iteritems(points)]
def merge(self, ts):
"""Merge a timeserie into this one.
This is equivalent to `update` but is faster as there is no
resampling. Be careful about what you merge.
"""
self.ts = self.ts.combine_first(ts.ts)
@classmethod
def benchmark(cls):
"""Run a speed benchmark!"""
points = SplitKey.POINTS_PER_SPLIT
sampling = 5
resample = 35
now = datetime.datetime(2015, 4, 3, 23, 11)
print(cls.__name__)
print("=" * len(cls.__name__))
for title, values in [
("Simple continuous range", six.moves.range(points)),
("All 0", [float(0)] * points),
("All 1", [float(1)] * points),
("0 and 1", [0, 1] * (points // 2)),
("1 and 0 random",
[random.randint(0, 1)
for x in six.moves.range(points)]),
("Small number random pos/neg",
[random.randint(-100000, 10000)
for x in six.moves.range(points)]),
("Small number random pos",
[random.randint(0, 20000) for x in six.moves.range(points)]),
("Small number random neg",
[random.randint(-20000, 0) for x in six.moves.range(points)]),
("Sin(x)", map(math.sin, six.moves.range(points))),
("random ", [random.random()
for x in six.moves.range(points)]),
]:
print(title)
serialize_times = 50
pts = pandas.Series(values,
[now + datetime.timedelta(seconds=i*sampling)
for i in six.moves.range(points)])
pts = pts.sort_index()
ts = cls(ts=pts, sampling=sampling, aggregation_method='mean')
t0 = time.time()
key = ts.get_split_key()
for i in six.moves.range(serialize_times):
e, s = ts.serialize(key, compressed=False)
t1 = time.time()
print(" Uncompressed serialization speed: %.2f MB/s"
% (((points * 2 * 8)
/ ((t1 - t0) / serialize_times)) / (1024.0 * 1024.0)))
print(" Bytes per point: %.2f" % (len(s) / float(points)))
t0 = time.time()
for i in six.moves.range(serialize_times):
cls.unserialize(s, key, 'mean', sampling)
t1 = time.time()
print(" Unserialization speed: %.2f MB/s"
% (((points * 2 * 8)
/ ((t1 - t0) / serialize_times)) / (1024.0 * 1024.0)))
t0 = time.time()
for i in six.moves.range(serialize_times):
o, s = ts.serialize(key, compressed=True)
t1 = time.time()
print(" Compressed serialization speed: %.2f MB/s"
% (((points * 2 * 8)
/ ((t1 - t0) / serialize_times)) / (1024.0 * 1024.0)))
print(" Bytes per point: %.2f" % (len(s) / float(points)))
t0 = time.time()
for i in six.moves.range(serialize_times):
cls.unserialize(s, key, 'mean', sampling)
t1 = time.time()
print(" Uncompression speed: %.2f MB/s"
% (((points * 2 * 8)
/ ((t1 - t0) / serialize_times)) / (1024.0 * 1024.0)))
t0 = time.time()
for i in six.moves.range(serialize_times):
list(ts.split())
t1 = time.time()
print(" split() speed: %.8f s" % ((t1 - t0) / serialize_times))
# NOTE(sileht): propose a new series with half overlapping timestamps
pts = ts.ts.copy(deep=True)
tsbis = cls(ts=pts, sampling=sampling, aggregation_method='mean')
tsbis.ts.reindex(tsbis.ts.index -
datetime.timedelta(seconds=sampling * points / 2))
t0 = time.time()
for i in six.moves.range(serialize_times):
ts.merge(tsbis)
t1 = time.time()
print(" merge() speed: %.8f s" % ((t1 - t0) / serialize_times))
for agg in ['mean', 'sum', 'max', 'min', 'std', 'median', 'first',
'last', 'count', '5pct', '90pct']:
serialize_times = 3 if agg.endswith('pct') else 10
ts = cls(ts=pts, sampling=sampling, aggregation_method=agg)
t0 = time.time()
for i in six.moves.range(serialize_times):
ts.resample(resample)
t1 = time.time()
print(" resample(%s) speed: %.8f s" % (agg, (t1 - t0) /
serialize_times))
@staticmethod
def aggregated(timeseries, aggregation, from_timestamp=None,
to_timestamp=None, needed_percent_of_overlap=100.0,
fill=None):
index = ['timestamp', 'granularity']
columns = ['timestamp', 'granularity', 'value']
dataframes = []
if not timeseries:
return []
for timeserie in timeseries:
timeserie_raw = timeserie.fetch(from_timestamp, to_timestamp)
if timeserie_raw:
dataframe = pandas.DataFrame(timeserie_raw, columns=columns)
dataframe = dataframe.set_index(index)
dataframes.append(dataframe)
if not dataframes:
return []
number_of_distinct_datasource = len(timeseries) / len(
set(ts.sampling for ts in timeseries)
)
left_boundary_ts = None
right_boundary_ts = None
if fill is not None:
fill_df = pandas.concat(dataframes, axis=1)
if fill != 'null':
fill_df = fill_df.fillna(fill)
single_df = pandas.concat([series for __, series in
fill_df.iteritems()]).to_frame()
grouped = single_df.groupby(level=index)
else:
grouped = pandas.concat(dataframes).groupby(level=index)
maybe_next_timestamp_is_left_boundary = False
left_holes = 0
right_holes = 0
holes = 0
for (timestamp, __), group in grouped:
if group.count()['value'] != number_of_distinct_datasource:
maybe_next_timestamp_is_left_boundary = True
if left_boundary_ts is not None:
right_holes += 1
else:
left_holes += 1
elif maybe_next_timestamp_is_left_boundary:
left_boundary_ts = timestamp
maybe_next_timestamp_is_left_boundary = False
else:
right_boundary_ts = timestamp
holes += right_holes
right_holes = 0
if to_timestamp is not None:
holes += left_holes
if from_timestamp is not None:
holes += right_holes
if to_timestamp is not None or from_timestamp is not None:
maximum = len(grouped)
percent_of_overlap = (float(maximum - holes) * 100.0 /
float(maximum))
if percent_of_overlap < needed_percent_of_overlap:
raise UnAggregableTimeseries(
'Less than %f%% of datapoints overlap in this '
'timespan (%.2f%%)' % (needed_percent_of_overlap,
percent_of_overlap))
if (needed_percent_of_overlap > 0 and
(right_boundary_ts == left_boundary_ts or
(right_boundary_ts is None
and maybe_next_timestamp_is_left_boundary))):
LOG.debug("We didn't find points that overlap in those "
"timeseries. "
"right_boundary_ts=%(right_boundary_ts)s, "
"left_boundary_ts=%(left_boundary_ts)s, "
"groups=%(groups)s", {
'right_boundary_ts': right_boundary_ts,
'left_boundary_ts': left_boundary_ts,
'groups': list(grouped)
})
raise UnAggregableTimeseries('No overlap')
# NOTE(sileht): this calls the aggregation method on already
# aggregated values; for some kinds of aggregation the
# result can look weird, but this is the best we can do
# because we no longer have the raw datapoints in those cases.
# FIXME(sileht): so should we bail out in case of stddev, percentile
# and median?
agg_timeserie = getattr(grouped, aggregation)()
agg_timeserie = agg_timeserie.dropna().reset_index()
if from_timestamp is None and left_boundary_ts:
agg_timeserie = agg_timeserie[
agg_timeserie['timestamp'] >= left_boundary_ts]
if to_timestamp is None and right_boundary_ts:
agg_timeserie = agg_timeserie[
agg_timeserie['timestamp'] <= right_boundary_ts]
points = (agg_timeserie.sort_values(by=['granularity', 'timestamp'],
ascending=[0, 1]).itertuples())
return [(timestamp, granularity, value)
for __, timestamp, granularity, value in points]
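# Worked example (illustrative, not in the original source): aggregating two
# metrics over an explicit from/to timespan in which they share only 3 of 4
# distinct timestamps gives percent_of_overlap = (4 - 1) * 100.0 / 4 = 75.0,
# which the default needed_percent_of_overlap of 100.0 rejects with
# UnAggregableTimeseries.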
if __name__ == '__main__':
import sys
args = sys.argv[1:]
if not args or "--boundtimeserie" in args:
BoundTimeSerie.benchmark()
if not args or "--aggregatedtimeserie" in args:
AggregatedTimeSerie.benchmark()

@ -1,317 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
# Copyright (c) 2015-2017 Red Hat
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import threading
import time
import cotyledon
from cotyledon import oslo_config_glue
from futurist import periodics
from oslo_config import cfg
from oslo_log import log
import six
import tenacity
import tooz
from gnocchi import archive_policy
from gnocchi import genconfig
from gnocchi import indexer
from gnocchi import service
from gnocchi import statsd as statsd_service
from gnocchi import storage
from gnocchi.storage import incoming
from gnocchi import utils
LOG = log.getLogger(__name__)
def config_generator():
return genconfig.prehook(None, sys.argv[1:])
def upgrade():
conf = cfg.ConfigOpts()
conf.register_cli_opts([
cfg.BoolOpt("skip-index", default=False,
help="Skip index upgrade."),
cfg.BoolOpt("skip-storage", default=False,
help="Skip storage upgrade."),
cfg.BoolOpt("skip-archive-policies-creation", default=False,
help="Skip default archive policies creation."),
cfg.IntOpt("num-storage-sacks", default=128,
help="Initial number of storage sacks to create."),
])
conf = service.prepare_service(conf=conf)
index = indexer.get_driver(conf)
index.connect()
if not conf.skip_index:
LOG.info("Upgrading indexer %s", index)
index.upgrade()
if not conf.skip_storage:
s = storage.get_driver(conf)
LOG.info("Upgrading storage %s", s)
s.upgrade(index, conf.num_storage_sacks)
if (not conf.skip_archive_policies_creation
and not index.list_archive_policies()
and not index.list_archive_policy_rules()):
for name, ap in six.iteritems(archive_policy.DEFAULT_ARCHIVE_POLICIES):
index.create_archive_policy(ap)
index.create_archive_policy_rule("default", "*", "low")
def change_sack_size():
conf = cfg.ConfigOpts()
conf.register_cli_opts([
cfg.IntOpt("sack_size", required=True, min=1,
help="Number of sacks."),
])
conf = service.prepare_service(conf=conf)
s = storage.get_driver(conf)
report = s.incoming.measures_report(details=False)
remainder = report['summary']['measures']
if remainder:
LOG.error('Cannot change sack when non-empty backlog. Process '
'remaining %s measures and try again', remainder)
return
LOG.info("Changing sack size to: %s", conf.sack_size)
old_num_sacks = s.incoming.get_storage_sacks()
s.incoming.set_storage_settings(conf.sack_size)
s.incoming.remove_sack_group(old_num_sacks)
def statsd():
statsd_service.start()
class MetricProcessBase(cotyledon.Service):
def __init__(self, worker_id, conf, interval_delay=0):
super(MetricProcessBase, self).__init__(worker_id)
self.conf = conf
self.startup_delay = worker_id
self.interval_delay = interval_delay
self._shutdown = threading.Event()
self._shutdown_done = threading.Event()
def _configure(self):
self.store = storage.get_driver(self.conf)
self.index = indexer.get_driver(self.conf)
self.index.connect()
def run(self):
self._configure()
# Delay startup so workers are jittered.
time.sleep(self.startup_delay)
while not self._shutdown.is_set():
with utils.StopWatch() as timer:
self._run_job()
self._shutdown.wait(max(0, self.interval_delay - timer.elapsed()))
self._shutdown_done.set()
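# Illustrative note (not in the original source): with interval_delay = 30
# and a _run_job() pass that took 12 seconds, the loop above waits
# max(0, 30 - 12) = 18 seconds before the next pass; a pass that takes
# longer than the delay triggers the next one immediately.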
def terminate(self):
self._shutdown.set()
self.close_services()
LOG.info("Waiting ongoing metric processing to finish")
self._shutdown_done.wait()
@staticmethod
def close_services():
pass
@staticmethod
def _run_job():
raise NotImplementedError
class MetricReporting(MetricProcessBase):
name = "reporting"
def __init__(self, worker_id, conf):
super(MetricReporting, self).__init__(
worker_id, conf, conf.metricd.metric_reporting_delay)
def _run_job(self):
try:
report = self.store.incoming.measures_report(details=False)
LOG.info("%d measurements bundles across %d "
"metrics wait to be processed.",
report['summary']['measures'],
report['summary']['metrics'])
except incoming.ReportGenerationError:
LOG.warning("Unable to compute backlog. Retrying at next "
"interval.")
except Exception:
LOG.error("Unexpected error during pending measures reporting",
exc_info=True)
class MetricProcessor(MetricProcessBase):
name = "processing"
GROUP_ID = "gnocchi-processing"
def __init__(self, worker_id, conf):
super(MetricProcessor, self).__init__(
worker_id, conf, conf.metricd.metric_processing_delay)
self._coord, self._my_id = utils.get_coordinator_and_start(
conf.storage.coordination_url)
self._tasks = []
self.group_state = None
@utils.retry
def _configure(self):
super(MetricProcessor, self)._configure()
# create a fallback in case partitioning fails or assigns no tasks
self.fallback_tasks = list(
six.moves.range(self.store.incoming.NUM_SACKS))
try:
self.partitioner = self._coord.join_partitioned_group(
self.GROUP_ID, partitions=200)
LOG.info('Joined coordination group: %s', self.GROUP_ID)
@periodics.periodic(spacing=self.conf.metricd.worker_sync_rate,
run_immediately=True)
def run_watchers():
self._coord.run_watchers()
self.periodic = periodics.PeriodicWorker.create([])
self.periodic.add(run_watchers)
t = threading.Thread(target=self.periodic.start)
t.daemon = True
t.start()
except NotImplementedError:
LOG.warning('Coordinator does not support partitioning. Worker '
'will battle against other workers for jobs.')
except tooz.ToozError as e:
LOG.error('Unexpected error configuring coordinator for '
'partitioning. Retrying: %s', e)
raise tenacity.TryAgain(e)
def _get_tasks(self):
try:
if (not self._tasks or
self.group_state != self.partitioner.ring.nodes):
self.group_state = self.partitioner.ring.nodes.copy()
self._tasks = [
i for i in six.moves.range(self.store.incoming.NUM_SACKS)
if self.partitioner.belongs_to_self(
i, replicas=self.conf.metricd.processing_replicas)]
finally:
return self._tasks or self.fallback_tasks
def _run_job(self):
m_count = 0
s_count = 0
in_store = self.store.incoming
for s in self._get_tasks():
# TODO(gordc): support delay release lock so we don't
# process a sack right after another process
lock = in_store.get_sack_lock(self._coord, s)
if not lock.acquire(blocking=False):
continue
try:
metrics = in_store.list_metric_with_measures_to_process(s)
m_count += len(metrics)
self.store.process_background_tasks(self.index, metrics)
s_count += 1
except Exception:
LOG.error("Unexpected error processing assigned job",
exc_info=True)
finally:
lock.release()
LOG.debug("%d metrics processed from %d sacks", m_count, s_count)
def close_services(self):
self._coord.stop()
class MetricJanitor(MetricProcessBase):
name = "janitor"
def __init__(self, worker_id, conf):
super(MetricJanitor, self).__init__(
worker_id, conf, conf.metricd.metric_cleanup_delay)
def _run_job(self):
try:
self.store.expunge_metrics(self.index)
LOG.debug("Metrics marked for deletion removed from backend")
except Exception:
LOG.error("Unexpected error during metric cleanup", exc_info=True)
class MetricdServiceManager(cotyledon.ServiceManager):
def __init__(self, conf):
super(MetricdServiceManager, self).__init__()
oslo_config_glue.setup(self, conf)
self.conf = conf
self.metric_processor_id = self.add(
MetricProcessor, args=(self.conf,),
workers=conf.metricd.workers)
if self.conf.metricd.metric_reporting_delay >= 0:
self.add(MetricReporting, args=(self.conf,))
self.add(MetricJanitor, args=(self.conf,))
self.register_hooks(on_reload=self.on_reload)
def on_reload(self):
# NOTE(sileht): We do not implement reload() in Workers so all workers
# will receive SIGHUP and exit gracefully, then they will be
# restarted with the new number of workers. This is important because
# we use the number of workers to declare the capability in tooz and
# to select the block of metrics to process.
self.reconfigure(self.metric_processor_id,
workers=self.conf.metricd.workers)
def run(self):
super(MetricdServiceManager, self).run()
self.queue.close()
def metricd_tester(conf):
# NOTE(sileht): This method is designed to be profiled; we
# want to avoid issues with the profiler and os.fork(), which is
# why we don't use the MetricdServiceManager.
index = indexer.get_driver(conf)
index.connect()
s = storage.get_driver(conf)
metrics = set()
for i in six.moves.range(s.incoming.NUM_SACKS):
metrics.update(s.incoming.list_metric_with_measures_to_process(i))
if len(metrics) >= conf.stop_after_processing_metrics:
break
s.process_new_measures(
index, list(metrics)[:conf.stop_after_processing_metrics], True)
def metricd():
conf = cfg.ConfigOpts()
conf.register_cli_opts([
cfg.IntOpt("stop-after-processing-metrics",
default=0,
min=0,
help="Number of metrics to process without workers, "
"for testing purpose"),
])
conf = service.prepare_service(conf=conf)
if conf.stop_after_processing_metrics:
metricd_tester(conf)
else:
MetricdServiceManager(conf).run()

@ -1,19 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2014 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class NotImplementedError(NotImplementedError):
pass

@ -1,29 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2016-2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
def prehook(cmd, args=None):
if args is None:
args = ['--output-file', 'etc/gnocchi/gnocchi.conf']
try:
from oslo_config import generator
generator.main(
['--config-file',
'%s/gnocchi-config-generator.conf' % os.path.dirname(__file__)]
+ args)
except Exception as e:
print("Unable to build sample configuration file: %s" % e)

@ -1,178 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
import json
import os
import subprocess
import sys
import tempfile
import jinja2
import six
import six.moves
import webob.request
import yaml
from gnocchi.tests import test_rest
# HACK(jd) Not sure why, but Sphinx runs this setup multiple times, so we just
# avoid doing the requests several times by using this global variable :(
_RUN = False
def _setup_test_app():
t = test_rest.RestTest()
t.auth_mode = "basic"
t.setUpClass()
t.setUp()
return t.app
def _format_json(txt):
return json.dumps(json.loads(txt),
sort_keys=True,
indent=2)
def _extract_body(req_or_resp):
# TODO(jd) Make this a Sphinx option
if req_or_resp.content_type == "application/json":
body = _format_json(req_or_resp.body)
else:
body = req_or_resp.body
return "\n ".join(body.split("\n"))
def _format_headers(headers):
return "\n".join(
" %s: %s" % (k, v)
for k, v in six.iteritems(headers))
def _response_to_httpdomain(response):
return """
.. sourcecode:: http
HTTP/1.1 %(status)s
%(headers)s
%(body)s""" % {
'status': response.status,
'body': _extract_body(response),
'headers': _format_headers(response.headers),
}
def _request_to_httpdomain(request):
return """
.. sourcecode:: http
%(method)s %(path)s %(http_version)s
%(headers)s
%(body)s""" % {
'body': _extract_body(request),
'method': request.method,
'path': request.path_qs,
'http_version': request.http_version,
'headers': _format_headers(request.headers),
}
def _format_request_reply(request, response):
return (_request_to_httpdomain(request)
+ "\n"
+ _response_to_httpdomain(response))
class ScenarioList(list):
def __getitem__(self, key):
for scenario in self:
if scenario['name'] == key:
return scenario
return super(ScenarioList, self).__getitem__(key)
multiversion_hack = """
import sys
import os
srcdir = os.path.join("%s", "..", "..")
os.chdir(srcdir)
sys.path.insert(0, srcdir)
class FakeApp(object):
def info(self, *args, **kwargs):
pass
import gnocchi.gendoc
gnocchi.gendoc.setup(FakeApp())
"""
def setup(app):
global _RUN
if _RUN:
return
# NOTE(sileht): On gnocchi.xyz, we build a multiversion of the docs;
# all versions are built with the master gnocchi.gendoc sphinx extension.
# So the hack here runs another python script to generate the rest.rst
# file of the old versions of the module.
# It also drops the database before each run.
if sys.argv[0].endswith("sphinx-versioning"):
subprocess.call(["dropdb", os.environ['PGDATABASE']])
subprocess.call(["createdb", os.environ['PGDATABASE']])
with tempfile.NamedTemporaryFile() as f:
f.write(multiversion_hack % app.confdir)
f.flush()
subprocess.call(['python', f.name])
_RUN = True
return
webapp = _setup_test_app()
# TODO(jd) Do not hardcode doc/source
with open("doc/source/rest.yaml") as f:
scenarios = ScenarioList(yaml.load(f))
for entry in scenarios:
template = jinja2.Template(entry['request'])
fake_file = six.moves.cStringIO()
fake_file.write(template.render(scenarios=scenarios).encode('utf-8'))
fake_file.seek(0)
request = webapp.RequestClass.from_file(fake_file)
# TODO(jd) Fix this lame bug in webob < 1.7
if (hasattr(webob.request, "http_method_probably_has_body")
and request.method == "DELETE"):
# Webob has a bug: it does not read the body for DELETE, l4m3r
clen = request.content_length
if clen is None:
request.body = fake_file.read()
else:
request.body = fake_file.read(clen)
app.info("Doing request %s: %s" % (entry['name'],
six.text_type(request)))
with webapp.use_admin_user():
response = webapp.request(request)
entry['response'] = response
entry['doc'] = _format_request_reply(request, response)
with open("doc/source/rest.j2", "r") as f:
template = jinja2.Template(f.read().decode('utf-8'))
with open("doc/source/rest.rst", "w") as f:
f.write(template.render(scenarios=scenarios).encode('utf-8'))
_RUN = True

@ -1,11 +0,0 @@
[DEFAULT]
wrap_width = 79
namespace = gnocchi
namespace = oslo.db
namespace = oslo.log
namespace = oslo.middleware.cors
namespace = oslo.middleware.healthcheck
namespace = oslo.middleware.http_proxy_to_wsgi
namespace = oslo.policy
namespace = cotyledon
namespace = keystonemiddleware.auth_token

@ -1,411 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fnmatch
import hashlib
import os
import iso8601
from oslo_config import cfg
import six
from six.moves.urllib import parse
from stevedore import driver
from gnocchi import exceptions
OPTS = [
cfg.StrOpt('url',
secret=True,
required=True,
default=os.getenv("GNOCCHI_INDEXER_URL"),
help='Indexer driver to use'),
]
_marker = object()
class Resource(object):
def get_metric(self, metric_name):
for m in self.metrics:
if m.name == metric_name:
return m
def __eq__(self, other):
return (self.id == other.id
and self.type == other.type
and self.revision == other.revision
and self.revision_start == other.revision_start
and self.revision_end == other.revision_end
and self.creator == other.creator
and self.user_id == other.user_id
and self.project_id == other.project_id
and self.started_at == other.started_at
and self.ended_at == other.ended_at)
@property
def etag(self):
etag = hashlib.sha1()
etag.update(six.text_type(self.id).encode('utf-8'))
etag.update(six.text_type(
self.revision_start.isoformat()).encode('utf-8'))
return etag.hexdigest()
@property
def lastmodified(self):
# less precise revision start for Last-Modified http header
return self.revision_start.replace(microsecond=0,
tzinfo=iso8601.iso8601.UTC)
def get_driver(conf):
"""Return the configured driver."""
split = parse.urlsplit(conf.indexer.url)
d = driver.DriverManager('gnocchi.indexer',
split.scheme).driver
return d(conf)
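# Illustrative note (not in the original source): the driver is selected by
# the URL scheme, so with indexer.url = "postgresql://user:pwd@host/gnocchi"
# urlsplit() yields the scheme "postgresql" and the matching
# 'gnocchi.indexer' stevedore entry point is loaded (assuming such an entry
# point is shipped by the package).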
class IndexerException(Exception):
"""Base class for all exceptions raised by an indexer."""
class NoSuchResourceType(IndexerException):
"""Error raised when the resource type is unknown."""
def __init__(self, type):
super(NoSuchResourceType, self).__init__(
"Resource type %s does not exist" % type)
self.type = type
class NoSuchMetric(IndexerException):
"""Error raised when a metric does not exist."""
def __init__(self, metric):
super(NoSuchMetric, self).__init__("Metric %s does not exist" %
metric)
self.metric = metric
class NoSuchResource(IndexerException):
"""Error raised when a resource does not exist."""
def __init__(self, resource):
super(NoSuchResource, self).__init__("Resource %s does not exist" %
resource)
self.resource = resource
class NoSuchArchivePolicy(IndexerException):
"""Error raised when an archive policy does not exist."""
def __init__(self, archive_policy):
super(NoSuchArchivePolicy, self).__init__(
"Archive policy %s does not exist" % archive_policy)
self.archive_policy = archive_policy
class UnsupportedArchivePolicyChange(IndexerException):
"""Error raised when modifying archive policy if not supported."""
def __init__(self, archive_policy, message):
super(UnsupportedArchivePolicyChange, self).__init__(
"Archive policy %s does not support change: %s" %
(archive_policy, message))
self.archive_policy = archive_policy
self.message = message
class ArchivePolicyInUse(IndexerException):
"""Error raised when an archive policy is still being used."""
def __init__(self, archive_policy):
super(ArchivePolicyInUse, self).__init__(
"Archive policy %s is still in use" % archive_policy)
self.archive_policy = archive_policy
class ResourceTypeInUse(IndexerException):
"""Error raised when an resource type is still being used."""
def __init__(self, resource_type):
super(ResourceTypeInUse, self).__init__(
"Resource type %s is still in use" % resource_type)
self.resource_type = resource_type
class UnexpectedResourceTypeState(IndexerException):
"""Error raised when an resource type state is not expected."""
def __init__(self, resource_type, expected_state, state):
super(UnexpectedResourceTypeState, self).__init__(
"Resource type %s state is %s (expected: %s)" % (
resource_type, state, expected_state))
self.resource_type = resource_type
self.expected_state = expected_state
self.state = state
class NoSuchArchivePolicyRule(IndexerException):
"""Error raised when an archive policy rule does not exist."""
def __init__(self, archive_policy_rule):
super(NoSuchArchivePolicyRule, self).__init__(
"Archive policy rule %s does not exist" %
archive_policy_rule)
self.archive_policy_rule = archive_policy_rule
class NoArchivePolicyRuleMatch(IndexerException):
"""Error raised when no archive policy rule found for metric."""
def __init__(self, metric_name):
super(NoArchivePolicyRuleMatch, self).__init__(
"No Archive policy rule found for metric %s" %
metric_name)
self.metric_name = metric_name
class NamedMetricAlreadyExists(IndexerException):
"""Error raised when a named metric already exists."""
def __init__(self, metric):
super(NamedMetricAlreadyExists, self).__init__(
"Named metric %s already exists" % metric)
self.metric = metric
class ResourceAlreadyExists(IndexerException):
"""Error raised when a resource already exists."""
def __init__(self, resource):
super(ResourceAlreadyExists, self).__init__(
"Resource %s already exists" % resource)
self.resource = resource
class ResourceTypeAlreadyExists(IndexerException):
"""Error raised when a resource type already exists."""
def __init__(self, resource_type):
super(ResourceTypeAlreadyExists, self).__init__(
"Resource type %s already exists" % resource_type)
self.resource_type = resource_type
class ResourceAttributeError(IndexerException, AttributeError):
"""Error raised when an attribute does not exist for a resource type."""
def __init__(self, resource, attribute):
super(ResourceAttributeError, self).__init__(
"Resource type %s has no %s attribute" % (resource, attribute))
self.resource = resource
self.attribute = attribute
class ResourceValueError(IndexerException, ValueError):
"""Error raised when an attribute value is invalid for a resource type."""
def __init__(self, resource_type, attribute, value):
super(ResourceValueError, self).__init__(
"Value %s for attribute %s on resource type %s is invalid"
% (value, attribute, resource_type))
self.resource_type = resource_type
self.attribute = attribute
self.value = value
class ArchivePolicyAlreadyExists(IndexerException):
"""Error raised when an archive policy already exists."""
def __init__(self, name):
super(ArchivePolicyAlreadyExists, self).__init__(
"Archive policy %s already exists" % name)
self.name = name
class ArchivePolicyRuleAlreadyExists(IndexerException):
"""Error raised when an archive policy rule already exists."""
def __init__(self, name):
super(ArchivePolicyRuleAlreadyExists, self).__init__(
"Archive policy rule %s already exists" % name)
self.name = name
class QueryError(IndexerException):
def __init__(self):
super(QueryError, self).__init__("Unable to parse this query")
class QueryValueError(QueryError, ValueError):
def __init__(self, v, f):
super(QueryError, self).__init__("Invalid value: `%s' for field `%s'"
% (v, f))
class QueryInvalidOperator(QueryError):
def __init__(self, op):
self.op = op
super(QueryError, self).__init__("Unknown operator `%s'" % op)
class QueryAttributeError(QueryError, ResourceAttributeError):
def __init__(self, resource, attribute):
ResourceAttributeError.__init__(self, resource, attribute)
class InvalidPagination(IndexerException):
"""Error raised when a resource does not exist."""
def __init__(self, reason):
self.reason = reason
super(InvalidPagination, self).__init__(
"Invalid pagination: `%s'" % reason)
class IndexerDriver(object):
@staticmethod
def __init__(conf):
pass
@staticmethod
def connect():
pass
@staticmethod
def disconnect():
pass
@staticmethod
def upgrade(nocreate=False):
pass
@staticmethod
def get_resource(resource_type, resource_id, with_metrics=False):
"""Get a resource from the indexer.
:param resource_type: The type of the resource to look for.
:param resource_id: The UUID of the resource.
:param with_metrics: Whether to include metrics information.
"""
raise exceptions.NotImplementedError
@staticmethod
def list_resources(resource_type='generic',
attribute_filter=None,
details=False,
history=False,
limit=None,
marker=None,
sorts=None):
raise exceptions.NotImplementedError
@staticmethod
def list_archive_policies():
raise exceptions.NotImplementedError
@staticmethod
def get_archive_policy(name):
raise exceptions.NotImplementedError
@staticmethod
def update_archive_policy(name, ap_items):
raise exceptions.NotImplementedError
@staticmethod
def delete_archive_policy(name):
raise exceptions.NotImplementedError
@staticmethod
def get_archive_policy_rule(name):
raise exceptions.NotImplementedError
@staticmethod
def list_archive_policy_rules():
raise exceptions.NotImplementedError
@staticmethod
def create_archive_policy_rule(name, metric_pattern, archive_policy_name):
raise exceptions.NotImplementedError
@staticmethod
def delete_archive_policy_rule(name):
raise exceptions.NotImplementedError
@staticmethod
def create_metric(id, creator,
archive_policy_name, name=None, unit=None,
resource_id=None):
raise exceptions.NotImplementedError
@staticmethod
def list_metrics(names=None, ids=None, details=False, status='active',
limit=None, marker=None, sorts=None, **kwargs):
raise exceptions.NotImplementedError
@staticmethod
def create_archive_policy(archive_policy):
raise exceptions.NotImplementedError
@staticmethod
def create_resource(resource_type, id, creator,
user_id=None, project_id=None,
started_at=None, ended_at=None, metrics=None,
**kwargs):
raise exceptions.NotImplementedError
@staticmethod
def update_resource(resource_type, resource_id, ended_at=_marker,
metrics=_marker,
append_metrics=False,
create_revision=True,
**kwargs):
raise exceptions.NotImplementedError
@staticmethod
def delete_resource(uuid):
raise exceptions.NotImplementedError
@staticmethod
def delete_resources(resource_type='generic',
attribute_filter=None):
raise exceptions.NotImplementedError
@staticmethod
def delete_metric(id):
raise exceptions.NotImplementedError
@staticmethod
def expunge_metric(id):
raise exceptions.NotImplementedError
def get_archive_policy_for_metric(self, metric_name):
"""Helper to get the archive policy according archive policy rules."""
rules = self.list_archive_policy_rules()
for rule in rules:
if fnmatch.fnmatch(metric_name or "", rule.metric_pattern):
return self.get_archive_policy(rule.archive_policy_name)
raise NoArchivePolicyRuleMatch(metric_name)
@staticmethod
def create_resource_type(resource_type):
raise exceptions.NotImplementedError
@staticmethod
def get_resource_type(name):
"""Get a resource type from the indexer.
:param name: name of the resource type
"""
raise exceptions.NotImplementedError
@staticmethod
def list_resource_types(attribute_filter=None,
limit=None,
marker=None,
sorts=None):
raise exceptions.NotImplementedError
@staticmethod
def get_resource_attributes_schemas():
raise exceptions.NotImplementedError
@staticmethod
def get_resource_type_schema():
raise exceptions.NotImplementedError
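
get_archive_policy_for_metric() above is the one piece of concrete logic in this interface: archive policy rules are scanned and the first fnmatch pattern matching the metric name wins. A standalone sketch of that matching, with made-up rules standing in for the rows returned by list_archive_policy_rules():

# Illustration of the fnmatch-based rule matching used by
# get_archive_policy_for_metric(); the rules below are invented.
import fnmatch

RULES = [("disk.io.*", "high"), ("cpu*", "medium"), ("*", "default")]

def match(metric_name):
    for pattern, archive_policy_name in RULES:
        if fnmatch.fnmatch(metric_name or "", pattern):
            return archive_policy_name
    raise LookupError("No archive policy rule found for %s" % metric_name)

print(match("disk.io.rate"))   # -> high
print(match("memory.usage"))   # -> default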

View File

@ -1,3 +0,0 @@
[alembic]
script_location = gnocchi.indexer:alembic
sqlalchemy.url = postgresql://localhost/gnocchi
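
Note that sqlalchemy.url above is only a fallback for driving alembic by hand; the env.py that follows pulls the real URL from conf.indexer.url. A hedged sketch of overriding it when loading the ini programmatically (the local file path and URL are assumptions):

# Hedged sketch: load an alembic.ini like the one above and point it at
# another database before running commands.
from alembic.config import Config

cfg = Config("alembic.ini")  # assumed local copy of the file above
cfg.set_main_option("sqlalchemy.url", "postgresql://gnocchi@db/gnocchi")
print(cfg.get_main_option("script_location"))  # gnocchi.indexer:alembic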

View File

@ -1,90 +0,0 @@
#
# Copyright 2015 Red Hat. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""A test module to exercise the Gnocchi API with gabbi."""
from alembic import context
from gnocchi.indexer import sqlalchemy
from gnocchi.indexer import sqlalchemy_base
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = sqlalchemy_base.Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
conf = config.conf
context.configure(url=conf.indexer.url,
target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
conf = config.conf
indexer = sqlalchemy.SQLAlchemyIndexer(conf)
indexer.connect()
with indexer.facade.writer_connection() as connectable:
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
indexer.disconnect()
# If `alembic' was used directly from the CLI
if not hasattr(config, "conf"):
from gnocchi import service
config.conf = service.prepare_service([])
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
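
Because this env.py calls service.prepare_service() itself when alembic is used directly, the migration chain can be run from the plain alembic API as well as from the indexer's upgrade() hook shown earlier. A hedged sketch of the programmatic path (the ini path is an assumption):

# Hedged sketch: run every revision up to head through the alembic API,
# which executes run_migrations_online() from the env.py above.
from alembic import command
from alembic.config import Config

cfg = Config("alembic.ini")   # assumed local copy of the ini shown above
command.upgrade(cfg, "head")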

View File

@ -1,36 +0,0 @@
# Copyright ${create_date.year} OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
def upgrade():
${upgrades if upgrades else "pass"}

View File

@ -1,54 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Add tablename to resource_type
Revision ID: 0718ed97e5b3
Revises: 828c16f70cce
Create Date: 2016-01-20 08:14:04.893783
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '0718ed97e5b3'
down_revision = '828c16f70cce'
branch_labels = None
depends_on = None
def upgrade():
op.add_column("resource_type", sa.Column('tablename', sa.String(18),
nullable=True))
resource_type = sa.Table(
'resource_type', sa.MetaData(),
sa.Column('name', sa.String(255), nullable=False),
sa.Column('tablename', sa.String(18), nullable=True)
)
op.execute(resource_type.update().where(
resource_type.c.name == "instance_network_interface"
).values({'tablename': op.inline_literal("'instance_net_int'")}))
op.execute(resource_type.update().where(
resource_type.c.name != "instance_network_interface"
).values({'tablename': resource_type.c.name}))
op.alter_column("resource_type", "tablename", type_=sa.String(18),
nullable=False)
op.create_unique_constraint("uniq_resource_type0tablename",
"resource_type", ["tablename"])

View File

@ -1,40 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add original resource id column
Revision ID: 1c2c61ac1f4c
Revises: 1f21cbdd6bc2
Create Date: 2016-01-27 05:57:48.909012
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '1c2c61ac1f4c'
down_revision = '62a8dfb139bb'
branch_labels = None
depends_on = None
def upgrade():
op.add_column('resource', sa.Column('original_resource_id',
sa.String(length=255),
nullable=True))
op.add_column('resource_history', sa.Column('original_resource_id',
sa.String(length=255),
nullable=True))

View File

@ -1,267 +0,0 @@
# flake8: noqa
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Initial base for Gnocchi 1.0.0
Revision ID: 1c98ac614015
Revises:
Create Date: 2015-04-27 16:05:13.530625
"""
# revision identifiers, used by Alembic.
revision = '1c98ac614015'
down_revision = None
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
import sqlalchemy_utils
import gnocchi.indexer.sqlalchemy_base
def upgrade():
op.create_table('resource',
sa.Column('type', sa.Enum('generic', 'instance', 'swift_account', 'volume', 'ceph_account', 'network', 'identity', 'ipmi', 'stack', 'image', name='resource_type_enum'), nullable=False),
sa.Column('created_by_user_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('created_by_project_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('started_at', gnocchi.indexer.sqlalchemy_base.PreciseTimestamp(), nullable=False),
sa.Column('revision_start', gnocchi.indexer.sqlalchemy_base.PreciseTimestamp(), nullable=False),
sa.Column('ended_at', gnocchi.indexer.sqlalchemy_base.PreciseTimestamp(), nullable=True),
sa.Column('user_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('project_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_resource_id', 'resource', ['id'], unique=False)
op.create_table('archive_policy',
sa.Column('name', sa.String(length=255), nullable=False),
sa.Column('back_window', sa.Integer(), nullable=False),
sa.Column('definition', gnocchi.indexer.sqlalchemy_base.ArchivePolicyDefinitionType(), nullable=False),
sa.Column('aggregation_methods', gnocchi.indexer.sqlalchemy_base.SetType(), nullable=False),
sa.PrimaryKeyConstraint('name'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_archive_policy_name', 'archive_policy', ['name'], unique=False)
op.create_table('volume',
sa.Column('display_name', sa.String(length=255), nullable=False),
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_volume_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_volume_id', 'volume', ['id'], unique=False)
op.create_table('instance',
sa.Column('flavor_id', sa.Integer(), nullable=False),
sa.Column('image_ref', sa.String(length=255), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
sa.Column('display_name', sa.String(length=255), nullable=False),
sa.Column('server_group', sa.String(length=255), nullable=True),
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_instance_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_instance_id', 'instance', ['id'], unique=False)
op.create_table('stack',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_stack_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_stack_id', 'stack', ['id'], unique=False)
op.create_table('archive_policy_rule',
sa.Column('name', sa.String(length=255), nullable=False),
sa.Column('archive_policy_name', sa.String(length=255), nullable=False),
sa.Column('metric_pattern', sa.String(length=255), nullable=False),
sa.ForeignKeyConstraint(['archive_policy_name'], ['archive_policy.name'], name="fk_archive_policy_rule_archive_policy_name_archive_policy_name", ondelete='RESTRICT'),
sa.PrimaryKeyConstraint('name'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_archive_policy_rule_name', 'archive_policy_rule', ['name'], unique=False)
op.create_table('swift_account',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_swift_account_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_swift_account_id', 'swift_account', ['id'], unique=False)
op.create_table('ceph_account',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_ceph_account_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_ceph_account_id', 'ceph_account', ['id'], unique=False)
op.create_table('ipmi',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_ipmi_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_ipmi_id', 'ipmi', ['id'], unique=False)
op.create_table('image',
sa.Column('name', sa.String(length=255), nullable=False),
sa.Column('container_format', sa.String(length=255), nullable=False),
sa.Column('disk_format', sa.String(length=255), nullable=False),
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_image_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_image_id', 'image', ['id'], unique=False)
op.create_table('resource_history',
sa.Column('type', sa.Enum('generic', 'instance', 'swift_account', 'volume', 'ceph_account', 'network', 'identity', 'ipmi', 'stack', 'image', name='resource_type_enum'), nullable=False),
sa.Column('created_by_user_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('created_by_project_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('started_at', gnocchi.indexer.sqlalchemy_base.PreciseTimestamp(), nullable=False),
sa.Column('revision_start', gnocchi.indexer.sqlalchemy_base.PreciseTimestamp(), nullable=False),
sa.Column('ended_at', gnocchi.indexer.sqlalchemy_base.PreciseTimestamp(), nullable=True),
sa.Column('user_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('project_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('revision', sa.Integer(), nullable=False),
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.Column('revision_end', gnocchi.indexer.sqlalchemy_base.PreciseTimestamp(), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_resource_history_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_resource_history_id', 'resource_history', ['id'], unique=False)
op.create_table('identity',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_identity_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_identity_id', 'identity', ['id'], unique=False)
op.create_table('network',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'], name="fk_network_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_network_id', 'network', ['id'], unique=False)
op.create_table('metric',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=False),
sa.Column('archive_policy_name', sa.String(length=255), nullable=False),
sa.Column('created_by_user_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('created_by_project_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('resource_id', sqlalchemy_utils.types.uuid.UUIDType(binary=False), nullable=True),
sa.Column('name', sa.String(length=255), nullable=True),
sa.ForeignKeyConstraint(['archive_policy_name'], ['archive_policy.name'], name="fk_metric_archive_policy_name_archive_policy_name", ondelete='RESTRICT'),
sa.ForeignKeyConstraint(['resource_id'], ['resource.id'], name="fk_metric_resource_id_resource_id", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('resource_id', 'name', name='uniq_metric0resource_id0name'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_metric_id', 'metric', ['id'], unique=False)
op.create_table('identity_history',
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_identity_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_identity_history_revision', 'identity_history', ['revision'], unique=False)
op.create_table('instance_history',
sa.Column('flavor_id', sa.Integer(), nullable=False),
sa.Column('image_ref', sa.String(length=255), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
sa.Column('display_name', sa.String(length=255), nullable=False),
sa.Column('server_group', sa.String(length=255), nullable=True),
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_instance_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_instance_history_revision', 'instance_history', ['revision'], unique=False)
op.create_table('network_history',
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_network_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_network_history_revision', 'network_history', ['revision'], unique=False)
op.create_table('swift_account_history',
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_swift_account_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_swift_account_history_revision', 'swift_account_history', ['revision'], unique=False)
op.create_table('ceph_account_history',
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_ceph_account_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_ceph_account_history_revision', 'ceph_account_history', ['revision'], unique=False)
op.create_table('ipmi_history',
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_ipmi_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_ipmi_history_revision', 'ipmi_history', ['revision'], unique=False)
op.create_table('image_history',
sa.Column('name', sa.String(length=255), nullable=False),
sa.Column('container_format', sa.String(length=255), nullable=False),
sa.Column('disk_format', sa.String(length=255), nullable=False),
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_image_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_image_history_revision', 'image_history', ['revision'], unique=False)
op.create_table('stack_history',
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_stack_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_stack_history_revision', 'stack_history', ['revision'], unique=False)
op.create_table('volume_history',
sa.Column('display_name', sa.String(length=255), nullable=False),
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'], ['resource_history.revision'], name="fk_volume_history_resource_history_revision", ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_index('ix_volume_history_revision', 'volume_history', ['revision'], unique=False)
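
Every later migration in this directory chains back to this initial revision through its down_revision attribute. A hedged sketch of walking that chain without touching a database (the ini path is an assumption):

# Hedged sketch: inspect the revision graph defined by these files.
from alembic.config import Config
from alembic.script import ScriptDirectory

cfg = Config("alembic.ini")            # assumed ini, as in the file above
scripts = ScriptDirectory.from_config(cfg)
print(scripts.get_current_head())      # newest revision id
for rev in scripts.walk_revisions():   # from head back to 1c98ac614015
    print(rev.revision, "<-", rev.down_revision)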

View File

@ -1,66 +0,0 @@
# Copyright 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Make sure resource.original_resource_id is NOT NULL
Revision ID: 1e1a63d3d186
Revises: 397987e38570
Create Date: 2017-01-26 19:33:35.209688
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy import func
import sqlalchemy_utils
# revision identifiers, used by Alembic.
revision = '1e1a63d3d186'
down_revision = '397987e38570'
branch_labels = None
depends_on = None
def clean_substr(col, start, length):
return func.lower(func.substr(func.hex(col), start, length))
def upgrade():
bind = op.get_bind()
for table_name in ('resource', 'resource_history'):
table = sa.Table(table_name, sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(),
nullable=False),
sa.Column('original_resource_id', sa.String(255)))
# NOTE(gordc): mysql stores id as binary so we need to rebuild back to
# string uuid.
if bind and bind.engine.name == "mysql":
vals = {'original_resource_id':
clean_substr(table.c.id, 1, 8) + '-' +
clean_substr(table.c.id, 9, 4) + '-' +
clean_substr(table.c.id, 13, 4) + '-' +
clean_substr(table.c.id, 17, 4) + '-' +
clean_substr(table.c.id, 21, 12)}
else:
vals = {'original_resource_id': table.c.id}
op.execute(table.update().where(
table.c.original_resource_id.is_(None)).values(vals))
op.alter_column(table_name, "original_resource_id", nullable=False,
existing_type=sa.String(255),
existing_nullable=True)
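
The clean_substr() expression above rebuilds a dashed UUID string from the hex of a BINARY(16) column, which is how UUIDType(binary=True) stores ids on MySQL. The same reconstruction in plain Python (a Python 3 sketch, independent of the migration):

# Python 3 sketch of the clean_substr() reconstruction: hex the 16 raw
# bytes, then re-insert dashes at the 8-4-4-4-12 boundaries.
import uuid

original = uuid.uuid4()
raw = original.bytes                      # what BINARY(16) stores
hexed = raw.hex()                         # lower-case, 32 characters
rebuilt = "-".join((hexed[0:8], hexed[8:12], hexed[12:16],
                    hexed[16:20], hexed[20:32]))
assert rebuilt == str(original)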

View File

@ -1,41 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""allow volume display name to be null
Revision ID: 1f21cbdd6bc2
Revises: 469b308577a9
Create Date: 2015-12-08 02:12:20.273880
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '1f21cbdd6bc2'
down_revision = '469b308577a9'
branch_labels = None
depends_on = None
def upgrade():
op.alter_column('volume', 'display_name',
existing_type=sa.String(length=255),
nullable=True)
op.alter_column('volume_history', 'display_name',
existing_type=sa.String(length=255),
nullable=True)

View File

@ -1,89 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Add updating resource type states
Revision ID: 27d2a1d205ff
Revises: 7e6f9d542f8b
Create Date: 2016-08-31 14:05:34.316496
"""
from alembic import op
import sqlalchemy as sa
from gnocchi.indexer import sqlalchemy_base
from gnocchi import utils
# revision identifiers, used by Alembic.
revision = '27d2a1d205ff'
down_revision = '7e6f9d542f8b'
branch_labels = None
depends_on = None
resource_type = sa.sql.table(
'resource_type',
sa.sql.column('updated_at', sqlalchemy_base.PreciseTimestamp()))
state_enum = sa.Enum("active", "creating",
"creation_error", "deleting",
"deletion_error", "updating",
"updating_error",
name="resource_type_state_enum")
def upgrade():
op.alter_column('resource_type', 'state',
type_=state_enum,
nullable=False,
server_default=None)
    # NOTE(sileht): postgresql has a builtin ENUM type, so
    # just altering the column won't work.
    # https://bitbucket.org/zzzeek/alembic/issues/270/altering-enum-type
    # Does it break offline migration because we use get_bind()?
    # NOTE(luogangyi): since we cannot use 'ALTER TYPE' in a transaction,
    # we split the 'ALTER TYPE' operation into several steps.
bind = op.get_bind()
if bind and bind.engine.name == "postgresql":
op.execute("ALTER TYPE resource_type_state_enum RENAME TO \
old_resource_type_state_enum")
op.execute("CREATE TYPE resource_type_state_enum AS ENUM \
('active', 'creating', 'creation_error', \
'deleting', 'deletion_error', 'updating', \
'updating_error')")
op.execute("ALTER TABLE resource_type ALTER COLUMN state TYPE \
resource_type_state_enum USING \
state::text::resource_type_state_enum")
op.execute("DROP TYPE old_resource_type_state_enum")
# NOTE(sileht): we can't alter type with server_default set on
# postgresql...
op.alter_column('resource_type', 'state',
type_=state_enum,
nullable=False,
server_default="creating")
op.add_column("resource_type",
sa.Column("updated_at",
sqlalchemy_base.PreciseTimestamp(),
nullable=True))
op.execute(resource_type.update().values({'updated_at': utils.utcnow()}))
op.alter_column("resource_type", "updated_at",
type_=sqlalchemy_base.PreciseTimestamp(),
nullable=False)

View File

@ -1,39 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""drop_useless_enum
Revision ID: 2e0b912062d1
Revises: 34c517bcc2dd
Create Date: 2016-04-15 07:29:38.492237
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '2e0b912062d1'
down_revision = '34c517bcc2dd'
branch_labels = None
depends_on = None
def upgrade():
bind = op.get_bind()
if bind and bind.engine.name == "postgresql":
        # NOTE(sileht): we use IF EXISTS because if the database has
        # been created from scratch with 2.1 the enum doesn't exist
op.execute("DROP TYPE IF EXISTS resource_type_enum")

View File

@ -1,91 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""shorter_foreign_key
Revision ID: 34c517bcc2dd
Revises: ed9c6ddc5c35
Create Date: 2016-04-13 16:58:42.536431
"""
from alembic import op
import sqlalchemy
# revision identifiers, used by Alembic.
revision = '34c517bcc2dd'
down_revision = 'ed9c6ddc5c35'
branch_labels = None
depends_on = None
resource_type_helper = sqlalchemy.Table(
'resource_type',
sqlalchemy.MetaData(),
sqlalchemy.Column('tablename', sqlalchemy.String(18), nullable=False)
)
to_rename = [
('fk_metric_archive_policy_name_archive_policy_name',
'fk_metric_ap_name_ap_name',
'archive_policy', 'name',
'metric', 'archive_policy_name',
"RESTRICT"),
('fk_resource_history_resource_type_name',
'fk_rh_resource_type_name',
'resource_type', 'name', 'resource_history', 'type',
"RESTRICT"),
('fk_resource_history_id_resource_id',
'fk_rh_id_resource_id',
'resource', 'id', 'resource_history', 'id',
"CASCADE"),
('fk_archive_policy_rule_archive_policy_name_archive_policy_name',
'fk_apr_ap_name_ap_name',
'archive_policy', 'name', 'archive_policy_rule', 'archive_policy_name',
"RESTRICT")
]
def upgrade():
connection = op.get_bind()
insp = sqlalchemy.inspect(connection)
op.alter_column("resource_type", "tablename",
type_=sqlalchemy.String(35),
existing_type=sqlalchemy.String(18), nullable=False)
for rt in connection.execute(resource_type_helper.select()):
if rt.tablename == "generic":
continue
fk_names = [fk['name'] for fk in insp.get_foreign_keys("%s_history" %
rt.tablename)]
fk_old = ("fk_%s_history_resource_history_revision" %
rt.tablename)
if fk_old not in fk_names:
            # The table has been created from scratch recently
fk_old = ("fk_%s_history_revision_resource_history_revision" %
rt.tablename)
fk_new = "fk_%s_h_revision_rh_revision" % rt.tablename
to_rename.append((fk_old, fk_new, 'resource_history', 'revision',
"%s_history" % rt.tablename, 'revision', 'CASCADE'))
for (fk_old, fk_new, src_table, src_col, dst_table, dst_col, ondelete
) in to_rename:
op.drop_constraint(fk_old, dst_table, type_="foreignkey")
op.create_foreign_key(fk_new, dst_table, src_table,
[dst_col], [src_col], ondelete=ondelete)
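
The renames above shorten constraint names, presumably because widening tablename to 35 characters lets the generated history foreign-key names exceed common identifier limits (MySQL caps identifiers at 64 characters, PostgreSQL at 63). An illustrative length check, not part of the migration:

# Illustration only: with a 35-character tablename, the old FK pattern
# overruns a 64-character identifier limit while the new one fits.
LIMIT = 64
tablename = "x" * 35   # hypothetical custom resource-type tablename

fk_old = "fk_%s_history_resource_history_revision" % tablename
fk_new = "fk_%s_h_revision_rh_revision" % tablename
for name in (fk_old, fk_new):
    print(len(name), "ok" if len(name) <= LIMIT else "too long")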

View File

@ -1,103 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""create instance_disk and instance_net_int tables
Revision ID: 3901f5ea2b8e
Revises: 42ee7f3e25f8
Create Date: 2015-08-27 17:00:25.092891
"""
# revision identifiers, used by Alembic.
revision = '3901f5ea2b8e'
down_revision = '42ee7f3e25f8'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
import sqlalchemy_utils
def upgrade():
for table in ["resource", "resource_history"]:
op.alter_column(table, "type",
type_=sa.Enum('generic', 'instance', 'swift_account',
'volume', 'ceph_account', 'network',
'identity', 'ipmi', 'stack', 'image',
'instance_network_interface',
'instance_disk',
name='resource_type_enum'),
nullable=False)
    # NOTE(sileht): postgresql has a builtin ENUM type, so
    # just altering the column won't work.
    # https://bitbucket.org/zzzeek/alembic/issues/270/altering-enum-type
    # Does it break offline migration because we use get_bind()?
    # NOTE(luogangyi): since we cannot use 'ALTER TYPE' in a transaction,
    # we split the 'ALTER TYPE' operation into several steps.
bind = op.get_bind()
if bind and bind.engine.name == "postgresql":
op.execute("ALTER TYPE resource_type_enum RENAME TO \
old_resource_type_enum")
op.execute("CREATE TYPE resource_type_enum AS ENUM \
('generic', 'instance', 'swift_account', \
'volume', 'ceph_account', 'network', \
'identity', 'ipmi', 'stack', 'image', \
'instance_network_interface', 'instance_disk')")
for table in ["resource", "resource_history"]:
op.execute("ALTER TABLE %s ALTER COLUMN type TYPE \
resource_type_enum USING \
type::text::resource_type_enum" % table)
op.execute("DROP TYPE old_resource_type_enum")
for table in ['instance_disk', 'instance_net_int']:
op.create_table(
table,
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('instance_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('name', sa.String(length=255), nullable=False),
sa.Index('ix_%s_id' % table, 'id', unique=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'],
name="fk_%s_id_resource_id" % table,
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_table(
'%s_history' % table,
sa.Column('instance_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('name', sa.String(length=255), nullable=False),
sa.Column('revision', sa.Integer(), nullable=False),
sa.Index('ix_%s_history_revision' % table, 'revision',
unique=False),
sa.ForeignKeyConstraint(['revision'],
['resource_history.revision'],
name=("fk_%s_history_"
"resource_history_revision") % table,
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)

View File

@ -1,184 +0,0 @@
# Copyright 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Remove slashes from original resource IDs, recompute their id with creator
Revision ID: 397987e38570
Revises: aba5a217ca9b
Create Date: 2017-01-11 16:32:40.421758
"""
import uuid
from alembic import op
import six
import sqlalchemy as sa
import sqlalchemy_utils
from gnocchi import utils
# revision identifiers, used by Alembic.
revision = '397987e38570'
down_revision = 'aba5a217ca9b'
branch_labels = None
depends_on = None
resource_type_table = sa.Table(
'resource_type',
sa.MetaData(),
sa.Column('name', sa.String(255), nullable=False),
sa.Column('tablename', sa.String(35), nullable=False)
)
resource_table = sa.Table(
'resource',
sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(),
nullable=False),
sa.Column('original_resource_id', sa.String(255)),
sa.Column('type', sa.String(255)),
sa.Column('creator', sa.String(255))
)
resourcehistory_table = sa.Table(
'resource_history',
sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(),
nullable=False),
sa.Column('original_resource_id', sa.String(255))
)
metric_table = sa.Table(
'metric',
sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(),
nullable=False),
sa.Column('name', sa.String(255)),
sa.Column('resource_id', sqlalchemy_utils.types.uuid.UUIDType())
)
uuidtype = sqlalchemy_utils.types.uuid.UUIDType()
def upgrade():
connection = op.get_bind()
resource_type_tables = {}
resource_type_tablenames = dict(
(rt.name, rt.tablename)
for rt in connection.execute(resource_type_table.select())
if rt.tablename != "generic"
)
op.drop_constraint("fk_metric_resource_id_resource_id", "metric",
type_="foreignkey")
for name, table in resource_type_tablenames.items():
op.drop_constraint("fk_%s_id_resource_id" % table, table,
type_="foreignkey")
resource_type_tables[name] = sa.Table(
table,
sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(),
nullable=False),
)
for resource in connection.execute(resource_table.select()):
if resource.original_resource_id is None:
# statsd resource has no original_resource_id and is NULL
continue
try:
orig_as_uuid = uuid.UUID(str(resource.original_resource_id))
except ValueError:
pass
else:
if orig_as_uuid == resource.id:
continue
new_original_resource_id = resource.original_resource_id.replace(
'/', '_')
if six.PY2:
new_original_resource_id = new_original_resource_id.encode('utf-8')
new_id = sa.literal(uuidtype.process_bind_param(
str(utils.ResourceUUID(
new_original_resource_id, resource.creator)),
connection.dialect))
# resource table
connection.execute(
resource_table.update().where(
resource_table.c.id == resource.id
).values(
id=new_id,
original_resource_id=new_original_resource_id
)
)
# resource history table
connection.execute(
resourcehistory_table.update().where(
resourcehistory_table.c.id == resource.id
).values(
id=new_id,
original_resource_id=new_original_resource_id
)
)
if resource.type != "generic":
rtable = resource_type_tables[resource.type]
# resource table (type)
connection.execute(
rtable.update().where(
rtable.c.id == resource.id
).values(id=new_id)
)
# Metric
connection.execute(
metric_table.update().where(
metric_table.c.resource_id == resource.id
).values(
resource_id=new_id
)
)
for (name, table) in resource_type_tablenames.items():
op.create_foreign_key("fk_%s_id_resource_id" % table,
table, "resource",
("id",), ("id",),
ondelete="CASCADE")
op.create_foreign_key("fk_metric_resource_id_resource_id",
"metric", "resource",
("resource_id",), ("id",),
ondelete="SET NULL")
for metric in connection.execute(metric_table.select().where(
metric_table.c.name.like("%/%"))):
connection.execute(
metric_table.update().where(
metric_table.c.id == metric.id
).values(
name=metric.name.replace('/', '_'),
)
)
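
The rewrite above pushes every non-UUID original_resource_id through utils.ResourceUUID(new_original_resource_id, resource.creator), so the stored id becomes a deterministic function of the sanitized id and its creator. A rough sketch of that idea only; the namespace UUID and the separator below are placeholders, not Gnocchi's actual values:

# Rough sketch of a deterministic id derived from (original id, creator).
# NAMESPACE and the separator are placeholders, not the values used by
# gnocchi.utils.ResourceUUID.
import uuid

NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000000")

def resource_uuid(original_id, creator):
    try:
        return uuid.UUID(original_id)   # already a UUID: keep it as-is
    except ValueError:
        return uuid.uuid5(NAMESPACE, "%s\x00%s" % (original_id, creator))

print(resource_uuid("instance-0001_disk_vda", "admin:admin"))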

View File

@ -1,49 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""create metric status column
Revision ID: 39b7d449d46a
Revises: 3901f5ea2b8e
Create Date: 2015-09-16 13:25:34.249237
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '39b7d449d46a'
down_revision = '3901f5ea2b8e'
branch_labels = None
depends_on = None
def upgrade():
enum = sa.Enum("active", "delete", name="metric_status_enum")
enum.create(op.get_bind(), checkfirst=False)
op.add_column("metric",
sa.Column('status', enum,
nullable=False,
server_default="active"))
op.create_index('ix_metric_status', 'metric', ['status'], unique=False)
op.drop_constraint("fk_metric_resource_id_resource_id",
"metric", type_="foreignkey")
op.create_foreign_key("fk_metric_resource_id_resource_id",
"metric", "resource",
("resource_id",), ("id",),
ondelete="SET NULL")

View File

@ -1,39 +0,0 @@
#
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""ck_started_before_ended
Revision ID: 40c6aae14c3f
Revises: 1c98ac614015
Create Date: 2015-04-28 16:35:11.999144
"""
# revision identifiers, used by Alembic.
revision = '40c6aae14c3f'
down_revision = '1c98ac614015'
branch_labels = None
depends_on = None
from alembic import op
def upgrade():
op.create_check_constraint("ck_started_before_ended",
"resource",
"started_at <= ended_at")
op.create_check_constraint("ck_started_before_ended",
"resource_history",
"started_at <= ended_at")

View File

@ -1,38 +0,0 @@
#
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""alter flavorid from int to string
Revision ID: 42ee7f3e25f8
Revises: f7d44b47928
Create Date: 2015-05-10 21:20:24.941263
"""
# revision identifiers, used by Alembic.
revision = '42ee7f3e25f8'
down_revision = 'f7d44b47928'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
for table in ('instance', 'instance_history'):
op.alter_column(table, "flavor_id",
type_=sa.String(length=255),
nullable=False)

View File

@ -1,41 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""allow image_ref to be null
Revision ID: 469b308577a9
Revises: 39b7d449d46a
Create Date: 2015-11-29 00:23:39.998256
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '469b308577a9'
down_revision = '39b7d449d46a'
branch_labels = None
depends_on = None
def upgrade():
op.alter_column('instance', 'image_ref',
existing_type=sa.String(length=255),
nullable=True)
op.alter_column('instance_history', 'image_ref',
existing_type=sa.String(length=255),
nullable=True)

View File

@ -1,77 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""mysql_float_to_timestamp
Revision ID: 5c4f93e5bb4
Revises: 7e6f9d542f8b
Create Date: 2016-07-25 15:36:36.469847
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.sql import func
from gnocchi.indexer import sqlalchemy_base
# revision identifiers, used by Alembic.
revision = '5c4f93e5bb4'
down_revision = '27d2a1d205ff'
branch_labels = None
depends_on = None
def upgrade():
bind = op.get_bind()
if bind and bind.engine.name == "mysql":
op.execute("SET time_zone = '+00:00'")
# NOTE(jd) So that crappy engine that is MySQL does not have "ALTER
# TABLE … USING …". We need to copy everything and convert…
for table_name, column_name in (("resource", "started_at"),
("resource", "ended_at"),
("resource", "revision_start"),
("resource_history", "started_at"),
("resource_history", "ended_at"),
("resource_history", "revision_start"),
("resource_history", "revision_end"),
("resource_type", "updated_at")):
nullable = column_name == "ended_at"
existing_type = sa.types.DECIMAL(
precision=20, scale=6, asdecimal=True)
existing_col = sa.Column(
column_name,
existing_type,
nullable=nullable)
temp_col = sa.Column(
column_name + "_ts",
sqlalchemy_base.TimestampUTC(),
nullable=True)
op.add_column(table_name, temp_col)
t = sa.sql.table(table_name, existing_col, temp_col)
op.execute(t.update().values(
**{column_name + "_ts": func.from_unixtime(existing_col)}))
op.drop_column(table_name, column_name)
op.alter_column(table_name,
column_name + "_ts",
nullable=nullable,
type_=sqlalchemy_base.TimestampUTC(),
existing_nullable=nullable,
existing_type=existing_type,
new_column_name=column_name)
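
The conversion above copies each DECIMAL(20,6) epoch column into a temporary *_ts column through func.from_unixtime(), then swaps the columns. What that SQL function computes, sketched in Python 3 with an arbitrary sample value:

# Python 3 sketch of the from_unixtime() conversion applied above.
from datetime import datetime, timezone
from decimal import Decimal

stored = Decimal("1469460996.469847")   # old DECIMAL(20,6) epoch value
as_timestamp = datetime.fromtimestamp(float(stored), tz=timezone.utc)
print(as_timestamp.isoformat())         # 2016-07-25T15:36:36.469847+00:00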

View File

@ -1,249 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Change uuid to string
Revision ID: 62a8dfb139bb
Revises: 1f21cbdd6bc2
Create Date: 2016-01-20 11:57:45.954607
"""
from alembic import op
import sqlalchemy as sa
import sqlalchemy_utils
# revision identifiers, used by Alembic.
revision = '62a8dfb139bb'
down_revision = '1f21cbdd6bc2'
branch_labels = None
depends_on = None
resourcehelper = sa.Table(
'resource',
sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('tmp_created_by_user_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('tmp_created_by_project_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('tmp_user_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('tmp_project_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('created_by_user_id',
sa.String(length=255),
nullable=True),
sa.Column('created_by_project_id',
sa.String(length=255),
nullable=True),
sa.Column('user_id',
sa.String(length=255),
nullable=True),
sa.Column('project_id',
sa.String(length=255),
nullable=True),
)
resourcehistoryhelper = sa.Table(
'resource_history',
sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('tmp_created_by_user_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('tmp_created_by_project_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('tmp_user_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('tmp_project_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('created_by_user_id',
sa.String(length=255),
nullable=True),
sa.Column('created_by_project_id',
sa.String(length=255),
nullable=True),
sa.Column('user_id',
sa.String(length=255),
nullable=True),
sa.Column('project_id',
sa.String(length=255),
nullable=True),
)
metrichelper = sa.Table(
'metric',
sa.MetaData(),
sa.Column('id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('tmp_created_by_user_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('tmp_created_by_project_id',
sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=True),
sa.Column('created_by_user_id',
sa.String(length=255),
nullable=True),
sa.Column('created_by_project_id',
sa.String(length=255),
nullable=True),
)
def upgrade():
connection = op.get_bind()
# Rename user/project fields to tmp_*
op.alter_column('metric', 'created_by_project_id',
new_column_name='tmp_created_by_project_id',
existing_type=sa.BINARY(length=16))
op.alter_column('metric', 'created_by_user_id',
new_column_name='tmp_created_by_user_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource', 'created_by_project_id',
new_column_name='tmp_created_by_project_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource', 'created_by_user_id',
new_column_name='tmp_created_by_user_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource', 'project_id',
new_column_name='tmp_project_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource', 'user_id',
new_column_name='tmp_user_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource_history', 'created_by_project_id',
new_column_name='tmp_created_by_project_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource_history', 'created_by_user_id',
new_column_name='tmp_created_by_user_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource_history', 'project_id',
new_column_name='tmp_project_id',
existing_type=sa.BINARY(length=16))
op.alter_column('resource_history', 'user_id',
new_column_name='tmp_user_id',
existing_type=sa.BINARY(length=16))
# Add new user/project fields as strings
op.add_column('metric',
sa.Column('created_by_project_id',
sa.String(length=255), nullable=True))
op.add_column('metric',
sa.Column('created_by_user_id',
sa.String(length=255), nullable=True))
op.add_column('resource',
sa.Column('created_by_project_id',
sa.String(length=255), nullable=True))
op.add_column('resource',
sa.Column('created_by_user_id',
sa.String(length=255), nullable=True))
op.add_column('resource',
sa.Column('project_id',
sa.String(length=255), nullable=True))
op.add_column('resource',
sa.Column('user_id',
sa.String(length=255), nullable=True))
op.add_column('resource_history',
sa.Column('created_by_project_id',
sa.String(length=255), nullable=True))
op.add_column('resource_history',
sa.Column('created_by_user_id',
sa.String(length=255), nullable=True))
op.add_column('resource_history',
sa.Column('project_id',
sa.String(length=255), nullable=True))
op.add_column('resource_history',
sa.Column('user_id',
sa.String(length=255), nullable=True))
# Migrate data
for tablehelper in [resourcehelper, resourcehistoryhelper]:
for resource in connection.execute(tablehelper.select()):
if resource.tmp_created_by_project_id:
created_by_project_id = \
str(resource.tmp_created_by_project_id).replace('-', '')
else:
created_by_project_id = None
if resource.tmp_created_by_user_id:
created_by_user_id = \
str(resource.tmp_created_by_user_id).replace('-', '')
else:
created_by_user_id = None
if resource.tmp_project_id:
project_id = str(resource.tmp_project_id).replace('-', '')
else:
project_id = None
if resource.tmp_user_id:
user_id = str(resource.tmp_user_id).replace('-', '')
else:
user_id = None
connection.execute(
tablehelper.update().where(
tablehelper.c.id == resource.id
).values(
created_by_project_id=created_by_project_id,
created_by_user_id=created_by_user_id,
project_id=project_id,
user_id=user_id,
)
)
for metric in connection.execute(metrichelper.select()):
        if metric.tmp_created_by_project_id:
            created_by_project_id = \
                str(metric.tmp_created_by_project_id).replace('-', '')
        else:
            created_by_project_id = None
        if metric.tmp_created_by_user_id:
            created_by_user_id = \
                str(metric.tmp_created_by_user_id).replace('-', '')
        else:
            created_by_user_id = None
connection.execute(
metrichelper.update().where(
metrichelper.c.id == metric.id
).values(
created_by_project_id=created_by_project_id,
created_by_user_id=created_by_user_id,
)
)
# Delete temp fields
op.drop_column('metric', 'tmp_created_by_project_id')
op.drop_column('metric', 'tmp_created_by_user_id')
op.drop_column('resource', 'tmp_created_by_project_id')
op.drop_column('resource', 'tmp_created_by_user_id')
op.drop_column('resource', 'tmp_project_id')
op.drop_column('resource', 'tmp_user_id')
op.drop_column('resource_history', 'tmp_created_by_project_id')
op.drop_column('resource_history', 'tmp_created_by_user_id')
op.drop_column('resource_history', 'tmp_project_id')
op.drop_column('resource_history', 'tmp_user_id')
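# Editor-added illustration (hypothetical, not part of this migration): the
# data migration above stores UUID values as 32-character hex strings with
# the dashes stripped, e.g.:
if __name__ == "__main__":
    import uuid
    sample = uuid.UUID("550e8400-e29b-41d4-a716-446655440000")
    print(str(sample).replace("-", ""))  # 550e8400e29b41d4a716446655440000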


@ -1,43 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""resource_type state column
Revision ID: 7e6f9d542f8b
Revises: c62df18bf4ee
Create Date: 2016-05-19 16:52:58.939088
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '7e6f9d542f8b'
down_revision = 'c62df18bf4ee'
branch_labels = None
depends_on = None
def upgrade():
states = ("active", "creating", "creation_error", "deleting",
"deletion_error")
enum = sa.Enum(*states, name="resource_type_state_enum")
enum.create(op.get_bind(), checkfirst=False)
op.add_column("resource_type",
sa.Column('state', enum, nullable=False,
server_default="creating"))
rt = sa.sql.table('resource_type', sa.sql.column('state', enum))
op.execute(rt.update().values(state="active"))


@ -1,85 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""create resource_type table
Revision ID: 828c16f70cce
Revises: 9901e5ea4b6e
Create Date: 2016-01-19 12:47:19.384127
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '828c16f70cce'
down_revision = '9901e5ea4b6e'
branch_labels = None
depends_on = None
type_string = sa.String(255)
type_enum = sa.Enum('generic', 'instance',
'swift_account', 'volume',
'ceph_account', 'network',
'identity', 'ipmi', 'stack',
'image', 'instance_disk',
'instance_network_interface',
'host', 'host_disk',
'host_network_interface',
name="resource_type_enum")
def type_string_col(name, table):
return sa.Column(
name, type_string,
sa.ForeignKey('resource_type.name',
ondelete="RESTRICT",
name="fk_%s_resource_type_name" % table))
def type_enum_col(name):
return sa.Column(name, type_enum,
nullable=False, default='generic')
def upgrade():
resource_type = op.create_table(
'resource_type',
sa.Column('name', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('name'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
resource = sa.Table('resource', sa.MetaData(),
type_string_col("type", "resource"))
op.execute(resource_type.insert().from_select(
['name'], sa.select([resource.c.type]).distinct()))
for table in ["resource", "resource_history"]:
op.alter_column(table, "type", new_column_name="old_type",
existing_type=type_enum)
op.add_column(table, type_string_col("type", table))
sa_table = sa.Table(table, sa.MetaData(),
type_string_col("type", table),
type_enum_col('old_type'))
op.execute(sa_table.update().values(
{sa_table.c.type: sa_table.c.old_type}))
op.drop_column(table, "old_type")
op.alter_column(table, "type", nullable=False,
existing_type=type_string)
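# Editor-added note (not part of the original migration): the
# insert().from_select() call above is roughly equivalent to
#     INSERT INTO resource_type (name) SELECT DISTINCT type FROM resource;
# after which the enum-typed "type" columns are converted to plain strings
# that reference resource_type.name.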


@ -1,48 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Migrate legacy resources to DB
Revision ID: 8f376189b9eb
Revises: d24877c22ab0
Create Date: 2016-01-20 15:03:28.115656
"""
import json
from alembic import op
import sqlalchemy as sa
from gnocchi.indexer import sqlalchemy_legacy_resources as legacy
# revision identifiers, used by Alembic.
revision = '8f376189b9eb'
down_revision = 'd24877c22ab0'
branch_labels = None
depends_on = None
def upgrade():
resource_type = sa.Table(
'resource_type', sa.MetaData(),
sa.Column('name', sa.String(255), nullable=False),
sa.Column('attributes', sa.Text, nullable=False)
)
for name, attributes in legacy.ceilometer_resources.items():
text_attributes = json.dumps(attributes)
op.execute(resource_type.update().where(
resource_type.c.name == name
).values({resource_type.c.attributes: text_attributes}))


@ -1,127 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""create host tables
Revision ID: 9901e5ea4b6e
Revises: a54c57ada3f5
Create Date: 2015-12-15 17:20:25.092891
"""
# revision identifiers, used by Alembic.
revision = '9901e5ea4b6e'
down_revision = 'a54c57ada3f5'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
import sqlalchemy_utils
def upgrade():
for table in ["resource", "resource_history"]:
op.alter_column(table, "type",
type_=sa.Enum('generic', 'instance', 'swift_account',
'volume', 'ceph_account', 'network',
'identity', 'ipmi', 'stack', 'image',
'instance_network_interface',
'instance_disk',
'host', 'host_disk',
'host_network_interface',
name='resource_type_enum'),
nullable=False)
    # NOTE(sileht): postgresql has a builtin ENUM type, so
    # just altering the column won't work.
    # https://bitbucket.org/zzzeek/alembic/issues/270/altering-enum-type
    # Does this break offline migration because we use get_bind()?
    # NOTE(luogangyi): since we cannot use 'ALTER TYPE' in a transaction,
    # we split the 'ALTER TYPE' operation into several steps.
bind = op.get_bind()
if bind and bind.engine.name == "postgresql":
op.execute("ALTER TYPE resource_type_enum RENAME TO \
old_resource_type_enum")
op.execute("CREATE TYPE resource_type_enum AS ENUM \
('generic', 'instance', 'swift_account', \
'volume', 'ceph_account', 'network', \
'identity', 'ipmi', 'stack', 'image', \
'instance_network_interface', 'instance_disk', \
'host', 'host_disk', \
'host_network_interface')")
for table in ["resource", "resource_history"]:
op.execute("ALTER TABLE %s ALTER COLUMN type TYPE \
resource_type_enum USING \
type::text::resource_type_enum" % table)
op.execute("DROP TYPE old_resource_type_enum")
op.create_table(
'host',
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('host_name', sa.String(length=255), nullable=False),
sa.ForeignKeyConstraint(['id'], ['resource.id'],
name="fk_hypervisor_id_resource_id",
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_table(
'host_history',
sa.Column('host_name', sa.String(length=255), nullable=False),
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'],
['resource_history.revision'],
name=("fk_hypervisor_history_"
"resource_history_revision"),
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
for table in ['host_disk', 'host_net_int']:
op.create_table(
table,
sa.Column('id', sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False),
sa.Column('host_name', sa.String(length=255), nullable=False),
sa.Column('device_name', sa.String(length=255), nullable=True),
sa.ForeignKeyConstraint(['id'], ['resource.id'],
name="fk_%s_id_resource_id" % table,
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)
op.create_table(
'%s_history' % table,
sa.Column('host_name', sa.String(length=255), nullable=False),
sa.Column('device_name', sa.String(length=255), nullable=True),
sa.Column('revision', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['revision'],
['resource_history.revision'],
name=("fk_%s_history_"
"resource_history_revision") % table,
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('revision'),
mysql_charset='utf8',
mysql_engine='InnoDB'
)


@ -1,72 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""merges primarykey and indexes
Revision ID: a54c57ada3f5
Revises: 1c2c61ac1f4c
Create Date: 2016-02-04 09:09:23.180955
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = 'a54c57ada3f5'
down_revision = '1c2c61ac1f4c'
branch_labels = None
depends_on = None
resource_tables = [(t, "id") for t in [
"instance",
"instance_disk",
"instance_net_int",
"swift_account",
"volume",
"ceph_account",
"network",
"identity",
"ipmi",
"stack",
"image"
]]
history_tables = [("%s_history" % t, "revision")
for t, c in resource_tables]
other_tables = [("metric", "id"), ("archive_policy", "name"),
("archive_policy_rule", "name"),
("resource", "id"),
("resource_history", "id")]
def upgrade():
bind = op.get_bind()
    # NOTE(sileht): mysql can't delete an index on a foreign key column,
    # even when it is not the index used by the foreign key itself...
    # In our case we have two indexes, fk_resource_history_id_resource_id
    # and ix_resource_history_id; we want to delete only the second, but mysql
    # can't do that with a simple DROP INDEX ix_resource_history_id...
    # so we have to remove the constraint and put it back...
if bind.engine.name == "mysql":
op.drop_constraint("fk_resource_history_id_resource_id",
type_="foreignkey", table_name="resource_history")
for table, colname in resource_tables + history_tables + other_tables:
op.drop_index("ix_%s_%s" % (table, colname), table_name=table)
if bind.engine.name == "mysql":
op.create_foreign_key("fk_resource_history_id_resource_id",
"resource_history", "resource", ["id"], ["id"],
ondelete="CASCADE")


@ -1,53 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""merge_created_in_creator
Revision ID: aba5a217ca9b
Revises: 5c4f93e5bb4
Create Date: 2016-12-06 17:40:25.344578
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'aba5a217ca9b'
down_revision = '5c4f93e5bb4'
branch_labels = None
depends_on = None
def upgrade():
for table_name in ("resource", "resource_history", "metric"):
creator_col = sa.Column("creator", sa.String(255))
created_by_user_id_col = sa.Column("created_by_user_id",
sa.String(255))
created_by_project_id_col = sa.Column("created_by_project_id",
sa.String(255))
op.add_column(table_name, creator_col)
t = sa.sql.table(
table_name, creator_col,
created_by_user_id_col, created_by_project_id_col)
op.execute(
t.update().values(
creator=(
created_by_user_id_col + ":" + created_by_project_id_col
                )).where(created_by_user_id_col.isnot(None)
                         | created_by_project_id_col.isnot(None)))
op.drop_column(table_name, "created_by_user_id")
op.drop_column(table_name, "created_by_project_id")
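# Editor-added illustration (hypothetical, not part of this migration): the
# merged column holds "created_by_user_id:created_by_project_id", which the
# REST layer later splits back apart with str.partition(":"), e.g.:
if __name__ == "__main__":
    creator = "a0b1c2d3" + ":" + "d4e5f6a7"
    user_id, _, project_id = creator.partition(":")
    print(user_id, project_id)  # a0b1c2d3 d4e5f6a7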


@ -1,38 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add unit column for metric
Revision ID: c62df18bf4ee
Revises: 2e0b912062d1
Create Date: 2016-05-04 12:31:25.350190
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'c62df18bf4ee'
down_revision = '2e0b912062d1'
branch_labels = None
depends_on = None
def upgrade():
op.add_column('metric', sa.Column('unit',
sa.String(length=31),
nullable=True))


@ -1,38 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Add attributes to resource_type
Revision ID: d24877c22ab0
Revises: 0718ed97e5b3
Create Date: 2016-01-19 22:45:06.431190
"""
from alembic import op
import sqlalchemy as sa
import sqlalchemy_utils as sa_utils
# revision identifiers, used by Alembic.
revision = 'd24877c22ab0'
down_revision = '0718ed97e5b3'
branch_labels = None
depends_on = None
def upgrade():
op.add_column("resource_type",
sa.Column('attributes', sa_utils.JSONType(),))


@ -1,53 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""fix_host_foreign_key
Revision ID: ed9c6ddc5c35
Revises: ffc7bbeec0b0
Create Date: 2016-04-15 06:25:34.649934
"""
from alembic import op
from sqlalchemy import inspect
# revision identifiers, used by Alembic.
revision = 'ed9c6ddc5c35'
down_revision = 'ffc7bbeec0b0'
branch_labels = None
depends_on = None
def upgrade():
conn = op.get_bind()
insp = inspect(conn)
fk_names = [fk['name'] for fk in insp.get_foreign_keys('host')]
if ("fk_hypervisor_id_resource_id" not in fk_names and
"fk_host_id_resource_id" in fk_names):
        # NOTE(sileht): we are already good, the DB has been created from
        # scratch after "a54c57ada3f5"
return
op.drop_constraint("fk_hypervisor_id_resource_id", "host",
type_="foreignkey")
op.drop_constraint("fk_hypervisor_history_resource_history_revision",
"host_history", type_="foreignkey")
op.create_foreign_key("fk_host_id_resource_id", "host", "resource",
["id"], ["id"], ondelete="CASCADE")
op.create_foreign_key("fk_host_history_resource_history_revision",
"host_history", "resource_history",
["revision"], ["revision"], ondelete="CASCADE")


@ -1,89 +0,0 @@
#
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""uuid_to_binary
Revision ID: f7d44b47928
Revises: 40c6aae14c3f
Create Date: 2015-04-30 13:29:29.074794
"""
# revision identifiers, used by Alembic.
revision = 'f7d44b47928'
down_revision = '40c6aae14c3f'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy_utils.types.uuid
def upgrade():
op.alter_column("metric", "id",
type_=sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False)
for table in ('resource', 'resource_history', 'metric'):
op.alter_column(table, "created_by_user_id",
type_=sqlalchemy_utils.types.uuid.UUIDType(
binary=True))
op.alter_column(table, "created_by_project_id",
type_=sqlalchemy_utils.types.uuid.UUIDType(
binary=True))
for table in ('resource', 'resource_history'):
op.alter_column(table, "user_id",
type_=sqlalchemy_utils.types.uuid.UUIDType(
binary=True))
op.alter_column(table, "project_id",
type_=sqlalchemy_utils.types.uuid.UUIDType(
binary=True))
# Drop all foreign keys linking to resource.id
for table in ('ceph_account', 'identity', 'volume', 'swift_account',
'ipmi', 'image', 'network', 'stack', 'instance',
'resource_history'):
op.drop_constraint("fk_%s_id_resource_id" % table, table,
type_="foreignkey")
op.drop_constraint("fk_metric_resource_id_resource_id", "metric",
type_="foreignkey")
# Now change the type of resource.id
op.alter_column("resource", "id",
type_=sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False)
# Now change all the types of $table.id and re-add the FK
for table in ('ceph_account', 'identity', 'volume', 'swift_account',
'ipmi', 'image', 'network', 'stack', 'instance',
'resource_history'):
op.alter_column(
table, "id",
type_=sqlalchemy_utils.types.uuid.UUIDType(binary=True),
nullable=False)
op.create_foreign_key("fk_%s_id_resource_id" % table,
table, "resource",
("id",), ("id",),
ondelete="CASCADE")
op.alter_column("metric", "resource_id",
type_=sqlalchemy_utils.types.uuid.UUIDType(binary=True))
op.create_foreign_key("fk_metric_resource_id_resource_id",
"metric", "resource",
("resource_id",), ("id",),
ondelete="CASCADE")


@ -1,65 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""migrate_legacy_resources_to_db2
Revision ID: ffc7bbeec0b0
Revises: 8f376189b9eb
Create Date: 2016-04-14 15:57:13.072128
"""
import json
from alembic import op
import sqlalchemy as sa
from gnocchi.indexer import sqlalchemy_legacy_resources as legacy
# revision identifiers, used by Alembic.
revision = 'ffc7bbeec0b0'
down_revision = '8f376189b9eb'
branch_labels = None
depends_on = None
def upgrade():
bind = op.get_bind()
resource_type = sa.Table(
'resource_type', sa.MetaData(),
sa.Column('name', sa.String(255), nullable=False),
sa.Column('tablename', sa.String(18), nullable=False),
sa.Column('attributes', sa.Text, nullable=False)
)
# NOTE(gordc): fix for incorrect migration:
# 0718ed97e5b3_add_tablename_to_resource_type.py#L46
op.execute(resource_type.update().where(
resource_type.c.name == "instance_network_interface"
).values({'tablename': 'instance_net_int'}))
resource_type_names = [rt.name for rt in
list(bind.execute(resource_type.select()))]
for name, attributes in legacy.ceilometer_resources.items():
if name in resource_type_names:
continue
tablename = legacy.ceilometer_tablenames.get(name, name)
text_attributes = json.dumps(attributes)
op.execute(resource_type.insert().values({
resource_type.c.attributes: text_attributes,
resource_type.c.name: name,
resource_type.c.tablename: tablename,
}))

File diff suppressed because it is too large


@ -1,443 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2016 Red Hat, Inc.
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
import calendar
import datetime
import decimal
import iso8601
from oslo_db.sqlalchemy import models
import six
import sqlalchemy
from sqlalchemy.dialects import mysql
from sqlalchemy.ext import declarative
from sqlalchemy import types
import sqlalchemy_utils
from gnocchi import archive_policy
from gnocchi import indexer
from gnocchi import resource_type
from gnocchi import storage
from gnocchi import utils
Base = declarative.declarative_base()
COMMON_TABLES_ARGS = {'mysql_charset': "utf8",
'mysql_engine': "InnoDB"}
class PreciseTimestamp(types.TypeDecorator):
"""Represents a timestamp precise to the microsecond.
Deprecated in favor of TimestampUTC.
Still used in alembic migrations.
"""
impl = sqlalchemy.DateTime
@staticmethod
def _decimal_to_dt(dec):
"""Return a datetime from Decimal unixtime format."""
if dec is None:
return None
integer = int(dec)
micro = (dec - decimal.Decimal(integer)) * decimal.Decimal(1000000)
daittyme = datetime.datetime.utcfromtimestamp(integer)
return daittyme.replace(microsecond=int(round(micro)))
@staticmethod
def _dt_to_decimal(utc):
"""Datetime to Decimal.
        Some databases don't store microseconds in datetime columns,
        so we always store them as a Decimal unixtime.
"""
if utc is None:
return None
decimal.getcontext().prec = 30
return (decimal.Decimal(str(calendar.timegm(utc.utctimetuple()))) +
(decimal.Decimal(str(utc.microsecond)) /
decimal.Decimal("1000000.0")))
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(
types.DECIMAL(precision=20,
scale=6,
asdecimal=True))
return dialect.type_descriptor(self.impl)
def compare_against_backend(self, dialect, conn_type):
if dialect.name == 'mysql':
return issubclass(type(conn_type), types.DECIMAL)
return issubclass(type(conn_type), type(self.impl))
def process_bind_param(self, value, dialect):
if value is not None:
value = utils.normalize_time(value)
if dialect.name == 'mysql':
return self._dt_to_decimal(value)
return value
def process_result_value(self, value, dialect):
if dialect.name == 'mysql':
value = self._decimal_to_dt(value)
if value is not None:
return utils.normalize_time(value).replace(
tzinfo=iso8601.iso8601.UTC)
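# Editor-added worked example (an assumption, not part of the original
# module): with PreciseTimestamp on MySQL, 2015-04-30 13:29:29.074794 UTC is
# stored by _dt_to_decimal() as Decimal('1430400569.074794') (unix seconds
# plus fractional microseconds) and restored by _decimal_to_dt(), so
# microsecond precision survives the DECIMAL(20, 6) column.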
class TimestampUTC(types.TypeDecorator):
"""Represents a timestamp precise to the microsecond."""
impl = sqlalchemy.DateTime
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(mysql.DATETIME(fsp=6))
return self.impl
def process_bind_param(self, value, dialect):
if value is not None:
return utils.normalize_time(value)
def process_result_value(self, value, dialect):
if value is not None:
return value.replace(tzinfo=iso8601.iso8601.UTC)
class GnocchiBase(models.ModelBase):
__table_args__ = (
COMMON_TABLES_ARGS,
)
class ArchivePolicyDefinitionType(sqlalchemy_utils.JSONType):
def process_result_value(self, value, dialect):
values = super(ArchivePolicyDefinitionType,
self).process_result_value(value, dialect)
return [archive_policy.ArchivePolicyItem(**v) for v in values]
class SetType(sqlalchemy_utils.JSONType):
def process_result_value(self, value, dialect):
return set(super(SetType,
self).process_result_value(value, dialect))
class ArchivePolicy(Base, GnocchiBase, archive_policy.ArchivePolicy):
__tablename__ = 'archive_policy'
name = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True)
back_window = sqlalchemy.Column(sqlalchemy.Integer, nullable=False)
definition = sqlalchemy.Column(ArchivePolicyDefinitionType, nullable=False)
    # TODO(jd) Use an array of strings instead, PostgreSQL can do that
aggregation_methods = sqlalchemy.Column(SetType,
nullable=False)
class Metric(Base, GnocchiBase, storage.Metric):
__tablename__ = 'metric'
__table_args__ = (
sqlalchemy.Index('ix_metric_status', 'status'),
sqlalchemy.UniqueConstraint("resource_id", "name",
name="uniq_metric0resource_id0name"),
COMMON_TABLES_ARGS,
)
id = sqlalchemy.Column(sqlalchemy_utils.UUIDType(),
primary_key=True)
archive_policy_name = sqlalchemy.Column(
sqlalchemy.String(255),
sqlalchemy.ForeignKey(
'archive_policy.name',
ondelete="RESTRICT",
name="fk_metric_ap_name_ap_name"),
nullable=False)
archive_policy = sqlalchemy.orm.relationship(ArchivePolicy, lazy="joined")
creator = sqlalchemy.Column(sqlalchemy.String(255))
resource_id = sqlalchemy.Column(
sqlalchemy_utils.UUIDType(),
sqlalchemy.ForeignKey('resource.id',
ondelete="SET NULL",
name="fk_metric_resource_id_resource_id"))
name = sqlalchemy.Column(sqlalchemy.String(255))
unit = sqlalchemy.Column(sqlalchemy.String(31))
status = sqlalchemy.Column(sqlalchemy.Enum('active', 'delete',
name="metric_status_enum"),
nullable=False,
server_default='active')
def jsonify(self):
d = {
"id": self.id,
"creator": self.creator,
"name": self.name,
"unit": self.unit,
}
unloaded = sqlalchemy.inspect(self).unloaded
if 'resource' in unloaded:
d['resource_id'] = self.resource_id
else:
d['resource'] = self.resource
if 'archive_policy' in unloaded:
d['archive_policy_name'] = self.archive_policy_name
else:
d['archive_policy'] = self.archive_policy
if self.creator is None:
d['created_by_user_id'] = d['created_by_project_id'] = None
else:
d['created_by_user_id'], _, d['created_by_project_id'] = (
self.creator.partition(":")
)
return d
def __eq__(self, other):
# NOTE(jd) If `other` is a SQL Metric, we only compare
# archive_policy_name, and we don't compare archive_policy that might
        # not be loaded. Otherwise we fall back to the original comparison for
# storage.Metric.
return ((isinstance(other, Metric)
and self.id == other.id
and self.archive_policy_name == other.archive_policy_name
and self.creator == other.creator
and self.name == other.name
and self.unit == other.unit
and self.resource_id == other.resource_id)
or (storage.Metric.__eq__(self, other)))
__hash__ = storage.Metric.__hash__
RESOURCE_TYPE_SCHEMA_MANAGER = resource_type.ResourceTypeSchemaManager(
"gnocchi.indexer.sqlalchemy.resource_type_attribute")
class ResourceTypeAttributes(sqlalchemy_utils.JSONType):
def process_bind_param(self, attributes, dialect):
return super(ResourceTypeAttributes, self).process_bind_param(
attributes.jsonify(), dialect)
def process_result_value(self, value, dialect):
attributes = super(ResourceTypeAttributes, self).process_result_value(
value, dialect)
return RESOURCE_TYPE_SCHEMA_MANAGER.attributes_from_dict(attributes)
class ResourceType(Base, GnocchiBase, resource_type.ResourceType):
__tablename__ = 'resource_type'
__table_args__ = (
sqlalchemy.UniqueConstraint("tablename",
name="uniq_resource_type0tablename"),
COMMON_TABLES_ARGS,
)
name = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True,
nullable=False)
tablename = sqlalchemy.Column(sqlalchemy.String(35), nullable=False)
attributes = sqlalchemy.Column(ResourceTypeAttributes)
state = sqlalchemy.Column(sqlalchemy.Enum("active", "creating",
"creation_error", "deleting",
"deletion_error", "updating",
"updating_error",
name="resource_type_state_enum"),
nullable=False,
server_default="creating")
updated_at = sqlalchemy.Column(TimestampUTC, nullable=False,
# NOTE(jd): We would like to use
# sqlalchemy.func.now, but we can't
# because the type of PreciseTimestamp in
# MySQL is not a Timestamp, so it would
# not store a timestamp but a date as an
# integer.
default=lambda: utils.utcnow())
def to_baseclass(self):
cols = {}
for attr in self.attributes:
cols[attr.name] = sqlalchemy.Column(attr.satype,
nullable=not attr.required)
return type(str("%s_base" % self.tablename), (object, ), cols)
class ResourceJsonifier(indexer.Resource):
def jsonify(self):
d = dict(self)
del d['revision']
if 'metrics' not in sqlalchemy.inspect(self).unloaded:
d['metrics'] = dict((m.name, six.text_type(m.id))
for m in self.metrics)
if self.creator is None:
d['created_by_user_id'] = d['created_by_project_id'] = None
else:
d['created_by_user_id'], _, d['created_by_project_id'] = (
self.creator.partition(":")
)
return d
class ResourceMixin(ResourceJsonifier):
@declarative.declared_attr
def __table_args__(cls):
return (sqlalchemy.CheckConstraint('started_at <= ended_at',
name="ck_started_before_ended"),
COMMON_TABLES_ARGS)
@declarative.declared_attr
def type(cls):
return sqlalchemy.Column(
sqlalchemy.String(255),
sqlalchemy.ForeignKey('resource_type.name',
ondelete="RESTRICT",
name="fk_%s_resource_type_name" %
cls.__tablename__),
nullable=False)
creator = sqlalchemy.Column(sqlalchemy.String(255))
started_at = sqlalchemy.Column(TimestampUTC, nullable=False,
default=lambda: utils.utcnow())
revision_start = sqlalchemy.Column(TimestampUTC, nullable=False,
default=lambda: utils.utcnow())
ended_at = sqlalchemy.Column(TimestampUTC)
user_id = sqlalchemy.Column(sqlalchemy.String(255))
project_id = sqlalchemy.Column(sqlalchemy.String(255))
original_resource_id = sqlalchemy.Column(sqlalchemy.String(255),
nullable=False)
class Resource(ResourceMixin, Base, GnocchiBase):
__tablename__ = 'resource'
_extra_keys = ['revision', 'revision_end']
revision = -1
id = sqlalchemy.Column(sqlalchemy_utils.UUIDType(),
primary_key=True)
revision_end = None
metrics = sqlalchemy.orm.relationship(
Metric, backref="resource",
primaryjoin="and_(Resource.id == Metric.resource_id, "
"Metric.status == 'active')")
def get_metric(self, metric_name):
m = super(Resource, self).get_metric(metric_name)
if m:
if sqlalchemy.orm.session.object_session(self):
# NOTE(jd) The resource is already loaded so that should not
# trigger a SELECT
m.resource
return m
class ResourceHistory(ResourceMixin, Base, GnocchiBase):
__tablename__ = 'resource_history'
revision = sqlalchemy.Column(sqlalchemy.Integer, autoincrement=True,
primary_key=True)
id = sqlalchemy.Column(sqlalchemy_utils.UUIDType(),
sqlalchemy.ForeignKey(
'resource.id',
ondelete="CASCADE",
name="fk_rh_id_resource_id"),
nullable=False)
revision_end = sqlalchemy.Column(TimestampUTC, nullable=False,
default=lambda: utils.utcnow())
metrics = sqlalchemy.orm.relationship(
Metric, primaryjoin="Metric.resource_id == ResourceHistory.id",
foreign_keys='Metric.resource_id')
class ResourceExt(object):
"""Default extension class for plugin
Used for plugin that doesn't need additional columns
"""
class ResourceExtMixin(object):
@declarative.declared_attr
def __table_args__(cls):
return (COMMON_TABLES_ARGS, )
@declarative.declared_attr
def id(cls):
tablename_compact = cls.__tablename__
if tablename_compact.endswith("_history"):
tablename_compact = tablename_compact[:-6]
return sqlalchemy.Column(
sqlalchemy_utils.UUIDType(),
sqlalchemy.ForeignKey(
'resource.id',
ondelete="CASCADE",
name="fk_%s_id_resource_id" % tablename_compact,
                # NOTE(sileht): We use use_alter to ensure that postgresql
                # does not take an AccessExclusiveLock on the destination table
use_alter=True),
primary_key=True
)
class ResourceHistoryExtMixin(object):
@declarative.declared_attr
def __table_args__(cls):
return (COMMON_TABLES_ARGS, )
@declarative.declared_attr
def revision(cls):
tablename_compact = cls.__tablename__
if tablename_compact.endswith("_history"):
tablename_compact = tablename_compact[:-6]
return sqlalchemy.Column(
sqlalchemy.Integer,
sqlalchemy.ForeignKey(
'resource_history.revision',
ondelete="CASCADE",
name="fk_%s_revision_rh_revision"
% tablename_compact,
                # NOTE(sileht): We use use_alter to ensure that postgresql
                # does not take an AccessExclusiveLock on the destination table
use_alter=True),
primary_key=True
)
class HistoryModelIterator(models.ModelIterator):
def __next__(self):
        # NOTE(sileht): Our custom resource attribute columns don't
        # have the same name in the database as in the sqlalchemy model,
        # so strip the additional "f_" prefix to get the model attribute name
n = six.advance_iterator(self.i)
model_attr = n[2:] if n[:2] == "f_" else n
return model_attr, getattr(self.model, n)
class ArchivePolicyRule(Base, GnocchiBase):
__tablename__ = 'archive_policy_rule'
name = sqlalchemy.Column(sqlalchemy.String(255), primary_key=True)
archive_policy_name = sqlalchemy.Column(
sqlalchemy.String(255),
sqlalchemy.ForeignKey(
'archive_policy.name',
ondelete="RESTRICT",
name="fk_apr_ap_name_ap_name"),
nullable=False)
metric_pattern = sqlalchemy.Column(sqlalchemy.String(255), nullable=False)


@ -1,56 +0,0 @@
# -*- encoding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import
import sqlalchemy
import sqlalchemy_utils
from gnocchi import resource_type
class SchemaMixin(object):
def for_filling(self, dialect):
        # NOTE(sileht): This must be used only when patching a resource type
        # to fill all rows with a default value and then switch the
        # server_default back to None
if self.fill is None:
return None
        # NOTE(sileht): server_default must be converted to a SQL element
return sqlalchemy.literal(self.fill)
class StringSchema(resource_type.StringSchema, SchemaMixin):
@property
def satype(self):
return sqlalchemy.String(self.max_length)
class UUIDSchema(resource_type.UUIDSchema, SchemaMixin):
satype = sqlalchemy_utils.UUIDType()
def for_filling(self, dialect):
if self.fill is None:
return False # Don't set any server_default
return sqlalchemy.literal(
self.satype.process_bind_param(self.fill, dialect))
class NumberSchema(resource_type.NumberSchema, SchemaMixin):
satype = sqlalchemy.Float(53)
class BoolSchema(resource_type.BoolSchema, SchemaMixin):
satype = sqlalchemy.Boolean


@ -1,78 +0,0 @@
# -*- encoding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# NOTE(sileht): this code is also in alembic migration
ceilometer_tablenames = {
"instance_network_interface": "instance_net_int",
"host_network_interface": "host_net_int",
}
ceilometer_resources = {
"generic": {},
"image": {
"name": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"container_format": {"type": "string", "min_length": 0,
"max_length": 255, "required": True},
"disk_format": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
},
"instance": {
"flavor_id": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"image_ref": {"type": "string", "min_length": 0, "max_length": 255,
"required": False},
"host": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"display_name": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"server_group": {"type": "string", "min_length": 0, "max_length": 255,
"required": False},
},
"instance_disk": {
"name": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"instance_id": {"type": "uuid", "required": True},
},
"instance_network_interface": {
"name": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"instance_id": {"type": "uuid", "required": True},
},
"volume": {
"display_name": {"type": "string", "min_length": 0, "max_length": 255,
"required": False},
},
"swift_account": {},
"ceph_account": {},
"network": {},
"identity": {},
"ipmi": {},
"stack": {},
"host": {
"host_name": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
},
"host_network_interface": {
"host_name": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"device_name": {"type": "string", "min_length": 0, "max_length": 255,
"required": False},
},
"host_disk": {
"host_name": {"type": "string", "min_length": 0, "max_length": 255,
"required": True},
"device_name": {"type": "string", "min_length": 0, "max_length": 255,
"required": False},
},
}


@ -1,58 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2015-2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import uuid
import numpy
import six
import ujson
def to_primitive(obj):
if isinstance(obj, ((six.text_type,)
+ six.integer_types
+ (type(None), bool, float))):
return obj
if isinstance(obj, uuid.UUID):
return six.text_type(obj)
if isinstance(obj, datetime.datetime):
return obj.isoformat()
if isinstance(obj, numpy.datetime64):
# Do not include nanoseconds if null
return str(obj).rpartition(".000000000")[0] + "+00:00"
# This mimics what Pecan implements in its default JSON encoder
if hasattr(obj, "jsonify"):
return to_primitive(obj.jsonify())
if isinstance(obj, dict):
return {to_primitive(k): to_primitive(v)
for k, v in obj.items()}
if hasattr(obj, 'iteritems'):
return to_primitive(dict(obj.iteritems()))
# Python 3 does not have iteritems
if hasattr(obj, 'items'):
return to_primitive(dict(obj.items()))
if hasattr(obj, '__iter__'):
return list(map(to_primitive, obj))
return obj
def dumps(obj):
return ujson.dumps(to_primitive(obj))
# For convenience
loads = ujson.loads
load = ujson.load
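# Editor-added usage sketch (hypothetical, not part of the original module):
if __name__ == "__main__":
    # UUIDs become their text form and datetimes their ISO 8601 form.
    print(dumps({"id": uuid.uuid4(),
                 "when": datetime.datetime(2017, 6, 5, 17, 4, 30)}))
    # -> e.g. {"id":"<36-char uuid>","when":"2017-06-05T17:04:30"}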


@ -1,167 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import itertools
import operator
import pkg_resources
import uuid
from oslo_config import cfg
from oslo_middleware import cors
import gnocchi.archive_policy
import gnocchi.indexer
import gnocchi.storage
import gnocchi.storage.ceph
import gnocchi.storage.file
import gnocchi.storage.redis
import gnocchi.storage.s3
import gnocchi.storage.swift
# NOTE(sileht): The oslo.config interpolation is buggy when the value
# is None; this replaces it with the expected empty string.
# This will perhaps be fixed by https://review.openstack.org/#/c/417496/
# but it seems some projects are relying on the bug...
class CustomStrSubWrapper(cfg.ConfigOpts.StrSubWrapper):
def __getitem__(self, key):
value = super(CustomStrSubWrapper, self).__getitem__(key)
if value is None:
return ''
return value
cfg.ConfigOpts.StrSubWrapper = CustomStrSubWrapper
_STORAGE_OPTS = list(itertools.chain(gnocchi.storage.OPTS,
gnocchi.storage.ceph.OPTS,
gnocchi.storage.file.OPTS,
gnocchi.storage.swift.OPTS,
gnocchi.storage.redis.OPTS,
gnocchi.storage.s3.OPTS))
_INCOMING_OPTS = copy.deepcopy(_STORAGE_OPTS)
for opt in _INCOMING_OPTS:
opt.default = '${storage.%s}' % opt.name
def list_opts():
return [
("indexer", gnocchi.indexer.OPTS),
("metricd", (
cfg.IntOpt('workers', min=1,
required=True,
help='Number of workers for Gnocchi metric daemons. '
                        'By default the number of available CPUs is used.'),
cfg.IntOpt('metric_processing_delay',
default=60,
required=True,
deprecated_group='storage',
help="How many seconds to wait between "
"scheduling new metrics to process"),
cfg.IntOpt('metric_reporting_delay',
deprecated_group='storage',
default=120,
min=-1,
required=True,
help="How many seconds to wait between "
"metric ingestion reporting. Set value to -1 to "
"disable reporting"),
cfg.IntOpt('metric_cleanup_delay',
deprecated_group='storage',
default=300,
required=True,
help="How many seconds to wait between "
"cleaning of expired data"),
cfg.IntOpt('worker_sync_rate',
default=30,
help="Frequency to detect when metricd workers join or "
"leave system (in seconds). A shorter rate, may "
"improve rebalancing but create more coordination "
"load"),
cfg.IntOpt('processing_replicas',
default=3,
min=1,
help="Number of workers that share a task. A higher "
"value may improve worker utilization but may also "
"increase load on coordination backend. Value is "
"capped by number of workers globally."),
)),
("api", (
cfg.StrOpt('paste_config',
default="api-paste.ini",
help='Path to API Paste configuration.'),
cfg.StrOpt('auth_mode',
default="basic",
choices=list(map(operator.attrgetter("name"),
pkg_resources.iter_entry_points(
"gnocchi.rest.auth_helper"))),
help='Authentication mode to use.'),
cfg.IntOpt('max_limit',
default=1000,
required=True,
help=('The maximum number of items returned in a '
'single response from a collection resource')),
cfg.IntOpt('refresh_timeout',
default=10, min=0,
help='Number of seconds before timeout when attempting '
'to force refresh of metric.'),
)),
("storage", (_STORAGE_OPTS + gnocchi.storage._carbonara.OPTS)),
("incoming", _INCOMING_OPTS),
("statsd", (
cfg.HostAddressOpt('host',
default='0.0.0.0',
help='The listen IP for statsd'),
cfg.PortOpt('port',
default=8125,
help='The port for statsd'),
cfg.Opt(
'resource_id',
type=uuid.UUID,
help='Resource UUID to use to identify statsd in Gnocchi'),
cfg.StrOpt(
'user_id',
deprecated_for_removal=True,
help='User ID to use to identify statsd in Gnocchi'),
cfg.StrOpt(
'project_id',
deprecated_for_removal=True,
help='Project ID to use to identify statsd in Gnocchi'),
cfg.StrOpt(
'creator',
default="${statsd.user_id}:${statsd.project_id}",
help='Creator value to use to identify statsd in Gnocchi'),
cfg.StrOpt(
'archive_policy_name',
help='Archive policy name to use when creating metrics'),
cfg.FloatOpt(
'flush_delay',
default=10,
help='Delay between flushes'),
)),
("archive_policy", gnocchi.archive_policy.OPTS),
]
def set_defaults():
cfg.set_defaults(cors.CORS_OPTS,
allow_headers=[
'X-Auth-Token',
'X-Subject-Token',
'X-User-Id',
'X-Domain-Id',
'X-Project-Id',
'X-Roles'])
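# Editor-added note (not part of the original module): every option in
# _INCOMING_OPTS defaults to '${storage.<option>}', so an [incoming] section
# that is left unset transparently reuses the corresponding [storage] values;
# CustomStrSubWrapper above keeps that interpolation working when the
# referenced value is None by substituting an empty string.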


@ -1,266 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import numbers
import re
import six
import stevedore
import voluptuous
from gnocchi import utils
INVALID_NAMES = [
"id", "type", "metrics",
"revision", "revision_start", "revision_end",
"started_at", "ended_at",
"user_id", "project_id",
"created_by_user_id", "created_by_project_id", "get_metric",
"creator",
]
VALID_CHARS = re.compile("[a-zA-Z0-9][a-zA-Z0-9_]*")
class InvalidResourceAttribute(ValueError):
pass
class InvalidResourceAttributeName(InvalidResourceAttribute):
"""Error raised when the resource attribute name is invalid."""
def __init__(self, name):
super(InvalidResourceAttributeName, self).__init__(
"Resource attribute name %s is invalid" % str(name))
self.name = name
class InvalidResourceAttributeValue(InvalidResourceAttribute):
"""Error raised when the resource attribute min is greater than max"""
def __init__(self, min, max):
super(InvalidResourceAttributeValue, self).__init__(
"Resource attribute value min (or min_length) %s must be less "
"than or equal to max (or max_length) %s!" % (str(min), str(max)))
self.min = min
self.max = max
class InvalidResourceAttributeOption(InvalidResourceAttribute):
"""Error raised when the resource attribute name is invalid."""
def __init__(self, name, option, reason):
super(InvalidResourceAttributeOption, self).__init__(
"Option '%s' of resource attribute %s is invalid: %s" %
(option, str(name), str(reason)))
self.name = name
self.option = option
self.reason = reason
# NOTE(sileht): This is to store the behavior of some operations:
# * fill, to set a default value on all existing resources of the type
#
# In the future, for example, we could allow changing the length of
# a string attribute; if the new one is shorter, we could add an option
# to define the behavior, like:
# * resize = trunc or reject
OperationOptions = {
voluptuous.Optional('fill'): object
}
class CommonAttributeSchema(object):
meta_schema_ext = {}
schema_ext = None
def __init__(self, type, name, required, options=None):
if (len(name) > 63 or name in INVALID_NAMES
or not VALID_CHARS.match(name)):
raise InvalidResourceAttributeName(name)
self.name = name
self.required = required
self.fill = None
# options is set only when we update a resource type
if options is not None:
fill = options.get("fill")
if fill is None and required:
raise InvalidResourceAttributeOption(
name, "fill", "must not be empty if required=True")
elif fill is not None:
# Ensure fill have the correct attribute type
try:
self.fill = voluptuous.Schema(self.schema_ext)(fill)
except voluptuous.Error as e:
raise InvalidResourceAttributeOption(name, "fill", e)
@classmethod
def meta_schema(cls, for_update=False):
d = {
voluptuous.Required('type'): cls.typename,
voluptuous.Required('required', default=True): bool
}
if for_update:
d[voluptuous.Required('options', default={})] = OperationOptions
if callable(cls.meta_schema_ext):
d.update(cls.meta_schema_ext())
else:
d.update(cls.meta_schema_ext)
return d
def schema(self):
if self.required:
return {self.name: self.schema_ext}
else:
return {voluptuous.Optional(self.name): self.schema_ext}
def jsonify(self):
return {"type": self.typename,
"required": self.required}
class StringSchema(CommonAttributeSchema):
typename = "string"
def __init__(self, min_length, max_length, *args, **kwargs):
if min_length > max_length:
raise InvalidResourceAttributeValue(min_length, max_length)
self.min_length = min_length
self.max_length = max_length
super(StringSchema, self).__init__(*args, **kwargs)
meta_schema_ext = {
voluptuous.Required('min_length', default=0):
voluptuous.All(int, voluptuous.Range(min=0, max=255)),
voluptuous.Required('max_length', default=255):
voluptuous.All(int, voluptuous.Range(min=1, max=255))
}
@property
def schema_ext(self):
return voluptuous.All(six.text_type,
voluptuous.Length(
min=self.min_length,
max=self.max_length))
def jsonify(self):
d = super(StringSchema, self).jsonify()
d.update({"max_length": self.max_length,
"min_length": self.min_length})
return d
class UUIDSchema(CommonAttributeSchema):
typename = "uuid"
schema_ext = staticmethod(utils.UUID)
class NumberSchema(CommonAttributeSchema):
typename = "number"
def __init__(self, min, max, *args, **kwargs):
if max is not None and min is not None and min > max:
raise InvalidResourceAttributeValue(min, max)
self.min = min
self.max = max
super(NumberSchema, self).__init__(*args, **kwargs)
meta_schema_ext = {
voluptuous.Required('min', default=None): voluptuous.Any(
None, numbers.Real),
voluptuous.Required('max', default=None): voluptuous.Any(
None, numbers.Real)
}
@property
def schema_ext(self):
return voluptuous.All(numbers.Real,
voluptuous.Range(min=self.min,
max=self.max))
def jsonify(self):
d = super(NumberSchema, self).jsonify()
d.update({"min": self.min, "max": self.max})
return d
class BoolSchema(CommonAttributeSchema):
typename = "bool"
schema_ext = bool
class ResourceTypeAttributes(list):
def jsonify(self):
d = {}
for attr in self:
d[attr.name] = attr.jsonify()
return d
class ResourceTypeSchemaManager(stevedore.ExtensionManager):
def __init__(self, *args, **kwargs):
super(ResourceTypeSchemaManager, self).__init__(*args, **kwargs)
type_schemas = tuple([ext.plugin.meta_schema()
for ext in self.extensions])
self._schema = voluptuous.Schema({
"name": six.text_type,
voluptuous.Required("attributes", default={}): {
six.text_type: voluptuous.Any(*tuple(type_schemas))
}
})
type_schemas = tuple([ext.plugin.meta_schema(for_update=True)
for ext in self.extensions])
self._schema_for_update = voluptuous.Schema({
"name": six.text_type,
voluptuous.Required("attributes", default={}): {
six.text_type: voluptuous.Any(*tuple(type_schemas))
}
})
def __call__(self, definition):
return self._schema(definition)
def for_update(self, definition):
return self._schema_for_update(definition)
def attributes_from_dict(self, attributes):
return ResourceTypeAttributes(
self[attr["type"]].plugin(name=name, **attr)
for name, attr in attributes.items())
def resource_type_from_dict(self, name, attributes, state):
return ResourceType(name, self.attributes_from_dict(attributes), state)
class ResourceType(object):
def __init__(self, name, attributes, state):
self.name = name
self.attributes = attributes
self.state = state
@property
def schema(self):
schema = {}
for attr in self.attributes:
schema.update(attr.schema())
return schema
def __eq__(self, other):
return self.name == other.name
def jsonify(self):
return {"name": self.name,
"attributes": self.attributes.jsonify(),
"state": self.state}

File diff suppressed because it is too large


@ -1,46 +0,0 @@
[composite:gnocchi+noauth]
use = egg:Paste#urlmap
/ = gnocchiversions_pipeline
/v1 = gnocchiv1+noauth
/healthcheck = healthcheck
[composite:gnocchi+basic]
use = egg:Paste#urlmap
/ = gnocchiversions_pipeline
/v1 = gnocchiv1+noauth
/healthcheck = healthcheck
[composite:gnocchi+keystone]
use = egg:Paste#urlmap
/ = gnocchiversions_pipeline
/v1 = gnocchiv1+keystone
/healthcheck = healthcheck
[pipeline:gnocchiv1+noauth]
pipeline = http_proxy_to_wsgi gnocchiv1
[pipeline:gnocchiv1+keystone]
pipeline = http_proxy_to_wsgi keystone_authtoken gnocchiv1
[pipeline:gnocchiversions_pipeline]
pipeline = http_proxy_to_wsgi gnocchiversions
[app:gnocchiversions]
paste.app_factory = gnocchi.rest.app:app_factory
root = gnocchi.rest.VersionsController
[app:gnocchiv1]
paste.app_factory = gnocchi.rest.app:app_factory
root = gnocchi.rest.V1Controller
[filter:keystone_authtoken]
use = egg:keystonemiddleware#auth_token
oslo_config_project = gnocchi
[filter:http_proxy_to_wsgi]
use = egg:oslo.middleware#http_proxy_to_wsgi
oslo_config_project = gnocchi
[app:healthcheck]
use = egg:oslo.middleware#healthcheck
oslo_config_project = gnocchi


@ -1,143 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2014-2016 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import pkg_resources
import uuid
import warnings
from oslo_log import log
from oslo_middleware import cors
from oslo_policy import policy
from paste import deploy
import pecan
from pecan import jsonify
from stevedore import driver
import webob.exc
from gnocchi import exceptions
from gnocchi import indexer as gnocchi_indexer
from gnocchi import json
from gnocchi import service
from gnocchi import storage as gnocchi_storage
LOG = log.getLogger(__name__)
# Register our encoder by default for everything
jsonify.jsonify.register(object)(json.to_primitive)
class GnocchiHook(pecan.hooks.PecanHook):
def __init__(self, storage, indexer, conf):
self.storage = storage
self.indexer = indexer
self.conf = conf
self.policy_enforcer = policy.Enforcer(conf)
self.auth_helper = driver.DriverManager("gnocchi.rest.auth_helper",
conf.api.auth_mode,
invoke_on_load=True).driver
def on_route(self, state):
state.request.storage = self.storage
state.request.indexer = self.indexer
state.request.conf = self.conf
state.request.policy_enforcer = self.policy_enforcer
state.request.auth_helper = self.auth_helper
class NotImplementedMiddleware(object):
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
try:
return self.app(environ, start_response)
except exceptions.NotImplementedError:
raise webob.exc.HTTPNotImplemented(
"Sorry, this Gnocchi server does "
"not implement this feature 😞")
# NOTE(sileht): pastedeploy uses ConfigParser to handle
# global_conf; since Python 3, ConfigParser doesn't allow storing
# objects as config values, only strings are permitted. So, to be able
# to pass objects created before paste loads the app, we store them in
# a global variable. Each loaded app then stores its configuration
# under a unique key to be concurrency safe.

global APPCONFIGS
APPCONFIGS = {}
def load_app(conf, indexer=None, storage=None,
not_implemented_middleware=True):
global APPCONFIGS
# NOTE(sileht): We load the storage and indexer drivers here
# so everything is ready before the WSGI app is built.
if not storage:
storage = gnocchi_storage.get_driver(conf)
if not indexer:
indexer = gnocchi_indexer.get_driver(conf)
indexer.connect()
# Build the WSGI app
cfg_path = conf.api.paste_config
if not os.path.isabs(cfg_path):
cfg_path = conf.find_file(cfg_path)
if cfg_path is None or not os.path.exists(cfg_path):
LOG.debug("No api-paste configuration file found! Using default.")
cfg_path = pkg_resources.resource_filename(__name__, "api-paste.ini")
config = dict(conf=conf, indexer=indexer, storage=storage,
not_implemented_middleware=not_implemented_middleware)
configkey = str(uuid.uuid4())
APPCONFIGS[configkey] = config
LOG.info("WSGI config used: %s", cfg_path)
if conf.api.auth_mode == "noauth":
warnings.warn("The `noauth' authentication mode is deprecated",
category=DeprecationWarning)
appname = "gnocchi+" + conf.api.auth_mode
app = deploy.loadapp("config:" + cfg_path, name=appname,
global_conf={'configkey': configkey})
return cors.CORS(app, conf=conf)
def _setup_app(root, conf, indexer, storage, not_implemented_middleware):
app = pecan.make_app(
root,
hooks=(GnocchiHook(storage, indexer, conf),),
guess_content_type_from_ext=False,
)
if not_implemented_middleware:
app = webob.exc.HTTPExceptionMiddleware(NotImplementedMiddleware(app))
return app
def app_factory(global_config, **local_conf):
global APPCONFIGS
appconfig = APPCONFIGS.get(global_config.get('configkey'))
return _setup_app(root=local_conf.get('root'), **appconfig)
def build_wsgi_app():
return load_app(service.prepare_service())
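A minimal sketch of serving the application built by load_app() above with the stdlib WSGI server; the port and the use of wsgiref are assumptions for illustration, a real deployment would use the gnocchi-api wrapper or uwsgi instead.
# Hedged sketch: serve the WSGI app built above; port and server are assumptions.
from wsgiref.simple_server import make_server

from gnocchi import service
from gnocchi.rest import app

application = app.load_app(service.prepare_service())
make_server("127.0.0.1", 8041, application).serve_forever()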

View File

@ -1,29 +0,0 @@
#
# Copyright 2014 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Use this file for deploying the API under mod_wsgi.
See http://pecan.readthedocs.org/en/latest/deployment.html for details.
"""
import debtcollector
from gnocchi.rest import app
application = app.build_wsgi_app()
debtcollector.deprecate(prefix="The wsgi script gnocchi/rest/app.wsgi is deprecated",
postfix=", please use gnocchi-api binary as wsgi script instead",
version="4.0", removal_version="4.1",
category=RuntimeWarning)

View File

@ -1,125 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2016 Red Hat, Inc.
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import webob
import werkzeug.http
from gnocchi import rest
class KeystoneAuthHelper(object):
@staticmethod
def get_current_user(headers):
# FIXME(jd) should include the domain, but should not break existing deployments :(
user_id = headers.get("X-User-Id", "")
project_id = headers.get("X-Project-Id", "")
return user_id + ":" + project_id
@staticmethod
def get_auth_info(headers):
user_id = headers.get("X-User-Id")
project_id = headers.get("X-Project-Id")
return {
"user": (user_id or "") + ":" + (project_id or ""),
"user_id": user_id,
"project_id": project_id,
'domain_id': headers.get("X-Domain-Id"),
'roles': headers.get("X-Roles", "").split(","),
}
@staticmethod
def get_resource_policy_filter(headers, rule, resource_type):
try:
# Check if the policy allows the user to list any resource
rest.enforce(rule, {
"resource_type": resource_type,
})
except webob.exc.HTTPForbidden:
policy_filter = []
project_id = headers.get("X-Project-Id")
try:
# Check if the policy allows the user to list resources linked
# to their project
rest.enforce(rule, {
"resource_type": resource_type,
"project_id": project_id,
})
except webob.exc.HTTPForbidden:
pass
else:
policy_filter.append({"=": {"project_id": project_id}})
try:
# Check if the policy allows the user to list resources linked
# to their created_by_project
rest.enforce(rule, {
"resource_type": resource_type,
"created_by_project_id": project_id,
})
except webob.exc.HTTPForbidden:
pass
else:
if project_id:
policy_filter.append(
{"like": {"creator": "%:" + project_id}})
else:
policy_filter.append({"=": {"creator": None}})
if not policy_filter:
# We need to have at least one policy filter in place
rest.abort(403, "Insufficient privileges")
return {"or": policy_filter}
class NoAuthHelper(KeystoneAuthHelper):
@staticmethod
def get_current_user(headers):
# FIXME(jd) Should be a single header
user_id = headers.get("X-User-Id")
project_id = headers.get("X-Project-Id")
if user_id:
if project_id:
return user_id + ":" + project_id
return user_id
if project_id:
return project_id
rest.abort(401, "Unable to determine current user")
class BasicAuthHelper(object):
@staticmethod
def get_current_user(headers):
auth = werkzeug.http.parse_authorization_header(
headers.get("Authorization"))
if auth is None:
rest.abort(401)
return auth.username
def get_auth_info(self, headers):
user = self.get_current_user(headers)
roles = []
if user == "admin":
roles.append("admin")
return {
"user": user,
"roles": roles
}
@staticmethod
def get_resource_policy_filter(headers, rule, resource_type):
return None
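For a non-admin user whose project is, say, "proj1" and for whom both per-project checks pass, the filter returned by KeystoneAuthHelper.get_resource_policy_filter() above has roughly the following shape (the project id is illustrative):
# Illustrative shape of the resource policy filter built above.
{"or": [{"=": {"project_id": "proj1"}},
        {"like": {"creator": "%:proj1"}}]}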

View File

@ -1,42 +0,0 @@
{
"admin_or_creator": "role:admin or user:%(creator)s or project_id:%(created_by_project_id)s",
"resource_owner": "project_id:%(project_id)s",
"metric_owner": "project_id:%(resource.project_id)s",
"get status": "role:admin",
"create resource": "",
"get resource": "rule:admin_or_creator or rule:resource_owner",
"update resource": "rule:admin_or_creator",
"delete resource": "rule:admin_or_creator",
"delete resources": "rule:admin_or_creator",
"list resource": "rule:admin_or_creator or rule:resource_owner",
"search resource": "rule:admin_or_creator or rule:resource_owner",
"create resource type": "role:admin",
"delete resource type": "role:admin",
"update resource type": "role:admin",
"list resource type": "",
"get resource type": "",
"get archive policy": "",
"list archive policy": "",
"create archive policy": "role:admin",
"update archive policy": "role:admin",
"delete archive policy": "role:admin",
"create archive policy rule": "role:admin",
"get archive policy rule": "",
"list archive policy rule": "",
"delete archive policy rule": "role:admin",
"create metric": "",
"delete metric": "rule:admin_or_creator",
"get metric": "rule:admin_or_creator or rule:metric_owner",
"search metric": "rule:admin_or_creator or rule:metric_owner",
"list metric": "",
"list all metric": "role:admin",
"get measures": "rule:admin_or_creator or rule:metric_owner",
"post measures": "rule:admin_or_creator"
}
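A hedged, simplified simulation of how the "get resource" rule above combines its sub-rules; the real evaluation is performed by oslo.policy, and the credentials below are illustrative only.
# Simplified stand-in for the oslo.policy evaluation of "get resource" above.
def admin_or_creator(creds, target):
    return ("admin" in creds["roles"]
            or creds["user"] == target["creator"]
            or creds["project_id"] == target["created_by_project_id"])

def resource_owner(creds, target):
    return creds["project_id"] == target["project_id"]

def get_resource_allowed(creds, target):
    return admin_or_creator(creds, target) or resource_owner(creds, target)

creds = {"user": "alice:proj1", "project_id": "proj1", "roles": []}
target = {"creator": "bob:proj2", "created_by_project_id": "proj2",
          "project_id": "proj1"}
print(get_resource_allowed(creds, target))  # True, via the resource_owner rule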

View File

@ -1,93 +0,0 @@
# Copyright (c) 2016-2017 Red Hat, Inc.
# Copyright (c) 2015 eNovance
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from oslo_config import cfg
from oslo_db import options as db_options
from oslo_log import log
from oslo_policy import opts as policy_opts
import pbr.version
from six.moves.urllib import parse as urlparse
from gnocchi import archive_policy
from gnocchi import opts
from gnocchi import utils
LOG = log.getLogger(__name__)
def prepare_service(args=None, conf=None,
default_config_files=None):
if conf is None:
conf = cfg.ConfigOpts()
opts.set_defaults()
# FIXME(jd) Use the pkg_entry info to register the options of these libs
log.register_options(conf)
db_options.set_defaults(conf)
policy_opts.set_defaults(conf)
# Register our own Gnocchi options
for group, options in opts.list_opts():
conf.register_opts(list(options),
group=None if group == "DEFAULT" else group)
conf.set_default("workers", utils.get_default_workers(), group="metricd")
conf(args, project='gnocchi', validate_default_values=True,
default_config_files=default_config_files,
version=pbr.version.VersionInfo('gnocchi').version_string())
# HACK(jd) I'm not happy about that, fix AP class to handle a conf object?
archive_policy.ArchivePolicy.DEFAULT_AGGREGATION_METHODS = (
conf.archive_policy.default_aggregation_methods
)
# If no coordination URL is provided, default to using the indexer as
# coordinator
if conf.storage.coordination_url is None:
if conf.storage.driver == "redis":
conf.set_default("coordination_url",
conf.storage.redis_url,
"storage")
elif conf.incoming.driver == "redis":
conf.set_default("coordination_url",
conf.incoming.redis_url,
"storage")
else:
parsed = urlparse.urlparse(conf.indexer.url)
proto, _, _ = parsed.scheme.partition("+")
parsed = list(parsed)
# Set proto without the + part
parsed[0] = proto
conf.set_default("coordination_url",
urlparse.urlunparse(parsed),
"storage")
cfg_path = conf.oslo_policy.policy_file
if not os.path.isabs(cfg_path):
cfg_path = conf.find_file(cfg_path)
if cfg_path is None or not os.path.exists(cfg_path):
cfg_path = os.path.abspath(os.path.join(os.path.dirname(__file__),
'rest', 'policy.json'))
conf.set_default('policy_file', cfg_path, group='oslo_policy')
log.set_defaults(default_log_levels=log.get_default_log_levels() +
["passlib.utils.compat=INFO"])
log.setup(conf, 'gnocchi')
conf.log_opt_values(LOG, log.DEBUG)
return conf
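The coordination-URL fallback above strips the "+driver" part of the indexer URL scheme before reusing it; a standalone illustration (the URL is an example value):
# Standalone illustration of the scheme rewrite done in prepare_service() above.
from six.moves.urllib import parse as urlparse

indexer_url = "mysql+pymysql://gnocchi:secret@localhost/gnocchi"  # example value
parsed = urlparse.urlparse(indexer_url)
proto, _, _ = parsed.scheme.partition("+")
parsed = list(parsed)
parsed[0] = proto  # drop the "+pymysql" part
print(urlparse.urlunparse(parsed))
# mysql://gnocchi:secret@localhost/gnocchi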

View File

@ -1,195 +0,0 @@
# Copyright (c) 2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import uuid
try:
import asyncio
except ImportError:
import trollius as asyncio
from oslo_config import cfg
from oslo_log import log
import six
from gnocchi import indexer
from gnocchi import service
from gnocchi import storage
from gnocchi import utils
LOG = log.getLogger(__name__)
class Stats(object):
def __init__(self, conf):
self.conf = conf
self.storage = storage.get_driver(self.conf)
self.indexer = indexer.get_driver(self.conf)
self.indexer.connect()
try:
self.indexer.create_resource('generic',
self.conf.statsd.resource_id,
self.conf.statsd.creator)
except indexer.ResourceAlreadyExists:
LOG.debug("Resource %s already exists",
self.conf.statsd.resource_id)
else:
LOG.info("Created resource %s", self.conf.statsd.resource_id)
self.gauges = {}
self.counters = {}
self.times = {}
def reset(self):
self.gauges.clear()
self.counters.clear()
self.times.clear()
def treat_metric(self, metric_name, metric_type, value, sampling):
metric_name += "|" + metric_type
if metric_type == "ms":
if sampling is not None:
raise ValueError(
"Invalid sampling for ms: `%d`, should be none"
% sampling)
self.times[metric_name] = storage.Measure(
utils.dt_in_unix_ns(utils.utcnow()), value)
elif metric_type == "g":
if sampling is not None:
raise ValueError(
"Invalid sampling for g: `%d`, should be none"
% sampling)
self.gauges[metric_name] = storage.Measure(
utils.dt_in_unix_ns(utils.utcnow()), value)
elif metric_type == "c":
sampling = 1 if sampling is None else sampling
if metric_name in self.counters:
current_value = self.counters[metric_name].value
else:
current_value = 0
self.counters[metric_name] = storage.Measure(
utils.dt_in_unix_ns(utils.utcnow()),
current_value + (value * (1 / sampling)))
# TODO(jd) Support "set" type
# elif metric_type == "s":
# pass
else:
raise ValueError("Unknown metric type `%s'" % metric_type)
def flush(self):
resource = self.indexer.get_resource('generic',
self.conf.statsd.resource_id,
with_metrics=True)
for metric_name, measure in itertools.chain(
six.iteritems(self.gauges),
six.iteritems(self.counters),
six.iteritems(self.times)):
try:
# NOTE(jd) We avoid considering any concurrency here as statsd
# is not designed to run in parallel and we do not envision
# operators manipulating the resource/metrics using the Gnocchi
# API at the same time.
metric = resource.get_metric(metric_name)
if not metric:
ap_name = self._get_archive_policy_name(metric_name)
metric = self.indexer.create_metric(
uuid.uuid4(),
self.conf.statsd.creator,
archive_policy_name=ap_name,
name=metric_name,
resource_id=self.conf.statsd.resource_id)
self.storage.incoming.add_measures(metric, (measure,))
except Exception as e:
LOG.error("Unable to add measure %s: %s",
metric_name, e)
self.reset()
def _get_archive_policy_name(self, metric_name):
if self.conf.statsd.archive_policy_name:
return self.conf.statsd.archive_policy_name
# NOTE(sileht): We didn't catch NoArchivePolicyRuleMatch to log it
ap = self.indexer.get_archive_policy_for_metric(metric_name)
return ap.name
class StatsdServer(object):
def __init__(self, stats):
self.stats = stats
@staticmethod
def connection_made(transport):
pass
def datagram_received(self, data, addr):
LOG.debug("Received data `%r' from %s", data, addr)
try:
messages = [m for m in data.decode().split("\n") if m]
except Exception as e:
LOG.error("Unable to decode datagram: %s", e)
return
for message in messages:
metric = message.split("|")
if len(metric) == 2:
metric_name, metric_type = metric
sampling = None
elif len(metric) == 3:
metric_name, metric_type, sampling = metric
else:
LOG.error("Invalid number of | in `%s'", message)
continue
sampling = float(sampling[1:]) if sampling is not None else None
metric_name, metric_str_val = metric_name.split(':')
# NOTE(jd): We do not support +/- gauge, and we delete gauge on
# each flush.
value = float(metric_str_val)
try:
self.stats.treat_metric(metric_name, metric_type,
value, sampling)
except Exception as e:
LOG.error("Unable to treat metric %s: %s", message, str(e))
def start():
conf = service.prepare_service()
if conf.statsd.resource_id is None:
raise cfg.RequiredOptError("resource_id", cfg.OptGroup("statsd"))
stats = Stats(conf)
loop = asyncio.get_event_loop()
# TODO(jd) Add TCP support
listen = loop.create_datagram_endpoint(
lambda: StatsdServer(stats),
local_addr=(conf.statsd.host, conf.statsd.port))
def _flush():
loop.call_later(conf.statsd.flush_delay, _flush)
stats.flush()
loop.call_later(conf.statsd.flush_delay, _flush)
transport, protocol = loop.run_until_complete(listen)
LOG.info("Started on %s:%d", conf.statsd.host, conf.statsd.port)
LOG.info("Flush delay: %d seconds", conf.statsd.flush_delay)
try:
loop.run_forever()
except KeyboardInterrupt:
pass
transport.close()
loop.close()
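For reference, the datagram format handled by StatsdServer.datagram_received() above is "<name>:<value>|<type>[|@<sampling>]"; below is a small standalone parser mirroring that logic (not gnocchi code, just an illustration):
# Standalone illustration of the statsd line parsing done above.
def parse_statsd_message(message):
    parts = message.split("|")
    if len(parts) == 2:
        name_value, metric_type = parts
        sampling = None
    elif len(parts) == 3:
        name_value, metric_type, sampling = parts
        sampling = float(sampling[1:])  # strip the leading '@'
    else:
        raise ValueError("Invalid number of | in %r" % message)
    name, value = name_value.split(":")
    return name, float(value), metric_type, sampling

print(parse_statsd_message("cpu.load:1.5|g"))      # ('cpu.load', 1.5, 'g', None)
print(parse_statsd_message("requests:3|c|@0.1"))   # ('requests', 3.0, 'c', 0.1)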

View File

@ -1,372 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import operator
from oslo_config import cfg
from oslo_log import log
from stevedore import driver
from gnocchi import exceptions
from gnocchi import indexer
OPTS = [
cfg.StrOpt('driver',
default='file',
help='Storage driver to use'),
]
LOG = log.getLogger(__name__)
class Measure(object):
def __init__(self, timestamp, value):
self.timestamp = timestamp
self.value = value
def __iter__(self):
"""Allow to transform measure to tuple."""
yield self.timestamp
yield self.value
class Metric(object):
def __init__(self, id, archive_policy,
creator=None,
name=None,
resource_id=None):
self.id = id
self.archive_policy = archive_policy
self.creator = creator
self.name = name
self.resource_id = resource_id
def __repr__(self):
return '<%s %s>' % (self.__class__.__name__, self.id)
def __str__(self):
return str(self.id)
def __eq__(self, other):
return (isinstance(other, Metric)
and self.id == other.id
and self.archive_policy == other.archive_policy
and self.creator == other.creator
and self.name == other.name
and self.resource_id == other.resource_id)
__hash__ = object.__hash__
class StorageError(Exception):
pass
class InvalidQuery(StorageError):
pass
class MetricDoesNotExist(StorageError):
"""Error raised when this metric does not exist."""
def __init__(self, metric):
self.metric = metric
super(MetricDoesNotExist, self).__init__(
"Metric %s does not exist" % metric)
class AggregationDoesNotExist(StorageError):
"""Error raised when the aggregation method doesn't exists for a metric."""
def __init__(self, metric, method):
self.metric = metric
self.method = method
super(AggregationDoesNotExist, self).__init__(
"Aggregation method '%s' for metric %s does not exist" %
(method, metric))
class GranularityDoesNotExist(StorageError):
"""Error raised when the granularity doesn't exist for a metric."""
def __init__(self, metric, granularity):
self.metric = metric
self.granularity = granularity
super(GranularityDoesNotExist, self).__init__(
"Granularity '%s' for metric %s does not exist" %
(granularity, metric))
class MetricAlreadyExists(StorageError):
"""Error raised when this metric already exists."""
def __init__(self, metric):
self.metric = metric
super(MetricAlreadyExists, self).__init__(
"Metric %s already exists" % metric)
class MetricUnaggregatable(StorageError):
"""Error raised when metrics can't be aggregated."""
def __init__(self, metrics, reason):
self.metrics = metrics
self.reason = reason
super(MetricUnaggregatable, self).__init__(
"Metrics %s can't be aggregated: %s"
% (", ".join((str(m.id) for m in metrics)), reason))
class LockedMetric(StorageError):
"""Error raised when this metric is already being handled by another."""
def __init__(self, metric):
self.metric = metric
super(LockedMetric, self).__init__("Metric %s is locked" % metric)
def get_driver_class(namespace, conf):
"""Return the storage driver class.
:param conf: The conf to use to determine the driver.
"""
return driver.DriverManager(namespace,
conf.driver).driver
def get_driver(conf):
"""Return the configured driver."""
incoming = get_driver_class('gnocchi.incoming', conf.incoming)(
conf.incoming)
return get_driver_class('gnocchi.storage', conf.storage)(
conf.storage, incoming)
class StorageDriver(object):
def __init__(self, conf, incoming):
self.incoming = incoming
@staticmethod
def stop():
pass
def upgrade(self, index, num_sacks):
self.incoming.upgrade(index, num_sacks)
def process_background_tasks(self, index, metrics, sync=False):
"""Process background tasks for this storage.
This calls :func:`process_new_measures` to process new measures
:param index: An indexer to be used for querying metrics
:param metrics: The list of metrics waiting for processing
:param sync: If True, then process everything synchronously and raise
on error
:type sync: bool
"""
LOG.debug("Processing new measures")
try:
self.process_new_measures(index, metrics, sync)
except Exception:
if sync:
raise
LOG.error("Unexpected error during measures processing",
exc_info=True)
def expunge_metrics(self, index, sync=False):
"""Remove deleted metrics
:param index: An indexer to be used for querying metrics
:param sync: If True, then delete everything synchronously and raise
on error
:type sync: bool
"""
metrics_to_expunge = index.list_metrics(status='delete')
for m in metrics_to_expunge:
try:
self.delete_metric(m, sync)
index.expunge_metric(m.id)
except (indexer.NoSuchMetric, LockedMetric):
# It's possible another process deleted or is deleting the
# metric, not a big deal
pass
except Exception:
if sync:
raise
LOG.error("Unable to expunge metric %s from storage", m,
exc_info=True)
@staticmethod
def process_new_measures(indexer, metrics, sync=False):
"""Process added measures in background.
Some drivers might need to have a background task running that processes
the measures sent to metrics. This is used for that.
"""
@staticmethod
def get_measures(metric, from_timestamp=None, to_timestamp=None,
aggregation='mean', granularity=None, resample=None):
"""Get a measure to a metric.
:param metric: The metric measured.
:param from timestamp: The timestamp to get the measure from.
:param to timestamp: The timestamp to get the measure to.
:param aggregation: The type of aggregation to retrieve.
:param granularity: The granularity to retrieve.
:param resample: The granularity to resample to.
"""
if aggregation not in metric.archive_policy.aggregation_methods:
raise AggregationDoesNotExist(metric, aggregation)
@staticmethod
def delete_metric(metric, sync=False):
raise exceptions.NotImplementedError
@staticmethod
def get_cross_metric_measures(metrics, from_timestamp=None,
to_timestamp=None, aggregation='mean',
reaggregation=None, resample=None,
granularity=None, needed_overlap=None,
fill=None):
"""Get aggregated measures of multiple entities.
:param entities: The entities measured to aggregate.
:param from timestamp: The timestamp to get the measure from.
:param to timestamp: The timestamp to get the measure to.
:param granularity: The granularity to retrieve.
:param aggregation: The type of aggregation to retrieve.
:param reaggregation: The type of aggregation to compute
on the retrieved measures.
:param resample: The granularity to resample to.
:param fill: The value to use to fill in missing data in series.
"""
for metric in metrics:
if aggregation not in metric.archive_policy.aggregation_methods:
raise AggregationDoesNotExist(metric, aggregation)
if (granularity is not None and granularity
not in set(d.granularity
for d in metric.archive_policy.definition)):
raise GranularityDoesNotExist(metric, granularity)
@staticmethod
def search_value(metrics, query, from_timestamp=None,
to_timestamp=None,
aggregation='mean',
granularity=None):
"""Search for an aggregated value that realizes a predicate.
:param metrics: The list of metrics to look into.
:param query: The query being sent.
:param from_timestamp: The timestamp to get the measure from.
:param to_timestamp: The timestamp to get the measure to.
:param aggregation: The type of aggregation to retrieve.
:param granularity: The granularity to retrieve.
"""
raise exceptions.NotImplementedError
class MeasureQuery(object):
binary_operators = {
u"=": operator.eq,
u"==": operator.eq,
u"eq": operator.eq,
u"<": operator.lt,
u"lt": operator.lt,
u">": operator.gt,
u"gt": operator.gt,
u"<=": operator.le,
u"": operator.le,
u"le": operator.le,
u">=": operator.ge,
u"": operator.ge,
u"ge": operator.ge,
u"!=": operator.ne,
u"": operator.ne,
u"ne": operator.ne,
u"%": operator.mod,
u"mod": operator.mod,
u"+": operator.add,
u"add": operator.add,
u"-": operator.sub,
u"sub": operator.sub,
u"*": operator.mul,
u"×": operator.mul,
u"mul": operator.mul,
u"/": operator.truediv,
u"÷": operator.truediv,
u"div": operator.truediv,
u"**": operator.pow,
u"^": operator.pow,
u"pow": operator.pow,
}
multiple_operators = {
u"or": any,
u"": any,
u"and": all,
u"": all,
}
def __init__(self, tree):
self._eval = self.build_evaluator(tree)
def __call__(self, value):
return self._eval(value)
def build_evaluator(self, tree):
try:
operator, nodes = list(tree.items())[0]
except Exception:
return lambda value: tree
try:
op = self.multiple_operators[operator]
except KeyError:
try:
op = self.binary_operators[operator]
except KeyError:
raise InvalidQuery("Unknown operator %s" % operator)
return self._handle_binary_op(op, nodes)
return self._handle_multiple_op(op, nodes)
def _handle_multiple_op(self, op, nodes):
elements = [self.build_evaluator(node) for node in nodes]
return lambda value: op((e(value) for e in elements))
def _handle_binary_op(self, op, node):
try:
iterator = iter(node)
except Exception:
return lambda value: op(value, node)
nodes = list(iterator)
if len(nodes) != 2:
raise InvalidQuery(
"Binary operator %s needs 2 arguments, %d given" %
(op, len(nodes)))
node0 = self.build_evaluator(node[0])
node1 = self.build_evaluator(node[1])
return lambda value: op(node0(value), node1(value))
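A hedged usage sketch for the MeasureQuery evaluator defined above (assuming gnocchi.storage is importable): a query is a nested dict of operators applied to each measure value.
# Usage sketch for MeasureQuery; values are illustrative.
from gnocchi.storage import MeasureQuery

in_range = MeasureQuery({"and": [{">=": 10}, {"<": 20}]})
print(in_range(15))   # True
print(in_range(42))   # False
print(MeasureQuery({"%": [5, 2]})(None))  # 1, a constant expression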

View File

@ -1,571 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2016 Red Hat, Inc.
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import datetime
import itertools
import operator
from concurrent import futures
import iso8601
from oslo_config import cfg
from oslo_log import log
import six
import six.moves
from gnocchi import carbonara
from gnocchi import storage
from gnocchi import utils
OPTS = [
cfg.IntOpt('aggregation_workers_number',
default=1, min=1,
help='Number of threads to process and store aggregates. '
'Set value roughly equal to number of aggregates to be '
'computed per metric'),
cfg.StrOpt('coordination_url',
secret=True,
help='Coordination driver URL'),
]
LOG = log.getLogger(__name__)
class CorruptionError(ValueError):
"""Data corrupted, damn it."""
def __init__(self, message):
super(CorruptionError, self).__init__(message)
class SackLockTimeoutError(Exception):
pass
class CarbonaraBasedStorage(storage.StorageDriver):
def __init__(self, conf, incoming):
super(CarbonaraBasedStorage, self).__init__(conf, incoming)
self.aggregation_workers_number = conf.aggregation_workers_number
if self.aggregation_workers_number == 1:
# NOTE(jd) Avoid using futures at all if we don't want any threads.
self._map_in_thread = self._map_no_thread
else:
self._map_in_thread = self._map_in_futures_threads
self.coord, my_id = utils.get_coordinator_and_start(
conf.coordination_url)
def stop(self):
self.coord.stop()
@staticmethod
def _get_measures(metric, timestamp_key, aggregation, granularity,
version=3):
raise NotImplementedError
@staticmethod
def _get_unaggregated_timeserie(metric, version=3):
raise NotImplementedError
def _get_unaggregated_timeserie_and_unserialize(
self, metric, block_size, back_window):
"""Retrieve unaggregated timeserie for a metric and unserialize it.
Returns a gnocchi.carbonara.BoundTimeSerie object. If the data cannot
be retrieved, returns None.
"""
with utils.StopWatch() as sw:
raw_measures = (
self._get_unaggregated_timeserie(
metric)
)
if not raw_measures:
return
LOG.debug(
"Retrieve unaggregated measures "
"for %s in %.2fs",
metric.id, sw.elapsed())
try:
return carbonara.BoundTimeSerie.unserialize(
raw_measures, block_size, back_window)
except ValueError:
raise CorruptionError(
"Data corruption detected for %s "
"unaggregated timeserie" % metric.id)
@staticmethod
def _store_unaggregated_timeserie(metric, data, version=3):
raise NotImplementedError
@staticmethod
def _store_metric_measures(metric, timestamp_key, aggregation,
granularity, data, offset=None, version=3):
raise NotImplementedError
@staticmethod
def _list_split_keys_for_metric(metric, aggregation, granularity,
version=3):
raise NotImplementedError
@staticmethod
def _version_check(name, v):
"""Validate object matches expected version.
Version should be last attribute and start with 'v'
"""
return name.split("_")[-1] == 'v%s' % v
def get_measures(self, metric, from_timestamp=None, to_timestamp=None,
aggregation='mean', granularity=None, resample=None):
super(CarbonaraBasedStorage, self).get_measures(
metric, from_timestamp, to_timestamp, aggregation)
if granularity is None:
agg_timeseries = self._map_in_thread(
self._get_measures_timeserie,
((metric, aggregation, ap.granularity,
from_timestamp, to_timestamp)
for ap in reversed(metric.archive_policy.definition)))
else:
agg_timeseries = self._get_measures_timeserie(
metric, aggregation, granularity,
from_timestamp, to_timestamp)
if resample:
agg_timeseries = agg_timeseries.resample(resample)
agg_timeseries = [agg_timeseries]
return [(timestamp.replace(tzinfo=iso8601.iso8601.UTC), r, v)
for ts in agg_timeseries
for timestamp, r, v in ts.fetch(from_timestamp, to_timestamp)]
def _get_measures_and_unserialize(self, metric, key,
aggregation, granularity):
data = self._get_measures(metric, key, aggregation, granularity)
try:
return carbonara.AggregatedTimeSerie.unserialize(
data, key, aggregation, granularity)
except carbonara.InvalidData:
LOG.error("Data corruption detected for %s "
"aggregated `%s' timeserie, granularity `%s' "
"around time `%s', ignoring.",
metric.id, aggregation, granularity, key)
def _get_measures_timeserie(self, metric,
aggregation, granularity,
from_timestamp=None, to_timestamp=None):
# Find the number of points
for d in metric.archive_policy.definition:
if d.granularity == granularity:
points = d.points
break
else:
raise storage.GranularityDoesNotExist(metric, granularity)
all_keys = None
try:
all_keys = self._list_split_keys_for_metric(
metric, aggregation, granularity)
except storage.MetricDoesNotExist:
for d in metric.archive_policy.definition:
if d.granularity == granularity:
return carbonara.AggregatedTimeSerie(
sampling=granularity,
aggregation_method=aggregation,
max_size=d.points)
raise storage.GranularityDoesNotExist(metric, granularity)
if from_timestamp:
from_timestamp = str(
carbonara.SplitKey.from_timestamp_and_sampling(
from_timestamp, granularity))
if to_timestamp:
to_timestamp = str(
carbonara.SplitKey.from_timestamp_and_sampling(
to_timestamp, granularity))
timeseries = filter(
lambda x: x is not None,
self._map_in_thread(
self._get_measures_and_unserialize,
((metric, key, aggregation, granularity)
for key in all_keys
if ((not from_timestamp or key >= from_timestamp)
and (not to_timestamp or key <= to_timestamp))))
)
return carbonara.AggregatedTimeSerie.from_timeseries(
sampling=granularity,
aggregation_method=aggregation,
timeseries=timeseries,
max_size=points)
def _store_timeserie_split(self, metric, key, split,
aggregation, archive_policy_def,
oldest_mutable_timestamp):
# NOTE(jd) We write the full split only if the driver works that way
# (self.WRITE_FULL) or if the oldest_mutable_timestamp is out of range.
write_full = self.WRITE_FULL or next(key) <= oldest_mutable_timestamp
key_as_str = str(key)
if write_full:
try:
existing = self._get_measures_and_unserialize(
metric, key_as_str, aggregation,
archive_policy_def.granularity)
except storage.AggregationDoesNotExist:
pass
else:
if existing is not None:
if split is None:
split = existing
else:
split.merge(existing)
if split is None:
# `split' can be none if existing is None and no split was passed
# in order to rewrite and compress the data; in that case, it means
# the split key is present and listed, but some aggregation method
# or granularity is missing. That means data is corrupted, but it
# does not mean we have to fail, we can just do nothing and log a
# warning.
LOG.warning("No data found for metric %s, granularity %f "
"and aggregation method %s (split key %s): "
"possible data corruption",
metric, archive_policy_def.granularity,
aggregation, key)
return
offset, data = split.serialize(key, compressed=write_full)
return self._store_metric_measures(
metric, key_as_str, aggregation, archive_policy_def.granularity,
data, offset=offset)
def _add_measures(self, aggregation, archive_policy_def,
metric, grouped_serie,
previous_oldest_mutable_timestamp,
oldest_mutable_timestamp):
ts = carbonara.AggregatedTimeSerie.from_grouped_serie(
grouped_serie, archive_policy_def.granularity,
aggregation, max_size=archive_policy_def.points)
# Don't do anything if the timeserie is empty
if not ts:
return
# We only need to check for rewrite if driver is not in WRITE_FULL mode
# and if we already stored splits once
need_rewrite = (
not self.WRITE_FULL
and previous_oldest_mutable_timestamp is not None
)
if archive_policy_def.timespan or need_rewrite:
existing_keys = self._list_split_keys_for_metric(
metric, aggregation, archive_policy_def.granularity)
# First delete old splits
if archive_policy_def.timespan:
oldest_point_to_keep = ts.last - datetime.timedelta(
seconds=archive_policy_def.timespan)
oldest_key_to_keep = ts.get_split_key(oldest_point_to_keep)
oldest_key_to_keep_s = str(oldest_key_to_keep)
for key in list(existing_keys):
# NOTE(jd) Only delete if the key is strictly less than
# the timestamp; we don't delete any timeserie split that
# contains our timestamp, so we prefer keeping a bit too much
# data over deleting too much
if key < oldest_key_to_keep_s:
self._delete_metric_measures(
metric, key, aggregation,
archive_policy_def.granularity)
existing_keys.remove(key)
else:
oldest_key_to_keep = carbonara.SplitKey(0, 0)
# Rewrite all read-only splits just for fun (and compression). This
# only happens if `previous_oldest_mutable_timestamp' exists, which
# means we already wrote some splits at some point so this is not the
# first time we treat this timeserie.
if need_rewrite:
previous_oldest_mutable_key = str(ts.get_split_key(
previous_oldest_mutable_timestamp))
oldest_mutable_key = str(ts.get_split_key(
oldest_mutable_timestamp))
if previous_oldest_mutable_key != oldest_mutable_key:
for key in existing_keys:
if previous_oldest_mutable_key <= key < oldest_mutable_key:
LOG.debug(
"Compressing previous split %s (%s) for metric %s",
key, aggregation, metric)
# NOTE(jd) Rewrite it entirely for fun (and later for
# compression). For that, we just pass None as split.
self._store_timeserie_split(
metric, carbonara.SplitKey(
float(key), archive_policy_def.granularity),
None, aggregation, archive_policy_def,
oldest_mutable_timestamp)
for key, split in ts.split():
if key >= oldest_key_to_keep:
LOG.debug(
"Storing split %s (%s) for metric %s",
key, aggregation, metric)
self._store_timeserie_split(
metric, key, split, aggregation, archive_policy_def,
oldest_mutable_timestamp)
@staticmethod
def _delete_metric(metric):
raise NotImplementedError
def delete_metric(self, metric, sync=False):
LOG.debug("Deleting metric %s", metric)
lock = self.incoming.get_sack_lock(
self.coord, self.incoming.sack_for_metric(metric.id))
if not lock.acquire(blocking=sync):
raise storage.LockedMetric(metric)
# NOTE(gordc): no need to hold the lock because the metric has already
# been marked as "deleted" in the indexer, so no measure worker
# is going to process it anymore.
lock.release()
self._delete_metric(metric)
self.incoming.delete_unprocessed_measures_for_metric_id(metric.id)
@staticmethod
def _delete_metric_measures(metric, timestamp_key,
aggregation, granularity, version=3):
raise NotImplementedError
def refresh_metric(self, indexer, metric, timeout):
s = self.incoming.sack_for_metric(metric.id)
lock = self.incoming.get_sack_lock(self.coord, s)
if not lock.acquire(blocking=timeout):
raise SackLockTimeoutError(
'Unable to refresh metric: %s. Metric is locked. '
'Please try again.' % metric.id)
try:
self.process_new_measures(indexer, [six.text_type(metric.id)])
finally:
lock.release()
def process_new_measures(self, indexer, metrics_to_process,
sync=False):
# Process only active metrics. Deleted metrics with unprocessed
# measures will be skipped until cleaned up by the janitor.
metrics = indexer.list_metrics(ids=metrics_to_process)
for metric in metrics:
# NOTE(gordc): must lock at sack level
try:
LOG.debug("Processing measures for %s", metric)
with self.incoming.process_measure_for_metric(metric) \
as measures:
self._compute_and_store_timeseries(metric, measures)
LOG.debug("Measures for metric %s processed", metric)
except Exception:
if sync:
raise
LOG.error("Error processing new measures", exc_info=True)
def _compute_and_store_timeseries(self, metric, measures):
# NOTE(mnaser): The metric could have been handled by
# another worker, ignore if no measures.
if len(measures) == 0:
LOG.debug("Skipping %s (already processed)", metric)
return
measures = sorted(measures, key=operator.itemgetter(0))
agg_methods = list(metric.archive_policy.aggregation_methods)
block_size = metric.archive_policy.max_block_size
back_window = metric.archive_policy.back_window
definition = metric.archive_policy.definition
try:
ts = self._get_unaggregated_timeserie_and_unserialize(
metric, block_size=block_size, back_window=back_window)
except storage.MetricDoesNotExist:
try:
self._create_metric(metric)
except storage.MetricAlreadyExists:
# Created in the mean time, do not worry
pass
ts = None
except CorruptionError as e:
LOG.error(e)
ts = None
if ts is None:
# This is the first time we treat measures for this
# metric, or data are corrupted, create a new one
ts = carbonara.BoundTimeSerie(block_size=block_size,
back_window=back_window)
current_first_block_timestamp = None
else:
current_first_block_timestamp = ts.first_block_timestamp()
# NOTE(jd) This is Python, where you need such
# a hack to pass a variable around a closure,
# sorry.
computed_points = {"number": 0}
def _map_add_measures(bound_timeserie):
# NOTE (gordc): bound_timeserie is entire set of
# unaggregated measures matching largest
# granularity. the following takes only the points
# affected by new measures for specific granularity
tstamp = max(bound_timeserie.first, measures[0][0])
new_first_block_timestamp = bound_timeserie.first_block_timestamp()
computed_points['number'] = len(bound_timeserie)
for d in definition:
ts = bound_timeserie.group_serie(
d.granularity, carbonara.round_timestamp(
tstamp, d.granularity * 10e8))
self._map_in_thread(
self._add_measures,
((aggregation, d, metric, ts,
current_first_block_timestamp,
new_first_block_timestamp)
for aggregation in agg_methods))
with utils.StopWatch() as sw:
ts.set_values(measures,
before_truncate_callback=_map_add_measures,
ignore_too_old_timestamps=True)
number_of_operations = (len(agg_methods) * len(definition))
perf = ""
elapsed = sw.elapsed()
if elapsed > 0:
perf = " (%d points/s, %d measures/s)" % (
((number_of_operations * computed_points['number']) /
elapsed),
((number_of_operations * len(measures)) / elapsed)
)
LOG.debug("Computed new metric %s with %d new measures "
"in %.2f seconds%s",
metric.id, len(measures), elapsed, perf)
self._store_unaggregated_timeserie(metric, ts.serialize())
def get_cross_metric_measures(self, metrics, from_timestamp=None,
to_timestamp=None, aggregation='mean',
reaggregation=None, resample=None,
granularity=None, needed_overlap=100.0,
fill=None):
super(CarbonaraBasedStorage, self).get_cross_metric_measures(
metrics, from_timestamp, to_timestamp,
aggregation, reaggregation, resample, granularity, needed_overlap)
if reaggregation is None:
reaggregation = aggregation
if granularity is None:
granularities = (
definition.granularity
for metric in metrics
for definition in metric.archive_policy.definition
)
granularities_in_common = [
g
for g, occurrence in six.iteritems(
collections.Counter(granularities))
if occurrence == len(metrics)
]
if not granularities_in_common:
raise storage.MetricUnaggregatable(
metrics, 'No granularity match')
else:
granularities_in_common = [granularity]
if resample and granularity:
tss = self._map_in_thread(self._get_measures_timeserie,
[(metric, aggregation, granularity,
from_timestamp, to_timestamp)
for metric in metrics])
for i, ts in enumerate(tss):
tss[i] = ts.resample(resample)
else:
tss = self._map_in_thread(self._get_measures_timeserie,
[(metric, aggregation, g,
from_timestamp, to_timestamp)
for metric in metrics
for g in granularities_in_common])
try:
return [(timestamp.replace(tzinfo=iso8601.iso8601.UTC), r, v)
for timestamp, r, v
in carbonara.AggregatedTimeSerie.aggregated(
tss, reaggregation, from_timestamp, to_timestamp,
needed_overlap, fill)]
except carbonara.UnAggregableTimeseries as e:
raise storage.MetricUnaggregatable(metrics, e.reason)
def _find_measure(self, metric, aggregation, granularity, predicate,
from_timestamp, to_timestamp):
timeserie = self._get_measures_timeserie(
metric, aggregation, granularity,
from_timestamp, to_timestamp)
values = timeserie.fetch(from_timestamp, to_timestamp)
return {metric:
[(timestamp.replace(tzinfo=iso8601.iso8601.UTC),
g, value)
for timestamp, g, value in values
if predicate(value)]}
def search_value(self, metrics, query, from_timestamp=None,
to_timestamp=None, aggregation='mean',
granularity=None):
granularity = granularity or []
predicate = storage.MeasureQuery(query)
results = self._map_in_thread(
self._find_measure,
[(metric, aggregation,
gran, predicate,
from_timestamp, to_timestamp)
for metric in metrics
for gran in granularity or
(defin.granularity
for defin in metric.archive_policy.definition)])
result = collections.defaultdict(list)
for r in results:
for metric, metric_result in six.iteritems(r):
result[metric].extend(metric_result)
# Sort the result
for metric, r in six.iteritems(result):
# Sort by timestamp asc, granularity desc
r.sort(key=lambda t: (t[0], - t[1]))
return result
@staticmethod
def _map_no_thread(method, list_of_args):
return list(itertools.starmap(method, list_of_args))
def _map_in_futures_threads(self, method, list_of_args):
with futures.ThreadPoolExecutor(
max_workers=self.aggregation_workers_number) as executor:
# We use list() to consume the iterator and wait for all threads,
# so the first exception, if any, is raised now; not much choice
return list(executor.map(lambda args: method(*args), list_of_args))
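The _map_in_futures_threads() helper above follows a starmap-like dispatch over argument tuples; here is a self-contained sketch of the same pattern, with illustrative data.
# Self-contained illustration of the thread-pool mapping pattern used above.
from concurrent import futures

def add(a, b):
    return a + b

list_of_args = [(1, 2), (3, 4), (5, 6)]
with futures.ThreadPoolExecutor(max_workers=2) as executor:
    # list() forces evaluation so the first exception, if any, is raised here.
    results = list(executor.map(lambda args: add(*args), list_of_args))
print(results)  # [3, 7, 11]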

View File

@ -1,203 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright © 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from gnocchi import storage
from gnocchi.storage import _carbonara
from gnocchi.storage.common import ceph
OPTS = [
cfg.StrOpt('ceph_pool',
default='gnocchi',
help='Ceph pool name to use.'),
cfg.StrOpt('ceph_username',
help='Ceph username (ie: admin without "client." prefix).'),
cfg.StrOpt('ceph_secret', help='Ceph key', secret=True),
cfg.StrOpt('ceph_keyring', help='Ceph keyring path.'),
cfg.IntOpt('ceph_timeout', help='Ceph connection timeout'),
cfg.StrOpt('ceph_conffile',
default='/etc/ceph/ceph.conf',
help='Ceph configuration file.'),
]
rados = ceph.rados
class CephStorage(_carbonara.CarbonaraBasedStorage):
WRITE_FULL = False
def __init__(self, conf, incoming):
super(CephStorage, self).__init__(conf, incoming)
self.rados, self.ioctx = ceph.create_rados_connection(conf)
def stop(self):
ceph.close_rados_connection(self.rados, self.ioctx)
super(CephStorage, self).stop()
@staticmethod
def _get_object_name(metric, timestamp_key, aggregation, granularity,
version=3):
name = str("gnocchi_%s_%s_%s_%s" % (
metric.id, timestamp_key, aggregation, granularity))
return name + '_v%s' % version if version else name
def _object_exists(self, name):
try:
self.ioctx.stat(name)
return True
except rados.ObjectNotFound:
return False
def _create_metric(self, metric):
name = self._build_unaggregated_timeserie_path(metric, 3)
if self._object_exists(name):
raise storage.MetricAlreadyExists(metric)
else:
self.ioctx.write_full(name, b"")
def _store_metric_measures(self, metric, timestamp_key, aggregation,
granularity, data, offset=None, version=3):
name = self._get_object_name(metric, timestamp_key,
aggregation, granularity, version)
if offset is None:
self.ioctx.write_full(name, data)
else:
self.ioctx.write(name, data, offset=offset)
with rados.WriteOpCtx() as op:
self.ioctx.set_omap(op, (name,), (b"",))
self.ioctx.operate_write_op(
op, self._build_unaggregated_timeserie_path(metric, 3))
def _delete_metric_measures(self, metric, timestamp_key, aggregation,
granularity, version=3):
name = self._get_object_name(metric, timestamp_key,
aggregation, granularity, version)
try:
self.ioctx.remove_object(name)
except rados.ObjectNotFound:
# It's possible that we already removed that object and then crashed
# before removing it from the OMAP key list; no big deal
# anyway.
pass
with rados.WriteOpCtx() as op:
self.ioctx.remove_omap_keys(op, (name,))
self.ioctx.operate_write_op(
op, self._build_unaggregated_timeserie_path(metric, 3))
def _delete_metric(self, metric):
with rados.ReadOpCtx() as op:
omaps, ret = self.ioctx.get_omap_vals(op, "", "", -1)
try:
self.ioctx.operate_read_op(
op, self._build_unaggregated_timeserie_path(metric, 3))
except rados.ObjectNotFound:
return
# NOTE(sileht): after reading libradospy, I'm not sure that ret
# will have the correct value: get_omap_vals transforms the C int
# into a Python int before operate_read_op is called. I don't know
# whether the int content is copied during this transformation or
# whether it still points to the C int; I think it's copied...
try:
ceph.errno_to_exception(ret)
except rados.ObjectNotFound:
return
ops = [self.ioctx.aio_remove(name) for name, _ in omaps]
for op in ops:
op.wait_for_complete_and_cb()
try:
self.ioctx.remove_object(
self._build_unaggregated_timeserie_path(metric, 3))
except rados.ObjectNotFound:
# It's possible that the object does not exist
pass
def _get_measures(self, metric, timestamp_key, aggregation, granularity,
version=3):
try:
name = self._get_object_name(metric, timestamp_key,
aggregation, granularity, version)
return self._get_object_content(name)
except rados.ObjectNotFound:
if self._object_exists(
self._build_unaggregated_timeserie_path(metric, 3)):
raise storage.AggregationDoesNotExist(metric, aggregation)
else:
raise storage.MetricDoesNotExist(metric)
def _list_split_keys_for_metric(self, metric, aggregation, granularity,
version=3):
with rados.ReadOpCtx() as op:
omaps, ret = self.ioctx.get_omap_vals(op, "", "", -1)
try:
self.ioctx.operate_read_op(
op, self._build_unaggregated_timeserie_path(metric, 3))
except rados.ObjectNotFound:
raise storage.MetricDoesNotExist(metric)
# NOTE(sileht): after reading libradospy, I'm not sure that ret
# will have the correct value: get_omap_vals transforms the C int
# into a Python int before operate_read_op is called. I don't know
# whether the int content is copied during this transformation or
# whether it still points to the C int; I think it's copied...
try:
ceph.errno_to_exception(ret)
except rados.ObjectNotFound:
raise storage.MetricDoesNotExist(metric)
keys = set()
for name, value in omaps:
meta = name.split('_')
if (aggregation == meta[3] and granularity == float(meta[4])
and self._version_check(name, version)):
keys.add(meta[2])
return keys
@staticmethod
def _build_unaggregated_timeserie_path(metric, version):
return (('gnocchi_%s_none' % metric.id)
+ ("_v%s" % version if version else ""))
def _get_unaggregated_timeserie(self, metric, version=3):
try:
return self._get_object_content(
self._build_unaggregated_timeserie_path(metric, version))
except rados.ObjectNotFound:
raise storage.MetricDoesNotExist(metric)
def _store_unaggregated_timeserie(self, metric, data, version=3):
self.ioctx.write_full(
self._build_unaggregated_timeserie_path(metric, version), data)
def _get_object_content(self, name):
offset = 0
content = b''
while True:
data = self.ioctx.read(name, offset=offset)
if not data:
break
content += data
offset += len(data)
return content
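The Ceph driver above encodes the split key, aggregation and granularity directly in the object name; a hedged illustration of how _list_split_keys_for_metric() parses such a name back (the UUID and timestamp key are made up):
# Illustrative object name following the "gnocchi_<id>_<key>_<agg>_<gran>_v3"
# pattern built by _get_object_name() above.
name = "gnocchi_0ae8a6c4-0f36-4a67-9bbd-6c768d9c0c54_1420070400.0_mean_300.0_v3"
meta = name.split("_")
print(meta[2])                       # split key: '1420070400.0'
print(meta[3])                       # aggregation: 'mean'
print(float(meta[4]))                # granularity: 300.0
print(name.split("_")[-1] == "v3")   # version check, as in _version_check()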

Some files were not shown because too many files have changed in this diff