Add a file to the reno documentation build to show release notes for
stable/2024.1.
Use the pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2024.1.
Sem-Ver: feature
Change-Id: Ic60212f53d02906092c8656b1fab8d9bb84d54a8
Given these backends are now used in many production environments, they
can no longer be considered experimental.
Change-Id: I9e9f3023bf2a50807540e69b764600c0c5f995d5
Switch to using oslo_db.sqlalchemy.enginefacade instead, as this is
required for SQLAlchemy 2.x support.
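For reference, a minimal enginefacade sketch (the model and function
names below are illustrative, not the actual CloudKitty code):

    from oslo_db.sqlalchemy import enginefacade
    from sqlalchemy import Column, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class State(Base):
        __tablename__ = 'states'
        name = Column(String(255), primary_key=True)
        state = Column(String(255))

    @enginefacade.reader
    def get_state(context, name):
        # enginefacade manages the session and transaction scope;
        # readers need no explicit commit or rollback.
        return context.session.query(State).filter_by(name=name).one()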
Change-Id: Ifcad28239b6907b8ca396d348cbfa54185355f68
This patch allows CloudKitty to use InfluxDB v2 with Flux queries. This
type of query requires less CPU and RAM to process in the InfluxDB
backend.
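For illustration, a Flux query through the InfluxDB v2 Python client
might look like this (bucket, measurement, and connection values are
placeholders, not the actual CloudKitty configuration):

    from influxdb_client import InfluxDBClient

    client = InfluxDBClient(url="http://localhost:8086",
                            token="my-token", org="my-org")
    flux = '''
    from(bucket: "cloudkitty")
      |> range(start: -1h)
      |> filter(fn: (r) => r._measurement == "dataframes")
      |> sum(column: "_value")
    '''
    # Flux pushes filtering and aggregation into the server, which is
    # what saves CPU and RAM compared to fetching raw points.
    tables = client.query_api().query(flux)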
Change-Id: I8ee3c92776aa69afbede353981a5fcd65dd7d099
Depends-On: https://review.opendev.org/c/openstack/requirements/+/895629
Story: 2010863
Task: 48539
We use requests in several places in the CloudKitty code, so we need to
depend on it directly instead of pulling it in through transitive
dependencies.
Change-Id: Ibedb9ddba61b39c80b1e1b3910a90468bdfc76ae
Since commit
8778c64776
the PBR module in OpenStack started validating the parameters given
when creating an embedded WSGI server. Now, if invalid arguments are
passed to cloudkitty-api, it raises an error, as we are seeing in
DevStack when using `CLOUDKITTY_USE_MOD_WSGI=False`:
cloudkitty-api[86126]: usage: cloudkitty-api [-h] [--port PORT] [--host IP] -- [passed options]
cloudkitty-api[86126]: cloudkitty-api: error: unrecognized arguments: --config-file=/etc/cloudkitty/cloudkitty.conf
This patch also extracts the database upgrade workflow into a function
so that it can be used in grenade tests.
Change-Id: Ifc1a8563a9efcae2abaa6f8eb036405a93ff296d
As per the current release tested runtimes, we test up to Python 3.11,
so update the Python classifiers in setup.cfg accordingly.
Change-Id: If66420bae76e8db527c1aa191cb31d6ed8a7ff91
There is a need to add a human-readable description to the metric
definition. This can then be used to create custom reports in the
`summary` GET API. The value has to be stored in the backend, as we
already do with the alt_name and unit of the metric.
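For illustration, the description could live alongside alt_name and
unit in the metric definition (the field placement and values below
are hypothetical):

    metrics:
      cpu:
        unit: instance
        alt_name: instance
        description: Compute instance usage, rated per flavor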
Depends-On: https://review.opendev.org/c/openstack/cloudkitty/+/861786
Change-Id: Icea8d00eaf3343e59f0f7b2234754f6abcb23258
To facilitate the switch from Elasticsearch to OpenSearch, the ES
backend has been duplicated and renamed to OpenSearch where appropriate.
The OpenSearch implementation was modified in places for compatibility
with OpenSearch 2.x, for example (see the sketch below):
- remove the mapping name from the bulk API URL
- replace put_mapping with post_mapping
This will allow for the future removal of the Elasticsearch backend.
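A minimal sketch of the bulk URL change, using requests (the index
name, document, and host below are placeholders):

    import json

    import requests

    # Hypothetical NDJSON bulk payload: an action line per document.
    actions = [{"index": {}}, {"type": "instance", "rating": 0.42}]
    payload = "\n".join(json.dumps(a) for a in actions) + "\n"

    # OpenSearch 2.x: POST /<index>/_bulk, with no mapping name in the
    # URL (legacy Elasticsearch used POST /<index>/<mapping>/_bulk).
    requests.post("http://localhost:9200/cloudkitty/_bulk",
                  data=payload,
                  headers={"Content-Type": "application/x-ndjson"})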
Change-Id: I88b0a30f66af13dad1bd75cde412d2880b4ead30
Co-Authored-By: Pierre Riteau <pierre@stackhpc.com>
Introduce new default groupby options: (i) time, to group data hourly;
(ii) time-d, to group data by day of the year; (iii) time-w, to group
data by week of the year; (iv) time-m, to group data by month; and
(v) time-y, to group data by year. If you have old data in CloudKitty
and you wish to use these groupby methods, you will need to reprocess
the desired timeframe.
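For example, a v2 summary request grouped by month might look like
this (the host, port, and token are placeholders):

    import requests

    resp = requests.get(
        "http://localhost:8889/v2/summary",
        params={"groupby": "time-m"},
        headers={"X-Auth-Token": "gAAAA..."})
    print(resp.json())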
Story: #2009839
Task: #44438
Depends-On: https://review.opendev.org/c/x/wsme/+/893677
Change-Id: Iad296f54f6701af84e168796aec9b1033a2a8a2d
Calling GET /v2/task/reprocesses with python-cloudkittyclient was
returning Internal Server Error, with the following API trace:
File "/var/lib/kolla/venv/lib/python3.6/site-packages/cloudkitty/api/v2/task/reprocess.py", line 259, in get
order, ACCEPTED_GET_REPROCESSING_REQUEST_ORDERS)
TypeError: __init__() takes from 1 to 3 positional arguments but 4 were given
This was because http_exceptions.BadRequest was given multiple arguments
(similar to LOG.* methods) instead of a single string.
Another issue is that python-cloudkittyclient sends the "DESC" order
while the API only supports "desc" and "asc". Convert to lower case for
compatibility.
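A minimal sketch of the fix, assuming werkzeug-style HTTP exceptions
(the helper below is illustrative, not the actual API code):

    from werkzeug import exceptions as http_exceptions

    ACCEPTED_ORDERS = ["asc", "desc"]

    def validate_order(order):
        # Convert to lower case so "DESC" from the client is accepted.
        order = order.lower()
        if order not in ACCEPTED_ORDERS:
            # BadRequest takes a single description string; passing
            # extra positional arguments (LOG.*-style) raises the
            # TypeError seen in the trace above.
            raise http_exceptions.BadRequest(
                "Order [%s] is not in accepted orders [%s]."
                % (order, ACCEPTED_ORDERS))
        return order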
Change-Id: Id1145adff82bc9a01e4eb0f306f0bfa535142459
Currently, when a reprocessing task is scheduled, CloudKitty cleans the
data for the reprocessing period one collect period (one hour by
default) at a time. Therefore, for each of these timeframes, a delete
query is sent to InfluxDB (when it is used as the backend). However,
InfluxDB is not a very optimized time series database for deletion;
thus, this workflow generates considerable overhead and slowness when
reprocessing. If we clean the whole timeframe of the reprocessing task
right away and then just reprocess it, a single delete query is
executed in InfluxDB, which has a similar cost to a delete that removes
the data for a single timeframe.
This patch optimizes the reprocessing workflow to execute batch
cleaning of data in the CloudKitty storage backend.
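A rough sketch of the difference, using the InfluxDB v1 Python client
(the measurement name and time window are placeholders):

    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", database="cloudkitty")

    # Before: one DELETE per collect period (hourly by default) across
    # the whole reprocessing window.
    # After: a single DELETE covering the whole window.
    client.query("DELETE FROM \"dataframes\" "
                 "WHERE time >= '2023-01-01T00:00:00Z' "
                 "AND time < '2023-02-01T00:00:00Z'")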
Change-Id: I8282f44ad837c71df0cb6c73776eafc7014ebedf
The option 'use_all_resource_revisions' is useful when using Gnocchi
with the patch introduced in [1]. That patch can cause queries to
return more than one entry per granularity (timespan), according to
the revisions a resource has. This can be problematic when using the
'mutate' option of CloudKitty. Therefore, this option
('use_all_resource_revisions') allows operators to discard all
data points returned from Gnocchi except the last one in the
granularity queried by CloudKitty. The default behavior is maintained,
meaning CloudKitty uses all of the data points returned.
However, when the 'mutate' option is not used, we need to sum all the
quantities and use that value with the latest version of the attributes
received. Otherwise, we would miss part of the accounting for the
timeframe in which the revision happened.
[1] https://github.com/gnocchixyz/gnocchi/pull/1059
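For illustration, the option sits in the metric's collector extra_args
(the metric and values below are placeholders):

    metrics:
      cpu:
        unit: instance
        extra_args:
          resource_type: instance
          use_all_resource_revisions: false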
Change-Id: I45bdaa3783ff483d49ecca70571caf529f3ccbc3
Gnocchi fixed its `aggregates` API with PR
https://github.com/gnocchixyz/gnocchi/pull/1059. Before that patch,
the `aggregates` API would only return the latest metadata for the
resource of the metric being handled. Therefore, during CloudKitty
processing and reprocessing, there was always the possibility of
using the wrong attribute version to rate the compute resources.
With this patch, we always use the correct metadata for CloudKitty
processing and reprocessing; that is, we use the metadata matching
the timestamp being collected from Gnocchi.
The Gnocchi fix was released in version 4.5.0.
Change-Id: I31bc2cdf620fb5c0f561dc9de8c10d7882895cce
It was discovered that in some situations the same reprocessing task
might be processed simultaneously by different workers, which can
lead to unnecessary processing. This was happening due to the use of
"current_reprocess_time" in the lock name, which could produce
different lock names for the same task. For instance, when a worker
starts processing a brand new reprocessing task and updates
"current_reprocess_time" after reprocessing a few timeframes, other
workers reaching the same locking point would then compute a
different lock name for the same scope ID and reprocess a scope that
is already being reprocessed.
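A minimal sketch of the idea, using an oslo.concurrency-style lock
(the real code uses CloudKitty's coordination helpers; the names below
are illustrative):

    from oslo_concurrency import lockutils

    def lock_name_for(scope_id):
        # The lock name depends only on the scope ID; including the
        # moving "current_reprocess_time" here is what let two workers
        # compute different lock names for the same scope.
        return "reprocess-%s" % scope_id

    with lockutils.lock(lock_name_for("scope-1234")):
        pass  # reprocess the scope here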
Change-Id: I487d0eeb1cedc162d44f8c879a27f924b5c76206
Currently, `doc/source/_static/cloudkitty.conf.sample` is stored in
git; however, building the documentation overwrites it. This is a
problem when building distro packages, as the clean target cannot be
written properly without hacks.
Change-Id: I28fb70e646b000032fb7181a3ffcc0d7097f9dc1
Story: #2010920
Task: #48780
Add a file to the reno documentation build to show release notes for
stable/2023.2.
Use the pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2023.2.
Sem-Ver: feature
Change-Id: I2d1191f45e036e9087618b7f3a2c2f758f28c85d
DevStack is moving to adopt venvs to manage its Python versions.
However, the CloudKitty DevStack integration was not using the
DevStack VENV variable. This, in turn, causes issues with Tempest
tests, as they are based on a DevStack deployment.
We need to merge this patch to fix the Tempest tests.
Change-Id: I17de617557fb86c002814941325d71e3c08e0e72
As per https://bugs.debian.org/1029646, CloudKitty often fails to
build because its unit tests fail during the package build. This error
happens randomly: sometimes it fails, sometimes it does not, but it is
clearly a false positive, because we do not really want the test to
fail in such a case.
This patch makes the failure much less likely (ten times less) by
increasing the tolerance.
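For illustration, the kind of change involved (the test below is
hypothetical, not the actual CloudKitty test):

    import time
    import unittest

    class TestTiming(unittest.TestCase):
        def test_elapsed(self):
            start = time.monotonic()
            time.sleep(1.0)
            elapsed = time.monotonic() - start
            # Widening the tolerance tenfold (e.g. 0.01 -> 0.1) makes
            # spurious failures on loaded build machines far less
            # likely.
            self.assertAlmostEqual(elapsed, 1.0, delta=0.1)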
Change-Id: If217a639f9af1e2693e6a132e46033df6bf96415