There is a need to add a human-readable description to the metric
definition. This can then be used to create custom reports in the
`summary` GET API. The value has to be stored in the backend, as we
already do with the alt_name and unit of the metric.
Depends-On: https://review.opendev.org/c/openstack/cloudkitty/+/861786
Change-Id: Icea8d00eaf3343e59f0f7b2234754f6abcb23258
To facilitate the switch from Elasticsearch to OpenSearch, the ES
backend has been duplicated and renamed where appropriate to OpenSearch.
The OpenSearch implementation was modified in places for compatibility
with OpenSearch 2.x, for example:
- remove mapping name from bulk API URL
- replace put_mapping with post_mapping
This will allow for the future removal of the Elasticsearch backend.
Change-Id: I88b0a30f66af13dad1bd75cde412d2880b4ead30
Co-Authored-By: Pierre Riteau <pierre@stackhpc.com>
Currently, `doc/source/_static/cloudkitty.conf.sample` is stored in
git; however, building the docs overwrites it. This is a problem
when building distro packages, as the clean target cannot be written
properly without hacks.
Change-Id: I28fb70e646b000032fb7181a3ffcc0d7097f9dc1
Story: #2010920
Task: #48780
Problem description
===================
It is not possible to create multiple rating types
for the same metric in Gnocchi, which forces operators
to create multiple metrics for the same Gnocchi resource
type just to define different rating types in CloudKitty
for that resource type.
Proposal
========
We propose to extend the Gnocchi collector to allow operators
to create multiple rating types for the same metric in Gnocchi.
Using this approach we can create, for example, a rating type
for software licenses in a running instance and another rating
type for the instance flavor; this can be implemented using only
one Gnocchi metric that carries the instance's installed software
and flavor metadata.
Change-Id: I69d4ba14cc72ba55e47baa6fd372f2085e1124da
The PyScript process in CloudKitty has been broken for a very long
time. This patch introduces changes required to make it work again.
Change-Id: I143ee6aa4352903921d2ab7b8d8468aedbdd6911
This is a follow-up to:
https://review.opendev.org/c/openstack/cloudkitty/+/867122
We need to add the `[testenv:api-ref]` section to `tox.ini` to be able
to run `tox -e api-ref`, which is what Zuul uses to build the API refs.
Sphinx had to be capped at version 4, as otherwise `tox` fails
with the "add_content() takes 2 positional arguments but 3 were given"
error message.
Change-Id: I65b008152e2bc64f29229996c87ad4587fe85043
This is the first step towards moving API ref/docs to
https://docs.openstack.org/api
The `conf.py` file is a copy of the file from `doc/source` and all the
other files simply need moving to the new location.
Change-Id: I9ecf84b53274d9b86f05800fc9816de275f3e9c5
This mutator can map arbitrary values to new values. This is useful
for metrics that report a resource status as their value, when multiple
statuses are billable.
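The mapping behaviour can be sketched as follows; the `MUTATE_MAP` contents
and the `mutate` helper below are illustrative, not CloudKitty's actual
configuration schema:

```python
# Illustrative map mutator: a status string reported as the metric value
# is mapped to a billable quantity (all names/values here are examples).
MUTATE_MAP = {
    "active": 1.0,   # running resources are billed fully
    "paused": 0.5,   # paused resources are billed at half rate
    "stopped": 0.0,  # stopped resources are free
}

def mutate(value, mutate_map, default=0.0):
    """Map an arbitrary metric value to a new value."""
    return mutate_map.get(value, default)

print(mutate("active", MUTATE_MAP))   # 1.0
print(mutate("unknown", MUTATE_MAP))  # 0.0
```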
Change-Id: I8fcb9f2aa4ef23432089bfd6351a9c03ce3cf941
Currently, CloudKitty only allows rating rule prices up to
"99.999999999999999999999999"; prices equal to or higher than 100
cannot be used. This patch enables operators to use any value between
0 and 999999999999 (in the integer part of the number), which provides
more flexibility.
Change-Id: I2ff4a09ce3b0fdf0b08a7e565b58794b25ac5ade
Story: 2009947
Task: 44865
This commit adds an API enabling the POST operation to create scopes in
an ad hoc fashion. This is useful for operators to register scopes
before they are created as resources in the collected backend so their
processing can be discarded right away, for example for trial
projects/accounts.
Otherwise, we need to wait for the resources to be created; then,
for example, Ceilometer has to monitor these resources and persist
measures in Gnocchi, CloudKitty has to discover the scopes, and only
then can we disable their processing.
Change-Id: I3e947d36c9d5d5da07115d35dde578ae300cbe5c
This introduces GET methods for rating modules in the v2 API. Work items:
* Implement the "/v1/rating/modules" endpoints in the v2 API, including
unit tests and documentation
Story: 2006572
Task: 36677
Co-Authored-By: Rafael Weingärtner <rafael@apache.org>
Change-Id: I0a2f24051d268a396955c8df7e3e5615546f6293
- coordination_url is not a driver URL but a backend URL. The driver
is determined according to the backend.
- The parameter description should not use abbreviations, as these
can confuse users.
Change-Id: I16eb47e161ae826393d113082091c292d097fc03
This commit adds support for adding optional prefix and/or suffix to
Prometheus queries.
For example, this can be used to perform vector matches between the
collected metric and another one, to gather additional metadata.
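As a rough sketch of the idea (the option handling below is hypothetical;
only the PromQL shape is the point):

```python
# Hypothetical sketch: wrap the base PromQL query with an optional prefix
# and suffix supplied by the operator.
def build_query(metric, period, prefix="", suffix=""):
    base = f"increase({metric}[{period}s])"
    return f"{prefix}{base}{suffix}"

# Vector match against another metric to pull in extra metadata:
q = build_query(
    "cpu_usage_seconds_total", 3600,
    suffix=" * on(instance) group_left(flavor) instance_metadata",
)
```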
Change-Id: I725f0f5ad00b67f55bcacaf8447e050af3815c73
The goal of this patch is to introduce support for multi-valued
parameters. For instance, for the `type` parameter, even though the code
was treating it as a possible list of types, the API would not allow a
user to send multiple types.
This patch enables users to send filters with multiple values, which
can be useful for filtering by several project_ids or by different
metric types, for instance.
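The mechanics rely on repeated query parameters arriving as lists; a
minimal sketch using only the standard library (the parameter name and
values are illustrative):

```python
from urllib.parse import parse_qs

# Repeated query parameters are parsed into lists, which is what
# multi-valued filters build on.
qs = "filters=project_id:abc&filters=project_id:def&filters=type:instance"
params = parse_qs(qs)

filters = {}
for item in params.get("filters", []):
    key, _, value = item.partition(":")
    filters.setdefault(key, []).append(value)
# filters == {"project_id": ["abc", "def"], "type": ["instance"]}
```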
Change-Id: I59397b33d014709eb976c78d517f009b8a2be4cf
The V2 summary endpoint uses a quite unconventional data format in
the response. Currently, the format is the following:
```
{"total": <number of elements in the response>,
"results": [array of arrays of data],
"columns": [array of columns]}
```
To process this, we need to find the index of a column in the column
list, and with this index, we retrieve the data in the array of data
that is found in the array of results. The proposal is to use the
following format in the response.
```
{"total": <number of elements in the response>,
"results": [array of objects/dictionary]}
```
With this new format, one does not need to look up the index of a
column to retrieve data from an entry; the data can be retrieved
directly by column name, which makes the client code feel more
natural. To maintain compatibility, the new format is only applied
when the `response_format` option is sent to CloudKitty.
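The difference between the two layouts can be illustrated with a small
conversion (the field names are made up for the example):

```python
# Legacy layout: data is addressed by the position of its column.
legacy = {
    "total": 2,
    "columns": ["project_id", "qty", "rate"],
    "results": [["abc", 10, 2.5], ["def", 4, 1.0]],
}

# New layout: each result row becomes a dictionary keyed by column name.
new = {
    "total": legacy["total"],
    "results": [dict(zip(legacy["columns"], row))
                for row in legacy["results"]],
}
# new["results"][0]["rate"] is now readable without an index lookup.
```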
Depends-on: https://review.opendev.org/c/openstack/cloudkitty/+/793973
Change-Id: I5869d527e6e4655c653b6852d6fb7bebc9d71520
This patch adds two options in fetcher_keystone to filter which tenants
should be rated:
ignore_disabled_tenants (Default=False)
ignore_rating_role (Default=False)
In our case we currently have 2k projects (and growing) and we want
to rate all active projects, so checking the rating role is useless
and consumes resources for nothing. Besides, CloudKitty rates projects
regardless of whether they are enabled or disabled, which is also
useless and wastes resources in our case.
Change-Id: I6479d76c367dc4217bce4de9c3db41c4612f0397
This patch adds active status fields to the storage state table
(cloudkitty_storage_states): a boolean column called "active",
which indicates whether the CloudKitty scope is active for billing,
and another one called "scope_activation_toggle_date" (a timestamp
field) that stores the latest timestamp when the scope moved between
the active/deactivated states. Then, during CloudKitty processing, we
check the "active" column. If the resource is not active, we ignore
it during the processing.
Moreover, we introduce an API to allow operators to set the "active" field.
The "scope_activation_toggle_date" field will not be exposed for operators
to change; it is updated automatically according to changes in the "active"
field.
This patch adds a new HTTP method to the "/v2/scope" endpoint: operators
can use the PATCH HTTP method to patch a storage scope. The API requires
the scope_id, and it takes into account some of the fields we allow
operators to change, the "active" field being one of them.
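A minimal client-side sketch of the new method, using only the standard
library (the host, port, and payload keys are hypothetical):

```python
import json
import urllib.request

# Build a PATCH request toggling a scope's "active" field; the request is
# only constructed here, not sent.
payload = json.dumps({"scope_id": "project-abc", "active": False}).encode()
req = urllib.request.Request(
    "http://cloudkitty.example.com:8889/v2/scope",
    data=payload,
    method="PATCH",
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send it against a real deployment.
```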
Change-Id: Ia02c2eeb98021c60549cb8deab6f2e964e573f1e
Implements: https://review.opendev.org/c/openstack/cloudkitty-specs/+/770928/
This patch proposes a method for operators to customize the aggregation
query executed against Gnocchi. By default, we use the following query:
(aggregate RE_AGGREGATION_METHOD (metric METRIC_NAME AGGREGATION_METHOD))
Therefore, this option enables operators to take full advantage of
operations available in Gnocchi, such as any arithmetic operations,
logical operations and many others. When using a custom aggregation
query, one can use the placeholders `RE_AGGREGATION_METHOD`,
`AGGREGATION_METHOD`, and `METRIC_NAME`: they will be replaced at
runtime by values from the metric configuration.
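As an illustration of the substitution (the custom query below is just an
example operation dividing the aggregate by 3600; the `render` helper is a
sketch, not CloudKitty's actual implementation):

```python
# Example custom aggregation query using the documented placeholders.
custom_query = ("(/ (aggregate RE_AGGREGATION_METHOD "
                "(metric METRIC_NAME AGGREGATION_METHOD)) 3600)")

def render(query, metric_name, aggregation, re_aggregation):
    # Replace RE_AGGREGATION_METHOD before AGGREGATION_METHOD, since the
    # former contains the latter as a substring.
    return (query
            .replace("RE_AGGREGATION_METHOD", re_aggregation)
            .replace("AGGREGATION_METHOD", aggregation)
            .replace("METRIC_NAME", metric_name))

rendered = render(custom_query, "cpu", "mean", "max")
# rendered == "(/ (aggregate max (metric cpu mean)) 3600)"
```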
Different use cases can be addressed with custom queries, such as:
handling RadosGW usage data trimming, which causes a decrease in the
usage data values; handling the Libvirt attach/detach of disks,
migration of VMs, and start/stop of VMs, all of which zero the usage
data gathered by Ceilometer compute; and many other cases where one
might desire a more complex operation to be executed on the data
before CloudKitty rates it.
Change-Id: I3419075d6df165409cb1375ad11a5b3f7faa7471
By default, not even an admin can use the get_summary endpoint with
all_tenants=True or with a tenant_id parameter. This commit fixes that.
The rule is now the same as how Cinder defines admin_or_owner.
Change-Id: I3e34927e8ab88f25d2975b4dbac89b52a7d94c98
As per the community goal of migrating the policy file format
from JSON to YAML[1], we need to do two things:
1. Change the default value of the '[oslo_policy] policy_file'
config option from 'policy.json' to 'policy.yaml', with
upgrade checks.
2. Deprecate the JSON-formatted policy file on the project side
via warnings in the docs and release notes.
Also replace policy.json with policy.yaml references in the docs.
[1]https://governance.openstack.org/tc/goals/selected/wallaby/migrate-policy-format-from-json-to-yaml.html
Change-Id: I608d3f55dfa9b6052f92c4fd13f2aae6d714e287
This repo has not been testing lower-constraints at all due to a
broken install_command. If you look at any lower-constraints run and
compare the installed Python packages with lower-constraints, you see
that they are completely different.
This change removes install_command and updates deps in tox.ini to
follow best practices (moving constraints into deps).
It also updates lower-constraints to newer versions.
Remove broken hacking test.
Co-Authored-By: Justin Ferrieu <jferrieu@objectif-libre.com>
Change-Id: I13daab9e53617266beff7053e50779d1f281802c
This option is useful when using Gnocchi with the patch introduced in
https://github.com/gnocchixyz/gnocchi/pull/1059. That patch can
cause queries to return more than one entry per granularity
(timespan), depending on the revisions a resource has. This can be
problematic when using the 'mutate' option of CloudKitty. Therefore,
we propose this option to allow operators to discard all but the
last datapoint returned by Gnocchi in the granularity queried by
CloudKitty. The default behavior is maintained, which means that
CloudKitty uses all of the datapoints returned.
Change-Id: I051ae1fa3ef6ace9aa417f4ccdca929dab0274b2
When working with some types of resources, and for some specific billing
requirements, we need to set costs that use more than 8 digits on the
right side of the decimal point. By default, the Python Decimal object
supports 28 digits. Therefore, it makes sense to change the MySQL database
schema of CloudKitty to use 28 digits on the right side as well. This will
avoid confusion for people when using this feature.
One could argue that using the `factor` option is also a solution for that,
but as mentioned, for people used to Python, the MySQL DB using a different
precision than the one Python supports for the data type used to represent
the `cost` field can cause confusion.
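The Python-side precision referred to above can be checked directly:

```python
from decimal import Decimal, getcontext

# Python's default Decimal context carries 28 significant digits, which
# is the precision the MySQL schema is being aligned with.
print(getcontext().prec)  # 28 by default

# A cost with more than 8 fractional digits is preserved intact:
cost = Decimal("0.000000012345678901234567890")
```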
Change-Id: Ifbf5b2515c7eaf470b48f2695d1e45eeab708a72
Currently, only the quantity and price are returned in the v2 summary
GET API. However, that might not satisfy all users' needs. This patch
enables custom fields to be used in the summary GET API; thus, users,
operators and business people can create richer reports via the summary
GET API.
Change-Id: Id4dd83d0703ec0dff32510e6dd1d2dad9b181306
Switch to the openstackdocstheme 2.2.1 and reno 3.1.0 versions. Using
these versions will notably allow:
* Linking from the HTML to the PDF document
* Parallel building of documents
* Fixing some rendering problems
Update the Sphinx version as well.
Set openstackdocs_pdf_link to link to PDF file. Note that
the link to the published document only works on docs.openstack.org
where the PDF file is placed in the top-level html directory. The
site-preview places the PDF in a pdf directory.
Disable openstackdocs_auto_name to use 'project' variable as name.
Change pygments_style to 'native' since old theme version always used
'native' and the theme now respects the setting and using 'sphinx' can
lead to some strange rendering.
openstackdocstheme renames some variables, so follow the renames
before the next release removes them. A couple of variables are also
no longer needed; remove them.
See also
http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014971.html
Change-Id: I6be1174686cb1d8f11e8cb4be58c0e739bf0f931