fix: change tox.ini
fix: change queries for list_dimension_names and list_dimension_values because of influxdb time filter problem
fix: remove build_sphinx group from setup.cfg
fix: handle hashlib security problem
Change-Id: I0d31a8db5ed71c70e7b878ce5e7940e041d0fa43
Change-Id: I6f7066da10e834550cbf0c053c7bf425ac0ead93
Change-Id: If9575aee73d600bbc84fcdf58deb1c57b508d9c2
Change-Id: If515eaeee7539da3ca49997e88785dc65572b334
The pyparsing library was recently updated in global requirements [1].
Since version 3.0.0 the operatorPrecedence method has been renamed to infixNotation [2].
[1]62f92c0187
[2]16b766b97c/CHANGES (L598)
Change-Id: I3bfefe5b9bc601f383e0b9d80046de387e420fd8
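One way to tolerate both names is a getattr fallback; this is only a sketch of the pattern (the patch itself may simply switch to infixNotation), and the stand-in modules below are hypothetical, where real code would pass the imported pyparsing module:

```python
import types

# Hypothetical stand-ins for pre-3.0 and >=3.0 pyparsing modules;
# real code would pass the imported pyparsing module instead.
old_pyparsing = types.SimpleNamespace(operatorPrecedence=lambda *a: "old")
new_pyparsing = types.SimpleNamespace(infixNotation=lambda *a: "new")

def get_infix_notation(mod):
    # Prefer the >=3.0.0 name, fall back to the pre-3.0 one.
    return getattr(mod, "infixNotation", None) or getattr(mod, "operatorPrecedence")
```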
- The method create_notification returns a UUID string; if it isn't
mocked, the notification's id will be:
<MagicMock name='NotificationsRepository().create_notification()' ...>
which can't be dumped to JSON later.
- Bump Mako to 1.0.7 in lower-constraints.txt
- Bump decorator to 4.1.0 in lower-constraints.txt
Change-Id: I1ba563fd3144241127efe1cedf8853603dcca008
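A minimal sketch of the fix, assuming a MagicMock-based repository (the names are illustrative): give the mocked create_notification a real UUID string as its return_value so the resulting id can be dumped to JSON.

```python
import json
import uuid
from unittest import mock

repo = mock.MagicMock()
# Without this return_value the id would be a MagicMock instance,
# whose value cannot be serialized by json.dumps.
repo.create_notification.return_value = str(uuid.uuid4())

notification_id = repo.create_notification()
payload = json.dumps({"id": notification_id})  # serializes cleanly
```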
Replace removed Binary with LargeBinary import
Remove reflect=True from Alembic MetaData
Replace removed idle_timeout with connection_recycle_time option
- Binary was removed in SQLAlchemy 1.4.x [1]
- SQLAlchemy was updated to 1.4.15 in u-c [2]
- idle_timeout was removed in oslo.db 10.0.0 [3]
- oslo.db was updated to 10.0.0 in u-c [4]
- idle_timeout was already deprecated and renamed as
connection_recycle_time [5]
[1] https://github.com/sqlalchemy/sqlalchemy/issues/6263#issuecomment-819645247
[2] dc86260b28
[3] a857b83c9c
[4] f322cc13d8
[5] 6634218415
Change-Id: I13ec9c2b53174cfb2e3cb990ec773588cf68007c
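A sketch of the renamed oslo.db option in a monasca-api style configuration file (the value shown is illustrative):

```ini
[database]
# idle_timeout was removed in oslo.db 10.0.0; the renamed option is:
connection_recycle_time = 3600
```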
- Support the new Falcon 3.0.0 and 3.0.1 while keeping compatibility
with version 2.0.0
- Remove Falcon's class OptionalRepresentation
Starting from Falcon 3.0.0 the class OptionalRepresentation
was removed. [1]
- Remove unnecessary URL slashes which are not compatible
with Falcon >= 3.0.0
- Keep falcon.API instead of the new falcon.App to keep support for
version 2.0.0
- Temporarily disable the docker-build and docker-publish Zuul jobs.
[1] https://falcon.readthedocs.io/en/stable/changes/3.0.0.html#breaking-changes
Change-Id: Ifb067429dd66fd350110187ac3a8b6a9977bad90
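A hedged sketch of the version gate used to keep 2.0.0 support, assuming the installed version string is available (e.g. via falcon.__version__); the helper name is illustrative:

```python
def is_falcon_3(version_string):
    # Branch on the major version where 3.x behaviour differs
    # (e.g. trailing URL slashes, removed OptionalRepresentation).
    return int(version_string.split(".")[0]) >= 3
```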
As per the community goal of migrating the policy file
format from JSON to YAML [1], we need to do two things:
1. Change the default value of the '[oslo_policy] policy_file'
config option from 'policy.json' to 'policy.yaml', with
upgrade checks.
2. Deprecate the JSON-formatted policy file on the project side
via warnings in the docs and release notes.
Also replace policy.json with policy.yaml references in the docs and tests.
[1]https://governance.openstack.org/tc/goals/selected/wallaby/migrate-policy-format-from-json-to-yaml.html
Change-Id: Ibfb162f88cb04c0b2af3fbf41cfcd96bc7e351be
- Add a version check using the ping() method;
if it fails, possibly because of an old version,
the flow falls back to SHOW DIAGNOSTICS.
- Keep the timestamp output at 3 decimal digits, as
it currently works, independently of the version
of the Influx Python client (v5.2.3 or v5.3.1).
- Remove support for InfluxDB (the database)
older than v0.11.0
- Unit tests: add data to handle more than 3 decimal
digits in timestamps, update the tests to use mocks
for InfluxDB from_0.11.0, and create the mocks with
from_0.11.0 explicitly.
- This change fixes monasca-tempest-python3-influxdb
Zuul job.
Change-Id: I5f8e6d2f0b56813f54fe025f91996b9d6863eadc
Story: 2007624
Task: 39658
As per the Victoria cycle testing runtime and community goal [1]
we need to migrate upstream CI/CD to Ubuntu Focal (20.04).
- Bump the lower constraints for required deps which added Python 3.8
support in their later versions.
- Change the way Zookeeper is installed and configured:
install Zookeeper from the official Apache tarball,
add the possibility to set a specific Zookeeper version,
and make a minor change to the Zookeeper logger.
- Use the MariaDB JDBC driver for monasca-thresh in DevStack, since Drizzle
isn't compatible with MySQL Server 8.0.x, which is the default in Focal.
- Python 3.8 doesn't allow dictionary keys to change during
iteration.
Fixes RuntimeError: dictionary keys changed during iteration.
Technical details:
It runs fine in py27: 5 iterations
It is risky in py37: 7 iterations
It is forbidden in py38: raises RuntimeError
Fixed with list(dic.items()) or tuple(dic.items())
dic = {'1': 'a', '2': 'b', '3': 'c', '4': 'd', '5': 'e'}
for key, value in dic.items():
    print("Key: {0} Value: {1}".format(key, value))
    del dic[key]
    print(dic)
    dic[key] = value
    print(dic)
Story: #2007865
Task: #40197
Depends-On: https://review.opendev.org/756859
Change-Id: Ieb4cf38038ffb4d1a152f8ab3b64a14098c7cbb3
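The fix can be illustrated with a runnable snippet: iterating over a snapshot of the items lets the loop mutate the dict safely on Python 3.8.

```python
dic = {'1': 'a', '2': 'b', '3': 'c', '4': 'd', '5': 'e'}
iterations = 0

# list(dic.items()) takes a snapshot, so deleting and re-adding keys
# inside the loop no longer raises RuntimeError on py38.
for key, value in list(dic.items()):
    iterations += 1
    del dic[key]
    dic[key] = value
```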
We change the default value of kafka.legacy_kafka_client_enabled from
True to False. The use of the new Confluent Kafka client is recommended.
The DevStack plugin does not set this option anymore.
Depends-On: https://review.opendev.org/740959
Depends-On: https://review.opendev.org/740966
Change-Id: I4d57b8893a6a131769009dc3299789d3fc89bab6
Story: 2007924
Task: 40338
The mock third party library was needed for mock support in py2
runtimes. Since we now only support py36 and later, we can use the
standard lib unittest.mock module instead.
Change-Id: I4c7cb63e0a816b361c2544b1be34d8a6dadeb5c0
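The switch is import-only; the patching API itself is unchanged. A minimal sketch:

```python
from unittest import mock  # previously: import mock

import uuid

# mock.patch works exactly as before, only the import path differs.
with mock.patch("uuid.uuid4", return_value="fixed-id"):
    patched = uuid.uuid4()
unpatched = uuid.uuid4()
```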
Now that we no longer support py27, we can use the standard library
unittest.mock module instead of the third party mock lib.
Change-Id: Ib2843c62ff29b269139981f067ae6afcab624799
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
The repo is Python 3 now, so update hacking to version 3.0 which
supports Python 3.
Fix problems found by updated hacking version.
Remove hacking and friends from lower-constraints; they are not needed
there at all.
Change-Id: I35d848e9af297d3561ea2838a4808166d1c36601
The change updates the imports to use simplejson library and
monasca_api.common.rest instead of monasca_common.rest, since
it was moved to this project during the API's merge.
Temporarily set following jobs as non-voting:
* monasca-tempest-python3-influxdb
* build-monasca-docker-image
* publish-monasca-api-docker-image
Change-Id: Ife3d2c9795a9dc406c2927cc9a077dda01c183c6
Story: 2007549
Task: 39389
As already implemented for metrics, also logs should be published to
Apache Kafka using the new Kafka client.
Change-Id: Ie909b13c7692267e787d481f4de658db3b07a1c4
Story: 2003705
Task: 36866
In some cases, users may want to send periodic notifications for
notification types other than webhooks.
Story: 2006837
Task: 37417
Depends-On: https://review.opendev.org/#/c/694596
Change-Id: Ia2c50e623aa79e06d2d35df4735fb2805fbf40ed
The Stein release does away with the concept of built in notifications,
in favour of treating all notification types equally.
This patch fixes an issue with the DB schema migration associated
with this change, which will fail if any notifications using
built-in notification types are configured at the time of the upgrade.
Story: 2006984
Task: 37746
Change-Id: I2e3f08edf1ab6aec526ad93d04effb91ddca600a
The change sets queue.buffering.max.messages configuration option for
Kafka producer effectively limiting the number of messages in the buffer
before sending them to Apache Kafka.
Depends-On: https://review.opendev.org/694738
Change-Id: I6ebd4e21e9d55d1ac836e92dd8bf02a678170c68
Story: 2006059
Task: 37532
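A sketch of the producer configuration (the broker address and limit value are illustrative; the property name is a librdkafka/confluent-kafka setting):

```python
producer_config = {
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    # Cap the number of messages buffered before delivery to Kafka:
    "queue.buffering.max.messages": 100000,  # illustrative value
}
# confluent_kafka.Producer(producer_config) would consume this dict.
```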
When a large post (> 10s of MB) is made to the Monasca API an attempt
is made to write these metrics to the metrics topic in Kafka. However, due to
the large size of the write, this can fail with a number of obscure errors
which depend on exactly how much data is written. This change supports
splitting the post into chunks so that they can be written to Kafka in
sequence. A default has been chosen so that the maximum write to Kafka
should be comfortably under 1MB.
A future extension could support splitting the post by size, rather than the
number of measurements. A better time to look at this may be after the
Python Kafka library has been upgraded.
Story: 2006059
Task: 34772
Change-Id: I588a9bc0a19cd02ebfb8c0c1742896f208941396
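A minimal sketch of the chunking idea (the chunk size is illustrative, not the chosen default): split the posted measurements into fixed-size slices and write each to Kafka in sequence.

```python
def chunks(measurements, size):
    # Yield successive fixed-size slices of the measurement list.
    for i in range(0, len(measurements), size):
        yield measurements[i:i + size]

batches = list(chunks(list(range(10)), 4))  # sizes 4, 4, 2
```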
This change solves the issue where, after removing the deterministic option
of an alarm definition by editing it, the associated alarm(s) continue(s)
to behave as deterministic. Please see the story for more info.
Added a unit test.
Change-Id: I7743f2d2b8cd7c83541f77c7821f9512fb8abc36
story: 2006750
task: 37233
There are many possible formats for an IPv6 address with a port,
for example [::1]:80 or [2001:db8:85a3::8a2e:370]:7334.
Parsing these in a more standard way is provided by the oslo_utils.netutils
parse_host_port() method [1].
Also unquote the '[]' from SERVICE_HOST in the IPv6 case so that
the DB can listen on the correct address.
Story: #2006309
Task: #36028
[1] 1b8bafb391/oslo_utils/netutils.py (L37)
Change-Id: I2d0ef40ab71f60564549d031185f99bc7eec40a7
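A stdlib sketch of what parse_host_port() provides (real code uses oslo_utils.netutils; this mimic only illustrates the bracketed-IPv6 handling):

```python
from urllib.parse import urlsplit

def parse_host_port(address, default_port=None):
    # urlsplit understands the [host]:port bracket notation, so
    # '[::1]:80' splits into hostname '::1' and port 80.
    parts = urlsplit("//" + address)
    return parts.hostname, parts.port or default_port
```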
At present, all time series are accumulated in the same database in
InfluxDB. This makes queries slow for tenants that have less data. This
patch enables the option to use separate database per tenancy.
This changeset implements the changes on monasca-api which handles read
requests to InfluxDB database.
It also updates the relevant docs providing link to the migration tool
which enables users to migrate their existing data to a database per
tenant model.
Change-Id: I7de6e0faf069b889d3953583b29a876c3d82c62c
Story: 2006331
Task: 36073
At present, dimensions are not scoped by a time window, which makes
dimension-related queries to large databases time out, because they search
all of time instead of the time window specified in the Grafana app.
This commit implements the server-side changes required to scope the
search query by the time window specified in the app.
Change-Id: Ia760c6789ac0063b8a25e52c9e0c3cc3b790ad2d
Story: 2005204
Task: 35790
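An illustrative sketch of scoping the query by the app's time window (the query text and function name are assumptions, not the exact statement the API builds; support for time filters in InfluxDB meta queries also varies by version):

```python
def build_dimension_query(measurement, start_ms, end_ms):
    # Bound the search to the dashboard's window instead of all time.
    return ('show tag keys from "{0}" '
            'where time >= {1}ms and time <= {2}ms'
            .format(measurement, start_ms, end_ms))
```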
The change introduces the possibility to run the API with the new
confluent-kafka client. It has to be enabled in the configuration file.
Story: 2003705
Task: 35859
Depends-On: https://review.opendev.org/680653
Change-Id: Id513e01c60ea584548c954a8d2e61b9510eee8de
This change copies the code from monasca-common used by the three
monasca APIs into monasca-api for the Merge-APIs target.
After merging the APIs the duplicated code can be removed from
monasca-common.
Change-Id: I52d36fad846637baf10516f5cbbedc541d4c2064
Story: 2003881
Task: 30427
In Train, we will use Python 3.6 and 3.7 for the python3 runtime
in our gate jobs [1]. This commit also adds Python 3.7.
In Python 3.7 async is a reserved keyword, so it is replaced with is_async.
[1] https://governance.openstack.org/tc/reference/runtimes/train.html
Change-Id: I05f40c4a9304cad551cefd4f10c3ba9a72d69a6f
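For illustration: since async is a keyword from Python 3.7, a parameter like `def create(self, async=False)` is now a SyntaxError, and the rename is mechanical (the names below are illustrative, not the API's actual signatures):

```python
def create_notification(name, is_async=False):  # was: async=False
    return {"name": name, "is_async": is_async}
```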
* Add unit tests for the helpers module
* Add unit tests for the metrics and notification endpoints
Story: 2006177
Task: 35701
Change-Id: Ie424f032588e4ad6afea3ea599b6febf9dd7f479
This commit updates hacking version in test-requirements and
fixes some related pep8 issues
Change-Id: I67d85eb5bef72c38cc5360b5625d6b1c37adb40f
Story: 2004930
Task: 29315
Creating a Cassandra connection can now be limited by the connection_timeout option.
Story: 2005450
Task: 30502
Change-Id: I8803e28fe8c2c11e819be44db4ef93cb19b47a1d
Falcon 2.0.0 introduces some breaking changes. The relevant ones here are:
- falcon.testing.TestCase.api property was removed
- falcon.testing.TestBase class was removed
Additionally, the default behaviour for handling trailing slashes on
URIs also changed:
https://falcon.readthedocs.io/en/latest/user/faq.html#how-does-falcon-
handle-a-trailing-slash-in-the-request-path
This commit adds support for using the new release. It currently makes
no effort to be backwards compatible with older releases.
The change also updates the requirements for influxdb and sphinx
libraries to match global requirements.
Until the monasca-log-api implementation is updated to support the new
version of Falcon, `monascalog-python3-tempest` is marked as
non-voting, as agreed in the team meeting.
Story: 2005695
Task: 31015
Change-Id: I03bc8d502a333a7a71d9c12b8ddc7c5dc0a4f588
* Brings the alarms count endpoint to parity with the alarms list endpoint
* Brings the alarms count endpoint to parity with the alarms count endpoint
in the deprecated Java API
* Allow metric_dimensions filter to filter on multiple dimension values:
metric_dimensions=dns|compute|nova
Change-Id: I46ca0e6a6da46cb850af44768de237e41a43484a
Story: 2005311
Task: 30216
It is possible for a row in Cassandra to have a missing metric_id
(it shows as 'null' in cqlsh). This causes an ugly NoneType error
to be passed up to the user on the command line for
'monasca metric-list'.
The fix is to detect the missing value, log an error, and return the
row with None for the metric_id.
Change-Id: Ie617932c6b12a6cfe441510e120bb77a3470b9cf
Story: 2005305
Task: 30194
It is impossible to execute this code because the
old_sub_alarm_defs_by_id value is always an empty dict.
Change-Id: Id0ae84c4bc96a18185db1e825cd11c7d2e88d2b1