Commit Graph

16 Commits

Author SHA1 Message Date
Witek Bedyk 811acd76c9 Remove project content on master branch
This is step 2b of the repository deprecation process as described in [1].
The project's deprecation has been announced here [2].

[1] https://docs.openstack.org/project-team-guide/repository.html#step-2b-remove-project-content
[2] http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016814.html

Depends-On: https://review.opendev.org/751983
Change-Id: I83bb2821d64a4dddd569ff9939aa78d271834f08
2020-09-15 10:12:44 +02:00
melissaml 040211ba89 Update hacking version to latest
This commit updates the hacking version to 1.1.x and fixes related
pep8 issues.
It also adds pycodestyle to test-requirements.

Story: 2004930
Task: 29318

Co-Authored-By: Akhil Jain <akhil.jain@india.nec.com>

Change-Id: Id3ad30d23b902ee6f7277f7ec20d7d523df232f6
2019-06-12 13:22:41 +05:30
Amir Mofakhar 37d4f09057 Update pep8 checks
* set the maximum line length to 100
* cleaned up the code for pep8

Change-Id: Iab260a4e77584aae31c0596f39146dd5092b807a
Signed-off-by: Amir Mofakhar <amofakhar@op5.com>
2018-04-18 10:05:00 +02:00
agateaaa 2da390414e Hourly aggregation accounts for early arriving metrics
With this change, the pre-hourly processor, which does the
hourly aggregation (second stage) and writes the
final aggregated metrics to the metrics topic in kafka,
now accounts for any early arriving metrics.

This change, along with two previous changes
to the pre-hourly processor that added
1.) configurable late metrics slack time
(https://review.openstack.org/#/c/394497/), and
2.) batch filtering
(https://review.openstack.org/#/c/363100/),
makes sure all late arriving or early
arriving metrics for an hour are aggregated
appropriately.

Also improved the MySQL offset handling so that
the deletion of excess revisions is called only once.

Change-Id: I919cddf343821fe52ad6a1d4170362311f84c0e4
2017-04-17 15:29:34 -07:00
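
The batch-filtering and slack-time behavior this commit describes can be pictured with a short sketch. This is an illustration only, with hypothetical names, assuming metrics carry a datetime timestamp: the processor waits a configurable slack past the hour for late arrivals, then keeps only metrics whose event time falls inside the hour being aggregated, so early-arriving metrics for the next hour stay out of the current batch.

```python
from datetime import timedelta

SLACK_MINUTES = 10  # stand-in for the configurable late-metrics slack time

def filter_batch_for_hour(metrics, hour_start):
    """Keep only metrics whose event time belongs to the given hour."""
    hour_end = hour_start + timedelta(hours=1)
    return [m for m in metrics if hour_start <= m["timestamp"] < hour_end]

def should_process(now, hour_start):
    """Run the hourly aggregation only after the slack window has passed."""
    return now >= hour_start + timedelta(hours=1, minutes=SLACK_MINUTES)
```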
agatea 8aef98d2d2 Use kafka_lib library in monasca common
kafka-python 0.9.5 was moved into monasca-common.
The upstream community wants to move to a
newer version of kafka-python, which has a number of
performance problems.
See https://review.openstack.org/#/c/420579/
and
https://review.openstack.org/#/c/424840/
Monasca Transform
uses the kafka-python library to write aggregated
metrics to kafka as well as to read offset information
in the case of hourly aggregation. Since the long-term
plan is to move to pykafka in the future, we will
have to investigate whether that functionality
is available.

Change-Id: I831c9e259b3d7b92fb2834193034e15b62c80c37
2017-03-15 16:22:50 -07:00
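
For context, here is a minimal sketch of the kafka-python 0.9.5 producer pattern that monasca-common's kafka_lib wraps; the broker address, topic, and payload are placeholders, and monasca-transform's actual call sites go through the monasca-common wrapper rather than kafka-python directly.

```python
from kafka import KafkaClient, SimpleProducer

client = KafkaClient("localhost:9092")  # placeholder broker address
producer = SimpleProducer(client)

# Write one aggregated-metric envelope to the metrics topic.
producer.send_messages("metrics", b'{"metric": {"name": "mem.total_mb_agg"}}')
client.close()
```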
David C Kennedy 00b4797a65 Corrected catch up aggregation logic
Fixed a bug where the hourly aggregation would run at every iteration
if the hour is zero (midnight) because zero is falsy.

Change-Id: I9652f02aea30f3ddb6f154db716aa4057455be06
2017-02-14 14:56:23 +00:00
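
The bug class fixed here is easy to reproduce in miniature: at midnight the hour is 0, and a truthiness test treats 0 as "unset", so the catch-up aggregation fires on every iteration. Names below are illustrative, not the actual monasca-transform code.

```python
def run_catch_up_aggregation():
    print("running catch-up hourly aggregation")  # stub for illustration

last_hour_processed = 0  # midnight: a valid, already-processed hour

# Buggy: 0 is falsy, so this wrongly re-runs the aggregation every iteration.
if not last_hour_processed:
    run_catch_up_aggregation()

# Fixed: only re-run when no hour has been recorded at all.
if last_hour_processed is None:
    run_catch_up_aggregation()
```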
Ashwin Agate c189feeb8b Delete hourly offsets from offsets table
The pre-hourly processor fails if offsets recorded in the
kafka_offsets table no longer exist in kafka.
This change deletes those offsets from the kafka_offsets
table so that the pre-hourly processor can resume
processing with the next run.

Change-Id: I017c271e630fdf6de05a73b3bfcb14f5ed18615f
2017-01-09 19:35:51 +00:00
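
A hedged sketch of the recovery step this commit describes: when kafka no longer holds the recorded offsets, the stale rows are removed so the next run starts fresh. Only the kafka_offsets table name comes from the commit message; the column name, app name, and DB-API connection are assumptions.

```python
def delete_stale_offsets(db_connection, app_name="pre_hourly_processor"):
    """Remove recorded offsets that no longer exist in kafka (sketch)."""
    cursor = db_connection.cursor()
    cursor.execute(
        "DELETE FROM kafka_offsets WHERE app_name = %s", (app_name,))
    db_connection.commit()
```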
David C Kennedy 26e53336d4 Add configurable amnesty period for late metrics
Added a configuration option to allow the pre-hourly transformation to be
done at a specified period past the hour. This includes a check to
ensure that, if processing for the hour has not been done yet but is
overdue, it is done at the earliest opportunity.

Change-Id: I8882f3089ca748ce435b4e9a92196a72a0a8e63f
2016-11-22 13:03:52 +00:00
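
The scheduling rule reads roughly as follows in a hypothetical sketch (none of these names are monasca-transform's actual ones): aggregate the previous hour once a configured number of minutes past the hour, and immediately if processing is overdue.

```python
from datetime import timedelta

MINUTES_PAST_HOUR = 10  # assumed value for the new configuration option

def should_run(now, last_hour_done):
    """last_hour_done is the start of the last hour already aggregated."""
    current_hour = now.replace(minute=0, second=0, microsecond=0)
    previous_hour = current_hour - timedelta(hours=1)
    if last_hour_done >= previous_hour:
        return False  # previous hour already aggregated
    if last_hour_done < previous_hour - timedelta(hours=1):
        return True   # overdue: process at the earliest opportunity
    return now >= current_hour + timedelta(minutes=MINUTES_PAST_HOUR)
```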
Ashwin Agate 1c65ca011b Validate metrics before publishing to kafka
Validate monasca metrics using the monasca-common
validate library (requires monasca-common >= 1.1.0).

Change-Id: Iea784edbb3b57db57e6a90d1fc557b2c386c3713
2016-09-30 19:07:02 +00:00
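
monasca-common's validation module can be exercised directly; a small sketch with an illustrative metric follows (the exact call site and exception handling in monasca-transform may differ).

```python
from monasca_common.validation import metrics as metric_validation

metric = {
    "name": "cpu.total_logical_cores_agg",  # illustrative aggregated metric
    "dimensions": {"host": "all"},
    "timestamp": 1475262000000,
    "value": 8.0,
}

try:
    metric_validation.validate(metric)
except Exception as err:  # the library raises on any invalid field
    print("dropping invalid metric: %s" % err)
```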
Flint Calvin 0ea79c0305 Added aggregation results to application log
Made changes such that debug-level log entries are written to
the application log noting which aggregated metrics are submitted
during pre-hourly and hourly processing.

Change-Id: I64c6a18233614fe680aa0b084570ee7885f316e5
2016-09-23 18:24:20 +00:00
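
In spirit, the added logging looks like the following sketch (the message text and function name are assumptions):

```python
import logging

LOG = logging.getLogger(__name__)

def submit_aggregated_metric(metric):
    # Debug-level note of each aggregated metric as it is submitted.
    LOG.debug("submitting aggregated metric: %s", metric)
```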
Flint Calvin bf2e42b3e0 Made changes to prevent multiple metrics in the same batch.
Change-Id: Iec9935c21d8b65bf79067d4a009859c898b75993
2016-08-31 18:18:25 +00:00
Flint Calvin 615e52d5cd Modifications to make rate calculations work with two-stage aggregation.

Change-Id: I8c7b6112a04ba378ba1911a342cb97e8c388ebc6
2016-08-09 16:33:34 +00:00
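
Why rates need special handling in two-stage aggregation: a rate cannot be averaged across intermediate batches, so each stage must carry the window's endpoint quantities and compute the rate once from those. The percent-change formula below is an assumption for illustration, not necessarily the exact one used.

```python
def calculate_rate(oldest_quantity, latest_quantity):
    """Percent change across the aggregation window (illustrative)."""
    if oldest_quantity == 0:
        return None  # rate undefined without a nonzero starting value
    return ((latest_quantity - oldest_quantity) / oldest_quantity) * 100.0
```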
Ashwin Agate 90b20bfd41 Change to monasca-common simport
Use monasca-common simport library

Closes-Bug: #1596331

Change-Id: I695d6db9c5c49c0120e73b76ea75f7a30222419d
2016-07-09 19:04:19 +00:00
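
simport loads a class from a dotted path at runtime, which is what lets the aggregation pipeline be assembled from configuration. A minimal sketch; the target path is illustrative, not necessarily a real monasca-transform component.

```python
from monasca_common import simport

# "package.module:ClassName" is resolved and imported at runtime.
handler_class = simport.load(
    "monasca_transform.component.usage.calculate_rate:CalculateRate")
handler = handler_class()
```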
Flint Calvin 1c3a7989e7 Added some bulletproofing to catch invalid configuration
entries for caching levels.  Also changed the calculate_rate
component to use values from instance usage if available (rather
than using 'all').

Change-Id: Ibdbc8d57c2566de76051c9277f9c75225546d4d7
2016-07-07 17:49:11 +00:00
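
The "bulletproofing" pattern for the caching-level entries might look like this sketch, assuming the levels map to Spark storage-level names (the option handling and default are hypothetical):

```python
VALID_LEVELS = ("MEMORY_ONLY", "MEMORY_AND_DISK", "DISK_ONLY")

def get_caching_level(raw_value, default="MEMORY_ONLY"):
    """Fall back to a sane default instead of failing on a bad entry."""
    level = (raw_value or "").strip().upper()
    return level if level in VALID_LEVELS else default
```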
darfed 82fe3b9199 Update kafka-python version to 0.9.5
Two-stage aggregation refactored to use kafka-python 0.9.5,
as this is the version we are limited to by OpenStack.

Change-Id: I20c4dc58727432c1336c5cfdb37768a24e578eb0
2016-07-06 21:38:30 +00:00
Ashwin Agate 00b874a6b3 Two stage transformation
Breaking down the aggregation into two stages.

The first stage aggregates raw metrics frequently and is
implemented as a Spark Streaming job which
aggregates metrics at a configurable time interval
(defaults to 10 minutes) and writes the intermediate
aggregated data, or instance usage data,
to a new "metrics_pre_hourly" kafka topic.

The second stage is implemented
as a batch job using the Spark Streaming createRDD
direct stream batch API, and is triggered by the
first stage only when the first stage runs at the
top of the hour.

Also enhanced the kafka offsets table to keep track
of offsets from the two stages, along with the streaming
batch time, the last time the version row was updated,
and the revision number. By default it keeps the
last 10 revisions of the offsets for each
application.

Change-Id: Ib2bf7df6b32ca27c89442a23283a89fea802d146
2016-06-28 13:47:50 +00:00
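
A condensed sketch of the two stages on the Spark 1.x API this commit targeted; the topic names come from the commit message, while brokers, offsets, and the elided aggregation logic are placeholders.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils, OffsetRange

sc = SparkContext(appName="monasca-transform-sketch")

# Stage 1: streaming job at a configurable interval (default 10 minutes)
# that aggregates raw metrics and writes intermediate instance usage data
# to the metrics_pre_hourly topic.
ssc = StreamingContext(sc, batchDuration=600)
stream = KafkaUtils.createDirectStream(
    ssc, ["metrics"], {"metadata.broker.list": "localhost:9092"})
stream.count().pprint()  # stand-in for the first-stage aggregation

# Stage 2: batch job triggered at the top of the hour, reading the hour's
# records back from metrics_pre_hourly between offsets saved in the
# kafka offsets table (the range below is a placeholder).
offset_ranges = [OffsetRange("metrics_pre_hourly", 0, 0, 1000)]
rdd = KafkaUtils.createRDD(
    sc, {"metadata.broker.list": "localhost:9092"}, offset_ranges)
```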