This repo is not properly retired. Retired repos can only have two files:
- README.rst
- .gitreview
This change cleans up the retirement of this repo, keeping only these two
files and removing any other remaining files.
Detail: https://etherpad.opendev.org/p/tc-retirement-cleanup
Change-Id: Iaeb98fe2b7a8f723a3baf41c9b91d0ed12742133
A couple of fixes for the Docker image used by Travis:
* the locale package is no longer installed by default
* liberasurecode-dev is now required
* sphinx >= 1.6.0 breaks sphinx-versioning
* don't use the sphinx math module
Change-Id: Iba06d0c4667e2a11495fb25375de7152b2b02597
We don't need to provide the app.wsgi script anymore.
pbr provides it within the gnocchi-api binary
Change-Id: I6e16607128849a18b9e6eb1bc5558d1c9df64775
There's no lock anymore, so this should not be logged like that. Instead, we
only LOG.debug when processing actually succeeded. The elapsed time is not
useful anymore; since logs are timestamped, it can be computed if needed anyway.
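A minimal sketch of the intended logging behavior, assuming a hypothetical
storage object and aggregate() method (illustrative names, not the actual
Gnocchi API):

    import logging

    LOG = logging.getLogger(__name__)

    def process_new_measures(storage, metric_id, measures):
        # 'storage' and its aggregate() method are assumed/hypothetical here.
        if not measures:
            return False
        storage.aggregate(metric_id, measures)
        # Log only on success, and without an elapsed time: log records are
        # already timestamped, so a duration can be derived from them.
        LOG.debug("Processed %d measures for metric %s",
                  len(measures), metric_id)
        return True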
Change-Id: I40d38985f7a4f3368aeeec56d2b592f64882d1f7
Ceph handles small objects poorly, and POSTs to the Ceph incoming storage are
always small objects.
This changes the logic to store new measures as omap values instead of
objects, which lets us write to 'memory' (i.e. leveldb/rocksdb) instead of
disk.
This does not change the durability guarantees: we already store object keys
in omap, so if omap fails we lose the link to the objects regardless of
whether they are on disk.
Using a local 20-OSD Ceph cluster with 18 metricd workers, this is:
- ~2x faster than aio_write patch to POST
- ~2x faster than aio_write patch to process
- ~3x faster than no aio_write patch to POST
- ~3x faster than no aio_write patch to process
- ~3.5x faster than 3.1 to POST
- significantly faster than 3.1 to process (reason unclear)
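A hedged sketch of the omap approach using the python-rados API; the pool,
object and key naming here are illustrative assumptions, not the exact
Gnocchi layout:

    import rados

    MEASURE_OBJECT = "measure"  # assumed name of the object carrying the omap

    def store_new_measures(ioctx, metric_id, serialized_measures, suffix):
        # Write the serialized measures as an omap key/value pair on a single
        # object instead of creating one small RADOS object per POST.  omap
        # entries live in the OSD key/value store (leveldb/rocksdb), so many
        # small writes hit 'memory' rather than individual on-disk objects.
        key = "%s_%s" % (metric_id, suffix)  # suffix keeps concurrent POSTs apart
        with rados.WriteOpCtx() as op:
            ioctx.set_omap(op, (key,), (serialized_measures,))
            ioctx.operate_write_op(op, MEASURE_OBJECT)

    # Usage sketch:
    # cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    # cluster.connect()
    # ioctx = cluster.open_ioctx("gnocchi")
    # store_new_measures(ioctx, "aabbcc", b"<serialized measures>", "r1")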
Change-Id: I4bae365955fdbafe4ad837596490774c42bc5251
We should use async writes since we're not using threads anymore. Testing on
a small Ceph cluster, posting 20 metrics per POST, shows ~2x better write
performance.
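A rough sketch of the idea with python-rados asynchronous writes; the
batching and object naming are assumptions:

    def write_measures_async(ioctx, payloads_by_object):
        # Queue every write asynchronously, then wait once at the end, instead
        # of blocking on each write in turn.  With no thread pool in front of
        # librados anymore, its own async I/O restores the write concurrency.
        completions = [
            ioctx.aio_write_full(name, payload)
            for name, payload in payloads_by_object.items()
        ]
        for completion in completions:
            completion.wait_for_complete()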
Change-Id: Ic451e6ea874a2a3695d3d148b1c05ecc54aa473b
Lock the sack when the refresh param is given, but first check whether there
is anything to be refreshed rather than blindly blocking.
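A sketch of the intended flow; has_unprocessed() and refresh_metric() are
assumed method names, not necessarily the real API:

    def refresh_if_needed(incoming, storage, sack_lock, metric):
        # Cheap check first: only block on the sack lock when there are
        # actually unprocessed measures waiting for this metric.
        if not incoming.has_unprocessed(metric):   # assumed API
            return
        with sack_lock:                            # blocking acquire of the sack
            storage.refresh_metric(metric)         # assumed processing call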
Change-Id: I945db03e80450d35877427fce9c12675891e016d
We don't need to lock the metric on delete; we only need to check whether the
sack is being processed. Either:
1) the sack is locked, so there's a chance the metric is being processed;
therefore we skip it.
2) the sack is unlocked, so there's no chance a concurrent process will start
processing the metric, as the indexer already says it's deleted and no other
process can see it as not-deleted.
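A sketch of the non-blocking check described above, assuming a tooz-style
lock and a hypothetical deletion helper:

    def maybe_expunge_deleted_metric(storage, metric, sack_lock):
        # Never wait on the sack: just try to grab it.
        if not sack_lock.acquire(blocking=False):
            # Case 1: sack locked; the metric may be mid-processing, so skip
            # and let a later pass expunge it.
            return False
        try:
            # Case 2: sack unlocked; the indexer already marked the metric as
            # deleted, so no other worker can pick it up, and it is safe to
            # drop its data without a per-metric lock.
            storage.delete_metric_data(metric)   # hypothetical helper
            return True
        finally:
            sack_lock.release()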
Change-Id: I8df135621dfabc3d17733a3577d0ea60b30e83e4
Support the ability to change the number of sacks via the command line. This
sets the new sack value and clears any old sacks (if applicable). The backlog
must be empty to change the value.
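A sketch of that guard; every method on 'incoming' here is hypothetical and
only illustrates the order of operations:

    def change_sack_count(incoming, new_count):
        # A resize rehashes metrics into different sacks, which would orphan
        # any pending measures, so refuse to run while a backlog exists.
        if incoming.count_unprocessed() > 0:
            raise RuntimeError("backlog must be empty to change the sack count")
        old_count = incoming.get_sack_count()
        incoming.remove_sacks(old_count)   # clear the old sacks, if any
        incoming.set_sack_count(new_count)
        incoming.create_sacks(new_count)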
Change-Id: Icf74b081e4cfaaaa607a5b9c684cbad4b8ecc006
This adds the framework to configure the sack size without corruption:
- default to 128 sacks
- add a note on how to calculate how many sacks to set
The actual ability to change the value is done in a subsequent patch.
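To make the sizing trade-off concrete, here is a hedged sketch of how a
metric can be mapped to a sack (the exact mapping used by Gnocchi may
differ):

    import uuid

    NUM_SACKS = 128  # the default introduced here

    def sack_for_metric(metric_id, num_sacks=NUM_SACKS):
        # Hash the metric UUID modulo the sack count.  Because the count is
        # baked into the mapping, changing it moves metrics between sacks,
        # which is why resizing is only handled in the follow-up patch.
        return uuid.UUID(str(metric_id)).int % num_sacks

    # e.g. sack_for_metric(uuid.uuid4()) -> an int in [0, 128)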
Change-Id: I389c1a7ca9b3fe39b3716782e85073796ad26333
Support hashring partitioning if the user wants to reduce the potential
locking load by sacrificing some potential throughput. If the hashring is not
supported or does not assign any jobs, we default to the entire set of sacks.
If setting up partitioning fails, try again.
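A sketch of the fallback behavior using a tooz partitioner; the group name
and the belongs_to_self() filtering are assumptions about how the sacks get
distributed:

    import tooz

    GROUP = b"gnocchi-processing"   # assumed group name

    def sacks_to_process(coordinator, all_sacks):
        # Partition the sacks across workers when the coordination backend
        # supports it; otherwise, or if the ring assigns us nothing, fall back
        # to the whole set so nothing goes unprocessed.
        try:
            partitioner = coordinator.join_partitioned_group(GROUP)
        except tooz.NotImplemented:
            return all_sacks
        mine = [s for s in all_sacks if partitioner.belongs_to_self(s)]
        return mine or all_sacks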
Change-Id: I1439fb3cdb171ce57ce7887857aa4789fe8f0d9c