Configuration

Configure Gnocchi by editing /etc/gnocchi/gnocchi.conf.

No configuration file is provided with the source code; it will be created during installation. If no configuration file was installed, one can easily be created by running:

gnocchi-config-generator > /etc/gnocchi/gnocchi.conf

The configuration file should be fairly self-explanatory, but here are some of the base options you will want to change and configure:

================  ===========================================================================
Option name       Help
================  ===========================================================================
storage.driver    The storage driver for metrics.
indexer.url       URL to your indexer.
storage.file*     Configuration options to store files if you use the file storage driver.
storage.swift*    Configuration options to access Swift if you use the Swift storage driver.
storage.ceph*     Configuration options to access Ceph if you use the Ceph storage driver.
storage.s3*       Configuration options to access S3 if you use the S3 storage driver.
================  ===========================================================================
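
For example, a minimal gnocchi.conf using the file storage driver and a PostgreSQL indexer could look like the following sketch (the database host, credentials and base path are illustrative placeholders):

[indexer]
url = postgresql://gnocchi:secret@localhost/gnocchi

[storage]
driver = file
file_basepath = /var/lib/gnocchi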

Gnocchi provides the following storage drivers:

  • file (the default)
  • ceph
  • swift
  • s3

Gnocchi provides the following indexer drivers:

  • postgresql
  • mysql

Configuring authentication

The API server supports different authentication methods: basic (the default), which uses the standard HTTP Authorization header, or keystone, which uses OpenStack Keystone. If you successfully installed the keystone flavor using pip (see installation), you can set api.auth_mode to keystone to enable Keystone authentication.
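
For example, to enable Keystone authentication, set (a minimal sketch; your deployment also needs the usual Keystone credentials configured elsewhere):

[api]
auth_mode = keystone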

Driver notes

Carbonara based drivers (file, swift, ceph, s3)

To ensure consistency across all gnocchi-api and gnocchi-metricd workers, these drivers need a distributed locking mechanism. This is provided by the 'coordinator' of the tooz library.

By default, the configured backend for tooz is the same as the indexer (PostgreSQL or MySQL). This allows locking across workers from different nodes.

For a more robust multi-node deployment, the coordinator may be changed via the storage.coordination_url configuration option to one of the other tooz backends.

For example, to use Redis backend:

coordination_url = redis://<sentinel host>?sentinel=<master name>

or alternatively, to use the Zookeeper backend:

coordination_url = zookeeper:///?hosts=<zookeeper_host1>&hosts=<zookeeper_host2>
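
Since the option is named storage.coordination_url, it lives in the [storage] section of gnocchi.conf; for example, pointing at a plain (non-sentinel) Redis instance whose address is illustrative:

[storage]
coordination_url = redis://localhost:6379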

Ceph driver implementation details

Each batch of measurements to process is stored in one rados object. These objects are named measures_<metric_id>_<random_uuid>_<timestamp>.

In addition, a special empty object called measure stores the list of measures to process in its omap attributes.

Because of the asynchronous nature of how we store measurements in Gnocchi, gnocchi-metricd needs to know the list of objects that are waiting to be processed:

  • Listing the rados objects for this purpose is not a solution, since it takes too much time.
  • Using a custom format inside a rados object would force us to take a lock each time it is changed.

Instead, the omaps of one empty rados object are used. No lock is needed to add/remove an omap attribute.
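
This also makes the backlog easy to inspect with the standard rados CLI; for example, assuming the Ceph pool used by Gnocchi is named gnocchi:

rados --pool gnocchi listomapkeys measure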

Extended attributes (xattrs) are also used to store the list of aggregations used for a metric. Depending on the filesystem used by the Ceph OSDs, xattrs can be limited in number and size if Ceph is not correctly configured. See the Ceph extended attributes documentation for more details.

Each Carbonara-generated file is then stored in one rados object, so each metric has one rados object per aggregation in its archive policy.

Because of this, OSD usage can look less balanced than with RBD: some objects will be big and others small, depending on how archive policies are set up.

We can imagine an unrealistic case such as retaining 1 point per second over a year (about 31.5 million points), in which case the rados object size would be ~384MB.

Whereas in a more realistic scenario, a 4MB rados object (the size RBD uses) could result from either of the following (see the calculation after this list):

  • 20 days with 1 point every second
  • 100 days with 1 point every 5 seconds
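
Both definitions hold the same number of points: 20 days × 86,400 seconds × 1 point per second = 1,728,000 points, and 100 days × 86,400 seconds ÷ 5 seconds per point = 1,728,000 points, which is why they lead to rados objects of a similar size.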

So, in realistic scenarios, the direct relation between the archive policy and the size of the rados objects created by Gnocchi is not a problem.

Gnocchi can also use the cradox Python library if it is installed. This library is a Python binding to librados written with Cython, aiming to replace the ctypes-based binding provided by Ceph. This new library will be part of the next Ceph release (10.0.4).

The new Cython binding reduces the time gnocchi-metricd needs to process measures by a large factor.

So, if the Ceph installation does not use the latest Ceph version, cradox can be installed to improve the Ceph backend performance.
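
For example (assuming pip installs into the Python environment Gnocchi runs from):

pip install cradox

Gnocchi will use it automatically once it is present.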

Swift driver implementation details

The Swift driver leverages the bulk delete functionality provided by the bulk middleware to minimise the number of requests made to clean up storage data. This middleware must be enabled for Gnocchi to function correctly. By default, Swift has this middleware enabled in its pipeline.
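
For reference, the bulk middleware is declared in Swift's proxy-server.conf; a shortened, illustrative excerpt (a real pipeline contains more middleware) could look like:

[pipeline:main]
pipeline = catch_errors healthcheck cache bulk authtoken keystoneauth proxy-logging proxy-server

[filter:bulk]
use = egg:swift#bulk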