update best practices

- remove 'prior to ocata' conditions.
- remove shuffle_time_before_polling_task as it doesn't work in
  reality and is being removed.
- add a note when to enable workload_partitioning of notification
  agent

Change-Id: I44c030835de1517a3c067ab0632c09f4a5fe2f15
(cherry picked from commit be1fa8f840)
parent d54b1966a2
commit 5605630634
@@ -11,40 +11,30 @@ Data collection
#. The Telemetry service collects a continuously growing set of data. Not
   all the data will be relevant for an administrator to monitor.

-   - Based on your needs, you can edit the ``pipeline.yaml`` configuration
-     file to include a selected number of meters while disregarding the
-     rest. Similarly, in Ocata, you will need to edit ``polling.yaml`` to
-     define which meters to generate.
+   - Based on your needs, you can edit the ``polling.yaml`` and
+     ``pipeline.yaml`` configuration files to include select meters to
+     generate or process.
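As an illustrative sketch of the new wording, a trimmed ``polling.yaml`` might list only the meters worth generating. The source name and meter names below are assumptions for illustration, not part of this change; check the meters available in your deployment.

```yaml
# Hypothetical polling.yaml sketch: generate only a few selected meters.
# Source and meter names are illustrative.
sources:
    - name: selected_compute_meters
      interval: 600          # seconds; 10 minutes is the default cadence
      meters:
          - cpu
          - memory.usage
```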
   - By default, the Telemetry service polls the service APIs every 10
     minutes. You can change the polling interval on a per-meter basis by
     editing the ``polling.yaml`` configuration file.

-      .. note::
-
-         Prior to Ocata, the polling configuration was handled by
-         ``pipeline.yaml``.
     .. warning::

        If the polling interval is too short, it will likely increase the
        stress on the service APIs.

   - Expand the configuration to have greater control over different meter
     intervals. For more information, see
     :ref:`telemetry-pipeline-configuration`.
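On the pipeline side, ``pipeline.yaml`` controls which of the generated samples are processed and where they are published. A hedged sketch, assuming a Gnocchi publisher and the same illustrative meter names (none of these identifiers come from this change):

```yaml
# Hypothetical pipeline.yaml sketch: process a subset of meters and
# publish them to Gnocchi. Names are illustrative.
sources:
    - name: selected_meters_source
      meters:
          - cpu
          - memory.usage
      sinks:
          - selected_meters_sink
sinks:
    - name: selected_meters_sink
      publishers:
          - gnocchi://
```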
-#. You can delay or adjust polling requests by enabling the jitter support.
-   This adds a random delay on how the polling agents send requests to the
-   service APIs. To enable jitter, set ``shuffle_time_before_polling_task``
-   in the ``ceilometer.conf`` configuration file to an integer greater
-   than 0.
#. If polling many resources or at a high frequency, you can add additional
   central and compute agents as necessary. The agents are designed to scale
   horizontally. For more information, refer to the `high availability guide
   <https://docs.openstack.org/ha-guide/controller-ha-telemetry.html>`_.
+#. `workload_partitioning` of notification agents is only required if
+   the pipeline configuration leverages transformers. It may also be
+   enabled if batching is required to minimize load on the defined
+   publisher targets. If transformers are not enabled, multiple agents
+   may still be deployed without `workload_partitioning` and processing
+   will be done greedily.
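When partitioning is needed, it is switched on in the notification agent's configuration. A minimal ``ceilometer.conf`` sketch, with the batching values being illustrative assumptions rather than recommendations from this change:

```ini
# Hypothetical ceilometer.conf fragment: enable workload partitioning
# of the notification agent. Values are illustrative.
[notification]
workload_partitioning = True
# Optional batching knobs, shown here only to illustrate batch tuning:
batch_size = 100
batch_timeout = 5
```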
Data storage
------------
@@ -125,4 +115,3 @@ Data storage
For more information on sharding, see the `MongoDB sharding
docs <http://docs.mongodb.org/manual/sharding/>`__.