[docs] Edits the InfluxDB-Grafana plugin

Edits the StackLight InfluxDB-Grafana plugin documentation.

This is the final PDF build:
https://drive.google.com/a/mirantis.com/file/d/0B30Lksc8WVCRM3NHWFpQLUpXOFE/view?usp=sharing

Change-Id: I604aa3f6f859fcc0e2d77a042a5513371a07dfc8
This commit is contained in:
Maria Zlatkova 2016-07-21 18:18:12 +03:00
parent 31fcd5f49d
commit 0b0aba09ec
14 changed files with 533 additions and 478 deletions


@ -5,7 +5,7 @@ source_suffix = '.rst'
master_doc = 'index'
project = u'The StackLight InfluxDB-Grafana plugin for Fuel'
project = u'The StackLight InfluxDB-Grafana Plugin for Fuel'
copyright = u'2016, Mirantis Inc.'
version = '0.10'
@ -19,7 +19,7 @@ html_theme = 'default'
html_static_path = ['_static']
latex_documents = [
('index', 'InfluxDBGrafana.tex', u'The StackLight InfluxDB-Grafana plugin for Fuel Documentation',
('index', 'InfluxDBGrafana.tex', u'The StackLight InfluxDB-Grafana Plugin for Fuel Documentation',
u'Mirantis Inc.', 'manual'),
]


@ -1,135 +1,147 @@
.. _plugin_configuration:
.. raw:: latex
\pagebreak
Plugin configuration
--------------------
To configure the **StackLight InfluxDB-Grafana Plugin**, you need to follow these steps:
**To configure the StackLight InfluxDB-Grafana plugin:**
1. `Create a new environment
#. Create a new environment as described in `Create a new OpenStack environment
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/create-environment/start-create-env.html>`_.
2. Click on the *Settings* tab of the Fuel web UI and select the *Other* category.
#. In the Fuel web UI, click the :guilabel:`Settings` tab and select the
:guilabel:`Other` category.
3. Scroll down through the settings until you find the **InfluxDB-Grafana Server
Plugin** section. You should see a page like this:
#. Scroll down through the settings until you find
:guilabel:`The InfluxDB-Grafana Server Plugin` section. You should see a
page like this:
.. image:: ../images/influx_grafana_settings.png
:width: 800
:width: 450pt
4. Tick the **InfluxDB-Grafana Plugin** box and fill-in the required fields as indicated below.
#. Select :guilabel:`The InfluxDB-Grafana Server Plugin` and fill in the
required fields as indicated below.
a. Specify the number of days of retention for your data.
b. Specify the InfluxDB admin password (called root password in the InfluxDB documentation).
c. Specify the database name (default is lma).
d. Specify the InfluxDB username and password.
e. Specify the Grafana username and password.
#. Specify the InfluxDB admin password (called root password in the InfluxDB
documentation).
#. Specify the database name (the default is ``lma``).
#. Specify the InfluxDB username and password.
#. Specify the Grafana username and password.
5. Since the introduction of Grafana 2.6.0, the plugin now uses a MySQL database
to store its configuration data such as the dashboard templates.
#. The plugin uses a MySQL database to store its configuration data, such as
the dashboard templates.
a. Select **Local MySQL** if you want to create the Grafana database using the MySQL server
of the OpenStack control-plane. Otherwise, select **Remote server** and specify
the fully qualified name or IP address of the MySQL server you want to use.
b. Then, specify the MySQL database name, username and password that will be used
a. Select :guilabel:`Local MySQL` if you want to create the Grafana
database using the MySQL server of the OpenStack control plane.
Otherwise, select :guilabel:`Remote server` and specify the fully
qualified name or the IP address of the MySQL server you want to use.
#. Specify the MySQL database name, username, and password that will be used
to access that database.
6. Tick the *Enable TLS for Grafana* box if you want to encrypt your
Grafana credentials (username, password). Then, fill-in the required
#. Select :guilabel:`Enable TLS for Grafana` if you want to encrypt your
Grafana credentials (username, password). Then, fill in the required
fields as indicated below.
.. image:: ../images/tls_settings.png
:width: 800
:width: 450pt
a. Specify the DNS name of the Grafana server. This parameter is used
to create a link in the Fuel dashboard to the Grafana server.
#. Specify the location of a PEM file that contains the certificate
and the private key of the Grafana server that will be used in TLS handchecks
a. Specify the DNS name of the Grafana server. This parameter is used to
create a link in the Fuel dashboard to the Grafana server.
#. Specify the location of a PEM file that contains the certificate and the
private key of the Grafana server that will be used in TLS handshakes
with the client. A sketch of how such a PEM file can be assembled is shown
below.
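The file names below are hypothetical; use the certificate and the private
key of your own Grafana server. The PEM file is typically created by
concatenating the certificate with the private key, for example:

.. code-block:: console

[root@home ~]# cat grafana_server.crt grafana_server.key > grafana_server.pem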
7. Tick the *Use LDAP for Grafana authentication* box if you want to authenticate
via LDAP to Grafana. Then, fill-in the required fields as indicated below.
#. Select :guilabel:`Use LDAP for Grafana authentication` if you want to
authenticate to Grafana through LDAP. Then, fill in the required fields as
indicated below.
.. image:: ../images/ldap_auth.png
:width: 800
:width: 450pt
a. Select the *LDAPS* button if you want to enable LDAP authentication
over SSL.
#. Specify one or several LDAP server addresses separated by a space. Those
a. Select :guilabel:`LDAPS` if you want to enable LDAP authentication over
SSL.
#. Specify one or several LDAP server addresses separated by a space. These
addresses must be accessible from the node where Grafana is installed.
Note that addresses external to the *management network* are not routable
by default (see the note below).
#. Specify the LDAP server port number or leave it empty to use the defaults.
#. Specify the *Bind DN* of a user who has search priviliges on the LDAP server.
#. Specify the password of the user identified by the *Bind DN* above.
#. Specify the *Base DN* in the Directory Information Tree (DIT) from where
to search for users.
#. Specify a valid user search filter (ex. (uid=%s)).
The result of the search should return a unique user entry.
#. Specify a valid search filter to search for users.
Example ``(uid=%s)``
Addresses outside the *management network* are not routable by default
(see the note below).
#. Specify the LDAP server port number or leave it empty to use the
defaults.
#. Specify the :guilabel:`Bind DN` of a user who has search privileges on
the LDAP server.
#. Specify the password of the user identified by the :guilabel:`Bind DN`
above.
#. Specify the :guilabel:`User search base DN` in the Directory Information
Tree (DIT) from where to search for users.
#. Specify a valid user search filter, for example, ``(uid=%s)``. The
result of the search should be a unique user entry.
You can further restrict access to Grafana to those users who
are member of a specific LDAP group.
You can further restrict access to Grafana to those users who are members
of a specific LDAP group.
a. Tick the *Enable group-based authorization*.
#. Specify the LDAP group *Base DN* in the DIT from where to search
for groups.
#. Specify the LDAP group search filter.
Example ``(&(objectClass=posixGroup)(memberUid=%s))``
#. Specify the CN of the LDAP group that will be mapped to the *admin role*
#. Specify the CN of the LDAP group that will be mapped to the *viewer role*
a. Select :guilabel:`Enable group-based authorization`.
#. Specify the LDAP group :guilabel:`Base DN` in the DIT from where to
search for groups.
#. Specify the LDAP group search filter. For example,
``(&(objectClass=posixGroup)(memberUid=%s))``.
#. Specify the CN of the LDAP group that will be mapped to the *admin role*.
#. Specify the CN of the LDAP group that will be mapped to the *viewer role*.
Users who have the *admin role* can modify the Grafana dashboards
or create new ones. Users who have the *viewer role* can only
visualise the Grafana dashboards.
Users who have the *admin role* can modify the Grafana dashboards or create
new ones. Users who have the *viewer role* can only visualize the Grafana
dashboards.
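Before enabling these settings, you may want to verify the LDAP server
address, the Bind DN, the user search base DN, and the search filter directly
against your directory. The following is only a sketch with hypothetical
values, run from a node that can reach the LDAP server (on Ubuntu, the
:command:`ldapsearch` tool is provided by the ``ldap-utils`` package):

.. code-block:: console

root@node-1:~# ldapsearch -x -H ldap://ldap.example.com:389 \
-D "cn=admin,dc=example,dc=com" -W \
-b "ou=users,dc=example,dc=com" "(uid=jdoe)"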
7. `Configure your environment
#. Configure your environment as described in `Configure your Environment
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment.html>`_.
.. note:: By default, StackLight is configured to use the *management network*,
of the so-called `Default Node Network Group
.. note:: By default, StackLight is configured to use the *management
network* of the so-called `Default Node Network Group
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/network-settings.html>`_.
While this default setup may be appropriate for small deployments or
evaluation purposes, it is recommended not to use this network
for StackLight in production. It is instead recommended to create a network
dedicated to StackLight using the `networking templates
<https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates>`_
capability of Fuel. Using a dedicated network for StackLight will
improve performances and reduce the monitoring footprint on the
control-plane. It will also facilitate access to the Gafana UI
after deployment as the *management network* is not routable.
evaluation purposes, it is recommended that you not use this network for
StackLight in production. Instead, create a network dedicated to
StackLight using the `networking templates
<https://docs.mirantis.com/openstack/fuel/fuel-9.0/operations.html#using-networking-templates>`_
Fuel capability. Using a dedicated network for StackLight will improve
performance and reduce the monitoring footprint on the control plane. It
will also facilitate access to the Grafana UI after deployment, as the
*management network* is not routable.
8. Click the *Nodes* tab and assign the *InfluxDB_Grafana* role
to the node(s) where you want to install the plugin.
#. Click the :guilabel:`Nodes` tab and assign the :guilabel:`InfluxDB_Grafana`
role to the node or multiple nodes where you want to install the plugin.
You can see in the example below that the *InfluxDB_Grafana*
role is assigned to three nodes along side with the
*Alerting_Infrastructure* and the *Elasticsearch_Kibana* roles.
Here, the three plugins of the LMA toolchain backend servers are
installed on the same nodes. You can assign the *InfluxDB_Grafana*
role to either one node (standalone install) or three nodes for HA.
The example below shows that the :guilabel:`InfluxDB_Grafana` role is
assigned to three nodes alongside the
:guilabel:`Alerting_Infrastructure` and the
:guilabel:`Elasticsearch_Kibana` roles. The three plugins of the LMA
toolchain back-end servers are installed on the same nodes. You can assign
the :guilabel:`InfluxDB_Grafana` role to either one node (standalone
install) or three nodes for HA.
.. image:: ../images/influx_grafana_role.png
:width: 800
:width: 450pt
.. note:: Installing the InfluxDB server on more than three nodes
is currently not possible using the Fuel plugin.
Similarly, installing the InfluxDB server on two nodes
is not recommended to avoid split-brain situations in the Raft
consensus of the InfluxDB cluster as well as the *Pacemaker* cluster
which is responsible of the VIP address failover.
To be also noted that it is possible to add or remove nodes
with the *InfluxDB_Grafana* role in the cluster after deployment.
.. note:: Currently, installing the InfluxDB server on more than three
nodes is not possible using the Fuel plugin. Similarly, installing the
InfluxDB server on two nodes is not recommended because it may lead to
split-brain situations in the Raft consensus of the InfluxDB cluster, as
well as in the *Pacemaker* cluster, which is responsible for the VIP
address failover.
It is possible to add or remove nodes with the
:guilabel:`InfluxDB_Grafana` role in the cluster after deployment.
9. `Adjust the disk partitioning if necessary
#. If required, adjust the disk partitioning as described in
`Configure disk partitioning
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/configure-environment/customize-partitions.html>`_.
By default, the InfluxDB-Grafana Plugin allocates:
* 20% of the first available disk for the operating system by honoring
a range of 15GB minimum to 50GB maximum.
* 10GB for */var/log*.
* At least 30 GB for the InfluxDB database in */var/lib/influxdb*.
a range of 15 GB minimum to 50 GB maximum.
* 10 GB for ``/var/log``.
* At least 30 GB for the InfluxDB database in ``/var/lib/influxdb``.
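After the deployment, you can check how this space was actually allocated on
a node running the *InfluxDB_Grafana* role, for example:

.. code-block:: console

root@node-1:~# df -h /var/log /var/lib/influxdb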
10. `Deploy your environment
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/deploy-environment.html>`_.
#. Deploy your environment as described in `Deploy an OpenStack environment
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/deploy-environment.html>`_.


@ -3,8 +3,8 @@
Key terms
---------
The table below lists the key terms and acronyms that are used
in this document.
The table below lists the key terms and acronyms that are used in this
document.
+---------------------+-------------------------------------------------------+
| **Terms & acronyms**| **Definition** |
@ -17,13 +17,13 @@ in this document.
| | open-source database (MIT license). It is written in |
| | Go and has no external dependencies. |
| | InfluxDB is targeted at use cases for DevOps, metrics,|
| | sensor data, and real-time analytics. |
| | sensor data, and real-time analytics. |
+---------------------+-------------------------------------------------------+
| Grafana | Grafana is an Apache 2.0 licensed general purpose |
| | dashboard and graph composer. It is focused on |
| | providing rich ways to visualize metrics time-series, |
| | mainly though graphs but supports other ways to |
| | visualize data through a pluggable panel architecture.|
| Grafana | Grafana is a general-purpose dashboard and graph |
| | composer. It is focused on providing rich ways to |
| | visualize metrics time-series mainly through graphs |
| | but supports other ways to visualize data through a |
| | pluggable panel architecture. |
| | |
| | It has rich support for Graphite, InfluxDB, and |
| | OpenTSDB and also supports other data sources through |


@ -22,7 +22,6 @@ Installing and configuring StackLight InfluxDB-Grafana plugin
.. toctree::
:maxdepth: 1
install_intro
install
configure_plugin
verification


@ -1,78 +1,126 @@
.. _user_installation:
Introduction
------------
You can install the StackLight InfluxDB-Grafana plugin using one of the
following options:
• Install using the RPM file
• Install from source
The following is a list of software components installed by the StackLight
InfluxDB-Grafana plugin:
+----------------+-------------------------------------+
| Components | Version |
+================+=====================================+
| InfluxDB | v0.11.1 for Ubuntu (64-bit) |
+----------------+-------------------------------------+
| Grafana | v3.0.4 for Ubuntu (64-bit) |
+----------------+-------------------------------------+
Install using the RPM file of the Fuel plugins catalog
------------------------------------------------------
To install the StackLight InfluxDB-Grafana Fuel Plugin using the RPM file of the Fuel Plugins
Catalog, you need to follow these steps:
**To install the StackLight InfluxDB-Grafana Fuel plugin using the RPM file of
the Fuel plugins catalog:**
1. Select, using the MONITORING category and the Mirantis OpenStack version you are using,
the RPM file you want to download from the `Fuel Plugins Catalog
#. Go to the `Fuel Plugins Catalog
<https://www.mirantis.com/validated-solution-integrations/fuel-plugins>`_.
2. Copy the RPM file to the Fuel Master node::
#. From the :guilabel:`Filter` drop-down menu, select the Mirantis OpenStack
version you are using and the :guilabel:`Monitoring` category.
[root@home ~]# scp influxdb_grafana-0.10-0.10.0-1.noarch.rpm \
root@<Fuel Master node IP address>:
#. Download the RPM file.
3. Install the plugin using the `Fuel CLI
<http://docs.mirantis.com/openstack/fuel/fuel-8.0/user-guide.html#using-fuel-cli>`_::
#. Copy the RPM file to the Fuel Master node:
[root@fuel ~]# fuel plugins --install influxdb_grafana-0.10-0.10.0-1.noarch.rpm
.. code-block:: console
4. Verify that the plugin is installed correctly::
[root@home ~]# scp influxdb_grafana-0.10-0.10.0-1.noarch.rpm \
root@<Fuel Master node IP address>:
[root@fuel ~]# fuel plugins --list
id | name | version | package_version
---|----------------------|----------|----------------
1 | influxdb_grafana | 0.10.0 | 4.0.0
#. Install the plugin using the `Fuel Plugins CLI
<http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/cli/cli_plugins.html>`_:
.. code-block:: console
[root@fuel ~]# fuel plugins --install influxdb_grafana-0.10-0.10.0-1.noarch.rpm
#. Verify that the plugin is installed correctly:
.. code-block:: console
[root@fuel ~]# fuel plugins --list
id | name | version | package_version
---|----------------------|----------|----------------
1 | influxdb_grafana | 0.10.0 | 4.0.0
Install from source
-------------------
Alternatively, you may want to build the RPM file of the plugin from source if,
for example, you want to test the latest features of the master branch or customize the plugin.
for example, you want to test the latest features of the master branch or
customize the plugin.
.. note:: Be aware that running a Fuel plugin that you built yourself is at your
own risk and will not be supported.
.. note:: Running a Fuel plugin that you built yourself is at your own risk
and will not be supported.
To install the StackLight InfluxDB-Grafana Plugin from source,
you first need to prepare an environment to build the RPM file.
The recommended approach is to build the RPM file directly onto the Fuel Master
node so that you won't have to copy that file later on.
To install the StackLight InfluxDB-Grafana Plugin from source, first prepare
an environment to build the RPM file. The recommended approach is to build the
RPM file directly on the Fuel Master node so that you will not have to copy
that file later on.
**Preparing an environment for building the plugin on the Fuel Master Node**
**To prepare an environment and build the plugin:**
1. Install the standard Linux development tools::
#. Install the standard Linux development tools:
[root@home ~] yum install createrepo rpm rpm-build dpkg-devel
.. code-block:: console
2. Install the Fuel Plugin Builder. To do that, you should first get pip::
[root@home ~] yum install createrepo rpm rpm-build dpkg-devel
[root@home ~] easy_install pip
#. Install the Fuel Plugin Builder. To do that, first get pip:
3. Then install the Fuel Plugin Builder (the `fpb` command line) with `pip`::
.. code-block:: console
[root@home ~] pip install fuel-plugin-builder
[root@home ~] easy_install pip
.. note:: You may also need to build the Fuel Plugin Builder if the package version of the
plugin is higher than the package version supported by the Fuel Plugin Builder you get from `pypi`.
In this case, please refer to the section "Preparing an environment for plugin development"
of the `Fuel Plugins wiki <https://wiki.openstack.org/wiki/Fuel/Plugins>`_
if you need further instructions about how to build the Fuel Plugin Builder.
#. Then install the Fuel Plugin Builder (the ``fpb`` command-line tool) with ``pip``:
4. Clone the plugin git repository::
.. code-block:: console
[root@home ~] git clone https://github.com/openstack/fuel-plugin-influxdb-grafana.git
[root@home ~] pip install fuel-plugin-builder
5. Check that the plugin is valid::
.. note:: You may also need to build the Fuel Plugin Builder if the package
version of the plugin is higher than the package version supported by
the Fuel Plugin Builder you get from PyPI. For instructions on how to
build the Fuel Plugin Builder, see the *Install Fuel Plugin Builder*
section of the `Fuel Plugin SDK Guide <http://docs.openstack.org/developer/fuel-docs/plugindocs/fuel-plugin-sdk-guide/create-plugin/install-plugin-builder.html>`_.
[root@home ~] fpb --check ./fuel-plugin-influxdb-grafana
#. Clone the plugin repository:
6. And finally, build the plugin::
.. code-block:: console
[root@home ~] fpb --build ./fuel-plugin-influxdb-grafana
[root@home ~] git clone https://github.com/openstack/fuel-plugin-influxdb-grafana.git
7. Now that you have created the RPM file, you can install the plugin using the `fuel plugins --install` command::
#. Verify that the plugin is valid:
[root@fuel ~] fuel plugins --install ./fuel-plugin-influxdb-grafana/*.noarch.rpm
.. code-block:: console
[root@home ~] fpb --check ./fuel-plugin-influxdb-grafana
#. Build the plugin:
.. code-block:: console
[root@home ~] fpb --build ./fuel-plugin-influxdb-grafana
**To install the plugin:**
Now that you have created the RPM file, install the plugin using the
:command:`fuel plugins --install` command:
.. code-block:: console
[root@fuel ~] fuel plugins --install ./fuel-plugin-influxdb-grafana/*.noarch.rpm
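Optionally, you can inspect the metadata of the package you built, for
example (the exact file name depends on the plugin version):

.. code-block:: console

[root@fuel ~] rpm -qpi ./fuel-plugin-influxdb-grafana/*.noarch.rpm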


@ -1,19 +0,0 @@
Introduction
------------
You can install the StackLight InfluxDB-Grafana plugin using one of the
following options:
• Install using the RPM file
• Install from source
The following is a list of software components installed by the StackLight
InfluxDB-Grafana plugin:
+----------------+-------------------------------------+
| Components | Version |
+================+=====================================+
| InfluxDB | v0.11.1 for Ubuntu (64-bit) |
+----------------+-------------------------------------+
| Grafana | v3.0.4 for Ubuntu (64-bit) |
+----------------+-------------------------------------+


@ -3,26 +3,25 @@
Introduction
------------
The **StackLight InfluxDB-Grafana Fuel Plugin** is used to install and configure
InfluxDB and Grafana which collectively provide access to the
metrics analytics of Mirantis OpenStack.
InfluxDB is a powerful distributed time-series database
to store and search metrics time-series. The metrics analytics are used to
visualize the time-series and the annotations produced by the StackLight Collector.
The annotations contain insightful information about the faults and anomalies
that resulted in a change of state for the clusters of nodes and services
of the OpenStack environment.
The **StackLight InfluxDB-Grafana Plugin for Fuel** is used to install and
configure InfluxDB and Grafana, which collectively provide access to the
metrics analytics of Mirantis OpenStack. InfluxDB is a powerful distributed
time-series database to store and search metrics time-series. The metrics
analytics are used to visualize the time-series and the annotations produced
by the StackLight Collector. The annotations contain insightful information
about the faults and anomalies that resulted in a change of state for the
clusters of nodes and services of the OpenStack environment.
The InfluxDB-Grafana Plugin is an indispensable tool to answering
the questions "what has changed in my OpenStack environment, when and why?".
Grafana is installed with a collection of predefined dashboards for each
of the OpenStack services that are monitored.
Among those dashboards, the *Main Dashboard* provides a single pane of glass
overview of your OpenStack environment status.
The InfluxDB-Grafana plugin is an indispensable tool to answer the questions
of what has changed in your OpenStack environment, when, and why. Grafana is
installed with a collection of predefined dashboards for each of the OpenStack
services that are monitored. Among those dashboards, the *Main Dashboard*
provides a single pane of glass overview of your OpenStack environment status.
InfluxDB and Grafana are key components
of the `LMA Toolchain project <https://launchpad.net/lma-toolchain>`_
as shown in the figure below.
InfluxDB and Grafana are the key components of the
`LMA Toolchain project <https://launchpad.net/lma-toolchain>`_ as shown in the figure below.
.. image:: ../images/toolchain_map.png
:width: 430pt
:width: 445pt
:align: center


@ -1,13 +1,17 @@
.. _licenses:
.. raw:: latex
\pagebreak
Licenses
--------
Third Party Components
Third-party components
++++++++++++++++++++++
+----------+-----------------------+-----------+
| Name | Project Web Site | License |
| Name | Project website | License |
+==========+=======================+===========+
| InfluxDB | https://influxdb.com/ | MIT |
+----------+-----------------------+-----------+
@ -18,13 +22,13 @@ Puppet modules
++++++++++++++
+---------+--------------------------------------------------+-----------+
| Name | Project Web Site | License |
| Name | Project website | License |
+=========+==================================================+===========+
| Apt | https://github.com/puppetlabs/puppetlabs-apt | Apache V2 |
| Apt | https://github.com/puppetlabs/puppetlabs-apt | Apache v2 |
+---------+--------------------------------------------------+-----------+
| Concat | https://github.com/puppetlabs/puppetlabs-concat | Apache V2 |
| Concat | https://github.com/puppetlabs/puppetlabs-concat | Apache v2 |
+---------+--------------------------------------------------+-----------+
| Stdlib | https://github.com/puppetlabs/puppetlabs-stdlib | Apache V2 |
| Stdlib | https://github.com/puppetlabs/puppetlabs-stdlib | Apache v2 |
+---------+--------------------------------------------------+-----------+
| IniFile | https://github.com/puppetlabs/puppetlabs-inifile | Apache v2 |
+---------+--------------------------------------------------+-----------+


@ -3,7 +3,10 @@
Limitations
-----------
Currently, the size of an InfluxDB cluster the Fuel plugin can deploy is limited to three nodes. In addition to this,
each node of the InfluxDB cluster is configured to run under the *meta* node role and the *data* node role. Therefore,
it is not possible to separate the nodes participating in the Raft consensus cluster from
the nodes accessing the data replicas.
The StackLight InfluxDB-Grafana plugin 0.10.0 has the following limitation:
* The size of an InfluxDB cluster the Fuel plugin can deploy is limited to
three nodes. Additionally, each node of the InfluxDB cluster is configured to
run under the *meta* node role and the *data* node role. Therefore, it is not
possible to separate the nodes participating in the Raft consensus cluster
from the nodes accessing the data replicas.


@ -3,42 +3,45 @@
Release notes
-------------
0.10.0
++++++
Version 0.10.0
++++++++++++++
* Changes
The StackLight InfluxDB-Grafana plugin 0.10.0 contains the following updates:
* Add support for LDAP(S) authentication to access Grafana.
* Add support for TLS encryption to access Grafana.
A PEM file obtained by concatenating the SSL certificate with the private key
of the server must be provided in the settings of the plugin to configure the
TLS termination.
* Upgrade to InfluxDB v0.11.1.
* Upgrade to Grafana v3.0.4.
* Added support for LDAP(S) authentication to access Grafana.
* Added support for TLS encryption to access Grafana. A PEM file obtained by
concatenating the SSL certificate with the private key of the server must be
provided in the settings of the plugin to configure the TLS termination.
* Upgraded to InfluxDB v0.11.1.
* Upgraded to Grafana v3.0.4.
0.9.0
+++++
Version 0.9.0
+++++++++++++
- A new dashboard for hypervisor metrics.
- A new dashboard for InfluxDB cluster.
- A new dashboard for Elasticsearch cluster.
- Upgrade to Grafana 2.6.0.
- Upgrade to InfluxDB 0.10.0.
- Add support for InfluxDB clustering (beta state).
- Use MySQL as Grafana backend to support HA.
The StackLight InfluxDB-Grafana plugin 0.9.0 contains the following updates:
0.8.0
+++++
* Added a new dashboard for hypervisor metrics.
* Added a new dashboard for InfluxDB cluster.
* Added a new dashboard for Elasticsearch cluster.
* Upgraded to Grafana 2.6.0.
* Upgraded to InfluxDB 0.10.0.
* Added support for InfluxDB clustering (beta state).
* Added the capability to use MySQL as Grafana back end to support HA.
- Add support for the "influxdb_grafana" Fuel Plugin role instead of
the "base-os" role which had several limitations.
- Add support for retention policy configuration.
- Upgrade to InfluxDB 0.9.4 which brings metrics time-series with tagging.
- Upgrade to Grafana 2.5.0.
- Several dashboard visualisation improvements.
- A new self-monitoring dashboard.
Version 0.8.0
+++++++++++++
0.7.0
+++++
The StackLight InfluxDB-Grafana plugin 0.8.0 contains the following updates:
- Initial release of the plugin. This is a beta version.
* Added support for the ``influxdb_grafana`` Fuel plugin role instead of the
``base-os`` role which had several limitations.
* Added support for retention policy configuration.
* Upgraded to InfluxDB 0.9.4 which brings metrics time-series with tagging.
* Upgraded to Grafana 2.5.0.
* Improved dashboard visualization.
* Added a new self-monitoring dashboard.
Version 0.7.0
+++++++++++++
The initial release of the plugin. This is a beta version.


@ -3,25 +3,25 @@
Requirements
------------
+-----------------------+-----------------------------------------------------------------------+
| **Requirement** | **Version/Comment** |
+=======================+=======================================================================+
| Disk space | The plugins specification requires to provision at least 15GB of disk|
| | spase for the system, 10GB for the logs and 30GB for the database. The|
| | installation of the plugin will fail if there is less than 55GB of |
| | disk space available on the node. |
+-----------------------+-----------------------------------------------------------------------+
| Mirantis OpenStack | 8.0, 9.0 |
+-----------------------+-----------------------------------------------------------------------+
| Hardware configuration| The hardware configuration (RAM, CPU, disk(s)) required by this plugin|
| | depends on the size of your cloud environment and other factors like |
| | the retention policy. An average setup would require a quad-core |
| | server with 8 GB of RAM and access to a 500-1000 IOPS disk. |
| | See the `InfluxDB Hardware Sizing Guide |
| | <https://docs.influxdata.com/influxdb/v0.10/guides/hardware_sizing/>`_|
| | for additional sizing information. |
| | |
| | It is also highly recommended to use a dedicated disk for your data |
| | storage. Otherwise, the InfluxDB-Grafana Plugin will use the root |
| | filesystem by default. |
+-----------------------+-----------------------------------------------------------------------+
+-----------------------+------------------------------------------------------------------------+
| **Requirement** | **Version/Comment** |
+=======================+========================================================================+
| Disk space            | The plugin specification requires provisioning at least 15 GB of disk |
| | space for the system, 10 GB for the logs, and 30 GB for the database. |
| | Therefore, the installation of the plugin will fail if there is less |
| | than 55 GB of disk space available on the node. |
+-----------------------+------------------------------------------------------------------------+
| Mirantis OpenStack | 8.0, 9.0 |
+-----------------------+------------------------------------------------------------------------+
| Hardware configuration| The hardware configuration (RAM, CPU, disk(s)) required by this plugin |
| | depends on the size of your cloud environment and other factors like |
| | the retention policy. An average setup would require a quad-core |
| | server with 8 GB of RAM and access to a 500-1000 IOPS disk. |
| | See the `InfluxDB Hardware Sizing Guide |
| | <https://docs.influxdata.com/influxdb/v0.10/guides/hardware_sizing/>`_ |
| | for additional sizing information. |
| | |
| | It is highly recommended that you use a dedicated disk for your data |
| | storage. Otherwise, the InfluxDB-Grafana Plugin will use the root |
| | file system by default. |
+-----------------------+------------------------------------------------------------------------+


@ -3,49 +3,67 @@
Troubleshooting
---------------
If you get no data in Grafana, follow these troubleshooting tips.
If Grafana contains no data, use the following troubleshooting tips:
#. First, check that the LMA Collector is running properly by following the
LMA Collector troubleshooting instructions in the
`LMA Collector Fuel Plugin User Guide <http://fuel-plugin-lma-collector.readthedocs.org/>`_.
#. Verify that the StackLight Collector is running properly by following the
troubleshooting instructions in the
`StackLight Collector Fuel plugin documentation <http://fuel-plugin-lma-collector.readthedocs.org/>`_.
#. Check that the nodes are able to connect to the InfluxDB cluster via the VIP address
(see above how to get the InfluxDB cluster VIP address) on port *8086*::
#. Verify that the nodes are able to connect to the InfluxDB cluster through
the VIP address (see the *Verify InfluxDB* section for instructions on how
to get the InfluxDB cluster VIP address) on port *8086*:
root@node-2:~# curl -I http://<VIP>:8086/ping
.. code-block:: console
The server should return a 204 HTTP status::
root@node-2:~# curl -I http://<VIP>:8086/ping
HTTP/1.1 204 No Content
Request-Id: cdc3c545-d19d-11e5-b457-000000000000
X-Influxdb-Version: 0.10.0
Date: Fri, 12 Feb 2016 15:32:19 GMT
The server should return a 204 HTTP status:
#. Check that InfluxDB cluster VIP address is up and running::
.. code-block:: console
root@node-1:~# crm resource status vip__influxdb
resource vip__influxdb is running on: node-1.test.domain.local
HTTP/1.1 204 No Content
Request-Id: cdc3c545-d19d-11e5-b457-000000000000
X-Influxdb-Version: 0.10.0
Date: Fri, 12 Feb 2016 15:32:19 GMT
#. Check that the InfluxDB service is started on all nodes of the cluster::
#. Verify that the InfluxDB cluster VIP address is up and running:
root@node-1:~# service influxdb status
influxdb Process is running [ OK ]
.. code-block:: console
#. If not, (re)start it::
root@node-1:~# crm resource status vip__influxdb
resource vip__influxdb is running on: node-1.test.domain.local
root@node-1:~# service influxdb start
Starting the process influxdb [ OK ]
influxdb process was started [ OK ]
#. Verify that the InfluxDB service is running on all nodes of the cluster:
#. Check that Grafana server is running::
.. code-block:: console
root@node-1:~# service grafana-server status
* grafana is running
root@node-1:~# service influxdb status
influxdb Process is running [ OK ]
#. If not, (re)start it::
#. If the InfluxDB service is not running, restart it:
root@node-1:~# service grafana-server start
* Starting Grafana Server
.. code-block:: console
#. If none of the above solves the problem, check the logs in ``/var/log/influxdb/influxdb.log``
and ``/var/log/grafana/grafana.log`` to find out what might have gone wrong.
root@node-1:~# service influxdb start
Starting the process influxdb [ OK ]
influxdb process was started [ OK ]
#. Verify that the Grafana server is running:
.. code-block:: console
root@node-1:~# service grafana-server status
* grafana is running
#. If the Grafana server is not running, restart it:
.. code-block:: console
root@node-1:~# service grafana-server start
* Starting Grafana Server
#. If none of the above solves the issue, look for errors in the following log
files:
* InfluxDB -- ``/var/log/influxdb/influxdb.log``
* Grafana -- ``/var/log/grafana/grafana.log``
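For example, to display the most recent error messages from both services
(this is only one possible way to filter the logs):

.. code-block:: console

root@node-1:~# grep -i error /var/log/influxdb/influxdb.log | tail -n 20
root@node-1:~# grep -i error /var/log/grafana/grafana.log | tail -n 20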


@ -3,215 +3,193 @@
Exploring your time-series with Grafana
---------------------------------------
The InfluxDB-Grafana Plugin comes with a collection of predefined
dashboards you can use to visualize the time-series stored in InfluxDB.
The InfluxDB-Grafana Plugin comes with a collection of predefined dashboards
you can use to visualize the time-series stored in InfluxDB.
Please check the LMA Collector documentation for a complete list of all the
`metrics time-series <http://fuel-plugin-lma-collector.readthedocs.org/en/latest/appendix_b.html>`_
that are collected and stored in InfluxDB.
For a complete list of all the metrics time-series that are collected and
stored in InfluxDB, see the `List of metrics` section of the
`StackLight Collector documentation <http://fuel-plugin-lma-collector.readthedocs.org/en/latest/>`_.
The Main Dashboard
The Main dashboard
++++++++++++++++++
We suggest you start with the **Main Dashboard**, as shown
below, as an entry to the other dashboards.
The **Main Dashboard** provides a single pane of glass from where you can visualize the
overall health status of your OpenStack services such as Nova and Cinder
but also HAProxy, MySQL and RabbitMQ to name a few..
We recommend that you start with the **Main dashboard**, as shown below, as an
entry to other dashboards. The **Main dashboard** provides a single pane of
glass from where you can visualize the overall health status of your OpenStack
services, such as Nova, Cinder, HAProxy, MySQL, RabbitMQ, and others.
.. image:: ../images/grafana_main.png
:width: 800
:width: 450pt
As you can see, the **Main Dashboard** (as most dashboards) provides
a drop down menu list in the upper left corner of the window
from where you can pick a particular metric dimension such as
the *controller name* or the *device name* you want to select.
The **Main dashboard**, like most dashboards, provides a drop-down menu in the
upper left corner from where you can pick a particular metric dimension, such
as the *controller name* or the *device name* you want to select.
In the example above, the system metrics of *node-48* are
being displayed in the dashboard.
In the example above, the dashboard displays the system metrics of *node-48*.
Within the **OpenStack Services** row, each of the services
represented can be assigned five different status.
Within the **OpenStack Services** section, each of the services represented
can be assigned five different statuses.
.. note:: The precise determination of a service health status depends
on the correlation policies implemented for that service by a `Global Status Evaluation (GSE)
plugin <http://fuel-plugin-lma-collector.readthedocs.org/en/latest/alarms.html#cluster-policies>`_.
.. note:: The precise determination of a service health status depends on the
correlation policies implemented for that service by a Global Status
Evaluation (GSE) plugin. See the `Configuring alarms` section in the
`StackLight Collector documentation <http://fuel-plugin-lma-collector.readthedocs.org/en/latest/>`_.
The meaning associated with a service health status is the following:
The service health statuses can be as follows:
- **Down**: One or several primary functions of a service
cluster has failed. For example,
all API endpoints of a service cluster like Nova
or Cinder are failed.
- **Critical**: One or several primary functions of a
service cluster are severely degraded. The quality
of service delivered to the end-user should be severely
impacted.
- **Warning**: One or several primary functions of a
service cluster are slightly degraded. The quality
of service delivered to the end-user should be slightly
impacted.
- **Unknown**: There is not enough data to infer the actual
health status of a service cluster.
- **Okay**: None of the above was found to be true.
* **Down**: One or several primary functions of a service cluster have failed.
For example, all API endpoints of a service cluster, such as Nova or Cinder,
have failed.
* **Critical**: One or several primary functions of a service cluster are
severely degraded. The quality of service delivered to the end user is
severely impacted.
* **Warning**: One or several primary functions of a service cluster are
slightly degraded. The quality of service delivered to the end user is
slightly impacted.
* **Unknown**: There is not enough data to infer the actual health status of a
service cluster.
* **Okay**: None of the above was found to be true.
The **Virtual Compute Resources** row provides an overview of
the amount of virtual resources being used by the compute nodes
including the number of virtual CPUs, the amount of memory
and disk space being used as well as the amount of virtual
resources remaining available to create new instances.
The **Virtual compute resources** section provides an overview of the amount
of virtual resources being used by the compute nodes including the number of
virtual CPUs, the amount of memory and disk space being used, as well as the
amount of virtual resources remaining available to create new instances.
The "System" row provides an overview of the amount of physical
resources being used on the control plane (the controller cluster).
You can select a specific controller using the
controller's drop down list in the left corner of the toolbar.
The **System** section provides an overview of the amount of physical
resources being used on the control plane (the controller cluster). You can
select a specific controller using the controller's drop-down list in the left
corner of the toolbar.
The "Ceph" row provides an overview of the resources usage
and current health status of the Ceph cluster when it is deployed
in the OpenStack environment.
The **Ceph** section provides an overview of the resources usage and current
health status of the Ceph cluster when it is deployed in the OpenStack
environment.
The **Main Dashboard** is also an entry point to access more detailed
dashboards for each of the OpenStack services that are monitored.
For example, if you click on the *Nova box*, the **Nova
Dashboard** is displayed.
The **Main dashboard** is also an entry point to access more detailed
dashboards for each of the OpenStack services that are monitored. For example,
if you click the **Nova** box, the **Nova dashboard** is displayed.
.. image:: ../images/grafana_nova.png
:width: 800
:width: 450pt
The Nova dashboard
++++++++++++++++++
The **Nova Dashboard** provides a detailed view of the
Nova service's related metrics.
The **Nova** dashboard provides a detailed view of the metrics related to the
Nova service and consists of the following sections:
The **Service Status** row provides information about the Nova service
cluster health status as a whole including the status of the API frontend
(the HAProxy public VIP), a counter of HTTP 5xx errors,
the HTTP requests response time and status code.
**Service status** -- information about the overall health status of the Nova
service cluster, including the status of the API front end (the HAProxy
public VIP), a counter of HTTP 5xx errors, and the HTTP request response
times and status codes.
The **Nova API** row provides information about the current health status of
the API backends (nova-api, ec2-api, ...).
**Nova API** -- information about the current health status of the API
back ends, for example, nova-api, ec2-api, and others.
The **Nova Services** row provides information about the current and
historical status of the Nova *workers*.
**Nova services** -- information about the current and historical status
of the Nova *workers*.
The **Instances** row provides information about the number of active
instances in error and instances creation time statistics.
**Instances** -- information about the number of active instances in
error and instances creation time statistics.
The **Resources** row provides various virtual resources usage indicators.
**Resources** -- various virtual resources usage indicators.
Self-monitoring dashboards
++++++++++++++++++++++++++
The first **Self-Monitoring Dashboard** was introduced in LMA 0.8.
The intent of the self-monitoring dashboards is to bring operational
insights about how the monitoring system itself (the toolchain) performs overall.
The **Self-Monitoring** dashboard brings operational insights about the
overall monitoring system (the toolchain) performance. It provides information
about the *hekad* and *collectd* processes. In particular, the
**Self-Monitoring** dashboard provides information about the amount of system
resources consumed by these processes, the time allocated to the Lua plugins
running within *hekad*, the number of messages being processed, and the time
it takes to process those messages.
The **Self-Monitoring Dashboard**, provides information about the *hekad*
and *collectd* processes.
In particular, it gives information about the amount of system resources
consumed by these processes, the time allocated to the Lua plugins
running within *hekad*, the amount of messages being processed and
the time it takes to process those messages.
You can select a particular node view using the drop-down menu.
Again, it is possible to select a particular node view using the drop down
menu list.
Since StackLight 0.9, there are two new dashboards:
With LMA 0.9, we have introduced two new dashboards.
#. The **Elasticsearch Cluster Dashboard** provides information about
the overall health status of the Elasticsearch cluster including
the state of the shards, the number of pending tasks and various resources
usage metrics.
#. The **InfluxDB Cluster Dashboard** provides statistics about the InfluxDB
processes running in the InfluxDB cluster including various resources usage metrics.
* The **Elasticsearch Cluster** dashboard provides information about the
overall health status of the Elasticsearch cluster including the state of
the shards, the number of pending tasks, and various resources usage metrics.
* The **InfluxDB Cluster** dashboard provides statistics about the InfluxDB
processes running in the InfluxDB cluster including various resources usage
metrics.
The hypervisor dashboard
++++++++++++++++++++++++
LMA 0.9 introduces a new **Hypervisor Dashboard** which brings operational
insights about the virtual instances managed through *libvirt*.
As shown in the figure below, the **Hypervisor Dashboard** assembles a
view of various *libvirt* metrics. A dropdown menu list allows to pick
a particular instance UUID running on a particular node. In the
example below, the metrics for the instance id *ba844a75-b9db-4c2f-9cb9-0b083fe03fb7*
running on *node-4* are displayed.
The **Hypervisor** dashboard brings operational insights about the virtual
instances managed through *libvirt*. As shown in the figure below, the
**Hypervisor** dashboard assembles a view of various *libvirt* metrics. Use
the drop-down menu to pick a particular instance UUID running on a particular
node. The example below shows the metrics for the instance ID
``ba844a75-b9db-4c2f-9cb9-0b083fe03fb7`` running on *node-4*.
.. image:: ../images/grafana_hypervisor.png
:width: 800
:width: 450pt
Check the LMA Collector documentation for additional information about the
`*libvirt* metrics <http://fuel-plugin-lma-collector.readthedocs.org/en/latest/appendix_b.html#libvirt>`_
that are displayed in the **Hypervisor Dashboard**.
For additional information on the *libvirt* metrics that are displayed in the
**Hypervisor** dashboard, see the `List of metrics` section of the
`StackLight Collector documentation <http://fuel-plugin-lma-collector.readthedocs.org/en/latest/>`_.
Other dashboards
++++++++++++++++
In total there are 19 different dashboards you can use to
explore different time-series facets of your OpenStack environment.
There are 19 different dashboards in total that you can use to explore
different time-series facets of your OpenStack environment.
Viewing faults and anomalies
++++++++++++++++++++++++++++
The LMA Toolchain is capable of detecting a number of service-affecting
conditions such as the faults and anomalies that occured in your OpenStack
environment.
Those conditions are reported in annotations that are displayed in
Grafana. The Grafana annotations contain a textual
representation of the alarm (or set of alarms) that were triggered
by the Collectors for a service.
In other words, the annotations contain valuable insights
that you could use to diagnose and
troubleshoot problems. Furthermore, with the Grafana annotations,
the system makes a distinction between what is estimated as a
direct root cause versus what is estimated as an indirect
root cause. This is internally represented in a dependency graph.
There are first degree dependencies used to describe situations
whereby the health status of an entity
strictly depends on the health status of another entity. For
example Nova as a service has first degree dependencies
with the nova-api endpoints and the nova-scheduler workers. But
there are also second degree dependencies whereby the health
status of an entity doesn't strictly depends on the health status
of another entity, although it might, depending on other operations
being performed. For example, by default we declared that Nova
has a second degree dependency with Neutron. As a result, the
health status of Nova will not be directly impacted by the health
status of Neutron but the annotation will provide
a root cause analysis hint. Let's assume a situation
where Nova has changed from *okay* to *critical* status (because of
5xx HTTP errors) and that Neutron has been in *down* status for a while.
In this case, the Nova dashboard will display an annotation showing that
Nova has changed to a *warning* status because the system has detected
5xx errors and that it may be due to the fact that Neutron is *down*.
An example of what an annotation looks like is shown below.
conditions, such as the faults and anomalies that occurred in your OpenStack
environment. These conditions are reported in annotations that are displayed in
Grafana. The Grafana annotations contain a textual representation of the alarm
(or set of alarms) that were triggered by the Collectors for a service.
In other words, the annotations contain valuable insights that you can use to
diagnose and troubleshoot issues. Furthermore, with the Grafana annotations,
the system makes a distinction between what is estimated as a direct root
cause versus what is estimated as an indirect root cause. This is internally
represented in a dependency graph. First-degree dependencies describe
situations whereby the health status of an entity strictly depends on
the health status of another entity. For example, Nova as a service has
first-degree dependencies with the nova-api endpoints and the nova-scheduler
workers. But there are also second-degree dependencies, whereby the health
status of an entity does not strictly depend on the health status of another
entity, although it might, depending on other operations being performed. For
example, by default, we declared that Nova has a second-degree dependency with
Neutron. As a result, the health status of Nova will not be directly impacted
by the health status of Neutron, but the annotation will provide a root cause
analysis hint. Consider a situation where Nova has changed from *okay* to
the *critical* status (because of 5xx HTTP errors) and Neutron has been
in the *down* status for a while. In this case, the Nova dashboard will
display an annotation showing that Nova has changed to a *warning* status
because the system has detected 5xx errors and that it may be due to the fact
that Neutron is *down*. Below is an example of an annotation, which shows that
the health status of Nova is *down* because there is no *nova-api* service
back end (viewed from HAProxy) that is *up*.
.. image:: ../images/grafana_nova_annot.png
:width: 800
This annotation shows that the health status of Nova is *down*
because there is no *nova-api* service backend (viewed from HAProxy)
that is *up*.
:width: 450pt
Hiding nodes from dashboards
++++++++++++++++++++++++++++
When you remove a node from the environment, it is still displayed in
the 'server' and 'controller' drop-down lists. To hide it from the list
you need to edit the associated InfluxDB query in the *templating* section.
For example, if you want to remove *node-1*, you need to add the following
condition to the *where* clause::
When you remove a node from the environment, it is still displayed in the
:guilabel:`server` and :guilabel:`controller` drop-down lists. To hide it from
the list, edit the associated InfluxDB query in the *Templating* section. For
example, if you want to remove *node-1*, add the following condition to the
*where* clause::
and hostname != 'node-1'
.. image:: ../images/remove_controllers_from_templating.png
:width: 450pt
If you want to hide more than one node you can add more conditions like this::
To hide more than one node, add more conditions. For example::
and hostname != 'node-1' and hostname != 'node-2'
This should be done for all dashboards that display the deleted node and you
need to save them afterwards.
Perform these actions for all dashboards that display the deleted node and
save them afterward.
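The exact templating query differs from one dashboard to another, so the
following is only an illustration of where the condition goes; the
measurement name is hypothetical::

SHOW TAG VALUES FROM cpu_idle WITH KEY = hostname WHERE hostname != 'node-1'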


@ -3,90 +3,100 @@
Plugin verification
-------------------
Be aware that depending on the number of nodes and deployment setup,
deploying a Mirantis OpenStack environment can typically take anything
from 30 minutes to several hours. But once your deployment is complete,
you should see a notification message indicating that you deployment
successfully completed as in the figure below.
Depending on the number of nodes and deployment setup, deploying a Mirantis
OpenStack environment may take 30 minutes to several hours. Once the
deployment is complete, you should see a deployment success notification
with a link to the Grafana web UI as shown below.
.. image:: ../images/deployment_notification.png
:width: 800
:width: 460pt
Verifying InfluxDB
~~~~~~~~~~~~~~~~~~
Verify InfluxDB
+++++++++++++++
You should verify that the InfluxDB cluster is running properly.
First, you need first to retreive the InfluxDB cluster VIP address.
Here is how to proceed.
To verify that the InfluxDB cluster is running properly, first retrieve the
InfluxDB cluster VIP address:
#. On the Fuel Master node, find the IP address of a node where the InfluxDB
server is installed using the following command::
server is installed using the :command:`fuel nodes` command. For example:
[root@fuel ~]# fuel nodes
id | status | name | cluster | ip | mac | roles |
---|----------|------------------|---------|------------|-----|------------------|
1 | ready | Untitled (fa:87) | 1 | 10.109.0.8 | ... | influxdb_grafana |
2 | ready | Untitled (12:aa) | 1 | 10.109.0.3 | ... | influxdb_grafana |
3 | ready | Untitled (4e:6e) | 1 | 10.109.0.7 | ... | influxdb_grafana |
.. code-block:: console
[root@fuel ~]# fuel nodes
id | status | name | cluster | ip | mac | roles |
---|----------|------------------|---------|------------|-----|------------------|
1 | ready | Untitled (fa:87) | 1 | 10.109.0.8 | ... | influxdb_grafana |
2 | ready | Untitled (12:aa) | 1 | 10.109.0.3 | ... | influxdb_grafana |
3 | ready | Untitled (4e:6e) | 1 | 10.109.0.7 | ... | influxdb_grafana |
#. Then `ssh` to anyone of these nodes (ex. *node-1*) and type the command::
#. Log in to any of these nodes through SSH, for example, to ``node-1``.
#. Run the following command:
root@node-1:~# hiera lma::influxdb::vip
10.109.1.4
.. code-block:: console
This tells you that the VIP address of your InfluxDB cluster is *10.109.1.4*.
root@node-1:~# hiera lma::influxdb::vip
10.109.1.4
#. With that VIP address type the command::
Where ``10.109.1.4`` is the virtual IP address (VIP) of your InfluxDB
cluster.
root@node-1:~# /usr/bin/influx -database lma -password lmapass \
--username root -host 10.109.1.4 -port 8086
Visit https://enterprise.influxdata.com to register for updates,
InfluxDB server management, and monitoring.
Connected to http://10.109.1.4:8086 version 0.10.0
InfluxDB shell 0.10.0
>
#. Using that VIP address, run the following command:
As you can see, executing */usr/bin/influx* will start an interactive CLI and automatically connect to
the InfluxDB server. Then if you type::
.. code-block:: console
> show series
root@node-1:~# /usr/bin/influx -database lma -password lmapass \
--username root -host 10.109.1.4 -port 8086
Visit https://enterprise.influxdata.com to register for updates,
InfluxDB server management, and monitoring.
Connected to http://10.109.1.4:8086 version 0.10.0
InfluxDB shell 0.10.0
>
You should see a dump of all the time-series collected so far.
Then, if you type::
The example above shows that executing ``/usr/bin/influx`` starts an
interactive CLI and automatically connects to the InfluxDB server. Then
run the following command:
> show servers
name: data_nodes
----------------
id http_addr tcp_addr
1 node-1:8086 node-1:8088
3 node-2:8086 node-2:8088
5 node-3:8086 node-3:8088
.. code-block:: console
name: meta_nodes
----------------
id http_addr tcp_addr
1 node-1:8091 node-1:8088
2 node-2:8091 node-2:8088
4 node-3:8091 node-3:8088
> show series
You should see a list of the nodes participating in the `InfluxDB cluster
<https://docs.influxdata.com/influxdb/v0.10/guides/clustering/>`_ with their roles (data or meta).
You should see a dump of all the time-series collected so far. Then run:
.. code-block:: console
Verifying Grafana
~~~~~~~~~~~~~~~~~
> show servers
name: data_nodes
----------------
id http_addr tcp_addr
1 node-1:8086 node-1:8088
3 node-2:8086 node-2:8088
5 node-3:8086 node-3:8088
From the Fuel dDashboard, click on the **Grafana** link (or enter the IP address
and port number if your DNS is not setup).
The first time you access Grafana, you are requested to
authenticate using your credentials.
name: meta_nodes
----------------
id http_addr tcp_addr
1 node-1:8091 node-1:8088
2 node-2:8091 node-2:8088
4 node-3:8091 node-3:8088
You should see a list of nodes participating in the `InfluxDB cluster
<https://docs.influxdata.com/influxdb/v0.10/guides/clustering/>`_ with
their roles (data or meta).
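Optionally, you can also check that metric points are being written by
querying one of the time-series returned by ``show series``. The measurement
name below is only an example:

.. code-block:: console

> select * from cpu_idle limit 5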
Verify Grafana
++++++++++++++
#. Log in to the Fuel web UI.
#. In the :guilabel:`Dashboard` tab, click :guilabel:`Grafana`. If your DNS is
not set up, enter the IP address and the port number.
#. Authenticate using your credentials.
.. image:: ../images/grafana_login.png
:width: 800
:width: 400pt
Then you should be redirected to the *Grafana Home Page*
from where you can select a dashboard as shown below.
You should be redirected to the :guilabel:`Grafana Home Page` where you can
select a dashboard as shown below.
.. image:: ../images/grafana_home.png
:width: 800
:width: 400pt