Elasticsearch moved to the fuel-ccp-elasticsearch repo; due
to collisions between similar Docker image names, remove
the Elasticsearch code from here.
Change-Id: Ia8c74a335ffe9355e4c033d0998080ce56fb1d8f
Depends-On: Ic39eb474f42b25e55772cb95edd362e4be5623c3
We want to align with the k8s definitions, and it would be more
convenient to have one parameter instead of several flags.
Change-Id: I378f7fd3e89ac12e9f6d16fca3591d09ff33d4f9
Now we have a "node_name" variable to render config files
Change-Id: I6ff108cf769da846fd878a3f0fb221df2854917f
Depends-On: I8ebbbd94803ccb9a8d13eede2db7db8b13673937
alarm-manager is responsible for watching a configurable
location within the filesystem where the user can put
a YAML file which defines alarms. When a change or creation
is detected, the YAML file is checked for proper contents,
and if verification is successful, Lua code is generated
as well as Lua configuration files. Hindsight will pick
up those changes after a certain period of time and
provide proper alarming to the platform.
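A minimal sketch of the verification step, assuming the YAML file has already been parsed into a dict (in practice a library such as PyYAML would do the parsing); the field names below are illustrative, not the plugin's actual schema:

```python
# Sketch of the alarm-manager verification step. The required field
# names are hypothetical; only the parsed-dict validation is shown.

REQUIRED_FIELDS = {"name", "metric", "threshold"}

def validate_alarm(alarm: dict) -> bool:
    """Return True if one alarm definition has the required shape."""
    if not REQUIRED_FIELDS.issubset(alarm):
        return False
    # The threshold must be numeric to be usable in generated Lua code.
    return isinstance(alarm["threshold"], (int, float))

def validate_alarms(doc: dict) -> bool:
    """Validate a whole document: a top-level 'alarms' list."""
    alarms = doc.get("alarms")
    return isinstance(alarms, list) and all(
        validate_alarm(a) for a in alarms
    )
```

Only after such a check passes would the Lua code and configuration files be generated.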
Change-Id: I7b2b98f379c49bdbf23177a038bdca9433d1c6e5
Since the yamllint utility is now run on each YAML file,
this change fixes the current error for a line being too
long.
Change-Id: If35a7fc823c5d13a0dbdade6900be6ee51b1a28f
This commit adds Lua code for generating AFD (Anomaly and
Fault Detection) metrics based on the evaluation of alarms.
The Lua code was copied from the lma_collector Fuel plugin
[*], with changes to accommodate Hindsight and the versions
of lua_sandbox and lua_sandbox_extensions we rely on.
In the future we plan to move this Lua code into its own Git
repository. The Hindsight Dockerfile will then install the
Lua code in the image using Debian packages.
The afd_node_default_cpu_alarms.lua and
hindsight_afd_node_default_cpu_alarms.cfg.j2 files will be
removed. Instead the operator will configure alarms through
a YAML file, and we will use a sidecar container for
generating Lua tables including alarm definitions and
corresponding plugin configuration files.
[*] https://github.com/openstack/fuel-plugin-lma-collector/
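For illustration, an operator-provided alarm file could look like the fragment below; the field names are modeled on the lma_collector alarm format and should be treated as an assumption, not the final schema:

```yaml
# Hypothetical operator-provided alarm definition.
alarms:
  - name: cpu-critical
    severity: critical
    trigger:
      rules:
        - metric: cpu_idle
          relational_operator: '<'
          threshold: 5
          window: 120
          function: avg
```

The sidecar container would turn such a file into Lua tables and the matching plugin configuration files.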
Change-Id: If182c3a6453f7bf8b72f03af56a14ace109eaa68
This commit adds support for metrics with multiple values.
Multi-value metrics will for example be needed for alarming.
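As an illustrative sketch (the names are hypothetical, not the actual Hindsight message schema), a multi-value metric can carry a map of named values instead of a single one:

```python
# Hypothetical multi-value metric container; the real messages flow
# through the Hindsight pipeline with their own field layout.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Metric:
    name: str
    values: Dict[str, float] = field(default_factory=dict)

def value(metric: Metric, key: str = "value") -> float:
    """Look up one named value; a single-value metric has one entry."""
    return metric.values[key]

# Example: a load metric with three time horizons, as alarming
# rules may need to look at more than one value at once.
load = Metric("system_load",
              {"shortterm": 0.4, "midterm": 0.6, "longterm": 0.9})
```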
Change-Id: I496fa1925c389f2638cf9b99243fbf45d7d2dad7
Note that we cannot gather SMART information from the host
within VMs.
See https://www.smartmontools.org/wiki/FAQ
and more particularly the
"Do smartctl and smartd run on a virtual machine guest OS?"
sub-entry.
Change-Id: Idee7d48e45a5a388061d196d1e07c55404780085
Until we find a better official and publicly available
location for hosting these binaries, they will be hosted on
bintray.com
This should change when/if Intel provides access
to nightly binary builds. Please note that the binaries
have been produced using Intel's build scripts.
The snap task file has been updated to take into account
the fact that CPU metrics are now dynamic and that, due to
snap framework issue #1144, you cannot request a specific
instance of dynamic metrics.
The Grafana system dashboard has been updated to comply with
the snap task change above
Change-Id: I76a2eac0497c8e2024234aab5e117d173e136049
This commit fixes a bug where hostnames were not correct in
metrics collected by Snap and Hindsight. It relies on
Kubernetes' downward API and the spec.nodeName field [1].
The latter is only supported by Kubernetes 1.4 and higher,
and the deployment of stacklight-collector pods will fail if
Kubernetes 1.3 or lower is used.
[1] <https://github.com/kubernetes/kubernetes/pull/27880>
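The downward API usage referenced above looks roughly like the pod spec fragment below; the environment variable name is an assumption, only the fieldRef matters:

```yaml
spec:
  containers:
    - name: stacklight-collector
      env:
        # Exposes the name of the node the pod is scheduled on,
        # so collectors can report the correct hostname.
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
```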
Change-Id: I73cd35803a2201a09144bf925753156e47489cff
Depends-On: I293bb3aa113883c02f2e738f9d74291bf2f23d95
Partial-Bug: #1614484
This commit adds an input Hindsight plugin that scrapes the
Kubelet stats API at a regular time interval. This is to
collect system metrics (CPU usage, etc.) relative to pods
running on the cluster. The metrics created by the plugin
are injected into the Hindsight pipeline, and then read by
the InfluxDB plugin which sends them to InfluxDB for
storage.
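A rough sketch of the transform step, assuming the kubelet stats summary payload has already been fetched and JSON-decoded; the podRef/cpu/memory field names follow the kubelet summary API, while the output metric names are illustrative (the real plugin is Lua running inside Hindsight):

```python
# Sketch: flatten a kubelet stats summary into (name, value, tags)
# tuples ready for injection into the pipeline.

def pod_metrics(summary: dict) -> list:
    out = []
    for pod in summary.get("pods", []):
        tags = {
            "pod_name": pod["podRef"]["name"],
            "namespace": pod["podRef"]["namespace"],
        }
        cpu = pod.get("cpu", {})
        if "usageNanoCores" in cpu:
            out.append(("pod_cpu_usage_nanocores",
                        cpu["usageNanoCores"], tags))
        mem = pod.get("memory", {})
        if "workingSetBytes" in mem:
            out.append(("pod_memory_working_set_bytes",
                        mem["workingSetBytes"], tags))
    return out

# Minimal example payload in the summary API shape.
sample = {"pods": [{"podRef": {"name": "grafana-1", "namespace": "ccp"},
                    "cpu": {"usageNanoCores": 120000},
                    "memory": {"workingSetBytes": 4096}}]}
metrics = pod_metrics(sample)
```

The resulting tuples would then be serialized into Hindsight messages and picked up by the InfluxDB output plugin.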
Change-Id: I0b39d416ebc4e8090a959267d6fc813ddab2674a
Currently the grafana-configure.sh script is executed in
a job. So if the Grafana Pod is re-created all the
pre-configured Grafana dashboards are lost. This commit
fixes the bug by using a "local" post command instead of
a "single" post command for the execution of
grafana-configure.sh.
Change-Id: I9368b010da684ac7f2c352b920b7df1785fd0e78
Having a "process" node in a "collect" node is not
mandatory. This commit removes the "process" node
and places the "publish" node directly in the
"collect" node.
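The resulting task workflow looks roughly like the fragment below; the metric namespace and publisher config are illustrative, only the collect/publish nesting is the point:

```yaml
workflow:
  collect:
    metrics:
      /intel/procfs/cpu/*/user_percentage: {}
    # No intermediate "process" node: publish sits directly
    # under "collect".
    publish:
      - plugin_name: influxdb
```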
Change-Id: I482b0ea37f80dd7f188ef38467f9ab41c7e5221e
Note that this requires the snap framework commit ID
to be updated to a later version so that this effectively
works properly.
Change-Id: Ie3963834d875c40ff0e34760d82e908365feaae0
This is to be consistent with the names used for the
"keystone-db-create" pre job and the others of the same
type.
Change-Id: I0c3ef71b77b82589e903923c7d0ccaa9bde0bfe8
This commit updates Hindsight, lua_sandbox and
lua_sandbox_extensions.
For lua_sandbox a tag/version is now used (v1.0.3).
Change-Id: Ie5eaf8ecbeb1bfe77600fe553c904c8031895ede
Since OVS logs to stderr too and it's not an OpenStack
service, I have to create a multidecoder for dockerlogs.
Using it, I can try different decoders on each log string
to find the right one.
It's the only option available to us right now, since the
Dockerlogs plugin doesn't support filtering Docker containers.
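The multidecoder idea can be sketched as follows; the decoder callables and the None-on-failure convention are assumptions for illustration, the real decoders are Lua plugins running inside Hindsight:

```python
# Sketch: try each decoder on a log line until one succeeds.

def make_multidecoder(decoders):
    def decode(line):
        for decoder in decoders:
            msg = decoder(line)
            if msg is not None:
                return msg
        return None  # no decoder matched this line
    return decode

# Two toy decoders: one for OpenStack-style log lines, one for
# OVS output on stderr. Matching rules here are deliberately crude.
def openstack_decoder(line):
    return {"format": "openstack", "raw": line} if " INFO " in line else None

def ovs_decoder(line):
    return {"format": "ovs", "raw": line} if line.startswith("ovs|") else None

decode = make_multidecoder([openstack_decoder, ovs_decoder])
```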
Change-Id: I53ab2beb49e5847c3e17e443a7838b0448cb066f
I think it's better to always use "mysql" as the name for
MySQL logs for all MySQL forks (MariaDB, Percona, etc.),
since we could use any of these available forks.
Change-Id: Ia6dc753908ee9986904a2255b13766835013f208