This change upgrades the version of elasticsearch-curator because the
previous version (3.3.0) doesn't support Elasticsearch 2.x. As a
consequence, data older than the defined retention period was never
removed from Elasticsearch.
Curator is now installed on all Elasticsearch nodes (previously only on
the primary node) but, by configuration, it runs only on the elected
master node of the ES cluster.
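A minimal sketch of how such a master-only guard could work; the `_cat/master` endpoint is standard Elasticsearch, but the wrapper script below is purely illustrative and not the module's actual mechanism:

```shell
#!/bin/sh
# Illustrative guard: run curator only on the elected master node.
# In practice the master hostname could be fetched with something like:
#   curl -s localhost:9200/_cat/master | awk '{print $2}'
is_master() {
  master_host="$1"   # hostname of the elected master
  local_host="$2"    # hostname of this node
  [ "$master_host" = "$local_host" ]
}

if is_master "node-1" "node-1"; then
  echo "running curator"
fi
```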
Change-Id: I9da9e67fa4d353e78bd752456a9b01ca1fbae704
Closes-Bug: #1616765
Related-Bug: #1602719
In order to fully wrap up the Elasticsearch configuration, including
log management, this commit moves all the Elasticsearch-related code
from the manifest into a new Puppet module, lma_logging_analytics::elasticsearch.
Closes-Bug: #1572929
Change-Id: I3dd6d027d2b1de3d6ae3baa01a92dbca1d0ff95b
Elasticsearch and Nginx can be deployed on any network thanks to network
templates. This change removes all the hard-coded dependencies on the
management network. All port numbers for Nginx and Elasticsearch are
also moved to Hiera to make it easier to customize if needed.
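For example, a deployment could override these ports through Hiera; the key names below are hypothetical, for illustration only:

```yaml
# Hiera override (key names are hypothetical)
lma::elasticsearch::rest_port: 9200
lma::nginx::listen_port: 80
```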
Related-Bug: #1514365
Closes-Bug: #1577358
Change-Id: If3656be46d93418a2f481e740c59ec9df5ce8523
All Elasticsearch template and Kibana dashboard imports are performed only
once, from the primary-elasticsearch_kibana node.
Implements: blueprint elasticsearch-clustering
Change-Id: Ifa605c9dcdc603080b8adb80dbc71f2796cdf34b
- Add firewall rules for Corosync communication
- Use a dedicated cluster.pp manifest to allow the deployment of
  coexisting clusters for the LMA plugins
- Use the primary role property
Implements: blueprint elasticsearch-clustering
Change-Id: Ibf4c1c4e62f214725875869621b40a3ef4c20e53
With this parameter, the cluster recovery starts immediately once all
expected nodes are up. Without it, the recovery waits for
'recover_after_time' as long as the number of running nodes is at least
equal to 'recover_after_nodes'.
Change-Id: Ice2a0b8ae41e9db5307724a8716d25e07b6ee82a
These parameters are configured with default values if not provided:
* number_of_replicas
* minimum_master_nodes
* recover_after_time
* recover_after_nodes
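For reference, these parameters map to standard Elasticsearch settings in elasticsearch.yml; the values below are illustrative for a 3-node cluster, not necessarily the plugin's defaults:

```yaml
# elasticsearch.yml (illustrative values for a 3-node cluster)
discovery.zen.minimum_master_nodes: 2   # quorum to avoid split-brain
gateway.recover_after_nodes: 2          # start recovery once 2 nodes joined...
gateway.recover_after_time: 5m          # ...after waiting up to 5 minutes
index.number_of_replicas: 1
```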
Change-Id: I3f4c1135c61b209e6e7b6160142c147721306cf9
Configure a VIP and a Corosync/Pacemaker cluster. The plugin must
override Hiera data to explicitly set the Corosync node list.
Configure Elasticsearch instances with parameters:
* cluster.name
* unicast discovery
Add firewall rule to allow clustering traffic.
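As an illustration, the resulting Elasticsearch settings could look like this; the cluster name and node addresses are placeholders:

```yaml
# elasticsearch.yml (cluster name and node addresses are placeholders)
cluster.name: lma_cluster
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
# the firewall must allow the transport port (9300 by default) between nodes
```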
Implements: blueprint elasticsearch-clustering
Change-Id: I0636e02113bfdacc776beb20c08cc88308486c29
- Use plugin version 3.0.0 and remove compatibility with MOS 6.1
- Leverage common tasks to configure disk and network
- Update README
Change-Id: I7f185de0cfbe3098c9fee2d0f7a792df5b0a95e0
This change configures a cron job that launches the curator script to
clean up old indices from Elasticsearch. It also configures the number
of replicas to keep per index. Note that the value is always zero for now
since the plugin doesn't support cluster deployment yet.
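For illustration, the cron job could look like the entry below; the schedule and script path are hypothetical, not the plugin's actual values:

```
# /etc/cron.d/curator (illustrative)
0 2 * * * root /usr/local/bin/es-curator-cleanup.sh
```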
Change-Id: Id22ebb949aeda9c90a05a975152762e2fe4b7eea
This change installs the curator package (and its dependencies) on the
Elasticsearch-Kibana node. A follow-up commit will set up the cron job
that cleans up expired data in the Elasticsearch database.
Change-Id: I413957c0b39fc687cb18fc5bed08d270d6ccc3dd
With a queue size of 50, we see a lot of messages like:
EsRejectedExecutionException[rejected execution (queue capacity 50)
So we increase the queue size of the bulk thread pool.
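The corresponding setting in elasticsearch.yml; the value below is illustrative:

```yaml
# elasticsearch.yml: enlarge the bulk thread pool queue (value illustrative)
threadpool.bulk.queue_size: 1000
```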
Change-Id: I4511061dc4681838779a84e362ca2bb26a2f5f9e
By default, the JVM launched by Elasticsearch can consume up to 1G of
RAM. This parameter can now be set in the plugin UI, so the user can
choose up to 32G.
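On Ubuntu packages, the heap size is typically set through ES_HEAP_SIZE in the service defaults file; the value below is illustrative:

```shell
# /etc/default/elasticsearch (value illustrative, chosen from the plugin UI)
ES_HEAP_SIZE=8g
```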
Change-Id: Iefdbb385462b37e5a9d0a9db92f038d029220d92
Since the nodes have access to the Ubuntu repositories, we don't need to
download Ubuntu packages in the pre_build_hook before installing the
plugin.
This also fixes https://bugs.launchpad.net/fuel/+bug/1435892
Change-Id: Idcbffefa4ae46e87a160a327269ac27d2df4ee36
This change makes sure that the required version of the tzdata package
is installed before installing Java. This only affects Ubuntu
deployments.
Change-Id: I8d7dba5b7d47d012707484264adf83e3df5e5274
Currently, Elasticsearch and Kibana are deployed on all nodes that use
the base-os role. This change ensures that the plugin is only executed
on the node selected through the Fuel UI.
Change-Id: I1a621d4b453c703e4ba92ba40c0659ccb2fc041b
This change extends the scope of the kibana Puppet module, so it is
renamed to lma_logging_analytics. In the future, this module may also
set up the clean-up of the Elasticsearch indices.
Change-Id: I8ae3143cd10dc4343c6cd8d7a2452d8bdcf9261b
- Install OpenJDK 1.8.0 headless.
- Manage the /boot partition, which is mounted as RAID 1 across all
  available disks.
Change-Id: Ie346a466132d0dabc2eda61d4ac65ebd50b78222
This module creates physical volumes from the disks passed as arguments,
then creates a logical volume on top of them and mounts it on the
directory passed as a parameter. Once the module has run, the directory
is ready for Elasticsearch.
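A dry-run sketch of the steps involved, printed as commands rather than executed; the volume group, logical volume, mount point, and disk names are all illustrative, not the module's actual parameters:

```shell
#!/bin/sh
# Print the LVM steps the module would perform (dry run; names illustrative).
lvm_plan() {
  vg="$1"; lv="$2"; mnt="$3"; shift 3
  disks="$*"
  echo "pvcreate $disks"                    # one physical volume per disk
  echo "vgcreate $vg $disks"                # group them into a volume group
  echo "lvcreate -l 100%FREE -n $lv $vg"    # one LV spanning the whole VG
  echo "mkfs.ext4 /dev/$vg/$lv"
  echo "mount /dev/$vg/$lv $mnt"            # directory ready for Elasticsearch
}

lvm_plan es-data data /opt/es-data /dev/sdb /dev/sdc
```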
Change-Id: Ieb80cbdb273cc780f6832466400d5712d6aae78c
- Rename elasticsearch_kibana to kibana
- The kibana manifest now only holds the Kibana installation and
  configuration
- Move all Elasticsearch configuration into its own file that is called
  from tasks.
Change-Id: Ief27cc15b2498330446c1537aa0662bf84333048