This change upgrades the package version to 4.0.0 to use the reexecute_on
option.
All tasks defined in tasks.yaml are moved to deployment_tasks.yaml
to leverage this new feature.
The deployment order cannot be set with 'priority', but the only
requirement this plugin has is to be deployed before lma_collector,
which is always the case as long as lma_collector is deployed at
post_deployment time.
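A deployment_tasks.yaml entry using this option could look like the
following sketch (the task id, role and manifest paths are illustrative,
not taken from the actual plugin):

```yaml
# Hypothetical task entry; ids, roles and paths are illustrative.
- id: elasticsearch-kibana-configure
  type: puppet
  role: [elasticsearch_kibana]
  requires: [deploy_start]
  required_for: [deploy_end]
  reexecute_on: [deploy_changes]  # re-run this task when the environment is redeployed
  parameters:
    puppet_manifest: puppet/manifests/elasticsearch.pp
    puppet_modules: puppet/modules
    timeout: 600
```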
Change-Id: I08259f1646122aed0674610ddaf7a327d31b9a1a
All Elasticsearch templates and Kibana dashboard imports are performed only
once from the primary-elasticsearch_kibana node.
Implements: blueprint elasticsearch-clustering
Change-Id: Ifa605c9dcdc603080b8adb80dbc71f2796cdf34b
- Add firewall rules for corosync communication
- Use a dedicated cluster.pp manifest to allow the deployment of
  coexisting clusters for LMA plugins
- Use primary role property
Implements: blueprint elasticsearch-clustering
Change-Id: Ibf4c1c4e62f214725875869621b40a3ef4c20e53
Before creating an index, the cluster must be ready ('green' status).
For example, if the number of replicas cannot be honored, the dashboard
import fails with the error:
{"error":"UnavailableShardsException[[kibana-int][2] Not enough active
copies to meet write consistency of [QUORUM] (have 1, needed 3)
This patch also increases the global timeout for this particular task.
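The readiness check boils down to polling the Elasticsearch cluster
health API until it reports 'green'. A minimal sketch in Python (the
endpoint, poll interval and timeout are assumptions, not the plugin's
actual values):

```python
import json
import time
import urllib.request

def is_green(health_json):
    """Return True if a cluster health document reports 'green' status."""
    return json.loads(health_json).get("status") == "green"

def wait_for_green(url="http://localhost:9200/_cluster/health", timeout=600):
    """Poll the cluster health API until it reports 'green' or timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if is_green(resp.read().decode()):
                    return True
        except OSError:
            pass  # cluster not reachable yet, keep polling
        time.sleep(5)
    return False
```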
Implements: blueprint elasticsearch-clustering
Change-Id: I9d1c1dcb1c02391ceedb41b7617cdfe5ec812adb
- Use plugin version 3.0.0 and remove compatibility with MOS 6.1
- Leverage common tasks to configure disk and network
- Update README
Change-Id: I7f185de0cfbe3098c9fee2d0f7a792df5b0a95e0
This change adds a post-deployment task that verifies that the plugin
configuration matches the configuration of the environment.
The checks are:
- The plugin is deployed on a bare base-os node.
- The disk configuration is valid.
- The JVM size doesn't exceed the physical RAM.
Change-Id: I6155fbad46502b9cd23a2fa54f3137441da8a270
This change explicitly configures the firewall rules for the node
running the Elasticsearch-Kibana plugin. This is needed because
otherwise the node becomes unavailable on CentOS after a reboot.
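With the puppetlabs-firewall module, such rules could be sketched as
follows (the rule names and the exact port list are assumptions):

```puppet
# Illustrative only: open the Elasticsearch REST (9200) and
# transport (9300) ports plus the Kibana HTTP port.
firewall { '100 allow Elasticsearch HTTP and transport':
  port   => [9200, 9300],
  proto  => 'tcp',
  action => 'accept',
}
firewall { '101 allow Kibana':
  port   => 80,
  proto  => 'tcp',
  action => 'accept',
}
```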
Change-Id: I6b123fac74e01c4cc4b16ff6a9f58654b9366fa1
Now we create a partition /dev/sdb1 instead of using the whole disk.
Also, since we need to reboot the node after modifying the disk
partitions, we moved from fpb 1.0.0 to fpb 2.0.0.
Change-Id: I8b5bdde546858d1e4ad9fc30719415453a7268ab
This module creates a physical volume on each disk passed as an
argument and then creates a logical volume on top of them. It mounts
the directory passed as a parameter. When this module has finished,
the directory is available for Elasticsearch.
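Usage of such a module could look like the following sketch (the class
name and parameter names are hypothetical; only the overall flow,
physical volume, logical volume, filesystem and mount, is taken from
the description above):

```puppet
# Hypothetical interface: create a PV on each disk, one LV on top,
# then mount the resulting filesystem for Elasticsearch data.
class { 'lma_lvm':
  disks      => ['/dev/sdb1'],
  vg_name    => 'es-vg',
  lv_name    => 'es-data',
  fs_type    => 'ext4',
  mount_path => '/opt/es-data',
}
```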
Change-Id: Ieb80cbdb273cc780f6832466400d5712d6aae78c
- Rename elasticsearch_kibana to kibana
- The kibana manifests now hold only the Kibana installation and
  configuration
- Move all Elasticsearch configuration into its own file that will be
  called from tasks.
Change-Id: Ief27cc15b2498330446c1537aa0662bf84333048
Current limitations:
- Kibana is not installed yet.
- User must provide dedicated disk(s) for data storage.
- CentOS not supported yet.
Change-Id: Iac7e0fea4d8e1649452f232a7ca096bb7dfe28a8