Juju Charm - HACluster
Overview

The hacluster charm provides high availability for OpenStack applications that lack native (built-in) HA functionality. The clustering solution is based on Corosync and Pacemaker.

It is a subordinate charm that works in conjunction with a principal charm that supports the 'hacluster' interface. The current list of such charms can be obtained from the Charm Store (the charms officially supported by the OpenStack Charms project are published by 'openstack-charmers').

Note: The hacluster charm is generally intended to be used with MAAS-based clouds.

Usage

High availability can be configured in two mutually exclusive ways:

  • virtual IP(s)
  • DNS

The virtual IP method of implementing HA requires that all units of the clustered OpenStack application are on the same subnet.

The DNS method of implementing HA requires that MAAS is used as the backing cloud. The clustered nodes must have static or "reserved" IP addresses registered in MAAS. If using a version of MAAS earlier than 2.3, the DNS hostname(s) should be pre-registered in MAAS before use with DNS HA.
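As a sketch, DNS HA is enabled through configuration options on the principal charm rather than a VIP. Assuming a MAAS-backed cloud, the keystone charm as the principal application, and a hostname keystone.example.com already resolvable via MAAS DNS (all illustrative), a deployment might look like:

```shell
# Hypothetical DNS HA deployment. 'dns-ha' and 'os-public-hostname' are
# options of the principal (keystone) charm, not of hacluster itself.
juju deploy -n 3 --config dns-ha=true \
    --config os-public-hostname=keystone.example.com keystone
juju deploy --config cluster_count=3 hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha
```

Note that the VIP and DNS methods are mutually exclusive: a vip option on the principal charm should not be set when dns-ha is in use.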

Configuration

This section covers common configuration options. See file config.yaml for the full list of options, along with their descriptions and default values.

cluster_count

The cluster_count option sets the number of hacluster units required to form the principal application cluster (the default is 3). It is best practice to provide a value explicitly, as doing so ensures that the hacluster charm will wait until all relations are made to the principal application before building the Corosync/Pacemaker cluster, thereby avoiding a race condition.
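For instance, matching cluster_count to the number of principal units at deploy time (a three-unit cluster here, with an illustrative application name) might look like:

```shell
# Set the expected cluster size explicitly so Corosync/Pacemaker
# is only configured once all three peers have joined.
juju deploy --config cluster_count=3 hacluster keystone-hacluster
```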

Deployment

At deploy time, an application name should be set based on the principal charm name (for organisational purposes):

juju deploy hacluster <principal-charm-name>-hacluster

A relation is then added between the hacluster application and the principal application.

The example below takes the VIP approach. These commands will deploy a three-node Keystone HA cluster with a VIP of 10.246.114.11. Each unit will reside in a LXD container on existing machines 0, 1, and 2:

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.246.114.11 keystone
juju deploy --config cluster_count=3 hacluster keystone-hacluster
juju add-relation keystone-hacluster:ha keystone:ha

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis.

  • pause
  • resume
  • status
  • cleanup

To display action descriptions, run juju actions hacluster. If the charm is not deployed, see file actions.yaml.
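As a sketch of running these actions against a unit (the unit name keystone-hacluster/0 is illustrative), using the Juju 2.x action syntax:

```shell
# Pause cluster services on one unit, then bring them back.
juju run-action --wait keystone-hacluster/0 pause
juju run-action --wait keystone-hacluster/0 resume

# Query the Corosync/Pacemaker status of a unit.
juju run-action --wait keystone-hacluster/0 status
```

The --wait flag blocks until the action completes and prints its result, rather than returning an action ID to query later.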

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.