Juju Charm - Percona XtraDB Cluster

Overview

Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. It integrates Percona Server with the Galera library of MySQL high availability solutions in a single product package, enabling you to create a cost-effective MySQL cluster.

The percona-cluster charm deploys Percona XtraDB Cluster and provides DB services to those charms that support the 'mysql-shared' interface. The current list of such charms can be obtained from the Charm Store (the charms officially supported by the OpenStack Charms project are published by 'openstack-charmers').

Series upgrades

Deprecation of percona-cluster charm on focal series

The eoan series is the last series supported by the percona-cluster charm. It is replaced by the mysql-innodb-cluster and mysql-router charms in the focal series. The migration steps are documented in percona-cluster charm: series upgrade to focal.

Caution: Do not upgrade (to the focal series) the machines hosting percona-cluster units. To be clear, if percona-cluster is containerised then it is the LXD container that must not be upgraded.

Upgrades to non-focal series

The procedure to upgrade to a pre-focal series, and thus to a new Percona version, is documented in the OpenStack Charms Deployment Guide.

Usage

Configuration

This section covers common configuration options. See file config.yaml for the full list of options, along with their descriptions and default values.

max-connections

The max-connections option sets the maximum number of allowed connections. The default is 600. This is an important option and is discussed in the Memory section below.
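
For example, to raise the limit on a running deployment (the value here is illustrative only):

juju config percona-cluster max-connections=2000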

min-cluster-size

The min-cluster-size option sets the number of percona-cluster units required to form its cluster. It is best practice to use this option as doing so ensures that the charm will wait until the cluster is up before accepting relations from other client applications.
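
For example, for a three-unit cluster:

juju config percona-cluster min-cluster-size=3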

nrpe-threads-connected

The nrpe-threads-connected option sets the warning and critical thresholds (in percent) for the NRPE check that monitors the number of threads connected to MySQL. If the nrpe-external-master relation is established, a nagios user with no privileges, able to connect only from localhost, is created before the check itself.
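
A hypothetical example, assuming the value takes the warning and critical percentages in that order:

juju config percona-cluster nrpe-threads-connected="80,90"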

Deployment

To deploy a single percona-cluster unit:

juju deploy percona-cluster

To make use of DB services, simply add a relation between percona-cluster and an application that supports the 'mysql-shared' interface. For instance:

juju add-relation percona-cluster:shared-db keystone:shared-db

Passwords required for the correct operation of the deployment are automatically generated and stored by the application leader. The root password for mysql can be retrieved using the following command:

juju run --unit percona-cluster/0 leader-get root-password

Root user DB access is only usable from within one of the deployed units (access to root is restricted to localhost only).

Cold boot

When the machines hosting percona-cluster units are started, particular steps must be taken in order for the application to assume a clustered and healthy state. This is documented in the OpenStack Charms Deployment Guide.
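
As a rough sketch of the recovery flow using the charm's actions (unit numbers are illustrative; consult the guide for how to identify the most up-to-date unit):

juju run-action --wait percona-cluster/1 bootstrap-pxc
juju run-action --wait percona-cluster/0 notify-bootstrapped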

Limitations

Note that Percona XtraDB Cluster is not a 'scale-out' MySQL solution; reads and writes are channelled through a single service unit and synchronously replicated to other nodes in the cluster; reads/writes are as slow as the slowest node you have in your deployment.

High availability

When more than one unit is deployed, together with the hacluster application, the charm will bring up an HA active/active cluster. The min-cluster-size option should be used (see description above).

To deploy a three-node cluster:

juju deploy -n 3 --config min-cluster-size=3 percona-cluster

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.
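
A minimal sketch of the virtual IP approach (the address and the hacluster application name are illustrative):

juju config percona-cluster vip=10.0.0.100
juju deploy hacluster percona-hacluster
juju add-relation percona-cluster:ha percona-hacluster:ha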

See the OpenStack high availability appendix in the OpenStack Charms Deployment Guide for details.

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run juju actions percona-cluster. If the charm is not deployed then see file actions.yaml. An example invocation follows the list.

  • backup
  • bootstrap-pxc
  • complete-cluster-series-upgrade
  • mysqldump
  • notify-bootstrapped
  • pause
  • resume
  • set-pxc-strict-mode
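
For example, to run the backup action on the first unit (Juju 2.x run-action syntax assumed):

juju run-action --wait percona-cluster/0 backup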

Memory

Percona Cluster is extremely memory sensitive. Setting memory values too low will give poor performance. Setting them too high will create problems that are very difficult to diagnose. Please take time to evaluate these settings for each deployment environment rather than copying and pasting bundle configurations.

The Percona Cluster charm needs to be deployable in small, low-memory development environments as well as high-performance production environments. The charm's opinionated configuration defaults favour the developer environment in order to ease initial testing. Production environments need to consider carefully the memory requirements for the hardware or cloud in use. Consult a MySQL memory calculator to understand the implications of the values.
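
For instance, the InnoDB buffer pool can be sized via the charm's innodb-buffer-pool-size option, which accepts an absolute size or a percentage of system memory (the value shown is illustrative, not a recommendation):

juju config percona-cluster innodb-buffer-pool-size=50%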

Between the 5.5 and 5.6 releases a significant default changed: the performance schema is on by default in 5.6 and later. This pre-allocates all the memory that would be required to handle max-connections, plus several other memory settings. With 5.5, memory was allocated at run time as needed.

The charm now makes performance schema configurable and defaults to off (False). With the performance schema turned off memory is allocated when needed during run-time. It is important to understand this can lead to run-time memory exhaustion if the configuration values are set too high. Consult a MySQL memory calculator to understand the implications of the values.
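
To opt back in to the 5.6-style up-front allocation via the charm's performance-schema boolean option (assuming sufficient memory has been provisioned for the configured max-connections):

juju config percona-cluster performance-schema=True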

The value of max-connections should strike a balance between connection exhaustion and memory exhaustion. Occasionally connection exhaustion occurs in large production HA clouds with a value of less than 2000. The common practice became to set it unrealistically high (near 10k or 20k). In the move to 5.6 on Xenial this became a problem: with the performance schema turned on, Percona would fail to start or behave erratically as the host ran out of memory. Even with the default now turned off, this value should be weighed carefully against production requirements and available resources.

MySQL asynchronous replication

This charm supports the MySQL asynchronous replication feature, which can be used to replicate databases between multiple Percona XtraDB Clusters. To set up master-slave replication of the "database1" and "database2" databases between the "pxc1" and "pxc2" applications, first configure the mandatory options:

juju config pxc1 databases-to-replicate="database1:table1,table2;database2"
juju config pxc2 databases-to-replicate="database1:table1,table2;database2"
juju config pxc1 cluster-id=1
juju config pxc2 cluster-id=2

and then relate them:

juju add-relation pxc1:master pxc2:slave

To set up master-master replication, add another relation:

juju add-relation pxc2:master pxc1:slave

In the same way, circular replication can be set up between multiple clusters.
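
For example, a three-cluster ring ("pxc3" is assumed to be configured with a unique cluster-id and the same databases-to-replicate) could be wired as:

juju add-relation pxc1:master pxc2:slave
juju add-relation pxc2:master pxc3:slave
juju add-relation pxc3:master pxc1:slave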

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

You can ensure that database connections and cluster peer communication are bound to specific network spaces by binding the appropriate interfaces:

juju deploy percona-cluster --bind "shared-db=internal-space cluster=internal-space"

Alternatively, configuration can be provided as part of a bundle:

percona-cluster:
  charm: cs:xenial/percona-cluster
  num_units: 1
  bindings:
    shared-db: internal-space
    cluster: internal-space

The 'cluster' endpoint binding is used to determine which network space units within the percona-cluster deployment should use for communication with each other; the 'shared-db' endpoint binding is used to determine which network space should be used for access to MySQL databases services from other charms.

Note: Spaces must be configured in the underlying provider prior to attempting to use them.

Note: Existing deployments using the access-network configuration option will continue to function; this option is preferred over any network space binding provided for the 'shared-db' relation if set.

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.