Juju Charm - Percona XtraDB Cluster

Overview

Percona XtraDB Cluster is a high-availability and high-scalability solution for MySQL clustering. It integrates Percona Server with the Galera library of MySQL high-availability solutions in a single package, enabling you to create a cost-effective MySQL cluster.

This charm deploys Percona XtraDB Cluster onto Ubuntu.

Usage

WARNING: It is critical that you follow the bootstrap process detailed in this document in order to end up with a running Active/Active Percona cluster.

Proxy Configuration

If you are deploying this charm on MAAS or in an environment without direct access to the internet, you will need to allow access to repo.percona.com, as the charm installs packages directly from the Percona repositories. If you are using squid-deb-proxy, follow the steps below:

echo "repo.percona.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/40-percona
sudo service squid-deb-proxy restart
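
If your units instead reach the archive through a general HTTP proxy, a minimal alternative (a sketch assuming a Juju 1.x environment and a hypothetical proxy at http://squid.internal:3128 that can reach repo.percona.com) is to set the apt proxy environment settings:

juju set-env apt-http-proxy=http://squid.internal:3128
juju set-env apt-https-proxy=http://squid.internal:3128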

Deployment

The first service unit deployed acts as the seed node for the rest of the cluster; in order for the cluster to function correctly, the same MySQL passwords must be used across all nodes:

cat > percona.yaml << EOF
percona-cluster:
    root-password: my-root-password
    sst-password: my-sst-password
EOF
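
If you prefer randomly generated credentials, the same file can be built with generated passwords (a sketch assuming the pwgen package is available on the machine where you run this):

cat > percona.yaml << EOF
percona-cluster:
    root-password: $(pwgen -s 16 1)
    sst-password: $(pwgen -s 16 1)
EOF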

Once you have created this file, you can deploy the first seed unit:

juju deploy --config percona.yaml percona-cluster

Once this node is fully operational, you can add extra units one at a time to the deployment:

juju add-unit percona-cluster

A minimum cluster size of three units is recommended.
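
To verify that each unit has joined the cluster before adding the next, you can check the Galera cluster size on the seed unit (a sketch assuming the root password set in percona.yaml above):

juju ssh percona-cluster/0 "mysql -u root -pmy-root-password -e \"SHOW STATUS LIKE 'wsrep_cluster_size'\""

The wsrep_cluster_size status variable should report the number of units currently joined.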

In order to access the cluster, use the hacluster charm to provide a single virtual IP address (VIP):

juju set percona-cluster vip=10.0.3.200
juju deploy hacluster
juju add-relation hacluster percona-cluster

Clients can then access the cluster using the VIP provided. This VIP will be passed to related services:

juju add-relation keystone percona-cluster
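
For an ad-hoc connection, a MySQL client can also reach the cluster through the VIP. The sketch below assumes a hypothetical database mydb and user myuser already created for a related service and granted access from your host; the root account is typically restricted to localhost:

mysql -h 10.0.3.200 -u myuser -p mydb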

Limitations

Note that Percona XtraDB Cluster is not a 'scale-out' MySQL solution; reads and writes are channelled through a single service unit and synchronously replicated to the other nodes in the cluster, so reads and writes are only as fast as the slowest node in your deployment.