[[ha-aa-db]]
=== Database

The first step is installing the database that sits at the heart of the
cluster. When we talk about High Availability, we talk about several
databases (for redundancy) and a means to keep them synchronized. In
this case, we choose the MySQL database, along with Galera for
synchronous multi-master replication.

The choice of database isn’t a foregone conclusion; you’re not required
to use MySQL. It is, however, a fairly common choice in OpenStack
installations, so we’ll cover it here.

[[ha-aa-db-mysql-galera]]
==== MySQL with Galera

Rather than starting with a vanilla version of MySQL and then adding
Galera, you will want to install a version of MySQL patched for wsrep
(Write Set REPlication) from https://launchpad.net/codership-mysql/0.7.
The wsrep API is suitable for configuring MySQL High Availability in
OpenStack because it supports synchronous replication.

Note that the installation requirements call for careful attention. Read
the guide at https://launchpadlibrarian.net/66669857/README-wsrep to
ensure that you follow all the required steps.

Installing Galera through a MySQL version patched for wsrep:

1. Download Galera from https://launchpad.net/galera/+download, and
install the *.rpm or *.deb packages, which takes care of any
dependencies that your system doesn’t already have installed.

2. Adjust the configuration:

In the system-wide +my.cnf+ file, make sure mysqld isn’t bound to
127.0.0.1, and that +/etc/mysql/conf.d/+ is included. Typically you can
find this file at +/etc/my.cnf+:

----
[mysqld]
...
!includedir /etc/mysql/conf.d/
...
#bind-address = 127.0.0.1
----

When adding a new node, you must configure it with a MySQL account that
can access the other nodes. The new node must be able to request a state
snapshot from one of the existing nodes:

3. Specify your MySQL account information in +/etc/mysql/conf.d/wsrep.cnf+:

----
wsrep_sst_auth=wsrep_sst:wspass
----

4. Connect as root and grant privileges to that user:

----
$ mysql -e "SET wsrep_on=OFF; GRANT ALL ON *.* TO wsrep_sst@'%' IDENTIFIED BY 'wspass';"
----

5. Remove user accounts with empty usernames, because they cause problems:

----
$ mysql -e "SET wsrep_on=OFF; DELETE FROM mysql.user WHERE user='';"
----

6. Set up certain mandatory configuration options within MySQL itself.
These include:

----
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
----

7. Check that the nodes can access each other through the firewall.
Depending on your environment, this might mean adjusting iptables, as in:

----
# iptables --insert RH-Firewall-1-INPUT 1 --proto tcp --source <my IP>/24 --destination <my IP>/32 --dport 3306 -j ACCEPT
# iptables --insert RH-Firewall-1-INPUT 1 --proto tcp --source <my IP>/24 --destination <my IP>/32 --dport 4567 -j ACCEPT
----

This might also mean configuring any NAT firewall between nodes to allow
direct connections. You might need to disable SELinux, or configure it
to allow mysqld to listen to sockets at unprivileged ports.

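The iptables rules in step 7 have to be repeated for every member of the
cluster. As a convenience, they can be generated with a short shell loop.
This is only a sketch: the addresses in +NODES+ are placeholders for your
own database nodes, and the script prints the rules rather than applying
them, so you can review them first:

```shell
#!/bin/sh
# Sketch only: print (do not apply) the per-node firewall rules from
# step 7 above. The addresses in NODES are placeholders; replace them
# with the real IPs of your database nodes.
NODES="10.0.0.10 10.0.0.11 10.0.0.12"

print_galera_rules() {
    for node in $NODES; do
        # 3306 is the MySQL client port, 4567 the Galera replication port.
        for port in 3306 4567; do
            echo "iptables --insert RH-Firewall-1-INPUT 1 --proto tcp" \
                 "--source $node/32 --dport $port -j ACCEPT"
        done
    done
}

print_galera_rules
```

Once you are satisfied with the generated rules, you can pipe the output
into a root shell to apply them.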
Now you’re ready to create the cluster.

===== Create the cluster

In creating a cluster, you first start a single instance, which creates
the cluster. The rest of the MySQL instances then connect to that
cluster.

An example of creating the cluster:

1. Start on +10.0.0.10+ by executing the command:

----
# service mysql start wsrep_cluster_address=gcomm://
----

2. Connect to that cluster on the rest of the nodes by referencing the
address of that node, as in:

----
# service mysql start wsrep_cluster_address=gcomm://10.0.0.10
----

You also have the option to set the +wsrep_cluster_address+ in the
+/etc/mysql/conf.d/wsrep.cnf+ file, or within the client itself. (In
fact, for some systems, such as MariaDB or Percona, this may be your
only option.)

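For reference, a minimal +wsrep.cnf+ along these lines might look like
the following sketch. The provider library path and the node addresses
are assumptions that depend on your distribution and topology; check
where your packages installed +libgalera_smm.so+ and substitute your own
cluster members:

----
[mysqld]
# Path to the Galera provider library (location varies by distribution)
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# All current members of the cluster (placeholder addresses)
wsrep_cluster_address="gcomm://10.0.0.10,10.0.0.11,10.0.0.12"
# State snapshot transfer credentials, as created earlier
wsrep_sst_auth=wsrep_sst:wspass
----

With +wsrep_cluster_address+ set here, nodes join the cluster on a plain
+service mysql start+, without passing the address on the command line.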
An example of checking the status of the cluster:

1. Open the MySQL client and check the status of the various parameters:

----
mysql> SET GLOBAL wsrep_cluster_address='<cluster address string>';
mysql> SHOW STATUS LIKE 'wsrep%';
----

You should see a status that looks something like this:

----
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 111fc28b-1b05-11e1-0800-e00ec5a7c930 |
| wsrep_protocol_version     | 1                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 2                                    |
| wsrep_received_bytes       | 134                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced (6)                           |
| wsrep_cert_index_size      | 0                                    |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 111fc28b-1b05-11e1-0800-e00ec5a7c930 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 0                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy                         |
| wsrep_provider_version     | 21.1.0(r86)                          |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
38 rows in set (0.01 sec)
----

[[ha-aa-db-galera-monitoring]]
==== Galera monitoring scripts

(Coming soon)

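Until those scripts are available, a minimal health check can be
sketched as follows. It reads the output of
+mysql -e "SHOW STATUS LIKE 'wsrep%';"+ (whitespace-separated
+Variable_name+/+Value+ pairs) on stdin and reports whether the node
looks healthy. The function name and the choice of the three variables
it inspects are assumptions made here, not part of any official tooling:

```shell
#!/bin/sh
# Sketch of a Galera health check: read the listing produced by
#   mysql -e "SHOW STATUS LIKE 'wsrep%';"
# on stdin, and print "healthy" if the node is connected, ready, and
# part of the Primary component, or "unhealthy" otherwise.
check_wsrep_health() {
    awk '
        $1 == "wsrep_ready"          { ready = $2 }
        $1 == "wsrep_connected"      { connected = $2 }
        $1 == "wsrep_cluster_status" { status = $2 }
        END {
            if (ready == "ON" && connected == "ON" && status == "Primary")
                print "healthy"
            else
                print "unhealthy"
        }
    '
}

# Example run against a captured status listing:
check_wsrep_health <<'EOF'
Variable_name          Value
wsrep_cluster_status   Primary
wsrep_connected        ON
wsrep_ready            ON
EOF
```

Run as-is, the example prints +healthy+; on a live node you would
instead pipe in the real +SHOW STATUS+ output and act on the result, for
example from a cron job or a load-balancer check.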
==== Other ways to provide a highly available database

MySQL with Galera is by no means the only way to achieve database HA.
MariaDB (https://mariadb.org/) and Percona (http://www.percona.com/)
also work with Galera. You also have the option to use Postgres, which
has its own replication, or another database HA option.