Highly Available MySQL database cluster to eliminate your next SPOF

Written by Pim on Monday March 19, 2018
Categories: devops, technology, howto - Tags: percona, xtradb-cluster, mysql, ha-proxy, high-availability

MySQL servers are in many environments a Single Point of Failure (SPOF). In highly available production environments, like a Software-as-a-Service cloud environment, you can't accept this kind of risk. You can eliminate it by clustering multiple database servers and replicating the data in real time. The simplest approach would be a master-slave configuration, but when the master server goes down, the environment ends up in read-only mode because the slave server only accepts reads. You could promote a slave server to master, but that would require scripting, which I don't want. So, master-master replication should do the trick, and Percona will help me with this using their Percona XtraDB Cluster product.

Infrastructure

The minimum number of MySQL servers you need to run a production highly available MySQL cluster is three. This number should always be odd to prevent split-brain issues: with an odd number of nodes, one side of a network partition can always reach a majority. In our infrastructure, we also have an HA-Proxy server to load balance the MySQL connections/requests. The HA-Proxy server also guarantees that unavailable or out-of-sync servers are taken out of the rotation.

To run this cluster we need four servers: three MySQL servers and one HA-Proxy server. I have deployed these servers, pre-installed with CentOS 7, at our OpenStack provider, using the following specifications:

MySQL Server
4 CPU Cores
8 GB Memory
40 GB Hard Disk (OS)
50 GB SSD Disk (MySQL data)

HA-Proxy Server
2 CPU Cores
2 GB Memory
40 GB Hard Disk (OS)

These servers use a virtual internal network to communicate with each other. You can use a second network, 1:1 NAT rules, port forwarding, etc. to give users an entry point to the MySQL cluster, with the HA-Proxy server sitting in the middle. You should always connect to the database cluster through the HA-Proxy server!

Installation and configuration of Percona XtraDB Servers

We will start with the installation of the Percona XtraDB servers, using the YUM package manager to install the required packages. The Percona packages aren't in the base repository, so we have to install the Percona YUM repository first.

$ rpm -ivh http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
$ yum makecache fast

Before we run the install command, we check that /var/lib/mysql is mounted correctly. Because the /var/lib/mysql directory is on a separate disk, a lost+found directory is present. The Percona XtraDB server will fail to initialize if this directory exists, so remove it first.
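
A quick way to verify the mount (a simple check; adjust the path if your data disk is mounted elsewhere):

$ findmnt /var/lib/mysql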

$ rm -rf /var/lib/mysql/lost+found

To prevent any issues, we disable SELinux too. Changing /etc/sysconfig/selinux only takes effect after a reboot, so we also switch SELinux to permissive mode right away.

$ vi /etc/sysconfig/selinux
# Set SELINUX=disabled
$ setenforce 0

Install the Percona XtraDB Cluster server and client. You can choose between the versions (5.6 or 5.7).

$ yum install -y Percona-XtraDB-Cluster-56 Percona-XtraDB-Cluster-client-56
# OR
$ yum install -y Percona-XtraDB-Cluster-57 Percona-XtraDB-Cluster-client-57

We will use xinetd to advertise whether a Percona XtraDB server is synced and should be available. We'll configure it later, but install it now.

$ yum install -y xinetd

Now that everything has been installed, we can start configuring the Percona XtraDB server. The configuration can be found in /etc/my.cnf.

You can use the following configuration as a template:

[mysqld]
bind_address=0.0.0.0
datadir=/var/lib/mysql
log_error = /var/log/mysqld.log

# Server mode
skip-name-resolve
skip-host-cache
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
default_storage_engine=InnoDB
binlog_format=ROW

# General
max_connections=1000
#thread_cache_size=100
#table_open_cache=200000
#table_open_cache_instances=64
#back_log=1500
#query_cache_type=0

# files
innodb_file_per_table=1
innodb_log_file_size=4G
#innodb_log_files_in_group=2
#innodb_open_files=4000
#innodb_io_capacity=10000
#loose-innodb_io_capacity_max=12000
#innodb_lru_scan_depth=1024
#innodb_page_cleaners=32

# buffers
innodb_buffer_pool_size=6G
innodb_buffer_pool_instances=8
innodb_log_buffer_size=64M

# tune
innodb_doublewrite=1
innodb_support_xa=0
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit=0
innodb_flush_method=O_DIRECT
# This is a recommended tuning variable for performance
innodb_locks_unsafe_for_binlog=1
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
#innodb_max_dirty_pages_pct=90
#join_buffer_size=32K
#sort_buffer_size=32K
#innodb_use_native_aio=0
#innodb_stats_persistent=1

# Galera
wsrep_slave_threads=16
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so

wsrep_cluster_address=gcomm://<list of IP address>

wsrep_node_name=<hostname>
wsrep_node_address=<IP address>
wsrep_cluster_name=<cluster name>

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="<user>:<password>"

There are five configuration items you are required to change (a filled-in example follows the list):

  • wsrep_cluster_address: the addresses of all the servers you want to include in the cluster (example: gcomm://10.0.0.1,10.0.0.2,10.0.0.3)
  • wsrep_node_name: the hostname of the node (example: master-1)
  • wsrep_node_address: IP address of the node (example: 10.0.0.1)
  • wsrep_cluster_name: Name of the cluster (example: pxc-cluster)
  • wsrep_sst_auth: username and password of the SST replication (example: sst_user:S3cr3t!)
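
For example, on the first node the Galera section could look like this, using the example values above:

wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3

wsrep_node_name=master-1
wsrep_node_address=10.0.0.1
wsrep_cluster_name=pxc-cluster

wsrep_sst_auth="sst_user:S3cr3t!"

On master-2 and master-3 only wsrep_node_name and wsrep_node_address change; the cluster address, cluster name, and SST credentials must be identical on all nodes.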

The cluster has to be bootstrapped by the primary server (master-1). When bootstrapping, the primary server starts a new cluster on its own, ignoring the configured cluster addresses. Once the bootstrapping has finished, the Percona XtraDB server runs as a "normal" Percona XtraDB server. After adding the other two servers, you can restart the primary server as a regular cluster node.

$ systemctl start mysql@bootstrap.service
$ systemctl status mysql@bootstrap.service
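
Once the other two nodes have joined and are in sync (later in this guide), you can take the primary off the bootstrap unit and run it as a regular cluster node. Only do this when the other nodes are up, otherwise you'll take the cluster down:

$ systemctl stop mysql@bootstrap.service
$ systemctl start mysql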

When the primary server has been started, change the default root password (MySQL writes a temporary password to the log), and create three extra users: one for SST, one to back up the server using XtraBackup, and one to monitor the server using clustercheck.

$ grep 'temporary password' /var/log/mysqld.log
$ mysqladmin -u root --password='<temp password>' password '<new password>'
# Log in to the MySQL server
$ mysql -uroot -p
# Check that no root users remain without a password. Set a password or remove those users.
mysql> SELECT user, host FROM mysql.user;
# Update the password
mysql> SET PASSWORD FOR 'root'@'127.0.0.1' = '<password>';
# Or remove the user
mysql> DROP USER 'root'@'127.0.0.1';
# Create the SST user (must match wsrep_sst_auth in /etc/my.cnf)
mysql> CREATE USER '<SST user>'@'localhost' IDENTIFIED BY '<SST password>';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO '<SST user>'@'localhost';
# Create the xtrabackup user
mysql> GRANT ALL ON *.* TO '<xtrabackup user>'@'127.0.0.1' IDENTIFIED BY '<xtrabackup password>';
# Create the clustercheck user
mysql> GRANT PROCESS ON *.* TO '<clustercheck user>'@'localhost' IDENTIFIED BY '<clustercheck password>';
# Flush privileges and exit
mysql> FLUSH PRIVILEGES;
mysql> exit

When the primary server has been configured, start the other two.

$ systemctl start mysql

While the MySQL servers are starting, configure MySQL to start at boot on all three servers.

$ systemctl enable mysql

Wait until the initial SST has completed (you can check the progress in the MySQL log) and log in to the server. Check that the server is synced and has joined the cluster.

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';

The cluster size should be "3".

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';

And the local state should be "Synced" or "Joined".
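
On a healthy node the output looks roughly like this:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+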

You can also use the clustercheck command to check if the cluster node is synced.

$ clustercheck <clustercheck user> <clustercheck password>

We'll use this clustercheck command in our xinetd configuration too. This enables HA-Proxy to check whether a cluster node is out of sync or lagging too far behind the other nodes.

Configuration of Xinetd

Xinetd is already installed; we only need to configure it. Test one more time that the clustercheck command works on all three nodes.

$ clustercheck <clustercheck user> <clustercheck password>

Create a new file in the /etc/xinetd.d directory and configure Xinetd to advertise the clustercheck command.

$ vi /etc/xinetd.d/mysqlchk

Use the content below as a template:

# default: on
# description: mysqlchk
service mysqlchk
{
  flags           = REUSE
  socket_type     = stream
  protocol        = tcp
  port            = 9200
  wait            = no
  user            = nobody
  server          = /usr/bin/clustercheck
  server_args     = <clustercheck user> <clustercheck password>
  log_on_failure  += USERID
  disable         = no
}

Add the mysqlchk service to the /etc/services list.

$ echo "mysqlchk 9200/tcp" >> /etc/services

And enable and start the xinetd service.

$ systemctl enable xinetd
$ systemctl start xinetd

Check if the service is available at localhost.

$ telnet 127.0.0.1 9200

If you've installed a firewall, don't forget to open this port in the firewall!
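
For example, with firewalld (the CentOS 7 default) you could open the check port like this; note that the cluster nodes also need to reach each other on the MySQL and Galera ports:

$ firewall-cmd --permanent --add-port=9200/tcp
# 3306 (MySQL), 4444 (SST), 4567 (Galera replication) and 4568 (IST)
$ firewall-cmd --permanent --add-port={3306,4444,4567,4568}/tcp
$ firewall-cmd --reload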

Installation and configuration of HA-Proxy

HA-Proxy is available in the EPEL repository of CentOS. Install it using the YUM package manager.

$ yum install -y haproxy

The configuration file is located at /etc/haproxy/haproxy.cfg. Replace this configuration file with the following one (use it as a template!).

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
 
defaults
    log                     global
    mode                    http
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    maxconn                 2000
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         1h
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
 
frontend stats-front
    bind *:80
    mode http
    default_backend stats-back
 
frontend pxc-front
    bind *:3306
    mode tcp
    default_backend pxc-back
 
frontend pxc-lb-front
    bind *:3307
    mode tcp
    default_backend pxc-lb-back
 
backend stats-back
    mode http
    balance roundrobin
    stats uri /haproxy/stats
    stats auth user:pass
 
backend pxc-back
    mode tcp
    balance leastconn
    option httpchk
    timeout server 8h
    server c1 10.0.0.1:3306 check port 9200 inter 2s downinter 10s rise 3 fall 3
    server c2 10.0.0.2:3306 check port 9200 inter 2s downinter 10s rise 3 fall 3 backup
    server c3 10.0.0.3:3306 check port 9200 inter 2s downinter 10s rise 3 fall 3 backup
 
backend pxc-lb-back
    mode tcp
    balance leastconn
    option httpchk
    timeout server 8h
    server c1 10.0.0.1:3306 check port 9200 inter 2s downinter 10s rise 3 fall 3
    server c2 10.0.0.2:3306 check port 9200 inter 2s downinter 10s rise 3 fall 3
    server c3 10.0.0.3:3306 check port 9200 inter 2s downinter 10s rise 3 fall 3

On port 3306 we configured the HA-Proxy server to send all read/write traffic to a single Percona XtraDB node (the other two act as backups). I've done this to prevent deadlocks in applications that aren't compatible with the side effects of multi-writer database clusters. If an application is capable of handling deadlocks, or of splitting read and write actions, it can use port 3307 to read from and write to all three nodes.
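
For example, a deadlock-tolerant application could use the multi-node port, while everything else keeps using port 3306 (hypothetical HA-Proxy address and user):

$ mysql -h 10.0.0.10 -P 3307 -u app_user -p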

Maybe you have noticed the "check port 9200" option. This makes HA-Proxy call the clustercheck command on each server over TCP/IP. The clustercheck command returns one of two status codes: 200 OK (the node is in sync) or 503 Service Unavailable (the node is out of sync, disabled for maintenance, etc.). The check runs every two seconds while a server is UP (inter 2s) and every ten seconds while it is DOWN (downinter 10s). With this configuration, a server is taken out of the rotation after three failed checks, i.e. when it has been unavailable for 6 seconds or more (3 checks × 2 seconds).
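
You can see exactly what HA-Proxy sees by querying a node's check port directly; the output will look roughly like this (hypothetical node address):

$ curl -i http://10.0.0.1:9200
HTTP/1.1 200 OK
Content-Type: text/plain

Percona XtraDB Cluster Node is synced.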

Start the HA-Proxy server to start load balancing the Percona XtraDB Cluster.

$ systemctl enable haproxy
$ systemctl start haproxy

Now it should be possible to access the MySQL cluster using the HA-Proxy load balancer.

# This will only work when the root user is also allowed to log in from the HA-Proxy server!
$ mysql -h 127.0.0.1 -uroot -p
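
To see the load balancing on port 3307 in action, you can ask each new connection which node it landed on; running this a few times should return different node names (again assuming the root user may connect through HA-Proxy):

$ mysql -h 127.0.0.1 -P 3307 -uroot -p -e "SHOW VARIABLES LIKE 'wsrep_node_name';"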

The status page of HA-Proxy is available at http://127.0.0.1/haproxy/stats.

Good luck building your own Percona XtraDB Cluster with HA-Proxy!