Install & Configure MySQL Cluster (Pacemaker, Corosync, DRBD, Stonith)


OK, after explaining how to install and configure DRBD and VMware STONITH, it's time to start a new project: building a MySQL cluster.

The Concept

The concept of an active/passive fail-over cluster is the following:

  • Two servers (nodes).
  • They communicate over cluster software (Heartbeat / Corosync / OpenAIS).
  • They run on top of a DRBD fail-over storage system.
  • MySQL runs only on the MASTER node (active); the other is the PASSIVE node.
  • You reach MySQL over a Virtual IP (VIP) (see the example after this list).
  • If a problem occurs, the cluster fails the resources over to the passive node, including the VIP.
  • This fail-over is transparent for the application (only a short service interruption).
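
For example, the application always points at the VIP (192.168.1.100, configured later in the CRM section), never at a node's own address. A quick check from any client host could look something like this:

mysql -h 192.168.1.100 -u root -p -e "SHOW VARIABLES LIKE 'hostname';"

After a fail-over the same command keeps working, but the reported hostname changes to the new active node.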

This is the infrastructure:

Network and Server settings

Before starting with the Pacemaker and Corosync installation, a few prerequisites are necessary.

SELinux

Disable SELinux:

[root@node1 ]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
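
Editing this file only takes effect after a reboot; as an optional extra step, you can also switch SELinux off for the running system right away:

[root@node1 ]# setenforce 0
[root@node1 ]# getenforce
Permissive

Note that setenforce 0 only puts SELinux in permissive mode; the fully disabled state comes from the config file at the next boot.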

Short Names

To simplify the configuration, use short hostnames.

#
# /etc/sysconfig/network
#
...
HOSTNAME=node1

Hosts

Add both nodes to the /etc/hosts file:

#
# /etc/hosts
#
...
192.168.1.101  node1.larry.com node1
192.168.1.102  node2.larry.com node2

Bonding

To reduce the risk of losing network connectivity if a link or switch fails, I'll configure interface bonding.

This is the configuration for node1; node2 follows the same pattern.

BOND0

Configure the virtual bond interface

#
# /etc/sysconfig/network-scripts/ifcfg-bond0
#
DEVICE=bond0
BOOTPROTO=static
ONBOOT=yes
NETWORK=192.168.1.0
NETMASK=255.255.255.0
IPADDR=192.168.1.101
USERCTL=no
BONDING_OPTS="mode=active-backup miimon=100"
GATEWAY=192.168.1.1

Now add the two slave interfaces to bond0:

#
# /etc/sysconfig/network-scripts/ifcfg-eth0
#
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=08:00:27:ca:2d:f1
MASTER=bond0
SLAVE=yes
USERCTL=no
#
# /etc/sysconfig/network-scripts/ifcfg-eth2
#
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
HWADDR=08:00:27:d3:g2:h2
MASTER=bond0
SLAVE=yes
USERCTL=no

BOND1

Configure the virtual bond interface

#
# /etc/sysconfig/network-scripts/ifcfg-bond1
#
DEVICE=bond1
BOOTPROTO=static
ONBOOT=yes
NETWORK=10.0.0.0
NETMASK=255.255.255.0
IPADDR=10.0.0.1
USERCTL=no
BONDING_OPTS="mode=active-backup miimon=100"

Now add the two slave interfaces to bond1:

#
# /etc/sysconfig/network-scripts/ifcfg-eth1
#
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HWADDR=08:00:27:cb:5d:f3
MASTER=bond1
SLAVE=yes
USERCTL=no
#
# /etc/sysconfig/network-scripts/ifcfg-eth3
#
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
HWADDR=08:00:27:a3:h2:f3
MASTER=bond1
SLAVE=yes
USERCTL=no

Add the bonding aliases:

# /etc/modprobe.conf
#
...
alias bond0 bonding
alias bond1 bonding

To apply the configuration, reboot the system and check that everything came up correctly.
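
A quick way to verify the bonding state after the reboot is the kernel's own status file (output trimmed):

[root@node1 ]# cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
...

Pulling the cable of the active slave should switch the "Currently Active Slave" line to the backup interface without dropping the bond0 IP.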

DRBD Installation & Configuration

Download the software

To download the latest version, go to the ELRepo web page or download the packages directly with wget:

wget http://elrepo.org/linux/elrepo/el5/x86_64/RPMS/kmod-drbd84-8.4.1-1.el5.elrepo.x86_64.rpm
wget http://elrepo.org/linux/elrepo/el5/x86_64/RPMS/drbd84-utils-8.4.1-1.el5.elrepo.x86_64.rpm

Install DRBD

As easy as:

[root@nodo1 DRBD]# rpm -ivh *.rpm
warning: drbd84-utils-8.4.1-1.el5.elrepo.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID baadae52
Preparing...                ########################################### [100%]
   1:drbd84-utils           ########################################### [ 50%]
   2:kmod-drbd84            ########################################### [100%]
Working. This may take some time ...
Done.

On the other node:

[root@nodo2 DRBD]# rpm -ivh *.rpm
warning: drbd84-utils-8.4.1-1.el5.elrepo.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID baadae52
Preparing...                ########################################### [100%]
   1:drbd84-utils           ########################################### [ 50%]
   2:kmod-drbd84            ########################################### [100%]
Working. This may take some time ...
Done.

Configuration

See the official documentation for more detail.
Copy the distribution configuration file:

cp -pr /etc/drbd.conf /etc/drbd.conf-DIST

SHA1

To authenticate the communication between the nodes we need a shared secret; I simply use a SHA1 hash as the key. To generate one:

[root@nodo1 DRBD]# sha1sum /etc/drbd.conf
8a6cxxxxxxxxxxxxxxxxxxxxx49xxxxxxxxfb3  /etc/drbd.conf

DRBD.CONF

/etc/drbd.conf is DRBD's configuration file; here is my configuration:

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

include "drbd.d/global_common.conf";

resource DISK1 {
    protocol C;
    net {
        cram-hmac-alg sha1;
        shared-secret "8a6cxxxxxxxxxxxxxxxxxxxxx49xxxxxxxxfb3";
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    device    /dev/drbd0;
    disk      /dev/sdc1;
    meta-disk internal;
    on node1 {
        address   10.0.0.1:7789;
    }
    on node2 {
        address   10.0.0.2:7789;
    }
}

As you can see, I specify these parameters:

  • RESOURCE: The name of the resource.
  • PROTOCOL: C means synchronous replication.
  • NET: The SHA1 key, which must be identical on both nodes.
    • after-sb-0pri: When a split-brain occurs and no data has changed, the two nodes reconnect normally.
    • after-sb-1pri: If some data has changed, discard the data on the secondary and synchronize from the primary.
    • after-sb-2pri: If the previous option is impossible, disconnect the two nodes; in this case a manual split-brain recovery is required.
    • rr-conflict: If none of the previous policies apply and DRBD ends up with a role conflict, it disconnects automatically.
  • DEVICE: Virtual device, the path to the physical device.
  • DISK: Physical device.
  • META-DISK: Metadata is stored on the same disk (sdc1).
  • ON <NODE>: The nodes that form the cluster.

Creating the resource

Run these commands on both nodes.

Create partition

Create the partition without formatting it:

[root@node1 ~]# fdisk /dev/sdc
[root@node2 ~]# fdisk /dev/sdc
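
Inside fdisk, a single primary partition covering the whole disk is enough (n, p, 1, accept the defaults, then w), left unformatted. You can double-check the result with:

[root@nodex ~]# fdisk -l /dev/sdc

The new /dev/sdc1 partition should be listed and must have the same size on both nodes.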

Create resource

[root@node1 ~]# drbdadm create-md DISK1
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
[root@node2 ~]# drbdadm create-md DISK1
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.

Activate the Resource

Make sure the drbd module is loaded (check with lsmod); if not, load it:

[root@nodex ~]# modprobe drbd

Now activate the resource DISK1:

[root@node1 ~]# drbdadm up DISK1
[root@node2 ~]# drbdadm up DISK1
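
At this point both nodes see each other, but neither side has valid data yet; /proc/drbd should show something along these lines:

[root@node1 ~]# cat /proc/drbd
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----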

Synchronize

Only on the master node: we declare node1 as the primary:

/sbin/drbdadm -- --overwrite-data-of-peer primary DISK1

We'll see that the disk synchronization is in progress and the state is UpToDate/Inconsistent:

Every 2.0s: cat /proc/drbd                                                                                                           Wed Mar 28 13:51:50 2012

version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b45df4f8g489w4er2b38we8r4w65ea80 build by dag@Build64R5, 2011-12-21 06:05:25
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:17540416 nr:0 dw:0 dr:17547264 al:0 bm:1070 lo:0 pe:4 ua:7 ap:0 ep:1 wo:b oos:13917216
        [==========>.........] sync'ed: 55.8% (13588/30716)M
        finish: 0:02:25 speed: 95,716 (89,020) K/sec

When it finishes, we'll see that the state changes to UpToDate/UpToDate:

Every 2.0s: cat /proc/drbd                                                                                                           Wed Mar 28 13:57:05 2012

version: 8.4.1 (api:1/proto:86-100)
GIT-hash: 91b45df4f8g489w4er2b38we8r4w65ea80 build by dag@Build64R5, 2011-12-21 06:05:25
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:31454240 nr:0 dw:0 dr:31454240 al:0 bm:1920 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Format the Resource

Only on the master node:

[root@node1 ~]# mkfs.ext3 /dev/drbd0

Testing

Mount the resource on node1:

[root@node1 ~]# mount /dev/drbd0 /usr/local/etc/mysql/data

OK, now unmount it and demote node1 to secondary:

[root@node1 ~]# umount /usr/local/etc/mysql/data
[root@node1 ~]# drbdadm secondary DISK1

Promote node2 to primary and mount:

[root@node2 ~]# drbdadm primary DISK1
[root@node2 ~]# mount /dev/drbd0 /usr/local/etc/mysql/data
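
To convince yourself that the data really follows the resource, drop a marker file while node1 is primary, repeat the switch-over, and look for it on node2 (a quick sketch):

[root@node1 ~]# touch /usr/local/etc/mysql/data/DRBD_TEST
... unmount on node1, demote node1, promote node2 and mount, as above ...
[root@node2 ~]# ls /usr/local/etc/mysql/data/DRBD_TEST
/usr/local/etc/mysql/data/DRBD_TEST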

MySQL Installation

After installing and mounting the DRBD system, install the MySQL software on both nodes.

User and Group

groupadd mysql
useradd -r -g mysql mysql

Pre-Packages

yum install gcc-c++
yum install ncurses-devel
wget http://dl.atrpms.net/el5-x86_64/atrpms/stable/cmake-2.6.4-7.el5.x86_64.rpm
rpm -i cmake-2.6.4-7.el5.x86_64.rpm

MySQL INSTALLATION

Download the latest MySQL server version; in my case, 5.5.24:

wget http://dev.mysql.com/get/Downloads/MySQL-5.5/mysql-5.5.24.tar.gz/from/http://mirrors.ircam.fr/pub/mysql/
cd mysql-5.5.24
cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/etc/mysql
make
make install

Now install the database on the PRIMARY node, which has the drbd0 disk mounted on /usr/local/etc/mysql/data:

cd /usr/local/etc/mysql
chown -R mysql .
chgrp -R mysql .
./scripts/mysql_install_db --user=mysql --basedir=/usr/local/etc/mysql --datadir=/usr/local/etc/mysql/data
chown -R root .
chown -R mysql data

Post-Installation

Copy the sample configuration to /etc:

cp support-files/my-medium.cnf /etc/my.cnf

Test that everything is OK and start the MySQL server:

bin/mysqld_safe --user=mysql &

Copy the MySQL init.d start script to /etc/init.d/:

cp support-files/mysql.server /etc/init.d/mysql

Change the lock file location (to match the socket path):
vim /etc/init.d/mysql

#lockdir='/var/lock/subsys'
lockdir='/tmp'
lock_file_path="$lockdir/mysql.sock"

Because the cluster software is responsible for starting the MySQL service, disable it at boot:

# chkconfig --list mysql
# chkconfig mysql off

Configure the MySQL root password:

mysqladmin -u root password '******'
mysql -h localhost -u root -p

I prefer to keep the my.cnf file on a local directory, because if the DRBD system fails or I need to "destroy" the cluster (for system updates or maintenance tasks), the configuration file must still be available on a local filesystem.
vim /etc/my.cnf

#
# /etc/my.cnf
#

[client]

port                           = 3306
socket                         = /tmp/mysql.sock

[mysqld]

port                           = 3306
socket                         = /tmp/mysql.sock

datadir                        = /usr/local/etc/mysql/data
user                           = mysql
memlock                        = 1

table_open_cache               = 3072
table_definition_cache         = 1024
max_heap_table_size            = 64M
tmp_table_size                 = 64M

# Connections

max_connections                = 505
max_user_connections           = 500
max_allowed_packet             = 16M
thread_cache_size              = 32

# Buffers

sort_buffer_size               = 8M
join_buffer_size               = 8M
read_buffer_size               = 2M
read_rnd_buffer_size           = 16M

# Query Cache

query_cache_size               = 64M

# InnoDB

default_storage_engine         = InnoDB

innodb_buffer_pool_size        = 1G
innodb_data_file_path          = ibdata1:2G:autoextend

innodb_log_file_size           = 128M
innodb_log_files_in_group      = 2

# MyISAM

myisam_recover                 = backup,force

# Logging

general-log = 0
general_log_file               = /var/log/mysql/mysql.log

log_warnings                   = 2
log_error                      = /var/log/mysql/mysql_error.log

slow_query_log                 = 1
slow_query_log_file            = /var/log/mysql/mysql_slow.log
long_query_time                = 0.5
log_queries_not_using_indexes  = 1
min_examined_row_limit         = 20

# Binary Log / Replication

server_id                      = 1
log-bin                        = mysql-bin
binlog_cache_size              = 1M
sync_binlog                    = 8
binlog_format                  = row
expire_logs_days               = 7
max_binlog_size                = 128M

[mysqldump]

quick
max_allowed_packet             = 16M

[mysql]

no_auto_rehash

[myisamchk]

key_buffer                     = 512M
sort_buffer_size               = 512M
read_buffer                    = 8M
write_buffer                   = 8M

[mysqld_safe]

open-files-limit               = 8192
log-error                      = /var/log/mysql/mysql_error.log

Create the log directory and files (matching the paths in my.cnf):

mkdir /var/log/mysql
touch /var/log/mysql/mysql.log
touch /var/log/mysql/mysql_error.log
touch /var/log/mysql/mysql_slow.log
chown -R mysql.mysql /var/log/mysql

Activate logrotate:
vim /etc/logrotate.d/mysql

/var/log/mysql/mysql.log /var/log/mysql/mysql_slow.log /var/log/mysql/mysql_error.log {
        weekly
        copytruncate
        rotate 8
}
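
You can dry-run the new rule without touching any file using logrotate's debug mode:

[root@node1 ~]# logrotate -d /etc/logrotate.d/mysql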

MySQL Client

The LRM (Local Resource Manager) needs the mysql client binary to check whether the MySQL server is running. Install it on both nodes:

[root@node1]# yum install mysql.x86_64

Installation of Corosync & Pacemaker

Dependencies

[root@node1]# wget http://download3.fedora.redhat.com/pub/epel/5/x86_64/libesmtp-1.0.4-5.el5.x86_64.rpm
[root@node1]# rpm -ivh libesmtp-1.0.4-5.el5.x86_64.rpm

Configure YUM

[root@node1]# wget -O /etc/yum.repos.d/pacemaker.repo http://clusterlabs.org/rpm/epel-5/clusterlabs.repo

Install the software

[root@node1]# yum install pacemaker.x86_64 corosync.x86_64

Configure Corosync

Corosync Key

On one node, create the Corosync authentication key used to secure cluster communication:

[root@node1]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.

Copy it to the other node and keep the permissions at 400:

[root@node1]# scp /etc/corosync/authkey node2:/etc/corosync/
[root@node1]# ll /etc/corosync/authkey
-r-------- 1 root root 128 May  7 10:26 /etc/corosync/authkey
[root@node2]# ll /etc/corosync/authkey
-r-------- 1 root root 128 May  7 10:27 /etc/corosync/authkey

Corosync.conf

Now configure the /etc/corosync/corosync.conf

# Please read the corosync.conf.5 manual page
compatibility: whitetank

aisexec {
        # Run as root - this is necessary to be able to manage resources with Pacemaker
        user:   root
        group:  root
}

service {
        # Load the Pacemaker Cluster Resource Manager
        ver:       0
        name:      pacemaker
        use_logd:  yes
}

totem {
        version: 2
        secauth: off
        threads: 0
        rrp_mode: passive
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastaddr: 239.255.1.177
                mcastport: 5409
        }
        interface {
                ringnumber: 1
                bindnetaddr: 10.0.0.0
                mcastaddr: 239.255.1.178
                mcastport: 5411
        }

}

logging {
        fileline: off
        to_stderr: yes
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/corosync.log
#       debug: off
        debug: on
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}
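
The same corosync.conf (together with the authkey copied earlier) must also be on node2 before starting the service there, for example:

[root@node1]# scp /etc/corosync/corosync.conf node2:/etc/corosync/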

Activate service

[root@node1]# chkconfig corosync on
[root@node1]# chkconfig logd on

Start service and check

[root@node1]#  /etc/init.d/corosync start
[root@node1]# corosync-cfgtool -s
Printing ring status.
Local node ID -1324290657
RING ID 0
	id	= 192.168.1.101
	status	= ring 0 active with no faults
RING ID 1
	id	= 10.0.0.1
	status	= ring 1 active with no faults
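
Start Corosync on node2 as well and run the same check there; both rings should report no faults on both nodes:

[root@node2]# /etc/init.d/corosync start
[root@node2]# corosync-cfgtool -s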

Check the state of the cluster:

[root@node1]# crm_mon -rf
============
Last updated: Mon May  7 12:36:05 2012
Stack: openais
Current DC: node1 - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ node1 node2 ]

Full list of resources:

Migration summary:
* Node node1: 
* Node node2:

Vmware Stonith

As in the previous posts, I'll explain how to install and configure STONITH for virtual machines running under VMware.

Update PERL

It is recommended to update Perl to the latest version; download and compile it:

wget http://www.cpan.org/src/5.0/perl-5.14.2.tar.gz
[root@node1]# tar xvfz perl-5.14.2.tar.gz
[root@node1]# cd perl-5.14.2
[root@node1]# ./Configure
......
....
...
What pager is used on your system? [/usr/bin/less -R] /usr/bin/less
...
.
[root@node1]# make && make install

Replace the binaries:

[root@node1]#  mv /usr/local/bin/perl /usr/local/bin/perl5.8
[root@node1]# cp -pr /usr/local/bin/perl5.14.2 /usr/local/bin/perl
[root@node1]# cp -pr /usr/local/bin/perl5.14.2 /usr/bin/perl
cp: overwrite `/usr/bin/perl'? yes
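
Check which interpreter is now picked up; it should report v5.14.2:

[root@node1]# perl -v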

ClusterGlue

We need Cluster Glue to integrate STONITH with the cluster.

Pre-Requisites

Before running the installation, install the build dependencies:

[root@node1]# yum install glib2-devel.x86_64 bzip2-devel.x86_64
[root@node1]# yum install libxml2-devel.x86_64 docbook-dtds.noarch

Installation

Download

[root@node1]# wget http://hg.linux-ha.org/glue/archive/glue-1.0.9.tar.bz2

And Install

[root@node1]# tar xvfj glue-1.0.9.tar.bz2
[root@node1]# cd Reusable-Cluster-Components-glue--glue-1.0.9/
[root@node1 Reusable-Cluster-Components-glue--glue-1.0.9]# ./autogen.sh
[root@node1 Reusable-Cluster-Components-glue--glue-1.0.9]# ./configure --localstatedir=/var

If the configure step succeeds, the summary should look like this:

cluster-glue configuration:
  Version                  = 1.0.9 (Build: 0a08a359fdf4a0db1875365947bc83c523cef21)
  Features                 =

  Prefix                   = /usr
  Executables              = /usr/sbin
  Man pages                = /usr/man
  Libraries                = /usr/lib64
  Header files             = /usr/include
  Arch-independent files   = /usr/share
  Documentation            = /usr/share/doc
  State information        = /var
  System configuration     = /usr/etc

  Use system LTDL          = no

  HA group name            = haclient
  HA user name             = hacluster

  CFLAGS                   = -g -O2 -ggdb3 -O0  -fgnu89-inline -fstack-protector-all -Wall -Waggregate-return -Wbad-function-cast -Wcast-qual -Wcast-align -Wdeclaration-after-statement -Wendif-labels -Wfloat-equal -Wformat=2 -Wformat-security -Wformat-nonliteral -Winline -Wmissing-prototypes -Wmissing-declarations -Wmissing-format-attribute -Wnested-externs -Wno-long-long -Wno-strict-aliasing -Wpointer-arith -Wstrict-prototypes -Wwrite-strings -ansi -D_GNU_SOURCE -DANSI_ONLY -Werror
  Libraries                = -lbz2 -lxml2 -lc -luuid -lrt -ldl  -L/lib64 -lglib-2.0  
  Stack Libraries          =

Install

[root@node1 Reusable-Cluster-Components-glue--glue-1.0.9]# make 
[root@node1 Reusable-Cluster-Components-glue--glue-1.0.9]# make install

VMware vSphere Perl

This package provides tools to interact with vCenter and its virtual machines; download it from the VMware site.

Installation

Extract and Install it

[root@node1]# tar xvfz VMware-vSphere-Perl-SDK-5.0.0-615831.x86_64.tar.gz 
[root@node1]# cd vmware-vsphere-cli-distrib

If your server needs a proxy to reach the Internet, export it:

[root@node1]# export ftp_proxy=http://proxy.larry.com:8080
[root@node1]# export http_proxy=http://proxy.larry.com:8080

Run the installer script

[root@node1 vmware-vsphere-cli-distrib]# ./vmware-install.pl

Vcenter Credential

The STONITH plugin needs vCenter credentials to connect to vCenter and act on the virtual machines (beforehand you'll need to create a user with enough privileges to reset or shut down both nodes):

[root@node1]# /usr/lib/vmware-vcli/apps/general/credstore_admin.pl add -s vcenter.larry.com -u stonith -p **********

Copy the resulting credential store file to /etc:

[root@node1]# cp -pr /root/.vmware/credstore/vicredentials.xml /etc/
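
To double-check what is stored, the same helper script also has a list sub-command:

[root@node1]# /usr/lib/vmware-vcli/apps/general/credstore_admin.pl list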

HTTPS Certificate

Normally the vCenter HTTPS endpoint doesn't present a trusted certificate, so the plugin fails:

stonith -t external/vcenter VI_SERVER="vcenter.larry.com" VI_PORTNUMBER="443" VI_PROTOCOL="https" VI_SERVICEPATH="/sdk/webService" VI_CREDSTORE="/etc/vicredentials.xml" HOSTLIST="hostname1=node1;hostname2=node2" RESETPOWERON="1" -lS
external/vcenter[20593]: ERROR: [status] Server version unavailable at 'https://vcenter.larry.com:443/sdk/vimService.wsdl' at /usr/local/lib/perl5/5.14.2/VMware/VICommon.pm line 545.

Server version unavailable at 'https://vcenter.larry.com:443/sdk/vimService.wsdl' at /usr/local/lib/perl5/5.14.2/VMware/VICommon.pm line 545.

	...propagated at /usr/lib64/stonith/plugins/external/vcenter line 22.

To work around it, edit VICommon.pm and make it connect without verifying the certificate:

[root@node1]# vim /usr/local/lib/perl5/5.14.2/VMware/VICommon.pm
#
# Copyright 2006 VMware, Inc.  All rights reserved.
#

use 5.006001;
use strict;
use warnings;

use Carp qw(confess croak);
use XML::LibXML;
use LWP::UserAgent;
use LWP::ConnCache;
use HTTP::Request;
use HTTP::Headers;
use HTTP::Response;
use HTTP::Cookies;
use Data::Dumper;

# Added line: disable LWP's SSL hostname verification so the untrusted vCenter certificate is accepted
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;

OK, check again:

stonith -t external/vcenter VI_SERVER="vcenter.larry.com" VI_PORTNUMBER="443" VI_PROTOCOL="https" VI_SERVICEPATH="/sdk/webService" VI_CREDSTORE="/etc/vicredentials.xml" HOSTLIST="hostname1=node1;hostname2=node2" RESETPOWERON="1" -lS
info: external/vcenter device OK.
hostname1
hostname2

Resource Configuration

The CRM configuration is done on one node only; the other node receives it automatically.
Enter the CRM configure shell:

[root@node1]# crm configure

VIP

Configure the VIP

crm(live)configure# primitive vip1 ocf:heartbeat:IPaddr2 params ip=192.168.1.100 cidr_netmask=32 op monitor interval=30s

STONITH

Now configure the STONITH resources.
The HOSTLIST entries have the form HOSTLIST="CRM node name=vCenter VM name"; the local node name and the virtual machine name in vCenter don't have to be the same, it depends on your infrastructure.

crm(live)configure# primitive st-node1 stonith:external/vcenter params VI_SERVER="vcenter.larry.com" VI_CREDSTORE="/etc/vicredentials.xml" HOSTLIST="node1=node1" RESETPOWERON="1" op monitor interval="60s"
crm(live)configure# primitive st-node2 stonith:external/vcenter params VI_SERVER="vcenter.larry.com" VI_CREDSTORE="/etc/vicredentials.xml" HOSTLIST="node2=node2" RESETPOWERON="1" op monitor interval="60s"

The locations:
Logically, the STONITH resource that kills node1 must run on node2, and vice versa.

crm(live)configure# location loc-st-node1 st-node1 -inf: node1 
crm(live)configure# location loc-st-node2 st-node2 -inf: node2
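
Before relying on it, you can trigger a manual reset through the same plugin with the stonith CLI used earlier (careful: this really reboots the target; check the stonith(8) man page for the exact action syntax):

[root@node1]# stonith -t external/vcenter VI_SERVER="vcenter.larry.com" VI_CREDSTORE="/etc/vicredentials.xml" HOSTLIST="node2=node2" RESETPOWERON="1" -T reset node2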

DRBD

Now the filesystem: add DISK1 to the cluster:

crm(live)configure# primitive drbd_mysql ocf:linbit:drbd params drbd_resource="DISK1" op monitor interval="15s" op start timeout="240s"

Define the mount point

crm(live)configure# primitive fs_mysql ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/usr/local/etc/mysql/data/" fstype="ext3"

Define the master/slave set, allowing only one master node:

crm(live)configure# ms ms_drbd_mysql drbd_mysql meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

MySQL

Now the MySQL server:

crm(live)configure# primitive mysqld ocf:heartbeat:mysql params binary="/usr/local/etc/mysql/bin/mysqld_safe" config="/etc/my.cnf" user="mysql" group="mysql" log="/var/log/mysql/mysql.log" pid="/usr/local/etc/mysql/mysql.pid" datadir="/usr/local/etc/mysql/data" socket="/tmp/mysql.sock" op monitor interval="60s" timeout="60s" op start interval="0" timeout="180" op stop interval="0" timeout="240"

Groups & Colocations

With this group we ensure that DRBD, MySQL and the VIP stay on the same node (the master) and that the start/stop order is correct:
start: fs_mysql -> mysqld -> vip1
stop: vip1 -> mysqld -> fs_mysql

crm(live)configure# group group_mysql fs_mysql mysqld vip1 meta migration-threshold="5"

The group group_mysql always runs on the MASTER node:

crm(live)configure# colocation mysql_on_drbd inf: group_mysql ms_drbd_mysql:Master

MySQL always starts after the DRBD master has been promoted:

crm(live)configure# order mysql_after_drbd inf: ms_drbd_mysql:promote group_mysql:start

PROPERTIES

Now some properties: set the default action timeout, enable STONITH and its action, and so on.

property expected-quorum-votes="2" 
property default-action-timeout="180s" 
property stonith-action="reboot" 
property stonith-enabled="true" 
property start-failure-is-fatal="false" 
property default-resource-stickiness="1" 
property no-quorum-policy="ignore"

CHECK

If everything looks correct, verify and commit the configuration, then check the cluster status.
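
Inside the crm shell that looks roughly like this:

crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# quit
[root@node1]# crm_mon -rf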

============
Last updated: Sun May 13 10:57:28 2012
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
4 Resources configured.
============

Online: [ node1 node2 ]

Full list of resources:

st-node1        (stonith:external/vcenter):     Started node2
st-node2        (stonith:external/vcenter):     Started node1
 Resource Group: group_mysql
     fs_mysql   (ocf::heartbeat:Filesystem):    Started node1
     mysqld     (ocf::heartbeat:mysql): Started node1
     vip1	(ocf::heartbeat:IPaddr2):	Started node1
 Master/Slave Set: ms_drbd_mysql
     Masters: [ node1 ]
     Slaves: [ node2 ]

Migration summary:
* Node node1:
* Node node2:

OK, now run a full battery of tests against the MySQL cluster; that part I leave to you.
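
As a starting point, a controlled switch-over can be done by putting the active node into standby and watching the resources move (a sketch using the crm shell):

[root@node1]# crm node standby node1
[root@node1]# crm_mon -rf
[root@node1]# crm node online node1

group_mysql, the VIP and the DRBD master role should all end up on node2, and MySQL should answer on the VIP again after a few seconds.
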
Regards
