CRM – Integrating DRBD, Stonith & MySQL


Ok, in previous entries I explained how to install and configure DRBD and Stonith for VMware. Now I will explain how to integrate all of them in CRM, so these resources can be controlled with a single tool.

The CRM (a.k.a. Pacemaker) is a Cluster Resource Manager that implements the cluster configuration provided by the user in the CIB (Cluster Information Base). The CIB is a set of instructions encoded in XML. Editing the CIB directly is a challenge, not only due to its complexity and wide variety of options, but also because XML is more computer-friendly than user-friendly.

CRM – Stonith

First, to integrate the VMware Stonith agent with CRM, I created two stonith resources, one per node. Why? In theory a node should not kill itself, but what if I need to stop only one stonith resource? If you instead clone a single resource and allow it to run on both nodes, that is not possible, because stopping the clone stops it on both nodes.

crm(live)configure# primitive st-node1 stonith::external/vcenter params VI_SERVER="" VI_CREDSTORE="/etc/vicredentials.xml" HOSTLIST="node1" RESETPOWERON="1" op monitor interval="60s"
crm(live)configure# primitive st-node2 stonith::external/vcenter params VI_SERVER="" VI_CREDSTORE="/etc/vicredentials.xml" HOSTLIST="node2" RESETPOWERON="1" op monitor interval="60s"

Now the locations: set -infinity for each resource on the node where it must not run. For example, the stonith resource that kills node1 must not run on node1:

crm(live)configure# location loc-st-node1 st-node1 -inf: node1

And the node2 stonith resource:

crm(live)configure# location loc-st-node2 st-node2 -inf: node2
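Note that fencing only happens if it is enabled cluster-wide. If you disabled it while building the initial configuration (a common practice), remember to turn it back on; a minimal sketch:

```shell
# Assumption: stonith-enabled may have been set to "false" during initial setup.
# Fencing resources are ignored by the cluster until this property is true.
crm(live)configure# property stonith-enabled="true"
```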

CRM – Mysql

To configure MySQL in CRM:

crm(live)configure# primitive mysqld ocf:heartbeat:mysql params binary="/usr/local/etc/mysql/bin/mysqld_safe" config="/etc/my.cnf" datadir="/usr/local/etc/mysql/var" log="/var/log/mysql/mysql.log" pid="/usr/local/etc/mysql/var/" socket="/tmp/mysql.sock" user="mysql" op monitor interval="120s" timeout="120s"
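Before handing the agent to the cluster, you can exercise it outside Pacemaker with `ocf-tester` (shipped with the resource-agents package). The agent path and parameters below mirror the primitive above, but adjust them to your installation:

```shell
# Runs the start/stop/monitor actions of the mysql OCF agent standalone.
# Path to the agent is the usual resource-agents location; yours may differ.
ocf-tester -n mysqld \
  -o binary="/usr/local/etc/mysql/bin/mysqld_safe" \
  -o config="/etc/my.cnf" \
  -o user="mysql" \
  /usr/lib/ocf/resource.d/heartbeat/mysql
```

If the tester reports failures here, the primitive will also fail under CRM, so it is worth fixing them first.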


To integrate DRBD with CRM, first configure the resource for DISK1:

crm(live)configure# primitive drbd_mysql ocf:linbit:drbd params drbd_resource="DISK1" op monitor interval="15s" op start timeout="240s"

Now specify where the DRBD device will be mounted and which file system is used:

crm(live)configure# primitive fs_mysql ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/usr/local/etc/mysql/var/" fstype="ext3"
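Before letting Pacemaker manage the device, it is worth confirming that DRBD itself is healthy; a quick check (standard DRBD 8.x tooling):

```shell
# Overall DRBD state: look for cs:Connected and ds:UpToDate/UpToDate.
cat /proc/drbd

# Role of this node for the DISK1 resource (Primary or Secondary).
drbdadm role DISK1
```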

Mysql + DRBD

Configure a new group containing the DRBD filesystem and MySQL. This group ensures that MySQL will not start unless the DRBD filesystem is mounted:

crm(live)configure# group group_mysql fs_mysql mysqld


Because DRBD is based on master/slave roles, define a master/slave resource: one master at a time (master-max), at most one master per node (master-node-max), and one clone instance on each of the two nodes:

crm(live)configure# ms ms_drbd_mysql drbd_mysql meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

Now, we ensure that the resource group runs on the Master node:

crm(live)configure# colocation mysql_on_drbd inf: group_mysql ms_drbd_mysql:Master

The MySQL group has to start after DRBD is promoted:

crm(live)configure# order mysql_after_drbd inf: ms_drbd_mysql:promote group_mysql:start
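Nothing entered in the configure shell takes effect until it is committed. The shell can also validate the pending configuration first:

```shell
# Validate the pending configuration, then push it to the live CIB.
crm(live)configure# verify
crm(live)configure# commit
```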

Check that everything is OK (this is the output of crm_mon):

Last updated: Sun Apr 22 21:17:53 2012
Stack: openais
Current DC: node1	- partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
4 Resources configured.

Online: [ node1 node2 ]

Full list of resources:

st-node1  (stonith:external/vcenter):     Started node2
 Resource Group: group_mysql
     fs_mysql   (ocf::heartbeat:Filesystem):    Started node2
     mysqld     (ocf::heartbeat:mysql): Started node2
 Master/Slave Set: ms_drbd_mysql
     Masters: [ node2 ]
     Slaves: [ node1 ]
st-node2  (stonith:external/vcenter):     Started node1
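Finally, a simple failover test: put the current master into standby and confirm that the whole stack migrates. A sketch, assuming node2 currently holds the master role as in the output above:

```shell
# Put node2 into standby; group_mysql and the DRBD master should move to node1.
crm node standby node2

# One-shot status to confirm the migration.
crm_mon -1

# Bring node2 back; it should rejoin as the DRBD slave.
crm node online node2
```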
