
I've got an active-passive Heartbeat cluster with Apache, MySQL, ActiveMQ and DRBD.

Today, I wanted to perform hardware maintenance on the secondary node (node04), so I stopped the heartbeat service before shutting the node down.

Then, the primary node (node03) received a shutdown notice from the secondary node (node04).

This log output is from the primary node (node03):

heartbeat[4458]: 2010/03/08_08:52:56 info: Received shutdown notice from 'node04.companydomain.nl'.
heartbeat[4458]: 2010/03/08_08:52:56 info: Resources being acquired from node04.companydomain.nl.
harc[27522]:    2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/status status
heartbeat[27523]: 2010/03/08_08:52:56 info: Local Resource acquisition completed.
mach_down[27567]:       2010/03/08_08:52:56 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
mach_down[27567]:       2010/03/08_08:52:56 info: mach_down takeover complete for node node04.companydomain.nl.
heartbeat[4458]: 2010/03/08_08:52:56 info: mach_down takeover complete.
harc[27620]:    2010/03/08_08:52:56 info: Running /etc/ha.d/rc.d/ip-request-resp ip-request-resp
ip-request-resp[27620]: 2010/03/08_08:52:56 received ip-request-resp drbddisk OK yes
ResourceManager[27645]: 2010/03/08_08:52:56 info: Acquiring resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::ger.apeldoorn@mydomain.nl::DRBDFailureAcc MailTo::mail@company.nl::DRBDFailureAcc 1.2.3.212
ResourceManager[27645]: 2010/03/08_08:52:56 info: Running /etc/ha.d/resource.d/drbddisk  start
Filesystem[27700]:      2010/03/08_08:52:57 INFO:  Running OK
ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/mysql  start
mysql[27783]:   2010/03/08_08:52:57 Starting MySQL[ OK ]
apache[27853]:  2010/03/08_08:52:57 INFO:  Running OK
ResourceManager[27645]: 2010/03/08_08:52:57 info: Running /etc/ha.d/resource.d/monitor  start
monitor[28160]: 2010/03/08_08:52:58
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq  start
activemq[28210]:        2010/03/08_08:52:58 Starting ActiveMQ Broker... ActiveMQ Broker is already running.
ResourceManager[27645]: 2010/03/08_08:52:58 ERROR: Return code 1 from /etc/ha.d/resource.d/activemq
ResourceManager[27645]: 2010/03/08_08:52:58 CRIT: Giving up resources due to failure of activemq
ResourceManager[27645]: 2010/03/08_08:52:58 info: Releasing resource group: node03.companydomain.nl drbddisk Filesystem::/dev/drbd0::/data::ext3 mysql apache::/etc/httpd/conf/httpd.conf LVSSyncDaemonSwap::master monitor activemq tivoli-cluster MailTo::ger.apeldoorn@mydomain.nl::DRBDFailureAcc MailTo::mail@company.nl::DRBDFailureAcc 1.2.3.212
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/IPaddr 1.2.3.212 stop
IPaddr[28329]:  2010/03/08_08:52:58 INFO: ifconfig eth0:0 down
IPaddr[28312]:  2010/03/08_08:52:58 INFO:  Success
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo node@company.nl DRBDFailureAcc stop
MailTo[28378]:  2010/03/08_08:52:58 INFO:  Success
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/MailTo ger.apeldoorn@mydomain.nl DRBDFailureAcc stop
MailTo[28433]:  2010/03/08_08:52:58 INFO:  Success
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/tivoli-cluster  stop
ResourceManager[27645]: 2010/03/08_08:52:58 info: Running /etc/ha.d/resource.d/activemq  stop
activemq[28503]:        2010/03/08_08:53:01 Stopping ActiveMQ Broker... Stopped ActiveMQ Broker.
ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/monitor  stop
monitor[28681]: 2010/03/08_08:53:01
ResourceManager[27645]: 2010/03/08_08:53:01 info: Running /etc/ha.d/resource.d/LVSSyncDaemonSwap master stop
LVSSyncDaemonSwap[28714]:       2010/03/08_08:53:02 info: ipvs_syncmaster down
LVSSyncDaemonSwap[28714]:       2010/03/08_08:53:02 info: ipvs_syncbackup up
LVSSyncDaemonSwap[28714]:       2010/03/08_08:53:02 info: ipvs_syncmaster released
ResourceManager[27645]: 2010/03/08_08:53:02 info: Running /etc/ha.d/resource.d/apache /etc/httpd/conf/httpd.conf stop
apache[28782]:  2010/03/08_08:53:03 INFO: Killing apache PID 18390
apache[28782]:  2010/03/08_08:53:03 INFO: apache stopped.
apache[28771]:  2010/03/08_08:53:03 INFO:  Success
ResourceManager[27645]: 2010/03/08_08:53:03 info: Running /etc/ha.d/resource.d/mysql  stop
mysql[28851]:   2010/03/08_08:53:24 Shutting down MySQL.....................[ OK ]
ResourceManager[27645]: 2010/03/08_08:53:24 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd0 /data ext3 stop
Filesystem[29010]:      2010/03/08_08:53:25 INFO: Running stop for /dev/drbd0 on /data
Filesystem[29010]:      2010/03/08_08:53:25 INFO: Trying to unmount /data
Filesystem[29010]:      2010/03/08_08:53:25 ERROR: Couldn't unmount /data; trying cleanup with SIGTERM
Filesystem[29010]:      2010/03/08_08:53:25 INFO: Some processes on /data were signalled
Filesystem[29010]:      2010/03/08_08:53:27 INFO: unmounted /data successfully
Filesystem[28999]:      2010/03/08_08:53:27 INFO:  Success
ResourceManager[27645]: 2010/03/08_08:53:27 info: Running /etc/ha.d/resource.d/drbddisk  stop
heartbeat[4458]: 2010/03/08_08:53:29 WARN: node node04.companydomain.nl: is dead
heartbeat[4458]: 2010/03/08_08:53:29 info: Dead node node04.companydomain.nl gave up resources.
heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth0 dead.
heartbeat[4458]: 2010/03/08_08:53:29 info: Link node04.companydomain.nl:eth1 dead.
hb_standby[29193]:      2010/03/08_08:53:57 Going standby [foreign].
heartbeat[4458]: 2010/03/08_08:53:57 info: node03.companydomain.nl wants to go standby [foreign]

Soo... What just happened here???

  • Heartbeat on node04 stopped and told node03, which was the active node at the time.
  • Somehow, node03 then decided to start the cluster processes that were already running on it. (For the processes that are not critical, I always return 0 from the startup script so that a failure of a non-essential part does not stop the entire cluster; see the sketch after this list.)
  • When heartbeat tried to start ActiveMQ, the script returned status 1 because ActiveMQ was already running.
  • That failure failed the node and shut everything down. As heartbeat was not running on the secondary node, there was nothing to fail over to.
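
For illustration, a non-essential resource script in the sense I mean looks roughly like this; the daemon name, paths, and checks here are just placeholders, not the actual script:

#!/bin/sh
# Hypothetical sketch: heartbeat resource script for a non-essential service.
# "start" always exits 0, so a failure here cannot pull down the whole group.
case "$1" in
  start)
    /usr/local/bin/monitor-daemon >/dev/null 2>&1   # best effort
    exit 0                                          # never report failure
    ;;
  stop)
    pkill -x monitor-daemon >/dev/null 2>&1
    exit 0
    ;;
  status)
    if pgrep -x monitor-daemon >/dev/null 2>&1; then
      echo "running"
    else
      echo "stopped"
    fi
    exit 0
    ;;
esac
exit 0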

When I tried to run ha_takeover to restart the resources, absolutely nothing happened.

Only after I restarted heartbeat on the primary node could the resources be started (after a delay of 2 minutes).

These are my questions:

  • Why does heartbeat on the primary node try to start the cluster processes again?
  • Why did ha_takeover not work?
  • What can I do to prevent this from happening?

Server configuration:

DRBD:

version: 8.3.7 (api:88/proto:86-91)
GIT-hash: ea9e28dbff98e331a62bcbcc63a6135808fe2917 build by root@rhel-buildhost.mydomain.nl, 2010-01-20 09:14:48
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate B r----
    ns:0 nr:6459432 dw:6459432 dr:0 al:0 bm:301 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0

uname -a

Linux node04 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

haresources

node03.companydomain.nl \
          drbddisk \
          Filesystem::/dev/drbd0::/data::ext3 \
          mysql \
          apache::/etc/httpd/conf/httpd.conf \
          LVSSyncDaemonSwap::master \
          monitor \
          activemq \
          tivoli-cluster \
          MailTo::ger.apeldoorn@mydomain.nl::DRBDFailureAcc \
          MailTo::node@company.nl::DRBDFailureAcc \
          1.2.3.212

ha.cf

debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 500ms
deadtime 30
warntime 10
initdead 120
udpport 694
mcast eth0 225.0.0.3 694 1 0
mcast eth1 225.0.0.4 694 1 0
auto_failback off
node    node03.companydomain.nl
node    node04.companydomain.nl

respawn hacluster /usr/lib64/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster

Thank you very much in advance,

Ger Apeldoorn


2 Answers


This is not a heartbeat bug; it is a common bug in some init scripts. The LSB standard says:

  • Stopping a stopped resource is not an error.
  • Starting a started resource is not an error.

So what went wrong? ActiveMQ was started and already running.

That in itself is not an error. But the script returned 1 (error) instead of 0 (ok), so heartbeat concluded that the start had failed and stopped the whole resource group.

So if you use init scripts as heartbeat resources, make sure they conform to the LSB.
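
A minimal sketch of an LSB-conformant resource script for ActiveMQ could look like the following; the underlying /etc/init.d/activemq init script and its status support are assumptions here, so adapt the checks to whatever your real script provides:

#!/bin/sh
# Sketch only: treat "already running" / "already stopped" as success,
# as the LSB requires, and pass real failures through.
case "$1" in
  start)
    if /etc/init.d/activemq status >/dev/null 2>&1; then
      echo "ActiveMQ Broker is already running."
      exit 0   # starting a started resource is not an error
    fi
    exec /etc/init.d/activemq start
    ;;
  stop)
    if ! /etc/init.d/activemq status >/dev/null 2>&1; then
      exit 0   # stopping a stopped resource is not an error
    fi
    exec /etc/init.d/activemq stop
    ;;
  status)
    exec /etc/init.d/activemq status
    ;;
esac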

Nils

For what it is worth, I feel your pain. It seems that heartbeat treats the loss of the passive node the same as a takeover from the passive node, so it starts its services on the active node. When the start scripts failed, and there was no other node to fail over to, heartbeat stayed primary and shut down all the services. The only way to get back up again is to restart heartbeat when this happens.

We dealt with this problem by making a single script that starts all of the cluster services (IP, filesystem mount, ipvsadm, Apache, etc.) only if they are not already running. We make sure this "all in one" init script returns non-zero only for actual startup failures (and not for warnings like "already running") to avoid problems like this.
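
As a rough sketch (the service names, running-checks, and init-script paths below are illustrative assumptions, not our actual script), such an all-in-one resource script can be structured like this:

#!/bin/sh
# Start each cluster service only if it is not already running; return
# non-zero only for genuine startup failures.

start_if_needed() {
  # $1 = name, $2 = command that succeeds if already running, $3 = start command
  if $2 >/dev/null 2>&1; then
    echo "$1 is already running - skipping"
    return 0
  fi
  $3 || { echo "FAILED to start $1" >&2; return 1; }
}

case "$1" in
  start)
    rc=0
    start_if_needed mysql    "mysqladmin ping"    "/etc/init.d/mysqld start"   || rc=1
    start_if_needed apache   "pgrep -x httpd"     "/etc/init.d/httpd start"    || rc=1
    start_if_needed activemq "pgrep -f activemq"  "/etc/init.d/activemq start" || rc=1
    exit $rc
    ;;
  stop)
    /etc/init.d/activemq stop
    /etc/init.d/httpd stop
    /etc/init.d/mysqld stop
    exit 0
    ;;
esac
exit 0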

Mark Porter
  • Ah. I was assuming that it was a configuration error of some kind. This is obviously a bug and I will report it as such. Thank you very much for your insight. – Ger Apeldoorn Mar 11 '10 at 08:22