
I have a two-node cluster with DRBD + Pacemaker + Corosync. When the first node fails, the second node takes over the service and everything is fine, but when we fail back (node1 comes back online) some errors appear and the cluster stops working.

It's a CentOS 6 cluster with kernel 2.6.32-504.12.2.el6.x86_64 and these packages:

kmod-drbd83-8.3.16-3, drbd83-utils-8.3.16-1, corosynclib-1.4.7-1, corosync-1.4.7-1, pacemaker-1.1.12-4, pacemaker-cluster-libs-1.1.12-4, pacemaker-libs-1.1.12-4, pacemaker-cli-1.1.12-4.

DRBD config:

resource r0 {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }

    net {
        cram-hmac-alg sha1;
        shared-secret sync_disk;
        max-buffers 512;
        sndbuf-size 0;
    }

    syncer {
        rate 100M;
        verify-alg sha1;
    }

    on XXX2 {
        device minor 1;
        disk /dev/sdb;
        address xx.xx.xx.xx:7789;
        meta-disk internal;
    }

    on XXX1 {
        device minor 1;
        disk /dev/sdb;
        address xx.xx.xx.xx:7789;
        meta-disk internal;
    }
}
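
To see where the replication stands on each node during failover or failback, the DRBD state can be checked with something like this (a quick sketch using the standard drbd83 tools; r0 is the resource defined above):

cat /proc/drbd       # connection state, roles and disk states for all resources
drbdadm role r0      # should report Primary/Secondary on the active node
drbdadm cstate r0    # should report Connected
drbdadm dstate r0    # should report UpToDate/UpToDate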

Corosync:

compatibility: whitetank

totem {
    version: 2
    secauth: on
    interface {
        member {
            memberaddr: xx.xx.xx.1
        }
        member {
            memberaddr: xx.xx.xx.2
        }
        ringnumber: 0
        bindnetaddr: xx.xx.xx.1
        mcastport: 5405
        ttl: 1
    }
    transport: udpu
}

logging {
    fileline: off
    to_logfile: yes
    to_syslog: yes
    debug: on
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
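
To rule out a membership problem when node1 rejoins, the ring and node status can be checked on both nodes (a sketch using the standard corosync/pacemaker command-line tools):

corosync-cfgtool -s    # ring status as seen by the local corosync
crm_node -l            # node list as seen by Pacemaker
crm_mon -1             # one-shot view of the current cluster status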

Pacemaker:

node XXX1 \
        attributes standby=off
node XXX2 \
        attributes standby=off
primitive drbd_res ocf:linbit:drbd \
        params drbd_resource=r0 \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
primitive failover_ip IPaddr2 \
        params ip=172.16.2.49 cidr_netmask=32 \
        op monitor interval=30s nic=eth0 \
        meta is-managed=true
primitive fs_res Filesystem \
        params device="/dev/drbd1" directory="/data" fstype=ext4 \
        meta is-managed=true
primitive res_exportfs_export1 exportfs \
        params fsid=1 directory="/data/export" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
        op monitor interval=40s \
        op stop interval=0 timeout=120s \
        op start interval=0 timeout=120s \
        meta is-managed=true
primitive res_exportfs_export2 exportfs \
        params fsid=2 directory="/data/teste1" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
        op monitor interval=40s \
        op stop interval=0 timeout=120s \
        op start interval=0 timeout=120s \
        meta is-managed=true
primitive res_exportfs_root exportfs \
        params clientspec="*" options="rw,async,fsid=root,insecure,no_subtree_check,no_root_squash,no_all_squash" directory="/data" fsid=0 unlock_on_stop=false wait_for_leasetime_on_stop=false \
        operations $id=res_exportfs_root-operations \
        op monitor interval=30 start-delay=0 \
        meta
group rg_export fs_res res_exportfs_export1 res_exportfs_export2 failover_ip
ms drbd_master_slave drbd_res \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
clone cl_exportfs_root res_exportfs_root \
        meta
colocation c_nfs_on_root inf: rg_export cl_exportfs_root
colocation fs_drbd_colo inf: rg_export drbd_master_slave:Master
order fs_after_drbd Mandatory: drbd_master_slave:promote rg_export:start
order o_root_before_nfs inf: cl_exportfs_root rg_export:start
property cib-bootstrap-options: \
        expected-quorum-votes=2 \
        last-lrm-refresh=1427814473 \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        dc-version=1.1.11-97629de \
        cluster-infrastructure="classic openais (with plugin)"

Errors:

res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms
res_exportfs_export2_stop_0 on xx.xx.xx.2 'unknown error' (1): call=52, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20001ms

Is there some other log that I could check?

I checked on the second node: /dev/drbd1 does not get unmounted on failback. If I restart the NFS service and apply the rule, everything works fine.
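
The manual workaround on the second node amounts to something like this (rough sketch only, the exact steps may vary; the unexport/umount part is what actually frees the device):

exportfs -v             # see which exports the kernel still holds
service nfs stop        # stop the NFS server
exportfs -ua            # drop all kernel exports so /data is no longer busy
umount /data            # now the DRBD-backed filesystem can be unmounted
drbdadm secondary r0    # hand the device back so node1 can promote it
service nfs start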

Edit: Thanks to Dok, it's working now; I just had to adjust the stop timeout to 120s and set the start timeout as well!

1 Answer

res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms

This shows that your res_exportfs_export2 resource failed to stop due to a timeout. It may simply need a longer timeout. Try configuring a stop timeout for this resource like so:

primitive res_exportfs_export2 exportfs \
        params fsid=2 directory="/data/teste1" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=true \
        op monitor interval=30s \
        op stop interval=0 timeout=60s

If the timeout does not help, check the messages log and/or corosync.log at the time shown in the errors (Mar 31 12:53:04 2015) for clues.
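
If you are using the crm shell (which your configuration listing suggests), one way to apply and verify the change and then retry the failed stop is roughly:

crm configure edit res_exportfs_export2    # bump the op stop/start timeouts in place
crm configure show res_exportfs_export2    # confirm the new operation timeouts
crm resource cleanup res_exportfs_export2  # clear the failed stop so the cluster re-evaluates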

Dok