
I have DNS round-robin across two virtual IPs in front of a service. (Among others, the services tested were apache, nginx, varnish, postfix, … It really does not matter; let's just call it the service.)

I have a corosync/Pacemaker configuration where the service runs on two nodes (as a clone with clone-max=2, clone-node-max=1) and each node holds one of the two virtual IPs.
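As a point of reference, a setup like the one described might look roughly like this in crm shell syntax. This is a hedged sketch, not the poster's actual configuration: the resource names (svc, ip1, ip2), the lsb: resource class, and the example addresses are all placeholders.

```
# Hedged sketch of an active/active clone with two virtual IPs.
# Resource names and addresses are illustrative, not from the original post.
primitive svc lsb:myservice \
    op monitor interval="30s"
primitive ip1 ocf:heartbeat:IPaddr2 \
    params ip="192.0.2.10" cidr_netmask="24" \
    op monitor interval="10s"
primitive ip2 ocf:heartbeat:IPaddr2 \
    params ip="192.0.2.11" cidr_netmask="24" \
    op monitor interval="10s"
clone svc-clone svc \
    meta clone-max="2" clone-node-max="1"
```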

  • In case of node failure (corosync stopped, standby mode): the other node takes over the IP.
  • In case the service is stopped: the cluster brings it back up.

But:

  • In case the service's configuration is broken: the cluster cannot start it and it remains stopped/failed, but the virtual IP stays on the node.

When the cluster was active/passive there was no clone. The primitive service was in a group with the IP, so in case of a failure the virtual IP wasn't started either.

I cannot put a clone into a group.

How do I solve this?

Please note that this seems to have nothing to do with ordering, which works just fine.
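Since a clone cannot be grouped, the usual Pacemaker alternative is a colocation constraint tying each virtual IP to a running instance of the clone, so that a node where the service cannot start also gives up its IP. A hedged sketch, reusing the placeholder names svc-clone, ip1, and ip2 from above rather than the poster's real resource IDs:

```
# Hedged sketch: colocate each virtual IP with the service clone.
# If svc-clone cannot run on a node, that node's IP moves away too.
colocation ip1-with-svc inf: ip1 svc-clone
colocation ip2-with-svc inf: ip2 svc-clone
```

An ordering constraint can be kept alongside this, but as the question notes, ordering alone does not make the IP follow a failed service; colocation is what binds their placement.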

JdeBP
Arek B.

2 Answers


Have a look at the Pacemaker cluster project: http://clusterlabs.org

It can monitor/run/move services across the cluster.

splattne
Biriukov
  • It was a shortcut; it is Pacemaker indeed. It does the monitoring, starting, etc., but not under all conditions. – Arek B. Feb 06 '12 at 16:30

I have added the option on-fail="standby" to the primitive's 'op start'. Now, when my service (the only primitive in the clone) cannot start due to a faulty config, the node loses its virtual IP as well.
This way I end up with the resources migrated to the healthy node.
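In crm shell syntax, the change described above might look like this. This is a hedged sketch: the resource name, class, and timeout values are placeholders; only on-fail="standby" on the start operation is from the answer itself.

```
# Hedged sketch: if the start operation fails, put the node in standby,
# which forces all its resources (including the virtual IP) to move away.
primitive svc lsb:myservice \
    op start interval="0" timeout="60s" on-fail="standby" \
    op monitor interval="30s"
```

Note that standby affects the whole node, so every resource it hosts will migrate, not just the failed service.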

Arek B.
  • 307
  • 1
  • 3
  • 11