
There seem to be lots of options in Linux to provide a virtual IP for failover between multiple hosts. Some that I have found are heartbeat, vrrpd, carp, and keepalived.

In Linux I only have experience with heartbeat (and I have used HSRP on Cisco). Do these various options have any particular advantage when it comes to providing a virtual IP that will act as a gateway for hosts on the LAN? One feature I would like is the ability to track another interface: for example, if the virtual IP is shared between eth0 on Server A and eth0 on Server B, I would like it to fail over to the other server if it detects that eth1 has gone down. I would also like to be able to set a preferred host.
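To make that concrete, in keepalived terms the behaviour I'm after would look something like this sketch (the addresses, router ID, and priorities are made up):

```
# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance VI_GATEWAY {
    state MASTER              # on Server A; BACKUP on Server B
    interface eth0            # interface that carries the virtual IP
    virtual_router_id 51
    priority 150              # higher priority = preferred host (e.g. 100 on Server B)
    advert_int 1
    track_interface {
        eth1                  # fail over if eth1 goes down, even though the VIP lives on eth0
    }
    virtual_ipaddress {
        192.0.2.1/24
    }
}
```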

Kyle Brandt

3 Answers


One of the primary advantages I have found with heartbeat is the ability to customize it with multiple monitoring points. In the recommended default configuration it already monitors over two paths: a serial link and the network.

For example, a heartbeat resource script could be created to monitor a daemon and in case of the daemon failing, initiate a failover.
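A rough sketch of the status/monitor part of such a script might look like this (the daemon name is just an example, and a real heartbeat resource script would also implement the start and stop actions):

```shell
#!/bin/sh
# Sketch of the monitoring half of a heartbeat resource script.
# The daemon name is illustrative; heartbeat calls the script's
# "status" action and fails over when it reports a failure.

check_daemon() {
    # Return 0 if the named daemon has a running process, non-zero otherwise.
    pidof "$1" > /dev/null 2>&1
}

status() {
    if check_daemon "$1"; then
        echo "running"
    else
        echo "stopped"    # a failed status check is heartbeat's cue to fail over
        return 1
    fi
}

status mysqld || true     # example invocation; don't abort the sketch if mysqld is absent
```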

CARP is a free alternative to protocols like HSRP and VRRP, which, as you identified, monitor the interface. It certainly has its place and I like the technology, but depending on the server's role you might find heartbeat to be advantageous.

I suppose it could be argued that even those protocols that do not support this could have a script written to imitate some of the behavior, which is essentially what I described with heartbeat.

While I have never used keepalived, it seems to be similar to ldirectord in that it monitors LVS hosts and removes them from the VIP in case of failure. I would not consider this to be in the exact same category as heartbeat or CARP.
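For illustration, that health-checking side of keepalived is configured along these lines (a sketch; the addresses, ports, and timings are made up):

```
# keepalived.conf fragment (sketch) -- the LVS health-checking side
virtual_server 192.0.2.10 80 {
    delay_loop 6              # seconds between health checks
    lb_algo rr                # round-robin scheduling
    lb_kind NAT
    protocol TCP

    real_server 10.0.0.11 80 {
        TCP_CHECK {
            connect_timeout 3 # real server is pulled from the VIP if it stops answering
        }
    }
}
```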

Warner

We use switch/load-balancer-based VIPs that round-robin based on a service-availability test such as an HTTP GET or similar. This takes the load and responsibility away from the servers - each thinks it is the only one responding. For our actual clusters (Oracle, WebLogic, ZXTM etc.) the same model holds, but the clustering application itself ensures that the servers stay in touch with each other, while the client-facing IPs remain 'regular' ones. Essentially we've never found a reason for anything other than 'regular' IPs, but I'd be interested to know your planned use case. Oh, and we can then use the switch/LB to define which servers are in or out of service.
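The service-availability test itself is usually nothing more than an HTTP probe; something along these lines (the URL is made up):

```shell
#!/bin/sh
# Minimal HTTP GET health check of the sort a load balancer runs against
# each server. The URL is an example; a real check would normally hit a
# dedicated application health endpoint.

check_http() {
    # Succeed only if the server answers with HTTP 200 within 3 seconds.
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 "$1")
    [ "$code" = "200" ]
}

if check_http "http://10.0.0.11/health"; then
    echo "in service"
else
    echo "out of service"
fi
```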

Chopper3

Failover sucks - you never know whether it will work until something fails. Like Chopper3, I'd always go with load balancing if it's at all possible.

C.

symcbean
    Or when you test it. There are plenty of situations where load balancing is not the best solution, such as with a write VIP on MySQL with InnoDB. – Warner Jul 20 '10 at 13:56