6

I'm investigating setting up a load balanced server solution consisting of three CentOS 5.4 boxes. Two of these boxes will reside in one facility, while a third will reside in a different facility.

I'm currently working on setting up heartbeat, ldirectord, and ipvsadm to load-balance the machines, but I'm not sure it's going to work with this arrangement.

I'm not overly familiar with the details of how all of these work, but will the load balancing work correctly when the servers are not all on the same LAN? I'm not sure whether heartbeat uses SNMP to send its signals, which I assume would only work over a LAN. Has anyone tried this or found a different solution?

LinuxGnut

3 Answers

8

This is a large topic that gets complicated fast. The CAP theorem is a good starting point, as it identifies the higher-level choices that must be made.

When you are dealing with a write-heavy web application, it is considerably more difficult to distribute load across the Internet while maintaining data integrity. Read-centric applications (search!) are easier to distribute, as you do not have to concern yourself with the logistics of writing the data.

ipvs essentially lets Linux act as a layer 4 switch. I have had the most success using it at layer 2 (ARP/Ethernet, the link layer) and that would be my first choice, but something like LVS-Tun may be feasible for geographically separate servers that do not share a broadcast domain. Note that ipvsadm is the userland tool for ipvs, and ldirectord is a daemon that manages ipvs resources.
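
For illustration, a minimal ipvsadm sketch of that layout, assuming a VIP of 192.0.2.100 and placeholder real-server addresses:

    # Create a virtual HTTP service on the VIP, weighted-least-connection scheduling.
    ipvsadm -A -t 192.0.2.100:80 -s wlc
    # Local real servers via direct routing (-g), the layer 2/ARP approach above.
    ipvsadm -a -t 192.0.2.100:80 -r 10.0.0.11:80 -g -w 100
    ipvsadm -a -t 192.0.2.100:80 -r 10.0.0.12:80 -g -w 100
    # A remote real server via IP-in-IP tunneling (-i), i.e. LVS-Tun; it must
    # carry the VIP on a tunnel interface with ARP for the VIP suppressed.
    ipvsadm -a -t 192.0.2.100:80 -r 203.0.113.21:80 -i -w 50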

heartbeat has effectively been succeeded by Pacemaker. To monitor the other server, it is essential to have multiple links. Without a serial or otherwise redundant physical connection between the servers, the risk is substantially greater. Even multiple physically distinct Internet connections between the two sites, monitored by heartbeat, are bound to go down at some point. This is where the risk to data comes into play, as automatic failover risks data corruption through split brain. There is no ideal method to fully mitigate this risk.
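
As an illustration, the communication-path portion of heartbeat's /etc/ha.d/ha.cf could look like this; the interface names, peer addresses, and node names are placeholders:

    # /etc/ha.d/ha.cf -- multiple independent links between the two nodes
    serial /dev/ttyS0         # direct serial cable, immune to network failures
    ucast eth0 192.168.1.2    # primary network path to the peer
    ucast eth1 10.0.0.2       # second, physically distinct network path
    keepalive 2               # heartbeat interval in seconds
    deadtime 30               # declare the peer dead after 30s of silence
    auto_failback off
    node alpha bravo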

You could inject more logic into the failover process. For example:

If path1 is down, path2 is down, this process is not running, and I can't do this, then fail over.

This reduces the risk, but even then not necessarily to the level achievable by physically connecting the servers over a short distance.
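
A sketch of such a guard as a shell script; the addresses and health URL are hypothetical placeholders:

    #!/bin/sh
    # Pre-failover guard: every independent check must fail
    # before failover is permitted.
    ping -c 3 -W 2 192.168.1.2 >/dev/null 2>&1 && exit 1   # path1 still up
    ping -c 3 -W 2 10.0.0.2    >/dev/null 2>&1 && exit 1   # path2 still up
    curl -fs --max-time 5 http://10.0.0.2/health >/dev/null 2>&1 && exit 1
    exit 0   # all checks failed -- allow the failover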

With static content, it is easy to employ a Content Distribution Network.

Simple load balancing and failover can be accomplished using round-robin DNS, though it is a more fallible approach.
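
For example, a zone can return several A records for the same name and resolvers will rotate through them; the fallibility is that a dead server's address keeps being handed out until the TTL expires. The addresses below are placeholders:

    ; round-robin A records with a short TTL
    www   60   IN   A   192.0.2.10
    www   60   IN   A   192.0.2.11
    www   60   IN   A   203.0.113.10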

Border Gateway Protocol (BGP) is a routing protocol that can provide high availability at the network layer.
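
As a sketch, with Quagga's bgpd both sites could announce the same service prefix (anycast) so traffic converges on whichever site remains reachable; the AS numbers and prefix are placeholders:

    ! bgpd.conf fragment -- announce the service prefix to the upstream
    router bgp 64512
     network 203.0.113.0/24
     neighbor 192.0.2.254 remote-as 64511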

Ultimately, with enough money (time/resources) an appropriate SLA can be developed to enable a high degree of availability. Your budget will be your ultimate constraint. Define your requirements and then see what you can accomplish within your budget, as there will be compromises.

I have often found that it makes more sense, at least in the case of write-heavy applications, to provide high availability and automatic failover within the same physical premises, and to cover the physically separate site with a manual failover process as part of the disaster recovery plan and SLA. That way data integrity is maintained while a quality service level is still delivered.

Warner
  • This doesn't sound overly difficult to set up, but the MySQL drawbacks for a write-heavy website are troubling. This definitely helps continue my investigation, thanks. – LinuxGnut Jun 16 '10 at 15:02
  • Take a look at Tungsten Replicator from Continuent. http://www.continuent.com/community/tungsten-replicator I've just started using it for master/master (n master) mysql replication. Well worth a look. – Tom O'Connor Jun 16 '10 at 17:48
3

Having different servers in different locations shouldn't be a problem, as long as they can reach each other.
The problem would be the bandwidth between them and what you send over it.
Heartbeat doesn't use SNMP; it speaks its own protocol and can communicate via multicast, unicast, or broadcast. (In any case, SNMP does work between LANs, since it's UDP-based.)
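
For illustration, these are the corresponding ha.cf transport directives; the interface and addresses are examples:

    # /etc/ha.d/ha.cf -- pick whichever transport your network allows
    bcast eth0                      # broadcast on the local segment
    mcast eth0 225.0.0.1 694 1 0    # multicast: dev, group, port, ttl, loop
    ucast eth0 192.0.2.2            # unicast to the peer; works over routed links
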
What kind of service are you trying to load balance?

PiL
  • We'll definitely be load balancing Apache, but there's a chance we'll be doing MySQL as well (or as a cluster). – LinuxGnut Jun 16 '10 at 14:32
  • 1
    Apache will not be a problem, but load balance mysql is not so easy (bandwidth, clustering mysql is not easy and with some drawbacks) – PiL Jun 16 '10 at 14:39
  • Load balancing MySQL is easy with slave replication and a read-only ipvs VIP. For writes, you typically scale up, or scale out with sharding, which is more complicated. – Warner Jun 16 '10 at 14:46
  • OK, but what if you want to load balance a write-heavy MySQL server? That's not so easy. – PiL Jun 16 '10 at 14:52
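
Illustrating Warner's comment above, a read-only MySQL VIP can be built the same way as an HTTP one; the VIP and slave addresses are placeholders:

    # Reads go to a VIP balanced across the replication slaves.
    ipvsadm -A -t 192.0.2.101:3306 -s wlc
    ipvsadm -a -t 192.0.2.101:3306 -r 10.0.0.21:3306 -g -w 100
    ipvsadm -a -t 192.0.2.101:3306 -r 10.0.0.22:3306 -g -w 100
    # Writes still go directly to the master; scaling them means
    # scaling up or sharding.
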
0

Another idea would be a high-availability setup built on DRBD. Check this site out: http://www.drbd.org/home/what-is-drbd/
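
A minimal sketch of a two-node DRBD resource; hostnames, backing devices, and addresses are placeholders:

    # /etc/drbd.conf -- block-level replication between two nodes
    resource r0 {
      protocol C;    # synchronous; over a WAN, protocol A (async) is typical
      on alpha {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.1:7788;
        meta-disk internal;
      }
      on bravo {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.2:7788;
        meta-disk internal;
      }
    }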

erimar77