OK, I've never built an AWS load balancing solution at SmugMug's traffic levels myself, but just thinking about the theory and AWS's services, a couple of ideas come to mind.
The original question is missing a few things that tend to impact the load balancing design:
- Sticky sessions or not? It is strongly preferable not to use sticky sessions, and just let all load balancers (LBs) use round robin (RR) or random backend selection. RR or random backend selection is simple, scalable, and provides even load distribution in all circumstances (a toy sketch follows this list).
- SSL or not? Whether SSL is in use, and for what percentage of requests, generally has an impact on the load balancing design. It is often preferable to terminate SSL as early as possible, to simplify certificate handling and keep the SSL CPU load away from the web application servers.
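To illustrate why those two selection strategies are so attractive, here is a toy Python sketch (the backend IPs are made up). Either strategy can run independently on every LB with no shared session state, which is exactly what makes them scale:

```python
import itertools
import random

# Placeholder backend pool; a real LB would also track health and weights.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

rr = itertools.cycle(backends)

def pick_round_robin():
    # Strict rotation: each backend sees every third request.
    return next(rr)

def pick_random():
    # Stateless, so it needs no shared counter between LB instances,
    # yet it converges on the same even distribution over many requests.
    return random.choice(backends)
```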
I'm answering from the perspective of how to keep the load balancing layer itself highly available. Keeping the application servers HA is handled by the health checks built into your L7 load balancers.
OK, here are a few ideas that should work:
1) "The AWS way":
- First layer, at the very front, use ELB in L4 (TCP/IP) mode.
- Second layer, use EC2 instances with your L7 load balancer of choice (nginx, HAProxy, Apache, etc.).
Benefits/idea: The L7 load balancers can be fairly simple EC2 AMIs, all cloned from the same AMI and using the same configuration. Thus Amazon's tools can handle all the HA needs: ELB health-checks the L7 load balancers, and if one dies or becomes unresponsive, CloudWatch and Auto Scaling together spawn a new instance automatically and bring it into the ELB pool.
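As a concrete sketch of the first layer, here's roughly what it might look like in Python with boto3 (a modern SDK used purely for illustration; the name `l7-lb-tier`, the zones, and the instance ID are all made up):

```python
import boto3

# Classic ELB client; region and names are placeholder examples.
elb = boto3.client("elb", region_name="us-east-1")

# L4 (TCP) mode: ELB just forwards the TCP stream, so all the L7 work
# (routing, SSL, app-server health checks) happens on the EC2 LBs behind it.
elb.create_load_balancer(
    LoadBalancerName="l7-lb-tier",
    Listeners=[{"Protocol": "TCP", "LoadBalancerPort": 80,
                "InstanceProtocol": "TCP", "InstancePort": 80}],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# ELB's own health check decides when an L7 LB instance leaves rotation.
elb.configure_health_check(
    LoadBalancerName="l7-lb-tier",
    HealthCheck={"Target": "TCP:80", "Interval": 10, "Timeout": 5,
                 "UnhealthyThreshold": 2, "HealthyThreshold": 2},
)

# Register the EC2 instances running nginx/HAProxy (ID is a placeholder).
elb.register_instances_with_load_balancer(
    LoadBalancerName="l7-lb-tier",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```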
2) "The DNS round robin with monitoring way:"
- Use basic DNS round robin to get a coarse-grained load distribution over a few IP addresses. Let's just say you publish 3 IP addresses for your site.
- Each of these 3 IPs is an AWS Elastic IP address (EIP), bound to an EC2 instance with the L7 load balancer of your choice.
- If an EC2 L7 LB dies, a compliant user agent (browser) should just use one of the other IPs instead.
- Set up an external monitoring server. Monitor each of the 3 EIPs. If one becomes unresponsive, use AWS's command-line tools and some scripting to move that EIP over to another EC2 instance (a sketch follows below).
Benefits/idea: Compliant user agents should automatically switch over to another IP address if one becomes unresponsive. So in the case of a failure only about 1/3 of your users should be impacted, and most of those shouldn't notice anything since their UA silently fails over to another IP. Meanwhile, your external monitoring box will notice that an EIP is unresponsive and rectify the situation within a couple of minutes.
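The "some scripting" part might look something like this Python/boto3 sketch (the addresses, instance IDs, and thresholds are all invented; this is an illustration, not a hardened implementation):

```python
import time
import urllib.request

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Made-up mapping: each published EIP, and a standby instance that should
# take it over if the current holder stops answering.
EIPS = {
    "203.0.113.10": "i-0aaaaaaaaaaaaaaaa",
    "203.0.113.11": "i-0bbbbbbbbbbbbbbbb",
    "203.0.113.12": "i-0cccccccccccccccc",
}

def healthy(ip, timeout=5):
    # One HTTP GET against the EIP; any response at all counts as alive.
    try:
        urllib.request.urlopen("http://%s/" % ip, timeout=timeout)
        return True
    except Exception:
        return False

while True:
    for ip, standby in EIPS.items():
        # Retry a few times so one dropped request doesn't trigger failover.
        if not any(healthy(ip) for _ in range(3)):
            # Re-point the EIP at the standby. This is the EC2-Classic form;
            # in a VPC you would pass AllocationId instead of PublicIp.
            ec2.associate_address(InstanceId=standby, PublicIp=ip)
    time.sleep(30)
```

Run it somewhere outside the load balancing layer itself, so the monitor doesn't share fate with the thing it's watching.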
3) "DNS RR to pairs of HA servers":
Basically this is Don's own suggestion of a simple heartbeat between a pair of servers, but extended to multiple IP addresses.
- Using DNS RR, publish a number of IP addresses for the service. Following the example above, let's just say you publish 3 IPs.
- Each of these IPs goes to a pair of EC2 servers, so 6 EC2 instances in total.
- Each of these pairs uses Heartbeat or another HA solution together with AWS tools to keep 1 IP address live, in an active/passive configuration.
- Each EC2 instance has your L7 load balancer of choice installed.
Benefits/idea: In AWS's completely virtualized environment it's actually not that easy to reason about L4 services and failover modes. Reducing the problem to one pair of identical servers keeping just 1 IP address alive makes it simpler to reason about and test.
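The takeover action Heartbeat triggers on the surviving node could then be as small as this sketch (the EIP is a placeholder, and it assumes the plain, unauthenticated EC2 instance metadata service):

```python
import urllib.request

import boto3

# The one EIP this pair keeps alive (placeholder address).
SHARED_EIP = "203.0.113.10"

# Ask EC2's instance metadata service which instance we are running on.
ME = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

# Claim the pair's EIP for this node; it detaches from the dead peer.
# (EC2-Classic form; in a VPC you'd use AllocationId instead of PublicIp.)
boto3.client("ec2", region_name="us-east-1").associate_address(
    InstanceId=ME, PublicIp=SHARED_EIP
)
```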
Conclusion: Again, I haven't actually tried any of this in production. Just from gut feeling, option 1, with ELB in L4 mode in front of self-managed EC2 instances as L7 LBs, seems most aligned with the spirit of the AWS platform, and with where Amazon is most likely to invest and expand later on. This would probably be my first choice.