180

From reading, it seems like DNS failover is not recommended just because DNS wasn't designed for it. But if you have two webservers on different subnets hosting redundant content, what other methods are there to ensure that all traffic gets routed to the live server if one server goes down?

To me it seems like DNS failover is the only failover option here, but the consensus is it's not a good option. Yet services like DNSmadeeasy.com provide it, so there must be merit to it. Any comments?

John Gardeniers
Lin
  • 3
    Look [here](http://serverfault.com/questions/563835) for an updated discussion on the subject. The failover is now done automatically by modern browsers. – GetFree Dec 28 '13 at 00:30

16 Answers

97

By 'DNS failover' I take it you mean DNS Round Robin combined with some monitoring, i.e. publishing multiple IP addresses for a DNS hostname, and removing a dead address when monitoring detects that a server is down. This can be workable for small, less trafficked websites.

By design, when you answer a DNS request you also provide a Time To Live (TTL) for the response you hand out. In other words, you're telling other DNS servers and caches "you may store this answer and use it for x minutes before checking back with me". The drawbacks come from this:

  • With DNS failover, an unknown percentage of your users will have your DNS data cached, with varying amounts of TTL left. Until the TTL expires, those clients may keep connecting to the dead server. There are faster ways of completing failover than this.
  • Because of the above, you're inclined to set the TTL quite low, say 5-10 minutes. But a higher TTL gives a (very small) performance benefit, and may help your DNS resolution keep working reliably even through a short glitch in network traffic. So DNS-based failover pushes you toward low TTLs, even though higher TTLs are a normal part of DNS and can be useful.
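To make the mechanism concrete, here is a minimal sketch (in Python, using the dnspython library) of what "monitoring plus removing a dead address" can look like: probe each web server and, if one stops answering, strip its A record from the round-robin set with an RFC 2136 dynamic update. The zone, addresses and TSIG key are placeholders, and a real setup would also need retries, alerting, and re-adding a server once it recovers.

```python
# Hedged sketch: health-check two redundant web servers and remove a dead
# server's A record via an RFC 2136 dynamic update. All names, addresses
# and the TSIG key below are invented for illustration.
import socket

import dns.query
import dns.tsigkeyring
import dns.update

ZONE = "example.com"
HOST = "www"                                 # www.example.com has one A record per server
SERVERS = ["192.0.2.10", "198.51.100.10"]    # the two redundant web servers
PRIMARY_NS = "203.0.113.53"                  # name server that accepts dynamic updates

keyring = dns.tsigkeyring.from_text({"failover-key.": "bWFkZS11cC1rZXk="})

def is_alive(ip, port=80, timeout=5):
    """Treat the server as up if it accepts a TCP connection on the web port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for ip in SERVERS:
    if not is_alive(ip):
        update = dns.update.Update(ZONE, keyring=keyring)
        update.delete(HOST, "A", ip)              # pull only the dead address
        dns.query.tcp(update, PRIMARY_NS, timeout=10)
```

Even with this in place, clients that cached the old answer keep it until their TTL runs out, which is exactly the drawback described above.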

The more common methods of getting good uptime involve:

  • Place servers together on the same LAN.
  • Place the LAN in a datacenter with highly available power and network planes.
  • Use an HTTP load balancer to spread load and fail over on individual server failures.
  • Get the level of redundancy / expected uptime you require for your firewalls, load balancers and switches.
  • Have a communication strategy in place for full-datacenter failures, and for the occasional failure of a switch / database server / other resource that cannot easily be mirrored.

A very small minority of web sites use multi-datacenter setups, with 'geo-balancing' between datacenters.

  • 46
    I think he's specifically trying to manage failover between two different data centres (note the comments about different subnets), so placing the servers together/using load balancers/extra redundancy isn't going to help him (apart from redundant data centres. But you still need to tell the internet to go to the one that's still up). – Cian Aug 30 '09 at 23:22
  • 11
    Add anycast to the multi-datacenter setup and it becomes datacenter-failure proof. – petrus Feb 22 '11 at 00:30
  • 1
    The Wikipedia entry on anycast (http://en.wikipedia.org/wiki/Anycast) discusses this in relation to DNS root server resilience. – dunxd Apr 01 '11 at 01:54
  • 1
    Don't forget [the several **other** reasons that DNS "round robin" resource record set shuffling is useless](http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/dns-round-robin-is-useless.html). – JdeBP May 25 '11 at 20:07
  • 1
    There are lots of different stories here being used as justification for casting RR DNS in a bad light. Regarding shuffling - the objective here is to support clients which don't properly implement the resolver with no net impact on those which do. Short TTLs don't work. RR DNS does work for browsers as clients, failover occurs in seconds not minutes or hours. – symcbean Sep 23 '14 at 15:55
  • @JdeBP one could argue that the link you provide is slightly biased :-} in saying that fault tolerance is achieved by "having service clients that attempt to connect to each address in turn. Such clients are not hard to write" - yes, Google could do that in Chrome, but everybody else? – raudi Mar 03 '15 at 08:53
  • 4
    DDoS attacks are so common now that entire data centres can be brought offline (it happened to Linode London and their other datacentres in December 2015). So using the same provider, in the same data centre, is not recommended. Therefore multiple data centres with different providers would be a good strategy, which brings us back to DNS failover unless a better alternative exists. – Laurence Cope Feb 15 '16 at 15:18
  • A few weeks ago, half the state of Georgia went dark for almost eight hours due to some kind of failure in their power system. Guess where our data center is located (hint: it sure wasn't New York). Guess how much good it would have done to have a "faster" solution of backup servers on the same LAN (hint: not much). Bottom line, for some purposes, even a "slow" failover is better than none at all. For us, that was eight hours of lost revenue. Guess how happy the Boss was about this (hint: not happy at all). – UncaAlby Jul 01 '16 at 23:18
  • 2
    Isn't that why failover exists - because you need to keep your site live when a device is down or faulty? What good is your failover when it's on the same network, sharing the same devices, e.g. routers? – user2128576 Sep 20 '16 at 18:53
49

DNS failover definitely works great. I have been using it for many years to manually shift traffic between datacenters, or automatically when monitoring systems detected outages, connectivity issues, or overloaded servers. When you see the speed at which it works, and the volumes of real world traffic that can be shifted with ease - you'll never look back. I use Zabbix for monitoring all of my systems, and the visual graphs that show what happens during a DNS failover situation put all my doubts to an end. There may be a few ISPs out there that ignore TTLs, and there are some users still out there with old browsers - but when you are looking at traffic from millions of page views a day across 2 datacenter locations and you do a DNS traffic shift - the residual traffic coming in that ignores TTLs is laughable. DNS failover is a solid technique.

DNS was not designed for failover - but it was designed with TTLs, which work amazingly well for failover needs when combined with a solid monitoring system. TTLs can be set very short. I have effectively used TTLs of 5 seconds in production for lightning-fast DNS failover based solutions. You have to have DNS servers capable of handling the extra load - and named won't cut it. However, PowerDNS fits the bill when backed by a replicated MySQL database on redundant name servers. You also need a solid distributed monitoring system that you can trust for the automated failover integration. Zabbix works for me - I can verify outages from multiple distributed Zabbix systems almost instantly, update the MySQL records used by PowerDNS on the fly, and provide nearly instant failover during outages and traffic spikes.
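As an illustration of the "update the MySQL records used by PowerDNS" step, a rough sketch might look like the following. It assumes the stock PowerDNS generic-MySQL schema (a records table with a disabled column) and the pymysql client; the connection details, hostname and address are invented.

```python
# Hedged sketch: mark a dead server's A record as disabled in the PowerDNS
# generic-MySQL backend so it drops out of the answers PowerDNS serves.
# Credentials, names and addresses are placeholders.
import pymysql

DEAD_IP = "192.0.2.10"

conn = pymysql.connect(host="127.0.0.1", user="pdns", password="secret", database="pdns")
try:
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE records SET disabled = 1 "
            "WHERE name = %s AND type = 'A' AND content = %s",
            ("www.example.com", DEAD_IP),
        )
    conn.commit()   # MySQL replication then carries the change to the other name servers
finally:
    conn.close()
```

The monitoring system (Zabbix in the answer above) would run something like this as its failover action, and the reverse (disabled = 0) once the server is healthy again.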

But hey - I built a company that provides DNS failover services after years of making it work for large companies. So take my opinion with a grain of salt. If you want to see some Zabbix traffic graphs of high volume sites during an outage - to see for yourself exactly how well DNS failover works - email me; I'm more than happy to share.

Michael Hampton
Scott McDonald
  • Cian's answer http://serverfault.com/a/60562/87017 directly contradicts yours... so who is right? – Pacerier May 14 '14 at 05:49
  • 1
    It's my experience that short TTLs DO NOT WORK across the internet. You might be running DNS servers that respect the RFCs - but there are a lot of servers out there which don't. Please don't assume this is an argument against Round Robin DNS - see also vmiazzo's answer below - I've run busy sites using RR DNS and tested it - it works. The only problems I encountered were with some Java based clients (not browsers) which didn't even try to reconnect on failure, let alone cycle the list of hosts on an RST. – symcbean Sep 23 '14 at 15:50
  • 14
    I bet the people who say monitored DNS failover is great and the people who say it sucks are having similar experiences, but with different expectations. DNS failover is NOT seamless, but it DOES prevent significant downtime. If you need completely seamless access (never lose a single request, even during server failure), you probably need a much more sophisticated (and expensive) architecture. That's not a requirement for many applications. – Tom Wilson Aug 18 '15 at 21:03
33

The issue with DNS failover is that it is, in many cases, unreliable. Some ISPs will ignore your TTLs, it doesn't happen immediately even if they do respect your TTLs, and when your site comes back up, it can lead to some weirdness with sessions when a user's DNS cache times out, and they end up heading over to the other server.

Unfortunately, it is pretty much the only option, unless you're large enough to do your own (external) routing.

Cian
19

The prevalent opinion is that with DNS RR, when an IP goes down, some clients will continue to use the broken IP for minutes. This was stated in some of the previous answers to the question, and it is also written on Wikipedia.

Anyway,

http://crypto.stanford.edu/dns/dns-rebinding.pdf explains that this is not true for most current browsers. They will try the next IP within seconds.

http://www.tenereillo.com/GSLBPageOfShame.htm makes an even stronger claim:

The use of multiple A records is not a trick of the trade, or a feature conceived by load balancing equipment vendors. The DNS protocol was designed with support for multiple A records for this very reason. Applications such as browsers and proxies and mail servers make use of that part of the DNS protocol.
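You can check this behaviour yourself: a short dnspython query shows every A record in the set, together with the TTL, and a well-behaved client is free to try each address in turn. The hostname is a placeholder.

```python
# Hedged sketch: show all A records (and the TTL) published for a round-robin name.
import dns.resolver

answer = dns.resolver.resolve("www.example.com", "A")   # placeholder hostname
print("TTL:", answer.rrset.ttl)
for rdata in answer:
    print("candidate address:", rdata.address)
```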

Maybe some expert can comment and give a clearer explanation of why DNS RR is not good for high availability.

Thanks,

Valentino

PS: sorry for the broken link but, as a new user, I cannot post more than one

dtoubelis
Valentino Miazzo
  • 1
    Multiple A records are designed in, but for load balancing, rather than for fail over. Clients will cache the results, and continue using the full pool (including the broken IP) for a few minutes after you change the record. – Cian Sep 29 '09 at 10:10
  • 7
    So, is what is written in http://crypto.stanford.edu/dns/dns-rebinding.pdf chapter 3.1 false? – Valentino Miazzo Sep 29 '09 at 14:08
  • 2
    Moved my subquestion here http://serverfault.com/questions/69870/multiple-data-centers-and-http-traffic-dns-round-robin-is-the-only-way-to-assure – Valentino Miazzo Sep 30 '09 at 08:45
13

I ran DNS RR failover for a moderately trafficked but business-critical production website (across two geographies) for many years.

It works fine, but there are at least three subtleties I learned the hard way.

1) Browsers will fail over from a non-working IP to a working IP after 30 seconds (last time I checked) if both are considered active in whatever cached DNS is available to your clients. This is basically a good thing.

But having "half" your users wait 30 seconds is unacceptable, so you will probably want to update your TTL records to be a few minutes, not a few days or weeks so that in case of an outage, you can rapidly remove the down server from your DNS. Others have alluded to this in their responses.

2) If one of the nameservers serving your round-robin domain goes down (or one of your two geographies goes down entirely), and it happens to be the primary, I vaguely recall you can run into other issues trying to remove that downed nameserver from DNS if you have not also set your SOA TTL/expiry for the nameserver to a sufficiently low value. I could have the technical details wrong here, but the point is that there is more than one TTL setting you need to get right to really defend against single points of failure.

3) If you publish web APIs, REST services, etc., those are typically not called by browsers, and in my opinion that is where DNS failover starts to show real flaws. This may be why some say, as you put it, "it is not recommended". Here's why I say that. First, the apps that consume those URLs typically are not browsers, so they lack the 30-second failover properties/logic of common browsers. Second, whether the second DNS entry is tried, or DNS is even re-polled, depends very much on the low-level details of the networking libraries in the programming languages used by these API/REST clients, plus exactly how they are called by the API/REST client app. (Under the covers, does the library call get_addr, and when? If sockets hang or close, does the app re-open new sockets? Is there some sort of timeout logic? etc. etc.)
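To make point 3 concrete, here is a hedged sketch of the retry logic a non-browser client has to supply for itself: resolve the name, then walk through every returned address before giving up. The host and path are placeholders, and a production client would also want per-address timeouts, backoff and logging.

```python
# Hedged sketch: client-side failover across all A records of a name,
# the behaviour browsers give you for free but most API clients do not.
import http.client
import socket

def fetch_with_failover(host, path="/", timeout=5):
    last_error = None
    # One getaddrinfo entry per A record in the round-robin set (IPv4 only here)
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            host, 80, family=socket.AF_INET, type=socket.SOCK_STREAM):
        ip = sockaddr[0]
        try:
            conn = http.client.HTTPConnection(ip, timeout=timeout)
            conn.request("GET", path, headers={"Host": host})
            return conn.getresponse().read()
        except (OSError, http.client.HTTPException) as exc:
            last_error = exc          # dead address: fall through to the next one
    raise last_error

body = fetch_with_failover("api.example.com")   # placeholder host
```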

It's cheap, well-tested, and "mostly works". So as with most things, your mileage may vary.

GregW
  • A library that doesn't retry on the other RRs for an address is broken. Point the developers at the manual pages for getaddrinfo() etc. – Jasen Apr 19 '18 at 01:58
  • Also important is that browsers like Chrome and Firefox do not honour TTLs, but enforce a minimum of roughly 1 minute even if you specify only a few seconds ([Firefox reference](https://bugzilla.mozilla.org/show_bug.cgi?id=223861), [Chrome reference](https://unix.stackexchange.com/questions/363498/why-does-chromium-not-cache-dns-for-more-than-a-minute) and [another](https://bugs.chromium.org/p/chromium/issues/detail?id=164026)). I think this is bad because caching for longer than the TTL is against the spec. – nh2 Apr 05 '19 at 15:44
9

There are a bunch of people that use us (Dyn) for failover. It's the same reason sites can either put up a status page when they have downtime (think of things like Twitter's Fail Whale)...or simply just reroute the traffic based on the TTLs. Some people may think that DNS failover is ghetto...but we seriously designed our network with failover from the beginning...so that it would work as well as hardware. I'm not sure how DME does it, but we have 3 of our 17 anycasted PoPs - the ones closest to your server - monitor it from their locations. When two of the three detect that it's down, we simply reroute the traffic to the other IP. The only downtime is for those users who had already requested the old answer, for the remainder of that TTL interval.
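As a toy illustration of the "two of the three must agree" rule (not Dyn's actual implementation), the decision might look roughly like this; the vantage point names and target address are placeholders, and real checks would of course run from genuinely separate locations.

```python
# Hedged sketch: only declare the server down when at least 2 of 3 monitoring
# vantage points fail to reach it, to avoid failing over on a local blip.
import socket

TARGET = ("192.0.2.10", 80)      # placeholder server address

def probe_from(vantage_point):
    """Stand-in probe; in a real system each check runs on a remote monitoring node."""
    try:
        with socket.create_connection(TARGET, timeout=5):
            return True
    except OSError:
        return False

votes = [probe_from(vp) for vp in ("pop-lon", "pop-nyc", "pop-sfo")]
if votes.count(False) >= 2:       # quorum says it's really down
    print("reroute traffic to the other IP")
```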

Some people like to use both servers at once...and in that case you can do something like round robin load balancing...or geo based load balancing. For those that actually care about performance...our real time traffic manager will monitor each server...and if one is slower...reroute the traffic to the fastest one based on which IPs you link to your hostnames. Again...this works based on the values you put in place in our UI/API/Portal.

I guess my point is...we engineered DNS failover on purpose. While DNS wasn't made for failover when it was originally created...our DNS network was designed to implement it from the get go. It can usually be just as effective as hardware...without the depreciation or the cost of the hardware. Hope that doesn't make me sound lame for plugging Dyn...there are plenty of other companies that do it...I'm just speaking from our team's perspective. Hope this helps...

Ryan
  • What do you mean by "can be just as effective as hardware"? What kind of hardware does DNS routing? – mpen Mar 28 '14 at 21:39
  • @Ryan, What do you mean when you say "ghetto"? – Pacerier May 14 '14 at 06:24
  • Urban Dictionary gives no definitions of that word with a positive connotation, so I would assume "a beggar's solution" might be a suitable translation. – Jasen Apr 19 '18 at 02:03
5

Another option would be to set up name server 1 in location A and name server 2 in location B, but set each one up so that all A records on NS1 point traffic to IPs in location A, and all A records on NS2 point to IPs in location B. Then set your TTLs to a very low number, and make sure your domain record at the registrar has been set up with NS1 and NS2. That way, it will automatically load balance, and fail over should one server or one link to a location go down.

I've used this approach in a slightly different way. I have one location with two ISPs and use this method to direct traffic over each link. Now, it may be a bit more maintenance than you're willing to do... but I was able to create a simple piece of software that automatically pulls NS1 records, updates A record IP addresses for select zones, and pushes those zones to NS2.
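A rough sketch of that sync job, assuming dnspython, that NS1 allows zone transfers, and that NS2 accepts dynamic updates (in practice both would be restricted and TSIG-signed); the zone, server addresses and the A-to-B address mapping are invented.

```python
# Hedged sketch: pull the zone from NS1, translate location-A addresses to
# their location-B equivalents, and push the result to NS2.
import dns.query
import dns.update
import dns.zone

ZONE = "example.com"
NS1 = "192.0.2.53"                               # serves location A addresses
NS2 = "198.51.100.53"                            # should serve location B addresses
A_TO_B = {"192.0.2.10": "198.51.100.10"}         # per-server address translation

zone = dns.zone.from_xfr(dns.query.xfr(NS1, ZONE))   # AXFR from NS1

update = dns.update.Update(ZONE)
for name, ttl, rdata in zone.iterate_rdatas("A"):
    if rdata.address in A_TO_B:
        # publish the location-B address for the same hostname on NS2
        update.replace(name, ttl, "A", A_TO_B[rdata.address])
dns.query.tcp(update, NS2, timeout=10)
```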

Amal
  • Don't the nameservers take too long to propagate? If you change a DNS record with a low TTL it will work almost instantly, but when you change nameservers it can take 24 hours or more to propagate, hence I don't see how this could be a failover solution. – Marco Demaio Jan 27 '14 at 16:59
  • Interesting idea, but the catch is still "setting TTLs to low numbers". Instead of monitoring the endpoints and updating the records, it is a more passive way to do the same thing, which may lower the latency of record updates. However, the limitation is still the same, namely DNS record caching. – Curious Sam Aug 28 '20 at 02:31
4

The alternative is a BGP based failover system. It's not simple to set up, but it should be bulletproof. Set up site A in one location and site B in a second, all with local IP addresses, then get a class C or other block of IPs that are portable, and set up redirection from the portable IPs to the local IPs.
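One hedged way to automate the announce/withdraw side of this is ExaBGP's process API, where a health-check script prints route commands that ExaBGP then originates to its peers. The sketch below assumes ExaBGP is configured to run it as a process; the portable prefix, the port being checked and the timings are placeholders.

```python
# Hedged sketch of an ExaBGP health-check process: announce the portable
# prefix while the local site is healthy, withdraw it when the check fails
# so traffic converges on the other site.
import socket
import sys
import time

SERVICE_PREFIX = "203.0.113.0/24"      # placeholder portable block routed to this site

def site_is_healthy():
    try:
        with socket.create_connection(("127.0.0.1", 80), timeout=3):
            return True
    except OSError:
        return False

announced = False
while True:
    healthy = site_is_healthy()
    if healthy and not announced:
        sys.stdout.write(f"announce route {SERVICE_PREFIX} next-hop self\n")
        announced = True
    elif not healthy and announced:
        sys.stdout.write(f"withdraw route {SERVICE_PREFIX} next-hop self\n")
        announced = False
    sys.stdout.flush()                 # ExaBGP reads these commands from stdout
    time.sleep(10)
```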

There are pitfalls, but it's better than DNS based solutions if you need that level of control.

Kyle
  • 4
    BGP based solutions aren't available to everyone though. And are far easier to break in particularly horrible ways than DNS is. Swings and roundabouts, I suppose. – Cian Aug 31 '09 at 03:48
3

One option for multi data-center failover is to train your users. We advertise to our customers that we provide multiple servers in multiple cities, and in our signup emails and such we include links directly to each "server", so that users know that if one server is down they can use the link to the other server.

This totally bypasses the issue of DNS failover by just maintaining multiple domain names. Users who go to www.company.com or company.com and log in get directed to server1.company.com or server2.company.com, and have the choice of bookmarking either of those if they notice they get better performance using one or the other. If one goes down, users are trained to go to the other server.

thelsdj
3

All of these answers have some validity to them, but I think it really depends on what you are doing and what your budget is. Here at CloudfloorDNS, a large percentage of our business is DNS, and we offer not only fast DNS but also low TTL options and DNS failover. We wouldn't be in business if this didn't work, and work well.

If you are a multinational corporation with an unlimited budget for uptime, then yes, hardware GSLB load balancers and tier 1 datacenters are great, but your DNS still needs to be fast and rock solid. As many of you know, DNS is a critical aspect of any infrastructure; other than the domain name itself, it's the lowest level service that every other part of your online presence rides on. Starting with a solid domain registrar, DNS is just as critical as not letting your domain expire. If DNS goes down, the whole online aspect of your organization is also down!

When using DNS failover, the other critical aspects are server monitoring (always check from multiple geo locations, and always have multiple checks - at least 3 - agree, to avoid false positives) and managing the DNS records properly when a failure is detected. Low TTLs and some of the failover options can make this a seamless process, and it beats the heck out of waking up to a pager in the middle of the night if you are a sys admin.

Overall, DNS failover really does work and can be very affordable. In most cases, from us or from most managed DNS providers, you'll get anycast DNS along with server monitoring and failover for a fraction of the cost of the hardware options.

So the real answer is yes, it works, but is it for everyone and every budget? Maybe not, but until you try it and do the tests for yourself, it's tough to ignore if you are a small to medium business with a limited IT budget that wants the best uptime possible.

2

I've been using DNS based site-balancing and failover for the last ten years, and there are some issues, but those can be mitigated. BGP, while superior in some ways, is not a 100% solution either, with increased complexity, probable additional hardware costs, convergence times, etc.

I've found that combining local (LAN based) load balancing, GSLB, and cloud based zone hosting works quite well to close up some of the issues normally associated with DNS load balancing.

Greeblesnort
1

"and why you're taking your chances using it for most production environments (though it's better than nothing)."

Actually, "better than nothing" is better expressed as "the only option" when the presences are geographically diverse. Hardware load balancers are great for a single point of presence, but a single point of presence is also a single point of failure.

There are plenty of big-dollar sites that use DNS based traffic manipulation to good effect. They are the type of sites who know on an hourly basis if sales are off. It would seem they would be the last to settle for "taking your chances using it for most production environments". Indeed, they have reviewed their options carefully, selected the technology, and pay well for it. If they thought something was better they would leave in a heartbeat. The fact that they still choose to stay speaks volumes about real world usage.

DNS based failover does suffer from a certain amount of latency. There is no way around it. But it is still the only viable approach to failover management in a multi-PoP scenario. As the only option, it is far more than "better than nothing".

spenser
1

Today there are good global load balancers that work using that technique, and they work pretty well. Check, for example, Azure Traffic Manager: https://azure.microsoft.com/en-us/services/traffic-manager/

Ricardo Polo Jaramillo
0

I believe the idea of failover was originally intended for clustering, but because each node could also run solo, it made it possible to operate in a one-to-one availability setup as well.

Seth
0

If you want to learn more, read the application notes at

http://edgedirector.com

They cover: failover, global load balancing, and a host of related matters.

If your backend architecture permits it, the better option is global load balancing with the failover option. That way, all of the servers and bandwidth are in play as much as possible. Rather than inserting an additional available server on failure, this setup withdraws a failed server from service until it is recovered.
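A minimal sketch of that withdraw-on-failure model: the published address set is simply the healthy subset of the pool, so every working server keeps taking traffic. The pool addresses are placeholders, and the "publish" step stands in for a dynamic update or provider API call.

```python
# Hedged sketch: publish A records only for pool members that pass a health check.
import socket

POOL = ["192.0.2.10", "198.51.100.10", "203.0.113.10"]   # placeholder pool

def healthy(ip):
    try:
        with socket.create_connection((ip, 80), timeout=5):
            return True
    except OSError:
        return False

active = [ip for ip in POOL if healthy(ip)]
print("publish A records for:", active)    # failed servers simply drop out of rotation
```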

The short answer: it works, but you have to understand the limitations.

-1

I would recommend that you either (a) select a datacenter that is multihomed on its own AS, or (b) host your name servers in a public cloud. It is REALLY unlikely that EC2, or HP, or IBM will go down. Just a thought. While DNS failover works as a fix, in this case it is simply a fix for a poor design in the network foundation.

Another option, depending on your environment, is to use a combination of IP SLA, PBR and FHRP to accomplish your redundancy needs.

Matt Bram
    "It is REALLY unlikely that EC2, or HP, or IBM will go down" - This "unlikely" thing has bitten us many times. _Everything_ fails. – talonx Aug 04 '13 at 06:26
  • 3
    If it was so "unlikely", people would not come here asking for failover systems. – Marco Demaio Jan 27 '14 at 16:56