10

Is there a way for the DNS protocol to naturally hold a backup A record, like backup name server or mail server records? When searching for this I only saw results about backup nameservers (NS records).

If there isn't a way for DNS to support backup A records, what is the best way to simulate the results so that users will be directed to a working server in case the primary server is not responding?

kasperd
kjones1876

6 Answers

12

Yes... sort of.

There are two things you can do here: If you put multiple A records in your DNS server for a given name, then they'll all be served to clients and those clients will pick one from the set to connect to, meaning that traffic will be "fairly" evenly distributed amongst all sites simultaneously. This isn't really what you seem to be describing, but it's a common situation (although I don't trust it, for a variety of reasons).
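For illustration, that looks something like this in a zone file (the name and the RFC 5737 test addresses are placeholders, not a recommendation):

```
; Two A records for one name: resolvers hand out both,
; and each client picks one of them to connect to.
www.example.com.  300  IN  A  192.0.2.10
www.example.com.  300  IN  A  192.0.2.20
```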

The other option is that you only put one A record in your DNS server, and the DNS server (or something ancillary to it, like a monitoring script) keeps an eye on your site's main address; if that fails, then the DNS server's A record gets changed to your other site. This means that only one site will be getting traffic at a time.
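A minimal sketch of such a monitoring script, assuming the DNS server accepts RFC 2136 dynamic updates (every name, address, and key path here is a placeholder, not a tested configuration):

```sh
#!/bin/sh
# Hypothetical failover monitor; run periodically from cron.
PRIMARY=192.0.2.10
BACKUP=192.0.2.20
NAME=www.example.com.

# Probe the primary's HTTP service, giving up after 5 seconds.
if ! curl --silent --max-time 5 "http://$PRIMARY/" >/dev/null; then
    # Repoint the A record at the backup via an RFC 2136 dynamic update.
    nsupdate -k /etc/failover.key <<EOF
server ns1.example.com
update delete $NAME A
update add $NAME 60 A $BACKUP
send
EOF
fi
```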

The downside to this second strategy is DNS caching. Anyone who got the old site address will be SOL until their DNS cache entries containing the old address get purged. This means that you have to keep your TTLs low (increasing the load on your DNS infrastructure, although that's rarely a practical problem), but there's still the problem of "rogue" DNS caches, which don't honour TTLs. These are a massive pain for anyone who ever has to change DNS entries, but they're a million times worse for anyone who needs to change DNS entries "often" (hopefully your site isn't going down several times a day, but still...). Basically, anyone behind one of these misbehaving DNS caches will see your site as being "down" for an extremely extended period of time, and just try explaining to them that it's their DNS cache that's at fault... Eugh.

In short, I wouldn't do it for a site, because there are better ways to mitigate whatever risk you're thinking of, but you'll need to describe that risk if you want suggestions on how to mitigate it.

womble
  • The risk is that if the main server goes down (for whatever reason) I want my users to be forwarded to a backup server. In the year my server has been running it's gone down once (catastrophic RAID failure). I had backups so the data was safe, but my website was down for 12 hours. I thought this would have been a common problem with a "proper" fix; I thought companies would want a backup plan. – kjones1876 Jul 21 '11 at 11:05
  • 9
    You don't want DNS failover, you want more reliable hardware and possibly a hot standby server. – womble Jul 21 '11 at 11:20
  • The "rogue DNS caches" are an old wives' tale. No actual DNS server software exhibits the behaviour of ignoring TTLs. The places where DNS data are cached in such a way that causes problems are _applications_, such as [the infamous lookup caching problem of Netscape Navigator](http://tenereillo.com./BrowserDNSCache.htm) for example. – JdeBP Jul 25 '11 at 12:34
  • @JdeBP: In the words of Kevin Costner: "rogue DNS caches are not a myth... I've seen them!" I've done the digs and seen the insane and mind-bending results. Most popular with bandwidth-constrained and latency-afflicted services back in the days when that sort of thing was common (dialup ISPs whose upstream link was ISDN, for example), they're now mostly used by people who heard about them being a good idea a long time ago and just haven't changed their mind since (not that they were a particularly good idea then... but yeah). – womble Jul 25 '11 at 22:21
6

Everyone seems to think that you are talking about WWW servers, even though you explicitly wrote

like backup name server or mail server records

The oft-overlooked truth is that HTTP service is the exception, not the norm, when it comes to this. In the normal case, yes, there is a mechanism for publishing information to clients via the DNS so that they properly fall back from primary servers to backup servers. That mechanism is SRV resource records, as used by service clients for many protocols apart from HTTP. See RFC 2782.

With SRV resource records, clients are told a list of servers, with priorities and weights, and are required to try servers in order of priority, picking amongst servers with equal priorities according to weight, choosing higher-weighted servers more often than lower-weighted ones. So with SRV resource records, server administrators can tell clients what the fallback servers are, and how to distribute load across a set of equal-priority servers.
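As a sketch, a set of SRV records might look like this in a zone file (the names are placeholders; the four fields after SRV are priority, weight, port, and target):

```
; Clients must exhaust priority 10 before trying priority 20;
; the two priority-20 servers split load 60/40 by weight.
_imap._tcp.example.com.  3600  IN  SRV  10  0   143  big.example.com.
_imap._tcp.example.com.  3600  IN  SRV  20  60  143  backup1.example.com.
_imap._tcp.example.com.  3600  IN  SRV  20  40  143  backup2.example.com.
```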

Now content DNS servers are located by a special type of resource record of their own, NS resource records, which don't have priority and weight information. Equally, SMTP Relay servers are located by their own special type of resource record, MX, which has priority information but no weighting information. So for content DNS servers there's no provision for publishing fallback and load distribution information; and if one is using MX resource records then for SMTP Relay servers there's no provision for publishing load distribution information.
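For comparison, a sketch of the MX case (placeholder names): the preference field gives fallback order, lower values first, but there is nothing corresponding to SRV's weight.

```
example.com.  3600  IN  MX  10  mail1.example.com.
example.com.  3600  IN  MX  20  mail2.example.com.
```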

However, SRV-capable MTSes (message transfer systems) now exist. (The first was Exim, which has been SRV-capable since 2005.) And for other service protocols, unencumbered with the baggage of MX and NS resource records, SRV adoption is far more thorough and widespread. If you have a Microsoft Windows domain, for example, then a whole raft of services are located through SRV lookups in the DNS. That has been the case for more than a decade at this point.
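You can see this for yourself with an SRV lookup against an Active Directory domain (the domain name here is a placeholder, and the answer shown is merely illustrative):

```
$ dig +short _ldap._tcp.dc._msdcs.example.com SRV
0 100 389 dc1.example.com.
```

The four fields of the answer are, again, priority, weight, port, and target.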

The problem is that everyone thinks of HTTP, when nowadays, in 2011, HTTP is by far the exception and not the rule here.

JdeBP
  • While SRV records are great for internal network use where the environment is controlled, they just don't cut it for something like an external server with heterogeneous clients. You don't know that the record will be accessed, since you don't know whether the client supports looking up SRV records. – Michael Lowman Jul 25 '11 at 13:31
  • 1
    Again you are letting HTTP govern your thinking. For many of the clients mentioned above, `SRV` records are the _defined_ way to locate the services. Also note that the question was whether the mechanism exists and what it was. The mechanism exists, and this is the mechanism. It's been in wide use for a decade. – JdeBP Jul 25 '11 at 14:21
  • You're certainly right: SRV is certainly the correct mechanism, and it actually does other things which I thought DNS couldn't do but wished it could. Sadly, no browsers support SRV. Also, the question was HTTP-specific because I said "like backup name-server or mail server", meaning that backup solutions already exist for those. – kjones1876 Jul 31 '11 at 11:58
1

If you're serving dynamic content and it's not practical to simply have two servers serving content simultaneously, then your other option is to have multiple records in your DNS anyway and configure the backup server to throw ICMP port unreachable at clients that try to connect to it; if at any point the main server goes down, you simply remove the port 80 block on the backup and traffic will start coming in.
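A sketch of the toggle, assuming Linux iptables (the same rule given in the comments below):

```sh
# On the backup: reject web traffic so clients fail over quickly
# to the other A record instead of waiting for a timeout.
iptables -I INPUT -p tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable

# When the primary dies: delete the rule so the backup starts serving.
iptables -D INPUT -p tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
```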

The only other (budget) way you're going to be able to do it is to set up a separate machine (or two) to perform NAT on requests; that way, if a web server dies, you can simply remove the NAT rule for it.
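A sketch of that, again assuming iptables (the addresses are placeholders):

```sh
# IP forwarding must be enabled on the NAT box.
sysctl -w net.ipv4.ip_forward=1

# Send incoming web traffic to whichever backend is currently live;
# if 192.0.2.10 dies, delete this rule and add one for the backup.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.0.2.10:80
```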

Olipro
  • I originally tried your first idea; I just turned Apache off on the main server, but the browser just kept trying to connect anyway. But would turning Apache off cause an ICMP error? If not, how do I make the server throw an ICMP error? – kjones1876 Jul 21 '11 at 12:15
  • No, the connection will just time out; you should get iptables to reject it properly, like so: – Olipro Jul 21 '11 at 12:22
  • iptables -I INPUT -p tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable – Olipro Jul 21 '11 at 12:23
  • I tried that and people just couldn't connect... I even unplugged the server I was testing on. – kjones1876 Jul 21 '11 at 15:45
  • The questioner wasn't specifically talking about only WWW servers. Indeed, xe mentioned mail and name servers explicitly. – JdeBP Jul 25 '11 at 12:37
1

This is a fairly old question, but two significant technologies have not been brought up in the answers: Dynamic DNS and CDNs.

Dynamic DNS is set up so that DNS records can be modified in near real time, so a monitoring client can trigger changes to the public DNS A records as service availability dictates. (Of course, your DNS hosting service must support Dynamic DNS.)
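What the monitoring client's update looks like depends entirely on the provider; as a sketch only, with a made-up endpoint and token rather than any real vendor's API:

```sh
# Hypothetical REST call: repoint www's A record at the backup address.
curl -X PUT "https://api.dns-provider.example/zones/example.com/records/www" \
     -H "Authorization: Bearer $API_TOKEN" \
     -d '{"type": "A", "content": "192.0.2.20", "ttl": 60}'
```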

CDNs can also be used to deliver DNS, as for example Cloudflare does (which launched in 2010, I believe).

0

There are no backup A records, but there can be several A records which are given out in random order.

Most browsers are capable of trying another server if one fails. (See: Web Resilience with Round Robin DNS)

You can have one cluster IP address backed by several servers using VRRP or CARP; the backup server takes over the address when the primary server fails.
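A minimal sketch of the VRRP side using keepalived (the interface name and address are placeholders):

```
# /etc/keepalived/keepalived.conf on the primary; the backup server uses
# state BACKUP and a lower priority, and claims the address on failover.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.100/24
    }
}
```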

jkj
0

Yes, but you have to do that yourself ;-)

Could you give more information on why you want a "backup A record", and how and under what circumstances you'd like to fail over to the backup?

Also, it would be helpful to know the relationship from a network perspective between the primary and backup hosts.

dmourati