10

I currently use DNS round robin for load balancing, and it works great. The records look like this (with a TTL of 120 seconds):

;; ANSWER SECTION:
orion.2x.to.        116 IN  A   80.237.201.41
orion.2x.to.        116 IN  A   87.230.54.12
orion.2x.to.        116 IN  A   87.230.100.10
orion.2x.to.        116 IN  A   87.230.51.65

I learned that not every ISP / device treats such a response the same way. Some DNS servers rotate the addresses randomly or always cycle through them; some just propagate the first entry; others try to determine which server is best (regionally closest) by looking at the IP address.

However, if the user base is big enough (spread over multiple ISPs, etc.), it balances pretty well. The discrepancy between the most and least loaded servers hardly ever exceeds 15%.

However, I now have a problem: I am introducing more servers into the system, and they don't all have the same capacity.

I currently only have 1 Gbps servers, but I want to work with 100 Mbps and 10 Gbps servers too.

What I want is to introduce a 10 Gbps server with a weight of 100, a 1 Gbps server with a weight of 10, and a 100 Mbps server with a weight of 1.

I previously added servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbps server 100 times to DNS is a bit ridiculous.

So I thought about using the TTL.

If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers reportedly raise anything lower to 120), I think something like this should occur in an ideal scenario:

First 120 seconds:
50% of requests get server A -> keep it for 240 seconds
50% of requests get server B -> keep it for 120 seconds

Second 120 seconds:
50% of requests still have server A cached -> keep it for another 120 seconds
25% of requests get server A -> keep it for 240 seconds
25% of requests get server B -> keep it for 120 seconds

Third 120 seconds:
25% will get server A (from the 50% whose server A just expired) -> cache 240 sec
25% will get server B (from the 50% whose server A just expired) -> cache 120 sec
25% will still have server A cached for another 120 seconds
12.5% will get server B (from the 25% whose server B just expired) -> cache 120 sec
12.5% will get server A (from the 25% whose server B just expired) -> cache 240 sec

Fourth 120 seconds:
25% will have server A cached -> cache for another 120 sec
12.5% will get server A (from the 25% whose server B just expired) -> cache 240 sec
12.5% will get server B (from the 25% whose server B just expired) -> cache 120 sec
12.5% will get server A (from the 25% whose server A just expired) -> cache 240 sec
12.5% will get server B (from the 25% whose server A just expired) -> cache 120 sec
6.25% will get server A (from the 12.5% whose server B just expired) -> cache 240 sec
6.25% will get server B (from the 12.5% whose server B just expired) -> cache 120 sec
12.5% will still have server A cached -> cache another 120 sec
... I think I lost something at this point, but I think you get the idea...

As you can see, this gets pretty complicated to predict, and it certainly won't play out exactly like this in practice, but it should definitely have an effect on the distribution!
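The cascade above can be sanity-checked with a toy simulation (my own sketch, not part of the question; it assumes every resolver honors TTLs exactly, re-resolves the moment its cache expires, picks A or B with equal probability on each fresh lookup, and generates load in proportion to how long it points at a server):

```python
import random

# Toy model: a resolver repeatedly looks up the name, gets A or B with
# equal probability, and keeps that answer for the record's TTL.
# Load is taken to be proportional to time spent on each server.
random.seed(1)

TTL = {"A": 240, "B": 120}  # seconds; A advertises the longer TTL
SIM_SECONDS = 1_000_000

time_on = {"A": 0, "B": 0}
t = 0
while t < SIM_SECONDS:
    server = random.choice(["A", "B"])  # fresh 50/50 lookup on expiry
    time_on[server] += TTL[server]      # answer stays cached for its TTL
    t += TTL[server]

share_a = time_on["A"] / (time_on["A"] + time_on["B"])
print(f"share of time on server A: {share_a:.3f}")  # ~0.667
```

Under these idealised assumptions the split converges to the ratio of the TTLs (240:120, i.e. 2:1), so server A ends up with about two thirds of the load; real resolvers that clamp or ignore TTLs will blur this considerably.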

I know that weighted round robin exists and can be handled by the authoritative DNS server: it cycles through the DNS records when responding and returns each record with a probability corresponding to its weight. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly, that's okay; it just needs to go in the right direction.

I think using the TTL field could be a more elegant and easier solution. It doesn't require a DNS server that controls the weighting dynamically, which saves resources, and that is, in my opinion, the whole point of DNS load balancing versus hardware load balancers.

My question now is: Are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

Edit:

The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle, so I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it only widens the bottleneck rather than eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.

ctype.h
The Shurrican
  • Be prepared to be hit on the head with a copy of RFC 2181 § 5.2 by a wide spectrum of respondents. – JdeBP Feb 07 '12 at 17:01
  • Well, I realise that RR was not designed for load balancing, but it works great... so... I am also not aware of an alternative. Of course there are some, but they are either not possible for me to put into place, far too expensive, or too complicated. – The Shurrican Feb 07 '12 at 23:00
  • @JdeBP yes, good spot - the TTLs in an RRset MUST be the same. – Alnitak Feb 07 '12 at 23:02

5 Answers

5

First off, I completely agree with @Alnitak that DNS isn't designed for this sort of thing, and best practice is to not (ab)use DNS as a poor man's load balancer.

My question now is... are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

To answer on the premise of the question, the approach used to perform basic weighted round robin using DNS is to:

  • Adjust the relative occurrence of records in authoritative DNS responses. I.e. if Server A is to have 1/3 of traffic and Server B is to have 2/3, then 1/3 of authoritative DNS responses to DNS proxies would contain only A's IP, and 2/3 of responses only B's IP. (If 2 or more servers share the same 'weight', then they can be bundled up into one response.)
  • Keep a low DNS TTL so that un-balanced load is evened out relatively quickly. Because the downstream DNS proxies have very un-even numbers of clients behind them, you'd want to re-shuffle records frequently.

Amazon's Route 53 DNS service uses this method.
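As a minimal illustration of the first bullet (my own sketch with invented IPs, not Route 53's actual implementation), an authoritative responder can simply pick one record per query with probability proportional to its weight:

```python
import random
from collections import Counter

# Hypothetical servers carrying the question's 100 / 10 / 1 weighting
SERVERS = [("192.0.2.10", 100), ("192.0.2.20", 10), ("192.0.2.30", 1)]

def pick_answer(servers=SERVERS):
    """Return the single IP to put in this DNS response."""
    ips, weights = zip(*servers)
    return random.choices(ips, weights=weights, k=1)[0]

random.seed(7)
counts = Counter(pick_answer() for _ in range(111_000))
print(counts)  # roughly 100_000 : 10_000 : 1_000
```

Each downstream cache then holds one IP for the (short) TTL, so over many resolvers the traffic split approaches the weight ratio.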

The amount of Bandwidth (not requests) exceeds what one single server with ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers.

Right. So as I understand it, you have some sort of 'cheap' downloads / video distribution / large-file download service where the total service bitrate exceeds 1 Gbps.

Without knowing the exact specifics of your service and your server layout, it's hard to be precise. But a common solution in this case is:

  • DNS round robin to two or more TCP/IP or HTTP level load balancer instances.
  • Each load balancer instance being highly available (2 identical load balancers cooperating on keeping one IP address always on).
  • Each load balancer instance using weighted round robin or weighted random connection handling to the backend servers.

This kind of setup can be built with open-source software, or with purpose-built appliances from many vendors. The load balancing tag here is a great starting point, or you could hire sysadmins who have done this before to consult for you...

4

My question now is... are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

Yes, the best practice is: don't do it!

Please repeat after me

  • DNS is not for load balancing
  • DNS does not provide resiliency
  • DNS does not provide fail-over facilities

DNS is for mapping a name to one or more IP addresses. Any subsequent balancing you get is through luck, not design.

Alnitak
  • 2
    `more IP addresses` ... how is that not balancing? Furthermore, this is why I gave my question an appropriate introduction; if I hadn't done that, I would have appreciated your post as a COMMENT, but like this I have to downvote it. Maybe it is not by design, but it works great and provides great advantages compared to all the alternatives, and that is what websites like Google, Facebook, Amazon, etc. think too, since they use it. However, comment noted. I updated my question with more information about the scenario and kindly ask you to suggest an alternative balancing solution @Alnitak – The Shurrican Feb 07 '12 at 16:47
  • 2
    Balancing in this fashion offers no guarantee of completeness, since so many client-side issues arise outside of your control. This is doubly so when you want to 'weight', because fundamentally you can't guarantee round robin in the first place. DNS is an advisory service only; clients don't need to follow it to the letter. I think that is the point @Alnitak wanted to make. – Matthew Ife Feb 07 '12 at 17:08
  • I understand that perfectly. Quote from my question: I learned that not every ISP / device treats such a response the same way. For example some DNS servers rotate the addresses randomly or always cycle them through. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancies from highest to lowest loaded server hardly ever exceed 15%. – The Shurrican Feb 07 '12 at 17:51
  • @JoeHopfgartner the only foolproof way of providing resiliency, redundancy and balancing is at the IP layer - i.e. BGP routing, and layer 4 load balancers. I didn't say it in this answer because I've already said it dozens of times in other answers. – Alnitak Feb 07 '12 at 18:51
  • Is redundancy important to your solution? I.e. if a server goes down, is it appropriately handled? Because if it is, you're opening a can of worms with RR-DNS. – Matthew Ife Feb 07 '12 at 19:00
  • No, redundancy/availability is not a criterion here. If one server goes down, I fix it or remove it from the DNS, and the delay it needs to propagate is absolutely OK. You are right about the IP level @Alnitak; that is what I thought anycasting or similar methods were. However, AFAIK this is a complex solution that I cannot do on my own with usual end-customer servers; AFAIK it needs to be done by a datacenter provider authorized to publish routes for your servers' RIPE ranges... Could somebody please tell me what the problem with RR is? It works perfectly; I have no issues. – The Shurrican Feb 07 '12 at 22:56
  • And now I seriously want to know: IS THERE AN ALTERNATIVE? I mean a valid one that I can really use, that works across multiple datacenters, that has no horribly high hardware costs, and where I don't need my own datacenter or access to routing protocols etc. I am not aware of any. – The Shurrican Feb 07 '12 at 23:03
  • @JoeHopfgartner anycast is normally only used for DNS servers, and makes the same IP address appear to be served from many locations at once. It's not suitable for web traffic. In any event, we need more info about your architecture - I can tell you that DNS _isn't_ the answer, but without more info I can't tell you what _is_! – Alnitak Feb 07 '12 at 23:06
  • @JoeHopfgartner if you're trying to do it on the cheap, but you have 10x variation in server capacity, the only solution that makes sense to me is to have a couple of servers handle all _initial_ requests, and then use HTTP redirection to direct users to the real servers using a weighted RR algorithm. Caveat - the URL visible in the browser bar will change. – Alnitak Feb 07 '12 at 23:10
  • That is a very good method @Alnitak, which I use, but in this case I cannot do it because I have a proxy system. I need one hostname or IP that all users can configure their web browsers to surf to. I will add more detail about the system to the question. – The Shurrican Feb 08 '12 at 01:19
2

Take a look at PowerDNS. It allows you to create a custom pipe backend. I've modified an example load-balancer DNS backend written in Perl to use the Algorithm::ConsistentHash::Ketama module. This lets me set arbitrary weights, like so:

my $ketamahe = Algorithm::ConsistentHash::Ketama->new();

# Configure servers and weights
$ketamahe->add_bucket("192.168.1.2", 50);
$ketamahe->add_bucket("192.168.1.25", 50);

And another one:

# multi-colo hash
my $ketamamc = Algorithm::ConsistentHash::Ketama->new();

# Configure servers and weights
$ketamamc->add_bucket("192.168.1.2", 33);
$ketamamc->add_bucket("192.168.1.25", 33);
$ketamamc->add_bucket("192.168.2.2", 17);
$ketamamc->add_bucket("192.168.2.2", 17);

I've added a CNAME from my desired top-level domain to a subdomain I call gslb, for Global Server Load Balancing. From there, I invoke this custom DNS server and send out A records according to my desired weights.

Works like a champ. The ketama hash has the nice property of minimal disruption to existing configuration as you add servers or adjust weights.

I recommend reading Alternative DNS Servers, by Jan-Piet Mens. He has many good ideas in there as well as example code.

I'd also recommend abandoning the TTL modulation. You are getting pretty far afield already and adding another kludge on top will make troubleshooting and documentation extremely difficult.

dmourati
1

To deal with this sort of setup, you need to look at a real load balancing solution. Read about Linux Virtual Server and HAProxy. You get the additional benefit that failed servers are automatically removed from the pool, and the effects are much more easily understood. Weighting is simply a setting to be tweaked.
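To make the weighting concrete: the question's 100 / 10 / 1 capacities map directly onto per-server weights. A hypothetical HAProxy backend (names, addresses, and port are invented) might look like:

```haproxy
# Forward-proxy backend with weights proportional to link capacity
backend proxies
    balance roundrobin
    server big10g 203.0.113.1:3128 weight 100 check
    server mid1g  203.0.113.2:3128 weight 10  check
    server small  203.0.113.3:3128 weight 1   check
```

HAProxy's roundrobin balancing honours these weights, and `check` drops a dead server from the rotation automatically.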

ctype.h
  • The problem with that is that I have a bandwidth problem, not a problem with the number of requests that one single server can handle. So a solution where I have to direct all traffic through one node is not a solution for me. The only thing I can think of besides a DNS solution is a multicast IP address. I edited my question accordingly. – The Shurrican Feb 07 '12 at 16:50
  • sorry, I mean anycast, not multicast (I think) – The Shurrican Feb 07 '12 at 16:54
  • 1
    If bandwidth is the issue, you should look into this plus LACP on your switches. You could then bond multiple 10G cards in the load-balancing device(s). – Mark Harrigan Feb 07 '12 at 17:06
  • I upvoted this because it is interesting... but then I have my switch as the bottleneck! – The Shurrican Feb 07 '12 at 22:51
1

You can use PowerDNS to do weighted round robin, although distributing load in such an unbalanced fashion (100:1?) may get very interesting, at least with the algorithm I used in my solution, where each RR entry has a weight between 1 and 100 associated with it, and a random value is used to include or exclude records.
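As I understand that scheme, it can be sketched like this (hypothetical data; the linked article has the real MySQL-backed implementation):

```python
import random

# Each RR entry carries a weight from 1 to 100; per query, a record is
# included iff a fresh roll of 1..100 does not exceed its weight.
RECORDS = [("192.0.2.1", 100), ("192.0.2.2", 50), ("192.0.2.3", 1)]

def weighted_answer(records=RECORDS):
    included = [ip for ip, w in records if random.randint(1, 100) <= w]
    return included or [ip for ip, _ in records]  # never answer empty

random.seed(3)
hits = sum("192.0.2.2" in weighted_answer() for _ in range(10_000))
print(hits)  # the weight-50 record appears in roughly half the answers
```

With a 100:1 spread, the weight-1 record shows up in only about 1% of answers, which is why such an unbalanced distribution gets "interesting".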

Here's an article I wrote on using the MySQL backend in PowerDNS to do weighted RR DNS: http://www.mccartney.ie/wordpress/2008/08/wrr-dns-with-powerdns/

R.I.Pienaar also has some Ruby based examples (using the PowerDNS pipe backend): http://code.google.com/p/ruby-pdns/wiki/RecipeWeightedRoundRobin