5

I've inherited a LAN where there is really no name resolution being done for local resources... i.e. all users enter IP addresses manually to access printers and network shares. There are no LDAP servers or domains either... workstations simply connect to the network without authentication. DHCP is handled by a core switch, and DNS settings are handed out by this same core switch. Currently, the DNS assignments are as follows, in this order:

10.1.1.50     / old Pentium III Windows 2003 box running the DNS service - 128 MB RAM
169.200.x.x   / ISP
4.2.2.2       / the well-known public one

There are a couple thousand clients on the LAN, and most of the activity is web browsing (this is an educational setting).

First of all, the server seems woefully underpowered for this task... yet clients see virtually no slowness when web surfing.

How much horsepower should a heavily used DNS server have?

I have also heard that using 4.2.2.2 is a bad idea, since it has been so overused.

Finally, wouldn't it make sense to have a robust external DNS server listed first? (Google's 8.8.8.8 would seem to be a logical candidate.)

CaseyIT
  • 427
  • 3
  • 8
  • 14

10 Answers

13

When you outsource to another company, especially one that is doing it for free, you might consider what they are getting out of it. Google is in the information business, and they are getting another aspect of your (or your users') traffic pattern.

If I were at a university that used Google's name service, I would be raising privacy issues pretty darned fast.

Some things are best kept in house, and DNS resolution seems to be one of them. If you are unable or unwilling to run a stable server like BIND, then purchase an appliance to do local DNS resolution.
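For illustration, a minimal caching forwarder in BIND's named.conf might look like this (a sketch only; the forwarder address is an RFC 5737 documentation placeholder, not the ISP's actual resolver, and the paths/ACLs are assumptions):

```
// named.conf (sketch) - caching forwarder for the LAN in the question
options {
    directory "/var/named";
    listen-on { 10.1.1.50; };        // the existing DNS box's address
    allow-query { 10.0.0.0/8; };     // only answer local clients
    recursion yes;
    forwarders { 192.0.2.1; };       // placeholder - use your ISP's resolver here
    forward first;                    // fall back to full recursion if the forwarder fails
    dnssec-validation no;             // per the answer: skip DNSSEC on a tiny box
};
```

Internal zones for local printers and shares would then be added as ordinary `zone` statements below the options block.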

DNS for a small site can run on a very small machine, but I'd not enable DNSSEC. :)

Michael Graff
  • 6,588
  • 1
  • 23
  • 36
6

I work for a university and we use an internal DNS server for internal-only entries; other than that, all lookups are forwarded to Google's 8.8.8.8/8.8.4.4 without any issues.

solefald
  • 2,303
  • 15
  • 14
6

As that server is clearly not being stressed, I'm inclined to think there is no reason to change anything. The network you have described really doesn't need internal DNS, and not having it may even briefly slow down hacking attempts by the students, as it will not be immediately obvious which machine does what.

As you have given no indication at all that the present system isn't working perfectly there isn't an actual need to change anything.

In regard to

Google's 8.8.8.8 would seem to be a logical candidate

Why is that logical? Why not just use the ISP's DNS, or some other unfiltered source?

I would go even further and remove 4.2.2.2, as the likelihood of it ever being hit by the clients is slim to none. After all, both the 2003 machine and the ISP's DNS would have to be down for that to happen. If you really feel a need for a third DNS source, add the ISP's secondary instead.

John Gardeniers
  • 27,262
  • 12
  • 53
  • 108
  • All good points, thanks... Especially since if the ISP's DNS is not responding, it's probably an indication of an outage, and Google's DNS (or 4.2.2.2) wouldn't be reachable anyway – CaseyIT Apr 16 '10 at 01:58
  • 1
    I just got a downvote for this. While I really couldn't care less about the loss of a couple of points I would be extremely interested in why someone thinks this is a bad enough answer to downvote. Is there a technical error? Is it bad to use common sense? – John Gardeniers Apr 21 '10 at 22:32
  • 5
    I didn't downvote, but I'd rather use Google's DNS than my ISP. ISPs have something of a track record of intercepting NXDOMAIN responses, and mine goes down far more frequently than 8.8.8.8 does. – ceejayoz Nov 28 '11 at 22:04
5

You'll get faster responses for web surfing and reduce your network traffic if you set up a local DNS server, even if you only use it as a proxy DNS server (i.e. all your client machines do their lookups against your local DNS server, which then does the lookups on the public/ISP DNS of your choice and caches the answers). Why will you get faster responses? Ping a host on your 10/100/1000 Mbit network, and compare the result to pinging a public DNS server over your 1/2/8/10 Mbit internet connection. My guess is that you will benefit immediately from a local DNS infrastructure, and it won't cost you all that much.
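The caching-proxy setup described above can be sketched with dnsmasq in a handful of lines (upstream addresses are placeholders, not taken from the question; substitute your ISP's resolvers):

```
# /etc/dnsmasq.conf (sketch) - cache-only DNS forwarder
no-resolv                # don't take upstreams from /etc/resolv.conf
server=192.0.2.1         # upstream #1 - placeholder for the ISP's resolver
server=8.8.8.8           # public fallback
cache-size=10000         # generous cache for a few thousand web-browsing clients
local-ttl=3600           # serve locally-defined names with a long TTL
```

Clients would then receive the dnsmasq box's address as their only DNS server via DHCP.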

If you also use it for local hostname resolution, you'll benefit from easier-to-remember, meaningful hostnames for the hosts on your network.

dunxd
  • 9,482
  • 21
  • 80
  • 117
3

Reasons to use an internal DNS server:

  • You have internal domains that aren't public, or are different than what is available publically.
  • You'd like to fiddle with caching and other performance stuff.
  • You'd like to log the DNS queries to monitor who goes where.
  • You'd like to block certain things (note that people will circumvent this by changing their DNS server to 8.8.8.8 or something else, so you'll have to block DNS to anything other than your server at your firewall)
  • You'd like to redirect certain domains (e.g. redirect facebook.com to Terms of Employment policy)(of course, you'll have the same problems as if you block domains)
  • You'd like to really understand how DNS works.
  • You are a control freak.
  • You are a snoop.
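The logging, blocking, and redirecting bullets above can all be sketched with dnsmasq (the domain and the local web server's address are hypothetical examples):

```
# dnsmasq.conf fragment (sketch) - logging, blocking, redirecting
log-queries                          # log every query, to see who goes where
address=/badsite.example/            # return NXDOMAIN for this domain and subdomains
address=/facebook.com/10.1.1.60      # redirect to a local web server (e.g. a policy page)
```

As the answer notes, this only works if the firewall also drops outbound port-53 traffic that isn't addressed to your own DNS server, so users can't just switch to 8.8.8.8.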
Jed Daniels
  • 7,172
  • 2
  • 33
  • 41
2

I have barely heard of 4.2.2.2, but this is what I heard.

I would recommend using at least two DNS servers for a large network of thousands of clients. I wouldn't want to be you if (or rather when) the ONE DNS server goes down...

If you are looking for the same kind of simplicity and don't want to host your own, then I recommend you check out OpenDNS, where you can even set up some filtering (porn etc.), which you might want on an educational network. Especially on an educational network.

Of course, I would definitely use Google's 8.8.8.8 AND 8.8.4.4 instead of 4.2.2.2.

Gomibushi
  • 1,303
  • 1
  • 12
  • 20
  • I hear the filtering concern... but the ISP has a fairly good filter -- I say fairly good since students regularly skirt around it with proxies like hidemyass.com and the like – CaseyIT Apr 16 '10 at 01:49
  • That is true, and DNS filtering is not really doing anything but stop the less technically inclined anyways. – Gomibushi Apr 16 '10 at 06:20
2

Is that Win2003 server used for anything else? I'd nuke it, toss on your Linux/BSD distro of choice, and install DNSMasq on it.

gravyface
  • 13,947
  • 16
  • 65
  • 100
  • 1
    No it's not, and that's gonna happen I can assure you! Haven't heard of DNSMasq but will look into it. – CaseyIT Apr 16 '10 at 01:53
  • 1
    DNSMasq does DNS forwarding/caching (and DHCP, but you don't need that and it's not enabled by default). Another option (and may be more robust) would be BIND. – gravyface Apr 16 '10 at 02:07
  • I would nuke it and get rid of it entirely - it uses lots of electricity for nothing. As in: get ONE server that is modern and more powerful, and virtualize all the old crap away. I run Windows-based DNS on 256MB machines with a very tight CPU slice successfully. – TomTom Apr 16 '10 at 05:52
1

Well, setting up a DNS server with a bit more horsepower might make sense. You can use debug logging in the Windows 2003 DNS server properties to log DNS requests and see what volume of DNS traffic you are getting. For internal requests, you can set the TTL on local records to a high value to reduce the number of requests. 3600 seconds perhaps?
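The TTL suggestion looks like this in standard zone-file form (the zone name and hosts are hypothetical; on Windows 2003 the equivalent is set per-record in the DNS management console):

```
; school.lan zone fragment (sketch) - long TTLs on local records
$TTL 3600                       ; default TTL: clients re-query at most hourly
printer1    IN  A   10.1.1.21   ; example local printer
fileshare   IN  A   10.1.1.22   ; example file server
```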

External requests will get cached on the internal server if it is the first DNS server clients ask. I believe that depending on the DNS server settings, if a name isn't found on the internal server, that server will either refer the client to one that can answer, or it will recurse the request: get the answer itself, then pass it to the client.

It is worth reading up on, as your network's usability and stability will probably improve once you can use hostnames instead of IPs for internal resources, but to be useful it does need to be carefully configured.

0

Sites that use a content delivery network have a better chance of serving you from a server close to you when you use an internal DNS server, whereas when you use 8.8.8.8 the CDN will pick a server closer to where the 8.8.8.8 resolver is located.

muny
  • 1
  • 1
0

For performance, local DNS is hard to beat. If you don't need your own server for local name resolution, then the second best choice IMO is to use the DNS server provided by your ISP, for a simple reason: in theory you cannot do faster DNS resolution than through your ISP, simply because requests to any other public DNS server (like 8.8.8.8 or 4.2.2.2) go through your ISP's network anyway. (There is obviously the possibility that your ISP's DNS setup is misconfigured, but I'll assume it works as expected.)

When an application tries to resolve a host name, the OS first tries its internal cache; if the answer is not cached, it contacts the DNS server. Using dig you can bypass the OS's cache, directly query your own or any other DNS server, and see how long resolution takes through that server.

For example, if I want to resolve google.com using DNS server from my ISP:

> dig -4 -u +notcp google.com @76.14.0.8
...
google.com.             210     IN      A       172.217.6.46
;; Query time: 9027 usec
...

It took 9027 microseconds to resolve. If I run it about 10 times I get consistent values within the 9-10 ms range. Now if I try to use Google's DNS server:

> dig -4 -u +notcp google.com @8.8.8.8
...
google.com.             168     IN      A       216.58.192.14
;; Query time: 30024 usec
...

It took 30024 microseconds. If I dig using their servers I get values ranging from 20 to 60 ms, which is much worse than using the DNS server provided by my local ISP. I'm physically located a couple of miles from Google's headquarters, and perhaps the local point of presence that handles 8.8.8.8 is also not that far, but for somebody far away from 8.8.8.8 the difference will be much worse.

If you have a local DNS server (your router might have it), then:

> dig -4 -u +notcp google.com @192.168.0.1
...
google.com.             4       IN      A       216.58.195.78
;; Query time: 9368 usec
...

it took 9368 microseconds, because my router didn't have google.com cached and had to contact my ISP to resolve it. But if I run it a second time I now always get cached results, consistently less than 1 ms:

> dig -4 -u +notcp google.com @192.168.0.1
...
google.com.             2       IN      A       216.58.195.78
;; Query time: 493 usec
...

Hard to beat that kind of performance.

Overall, in the order of performance:

  1. The OS cache is the fastest (perhaps a microsecond or less), as no network requests are involved

  2. A local DNS server, with resolution times of 0.5-1 ms

  3. Your ISP's DNS server (could be anything, let's assume 10 ms)

  4. Any other DNS server, which roughly adds whatever extra time it takes to carry the request from your ISP's network to the other DNS server - could be anything from 20 to 60 ms at best.

So, when would you use 8.8.8.8, the 4th performance option, over the 2nd?

  • when your network is misconfigured, or you don't know your DNS server's IP when you need to enter one, you can simply use 8.8.8.8 as a quick fix.

  • use it as a secondary (backup) DNS server.

  • use it if your primary DNS server filters some domains (some countries block access to certain sites this way).

Pavel P
  • 113
  • 5